Search Results

Search found 9682 results on 388 pages for 'anonymous types'.


  • Writing a method to 'transform' an immutable object: how should I approach this?

    - by Prog
    (While this question has to do with a concrete coding dilemma, it's mostly about the best way to design a function.) I'm writing a method that should take two Color objects and gradually transform the first Color into the second one, creating an animation. The method will be in a utility class. My problem is that Color is an immutable object. That means I can't call color.setRGB or color.setBlue inside a loop in the method. What I can do is instantiate a new Color and return it from the method. But then I won't be able to gradually change the color. So I thought of three possible solutions:

    1- The client code includes the method call inside a loop. For example: int duration = 1500; // duration of the animation in milliseconds int steps = 20; // how many 'cycles' the animation will take for(int i=0; i<steps; i++) color = transformColor(color, targetColor, duration, steps); And the method would look like this: Color transformColor(Color original, Color target, int duration, int steps){ int redDiff = target.getRed() - original.getRed(); int redAddition = redDiff / steps; int newRed = original.getRed() + redAddition; // same for green and blue .. Thread.sleep(duration / steps); // exception handling omitted return new Color(newRed, newGreen, newBlue); } The disadvantage of this approach is that the client code has to "do part of the method's job" and include a for loop. The method doesn't do its work entirely on its own, which I don't like.

    2- Make a mutable Color subclass with methods such as setRed, and pass objects of this class into transformColor. Then it could look something like this: void transformColor(MutableColor original, Color target, int duration){ final int STEPS = 20; int redDiff = target.getRed() - original.getRed(); int redAddition = redDiff / STEPS; int newRed = original.getRed() + redAddition; // same for green and blue .. for(int i=0; i<STEPS; i++){ original.setRed(original.getRed() + redAddition); // same for green and blue .. Thread.sleep(duration / STEPS); // exception handling omitted } } Then the calling code would usually look something like this: // The method will usually transform colors of JComponents JComponent someComponent = ... ; // setting the Color in JComponent to be a MutableColor Color mutableColor = new MutableColor(someComponent.getForeground()); someComponent.setForeground(mutableColor); // later, transforming the Color in the JComponent transformColor((MutableColor)someComponent.getForeground(), new Color(200,100,150), 2000); The disadvantages are the need to create a new class MutableColor, and also the need to do casting.

    3- Pass into the method the actual mutable object that holds the color. Then the method could call object.setColor or similar on every iteration of the loop. Two disadvantages: A- Not so elegant. Passing in the object that holds the color just to transform the color feels unnatural. B- While most of the time this method will be used to transform colors inside JComponent objects, other kinds of objects may have colors too. So the method would need to be overloaded to receive other types, or receive Objects and have instanceof checks inside. Not optimal.

    Right now I like solution #2 the most, then solution #1, and solution #3 the least. However, I'd like to hear your opinions and suggestions regarding this.
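    A possible middle ground, sketched below on the assumption that Java 8's java.util.function.Consumer is available: keep Color immutable and keep the loop inside the method, but let the caller decide what to do with each intermediate Color via a callback. Interpolating from a progress fraction t also avoids the rounding drift of repeatedly adding redDiff / steps. (ColorAnimator and onStep are illustrative names, not from the question; with Swing you would also want to deliver colors on the EDT, e.g. via javax.swing.Timer, rather than sleeping on its thread.)

        import java.awt.Color;
        import java.util.function.Consumer;

        public final class ColorAnimator {
            // Runs the whole animation itself; each step builds a fresh
            // immutable Color and hands it to the caller's callback.
            public static void transformColor(Color from, Color to, int duration,
                                              int steps, Consumer<Color> onStep)
                    throws InterruptedException {
                for (int i = 1; i <= steps; i++) {
                    float t = (float) i / steps; // progress, 0..1
                    Color c = new Color(
                        from.getRed()   + Math.round(t * (to.getRed()   - from.getRed())),
                        from.getGreen() + Math.round(t * (to.getGreen() - from.getGreen())),
                        from.getBlue()  + Math.round(t * (to.getBlue()  - from.getBlue())));
                    onStep.accept(c); // e.g. someComponent::setForeground
                    Thread.sleep(duration / steps);
                }
            }
        }

        // Usage sketch:
        // ColorAnimator.transformColor(comp.getForeground(), new Color(200,100,150),
        //                              1500, 20, comp::setForeground);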

    Read the article

  • SSL connection errors from Apache

    - by Yang
    I'm running a (self-signed) SSL cert site on Apache/2.2.14 on Ubuntu 10.04, but various browsers are giving errors on half the connection attempts. Just now saw this transient error from Chrome: "Error 126 (net::ERR_SSL_BAD_RECORD_MAC_ALERT): Unknown error." Hit refresh and the problem goes away for a while. wget too: $ wget --no-check-certificate https://dev.foo.com/deps/ --2010-09-08 19:30:26-- https://dev.foo.com/deps/ Resolving dev.foo.com... 184.72.53.220 Connecting to dev.foo.com|184.72.53.220|:443... connected. OpenSSL: error:0407006A:rsa routines:RSA_padding_check_PKCS1_type_1:block type is not 01 OpenSSL: error:04067072:rsa routines:RSA_EAY_PUBLIC_DECRYPT:padding check failed OpenSSL: error:1408D07B:SSL routines:SSL3_GET_KEY_EXCHANGE:bad signature Unable to establish SSL connection. Run it right away again and it works: $ wget --no-check-certificate https://dev.foo.com/deps/ --2010-09-08 19:30:29-- https://dev.foo.com/deps/ Resolving dev.foo.com... 184.72.53.220 Connecting to dev.foo.com|184.72.53.220|:443... connected. WARNING: cannot verify dev.foo.com's certificate, issued by `/CN=dev.foo.com': Self-signed certificate encountered. HTTP request sent, awaiting response... 200 OK Length: 3157 (3.1K) [text/html] Saving to: `index.html' 100%[======================================>] 3,157 --.-K/s in 0s 2010-09-08 19:30:29 (48.6 MB/s) - `index.html' saved [3157/3157] In my sites-enabled/default-ssl: SSLCertificateFile /etc/ssl/certs/ssl-cert-snakeoil.pem SSLCertificateKeyFile /etc/ssl/private/ssl-cert-snakeoil.key The cert: -----BEGIN CERTIFICATE----- MIIBszCCARwCCQCa0TzNwqLgsTANBgkqhkiG9w0BAQUFADAeMRwwGgYDVQQDExNk ZXYucGFydHlvbmRhdGEuY29tMB4XDTEwMDgyNzA2MzA1N1oXDTIwMDgyNDA2MzA1 N1owHjEcMBoGA1UEAxMTZGV2LnBhcnR5b25kYXRhLmNvbTCBnzANBgkqhkiG9w0B AQEFAAOBjQAwgYkCgYEAzXDEULpCUqIc9hV/ESFapkckR2uoYINA81DvG2aQZ9Ot Q30OwX2ae2CC4bSzJEIVlahU8vjVrWpmpa28NEhQbqh4ywwbl1XDrEVYI6Gkfimf snJhOKyaVrEhlwutYtBjmsz3ZIqwymMPm/6smVcSS5dJIynlSmtltxX6ivPcO8UC AwEAATANBgkqhkiG9w0BAQUFAAOBgQBGxHVkpSSOnZjzuySRepjhAlV/yhe9Fx23 fh12WrjQMEi98B7JEuNSLXDWckUN7O6XRc3RzKmazcGHJqzhn0Ov6gAmAE2XjZ/x VW21xmaLwk+KgYKFJbJJaP3jMSpU7I3aa11wqAkR2Zd4Nkm9N0YXYIzcBdfztTVI Et8mEHBFdg== -----END CERTIFICATE----- The cert is in turn generated via: $ make-ssl-cert generate-default-snakeoil --force-overwrite Apache version. $ apache2 -V Server version: Apache/2.2.14 (Ubuntu) Server built: Apr 13 2010 20:22:19 Server's Module Magic Number: 20051115:23 Server loaded: APR 1.3.8, APR-Util 1.3.9 Compiled using: APR 1.3.8, APR-Util 1.3.9 Architecture: 64-bit Server MPM: Worker threaded: yes (fixed thread count) forked: yes (variable process count) Server compiled with.... -D APACHE_MPM_DIR="server/mpm/worker" -D APR_HAS_SENDFILE -D APR_HAS_MMAP -D APR_HAVE_IPV6 (IPv4-mapped addresses enabled) -D APR_USE_SYSVSEM_SERIALIZE -D APR_USE_PTHREAD_SERIALIZE -D SINGLE_LISTEN_UNSERIALIZED_ACCEPT -D APR_HAS_OTHER_CHILD -D AP_HAVE_RELIABLE_PIPED_LOGS -D DYNAMIC_MODULE_LIMIT=128 -D HTTPD_ROOT="" -D SUEXEC_BIN="/usr/lib/apache2/suexec" -D DEFAULT_PIDLOG="/var/run/apache2.pid" -D DEFAULT_SCOREBOARD="logs/apache_runtime_status" -D DEFAULT_ERRORLOG="logs/error_log" -D AP_TYPES_CONFIG_FILE="/etc/apache2/mime.types" -D SERVER_CONFIG_FILE="/etc/apache2/apache2.conf" I don't administer the network, hardware, etc. - this is all running on Amazon EC2. I'm not running a load-balancer or anything else in front of the server. I'm making direct TCP connections to that host (AFAIK). Any ideas? Thanks in advance for any help.
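    One way to narrow this down, as a sketch (dev.foo.com stands in for the real host, as in the question): hammer the handshake in a loop and count failures. Intermittent RSA errors like "padding check failed" and "bad signature" from a key that also works half the time are sometimes a sign of corrupted computation on the server (e.g. failing RAM or an overheating CPU) rather than a configuration problem, so it may be worth checking the instance's health as well.

        for i in $(seq 1 50); do
          echo | openssl s_client -connect dev.foo.com:443 2>&1 \
            | grep -q 'bad signature' && echo "handshake $i failed"
        done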

    Read the article

  • Why does DEP kill IE when accessing Microsoft FTP?

    - by Sammy
    I start up IE (9.0.8112.16421) with about:blank and I go to ftp://ftp.microsoft.com/ I press Alt, click View and then Open FTP Site in Windows Explorer. At this point IE stops responding and eventually crashes (though the window is still active, sometimes) and I get the usual Windows dialog box saying that the program has stopped working. From this dialog box I click on the option to try to find solutions to the problem and the progress bar just keeps scrolling without giving me any result page whatsoever, so I have to abort by clicking Cancel. Then I get the bubble type of pop-up message from the system tray saying that DEP has stopped the program from executing. What gives? Why would DEP (part of Microsoft Windows) be preventing IE (a Microsoft product) from performing a perfectly legitimate action from Microsoft's own FTP site? The OS is Windows Vista HP SP2, Swedish locale. Screenshots as follows... Update: I normally have UAC disabled, but I have discovered that enabling it has an effect on IE when I click the FTP option from the View menu, just as I suspected. I basically tried starting IE in its 32-bit and 64-bit version, with and without add-ons, and switching UAC on and off, and then trying to go to View and the FTP option (as shown above). Here are the results. With UAC off and DEP on Action: IE 32-bit, normal start, go to ftp://ftp.microsoft.com/, view menu, FTP option. Result: crash Action: IE 32-bit, extoff, go to ftp://ftp.microsoft.com/, view menu, FTP option. Result: crash Action: IE 64-bit, normal start, go to ftp://ftp.microsoft.com/, view menu, FTP option. Result: information & warning message Action: IE 64-bit, extoff, go to ftp://ftp.microsoft.com/, view menu, FTP option. Result: information & warning message This is the information and warning message I get if I use IE 64-bit: The first message is an FTP proxy warning. It says that the folder ftp://ftp.microsoft.com/ will be write-protected because proxy server is not configured to allow full access. It goes on to say that if I want to move, paste, change name or delete files I must use another type of proxy, and that I should contact the system admin for more information (the usual recommendation when they have no clue of what's going on). What the heck is all this about? I don't even use a proxy server, as you can see from the next screenshot (Internet Options, Connections, LAN settings dialog). That second message only states that the FTP site cannot be viewed in (Windows) Explorer. With UAC off, I always get these two messages when running the 64-bit version of IE. With UAC on and DEP on Action: IE 32-bit, normal start, go to ftp://ftp.microsoft.com/, view menu, FTP option. Result: crash Action: IE 32-bit, extoff, go to ftp://ftp.microsoft.com/, view menu, FTP option. Result: security warning message, prompts to allow action Action: IE 64-bit, normal start, go to ftp://ftp.microsoft.com/, view menu, FTP option. Result: security warning message, prompts to allow action Action: IE 64-bit, extoff, go to ftp://ftp.microsoft.com/, view menu, FTP option. Result: security warning message, prompts to allow action As you can see from this list, if I have UAC enabled I actually get rid of these messages and opening the FTP site in Windows Explorer (from IE) actually works (except for 32-bit version which still crashes). Here is the security warning message: The fact that the 32-bit IE still crashes could be an indicator that this has something to do with one or several add-ons in that bit-version of IE. 
The 32-bit IE doesn't crash if it's started with the extoff flag. If this is affecting only the 32-bit IE, then it's only normal that the 64-bit IE doesn't have this problem, because it would not be using any of the add-ons used by the 32-bit version; they are not compatible with 64-bit (although some add-ons work with both 32-bit and 64-bit IE). Figuring out which add-on (if any) is causing this problem is a whole new question... but I seem to be closer to an answer now, and a possible solution. I could of course just add IE (32-bit) to the exclusion list of DEP. In fact, I have already tested this and it causes IE to perform this task without hiccups. But I don't really want to disable DEP, or force it on all Windows programs and services (except the ones I strictly specify in the exception list). (In other words, DEP can't really be completely disabled; you can only switch between two modes of operation.) Update 2: This is interesting... I start 32-bit IE, go to ftp://ftp.microsoft.com/ and click on View, and Open FTP Site in Windows Explorer. The result is a crash!! Then I start 32-bit IE with the extoff flag to disable add-ons, I go to ftp://ftp.microsoft.com/ and click on View, and Open FTP Site in Windows Explorer. I get the security warning, as expected with UAC enabled, and it opens up in Windows Explorer. Now... I close Windows Explorer, and I close IE. I then start 32-bit IE (normal start, with add-ons), I go to ftp://ftp.microsoft.com/ and click on View, and Open FTP Site in Windows Explorer. Now this time it doesn't crash! Instead, I get screenshot number 5 as seen above. This is the FTP proxy warning message. Now get this... if I click the close button to get rid of this message, what happens is that Firefox starts up, and it goes to ftp://ftp.microsoft.com/ The fact that this works with 32-bit IE (with add-ons) the second time around is because I am still logged in as anonymous to the FTP server. The log-in has not timed out yet. The standard log-in timeout for FTP servers is usually 60 to 120 seconds. I got logged in to it the first time I ran 32-bit IE with the extoff flag (no add-ons), which actually works and connects using Windows Explorer. Update 3: The connection to the FTP server has timed out by now. So now if I run 32-bit IE (with add-ons) and repeat the steps as before, it crashes, just as expected... In conclusion: If I have already been connected to the FTP server via Windows Explorer, and I go to this FTP address in 32-bit IE and pick the FTP option from the View menu to open it in Windows Explorer, it gives me an FTP proxy server warning and then opens the address in the default web browser (Firefox in my case). If I have not been connected to the FTP server via Windows Explorer previously, and I go to this FTP address in 32-bit IE and pick the FTP option from the View menu to open it in Windows Explorer, then it crashes IE! This is just great... It's not that I care much for using Internet Explorer or Windows Explorer to log in to FTP servers. This just shows why IE is not the best browser choice. This reminds me of the time when Microsoft was enforcing the use of Internet Explorer as the default browser for opening web links and other web resources, despite the fact that the user had installed an alternative browser on the system. Even if the user explicitly set the default browser to be something else and not Internet Explorer in the Windows options, IE would still pop up sometimes, depending on what web resources the user was trying to access.
Setting the default browser had no effect. It was hard-coded that IE is the browser of choice, especially when accessing Microsoft product or help pages. The web page would actually say that you are not using IE, and that you must open it in IE to view it. Unfortunately you would not be able to open it manually in a different browser by simply copying and pasting the URL from the address bar, because it would show a different URL, and the original URL would redirect to the "you are using the wrong browser" page, so you would not have time to copy it to the clipboard. Thankfully those days are over. Nowadays Microsoft is forced to distribute IE- and WMP-free versions of Windows for the EU market. The way it should be! These programs have to be optional, not mandatory.

    Read the article

  • connect() failed (111: Connection refused) while connecting to upstream

    - by Burning the Codeigniter
    I'm experiencing 502 gateway errors when accessing a PHP file in a directory (http://domain.com/dev/index.php); the log simply says this: 2011/09/30 23:47:54 [error] 31160#0: *35 connect() failed (111: Connection refused) while connecting to upstream, client: xx.xx.xx.xx, server: domain.com, request: "GET /dev/ HTTP/1.1", upstream: "fastcgi://127.0.0.1:9000", host: "domain.com" I've never experienced this before; how do I fix this type of 502 gateway error? This is the nginx.conf: user www-data; worker_processes 4; pid /var/run/nginx.pid; events { worker_connections 768; # multi_accept on; } http { ## # Basic Settings ## sendfile on; tcp_nopush on; tcp_nodelay on; keepalive_timeout 65; types_hash_max_size 2048; # server_tokens off; # server_names_hash_bucket_size 64; # server_name_in_redirect off; include /etc/nginx/mime.types; default_type application/octet-stream; ## # Logging Settings ## access_log /var/log/nginx/access.log; error_log /var/log/nginx/error.log; ## # Gzip Settings ## gzip on; gzip_disable "msie6"; # gzip_vary on; # gzip_proxied any; # gzip_comp_level 6; # gzip_buffers 16 8k; # gzip_http_version 1.1; # gzip_types text/plain text/css application/json application/x-javascript text/xml application/xml application/xml+rss text/javascript; ## # Virtual Host Configs ## include /etc/nginx/conf.d/*.conf; include /etc/nginx/sites-enabled/*; } #mail { # # See sample authentication script at: # # http://wiki.nginx.org/ImapAuthenticateWithApachePhpScript # # # auth_http localhost/auth.php; # # pop3_capabilities "TOP" "USER"; # # imap_capabilities "IMAP4rev1" "UIDPLUS"; # # server { # listen localhost:110; # protocol pop3; # proxy on; # } # # server { # listen localhost:143; # protocol imap; # proxy on; # } #}
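    For what it's worth, "connect() failed (111: Connection refused) ... upstream: fastcgi://127.0.0.1:9000" generally means nothing is listening on 127.0.0.1:9000, i.e. PHP-FPM is stopped or bound elsewhere, rather than anything in nginx.conf itself. A hedged checklist (service and socket names vary by distro and PHP version):

        ss -tlnp | grep 9000           # or: netstat -tlnp | grep 9000
        service php5-fpm status        # is the FPM pool actually running?

        # If the pool's listen directive points at a unix socket instead of
        # 127.0.0.1:9000, make nginx match it in the vhost, e.g.:
        #   fastcgi_pass unix:/var/run/php5-fpm.sock;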

    Read the article

  • MySQL died during the night on a 12.04.1 Ubuntu

    - by Olivier
    I can't explain why, but somehow during the night, one of my MySQL servers running on an Ubuntu 12.04.1 box broke. The service is running but I can't log in anymore (to SQL); the previous password is not working anymore. It does not look like the server has been compromised (nothing in /var/auth.log). It looks like some automatic security upgrade (the server is configured to perform those) has occurred and broken something. The MySQL server restarted a couple of times in the logs at the time the errors started to happen (I get email when CRON tasks fail). In the logs it complains about an unset root password (I do have cron jobs running all day using SQL, so the password was set & working for months). Anyway, I can't log in without a password either! Do you have any idea of what could have happened? How do I get my databases back? This line looks strange: Nov 6 06:36:12 ns398758 mysqld_safe[6676]: ERROR: 1064 You have an error in your SQL syntax; check the manual that corresponds to your MySQL server version for the right syntax to use near 'ALTER TABLE user ADD column Show_view_priv enum('N','Y') CHARACTER SET utf8 NOT ' at line 1 Here is the full log below: Nov 6 06:36:06 ns398758 mysqld_safe[6586]: Nov 6 06:36:06 ns398758 mysqld_safe[6586]: PLEASE REMEMBER TO SET A PASSWORD FOR THE MySQL root USER ! Nov 6 06:36:06 ns398758 mysqld_safe[6586]: To do so, start the server, then issue the following commands: Nov 6 06:36:06 ns398758 mysqld_safe[6586]: Nov 6 06:36:06 ns398758 mysqld_safe[6586]: /usr/bin/mysqladmin -u root password 'new-password' Nov 6 06:36:06 ns398758 mysqld_safe[6586]: /usr/bin/mysqladmin -u root -h ns398758.ovh.net password 'new-password' Nov 6 06:36:06 ns398758 mysqld_safe[6586]: Nov 6 06:36:06 ns398758 mysqld_safe[6586]: Alternatively you can run: Nov 6 06:36:06 ns398758 mysqld_safe[6586]: /usr/bin/mysql_secure_installation Nov 6 06:36:06 ns398758 mysqld_safe[6586]: Nov 6 06:36:06 ns398758 mysqld_safe[6586]: which will also give you the option of removing the test Nov 6 06:36:06 ns398758 mysqld_safe[6586]: databases and anonymous user created by default. This is Nov 6 06:36:06 ns398758 mysqld_safe[6586]: strongly recommended for production servers. Nov 6 06:36:06 ns398758 mysqld_safe[6586]: Nov 6 06:36:06 ns398758 mysqld_safe[6586]: See the manual for more instructions. Nov 6 06:36:06 ns398758 mysqld_safe[6586]: Nov 6 06:36:06 ns398758 mysqld_safe[6586]: Please report any problems with the /usr/scripts/mysqlbug script! Nov 6 06:36:06 ns398758 mysqld_safe[6586]: Nov 6 06:36:06 ns398758 mysqld_safe[6632]: 121106 6:36:06 [Note] Plugin 'FEDERATED' is disabled. Nov 6 06:36:06 ns398758 mysqld_safe[6632]: 121106 6:36:06 InnoDB: The InnoDB memory heap is disabled Nov 6 06:36:06 ns398758 mysqld_safe[6632]: 121106 6:36:06 InnoDB: Mutexes and rw_locks use GCC atomic builtins Nov 6 06:36:06 ns398758 mysqld_safe[6632]: 121106 6:36:06 InnoDB: Compressed tables use zlib 1.2.3.4 Nov 6 06:36:06 ns398758 mysqld_safe[6632]: 121106 6:36:06 InnoDB: Initializing buffer pool, size = 128.0M Nov 6 06:36:06 ns398758 mysqld_safe[6632]: 121106 6:36:06 InnoDB: Completed initialization of buffer pool Nov 6 06:36:06 ns398758 mysqld_safe[6632]: 121106 6:36:06 InnoDB: highest supported file format is Barracuda. Nov 6 06:36:07 ns398758 mysqld_safe[6632]: 121106 6:36:07 InnoDB: Waiting for the background threads to start Nov 6 06:36:08 ns398758 mysqld_safe[6632]: 121106 6:36:08 InnoDB: 1.1.8 started; log sequence number 29276459701 Nov 6 06:36:08 ns398758 mysqld_safe[6632]: 121106 6:36:08 InnoDB: Starting shutdown...
Nov 6 06:36:09 ns398758 mysqld_safe[6632]: 121106 6:36:09 InnoDB: Shutdown completed; log sequence number 29276459701 Nov 6 06:36:11 ns398758 mysqld_safe[6676]: 121106 6:36:11 [Note] Plugin 'FEDERATED' is disabled. Nov 6 06:36:11 ns398758 mysqld_safe[6676]: 121106 6:36:11 InnoDB: The InnoDB memory heap is disabled Nov 6 06:36:11 ns398758 mysqld_safe[6676]: 121106 6:36:11 InnoDB: Mutexes and rw_locks use GCC atomic builtins Nov 6 06:36:11 ns398758 mysqld_safe[6676]: 121106 6:36:11 InnoDB: Compressed tables use zlib 1.2.3.4 Nov 6 06:36:11 ns398758 mysqld_safe[6676]: 121106 6:36:11 InnoDB: Initializing buffer pool, size = 128.0M Nov 6 06:36:11 ns398758 mysqld_safe[6676]: 121106 6:36:11 InnoDB: Completed initialization of buffer pool Nov 6 06:36:11 ns398758 mysqld_safe[6676]: 121106 6:36:11 InnoDB: highest supported file format is Barracuda. Nov 6 06:36:11 ns398758 mysqld_safe[6676]: 121106 6:36:11 InnoDB: Waiting for the background threads to start Nov 6 06:36:12 ns398758 mysqld_safe[6676]: 121106 6:36:12 InnoDB: 1.1.8 started; log sequence number 29276459701 Nov 6 06:36:12 ns398758 mysqld_safe[6676]: ERROR: 1064 You have an error in your SQL syntax; check the manual that corresponds to your MySQL server version for the right syntax to use near 'ALTER TABLE user ADD column Show_view_priv enum('N','Y') CHARACTER SET utf8 NOT ' at line 1 Nov 6 06:36:12 ns398758 mysqld_safe[6676]: 121106 6:36:12 [ERROR] Aborting Nov 6 06:36:12 ns398758 mysqld_safe[6676]: Nov 6 06:36:12 ns398758 mysqld_safe[6676]: 121106 6:36:12 InnoDB: Starting shutdown... Nov 6 06:36:13 ns398758 mysqld_safe[6676]: 121106 6:36:13 InnoDB: Shutdown completed; log sequence number 29276459701 Nov 6 06:36:13 ns398758 mysqld_safe[6676]: 121106 6:36:13 [Note] /usr/sbin/mysqld: Shutdown complete Nov 6 06:36:13 ns398758 mysqld_safe[6676]: Nov 6 06:36:13 ns398758 mysqld_safe[6697]: 121106 6:36:13 [Note] Plugin 'FEDERATED' is disabled. Nov 6 06:36:13 ns398758 mysqld_safe[6697]: 121106 6:36:13 InnoDB: The InnoDB memory heap is disabled Nov 6 06:36:13 ns398758 mysqld_safe[6697]: 121106 6:36:13 InnoDB: Mutexes and rw_locks use GCC atomic builtins Nov 6 06:36:13 ns398758 mysqld_safe[6697]: 121106 6:36:13 InnoDB: Compressed tables use zlib 1.2.3.4 Nov 6 06:36:13 ns398758 mysqld_safe[6697]: 121106 6:36:13 InnoDB: Initializing buffer pool, size = 128.0M Nov 6 06:36:13 ns398758 mysqld_safe[6697]: 121106 6:36:13 InnoDB: Completed initialization of buffer pool Nov 6 06:36:13 ns398758 mysqld_safe[6697]: 121106 6:36:13 InnoDB: highest supported file format is Barracuda. Nov 6 06:36:13 ns398758 mysqld_safe[6697]: 121106 6:36:13 InnoDB: Waiting for the background threads to start Nov 6 06:36:14 ns398758 mysqld_safe[6697]: 121106 6:36:14 InnoDB: 1.1.8 started; log sequence number 29276459701 Nov 6 06:36:14 ns398758 mysqld_safe[6697]: 121106 6:36:14 InnoDB: Starting shutdown... Nov 6 06:36:15 ns398758 mysqld_safe[6697]: 121106 6:36:15 InnoDB: Shutdown completed; log sequence number 29276459701 Nov 6 06:36:15 ns398758 mysqld_safe[6718]: 121106 6:36:15 [Note] Plugin 'FEDERATED' is disabled. 
Nov 6 06:36:15 ns398758 mysqld_safe[6718]: 121106 6:36:15 InnoDB: The InnoDB memory heap is disabled Nov 6 06:36:15 ns398758 mysqld_safe[6718]: 121106 6:36:15 InnoDB: Mutexes and rw_locks use GCC atomic builtins Nov 6 06:36:15 ns398758 mysqld_safe[6718]: 121106 6:36:15 InnoDB: Compressed tables use zlib 1.2.3.4 Nov 6 06:36:15 ns398758 mysqld_safe[6718]: 121106 6:36:15 InnoDB: Initializing buffer pool, size = 128.0M Nov 6 06:36:15 ns398758 mysqld_safe[6718]: 121106 6:36:15 InnoDB: Completed initialization of buffer pool Nov 6 06:36:15 ns398758 mysqld_safe[6718]: 121106 6:36:15 InnoDB: highest supported file format is Barracuda. Nov 6 06:36:15 ns398758 mysqld_safe[6718]: 121106 6:36:15 InnoDB: Waiting for the background threads to start Nov 6 06:36:16 ns398758 mysqld_safe[6718]: 121106 6:36:16 InnoDB: 1.1.8 started; log sequence number 29276459701 Nov 6 06:36:16 ns398758 mysqld_safe[6718]: ERROR: 1050 Table 'plugin' already exists Nov 6 06:36:16 ns398758 mysqld_safe[6718]: 121106 6:36:16 [ERROR] Aborting Nov 6 06:36:16 ns398758 mysqld_safe[6718]: Nov 6 06:36:16 ns398758 mysqld_safe[6718]: 121106 6:36:16 InnoDB: Starting shutdown... Nov 6 06:36:17 ns398758 mysqld_safe[6718]: 121106 6:36:17 InnoDB: Shutdown completed; log sequence number 29276459701 Nov 6 06:36:17 ns398758 mysqld_safe[6718]: 121106 6:36:17 [Note] /usr/sbin/mysqld: Shutdown complete Nov 6 06:36:17 ns398758 mysqld_safe[6718]: Nov 6 06:36:19 ns398758 /etc/mysql/debian-start[6816]: Upgrading MySQL tables if necessary. Nov 6 06:36:20 ns398758 /etc/mysql/debian-start[6819]: /usr/bin/mysql_upgrade: the '--basedir' option is always ignored Nov 6 06:36:20 ns398758 /etc/mysql/debian-start[6819]: Looking for 'mysql' as: /usr/bin/mysql Nov 6 06:36:20 ns398758 /etc/mysql/debian-start[6819]: Looking for 'mysqlcheck' as: /usr/bin/mysqlcheck Nov 6 06:36:20 ns398758 /etc/mysql/debian-start[6819]: Running 'mysqlcheck' with connection arguments: '--port=3306' '--socket=/var/run/mysqld/mysqld.sock' '--host=localhost' '--socket=/var/run/mysqld/mysqld.sock' '--host=localhost' '--socket=/var/run/mysqld/mysqld.sock' Nov 6 06:36:20 ns398758 /etc/mysql/debian-start[6819]: Running 'mysqlcheck' with connection arguments: '--port=3306' '--socket=/var/run/mysqld/mysqld.sock' '--host=localhost' '--socket=/var/run/mysqld/mysqld.sock' '--host=localhost' '--socket=/var/run/mysqld/mysqld.sock' Nov 6 06:36:20 ns398758 /etc/mysql/debian-start[6819]: col_digitas.acos OK Nov 6 06:36:20 ns398758 /etc/mysql/debian-start[6819]: col_digitas.aros OK ...
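    The failing "ALTER TABLE user ADD column Show_view_priv ..." statement and the later "Table 'plugin' already exists" error look like a mysql_upgrade of the privilege tables that only half-applied, which would also explain the rejected password. A recovery sketch, assuming Ubuntu's mysqld_safe and intact data files (back up /var/lib/mysql first; 'new-password' is a placeholder):

        sudo service mysql stop
        sudo mysqld_safe --skip-grant-tables --skip-networking &
        mysql_upgrade --force            # re-run the half-applied upgrade
        mysql -e "UPDATE mysql.user SET Password=PASSWORD('new-password') WHERE User='root'; FLUSH PRIVILEGES;"
        # stop the temporary instance, then:
        sudo service mysql restart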

    Read the article

  • Enabling gzip with htaccess... why is it hit or miss?

    - by adam-asdf
    I have shared hosting through Justhost. I use the HTML5 Boilerplate .htaccess (I have tried other methods from here and there without luck); the compression part is as follows: <IfModule mod_deflate.c> # Force deflate for mangled headers developer.yahoo.com/blogs/ydn/posts/2010/12/pushing-beyond-gzipping/ <IfModule mod_setenvif.c> <IfModule mod_headers.c> SetEnvIfNoCase ^(Accept-EncodXng|X-cept-Encoding|X{15}|~{15}|-{15})$ ^((gzip|deflate)\s*,?\s*)+|[X~-]{4,13}$ HAVE_Accept-Encoding RequestHeader append Accept-Encoding "gzip,deflate" env=HAVE_Accept-Encoding </IfModule> </IfModule> # Compress all output labeled with one of the following MIME-types <IfModule mod_filter.c> AddOutputFilterByType DEFLATE application/atom+xml \ application/javascript \ application/json \ application/rss+xml \ application/vnd.ms-fontobject \ application/x-font-ttf \ application/xhtml+xml \ application/xml \ font/opentype \ image/svg+xml \ image/x-icon \ text/css \ text/html \ text/plain \ text/x-component \ text/xml </IfModule> </IfModule> However, it isn't working (at least I don't think it is): my home page (HTML) isn't compressing, and the CSS and some of the JS aren't gzipped. It is failing on HTML, CSS and JS. However, some things are (or were, who knows what it will look like when you check) gzipped. My domain is http://adaminfinitum.com/ What is weird is that the (Google) PageSpeed browser extension for Firefox (whatever the current version is [Nov. 2012]) gives me a 95% speed rating (and no warnings about compression), yet YSlow and Chrome developer tools both flag me about gzip, as does a tool I found on here while researching this. To reduce cookies I set up a subdomain on my site and I thought maybe that was it, so I added an .htaccess there also, but no luck. To reduce http requests I embedded some of the webfonts and images in CSS (HTML5 BP stipulates not to compress images, and apparently '.woff' files are already compressed), so I thought maybe that was it and I spent all day separating and asynchronously loading those portions (via Modernizr.load), but that hasn't helped either... if anything it made it worse due to increasing http requests (I realize speed scores of async resources may be misleading). Researching this, it seems to be a fairly common issue, but I haven't found an explanation/solution. I don't think it is a MIME-type issue; I have quadruple-checked (and thrice edited) my .htaccess files. My hosting company said they run Apache 2.2.22 and I have looked at everything I can find. What gives?
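    A hedged way to take the browser extensions out of the equation: ask for gzip with curl and see what Content-Encoding actually comes back per asset type (the CSS path below is hypothetical; substitute a real one). Note also that every directive above sits inside <IfModule> blocks, which are silently skipped when the module isn't loaded, so if the shared host built Apache without mod_filter the DEFLATE section does nothing; the host can confirm which modules are enabled.

        for url in http://adaminfinitum.com/ \
                   http://adaminfinitum.com/css/style.css; do
          echo "== $url"
          curl -sI -H 'Accept-Encoding: gzip,deflate' "$url" \
            | grep -i 'content-type\|content-encoding'
        done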

    Read the article

  • Communication software wanted: email, sms, IM, phone calls [closed]

    - by user63835
    I am searching for a software solution that integrates / unifies my communication. I use email, instant messaging, SMS and phone. I would like to get all emails, SMS, instant messaging dialogs and meta-data about phone calls into one application. What is important is that I can access all past communication with one application. There should be a global address-book to map the communication data to persons or organizations. I want all the communication data in one place to access and back up easily. The software solution is not required to be a multi-user or server application. It is just for one user (me), but server or multi-user applications are not excluded. I may run it on server hardware. It should run on Linux (Lubuntu / Ubuntu preferred). Free and open-source software is preferred. It would be nice if I could perform new communication (like writing a new email, SMS, etc.) with one application, but that is not a must-have requirement. I could also work with different applications dedicated to different types of communication, like an IM application for IM and an email application for email, if all the communication data from the specialized applications is delivered to one single place where I can access and back it up. I have an Android phone and currently I am using Google contacts as the address-book. In the long term this may change, to get back control over my data. I did some Internet searching but have not found a nice solution yet. If I am looking for unified messaging and unified communication, am I on the right track? The current Thunderbird version has IM functionality integrated; I have not tried it yet. For SMS it may be possible to use an app to send every SMS (incoming and outgoing) as an email, but I am not sure if those SMS-emails can be mapped to an address-book contact. I don't remember exactly, but isn't there a Google Android app (I think Google Voice) integrating SMS into Google services? But in Germany this function has not been released yet. Maybe a groupware solution would meet the requirements, but I don't have much experience with that. As communication possibilities are growing, I am wondering why there seems to be such a big gap in solutions. I can't believe I am the only one who would like a solution that integrates all the communication channels more easily. If you know a software solution that (even partly) meets these requirements, I would be glad if you told me about it. Thanks in advance.

    Read the article

  • Cannot log in to SQL Server 2008 R2 with Windows authentication

    - by Ian Boyd
    When I try to connect to SQL Server (2008 R2) using Windows authentication, I cannot: Checking the Windows Application event log, I find the error: Login failed for user 'AVATOPIA\ian'. Reason: Token-based server access validation failed with an infrastructure error. Check for previous errors. [CLIENT: ] Log Name: Application Source: MSSQLSERVER Event ID: 18456 Level: Information User: AVATOPIA\ian OpCode: Task Category: Logon I can log in to the computer itself using Windows authentication. I can log into SQL Server using the local Windows Administrator account. We can connect to 8 other SQL Servers on the domain using Windows Authentication. Just this one, which is the only one that is 2008 R2, is failing. So I assume it's a bug with 2008 R2. Note: I cannot log on locally, or remotely, using Windows authentication. I can log in locally and remotely using SQL Server Authentication. Update Note: It's not limited to SQL Server Management Studio; standalone applications that connect using Windows authentication fail too: Note: It's not a client problem, as we can connect fine to other (non-SQL Server 2008 R2) machines: I'm sure there's a technote or knowledge base article describing why SQL Server 2008 R2 is broken by default, but I can't find it. Update 2 Matt figured out the change that Microsoft made so that SQL Server 2008 R2 is broken by default: Administrators are no longer administrators. All that remains is to figure out how to make Administrators administrators. One of these days I'm going to start a list of changes around Microsoft's "broken by default" initiative. Steps to reproduce the problem How do I add a group to the sysadmin fixed server role? Here are the steps I try, which don't work: Click Add: Click Object Types: Ensure that you have no ability to add groups: and click OK. Under Enter the object names to select, enter Administrators: Click Check Names, and ensure that you are not allowed to add groups: and click Cancel. Click Browse..., and ensure that you have no ability to add groups: You should now still not have added any group to the sysadmin role. Additional information SQL Server Management Studio is being run as an administrator: SQL Server is set to use Windows Authentication: tried while logged into SQL with both sa and the only other sysadmin domain account (screenshot can be supplied for those who don't believe)
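    If the GUI refuses to browse groups, the grant can be done in plain T-SQL instead; a sketch using the 2008 R2-era procedure (run while connected as sa or the working local Administrator login, substituting the group or account actually wanted):

        -- Create a login for the Windows principal if it doesn't exist yet:
        CREATE LOGIN [BUILTIN\Administrators] FROM WINDOWS;

        -- Add it to the sysadmin fixed server role:
        EXEC sp_addsrvrolemember @loginame = N'BUILTIN\Administrators',
                                 @rolename = N'sysadmin';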

    Read the article

  • Distinction between an extranet and a DMZ

    - by Markus Yrjölä
    I've been reading about intranets, extranets, DMZs and VPNs now, and I'd need some clarifications related to extranets and DMZs. I understand that they are different types of concepts - extranet allows limited access to some intranet resources, while DMZ is a subnet that sits between the internet and intranet and hosts the external-faced services. However, I'd like to know what is their distinction in practice in a usual setup? The Wikipedia article on extranets says that extranets are similar to DMZs because they are used for the same purpose (providing access to some services/resources without exposing the whole intranet). The article also states that an extranet is a part of a VPN, and this TechNet article also states that extranet access is often implemented similarly to remote intranet access, e.g. with a VPN. The TechNet article also says that commonly the extranet is hosted inside the DMZ. This Pearson article says "Although [the DMZ] is technically located within the intranet, [it] can serve as the extranet as well". This is slightly confusing. Consider this scenario: A company has a B2C website hosted in the DMZ. The website can be accessed from anywhere, but requires user authentication. The underlying web app has its database inside the intranet and also interacts with some web services that are hosted inside the intranet (i.e. it accesses intranet resources). The way I see it, the website does effectively offer a restricted access to the intranet. But can it be considered an extranet? If we take the Wikipedia definition of an extranet literally - "An extranet is a computer network that allows controlled access from outside of an organization's intranet" - I think it can. Let's say that the above can't be considered an extranet. What if we change the scenario slightly, and say it's a B2B website, where the access is e.g. limited to connections coming from a specific business partner (by using site-to-site VPN, for example). In this case it surely is an extranet, right? If this is the case, then the difference between extranet services and any other services hosted in the DMZ is simply access restrictions?

    Read the article

  • How to set 2 conditions / criteria for VLOOKUP / LOOKUP / etc in OpenOffice Calc (or Excel)

    - by MestreLion
    I have this spreadsheet that started as a silly aid for a game (Mafia Wars 2), but grew into a tricky spreadsheet question. In the game your character has 9 "slots" for weapons and armors, 1 for each "type": Light Weapon, Heavy Weapon, Body Armor, Head Armor, etc. So I made a list of all weapons and armors available in the game, 1 item per row. Example:

    SHOP         ITEM TYPE       ITEM NAME       ATK  DEF  PRICE   EQUIPPED?
    Marketplace  Weapon Light    Konrad Knife    16   5    5.500
    Marketplace  Weapon Light    Ice Queen       19   6    8.200
    Marketplace  Armor Body Up   Layered Polym   0    31   8.600
    Marketplace  Armor Body Up   Full Shield     7    42   17.650
    Marketplace  Weapon Heavy    Konrad Bullpup  53   25   24.500
    Marketplace  Weapon Heavy    Full Moon Blow  73   12   24.500  x
    Marketplace  Armor Body Low  Knee Pads       17   26   14.200  x
    Marketplace  Armor Body Low  Army Boots      15   55   24.500
    Bone Yard    Weapon Light    Bone Launcher   41   2    9.400   x
    Neon Strip   Vehicle Ground  Supercharged    41   34   24.500
    Dead End     Weapon Heavy    Sharp Sickle    21   5    24.500
    Dead End     Armor Body Low  Unholy Boots    5    36   15.000
    Dead End     Armor Head      Hockey Mask     5    18   15.900  x

    The last column indicates the items I have already bought and equipped (marked with "x"). What I need is a formula that, for each "slot" (item type), returns info related to the item of that kind that I am using. That would be:

    ITEM TYPE       SHOP NAME    ITEM NAME       ATK  DEF  PRICE
    Weapon Light    Bone Yard    Bone Launcher   41   2    9.400
    Weapon Heavy    Marketplace  Full Moon Blow  73   12   24.500
    Weapon Special  --           --              --   --   --
    Armor Body Up   --           --              --   --   --
    Armor Body Low  Marketplace  Knee Pads       17   26   14.200
    Armor Head      Dead End     Hockey Mask     5    18   15.900
    Vehicle Ground  --           --              --   --   --
    Vehicle Water   --           --              --   --   --
    Vehicle Air     --           --              --   --   --

    The item types are fixed, so they can be hard-coded. One row for each item type. So, for the 1st result line, it would return data from the row where the 2nd column is "Weapon Light" and the last column is "x". Basically I need a LOOKUP (or VLOOKUP, or anything else) that uses 2 criteria to find a given row: the item type and the X marker. The question is: HOW? I am using OpenOffice Calc 3.2.1, but since it shares so many functions with MS Excel, answers for Excel are also fine (as long as they only use regular formulas, no VBScript or macros or VBA, etc.) Last but not least, suggestions / solutions for rearranging the data so it makes this problem easier to solve are also welcome. Thanks!
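    A sketch of the array-formula approach that usually answers this, assuming the list sits in A2:G20 with ITEM TYPE in column B, ITEM NAME in C and the "x" marker in G (adjust ranges as needed). Enter it with Ctrl+Shift+Enter in both Calc and Excel; Calc accepts the semicolon separators shown, Excel uses commas:

        {=INDEX(C$2:C$20; MATCH(1; (B$2:B$20="Weapon Light") * (G$2:G$20="x"); 0))}

    The multiplication turns the two conditions into a 1 only where both hold, MATCH(1; ...; 0) finds that row, and INDEX pulls the wanted column; repeat with D, E and F in place of C for ATK, DEF and PRICE, and with each hard-coded type in place of "Weapon Light". Slots with no equipped item return #N/A, which an ISNA/IF wrapper can turn into "--".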

    Read the article

  • How do I get the latest FastCGI and PHP versions to peacefully coexist on IIS 6?

    - by BHelman
    I have been going round and round trying to get any sort of PHP running on IIS 6. I somehow managed to successfully get version 5.1.4 running using the php5isapi.dll file. However, I want to upgrade a website to begin using a Content Management System. I have never dug into CMS before so I'm open to programs that are easy to use. I am currently looking into TomatoCMS and ImpressCMS - but that's beside the point. I have never done an installation with PHP before and I think I'm getting familiar with how it works. However, the current situation is this. Microsoft's Web Platform Installer 2.0 installed FastCGI for me. I need to upgrade to PHP 5.3.1 for a CMS system. So I downloaded the Windows installer and let it go at it. After consulting several other blog articles, I believe I know how it is supposed to work but I am currently not having luck. THE SETUP *.php is a registered extension in IIS 6 for all websites (on Win 2k3). The application that it calls is C:\Windows\system32\inetsrv\fcgiext.dll, as it should. The fcgiext.ini config has the proper lines: [Types] php=PHP [PHP] ext=C:\program files\PHP\php-cgi.exe And the php.ini file also has the correct configs. All extensions are disabled and I changed the correct things for FastCGI. And everything is registered correctly with the PATH variable. Everything is exactly how it should be. BUT when I launch the "info.php" page () on another computer, I get the following error: FastCGI Error The FastCGI Handler was unable to process the request. Error Details: * Section [PHP] not found in config file. * Error Number: 1413 (0x80070585). * Error Description: Invalid index. HTTP Error 500 - Server Error. Internet Information Services (IIS) A quick Google search reveals that I have it all set up correctly as far as the INIs go and the mapping of the php extension. I am completely at a loss. Does anyone have any suggestions? Although the server is hosting three small websites, I don't really care what I have to do to it to get it to work.
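    For comparison, the fcgiext.ini shipped with Microsoft's FastCGI Extension for IIS 6 documents the executable key as ExePath rather than ext, and on x64 systems there are two copies of the file (under %windir%\system32\inetsrv and %windir%\SysWOW64\inetsrv), of which the one matching the worker process bitness is read. A hedged sketch of the mapping as the extension's own samples spell it:

        [Types]
        php=PHP

        [PHP]
        ExePath=C:\Program Files\PHP\php-cgi.exe
        InstanceMaxRequests=10000

    If the handler still reports "Section [PHP] not found", editing the copy in the other inetsrv directory is the usual next check.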

    Read the article

  • Configuration issue with HttpRealipModule (CloudFlare) in nginx configuration file

    - by Tyrx
    I've been attempting to use HttpRealipModule with the CloudFlare IP range in my main nginx configuration file, but upon restarting nginx I'll just get a standard "configuration file /etc/nginx/nginx.conf test failed" and my site will go down. This is what I've been attempting to do with my nginx.conf: user www-data; worker_processes 1; error_log /var/log/nginx/error.log warn; pid /var/run/nginx.pid; events { worker_connections 1024; } http { # Basic Settings set_real_ip_from 204.93.240.0/24; set_real_ip_from 204.93.177.0/24; set_real_ip_from 199.27.128.0/21; set_real_ip_from 173.245.48.0/20; set_real_ip_from 103.22.200.0/22; set_real_ip_from 141.101.64.0/18; set_real_ip_from 108.162.192.0/18; set_real_ip_from 190.93.240.0/20; set_real_ip_from 188.114.96.0/20; set_real_ip_from 2400:cb00::/32; set_real_ip_from 2606:4700::/32; set_real_ip_from 2803:f800::/32; real_ip_header CF-Connecting-IP; client_max_body_size 50m; client_header_timeout 5; keepalive_timeout 5; port_in_redirect off; sendfile on; server_tokens off; server_name_in_redirect off; tcp_nopush on; tcp_nodelay on; types_hash_max_size 2048; # MIME include /etc/nginx/mime.types; default_type application/octet-stream; # Logging Settings access_log /var/log/nginx/access.log; error_log /var/log/nginx/error.log warn; # Gzip Settings gzip on; gzip_disable "msie6"; gzip_min_length 1400; gzip_types text/plain text/css text/javascript text/xml application/x-javascript application/xml application/xml+rss; include /etc/nginx/conf.d/*.conf; include /etc/nginx/sites-enabled/*; } What's wrong with that configuration file?
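    Two quick checks worth running before editing the file further (a sketch; assumes nginx is on PATH). nginx -t prints which directive the parser rejected, and if set_real_ip_from comes back as an unknown directive, the binary was simply built without the optional realip module, which cannot be fixed in configuration:

        nginx -t                                    # names the offending directive
        nginx -V 2>&1 | tr ' ' '\n' | grep realip   # no output = no http_realip_module

    In that case the fix is a package or build of nginx compiled with --with-http_realip_module.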

    Read the article

  • Getting Classic ASP to work in .js files under IIS 7

    - by Abdullah Ahmed
    I am moving a client's classic ASP webapp to a new IIS7-based server. The site contains some .js files which have javascript but also classic ASP in <% %> tags, which contains a bunch of conditional statements designed to spit out pieces of javascript based on session state variables. Here's a brief example of what the file could be like.... var arrHOFFSET = -1; var arrLeft ="<"; var arrRight = ">"; <% If ((Session("dashInv") = "True") And ((Session("systemLevelStaff") = "4") Or (Session("systemLevelCompany") = "4"))) Then %> addMainItem("/MgmtTools/WelcomeInventory.asp?wherefrom=salesMan","",81,"center","","",0,0,"","","","",""); <% Else %> <% If (Session("dashInv") = "False") And ((Session("systemLevelStaff") = "4") Or (Session("systemLevelCompany") = "4")) Then %> <% Else %> addMainItem("/calendar/welcome.asp","",81,"center","","",0,0,"","","","",""); <% End If %> <% End If %> defineSubmenuProperties(135,"center","center",-3,0,"","","","","","",""); Currently this file (named custom.js for example) will start throwing js errors, because the server doesn't seem to recognize the ASP code in it and therefore does not parse it. I know I need to somehow specify that a .js file should also be treated like an .asp file and run through the ASP parser. However I am not sure how to go about doing this. Here is what I've tried so far... Under the Server node in IIS under HANDLER MAPPINGS I created a new Script Map with the following settings. Request Path: *.js Executable: C:\Windows\System32\inetsrv\asp.dll Name: ASPClassicInJSFiles Mapping: Invoke Handler only if request is mapped to : File Verbs: All verbs Access: Script I also created a similar handler under the site node itself. Under MIME Types .js is defined as application/x-javascript None of these work. If I simply rename the file to have the .asp extension then things work; however this app is poorly coded and has literally 100s of files with the .js files included in them under various names and locations, so rename, search and replace is the last option I have.
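    On IIS 7 the same script map can also be expressed in the site's web.config; below is a sketch that mirrors the built-in ASPClassic handler entry but for *.js (an untested assumption that asp.dll tolerates the .js extension once mapped):

        <?xml version="1.0" encoding="UTF-8"?>
        <configuration>
          <system.webServer>
            <handlers>
              <!-- Same module and processor as the stock *.asp mapping -->
              <add name="ASPClassicInJSFiles" path="*.js" verb="GET,HEAD,POST"
                   modules="IsapiModule"
                   scriptProcessor="%windir%\system32\inetsrv\asp.dll"
                   resourceType="File" />
            </handlers>
          </system.webServer>
        </configuration>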

    Read the article

  • Nginx phpmyadmin redirecting to / instead of /phpmyadmin upon login

    - by Frederik Nielsen
    I am having issues with my phpmyadmin on my nginx install. When I enter <ServerIP>/phpmyadmin and log in, I get redirected to <ServerIP>/index.php?<tokenstuff> instead of <ServerIP>/phpmyadmin/index.php?<tokenstuff> Nginx config file: user nginx; worker_processes 5; error_log /var/log/nginx/error.log warn; pid /var/run/nginx.pid; events { worker_connections 1024; } http { include /etc/nginx/mime.types; default_type application/octet-stream; log_format main '$remote_addr - $remote_user [$time_local] "$request" ' '$status $body_bytes_sent "$http_referer" ' '"$http_user_agent" "$http_x_forwarded_for"'; access_log /var/log/nginx/access.log main; sendfile on; #tcp_nopush on; keepalive_timeout 2; #gzip on; include /etc/nginx/conf.d/*.conf; } Default.conf: server { listen 80; server_name _; #charset koi8-r; #access_log /var/log/nginx/log/host.access.log main; location / { root /usr/share/nginx/html; index index.php index.html index.htm; } #error_page 404 /404.html; # redirect server error pages to the static page /50x.html # error_page 500 502 503 504 /50x.html; location = /50x.html { root /usr/share/nginx/html; } # proxy the PHP scripts to Apache listening on 127.0.0.1:80 # #location ~ \.php$ { # proxy_pass http://127.0.0.1; #} # pass the PHP scripts to FastCGI server listening on 127.0.0.1:9000 # location ~ \.php$ { root /usr/share/nginx/html; try_files $uri =404; fastcgi_pass unix:/tmp/php5-fpm.sock; fastcgi_index index.php; fastcgi_param SCRIPT_FILENAME $document_root$fastcgi_script_name; include fastcgi_params; } # deny access to .htaccess files, if Apache's document root # concurs with nginx's one # location ~ /\.ht { deny all; } location /phpmyadmin { root /usr/share/; index index.php index.html index.htm; location ~ ^/phpmyadmin/(.+\.php)$ { try_files $uri =404; root /usr/share/; fastcgi_pass unix:/tmp/php5-fpm.sock; fastcgi_index index.php; fastcgi_param SCRIPT_FILENAME $request_filename; include fastcgi_params; fastcgi_param PATH_INFO $fastcgi_script_name; } location ~* ^/phpmyadmin/(.+\.(jpg|jpeg|gif|css|png|js|ico|html|xml|txt))$ { root /usr/share/; } } } (Any general tips on tidying up those config files are accepted too)
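    One knob worth trying, as a sketch: older phpMyAdmin releases reconstruct their own base URL and can get it wrong behind nginx, and they expose PmaAbsoluteUri in config.inc.php to pin it explicitly (the IP below is a placeholder):

        <?php
        // config.inc.php - force the URL phpMyAdmin redirects back to
        $cfg['PmaAbsoluteUri'] = 'http://192.0.2.10/phpmyadmin/';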

    Read the article

  • Port forwarding problem

    - by Steve
    I have a modem connecting to an ADSL2 network and a router connecting to the modem. The rest of the machines all connect to the router. The modem's IP is 192.168.1.1 and the router's IP is 192.168.0.1. From the modem configuration, I can see that the modem thinks the router's IP is 192.168.1.2. I can visit the router by using either 192.168.0.1 or 192.168.1.2. Now I forward a port from the router to a private machine. It works. I can test it by typing 192.168.1.2 and it is redirected to the private machine. But if I use 192.168.0.1, it is still the router's configuration page. I also do port forwarding on my modem. Since the modem sees only the router, I can only forward the port to the router's specific port. And I am thinking that by doing this, I can reach the private machine after two rounds of port forwarding, once on the modem and once on the router. I also have a static public IP. I want to achieve the goal that when someone types the public IP, he will be redirected to the private machine. But when I use some online port forwarding tester, the result always says that the port is closed on the public IP. I have these questions: Why does my router have two IPs? Why can I see the port forwarding result using one IP but not the other? I think the port forwarding only works when visiting from outside, rather than from both outside and inside. Otherwise, if I set port forwarding on my router/modem on port 80, I will never be able to see its original configuration page again. Everything is forwarded. Am I right? How can I achieve my goal described above? By achieving this, I will have a dedicated server of my own and the users can visit from the public IP. Can anyone correct me on any mistakes I made? I am using a Netconn modem and a D-Link DIR-300 router. Thank you very much for any help. Edit: Assume I have correctly set up the whole thing. Now I want to test my website by using the public IP to visit it, but the port forwarding doesn't work. Is it because I am inside the local network, so the port forwarding is not used? If so, how can I do it? I asked my friends (outside my local network) to have a try, and they can see the website. What should I do so that from the inside, I can do the testing? Thank you very much.

    Read the article

  • Configuring OpenLDAP and SSL

    - by Stormshadow
    I am having trouble trying to connect to a secure OpenLDAP server which I have set up. On running my LDAP client code java -Djavax.net.debug=ssl LDAPConnector I get the following exception trace (java version 1.6.0_17) trigger seeding of SecureRandom done seeding SecureRandom %% No cached client session *** ClientHello, TLSv1 RandomCookie: GMT: 1256110124 bytes = { 224, 19, 193, 148, 45, 205, 108, 37, 101, 247, 112, 24, 157, 39, 111, 177, 43, 53, 206, 224, 68, 165, 55, 185, 54, 203, 43, 91 } Session ID: {} Cipher Suites: [SSL_RSA_WITH_RC4_128_MD5, SSL_RSA_WITH_RC4_128_SHA, TLS_RSA_WITH_AES_128_CBC_SHA, TLS_DHE_RSA_WITH_AES_128_CBC_SHA, TLS_DHE_DSS_WITH_AES_128_CBC_SHA, SSL_RSA_W ITH_3DES_EDE_CBC_SHA, SSL_DHE_RSA_WITH_3DES_EDE_CBC_SHA, SSL_DHE_DSS_WITH_3DES_EDE_CBC_SHA, SSL_RSA_WITH_DES_CBC_SHA, SSL_DHE_RSA_WITH_DES_CBC_SHA, SSL_DHE_DSS_WITH_DES_CBC_SH A, SSL_RSA_EXPORT_WITH_RC4_40_MD5, SSL_RSA_EXPORT_WITH_DES40_CBC_SHA, SSL_DHE_RSA_EXPORT_WITH_DES40_CBC_SHA, SSL_DHE_DSS_EXPORT_WITH_DES40_CBC_SHA] Compression Methods: { 0 } *** Thread-0, WRITE: TLSv1 Handshake, length = 73 Thread-0, WRITE: SSLv2 client hello message, length = 98 Thread-0, received EOFException: error Thread-0, handling exception: javax.net.ssl.SSLHandshakeException: Remote host closed connection during handshake Thread-0, SEND TLSv1 ALERT: fatal, description = handshake_failure Thread-0, WRITE: TLSv1 Alert, length = 2 Thread-0, called closeSocket() main, handling exception: javax.net.ssl.SSLHandshakeException: Remote host closed connection during handshake javax.naming.CommunicationException: simple bind failed: ldap.natraj.com:636 [Root exception is javax.net.ssl.SSLHandshakeException: Remote host closed connection during hands hake] at com.sun.jndi.ldap.LdapClient.authenticate(Unknown Source) at com.sun.jndi.ldap.LdapCtx.connect(Unknown Source) at com.sun.jndi.ldap.LdapCtx.<init>(Unknown Source) at com.sun.jndi.ldap.LdapCtxFactory.getUsingURL(Unknown Source) at com.sun.jndi.ldap.LdapCtxFactory.getUsingURLs(Unknown Source) at com.sun.jndi.ldap.LdapCtxFactory.getLdapCtxInstance(Unknown Source) at com.sun.jndi.ldap.LdapCtxFactory.getInitialContext(Unknown Source) at javax.naming.spi.NamingManager.getInitialContext(Unknown Source) at javax.naming.InitialContext.getDefaultInitCtx(Unknown Source) at javax.naming.InitialContext.init(Unknown Source) at javax.naming.InitialContext.<init>(Unknown Source) at javax.naming.directory.InitialDirContext.<init>(Unknown Source) at LDAPConnector.CallSecureLDAPServer(LDAPConnector.java:43) at LDAPConnector.main(LDAPConnector.java:237) Caused by: javax.net.ssl.SSLHandshakeException: Remote host closed connection during handshake at com.sun.net.ssl.internal.ssl.SSLSocketImpl.readRecord(Unknown Source) at com.sun.net.ssl.internal.ssl.SSLSocketImpl.performInitialHandshake(Unknown Source) at com.sun.net.ssl.internal.ssl.SSLSocketImpl.readDataRecord(Unknown Source) at com.sun.net.ssl.internal.ssl.AppInputStream.read(Unknown Source) at java.io.BufferedInputStream.fill(Unknown Source) at java.io.BufferedInputStream.read1(Unknown Source) at java.io.BufferedInputStream.read(Unknown Source) at com.sun.jndi.ldap.Connection.run(Unknown Source) at java.lang.Thread.run(Unknown Source) Caused by: java.io.EOFException: SSL peer shut down incorrectly at com.sun.net.ssl.internal.ssl.InputRecord.read(Unknown Source) ... 
9 more I am able to connect to the same secure LDAP server, however, if I use another version of Java (1.6.0_14). I have created and installed the server certificates in the cacerts of both JREs, as mentioned in this guide -- OpenLDAP with SSL. When I run ldapsearch -x on the server I get # extended LDIF # # LDAPv3 # base <dc=localdomain> (default) with scope subtree # filter: (objectclass=*) # requesting: ALL # # localdomain dn: dc=localdomain objectClass: top objectClass: dcObject objectClass: organization o: localdomain dc: localdomain # admin, localdomain dn: cn=admin,dc=localdomain objectClass: simpleSecurityObject objectClass: organizationalRole cn: admin description: LDAP administrator # search result search: 2 result: 0 Success # numResponses: 3 # numEntries: 2 On running openssl s_client -connect ldap.natraj.com:636 -showcerts, I obtain the self-signed certificate. My slapd.conf file is as follows: ####################################################################### # Global Directives: # Features to permit #allow bind_v2 # Schema and objectClass definitions include /etc/ldap/schema/core.schema include /etc/ldap/schema/cosine.schema include /etc/ldap/schema/nis.schema include /etc/ldap/schema/inetorgperson.schema # Where the pid file is put. The init.d script # will not stop the server if you change this. pidfile /var/run/slapd/slapd.pid # List of arguments that were passed to the server argsfile /var/run/slapd/slapd.args # Read slapd.conf(5) for possible values loglevel none # Where the dynamically loaded modules are stored modulepath /usr/lib/ldap moduleload back_hdb # The maximum number of entries that is returned for a search operation sizelimit 500 # The tool-threads parameter sets the actual amount of cpu's that is used # for indexing. tool-threads 1 ####################################################################### # Specific Backend Directives for hdb: # Backend specific directives apply to this backend until another # 'backend' directive occurs backend hdb ####################################################################### # Specific Backend Directives for 'other': # Backend specific directives apply to this backend until another # 'backend' directive occurs #backend <other> ####################################################################### # Specific Directives for database #1, of type hdb: # Database specific directives apply to this databasse until another # 'database' directive occurs database hdb # The base of your directory in database #1 suffix "dc=localdomain" # rootdn directive for specifying a superuser on the database. This is needed # for syncrepl. rootdn "cn=admin,dc=localdomain" # Where the database file are physically stored for database #1 directory "/var/lib/ldap" # The dbconfig settings are used to generate a DB_CONFIG file the first # time slapd starts. They do NOT override existing an existing DB_CONFIG # file. You should therefore change these settings in DB_CONFIG directly # or remove DB_CONFIG and restart slapd for changes to take effect. # For the Debian package we use 2MB as default but be sure to update this # value if you have plenty of RAM dbconfig set_cachesize 0 2097152 0 # Sven Hartge reported that he had to set this value incredibly high # to get slapd running at all. See http://bugs.debian.org/303057 for more # information. # Number of objects that can be locked at the same time.
dbconfig set_lk_max_objects 1500 # Number of locks (both requested and granted) dbconfig set_lk_max_locks 1500 # Number of lockers dbconfig set_lk_max_lockers 1500 # Indexing options for database #1 index objectClass eq # Save the time that the entry gets modified, for database #1 lastmod on # Checkpoint the BerkeleyDB database periodically in case of system # failure and to speed slapd shutdown. checkpoint 512 30 # Where to store the replica logs for database #1 # replogfile /var/lib/ldap/replog # The userPassword by default can be changed # by the entry owning it if they are authenticated. # Others should not be able to see it, except the # admin entry below # These access lines apply to database #1 only access to attrs=userPassword,shadowLastChange by dn="cn=admin,dc=localdomain" write by anonymous auth by self write by * none # Ensure read access to the base for things like # supportedSASLMechanisms. Without this you may # have problems with SASL not knowing what # mechanisms are available and the like. # Note that this is covered by the 'access to *' # ACL below too but if you change that as people # are wont to do you'll still need this if you # want SASL (and possible other things) to work # happily. access to dn.base="" by * read # The admin dn has full write access, everyone else # can read everything. access to * by dn="cn=admin,dc=localdomain" write by * read # For Netscape Roaming support, each user gets a roaming # profile for which they have write access to #access to dn=".*,ou=Roaming,o=morsnet" # by dn="cn=admin,dc=localdomain" write # by dnattr=owner write ####################################################################### # Specific Directives for database #2, of type 'other' (can be hdb too): # Database specific directives apply to this databasse until another # 'database' directive occurs #database <other> # The base of your directory for database #2 #suffix "dc=debian,dc=org" ####################################################################### # SSL: # Uncomment the following lines to enable SSL and use the default # snakeoil certificates. #TLSCertificateFile /etc/ssl/certs/ssl-cert-snakeoil.pem #TLSCertificateKeyFile /etc/ssl/private/ssl-cert-snakeoil.key TLSCipherSuite TLS_RSA_AES_256_CBC_SHA TLSCACertificateFile /etc/ldap/ssl/server.pem TLSCertificateFile /etc/ldap/ssl/server.pem TLSCertificateKeyFile /etc/ldap/ssl/server.pem My ldap.conf file is # # LDAP Defaults # # See ldap.conf(5) for details # This file should be world readable but not world writable. HOST ldap.natraj.com PORT 636 BASE dc=localdomain URI ldaps://ldap.natraj.com TLS_CACERT /etc/ldap/ssl/server.pem TLS_REQCERT allow #SIZELIMIT 12 #TIMELIMIT 15 #DEREF never Why is it that I can connect to the same server using one version of JRE while I cannot with another ?
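
    The LDAPConnector source is not shown, so for reference, a minimal JNDI client that performs the same simple bind over ldaps:// might look roughly like the sketch below. The class name, bind DN and password are placeholders, not values taken from the post.

    import java.util.Hashtable;
    import javax.naming.Context;
    import javax.naming.directory.DirContext;
    import javax.naming.directory.InitialDirContext;

    public class LdapsBindSketch {
        public static void main(String[] args) throws Exception {
            Hashtable<String, String> env = new Hashtable<String, String>();
            env.put(Context.INITIAL_CONTEXT_FACTORY, "com.sun.jndi.ldap.LdapCtxFactory");
            // An ldaps:// URL on port 636 makes JNDI run the SSL handshake first;
            // that handshake is the point at which the trace above fails.
            env.put(Context.PROVIDER_URL, "ldaps://ldap.natraj.com:636");
            env.put(Context.SECURITY_AUTHENTICATION, "simple");
            env.put(Context.SECURITY_PRINCIPAL, "cn=admin,dc=localdomain"); // placeholder DN
            env.put(Context.SECURITY_CREDENTIALS, "secret");                // placeholder password
            DirContext ctx = new InitialDirContext(env); // the simple bind happens here
            System.out.println("bind succeeded");
            ctx.close();
        }
    }

    Since the trace dies immediately after the SSLv2-format ClientHello is written, comparing the enabled protocol and cipher lists of the two JREs would be one way to narrow down what changed between 1.6.0_14 and 1.6.0_17.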

    Read the article

  • Blank Page: wordpress on nginx+php-fpm

    - by troutwine
    Good day. While this post discusses a similar setup to mine that served blank pages occasionally after a successful installation, I am unable to serve anything but blank pages.

    My setup:

    - Wordpress 3.0.4
    - nginx 0.8.54
    - php-fpm 5.3.5 (fpm-fcgi)
    - Arch Linux

    Configuration files:

    php-fpm.conf:

    [global]
    pid = run/php-fpm/php-fpm.pid
    error_log = log/php-fpm.log
    log_level = notice

    [www]
    listen = 127.0.0.1:9000
    listen.owner = www
    listen.group = www
    listen.mode = 0660
    user = www
    group = www
    pm = dynamic
    pm.max_children = 50
    pm.start_servers = 20
    pm.min_spare_servers = 5
    pm.max_spare_servers = 35
    pm.max_requests = 500

    nginx.conf:

    user www;
    worker_processes 1;
    error_log /var/log/nginx/error.log notice;
    pid /var/run/nginx.pid;

    events {
        worker_connections 1024;
    }

    http {
        include mime.types;
        default_type application/octet-stream;
        sendfile on;
        keepalive_timeout 65;
        gzip on;
        include /etc/nginx/sites-enabled/*.conf;
    }

    /etc/nginx/sites-enabled/blog_sharonrhodes_us.conf:

    upstream php {
        server 127.0.0.1:9000;
    }

    server {
        error_log /var/log/nginx/us/sharonrhodes/blog/error.log notice;
        access_log /var/log/nginx/us/sharonrhodes/blog/access.log;
        server_name blog.sharonrhodes.us;
        root /srv/apps/us/sharonrhodes/blog;
        index index.php;

        location = /favicon.ico { log_not_found off; access_log off; }
        location = /robots.txt { allow all; log_not_found off; access_log off; }

        location / {
            # This is cool because no php is touched for static content
            try_files $uri $uri/ /index.php?q=$uri&$args;
        }

        location ~ \.php$ {
            fastcgi_split_path_info ^(.+\.php)(/.+)$;
            #NOTE: You should have "cgi.fix_pathinfo = 0;" in php.ini
            include fastcgi_params;
            fastcgi_intercept_errors on;
            fastcgi_pass php;
        }

        location ~* \.(js|css|png|jpg|jpeg|gif|ico)$ {
            expires max;
            log_not_found off;
        }
    }

    Read the article

  • Nginx / uWsgi / Django site can handle more traffic with rewrite URL

    - by Ludo
    Hi there. I'm running a Django app, using uWsgi behind Nginx. I've been doing some performance tuning and load testing using ApacheBench and have discovered something unexpected which I wonder if someone could explain for me.

    In my Nginx config I have a rewrite directive which catches lots of different URL permutations and then forwards them to the canonical URL I wish to use, e.g. it traps www.mysite.com/whatever and www.mysite.co.uk/whatever and forwards them all to http://mysite.com/whatever.

    If I load test against any of the URLs listed with a redirect (i.e. NOT the canonical URL which they are eventually forwarded to), it can serve 15000 concurrent connections without breaking a sweat. If I load test against the canonical URL, which I would have expected the above tests to be forwarded to anyway, it can't handle nearly as much: it will drop about 4000 of the 15000 requests, and can only handle about 9000 reliably.

    This is the command line I'm using to test:

    ab -c15000 -n15000 http://www.mysite.com/somepath/

    and

    ab -c15000 -n15000 http://mysite.com/somepath/

    I've tried several different tests; it makes no difference which order I run them in. This doesn't make sense to me. I can understand why the requests involving a redirect might not handle quite so many concurrent connections, but it's happening the other way round. Can anyone explain? I'd really prefer it if the canonical URL was the one which could handle more traffic. I'll post my Nginx config below. Thanks loads for any help!

    server {
        server_name www.somesite.com somesite.net www.somesite.net somesite.co.uk www.somesite.co.uk;
        rewrite ^(.*) http://somesite.com$1 permanent;
    }

    server {
        root /home/django/domains/somesite.com/live/somesite/;
        server_name somesite.com somesite-live.myserver.somesite.com;
        access_log /home/django/domains/somesite.com/live/log/nginx.log;

        location / {
            uwsgi_pass unix:////tmp/somesite-live.sock;
            include uwsgi_params;
        }

        location /media {
            try_files $uri $uri/ /index.html;
        }

        location /site_media {
            try_files $uri $uri/ /index.html;
        }

        location = /favicon.ico { empty_gif; }
    }
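
    One detail that could explain the asymmetry: ApacheBench does not follow redirects, so the tests against the www hostnames only measure nginx writing out 301 responses and never touch uWsgi or Django at all; the canonical URL is the only test that actually exercises the application. A quick way to confirm what each run is measuring (hostnames as in the post):

    # The 301 from the rewrite counts as a completed request in ab's report,
    # showing up under "Non-2xx responses":
    curl -I http://www.mysite.com/somepath/   # expect 301 + Location: http://mysite.com/somepath/
    curl -I http://mysite.com/somepath/       # expect 200 rendered by Django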

    Read the article

  • Tying down a cloud by virtualizing everything and then locking VMs to real hardware as necessary

    - by tudor
    I'm looking for a cloud software solution that:

    - Can run on both server and desktop machines;
    - Virtualizes hardware and has the option of exposing each real machine to the cloud;
    - Allows a VM to be "locked" to a set of real hardware capabilities and stay there until moved (e.g. a user's "real" desktop);
    - Allows a VM to link to some types of devices elsewhere (e.g. USB/serial via ethernet); and
    - Is geography-aware to control movement of VMs between real networks.

    I'm aware that this may be the holy grail of virtualization, and I've searched a lot. Some solutions appear to meet some criteria but not others. Most cloud implementations appear to ignore real hardware, for example. I realise that this may be solved by using three different implementations in combination:

    - A standard cloud server farm.
    - A bare-metal network backup utility with PXE boot.
    - VNC and/or VDI. (VNC obviously would require the real hardware to be running.)

    This combination, however, has some serious drawbacks that I'd like to solve by treating it as one system. My explanation follows...

    I have a network of real servers and desktops in multiple locations. I've virtualized servers before using Virtualbox and that's worked quite well. I've even connected USB devices to VMs on servers. I would like to virtualize the desktops in all my offices to facilitate movement of desktops, remote access (e.g. VDI) and bare-metal backups.

    However, I know that there are problems with this. For example, some desktops have specific hardware (e.g. 3D graphics cards, USB devices, etc) that limits their mobility. Geographic constraints also limit movement, in that VMs can be moved easily within offices, but transferring between offices is not always preferable.

    What I would like to find is a system that can virtualize everything from bare metal easily, by maintaining an abstraction layer on each client and server machine that exposes the hardware available and runs as a cloud. Then certain VMs would be "locked" to specific hardware (so that, e.g., the VM runs only on its own desktop). This would be required for situations where speed is important (e.g. 3D graphics pass-through). In addition, abstracted low-speed devices (e.g. USB) could be piped from real hardware to a VM in the cloud. This is important since, if a VM is taken down, another VM can connect to the real hardware for minimum downtime.

    Read the article

  • Nginx + PHP - No input file specified

    - by F21
    I am running Ubuntu Desktop 12.04 with nginx 1.2.6. PHP is PHP-FPM 5.4.9. This is the relevant part of my nginx.conf:

    http {
        include mime.types;
        default_type application/octet-stream;
        sendfile on;
        root /www
        keepalive_timeout 65;

        server {
            server_name testapp.com;
            root /www/app/www/;
            index index.php index.html index.htm;

            location ~ \.php$ {
                fastcgi_intercept_errors on;
                fastcgi_pass 127.0.0.1:9000;
                fastcgi_index index.php;
                fastcgi_param SCRIPT_FILENAME $document_root$fastcgi_script_name;
                include fastcgi_params;
            }
        }

        server {
            listen 80 default_server;
            index index.html index.php;

            location ~ \.php$ {
                fastcgi_intercept_errors on;
                fastcgi_pass 127.0.0.1:9000;
                fastcgi_index index.php;
                fastcgi_param SCRIPT_FILENAME $document_root$fastcgi_script_name;
                include fastcgi_params;
            }
        }
    }

    In my hosts file, I redirect 2 domains, testapp.com and test.com, to 127.0.0.1. My web files are all stored in /www.

    From the above settings, if I visit test.com/phpinfo.php and test.com/app/www, everything works as expected and I get output from PHP. However, if I visit testapp.com, I get the dreaded "No input file specified." error.

    So, at this point, I pull out the log files and have a look:

    2012/12/19 16:00:53 [error] 12183#0: *17 FastCGI sent in stderr: "Unable to open primary script: /www/app/www/index.php (No such file or directory)" while reading response header from upstream, client: 127.0.0.1, server: testapp.com, request: "GET / HTTP/1.1", upstream: "fastcgi://127.0.0.1:9000", host: "testapp.com"

    This baffles me, because I have checked again and again and /www/app/www/index.php definitely exists! This is also validated by the fact that test.com/app/www/index.php works, which means the file exists and the permissions are correct. Why is this happening, and what are the root causes of things breaking for just the testapp.com v-host?
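
    Since the same file is served fine when reached via test.com, a few checks could separate a parsing problem from a permissions one. Note that as quoted above, the http-level "root /www" is missing its semicolon; if that is not just a paste artifact, the first command below will flag it. The pool user in the last command is an assumption; substitute whatever user the PHP-FPM pool actually runs as.

    nginx -t                            # confirm nginx accepts this config at all
    namei -l /www/app/www/index.php     # every directory on the path needs execute permission
    sudo -u www-data ls /www/app/www/   # assumes the FPM pool runs as www-data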

    Read the article

  • nginx crashes on ssl after about a minute

    - by Scott
    Here are my configuration files:

    ssl.conf:

    # HTTPS server
    #
    server {
        listen 443 ssl;
        server_name api.domain.com;
        error_log /var/log/nginx/api.error.log;

        location / {
            root /var/www/api.domain.com;
            index index.html index.php index.php;
            try_files $uri $uri/ /index.php?$args;
        }

        ssl on;
        ssl_certificate /etc/nginx/api.domain.com.crt;
        ssl_certificate_key /etc/nginx/api.domain.com.key;
        ssl_session_timeout 5m;
        ssl_protocols SSLv2 SSLv3 TLSv1;
        ssl_ciphers HIGH:!aNULL:!MD5;
        ssl_prefer_server_ciphers on;

        # pass the PHP scripts to FastCGI server listening on 127.0.0.1:9000
        location ~ \.php$ {
            # root html;
            fastcgi_pass 127.0.0.1:9000;
            fastcgi_index index.php;
            fastcgi_split_path_info ^(.+\.php)(.*)$;
            fastcgi_param SCRIPT_FILENAME /var/www/api.domain.com$fastcgi_script_name;
            fastcgi_param HTTPS on;
            include fastcgi_params;
        }

        location ~ /\.ht {
            deny all;
        }
    }

    nginx.conf:

    user nginx;
    worker_processes 1;
    error_log /var/log/nginx/error.log;
    pid /var/run/nginx.pid;

    events {
        worker_connections 1024;
    }

    http {
        include /etc/nginx/mime.types;
        default_type application/octet-stream;
        log_format main '$remote_addr - $remote_user [$time_local] "$request" '
                        '$status $body_bytes_sent "$http_referer" '
                        '"$http_user_agent" "$http_x_forwarded_for"';
        access_log /var/log/nginx/access.log main;
        sendfile on;
        #tcp_nopush on;
        #keepalive_timeout 0;
        keepalive_timeout 65;
        gzip on;
        include /etc/nginx/conf.d/*.conf;
    }

    I have a server running on port 80 that runs with no issues. As soon as I turn on this api server running on ssl, it works for about a minute and then crashes and gives a 504 Gateway Time-out. Running nginx/1.2.3.
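
    The "about a minute" is suggestive: nginx's fastcgi_read_timeout defaults to 60 seconds, and a 504 Gateway Time-out is exactly what nginx returns when the FastCGI upstream stops answering within that window. So this may be PHP-FPM stalling on the HTTPS vhost rather than nginx itself crashing; raising the timeout, as in the sketch below, does not fix anything but helps confirm where the stall is (an assumption to test, not a diagnosis):

    location ~ \.php$ {
        fastcgi_pass 127.0.0.1:9000;
        fastcgi_index index.php;
        fastcgi_split_path_info ^(.+\.php)(.*)$;
        fastcgi_param SCRIPT_FILENAME /var/www/api.domain.com$fastcgi_script_name;
        fastcgi_param HTTPS on;
        include fastcgi_params;
        # A 504 after roughly 60s matches the default fastcgi_read_timeout;
        # if requests now hang for 300s instead, the upstream is what's stalling.
        fastcgi_read_timeout 300s;
    }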

    Read the article

  • nginx, php-cgi and "No input file specified."

    - by Stephen Belanger
    I'm trying to get nginx to play nice with php-cgi, but it's not quite working how I'd like. I'm using some set variables to allow for dynamic host names -- basically anything.local. I know that stuff is working because I can access static files properly; however, php files don't work. I get the standard "No input file specified." error which normally occurs when the file doesn't exist, but it definitely does exist, and the path is correct, because I can access the static files in the same path.

    It could possibly be a permissions thing, but I'm not sure how that could be an issue. I'm running this on Windows under my own user account, so I think it should have permission unless php-cgi is running under a different user without me telling it to.

    Here's my config:

    worker_processes 1;

    events {
        worker_connections 1024;
    }

    http {
        include mime.types;
        default_type application/octet-stream;
        sendfile on;
        keepalive_timeout 65;
        gzip on;

        server {
            # Listen for HTTP
            listen 80;

            # Match to local host names.
            server_name *.local;

            # We need to store a "cleaned" host.
            set $no_www $host;
            set $no_local $host;

            # Strip out www.
            if ($host ~* www\.(.*)) {
                set $no_www $1;
                rewrite ^(.*)$ $scheme://$no_www$1 permanent;
            }

            # Strip local for directory names.
            if ($no_www ~* (.*)\.local) {
                set $no_local $1;
            }

            # Define default path handler.
            location / {
                root ../Users/Stephen/Documents/Work/$no_local.com/hosts/main/docs;
                index index.php index.html index.htm;
                # Route non-existent paths through Kohana system router.
                try_files $uri $uri/ /index.php?kohana_uri=$request_uri;
            }

            # pass PHP scripts to FastCGI server listening on 127.0.0.1:9000
            location ~ \.php$ {
                root ../Users/Stephen/Documents/Work/$no_local.com/hosts/main/docs;
                fastcgi_pass 127.0.0.1:9000;
                fastcgi_index index.php;
                include fastcgi.conf;
            }

            # Prevent access to system files.
            location ~ /\. { return 404; }
            location ~* ^/(modules|application|system) { return 404; }
        }
    }
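
    One suspect is the relative root. nginx resolves a relative root against its own prefix, but $document_root is what gets baked into SCRIPT_FILENAME for php-cgi, and php-cgi would resolve any relative path against its own working directory rather than nginx's; this is an assumption about the failure mode, not something verified here. An absolute Windows path sidesteps the ambiguity entirely:

    location ~ \.php$ {
        # Absolute path instead of ../Users/... so SCRIPT_FILENAME is
        # unambiguous no matter where php-cgi.exe was started from.
        root C:/Users/Stephen/Documents/Work/$no_local.com/hosts/main/docs;
        fastcgi_pass 127.0.0.1:9000;
        fastcgi_index index.php;
        include fastcgi.conf;
    }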

    Read the article

  • Installing Phusion Passenger 4.0.20 on Ubuntu 13.10

    - by tempestfire2002
    So I'm trying to install Passenger on the newest version of Kubuntu (13.10). I installed Apache2 from the apache2-mpm-worker package via the Muon Package Manager, and these are the commands I ran:

    rvmsudo gem install passenger
    rvmsudo passenger-install-apache2-module

    But I keep getting the following errors:

    [Fri Oct 18 15:52:13.227790 2013] [core:warn] [pid 13095] AH00111: Config variable ${APACHE_LOCK_DIR} is not defined
    [Fri Oct 18 15:52:13.227933 2013] [core:warn] [pid 13095] AH00111: Config variable ${APACHE_PID_FILE} is not defined
    [Fri Oct 18 15:52:13.227969 2013] [core:warn] [pid 13095] AH00111: Config variable ${APACHE_RUN_USER} is not defined
    [Fri Oct 18 15:52:13.227991 2013] [core:warn] [pid 13095] AH00111: Config variable ${APACHE_RUN_GROUP} is not defined
    [Fri Oct 18 15:52:13.228026 2013] [core:warn] [pid 13095] AH00111: Config variable ${APACHE_LOG_DIR} is not defined
    [Fri Oct 18 15:52:13.231737 2013] [core:warn] [pid 13095:tid 3074562624] AH00111: Config variable ${APACHE_RUN_DIR} is not defined
    [Fri Oct 18 15:52:13.232760 2013] [core:warn] [pid 13095:tid 3074562624] AH00111: Config variable ${APACHE_LOG_DIR} is not defined
    [Fri Oct 18 15:52:13.233043 2013] [core:warn] [pid 13095:tid 3074562624] AH00111: Config variable ${APACHE_LOG_DIR} is not defined
    [Fri Oct 18 15:52:13.233078 2013] [core:warn] [pid 13095:tid 3074562624] AH00111: Config variable ${APACHE_LOG_DIR} is not defined
    AH00526: Syntax error on line 74 of /etc/apache2/apache2.conf: Invalid Mutex directory in argument file:${APACHE_LOCK_DIR}

    --------------------------------------------
    WARNING: Apache doesn't seem to be compiled with the 'prefork', 'worker' or 'event' MPM

    Phusion Passenger has only been tested on Apache with the 'prefork', the 'worker' and the 'event' MPM. Your Apache installation is compiled with the '' MPM. We recommend you to abort this installer and to recompile Apache with either the 'prefork', the 'worker' or the 'event' MPM.

    Press Ctrl-C to abort this installer (recommended).
    Press Enter if you want to continue with installation anyway.

    The result of my running apache2ctl -V is:

    Server version: Apache/2.4.6 (Ubuntu)
    Server built: Aug 9 2013 14:31:04
    Server's Module Magic Number: 20120211:23
    Server loaded: APR 1.4.8, APR-UTIL 1.5.2
    Compiled using: APR 1.4.8, APR-UTIL 1.5.2
    Architecture: 32-bit
    Server MPM: worker
      threaded: yes (fixed thread count)
      forked: yes (variable process count)
    Server compiled with....
      -D APR_HAS_SENDFILE
      -D APR_HAS_MMAP
      -D APR_HAVE_IPV6 (IPv4-mapped addresses enabled)
      -D APR_USE_SYSVSEM_SERIALIZE
      -D APR_USE_PTHREAD_SERIALIZE
      -D SINGLE_LISTEN_UNSERIALIZED_ACCEPT
      -D APR_HAS_OTHER_CHILD
      -D AP_HAVE_RELIABLE_PIPED_LOGS
      -D DYNAMIC_MODULE_LIMIT=256
      -D HTTPD_ROOT="/etc/apache2"
      -D SUEXEC_BIN="/usr/lib/apache2/suexec"
      -D DEFAULT_PIDLOG="/var/run/apache2.pid"
      -D DEFAULT_SCOREBOARD="logs/apache_runtime_status"
      -D DEFAULT_ERRORLOG="logs/error_log"
      -D AP_TYPES_CONFIG_FILE="mime.types"
      -D SERVER_CONFIG_FILE="apache2.conf"

    As can be seen, the server is compiled with the worker MPM, so why is Passenger complaining? And how do I solve the above errors (warnings, really, but to be safe I'd like to not have any warnings)? Thanks.
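
    The string of ${APACHE_*} warnings suggests the installer is probing Apache without Debian/Ubuntu's environment file loaded, which would also explain the MPM being detected as '' (empty): apache2ctl sources /etc/apache2/envvars before reporting, but a direct probe of the httpd binary does not. A workaround that is sometimes suggested, assuming the stock Ubuntu layout, is to load those variables into the environment of whatever shell runs the installer:

    # become root so the sourced variables survive into the installer
    sudo su -
    . /etc/apache2/envvars           # defines APACHE_RUN_USER, APACHE_LOCK_DIR, ...
    passenger-install-apache2-module # adjust for your rvm wiring, e.g. via rvmsudo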

    Read the article

  • Filtering Security Logs by User and Logon Type

    - by Trido
    I have been asked to find out when a user has logged on to the system in the last week. Now, the audit logs in Windows should contain all the info I need. I think if I search for Event ID 4624 (Logon Success) with a specific AD user and Logon Type 2 (Interactive Logon), it should give me the information I need, but for the life of me I cannot figure out how to actually filter the Event Log to get it. Is it possible inside of the Event Viewer, or do you need to use an external tool to parse it to this level?

    I found http://nerdsknowbest.blogspot.com.au/2013/03/filter-security-event-logs-by-user-in.html which seemed to be part of what I needed. I modified it slightly to only give me the last 7 days' worth. Below is the XML I tried:

    <QueryList>
      <Query Id="0" Path="Security">
        <Select Path="Security">*[System[(EventID=4624) and TimeCreated[timediff(@SystemTime) &lt;= 604800000]]]</Select>
        <Select Path="Security">*[EventData[Data[@Name='Logon Type']='2']]</Select>
        <Select Path="Security">*[EventData[Data[@Name='subjectUsername']='Domain\Username']]</Select>
      </Query>
    </QueryList>

    It only gave me the last 7 days, but the rest of it did not work. Can anyone assist me with this?

    EDIT

    Thanks to the suggestions of Lucky Luke I have been making progress. The below is my current query, although, as I will explain, it isn't returning any results.

    <QueryList>
      <Query Id="0" Path="Security">
        <Select Path="Security">
          *[System[(EventID='4624')] and
            System[TimeCreated[timediff(@SystemTime) &lt;= 604800000]] and
            EventData[Data[@Name='TargetUserName']='john.doe'] and
            EventData[Data[@Name='LogonType']='2']
          ]
        </Select>
      </Query>
    </QueryList>

    As I mentioned, it wasn't returning any results, so I have been messing with it a bit. I can get it to produce the results correctly until I add in the LogonType line; after that, it returns no results. Any idea why this might be?

    EDIT 2

    I updated the LogonType line to the following:

    EventData[Data[@Name='LogonType'] and (Data='2' or Data='7')]

    This should capture workstation logons as well as workstation unlocks, but I still get nothing. I then modify it to search for other logon types, like 3 or 8, which it finds plenty of. This leads me to believe that the query works correctly, but for some reason there are no entries in the Event Logs with Logon Type equalling 2, and this makes no sense to me. Is it possible to turn this off?
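
    For reference, multiple <Select> elements inside one <Query> are OR'd together, which is why the original three-Select version matched far more events than intended; all the conditions have to live in a single Select, as in the working query above. A consolidated sketch follows. LogonType 11 (CachedInteractive) is added here as an assumption worth testing, since domain-joined machines often record cached logons instead of plain type 2:

    <QueryList>
      <Query Id="0" Path="Security">
        <Select Path="Security">
          *[System[(EventID=4624) and TimeCreated[timediff(@SystemTime) &lt;= 604800000]]
            and EventData[Data[@Name='TargetUserName']='john.doe']
            and EventData[Data[@Name='LogonType'] and (Data='2' or Data='7' or Data='11')]]
        </Select>
      </Query>
    </QueryList>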

    Read the article

< Previous Page | 336 337 338 339 340 341 342 343 344 345 346 347  | Next Page >