Search Results

Search found 10941 results on 438 pages for 'location'.

Page 102/438 | < Previous Page | 98 99 100 101 102 103 104 105 106 107 108 109  | Next Page >

  • NGINX: Setting Different FastCGI Read Timeouts for Folders

    - by Hart Jones
    I have a PHP file in my /batch/ folder I run with an hourly CRON job. Unfortunately this process can often take a few minutes to complete so I have had to increase my fastcgi_read_timeout to 300 seconds for the entire server. I was wondering if it would be possible to change the fastcgi_read_timeout directive for only files in my /batch/ folder and not the entire server. For example something like this... location ~ \.php$ { fastcgi_pass localhost:9000; fastcgi_read_timeout 5; location /batch/ { fastcgi_read_timeout 300; } include /usr/local/nginx/conf/fastcgi_params; } So basically all PHP files on the server would have a 5 second timeout except PHP files in my /batch/ folder. Is this possible?
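
    One way this is commonly handled in nginx is a separate location for the batch scripts, placed before the general PHP block (regex locations are tried in order; the first match wins). A minimal sketch reusing the backend address from the question, untested here:

        # Hypothetical sketch: longer read timeout only for PHP under /batch/
        location ~ ^/batch/.*\.php$ {
            fastcgi_pass localhost:9000;
            fastcgi_read_timeout 300;
            include /usr/local/nginx/conf/fastcgi_params;
        }

        # Every other PHP file keeps the short timeout
        location ~ \.php$ {
            fastcgi_pass localhost:9000;
            fastcgi_read_timeout 5;
            include /usr/local/nginx/conf/fastcgi_params;
        }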

    Read the article

  • Nginx Installation on Ubuntu giving 500 error

    - by user750301
    I just installed nginx on Ubuntu 12.04 LTS. When I access localhost it gives me: 500 Internal Server Error nginx/1.2.3. The error log has the following: rewrite or internal redirection cycle while internally redirecting to "/index.html", client: 127.0.0.1, server: localhost, request: "GET / HTTP/1.1", host: "localhost" This is the default nginx configuration: nginx.conf has: include /etc/nginx/sites-enabled/*; /etc/nginx/sites-enabled/default has the following: root /usr/share/nginx/www; index index.html index.htm; # Make site accessible from http://localhost/ server_name localhost; location / { # First attempt to serve request as file, then # as directory, then fall back to displaying a 404. try_files $uri $uri/ /index.html; # Uncomment to enable naxsi on this location # include /etc/nginx/naxsi.rules }
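
    The cycle usually means the try_files fallback /index.html does not exist under the root, so nginx keeps redirecting to it internally. A minimal sketch of one fix, assuming no index.html is wanted there (the alternative is simply to create /usr/share/nginx/www/index.html):

        # Hypothetical sketch: return 404 instead of looping when nothing matches
        location / {
            try_files $uri $uri/ =404;
        }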

    Read the article

  • nginx folder redirect

    - by orbalon
    I'm trying to redirect from an exact folder in nginx.conf. Given the URL domain.com/path1/path2/path3, redirect to sub.domain.com/path1/path2/path3. Here's what I have so far: location ~* ^/path1[\/?]$ { rewrite ^/(.*) http:sub.domain.com/$1 break; } I had it working with location /path1 { rewrite ^/(.*) http:sub.domain.com/$1 break; } The problem with that is it also redirects a page like domain.com/path1moretext/someotherpath to sub.domain.com/path1moretext/someotherpath, which is not what I want. (I had to take out the "//" in the href code above because this is my first post, sorry.)
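
    A minimal sketch of one way to match /path1 only as a whole path segment, so /path1moretext no longer matches; it assumes the full original URI should carry over:

        # Hypothetical sketch: redirect /path1 and anything below it, but not /path1moretext
        location ~ ^/path1(/|$) {
            return 301 http://sub.domain.com$request_uri;
        }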

    Read the article

  • How to make Windows 7 use a different name for the Program Files folder?

    - by Renato Silva
    How can I properly rename the Program Files folder in Windows 7? That is, not simply rename the directory and create symlinks, but make Windows itself see the location of installed programs as something else. I have already renamed the directory to Programs (using desktop.ini for a localized name in explorer), and made Program Files into a symlink to that, but I wonder if it's ever possible to erase the symlink by configuring Windows to, under all circumstances, assume Programs as the actual name for the programs directory. I heard that it should be possible to choose the name from Windows installation, but I'm not sure. Besides, I don't want to reinstall Windows from scratch. Before you mention %ProgramFiles%, no, it doesn't work. I'm not sure if Windows has that location hard-coded, or if replacing all occurrences in the registry would be enough.

    Read the article

  • Copying files locally on a network folder

    - by Altay_H
    I noticed that copying a large file from one location on a network drive to another location on the same network drive takes much longer than copying it locally. Instead of copying the file locally, the network computer sends the file to my remote computer, which sends it back to the same network computer. This means the files are being transferred over the network completely unnecessarily. Is there a way to fix this issue? It's becoming a real hassle to manage the video files on my network drive. Note: This is the case with both Windows and Linux (using Samba) network folders.

    Read the article

  • Redmine on Redhat/CentOS 5 Without using virtual hosts

    - by flyclassic
    I have followed all the steps to install Redmine on CentOS 5, except for the Apache part: http://www.redmine.org/projects/redmine/wiki/HowTo_install_Redmine_on_CentOS_5 I do not want to configure a virtualhost as we are not using virtual hosts. Can I configure Redmine to run at http://hostname/redmine? Apparently it doesn't work in my case. Redmine was extracted into the web server document root /var/www/html/, into a folder called /var/www/html/redmine. What I did was add a redmine.conf to /etc/httpd/conf.d/ with the following configuration and restart the server: <Location "/redmine"> Options Indexes ExecCGI FollowSymLinks -MultiViews Order allow,deny Allow from all AllowOverride all PassengerEnabled On RailsBaseURI /var/www/html/redmine RailsEnv production </Location> Now I get this error: Further information about the error may have been written to the application's log file. Please check it in order to analyse the problem. Error message: No such file or directory - config/environment.rb Exception class: Errno::ENOENT Application root: /var/www/html Where have I gone wrong?
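
    For what it's worth, Passenger's sub-URI deployment expects RailsBaseURI to be the URI (/redmine) rather than a filesystem path, with the application's public/ directory symlinked into the document root under that name. A rough sketch under those assumptions (app moved to /var/www/redmine, its public/ dir symlinked to /var/www/html/redmine), untested here:

        # Hypothetical sketch, assuming: ln -s /var/www/redmine/public /var/www/html/redmine
        <Location "/redmine">
            PassengerEnabled On
            RailsBaseURI /redmine
            RailsEnv production
            Options Indexes ExecCGI FollowSymLinks -MultiViews
            Order allow,deny
            Allow from all
        </Location>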

    Read the article

  • nginx proxypath https redirect fails without trailing slash

    - by Thermionix
    I'm trying to set up Nginx to forward requests to several backend services using proxy_pass. The links on the pages that lack trailing slashes do have https:// in front, but get redirected to an http request with a trailing slash - which ends in connection refused - I only want these services to be available through https. So if a link is to https://example.com/internal/errorlogs in a browser, then when loaded https://example.com/internal/errorlogs gives Error Code 10061: Connection refused (it redirects to http://example.com/internal/errorlogs/). If I manually append the trailing slash, https://example.com/internal/errorlogs/ loads. I've tried various trailing forward slashes appended to the proxy path and location in proxy.conf to no effect, and have also added server_name_in_redirect off; This happens on more than one app under nginx, and works under an Apache reverse proxy. Config files: proxy.conf location /internal { proxy_pass http://localhost:8081/internal; include proxy.inc; } .... more entries .... sites-enabled/main server { listen 443; server_name example.com; server_name_in_redirect off; include proxy.conf; ssl on; } proxy.inc proxy_connect_timeout 59s; proxy_send_timeout 600; proxy_read_timeout 600; proxy_buffer_size 64k; proxy_buffers 16 32k; proxy_pass_header Set-Cookie; proxy_redirect off; proxy_hide_header Vary; proxy_busy_buffers_size 64k; proxy_temp_file_write_size 64k; proxy_set_header Accept-Encoding ''; proxy_ignore_headers Cache-Control Expires; proxy_set_header Referer $http_referer; proxy_set_header Host $host; proxy_set_header Cookie $http_cookie; proxy_set_header X-Real-IP $remote_addr; proxy_set_header X-Forwarded-Host $host; proxy_set_header X-Forwarded-Server $host; proxy_set_header X-Forwarded-For $proxy_add_x_forwarded_for; proxy_set_header X-Forwarded-Ssl on; proxy_set_header X-Forwarded-Proto https; curl output: -$ curl -I -k https://example.com/internal/errorlogs/ HTTP/1.1 200 OK Server: nginx/1.0.5 Date: Thu, 24 Nov 2011 23:32:07 GMT Content-Type: text/html;charset=utf-8 Connection: keep-alive Content-Length: 14327 -$ curl -I -k https://example.com/internal/errorlogs HTTP/1.1 301 Moved Permanently Server: nginx/1.0.5 Date: Thu, 24 Nov 2011 23:32:11 GMT Content-Type: text/html;charset=utf-8 Connection: keep-alive Content-Length: 127 Location: http://example.com/internal/errorlogs/
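
    One commonly suggested workaround for this pattern is to let nginx rewrite the scheme in the backend's Location header, since the backend only ever sees plain http. A hedged sketch for the /internal block, using the hostname from the question and untested here:

        # Hypothetical sketch: rewrite backend redirects back to https
        # (proxy_redirect off; in proxy.inc would need to be dropped for this to take effect)
        location /internal {
            proxy_pass http://localhost:8081/internal;
            include proxy.inc;
            proxy_redirect http://example.com/ https://example.com/;
        }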

    Read the article

  • Nginx Retry of Requests ( Nginx - Haproxy Combination )

    - by vaibhav
    I wanted to ask about Nginx retry of requests. I have Nginx running at the backend which then sends the requests to HAProxy, which then passes them on to the web server where the request is processed. I am reloading my HAProxy config dynamically to provide elasticity. The problem is that requests are dropped when I reload HAProxy. So I wanted a solution where I can just retry them from Nginx. I looked through proxy_connect_timeout and proxy_next_upstream in the http module, and max_fails and fail_timeout in the server module. I initially only had 1 server in the upstream connections, so I just set that up twice now and fewer requests are getting dropped (only when I have, say, the same server twice in the upstream; if I have the same server 3-4 times, drops increase). So, firstly, I wanted to know: when a request is not able to establish a connection from Nginx to HAProxy while reloading, it seems that connection is seen as an error and the request is straight away dropped. So how can I either specify the time after the failure at which I want to retry the request from Nginx to the upstream, or the time before which Nginx treats it as a failed request? (I have tried increasing proxy_connect_timeout - didn't help - mail_retires, fail_timeout, and also putting the same upstream server twice (that gave the best results so far).) Nginx conf file: upstream gae_sleep { server 128.111.55.219:10000; } server { listen 8080; server_name 128.111.55.219; root /var/apps/sleep/app; # Uncomment these lines to enable logging, and comment out the following two #access_log /var/log/nginx/sleep.access.log upstream; error_log /var/log/nginx/sleep.error.log; access_log off; #error_log /dev/null crit; rewrite_log off; error_page 404 = /404.html; set $cache_dir /var/apps/sleep/cache; location / { proxy_set_header X-Real-IP $remote_addr; proxy_set_header X-Forwarded-For $proxy_add_x_forwarded_for; proxy_set_header Host $http_host; proxy_redirect off; proxy_pass http://gae_sleep; client_max_body_size 2G; proxy_connect_timeout 30; client_body_timeout 30; proxy_read_timeout 30; } location /404.html { root /var/apps/sleep; } location /reserved-channel-appscale-path { proxy_buffering off; tcp_nodelay on; keepalive_timeout 55; proxy_pass http://128.111.55.219:5280/http-bind; } }
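
    As a sketch of the knobs mentioned above (the values are guesses, untested here): max_fails=0 keeps nginx from marking the single HAProxy entry as down during a brief reload, while listing it twice plus proxy_next_upstream gives each request a second connection attempt:

        # Hypothetical sketch: second attempt on connect errors, never mark the backend failed
        upstream gae_sleep {
            server 128.111.55.219:10000 max_fails=0;
            server 128.111.55.219:10000 max_fails=0;
        }
        ...
        location / {
            proxy_next_upstream error timeout;
            proxy_connect_timeout 5;
            proxy_pass http://gae_sleep;
        }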

    Read the article

  • Linux: close a program with command line (not kill it)

    - by CodeNoob
    Some applications only allow one running instance (like Eclipse, if you want to use the same workspace). So if I log in from another location, I have to kill/close the application that was left open when I previously logged in from somewhere else. I always use the kill command to kill the running process. But if you kill a process, it might not save some state that is useful later. However, if you "close" it properly, by clicking the close button for example, it will save that state properly. Is there a way to properly "close" an application from the command line? I know this can vary between applications, so let's be a bit more generic: how do I send a signal to another running application that works just as if we had clicked the "close" button in the top bar? Thanks!
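
    For X11 applications, one approach is to ask the window manager to deliver the normal close event instead of signalling the process. A short example, assuming wmctrl is installed and the window title contains "Eclipse":

        # Ask the window manager to close the window gracefully (WM_DELETE_WINDOW)
        wmctrl -c "Eclipse"
        # Fallback: SIGTERM (not SIGKILL) still lets most apps run their shutdown handlers
        kill -TERM <pid>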

    Read the article

  • SVN multiple repositories in subfolders

    - by fampinheiro
    I'm using Apache + SVN. Apache config file: LoadModule dav_module modules/mod_dav.so LoadModule dav_svn_module modules/mod_dav_svn.so LoadModule authz_svn_module modules/mod_authz_svn.so <Location /code> DAV svn SVNParentPath "c:/repositories" </Location> Imagine I have this file structure (in every t? I have one SVN repository): c repositories uc1 0809v t1 t2 t3 0809i t1 t2 uc2 t1 t2 t1 I can access the repositories using: svn://domain.com/code/uc1/0809v/t1 svn://domain.com/code/uc1/0809v/t2 svn://domain.com/code/uc1/0809v/t3 I want to access them using the URLs: http://domain.com/code/uc1/0809v/t1 http://domain.com/code/uc1/0809v/t2 http://domain.com/code/uc1/0809v/t3 and see the content of the repository in the browser. If I create the repository at the root of the SVN folder I can see the repository (http://domain.com/code/t1); when I try the other URLs I get the error Could not open the requested SVN filesystem. My question is: is it possible to do a search in all subfolders looking for SVN repositories?
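
    SVNParentPath only serves repositories that sit directly beneath the named directory, so one commonly suggested workaround is a <Location> block per directory level that actually contains repositories. A hedged sketch for two of the levels above, untested here:

        # Hypothetical sketch: one block per parent directory of real repositories
        <Location /code/uc1/0809v>
            DAV svn
            SVNParentPath "c:/repositories/uc1/0809v"
            SVNListParentPath on
        </Location>
        <Location /code/uc1/0809i>
            DAV svn
            SVNParentPath "c:/repositories/uc1/0809i"
            SVNListParentPath on
        </Location>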

    Read the article

  • Why does using nginx as a reverse proxy break local links?

    - by tsvallender
    I've just set up nginx as a reverse proxy, so some sites served from the box are served directly by it and others are forwarded to a Node.js server. The site being served by Node.js, however, is displayed with no CSS or images, so I assume the links are somehow being broken, but don't know why. The following is the only file in /etc/nginx/sites-enabled: server { listen 80; ## listen for ipv4 listen [::]:80 default ipv6only=on; ## listen for ipv6 server_name dev.my.site; access_log /var/log/nginx/localhost.access.log; location / { root /var/www; index index.html index.htm; } location /myNodeSite { proxy_pass http://127.0.0.1:8080/; proxy_redirect off; proxy_set_header Host $host; } } I had thought perhaps it was trying to find them in /var/www due to the first entry, but removing that doesn't seem to help.
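
    A likely culprit is that the Node app links its assets with absolute paths such as /css/style.css, which match location / and get looked up under /var/www instead of being proxied. A hedged sketch of one workaround (the /css/ and /images/ prefixes are assumptions about the app's asset paths):

        # Hypothetical sketch: proxy the asset prefixes the app refers to absolutely
        location /css/    { proxy_pass http://127.0.0.1:8080/css/; }
        location /images/ { proxy_pass http://127.0.0.1:8080/images/; }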

    Read the article

  • nginx, php-fpm, and multiple roots - how to properly try_files?

    - by Carson C.
    I have a server context which is rooted in a login application. The login application handles, well, logins, and then returns a redirect to "/app" on the same server if a login is successful. The application is rooted elsewhere, which is handled by the location block shown here: location ^~ /app { alias /usr/share/nginx/www/website.com/content/public; location ~ \.php$ { try_files $uri =404; fastcgi_pass unix:/tmp/php5-fpm.sock; include fastcgi_params; } } This works just fine; however, the $uri getting passed to PHP still contains /app, even though I am using alias rather than root. Because of this, the try_files directive fails to a 404 unless I link app -> ./ in /usr/share/nginx/www/website.com/content/public. It's obviously silly to have that link in there, and if that link ever gets lost, bam, dead website without an obvious cause. The next thing I tried... was to remove the try_files directive entirely. This allowed me to rm the app link in my /public folder, and PHP had no problem locating the file and executing it. I used that to dump my $_SERVER global from PHP, and found that "SCRIPT_FILENAME" => "/usr/share/nginx/www/website.com/content/public/index.php" when the browser URI is /app. This is exactly right. Based on my fastcgi_params below, this led me to believe that try_files $request_filename =404; should work, but no dice. nginx still doesn't find the file, and returns 404. So for right now, it will only work without any try_files directive. PHP finds the file, whereas try_files could not. I understand this may be a PHP security risk. Can anyone indicate how to move forward? The nginx logs don't contain anything relating to the failed try_files attempt, as far as I can see. fastcgi_params fastcgi_param QUERY_STRING $query_string; fastcgi_param REQUEST_METHOD $request_method; fastcgi_param CONTENT_TYPE $content_type; fastcgi_param CONTENT_LENGTH $content_length; fastcgi_param SCRIPT_FILENAME $request_filename; fastcgi_param SCRIPT_NAME $fastcgi_script_name; fastcgi_param REQUEST_URI $request_uri; fastcgi_param DOCUMENT_URI $document_uri; fastcgi_param DOCUMENT_ROOT $document_root; fastcgi_param SERVER_PROTOCOL $server_protocol; fastcgi_param GATEWAY_INTERFACE CGI/1.1; fastcgi_param SERVER_SOFTWARE nginx/$nginx_version; fastcgi_param REMOTE_ADDR $remote_addr; fastcgi_param REMOTE_PORT $remote_port; fastcgi_param SERVER_ADDR $server_addr; fastcgi_param SERVER_PORT $server_port; fastcgi_param SERVER_NAME $server_name; fastcgi_param HTTPS $server_https;
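
    A commonly used substitute for try_files inside an alias location is an explicit existence check on $request_filename, which sidesteps the /app prefix problem. A hedged sketch of the inner block, untested here:

        # Hypothetical sketch: check the aliased path instead of $uri
        location ~ \.php$ {
            if (!-f $request_filename) { return 404; }
            fastcgi_pass unix:/tmp/php5-fpm.sock;
            include fastcgi_params;
        }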

    Read the article

  • Excel help vlookup

    - by user123953
    I need a little help with some Excel.

        Employee    Locations   Hours  OT
        Mr.One      Station 1   40     6
        Mrs.Seven   Station 2   30     6
        Mr.Two      Station 3   30     4
        Mr.Three    Station 4   40     4
        Mrs.Eight   Station 1   32     6
        Mr.Four     Station 2   32     7
        Mrs.Nine    Station 3   40     6
        Mr.Five     Station 4   40     7
        Mr.Six      Station 1   25     2
        Mrs.Ten     Station 2   40     3
        Mr.Eleven   Station 3   60     1

    I have a spreadsheet with two worksheets: one is the data sheet (shown above), and the other sheet is a summary that has the Locations column as a data validation list. I want to use the data validation list to pull all the people and info for a specific location. I tried using a VLOOKUP, but I only know how to use it to pull one person at a time, not a group specific to a location.
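
    VLOOKUP only ever returns the first match, so pulling every employee for a location usually means an INDEX/SMALL/IF array formula. A sketch under assumed names and ranges (data sheet called Data with Employee/Locations/Hours/OT in A2:D12, and the chosen location in B1 of the summary sheet); enter it with Ctrl+Shift+Enter, copy it down, then repeat with Data!$C:$C and Data!$D:$D for Hours and OT:

        =IFERROR(INDEX(Data!$A:$A, SMALL(IF(Data!$B$2:$B$12=$B$1, ROW(Data!$B$2:$B$12)), ROWS($1:1))), "")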

    Read the article

  • FreeBSD 8.2 + Apache 2.2 + mod_auth_pam2: unable to authenticate

    - by zneak
    I've installed Apache 2.2 and mod_auth_pam2 from ports, but I can't get local UNIX authentication to work. When I access the protected part of my local website, I do get the authentication request, and with pam_permit.so, it works. However, when I change pam_permit.so to the real thing, pam_unix.so, I get this message in httpd-error.log: [error] PAM: user 'foo' - not authenticated: authentication error This is the relevant part of my Apache config, though I don't think it's the problem as it works with pam_permit.so: <Location /foo> AuthBasicAuthoritative Off AuthPAM_Enabled on AuthPAM_FallThrough off AuthType Basic AuthName "Secret place" Require valid-user </Location> This is my /etc/pam.d/httpd, though I don't think it's the problem either, since it works with pam_permit.so: auth required pam_unix.so account required pam_unix.so So what am I missing? What does it take to have pam_unix.so work for httpd under FreeBSD?

    Read the article

  • How to configure nginx so it works with Express?

    - by Michal Stefanow
    I'm trying to configure nginx so that it proxy_passes requests to my Node apps. A question on Stack Overflow got many upvotes: http://stackoverflow.com/questions/5009324/node-js-nginx-and-now and I'm using the config from there (but since this question is about server configuration, it belongs on Server Fault). Here is the nginx configuration: server { listen 80; listen [::]:80; root /var/www/services.stefanow.net/public_html; index index.html index.htm; server_name services.stefanow.net; location / { try_files $uri $uri/ =404; } location /test-express { proxy_pass http://127.0.0.1:3002; } location /test-http { proxy_pass http://127.0.0.1:3003; } } Using plain node: var http = require('http'); http.createServer(function (req, res) { res.writeHead(200, {'Content-Type': 'text/plain'}); res.end('Hello World\n'); }).listen(3003, '127.0.0.1'); console.log('Server running at http://127.0.0.1:3003/'); It works! Check: http://services.stefanow.net/test-http Using express: var express = require('express'); var app = express(); // app.get('/', function(req, res) { res.redirect('/index.html'); }); app.get('/index.html', function(req, res) { res.send("blah blah index.html"); }); app.listen(3002, "127.0.0.1"); console.log('Server running at http://127.0.0.1:3002/'); It doesn't work :( See: http://services.stefanow.net/test-express I know that something is going on. a) test-express appears NOT to be running (through nginx) b) test-express is running (and I can confirm it is running via the command line while SSHed into the server) root@stefanow:~# service nginx restart * Restarting nginx nginx [ OK ] root@stefanow:~# curl localhost:3002 Moved Temporarily. Redirecting to /index.html root@stefanow:~# curl localhost:3002/index.html blah blah index.html I tried setting headers as described here: http://www.nginxtips.com/how-to-setup-nginx-as-proxy-for-nodejs/ (still doesn't work) proxy_set_header X-Real-IP $remote_addr; proxy_set_header X-Forwarded-For $proxy_add_x_forwarded_for; proxy_set_header Host $http_host; proxy_set_header X-NginX-Proxy true; I also tried replacing '127.0.0.1' with 'localhost' and vice versa. Please advise. I'm pretty sure I'm missing some obvious detail and I would like to learn more. Thank you.
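
    One thing worth noting: with proxy_pass http://127.0.0.1:3002; (no URI part) the full /test-express/... path reaches Express, which only defines routes for / and /index.html, and the app's redirect to /index.html then escapes the prefix. A hedged sketch of the usual prefix-stripping form, untested here:

        # Hypothetical sketch: trailing slashes make nginx swap /test-express/ for /
        location /test-express/ {
            proxy_pass http://127.0.0.1:3002/;
        }

    The app's own redirect to /index.html will still point at the site root unless the app is made aware of the prefix (for example by mounting it under /test-express).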

    Read the article

  • How do I get rid of phantom bookmarks in Google Chrome on Mac OS X 10.6?

    - by Philip
    I'm running Chrome 5.0.375.38 on OS X 10.6 Snow Leopard and although I'm positive that when I installed it I told it NOT to import my Firefox bookmarks, it nevertheless still accessed my OLD Firefox bookmarks (including some that I deleted) when I used the location bar. HOWEVER, when I opened the bookmarks manager, it said that I have no bookmarks whatsoever. Seeking to solve this problem, I installed XMarks on both FF and Chrome, and forced Chrome to download the server bookmarks. Now Chrome lists all my current FF bookmarks, but STILL sees the old, phantom bookmarks from when I first installed Chrome in the location bar, even though when I search for these same bookmarks in the bookmarks manager they don't show up. Aargh! Any ideas? Even if there's some way to force-kill-wipeout-clean-erase ALL my Chrome bookmarks that's fine as long as it kills the phantom ones b/c I can still overwrite with XMarks. Thanks!

    Read the article

  • How to redirect http requests to https (nginx)

    - by spuder
    There appear to be many questions and guides out there that instruct how to set up nginx to redirect http requests to https. Many are outdated, or just flat out wrong. server { listen *:80; server_name <%= @fqdn %>; #root /nowhere; #rewrite ^ https://$server_name$request_uri? permanent; #rewrite ^ https://$server_name$request_uri permanent; #return 301 https://$server_name$request_uri; #return 301 http://$server_name$request_uri; #return 301 http://192.168.33.10$request_uri; return 301 http://$host$request_uri; } server { listen *:443 ssl default_server; server_name <%= @fqdn %>; server_tokens off; root <%= @git_home %>/gitlab/public; ssl on; ssl_certificate <%= @gitlab_ssl_cert %>; ssl_certificate_key <%= @gitlab_ssl_key %>; ssl_protocols SSLv3 TLSv1 TLSv1.1 TLSv1.2; ssl_ciphers AES:HIGH:!ADH:!MDF; ssl_prefer_server_ciphers on; location / { # serve static files from defined root folder;. # @gitlab is a named location for the upstream fallback, see below try_files $uri $uri/index.html $uri.html @gitlab; } # if a file, which is not found in the root folder is requested, # then the proxy pass the request to the upsteam (gitlab puma) location @gitlab { proxy_read_timeout 300; # https://github.com/gitlabhq/gitlabhq/issues/694 proxy_connect_timeout 300; # https://github.com/gitlabhq/gitlabhq/issues/694 proxy_redirect off; etc.... I've restarted after every configuration change, and yet I still only get the 'Welcome to nginx' page when visiting http://192.168.33.10, whereas https://192.168.33.10 works perfectly. Why will nginx still not redirect http requests to https? tailf /var/log/nginx/access.log 192.168.33.1 - - [22/Oct/2013:03:41:39 +0000] "GET / HTTP/1.1" 304 0 "-" "Mozilla/5.0 (Macintosh; Intel Mac OS X 10.8; rv:24.0) Gecko/20100101 Firefox/24.0" 192.168.33.1 - - [22/Oct/2013:03:44:43 +0000] "GET / HTTP/1.1" 200 133 "-" "Mozilla/5.0 (Macintosh; Intel Mac OS X 10.8; rv:24.0) Gecko/20100101 Firefox/24.0" tailf /var/log/nginx/gitlab_error.log 2013/10/22 02:29:14 [crit] 27226#0: *1 connect() to unix:/home/git/gitlab/tmp/sockets/gitlab.socket failed (2: No such file or directory) while connecting to upstream, client: 192.168.33.1, server: gitlab.localdomain, request: "GET / HTTP/1.1", upstream: "http://unix:/home/git/gitlab/tmp/sockets/gitlab.socket:/", host: "192.168.33.10" Resources http://wiki.nginx.org/Pitfalls How to make nginx redirect How to force or redirect to SSL in nginx? nginx ssl redirect Nginx & Https Redirection https://www.tinywp.in/301-redirect-wordpress/ How to force or redirect to SSL in nginx?
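
    Seeing the stock 'Welcome to nginx' page on port 80 usually means a different server block (such as the distribution's default site) is being selected for that request, so the return 301 never runs. A minimal sketch of a catch-all redirect, assuming the default site is disabled or this block is explicitly made the default; untested here:

        # Hypothetical sketch: make this the default server for port 80
        server {
            listen 80 default_server;
            server_name _;
            return 301 https://$host$request_uri;
        }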

    Read the article

  • Apache - How to disable gzip content encoding (eg DEFLATE) for one set of URLs?

    - by Rory
    I have an Ubuntu Apache web server and I have enabled mod_deflate to gzip all the content. However, there's one folder I'd like to disable mod_deflate for. I was going to do something like this: <Location /myfolder> RemoveOutputFilter DEFLATE </Location> But that doesn't work. Rationale: I am trying to debug an XML-RPC server and I am using Wireshark to see what gets passed in the HTTP requests; since the replies are gzipped, I can't see what's going on.
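
    mod_deflate skips compression whenever the no-gzip environment variable is set, so a commonly suggested alternative to removing the filter is to set that variable for the folder; a short sketch, untested here:

        # Hypothetical sketch: mod_deflate honours the no-gzip environment variable
        <Location /myfolder>
            SetEnv no-gzip 1
        </Location>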

    Read the article

  • To delete a tape from the ACSLS library

    - by Senthil Kumar
    Hi, can anyone help me out with the issue below: There was a stuck tape in the drive and the stuck tape was removed, but I need to logically delete the tape entry from NetBackup (NB), so that the same media can be inserted back for operations. NetBackup thinks that the tape is still in that location, hence the entry should be removed so that NB no longer recognises the tape as being in that location and the same tape can be taken in through inventory. The NB version used is NB 5.1. Is there any command to delete this entry? This is a cluster-based environment (Active/Passive), and we use an ACSLS library (physical) as well as a Switch-SN6000 (logical). Kindly help me out; when we tried to delete the media from the GUI it said: Could not delete - Cannot delete assigned volume (92).

    Read the article

  • Registry in Windows 7 - appears in PowerShell, but not regedit

    - by Dan
    Hi. My software is writing to the registry (HKCU:\software\classes\clsid\). The key that I'm writing isn't appearing when I go to that location in regedit. However, if I navigate to that location in powershell, then I see ONLY the entry I added, and not the other class ids that I see in regedit. It's almost as if there's two versions of the registry. I'm using Windows7 (moved recently from XP, so there's probably some weird virtualization stuff going on which I've not learnt yet! ;-)). Thanks for any help with this, Dan.
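
    One likely explanation is WOW64 registry redirection: a 32-bit process writing to HKCU\Software\Classes\CLSID on 64-bit Windows lands under Wow6432Node, while 64-bit regedit and 64-bit PowerShell show the native view. A quick hedged check (assuming either the writing program or the PowerShell session is 32-bit):

        # Compare the native and the WOW64-redirected views of the same key
        Get-ChildItem 'HKCU:\Software\Classes\CLSID' | Select-Object -First 5
        Get-ChildItem 'HKCU:\Software\Classes\Wow6432Node\CLSID' | Select-Object -First 5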

    Read the article

  • Problem with running php script using mysql on tomcat

    - by Peter
    I am using tomcat 6 with JavaBridge. I have stored my php script in the following location. C:\Program Files\apache-tomcat-6.0.26\webapps\JavaBridge\project\test.php In test.php I am using curl and mysql. The php.ini in JavaBridge is stored in the following location C:\Program Files\apache-tomcat-6.0.26\webapps\JavaBridge\WEB-INF\cgi\php.ini and its contents are - extension_dir="C:\Program Files\apache-tomcat-6.0.26\webapps\JavaBridge\WEB-INF\cgi\x86-windows\ext" include_path="C:\Program Files\apache-tomcat-6.0.26\webapps\JavaBridge\WEB-INF\pear;." there is also a config file called mysql.ini whose contents are - extension = php_mysql.dll I had also installed wamp earlier so I copied all the dll's from C:\wamp\bin\php\php5.3.0\ext to C:\Program Files\apache-tomcat-6.0.26\webapps\JavaBridge\WEB-INF\cgi\x86-windows\ext When I start tomcat and run my script I get the following error - Fatal error: Call to undefined function mysqli_connect() in C:\Program Files\apache-tomcat-6.0.26\webapps\JavaBridge\project\test.php on line 534 Please help.
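
    The error is about mysqli_connect, but only php_mysql.dll is enabled in mysql.ini; assuming php_mysqli.dll is present in the ext directory (it ships with the same PHP build), the missing piece is probably just:

        ; Hypothetical addition to mysql.ini (or php.ini): enable the mysqli extension
        extension=php_mysqli.dll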

    Read the article

  • Digest authentication not working: endless cycles of asking for user/pass

    - by bcmcfc
    I'm trying to set up my SVN repository for remote access. In doing so I have some settings under Apache's dav_svn.conf file. When navigating to hostname/svn, or using Tortoise to do the same, it prompts for the user name and password as expected. However, when I enter the correct user name and password that were set in the password file referenced by AuthUserFile, it just asks for the credentials again. I think I'm probably missing something simple. The server is running Ubuntu Server 9.10. Accessing SVN remotely does currently work if the authentication lines of dav_svn.conf are commented out. These are the contents of the dav_svn.conf file: <Location /svn> DAV svn SVNPath /home/svn/repo AuthType Digest AuthName "Subversion Repository" AuthDigestDomain /svn/ AuthUserFile /etc/svn_authfile Require valid-user </Location>
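
    Two usual culprits for an endless digest prompt are mod_auth_digest not being enabled and a password file whose realm does not match AuthName (digest entries are created with htdigest, not htpasswd, and the realm is baked into each entry). A hedged sketch for Ubuntu with a placeholder user; note that -c overwrites the file:

        # Hypothetical example: enable the module and recreate the user with the matching realm
        sudo a2enmod auth_digest
        sudo htdigest -c /etc/svn_authfile "Subversion Repository" someuser
        sudo /etc/init.d/apache2 restart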

    Read the article

  • Could not establish a secure connection to server with safari

    - by pharno
    Safari tells me that it couldn't open the page, because it couldn't establish a secure connection to the server. However, other browsers (Opera, Firefox) can open the page. Also, there's nothing in the Apache error log. The certificate is self-signed and uses standard values (as seen here: http://www.knaupes.net/tutorial-ssl-zertifikat-selbst-erstellen-und-signieren/ ). SSL config: SSLEngine on #SSLInsecureRenegotiation on SSLCertificateFile /home/gemeinde/certs/selfsigned/gemeinde.crt SSLCertificateKeyFile /home/gemeinde/certs/selfsigned/gemeinde.key #SSLCACertificateFile /home/gemeinde/certs/Platinum_G2.pem #SSLOptions +StdEnvVars <Location "/"> SSLOptions +StdEnvVars +OptRenegotiate SSLVerifyClient optional SSLVerifyDepth 10 </Location>
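
    A quick way to see what the handshake actually offers (protocol, ciphers, and the certificate chain Safari is objecting to) is openssl's test client, run against the server; the hostname below is a placeholder:

        # Diagnostic: inspect the TLS handshake and the presented certificate
        openssl s_client -connect example.com:443 -showcerts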

    Read the article

  • route port 3000 to apache2 alias

    - by user223470
    I have a meteor application running on port 3000. I can successfully connect to the program with www.myurl.com:3000, but would rather connect to it via www.myurl.com/myappname. I started with the instructions on this web site: http://www.andrehonsberg.com/article/deploy-meteorjs-vhosts-ubuntu1204-mongodb-apache-proxy and I have the following Apache configuration file: <VirtualHost *:80> ServerName myurl.com ProxyRequests off <Proxy *> Order deny,allow Allow from all </Proxy> <Location /> ProxyPass http://localhost:3000/ ProxyPassReverse http://localhost:3000/ </Location> </VirtualHost> I do not know how to continue from here to get the program on www.mysite.com/myapp. In other situations, I would use an Alias within the Apache configuration file, but that doesn't seem like the right direction to go in this case. How do I configure Apache to send port 3000 to www.myurl.com/myapp?
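
    A hedged sketch of one way to scope the proxy to /myapp instead of the whole site; the Meteor app would typically also need its base URL set (e.g. ROOT_URL=http://www.myurl.com/myapp) so the links it generates include the prefix. Untested here:

        # Hypothetical sketch: proxy only /myapp to the Meteor app on port 3000
        <Location /myapp/>
            ProxyPass http://localhost:3000/
            ProxyPassReverse http://localhost:3000/
        </Location>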

    Read the article

  • Flash Player Automatic Updater on Windows Startup

    - by Mikee
    Hi all, Adobe Flash Player is set to automatically check for updates on Windows startup. I've always wondered where exactly it is set to do this. Checking the running services, as well as msconfig does not yield its location. The message in question looks like this: http://www.technipages.com/disable-an-update-to-your-adobe-flash-player-is-available-message-forever.html I know how to disable it via Adobe's web site (instructions are included in link above), but I'm interested in knowing where exactly in Windows is this set to perform this action? I have done some research on this, and people keep saying to check the following registry locations: HKEY_LOCAL_MACHINE\Software\Microsoft\Windows\CurrentVersion\RunOnce or the HKEY_LOCAL_MACHINE\Software\Microsoft\Windows\CurrentVersion\Run However, I have checked those locations, and I still cannot locate where this updater is stored. I'm pretty sure that malware also uses this technique to automatically load upon startup, and since it's not in the typical location(s) that a user would look, it's well hidden. Thank you for the assistance!

    Read the article
