Search Results

Search found 12857 results on 515 pages for 'spatial index'.


  • Why is this name-based virtual host setting not working?

    - by Kave
    I have two domains (siteA.com and siteB.com) that point to the same webserver, and I would like to serve different pages for each. The steps I have taken so far are:

    1) Copy the default site (siteA) to siteB:

        sudo cp /etc/apache2/sites-available/default /etc/apache2/sites-available/siteB

    2) Edit the new vhost (sudo vim /etc/apache2/sites-available/siteB):

        <VirtualHost *:80>
            ServerAdmin [email protected]
            DocumentRoot /var/www/siteB
            <Directory />
                Options FollowSymLinks
                AllowOverride None
            </Directory>
            <Directory /var/www/siteB>
                Options Indexes FollowSymLinks MultiViews
                AllowOverride FileInfo Indexes
                Order allow,deny
                allow from all
            </Directory>
        </VirtualHost>

    Then I created /var/www/siteB and put a sample index.html in it. However, when I load my domain siteB.com I am still directed to /var/www/siteA. Why is that? Do I have to rename /etc/apache2/sites-available/default to /etc/apache2/sites-available/siteA as well?

    UPDATE: Thanks to the answer below, it turns out that besides enabling the site I had also forgotten the ServerName/ServerAlias entries:

        <VirtualHost *:80>
            ServerAdmin [email protected]
            ServerName siteB.com
            ServerAlias www.siteB.com
        </VirtualHost>

    To cover all subdomains as well, use instead:

        <VirtualHost *:80>
            ServerAdmin [email protected]
            ServerName siteB.com
            ServerAlias *.siteB.com
        </VirtualHost>

    The same goes for siteA.
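
    For context on why this fixes it: with name-based virtual hosting, Apache matches the request's Host header against each enabled vhost's ServerName/ServerAlias, and a vhost without them never matches by name, so the first enabled site (siteA) wins. A minimal sketch of the siteB vhost with everything in one place (assuming the document root /var/www/siteB and a Debian/Ubuntu layout):

        # /etc/apache2/sites-available/siteB
        <VirtualHost *:80>
            ServerName siteB.com
            ServerAlias www.siteB.com
            DocumentRoot /var/www/siteB
        </VirtualHost>

    Then enable it and reload:

        sudo a2ensite siteB
        sudo service apache2 reload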


  • Configure Apache + Passenger to serve static files from different directory

    - by Rory Fitzpatrick
    I'm trying to set up Apache and Passenger to serve a Rails app. However, I also need it to serve static files from a directory other than /public, and to give precedence to these static files over anything in the Rails app. The Rails app is in /home/user/apps/testapp and the static files are in /home/user/public_html. For various reasons the static files cannot simply be moved to the Rails public folder. Also note that the root http://domain.com/ should be served by the index.html file in the public_html folder. Here is the config I'm using:

        <VirtualHost *:80>
            ServerName domain.com
            DocumentRoot /home/user/apps/testapp/public
            RewriteEngine On
            RewriteCond /home/user/public_html/%{REQUEST_FILENAME} -f
            RewriteCond /home/user/public_html/%{REQUEST_FILENAME} -d
            RewriteRule ^/(.*)$ /home/user/public_html/$1 [L]
        </VirtualHost>

    This serves the Rails application fine but gives a 404 for any static content from public_html. I have also tried a configuration that uses DocumentRoot /home/user/public_html, but this doesn't serve the Rails app at all, presumably because Passenger doesn't know to process the request. Interestingly, if I change the conditions to !-f and !-d and the rewrite rule to redirect to another domain, it works as expected (e.g. http://domain.com/doesnt_exist gets redirected to http://otherdomain.com/doesnt_exist). How can I configure Apache to serve static files like this, but allow all other requests to continue to Passenger?
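
    One detail worth checking in the config above (a hedged observation, not a tested fix): RewriteCond directives are ANDed by default, so requiring the same path to be both a regular file (-f) and a directory (-d) can never match. A sketch with the conditions made alternatives via [OR]; in virtual-host context, a substitution whose leading path exists on disk is served as a file-system path:

        RewriteEngine On
        # Serve from public_html when the request maps to a file or
        # directory there; [OR] makes the two checks alternatives.
        RewriteCond /home/user/public_html%{REQUEST_URI} -f [OR]
        RewriteCond /home/user/public_html%{REQUEST_URI} -d
        RewriteRule ^/(.*)$ /home/user/public_html/$1 [L]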


  • SharePoint crawl not indexing main site

    - by user22215
    I'm having some strange search issues with my main portal application. First, a little background on the problem web app: our SharePoint environment was originally set up by a consultant who did not follow best practices. She used one web app to house our company's intranet site, the SSP, and My Sites. Since then I have provisioned a new, correctly segmented SSP and moved all of our other sites over to it without any problems. However, I could not assign the main portal app to the new SSP, since the portal app housed the SSP site collection. So I deleted the SSP site collection, then deleted the SSP itself, and assigned the portal app to my new SSP.

    This is where the problem starts. When I attempt to crawl this application, the crawl starts, then stops 5 seconds later with a status of success, reporting that 1 item was successfully crawled. The funny thing is that the main portal app has nearly 30,000 items. I have tracked the problem down to the web app itself: if I create a test web app and restore the content into it, I have no problem crawling all 30,000 items. All of my other web apps that use the same SSP also complete their crawls without trouble. I don't see anything in the ULS logs or in Server 2003's event viewer. I'm using a separate dedicated index server that's configured to crawl itself via a hosts-file entry. I would like to fix this problem without recreating our main portal site, because we have several custom code modifications where DLLs were registered to the IIS bin folder, and I don't even want to get into the Silverlight mods that were done. Any help with this problem is much appreciated.

    Same problem as mine: http://www.experts-exchange.com/OS/Microsoft_Operating_Systems/Server/MS-SharePoint/Q_23885820.html


  • Document Map in MS Word 2007 going bonkers

    - by rzlines
    I'm working on a large project report in Microsoft Word 2007 and have been using the Document Map to generate the index. I have been carefully selecting the headers that need to be added to the Document Map, but when I saved the document and opened it up today to work on it, the Document Map had added whatever it pleased to itself. After extensive searching I found a post with a temporary fix that works, but when I save, close, and reopen the document I face the same dilemma. From that post:

        I have noticed that when Word stuffs up the document map after opening the file, I can undo this by using the UNDO button. Word calls it 'Autoformat'. I have also fixed a file whose document map was screwed permanently (i.e. saved with it) by selecting all (CTRL+A), selecting the PARAGRAPH drop-down menu in the HOME tab, and in the OUTLINE drop-down box selecting 'Body Text'. This removed all the problems and did not seem to affect my outline-level paragraph headings.

        I've solved this problem for me. When you open the file, a progress bar at the bottom first says "Opening (ESC to Cancel)" and then says "Word is formatting the document (ESC to Cancel)". If I cancel the second process, the TOC is fine. No cancelling, TOC screwed.

    These are only temporary fixes, though, and I have to be on my toes not to let Word autoformat at the start of the document. I also can't afford to turn off autoformat entirely, as I need it. Can anyone work out how to switch off this particular autoformatting?


  • configuring apache with mod_mono for .net app

    - by Mystere Man
    I'm having a huge problem getting mod_mono and Apache configured to work correctly. I've had this working at one time, but I can't seem to figure out where I'm going wrong. I'm using mono-server4, and I'm trying to use a separate port from the main website. So in /etc/apache2/sites-available (with a link from sites-enabled) I have a vhost configuration that looks like this:

        <VirtualHost *:9999>
            ServerName XXX
            ServerAdmin web-admin@XXX
            DocumentRoot /var/xxx
            MonoServerPath XXX "/usr/bin/mod-mono-server4"
            MonoDebug XXX true
            MonoSetEnv XXX MONO_IOMAP=all
            MonoApplications XXX "/:/var/xxx"
            <Location "/">
                Allow from all
                Order allow,deny
                MonoSetServerAlias XXX
                SetHandler mono
                SetOutputFilter DEFLATE
                SetEnvIfNoCase Request_URI "\.(?:gif|jpe?g|png)$" no-gzip dont-vary
            </Location>
            <IfModule mod_deflate.c>
                AddOutputFilterByType DEFLATE text/html text/plain text/xml text/javascript
            </IfModule>
        </VirtualHost>

    I used mono-server4-admin to create the application:

        mono-server4-admin --path=/var/xxx --app=/XXX --port=9999

    When I start Apache, it gives the error:

        Syntax error on line 13 of /etc/apache2/sites-enabled/xxx: Server alias 'XXX, not found.

    This corresponds with the MonoSetServerAlias statement. So I commented it out, and with that done Apache starts. However, when I try to access the site I get a 500 error, and the access log indicates that it's trying to access the app on port 80 rather than 9999. I'm not sure what the problem is here. Can anyone help me figure out where I went wrong? My mono-server4-hosts.conf contains this:

        # start /etc/mono-server4/conf.d/RMRSite/10_XXX
        Alias /XXX "/var/xxx"
        AddMonoApplications default "/XXX:/var/xxx"
        <Directory /var/xxx>
            SetHandler mono
            <IfModule mod_dir.c>
                DirectoryIndex index.aspx
            </IfModule>
        </Directory>
        # end /etc/mono-server4/conf.d/XXX/10_XXX

    Also, my /etc/mono-server4/conf.d/XXX/10_XXX (the configuration file for the XXX virtual host) contains this:

        path = /var/xxx
        alias = /XXX
        vhost = localhost
        port = 9999
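
    A hedged observation rather than a confirmed fix: the application appears to be registered twice - once in the hand-written vhost and once by the mono-server4-admin output, whose AddMonoApplications default "/XXX:/var/xxx" line attaches it to the default port-80 server, which would match the access log showing port 80. The usual mod_mono vhost shape keeps all the Mono directives under a single alias, along these lines (a sketch; it assumes the auto-generated /etc/mono-server4 snippet is removed from Apache's config so the two definitions don't fight):

        <VirtualHost *:9999>
            ServerName xxx.example
            DocumentRoot /var/xxx
            MonoServerPath xxx "/usr/bin/mod-mono-server4"
            MonoApplications xxx "/:/var/xxx"
            <Location "/">
                MonoSetServerAlias xxx
                SetHandler mono
            </Location>
        </VirtualHost>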


  • What does this example bash startup script do?

    - by Dimitri
    I am trying to set up GNU Octave on my computer (Mac OS X 10.7.4). I am a newbie at using the Terminal and I need help understanding what the following script actually does:

        if [ -f ~/.bashrc ]; then
            . ~/.bashrc
        fi
        PATH=$PATH:/usr/local/bin
        BASH_ENV=~/.bashrc
        export BASH_ENV PATH
        export GNUTERM=aqua
        alias octave="/Applications/Octave.app/Contents/Resources/bin/octave"
        alias gnuplot="/Applications/Gnuplot.app/Contents/Resources/bin/gnuplot"

    (taken from here: http://wikibox.stanford.edu/me112/index.php/Main/OctaveMatlabNotes)

    So this script begins with a simple conditional if statement. I don't understand the conditional expression - what are -f and .bashrc? What does the statement . ~/.bashrc actually do? Then two variables are defined, PATH and BASH_ENV. Why are they exported? Why is GNUTERM=aqua exported even if it's not defined anywhere? All I need is a script that would allow me to run Octave by simply typing octave in the terminal. I don't need an alias for gnuplot. Thanks
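
    For reference, here is the same script with comments. In bash, -f tests that a path exists and is a regular file; a leading dot "sources" a file, i.e. runs it in the current shell; export makes a variable visible to child processes; and GNUTERM=aqua is in fact defined right there in the export statement:

        # If the per-user bashrc exists, run it in the current shell.
        if [ -f ~/.bashrc ]; then
            . ~/.bashrc
        fi
        # Append /usr/local/bin to the command search path.
        PATH=$PATH:/usr/local/bin
        # Make non-interactive bash shells read ~/.bashrc too.
        BASH_ENV=~/.bashrc
        export BASH_ENV PATH
        # Tell gnuplot to use the Aqua terminal (defined and exported here).
        export GNUTERM=aqua
        # Shortcuts so "octave" and "gnuplot" run the app-bundle binaries.
        alias octave="/Applications/Octave.app/Contents/Resources/bin/octave"
        alias gnuplot="/Applications/Gnuplot.app/Contents/Resources/bin/gnuplot"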


  • Cacti Login Page: Infinite loop occurs

    - by beicha
    Apache 2.2.3 | PHP 5.1.6 | MySQL 5.0.77

    I followed the Cacti installation guide to install the latest Cacti 0.8.7h on CentOS 5.5 (64-bit). The installation of PHP/Apache/MySQL went smoothly until I finished the setup and came to the login page. I can log in to http://.../cacti/index.php with the admin account, but the new page is redirected back to the same login page with the message "Please enter your Cacti user name and password below". This is an infinite loop! If I use a wrong admin password, I get the correct error message "Invalid User Name/Password Please Retype". If I log in with the Guest/guest account, "Error: Access Denied, user account disabled." is displayed. The Cacti log file (./cacti/log/cacti.log) is empty. I Googled, and it seems this problem has existed for a long time, but no follow-up solutions were given in the forum posts I found. Can anyone help me with this problem? If more information is needed, please let me know.

    Nov 18, 2011 UPDATE: I re-installed Cacti; this question remains UNSOLVED.


  • SSLVerifyClient optional with location-based exceptions

    - by Ian Dunn
    I have a site that requires authentication in order to access certain directories, but not others. (The "directories" are really just rewrite rules that all pass through /index.php.) In order to authenticate, the user can either log in with a standard username/password or submit a client-side x509 certificate. So, Apache's vhost conf looks something like this:

        SSLCACertificateFile /etc/pki/CA/certs/redacted-ca.crt
        SSLOptions +ExportCertData +StdEnvVars
        SSLVerifyClient none
        SSLVerifyDepth 1
        <LocationMatch "/(foo-one|foo-two|foo-three)">
            SSLVerifyClient optional
        </LocationMatch>

    That works fine, but then large file uploads fail because of the behavior documented in bug 12355. The workaround for that is to set SSLVerifyClient require (or optional) as the default, so now the conf looks like this:

        SSLCACertificateFile /etc/pki/CA/certs/redacted-ca.crt
        SSLOptions +ExportCertData +StdEnvVars
        SSLVerifyClient optional
        SSLVerifyDepth 1
        <LocationMatch "/(bar-one|bar-two|bar-three)">
            SSLVerifyClient none
        </LocationMatch>

    That fixes the upload problem, but the SSLVerifyClient none doesn't work for bar-one, bar-two, etc. Those directories still prompt the user to present a certificate. Additionally, I also need the root URL to be accessible without the user being prompted for a certificate. I'm afraid that will cancel out the workaround, though.
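
    A possible alternative (a sketch, not tested here): keep SSLVerifyClient none as the default and raise the renegotiation buffer only in the locations that do per-directory verification. SSLRenegBufferSize is mod_ssl's own knob (available since 2.2.12) for the request-body problem behind bug 12355:

        SSLVerifyClient none
        <LocationMatch "/(foo-one|foo-two|foo-three)">
            SSLVerifyClient optional
            # Buffer up to ~10 MB of request body during renegotiation;
            # uploads larger than this will still fail.
            SSLRenegBufferSize 10486000
        </LocationMatch>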


  • IIS virtual location path

    - by Worp
    Sorry in advance, for this seems to be a very noobish question that should be easily fixable, yet I can't work it out. I am setting up Windows authentication for a website running on IIS 8.5. The Yii framework of the website takes input like http://localhost/mywebsite/index.php/site/login and processes it, using the "login" action of the "site" controller. I flipped on URL Rewrite to have nicer URLs, leaving the URL as /mywebsite/site/login. Now I need to set up Windows authentication for this location - very specifically, ONLY for this exact location. Only the /site/login location of mywebsite needs to have authentication; every other location needs to stay anonymous. Since it is a "virtual location", I don't know how to do it. I can set up Windows authentication for files, directories, virtual directories, etc., but not for virtual locations that do not map to any file, only to a controller/action in Yii. The working counterpart in Apache would be a <Location> block, but I can't translate this into IIS. I have read that IIS does have the "~" symbol, but I just couldn't make it work yet. Could it be used to achieve authentication on a location basis? I have also looked at virtual directories, which seem to simply be a kind of "symlink" to actual folders on the hard drive. Can they be used differently, to "create a virtual folder in a location that doesn't really exist to manage its properties"? Help is much appreciated.
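
    For what it's worth, IIS configuration has its own path-based <location> element, and the path does not have to exist on disk, so something along these lines might work (a sketch: it assumes the web.config sits at the site root - adjust the path otherwise - and the authentication sections are locked at the server level by default, so they have to be unlocked in applicationHost.config first):

        <location path="mywebsite/site/login">
          <system.webServer>
            <security>
              <authentication>
                <anonymousAuthentication enabled="false" />
                <windowsAuthentication enabled="true" />
              </authentication>
            </security>
          </system.webServer>
        </location>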


  • Apache proxy: Why is one vhost returning Forbidden while the other one works?

    - by Stefan Majewsky
    I have a Java application that needs to talk to another intranet website using HTTPS in both directions. After fighting with Java's SSL implementations for some time, I gave up on that, and have now set up an Apache that's supposed to act as a bidirectional reverse proxy:

        external app ---(HTTPS request)---> Apache ---(local HTTP request)---> Java app

    This direction works just fine; however, the other direction does not:

        Java app ---(local HTTP request)---> Apache ---(HTTPS request)---> external app

    This is the configuration for the vhost implementing the second proxy:

        Listen 127.0.0.1:8081
        <VirtualHost appgateway:8081>
            ServerName appgateway.local
            SSLProxyEngine on
            ProxyPass / https://externalapp.corp:443/
            ProxyPassReverse / https://externalapp.corp:443/
            ProxyRequests Off
            AllowEncodedSlashes On
            # we do not need to apply any more restrictions here, because we listened on
            # local connections only in the first place (see the Listen directive above)
            <Proxy https://externalapp.corp:443/*>
                Order deny,allow
                Allow from all
            </Proxy>
        </VirtualHost>

    A curl http://127.0.0.1:8081/ should serve the equivalent of https://externalapp.corp, but instead results in 403 Forbidden, with the following message in the Apache error log:

        [Wed Jun 04 08:57:19 2014] [error] [client 127.0.0.1] Directory index forbidden by Options directive: /srv/www/htdocs/

    This message completely puzzles me: yes, I have not set up any permissions on the DocumentRoot of this vhost, but everything works fine for the other proxy direction, where I haven't either. For reference, here's the other vhost:

        Listen this_vm_hostname:443
        <VirtualHost javaapp:443>
            ServerName javaapp.corp
            SSLEngine on
            SSLProxyEngine on
            # not shown: SSLCipherSuite, SSLCertificateFile, SSLCertificateKeyFile
            SSLOptions +StdEnvVars
            ProxyPass / http://localhost:8080/
            ProxyPassReverse / http://localhost:8080/
            ProxyRequests Off
            AllowEncodedSlashes On
            # Local reverse proxy authorization override
            <Proxy http://localhost:8080/*>
                Order deny,allow
                Allow from all
            </Proxy>
        </VirtualHost>
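
    One thing that stands out (a guess, not a verified fix): the 403 comes from the default document root, /srv/www/htdocs/, which suggests the request never matched the proxy vhost at all. The socket is bound to 127.0.0.1:8081, but <VirtualHost appgateway:8081> only matches if appgateway resolves to 127.0.0.1 on this host; otherwise requests fall through to the default server. A sketch of the change:

        Listen 127.0.0.1:8081
        # Match the vhost on the same address the socket is bound to, so a
        # request to 127.0.0.1:8081 cannot fall through to the default vhost:
        <VirtualHost 127.0.0.1:8081>
            ServerName appgateway.local
            SSLProxyEngine on
            ProxyPass / https://externalapp.corp:443/
            ProxyPassReverse / https://externalapp.corp:443/
        </VirtualHost>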


  • Configure Nginx to render static files and rewrite file extension or proxy_pass

    - by Pardoner
    I've set up Nginx to handle all my static files, else proxy_pass to a Node.js server. It's working fine, but I'm having difficulty rewriting the URL so that it removes the .html file extension.

        upstream my_upstream {
            server 127.0.0.1:8000;
            keepalive 64;
        }

        server {
            listen 80;
            server_name staging.mysite.com;
            root /var/www/staging.mysite.org/public;
            access_log /var/logs/staging.mysite.org.access.log;
            error_log /var/logs/staging.mysite.org.error.log;

            location ~ ^/(images/|javascript/|css/|robots.txt|humans.txt|favicon.ico) {
                rewrite (.*)\.html $1 permanent;
                try_files $uri.html $uri/ /index.html;
                access_log off;
                expires max;
            }

            location / {
                proxy_redirect off;
                proxy_set_header X-Real-IP $remote_addr;
                proxy_set_header X-Forwarded-For $proxy_add_x_forwarded_for;
                proxy_set_header X-Forwarded-Proto $scheme;
                proxy_set_header Host $http_host;
                proxy_set_header X-NginX-Proxy true;
                proxy_set_header Connection "";
                proxy_http_version 1.1;
                proxy_cache one;
                proxy_cache_key sfs$request_uri$scheme;
                proxy_pass http://my_upstream;
            }
        }
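
    One common pattern for extensionless URLs (a sketch, untested against this exact setup): let the catch-all location try the bare URI, then the URI with .html appended, and only then fall back to the Node upstream via a named location:

        location / {
            # /about -> /about, then /about.html, then the Node.js app
            try_files $uri $uri.html @node;
        }

        location @node {
            proxy_set_header Host $http_host;
            proxy_set_header X-Real-IP $remote_addr;
            proxy_http_version 1.1;
            proxy_pass http://my_upstream;
        }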


  • Bash shell prompt: where is $RET?

    - by Evgeni Sergeev
    I was reading https://wiki.archlinux.org/index.php/Color_Bash_Prompt and ended up with the following:

        # Stores the status of each command in $RET
        PROMPT_COMMAND='RET=$?;'

        # A colour.
        RED_SHELL='\e[0;36m'

        # Prints "Status 1" if RET is 1, for example.
        RET_VISUALISE='$(if [[ $RET != 0 ]]; then echo -ne "Status \[$RED_SHELL\]$RET\n" && RET=0; fi;)'

        # What to print for each prompt.
        PS1="$RET_VISUALISE\[\e]0;\w\a\]\n\[\e[32m\]\u@\h \t \[\e[33m\]\w\[\e[0m\]\n\$ "

    This does almost what I want, except when I press Enter, Enter, Enter multiple times after a command that returned status != 0. In this case it prints "Status 1" every time I press Enter. This is what the && RET=0; part was supposed to get rid of. Also, I don't understand why env | grep RET only shows the PS1 contents. What is the scope of $RET?
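
    Two facts of bash explain both observations: the $( ... ) in RET_VISUALISE is command substitution, which runs in a subshell, so RET=0 inside it never changes RET in the parent shell - that's why the message repeats; and env lists only exported variables, so grep only finds RET quoted inside the exported PS1. A sketch that does the printing and the reset in PROMPT_COMMAND itself, where assignments persist:

        # Print the status line from PROMPT_COMMAND (no subshell), then
        # clear RET so an empty Enter doesn't repeat the message.
        PROMPT_COMMAND='RET=$?; if [[ $RET != 0 ]]; then echo -e "Status \e[0;36m${RET}\e[0m"; RET=0; fi'
        PS1="\[\e]0;\w\a\]\n\[\e[32m\]\u@\h \t \[\e[33m\]\w\[\e[0m\]\n\$ "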


  • scponly worked but didn't chroot the home folder; the user can still browse the entire server

    - by Mint
    So I followed the "Chroot and Debian" tutorial at http://sublimation.org/scponly/wiki/index.php/FAQ. When I log into the user "upload" via ssh I have no access to the command line (this is what I wanted). But when I SFTP into the upload user I can still see all the root files (/); it didn't chroot me to just /home/upload. What's going on?

    So I added this to the end of my /etc/ssh/sshd_config file, then did a restart:

        Subsystem sftp internal-sftp
        UsePAM yes
        Match User upload
            ChrootDirectory /home/upload
            AllowTCPForwarding no
            X11Forwarding no
            ForceCommand internal-sftp

    Now when I log in with sftp I can only see my upload folder (this is what I want), but scp no longer works. SCP will accept my password, then:

        debug1: Next authentication method: password
        [email protected]'s password:
        debug1: Authentication succeeded (password).
        debug1: channel 0: new [client-session]
        debug1: Requesting [email protected]
        debug1: Entering interactive session.
        debug1: Sending environment.
        debug1: Sending env LANG = en_NZ.UTF-8
        debug1: Sending command: scp -v -t /test

    It hangs on that last debug message. Any help would be greatly appreciated. Note: running Debian Lenny.
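
    A relevant detail, offered as background rather than a tested fix: ForceCommand internal-sftp makes the server speak only the SFTP protocol, while scp relies on running an scp binary on the remote side over a shell channel - hence the hang right after "Sending command: scp -v -t /test". With this setup, transfers have to go over SFTP instead, e.g.:

        # Same transfer over the SFTP protocol instead of scp
        # (the password prompt still comes from the terminal):
        echo "put localfile /test" | sftp upload@yourserver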


  • Removing trailing slashes in WordPress blog hosted on IIS

    - by Zishan
    I have a WordPress blog hosted in my IIS virtual directory, and all of its URLs end with a forward slash. For example: http://www.example.com/blog/

    I have the following rules defined in my web.config:

        <rule name="wordpress" patternSyntax="Wildcard">
            <match url="*" />
            <conditions>
                <add input="{REQUEST_FILENAME}" matchType="IsFile" negate="true" />
                <add input="{REQUEST_FILENAME}" matchType="IsDirectory" negate="true" />
            </conditions>
            <action type="Rewrite" url="index.php" />
        </rule>
        <rule name="Redirect-domain-to-www" patternSyntax="Wildcard" stopProcessing="true">
            <match url="*" />
            <conditions>
                <add input="{HTTP_HOST}" pattern="example.com" />
            </conditions>
            <action type="Redirect" url="http://www.example.com/blog/{R:0}" />
        </rule>

    In addition, I tried adding the following rule for removing trailing slashes:

        <rule name="Remove trailing slash" stopProcessing="true">
            <match url="(.*)/$" />
            <conditions>
                <add input="{REQUEST_FILENAME}" matchType="IsFile" negate="true" />
                <add input="{REQUEST_FILENAME}" matchType="IsDirectory" negate="true" />
            </conditions>
            <action type="Redirect" redirectType="Permanent" url="{R:1}" />
        </rule>

    It seems that the last rule doesn't work at all. Is there anyone around here who has attempted to remove trailing slashes from WordPress blogs hosted on IIS?
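
    Worth noting, as a hedged suggestion rather than a verified fix: URL Rewrite evaluates rules top-down, and the "wordpress" rule above rewrites matching requests to index.php before the trailing-slash rule is ever reached. The usual arrangement puts canonical redirects first and the catch-all rewrite last; WordPress's own permalink setting must also use a structure without the trailing slash, or it will redirect back:

        <rules>
            <!-- Canonical redirects first, so they see the original URL: -->
            <rule name="Remove trailing slash" stopProcessing="true">
                <match url="(.*)/$" />
                <conditions>
                    <add input="{REQUEST_FILENAME}" matchType="IsFile" negate="true" />
                    <add input="{REQUEST_FILENAME}" matchType="IsDirectory" negate="true" />
                </conditions>
                <action type="Redirect" redirectType="Permanent" url="{R:1}" />
            </rule>
            <!-- ...then "Redirect-domain-to-www", and the "wordpress"
                 catch-all rewrite (shown above) last. -->
        </rules>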


  • Realtek LAN driver problem - Win 7

    - by tango _123
    Hi, my new notebook (Acer TM 8371), with Win 7 (x32) Professional as the OS, shows strange behaviour with respect to the LAN. When I plug or unplug my charger cable, or my notebook goes into standby, or I just do nothing, the network connection is lost. All the energy-saving functionality has been deactivated. If I check the network adapter information, it says "a network cable is unplugged or corrupted", but it is not true: the cable is in and it is not corrupted (I tried several that work on other machines). If I try to disable and then re-enable the adapter, it allows the disable but not the re-enable (it says no driver for the network card is available). In some cases the driver also disappears (I cannot see it any longer). If I restart the notebook, everything comes back to normal, but today in 3 hours I had to restart 6 times! Can anyone help me? Win 7 says the driver is the latest one, so there is no need to update it. Here is some technical information about the network card and driver:

        Name: [00000015] Realtek PCIe GBE Family Controller
        Adapter type: Ethernet 802.3
        Product type: Realtek PCIe GBE Family Controller
        Installed: yes
        Device PNP ID: PCI\VEN_10EC&DEV_8168&SUBSYS_02831025&REV_02\4&266E4C4F&0&00E2
        Last reset: 20/11/2009 11:27
        Index: 15
        Service name: RTL8167
        I/O port: 0x0000FF00-0x0000FFFF
        Memory address: 0xE2500000-0xE2500FFF
        Memory address: 0xE1400000-0xE140FFFF
        IRQ channel: IRQ 18
        Driver: c:\windows\system32\drivers\rt86win7.sys (7.3.522.2009, 164,00 KB (167.936 Byte), 28/08/2009 22:12)


  • Weird Apache behaviour with files again

    - by afifio
    Hi, and thanks for stopping by. I have read "Weird Apache problem with file" - and it's not the same problem. Setup: a single XAMPP installation on Windows, a single Windows user, two hard drives, one of which is a portable USB drive. All was fine until I moved XAMPP to the new portable drive.

    Symptom: old PHP files work fine, new ones don't.

        http://127.0.0.1/Ajax/index.php - works
        http://127.0.0.1/test2/t.php - displays the source code
        http://127.0.0.1/Ajax/test2/t.php - displays the source code
        http://127.0.0.1/Ajax/t.php - displays the source code

    Extra info: IIS plus the MS web development stack (.NET 4, ASP, etc.) has been installed, and the machine hasn't been rebooted yet. .htaccess also seems not to work. The Apache2 conf file was modified to AllowOverride All, and still nothing changes. One of the directories is supposed to treat .htm as PHP, yet I got plain text; I created another directory and added a phpinfo file - plain text again. Browsing to phpMyAdmin works fine.

    Suspects: Does Apache honour XP security and permissions? If so, note this is a single-user computer. Does Apache dislike my new hard disk/new location? Why does it not execute PHP in the new directories but happily executes it in the old folder? Thanks for the riddle answers.


  • 7ZIP - Command Line Compression | Can Never Keep it Simple

    - by OneTwoYou
    I've been Googling for a few hours on how to compress a file inside a directory, and I can't find anything. I found how to compress a folder in general; now I wish to know how to compress a folder within a folder, along with a file. Current code:

        7zG.exe a -tzip "test.zip" dontcompressme/compressme/new.txt
        pause

    As you can see above, I don't want the first folder in the archive, only the second and whatever is within it. I have 7zG.exe sitting in the main folder, and I have some files that are three folders deep, but I don't know how to compress only those. Here is my directory list:

        Folder One (don't compress)
            Folder Two (don't compress)
                Folder Three (okay to compress)
                    Document One.txt (okay to compress)
                    Document Two.txt (okay to compress)
                    Index.html (okay to compress)

    Does anyone know how I can do this in the simplest way ever invented by man? Whenever I search on Google, I get methods for compressing a whole folder, but never this. It makes me kind of upset that I can't get a simple and straightforward answer. Thank you if you answer my question.
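
    7-Zip stores entry paths relative to the current directory, so a parent folder can be kept out of the archive by changing into it before adding files. A sketch using the layout above (the folder names stand in for the real ones, and 7zG.exe is assumed to sit in the top-level folder as described):

        rem Paths inside the archive will start at "Folder Three\":
        cd "Folder One\Folder Two"
        ..\..\7zG.exe a -tzip "..\..\test.zip" "Folder Three\"
        pause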


  • Puppet: write hosts entries using an API call

    - by Ben Smith
    I'm trying to write a Puppet function that calls my hosting environment (Rackspace Cloud at the moment) to list servers, then updates my hosts file. My get_hosts function is currently this:

        require 'rubygems'
        require 'cloudservers'

        module Puppet::Parser::Functions
          newfunction(:get_hosts, :type => :rvalue) do |args|
            unless args.length == 1
              raise Puppet::ParseError, "Must provide the datacenter"
            end
            DC = args[0]
            USERNAME = DC == "us" ? "..." : "..."
            API_KEY = DC == "us" ? "..." : "..."
            AUTH_URL = DC == "us" ? CloudServers::AUTH_USA : CloudServers::AUTH_UK
            DOMAIN = "..."

            cs = CloudServers::Connection.new(:username => USERNAME, :api_key => API_KEY, :auth_url => AUTH_URL)
            cs.list_servers_detail.map {|server|
              server.map {|s| { s[:name] + "." + DC + DOMAIN => { :ip => s[:addresses][:private][0], :aliases => s[:name] }}}
            }
          end
        end

    And I have a hosts.pp that calls this and should write it to /etc/hosts:

        class hosts::us {
          $hosts = get_hosts("us")
          hostentry { $hosts: }
        }

        define hostentry() {
          host { $name:
            ip => $name[ip],
            host_aliases => $name[aliases]
          }
        }

    As you can imagine, this isn't currently working, and I'm getting a 'Symbol as array index at /etc/puppet/manifests/hosts.pp:2' error. I imagine, once I've realised what I'm currently doing wrong, there will be more errors to come. Is this a good idea? Can someone help me work out how to do this?
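
    One way to structure this kind of thing (a sketch, assuming the function is changed to return a single hash of hashes keyed by FQDN, which the nested map above does not quite produce yet): Puppet's create_resources declares one host resource per hash entry, using each inner hash as the parameters:

        # get_hosts would need to return something shaped like:
        #   { "web1.us.example.com" => { "ip" => "10.0.0.1", "host_aliases" => "web1" }, ... }
        class hosts::us {
          $hosts = get_hosts("us")
          create_resources('host', $hosts)
        }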


  • Attempting to emulate Apache MultiViews with Nginx try_files

    - by Samuel Bierwagen
    I want a request to http://example.com/foobar to return http://example.com/foobar.jpg (or .gif, .html, .whatever). This is trivial to do with Apache MultiViews, and it seems like it would be equally easy in Nginx. This question seems to imply that it'd be as easy as try_files $uri $uri/ index.php; in the location block, but that doesn't work. try_files $uri $uri/ =404; doesn't work, nor does try_files $uri =404; or try_files $uri.* =404;. Moving it between my location / { block and the regexp which matches images has no effect. Crucially, try_files $uri.jpg =404; does work, but only for .jpg files, and it throws a configuration error if I use more than one try_files rule in a location block! The current server { block:

        server {
            listen 80;
            server_name example.org www.example.org;
            access_log /var/log/nginx/vhosts.access.log;
            root /srv/www/vhosts/example;

            location / {
                root /srv/www/vhosts/example;
            }

            location ~* \.(?:ico|css|js|gif|jpe?g|es|png)$ {
                expires max;
                add_header Cache-Control public;
                try_files $uri =404;
            }
        }

    Nginx version is 1.1.14.
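
    A detail that may help here: nginx allows only one try_files directive per location, but that one directive takes any number of fallback arguments, so several extensions can be covered by a single rule, along these lines:

        location / {
            # /foobar -> foobar, then foobar.jpg, foobar.gif, foobar.html, else 404
            try_files $uri $uri.jpg $uri.gif $uri.html =404;
        }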


  • SharePoint blog site won't search local site... you can only search for Mysites and users

    - by Don
    I have a how-to company blog site that I post to for my clients to access for help. For some reason it has stopped letting anyone search on it. I can search for My Sites or users, but when you set the search scope drop-down to This Site: "blog site name", you get the following reply:

        No results matching your search were found.
        Check your spelling. Are the words in your query spelled correctly?
        Try using synonyms. Maybe what you're looking for uses slightly different words.
        Make your search more general. Try more general terms in place of specific ones.
        Try your search in a different scope. Different scopes can have different results.

    I have tried the following commands from the index server:

        net stop osearch
        net start osearch
        iisreset /noforce

    But I'm still not able to search the local blog site; I can only search for users and sites. Please help. Don


  • Disk doesn't contain a valid partition table

    - by Jeevan Dongre
    I was running an m1.small EC2 Ubuntu instance. I was running out of disk space, so I upgraded my instance to medium. When I upgraded I actually got 429.5 GB of space, and after that I added a 10 GB volume too. When I run the sudo fdisk -l command I get these results:

        Disk /dev/sda1: 8589 MB, 8589934592 bytes
        255 heads, 63 sectors/track, 1044 cylinders
        Units = cylinders of 16065 * 512 = 8225280 bytes
        Sector size (logical/physical): 512 bytes / 512 bytes
        I/O size (minimum/optimal): 512 bytes / 512 bytes
        Disk identifier: 0x00000000

        Disk /dev/sda1 doesn't contain a valid partition table

        Disk /dev/sda2: 429.5 GB, 429461078016 bytes
        255 heads, 63 sectors/track, 52212 cylinders
        Units = cylinders of 16065 * 512 = 8225280 bytes
        Sector size (logical/physical): 512 bytes / 512 bytes
        I/O size (minimum/optimal): 512 bytes / 512 bytes
        Disk identifier: 0x00000000

        Disk /dev/sda2 doesn't contain a valid partition table

        Disk /dev/sdf: 10.7 GB, 10737418240 bytes
        255 heads, 63 sectors/track, 1305 cylinders
        Units = cylinders of 16065 * 512 = 8225280 bytes
        Sector size (logical/physical): 512 bytes / 512 bytes
        I/O size (minimum/optimal): 512 bytes / 512 bytes
        Disk identifier: 0x00000000

    /dev/sda1 is the primary partition, and /dev/sda2 is what I got by upgrading my instance to medium. But the problem persists: I am not able to pull the code from git. It gives me this error:

        remote: Counting objects: 409, done.
        remote: Compressing objects: 100% (236/236), done.
        fatal: write error: No space left on device
        fatal: index-pack failed
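
    A hedged reading of that output: the 8.5 GB root volume (/dev/sda1) is still the one that is full, while the 429.5 GB /dev/sda2 is the medium instance's ephemeral storage presented as a bare device ("doesn't contain a valid partition table" is normal for EC2 volumes used without one). If that's the case, it has to be formatted and mounted before it adds any usable space, along these lines:

        # A sketch - mkfs erases anything on /dev/sda2, so check first:
        df -h                      # confirm which filesystem is full
        sudo mkfs.ext4 /dev/sda2   # format the ephemeral volume
        sudo mkdir -p /mnt/data
        sudo mount /dev/sda2 /mnt/data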


  • VBA Solution to VLOOKUP with Hyperlinks

    - by Emily2
    I am looking for some help with a VBA solution for preserving hyperlinks when using VLOOKUP in Excel (2010). I have a load of data on Sheet 1 for internal use only, and a cut-down version of this on Sheet 2. Instead of recreating Sheet 2 every time, I want a working version that updates every time Sheet 1 is updated. Thus, I have used VLOOKUP on Sheet 2, so that only the desired info is returned there. However, Sheet 1 contains hyperlinks to external websites in many cells, and these would not pull through to Sheet 2 using VLOOKUP. With some help, the following VBA solution now makes the hyperlinks pull through:

        Function GetHyperLink(r As Range) As String
            If r.Hyperlinks.Count Then
                GetHyperLink = r.Hyperlinks(1).Address
            End If
        End Function

    And I am using the following formula in the relevant cells on Sheet 2:

        =HYPERLINK(GetHyperLink(INDEX('Sheet 1'!$B$1:$B$10001,MATCH(A4,'Sheet 1'!$A$1:$A$10001,0))),(VLOOKUP(A4,'Sheet 1'!$A$1:$B$10001,2,FALSE)))

    However, the problem is with formatting: every cell on Sheet 2 is formatted blue and underlined, even though some of them do not contain a hyperlink! Is someone able to help with a VBA solution/formula to fix this last piece of the puzzle? Many thanks, in anticipation.
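
    The HYPERLINK function does not apply the blue/underline look by itself - that comes from cell formatting - so one option is a small macro that re-applies the link style only where GetHyperLink actually finds an address. A sketch; the sheet and range names ("Sheet2", B4:B10001, 'Sheet 1' columns A:B) mirror the formula above and are assumptions to adjust:

        ' Re-style lookup results: link-blue only where the source cell
        ' actually carries a hyperlink.
        Sub FixLinkFormatting()
            Dim c As Range, src As Range
            For Each c In Worksheets("Sheet2").Range("B4:B10001")
                ' Find the row the formula's MATCH would find.
                Set src = Nothing
                On Error Resume Next
                Set src = Worksheets("Sheet 1").Range("A1:A10001") _
                    .Find(What:=c.Offset(0, -1).Value, LookAt:=xlWhole)
                On Error GoTo 0
                If Not src Is Nothing Then
                    If src.Offset(0, 1).Hyperlinks.Count > 0 Then
                        c.Font.Underline = xlUnderlineStyleSingle
                        c.Font.Color = RGB(0, 0, 255)
                    Else
                        c.Font.Underline = xlUnderlineStyleNone
                        c.Font.ColorIndex = xlColorIndexAutomatic
                    End If
                End If
            Next c
        End Sub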


  • Unable to set initcwnd on a Hetzner server

    - by Sergi
    We just ordered a bunch of Hetzner EX40SSD servers with the minimal Debian install image that they provide, and everything is fine except that, looking at tcpdumps from various locations to fine-tune the network, the initcwnd parameter seems to be stuck at 6 no matter how we change it. By default, Debian 3.2 kernels should have that setting at 10, so this is pretty strange. Is it possible that the NIC driver or a custom setting in the Hetzner Debian image is limiting this parameter? Even if we set it to 4, like the old kernel default, it doesn't take effect. Any ideas would be much appreciated!

    Does anyone know if the NIC drivers provided by default by Debian have some kind of limitation? In a long thread at http://www.webhostingtalk.com/showthread.php?t=1200617&highlight=hetzner they talk about a page, http://wiki.hetzner.de/index.php/Installation_des_r8168-Treibers/en, where Hetzner states that the included Realtek r8168 driver is not working properly, but nowhere do they say that the initcwnd could be affected. Tomorrow I will try to install a CentOS image and see if Debian is the problem... The last resort would be to install a custom Debian image, but that is a pain in the ass! Thanks!
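
    For reference (this is standard iproute2 usage; whether it explains the stuck value on these boxes is unknown): initcwnd is a per-route attribute rather than a sysctl, so it is set by rewriting the route, and the setting silently reverts if anything re-adds the default route afterwards:

        # Show the current default route, then re-add it with initcwnd 10.
        # Gateway and interface below are placeholders - use the values shown:
        ip route show
        sudo ip route change default via 192.0.2.1 dev eth0 initcwnd 10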


  • Remove an apache alias subdirectory

    - by Hippyjim
    I'm using Apache 2 on Ubuntu 12.04. I added an alias for a subdirectory, pointing to gitweb. Then I realised I should probably make it accessible only over https, so I removed the alias and restarted Apache. But I can still navigate to http://xyz/gitweb - even with no alias in any of my config files. How do I remove it?

    EDIT: The config file looked like this before:

        <VirtualHost *:80>
            ServerAdmin webmaster@localhost
            DocumentRoot /home/administrator/webroot
            <Directory />
                Options FollowSymLinks
                AllowOverride None
            </Directory>
            <Directory /home/administrator/webroot/>
                Options Indexes FollowSymLinks MultiViews
                AllowOverride All
                Order allow,deny
                allow from all
            </Directory>

            Alias /gitweb/ /usr/share/gitweb/
            <Directory /usr/share/gitweb/>
                Options ExecCGI +FollowSymLinks +SymLinksIfOwnerMatch
                AllowOverride All
                order allow,deny
                Allow from all
                AddHandler cgi-script .cgi
                DirectoryIndex gitweb.cgi
            </Directory>

            ScriptAlias /cgi-bin/ /usr/lib/cgi-bin/
            <Directory "/usr/lib/cgi-bin">
                AllowOverride None
                Options +ExecCGI -MultiViews +SymLinksIfOwnerMatch
                Order allow,deny
                Allow from all
            </Directory>

            ErrorLog ${APACHE_LOG_DIR}/error.log
            # Possible values include: debug, info, notice, warn, error, crit,
            # alert, emerg.
            LogLevel warn
            CustomLog ${APACHE_LOG_DIR}/access.log combined
        </VirtualHost>

    And this after:

        <VirtualHost *:80>
            ServerAdmin webmaster@localhost
            DocumentRoot /home/administrator/webroot
            <Directory />
                Options FollowSymLinks
                AllowOverride None
            </Directory>
            <Directory /home/administrator/webroot/>
                Options Indexes FollowSymLinks MultiViews
                AllowOverride All
                Order allow,deny
                allow from all
            </Directory>

            ScriptAlias /cgi-bin/ /usr/lib/cgi-bin/
            <Directory "/usr/lib/cgi-bin">
                AllowOverride None
                Options +ExecCGI -MultiViews +SymLinksIfOwnerMatch
                Order allow,deny
                Allow from all
            </Directory>

            ErrorLog ${APACHE_LOG_DIR}/error.log
            # Possible values include: debug, info, notice, warn, error, crit,
            # alert, emerg.
            LogLevel warn
            CustomLog ${APACHE_LOG_DIR}/access.log combined
        </VirtualHost>
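
    A hedged hunch worth checking: on Debian/Ubuntu, packages can drop their own Apache snippets into /etc/apache2/conf.d, and the gitweb package ships one that defines the same alias, so the alias can survive its removal from the vhost. Something like this should locate any leftover definition:

        # Find every place the alias might still be defined:
        grep -ri gitweb /etc/apache2
        # The Debian gitweb package typically installs /etc/apache2/conf.d/gitweb;
        # remove or comment it out, then reload:
        sudo service apache2 reload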


  • Log backups "stalling" on SQL 2008?

    - by MattK
    I have inherited a box running SQL Server 2008 on Windows 2003 and have had a few events where largish (35 GB) log backups "stall", both before and after the installation of SQL 2008 SP1. The server log ships to a standby, so regular log backups are taken at 15-minute intervals. However, after an index reorg causes the log to grow to about 35 GB (on a DB with about 17 GB of data), the next log backup runs to ~95% completion, then seems to stop. The process shows as suspended, with a wait state of BACKUPIO. CPU, read, and write activity on the SPID also does not change, and the process stays in this state for hours, when normally a backup of this size should complete in about 20 minutes.

    This server has a single RAID-1 volume, so the source database files and destination backup files are on the same volume. However, I cannot determine whether another process is blocking the backup. The backup SPID cannot be killed, and the only way to terminate the log backup and clear the lock on the backup file is to cycle the SQL Server service. There was one event where the backup terminated completely, with an error that another process had locked the backup file, but no details about what that process was. Can anyone suggest a cause or a diagnostic process for this situation?
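
    As a starting point for diagnosis (standard DMV columns; identifying which external process holds the file lock would still need a tool like Sysinternals Process Explorer or handle.exe): from a second session, the request DMVs show the backup's progress, current wait, and any blocking session:

        -- Watch the running backup from another session:
        SELECT r.session_id, r.command, r.percent_complete,
               r.wait_type, r.wait_time, r.blocking_session_id
        FROM sys.dm_exec_requests AS r
        WHERE r.command LIKE 'BACKUP%';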

