Search Results

Search found 8370 results on 335 pages for 'seo friendly urls'.

Page 291/335

  • Limiting access in Silverlight/PivotViewer

    - by sparaflAsh
    I'm going to deploy a PivotViewer application. As some of you might know, this Silverlight application loads a .cxml index file for a group of images. I need the .cxml file and the image files to not be directly accessible to the user. Normally, when that isn't a requirement, I code like this in C#, with the file hosted in the document root: _cxml = new CxmlCollectionSource(new Uri("http://www.myurl.it/Collection.cxml", UriKind.Absolute)); This means that my .cxml, and in turn the images, are available over HTTP to everyone who knows the URI. I'm a newbie to server configuration, so any help/hint would be deeply appreciated. Someone suggested taking the files out of the root, but it seems Silverlight can't load them unless they are reachable by URL; at least I didn't manage to work out how. Someone else suggested playing with the web.config file to hide the URLs, but I don't really know where to start. My question is: what's the best practice to hide my stuff? Obviously I can edit the question if you need more details.
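    One common pattern for this kind of requirement (a hedged sketch, not the poster's solution) is to keep Collection.cxml outside the public document root and stream it through an authenticated ASP.NET handler, so the Silverlight client points at the handler URL instead of the raw file. The handler name, the App_Data location and the session flag below are illustrative assumptions:

      // CollectionHandler.ashx.cs -- hypothetical handler name; a sketch only.
      using System.Web;
      using System.Web.SessionState;

      public class CollectionHandler : IHttpHandler, IRequiresSessionState
      {
          public void ProcessRequest(HttpContext context)
          {
              // Serve the collection only to callers the application has marked as logged in.
              if (context.Session["IsAuthenticated"] == null)
              {
                  context.Response.StatusCode = 403;
                  return;
              }

              // The .cxml lives in App_Data, which the web server never serves directly.
              string path = context.Server.MapPath("~/App_Data/Collection.cxml");
              context.Response.ContentType = "text/xml";
              context.Response.TransmitFile(path);
          }

          public bool IsReusable { get { return false; } }
      }

    The CxmlCollectionSource URI would then point at CollectionHandler.ashx; note this only covers the index file, and the DeepZoom image tiles would need a similar gate.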

    Read the article

  • Ubuntu server or Debian server (to run C++ apps developed on Ubuntu)

    - by skyeagle
    I have written a number of C++ server-side daemons for my website, using my Ubuntu 9.10 dev machine. The C++ apps I mentioned above are "GUI-less" daemons (and libraries used by the daemons). I am now about to host my website and need to decide whether to go with Debian server or Ubuntu server. In a nutshell, here is the situation: I developed on the Ubuntu desktop because I preferred the friendlier GUI. I would like to deploy on Debian server because of the (perceived?) robustness of Debian server over Ubuntu server (I may be totally wrong here - and in fact, this is really what this question is all about). If Debian server is indeed more robust than Ubuntu server, then I have no choice but to go with Debian server - BUT, will my Ubuntu-developed C++ apps run on the server, or do I need to recompile them there? (I'd HATE to have to do that, because I want to keep the server machine clean and light - no GUI, no dev tools, etc.) This last question is really about binary compatibility between Ubuntu and Debian. I want the server to be robust, secure and stable, and simply act as a server (i.e. LAMP and very little else - no GUI etc). Given that requirement, and the fact that I need to run my C++ apps (developed on Ubuntu 9.10), I need advice on which OS to choose for the server. Ideally, any advice will be backed with a reason. I am particularly interested in hearing from people who have been in an identical situation, or done something similar.
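    One quick way to gauge the binary-compatibility question before committing is to copy the compiled daemons onto the candidate Debian box (or a chroot of it) and check whether their shared-library dependencies resolve. A minimal sketch, with the binary name as an illustrative assumption:

      # Any "not found" line means that library (or that version of it) is missing on the target.
      ldd ./mydaemon

      # The C and C++ runtimes are what usually bite; compare their versions on both machines.
      ldd --version            # prints the glibc version
      dpkg -l libstdc++6       # C++ standard library package version on Debian/Ubuntu

    If everything resolves, the binaries will generally run as-is; if not, a rebuild on (or for) the target is the safer path.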

    Read the article

  • Google respond differently to two identical nginx setups and 200 codes; any ideas?

    - by Yuji Tomita
    I'm rather confused... I have a linode.com VPS which has been cloned recently, so the settings are the same between nginx servers. One lives on a dev subdomain, one on a www. I'm trying to run a Google experiment on my live server, which claims: "Web server rejects utm_expid. Your server doesn't support added query arguments in URLs." My logs on the dev server, where it works, show: 74.125.186.32 - - [13/Sep/2012:13:33:45 -0700] "GET /product/iphone-case/?utm_expid=25706866-0 HTTP/1.1" 200 12521 "-" "Google_Analytics_Content_Experiments 74.125.186.32 - - [13/Sep/2012:13:33:45 -0700] "GET /product/iphone-case/?ab_reviews=True&utm_expid=25706866-0 HTTP/1.1" 200 14679 "-" "Google_Analytics_Content_Experiments My production server shows Google making a second request: 74.125.186.41 - - [13/Sep/2012:13:34:49 -0700] "GET /product/iphone-case/?ab_reviews=on&utm_expid=25706866-1 HTTP/1.1" 200 12104 "-" "Google_Analytics_Content_Experiments 74.125.186.41 - - [13/Sep/2012:13:34:49 -0700] "GET /product/iphone-case/?utm_expid=25706866-1 HTTP/1.1" 200 12122 "-" "Google_Analytics_Content_Experiments 74.125.186.41 - - [13/Sep/2012:13:34:49 -0700] "GET /product/iphone-case/ <--- A second request for some reason. HTTP/1.1" 200 12522 "-" "Google_Analytics_Content_Experiments I'm not sure how Google decides that it needs to send a second request without the query string. The original request clearly returned a 200 OK response. Does anybody have any suggestions where to look next? The HTML (compared by diff) on the two pages is exactly the same.
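    One way to take both the browser and Google's crawler out of the loop is to replay the experiment request against each server by hand and diff what comes back; a small sketch (the hostnames are placeholders for the dev and production domains):

      curl -sD - -o /tmp/dev.html  "http://dev.example.com/product/iphone-case/?utm_expid=25706866-0"
      curl -sD - -o /tmp/prod.html "http://www.example.com/product/iphone-case/?utm_expid=25706866-1"
      diff /tmp/dev.html /tmp/prod.html | head

    If the headers and bodies really are identical, the difference Google is reacting to is more likely in redirects, caching headers or response timing than in the HTML itself.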

    Read the article

  • Apache2 Virtualhost practice config issue

    - by sisko
    I am practicing virtualhost configuration. In my /var/www directory I have created 3 directories called test1, test2 and test3, each of which has a simple index.php script in it, i.e. test1/index.php etc. In /etc/apache2/sites-available/test1 I have the following configuration: <VirtualHost *:80> ServerAdmin webmaster@localhost ServerName test1 DocumentRoot /var/www/test1 <Directory /> Options FollowSymLinks AllowOverride All </Directory> <Directory /var/www/test1/> Options -Indexes FollowSymLinks MultiViews AllowOverride All Order allow,deny allow from all </Directory> ErrorLog ${APACHE_LOG_DIR}/error.log # Possible values include: debug, info, notice, warn, error, crit, # alert, emerg. LogLevel warn CustomLog ${APACHE_LOG_DIR}/access.log combined </VirtualHost> All the other sites have a similar VirtualHost definition. I have enabled the site (the symlink appears in sites-enabled) and I have restarted Apache. However, when I visit localhost/test1, I get a 404 error. My error log shows the following message: [Wed Oct 23 06:22:52 2013] [error] [client 127.0.0.1] File does not exist: /var/www/test1/test1 I don't know why I get the double test1/test1 in the error logs. I'm trying to find the right VirtualHost setup which will allow all 3 test websites to be served from their URLs, i.e. test1/index.php, test2/index.php and test3/index.php. Can anyone help me out, please?
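    For what it's worth, with ServerName test1 Apache expects the site to be reached as http://test1/ rather than as a /test1 path under localhost, which is why the DocumentRoot /var/www/test1 gets /test1 appended to it. A hedged sketch of the usual local name-based setup, assuming Apache 2.2:

      # /etc/hosts - make the three ServerNames resolve to this machine
      127.0.0.1   test1 test2 test3

      # /etc/apache2/ports.conf (Apache 2.2 only; 2.4 does this implicitly)
      NameVirtualHost *:80

    After a reload, http://test1/, http://test2/ and http://test3/ should each land in their own DocumentRoot.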

    Read the article

  • Installing mod_pagespeed (Apache module) on CentOS

    - by Sid B
    I have a CentOS (5.7 Final) system on which I already have Apache (2.2.3) installed. I have installed mod_pagespeed by following the instructions on: http://code.google.com/speed/page-speed/download.html and got the following while installing: # rpm -U mod-pagespeed-*.rpm warning: mod-pagespeed-beta_current_x86_64.rpm: Header V4 DSA signature: NOKEY, key ID 7fac5991 [ OK ] atd: [ OK ] It does appear to be installed properly: # apachectl -t -D DUMP_MODULES Loaded Modules: ... pagespeed_module (shared) And I've made the following changes in /etc/httpd/conf.d/pagespeed.conf Added: ModPagespeedEnableFilters collapse_whitespace,elide_attributes ModPagespeedEnableFilters combine_css,rewrite_css,move_css_to_head,inline_css ModPagespeedEnableFilters rewrite_javascript,inline_javascript ModPagespeedEnableFilters rewrite_images,insert_img_dimensions ModPagespeedEnableFilters extend_cache ModPagespeedEnableFilters remove_quotes,remove_comments ModPagespeedEnableFilters add_instrumentation Commented out the following lines in mod_pagespeed_statistics <Location /mod_pagespeed_statistics> **# Order allow,deny** # You may insert other "Allow from" lines to add hosts you want to # allow to look at generated statistics. Another possibility is # to comment out the "Order" and "Allow" options from the config # file, to allow any client that can reach your server to examine # statistics. This might be appropriate in an experimental setup or # if the Apache server is protected by a reverse proxy that will # filter URLs in some fashion. **# Allow from localhost** **# Allow from 127.0.0.1** SetHandler mod_pagespeed_statistics </Location> As a separate note, I'm trying to run the prescribed system tests as specified on google's site, but it gives the following error. I'm averse to updating wget on my server, as I'm sure there's no need for it for the actual module to function correctly. ./system_test.sh www.domain.com You have the wrong version of wget. 1.12 is required.
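    As a lighter-weight sanity check than the bundled system test, the module stamps a response header when it is active, so a quick hedged check from the server itself (assuming the site answers on localhost) is:

      # An "X-Mod-Pagespeed: <version>" header means the module is loaded and rewriting responses.
      curl -sI http://localhost/ | grep -i x-mod-pagespeed

      # The rewritten HTML should also show collapsed whitespace and combined CSS:
      curl -s http://localhost/ | head -n 20

    That avoids touching the system wget just to confirm the filters are running.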

    Read the article

  • network policy + WPA enterprise (tkip) Windows 2008 R2

    - by Aceth
    Hi, I've attempted the following guide and am in a bit of a pickle. http://techblog.mirabito.net.au/?p=87 My main goal is to have username/password-based wireless authentication with Active Directory integration. I keep getting the error "Network Policy Server denied access to a user. Contact the Network Policy Server administrator for more information." User: Security ID: domain\rhysbeta Account Name: rhysbeta Account Domain: domain Fully Qualified Account Name: domain\rhysbeta Client Machine: Security ID: NULL SID Account Name: - Fully Qualified Account Name: - OS-Version: - Called Station Identifier: 00-12-BF-00-71-3C:wirelessname Calling Station Identifier: 00-23-76-5D-1E-31 NAS: NAS IPv4 Address: 0.0.0.0 NAS IPv6 Address: - NAS Identifier: - NAS Port-Type: Wireless - IEEE 802.11 NAS Port: 2 RADIUS Client: Client Friendly Name: Belkin54g Client IP Address: x.x.x.10 Authentication Details: Connection Request Policy Name: Secure Wireless Connections Network Policy Name: Secure Wireless Connections Authentication Provider: Windows Authentication Server: srvr.example.com Authentication Type: EAP EAP Type: - Account Session Identifier: - Logging Results: Accounting information was written to the local log file. Reason Code: 22 Reason: The client could not be authenticated because the Extensible Authentication Protocol (EAP) Type cannot be processed by the server. I would love to have it so that non-domain devices

    Read the article

  • redirect landing without changing url

    - by Gerard
    I'm working with Apache and Joomla. I would like several URLs to land on the same landing page without the address changing. So, for example: example.com/luggage/delay/iberia example.com/luggage/delay/aireuropa example.com/luggage/delay/ryanair need to point to example.com/luggage/delay, but the address bar must still show the different airline companies. I've been messing around with mod_rewrite with the following codes: 1: RewriteCond %{REQUEST_URI} /equipaje/problemas-de-equipaje/perdida-equipaje-iberia [NC,OR] RewriteCond %{REQUEST_URI} /equipaje/problemas-de-equipaje/perdida-equipaje-vueling [NC,OR] RewriteCond %{REQUEST_URI} /equipaje/problemas-de-equipaje/perdida-equipaje-ryanair [NC,OR] RewriteCond %{REQUEST_URI} /equipaje/problemas-de-equipaje/perdida-equipaje-aireuropa [NC,OR] RewriteRule ^(.*)$ /equipaje/problemas-de-equipaje/perdida-equipaje [P] 2: RewriteRule ^equipaje/problemas-de-equipaje/perdida-equipaje-iberia equipaje/problemas-de-equipaje/perdida-equipaje [NC,L] RewriteRule ^equipaje/problemas-de-equipaje/perdida-equipaje-vueling equipaje/problemas-de-equipaje/perdida-equipaje [NC,L] RewriteRule ^equipaje/problemas-de-equipaje/perdida-equipaje-ryanair equipaje/problemas-de-equipaje/perdida-equipaje [NC,L] RewriteRule ^equipaje/problemas-de-equipaje/perdida-equipaje-aireuropa equipaje/problemas-de-equipaje/perdida-equipaje [NC,L] The idea is to have different URLs for AdWords, while keeping those per-company landing pages out of the index to avoid duplicate content. I've been looking at several examples and trying to understand how the module works, but so far I haven't managed it... Is it possible to do this with mod_rewrite? Many thanks!
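    For reference, it's the internal rewrite (no [R] redirect and no [P] proxy flag, which would also need mod_proxy) that leaves the address bar untouched. A hedged sketch that collapses the four company rules into one, assuming it sits in .htaccess before Joomla's own rules:

      RewriteEngine On
      # Serve the shared landing page internally; the browser keeps the airline-specific URL.
      RewriteRule ^equipaje/problemas-de-equipaje/perdida-equipaje-(iberia|vueling|ryanair|aireuropa)/?$ equipaje/problemas-de-equipaje/perdida-equipaje [NC,L]

    The [L] only ends this pass, so Joomla's index.php rule still picks up the rewritten path afterwards; keeping the AdWords variants out of the index would then be a robots/meta concern rather than a rewrite one.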

    Read the article

  • Appears to be "randomly" switching between the acl matched backend and the default backend

    - by Xoor
    I have HAProxy acting as a proxy in front of: an NGinx instance, and an in-house load balancer in front of multiple dynamic services exposed with socket.io (websockets). My problem is that from time to time my connections are proxied correctly to my socket.io cluster, and then randomly it falls back to routing to NGinx, which is obviously annoying and meaningless since NGinx isn't meant to handle the request. This happens when requesting URLs of the format: http://mydomain.com/backends/* There's an ACL in the HAProxy config to match the '/backends/*' path. Here's a simplified version of my HAProxy config (removed extra unrelated entries and changed names): global daemon maxconn 4096 user haproxy group haproxy nbproc 4 defaults mode http timeout server 86400000 timeout connect 5000 log global #this frontend interface receives the incoming http requests frontend http-in mode http #process all requests made on port 80 bind *:80 #set a large timeout for websockets timeout client 86400000 # Default Backend default_backend www_backend # Loadfire (socket cluster) acl is_loadfire_backends path_beg /backends use_backend loadfire_backend if is_loadfire_backends # NGinx backend backend www_backend server www_nginx localhost:12346 maxconn 1024 # Loadfire backend backend loadfire_backend option forwardfor # This sets X-Forwarded-For option httpclose server loadfire localhost:7101 maxconn 2048 It's really confusing to me why the behaviour appears to be "random", and since it's hard to reproduce it's hard to debug. I appreciate any insight on this.
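    One hedged first step is to make HAProxy say which backend it actually picked for each request rather than inferring it from behaviour; with a local syslog listener configured, the standard HTTP log line already includes the frontend and backend/server names:

      global
          log 127.0.0.1 local0

      frontend http-in
          log global
          option httplog

    It may also matter that option httpclose is only set on the loadfire backend: without it (or option http-server-close) in the defaults or frontend, older HAProxy releases only inspect the first request of a keep-alive connection, so later requests on that same connection stick to whichever backend was chosen first.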

    Read the article

  • How to email photo from Ubuntu F-Spot application via Gmail?

    - by Norman Ramsey
    My father runs Ubuntu and wants to be able to use the GNOME photo manager, F-Spot, to email photos. However, he must use Gmail as his client because (a) it's the only client he knows how to use and (b) his ISP refuses to reveal his SMTP password. I've got as far as setting up Firefox to use Gmail to handle mailto: links, and I've also configured Firefox as the system default mailer using gnome-default-applications-properties. F-Spot presents a mailto: URL with an attach=file:///tmp/mumble.jpg header. So here's the problem: the attachment never shows up. I can't tell if Firefox is dropping the attachment header, if Gmail doesn't support the header, or what. I've learned that: There's no official header in the mailto: URL RFC that explains how to add an attachment. I can't find documentation on how Firefox handles mailto: URLs that would explain to me how to communicate to Firefox that I want an attachment. I can't find any documentation for Gmail's URL API that would enable me to tell Gmail directly to start composing a message with a given file as an attachment. I'm perfectly capable of writing a shell script that sits between F-Spot and Firefox and massages the URL that F-Spot presents into something that will coax Firefox into doing the right thing. But I can't figure out how to persuade Firefox to start composing a Gmail message with a local file attached. Any help would be greatly appreciated.

    Read the article

  • IE9 appears to be ignoring RewriteRule in htaccess file

    - by mouli
    I have a site that uses SEF URLs and .htaccess RewriteRules to serve up the pages. This has worked fine for several years, until the arrival of IE9. Now it appears that the links are not being rewritten and the site is dead in the water. I have tried different compatibility modes, to no avail, and I've played with the rewrite rules over and over, tried different doctypes and a few other browser settings. I agree that in theory it cannot be a browser-specific problem if the problem is with the .htaccess file, but this site works in IE8, Firefox and Chrome. I have run the RewriteRule through a validator and it looks fine. Any ideas would be appreciated as I am running out of them. The site is www.marlboroughsounds.co.nz, a sample link is http://www.marlboroughsounds.co.nz/walking/freedom-walk-queen-charlotte-track/4dfw and the rewrite rule that's not working looks like this: RewriteRule ^walking/.*/([a-z0-9_]*)/?$ /walking.php?act_code=$1 [L] The link fails and it serves up a browser 404 page, not even the custom 404 I have for the site. Any ideas would be much appreciated as I am stumped.
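    Since the rewrite happens server-side, a quick hedged check is to request the SEF URL with no browser involved at all, and then again while pretending to be IE9, to see whether the server branches on the User-Agent:

      # If this returns 200 (and the site's own 404 for a bad code), the RewriteRule itself is fine.
      curl -I "http://www.marlboroughsounds.co.nz/walking/freedom-walk-queen-charlotte-track/4dfw"

      # Same request with an IE9 User-Agent string:
      curl -I -A "Mozilla/5.0 (compatible; MSIE 9.0; Windows NT 6.1; Trident/5.0)" \
           "http://www.marlboroughsounds.co.nz/walking/freedom-walk-queen-charlotte-track/4dfw"

    If both come back 200, the .htaccess is off the hook and the difference lies in what IE9 actually sends (a proxy, an add-on, or a cached redirect).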

    Read the article

  • NGINX Remove index.php /index.php/something/more/ to /something/more

    - by Gaston
    I'm trying to clean up URLs in nginx for a site using the DooPHP framework. From this: http://example.com/index.php/something/more/ to this: http://example.com/something/more/ I want to remove the "index.php" from the URL if someone tries to enter the site using the first form - like a permanent redirect. How do I configure this in nginx? Thanks. [Update: actual nginx config] server { listen 80; server_name vip.example.com; rewrite ^/(.*) https://vip.example.com/$1 permanent; } server { listen 443; server_name vip.example.com; error_page 404 /vip.example.com/404.html; error_page 403 /vip.example.com/403.html; error_page 401 /vip.example.com/401.html; location /vip.example.com { root /sites/errors; } ssl on; ssl_certificate /etc/nginx/config/server.csr; ssl_certificate_key /etc/nginx/config/server.sky; if (!-e $request_filename){ rewrite /.* /index.php; } location / { auth_basic "example Team Access"; auth_basic_user_file config/htpasswd; root /sites/vip.example.com; index index.php; } location ~ \.php$ { fastcgi_pass 127.0.0.1:9000; fastcgi_index index.php; fastcgi_param SCRIPT_FILENAME /sites/vip.example.com$fastcgi_script_name; include fastcgi_params; fastcgi_param PATH_INFO $fastcgi_script_name; } }
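    A minimal sketch of the redirect itself, placed inside the HTTPS server block before the existing catch-all rewrite, so that the cleaned URL is what then gets routed back to index.php internally (assumption: DooPHP keeps reading the path from the rewritten request as it does now):

      # 301 /index.php/something/more/  ->  /something/more/
      rewrite ^/index\.php/(.*)$ /$1 permanent;

    The existing "if (!-e $request_filename)" rule then maps the clean path back onto index.php for the framework without exposing it in the address bar.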

    Read the article

  • How can I disable Kerberos authentication for only the root of my site?

    - by petRUShka
    I have Kerberos-based authentication and I want to disable it for only the root URL, http://mysite.com/. I want it to continue to work fine on any other page, like http://mysite.com/page1. I have this in my .htaccess: AuthType Kerberos AuthName "Domain login" KrbAuthRealms DOMAIN.COM KrbMethodK5Passwd on Krb5KeyTab /etc/httpd/httpd.keytab require valid-user I want to turn it off only for the root URL. As a workaround, it might be possible to do this in the virtual host config rather than in .htaccess; unfortunately I don't know how to do that either. Part of my vhost.conf: <Directory /home/user/www/current/public/> Options -MultiViews +FollowSymLinks AllowOverride All Order allow,deny Allow from all </Directory> UPD. I'm using Apache/2.2.3 (Linux/SUSE). I tried this version of .htaccess: SetEnvIf Request_URI ^/$ rootdir=1 Allow from env=rootdir Satisfy Any AuthType Kerberos AuthName "Domain login" KrbAuthRealms DOMAIN.COM KrbMethodK5Passwd on Krb5KeyTab /etc/httpd/httpd.keytab require valid-user Unfortunately that config affected the Kerberos authentication for all URLs, not just the root. I tried placing the first three lines (SetEnvIf Request_URI ^/$ rootdir=1, Allow from env=rootdir, Satisfy Any) after the main block, but that didn't help either.
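    One approach sometimes used for "everything except one path" on Apache 2.2 is to leave the .htaccess as it is and open up just "/" from the vhost, since <Location> sections are merged after per-directory config; a hedged, untested sketch:

      # In the vhost: let requests for exactly "/" through without Kerberos
      <LocationMatch "^/$">
          Order allow,deny
          Allow from all
          Satisfy Any
      </LocationMatch>

    Because it lives in the vhost rather than .htaccess, it is evaluated after the per-directory auth directives, which is what lets Satisfy Any take effect only for the root request.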

    Read the article

  • Why does my Xcode default font start to look ugly after some time, until I restart?

    - by mystify
    I plugged in an external monitor. All resolutions match perfectly. The MacBook Pro LCD is closed. After restarting Xcode, the editor fonts look very bad. Only in Xcode. When I restart the Mac and DON'T use an external monitor, fonts look all right again. When I attach the monitor and close the LCD of the MacBook Pro, fonts look nice. Then I close Xcode and reopen it: fonts suck. The only way to get fonts to look good is to disconnect the external monitor and reboot, then reconnect the external monitor, close the LCD, wait, hit any key and let the external monitor be the only one. Fonts look nice - until I restart Xcode. I think it happens any time Xcode is launched with the external monitor attached, and the ugly fonts survive until reboot. Unplugging the external monitor and restarting Xcode doesn't help. It seems like Xcode isn't antialiasing them properly after something happens. Is there a fix for this problem? EDIT: After trying a few more times, it seems it is possible to get fonts to look nice by disconnecting the external monitor and reopening Xcode. Comparing snapshots of the good font and the ugly font: you can see how dirty the ugly font looks. It's very hard to read and hurts the eyes. Believe me. It sucks. Sometimes the little "i" is almost invisible. I make use of the very eye-friendly Dusk style of Xcode (go to Preferences and choose that, if you haven't already - a real pleasure for your eyes).

    Read the article

  • Software to automate website screenshot capture

    - by Leniel Macaferi
    Do you know of any software that can automate the process of taking screenshots of every page of a website? It would act like a spider/crawler/robot, you name it... For example: I developed a website and now I'd like to get a screenshot of every page of the site. I could of course do it manually (a lot of work). For each module of the site (Student, Payment, etc.) I have different pages/forms (Create, Edit, Details, Delete, etc.). The thing I'm looking for is software that can visit every link of the site and then capture the screen - software that can automate the whole process. It would also be good if it allowed the user to pass in a list of URLs to capture, allowing even more fine-grained control. EDIT: I tried Selenium, mentioned by Aaron in his answer, but I managed to find an app that does exactly what I needed. It's called Paparazzi!. I wrote a blog post to showcase my attempt at Selenium and the findings regarding Paparazzi!'s batch capture functionality: Software to automate website screenshot capture
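    For the "pass in a list of URLs" case, one scripted route (not the Paparazzi! one the poster settled on) is to loop a headless renderer such as wkhtmltoimage over a plain text file; a hedged sketch, with urls.txt as a hypothetical one-URL-per-line input:

      n=0
      while read -r url; do
        n=$((n+1))
        # Render each page to a numbered PNG; wkhtmltoimage uses a headless WebKit engine.
        wkhtmltoimage "$url" "shot_$n.png"
      done < urls.txt

    Crawling the site to build urls.txt in the first place is a separate step (a sitemap or a simple spider would do).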

    Read the article

  • Standalone WLST for both WebLogic 8.1 and 9.2?

    - by imiric
    Hi, I'm writing a simple script to facilitate changing JDBC connection URLs in several WL environments, including both v8.1 and v9.2. I want to create a standalone script, outside of any WL installation, just including wlst.jar/jython.jar/weblogic.jar, that will work on both WL 8.1 and 9.2 (obviously by referencing different MBeans). Now, this works OK for WL 8.1. I copy weblogic.jar from the server, and have managed to get hold of both wlst.jar and jython.jar (which wasn't easy, since Oracle doesn't host them anymore). Also I need to make sure to locally run under the same JRE as the server (WL 8.1 runs on Java 1.4.2). But if I try to connect to WL 9.2 from this setup, I get a NullPointerException when trying to access any MBean (probably because I'm running on JRE 1.4.2 and WL 9.2 uses 1.5.0). Also, I am unable to create a standalone environment for WL 9.2. If I copy weblogic.jar from 9.2 and run WLST like so: java -cp "wlst.jar:jython.jar:weblogic-92.jar" weblogic.WLST I get a java.lang.NoClassDefFoundError: weblogic/management/configuration/RepositoryMBean error. I can't find this class in weblogic92/server/lib, but it IS inside weblogic.jar from WL 8.1. So I'm really losing my patience here... Is there any way to create a standalone WLST client that can connect to any version of WebLogic (8.1 and 9.2 for now)? I really wouldn't want to have to ssh into the WL environment to run my WLST script... Any ideas/suggestions are welcome. Thanks, Ivan
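    For the 9.2 side specifically, WLST ships inside that release's weblogic.jar, so (as far as I can tell) the standalone wlst.jar/jython.jar pair is only needed for 8.1; a hedged sketch of the two invocations a wrapper script could switch between (the script name is illustrative):

      # WebLogic 8.1: standalone WLST jars plus the 8.1 weblogic.jar, run under the 1.4.2 JRE
      java -cp wlst.jar:jython.jar:weblogic-81.jar weblogic.WLST change_jdbc_url.py

      # WebLogic 9.2: WLST is bundled in weblogic.jar itself, run under Java 5
      java -cp /path/to/weblogic92/server/lib/weblogic.jar weblogic.WLST change_jdbc_url.py

    Running each against its matching JRE also sidesteps the 1.4.2-vs-1.5.0 NullPointerException mentioned above.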

    Read the article

  • Disable the use of Internet Explorer through policies when called from HTML Help

    - by Stephane
    Hello, I have a locked-down environment where users are prohibited from doing, well, basically anything but running the specific programs we specify. We just switched a program from the venerable WinHelp format to HTML Help (CHM), but that seems to have an unwanted and rather dangerous side effect: when a user clicks on a hyperlink inside the HTML help, a new Internet Explorer window is opened and the user is free to browse and do terrible things to my server (well, not that much, but still...). I have checked the session in this case and the IE window is actually hosted within the help engine: there is no iexplore.exe process running in the user session (and there cannot be: it's explicitly prohibited). We have disabled all help for now, until we find a solution. I'm working with the help team to have all external URLs removed from the help file, but that is going to be a long and error-prone task. Meanwhile, I've checked all the group policy options, but I have to say I was unable to find anything that would prevent a standalone IE window hosted in a random process from running. I don't want to disable WinHTTP or the IE rendering engine or anything of the sort. But I need to prevent all members of a specific AD user group from ever having an IE window displayed to them. The servers are running Windows 2003 and Citrix MetaFrame 4.5. Thanks in advance

    Read the article

  • Wildcard DNS entry to match lang subdomain

    - by Adam Benayoun
    Hey, we have a website www.example.com pointing to x.x.x.1 and a system of multiple minisites, all on subdomains of example.com, pointing to x.x.x.2. Basically what we have in place is a wildcard DNS entry that matches any possible subdomain; once a request reaches x.x.x.2, the Apache vhost intercepts it and hands it to a PHP script, which in turn knows which minisite to serve. On www.example.com, however, we serve content translated into several languages; until a few weeks ago you could switch languages by clicking on a flag and you'd be served the translated content. The only problem is that the URL wouldn't change, and SEO-wise this isn't the best solution. Now, I cannot change the way subdomains are handled (being sent to x.x.x.2), since we have hundreds, if not thousands, of minisites live. I have to come up with a solution to have language.example.com pointing to x.x.x.1, and then a rewrite rule that turns the fake subdomain into a URL parameter so the language is passed to example.com. One solution is to list all possible languages as DNS entries right next to the wildcard DNS entry. The other solution, which I am almost sure is not feasible, is to have some kind of regex in a DNS entry matching all two-letter subdomains (en|es|fr|cn|cl, etc.). Any ideas?
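    The explicit-records route works because an exact DNS name always beats the wildcard, and regex records simply don't exist in standard DNS. A hedged sketch of the zone entries plus the Apache side that would turn the subdomain into a parameter (the language list and parameter name are illustrative):

      ; zone file, relative to the example.com zone: explicit language records win over the wildcard
      en   IN A x.x.x.1
      es   IN A x.x.x.1
      fr   IN A x.x.x.1
      *    IN A x.x.x.2

      # vhost on x.x.x.1: map en.example.com/path to /path?lang=en internally
      RewriteCond %{HTTP_HOST} ^([a-z]{2})\.example\.com$ [NC]
      RewriteRule ^/?(.*)$ /$1?lang=%1 [QSA,L]

    The list of two-letter records has to be maintained by hand, but it stays short and nothing about the existing *.example.com minisite handling changes.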

    Read the article

  • Using %v in Apache LogFormat definition matches ServerName instead of specific vhost requested

    - by Graeme Donaldson
    We have an application which uses a DNS wildcard, i.e. *.app.example.com. We're using Apache 2.2 on Ubuntu Hardy. The relevant parts of the Apache config are as follows. In /etc/apache2/httpd.conf: LogFormat "%v %h %l %u %t \"%r\" %>s %b \"%{Referer}i\" \"%{User-Agent}i\"" vlog In /etc/apache2/sites-enabled/app.example.com: ServerName app.example.com ServerAlias *.app.example.com ... CustomLog "|/usr/sbin/vlogger -s access.log /var/log/apache2/vlogger" vlog Clients access this application using their own URL, e.g. company1.app.example.com, company2.app.example.com, etc. Previously, the %v in the LogFormat directive would match the hostname of the client request, and we'd get several subdirectories under /var/log/apache2/vlogger corresponding to the various client URLs in use. Now, %v appears to be matching the ServerName value, so we only get one log under /var/log/apache2/vlogger/app.example.com. This breaks our logfile analysis because the log file has no indication of which client the log relates to. I can fix this easily by changing the LogFormat to this: LogFormat "%{Host}i %h %l %u %t \"%r\" %>s %b \"%{Referer}i\" \"%{User-Agent}i\"" vlog This will use the HTTP Host: header to tell vlogger which subdirectory to create the logs in and everything will be fine. The only concern I have is that this has worked in the past and I can't find any indication that this has changed recently. Is anyone else using a similar config, i.e. wildcard + vlogger and using %v? Is it working fine?
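    For completeness, the Apache docs describe %v as the canonical ServerName of the vhost that served the request, while %V follows the UseCanonicalName setting, so with UseCanonicalName Off this variant should behave like the Host-header format (a sketch, not tested against the vlogger setup above):

      LogFormat "%V %h %l %u %t \"%r\" %>s %b \"%{Referer}i\" \"%{User-Agent}i\"" vlog

    Using %{Host}i as in the post is equally valid; the practical difference is that %V is the parsed hostname (no port), whereas the raw Host header is whatever the client sent.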

    Read the article

  • Home Server: cpu virtualisation, what to choose?

    - by Huygens
    I'm looking for virtualisation solutions for storage and OS for a home server. A sort of private cloud where I manage the storage space independently of the VM one. This question focuses on VM (or compute instance) management and what would best suit my needs. (I have another question related to the storage management.) My use cases are: A backup server: rsync and other services running. A personal cloud server: a kind of self-hosted Dropbox system, à la ownCloud, with a few users foreseen. A media server: streaming videos and displaying photos. Here are my environment and wishes: Server: HP ProLiant MicroServer with 8 GB RAM (AMD Turion dual core with AMD-V technology). OS types: only Linux (perhaps a *BSD VM in the future). Linux distributions do not matter; I'm familiar with RHEL, Fedora, SUSE and Ubuntu, but any other recommendation will be fine. 2-3 VMs foreseen: backup server, ownCloud server and media server (optional). Those are only servers, so no graphical console is needed (I don't need VirtualBox). By VM I mean a virtualised environment like KVM, Xen, etc., or a compute instance like with OpenStack. Storage should be "virtualised/cloudified" - see my other question. A VM should be able to be migrated to another server in the future if its performance can no longer be met by the current server. It does not matter if installation of such a setup is complicated, as long as the management tools allow for easy maintenance. I don't have Windows at home, so the solution should be Linux-friendly, and it would be nice if it were web-based (but native apps are OK too). The system should be easy to enhance by adding a new server and migrating some of the VMs to it. So it's really a kind of private cloud on which I could run some Linux OS. I would prefer free (libre, as in free speech) and open source tools, but it does not have to be free as in free beer. So Xen, KVM, VirtualBox or OpenStack? What would you recommend?
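    Whichever stack wins, KVM only works if the Turion's AMD-V extension is actually exposed, so a quick hedged first check on the MicroServer is:

      # A non-zero count means the CPU advertises hardware virtualisation (svm = AMD-V, vmx = Intel VT-x).
      egrep -c '(svm|vmx)' /proc/cpuinfo

      # On distributions that package cpu-checker, this also verifies that /dev/kvm is usable:
      kvm-ok

    If AMD-V turns out to be disabled in the BIOS, Xen can still run Linux guests paravirtualised, but KVM (and most OpenStack setups, which default to it) cannot.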

    Read the article

  • IIS 8 URL Redirect on site level

    - by jackncoke
    I am trying to do a simple 301 permanent redirect to another URL in IIS 8. The end result would be that if I navigated to domain2.com I would end up on domain1.com. We are moving from IIS 6 to a new server and have approximately 600+ sites that will be configured on this IIS 8 box. All of these sites run a proprietary CMS and look at the same directory for source code. In IIS 6 I would just go to the Home Directory tab of each site, check the box that says "Permanent Redirect" and provide a URL. In IIS 8 there is "HTTP Redirect", and this looks like it would do the trick, but it is being applied to all the sites in IIS 8, not at the site level like it used to be in IIS 6. I also looked into the URL Rewrite module for IIS 8, but it seems to take rules in the style of a firewall and I am not sure I could effectively create rules that would cater to 600+ sites. I am looking for the easiest way to have redirects at the site level, so that customers with multiple domains can have their sites redirect to their main domain for SEO purposes. I feel like this was so easily achieved in IIS 6 that I must be overlooking something in the new version.
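    Since the 600+ sites share one physical directory, a web.config dropped into that folder would affect all of them; the per-site equivalent in IIS 7+ is to scope the httpRedirect section to the individual site in applicationHost.config instead. A hedged sketch, with the site name as an illustrative assumption:

      <!-- applicationHost.config: redirect only the "domain2.com" site, leaving the shared web root alone -->
      <location path="domain2.com">
        <system.webServer>
          <httpRedirect enabled="true"
                        destination="http://domain1.com"
                        httpResponseStatus="Permanent" />
        </system.webServer>
      </location>

    The same result can be committed from the command line with appcmd using /commit:apphost, so the setting lands in applicationHost.config rather than in the shared web.config, which is what made it appear to apply to every site.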

    Read the article

  • Adding a GET parameter to URL causes 404 error

    - by Adrian Grigore
    I'm trying to install the SyntaxHighlighter Evolved plugin on my WordPress blog. I've uploaded and activated the plugin, but it did not work. I looked into the page source code and found that the plugin style is loaded from the following URL: http://devermind.com/wp-content/plugins/syntaxhighlighter/syntaxhighlighter/styles/shCore.css?ver=2.0.320 This causes a 404 error (page not found). The strange thing, though, is that when I remove the GET parameters, the CSS loads fine: http://devermind.com/wp-content/plugins/syntaxhighlighter/syntaxhighlighter/styles/shCore.css What could be causing this problem and how can I fix it? Unfortunately I don't know how to make WordPress drop the GET parameters when loading the stylesheet. EDIT: As I just found out, this happens only in Firefox (3.0.11). IE loads both URLs above just fine. Not that this would be of any help though, so any suggestions would be appreciated. SECOND EDIT: I tried this on my laptop and it works fine with Firefox 3.08. So this really seems to be a browser problem after all.

    Read the article

  • Why do lighttpd and fastcgi keep sending me the *.scgi file instead of the website content?

    - by e-satis
    I have the following config: server.modules = ( "mod_compress", "mod_access", "mod_alias", "mod_rewrite", "mod_redirect", "mod_secdownload", "mod_h264_streaming", "mod_flv_streaming", "mod_accesslog", "mod_auth", "mod_status", "mod_expire", "mod_fastcgi" ) [...] fastcgi.server = ( ".php" => (( "bin-path" => "/usr/bin/php-cgi", "socket" => "/var/tmp/lighttpd/php-fastcgi.socket" + var.PID, "max-procs" => 1, "kill-signal" => 9, "idle-timeout" => 10, "bin-environment" => ( "PHP_FCGI_CHILDREN" => "200", "PHP_FCGI_MAX_REQUESTS" => "1000" ), "/pyapps/essai/blondes.fcgi" => ( "main" => ( "socket" => "/var/tmp/lighttpd/django-fastcgi.socket", ), ), "bin-copy-environment" => ( "PATH", "SHELL", "USER" ), "broken-scriptfilename" => "enable" ))) [...] $HTTP["host"] =~ "(^|www\.)cam\.com(\:[0-9]*)?$" { server.document-root = "/home/cam/web/" accesslog.filename = "/home/cam/log/access.log" server.errorlog = "/home/cam/log/error.log" server.follow-symlink = "enable" # files to check for if .../ is requested server.indexfiles = ( "index.php", "index.html", "index.htm", "index.rb") url.rewrite = ( "^(/blondes/.*)$" => "/pyapps/essai/blondes.fcgi$1" ) } I have the following dir tree: /home/tv/web/ `-- pyapps `-- essai `-- __init__.py `-- blondes.fcgi `-- blondes.pid `-- django-fcgi.py `-- manage.py `-- manage.pyo `-- plop `-- settings.py `-- urls.py There are no errors when restarting lighttpd. I then run: ./manage.py runfcgi method=prefork socket=/var/tmp/lighttpd/django-fastcgi.socket daemonize=false pidfile=blondes.pid No errors there either. I then go to http://cam.com/blondes/. It offers me an empty file to download. I checked permissions, but everything is set to the same user and group, and they work for the PHP site. The file /var/tmp/lighttpd/django-fastcgi.socket exists. When I reload the page, I get no output in the error logs, nor from the manage.py runfcgi command. I probably missed something obvious, but what?
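    For comparison, the Django handler is normally declared as its own key inside fastcgi.server rather than nested inside the ".php" entry, with check-local disabled so lighttpd proxies the request instead of looking for the .fcgi path on disk; a hedged sketch using the paths from the post:

      fastcgi.server = (
          ".php" => ((
              "bin-path" => "/usr/bin/php-cgi",
              "socket"   => "/var/tmp/lighttpd/php-fastcgi.socket",
          )),
          "/pyapps/essai/blondes.fcgi" => ((
              "socket"      => "/var/tmp/lighttpd/django-fastcgi.socket",
              "check-local" => "disable",
          )),
      )

    With the handler nested where it is now, lighttpd never matches it, falls back to treating the url.rewrite target as a plain static file, and hands the browser the .fcgi file itself, which would explain the download prompt.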

    Read the article

  • Nginx Tries to download file when rewriting non-existent url

    - by Vince Kronlein
    All requests for a non-existent file should be rewritten to index.php?name=$1. All other requests should be processed as normal. With this server block, the server is trying to download all non-existent URLs: server { server_name www.domain.com; rewrite ^(.*) http://domain.com$1 permanent; } server { listen 80; server_name domain.com; client_max_body_size 500M; index index.php index.html index.htm; root /home/username/public_html; location ~ /\.ht { deny all; } location ~ \.php$ { try_files $uri = 404; fastcgi_split_path_info ^(.+\.php)(/.+)$; include fastcgi_params; fastcgi_index index.php; fastcgi_param SCRIPT_FILENAME $document_root$fastcgi_script_name; fastcgi_pass 127.0.0.1:9002; } location ~* ^.+\.(ogg|ogv|svg|svgz|eot|otf|woff|mp4|ttf|rss|atom|jpg|jpeg|gif|png|ico|zip|tgz|gz|rar|bz2|doc|xls|exe|ppt|tar|mid|midi|wav|bmp|rtf)$ { access_log off; log_not_found off; expires max; } location /plg { } location / { if (!-f $request_filename){ rewrite ^(.*)$ /index.php?name=$1 break; } } } I've checked that my default_type is text/html rather than application/octet-stream; not sure what the deal is.
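    As an aside, the same fallback is often written with try_files instead of the if/rewrite pair, which keeps the request inside nginx's normal location processing so the \.php$ block, not the static-file path, decides how /index.php is served; a minimal sketch:

      location / {
          # Serve the file or directory if it exists, otherwise hand the path to index.php.
          try_files $uri $uri/ /index.php?name=$uri&$args;
      }

    The "break" flag in the current rewrite is one thing to look at: it stops rewriting without re-running location matching, so the rewritten /index.php may never reach the \.php$ location and instead falls through as a file download.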

    Read the article

  • SSL wildcard certificates and trailing 'www'

    - by user173326
    I've got a wildcard SSL certificate for *.mydomain.com. I'm using nginx, redirecting all traffic from http to https, and also rewriting URLs to drop the leading www (if there is one). So it handles: 1) http://subdomain.mydomain.com ---> https://subdomain.mydomain.com 2) http://www.subdomain.mydomain.com ---> https://subdomain.mydomain.com 3) https://www.subdomain.mydomain.com ---> https://subdomain.mydomain.com 4) https://subdomain.mydomain.com ---> https://subdomain.mydomain.com However, since my cert is for *.mydomain.com, case 3 gets an SSL error in Chrome ("This is probably not the site that you are looking for!"), but if you click through, it gets redirected and all is well. I understand why, since the initial connection is for https with a www (two levels of subdomains), which doesn't match what is on the wildcard certificate. I thought a solution would be to get an additional cert for *.*.mydomain.com to cover www.*.mydomain.com. But it seems that won't work. I spoke to agents from Namecheap and Comodo, and both said *.*.mydomain.com was not possible. I also came across this: https://support.quovadisglobal.com/KB/a60/will-ssl-work-with-multilevel-wildcards.aspx Is there a solution to this? To be able to cover www.*.mydomain.com?

    Read the article

  • WordPress permalinks not working, everything seems fine

    - by javipas
    I have a WordPress blog I've migrated from another CMS, and I've been having a lot of problems with my permalink structure: lots of articles give a 404, although they are there, somewhere, published. The site is www.muycomputerpro.com (MCP for short), and for example an article that should be found is: http://muycomputerpro.com/Actualidad/Especiales/2009-las-grandes-crecen-en-la-bolsa If I do a search with the search tool at MCP, the result is there (see EnlacesMCP-1.jpg), but when I click on the link, our 404 error page appears (see EnlacesMCP-2.jpg). The weird thing is, the article is published, and the permalink is the right one, as you can see on this screenshot of the WordPress CMS: the permalink (below the title) is correct (http://www.muycomputerpro.com/Actualidad/Especiales/2009-las-grandes-crecen-en-la-bolsa/) but it does not work. In fact, if I try to use the short link (http://www.muycomputerpro.com/?p=5023) the article does not show either. I've accessed my WordPress DB and searched for the article to see if there is something wrong there, but from what I can tell all the fields are OK; here's a screenshot. I really don't know what is causing this. The permalink structure should work (I'm using the "Custom permalink" plugin to preserve the old URLs that had an alphanumeric code at the end of the post name), and the permalink config in WordPress is "/%postname%/". I really need help :(
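    One cheap thing to rule out (a sketch of the stock rules, not a diagnosis) is that the .htaccess in the site root still contains the standard block WordPress generates for /%postname%/ permalinks, and that mod_rewrite and AllowOverride are enabled for that directory:

      # BEGIN WordPress
      <IfModule mod_rewrite.c>
      RewriteEngine On
      RewriteBase /
      RewriteRule ^index\.php$ - [L]
      RewriteCond %{REQUEST_FILENAME} !-f
      RewriteCond %{REQUEST_FILENAME} !-d
      RewriteRule . /index.php [L]
      </IfModule>
      # END WordPress

    That said, the fact that the ?p=5023 short link also fails points past .htaccess, towards how the migrated posts (or the custom-permalink metadata) ended up in the database.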

    Read the article
