Search Results

Search found 27295 results on 1092 pages for 'cross site'.


  • How to tunnel a local port onto a remote server

    - by Trevor Rudolph
    I have a domain that I bought from DynDNS. I pointed the domain at my IP address so I can run servers. The problem I have is that I don't live near the server computer... Can I use an SSH tunnel? As I understand it, this will give me access to my servers. I want the remote computer to direct traffic from port 8080 over the SSH tunnel to the SSH client, i.e. my laptop's port 80. Is this possible?

    EDIT: verbose output of the tunnel:

        macbookpro:~ trevor$ ssh -R *:8080:localhost:80 -N trevor@site.com -v
        OpenSSH_5.2p1, OpenSSL 0.9.8r 8 Feb 2011
        debug1: Reading configuration data /Users/trevor/.ssh/config
        debug1: Reading configuration data /etc/ssh_config
        debug1: Connecting to site.com [remote ip address] port 22.
        debug1: Connection established.
        debug1: identity file /Users/trevor/.ssh/identity type -1
        debug1: identity file /Users/trevor/.ssh/id_rsa type -1
        debug1: identity file /Users/trevor/.ssh/id_dsa type 2
        debug1: Remote protocol version 2.0, remote software version OpenSSH_5.9p1 Debian-5ubuntu1
        debug1: match: OpenSSH_5.9p1 Debian-5ubuntu1 pat OpenSSH*
        debug1: Enabling compatibility mode for protocol 2.0
        debug1: Local version string SSH-2.0-OpenSSH_5.2
        debug1: SSH2_MSG_KEXINIT sent
        debug1: SSH2_MSG_KEXINIT received
        debug1: kex: server->client aes128-ctr hmac-md5 none
        debug1: kex: client->server aes128-ctr hmac-md5 none
        debug1: SSH2_MSG_KEX_DH_GEX_REQUEST(1024<1024<8192) sent
        debug1: expecting SSH2_MSG_KEX_DH_GEX_GROUP
        debug1: SSH2_MSG_KEX_DH_GEX_INIT sent
        debug1: expecting SSH2_MSG_KEX_DH_GEX_REPLY
        debug1: Host 'site.com' is known and matches the RSA host key.
        debug1: Found key in /Users/trevor/.ssh/known_hosts:9
        debug1: ssh_rsa_verify: signature correct
        debug1: SSH2_MSG_NEWKEYS sent
        debug1: expecting SSH2_MSG_NEWKEYS
        debug1: SSH2_MSG_NEWKEYS received
        debug1: SSH2_MSG_SERVICE_REQUEST sent
        debug1: SSH2_MSG_SERVICE_ACCEPT received
        debug1: Authentications that can continue: publickey,password
        debug1: Next authentication method: publickey
        debug1: Trying private key: /Users/trevor/.ssh/identity
        debug1: Trying private key: /Users/trevor/.ssh/id_rsa
        debug1: Offering public key: /Users/trevor/.ssh/id_dsa
        debug1: Authentications that can continue: publickey,password
        debug1: Next authentication method: password
        trevor@site.com's password:
        debug1: Authentication succeeded (password).
        debug1: Remote connections from *:8080 forwarded to local address localhost:80
        debug1: Requesting no-more-sessions@openssh.com
        debug1: Entering interactive session.
        debug1: remote forward success for: listen 8080, connect localhost:80
        debug1: All remote forwarding requests processed
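    A minimal sketch of the reverse tunnel described above. One catch worth noting: with -R, binding the remote port on all interfaces (the "*:" prefix) normally also requires GatewayPorts to be enabled in the remote server's sshd_config; by default sshd binds remote forwards to loopback only, so the forward "succeeds" in the log yet is not reachable from outside. That is an assumption about the server's configuration, not something visible in the output above.

        # On the remote server, in /etc/ssh/sshd_config (then restart sshd):
        #     GatewayPorts clientspecified
        #
        # On the laptop: -N = no remote command, -R = reverse forward.
        # Connections arriving at site.com:8080 are then carried over the
        # tunnel to the laptop's local port 80.
        ssh -N -R *:8080:localhost:80 trevor@site.com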

    Read the article

  • PHP/Linux File Permissions

    - by user1733435
    May I ask a question about file permissions? I set up an Ubuntu server with Apache running. I have a simple PHP upload form and am able to upload files to /var/www/site/uploads, as follows:

        sandbox@sandbox-virtual-machine:/var/www/site/uploads$ ll
        total 1736
        drwxrwxrwx 2 www-data www-data    4096 Oct 18 02:53 ./
        drwxrwxrwx 3 sandbox  sandbox     4096 Oct 18 00:42 ../
        -rw-r--r-- 1 www-data www-data  145998 Oct 18 02:53 3d wallpaper pic.jpg
        -rw-r--r-- 1 www-data www-data  166947 Oct 18 02:53 3D Wallpapers 9.jpg
        -rw-r--r-- 1 www-data www-data 1451489 Oct 18 02:53 6453_3d_landscape_hd_wallpapers_green.jpg

    Is there any way to upload files so that they show up as

        -rw-r--r-- 1 sandbox sandbox  145998 Oct 18 02:53 3d wallpaper pic.jpg
        -rw-r--r-- 1 sandbox sandbox  166947 Oct 18 02:53 3D Wallpapers 9.jpg
        -rw-r--r-- 1 sandbox sandbox 1451489 Oct 18 02:53 6453_3d_landscape_hd_wallpapers_green.jpg

    so that I can feed them straight to a waiting shell script? Right now the waiting script (move, checksums, rename, resize, etc.) is unable to do anything to uploaded files with the attributes of www-data. If I just create a file as the local account, such as

        sandbox@sandbox-virtual-machine:/var/www/site/uploads$ touch testfile

    then the script is able to run as I would like it to. Any suggestion would be appreciated; thanks in advance as well.

    Thanks to everyone who helped; I was able to make progress. Now I am close to getting it solved, and I append the output:

        sandbox@sandbox-virtual-machine:/var/www/site/uploads$ ll
        total 388
        drwxrwxrwx 2 www-data www-data   4096 Oct 18 04:22 ./
        drwxrwxrwx 3 sandbox  sandbox    4096 Oct 18 04:17 ../
        -rw-r--r-- 1 sandbox  sandbox  166947 Oct 18 04:21 3D Wallpapers 9.jpg
        -rw-r--r-- 1 sandbox  sandbox  219808 Oct 18 04:20 adafruit_pi.png
        -rw-rw-r-- 1 sandbox  sandbox       0 Oct 18 04:22 test

    How may I set the permissions of uploaded files to be like 'test', with the only difference being the group bit in the middle, as in adafruit_pi.png vs test? Which statement shall I insert into the PHP code, please?
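    Since Apache runs PHP as www-data and chown() may only be called by root, the owner of an uploaded file cannot simply be switched to sandbox from within the upload handler. A hedged sketch of the usual workaround instead: make the file group-writable, so a script run by an account in the www-data group (sandbox could be added to it) can move/rename/resize the file. The form field name and path below are placeholders, not the poster's actual code:

        <?php
        // Hypothetical upload handler ('file' and the path are assumptions).
        $dest = '/var/www/site/uploads/' . basename($_FILES['file']['name']);
        if (move_uploaded_file($_FILES['file']['tmp_name'], $dest)) {
            // 0664 = rw-rw-r--, matching the 'test' file above: the owner
            // stays www-data, but group members can now modify the file.
            chmod($dest, 0664);
        }
        ?>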

    Read the article

  • Need to link WP Blog with Rails App on Heroku

    - by John Glass
    I have a client who wants to migrate his Rails app to Heroku. However, the client also has a blog associated with his domain that runs on WordPress. Currently, the WordPress blog is running happily alongside the Rails app, but once we migrate to Heroku, that clearly won't be possible. The URL for the app is like http://mydomain.com, and the URL for the blog is like http://mydomain.com/blog. I realize that the best long-term solution is to redo the blog in a Rails format like Toto or Jekyll. But in the short term, what is the best way to continue hosting the WP blog where it is (or somewhere) but use Heroku to run the app? The client doesn't want the blog to be on a subdomain, but to remain at mydomain.com/blog for SEO reasons and also since there is traffic to the blog. I have two ideas:

    1. Use rack_rewrite or refraction (or just a regular old 301 and Apache mod_rewrite) on the old (non-Heroku) server to redirect the main URL from the old site to Heroku. In this case, I can just leave the WordPress blog running happily where it is. I think?? Is there a reason to choose one of those options (rack_rewrite, refraction, or mod_rewrite) over the others if I do it this way?

    2. Switch the DNS info to point to the Heroku site, and then use a 301 redirect from the blog to the old site. But then I'll have to get the old (non-Heroku) site on a subdomain and use some kind of rewrite rules anyway so it looks like it isn't a subdomain.

    Is either of these approaches preferable, or is there another, easier way to do it that I'm missing?
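    For completeness, a hedged sketch of a third variant of idea 1, assuming the old server runs Apache with mod_proxy enabled and with the Heroku app name below as a placeholder: leave DNS pointing at the old server, keep serving /blog from the local WordPress, and reverse-proxy everything else to Heroku.

        # Hypothetical vhost fragment on the old (non-Heroku) server.
        # The "!" line excludes /blog from proxying, so WordPress keeps
        # its mydomain.com/blog URLs; order matters (exclusions first).
        ProxyPass        /blog !
        ProxyPass        /     http://yourapp.herokuapp.com/
        ProxyPassReverse /     http://yourapp.herokuapp.com/

    The trade-off is that all app traffic still flows through the old box, which may defeat part of the point of moving to Heroku.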

    Read the article

  • Service Unavailable - App Pool disabled error...anyone know why?

    - by andy
    Hi guys,

    A small number of our sites intermittently experience an error that is a mystery to us. Our setup is:

    - Windows Server 2003
    - IIS6
    - .NET 3.5

    We have about 35 websites running on IIS. Each site has its own app pool, and each app pool runs under the identity Network Service. The symptom is: the application pool for a site stops, and when you browse to the site, you only see SERVICE UNAVAILABLE. In the event logs, we have one error, which looks like this:

        Application pool 'AppPool1' is being automatically disabled due to a series of failures in the process(es) serving that application pool.

    And we have several warnings leading up to the error, which look like this (note: the warnings look the same, except for the process id):

        A process serving application pool 'AppPool1' suffered a fatal communication error with the World Wide Web Publishing Service. The process id was '292'. The data field contains the error number.

    What's causing this? As I said, it doesn't happen very often... the last time was about 6 months ago... and it usually happens with the same site/app pool. Any ideas? Cheers!
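    The stopping behaviour itself is IIS6 rapid-fail protection: by default, five worker-process crashes within five minutes disable the pool, which matches the run of warnings followed by a single error. A hedged sketch for inspecting and loosening those thresholds while hunting the underlying crash (run from %SystemDrive%\Inetpub\AdminScripts; the pool name is taken from the event log above):

        cscript adsutil.vbs GET W3SVC/AppPools/AppPool1/RapidFailProtection
        cscript adsutil.vbs SET W3SVC/AppPools/AppPool1/RapidFailProtectionMaxCrashes 10
        cscript adsutil.vbs SET W3SVC/AppPools/AppPool1/RapidFailProtectionInterval 5

    Loosening the thresholds only masks the symptom; the warnings mean w3wp.exe is dying, so a crash-dump tool such as DebugDiag pointed at the pool is the way to find out why.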

    Read the article

  • moving plone or recovering data

    - by Atilla Filiz
    I had a simple Plone site running on my Ubuntu 8.10 box, set up by following the instructions at https://help.ubuntu.com/community/forum/server/Plone. Now the computer has a from-scratch Ubuntu 9.04 install, and I want to get the site up and running. As far as I understand, several Python libraries from 9.04 are not compatible with Plone. I tried installing plone3-site from the Ubuntu repos, and I got this:

        $ bin/instance start
        Traceback (most recent call last):
          File "bin/instance", line 103, in ?
            import plone.recipe.zope2instance.ctl
          File "/media/Robocup2009/robocup2009/eggs/plone.recipe.zope2instance-2.7-py2.4.egg/plone/recipe/zope2instance/__init__.py", line 16, in ?
            import zc.buildout
          File "/media/Robocup2009/robocup2009/eggs/zc.recipe.egg-1.1.0-py2.4.egg/zc/__init__.py", line 1, in ?
            __import__('pkg_resources').declare_namespace(__name__)
        ImportError: No module named pkg_resources

    I have python-pkg-resources installed. I actually don't care about Plone; I just found it easy to use. My site is just a photo and video archive for several users, with low traffic but several gigabytes of data. What would you suggest? Is it easier to get Plone working on my box and start my instance (how?), or to recover the content from the instance folder (how?) and set up another content management system (which?)
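    On the ImportError: the traceback shows the buildout running under Python 2.4 (note the py2.4 egg paths), while Ubuntu 9.04's python-pkg-resources package installs into the default Python 2.6, so the instance never sees it. A hedged sketch of giving the 2.4 interpreter its own setuptools before retrying bin/instance start (the ez_setup.py URL was the usual bootstrap route at the time and is an assumption here):

        sudo apt-get install python2.4
        wget http://peak.telecommunity.com/dist/ez_setup.py
        sudo python2.4 ez_setup.py   # installs setuptools (and pkg_resources) for 2.4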

    Read the article

  • Backup and Archive Strategy Question

    - by OneNerd
    I am having trouble finding a backup strategy for our code assets that "just works" without any manual intervention. The goal is to have an off-site, synchronized backup, so that when we check in files, create builds, etc. on the network drive, the entire folder structure is automatically synchronized and backed up (in real time, or once per day) at some off-site location; if our office blows up, we don't lose all of our data. I have looked into some online backup services, but have not yet had any success. Some are quirky/buggy, others limit file size and/or kinds of files (which doesn't work well for developer files). Everything gets checked in and saved to a single server (on a RAID mirror), so we just need to have a folder on that server backed up/synchronized to some off-site location. So my question is this: what are you using for your off-site backup strategy? What software, system, or service? Is there a be-all/end-all system for backing up code assets that I just haven't found yet? Thanks.
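    Given the "single server, single folder" shape described above, even a plain cron-plus-rsync mirror covers the requirement. A minimal sketch, assuming SSH access to some off-site machine; the host and paths are placeholders:

        # /etc/crontab entry: mirror the checked-in tree off-site at 02:00 daily.
        # -a preserves the folder structure, -z compresses, --delete keeps the
        # mirror exact. No file-size or file-type limits, unlike some services.
        0 2 * * * root rsync -az --delete /srv/code-assets/ backup@offsite.example.com:/backups/code-assets/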

    Read the article

  • Web hosting for multiple web sites providing system isolation

    - by Justin
    We have a small number of projects where we expect the client will not be maintaining the installed versions of the applications we install to power the site (such as Drupal). Given that an important part of security is keeping things updated, we don't want to host these projects on our Plesk-powered dedicated servers that currently host lots of our other clients' websites. Our goal is to find a host where we can deploy isolated instances (be these slices, virtual servers, grid servers, etc.) for each individual web site (or group of 2-3 sites) as we need them. These instances would be completely separate, so that if one web site were hacked it would not impact any other site.

    Typical hosting requirements:

    - Linux
    - Apache
    - PHP 5
    - MySQL
    - Supports Drupal
    - Ability to set up a cron task (but we don't need SSH access)
    - Daily backups
    - Virtualized/cloud hosting (we want to avoid shared)
    - Pricing per site around $25/month
    - OS is patched automatically

    Some options we have considered that won't work:

    - MediaTemple: Two major data-center-wide security incidents and recent downtime foster doubt about this host's technical ability.
    - Slicehost: This would require us to manage the entire server, which we don't want to do.
    - Rackspace Cloud Sites (formerly Mosso): No backup options.

    Do you have any recommended hosting options given these requirements?

    Read the article

  • adding trac to apache2 configuration file

    - by Michael
    I currently have apache2 running from a mythtv/mythweb install. This made two config files available in sites-enabled. One of them ("default-mythbuntu") has the VirtualHost directive and seems like a normal file (except for a change to the directory index). There is also a mythweb.conf file that only has directives and sets various variables for mythweb. I want to host a Trac site as well. According to this site: http://trac.edgewall.org/wiki/TracOnUbuntu there are some settings I need to set for the Trac site. It gives directions for making a VirtualHost, but I think I should use the current VirtualHost and just add the directives (I'll need to change the default location they point to from the site above to just point to the Trac location). Where should I put these directives? Can I make a Trac.conf with just the settings for Trac and enable it, or do I need to put them in the default-mythbuntu file? I don't like the latter because it doesn't separate out the Trac configs. How does Apache know that the mythweb config (and the Trac.conf I want to make) belongs to the VirtualHost defined in default-mythbuntu? It is the only VirtualHost being defined on my system, if that matters.
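    On the "how does Apache know" question: a file in sites-enabled (or conf.d) that contains no <VirtualHost> section of its own, exactly like mythweb.conf, is merged into the server-wide configuration, and server-wide directives apply inside every vhost, so the single mythbuntu vhost picks them up. A separate Trac.conf therefore works. A hedged sketch using the mod_python directives from the TracOnUbuntu page, with the Trac environment path as a placeholder:

        # Hypothetical /etc/apache2/sites-enabled/trac.conf (no <VirtualHost>,
        # so it merges into the default-mythbuntu vhost like mythweb.conf does).
        <Location /trac>
            SetHandler mod_python
            PythonInterpreter main_interpreter
            PythonHandler trac.web.modpython_frontend
            PythonOption TracEnv /var/lib/trac/myproject
            PythonOption TracUriRoot /trac
        </Location>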

    Read the article

  • IIS7 binding to subdomain causing authentication errors (TFS 2010)

    - by Tommy Jakobsen
    I'm trying to bind an IIS web site (Team Foundation Services 2010) to a subdomain, which is causing authentication errors. First I'll explain what I've done to set it up. This is the first time I've done this, so please correct me if I'm wrong. The web server is a stand-alone Windows Server 2008 R2 x64, running IIS7 with .NET Framework 4. I have the following A-records pointing to my server:

        server.mydomain.com
        *.server.mydomain.com

    So all subdomains of server.mydomain.com point to the server. In IIS7 I have a web site (TFS 2010) on port 8080, with a virtual directory (named tfs) that is using Windows Authentication. I have one binding on the web site, pointing to all unassigned IP addresses, port 8080, with a host name of tfs.server.mydomain.com. Now, shouldn't I be able to access the virtual directory through:

        http://tfs.server.mydomain.com/tfs

    That is not working. However, I can access it through:

        http://tfs.server.mydomain.com:8080/tfs

    But it won't let me authenticate using a Windows account (Server\Username), even though that same Windows account works when accessing the site through http://localhost:8080/tfs. What am I missing here?
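    One thing stands out: the only binding listed is on port 8080, so http://tfs.server.mydomain.com/tfs arrives on port 80, where nothing answers. A hedged sketch adding a second binding for the same host name on port 80 (the site name is an assumption; adjust to the actual IIS site):

        %windir%\system32\inetsrv\appcmd set site /site.name:"Team Foundation Server" /+bindings.[protocol='http',bindingInformation='*:80:tfs.server.mydomain.com']

    Separately, if the Windows Authentication failures occur when testing from the server itself, the loopback check that rejects non-localhost host names is a likely suspect (the BackConnectionHostNames registry workaround); that would not explain failures from other machines, though.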

    Read the article

  • What would cause an IIS6 website to be unavailable remotely randomly for a few minutes at a time?

    - by jskunkle
    The website is served by IIS6 on Windows Server 2003. We never saw this problem once in months of beta. We made the new site live yesterday; it's getting more traffic than in beta, but not that much, and resource utilization on the server and speed are fine. Today the site has been unavailable remotely a few (4?) times, for a few minutes at a time. If you visit any page on the site, nothing is ever returned and eventually the request times out. While this is happening, I can connect to the server via remote desktop, and the site loads fine from the live URL when running a browser on the server locally. Other websites on the server continue to function fine the entire time (using the same instance of IIS, different app pools). Other computers on the same network can't access the website either. Other than not serving content, the server seems to behave normally; scheduled jobs in our custom job system continue to run, etc. We've looked at the IIS logs quickly and we don't see any traffic out of the ordinary - no traffic spikes, etc. Any ideas? Thanks, Shane

    Read the article

  • IIS7 - Web Deployment Tool - SetParam/SetParamFile to set http and https bindings + Cert

    - by Andras Zoltan
    Hi, we're currently using the MS Web Deployment Tool to sync a live website and some web services from a staging box to two live servers. The staging box hosts the site on any IP on port 17000, whereas the two live servers are load-balanced and have a different IP for each of them. At present, I generate two separate packages for deployment - one for each machine - using the sync operation and specifying a DestinationBinding parameter as follows (split across multiple lines to make it easier to read!):

        msdeploy -verb:sync -source:WebServer,computerName=localhost
            -dest:package="machinename.zip"
            -setParam:type="DestinationBinding",scope="SiteName",value="ip_address:port:"

    I run this twice, with a different target filename and IP address for each of the two machines. When it comes to deployment, I simply do a sync from each package to its respective live site. I know, I know - I should be able to do it by generating one parameterised package and then perhaps using the SetParamFile switch for each of the two servers - believe me I'd like to, but the documentation on doing this is frankly non-existent. Now I need to configure and deploy both HTTP and HTTPS bindings for this site, including the SSL cert that is to be used. I've added an SSL binding for the site on the staging box, which uses a development cert (which will need to be replaced - or should the staging box be using the live cert?), and now the above command line has the effect of replacing the target IP on both the http and https entries. It appears that I cannot specify multiple bindings plus the cert information in the DestinationBinding value in the -setParam above, so does anyone know how I would go about doing this? Any help greatly appreciated.
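    On the single-parameterised-package route: the pieces are -declareParam when building the package and -setParamFile when deploying it. A hedged sketch, with parameter, file, and server names as placeholders (the cert half of the question is left open here; lines are split for readability as above):

        rem 1. Build one package, declaring the binding as a parameter:
        msdeploy -verb:sync -source:WebServer,computerName=localhost
            -dest:package="site.zip"
            -declareParam:name="SiteBinding",kind="DestinationBinding",scope="SiteName",defaultValue="*:17000:"

        rem 2. One answer file per live server, e.g. machine1.xml:
        rem    <parameters>
        rem      <setParameter name="SiteBinding" value="first_live_ip:80:" />
        rem    </parameters>

        rem 3. Deploy the same package to each server with its own answers:
        msdeploy -verb:sync -source:package="site.zip"
            -dest:auto,computerName=LiveServer1
            -setParamFile:machine1.xml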

    Read the article

  • Problem posting multipart form data using Apache with mod_proxy to a mongrel instance

    - by Ryan E
    I am attempting to simulate my site's production environment as closely as I can on my local machine. This is a Rails site that uses Apache with mod_proxy to forward requests to a Mongrel cluster. On my Mac OS X Leopard machine, I have the default install of Apache running and have configured a vhost to use mod_proxy to forward requests to a local running Mongrel instance on port 3000:

        <Proxy balancer://mongrel_cluster-development>
            BalancerMember http://127.0.0.1:3000
        </Proxy>

    For the most part, this is working fine. I can browse my development site using the ServerName of the vhost I configured and can confirm that requests are being properly forwarded to the Mongrel instance. However, there is a page on the site with a multipart form that is used to upload an image to the server. When I post this form, there is a delay of about 5 minutes and the browser ultimately returns "Bad Request: Your browser sent a request that this server could not understand." In the error log for my vhost:

        [Tue Sep 22 09:47:57 2009] [error] (70007)The timeout specified has expired: proxy: prefetch request body failed to 127.0.0.1:3000 (127.0.0.1) from ::1 ()

    This same form works fine if I browse directly to the Mongrel instance (http://127.0.0.1:3000). Anybody have any idea what the problem might be and how to fix it? If there is any important information that I neglected to include, post a comment and I can add to this question. Note: upon further investigation, this appears to be a problem specific to Safari. The form works fine in Firefox.
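    The "prefetch request body failed" line suggests mod_proxy is timing out while buffering the multipart body before forwarding it; Safari is known to pace request bodies differently from Firefox, which fits the browser-specific behaviour. A hedged sketch of two vhost-level knobs to try (the timeout value is arbitrary):

        # Ask mod_proxy_http to forward with a Content-Length instead of
        # prefetching/chunking the request body:
        SetEnv proxy-sendcl 1
        # And give slow uploads more headroom than the default timeout:
        ProxyTimeout 300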

    Read the article

  • Sharepoint/WSS Reporting Services Integration woes

    - by mhollers
    After a number of failed attempts I seem to have successfully installed the Reporting Services add-in to my WSS farm. However, I seem to be missing most of the enhanced functionality, e.g. there is no report library template and no Report Center site template; the only additional functionality available is the Report Viewer web part.

    Background: a 2-server WSS 3.0 farm, with CA (Central Administration), the WFE (web front end), and the Reporting Services add-in installed on one, and SQL Server 2005 SP2 with Reporting Services (RS) and all databases installed on the other. I have a VM environment set up and have rolled this back and repeated the process a number of times. I have configured RS within CA and activated the 'Report Server Integration Feature'. Within 'site settings' I have a 'Reporting Services' heading with a 'manage shared schedules' item/link; I'm not sure if there should be other options. I was of the understanding that to view reports within SharePoint I could either create a new site using the 'Report Center' template or add a report library to an existing site, neither of which seems available. I am at a loss as to what to do, as all online information seems to deal with installation issues/errors, which I seem to have eventually got past.

    Read the article

  • SCCM 2012 Clients no longer detecting

    - by user3685428
    Here is the scenario: I had a fully functioning SCCM 2012 site server with the DP, MP, SUP, Application Catalog, etc. roles configured and working. There is only one server on this site. Everything was great, but I was not happy with SUP, so I decided to create a separate WSUS server and configure Windows Updates through GPOs. That setup worked great as well, so I went ahead and removed the SUP role from SCCM and removed the WSUS feature from my SCCM server (they had been configured on the same SCCM server). I did not notice any problems right away.

    A couple days later I noticed that the OSD deployments were giving errors, and after a couple hours of trying suggestions from Google, I was able to uninstall PXE, make a few changes, and reinstall with WDS to get it working again. Again, I thought everything was fine and continued on. The last couple days I have noticed that any new machine that is deployed or has the client installed will show "No" under Client in the SCCM console. The client machines will show as connected to a site, but the Software Center shows "IT Organization" instead of our site like the previous clients. The existing clients all seem to be functioning normally; they still receive application distributions, configuration baselines, etc. Reinstalling, repairing, or uninstalling and reinstalling the client does not fix the problem, and this happens on all new clients. ClientLocation.log shows it connecting to the correct MP. Nothing is odd in any of the logs except for ClientMessaging.log, which repeats this line continuously:

        <![LOG[Raising event:
        instance of CCM_CcmHttp_Status
        {
            ClientID = "GUID:0450fde3-ab82-41bf-9c33-87a18113744b";
            DateTime = "20140528214824.993000+000";
            HostName = "SOUNDWAVE.domain.org";
            HRESULT = "0x00000000";
            ProcessID = 4092;
            StatusCode = 0;
            ThreadID = 3720;
        };
        ]LOG]!><time="16:48:24.994+300" date="05-28-2014" component="CcmMessaging" context="" type="1" thread="3720" file="event.cpp:706">

    Thanks.

    Read the article

  • How do you test your porn filter

    - by Zoredache
    For testing antivirus we have EICAR, and for spam we have GTUBE. Is there a standard site that is (or should be) included in blacklists that you can use for testing, instead of going to your favorite porn site in front of your boss, the CEO, or someone else who feels that seeing such a site is grounds for a sexual harassment suit?

    Update: This is less about getting permission for me to test, though that answer is useful. I do have both the permission and the responsibility to make sure the filter is running, and I am able to test that the filter is functioning with netcat. Instead, I am hoping there is a standard domain name, blocked by most/all filters, that exists for testing. I need to be able to share this with my boss and users. I need to be able to demonstrate what happens when someone goes to a filtered page, and I need a way to quickly prove to others that the filter is working without asking them to go to some site that would cause grief if for some reason the filter were not working. If there isn't already a good domain for this purpose, I may simply have to register a domain myself and then add it to all the filters I am responsible for.

    Read the article

  • Permissions Required for Sharepoint Backups

    - by Wyatt Barnett
    We are in the process of rolling out an extranet for some of our partners using WSS 3.0 as the platform. We already use it internally for a variety of things, and we are using the following PowerShell script to back up the server:

        param(
            $url = "http://localhost",
            $backupFolder = "c:\"
        )
        [System.Reflection.Assembly]::LoadWithPartialName("Microsoft.SharePoint")
        $site = new-Object Microsoft.SharePoint.SPSite($url)
        $names = $site.WebApplication.Sites.Names
        foreach ($name in $names) {
            $n2 = ""
            if ($name.Length -eq 0) {
                $n2 = "ROOT"
            } else {
                $n2 = $name
            }
            $tmp = $n2.Replace("/", "_") + ".sbk"
            $saveas = ""
            if ($backupFolder.Length -eq 0) {
                $saveas = $tmp
            } else {
                $saveas = join-path -path $backupFolder -childPath $tmp
            }
            $site.WebApplication.Sites.Backup($name, $saveas, "true")
            write-host "$n2 backed up to $saveas."
        }

    This script works perfectly on the current installation, running as our domain backup user. On the new box, it fails when run as the backup user, claiming that "The web application located at http://extranet/" could not be found. That URL does, in fact, work, so I'm fairly certain it isn't anything that dumb and rather is some permissions issue - especially because, when executed from my security context, the script works perfectly. I have tried making the backup user a farm owner, as well as adding him to the various site collection admin groups on the extranet. The one major difference between the extranet and the intranet server is that the extranet has an alternate access mapping (for https://xnet.example.com) and also uses forms authentication for that mapping. Anyhow, what permissions (or other voodoo) do I need to set up to get this script to work properly?

    Read the article

  • ASP.NET Web API returns 404 for PUT only on some servers

    - by Greg Bacchus
    OK, I have been racking my brain and the internet for a solution to this, and I just can't figure it out. I have written a site that uses ASP.NET MVC Web API, and everything worked nicely until I put it on the staging server. The site works fine on my local machine and the dev web server. Both the dev and staging servers are Windows Server 2008 R2.

    The problem is this: basically the site works, but there are some API calls that use the HTTP PUT method. These fail on staging, returning a 404, but work fine elsewhere. The first problem that I came across and fixed was in Request Filtering, but I am still getting the 404. I have turned on tracing in IIS and get the following:

        168. -MODULE_SET_RESPONSE_ERROR_STATUS
             ModuleName     IIS Web Core
             Notification   16 (MAP_REQUEST_HANDLER)
             HttpStatus     404
             HttpReason     Not Found
             HttpSubStatus  0
             ErrorCode      2147942402 (0x80070002) - The system cannot find the file specified.
             ConfigExceptionInfo

    The configs are the same on dev and staging; as a matter of fact, the whole site is a direct copy. Why would the GETs and POSTs work, but not the PUTs? Thanks, Greg
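    A common cause of PUT (and DELETE) returning 404 on some IIS 7.5 servers but not others is WebDAV: where the WebDAV module and handler are installed, they claim those verbs before they reach the Web API handler, which fits a MAP_REQUEST_HANDLER failure. A hedged web.config sketch that switches them off for just this site:

        <!-- Sketch: remove WebDAV for this site so PUT/DELETE reach Web API. -->
        <system.webServer>
          <modules>
            <remove name="WebDAVModule" />
          </modules>
          <handlers>
            <remove name="WebDAV" />
          </handlers>
        </system.webServer>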

    Read the article

  • How do I make Nginx redirect all requests for files which do not exist to a single php file?

    - by Richard
    I have the following nginx vhost config:

        server {
            listen 80 default_server;
            access_log /path/to/site/dir/logs/access.log;
            error_log /path/to/site/dir/logs/error.log;
            root /path/to/site/dir/webroot;
            index index.php index.html;
            try_files $uri /index.php;

            location ~ \.php$ {
                if (!-f $request_filename) {
                    return 404;
                }
                fastcgi_pass localhost:9000;
                fastcgi_param SCRIPT_FILENAME /path/to/site/dir/webroot$fastcgi_script_name;
                include /path/to/nginx/conf/fastcgi_params;
            }
        }

    I want to redirect all requests that don't match existing files to index.php. This works fine for most URIs at the moment, for example:

        example.com/asd
        example.com/asd/123/1.txt

    Neither asd nor asd/123/1.txt exists, so they get redirected to index.php, and that works fine. However, if I put in the URL example.com/asd.php, it tries to look for asd.php, and when it can't find it, it returns 404 instead of sending the request to index.php. Is there a way to get asd.php also sent to index.php if asd.php doesn't exist?
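    One hedged way to do it: drop the if/return 404 pair and use try_files inside the PHP location as well, so a missing asd.php falls through to index.php like every other URI does:

        location ~ \.php$ {
            try_files $uri /index.php;
            fastcgi_pass localhost:9000;
            fastcgi_param SCRIPT_FILENAME /path/to/site/dir/webroot$fastcgi_script_name;
            include /path/to/nginx/conf/fastcgi_params;
        }

    When /asd.php does not exist, the internal redirect to /index.php re-enters this same location and is handed to FastCGI with $fastcgi_script_name set to /index.php.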

    Read the article

  • Getting a "403 access denied" error instead of serving file (using django, gunicorn nginx)

    - by Finglish
    Getting a "403 access denied" error instead of serving file (using django, gunicorn nginx) I am attempting to use nginx to serve private files from django. For X-Access-Redirect settings I followed the following guide http://www.chicagodjango.com/blog/permission-based-file-serving/ Here is my site config file (/etc/nginx/site-available/sitename): server { listen 80; listen 443 default_server ssl; server_name localhost; client_max_body_size 50M; ssl_certificate /home/user/site.crt; ssl_certificate_key /home/user/site.key; access_log /home/user/nginx/access.log; error_log /home/user/nginx/error.log; location / { access_log /home/user/gunicorn/access.log; error_log /home/user/gunicorn/error.log; alias /path_to/app; proxy_set_header X-Forwarded-For $proxy_add_x_forwarded_for; proxy_set_header Host $http_host; proxy_redirect off; proxy_set_header X-Real-IP $remote_addr; proxy_set_header X-Scheme $scheme; proxy_pass http://127.0.0.1:8000; proxy_connect_timeout 100s; proxy_send_timeout 100s; proxy_read_timeout 100s; } location /protected/ { internal; alias /home/user/protected; } } I then tried using the following in my django view to test the download: response = HttpResponse() response['Content-Type'] = "application/zip" response['X-Accel-Redirect'] = '/protected/test.zip' return response but instead of the file download I get: 403 Forbidden nginx/1.1.19 Please note: I have removed all the personal data from the the config file, so if there are any obvious mistakes not related to my error that is probably why. My nginx error log gives me the following: 2012/09/18 13:44:36 [error] 23705#0: *44 directory index of "/home/user/protected/" is forbidden, client: 80.221.147.225, server: localhost, request: "GET /icbdazzled/tmpdir/ HTTP/1.1", host: "www.icb.fi"

    Read the article

  • Cisco ASA user authentication options - OpenID, public RSA sig, others?

    - by Ryan
    My organization has a Cisco ASA 5510 which I have made act as a firewall/gateway for one of our offices. Most resources a remote user would come looking for exist inside. I've implemented the usual deal - basic inside networks with outbound NAT, one primary outside interface with some secondary public IPs in the PAT pool for public-facing services, a couple site-to-site IPSec links to other branches, etc. - and I'm working now on VPN. I have the WebVPN (clientless SSL VPN) working and even traversing the site-to-site links. At the moment I'm leaving a legacy OpenVPN AS in place for thick client VPN. What I would like to do is standardize on an authentication method for all VPN then switch to the Cisco's IPSec thick VPN server.

    I'm trying to figure out what's really possible for authentication for these VPN users (thick client and clientless). My organization uses Google Apps and we already use dotnetopenauth to authenticate users for a couple internal services. I'd like to be able to do the same thing for thin and thick VPN. Alternatively a signature-based solution using RSA public keypairs (ssh-keygen type) would be useful to identify user@hardware. I'm trying to get away from legacy username/password auth especially if it's internal to the Cisco (just another password set to manage and for users to forget). I know I can map against an existing LDAP server but we have LDAP accounts created for only about 10% of the user base (mostly developers for Linux shell access).

    I guess what I'm looking for is a piece of middleware which appears to the Cisco as an LDAP server but will interface with the user's existing OpenID identity. Nothing I've seen in the Cisco suggests it can do this natively. But RSA public keys would be a runner-up, and much much better than standalone or even LDAP auth. What's really practical here?

    Read the article

  • Moving Zend Framework 2 from apache to nginx

    - by Aleksander
    I would like to move a site that uses Zend Framework 2 from Apache to nginx. The problem is that the site has 6 modules, and Apache handles them via aliases defined in httpd-vhosts.conf:

        # httpd-vhosts.conf
        <VirtualHost _default_:443>
            ServerName localhost:443
            Alias /develop/cpanel    "C:/webapps/develop/mil_catele_cp/public"
            Alias /develop/docs/tech "C:/webapps/develop/mil_catele_tech_docs/public"
            Alias /develop/docs      "C:/webapps/develop/mil_catele_docs/public"
            Alias /develop/auth      "C:/webapps/develop/mil_catele_auth/public"
            Alias /develop           "C:/webapps/develop/mil_web_dicom_viewer/public"
            DocumentRoot "C:/webapps/mil_catele_homepage"
        </VirtualHost>

    In httpd.conf, DocumentRoot is set to C:/webapps. Sites are available at, for example, localhost/develop/cpanel; the framework handles further routing. In nginx I was able to make only one site available, by specifying

        root C:/webapps/develop/mil_catele_tech_docs/public;

    in the server block. It works only because the docs module doesn't depend on auth like the others do, and the site was at localhost/. In my next attempt:

        root C:/webapps;
        location /develop/auth {
            root C:/webapps/develop/mil_catele_auth/public;
            try_files $uri $uri/ /develop/mil_catele_auth/public/index.php$is_args$args;
        }

    Now, as I enter localhost/develop/cpanel, it gets to the correct index.php but can't find any resources (css/js files). I have no idea why the reference paths in the browser's GET requests changed to https://localhost/css/bootstrap.css from https://localhost/develop/auth/css/bootstrap.css as it was on Apache. This root directive seems not to be working. nginx handles PHP via FastCGI:

        location ~ \.(php|phtml)?$ {
            fastcgi_pass 127.0.0.1:9000;
            fastcgi_index index.php;
            fastcgi_param APPLICATION_ENV production;
            fastcgi_param SCRIPT_FILENAME $document_root$fastcgi_script_name;
            include fastcgi_params;
        }

    I googled the whole day and found nothing useful. Can someone help me make this configuration work like it did on Apache?
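    A hedged per-module sketch of what each Apache Alias line translates to in nginx: an alias (not root) inside a prefix location, since alias strips the URI prefix the way Alias did while root appends the full URI, plus a module-local PHP block so SCRIPT_FILENAME points into that module's public/ tree. Repeat per module, with the longest prefixes (e.g. /develop/docs/tech) declared before the shorter ones:

        location /develop/auth/ {
            alias C:/webapps/develop/mil_catele_auth/public/;
            try_files $uri $uri/ /develop/auth/index.php$is_args$args;

            location ~ \.php$ {
                fastcgi_pass 127.0.0.1:9000;
                fastcgi_param APPLICATION_ENV production;
                fastcgi_param SCRIPT_FILENAME C:/webapps/develop/mil_catele_auth/public/index.php;
                include fastcgi_params;
            }
        }

    Whether ZF2 then generates correct /develop/auth/... asset URLs also depends on its base-path detection, which this sketch does not address.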

    Read the article

  • What is easiest no fail way to publish asp.net app?

    - by Maestro1024
    Sorry, a bit of an open-ended question, but I am having issues deploying an ASP.NET report project, and any solution that gets the site up is fine. I am running Win7/SQL 2008 and want to publish an ASP.NET report site that I created in VS 2008. The website launches when I run it in debug in Visual Studio, but I want to publish the site so that it can be seen on the LAN. I published the files off to a folder, started up the IIS manager, added a new site, and pointed it at that folder. I set the permissions on the folder to share to everyone. However, when I go to the DNS name I put in for the website, it does not launch. Any ideas on this? I see websites out there talking about a "web sharing" tab in the folder properties, but I do not see that when I go to folders. Why might that be? Another avenue I have not pursued yet is publishing directly to a website. Has anyone tried that? Is that better or worse than publishing to the filesystem?

    Read the article

  • Remote Desktop Connection issues

    - by stead1984
    I have a server at a remote site; the sites are connected to each other by a site-to-site VPN connection using Cisco ASA 5510 firewalls. One end is managed by me, the other by the remote location's IT, and between the two of us is another party who manages and routes the connections. Remote Desktop had been working fine with no problems; then recently I noticed it had stopped working for ONE server over the VPN, which it previously had done. All the routes seem fine, and I can still ping the remote server and even download files from an FTP site on the remote server... so the VPN seems fine. Remote Desktop to the remote server works fine from within the remote location, but not over the VPN. I don't understand why it's stopped working. I originally thought it was a rule put in place by the other party, but they stress it's not them. The only thing that has changed on the server initiating the RDP connection is that it now runs file services, sharing a folder. The source server (remote location) may or may not have had updates applied. Any ideas?

    Read the article

  • Network Load Balancing and AnyCast Routing

    - by user126917
    Hi all, can anyone advise on problems with the following? I am planning on installing the following setup across my estate. I have 2 sites that both have a large number of users. The goals are to keep things simple for the users and to have automatic failover above the database level.

    Our database will live at the primary site and be asynchronously mirrored to the secondary site, with manual failover procedures. The database generates sequential IDs, so distributing it is not an option. I plan to site IIS boxes at both sites, with all of the business logic and heavy operations on them. The connections to SQL will be lightweight, and DB reads will be cached on IIS.

    On this layer I plan to use Windows Network Load Balancing and have the same IP (or IPs) across all IIS boxes at both sites. This way there will be automatic failover and no single point of failure, and users can have one web address and automatically be network-load-balanced to their local IIS, regardless of which site they are in. This is great, but obviously our two sites are on different subnets, and as this will be one IP address carrying most of our traffic, we can't go broadcasting everything across the link between the sites. To solve this problem we plan to use anycast routing at the network layer to route the traffic to the most local box that is listening, which will be defined by the network load balancing.

    Has anyone used this setup before? Can anyone think of any issues with it? Also, some specifics I can't find anywhere at the moment: if my Windows box is assigned an IP and listening on that IP, but network load balancing is not accepting specific traffic, will anycast route away from that box? And can I anycast on a socket level?

    Read the article
