Search Results

Search found 2853 results on 115 pages for 'amazon cloudfront'.

Page 45/115 | < Previous Page | 41 42 43 44 45 46 47 48 49 50 51 52  | Next Page >

  • Redirect all subdomains to subfolders

    - by alfonso
    I'd like to add a rule so that all subdomains get redirected to a subfolder. For example: app1.example.com -> example.com/app1, app2.example.com -> example.com/app2, something.example.com -> example.com/something. All subdomains will only be one level deep. Questions: Which DNS providers allow me to do this? Are these alternatives feasible? Redirect them all to a special webapp with a static IP that redirects to the proper subfolder - how can I know which subdomain they came from? Programmatically create each rule when I need it - which DNS providers have API access to add rules? I think Amazon Route 53 might be the answer here.
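
    DNS alone can't map a hostname to a path, so the usual pattern is a wildcard record pointing every subdomain at one web server, which then issues a 301 redirect based on the Host header. A minimal Route 53 sketch using the AWS CLI, with a hypothetical hosted zone ID (Z1EXAMPLE) and server IP (203.0.113.10):

        # Point *.example.com at a single redirector host; that host reads the
        # Host header (e.g. app1.example.com) and redirects to example.com/app1.
        aws route53 change-resource-record-sets \
          --hosted-zone-id Z1EXAMPLE \
          --change-batch '{"Changes": [{"Action": "UPSERT", "ResourceRecordSet": {
              "Name": "*.example.com.", "Type": "A", "TTL": 300,
              "ResourceRecords": [{"Value": "203.0.113.10"}]}}]}'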

    Read the article

  • Backing Up User Data when data is not in use. Should I be concerned?

    - by jberryman
    This may be a dumb question. I would like to use duplicity to make backups to Amazon S3 of directories, each of which contains a different user's data. Each directory could be written to at any time. So I have two questions: Should I be concerned that a scheduled backup of a directory might occur in the middle of data being written to files in the directory, resulting in a corrupted backup? And if that is a valid concern, how would I go about temporarily delaying an operation while IO is happening, to try to minimize that effect? Thanks for the advice.
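
    One hedged way to avoid catching files mid-write is to back up a point-in-time snapshot instead of the live directory. A rough sketch, assuming the data sits on an LVM volume (the volume group, snapshot size and bucket name below are made up):

        # Snapshot, back up the frozen copy with duplicity, then discard it.
        lvcreate --size 1G --snapshot --name userdata_snap /dev/vg0/userdata
        mkdir -p /mnt/userdata_snap
        mount -o ro /dev/vg0/userdata_snap /mnt/userdata_snap
        duplicity /mnt/userdata_snap s3+http://my-backup-bucket/userdata
        umount /mnt/userdata_snap
        lvremove -f /dev/vg0/userdata_snap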

    Read the article

  • Suspicious activity in access logs - someone trying to find phpmyadmin dir - should I worry?

    - by undefined
    I was looking over the access logs for a server that we are running on Amazon Web Services. I noticed that someone was obviously trying to find the phpMyAdmin directory - they (or a bot) were trying different paths, e.g. admin/phpmyadmin/, db_admin, ... and the list goes on. Actually there isn't a database on this server, so this was not a problem - they were never going to find it - but should I be worried about such snooping? Is this just a really basic attempt at getting into our system? Our database is actually held on another managed server, which I assume is protected from such intrusions. What are your views on such sneaky activity?
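
    For a quick sense of how much of this is going on, the access log itself answers most of it. A small sketch - the log path is an assumption, adjust for your distro and vhost layout:

        # Count probe requests for common admin-panel paths, grouped by client IP.
        grep -Ei 'phpmyadmin|db_admin|/admin/' /var/log/httpd/access_log \
          | awk '{print $1}' | sort | uniq -c | sort -rn | head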

    Read the article

  • intermittent SSH with ssh_exchange_identification error

    - by rafamvc
    My SSH connection to my server works for around 10 minutes out of every 30. Things I have figured might be the problem: The server is under load (it is a database server), but in those spare moments when I can connect it is still under the same load, which doesn't make sense. The server runs Ubuntu, and consolekit was using a lot of virtual memory; I restarted consolekit and it seems to be using a reasonable amount of memory now. It is not hosts.allow or hosts.deny - those are set up properly. It is not a firewall problem - those settings were working, and the same settings are working for other similar machines. It is on EC2, the Amazon cloud.
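
    ssh_exchange_identification errors that come and go are often sshd itself refusing new connections (for example when MaxStartups is exceeded) rather than a network fault. A few hedged checks, assuming stock OpenSSH paths on Ubuntu and with user@server as a placeholder:

        ssh -vvv user@server                    # client-side view of where the handshake dies
        sudo grep -E 'MaxStartups|MaxSessions' /etc/ssh/sshd_config
        sudo tail -f /var/log/auth.log          # watch sshd's side while reproducing the failure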

    Read the article

  • SQL – What is the latest Version of NuoDB? – A Quick Contest to Get Amazon Gift Cards

    - by Pinal Dave
    We had a great contest earlier last week - What ACID stands in the Database? – Contest to Win 24 Amazon Gift Cards and Joes 2 Pros 2012 Kit. It received quite a few responses. Just like any other contest, not everyone was a winner. The kind folks at NuoDB decided to give another chance to everyone who did not win the last contest. This means if you missed taking part in the earlier contest, or if you took part and did not win, you still have one more chance to win an Amazon Gift Card. Here is the quick contest: you just have to go and download NuoDB. The first 10 people who download NuoDB will get USD 10 cards. Everyone else will be entered into a lucky draw for Amazon Gift Cards of USD 50. Winners will be announced in the next 24 hours. Bonus Round: If you have entered the contest above, you can also enter to win the latest Beginning SSRS Joes 2 Pros book. You just have to leave a comment here about your experience with NuoDB and what the latest version of the product is. Here are a few of the blog posts I wrote earlier on that subject: Part 1 – Install NuoDB in 90 Seconds Part 2 – Manage NuoDB Installation Part 3 – Explore NuoDB Database Part 4 – Migrate from SQL Server to NuoDB Part 5 - NuoDB and Third Party Explorer – SQuirreL SQL Client, SQL Workbench/J and DbVisualizer Reference: Pinal Dave (http://blog.sqlauthority.com) Filed under: PostADay, SQL, SQL Authority, SQL Query, SQL Server, SQL Tips and Tricks, T SQL

    Read the article

  • How do I install and use the cli53 tools on Windows?

    - by pavlos
    I'm trying to find the simplest way to import a large number of BIND zone files into Route 53. I've had a quick look at the AWS CLI and AWS Tools for Windows PowerShell, but they don't seem to include a zone file import option like the AWS Route 53 GUI does. The cli53 utility, on the other hand, does, but it is written in Python and appears to have a series of prerequisites which I'm having trouble working out for Windows. I can find plenty of examples of setting it up under Linux, but only one reference to a PowerShell example here, and it doesn't explain how to install cli53 in the first place. The other option I'm exploring is to use the BIND to Amazon Route 53 Conversion Tool perl script to first convert the zone files to the Route 53 CreateHostedZoneRequest XML format and then use the AWS New-R53HostedZone PowerShell cmdlet to import the zones. After the zones have been imported I'll be looking at running a script to validate that what has been created in Route 53 matches the existing nameserver, prior to updating each domain's nameserver records - I was planning on whipping something up using the new PS 4.0 Resolve-DnsName cmdlet, but let me know if you have any better suggestions. Any assistance would be greatly appreciated - thanks. (By the way, I had more reference links in my post but ServerFault won't allow me to post more than 2 links as a new member; for the same reason I also can't comment on Vasili's example in the other linked thread.)
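
    For what it's worth, the Python cli53 is itself installable through pip, which also runs on Windows, so the prerequisites mostly reduce to a Python + pip install. A rough outline only - the exact import flags vary between cli53 versions, so check cli53 --help before relying on the last line:

        pip install cli53                       # pulls in boto and the cli53 script
        set AWS_ACCESS_KEY_ID=AKIA...           # cmd.exe syntax; use $env:AWS_ACCESS_KEY_ID in PowerShell
        set AWS_SECRET_ACCESS_KEY=...
        cli53 import example.com --file example.com.zone   # load one BIND zone file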

    Read the article

  • TeamCity EC2 Integration via ISA Server

    - by Tim Long
    I have a TeamCity server which is actually installed on SBS 2003 Premium with ISA Server (firewall/proxy) installed. My ADSL connection has multiple IP addresses, which all resolve directly to my SBS external NIC. The NIC is therefore multi-homed and I have allocated one of the IP addresses specifically to TeamCity. In ISA, I've created an access rule to allow the traffic in. I can access my TeamCity server externally and view the web interface, that all works fine. I want to use the Amazon EC2 integration in TeamCity to launch build agents 'in the cloud'. The problem I am having is that when the agent starts, it sees the server and registers, then just sits there waiting. On the server side, the agent appears as 'disconnected'. Examining the settings, the agent's IP address appears to be that of the external NIC. What I think might be happening is that the traffic is undergoing Network Address Translation (NAT) so that TeamCity always thinks the agent is locally installed and therefore can't communicate with the actual remote agent. This seems to happen even though I have a permanent static IP address dedicated to TeamCity. So, the question is this. How can I make traffic to a specific IP address pass through the ISA server un-NATted?

    Read the article

  • How can I close a port that appears to be orphaned by Xvfb?

    - by Jim Fiorato
    I'm running Xvfb on a FC8 Amazon EC2 image. On occasion Xvfb will crash (unable at the moment to find out the reason for the crash), and after crashing the TCP port will appear to be orphaned. I'm unable to get a PID to kill any process that may be using it. I'm starting Xvfb with:

        Xvfb :7 -screen 0 1024x768x24 &

    Examples of what I'm working with are below; the Xvfb port is (was) 6007:

        # netstat -ap
        Active Internet connections (servers and established)
        Proto Recv-Q Send-Q Local Address                 Foreign Address               State        PID/Program name
        tcp        0      0 *:ssh                         *:*                           LISTEN       1894/sshd
        tcp        0      0 *:6007                        *:*                           LISTEN       -
        tcp        0    352 ip-10-84-69-165.ec2.int:ssh   c-71-194-253-238.hsd1:51689   ESTABLISHED  2981/0
        udp        0      0 *:bootpc                      *:*                                        1817/dhclient
        udp        0      0 *:bootpc                      *:*                                        1463/dhclient
        Active UNIX domain sockets (servers and established)
        Proto RefCnt Flags      Type    State      I-Node  PID/Program name     Path
        unix  2      [ ]        DGRAM              871     668/udevd            @/org/kernel/udev/udevd
        unix  2      [ ACC ]    STREAM  LISTENING  5385    1880/dbus-daemon     /var/run/dbus/system_bus_socket
        unix  6      [ ]        DGRAM              5353    1867/rsyslogd        /dev/log
        unix  2      [ ]        DGRAM              11861   2981/0
        unix  2      [ ]        DGRAM              5461    1974/crond
        unix  2      [ ]        DGRAM              5451    1904/console-kit-da
        unix  3      [ ]        STREAM  CONNECTED  5438    1880/dbus-daemon     /var/run/dbus/system_bus_socket
        unix  3      [ ]        STREAM  CONNECTED  5437    1904/console-kit-da
        unix  3      [ ]        STREAM  CONNECTED  5396    1880/dbus-daemon
        unix  3      [ ]        STREAM  CONNECTED  5395    1880/dbus-daemon
        unix  2      [ ]        DGRAM              5361    1871/rklogd

        # lsof -i
        COMMAND   PID  USER FD  TYPE DEVICE SIZE NODE NAME
        dhclient  1463 root 3u  IPv4 4704        UDP  *:bootpc
        dhclient  1817 root 4u  IPv4 5173        UDP  *:bootpc
        sshd      1894 root 3u  IPv4 5414        TCP  *:ssh (LISTEN)
        sshd      2981 root 3u  IPv4 11825       TCP  ip-10-84-69-165.ec2.internal:ssh->c-71-194-253-238.hsd1.il.comcast.net:51689 (ESTABLISHED)

    Attempting to force the port closed with iptables doesn't seem to work either:

        iptables -A INPUT -p tcp --dport 6007 -j DROP

    I'm at a loss as to how to reclaim/free the port. From what I can tell, this port will remain in this state until the EC2 instance is shut down. So, how can I close this port so I can restart Xvfb?
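
    A hedged diagnostic sketch: a "-" in netstat's PID column often just means the owner couldn't be resolved, not that no owner exists, and a crashed X server also tends to leave a stale lock file for its display behind:

        fuser -v -n tcp 6007                    # show any PID actually bound to the port
        lsof -nP -iTCP:6007 -sTCP:LISTEN        # same question, asked a second way
        ls -l /tmp/.X7-lock /tmp/.X11-unix/X7   # stale lock/socket left over from display :7 crashing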

    Read the article

  • BitLocker with Windows DPAPI Encryption Key Management

    - by bigmac
    We have a need to enforce at-rest encryption on an iSCSI LUN that is accessible from within a Hyper-V virtual machine. We have implemented a working solution using BitLocker, using Windows Server 2012 on a Hyper-V virtual server which has iSCSI access to a LUN on our SAN. We were able to successfully do this by using the "floppy disk key storage" hack as defined in THIS POST. However, this method seems "hokey" to me. In my continued research, I found that the Amazon Corporate IT team published a WHITEPAPER that outlined exactly what I was looking for in a more elegant solution, without the "floppy disk hack". On page 7 of this white paper, they state that they implemented Windows DPAPI Encryption Key Management to securely manage their BitLocker keys. This is exactly what I am looking to do, but while they state that they had to write a script to do this, they don't provide the script or even any pointers on how to create one. Does anyone have details on how to create a "script in conjunction with a service and a key-store file protected by the server's machine account DPAPI key" (as they state in the whitepaper) to manage and auto-unlock BitLocker volumes? Any advice is appreciated.

    Read the article

  • Can't install NPM after installing Node on EC2 Linux instance?

    - by frequent
    I'm trying my first attempt at getting a Node server set up on an Amazon EC2 Linux instance, and I think I made it quite far. The first problem I ran into was that, when trying to make Node, the connection timed out after a while, so I needed three attempts until I got this:

        LINK(target) /home/ec2-user/node/out/Release/node: Finished
        touch /home/ec2-user/node/out/Release/obj.target/node_dtrace_header.stamp
        touch /home/ec2-user/node/out/Release/obj.target/node_dtrace_provider.stamp
        touch /home/ec2-user/node/out/Release/obj.target/node_dtrace_ustack.stamp
        touch /home/ec2-user/node/out/Release/obj.target/node_etw.stamp
        make[1]: Leaving directory `/home/ec2-user/node/out'
        ln -fs out/Release/node node

    which tells me "Node is done", although I'm not sure it is also working as it should. Following this, this and this tutorial, I'm now stuck at installing npm. I think I first cloned into the wrong folder, which always gave me error 127, but even if I'm doing this:

        cd ~
        git clone git://github.com/isaacs/npm.git
        cd npm
        sudo -s PATH=/usr/local/bin:$PATH make install

    I'm still getting this:

        #after cloning#
        make[1]: Entering directory `/root/npm'
        node cli.js install
        bash: node: command not found
        make[1]: *** [node_modules/.bin/ronn] Error 127
        make[1]: Leaving directory `/root/npm'
        make: *** [man/man3/start.3] Error 2

    Question: since I'm pretty much a newbie at everything I'm trying here, can someone please tell me what I'm doing wrong and how to get npm to install? Also, in case I cloned into the wrong folder, is there a way to remove the "false clone", or is this not written to disk until I call make install and I don't need to worry? Thanks for helping out!
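
    A hedged reading of the error: the "node: command not found" line means the shell running make as root can't see the freshly built node binary, so installing node system-wide before building npm is worth a try. A sketch, not a verified fix:

        cd ~/node
        sudo make install            # copies the built node into /usr/local/bin
        which node && node -v        # sanity check that node is now on the PATH
        cd ~/npm
        sudo -s PATH=/usr/local/bin:$PATH make install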

    Read the article

  • Apache doesn't immediately notice a change in the document root

    - by Tom
    We use Capistrano for website deployments and our Apache document root is a symlink to a particular code release. The deployment procedure switches the symlink from the old release to the new release as the final step of the deployment. We are migrating our webservers from real servers running RHEL 5.6 to Amazon EC2 virtual machines running Ubuntu 11.10, and the new servers are suffering from a problem where Apache doesn't immediately notice the change to its document root when the symlink is switched. It can take a second or so (and I think I've even seen it take a couple of minutes). It's kind of like Apache has cached the physical path of the symlink for some time. Does anyone know some Apache settings I could look at to get it to "scan" for changes to its served files more quickly? Thoughts: I read that the disks on virtual machines are much slower (since they are network attached storage). Perhaps the filesystem cache somehow works differently too? If so, is there anything that can be done? The website runs PHP code. Perhaps there are some PHP config differences between RHEL and Ubuntu? I checked realpath_cache_ttl but both servers have it commented out, e.g.:

        ; Duration of time, in seconds for which to cache realpath information for a given
        ; file or directory. For systems with rarely changing files, consider increasing this
        ; value.
        ; http://www.php.net/manual/en/ini.core.php#ini.realpath-cache-ttl
        ;realpath_cache_ttl = 120

    We do use the APC opcode cache but don't think it's the issue, due to experimentation: the PHP code is in different file paths for each deployment and we ensure stat=1. Here is a similar question that is very interesting: 294107 - but it doesn't provide an answer for me. One solution would be to reload Apache every time we modify the document root symlink. I'll do this if we can't find another solution.
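
    If a reload does turn out to be the answer, the usual low-impact variant is a graceful reload right after an atomic symlink swap. A sketch with hypothetical release paths:

        ln -s /var/www/releases/20120514 /var/www/current_tmp
        mv -Tf /var/www/current_tmp /var/www/current   # rename() is atomic, so no request sees a missing docroot
        sudo apache2ctl graceful                       # finish in-flight requests, then re-resolve paths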

    Read the article

  • Rails/Mongo across multiple different geo-regions

    - by wmarbut
    I have a system that by necessity requires physical presence in three or more different locations, and I need advice on structuring it in such a way that my database stays replicated in a timely manner without horrible latency. I've seen MySQL access and replication be incredibly slow when the application server was trying to talk to a node that wasn't physically collocated. In this case I am using MongoDB. The stack is Linux/Passenger/Ruby/Rails/MongoDB. The database is write heavy and read light. The infrastructure is Amazon EC2. The application layer must be physically located in 3 or more different locations; I can't justify this requirement further than that it is a requirement. The database, however, needn't be located in more than one location if it can be written to quickly from other locations. From reading Mongo's documentation, replication seems like more of a candidate than sharding because my datastore is not huge. However, I don't see anything that addresses the issue of speed for servers communicating across large distances with potentially high latency.
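
    If the data does end up replicated across regions, one common MongoDB pattern is a single primary with priority-0 secondaries in the remote locations, so replication crosses the WAN asynchronously but an election never moves the primary away from the writers. A sketch via the mongo shell - the set name and hostnames are hypothetical:

        mongo --host us-east-db.example.com --eval '
          rs.initiate({
            _id: "geo",
            members: [
              { _id: 0, host: "us-east-db.example.com:27017", priority: 2 },
              { _id: 1, host: "eu-west-db.example.com:27017", priority: 0 },
              { _id: 2, host: "ap-east-db.example.com:27017", priority: 0 }
            ]
          })'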

    Read the article

  • AWS Linux EC2: yum won't run with plugins

    - by Patrick
    Short version: yum commands on my Amazon Linux EC2 AMI only work with --noplugins.

    Long version: A couple of days ago, I ran yum update at the behest of the SSH login MOTD telling me I had updates to install. About midway through the update (specifically while updating the kernel), the update abruptly ended (79 of 138 items completed). The website I host on EC2 got weird for a few minutes, but eventually seemed to stabilize back out (maybe EC2 restarted itself?), and I didn't have further issues (other than MySQL starting to run out of memory, but I think that's probably unrelated to this). Today, I went to install gcc-c++ (with yum install gcc-c++). When I did, I got the following message:

        Loaded plugins: priorities, security, update-motd, upgrade-helper
        Config error: Command "updateinfo" already defined

    and I get that for any command I can think to run using yum. However, if I throw in the --noplugins flag, then magically it seems to work. To be clear, when I installed a different package a week ago, it worked totally correctly, so the yum update is the only thing I can think of that changed. I could find nothing on Google with regard to "updateinfo" already defined (with and without quotes). I tried running yum update --noplugins, which spat out a message telling me that I should have run yum-complete-transaction instead, but proceeded to try to update something on its own. When that completed, I tried yum-complete-transaction, but that gave me a message about the transactions not lining up correctly, so it removed the old transaction (probably since I should have completed the first transaction before trying to update again, if I had known). Based on the SF question "Linux EC2 Broken Yum", I've also tried yum clean all --noplugins (it fails the same way with plugins), which just gives me:

        Cleaning repos: amzn-main amzn-updates rpmforge
        Cleaning up everything

    I also tried package-cleanup --problems:

        Loaded plugins: priorities, update-motd, upgrade-helper
        No Problems Found

    and package-cleanup --dupes gives a lot of dupes, so I pasted them here: http://pastebin.com/VVFQEkTT instead of inline. At this point, I'm not sure what else there even is to check.
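
    A hedged guess at the cause: this error usually means two things are trying to register an "updateinfo" command with yum (typically an older security plugin alongside a yum that now provides the command itself), which fits an interrupted update. The package names below are assumptions - check what rpm actually reports before removing anything:

        rpm -qa | grep -i yum                      # look for yum-plugin-security / yum-security duplicates
        sudo yum remove --noplugins yum-plugin-security
        sudo yum check-update                      # plugins should now load without the config error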

    Read the article

  • MySQL Master-Master w/ multiple read slave cost effective setup in AWS

    - by Ross
    I've been evaluating Amazon Web Services RDS for MySQL and costing out potential scenarios involving a simple multi-AZ deployment read/write setup vs. a multi-AZ deployment MySQL master (hot standby) with additional read-only slaves. The issue I'm trying to cost-optimize involves their reserved instances vs. on-demand instances. Situation 1: purchase a reserved multi-AZ setup for an extra-large high-memory (17 GB RAM) instance for $5200/yr and have my application query the master all the time. The problem is, if I don't need all the resources of the 17 GB RAM all the time, and therefore especially not a hot standby, what alternatives for savings could a better topology create - like, potentially, situation 2 below? Situation 2: purchase a reserved multi-AZ setup using smaller master instances than above for the master-master hot standby to receive the writes only. Then create and load-balance several read-only slaves off the master, and add/remove and/or scale the read slaves up/down based on demand. This might only cost $1000 + the on-demand usage of the read slaves. My thinking is, if I have a variable read-intensive application load with a low write load, the single-level topology in situation 1 means I'm paying for a lot of resources at the write level of the topology when I don't need them there. My hope is that situation 2 can yield cost savings from smaller reserved instances at the master-master resource level, allowing me to scale up/down and/or out on the read level according to demand as needed. Does anyone see a downside to doing this, or know of some reason this isn't possible with RDS? Any other thoughts or advice always welcome of course. Thanks in advance, R
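
    The "add and remove read slaves on demand" half of situation 2 maps onto RDS read replicas, which can be created and deleted through the API without touching the reserved master. A sketch with hypothetical identifiers:

        aws rds create-db-instance-read-replica \
          --db-instance-identifier myapp-read-3 \
          --source-db-instance-identifier myapp-master \
          --db-instance-class db.m1.large
        # ...and retire it once the read burst is over:
        aws rds delete-db-instance \
          --db-instance-identifier myapp-read-3 \
          --skip-final-snapshot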

    Read the article

  • SSL Returning Blank Page, No Catalina Errors

    - by Mr.Peabody
    This is my second, maybe third, time configuring SSL with Tomcat. Earlier I had created a self-signed certificate, which worked, and now using my signed certificate is proving fruitless. I am using Tomcat, running on the Amazon Linux AMI. When using the signed cert/keystore, my server starts normally without errors. However, when trying to navigate to the domain it gives me an "ERR_SSL_VERSION_OR_CIPHER_MISMATCH" error. My server.xml file looks as follows:

        <Connector port="8443" maxHttpHeaderSize="8192"
                   maxThreads="150" minSpareThreads="25" maxSpareThreads="75"
                   enableLookups="false" disableUploadTimeout="true"
                   acceptCount="100" scheme="https" secure="true" SSLEnabled="true"
                   clientAuth="false" sslProtocol="TLS"
                   keystoreFile="/home/ec2-user/.keystore/starchild.jks"
                   keystorePass="d6b5385812252f180b961aa3630df504" />

    It couldn't hurt to also mention that I'm using a wildcard certificate. Please let me know if anything looks amiss! EDIT: After looking more into this, I've determined there may be nothing wrong with the server.xml or the listening ports. This is looking more like an actual certificate error, as a curl request gives me this error:

        curl: (35) Unknown SSL protocol error in connection to jira.mywebsite.com:-9824

    Though I can't seem to figure out what the "-9824" is. When comparing this curl to another similar setup (using the same wildcard certificate), it turns up the full handshake, which is to be expected. I believe this is now down to the protocol/cipher set default on JIRA servers.
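
    Two quick checks that usually narrow this class of error down: confirm the keystore really holds the wildcard certificate plus its chain under the alias Tomcat expects, and look at what the server actually offers during a TLS handshake. Paths and hostname are the ones from the post:

        keytool -list -v -keystore /home/ec2-user/.keystore/starchild.jks          # certificate + chain present? alias as expected?
        openssl s_client -connect jira.mywebsite.com:8443 -servername jira.mywebsite.com   # watch the handshake and offered protocols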

    Read the article

  • Creating Active Directory on an EC2 box

    - by Chiggins
    So I have Active Directory set up on a Windows Server 2008 Amazon EC2 server. It's set up correctly, I think; I never got any errors with it. Just to test that I got it all set up correctly, I have a Windows 7 Professional virtual machine set up on my network to join to AD. I set the VM to use the Active Directory box as its DNS server. I type in my domain to join it, but I get the following error:

        DNS was successfully queried for the service location (SRV) resource record used to locate a domain controller for domain "ad.win.chigs.me":
        The query was for the SRV record for _ldap._tcp.dc._msdcs.ad.win.chigs.me
        The following domain controllers were identified by the query:
        ip-0af92ac4.ad.win.chigs.me
        However no domain controllers could be contacted.
        Common causes of this error include:
        - Host (A) or (AAAA) records that map the names of the domain controllers to their IP addresses are missing or contain incorrect addresses.
        - Domain controllers registered in DNS are not connected to the network or are not running.

    It seems that I can talk to Active Directory, but when I'm trying to contact the Domain Controller, it is handing out a private IP to connect to - at least that's what I can make of it. Here are some nslookup results:

        > win.chigs.me
        Server:   ec2-184-73-35-150.compute-1.amazonaws.com
        Address:  184.73.35.150
        Non-authoritative answer:
        Name:     ec2-184-73-35-150.compute-1.amazonaws.com
        Address:  10.249.42.196
        Aliases:  win.chigs.me

        > ad.win.chigs.me
        Server:   ec2-184-73-35-150.compute-1.amazonaws.com
        Address:  184.73.35.150
        Name:     ad.win.chigs.me
        Address:  10.249.42.196

    win.chigs.me and ad.win.chigs.me are CNAMEs pointing to my EC2 box. Any idea what I need to do so that I can join my virtual machine to the EC2 Active Directory setup I have? Thanks!
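
    Two checks from the Windows 7 VM make the failure concrete: whether the SRV record resolves, and whether the DC's own hostname resolves to an address the VM can actually reach (a 10.x EC2-internal address won't be reachable from outside EC2):

        nslookup -type=SRV _ldap._tcp.dc._msdcs.ad.win.chigs.me   # which DC name does DNS hand out?
        nslookup ip-0af92ac4.ad.win.chigs.me                      # does that name resolve to a reachable IP?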

    Read the article

  • Production monitoring for EC2 instances

    - by Janine
    I'm setting up my first production instance on EC2 and want to make sure I have all necessary monitoring in place. There are three different types of things I want to monitor:

    1. Is the instance running? EC2 instances can be terminated without warning if the underlying hardware fails, and as far as I know they aren't automatically restarted. So if not, start it back up.
    2. Is UNIX running properly? This is the usual stuff about CPU load, disk space, etc.
    3. Is the website responding? If not, restart it.

    I initially set up Nagios on a physical server outside the cloud, but it is really only helpful for item 2. It can tell me if the instance is gone or if the website is not responding, but as far as I can tell it can't execute any commands to fix the situation. My Googling on this subject has yielded a plethora of options - Cacti, Monit, God, Ganglia, and probably more I'm forgetting now. I don't have time to research them all. I am aware of Amazon's CloudWatch but it doesn't seem to do anything that my Nagios installation doesn't already do. If you already have something like this in place, can you please share what has worked well for you?
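
    As a concrete example of the "detect and fix" piece (item 3), Monit can restart a web server when the site stops answering, where Nagios-style checks alone only alert. A minimal sketch - the config path, pid file and init script are assumptions for an Apache box:

        sudo tee /etc/monit/conf.d/website <<'EOF'
        check process apache with pidfile /var/run/apache2.pid
          start program = "/etc/init.d/apache2 start"
          stop program  = "/etc/init.d/apache2 stop"
          if failed host localhost port 80 protocol http for 3 cycles then restart
        EOF
        sudo monit reload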

    Read the article

  • APC fragmentation on EC2 Micro for Wordpress + W3TC

    - by Maarten Provo
    I'm trying to optimize APC for my Amazon EC2 Micro server running one WordPress site with W3TC. I've started with the settings advised by TechZilla in another topic, but I keep getting high fragmentation with 50% of the space free. I've uploaded an image to http://www.maartenprovo.be/downloads/apc.jpg but I can't post it here since I need at least 10 reputation. What values can I optimize to prevent fragmentation?

        [apc]
        apc.enabled=1
        apc.shm_segments=1
        ;32M per WordPress install
        apc.shm_size=164M
        ;Leave at 2M or lower. WordPress doesn't have any file sizes close to 2M
        apc.max_file_size=2M
        ;Relative to the number of cached files
        apc.num_files_hint=1000
        ;Relative to the size of WordPress
        apc.user_entries_hint=4096
        ;The number of seconds a cache entry is allowed to idle in a slot before APC dumps the cache
        apc.ttl=7200
        apc.user_ttl=7200
        apc.gc_ttl=3600
        ;Auto update cache files on change in WP-ADMIN or W3TC
        apc.stat=1
        ;This MUST be 0, WP can have errors otherwise!
        apc.include_once_override=0
        ;Only set to 1 while debugging
        apc.enable_cli=0
        ;Allow 2 seconds after a file is created before it is cached to prevent users from seeing half-written/weird pages
        apc.file_update_protection=2
        ;Ignore files
        apc.filters
        apc.slam_defense = 0
        apc.write_lock = 1
        apc.cache_by_default=1
        apc.use_request_time=1
        apc.mmap_file_mask=/var/tmp/apc.XXXXXX
        apc.stat_ctime=0
        apc.canonicalize=1
        apc.write_lock=1
        apc.report_autofilter=0
        apc.rfc1867=0
        apc.rfc1867_prefix =upload_
        apc.rfc1867_name=APC_UPLOAD_PROGRESS
        apc.rfc1867_freq=0
        apc.rfc1867_ttl=3600
        apc.lazy_classes=0
        apc.lazy_functions=0

    Read the article

  • Dedicated virtual setup is slow with WordPress

    - by kovshenin
    Hey. I'm running a Fedora Linux server on the Amazon EC2 platform. I'm pretty sure there's something wrong with my configuration, as it seems to be very slow. SSH sometimes takes over 30 seconds to connect, and a WordPress-generated web page could take 5 seconds to load, or it could take 20 seconds, which is pretty awkward. MySQL queries are all executed in less than a second, so I don't think that's the issue. I'm not really sure where the issue lies, but a simple page written in PHP loads instantly, while a fresh WordPress installation starts lagging. The same setup works perfectly on grid hosting at MediaTemple, for instance, so I'm pretty sure I missed something. Could you please direct me to the right tools and articles that would help me out? Thanks so much! Fedora Core 8, PHP 5.2.6, MySQL 5.0.45, OpenSSH 4.7p1, OpenSSL 0.9.8b. PHP is configured as a module for Apache 2.2.9, and all websites are based on virtual hosts. I have some ongoing PHP scripts running from time to time in the background via cron. Thanks.
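
    A couple of cheap first checks, offered as guesses rather than a diagnosis: slow SSH logins are very often reverse-DNS timeouts on the server, and a page that takes 5 seconds one time and 20 the next on a small instance frequently points at memory pressure or external lookups rather than MySQL:

        time curl -s -o /dev/null -w '%{time_total}\n' http://localhost/   # is the stack slow even without the network?
        free -m; top                                                       # look for swapping while a slow page renders
        echo 'UseDNS no' | sudo tee -a /etc/ssh/sshd_config && sudo service sshd reload   # skip reverse DNS on SSH logins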

    Read the article

  • Permission issue for apache

    - by Aamir Adnan
    Environment details: Amazon EC2, Ubuntu 12.04, Django + mod_wsgi + Python 2.6, web server: apache2. I have mounted a 10 GB EBS volume to an instance at /mnt/ebs1/. After mounting the volume and formatting it, I placed all my project files in /mnt/ebs1/project. The wsgi file is /mnt/ebs1/project/apache/django.wsgi, and its content is:

        import os, sys
        sys.path.insert(0, '/mnt/ebs1/project')
        sys.path.insert(1, '/mnt/ebs1')
        os.environ['DJANGO_SETTINGS_MODULE'] = 'project.configs.common.settings'
        import django.core.handlers.wsgi
        application = django.core.handlers.wsgi.WSGIHandler()

    My httpd.conf file looks like:

        LoadModule wsgi_module /usr/lib/apache2/modules/mod_wsgi.so
        WSGIPythonHome /usr/bin/python2.6
        WSGIScriptAlias / /mnt/ebs1/project/apache/django.wsgi

        <Directory /mnt/ebs1/project>
            Order allow,deny
            Allow from all
        </Directory>

        <Directory /mnt/ebs1/project/apache>
            Order allow,deny
            Allow from all
        </Directory>

        Alias /static/ /mnt/ebs1/project/static/

        <Directory /mnt/ebs1/project/static>
            Order deny,allow
            Allow from all
        </Directory>

    The above configuration gives me "Forbidden: You don't have permission to access / on this server." I tried to find the user running Apache using ps aux; it is www-data with group www-data. I have tried to change the ownership of /mnt/ebs1 and its subdirectories using chown -R www-data:www-data /mnt/ebs1, but that still does not solve the problem. Can anyone tell me what I am doing wrong or have missed?
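
    One common culprit worth ruling out (an assumption, not a confirmed diagnosis): www-data owns the files but can't traverse one of the parent directories on the way to them. Two quick checks and a possible fix:

        namei -l /mnt/ebs1/project/apache/django.wsgi                     # shows permissions on every path component
        sudo -u www-data head -n 1 /mnt/ebs1/project/apache/django.wsgi   # can the Apache user actually read it?
        sudo chmod o+x /mnt /mnt/ebs1                                     # if a parent directory lacks the execute (traverse) bit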

    Read the article

  • Vagrant doesn't detect chef-solo unless re-installed

    - by nightowl
    I am using Vagrant to test my Chef recipes on Amazon AWS, and I am encountering an irritating issue: I initially assumed that Vagrant would install Chef itself (as it does when using VirtualBox as the provider), but it seems that this needs to be done via the cloud-init script. However, even after I successfully installed the chef gem via cloud-init, I was still getting the following error: "The chef binary (either chef-solo or chef-client) was not found". A quick Google of this error suggested three probable causes: Chef had failed to install; it had installed, but the directory was not in the $PATH environment variable; or it had installed and was in the $PATH but with incorrect permissions. I logged in and double-checked: chef-solo and chef-client were installed, the PATH for the user, sudo and root all included /usr/local/bin, and permissions were all fine. I managed to solve this problem by uninstalling and reinstalling the gem using sudo gem install chef. I don't understand why this should resolve the issue, and it is a bit of a problem if I have to SSH into a test box and manually install the gem every time. Does anyone have any suggestions as to why this might be happening?
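
    A hedged workaround rather than an explanation: installing Chef with the omnibus installer from a shell provisioning step (or cloud-init) puts chef-solo in a system-wide location before Vagrant goes looking for it, sidestepping differences between the gem environment of an interactive login and the non-login shell Vagrant uses. The installer URL below is the one Opscode published at the time; treat it as an assumption:

        curl -L https://www.opscode.com/chef/install.sh | sudo bash   # omnibus install of Chef
        which chef-solo && chef-solo --version                        # confirm it is on the default PATH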

    Read the article

  • EC2: is an instance's public DNS stable? Can I rely on it not changing?

    - by Aseem Kishore
    I'm new to Amazon EC2. I've launched my first instance, and am using it as a web server. I see that it has a public DNS name (a public URL), e.g. ec2-123-45-6-789.compute-1.amazonaws.com. I can successfully go to this server in my browser, hit it via cURL, etc. I want to use this web server for a back-end service in an app I'm building, so I placed this URL in my app's config, and it works great. But when I manually stopped and re-started my instance, I saw that the public DNS changed! I've read that this happens when you explicitly stop and re-start, but that it doesn't happen if you just "reboot". I don't plan on explicitly stopping and re-starting this server ever, but my question is: will this public DNS ever change on its own for any reason? E.g. if the machine abnormally crashes, or whatever. In other words, is it safe to ship an app that's wired to this URL? Thanks!
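
    For what it's worth, the public DNS name is derived from the instance's public IP, so anything that changes that IP changes the name. The usual way to get an endpoint that survives a stop/start (or a replacement instance) is an Elastic IP - a sketch with a hypothetical instance ID and address:

        aws ec2 allocate-address                                               # reserve a static public IP
        aws ec2 associate-address --instance-id i-12345678 --public-ip 203.0.113.25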

    Read the article

  • Most cost efficient way to backup Subversion data to S3?

    - by sludge
    I'm looking at using S3 as an offsite backup repository for my Subversion database. When I dump my SVN database, it's about 10 gigabytes, and I would like to avoid the charge of uploading that data repeatedly. The anatomy of this large file is such that new changes to Subversion modify the tail of the file, with everything else staying the same. Because Amazon S3 does not allow you to "patch" files with changes, I will have to upload ten gigs every time I instantiate a backup after doing a simple submit to Subversion. Here are the options as I see them: Option 1: I am looking at duplicity, which has --volsize, which splits data over an amount of megs. Is it possible to split the Subversion dumps using this so further incremental backups are measured in megabytes? Option 2: Can I just back up the hot Subversion repository? This seems like a bad idea if it is in the middle of writing a submit. However, I have the option of taking the repo offline between the hours of midnight and 4am. Each revision in my Berkeley DB uses a file as its record.
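
    A third option worth weighing (my addition, not from the post): dump incrementally, so only the revisions added since the last backup ever travel to S3. A sketch with hypothetical paths and bucket name:

        LAST=$(cat /var/backups/svn/last_rev)               # highest revision covered by the previous backup
        HEAD=$(svnlook youngest /var/svn/repo)              # skip the dump entirely when LAST equals HEAD
        svnadmin dump /var/svn/repo --incremental -r $((LAST+1)):$HEAD \
            | gzip > /var/backups/svn/repo-$((LAST+1))-$HEAD.dump.gz
        duplicity /var/backups/svn s3+http://my-svn-backups/dumps
        echo $HEAD > /var/backups/svn/last_rev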

    Read the article

  • Using NFS for scalable PHP/MySQL web application

    - by Jeroen Moons
    Here's the situation: I have a PHP/MySQL web application that accepts user uploads (PDF files). From these PDF files' pages a preview image is made on the fly and presented to the web app's users. Some PDFs might be on the large side; most will be under 50 MB, but some extreme cases could be as large as a few hundred MB. A little waiting for the preview image for large PDF files is acceptable, but no more than a minute, let's say. Everything is running on one server for now, but soon the app will hit the server's limit on both storage and processing power. My idea to solve the problem: to deal with this situation I had the idea of having one or more PDF processing servers as needed, and one or more file storage servers. These two types of servers are mounted to the server on which the actual app runs using NFS. The app could then use Gearman to delegate PDF processing tasks to these processing servers. The processing server can mount the storage server, read the file stored there, process it and write its output to that server. The servers I'm talking about will be Amazon EC2 instances. The web app returns a link to the resulting PDF preview image on the storage server that was used, which can then be used on the front end to show the image to the user. My question: I have zero experience with apps that use multiple servers. Is this idea viable, or is there a better way to do it? Is an NFS setup fast and reliable enough for this situation?
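
    For scale, this is roughly what the mount on a processing worker would look like; the hostname and export path are made up, an Ubuntu-style AMI is assumed, and the processing itself would live in a Gearman worker process:

        sudo apt-get install -y nfs-common
        sudo mkdir -p /mnt/storage
        sudo mount -t nfs -o rw,hard,intr storage01.internal:/exports/pdfs /mnt/storage
        # a Gearman worker on this box then reads /mnt/storage/<upload>.pdf,
        # renders page previews, and writes them to /mnt/storage/previews/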

    Read the article

  • EC2 configuration for medium load service on Django

    - by Luberg
    I have created a very basic Django application which puts an email address into the database (a coming-soon page for a startup). I launched a t1.micro instance to try out what load it can carry: Nginx + FastCGI from Django + SQLite/Postgres (I tried both). A blitz.io test gave me a pretty unhappy result (just 100 users within 1 minute):

        This rush generated 542 successful hits in 1.0 min and we transferred 809.01 KB of data in and out of your app. The average hit rate of 8.81/second translates to about 761,612 hits/day. You got bigger problems though: 87.28% of the users during this rush experienced timeouts or errors!

    I tried putting Varnish in front, disabled Debug mode in Django and started FastCGI in threaded mode - nothing helped. This is not going to be a super high-load page - just a coming-soon page to save subscribers' email addresses - but it should handle at least 500-1000 users at the same time at peak... I believe a t1.micro is far too small for that, but I have also tried a small instance with no better result. Please let me know whether I should use something other than Amazon EC2, pick something bigger than t1.micro, or whether this is definitely a configuration issue.
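
    One thing worth measuring before changing anything else (a suggestion, not from the post): t1.micro instances are heavily CPU-throttled under sustained load, which shows up as "steal" time and produces exactly this pattern of timeouts. Worth watching while the blitz.io rush is running:

        vmstat 1          # the "st" column is CPU stolen by the hypervisor (throttling)
        uptime            # check whether the load average climbs during the rush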

    Read the article
