Search Results

Search found 2845 results on 114 pages for 'amazon dynamodb'.

Page 44/114

  • Tomcat access logs - are failed requests included?

    - by Maxim Eliseev
    We have a RESTful web service (Java, hosted in Tomcat on Ubuntu on Amazon EC2). From time to time it fails (not every week). When it fails, Java CPU consumption goes to 100% and it consumes all available memory. It does not recover on its own; I have to restart the server. There is nothing suspicious in the Tomcat access logs. My guess is that one of our users submitted a very "heavy" request that brought the server down. Is it possible that this request is not in the Tomcat logs because it never finished?
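
    The default Tomcat AccessLogValve writes its log entry only when a request completes, so a request that hangs the JVM typically never appears in the access log. A minimal sketch of how to capture the stuck request the next time the CPU pegs (the PID lookup is an assumption; adjust it to your Tomcat start script):

        # find the Tomcat JVM and dump its threads while the problem is happening
        TOMCAT_PID=$(pgrep -f 'org.apache.catalina.startup.Bootstrap')
        jstack "$TOMCAT_PID" > /tmp/tomcat-threads.txt   # shows which URL/handler each worker thread is stuck in
        # jstack ships with the JDK; on a JRE-only install, use: kill -3 "$TOMCAT_PID" (the dump goes to catalina.out)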

    Read the article

  • EC2 hostname ubuntu and ejabberd

    - by aelbaz
    I have a question about hostnames on Ubuntu EC2 instances. I use Elastic IPs for hosts that need to be reachable from the internet, and I have created DNS entries pointing the machine names at those IPs; for example, for Elastic IP 11.11.11.11 I added the name www.example.com. I also want to rename the machines themselves, because the hostname is a parameter of the service running on them (an ejabberd server). When an EC2 instance restarts, the hostname is reset because the DHCP client requests a hostname from Amazon's DHCP server. My question is: what is the safest way to change the hostname persistently (modify the DHCP client configuration, add the command to rc.local, something else)? And could this cause problems with internal name resolution between EC2 instances? Thanks
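
    A minimal sketch of one common approach: persist the name in the standard Ubuntu files so it survives reboots and DHCP renewals. The name chat01.example.com is a placeholder, and on images that run cloud-init you may also need preserve_hostname: true in /etc/cloud/cloud.cfg so the name is not overwritten at boot:

        NEW_NAME=chat01.example.com
        echo "${NEW_NAME%%.*}" | sudo tee /etc/hostname      # short name, persisted across reboots
        sudo hostname "${NEW_NAME%%.*}"                      # apply immediately without a reboot
        # map the name to the instance's private IP so local resolution (and ejabberd) keeps working
        PRIVATE_IP=$(curl -s http://169.254.169.254/latest/meta-data/local-ipv4)
        echo "$PRIVATE_IP $NEW_NAME ${NEW_NAME%%.*}" | sudo tee -a /etc/hosts

    Internal EC2 name resolution (the ip-10-x-x-x.ec2.internal names) should be unaffected as long as you only add entries to /etc/hosts rather than replacing the existing ones.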

    Read the article

  • Can't access EC2 hosted website

    - by Himanshu Page
    For some reason, I am unable to access our website www.doccaster.com (I get a "Bad request" page from nginx). We are hosted on Amazon EC2 with an Elastic IP associated with the instance. The weird part is that a) I can access it through the public DNS URL http://ec2-184-73-195-180.compute-1.amazonaws.com, and b) my co-founder, who is located in another city, can access it via www.doccaster.com. I observed that my instance was failing its reachability check, so I launched a new one and assigned it the Elastic IP. I tried to ping the IP address 184.73.195.180 from my machine, but with no success. Any help will be really appreciated. More details: I ran netstat -lntp | grep -E 'apache|httpd' on my server and it displays :::80 for httpd. Is this accurate? Should it be 0.0.0.0:80, or doesn't it matter?
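
    The :::80 entry just means httpd is bound to the IPv6 wildcard, which on Linux normally accepts IPv4 connections as well, so that part is usually fine. A minimal sketch of how to tell whether the name, the IP, or the vhost is at fault, run from the machine that cannot connect (hostnames and IP taken from the question):

        ping -c 3 184.73.195.180                                      # ICMP is often blocked by the security group, so a failed ping proves little
        curl -v -H 'Host: www.doccaster.com' http://184.73.195.180/   # talks to the Elastic IP directly while sending the real vhost name
        curl -v http://ec2-184-73-195-180.compute-1.amazonaws.com/    # the URL that is known to work, for comparison
        nslookup www.doccaster.com                                    # check what your local resolver returns; a stale DNS cache at your ISP is a common culprit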

    Read the article

  • How to Access an AWS Instance with RDC when behind a Private Subnet of a VPC

    - by dalej
    We are implementing a typical Amazon VPC with public and private subnets, with all servers running Windows. The MS SQL instances will be on the private subnet, with all IIS/web servers on the public subnet. We have followed the detailed instructions at Scenario 2: VPC with Public and Private Subnets, and everything works properly - until the point where you want to set up a Remote Desktop Connection into the SQL server(s) on the private subnet. At this point the instructions assume you are accessing a server on the public subnet, and it is not clear what is required to RDC to a server on a private subnet. It would make sense that some sort of port redirection is necessary - perhaps accessing the EIP of the NAT instance to reach a particular SQL server? Or perhaps using an Elastic Load Balancer (even though that is really intended for HTTP)? But it is not obvious what additional setup such a Remote Desktop Connection requires.
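
    One common pattern is to treat a host you can already reach as a jump box instead of exposing port 3389 on the private servers. A minimal sketch using an SSH tunnel through the (Linux) NAT instance from the Scenario 2 setup; the key file, NAT Elastic IP and private SQL server address are placeholders:

        # forward local port 13389 to RDP on the private SQL server, via the NAT instance
        ssh -i mykey.pem -N -L 13389:10.0.1.25:3389 ec2-user@<nat-instance-eip>
        # then point Remote Desktop at localhost:13389
        # (the private subnet's security group must allow TCP 3389 from the NAT instance)

    Alternatively, RDP to one of the IIS servers on the public subnet and open a second Remote Desktop session from there to the private SQL server; either way, nothing on the private subnet needs a public address.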

    Read the article

  • AWS:EC2:: Could not connect FTP client?

    - by heathub
    My server OS: Amazon Linux. I am trying to set up FTP. I have installed vsftpd, opened ports 20-21, and opened ports 1024-1048. Basically, I followed every one of these steps and started the vsftpd service (its status indicates [ok]). I use FileZilla as my FTP client. Here is my setting/configuration:
        Host: ec2-XX-XX-XXX-XX.compute-1.amazonaws.com
        Port: (blank, but I have tried 20 and 21)
        Server Type: FTP - File Transfer Protocol
        Logon Type: Normal
        Username: (tried root and ec2-user)
        Transfer mode: tried passive and active
    I always get this error:
        Status: Waiting to retry...
        Status: Resolving address of ec2-XX-XX-XXX-XX.compute-1.amazonaws.com
        Status: Connecting to XX.XX.XXX.XX:21...
        Error: Connection timed out
        Error: Could not connect to server
    Have I missed any configuration/settings?
    EDIT: After executing /sbin/iptables -L -n, here is the result:
        Chain INPUT (policy ACCEPT)
        target     prot opt source               destination
        Chain FORWARD (policy ACCEPT)
        target     prot opt source               destination
        Chain OUTPUT (policy ACCEPT)
        target     prot opt source               destination
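
    A timeout on the very first connect usually means TCP port 21 is not reachable at all, and since the iptables output above is wide open, the EC2 security group is the first thing to check. Once the control connection works, vsftpd also needs an explicit passive-port range plus the instance's public address, or directory listings will hang. A minimal sketch (the directives are standard vsftpd.conf options; the IP is a placeholder for your public/Elastic IP):

        # first allow TCP 21 and 1024-1048 in the instance's security group (EC2 console or API), then:
        sudo bash -c 'printf "%s\n" "pasv_enable=YES" "pasv_min_port=1024" "pasv_max_port=1048" "pasv_address=XX.XX.XXX.XX" >> /etc/vsftpd/vsftpd.conf'
        sudo service vsftpd restart
        # vsftpd denies root logins by default; give ec2-user a password (sudo passwd ec2-user) or create a dedicated FTP user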

    Read the article

  • Setup local EC2 style cloud?

    - by John Kramlich
    I was recently given 3 dual Opteron 2400 servers with 4GB of RAM and 120GB hard drives. I am interested in setting up something similar to Amazon's EC2 for my own personal web development use. Basically, I would like to spin up instances from an ISO or other disk images and have them available to test and develop software. Are there open source solutions I can use to accomplish this? I am assuming one of the machines will need to act as a controller of some sort for the other two. I use Sun's VirtualBox on my local development machine to virtualize various versions of Microsoft Windows. However, I'm not sure if that's the best tool for what I am trying to achieve. I apologize in advance if this question is too vague to get meaningful responses. I am new to cloud computing and fairly new to server administration.
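
    Projects such as Eucalyptus (EC2 API compatible), OpenNebula and OpenStack provide the EC2-style controller/node layer on top of commodity servers, but the basic ability to boot an instance from an ISO comes from plain KVM/libvirt on each box. A minimal sketch for one node (package and flag names vary a little between distro releases; the ISO path and sizes are placeholders):

        sudo apt-get install qemu-kvm libvirt-bin virtinst        # hypervisor, management daemon and CLI installer
        sudo virt-install --name testvm --ram 1024 --vcpus 1 \
            --disk size=10 --cdrom /isos/ubuntu-server.iso \
            --graphics vnc                                         # attach with virt-viewer/VNC to run the installer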

    Read the article

  • Force ID of user created by apt-get

    - by Bart van Heukelom
    Context: I'm automatically installing postgresql-9.1 on an Ubuntu server with apt-get. This creates the required postgres user. The Postgres data is on an external volume that survives reinstalls. This data is obviously owned by the postgres user. The problem I'm having is that the ownership is not recorded under the name postgres, but under the UID that postgres had at creation time. When the server is reinstalled, postgres sometimes gets a different UID, and no longer owns the data directory, and thus does not work. Question: Can I force the UID of the user postgres created by apt-get to something fixed? Or is there another way to solve my problem? (As you may have deduced, this is on Amazon EC2 with the data on an EBS volume)
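
    A minimal sketch of one way to pin the UID, assuming 119 is free on your image (pick any stable value): create the postgres group and user before installing, so the packaging scripts find an existing account and reuse it, and the ownership on the EBS data volume stays valid across reinstalls:

        sudo groupadd -g 119 postgres
        sudo useradd -u 119 -g 119 -r -d /var/lib/postgresql -s /bin/bash postgres
        sudo apt-get install -y postgresql-9.1    # the maintainer scripts skip user creation when 'postgres' already exists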

    Read the article

  • Cloud hosting and single hardware point of failure?

    - by PeterB
    From talking to sales I thought Rackspace Cloud was running on a SAN plus compute nodes (as VMware's offerings do), only to find out it doesn't; so when the host server goes down for maintenance, all cloud servers on that host go down (in our case for 2.5 hours). I understand Amazon EC2 also has this single-server point of failure. Which cloud hosting solutions don't rely on a single server? I've yet to find a list organized by architecture. Is there a term that distinguishes between these types of 'cloud'? Is one of these 'grid computing' and the other 'virtualisation'? Can a SAN-backed solution provide the same reliability as two mirrored cloud servers on (say) Rackspace Cloud? I am more familiar with the VMware architecture and would like to understand the advantages and disadvantages of each approach. I understand the standard architecture is to have multiple cloud servers with data mirrored between them; until we need multiple database servers, I'm wondering whether a SAN/node hosting solution would provide the lack of downtime we need without the added complexity.

    Read the article

  • Where and how does Kindle Cloud Reader store downloaded books, on a Windows 7 system?

    - by einpoklum
    I use Firefox and sometimes Chrome, on Windows 7. Amazon's in-browser Kindle Cloud Reader lets you "download" books for local/offline viewing. Where are these stored, given my OS and browser combination? I've searched the Users subdirectory for my user and could not find a relevant (separate) file in there, specifically not in the Firefox and Chrome profile directories. To clarify, the files are obviously not downloaded as-is; they are stored in some potentially obfuscated format, possibly in the browser's local store and possibly elsewhere. The question is: where and how, exactly? (This was originally part of another question, but wasn't answered there since it was not the main focus of that question.)

    Read the article

  • Duplicating an instance into a new VPC from a Snapshot

    - by Remmus
    We have a group of instances in an Amazon VPC that we use for our live environment. We have a big release to do and want to test that the deployment will run smoothly. I have created a second VPC, created instances of the same size on the same private IPs, and then removed their original volumes and attached new volumes created from snapshots of the live environment. Unfortunately, none of the instances will let me connect. They start running fine, but no system log output appears and I can't connect. The only thing I can think of is that the new instances were created from a new AMI, as the old one is deprecated due to new security fixes. Is this a problem? If so, can I fix it in any way? And if this isn't the problem, does anyone have any ideas how I can fix it?
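
    A minimal sketch of the usual first diagnostics for "boots but never becomes reachable" (the instance ID is a placeholder; commands shown with the current AWS CLI). A console log that stays empty long after boot often does point to a kernel/AMI mismatch with the attached root volume, in which case launching the test instances from the same AMI (or at least the same kernel) as the originals is worth trying:

        aws ec2 get-console-output --instance-id i-0123456789abcdef0 --output text   # kernel and boot errors show up here
        aws ec2 describe-instance-status --instance-ids i-0123456789abcdef0          # system and instance reachability checks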

    Read the article

  • What's better for deploying a website + DB on EC2: 2 small VM or a large one?

    - by devguy
    I'm planning the deployment of a mid-sized website with a SQL Server Standard DB. I've chosen Amazon EC2 to deploy it. I now have to choose between these two options: 1) two small instances (1 core and 1.7 GB of RAM each), one for the IIS front end and one for running the DB (note: these "small instances" can only run the 32-bit version of Win2008 Server); or 2) a single large instance (4 cores, 7.5 GB of RAM) where I'd install both IIS and SQL Server (note: this large instance can only run the 64-bit version of Win2008 Server). What's better in terms of performance, scalability, ease of management (launching a new instance while I back up the principal instance), etc.? All suggestions and points of view are welcome!

    Read the article

  • Problems with the backup

    - by marcodv
    I have a script that runs at around 4 o'clock in the morning and backs up all the MySQL databases and the config files for 250 Linux VMs. The problem is that it takes ages to complete: more than 50% of these VMs need more than 8 hours to finish. More or less all the VMs have the same configuration - the same amount of RAM, the same amount of disk space, the same number of CPUs, and Debian 6.0.5. I am saving these backups on Amazon S3, because it is the cheapest solution I've found. Now my question is: does anyone have solutions or suggestions for this? On one blog I've read that a combination of ionice and nice could be a good workaround. Any thoughts?
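
    nice and ionice only help when the dump is competing with the VM's real workload for CPU or disk; if the slow part is the S3 upload itself, staggering the start times across the 250 VMs and compressing the dump matter more. A minimal sketch of the low-priority pattern (bucket name and paths are placeholders):

        # run the dump at the lowest CPU and I/O priority so it does not starve the VM's normal work
        nice -n 19 ionice -c2 -n7 mysqldump --all-databases | gzip > /tmp/all-dbs.sql.gz
        nice -n 19 s3cmd put /tmp/all-dbs.sql.gz s3://my-backup-bucket/$(hostname)/$(date +%F).sql.gz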

    Read the article

  • create a CNAME record for AWS LoadBalancer DNS name

    - by t q
    I am trying to set up a load balancer on AWS. The DNS name it gave me looks like myLoadBalancer-**********.us-east-1.elb.amazonaws.com; however, when I try to put that into my domain registrar's A record, I get an error: "IP address is not valid. Must be of type x.x.x.x where x is 0-255." Amazon's solution is that you should create a CNAME record for the load balancer's DNS name, or use Amazon Route 53 to create a hosted zone. Route 53 gives me name servers, but if I use those then my email, which is set up through the registrar, stops working. Question: is there a way to use Route 53 and retain my email? Or should I create a CNAME record for the load balancer's DNS name - and if so, how do I do this? I'm not sure what it means.
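
    Email keeps working under Route 53 as long as you recreate the registrar's MX (and any other) records in the hosted zone before switching the name servers; nothing about Route 53 itself breaks mail. If you instead stay on the registrar's DNS, the load balancer goes in as a CNAME on a subdomain such as www (a CNAME cannot sit on the bare apex domain). A rough sketch of both record shapes, with placeholder names and a placeholder hosted-zone ID:

        # records to reproduce in the hosted zone (or registrar UI):
        #   example.com.        MX     10 mail.yourmailprovider.example.
        #   www.example.com.    CNAME  myLoadBalancer-**********.us-east-1.elb.amazonaws.com.
        # the same CNAME via the AWS CLI, once the hosted zone exists:
        aws route53 change-resource-record-sets --hosted-zone-id ZXXXXXXXXXXXXX --change-batch '{
          "Changes": [{"Action": "CREATE", "ResourceRecordSet": {
            "Name": "www.example.com", "Type": "CNAME", "TTL": 300,
            "ResourceRecords": [{"Value": "myLoadBalancer-**********.us-east-1.elb.amazonaws.com"}]}}]}'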

    Read the article

  • Advice for an EC2 Architecture and Deployment Strategy

    - by Mark
    My company is currently migrating several websites and PHP web applications (standard LAMP stack) from three in-house servers to Amazon EC2. Because we had only three servers, we clustered several low-traffic websites with perhaps one high-traffic web application, and served them from the same server. The server admin has pretty much copied the previous architecture wholesale onto the EC2 instances, simply upping the instance size to account for the highest traffic client that occupies that particular instance. This architecture might be okay if it wasn't for deployment. Any time one of these sites/apps changes, it means redeploying the entire instance, along with the 30 sites/apps it hosts, instead of just updating one. How can we architect our cloud in a more modular fashion? Should each app get its own appropriately-sized instance? What is the best strategy for deployment in this type of situation?

    Read the article

  • How to maximize parallel download from S3

    - by StCee
    I have a lot of images to load from Amazon S3 on a single page, and sometimes it takes quite some time to load them all. I've heard that splitting the images across different sub-domains helps parallel downloads, but what is the actual implementation of that? While it is easy to split by purpose into sub-domains like static, image, etc., should I create something like 10 sub-domains (image1, image2, ...) to load, say, 100 images? Or is there a cleverer way to do it? (By the way, I am considering using memcache to cache the S3 images; I am not sure if that is possible. I would be grateful for any further comments. Thanks a lot!)
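
    Browsers only open a handful of parallel connections per hostname (historically 2-6), so 2-4 extra hostnames is usually the sweet spot; ten mostly adds DNS lookups. The implementation detail that matters is choosing the hostname deterministically from the image name, so the same image always comes from the same host and stays cacheable. A minimal sketch of that idea (the domains are placeholders; the same one-liner translates directly into whatever language renders your pages):

        file="photos/cat-42.jpg"
        shard=$(( $(printf '%s' "$file" | cksum | cut -d' ' -f1) % 4 + 1 ))   # stable hash of the path -> 1..4
        echo "http://img${shard}.example.com/${file}"                         # the same path always maps to the same imgN host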

    Read the article

  • Redirect all subdomains to subfolders

    - by alfonso
    I'd like to add a rule so that all subdomains get redirected to a corresponding subfolder. For example:
        app1.example.com -> example.com/app1
        app2.example.com -> example.com/app2
        something.example.com -> example.com/something
    All subdomains will only be one level deep. Questions: Which DNS providers allow me to do this? And are these alternatives feasible? 1) Redirect them all to a special web app with a static IP that redirects to the proper subfolder (as sketched below) - but how can I know which subdomain the request came from? 2) Programmatically create each rule when I need it - which DNS providers have API access for adding rules? I think Amazon Route 53 might be the answer here.
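
    DNS only maps names to an address; the redirect itself has to happen in whatever answers HTTP at that address. So the usual recipe is a wildcard record (*.example.com, which Route 53 and most providers support) pointing at one server, plus a rewrite there that reads the Host header - which also answers the "which subdomain did the request come from" question. A minimal sketch as an nginx server block (file path and domain are placeholders; reload nginx after adding it):

        # /etc/nginx/conf.d/subdomain-redirect.conf
        server {
            listen 80;
            server_name ~^(?<sub>[^.]+)\.example\.com$;         # regex server_name captures the subdomain label as $sub
            return 301 $scheme://example.com/$sub$request_uri;  # app1.example.com/x -> example.com/app1/x
        }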

    Read the article

  • Backing Up User Data when data is not in use. Should I be concerned?

    - by jberryman
    This may be a dumb question. I would like to use duplicity to make backups to Amazon S3 of directories, each of which contains a different user's data. Each directory could be written to at any time. So I have two questions: Should I be concerned that a scheduled backup of a directory might occur in the middle of data being written to files in the directory, resulting in a corrupted backup? And if that is a valid concern, how would I go about temporarily delaying an operation while IO was happening, to try to minimize that effect? Thanks for the advice.
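
    It is a valid concern: a file that changes mid-backup can be captured in an inconsistent state (the rest of the backup is still usable, but that one file may not be). The standard workaround is to back up a filesystem snapshot rather than the live directory, so every file is frozen at the same instant. A minimal sketch assuming the user directories live on an LVM volume (volume names, mount points and the S3 bucket are placeholders):

        lvcreate --size 1G --snapshot --name userdata_snap /dev/vg0/userdata    # copy-on-write snapshot, near-instant
        mkdir -p /mnt/userdata_snap && mount -o ro /dev/vg0/userdata_snap /mnt/userdata_snap
        duplicity /mnt/userdata_snap/alice s3+http://my-backup-bucket/alice     # back up the frozen view, per user directory
        umount /mnt/userdata_snap && lvremove -f /dev/vg0/userdata_snap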

    Read the article

  • Suspicious activity in access logs - someone trying to find phpmyadmin dir - should I worry?

    - by undefined
    I was looking over the access logs for a server that we are running on Amazon Web Services. I noticed that someone was obviously trying to find a phpMyAdmin directory: they (or a bot) were trying different paths, e.g. admin/phpmyadmin/, db_admin, and the list goes on. Actually, there isn't a database on this server, so this was not a problem; they were never going to find it. But should I be worried about such snooping? Is this just a really basic attempt at getting into our system? Our database is actually held on another managed server, which I assume is protected from such intrusions. What are your views on such sneaky activity?

    Read the article

  • intermittent SSH with ssh_exchange_identification error

    - by rafamvc
    My SSH connection to my server only works for around 10 minutes out of every 30. Things I have figured out that might be relevant: The server is under load (it is a database server), but in the spare moments when I can connect it is still under the same load, which doesn't make sense. The server runs Ubuntu, and ConsoleKit was using a lot of virtual memory; I restarted ConsoleKit and it seems to be using a reasonable amount of memory now. It is not hosts.allow or hosts.deny - those are set up properly. It is not a firewall problem - those settings were working before, and the same settings are working for other similar machines. It is on EC2, the Amazon cloud.
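
    Intermittent ssh_exchange_identification errors usually mean sshd itself dropped the connection before sending its banner, most often because the unauthenticated-connection limit (MaxStartups) was hit or a TCP-wrappers/DNS lookup stalled. A minimal sketch of where to look (the log path assumes Ubuntu):

        sudo grep -i sshd /var/log/auth.log | tail -50            # look for "refused connect", drops, or wrapper denials
        sudo grep -Ei 'maxstartups|usedns' /etc/ssh/sshd_config   # see what limits are currently configured
        # raising MaxStartups (e.g. "MaxStartups 30") and setting "UseDNS no" in sshd_config,
        # followed by "sudo service ssh restart", is a common fix when bursts of connections hit the limit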

    Read the article

  • SQL – What is the latest Version of NuoDB? – A Quick Contest to Get Amazon Gift Cards

    - by Pinal Dave
    We had a great contest last week - What ACID Stands for in the Database? – Contest to Win 24 Amazon Gift Cards and Joes 2 Pros 2012 Kit. It received quite a few responses. Just like any other contest, not everyone was a winner. The kind folks at NuoDB decided to give another chance to everyone who did not win the last contest. This means that if you missed taking part in the earlier contest, or if you took part and did not win, you still have one more chance to win an Amazon Gift Card. Here is the quick contest: you just have to go and download NuoDB. The first 10 people to download NuoDB will each get a USD 10 card. Everyone else will be entered into a lucky draw for Amazon Gift Cards of USD 50. Winners will be announced in the next 24 hours. Bonus Round: If you have entered the contest above, you can also enter to win the latest Beginning SSRS Joes 2 Pros book. You just have to leave a comment here describing your experience with NuoDB and stating the latest version of the product. Here are a few of the blog posts I wrote earlier on the subject: Part 1 – Install NuoDB in 90 Seconds; Part 2 – Manage NuoDB Installation; Part 3 – Explore NuoDB Database; Part 4 – Migrate from SQL Server to NuoDB; Part 5 – NuoDB and Third Party Explorer – SQuirreL SQL Client, SQL Workbench/J and DbVisualizer. Reference: Pinal Dave (http://blog.sqlauthority.com) Filed under: PostADay, SQL, SQL Authority, SQL Query, SQL Server, SQL Tips and Tricks, T SQL

    Read the article

  • How do I install and use the cli53 tools on Windows?

    - by pavlos
    I'm trying to find the simplest way to import a large number of BIND zone files into Route 53. I've had a quick look at the AWS CLI and the AWS Tools for Windows PowerShell, but they don't seem to include a zone-file import option the way the Route 53 GUI does. The cli53 utility, on the other hand, does, but it is written in Python and appears to have a series of prerequisites to get going which I'm having trouble working out for Windows. I can find plenty of examples of setting it up under Linux, but only one reference to a PowerShell example here, and it doesn't explain how to install cli53 in the first place. The other option I'm exploring is to use the BIND to Amazon Route 53 Conversion Tool Perl script to first convert the zone files to the Route 53 CreateHostedZoneRequest XML format, and then use the AWS New-R53HostedZone PowerShell cmdlet to import the zones. After the zones have been imported, I'll be looking at running a script to validate that what was created in Route 53 matches the existing name server, before updating each domain's name server records - I was planning on whipping something up using the new PS 4.0 Resolve-DnsName cmdlet, but let me know if you have any better suggestions. Any assistance would be greatly appreciated - thanks. (By the way, I had more reference links in my post, but Server Fault won't allow me to post more than 2 links as a new member; for the same reason I also can't comment on Vasili's example in the other linked thread.)
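
    At the time this question dates from, cli53 was a Python package on PyPI, so the Windows install is essentially "install Python and pip, then pip install". A minimal sketch from a command prompt; the import syntax below follows cli53's documented usage, but flag spellings have changed between versions, so verify against cli53 --help:

        pip install cli53                                    # pulls in the boto dependency; needs Python and pip on the PATH
        cli53 import example.com --file example.com.zone     # some versions read the zone from stdin instead: cli53 import example.com < example.com.zone
        # cli53 picks up credentials from the usual AWS_ACCESS_KEY_ID / AWS_SECRET_ACCESS_KEY environment variables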

    Read the article

  • TeamCity EC2 Integration via ISA Server

    - by Tim Long
    I have a TeamCity server which is actually installed on SBS 2003 Premium with ISA Server (firewall/proxy) installed. My ADSL connection has multiple IP addresses, which all resolve directly to my SBS external NIC. The NIC is therefore multi-homed and I have allocated one of the IP addresses specifically to TeamCity. In ISA, I've created an access rule to allow the traffic in. I can access my TeamCity server externally and view the web interface, that all works fine. I want to use the Amazon EC2 integration in TeamCity to launch build agents 'in the cloud'. The problem I am having is that when the agent starts, it sees the server and registers, then just sits there waiting. On the server side, the agent appears as 'disconnected'. Examining the settings, the agent's IP address appears to be that of the external NIC. What I think might be happening is that the traffic is undergoing Network Address Translation (NAT) so that TeamCity always thinks the agent is locally installed and therefore can't communicate with the actual remote agent. This seems to happen even though I have a permanent static IP address dedicated to TeamCity. So, the question is this. How can I make traffic to a specific IP address pass through the ISA server un-NATted?

    Read the article

  • How can I close a port that appears to be orphaned by Xvfb?

    - by Jim Fiorato
    I'm running Xvfb on an FC8 Amazon EC2 image. On occasion Xvfb will crash (I'm unable at the moment to find out the reason for the crash), and after crashing the TCP port appears to be orphaned: I'm unable to get a PID to kill any process that may be using it. I'm starting Xvfb with:
        Xvfb :7 -screen 0 1024x768x24 &
    Examples of what I'm working with are below; the Xvfb port is (was) 6007:
        # netstat -ap
        Active Internet connections (servers and established)
        Proto Recv-Q Send-Q Local Address               Foreign Address              State        PID/Program name
        tcp        0      0 *:ssh                       *:*                          LISTEN       1894/sshd
        tcp        0      0 *:6007                      *:*                          LISTEN       -
        tcp        0    352 ip-10-84-69-165.ec2.int:ssh c-71-194-253-238.hsd1:51689  ESTABLISHED  2981/0
        udp        0      0 *:bootpc                    *:*                                       1817/dhclient
        udp        0      0 *:bootpc                    *:*                                       1463/dhclient
        Active UNIX domain sockets (servers and established)
        Proto RefCnt Flags       Type    State      I-Node  PID/Program name     Path
        unix  2      [ ]         DGRAM              871     668/udevd            @/org/kernel/udev/udevd
        unix  2      [ ACC ]     STREAM  LISTENING  5385    1880/dbus-daemon     /var/run/dbus/system_bus_socket
        unix  6      [ ]         DGRAM              5353    1867/rsyslogd        /dev/log
        unix  2      [ ]         DGRAM              11861   2981/0
        unix  2      [ ]         DGRAM              5461    1974/crond
        unix  2      [ ]         DGRAM              5451    1904/console-kit-da
        unix  3      [ ]         STREAM  CONNECTED  5438    1880/dbus-daemon     /var/run/dbus/system_bus_socket
        unix  3      [ ]         STREAM  CONNECTED  5437    1904/console-kit-da
        unix  3      [ ]         STREAM  CONNECTED  5396    1880/dbus-daemon
        unix  3      [ ]         STREAM  CONNECTED  5395    1880/dbus-daemon
        unix  2      [ ]         DGRAM              5361    1871/rklogd
        # lsof -i
        COMMAND    PID USER   FD   TYPE DEVICE SIZE NODE NAME
        dhclient  1463 root    3u  IPv4   4704      UDP  *:bootpc
        dhclient  1817 root    4u  IPv4   5173      UDP  *:bootpc
        sshd      1894 root    3u  IPv4   5414      TCP  *:ssh (LISTEN)
        sshd      2981 root    3u  IPv4  11825      TCP  ip-10-84-69-165.ec2.internal:ssh->c-71-194-253-238.hsd1.il.comcast.net:51689 (ESTABLISHED)
    Attempting to force the port closed with iptables doesn't seem to work either:
        iptables -A INPUT -p tcp --dport 6007 -j DROP
    I'm at a loss as to how to reclaim/free the port. From what I can tell, this port will remain in this state until the EC2 instance is shut down. So, how can I close this port so I can restart Xvfb?

    Read the article

  • BitLocker with Windows DPAPI Encryption Key Management

    - by bigmac
    We have a need to enforce at-rest encryption on an iSCSI LUN that is accessible from within a Hyper-V virtual machine. We have implemented a working solution using BitLocker, using Windows Server 2012 on a Hyper-V virtual server which has iSCSI access to a LUN on our SAN. We were able to do this successfully by using the "floppy disk key storage" hack as defined in THIS POST. However, this method seems "hokey" to me. In my continued research, I found that the Amazon corporate IT team published a WHITEPAPER that outlined exactly what I was looking for in a more elegant solution, without the floppy-disk hack. On page 7 of that white paper, they state that they implemented Windows DPAPI encryption key management to securely manage their BitLocker keys. This is exactly what I am looking to do, but they state that they had to write a script to do it, and they don't provide the script or even any pointers on how to create one. Does anyone have details on how to create a "script in conjunction with a service and a key-store file protected by the server's machine account DPAPI key" (as they state in the whitepaper) to manage and auto-unlock BitLocker volumes? Any advice is appreciated.

    Read the article

  • Can't install NPM after installing Node on EC2 Linux instance?

    - by frequent
    I'm making my first attempt at getting a Node server set up on an Amazon EC2 Linux instance. I think I made it quite far. The first problem I ran into was that, while trying to make Node, the connection timed out after a while, so I needed three attempts until I got this:
        LINK(target) /home/ec2-user/node/out/Release/node: Finished
        touch /home/ec2-user/node/out/Release/obj.target/node_dtrace_header.stamp
        touch /home/ec2-user/node/out/Release/obj.target/node_dtrace_provider.stamp
        touch /home/ec2-user/node/out/Release/obj.target/node_dtrace_ustack.stamp
        touch /home/ec2-user/node/out/Release/obj.target/node_etw.stamp
        make[1]: Leaving directory `/home/ec2-user/node/out'
        ln -fs out/Release/node node
    which tells me "Node is done", although I'm not sure it is also working as it should. Following this, this and this tutorial, I'm now stuck at installing npm. I think I first cloned into the wrong folder, which always gave me error 127, but even if I do this:
        cd ~
        git clone git://github.com/isaacs/npm.git
        cd npm
        sudo -s PATH=/usr/local/bin:$PATH make install
    I'm still getting this:
        #after cloning#
        make[1]: Entering directory `/root/npm'
        node cli.js install
        bash: node: command not found
        make[1]: *** [node_modules/.bin/ronn] Error 127
        make[1]: Leaving directory `/root/npm'
        make: *** [man/man3/start.3] Error 2
    Question: since I'm pretty much a newbie at everything I'm trying here, can someone please tell me what I'm doing wrong and how to get npm to install? Also, in case I cloned into the wrong folder, is there a way to remove the "false clone", or is this not written to disk until I call make install, so I don't need to worry? Thanks for helping out!
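
    The "node: command not found" line is the real clue: the Node build above was only symlinked inside the source tree, never installed, so there is no node binary on the PATH when npm's Makefile calls it as root. A minimal sketch of the usual fix (paths follow the question; the npm checkout location is a placeholder):

        cd /home/ec2-user/node && sudo make install         # installs the node binary into /usr/local/bin
        which node && node -v                               # confirm node is now found before retrying npm
        cd /path/to/your/npm/clone && sudo make install     # re-run the npm install; its Makefile can now find node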

    Read the article
