Search Results

Search found 3463 results on 139 pages for 'amazon webservices'.

Page 17 of 139

  • SQL Express 2008 R2 on Amazon EC2 instance: tons of free memory, poor performance

    - by gravyface
    The old SQL Express 2005 was running on a low-end single-Xeon Dell server with RAID 5 7200 RPM disks and 2 GB RAM (SBS 2003). I have not done any baseline measurements on the old physical server, but the web app is used by half a dozen people (maybe 2 concurrently), so I figured "how bad can an Amazon EC2 instance be?". It's pretty horrible: a difference of 8 seconds of load time on one screen. First of all, I'm not a SQL guru, but here's what I've tried:
    - Had a Small Instance; now running a c1.medium (High-CPU Medium) Windows 2008 R2 32-bit EBS-backed instance running IIS 7.5 and SQL Express 2008 R2. No noticeable improvement.
    - Changed the page file from a fixed 256 MB to automatic.
    - Set up a striped mirror from within Disk Management across two attached 1 GB EBS volumes, and moved the database and transaction log there, leaving everything else on the boot EBS volume. No noticeable change.
    - Looked at memory: ~1000 MB of physical memory free (1.7 GB total). Changed the SQL instance to use a minimum of 1024 MB of RAM; restarted the server, no change in memory usage. SQL is still using only ~28 MB of RAM(!).
    So I'm thinking: this database is tiny (28 MB), why isn't the whole thing cached in RAM? Surely that would speed up performance. The transaction log is 241 MB, which seems large in comparison -- has it not been committed? Is it a cause of the performance degradation? I recall something about recovery models and log sizes somewhere in my travels, but I'm not positive. Another thing: the old server was running SQL Express 2005. I'm not sure if that has any impact, but I tried changing the compatibility level from SQL 2000 to 2008, and that had no effect. Anyway, what else can I try here? It seems ridiculous to throw more virtual hardware at this thing. I know I/O is going to be rough on EBS volumes, but surely others are successfully running small .NET/SQL apps on reasonably priced instances?
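    A few checks from the command line, offered as a sketch rather than a diagnosis (the instance name .\SQLEXPRESS and the database name MyAppDb are assumptions). Two relevant facts: SQL Express caps its buffer pool at 1 GB regardless of settings, and SQL Server only grows the buffer pool as pages are actually read, so ~28 MB in use for a barely-touched 28 MB database is normal rather than a fault. The log is worth checking, though: a FULL recovery model with no log backups lets the log grow without bound.

        sqlcmd -E -S .\SQLEXPRESS -Q "EXEC sp_configure 'show advanced options', 1; RECONFIGURE;"
        sqlcmd -E -S .\SQLEXPRESS -Q "EXEC sp_configure 'max server memory (MB)', 1024; RECONFIGURE;"
        rem which recovery model is each database in?
        sqlcmd -E -S .\SQLEXPRESS -Q "SELECT name, recovery_model_desc FROM sys.databases;"
        rem for a small, low-traffic site, SIMPLE usually stops the log growth
        sqlcmd -E -S .\SQLEXPRESS -Q "ALTER DATABASE MyAppDb SET RECOVERY SIMPLE;"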


  • How to move Mdadm RAID drive (EBS based) to different AWS Instance

    - by Stanley
    We have a media-rich web application hosted on AWS. We have several web servers and an NFS server. On the NFS server (a Linux server) several EBS volumes are mounted, and we've used mdadm to combine the mounted volumes into a single RAID volume. The web servers simply access the NFS storage through a mount point. Amazon has now let us know that they will be performing power maintenance on this server in a couple of days' time. Since all our media is on it, the site would be unusable for the hours while Amazon is working on it, and we want to prevent that downtime. I was thinking we could do so by temporarily setting up a new server, attaching the EBS drives (the RAID volume) to it, and having our web servers point there during the maintenance. This is a very high-risk operation, since it involves several terabytes of our production data. What would be the safe way to move our logical RAID drive (md0) to a new Amazon instance? I was hoping to start by building the new server, mounting the EBS volumes, and assembling the RAID partition using mdadm --assemble --scan before unmounting from the existing instance, so that I could first test that everything works, keeping it mounted on two instances at the same time - but I don't believe that is possible with the way filesystems work. The question "How do I move a Linux software RAID to a new machine?" suggests a way to move drives, but isn't really cloud-specific. Perhaps there are simpler ways to prevent downtime for a solution hosted in the cloud? I have considered taking an EBS snapshot, but that would replicate all the many terabytes of mounted storage, so it is not a practical solution. Any ideas?
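    A sketch of the sequence, assuming the array is /dev/md0 mounted at /mnt/media; the volume and instance IDs are placeholders. One constraint worth knowing up front: an EBS volume can only be attached to one instance at a time, so the test-while-both-mounted idea is ruled out at the API level, and the move has to be stop, detach, attach, reassemble.

        # on the old NFS server: quiesce the array and record its layout
        umount /mnt/media
        mdadm --detail /dev/md0        # note member devices and array UUID
        mdadm --stop /dev/md0
        # from a machine with the EC2 API tools: move each member volume
        ec2-detach-volume vol-11111111
        ec2-attach-volume vol-11111111 -i i-newserver -d /dev/sdf
        # ...repeat for each remaining member volume...
        # on the new server: the RAID superblocks travel with the volumes,
        # so the array reassembles under its old identity
        mdadm --assemble --scan
        mount /dev/md0 /mnt/media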


  • simple and reliable centralized logging inside Amazon VPC

    - by Nakedible
    I need to set up centralized logging for a set of servers (10-20) in an Amazon VPC. The logging should be such that no log messages are lost if any single server goes offline, or if an entire availability zone goes offline. It should also tolerate packet loss and other normal network conditions without losing or duplicating messages. It should store the messages durably, at minimum on two different EBS volumes in two availability zones, though S3 is a good place as well. It should also be realtime, so that messages arrive in two availability zones within seconds of their generation. I also need to sync logfiles not generated via syslog, so a syslog-only centralized logging solution would not fulfill all the needs, although I guess that limitation could be worked around. I have already reviewed a few solutions, and I will list them here:
    - Flume to Flume to S3: I could set up two logservers as Flume hosts which would store log messages either locally or in S3, and configure all the servers with Flume to send all messages to both servers, using the end-to-end reliability options. That way the loss of a single server shouldn't cause lost messages, and all messages would arrive in two availability zones in realtime. However, there would need to be some way to join the logs of the two servers, deduplicating all the messages delivered to both. This could be done by adding a unique id to each message on the sending side and then writing some manual deduplication runs over the logfiles. I haven't found an easy solution to the duplication problem.
    - Logstash to Logstash to ElasticSearch: I could install Logstash on the servers and have them deliver to a central server via AMQP, with the durability options turned on. However, for this to work I would need to use one of the clustering-capable AMQP implementations, or fan out the delivery just as in the Flume case. AMQP seems to be yet another moving part with several implementations and no real guidance on what works best for this sort of setup. And I'm not entirely convinced that I could get actual end-to-end durability from Logstash to ElasticSearch, assuming crashing servers in between. The fan-out solutions run into the deduplication problem again. The best solution that would seem to handle all the cases is Beetle, which seems to provide high availability and deduplication via a Redis store. However, I haven't seen any guidance on how to set that up with Logstash, and Redis is one more moving part for something that shouldn't be terribly difficult.
    - Logstash to ElasticSearch: I could run Logstash on all the servers, keep all the filtering and processing rules on the servers themselves, and just have them log directly to a remote ElasticSearch server. I think this should give me reliable logging, and I can use the ElasticSearch clustering features to share the database transparently. However, I am not sure whether the setup actually survives Logstash restarts and intermittent network problems without duplicating messages in a failover case or similar. But this approach sounds pretty promising.
    - rsync: I could just rsync all the relevant logfiles to two different servers. The reliability aspect should be perfect here, as the files should be identical to the source files after a sync is done. However, doing an rsync several times per second doesn't sound fun. Also, I need the logs to be untamperable after they have been sent, so the rsyncs would need to be in append-only mode, and log rotations mess things up unless I'm careful.
    - rsyslog with RELP: I could set up rsyslog to send messages to two remote hosts via RELP and keep a local queue to store the messages (see the sketch after this list). There is the deduplication problem again, and RELP itself might also duplicate some messages. However, this would only handle the things that log via syslog.
    None of these solutions seem terribly good, and they still have many unknowns, so I am asking for more information here from people who have set up centralized reliable logging as to what the best tools are to achieve that goal.
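    A minimal sketch of the rsyslog/RELP variant with disk-assisted queues, in rsyslog's legacy configuration syntax; the receiver host names are placeholders, and each action gets its own queue so one unreachable receiver doesn't block the other:

        $ModLoad omrelp
        # receiver in the first availability zone, behind a disk-backed queue
        $ActionQueueType LinkedList
        $ActionQueueFileName relp_az1
        $ActionQueueSaveOnShutdown on
        $ActionResumeRetryCount -1
        *.* :omrelp:logs-az1.internal:20514
        # receiver in the second availability zone
        $ActionQueueType LinkedList
        $ActionQueueFileName relp_az2
        $ActionQueueSaveOnShutdown on
        $ActionResumeRetryCount -1
        *.* :omrelp:logs-az2.internal:20514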


  • DNS with name.com and Amazon S3

    - by aledalgrande
    I have a website on a bucket in Amazon S3, and recently started to get emails from Google: "Googlebot can't access your site". When I go to Webmaster Tools and try to fetch, it does in fact fail. People in locations different from mine have also sometimes reported that they could not access the website. Out of curiosity I tried from my terminal:

        $ host xxx
        xxx is an alias for xxx.s3-website-us-west-1.amazonaws.com.
        xxx.s3-website-us-west-1.amazonaws.com is an alias for s3-website-us-west-1.amazonaws.com.
        s3-website-us-west-1.amazonaws.com has address yyy.yyy.yyy.yyy

    And when I try with dig:

        $ dig xxx
        ; <<>> DiG 9.8.3-P1 <<>> xxx
        ;; global options: +cmd
        ;; Got answer:
        ;; ->>HEADER<<- opcode: QUERY, status: NOERROR, id: 17860
        ;; flags: qr rd ra; QUERY: 1, ANSWER: 3, AUTHORITY: 0, ADDITIONAL: 0
        ;; QUESTION SECTION:
        ;xxx. IN A
        ;; ANSWER SECTION:
        xxx. 300 IN CNAME xxx.s3-website-us-west-1.amazonaws.com.
        xxx.s3-website-us-west-1.amazonaws.com. 60 IN CNAME s3-website-us-west-1.amazonaws.com.
        s3-website-us-west-1.amazonaws.com. 60 IN A yyy
        ;; Query time: 1514 msec
        ;; SERVER: 75.75.75.75#53(75.75.75.75)
        ;; WHEN: Fri Aug 22 12:32:13 2014
        ;; MSG SIZE rcvd: 127

    It seems OK to me. Why would Google tell me there is a DNS error?
    UPDATE: Google also cannot fetch robots.txt, but I can fetch it from my browser.
    UPDATE 2: I have a forwarding on the root to the www.* hostname:

        $ dig thenifty.me
        ; <<>> DiG 9.8.3-P1 <<>> thenifty.me
        ;; global options: +cmd
        ;; Got answer:
        ;; ->>HEADER<<- opcode: QUERY, status: NOERROR, id: 49286
        ;; flags: qr rd ra; QUERY: 1, ANSWER: 0, AUTHORITY: 1, ADDITIONAL: 0
        ;; QUESTION SECTION:
        ;thenifty.me. IN A
        ;; AUTHORITY SECTION:
        thenifty.me. 300 IN SOA ns1hwy.name.com. support.name.com. 1 10800 3600 604800 300
        ;; Query time: 148 msec
        ;; SERVER: 75.75.75.75#53(75.75.75.75)
        ;; WHEN: Fri Aug 22 13:32:56 2014
        ;; MSG SIZE rcvd: 88
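    A quick way to see something closer to what Googlebot sees is to ask a public resolver directly; note that the UPDATE 2 output for the bare domain has ANSWER: 0 (NOERROR, but no A record at all), and a response like that on a hostname Google is crawling is exactly the kind of thing it reports as a DNS error:

        dig @8.8.8.8 thenifty.me A +short
        dig @8.8.8.8 www.thenifty.me A +short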


  • Amazon API ItemSearch returns (400) Bad Request.

    - by BuzzBubba
    I'm using a simple example from the Amazon documentation for ItemSearch, and I get a strange error: "The remote server returned an unexpected response: (400) Bad Request." This is the code:

        public static void Main()
        {
            // Remember to create an instance of the Amazon service, including your Access ID.
            // (Note: this first client is never used; 'client' below duplicates it.)
            AWSECommerceServicePortTypeClient service = new AWSECommerceServicePortTypeClient(
                new BasicHttpBinding(),
                new EndpointAddress("http://webservices.amazon.com/onca/soap?Service=AWSECommerceService"));
            AWSECommerceServicePortTypeClient client = new AWSECommerceServicePortTypeClient(
                new BasicHttpBinding(),
                new EndpointAddress("http://webservices.amazon.com/onca/soap?Service=AWSECommerceService"));

            // prepare an ItemSearch request
            ItemSearchRequest request = new ItemSearchRequest();
            request.SearchIndex = "Books";
            request.Title = "Harry+Potter";
            request.ResponseGroup = new string[] { "Small" };

            ItemSearch itemSearch = new ItemSearch();
            itemSearch.Request = new ItemSearchRequest[] { request };
            itemSearch.AWSAccessKeyId = accessKeyId;

            // issue the ItemSearch request
            try
            {
                ItemSearchResponse response = client.ItemSearch(itemSearch);

                // write out the results
                foreach (var item in response.Items[0].Item)
                {
                    Console.WriteLine(item.ItemAttributes.Title);
                }
            }
            catch (Exception e)
            {
                Console.ForegroundColor = ConsoleColor.Red;
                Console.WriteLine(e.Message);
                Console.ForegroundColor = ConsoleColor.White;
                Console.WriteLine("Press any key to quit...");
                Clipboard.SetText(e.Message);
            }
            Console.ReadKey();
        }

    What is wrong?
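    Two things worth checking, offered tentatively rather than as the answer: the Product Advertising API has rejected unsigned requests since mid-2009, and this plain BasicHttpBinding sample predates request signing, which would explain a blanket 400 on an otherwise well-formed call; separately, SOAP fields are not URL-encoded, so the '+' in the title is searched literally:

        request.Title = "Harry Potter";  // a real space; SOAP fields aren't URL-encoded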


  • Amazon EC2 pem file stopped working suddenly

    - by Jashwant
    I was connecting to Amazon EC2 through SSH and it was working well, but all of a sudden it stopped working. I am not able to connect anymore with the same key file. What could have gone wrong? Here's the debug info:

        ssh -vvv -i ~/Downloads/mykey.pem [email protected]
        OpenSSH_6.1p1 Debian-4, OpenSSL 1.0.1c 10 May 2012
        debug1: Reading configuration data /etc/ssh/ssh_config
        debug1: /etc/ssh/ssh_config line 19: Applying options for *
        debug2: ssh_connect: needpriv 0
        debug1: Connecting to ec2-54-222-60-78.eu.compute.amazonaws.com [54.229.60.78] port 22.
        debug1: Connection established.
        debug3: Incorrect RSA1 identifier
        debug3: Could not load "/home/jashwant/Downloads/mykey.pem" as a RSA1 public key
        debug1: identity file /home/jashwant/Downloads/mykey.pem type -1
        debug1: identity file /home/jashwant/Downloads/mykey.pem-cert type -1
        debug1: Remote protocol version 2.0, remote software version OpenSSH_5.9p1 Debian-5ubuntu1.1
        debug1: match: OpenSSH_5.9p1 Debian-5ubuntu1.1 pat OpenSSH_5*
        debug1: Enabling compatibility mode for protocol 2.0
        debug1: Local version string SSH-2.0-OpenSSH_6.1p1 Debian-4
        debug2: fd 3 setting O_NONBLOCK
        debug3: load_hostkeys: loading entries for host "ec2-54-222-60-78.eu.compute.amazonaws.com" from file "/home/jashwant/.ssh/known_hosts"
        debug3: load_hostkeys: found key type ECDSA in file /home/jashwant/.ssh/known_hosts:4
        debug3: load_hostkeys: loaded 1 keys
        debug3: order_hostkeyalgs: prefer hostkeyalgs: [email protected],[email protected],[email protected],ecdsa-sha2-nistp256,ecdsa-sha2-nistp384,ecdsa-sha2-nistp521
        debug1: SSH2_MSG_KEXINIT sent
        debug1: SSH2_MSG_KEXINIT received
        debug2: kex_parse_kexinit: ecdh-sha2-nistp256,ecdh-sha2-nistp384,ecdh-sha2-nistp521,diffie-hellman-group-exchange-sha256,diffie-hellman-group-exchange-sha1,diffie-hellman-group14-sha1,diffie-hellman-group1-sha1
        debug2: kex_parse_kexinit: [email protected],[email protected],[email protected],ecdsa-sha2-nistp256,ecdsa-sha2-nistp384,ecdsa-sha2-nistp521,[email protected],[email protected],[email protected],[email protected],ssh-rsa,ssh-dss
        debug2: kex_parse_kexinit: aes128-ctr,aes192-ctr,aes256-ctr,arcfour256,arcfour128,aes128-cbc,3des-cbc,blowfish-cbc,cast128-cbc,aes192-cbc,aes256-cbc,arcfour,[email protected]
        debug2: kex_parse_kexinit: aes128-ctr,aes192-ctr,aes256-ctr,arcfour256,arcfour128,aes128-cbc,3des-cbc,blowfish-cbc,cast128-cbc,aes192-cbc,aes256-cbc,arcfour,[email protected]
        debug2: kex_parse_kexinit: hmac-md5,hmac-sha1,[email protected],hmac-sha2-256,hmac-sha2-512,hmac-ripemd160,[email protected],hmac-sha1-96,hmac-md5-96
        debug2: kex_parse_kexinit: hmac-md5,hmac-sha1,[email protected],hmac-sha2-256,hmac-sha2-512,hmac-ripemd160,[email protected],hmac-sha1-96,hmac-md5-96
        debug2: kex_parse_kexinit: none,[email protected],zlib
        debug2: kex_parse_kexinit: none,[email protected],zlib
        debug2: kex_parse_kexinit:
        debug2: kex_parse_kexinit:
        debug2: kex_parse_kexinit: first_kex_follows 0
        debug2: kex_parse_kexinit: reserved 0
        debug2: kex_parse_kexinit: ecdh-sha2-nistp256,ecdh-sha2-nistp384,ecdh-sha2-nistp521,diffie-hellman-group-exchange-sha256,diffie-hellman-group-exchange-sha1,diffie-hellman-group14-sha1,diffie-hellman-group1-sha1
        debug2: kex_parse_kexinit: ssh-rsa,ssh-dss,ecdsa-sha2-nistp256
        debug2: kex_parse_kexinit: aes128-ctr,aes192-ctr,aes256-ctr,arcfour256,arcfour128,aes128-cbc,3des-cbc,blowfish-cbc,cast128-cbc,aes192-cbc,aes256-cbc,arcfour,[email protected]
        debug2: kex_parse_kexinit: aes128-ctr,aes192-ctr,aes256-ctr,arcfour256,arcfour128,aes128-cbc,3des-cbc,blowfish-cbc,cast128-cbc,aes192-cbc,aes256-cbc,arcfour,[email protected]
        debug2: kex_parse_kexinit: hmac-md5,hmac-sha1,[email protected],hmac-sha2-256,hmac-sha2-256-96,hmac-sha2-512,hmac-sha2-512-96,hmac-ripemd160,[email protected],hmac-sha1-96,hmac-md5-96
        debug2: kex_parse_kexinit: hmac-md5,hmac-sha1,[email protected],hmac-sha2-256,hmac-sha2-256-96,hmac-sha2-512,hmac-sha2-512-96,hmac-ripemd160,[email protected],hmac-sha1-96,hmac-md5-96
        debug2: kex_parse_kexinit: none,[email protected]
        debug2: kex_parse_kexinit: none,[email protected]
        debug2: kex_parse_kexinit:
        debug2: kex_parse_kexinit:
        debug2: kex_parse_kexinit: first_kex_follows 0
        debug2: kex_parse_kexinit: reserved 0
        debug2: mac_setup: found hmac-md5
        debug1: kex: server->client aes128-ctr hmac-md5 none
        debug2: mac_setup: found hmac-md5
        debug1: kex: client->server aes128-ctr hmac-md5 none
        debug1: sending SSH2_MSG_KEX_ECDH_INIT
        debug1: expecting SSH2_MSG_KEX_ECDH_REPLY
        debug1: Server host key: ECDSA d8:05:8e:fe:37:2d:1e:2c:f1:27:c2:e7:90:7f:45:48
        debug3: load_hostkeys: loading entries for host "ec2-54-222-60-78.eu.compute.amazonaws.com" from file "/home/jashwant/.ssh/known_hosts"
        debug3: load_hostkeys: found key type ECDSA in file /home/jashwant/.ssh/known_hosts:4
        debug3: load_hostkeys: loaded 1 keys
        debug3: load_hostkeys: loading entries for host "54.229.60.78" from file "/home/jashwant/.ssh/known_hosts"
        debug3: load_hostkeys: found key type ECDSA in file /home/jashwant/.ssh/known_hosts:5
        debug3: load_hostkeys: loaded 1 keys
        debug1: Host 'ec2-54-222-60-78.eu.compute.amazonaws.com' is known and matches the ECDSA host key.
        debug1: Found key in /home/jashwant/.ssh/known_hosts:4
        debug1: ssh_ecdsa_verify: signature correct
        debug2: kex_derive_keys
        debug2: set_newkeys: mode 1
        debug1: SSH2_MSG_NEWKEYS sent
        debug1: expecting SSH2_MSG_NEWKEYS
        debug2: set_newkeys: mode 0
        debug1: SSH2_MSG_NEWKEYS received
        debug1: Roaming not allowed by server
        debug1: SSH2_MSG_SERVICE_REQUEST sent
        debug2: service_accept: ssh-userauth
        debug1: SSH2_MSG_SERVICE_ACCEPT received
        debug2: key: jashwant@jashwant-linux (0x7f827cbe4f00)
        debug2: key: /home/jashwant/Downloads/mykey.pem ((nil))
        debug1: Authentications that can continue: publickey
        debug3: start over, passed a different list publickey
        debug3: preferred gssapi-keyex,gssapi-with-mic,publickey,keyboard-interactive,password
        debug3: authmethod_lookup publickey
        debug3: remaining preferred: keyboard-interactive,password
        debug3: authmethod_is_enabled publickey
        debug1: Next authentication method: publickey
        debug1: Offering RSA public key: jashwant@jashwant-linux
        debug3: send_pubkey_test
        debug2: we sent a publickey packet, wait for reply
        debug1: Authentications that can continue: publickey
        debug1: Trying private key: /home/jashwant/Downloads/mykey.pem
        debug1: read PEM private key done: type RSA
        debug3: sign_and_send_pubkey: RSA 9b:7d:9f:2e:7a:ef:51:a2:4e:fb:0c:c0:e8:d4:66:12
        debug2: we sent a publickey packet, wait for reply
        debug1: Authentications that can continue: publickey
        debug2: we did not send a packet, disable method
        debug1: No more authentication methods to try.
        Permission denied (publickey).

    I've already googled everything and checked:
    - The public DNS is the same (it hasn't changed).
    - The username is ubuntu, as it's an Ubuntu AMI (used the same earlier).
    - Permissions are 400 on the mykey.pem file.
    - The SSH port is enabled via security groups (used the same earlier).
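    In the log, the server keeps answering "Authentications that can continue: publickey" after the key is offered, i.e. it is rejecting this specific key rather than the method. One sanity check worth running (a suggestion, not a diagnosis): re-derive the public key from the .pem and compare it with what the instance actually holds, via another account, a rescue instance with the root volume attached, or the console's system log:

        ssh-keygen -y -f ~/Downloads/mykey.pem
        # compare the printed line against /home/ubuntu/.ssh/authorized_keys
        # on the instance; a mismatch fits a sudden Permission denied (publickey)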


  • Data Structure Behind Amazon S3's Keys (Filtering Data Structure)

    - by dimo414
    I'd like to implement a data structure similar to the lookup functionality of Amazon S3. For those of you who don't know what I'm talking about, Amazon S3 stores all files at the root, but allows you to look up groups of files by common prefixes in their names, thereby replicating the power of a directory tree without the complexity of it. The catch is, both lookup and filter operations are O(1) (or close enough that even on very large buckets - S3's disk equivalents - both operations might as well be O(1)). So in short, I'm looking for a data structure that functions like a hash map, with the added benefit of efficient (at the very least not O(n)) filtering. The best I can come up with is extending HashMap so that it also contains a (sorted) list of contents, doing a binary search for the range that matches the prefix, and returning that set. This seems slow to me, but I can't think of any other way to do it. Does anyone know either how Amazon does it, or a better way to implement this data structure?
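    The sorted-list-plus-binary-search idea is essentially the standard answer; in Java it comes almost for free if the HashMap is paired with (or replaced by) a TreeMap, whose red-black tree gives O(log n) point lookups and cheap prefix ranges with no manual bisecting. A sketch (class and method names are made up):

        import java.util.NavigableMap;
        import java.util.TreeMap;

        public class PrefixStore {
            // red-black tree: get/put are O(log n) rather than O(1),
            // which is usually indistinguishable in practice
            private final TreeMap<String, byte[]> objects = new TreeMap<>();

            public void put(String key, byte[] value) { objects.put(key, value); }

            public byte[] get(String key) { return objects.get(key); }

            // every key starting with the prefix: the half-open range from
            // the prefix itself up to the smallest string beyond all matches
            public NavigableMap<String, byte[]> filter(String prefix) {
                return objects.subMap(prefix, true, prefix + Character.MAX_VALUE, false);
            }
        }

    The subMap view is created in O(log n) and iterated in O(k) for k matches, so a filter never touches the non-matching keys. Keeping the TreeMap alongside a HashMap restores O(1) reads, at the cost of double bookkeeping on writes.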


  • Safety of Amazon Community AMIs

    - by Koran
    Hi, I am working on a scientific project and would like to use Amazon EC2 to host it. I was going through the existing AMIs to create my own, and I found one AMI in the community list that fits perfectly. My question is the following: are the community AMIs safe? Is it possible for a malicious user to create an AMI in such a way that, say, usernames/passwords can be sent to a remote server or something? Does Amazon do any sort of checking to make sure that cannot happen? I couldn't find the answers to these on the Amazon site. I am pretty sure you guys would have faced the same issues - could you please provide your answers? Regards, K


  • Amazon S3 as secure backup without multiple invoices

    - by Tom Viner
    I'm storing copies of database backups on Amazon S3 using the Python boto library. But I worry that if my web server were hacked, those backups could be deleted using the same credentials I need to do the upload. OK, so I know you can grant permissions to another Amazon email address, so I can imagine doing that after an upload and then removing the original user's write access, BUT in this scenario I now end up with two accounts and two sets of invoices to give to accounting every month. Is there a solution to this that doesn't require a new Amazon account for each web server I run?
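    One way out that keeps a single paying account, assuming IAM is available to you: give each web server its own IAM user whose policy only allows writing new objects, so a compromised server can add backups but not delete them; turning on bucket versioning additionally protects against overwrites. A sketch of such a policy (the bucket name is a placeholder):

        {
          "Statement": [
            {
              "Effect": "Allow",
              "Action": ["s3:PutObject"],
              "Resource": "arn:aws:s3:::my-backup-bucket/*"
            }
          ]
        }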


  • Custom Windows AMI for Amazon EC2

    - by cutyMiffy
    Hello: I am new to the Amazon EC2 services and I am planning to host a Windows-based VM on Amazon. I have a lot of fuzziness about AMIs on Amazon. So my first question would be: am I able to install software and frameworks (e.g. Silverlight, .NET, etc.) on an existing Windows instance that I select? And if not, how can I create a custom AMI, so that I can install the software I need before I submit it to Amazon and launch? Thanks a lot for your help :) Deeply appreciated
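    For what it's worth, the usual flow is exactly the first option: launch a stock Windows AMI, RDP in and install whatever you need, then capture the result as your own AMI to launch from later. With the EC2 API tools, that capture step looks roughly like this (the instance ID and name are placeholders):

        ec2-create-image i-1234abcd --name "win2008-with-dotnet" --description "Base Windows AMI plus .NET and Silverlight"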


  • Godaddy registrar and Amazon Web Services EC2/Route 53

    - by soshannad
    OK - I currently have a site hosted on GoDaddy, and they are the registrar for my domain. My goal is to have the site hosted on AWS on an EC2 server, which I have already set up and which is ready to go. In order to migrate my domain name to Amazon I have set up an A record with GoDaddy and another A record with Route 53 (Amazon's DNS service), both of them pointing to the new static IP of the AWS site. My question is: GoDaddy told me that I should leave my nameservers with GoDaddy, since my email is with them, and set up an MX record with Amazon pointing to it. Does this sound correct? Can you leave the nameservers with GoDaddy and have A records pointed at the new IP? Are there any benefits/cons to this? *FOR THE RECORD - My site is DOWN right now after making the change - GoDaddy says it will take 2 hours to come back, but I'm not sure if their nameserver recommendation is correct.
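    One thing worth being clear on: resolvers only ever consult the nameservers delegated at the registrar, so while those remain GoDaddy's, the Route 53 records are inert and only the GoDaddy zone matters. A hypothetical pair of records in that zone doing what GoDaddy described (names and address are placeholders):

        www.example.com.  3600  IN  A   203.0.113.10               ; Elastic IP of the EC2 host
        example.com.      3600  IN  MX  10 smtp.secureserver.net.  ; mail stays with GoDaddy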


  • Error after installing mysqlnd_ms

    - by user997226
    I am working on Amazon EC2 using the Amazon Linux AMI, which is based on CentOS. I have installed php54 and php54-mysqlnd. I then do a "sudo pecl install mysqlnd_ms", which installs fine, and I add the extension to the php.ini file. Then I start httpd, and when I do, in the error log I see:

        PHP Warning: PHP Startup: Unable to load dynamic library '/usr/lib64/php/modules/mysqlnd_ms.so' - /usr/lib64/php/modules/mysqlnd_ms.so: undefined symbol: mysqlnd_globals in Unknown on line 0

    Googling this led me to https://bugs.php.net/bug.php?id=62276&edit=1 and http://forums.famillecollet.com/viewtopic.php?id=242, neither of which helped me much. I am trying to stick to stock pre-built binaries from the main YUM repo if possible. What would be the best solution here? Thanks in advance
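    The "undefined symbol: mysqlnd_globals" failure at startup usually means the mysqlnd_ms plugin was loaded before mysqlnd itself. A low-impact thing to try, assuming the stock /etc/php.d layout on this AMI: drop the extension line from php.ini and load it from its own file whose name sorts after the mysqlnd one:

        # "zz-" just makes this file load last; the name is otherwise arbitrary
        echo "extension=mysqlnd_ms.so" | sudo tee /etc/php.d/zz-mysqlnd_ms.ini
        sudo service httpd restart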


  • Restoring WordPress EC2 instance from snapshot results in 403 Forbidden error

    - by Eric Matthew Turano
    This problem has been perplexing me for weeks now. Here's how the issue goes:
    - Launch an Amazon Linux 64-bit instance, successfully install WordPress; the site is active with no issues.
    - Create a snapshot of the instance's root volume.
    - Shut down the instance.
    - Create a volume from the snapshot, attach it to the instance, and reboot the instance.
    - Associate an Elastic IP with the instance.
    Once that's done and I try logging onto the site, I am redirected to myurl.com/wp-admin/install.php and greeted with this message:

        Forbidden: You don't have permission to access /wp-admin/install.php on this server.
        Apache/2.2.25 (Amazon) Server at www.myurl.com Port 80

    Port 80 is open in the inbound security group settings, so that's not the issue. Keep in mind all I am doing is merely creating a new volume and attaching it to the same instance, and this issue comes up. What am I doing wrong, and how can I create a complete backup of my instance without this error occurring?
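    A 403 plus a bounce to install.php suggests Apache is serving a docroot that isn't the one the restored volume holds, or can't traverse it. A few hedged checks after attaching the volume (paths assume a stock Amazon Linux Apache layout):

        lsblk                                   # is the restored volume attached and mounted where you expect?
        mount | grep xvd                        # or is the site still reading the boot volume?
        ls -l /var/www/html                     # apache needs read/execute down this tree
        sudo tail -n 50 /var/log/httpd/error_log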


  • How to use instances with EC2 load balancing?

    - by Slay
    I have some questions about instances and load balancing in Amazon EC2. I can configure an instance, but I do not understand how to deal with many instances. Currently, my instance is loaded with MySQL, PHP, etc. (ALL IN ONE). How do I ensure my instances scale? E.g. if I have a site that is supposed to be handled by 3 instances and Amazon RDS, do I need to host my code base on all 3 instances? How do people normally do this? Facebook, for example, has 1000+ servers - do they host their code base on all 1000+ servers? Thanks
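    Broadly, yes: each application instance carries a full copy of the code, with a load balancer in front and the shared state pushed down into RDS or similar. A sketch of the simplest deployment fan-out (IPs, user, and paths are invented):

        # same code on every instance behind the load balancer
        for host in 10.0.0.11 10.0.0.12 10.0.0.13; do
            rsync -az --delete ./app/ ec2-user@$host:/var/www/app/
        done

    At larger fleet sizes the rsync loop gets replaced by baked AMIs or a deployment tool, but the principle - identical code everywhere, state kept out of the instances - is the same one the 1000-server sites use.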


  • AWS VPC - why have a private subnet at all?

    - by jkim
    In Amazon VPC, the VPC creation wizard allows one to create a single "public subnet" or have the wizard create a "public subnet" and a "private subnet". Initially, the public and private subnet option seemed good for security reasons, allowing webservers to be put in the public subnet and database servers to go in the private subnet. But I've since learned that EC2 instances in the public subnet are not reachable from the Internet unless you associate an Amazon ElasticIP with the EC2 instance. So it seems with just a single public subnet configuration, one could just opt to not associate an ElasticIP with the database servers and end up with the same sort of security. Can anyone explain the advantages of a public + private subnet configuration? Are the advantages of this config more to do with auto-scaling, or is it actually less secure to have a single public subnet?
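    The difference that matters is in the route tables rather than the addressing; roughly (illustrative values, not taken from the question):

        public subnet  10.0.0.0/24:  0.0.0.0/0 -> igw-xxxxxxxx  (internet gateway)
        private subnet 10.0.1.0/24:  0.0.0.0/0 -> i-natbox      (NAT instance)

    An instance without an Elastic IP in the public subnet is unreachable today, but a single mis-click that associates an address exposes it immediately; in the private subnet there is no inbound route from the Internet at all, so the database tier stays unreachable even if someone attaches an address. It's defense in depth rather than a different day-one posture.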


  • Why are access modifiers on web service proxy methods important

    - by cand
    I'm creating an interface to an external web service with a C# client generated from WSDL. In this client class I have methods with signatures like public ResponseType InvokeMethod(RequestType request). I want to change the access modifier to protected, but then the web service responds with a "web service method name is not valid" exception. Do you know why that is? I understand that maintaining the method name can be important for some reasons, but why can't I change the access modifier? Shouldn't it be a matter of my own code what access I want to give to this method? Thanks for all the answers in advance.
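    The serialization and dispatch machinery discovers methods by reflecting over the generated type's public surface, which is presumably why demoting one breaks the call. A sketch of the usual workaround: leave the generated proxy untouched and restrict access one layer up (all type names here are stand-ins for the generated ones):

        public class ServiceGateway
        {
            private readonly GeneratedServiceClient client = new GeneratedServiceClient();

            // callers get the access level you choose, while the generated
            // InvokeMethod itself stays public, as the tooling expects
            protected ResponseType InvokeMethod(RequestType request)
            {
                return client.InvokeMethod(request);
            }
        }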


  • Keep website and webservices warm with zero coding

    - by oazabir
    If you want to keep your websites or web services warm, and save users from seeing the long warm-up time after an application pool recycle, an IIS restart, a new code deployment, or even a Windows restart, you can use the tinyget command line tool, which comes with the IIS Resource Kit, to hit the site and services and keep them warm. Here's how. First, get tinyget: download and install the IIS 6.0 Resource Kit on some PC, then copy tinyget.exe from "c:\program files...\IIS 6.0 ResourceKit\Tools\tinyget" to the server where your IIS 6.0 or IIS 7 is running. Then create a batch file that will hit the pages and web services. Something like this:

        SET TINYGET=C:\Program Files (x86)\IIS Resources\TinyGet\tinyget.exe
        "%TINYGET%" -srv:dropthings.omaralzabir.com -uri:http://dropthings.omaralzabir.com/ -status:200
        "%TINYGET%" -srv:dropthings.omaralzabir.com -uri:http://dropthings.omaralzabir.com/WidgetService.asmx?WSDL -status:200

    First I am hitting the homepage to keep the webpage warm. Then I am hitting the webservice URL with the ?WSDL parameter, which allows ASP.NET to compile the service if it is not already compiled, and to walk through all the operations and reflect on them, thus loading all related DLLs into memory and reducing the warm-up time when hit. Tinyget takes the server's name or IP in the -srv parameter and the actual URI in -uri. I have specified the expected HTTP response code in the -status parameter; it ensures the site is alive and returning HTTP 200. Besides just warming up a site, you can do some load testing on it. Tinyget can run in multiple threads and run loops to hit a URL. You can literally blow up a site with commands like this:

        "%TINYGET%" -threads:30 -loop:100 -srv:google.com -uri:http://www.google.com/ -status:200

    Tinyget is also pretty useful for running automated tests. You can record HTTP posts in a text file and then use it to make HTTP posts to a page, with a matching clause to check for a certain string in the output and ensure the correct response is given. So with some simple command line commands, you can warm up a site, run some transactions, validate that the site gives the correct response, and run a load test to ensure the server performs well. A very cheap way to get a lot done.


  • Amazon.com Cutting Off Colorado Affiliates

    - by Joe Mayo
    I received an email from Amazon.com today, essentially cutting off my affiliate status because I'm in Colorado. Colorado recently passed legislation that requires retailers to either collect sales tax for on-line transactions or engage in an onerous process that makes you wish you had collected sales tax. After I Tweeted this, Mike Jones tweeted a link to the legislation. Here's an excerpt from Amazon.com's email:

    "Dear Colorado-based Amazon Associate:

    We are writing from the Amazon Associates Program to inform you that the Colorado government recently enacted a law to impose sales tax regulations on online retailers. The regulations are burdensome and no other state has similar rules. The new regulations do not require online retailers to collect sales tax. Instead, they are clearly intended to increase the compliance burden to a point where online retailers will be induced to "voluntarily" collect Colorado sales tax -- a course we won't take. We and many others strongly opposed this legislation, known as HB 10-1193, but it was enacted anyway.

    Regrettably, as a result of the new law, we have decided to stop advertising through Associates based in Colorado. We plan to continue to sell to Colorado residents, however, and will advertise through other channels, including through Associates based in other states.

    There is a right way for Colorado to pursue its revenue goals, but this new law is a wrong way. As we repeatedly communicated to Colorado legislators, including those who sponsored and supported the new law, we are not opposed to collecting sales tax within a constitutionally-permissible system applied even-handedly. The US Supreme Court has defined what would be constitutional, and if Colorado would repeal the current law or follow the constitutional approach to collection, we would welcome the opportunity to reinstate Colorado-based Associates.

    You may express your views of Colorado's new law to members of the General Assembly and to Governor Ritter, who signed the bill.

    Your Associates account has been closed as of March 8, 2010, and we will no longer pay advertising fees for customers you refer to Amazon.com after that date. Please be assured that all qualifying advertising fees earned prior to March 8, 2010, will be processed and paid in accordance with our regular payment schedule. Based on your account closure date of March 8, any final payments will be paid by May 31, 2010.

    We have enjoyed working with you and other Colorado-based participants in the Amazon Associates Program, and wish you all the best in your future.

    Best Regards,

    The Amazon Associates Team"


  • Amazon EC2 Instance - How to find SQL Server Password

    - by Prashant
    Hi, I have created an Amazon EC2 instance that provides Windows Server 2008 with SQL Server 2008 pre-installed. In order to use SQL Server for creating databases, or restoring backups of the databases I have on my local machine, I need the "sa" password for SQL Server 2008. I have tried the following, with no luck:
    - "sa" as the password
    - a blank password
    - the same password as the admin password for my EC2 instance
    Could someone please guide me on how to get started with the Amazon EC2 datacenter with respect to the "sa" password. Thanks
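    If you can RDP in as the Windows administrator, there may be no old password to recover at all: on these images the admin account can typically connect with Windows authentication and simply set sa to a value of your own choosing. A sketch (assumes the default instance, a sysadmin-privileged Windows login, and that mixed-mode authentication is, or gets, enabled):

        sqlcmd -E -S localhost -Q "ALTER LOGIN sa ENABLE; ALTER LOGIN sa WITH PASSWORD = 'Str0ng!NewPassword';"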


  • Amazon S3 enforcing access control

    - by KandadaBoggu
    I have several PDF files stored in Amazon S3. Each file is associated with a user, and only the file's owner should be able to access it. I have enforced this on my download page, but the actual PDF link points to an Amazon S3 URL, which is accessible to anybody. How do I enforce the access-control rules for this URL (without making my server a proxy for all the PDF download links)?
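    The standard answer here is query-string authentication: keep the objects private and have the download page hand out short-lived signed URLs, so the link itself carries a time-limited signature instead of being public. A sketch with the boto library (bucket and key names are placeholders):

        import boto

        # placeholder credentials; in practice read these from configuration
        conn = boto.connect_s3('AKIA...', 'secret...')
        # 300 seconds of validity; after that the link simply stops working
        url = conn.generate_url(expires_in=300, method='GET',
                                bucket='my-pdf-bucket', key='user42/statement.pdf')
        # hand 'url' to the authenticated user from the download page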


  • Existing Maven Project on Amazon Beanstalk

    - by Abhishek Ranjan
    I am trying to migrate my existing Maven project to Amazon Beanstalk. Looking at Amazon's documentation, I don't see any instructions for deploying a Maven project. I tried to upload the generated WAR file, but the application does not come up on Beanstalk. I would like to know if there is any existing documentation for deploying to Beanstalk from Maven. I have a Spring Data JPA / Spring MVC application; do I need any specific configuration, or to move configuration files around within the WAR file?
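    For what it's worth, Beanstalk's Java container just runs Tomcat and deploys whatever WAR you upload, so there is no Maven-specific step beyond building one; the part that usually bites is application configuration that assumed a local environment (database URLs and the like). A minimal sketch:

        mvn clean package    # the WAR under target/ is what you upload to Beanstalk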


  • Amazon S3 components for Delphi 2010

    - by Mick
    Besides the Amazon Integrator from /n software, are there any other Amazon S3 components available that can be used with Delphi 2010? I would use the one from /n software, but it has some issues (e.g. GetObjectInfo doesn't work if the object is stored in a specific location) and limitations (e.g. copying objects doesn't let you define replacement meta-data). I don't have the time or resources to create such a component myself. Thanks!


  • Amazon API library for Python?

    - by Kevin
    What Python libraries do folks use for querying Amazon product data? (Amazon Associates Web Service - used to be called E-Commerce API, or something along those lines). Based on my research, PyAWS (http://pyaws.sourceforge.net/) seems okay, but still pretty raw (and hasn't been updated in a while). Wondering if there's an obvious canonical library that I'm just missing.


  • Amazon EC2 spot instances - is there a catch?

    - by gareth_bowles
    I needed to start a new EC2 instance today and decided to try out the new spot instances, where you can reduce your instance cost by bidding on the maximum per-hour price you're prepared to pay. Since today's spot price was only 3.5c/hour, compared with 8.5c/hour for an on-demand instance, I was wondering: if I just bid a really high price, say 10c/hour, can I effectively be sure of getting a much cheaper long-running instance than an on-demand one (since spot instances are only charged at the current spot price)? I suppose it's theoretically possible for the spot price to go over the on-demand price, but as far as I can tell from the data on the AWS site, the spot price has always stayed well below it.

