Search Results

Search found 2852 results on 115 pages for 'amazon simpledb'.

Page 38/115 | < Previous Page | 34 35 36 37 38 39 40 41 42 43 44 45  | Next Page >

  • Cloud service and IM protocol advice, for a backend to group chat mobile app

    - by Jonathan
    Overview: I'm going to develop an app on Android and iOS. It will allow users to set up group 'chat rooms' and talk in chat rooms set up by other users. The service needs to be highly scalable, so that it could accommodate a massive increase in users overnight (we can only dream).

    Chat requirements: The chat protocol used should be flexible: it should let me determine who can view/post in 'chat rooms' based on certain other factors, as determined by the creator/first poster of the particular 'chat room'. It should also allow users to simply install the app and begin using the service after providing only a simple nickname (which could be changed later).

    Chat protocol plans: Having looked around, I think the XMPP protocol is the best candidate; in particular, the Multi-User Chat extension looks like what I'll need. Would this be most suited to my requirements, or do you know of another potential solution?

    Cloud service: I have been deciding between Amazon Web Services, Google App Engine and Windows Azure. I'm coming to the conclusion that Azure will be best, as it is easier to manage than AWS (ease of scalability will be a key factor in the design), I think it may be less restricted than GAE, and Azure will soon have toolkits that allow easy interfacing with both Android and iOS phones. Is this the decision you would have made, or would you recommend/look into other cloud services?

    General project philosophy: I have only recently started looking into this project's feasibility and am no expert on any of its aspects, so wherever possible I will leave the actual implementations to experts, i.e. choosing a higher-level cloud service, using a well-documented implementation of a proven, reliable group chat protocol, etc.

    My background: I have some programming knowledge from a computer science degree. The main languages I've used are Java and Python, but I don't want this to affect design decisions for the project. The most appropriate languages for the task should be used, i.e. I don't mind learning a lot of new skills (my current programming level is relatively basic anyway).

    Thanks for reading; any advice you have about any aspect would be greatly appreciated :-)

    Read the article

  • Ubuntu: Move fsbackup backups to Amazon S3

    - by Alexander Gladysh
    I have a legacy server (Ubuntu 9.10 Karmic x86) where the previous admin set up backups with fsbackup. This server lives in a VPS (under some kind of Xen), and it is low on HDD space (16 GB total). It has now come to the point where the fsbackup backups take more space than the rest of the data in the system. The filesystem is 100% full, and I have already cleaned up everything I could, aside from the actual backups. I do not have any experience managing fsbackup, and I do not want to break or lose the backups. Googling fsbackup gives surprisingly low-quality results... Here is what my backups look like:

        $ sudo ls -lh /var/archives
        total 8.1G
        -rw-rw---- 1 root root 318 2011-01-06 06:26 myserver-20110106.md5
        -rw-rw---- 1 root root 258 2011-01-07 06:26 myserver-20110107.md5
        -rw-rw---- 1 root root 318 2011-01-08 06:26 myserver-20110108.md5
        -rw-rw---- 1 root root 318 2011-01-09 06:26 myserver-20110109.md5
        -rw-rw---- 1 root root 346 2011-01-10 06:43 myserver-20110110.md5
        -rw-rw---- 1 root root 14M 2011-01-06 06:26 myserver-all-mysql-databases.20110106.sql.bz2
        -rw-rw---- 1 root root 14M 2011-01-07 06:26 myserver-all-mysql-databases.20110107.sql.bz2
        -rw-rw---- 1 root root 14M 2011-01-08 06:26 myserver-all-mysql-databases.20110108.sql.bz2
        -rw-rw---- 1 root root 14M 2011-01-09 06:26 myserver-all-mysql-databases.20110109.sql.bz2
        -rw-rw---- 1 root root 862 2011-01-10 06:43 myserver-all-mysql-databases.20110110.sql.bz2
        -rw-rw---- 1 root root 827K 2011-01-03 06:25 myserver-etc.20110103.master.tar.gz
        -rw-rw---- 1 root root 16K 2011-01-06 06:25 myserver-etc.20110106.tar.gz
        -rw-rw---- 1 root root 16K 2011-01-07 06:25 myserver-etc.20110107.tar.gz
        -rw-rw---- 1 root root 16K 2011-01-08 06:25 myserver-etc.20110108.tar.gz
        -rw-rw---- 1 root root 16K 2011-01-09 06:25 myserver-etc.20110109.tar.gz
        -rw-rw---- 1 root root 827K 2011-01-10 06:25 myserver-etc.20110110.master.tar.gz
        -rw------- 1 root root 36K 2011-01-10 06:25 myserver-etc.incremental.bin
        -rw-rw---- 1 root root 29M 2011-01-03 06:25 myserver-home.20110103.master.tar.gz
        -rw-rw---- 1 root root 11K 2011-01-06 06:25 myserver-home.20110106.tar.gz
        -rw-rw---- 1 root root 14K 2011-01-07 06:25 myserver-home.20110107.tar.gz
        -rw-rw---- 1 root root 11K 2011-01-08 06:25 myserver-home.20110108.tar.gz
        -rw-rw---- 1 root root 11K 2011-01-09 06:25 myserver-home.20110109.tar.gz
        -rw-rw---- 1 root root 2.0M 2011-01-10 06:25 myserver-home.20110110.master.tar.gz
        -rw------- 1 root root 27K 2011-01-10 06:25 myserver-home.incremental.bin
        -rw-rw---- 1 root root 1.5G 2011-01-03 06:29 myserver-opt.20110103.master.tar.gz
        -rw-rw---- 1 root root 1.5M 2011-01-06 06:25 myserver-opt.20110106.tar.gz
        -rw-rw---- 1 root root 1.5M 2011-01-07 06:25 myserver-opt.20110107.tar.gz
        -rw-rw---- 1 root root 1.5M 2011-01-08 06:25 myserver-opt.20110108.tar.gz
        -rw-rw---- 1 root root 1.5M 2011-01-09 06:25 myserver-opt.20110109.tar.gz
        -rw-rw---- 1 root root 1.5G 2011-01-10 06:30 myserver-opt.20110110.master.tar.gz
        -rw------- 1 root root 201K 2011-01-10 06:30 myserver-opt.incremental.bin
        -rw-rw---- 1 root root 2.3G 2011-01-03 06:41 myserver-srv.20110103.master.tar.gz
        -rw-rw---- 1 root root 44M 2011-01-06 06:26 myserver-srv.20110106.tar.gz
        -rw-rw---- 1 root root 27M 2011-01-07 06:25 myserver-srv.20110107.tar.gz
        -rw-rw---- 1 root root 39M 2011-01-08 06:26 myserver-srv.20110108.tar.gz
        -rw-rw---- 1 root root 2.0M 2011-01-09 06:25 myserver-srv.20110109.tar.gz
        -rw-rw---- 1 root root 2.7G 2011-01-10 06:42 myserver-srv.20110110.master.tar.gz
        -rw------- 1 root root 3.4M 2011-01-10 06:42 myserver-srv.incremental.bin

    I'm thinking about moving the backups to Amazon S3, but before that I have to free some space so the server can work. Perhaps I can mount /var/archives to an Amazon S3 bucket somehow... Any advice?
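
    One way to approach this, sketched below, is to copy the archives to a bucket with s3cmd and only then delete the oldest local master sets to free space; the bucket name is a placeholder, and s3cmd must first be configured with your AWS credentials:

        # Assumes an existing S3 account; run `s3cmd --configure` once to store keys.
        sudo apt-get install s3cmd
        s3cmd mb s3://myserver-backups              # placeholder bucket name
        s3cmd sync /var/archives/ s3://myserver-backups/archives/
        s3cmd ls s3://myserver-backups/archives/    # verify the upload first
        # Only after verifying, free local space, oldest master set first:
        sudo rm /var/archives/myserver-*.20110103.master.tar.gz

    Mounting the bucket with s3fs (FUSE) is also possible, but a plain sync keeps the setup simpler on a cramped legacy box and avoids fsbackup writing directly to a network-backed mount.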

    Read the article

  • Non-Relational Database Design

    - by Ian Varley
    I'm interested in hearing about design strategies you have used with non-relational "NoSQL" databases - that is, the (mostly new) class of data stores that don't use traditional relational design or SQL (such as Hypertable, CouchDB, SimpleDB, the Google App Engine datastore, Voldemort, Cassandra, SQL Data Services, etc.). They're also often referred to as "key/value stores", and at base they act like giant distributed persistent hash tables.

    Specifically, I want to learn about the differences in conceptual data design with these new databases. What's easier, what's harder, what can't be done at all? Have you come up with alternate designs that work much better in the non-relational world? Have you hit your head against anything that seems impossible? Have you bridged the gap with any design patterns, e.g. to translate from one to the other? Do you even do explicit data models at all now (e.g. in UML), or have you chucked them entirely in favor of semi-structured / document-oriented data blobs? Do you miss any of the major extra services that RDBMSes provide, like relational integrity, arbitrarily complex transaction support, triggers, etc.?

    I come from a SQL relational DB background, so normalization is in my blood. That said, I get the advantages of non-relational databases for simplicity and scaling, and my gut tells me that there has to be a richer overlap of design capabilities. What have you done? FYI, there have been StackOverflow discussions on similar topics here:
    - the next generation of databases
    - changing schemas to work with Google App Engine
    - choosing a document-oriented database

    Read the article

  • Welcome to the Oracle Retail International Blog

    - by sarah.taylor(at)oracle.com
    Welcome to the first post of the new Oracle Retail International Blog. Retail is an international business, and today's successful retailers view themselves in the context of a global market. A niche fashion business in Tokyo will learn marketing strategies from the luxury brands of Milan, an independent grocer in Oslo will source the same global brands as a supermarket in Oklahoma, and every retailer in the world will measure their multi-channel operation against the international e-commerce giant Amazon. Why? Because today's customer is a global customer with unparalleled expectations on choice, price and service.

    Today's consumers have access to more information on retail than ever before. Technology allows people to shop from their home, their office or from the phone in their pocket, wherever they are and at whatever time suits them. Customers are using the web to search for products and promotions. They are also using the web to develop their voice in commenting on products and services that have delighted or disappointed. In an information-rich industry, this customer element creates a new world of data. The best retailers are developing eagle eyes for reading customer activity and turning it into profitable decisions. Ultimately, whether you choose to compete or shop on price, service, product innovation, excellent operations or all of the above, the international world of retail has become an inspiration for all - retailer and consumer alike.

    Retail as an industry is growing and diversifying at a faster rate than ever before. Yet it is still the customer who picks the winners and the losers on the retail field. Economic circumstances transform the rules, but it is still the customer who dictates the game, the pace, the price, and the perception of the brand. Wise retailers never rest on their laurels. They are always shopping for ideas on how to improve and differentiate the offer at every touch point, to meet the customer's needs better than anyone else and to gain each customer's loyalty at a time when loyalty can be cheap.

    With this blog, I hope that we might provide a hub for discussion around what unifies retail and how technology supports both the retailer and the customer experience. Despite the competitive nature of this market, we hope that this will provide an opportunity to share experiences and lessons learnt, with the view that knowledge can only help this industry to grow and develop.

    At Oracle we've been supporting retailers for many years. Many of us have worked within retail organisations all over the world, myself included. With this in mind, I don't feel it is too bold a statement to say that Oracle understands retail. We wouldn't be so heavily integrated in some of the biggest and most well-known names in retail if we didn't. With this blog, we intend to create a community of international retailers that can exchange ideas and experiences, debate collective challenges and drive a better understanding of this continually evolving industry. Events such as the World Retail Congress and NRF's Big Show bring enormous value to the retail industry, providing platforms for discussion and learning, but they happen once a year. We wanted to create a platform for discussion on a different level - one that, like retail, is always on.

    We hope not only to be the infrastructure that brings all of a retailer's systems together within the business, but also an infrastructure that supports the industry internationally, helping it grow and flourish by creating a platform for networking, discussion, creativity, vision and strategy. Please feel free to ask questions or comment using the comments functionality. You might also want to visit our other Oracle Retail social media sites:
    - Facebook - http://www.facebook.com/oracleretail
    - YouTube - http://www.youtube.com/user/oracleretail
    - Twitter - http://twitter.com/#!/oracleretail
    - Insight-Driven Retailing Blog - http://blogs.oracle.com/retail/

    Read the article

  • Amazon EC2 SQL Server Connection

    - by cnxmax
    I have two instances running on Amazon AWS EC2. One is running MS SQL Server 2005, the other a web application. I CAN connect to the database in my app using a connection string that references the public IP of the EC2 instance running SQL Server. I CANNOT connect from the web app server if I change the connection string to reference the database server's private IP address - yet I can connect if I run that same code on the database server itself, and I can remote desktop from the app server to the database server using the private IP. I have a feeling there is something in my SQL Server configuration that is preventing this remote connection. I have remote connections enabled, and it is set to listen on all IP addresses. Any ideas? Other things I've done:
    - Added exceptions to the Windows Firewall
    - Tried connecting using the EC2 DNS names
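
    Two things worth checking, sketched below: security group rules apply to instance-to-instance traffic on private addresses too, and the SQL Server port must actually be reachable from the app server (the IP, group names and account ID are placeholders):

        # From the app server, test the default SQL Server port on the private IP:
        telnet 10.0.0.12 1433
        # Named instances use a dynamic port plus SQL Browser on 1434/udp, so a
        # fixed TCP port in SQL Server Configuration Manager simplifies the rules.
        # Allow 1433 between the two security groups with the EC2 API tools:
        ec2-authorize db-group -P tcp -p 1433 -o app-group -u 111122223333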

    Read the article

  • Unable to connect to Amazon EC2 without using PPK file

    - by Krishna
    I have a build job that runs on Hudson and synchronizes content from an Amazon AWS server. The job is written in shell script. I have been given a PPK file that can establish the connectivity. Here is the problem: the build script doesn't establish the connectivity in its code, so I manually connect to the host through the PPK file using PuTTY and then run the job, and then it works fine. I am new to shell scripting. Could someone help me out by suggesting how I can establish connectivity using the PPK file in the shell script, so I do not have to do it manually through PuTTY? Thanks, Krishna
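
    A minimal sketch of one way to do this: convert the PuTTY key to OpenSSH format once, then let the script pass it to ssh/rsync directly (hostname, user and paths are placeholders):

        # One-time conversion; puttygen ships in the putty-tools package on Linux.
        puttygen mykey.ppk -O private-openssh -o mykey.pem
        chmod 600 mykey.pem
        # In the Hudson build script, connect non-interactively with the key:
        ssh -i mykey.pem -o StrictHostKeyChecking=no user@ec2-host.amazonaws.com 'uptime'
        rsync -avz -e "ssh -i mykey.pem" user@ec2-host.amazonaws.com:/remote/content/ ./content/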

    Read the article

  • Upgrading Fedora on Amazon to 12 but getting libssl.so.* & libcrypto.so.* are missing

    - by bateman_ap
    I am upgrading to Fedora 12 on an Amazon EC2 instance using the guide here: http://www.ioncannon.net/system-administration/894/fedora-12-bootable-root-ebs-on-ec2/ I managed to do a 64-bit instance OK, but I am facing some problems with a standard one. On the final step of the upgrade from 11 to 12 I get these errors:

        Error: Missing Dependency: libcrypto.so.8 is needed by package httpd-tools-2.2.1.5-1.fc11.1.i586 (installed)
        Error: Missing Dependency: libssl.so.8 is needed by package httpd-tools-2.2.1.5-1.fc11.1.i586 (installed)

    This is referenced in the comments of the link above, but all it says is:

        Q: Apache failed, or libssl.so.* & libcrypto.so.* are missing
        A: These versions are missing the symlinks they require. Easy fix: go symlink them to the newest versions in /lib

    However, I am afraid I don't know how to do this. If it helps, running locate libssl.so gives:

        /lib/libssl.so.0.9.8b
        /lib/libssl.so.6
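
    A sketch of the symlink fix that comment is describing - the exact library versions present in /lib on your instance may differ from the ones below (taken from the locate output), so check first and point the links at the newest real files:

        cd /lib
        ls libssl.so.* libcrypto.so.*
        # Create the .so.8 names that httpd-tools is looking for:
        sudo ln -s libssl.so.0.9.8b libssl.so.8
        sudo ln -s libcrypto.so.0.9.8b libcrypto.so.8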

    Read the article

  • Lost sudo/su on Amazon EC2 instance

    - by barrycarter
    I have an Amazon EC2 instance. I can log in just fine, but neither "su" nor "sudo" work now (they worked fine previously): "su" requests a password, but I log in using ssh keys, and I don't think the root user even has a password. "sudo <anything>" does this:

        sudo: /etc/sudoers is owned by uid 222, should be 0
        sudo: no valid sudoers sources found, quitting

    I probably did "chown ec2-user /etc/sudoers" (or, more likely, "chown -R ec2-user /etc", because I was sick of rsync failing), so this is my fault. How do I recover? I stopped the instance and tried the "View/Change User Data" option on the AWS EC2 console, but this didn't help. EDIT: I realize I could kill this instance and create a new one, but I was hoping to avoid something that extreme.
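
    If the instance is EBS-backed, one recovery route, sketched below, is to fix the ownership from a second instance (all IDs and device names are placeholders):

        ec2-stop-instances i-broken
        ec2-detach-volume vol-12345
        ec2-attach-volume vol-12345 -i i-helper -d /dev/sdf
        # On the helper instance (the device may appear as /dev/xvdf):
        sudo mkdir -p /mnt/broken && sudo mount /dev/sdf /mnt/broken
        sudo chown root:root /mnt/broken/etc/sudoers
        sudo chmod 0440 /mnt/broken/etc/sudoers
        # If chown -R hit all of /etc, files like /etc/ssh/* and /etc/shadow
        # need their original ownership restored too.
        sudo umount /mnt/broken
        ec2-detach-volume vol-12345
        ec2-attach-volume vol-12345 -i i-broken -d /dev/sda1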

    Read the article

  • Weird DNS bug - external server resolves to internal IP

    - by emilecantin
    I have a server that is hosted by my university. I have root access, but no control over network setup, firewall, etc. This server's DNS name resolves to an internal IP here on campus (10.x.x.x) and to an external IP outside campus. I also have a few servers hosted at Amazon, and they mostly work well. However, one of them has started to resolve the university server to its internal IP address. This causes problems, as 10.x.x.x on Amazon EC2 is someone else. I have connected to the Amazon server with SSH agent forwarding a few times in the past, to access a Git repository on the university server. Any idea what could cause this? EDIT: Here's my /etc/resolv.conf:

        # Generated by dhcpcd for interface eth0
        search ec2.internal
        nameserver 172.16.0.23

    Here's the output of dig myserver.myuniversity.ca.:

        ; <<>> DiG 9.8.1-P1 <<>> myserver.myuniversity.ca.
        ;; global options: +cmd
        ;; Got answer:
        ;; ->>HEADER<<- opcode: QUERY, status: NOERROR, id: 34470
        ;; flags: qr rd ra; QUERY: 1, ANSWER: 1, AUTHORITY: 0, ADDITIONAL: 0

        ;; QUESTION SECTION:
        ;myserver.myuniversity.ca.      IN      A

        ;; ANSWER SECTION:
        myserver.myuniversity.ca. 537586 IN     A       10.43.x.x

        ;; Query time: 2 msec
        ;; SERVER: 172.16.0.23#53(172.16.0.23)
        ;; WHEN: Wed Nov 28 16:07:21 2012
        ;; MSG SIZE  rcvd: 60

    Here's the expected output (on another Amazon server):

        ; <<>> DiG 9.8.1-P1 <<>> myserver.myuniversity.ca.
        ;; global options: +cmd
        ;; Got answer:
        ;; ->>HEADER<<- opcode: QUERY, status: NOERROR, id: 8045
        ;; flags: qr rd ra; QUERY: 1, ANSWER: 1, AUTHORITY: 0, ADDITIONAL: 0

        ;; QUESTION SECTION:
        ;myserver.myuniversity.ca.      IN      A

        ;; ANSWER SECTION:
        myserver.myuniversity.ca. 601733 IN     A       x.x.239.1

        ;; Query time: 1 msec
        ;; SERVER: 172.16.0.23#53(172.16.0.23)
        ;; WHEN: Wed Nov 28 16:09:36 2012
        ;; MSG SIZE  rcvd: 60
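
    The huge TTLs (over six days) suggest a cached answer, plausibly from a split-horizon view of the university's DNS that leaked into the resolver's cache. A couple of hedged diagnostics, plus a stopgap (the pinned address is a placeholder for the real public IP):

        # Compare the EC2 resolver's answer with an independent public resolver:
        dig +short myserver.myuniversity.ca. @172.16.0.23
        dig +short myserver.myuniversity.ca. @8.8.8.8
        # Until the bad record ages out of the cache, pin the public address:
        echo "x.x.239.1 myserver.myuniversity.ca" | sudo tee -a /etc/hosts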

    Read the article

  • Can't get basic web servers working on EC2 RedHat

    - by Yarin
    I'm trying to get some basic Python web servers (Flask, Tornado) running on EC2. On the Amazon-flavored Linux AMI (Amazon Linux AMI 2013.03.1) they work with no problem, but the same web servers installed on the Red Hat quick-launch AMI (Red Hat Enterprise Linux 6.4) don't work at all: all I get is connection-failure errors when I try to browse to them. Both servers share the same security group, with the relevant ports (5000, 5010) open, so I'm trying to understand why the Red Hat instance is not working.
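
    One likely difference, worth verifying: RHEL images typically ship with the iptables host firewall enabled, while Amazon Linux does not, so ports opened in the security group can still be blocked on the host itself. A quick check and fix, sketched:

        sudo service iptables status                 # is the host firewall active?
        sudo netstat -tlnp | grep -E ':5000|:5010'   # are the apps actually listening?
        # Open the ports (or `sudo service iptables stop` for a quick test only):
        sudo iptables -I INPUT -p tcp --dport 5000 -j ACCEPT
        sudo iptables -I INPUT -p tcp --dport 5010 -j ACCEPT
        sudo service iptables save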

    Read the article

  • IAM / AWS Access control via Windows Azure Active Directory

    - by Haroon
    I am trying to figure out how to configure IAM in Amazon AWS to use Windows Azure Active Directory. I found http://blogs.aws.amazon.com/security/post/Tx71TWXXJ3UI14/Enabling-Federation-to-AWS-using-Windows-Active-Directory-ADFS-and-SAML-2-0, however it is about configuring ADFS, not WAAD. WAAD does support SAML 2.0: http://azure.microsoft.com/en-us/documentation/articles/fundamentals-identity/ Has anyone figured this out yet?

    Read the article

  • Why do I have to run aptitude update twice to install Ruby?

    - by Willie Wheeler
    Summary: I have a fresh EC2 Precise 64-bit instance (ami-82fa58eb). After launching the instance, I want to install ruby1.9.1 (among others). This doesn't work, as aptitude can't find the Ruby package:

        aptitude update && apt-get -o Dpkg::Options::="--force-confnew" --force-yes -fuy dist-upgrade && aptitude install -y ruby1.9.1 ruby1.9.1-dev make

    But this works:

        aptitude update && aptitude update && apt-get -o Dpkg::Options::="--force-confnew" --force-yes -fuy dist-upgrade && aptitude install -y ruby1.9.1 ruby1.9.1-dev make

    I would like to understand why I need to run aptitude update twice.

    Details: The first and second runs look pretty different. First run:

        Ign http://security.ubuntu.com precise-security InRelease
        Ign http://archive.ubuntu.com precise InRelease
        Get: 1 http://security.ubuntu.com precise-security Release.gpg [198 B]
        Ign http://archive.ubuntu.com precise-updates InRelease
        Get: 2 http://security.ubuntu.com precise-security Release [49.6 kB]
        Hit http://archive.ubuntu.com precise Release.gpg
        Get: 3 http://archive.ubuntu.com precise-updates Release.gpg [198 B]
        Hit http://archive.ubuntu.com precise Release
        Get: 4 http://security.ubuntu.com precise-security/main amd64 Packages [161 kB]
        Get: 5 http://archive.ubuntu.com precise-updates Release [49.6 kB]
        Get: 6 http://security.ubuntu.com precise-security/restricted amd64 Packages [3,969 B]
        Hit http://archive.ubuntu.com precise/main amd64 Packages
        Get: 7 http://security.ubuntu.com precise-security/universe amd64 Packages [43.8 kB]
        Hit http://archive.ubuntu.com precise/restricted amd64 Packages
        Hit http://archive.ubuntu.com precise/universe amd64 Packages
        Get: 8 http://security.ubuntu.com precise-security/multiverse amd64 Packages [2,180 B]
        Hit http://archive.ubuntu.com precise/multiverse amd64 Packages
        Get: 9 http://security.ubuntu.com precise-security/main i386 Packages [165 kB]
        Hit http://archive.ubuntu.com precise/main i386 Packages
        Hit http://archive.ubuntu.com precise/restricted i386 Packages
        Hit http://archive.ubuntu.com precise/universe i386 Packages
        Hit http://archive.ubuntu.com precise/multiverse i386 Packages
        Get: 10 http://security.ubuntu.com precise-security/restricted i386 Packages [3,968 B]
        Hit http://archive.ubuntu.com precise/main TranslationIndex
        Get: 11 http://security.ubuntu.com precise-security/universe i386 Packages [44.0 kB]
        Hit http://archive.ubuntu.com precise/multiverse TranslationIndex
        Get: 12 http://security.ubuntu.com precise-security/multiverse i386 Packages [2,369 B]
        Get: 13 http://security.ubuntu.com precise-security/main TranslationIndex [73 B]
        Hit http://archive.ubuntu.com precise/restricted TranslationIndex
        Get: 14 http://security.ubuntu.com precise-security/multiverse TranslationIndex [71 B]
        Hit http://archive.ubuntu.com precise/universe TranslationIndex
        Get: 15 http://security.ubuntu.com precise-security/restricted TranslationIndex [71 B]
        Get: 16 http://archive.ubuntu.com precise-updates/main amd64 Packages [382 kB]
        Get: 17 http://security.ubuntu.com precise-security/universe TranslationIndex [73 B]
        Get: 18 http://security.ubuntu.com precise-security/main Translation-en [76.5 kB]
        Get: 19 http://security.ubuntu.com precise-security/multiverse Translation-en [995 B]
        Get: 20 http://security.ubuntu.com precise-security/restricted Translation-en [978 B]
        Get: 21 http://security.ubuntu.com precise-security/universe Translation-en [27.2 kB]
        Get: 22 http://archive.ubuntu.com precise-updates/restricted amd64 Packages [6,755 B]
        Get: 23 http://archive.ubuntu.com precise-updates/universe amd64 Packages [129 kB]
        Get: 24 http://archive.ubuntu.com precise-updates/multiverse amd64 Packages [8,677 B]
        Get: 25 http://archive.ubuntu.com precise-updates/main i386 Packages [387 kB]
        Get: 26 http://archive.ubuntu.com precise-updates/restricted i386 Packages [6,732 B]
        Get: 27 http://archive.ubuntu.com precise-updates/universe i386 Packages [130 kB]
        Get: 28 http://archive.ubuntu.com precise-updates/multiverse i386 Packages [9,672 B]
        Get: 29 http://archive.ubuntu.com precise-updates/main TranslationIndex [3,564 B]
        Get: 30 http://archive.ubuntu.com precise-updates/multiverse TranslationIndex [2,605 B]
        Get: 31 http://archive.ubuntu.com precise-updates/restricted TranslationIndex [2,461 B]
        Get: 32 http://archive.ubuntu.com precise-updates/universe TranslationIndex [2,850 B]
        Get: 33 http://archive.ubuntu.com precise/main Translation-en [726 kB]
        Get: 34 http://archive.ubuntu.com precise/multiverse Translation-en [93.4 kB]
        Get: 35 http://archive.ubuntu.com precise/restricted Translation-en [2,395 B]
        Get: 36 http://archive.ubuntu.com precise/universe Translation-en [3,341 kB]
        Get: 37 http://archive.ubuntu.com precise-updates/main Translation-en [188 kB]
        Get: 38 http://archive.ubuntu.com precise-updates/multiverse Translation-en [5,414 B]
        Get: 39 http://archive.ubuntu.com precise-updates/restricted Translation-en [1,484 B]
        Get: 40 http://archive.ubuntu.com precise-updates/universe Translation-en [77.3 kB]
        Ign http://archive.ubuntu.com precise/main Translation-en_US
        Ign http://archive.ubuntu.com precise/multiverse Translation-en_US
        Ign http://archive.ubuntu.com precise/restricted Translation-en_US
        Ign http://archive.ubuntu.com precise/universe Translation-en_US
        Fetched 6,137 kB in 11s (538 kB/s)
        Reading package lists...

    Second run:

        Ign http://us-east-1.ec2.archive.ubuntu.com precise InRelease
        Ign http://us-east-1.ec2.archive.ubuntu.com precise-updates InRelease
        Get: 1 http://us-east-1.ec2.archive.ubuntu.com precise Release.gpg [198 B]
        Get: 2 http://us-east-1.ec2.archive.ubuntu.com precise-updates Release.gpg [198 B]
        Ign http://security.ubuntu.com precise-security InRelease
        Get: 3 http://us-east-1.ec2.archive.ubuntu.com precise Release [49.6 kB]
        Get: 4 http://us-east-1.ec2.archive.ubuntu.com precise-updates Release [49.6 kB]
        Get: 5 http://us-east-1.ec2.archive.ubuntu.com precise/main Sources [934 kB]
        Hit http://security.ubuntu.com precise-security Release.gpg
        Hit http://security.ubuntu.com precise-security Release
        Get: 6 http://us-east-1.ec2.archive.ubuntu.com precise/universe Sources [5,019 kB]
        Get: 7 http://security.ubuntu.com precise-security/main Sources [42.8 kB]
        Get: 8 http://security.ubuntu.com precise-security/universe Sources [13.5 kB]
        Hit http://security.ubuntu.com precise-security/main amd64 Packages
        Hit http://security.ubuntu.com precise-security/universe amd64 Packages
        Hit http://security.ubuntu.com precise-security/main i386 Packages
        Get: 9 http://us-east-1.ec2.archive.ubuntu.com precise/main amd64 Packages [1,273 kB]
        Hit http://security.ubuntu.com precise-security/universe i386 Packages
        Get: 10 http://us-east-1.ec2.archive.ubuntu.com precise/universe amd64 Packages [4,786 kB]
        Hit http://security.ubuntu.com precise-security/main TranslationIndex
        Hit http://security.ubuntu.com precise-security/universe TranslationIndex
        Hit http://security.ubuntu.com precise-security/main Translation-en
        Hit http://security.ubuntu.com precise-security/universe Translation-en
        Get: 11 http://us-east-1.ec2.archive.ubuntu.com precise/main i386 Packages [1,274 kB]
        Get: 12 http://us-east-1.ec2.archive.ubuntu.com precise/universe i386 Packages [4,796 kB]
        Get: 13 http://us-east-1.ec2.archive.ubuntu.com precise/main TranslationIndex [3,706 B]
        Get: 14 http://us-east-1.ec2.archive.ubuntu.com precise/universe TranslationIndex [2,922 B]
        Get: 15 http://us-east-1.ec2.archive.ubuntu.com precise-updates/main Sources [163 kB]
        Get: 16 http://us-east-1.ec2.archive.ubuntu.com precise-updates/universe Sources [50.8 kB]
        Get: 17 http://us-east-1.ec2.archive.ubuntu.com precise-updates/main amd64 Packages [382 kB]
        Get: 18 http://us-east-1.ec2.archive.ubuntu.com precise-updates/universe amd64 Packages [129 kB]
        Get: 19 http://us-east-1.ec2.archive.ubuntu.com precise-updates/main i386 Packages [387 kB]
        Get: 20 http://us-east-1.ec2.archive.ubuntu.com precise-updates/universe i386 Packages [129 kB]
        Get: 21 http://us-east-1.ec2.archive.ubuntu.com precise-updates/main TranslationIndex [3,564 B]
        Get: 22 http://us-east-1.ec2.archive.ubuntu.com precise-updates/universe TranslationIndex [2,850 B]
        Get: 23 http://us-east-1.ec2.archive.ubuntu.com precise/main Translation-en [726 kB]
        Get: 24 http://us-east-1.ec2.archive.ubuntu.com precise/universe Translation-en [3,341 kB]
        Get: 25 http://us-east-1.ec2.archive.ubuntu.com precise-updates/main Translation-en [188 kB]
        Get: 26 http://us-east-1.ec2.archive.ubuntu.com precise-updates/universe Translation-en [77.1 kB]
        Fetched 23.8 MB in 23s (1,026 kB/s)
        Reading package lists...

    Note: My question is almost exactly the same as "Running 'apt-get upgrade' on Amazon EC2 AMI twice in succession upgrades very different packages", except that I'm seeing this issue with aptitude updates rather than apt-get upgrades.
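
    One thing that stands out in the two runs: the first fetches from archive.ubuntu.com, while the second fetches (and also pulls Sources lists) from the us-east-1.ec2.archive.ubuntu.com region mirror, which suggests the APT sources are being rewritten between the two runs - plausibly by cloud-init's first-boot mirror substitution. A hedged way to confirm:

        # Snapshot the sources before and after the first update and compare:
        cat /etc/apt/sources.list /etc/apt/sources.list.d/*.list 2>/dev/null > /tmp/sources.before
        aptitude update
        cat /etc/apt/sources.list /etc/apt/sources.list.d/*.list 2>/dev/null > /tmp/sources.after
        diff /tmp/sources.before /tmp/sources.after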

    Read the article

  • Second Edition of Regular Expressions Cookbook Has Been Published

    - by Jan Goyvaerts
    The first edition of Regular Expressions Cookbook was published in May of 2009. It quickly became a bestseller, briefly holding the #1 spot in computer books on Amazon.com. It also had staying power: the ebook version was O’Reilly’s top seller during the whole of 2010. So it’s no surprise that our editor at O’Reilly soon contacted us for a second edition. With Steven and me both always very busy, those plans were delayed until finally both of us found the time to update the book. Work started in January.

    Today you can buy your own copy of the second edition of Regular Expressions Cookbook. O’Reilly’s online shop sells the eBook in DRM-free ePub, Mobi, and PDF formats for $39.99 and the print version for $49.99. These are the list prices for the eBook and the print book. If you’re looking for a discount and free shipping of the print book, you can pre-order on one of the various Amazon sites. Deliveries should start soon. The discount rates differ and are subject to change. Amazon will also pay me an affiliate commission if you use one of these links, which pretty much doubles the income I get from the book.
    - Amazon.com. Free shipping to the USA.
    - Amazon.co.uk. Free shipping to the UK and Ireland.
    - Amazon.fr. Free shipping to France, Monaco, Luxembourg, and Belgium.
    - Amazon.de. Free shipping to Germany, Austria, Switzerland, Luxembourg, Liechtenstein, Belgium, and The Netherlands.

    If you don’t want to wait for the print book to arrive, the Kindle edition is already available for instant delivery. The Kindle edition works on Amazon’s Kindle hardware, and on PCs via Amazon’s Kindle software (free download).
    - Amazon.com
    - Amazon.co.uk
    - Amazon.fr
    - Amazon.de

    I’ll blog more about the book in the coming days and weeks with details about what’s new in the second edition.

    Read the article

  • Ubuntu 12.04 cloud edition on Amazon - Apache2 - /etc

    - by jdog
    I have set up a web server on Amazon with 3 virtual hosts. For some reason I can't get any of the sites going on it; they all show a 404 error. /var/log/apache2/error.log shows "File does not exist: /etc/apache2/htdocs". I have checked:
    - a2ensite run for all my virtual hosts
    - the symlinks in sites-enabled
    - access rights in /var/www set to 777, in case the user is not www-data
    - grep -r htdocs /etc/apache2 (returns nothing)
    - ports.conf has a NameVirtualHost directive exactly matching the virtual hosts

    What else could this be? ports.conf:

        # If you just change the port or add more ports here, you will likely also
        # have to change the VirtualHost statement in
        # /etc/apache2/sites-enabled/000-default
        # This is also true if you have upgraded from before 2.2.9-3 (i.e. from
        # Debian etch). See /usr/share/doc/apache2.2-common/NEWS.Debian.gz and
        # README.Debian.gz

        NameVirtualHost 107.20.169.163:80
        Listen 80

        <IfModule mod_ssl.c>
            # If you add NameVirtualHost *:443 here, you will also have to change
            # the VirtualHost statement in /etc/apache2/sites-available/default-ssl
            # to <VirtualHost *:443>
            # Server Name Indication for SSL named virtual hosts is currently not
            # supported by MSIE on Windows XP.
            Listen 443
        </IfModule>

        <IfModule mod_gnutls.c>
            Listen 443
        </IfModule>

    sites-available/www.seleconlight.com:

        <VirtualHost 107.20.169.163:80>
            ServerName www.seleconlight.com
            DocumentRoot /var/www/www.seleconlight.com
            CustomLog /var/log/apache2/www.seleconlight.com-access.log combined
            ErrorLog /var/log/apache2/www.seleconlight.com-error.log
        </VirtualHost>
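
    One EC2-specific suspect: the instance's network interface carries the private address, not the public one, so a NameVirtualHost/VirtualHost pinned to the public IP 107.20.169.163 may never match, and unmatched requests fall through to the compiled-in default server (whose DocumentRoot is the /etc/apache2/htdocs in the error). Two hedged checks, and the usual fix of binding to *:80:

        apache2ctl -S            # shows how Apache actually parsed the vhosts
        ifconfig eth0            # does the interface really hold 107.20.169.163?
        # If not, switch ports.conf and each vhost from the public IP to *:80:
        #   NameVirtualHost *:80   and   <VirtualHost *:80>
        sudo service apache2 restart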

    Read the article

  • SSH attack CentOS Amazon EC2

    - by user37143
    Hi, I run a few RightScale CentOS AMI based instances on Amazon EC2. Two months back I found that our SSHD security was compromised (I had added hosts.allow and hosts.deny for ssh). So I created new instances, set up IP-based ssh that allows only our IPs through the AWS firewall (ec2-authorize), and changed ssh from the default port 22 to another port. But two days back I found I was not able to log in to the server; when I tried port 22 the ssh connected, and I found that sshd_config had been changed. When I then tried to edit sshd_config, root had no write permission on the file, and a chmod said access denied for the 'root' user. This is very strange. I checked the secure log and history and found nothing informative. I have PHP, Ruby on Rails, Java and Wordpress apps running on these servers. This time I did a chkrootkit scan and found nothing. I renamed the /etc/ssh folder and reinstalled openssh through yum. I have faced this on 3 instances on CentOS (5.2, 5.4); I have instances on Debian as well and those work fine. Is this a CentOS/RightScale issue? What security measures should I take to prevent this? Please help, this is very critical. Thanks
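
    "root cannot write to or chmod a file" is the classic sign of the immutable attribute having been set, which intruders sometimes do after replacing sshd_config - and once root has been compromised, the only safe course is to rebuild from a clean image and restore data from known-good backups. Some hedged checks and hardening steps for the fresh instance (fail2ban is available via the EPEL repository on CentOS 5):

        lsattr /etc/ssh/sshd_config          # an 'i' flag = immutable, set by chattr +i
        sudo chattr -i /etc/ssh/sshd_config  # clear it so root can edit again
        # On the rebuilt instance: key-only logins, no root login, then restart sshd.
        sudo sed -i -e 's/^#\?PasswordAuthentication.*/PasswordAuthentication no/' \
                    -e 's/^#\?PermitRootLogin.*/PermitRootLogin no/' /etc/ssh/sshd_config
        sudo service sshd restart
        sudo yum install fail2ban && sudo service fail2ban start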

    Read the article

  • Alternative databases to use when putting IIS Logs into a database using LogParser

    - by Robin Day
    We have run some scripts that use LogParser to dump our IIS logs into a SQL Server database. We can then query this to get simple stats on hits, usage, etc. It's also good when linking it to error log databases and performance counter databases to compare usage with errors, etc. Having implemented this for just one system, in the last 2-3 weeks we already have a 5 GB database with around 10 million records. This is making any queries to this database quite slow and will no doubt cause storage issues if we continue to log as we are. Can anyone suggest any alternative databases that we could use for this data that would be more efficient for such logs? I'd be particularly interested in any experience of Google's BigTable or Amazon's SimpleDB. Are either of these suitable for reporting queries? COUNTs, GROUP BYs, PIVOTs?

    Read the article

  • API to lookup product information by UPC?

    - by officespace672
    Is there an API that allows lookup of product information by UPC? I know that Amazon has the Product Advertising API, but I don't think it can be used for any purpose other than sending traffic to amazon.com, as per their license agreement here. Specifically, my application would not have "the principal purpose of advertising and marketing the Amazon Site and driving sales of products and services on the Amazon Site". Does an API exist whose data I can do anything I want with? UPDATE: I want to use such an API from my application, not create such an API myself.

    Read the article

  • Using Amazon S3 for multiple remote data site uploads, securely

    - by Aitch
    I've been playing about with Amazon S3 a little for the first time, and I like what I see for various reasons relating to my potential use case.

    We have multiple (online) remote server boxes harvesting sensor data that is regularly uploaded every hour or so (rsync'ed) to a VPS server. The number of remote server boxes is growing regularly and is forecast to keep growing (hundreds). The servers are geographically dispersed. They are also automatically built, and therefore generic, with standard tools rather than bespoke per location. The data is many hundreds of files per day. I want to avoid a situation where I need to provision more VPS storage, or additional servers, every time we hit the VPS capacity limit, after every N server deployments, whatever N might be. The remote servers can never be considered fully secure, because we don't know what might happen to them when we are not looking. Our current solution is a bit naive: it simply restricts inbound rsync, over ssh only, to known MAC-address-named directories and a known public key. There are plenty of holes to pick in this, I know. Let's say I write or use a script like s3cmd/s3sync to push up the files; my concerns are these (see the sketch after this list):
    - Would I need to manage hundreds of access keys and have each server customized to include its own? (Doable, but key management becomes nightmarish?)
    - Could I restrict inbound connections somehow (e.g. by MAC address), or just allow write-only access to any client running the script? (I could deal with a flood of data if someone got into a system.)
    - Having a bucket per remote machine does not seem feasible due to bucket limits?
    - I don't think I want to use a single common key: if one machine were breached, a malicious hack could get hold of the filestore key and start deleting data for all clients, correct?

    I hope my inexperience has not blinded me to some other solution that might be suggested! I've read lots of examples of people using S3 for backup, but can't really find anything about this sort of data collection, unless my Google terminology is wrong... I've written more than I should here, so perhaps it can be summarised thus: in a perfect world, I just want one of our techs to install a new remote server into a location and have it automagically start sending files home with little or no intervention, while minimising risk. Pipedream, or feasible? TIA, Aitch
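
    One pattern that fits the write-only requirement, sketched under the assumption that per-box credentials are created at build time (the user, bucket and prefix names and the policy are placeholders): give each box an IAM user that can only s3:PutObject into its own prefix of one shared bucket, so a stolen key can add junk but can neither read nor delete anyone's data.

        # One IAM user per remote box, allowed only to write under its own prefix:
        aws iam create-user --user-name sensor-box-001
        aws iam put-user-policy --user-name sensor-box-001 --policy-name write-only \
            --policy-document '{"Version":"2012-10-17","Statement":[{"Effect":"Allow","Action":"s3:PutObject","Resource":"arn:aws:s3:::sensor-data/box-001/*"}]}'
        aws iam create-access-key --user-name sensor-box-001   # bake the key into the box

    Since the boxes are built automatically, the same automation can create the user and key per deployment, which keeps "hundreds of keys" manageable: each key is disposable and scoped to one prefix.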

    Read the article

  • yum not working on EC2 Red Hat instance: Cannot retrieve repository metadata

    - by adev3
    For some reason yum has stopped working on my Amazon EC2 instance, located in the EU West region. There seems to be something wrong with the path of the repo metadata; is this correct? I would be very grateful for any help, as my experience in this field is somewhat limited. Thank you very much.

        $ cat /etc/redhat-release
        Red Hat Enterprise Linux Server release 6.2 (Santiago)

        $ yum repolist
        Loaded plugins: amazon-id, rhui-lb, security
        https://rhui2-cds01.eu-west-1.aws.ce.redhat.com/pulp/repos//rhui-client-config/rhel/server/6/x86_64/os/repodata/repomd.xml: [Errno 14] PYCURL ERROR 22 - "The requested URL returned error: 401"
        Trying other mirror.
        https://rhui2-cds02.eu-west-1.aws.ce.redhat.com/pulp/repos//rhui-client-config/rhel/server/6/x86_64/os/repodata/repomd.xml: [Errno 14] PYCURL ERROR 22 - "The requested URL returned error: 401"
        Trying other mirror.
        repo id                                        repo name                                                         status
        rhui-eu-west-1-client-config-server-6          Red Hat Update Infrastructure 2.0 Client Configuration Server 6   0
        rhui-eu-west-1-rhel-server-releases            Red Hat Enterprise Linux Server 6 (RPMs)                          0
        rhui-eu-west-1-rhel-server-releases-optional   Red Hat Enterprise Linux Server 6 Optional (RPMs)                 0
        repolist: 0

    yum update fails the same way (I needed to remove the base URLs below because of Server Fault's restrictions for new users):

        Loaded plugins: amazon-id, rhui-lb, security
        [same as base url 1 above]/pulp/repos//rhui-client-config/rhel/server/6/x86_64/os/repodata/repomd.xml: [Errno 14] PYCURL ERROR 22 - "The requested URL returned error: 401"
        Trying other mirror.
        [same as base url 2 above]/pulp/repos//rhui-client-config/rhel/server/6/x86_64/os/repodata/repomd.xml: [Errno 14] PYCURL ERROR 22 - "The requested URL returned error: 401"
        Trying other mirror.
        Error: Cannot retrieve repository metadata (repomd.xml) for repository: rhui-eu-west-1-client-config-server-6. Please verify its path and try again
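
    A 401 from the RHUI content servers is usually an authentication problem rather than a path problem: the client entitlement certificate that yum presents may be expired, or the instance clock may be skewed enough to fail certificate validation. Some hedged checks (certificate paths vary between RHUI client versions):

        date                                   # badly skewed time breaks SSL validation
        sudo yum clean all
        # Inspect the RHUI client entitlement certificate's validity dates:
        sudo openssl x509 -noout -dates -in /etc/pki/rhui/product/content.crt
        # If it has expired, updating the RHUI client package usually refreshes it:
        sudo yum update rh-amazon-rhui-client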

    Read the article

  • Can I replicate data between mySQL and SQL Server/SQL Azure?

    - by Ernest Mueller
    I have a replicated MySQL setup running happily on Amazon AWS, making user data available locally in various regions. Now I'm faced with an app that needs to go up on Microsoft Azure, and I need to replicate the data over to there as well. So that's annoying. I am faced with several options:
    1. Replicate from MySQL to SQL Azure/SQL Server - this seems like it would be lovely; is it possible? I'd consider using a third-party tool and paying $$ if I had to. We're not using anything complicated in the db feature set, it's just data in tables.
    2. Get MySQL working on Microsoft Azure - which seems really dicey at best. All the HOWTOs I can find say "this is possible but you really shouldn't try this for production apps."
    3. Go non-realtime and do syncs from MySQL to SQL Azure, which may be somewhat expensive and slower.
    4. Rip out all my MySQL on Amazon and use SQL Server there, which would make Baby Jesus cry.

    Has anyone gotten MySQL-to-SQL Azure/SQL Server replication or syncing working? Or do you have any other approaches (e.g. a NoSQL solution that replicates, might meet our but-we-need-to-join-some-tables needs, and can easily be run on both Amazon and Azure)?

    Read the article

  • How can I debug a port/connectivity issue?

    - by rfw21
    I am running a simple WebSocket server on Amazon EC2 (Fedora Core). I've opened the relevant port using ec2-authorize and checked that it's open. iptables is definitely not running. However, I can't connect to the port from outside EC2. I've tried the following (my server is running on port 7000):
    - telnet ec2-public-dns.xx.xx.xx.amazon.com 7000 (from within EC2: connects fine)
    - nmap localhost (output includes the line: 7000/tcp open afs3-fileserver)
    - telnet ec2-public-dns.xx.xx.xx.amazon.com 7000 (this time from my local machine: I get "connection refused: Unable to connect to remote host")

    The strange thing is this: if I start Nginx on port 7000 then it works, and I can connect from outside EC2! And the WebSocket server fails on port 80, where Nginx works fine. To me this suggests a problem with the WebSocket server, BUT I can connect to it successfully from within EC2. (And it works fine on a different VPS account.) How can I debug this further? If anybody can stop me tearing my hair out, I'd be very grateful indeed :)
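
    The symptoms (local connections succeed, remote ones are refused, nginx on the same port is fine) match a server that binds only to 127.0.0.1 rather than to all interfaces. A hedged way to confirm, and a sketch of the fix (the server's exact flag or config option will differ):

        # Compare the local addresses of the listening sockets:
        sudo netstat -tlnp | grep -E ':7000|:80'
        # nginx will typically show 0.0.0.0:80; if the WebSocket server shows
        # 127.0.0.1:7000, restart it bound to all interfaces, e.g.:
        ./websocket-server --address 0.0.0.0 --port 7000   # hypothetical flags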

    Read the article

  • How do I Install Intermediate Certificates (in AWS)?

    - by getmizanur
    I have installed a private key (PEM encoded) and a public key certificate (PEM encoded) on an Amazon Load Balancer. However, when I check the SSL with a site test tool, I get the following error:

        Error while checking the SSL Certificate!!
        Unable to get the local issuer of the certificate. The issuer of a locally
        looked up certificate could not be found. Normally this indicates that not
        all intermediate certificates are installed on the server.

    I converted the crt file to PEM using these commands from this tutorial:

        openssl x509 -in input.crt -out input.der -outform DER
        openssl x509 -in input.der -inform DER -out output.pem -outform PEM

    During setup of the Amazon Load Balancer, the only option I left out was the certificate chain (PEM encoded); however, this was optional. Could this be the cause of my issue? And if so, how do I create the certificate chain?

    UPDATE: If you make a request to VeriSign, they will give you a certificate chain. This chain includes the public crt, the intermediate crt and the root crt. Make sure to remove the public crt from your certificate chain (it is the topmost certificate) before adding the chain to the certificate chain box of your Amazon Load Balancer. If you are making HTTPS requests from an Android app, the above instructions may not work for older Android OS versions such as 2.1 and 2.2. To make it work on older Android OS: go here, click on the "retail ssl" tab, then on "secure site" and "CA Bundle for Apache Server", and copy and paste those intermediate certs into the certificate chain box; just in case you have not found it, here is the direct link. If you are using GeoTrust certificates, the solution is much the same for Android devices; however, you need to copy and paste their intermediate certs for Android.
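
    Yes - the missing chain is exactly what that error points at. A minimal sketch of assembling it (file names are placeholders; your CA supplies the intermediate and root certificates):

        # Convert each CA certificate to PEM if needed, then concatenate them,
        # intermediate(s) first, WITHOUT your own site certificate on top:
        openssl x509 -in intermediate.crt -out intermediate.pem -outform PEM
        openssl x509 -in root.crt -out root.pem -outform PEM
        cat intermediate.pem root.pem > chain.pem
        # Sanity-check that the chain actually validates your site certificate:
        openssl verify -CAfile chain.pem output.pem

    The contents of chain.pem go into the load balancer's optional "Certificate Chain" field.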

    Read the article

< Previous Page | 34 35 36 37 38 39 40 41 42 43 44 45  | Next Page >