Search Results

Search found 3148 results on 126 pages for 'amazon s3'.


  • Indefinite hang when restoring SQL 2005 database on a SQL 2008 server in EC2

    - by erinloy
    I'm trying to restore a 25 GB database backup taken from a Windows 2003/SQL 2005 machine to a Windows 2008/SQL 2008 machine in the Amazon EC2 cloud, using a .bak file and SQL Management Studio. SQL Management Studio reports the restore reaches 100% complete, and then just hangs indefinitely (24+ hours) using a lot of CPU, until I restart the SQL Server service. Upon restart, SQL again uses a lot of CPU for what seems to be an indefinite amount of time, but the DB never comes online. Here are some details:

    - I have created two EBS volumes, one for DATA and one for LOGS, and I have set the default directories in SQL Server to the \DATA and \LOG directories on these respective volumes. (I wonder if the issue could be related to this, but the DB is too big to restore on the root drive.)
    - I have given the SQL Server user group full access to these directories.
    - The server can create a new empty test DB in these directories just fine, and can back up and restore the test DB.
    - I have tried both restoring a .bak file and attaching directly to copies of the original .mdf/.ldf files, and the result is the same in both cases.
    - Both the .bak restore and the .mdf/.ldf attach occur from/to the EBS volumes.
    - I've also tried the above via SQL script, and "WITH RECOVERY", with no difference in the result, just less UI.
    - The backup contains two full-text indexes.
    - I have to use "WITH MOVE" for most of the files in the backup.
    - There's nothing wrong with the backup or .mdf/.ldf files, as this works just fine on a Windows 2003/SQL 2005 machine in Amazon EC2, but not Windows 2008/SQL 2008.
    - The DB is NOT marked as "Restoring" in SQL Management Studio - it is just listed as a normal database, but throws errors when I try to do anything with it (expand the object browser tree, view properties, etc.)

    Any ideas?
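
    (For context: a scripted restore makes progress easier to watch than the Management Studio dialog - a minimal sketch, with hypothetical logical file names; the real ones come from RESTORE FILELISTONLY. Worth knowing that because the backup contains full-text indexes, a 2005-to-2008 restore also upgrades the full-text catalogs, which can keep the server busy after the data copy itself reports 100%.)

        -- list the logical file names in the backup first
        RESTORE FILELISTONLY FROM DISK = N'D:\backups\mydb.bak';

        -- sketch of the restore itself; names and paths are placeholders
        RESTORE DATABASE MyDB
        FROM DISK = N'D:\backups\mydb.bak'
        WITH MOVE N'MyDB_Data' TO N'D:\DATA\MyDB.mdf',
             MOVE N'MyDB_Log'  TO N'E:\LOG\MyDB.ldf',
             RECOVERY,
             STATS = 5;  -- print progress every 5 percent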


  • How can we configure the Bitnami Joomla stack to open a socket on startup?

    - by bobo
    I have deployed the Bitnami Ubuntu Joomla! 3.1.5-2 (64-bit) stack on Amazon Cloud: http://bitnami.com/stack/joomla/cloud/amazon

    By default, the stack is configured to run PHP using PHP-FPM. I have no problem getting Joomla and phpMyAdmin running as virtual hosts on Apache. But now, I would like to add another virtual host. The problem I am having is that I have no idea how to get the system to create a socket on startup in the following folder:

        bitnami@ip-172-31-15-99:/opt/bitnami/php/var/run$ ls -al
        total 12
        drwxr-xr-x 2 root root 4096 Nov  3 20:43 .
        drwxr-xr-x 4 root root 4096 Oct  9 15:39 ..
        srw-rw-rw- 1 root root    0 Nov  3 20:43 joomla.sock
        -rw-r--r-- 1 root root    4 Nov  3 20:43 php5-fpm.pid
        srw-rw-rw- 1 root root    0 Nov  3 20:43 phpmyadmin.sock
        srw-rw-rw- 1 root root    0 Nov  3 20:43 www.sock

    I have the following /opt/bitnami/apps/mywebsite/conf/php-fpm/pool.conf file:

        [mywebsite]
        listen=/opt/bitnami/php/var/run/mywebsite.sock
        include=/opt/bitnami/php/etc/common-dynamic.conf
        include=/opt/bitnami/apps/mywebsite/conf/php-fpm/php-settings.conf
        pm=dynamic

    As can be seen, listen points to mywebsite.sock, which does not currently exist. I did an experiment: I removed the .sock files in the /opt/bitnami/php/var/run folder, and they came back on reboot. So how can we configure it to open a socket for mywebsite on startup?
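
    (A note for context: PHP-FPM creates one socket per pool it knows about when it starts, so the sockets reappearing on reboot means the missing piece is usually that the new pool file is never pulled into the master configuration. A hedged sketch - the exact include mechanism in the Bitnami stack may differ, so mirror however the existing joomla pool is referenced:)

        ; in the master config, e.g. /opt/bitnami/php/etc/php-fpm.conf
        ; (assumption: the joomla/phpmyadmin pools are wired in the same way)
        include=/opt/bitnami/apps/mywebsite/conf/php-fpm/pool.conf

    and then restart the service so the pool, and with it mywebsite.sock, is created:

        sudo /opt/bitnami/ctlscript.sh restart php-fpm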


  • stunnel not working - stunnel.pem: No such file or directory

    - by Marronsuisse
    I am trying to install stunnel on an Amazon Linux machine. (I want to configure Postfix so that it sends its emails through Amazon SES.) I first tried to install from the tar.gz package downloaded from http://www.stunnel.org and installed with the commands:

        ./configure
        make
        make install

    But then the stunnel command was still not found. Then I installed it with yum install stunnel. But now when I try, I get:

        sudo stunnel
        2012.06.23 06:51:53 LOG7[20071:3078289200]: Snagged 64 random bytes from /root/.rnd
        2012.06.23 06:51:53 LOG7[20071:3078289200]: Wrote 1024 new random bytes to /root/.rnd
        2012.06.23 06:51:53 LOG7[20071:3078289200]: RAND_status claims sufficient entropy for the PRNG
        2012.06.23 06:51:53 LOG7[20071:3078289200]: PRNG seeded successfully
        2012.06.23 06:51:53 LOG3[20071:3078289200]: stunnel.pem: No such file or directory (2)

    So it seems there is still a problem with the install. When I use the locate stunnel command, I see files a bit everywhere. What can I do to get a clean install of stunnel?

    Edit: I was following this procedure: http://docs.amazonwebservices.com/ses/latest/DeveloperGuide/SMTP.MTAs.SecureTunnel.html when I got stuck at point 5 with the stunnel.pem: No such file or directory message.
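
    (For context: run with no arguments, stunnel falls back to its compiled-in defaults, and a server-mode setup requires a certificate - the stunnel.pem it is complaining about. The SES tunnel in the linked guide runs in client mode, which needs no certificate. A minimal sketch along those lines; the config path, local port and region endpoint are assumptions:)

        ; /etc/stunnel/smtp-ses.conf (hypothetical path)
        client = yes

        [smtp-tls-wrapper]
        accept = 2525
        connect = email-smtp.us-east-1.amazonaws.com:465

    and then point stunnel explicitly at that file instead of the defaults:

        sudo stunnel /etc/stunnel/smtp-ses.conf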


  • Running your own GAE server

    - by h2g2java
    The question http://stackoverflow.com/questions/2505265/how-difficult-is-it-to-migrate-away-from-google-app-engine triggered me to think about this issue again. I have read of someone running the development version of Google App Engine, production-wise, on their own server. My questions are:

    - Are there any security issues running the GAE development server on your own server in production mode and exposing it to the web? If so, how do you mitigate them?
    - Can the GAE dev server be run on Amazon?
    - Is it possible to port my GAE apps running on Google servers to a GAE dev server running on Amazon, without code changes and without changing any reference to other gdata services such as Google Docs, YouTube, Gmail, etc.?
    - How do I configure the GAE dev server to use my own Hadoop? Or to use Amazon's Hadoop?
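
    (For reference, the development server is just a Python process, so "running your own GAE server" amounts to binding it to a public interface - a hedged sketch, noting that flag names varied between SDK releases, so check dev_appserver.py --help:)

        # serve the app directory on all interfaces (flags are assumptions)
        python dev_appserver.py --address=0.0.0.0 --port=8080 /path/to/myapp

    One known caveat: the dev server was built as a single-process debugging tool, and its local admin console has no authentication, which is the most obvious security issue with exposing it to the web.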


  • Tomcat performing terribly for no apparent reason

    - by John
    We're running a game application .WAR on Tomcat 6 on an Amazon EC2 server with an 8-core processor and 7 GB of RAM. The application uses a MySQL database hosted on Amazon RDS. This Facebook application takes ages to respond when a mere 20-30 users are playing it - a big difference from 1-2 users. The entire .WAR is ~4 MB, with all static content hosted elsewhere. The server has never been close to running out of RAM. CPU utilization has never been higher than 13.5-14%, even with ~500 users, which completely slowed everything to a standstill. The thread count and thread pools aren't close to being maxed out. I raised maxThreads, but it didn't make a noticeable difference. My theory is that Tomcat can only use one processor core, which would explain why it slowed to a halt even though CPU usage was stable at 13-14% during the activity spike. But I'm struggling to understand why it would only use one CPU core. There is no processor cap in server.xml. The app contains several servlets (4 or 5). There is no mention of SingleThreadModel in the Java code. WHAT could be causing the application to run extremely slowly? If there are only 1-5 people on the application it runs fine. With 20-30 people it's barely contactable.
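
    (A note for context: Tomcat schedules request threads across all cores, so flat ~13% CPU - roughly one core's worth of an 8-core box - usually means threads are blocked waiting on something external, typically the database, rather than CPU-capped. One way to investigate is a thread dump under load, plus an explicit connector thread ceiling to rule that limit out; a sketch, with the port and sizes as assumptions:)

        <!-- server.xml: Tomcat 6 HTTP connector with an explicit thread pool -->
        <Connector port="8080" protocol="HTTP/1.1"
                   maxThreads="400" acceptCount="200"
                   connectionTimeout="20000" />

    and while the app is slow:

        jstack <tomcat-pid>   # many threads parked on DB socket reads points at RDS/pooling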


  • Run command remotely on Windows computer

    - by Bilal Aslam
    I have a Windows Server 2008 instance on Amazon EC2 (Amazon's cloud compute platform, which provides VMs in the cloud). It has an external IP, and I have an admin account on the box. I would like to 'bootstrap' this instance remotely, i.e. I want to run commands to download, install and configure apps on it, all without having to log on even once. Also, I cannot use psexec on the source computer. I have figured out how to do this to a remote, domain-joined computer using WMI. However, I have NOT been able to do it for a remote computer on EC2. Here are some specific restrictions:

    - The remote computer is not part of my domain, hence no Kerberos
    - The remote computer does not have a cert I trust, or vice versa

    I am sure I am running into some auth/trust restriction. Is there any way I can run a single command on the remote machine, given that I have admin privileges? I'm not tied down to using WMI, but I do need to run a command somehow. Feels like this should be a solved problem.
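
    (One workgroup-friendly alternative to WMI here is WinRM/winrs, which can be told to trust a specific non-domain host explicitly. A hedged sketch - the IP, credentials and port are placeholders, and over plain HTTP you would also have to allow Basic/unencrypted auth on both ends, or use HTTPS instead:)

        rem on the EC2 instance (can be baked into the image or run via user-data):
        winrm quickconfig -q

        rem on the source computer (port 5985 assumes WinRM 2.0; older versions listen elsewhere):
        winrm set winrm/config/client @{TrustedHosts="203.0.113.10"}
        winrs -r:http://203.0.113.10:5985 -u:Administrator -p:SecretPass "cmd /c dir C:\"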


  • Installing sqlite gem fails on AWS Linux instance with sqlite-devel libraries installed

    - by Scott
    Hi, I'm running an instance built off ami-595a0a1c. I am trying to install the sqlite3 (or sqlite) gem and it's failing with the below error:

        $ sudo gem install sqlite3
        Building native extensions. This could take a while...
        ERROR: Error installing sqlite3:
        ERROR: Failed to build gem native extension.

        /usr/bin/ruby extconf.rb
        checking for sqlite3.h... no
        sqlite3.h is missing. Try 'port install sqlite3 +universal' or
        'yum install sqlite3-devel' and check your shared library search path (the
        location where your sqlite3 shared library is located).
        extconf.rb failed
        * Could not create Makefile due to some reason, probably lack of necessary
        libraries and/or headers. Check the mkmf.log file for more details. You may
        need configuration options.

        Provided configuration options:
            --with-opt-dir
            --without-opt-dir
            --with-opt-include
            --without-opt-include=${opt-dir}/include
            --with-opt-lib
            --without-opt-lib=${opt-dir}/lib
            --with-make-prog
            --without-make-prog
            --srcdir=.
            --curdir
            --ruby=/usr/bin/ruby
            --with-sqlite3-dir
            --without-sqlite3-dir
            --with-sqlite3-include
            --without-sqlite3-include=${sqlite3-dir}/include
            --with-sqlite3-lib
            --without-sqlite3-lib=${sqlite3-dir}/lib

        Gem files will remain installed in /usr/lib64/ruby/gems/1.8/gems/sqlite3-1.3.3 for inspection.
        Results logged to /usr/lib64/ruby/gems/1.8/gems/sqlite3-1.3.3/ext/sqlite3/gem_make.out

    Typically, this just means you need to install the development libraries and everything is cool. However, I have installed the sqlite-devel packages and still no dice. Since this is the Amazon Linux instance, I'd rather not add more repositories than the ones Amazon provides if possible. What can I do to get this thing to compile? Thanks for any insight!

    From a brand new instance, here's what I've done:

        $ sudo yum install rubygems ruby-devel
        $ sudo gem update --system
        $ sudo gem install rails
        $ rails new app
        $ cd app
        $ rails server
        Could not find gem 'sqlite3 (= 0)' in any of the gem sources listed in your Gemfile.
        $ sudo yum install sqlite-devel
        $ sudo gem install sqlite (or sqlite3 -- same result)
        See breakage above
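
    (Since "checking for sqlite3.h... no" is the actual failure, one thing worth trying is pointing the native build directly at the locations sqlite-devel populates. The paths below are the usual x86_64 ones, but verify them, e.g. with rpm -ql sqlite-devel:)

        sudo yum install -y sqlite-devel
        sudo gem install sqlite3 -- --with-sqlite3-include=/usr/include \
                                    --with-sqlite3-lib=/usr/lib64

        # if it still fails, mkmf.log under the gem's ext/ directory records
        # exactly which compile/link check missed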


  • Knowing the selections made on a 'multichooser' box in a Mechanical Turk hit (using Command Line Tools)

    - by gveda
    Hi All, I am new to Amazon Mechanical Turk, and wanted to create a hit with a qualification task. I am using the command line tools interface. One of the questions in my qualification task involves users selecting a number of options. I use a 'multichooser' selection type. Now I want to grade the responses based on the selections, where each selection has a different score. So for example, s1 has a score of 5, s2 of 10, s3 of 6, and so on. If the user selects s1 and s3, he/she gets a score of 11. Unfortunately, doing something like the following does not work:

        <AnswerOption>
          <SelectionIdentifier>s1</SelectionIdentifier>
          <AnswerScore>5</AnswerScore>
        </AnswerOption>
        <AnswerOption>
          <SelectionIdentifier>s2</SelectionIdentifier>
          <AnswerScore>10</AnswerScore>
        </AnswerOption>
        <AnswerOption>
          <SelectionIdentifier>s3</SelectionIdentifier>
          <AnswerScore>6</AnswerScore>
        </AnswerOption>

    If I do this, when I select multiple things, I get a score of 0. If I select only one option, say s1, then I get the appropriate score. Can you please help me on how to go about this? I could ask the same question 5 times with the same options, but then users might choose the same answer multiple times - something I wish to avoid. Thanks! Gaurav


  • Run command remotely on Windows computer from C#

    - by Bilal Aslam
    I have a Windows Server 2008 instance on Amazon EC2 (Amazon's cloud compute platform, which provides VMs in the cloud). It has an external IP, and I have an admin account on the box. I would like to 'bootstrap' this instance remotely, i.e. I want to run commands to download, install and configure apps on it, all without having to log on even once. Also, I cannot use psexec on the source computer. I have figured out how to do this to a remote, domain-joined computer using WMI. However, I have NOT been able to do it for a remote computer on EC2. Here are some specific restrictions:

    1) The remote computer is not part of my domain, hence no Kerberos
    2) The remote computer does not have a cert I trust, or vice versa

    I am sure I am running into some auth/trust restriction. Is there any way I can run a single command on the remote machine, given that I have admin privileges? I'm not tied down to using WMI, but I do need to run a command somehow. Feels like this should be a solved problem.


  • mongodb read/write performance and mongo hosting in the cloud

    - by z3cko
    We are currently developing a high-traffic Rails application with Facebooker (a Facebook game). Since Amazon SimpleDB (aws-sdb) is really slow, we are thinking of using a dedicated MongoDB server as offered by MongoHQ, for example. Questions:

    - What is the read/write peak value for a MongoDB server running on an Amazon EC2 instance?
    - What would be a recommended setup for an EC2-hosted app with MongoDB - a master on Amazon EBS and replicas on the EC2 instances? Any examples or experiences?
    - Is there a company that offers MongoDB hosting in the cloud?

    Thanks, mz


  • No apparent reason for high load average

    - by Oz.
    We have several web servers running on Amazon (EC2) c1.xlarge, over Amazon AMI. The servers are duplicates of each other, running the exact same hardware and software. Each server spec is:

    - 7 GB of memory
    - 20 EC2 Compute Units (8 virtual cores with 2.5 EC2 Compute Units each)
    - 1690 GB of instance storage
    - 64-bit platform
    - I/O Performance: High
    - API name: c1.xlarge

    A couple of weeks ago we ran a yum upgrade on one of the servers. Starting with this upgrade, the upgraded server began showing a high load average. Needless to say, we did not update the other servers, and we cannot do so until we understand the reason for this behavior. The strange thing is that when we compare the servers using top or iostat, we cannot find the reason for the high load. Note that we have moved traffic from the "problematic" server to the others, which has made the "problematic" server less crowded in terms of requests, and still its load is higher. Do you have any idea what it could be, or where else we can check? Many thanks for the help! Oz.

        # proper server
        # w command
        00:42:26 up 2 days, 19:54, 2 users, load average: 0.41, 0.48, 0.49
        USER TTY    FROM          LOGIN@ IDLE   JCPU   PCPU  WHAT
             pts/1  82.80.137.29  00:28  14:05  0.01s  0.01s -bash
             pts/2  82.80.137.29  00:38  0.00s  0.02s  0.00s w

        # proper server
        # iostat command
        Linux 3.2.12-3.2.4.amzn1.x86_64 _x86_64_ (8 CPU)
        avg-cpu: %user %nice %system %iowait %steal %idle
                  9.03  0.02    4.26    0.17   0.13 86.39
        Device:   tps  Blk_read/s  Blk_wrtn/s  Blk_read  Blk_wrtn
        xvdap1   1.63        1.50       55.00    367236  13444008
        xvdfp1   4.41       45.93       70.48  11227226  17228552
        xvdfp2   2.61        2.01       59.81    491890  14620104
        xvdfp3   8.16       14.47       94.23   3536522  23034376
        xvdfp4   0.98        0.79       45.86    192818  11209784

        # problematic server
        # w command
        00:43:26 up 2 days, 21:52, 2 users, load average: 1.35, 1.10, 1.17
        USER TTY    FROM          LOGIN@ IDLE   JCPU   PCPU  WHAT
             pts/0  82.80.137.29  00:28  15:04  0.02s  0.02s -bash
             pts/1  82.80.137.29  00:38  0.00s  0.05s  0.00s w

        # problematic server
        # iostat command
        Linux 3.2.20-1.29.6.amzn1.x86_64 _x86_64_ (8 CPU)
        avg-cpu: %user %nice %system %iowait %steal %idle
                  7.97  0.04    3.43    0.19   0.07 88.30
        Device:   tps  Blk_read/s  Blk_wrtn/s  Blk_read  Blk_wrtn
        xvdap1   2.10        1.49       76.54    374660  19253592
        xvdfp1   5.64       40.98       85.92  10308946  21612112
        xvdfp2   3.97        4.32       93.18   1087090  23439488
        xvdfp3  10.87       30.30      115.14   7622474  28961720
        xvdfp4   1.12        0.28       65.54     71034  16487112


  • AWS own email domain and some generic questions

    - by John Brunner
    I'm getting started with Amazon Web Services and I have a few questions I'm not sure about. Like every company webpage, I want to use an "[email protected]" email address, but how is that done? I looked at godaddy.com (for domain registration); they offer me an email address like I want, but for 3 dollars per month. Is this possible with AWS? Because at AWS you just get a complex domain name, which is not very user-friendly or professional. Also, I want to host my dynamic webpage on the Amazon cloud, but I'm not sure if I'm doing that right. I've read many guides, and all I know is that I have to use an Elastic Compute Cloud instance and a Simple Storage Service... and every guide works with the basic Linux package - why not Windows? Is it more expensive? I just want to host a MySQL server for the dynamic webpage, which is reached over a normal domain. And one last question: when I sign up for an AWS account it asks me for an email account, but I find it a little unprofessional to enter my free-webmailer address there... How is it done the normal way? Thanks in advance! Best regards, john.


  • MySQL Not Turning On

    - by Shalin Shah
    I have an Amazon EC2 instance running on the Amazon Linux AMI, and it's a micro instance. I wanted to install Django onto my server, so I entered these commands:

        wget http://www.mlsite.net/blog/wp-content/uploads/2008/11/go
        wget http://www.mlsite.net/blog/wp-content/uploads/2008/11/django.conf
        chmod 744 go
        ./go

    After I was done, I ran sudo service httpd restart and sudo service mysqld restart. This is what came up for mysqld:

        Stopping mysqld:                 [  OK  ]
        MySQL Daemon failed to start.
        Starting mysqld:                 [FAILED]

    So I deleted the Django files /usr/local/python2.6.8/site-packages/django_registration.egg and tried to find the error. I found out that in my /etc/my.cnf the socket was set to socket=/var/lock/subsys/mysql.sock, so I went to /var/lock/subsys/ and there was no mysql.sock. I tried creating one using vim, but it still didn't work. Then I checked the error log and it said:

        Can't connect to local MySQL server through socket '/var/run/mysqld/mysqld.sock' (2)

    I am pretty much lost right now. I know it has something to do with mysql.sock. If you might know a reason why this was caused, could you please let me know? I have a WordPress site on my server, so I kind of need MySQL to work. Thanks!
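
    (For what it's worth: mysqld creates its socket file itself at startup - hand-creating one with vim won't help - so the usual fix is making every socket reference point at one writable location. A hedged sketch of the relevant /etc/my.cnf pieces, using the common Amazon Linux default path as an assumption:)

        [mysqld]
        datadir=/var/lib/mysql
        socket=/var/lib/mysql/mysql.sock

        [client]
        socket=/var/lib/mysql/mysql.sock

    If it still fails to start, /var/log/mysqld.log usually names the real blocker; on a micro instance, running out of memory is a common one.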


  • Scaling a LAMP website hosted on EC2

    - by Gublooo
    Hello, I'm very new to all this - I've recently managed to launch my website on EC2. As a next step, I want to learn how to scale the website. I have a general idea but wanted some input from the experts about how to go about it. My website is based on LAMP but also has a Red5 server, which allows users to record messages and is also used for playing them back. Currently this is the architecture I'm planning for initial scaling - deploy four small EC2 instances for the following purposes:

    - Instance-1: On this instance I will run the MySQL database.
    - Instance-2: On this instance I will run the Red5 server.
    - Instance-3 & Instance-4: These 2 instances will be used to deploy the website and will have Apache running on them. They will communicate with the MySQL server on Instance-1 and the Red5 server on Instance-2 using the internal IP address. As and when required, I will launch another instance of the same kind.
    - EBS: I will have an EBS volume of say 50 GB where all the MySQL data will be stored. Red5 will also use this EBS volume to store the video messages.
    - Load Balancer: Use the load balancer provided by Amazon to load balance Instance-3 and Instance-4.

    This is what I have in mind. I could be way off, so please bear with me. Also, I have not taken into account the case of scaling the MySQL server, as I currently have no idea how that will be done and whether or not it is necessary initially. I am aware that Amazon provides auto scaling and MySQL scaling as well, but I don't want to get into that right now. Your feedback is appreciated. Thanks


  • Server Hosting + AWS

    - by ledy
    Since my dedicated servers are hosted at a "normal" hosting service, I wonder if there is a really cheap way to extend the server farm with AWS instances. E.g. it seems to be an efficient and flexible solution for data storage and resources for occasional data processing, too. However, it might be very inefficient to mix two data centres, transferring data from the current webhoster to Amazon and vice versa. In my case, the traffic for this continuous data exchange seems to be expensive, and the delay in moving the data back to the hoster leads to a lag. What are best practices for mixing non-AWS and AWS systems? E.g.: how to move the hoster's data to AWS as log file storage to run Urchin analysis, and/or port the log file data into a BigTable for exhaustive analysis there. After working with the data: how to bring it back to the hoster and use the data with the webservers there? I am not going to move the whole server farm to Amazon, only "separate" parts or tasks, if the transfer/exchange does not lead to increased cost.


  • Replicated filesystem and EC2 MySQL

    - by El Yobo
    I'm currently investigating migrating our infrastructure over to run on Amazon's EC2 and am trying to figure out the best way to set up a MySQL service. I'm leaning towards running our own MySQL instances, rather than going with Amazon's RDS, but am still considering the best approach for performance and cost on the instance itself. In order to have persistent data, the MySQL data needs to be on an EBS volume (with some form of striped RAID, e.g. RAID0 or RAID10) to improve persistence. However, EBS IO is limited by the network interface (gigabit, so a theoretical maximum of 128 MB/s), while the ephemeral volumes have no such problem. I did see a suggestion for running two MySQL servers on an instance, with a master running on the ephemeral disk (which we would also RAID) and a slave storing changes to an EBS volume, but this has some additional overhead and complexity (two servers). What I was imagining is using some form of replicated file system such that I could have:

    - a filesystem on top of a RAID0 of ephemeral volumes to maximise performance
    - all changes from the above immediately replicated to another RAID1 volume backed by multiple EBS volumes to ensure no data loss

    The advantages of this would be:

    - best possible IO performance for the DB server; no network delay in IO
    - decreased IO on EBS volumes (as all read IO will be done on the ephemeral volumes), so decreased cost
    - good data security, as it's backed onto redundant EBS volumes

    However, I haven't seen an appropriate system to replicate all changes from one volume to the other; is there a filesystem, or any other approach, which will do this? The distributed file systems, e.g. GlusterFS, DRBD etc., seem to focus on replicating disks between servers - can they be set up to do what I'm interested in here? I also haven't seen anything about others taking this approach. Do I have a solution in need of a problem here (i.e. is performance good enough, so this whole idea is redundant)? Is there some flaw in the plan?
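
    (On the DRBD point: it replicates at the block level between two hosts rather than between two volumes on one host, but the shape is close to what's described - a fast primary array with an asynchronously mirrored, EBS-backed secondary. A hedged sketch of such a resource; hostnames, devices and addresses are all assumptions:)

        resource r0 {
          protocol A;                 # async: local writes don't wait on the mirror
          on db-primary {
            device    /dev/drbd0;
            disk      /dev/md0;       # RAID0 over ephemeral disks
            address   10.0.0.10:7788;
            meta-disk internal;
          }
          on db-mirror {
            device    /dev/drbd0;
            disk      /dev/md1;       # RAID1 over EBS volumes
            address   10.0.0.11:7788;
            meta-disk internal;
          }
        }

    Protocol A trades a small window of potential data loss for keeping IO latency close to the local array, which matches the priorities above.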


  • What other ways can I load balance EC2 servers without using Elastic Load Balancing?

    - by undefined
    I have a web application that consists of a web server managed by a web hosting firm, a set of EC2 instances in Amazon's cloud, and a MySQL database (hosted on the webserver). MySQL is behind a firewall and is set to allow access from localhost and from a single IP address, which is an Amazon Elastic IP address attached to the EC2 instance I have been running up to now. The problem is that I want to look at my scaling-up and load balancing strategy for my EC2 instances. To this end I have been investigating the Elastic Load Balancers and Autoscaling tools that Amazon provides and have managed to set this up fine, but for one thing - connecting to the MySQL database running on my webserver. I realised (thanks to answers on Server Fault) that I needed to check the firewall settings and add the IP address for the load balancer. However, Elastic Load Balancers provide you with a DNS name, not an IP address, and in fact the IP addresses change over time, so this will not work. I have been told by the company hosting the database that the way the firewall works is to look up the IP address of the DNS name and store the IP rather than the DNS name, so basically this will not work, and the only way to allow access would be to open up the SQL port to allow access from anyone! Is this a viable idea? Should I look at moving my database into the cloud? Is there another firewall that the server company can use? Should I find another way of load balancing (if so, what?). Tricky one, eh? Any help appreciated!
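
    (One self-managed alternative that sidesteps the changing-ELB-IP problem is running the balancer yourself on an instance with an Elastic IP, so the firewall only ever has to whitelist one fixed address. A minimal HAProxy sketch; the backend addresses are placeholders:)

        # /etc/haproxy/haproxy.cfg - a minimal sketch
        defaults
            mode http
            timeout connect 5s
            timeout client  30s
            timeout server  30s

        frontend www
            bind *:80
            default_backend app

        backend app
            balance roundrobin
            server web1 10.0.1.10:80 check
            server web2 10.0.1.11:80 check

    The trade-off versus ELB is that the balancer instance becomes a single point of failure you have to monitor yourself.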


  • AWS Amazon EC2 - password-less SSH login for non-root users using PEM keypairs

    - by Mark White
    We've got a couple of clusters running on AWS (HAProxy/Solr, PGPool/PostgreSQL) and we've set up scripts to allow new slave instances to be auto-included into the clusters by updating their IPs in config files held on S3, then SSHing to the master instance to kick it to download the revised config and restart the service. It's all working nicely, but in testing we're using our master pem for SSH, which means it needs to be stored on an instance. Not good. I want a non-root user that can use an AWS keypair and will have sudo access to run the download-config-and-restart scripts, but nothing else. rbash seems to be the way to go, but I understand this can be insecure unless set up correctly. So what security holes are there in this approach:

    - New AWS keypair created for user.pem (not really called 'user')
    - New user on instances: user
    - Public key for user is in ~user/.ssh/authorized_keys (taken by creating a new instance with user.pem, and copying it from /root/.ssh/authorized_keys)
    - Private key for user is in ~user/.ssh/user.pem
    - 'user' has login shell of /home/user/bin/rbash
    - ~user/bin/ contains symbolic links to /bin/rbash and /usr/bin/sudo
    - /etc/sudoers has entry "user ALL=(root) NOPASSWD:
    - ~user/.bashrc sets PATH to /home/user/bin/ only
    - ~user/.inputrc has 'set disable-completion on' to prevent double tabbing from 'sudo /' to find paths
    - ~user/ -R is owned by root with read-only access to user, except for ~user/.ssh which has write access for user (for writing known_hosts), and ~user/bin/* which are +x
    - Inter-instance communication uses 'ssh -o StrictHostKeyChecking=no -i ~user/.ssh/user.pem user@ sudo '

    Any thoughts would be welcome. Mark...


  • Amazon-like Ecommerce site and Recommendation system

    - by Hellnar
    Hello, I am planning to implement a basic recommendation system that uses Facebook Connect or similar social networking site APIs to connect to a user's profile, does an analysis based on tags, and, using the results, generates item recommendations on my e-commerce site (which works similarly to Amazon). I believe I need to divide it into these parts:

    1. Fetching social networking data via APIs (indeed, the user allows this).
    2. Analyzing this data and generating tokens.
    3. Using the information tokens to make item recommendations on my e-commerce site.

    E.g.: I am a fan of "The Strokes" band on my Facebook account; the system analyzes this and recommends me "The Strokes Live" CD. For any part (fetching data, doing recommendation based on tags...), what algorithm and method would you recommend / is used? Thanks
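
    (As one simple baseline for part 3, plain tag overlap between the profile and the catalog already produces a ranked list. A minimal sketch in Python; the tag sets and catalog entries are made-up stand-ins for whatever the profile analysis emits:)

        from collections import Counter

        # hypothetical catalog: item -> set of descriptive tags
        catalog = {
            "The Strokes Live CD":   {"the strokes", "rock", "live album"},
            "Indie Rock Anthology":  {"rock", "indie"},
            "Jazz Standards Vol. 1": {"jazz"},
        }

        def recommend(profile_tags, catalog, top_n=5):
            """Rank catalog items by how many profile tags they share."""
            scores = Counter()
            for item, tags in catalog.items():
                overlap = len(profile_tags & tags)
                if overlap:
                    scores[item] = overlap
            return [item for item, _ in scores.most_common(top_n)]

        # a fan of The Strokes (tags: "the strokes", "rock") ranks the live CD first
        print(recommend({"the strokes", "rock"}, catalog))

    More sophisticated approaches (TF-IDF weighting of tags, collaborative filtering once purchase history accumulates) keep the same interface: profile in, ranked items out.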


  • SoundPool repeating issue for Samsung Galaxy S3

    - by Alaa Eldin
    I'm trying to play a background sound for my application. I use the SoundPool class. My problem is that sound plays well only when I set the loop parameter with zero value, but it doesn't work for any other value. My code for initialization is:

        soundpool = new SoundPool(4, AudioManager.STREAM_MUSIC, 0);
        soundsMap = new HashMap<Integer, Integer>();
        soundsMap.put(1, soundpool.load(this, R.raw.soundfile_1, 1));
        soundsMap.put(2, soundpool.load(this, R.raw.soundfile_2, 1));

    My code for playing is:

        soundpool.play(1, 0.9f, 0.9f, 1, -1, 1f);

    As mentioned, sound works when I put (0) instead of (-1) for the loop value. Anyone has any idea why (-1), or any value other than (0), doesn't work (there is no output sound)?


  • amazon design doubt

    - by praveen
    I was looking at the Amazon website and was wondering how one of its features might have been implemented. The feature: what customers buy after viewing a particular item. If I were to develop such a feature, I would probably generate a session ID for each user session and store the session ID-page ID combination in a log file, and if a book is bought, set a separate flag for that session ID-page ID. A separate program can then be run on the log file periodically to identify the groups that were bought together/viewed together, and that information can be stored in a persistent file. This is of course a simple solution that doesn't take into consideration the distributed nature of the servers - but would this suffice, or can you help me identify a better design?
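
    (The periodic log-processing step sketched above is essentially a per-session co-occurrence count. A minimal illustration in Python, with the log record layout - session ID, page ID, bought flag - assumed from the description:)

        from collections import defaultdict

        def bought_after_viewing(log):
            """log: iterable of (session_id, page_id, bought) records."""
            views = defaultdict(set)
            buys = defaultdict(set)
            for session, page, bought in log:
                (buys if bought else views)[session].add(page)

            # counts[viewed_item][bought_item] -> number of sessions
            counts = defaultdict(lambda: defaultdict(int))
            for session, viewed in views.items():
                for v in viewed:
                    for b in buys.get(session, ()):
                        if b != v:
                            counts[v][b] += 1
            return counts

        log = [("s1", "book-A", False), ("s1", "book-B", True),
               ("s2", "book-A", False), ("s2", "book-B", True)]
        print(dict(bought_after_viewing(log)["book-A"]))  # {'book-B': 2}

    At Amazon scale the same counting step would be sharded across machines (which is where the distributed-servers concern comes in), but the per-session aggregation logic stays the same.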


  • Flowplayer RTMP streaming, mp4, Amazon Cloudfront and iPad/iPhone

    - by circey
    I've been working on a site where 2 video clips are streamed using Amazon CloudFront and Flowplayer. You can see one video/page here: http://graemeclarkoration.org.au/gcorationp1.htm (works as a Highslide popup/modal window, hence the lack of adornment). While it works in all browsers and on Android devices, I can't get it to work on an iPad or an iPhone; the page opens fine and the video box appears, but the video never loads. Does anyone have any idea how to fix this, or even why the video won't load? MTIA
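
    (A likely culprit: RTMP playback in Flowplayer is Flash-based, and iOS devices have no Flash at all, so an RTMP stream can never start there regardless of configuration. The usual workaround is serving those devices a plain HTML5 player fed by an HTTP (progressive download or HLS) CloudFront URL; a hedged sketch with a placeholder distribution URL:)

        <!-- fallback shown to iOS/HTML5-capable clients instead of the Flash player -->
        <video controls width="640" height="360"
               src="https://d1234example.cloudfront.net/gc-oration-part1.mp4">
          Your browser does not support HTML5 video.
        </video>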


  • Installing Skype on Amazon EC2 instance

    - by Adrian
    For my application, I need to have Skype working on my Amazon EC2 Windows instance. I got the application installed and am able to log in; however, I can't make a phone call, since I am getting a 'Can't detect your sound card' error. Since I'm trying to inject audio from an audio file into the phone call, I don't need the sound card on the server. Thus, I need a way to bypass this error message. I have tried installing Virtual Audio Cable, which unfortunately didn't work (even though it worked on my desktop machine).


  • What is the difference between Anycast and GeoDNS / GeoIP wrt HA?

    - by Riyad
    Based on the Wikipedia description of Anycast, it includes both the distribution of a domain-name-to-many-IP mapping across many DNS servers, as well as replying to clients with the most geographically close (or fastest) server. In the context of a globally distributed, highly available site like google.com (or any CDN service with many global edge locations) this sounds like the two key features one would need. DNS services like Amazon's Route 53, EasyDNS and DNSMadeEasy all advertise themselves as Anycast-enabled networks. Therefore my assumption is that each of these DNS services transparently offers me those two killer features: multi-IP-to-domain mapping AND routing clients to the closest node. However, each of these services seems to separate out these two functionalities, referring to the second one (routing clients to the closest node) as "GeoDNS", "GeoIP" or "Global Traffic Director", and charging extra for the service. If a core tenet of an Anycast-capable system is to already do this, why is this functionality being earmarked as an extra feature? What is this "GeoDNS" feature doing that a standard Anycast DNS service won't do (according to the definition of Anycast from Wikipedia - I understand what is being advertised, just not why it isn't implied already)? I get extra confused when a DNS service like Route 53 that doesn't support this nebulous "GeoDNS" feature lists functionality like:

        Fast - Using a global anycast network of DNS servers around the world, Route 53
        is designed to automatically route your users to the optimal location depending
        on network conditions. As a result, the service offers low query latency for
        your end users, as well as low update latency for your DNS record management
        needs.

    ... which sounds exactly like what GeoDNS is intended to do, even though geographically directing clients is something they explicitly don't support yet. Ultimately I am looking for the following two features from a DNS provider:

    1. Map multiple IP addresses to a single domain name (like google.com, amazon.com, etc. do).
    2. Utilize a DNS service that will respond to client requests for that domain with the IP address of the nearest server to the requester.

    As mentioned, it seems like this is all part of an "Anycast" DNS service (all of which these services are), but the features and marketing I see from them suggest otherwise, making me think I need to learn a bit more about how DNS works before making a deployment choice. Thanks in advance for any clarifications.

