Search Results

Search found 282 results on 12 pages for 'ami mahloof'.

Page 11/12 | < Previous Page | 7 8 9 10 11 12  | Next Page >

  • Merely installing PHP5 causes my AWS Ubuntu server to die minutes later from a massive CPU spike

    - by Mark Amery
    I have an AWS server running Ubuntu 11.04 with an Apache2 webserver (the site itself is Python-based, using Django). We recently needed to add PHP5 support so we could use a third-party PHP library (incidentally, for serving minified versions of JS and CSS files). However, for no reason any of us can discern, if we simply run sudo apt-get install php5 on the server, the install appears to finish successfully. But without us taking any further action (we have not yet run sudo apt-get install libapache2-mod-php5, which I think would be the next step if everything worked), and without actually running any PHP scripts on the server, the server becomes impossible to connect to a few minutes later. The 'Monitoring' tab for the server in the EC2 Management Console shows that, a while after the installation, CPU usage spikes to 100% and stays there permanently (until we reboot the server from the AWS Console). After rebooting, the server also reliably dies within a few (between 0 and 10) minutes. We restored the server to a pre-PHP state from an AMI image, observed that it was stable, then tried installing PHP5 again and watched the server die in exactly the same way, so we're pretty much certain that installing PHP5 is what causes the symptoms. What on earth could be causing this behaviour, and how can we get PHP installed on the server without it dying?
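
    A few quick checks can show what is actually burning the CPU before the instance becomes unreachable. This is only a diagnostic sketch, assuming SSH still works for the first few minutes after the install; the log paths are the Ubuntu defaults:

        sudo apt-get install php5                # reproduce the problem
        grep php /var/log/dpkg.log               # see exactly which packages the install pulled in
        top -b -n 1 | head -n 20                 # snapshot the busiest processes before the spike
        ps aux --sort=-%cpu | head               # the same information, sorted by CPU usage
        tail -n 100 /var/log/syslog              # look for a daemon crashing and respawning
        tail -n 100 /var/log/apache2/error.log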

    Read the article

  • Amazon EC2: Instances, IPs and a wordpress blog (LAMP)

    - by JustinXXVII
    I had a link to my blog posted on Reddit yesterday and MySQL crashed on my EC2 Micro instance. I know I didn't have that many visitors because I used a marketing link that tracks hits. The link got 167 hits over the course of the last 18 hours, and MySQL crashed twice. 167 visits is not a lot, so I've done some short-term optimizations like restricting the number of Apache threads to limit the MySQL calls. I also set up WP Super Cache to serve static content. Soon I'm going to offload all of my images to S3 or CloudFront. So this leads me to my question: if this doesn't seem to help, and if I have another traffic "spike", how do AMIs work when you have a MySQL database? I think I understand that if you have more than one instance and assign the same Elastic IP to both of them, the incoming traffic gets distributed among both. But what happens when the MySQL database gets updated on one of the instances? I just need to wrap my mind around what happens when I create an AMI and then launch a new instance to help with traffic. Thanks for your suggestions.
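
    The usual way to sidestep the "which instance owns the database" question is to keep MySQL off the web instances entirely, so every instance launched from the AMI stays stateless. A minimal sketch, assuming a hypothetical separate database host (db.internal.example) and a default WordPress layout:

        mysqldump -u root -p wordpress > wordpress.sql               # dump the DB off the web instance
        mysql -h db.internal.example -u wpuser -p wordpress < wordpress.sql
        # point WordPress at the external database so every clone of the AMI shares it
        sudo sed -i "s/define('DB_HOST'.*/define('DB_HOST', 'db.internal.example');/" \
            /var/www/wp-config.php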

    Read the article

  • javascript doesn't seem to be able to post form data (nginx server w/ php-fpm)

    - by Jones
    The situation is this: I have an nginx server with php-fpm installed. All is well and the site scripts all work perfectly. I am able to use plain HTML to POST form data and it works just fine. However, there seems to be some correlation between JavaScript, the POST method and nothing happening, and I can't seem to determine the issue. Example: I have a user login widget that uses JavaScript to submit the fields and POST the data to a backend auth script, which returns a server message that then populates the login box with something like "Login Successful", followed by reloading the page to properly enable content. The problem is, nothing happens when you hit submit. I do know the setup works because I had it working on Apache before migrating. Also, if it makes any difference, the server is an Amazon EC2 instance using the Amazon AMI. I really don't know where to start looking on this one, but below is my default.conf for the server:

        upstream backend_get {
            server 127.0.0.1:80 weight=1;
        }
        upstream backend_post {
            server 127.0.0.1:80 weight=1;
        }
        #Main website url
        server {
            listen 80;
            server_name server.com;
            #charset koi8-r;
            access_log logs/host.access.log main;
            error_log logs/host.error.log;
            location / {
                root /usr/share/nginx/html;
                index index.php index.html index.htm;
                if ($request_method = POST) {
                    proxy_pass http://backend_post;
                    break;
                }
            }
            location ~ \.php$ {
                #fastcgi_pass 127.0.0.1:9000;
                fastcgi_pass unix:/var/run/php-fpm/php-fpm.sock;
                fastcgi_index index.php;
                fastcgi_param SCRIPT_FILENAME $document_root$fastcgi_script_name;
                include fastcgi_params;
            }
        }
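
    One way to narrow this down is to take the browser out of the loop and POST to the auth endpoint directly; if the request works from the command line, the problem is in the JavaScript rather than in nginx or php-fpm. A sketch, with a hypothetical script path and field names:

        curl -i -X POST -d 'username=test&password=test' http://server.com/auth.php
        # a 200 with the expected body means nginx and php-fpm handle POSTs fine and the bug is client-side;
        # a timeout or 5xx points at the POST proxy_pass loop in the config above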

    Read the article

  • SSH attack on CentOS Amazon EC2

    - by user37143
    Hi, I run a few RightScale CentOS AMI based instances on Amazon EC2. Two months back I found that our SSHD security was compromised (I had added hosts.allow and hosts.deny rules for SSH). So I created new instances, set up IP-based SSH access that allows only our IPs through the AWS firewall (ec2-authorize), and changed SSH from the default port 22 to another port. But two days back I found I was not able to log in to the server; when I tried port 22, SSH connected and I found that sshd_config had been changed. When I tried to edit sshd_config I found root had no write permission on the file, and when I tried a chmod it said access denied for the 'root' user. This is very strange. I checked the secure log and shell history and found nothing informative. I have PHP, Ruby on Rails, Java and WordPress apps running on these servers. This time I did a chkrootkit scan and found nothing, renamed the /etc/ssh folder and reinstalled openssh through yum. I have faced this on 3 instances on CentOS (5.2, 5.4); I have instances on Debian as well and those are working fine. Is this a CentOS/RightScale issue? What security measures should I take to prevent this? Please help, this is very critical. Thanks
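
    The "root cannot write or chmod the file" symptom is often the ext3 immutable attribute, which intruders commonly set on sshd_config after modifying it. A quick check is sketched below; treat it as triage, not as a substitute for rebuilding a compromised host from a clean image:

        lsattr /etc/ssh/sshd_config        # an 'i' among the flags means the immutable bit is set
        chattr -i /etc/ssh/sshd_config     # clear it so the file can be inspected and replaced
        rpm -Vf /usr/sbin/sshd             # verify the sshd binary against the RPM database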

    Read the article

  • Why is my new PC so slow at startup?

    - by rumtscho
    I bought a new PC this weekend, and it works really well. I have only one big problem: startup time. Its BIOS needs 62 sec to load, then from the Grub start to the password entry screen it's another 26 sec. I think this is a lot, because my old PC needs 34 sec for the BIOS and another 8 sec to the password screen. After I enter the password, the desktop is usable with practically no delay on both. The new PC is a Core i7-930, running Lucid Lynx 64-bit from an Intel Postville SSD (no internal HDs). The old PC is a Pentium 4 Celeron (I forget the clock speed) running Lucid Lynx 32-bit from an ATA 100 hard drive. Neither PC is overclocked. The new one has boot sequence 1. DVD ROM, 2. SSD (connected over SATA in AHCI mode), 3. removable drive. The old one boots from 1. DVD ROM, 2. HDD, 3. Floppy. Neither has a second OS installed. The new one has less software installed than the old one (I think), but the boot time difference was noticeable even before I made any installs. As far as I know, just the SSD should be enough to make a noticeable difference in boot time. I thought that having a good mainboard in the new PC, as opposed to the basic office model in the old one, would also mean a faster-loading BIOS. If these assumptions are right, I guess I must have misconfigured something in the BIOS of the new PC. How should I configure it for a fast boot? It has an ASUS P6X58D board with an AMI BIOS; if you need the BIOS revision number I can post that too.

    Read the article

  • disk space keeps filling up on EC2 instance with no apparent files/directories

    - by sasher
    How come the OS shows 6.5G used but I can see only 3.6G in files/directories? I am running as root on an Amazon Linux AMI (seems like CentOS), with lots of free memory available, no swapping going on, and no apparent file-descriptor issue. The only thing I can think of is a log file that was deleted while applications still append to it. Disk space usage is slowly but continuously rising towards full capacity (~1k/min, with very small decreases from time to time). Any explanation? Solution?

        du --max-depth=1 -h /
        1.2G  /usr
        4.0K  /cgroup
        22M   /lib64
        11M   /sbin
        19M   /etc
        52K   /dev
        2.1G  /var
        4.0K  /media
        0     /sys
        4.0K  /selinux
        du: cannot access `/proc/14024/task/14024/fd/4': No such file or directory
        du: cannot access `/proc/14024/task/14024/fdinfo/4': No such file or directory
        du: cannot access `/proc/14024/fd/4': No such file or directory
        du: cannot access `/proc/14024/fdinfo/4': No such file or directory
        0     /proc
        18M   /home
        4.0K  /logs
        8.1M  /bin
        16K   /lost+found
        12M   /tmp
        4.0K  /srv
        35M   /boot
        79M   /lib
        56K   /root
        67M   /opt
        4.0K  /local
        4.0K  /mnt
        3.6G  /

        df -h
        Filesystem      Size  Used Avail Use% Mounted on
        /dev/xvda1      7.9G  6.5G  1.4G  84% /
        tmpfs           3.7G     0  3.7G   0% /dev/shm

        sysctl fs.file-nr
        fs.file-nr = 864  0  761182
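
    The suspicion in the question, a log file that was deleted while a process still appends to it, is the classic reason for df and du to disagree, and it is easy to confirm. A sketch (lsof is in the standard repos if it is not already installed):

        sudo lsof +L1                  # open files with a link count of 0, i.e. deleted but still held open
        sudo lsof | grep -i deleted    # the same information, the long way round
        # restarting (or signalling) the process that holds the file releases the space;
        # truncating the still-open file via /proc/<pid>/fd/<fd> also works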

    Read the article

  • error 503: Can't deploy Rails 3 app with Apache + Thin (Bitnami Ruby stack)

    - by Pacu
    As you'll notice, I'm a bit of a noob at Rails. Here's the thing: I have an EC2 Bitnami RubyStack AMI running. I'm trying to deploy the sample project to be sure I'm doing the right thing, but I'm not getting anywhere at all; I just get a 503 error. I'm following Bitnami's docs on Thin + Apache. Here are my files.

    The httpd.conf snippet that I include in the main httpd.conf:

        Alias /sample "/home/bitnami/stack/projects/sample/public"
        <Directory "/home/bitnami/stack/projects/sample/public">
            AllowOverride None
            Order allow,deny
            Allow from all
        </Directory>
        ProxyPass /sample balancer://appcluster
        ProxyPassReverse /sample balancer://appcluster
        <Proxy balancer://appcluster>
            BalancerMember http://127.0.0.1:3001/sample
            BalancerMember http://127.0.0.1:3002/sample
            BalancerMember http://127.0.0.1:3003/sample
            BalancerMember http://127.0.0.1:3004/sample
        </Proxy>

    The thin.yml file:

        chdir: /opt/bitnami/projects/sample
        environment: production
        address: 127.0.0.1
        port: 3000
        timeout: 30
        log: log/thin.log
        pid: tmp/pids/thin.pid
        max_conns: 1024
        max_persistent_conns: 512
        require: []
        wait: 30
        servers: 5
        prefix: /sample
        daemonize: true

    I'm able to start and stop Apache, but Thin does not stop correctly. When I try to stop Thin, I get this output:

        /opt/bitnami/projects/sample$ sudo thin -C config/thin.yml stop
        Stopping server on 127.0.0.1:3000 ...
        Can't stop process, no PID found in tmp/pids/thin.3000.pid
        Stopping server on 127.0.0.1:3001 ...
        Can't stop process, no PID found in tmp/pids/thin.3001.pid
        Stopping server on 127.0.0.1:3002 ...
        Can't stop process, no PID found in tmp/pids/thin.3002.pid
        Stopping server on 127.0.0.1:3003 ...
        Can't stop process, no PID found in tmp/pids/thin.3003.pid
        Stopping server on 127.0.0.1:3004 ...
        Can't stop process, no PID found in tmp/pids/thin.3004.pid

    I've tried nginx as well, without any luck unfortunately. Thank you for your time and help!
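
    A 503 from mod_proxy_balancer usually means none of the BalancerMember backends answered, so the first thing to confirm is that the Thin workers are really up and reachable on the ports Apache expects. A small sketch, using the ports and paths from the config above:

        ps aux | grep [t]hin                          # are the five Thin workers actually running?
        ls /opt/bitnami/projects/sample/tmp/pids/     # Thin writes thin.<port>.pid here when it starts
        curl -I http://127.0.0.1:3001/sample          # can one balancer member be reached directly?
        tail -n 50 /opt/bitnami/projects/sample/log/thin.3001.log   # per-port log, if Thin got far enough to create it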

    Read the article

  • How does everyone set up AWS for PHP with a git workflow while worrying about distributing EC2?

    - by Parris
    Hello, I have been looking for something like Heroku but for PHP, and after much frustration (and almost finding what I need, but not quite) we decided to just go with AWS without any other abstraction. We are using PHP 5.3 (and CakePHP 1.3), and are currently using git. Ubuntu seems like the easiest way to get both of those on there and we will most likely use that. We aren't really going to worry about outgoing email; we are using SMTP through Gmail, but will most likely switch to some other service eventually. I had 3 questions: 1) I have been looking at Zend Server, and I am not quite sure how that is more beneficial than XAMPP. Perhaps it is not? 2) I suppose that to make the application scale we would need multiple instances of some EC2 AMI, then just duplicate it and such. The question then becomes: how do we make sure all EC2 instances are up to date? 3) I understand the concept of load balancing to some degree. I understand that in one region you select a bunch of servers and have it load balance across them. The question then becomes: what about worldwide? How do I make it so that traffic is directed to the correct EC2 server? I have heard of Route 53, and tried signing up for that, but nothing appears in my control panel. Also, perhaps it is just a DNS thing with my domain registrar? AHHH... some tutorial would be helpful!
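
    For question 2, the simplest approach short of a configuration-management tool is to treat the git repository as the source of truth and push the same revision to every running instance. A rough sketch with hypothetical hostnames and paths; Chef/Puppet, or re-baking the AMI per release, are the more robust options:

        for host in web1.example.com web2.example.com web3.example.com; do
            ssh ubuntu@"$host" 'cd /var/www/app && git pull origin master && sudo service apache2 reload'
        done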

    Read the article

  • Database server consolidation with Oracle technologies - resource allocation

    - by Lajos Sárecz
    Virtualization is the default answer to server consolidation, yet Oracle Database has capabilities that deliver the benefits of virtualization while outperforming it. Several databases can be consolidated onto one large server or onto a cluster of several servers. Whichever option we choose (I will not go into their pros and cons here), one of the most important problems to solve is separating the databases safely, both from a data-security and from a resource-management point of view. Software and hardware virtualization make it possible to split a server's resources among several virtual servers and thereby isolate the database instances running side by side, but these solutions are usually costly, add administration overhead and cost performance. Below is a short summary of the resource-separation technologies in Oracle Database that work well for database consolidation:

    Database services: Probably every expert who works with Oracle databases knows that clients reach a database through its service name. By default every database has a single service, which is automatically given the same name as the 'global database name' parameter when the database is created, but several service names can be assigned to one database. Services can be used to group the clients performing different tasks, and for each service we can define how much system resource is allocated to that client group. With clustered databases (RAC) a service can be attached to several database instances (servers), which provides load balancing driven by the actual load (this is also where the Resource Manager comes in, see below). It becomes irrelevant to the application which server is serving a given service. The resources attached to a service can be extended dynamically at run time, and the loss of a resource (failover) is handled as well.

    Database Resource Manager: The Oracle Database Resource Manager manages resources at the database level and regulates CPU usage by controlling the database load. At any given moment the Resource Manager allows only a single Oracle process to run on a CPU while the others wait (just as an operating system scheduler does). The Resource Manager only steps in when CPU load reaches 100%; it then limits, according to the Resource Plan, the amount of resource (CPU) available to each resource group.

    Instance Caging: With the Instance Caging technology, available from Oracle Database 11gR2 as part of the Resource Manager, the number of CPUs allocated to a database instance can be controlled without virtualization or operating-system-level resource partitioning. This is useful when several instances have to run on one server. Instance Caging is activated per database instance by enabling the Resource Manager and setting the cpu_count parameter. cpu_count is a dynamic parameter and should be set to the maximum number of CPUs the given instance may demand. The processors available to the instances may be oversubscribed: on a 4-CPU server with 3 instances, for example, each of the three can be given 3 CPUs. However, if all of them are under load, each instance receives a share of the processors proportional to its maximum allocated CPU count divided by the total allocated CPU count, which in this example is 33.33%, i.e. 1.33 CPUs.

    Input Output Resource Manager (IORM): Not only processor usage can be regulated; the resources of shared storage can be divided up as well. With the Input Output Resource Manager (IORM) the minimum I/O levels between databases, and within them, can be controlled at the storage level.

    Database Vault: When applications are consolidated into the same database, administrator roles can be separated with the Oracle Database Vault technology. This makes it possible to consolidate our databases securely, so that each administrator can see and modify only the data and objects that belong to them.
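
    As a concrete illustration of the Instance Caging step described above, enabling the Resource Manager and capping an instance at three CPUs comes down to two dynamic parameters. A minimal sketch, run as SYSDBA in each consolidated instance (DEFAULT_PLAN is just the built-in plan; any Resource Plan will do):

        sqlplus / as sysdba
        SQL> ALTER SYSTEM SET resource_manager_plan = 'DEFAULT_PLAN';
        SQL> ALTER SYSTEM SET cpu_count = 3;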

    Read the article

  • How to fix: Ubuntu 12.04 reboots after loading with elilo

    - by Casey
    I have an HP p6-2120 with:

        CPU: AMD A6-3620 APU with Radeon Graphics
        RAM: 6GB
        BIOS: HO2_710.ROM v7.10 [AMI v7.10 4/19/2012]
        Disk: SATA1 (/dev/sda) - 1 TB (Windows)
        Disk: SATA2 (/dev/sdb) - 1 TB, partitioned using "parted -a optimal /dev/sdb" as follows:
            ..  1049KB  201MB   FAT32            boot flag set
            ..  201MB   60GB    ext2             (/)
            ..  68GB    78GB    linux-swap(v1)   (swap)
            ..  78GB    790GB   ext4             (/home)
            ..  - the rest is "free" space reserved for other purposes (eventually)
        Ubuntu: 12.04.1 LTS [specifically: Release 12.04 (precise) 64-bit]
        Kernel: linux 3.2.0-29-generic

    I created a bootable EFI USB from the ISO (64-bit) which I downloaded, and I can run and install from the USB without any problems. The BIOS is an EFI BIOS that appears to be capable of booting in either EFI or Legacy mode. Initially, I did the "standard" install with NOTHING on disk 2 and let the installer configure everything. The net result was that when I started the computer and forced it into "boot" menu mode, it DOES NOT recognize SATA2 as an EFI drive, and when I attempt to "legacy" boot from it, I get the message "ERROR: No Boot Disk has been detected." The "standard" install created one large partition that consumed the entire disk. At that point, I manually partitioned the disk (using sudo parted -a optimal /dev/sdb) as described above. I selected the "other" install, and set /dev/sdb1 to "bios_grub", /dev/sdb2 as "/" (ext4), /dev/sdb3 as swap, and /dev/sdb4 as "/home". [Note: fearing that elilo might not recognize ext4, I switched /dev/sdb2 to ext2 and re-installed.] The net result was that the install appeared to trash the /dev/sdb1 partition so that it was NOT readable by anything. I re-formatted /dev/sdb1 as FAT32 and set the boot flag, then repeated the install, ignoring the messages about no bios_grub partition. After several attempts to get GRUB2 to work, I switched to elilo. I downloaded the most recent version and copied it (elilo-3.14-ia64.efi) to /dev/sdb1/efi/boot/bootx64.efi. (The BIOS boot loader did not recognize it either as elilo-3.14.ia64.efi or as elilo.efi; based on advice on one of the web pages I found, I renamed it to bootx64.efi, and this worked.) Into that same directory (/efi/boot) I copied the file pointed to by the /vmlinuz link on /dev/sdb2 as /efi/boot/vmlinuz, and the file pointed to by the /initrd.img link on /dev/sdb2 as /efi/boot/initrd.img. I created an elilo.conf file as follows:

        timeout=5000
        prompt
        default=linux-boot
        image=vmlinuz
        label=linux-boot
        read-only
        initrd=initrd.img
        root=/dev/sdb2

    The /efi/boot directory contains 4 files: bootx64.efi, elilo.conf, vmlinuz, initrd.img. When I power-cycle the computer and force the boot menu, drive 2 shows up as an EFI bootable drive. When I select it, I get the elilo prompt. Pressing Enter, it appears to load the kernel (I have tried it with verbose=5, and there is a long string of messages, the final one being a command line to load the kernel, followed by a series of dots that fly by), then the screen goes blank and the computer reboots. [Note: I have also tried substituting the UUID found in the /etc/fstab of the installed system for the root= value. This had no effect.] This is a brief synopsis of several nights of fiddling with this. I would deeply appreciate any help you can give.

    Read the article

  • Playing video from URL - Android

    - by Rajeev
    In the following code, what am I doing wrong? The video doesn't seem to play. Is it a permissions issue, and if so, what should be included in the manifest file? This is main.xml:

        <?xml version="1.0" encoding="utf-8"?>
        <LinearLayout xmlns:android="http://schemas.android.com/apk/res/android"
            android:layout_width="wrap_content" android:layout_height="wrap_content"
            android:orientation="horizontal" android:baselineAligned="true">
            <LinearLayout android:layout_height="match_parent" android:layout_width="wrap_content"
                android:id="@+id/linearLayout2"></LinearLayout>
            <MediaController android:id="@+id/mediacnt" android:layout_width="wrap_content"
                android:layout_height="wrap_content"></MediaController>
            <LinearLayout android:layout_height="match_parent" android:layout_width="wrap_content"
                android:id="@+id/linearLayout1" android:orientation="vertical"></LinearLayout>
            <Gallery android:layout_height="wrap_content" android:id="@+id/gallery"
                android:layout_width="wrap_content" android:layout_weight="1"></Gallery>
            <VideoView android:layout_height="match_parent" android:id="@+id/vv"
                android:layout_width="wrap_content"></VideoView>
        </LinearLayout>

    And this is the Java class:

        package com.gallery;

        import java.net.URL;
        import android.app.Activity;
        import android.app.Activity;
        import android.net.Uri;
        import android.os.Bundle;
        import android.widget.MediaController;
        import android.widget.Toast;
        import android.widget.VideoView;

        public class GalleryActivity extends Activity {
            /** Called when the activity is first created. */
            @Override
            public void onCreate(Bundle savedInstanceState) {
                super.onCreate(savedInstanceState);
                setContentView(R.layout.main);
                Toast.makeText(GalleryActivity.this, "Hello world", Toast.LENGTH_LONG).show();
                VideoView videoView = (VideoView) findViewById(R.id.vv);
                MediaController mediaController = new MediaController(this);
                mediaController.setAnchorView(videoView);
                // Set video link (mp4 format )
                Uri video = Uri.parse("http://www.youtube.com/watch?v=lEbxLDuecHU&playnext=1&list=PL040F3034C69B1674");
                videoView.setMediaController(mediaController);
                videoView.setVideoURI(video);
                videoView.start();
            }
        }

    Read the article

  • Extracting shell script from parameterised Hudson job

    - by Jonik
    I have a parameterised Hudson job, used for some AWS deployment stuff, which in one build step runs certain shell commands. However, that script has become sufficiently complicated that I want to "extract" it from Hudson into a separate script file, so that it can easily be versioned properly. The Hudson job would then simply update from VCS and execute the external script file. My main question is about passing parameters to the script. I have a Hudson parameter named AMI_ID and a few others. The script references those params as if they were environment variables:

        echo "Using AMI $AMI_ID and type $TYPE"

    Now, this works fine inside Hudson, but not if Hudson calls an external script. Could I somehow make Hudson set the params as environment variables so that I don't need to change the script? Or is my best option to alter the script to take command-line parameters (and possibly assign those to named variables for readability: ami_id=$1; type=$2; ...)? I tried something like this, but the script doesn't see the substituted values:

        export AMI_ID=$AMI_ID
        export TYPE=$TYPE
        external-script.sh # this tries to use e.g. $AMI_ID

    Bonus question: when the script is inside Hudson, the "console output" will contain both the executed commands and their output. This is extremely useful for debugging when something goes wrong with a build! For example, here the line starting with "+" is part of the script and the following line is its output:

        + ec2-associate-address -K pk.pem -C cert.pem 77.125.116.139 -i i-aa3487fd
        ADDRESS 77.125.116.139 i-aa3487fd

    When calling an external script, the Hudson output will only contain the latter line, making debugging harder. I could cat the script file to stdout before running it, but that's not optimal either. In effect, I'd like a kind of DOS-style "echo on" for the script which I'm calling from Hudson - anyone know a trick to achieve this?
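
    Two things usually cover this, sketched below because the details depend on how the build step invokes the script: Hudson normally exports build parameters into the environment of the processes it spawns, so a child script started from the same shell step should already see $AMI_ID without any extra export; and running the script through "sh -x" reproduces the DOS-style "echo on" trace from the bonus question.

        # inside the Hudson "Execute shell" build step:
        chmod +x external-script.sh
        sh -x ./external-script.sh    # -x prints each command, with variables expanded, before running it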

    Read the article

  • getting hasMany records of a HABTM relationship

    - by charliefarley321
    I have these tables: categories HABTM sculptures, and sculptures hasMany images. CategoriesController#find() produces an array like so:

        array(
            'Category' => array(
                'id' => '3',
                'name' => 'Modern',
            ),
            'Sculpture' => array(
                (int) 0 => array(
                    'id' => '25',
                    'name' => 'Ami',
                    'material' => 'Bronze',
                    'CategoriesSculpture' => array(
                        'id' => '18',
                        'category_id' => '3',
                        'sculpture_id' => '25'
                    )
                ),
                (int) 1 => array(
                    'id' => '26',
                    'name' => 'Charis',
                    'material' => 'Bronze',
                    'CategoriesSculpture' => array(
                        'id' => '19',
                        'category_id' => '3',
                        'sculpture_id' => '26'
                    )
                )
            )
        )

    I'd like to be able to get the images related to each sculpture in this array as well, if that is possible.

    Read the article

  • Java Play Mustache NPE Error

    - by zanedev
    We are getting a mustache play error in production (amazon linux EC2 AMI) but not in development (MACs) and we have tried upgrading the jvm, using the jdk instead, and changing from a tomcat deploy model to match our development environments as much as possible but nothing is working. Please any help would be greatly appreciated. We have lots of shared code in java and javascript using mustache and it would be a big deal to rewrite everything if we had to ditch mustache on the java side. 20:48:52,403 ERROR ~ @6al2dd0po Internal Server Error (500) for request GET /mystuff/people Execution exception (In {module:mustache-0.2}/app/play/modules/mustache/MustacheTags.java around line 32) NullPointerException occured : null play.exceptions.JavaExecutionException at play.templates.BaseTemplate.throwException(BaseTemplate.java:90) at play.templates.GroovyTemplate.internalRender(GroovyTemplate.java:257) at play.templates.Template.render(Template.java:26) at play.templates.GroovyTemplate.render(GroovyTemplate.java:187) at play.mvc.results.RenderTemplate.<init>(RenderTemplate.java:24) at play.mvc.Controller.renderTemplate(Controller.java:660) at play.mvc.Controller.renderTemplate(Controller.java:640) at play.mvc.Controller.render(Controller.java:695) at controllers.MyStuff.people(MyStuff.java:183) at play.mvc.ActionInvoker.invokeWithContinuation(ActionInvoker.java:548) at play.mvc.ActionInvoker.invoke(ActionInvoker.java:502) at play.mvc.ActionInvoker.invokeControllerMethod(ActionInvoker.java:478) at play.mvc.ActionInvoker.invokeControllerMethod(ActionInvoker.java:473) at play.mvc.ActionInvoker.invoke(ActionInvoker.java:161) at Invocation.HTTP Request(Play!) Caused by: java.lang.NullPointerException at play.modules.mustache.MustacheTags._template(MustacheTags.java:32) at play.modules.mustache.MustacheTags$_template.call(Unknown Source) at /app/views/User/people.html.(line:22) at play.templates.GroovyTemplate.internalRender(GroovyTemplate.java:232) ... 13 more

    Read the article

  • Should I use a regular server instead of AWS?

    - by Jon Ramvi
    Reading about and using Amazon Web Services, I'm not really able to grasp how to use it correctly. Sorry about the long question: I have an EC2 instance which mostly does the work of a web server (Apache for file sharing and Tomcat with Play Framework for the web app). As it's a web server, the instance is running 24/7. It just came to my attention that the data on the EC2 instance is non-persistent, which means I lose my database and files if it's stopped. But I guess it also means my server settings and installed applications are lost, as they are just files in the same way as the other data. This means that I will either have to rewrite the whole app to use Amazon CloudDB, or write some code which stores the db on S3 and make my own AMI with the correct applications installed and configured. Or can this be quick-fixed by using EBS somehow? My questions are: 1. is my understanding of AWS correct? and 2. is it worth it? It could be a possibility to just set up a regular dedicated server where everything is persistent, as you would expect. I would love to have the scalability of AWS though.
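
    The "quick fix by using EBS" mentioned above is indeed the standard answer: attach a persistent EBS volume and keep the database (and anything else that must survive a stop) on it. A rough sketch with the ec2-api-tools of that era; the volume ID, instance ID, device name and mount point are placeholders:

        ec2-create-volume -s 20 -z us-east-1a              # a 20 GB volume in the instance's availability zone
        ec2-attach-volume vol-12345678 -i i-87654321 -d /dev/sdf
        sudo mkfs.ext3 /dev/sdf                            # first time only
        sudo mount /dev/sdf /var/lib/mysql                 # keep the database files on the persistent volume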

    Read the article

  • Retrieving license type (linux/windows/windows+sqlserver) for an Amazon EC2 instance via the API?

    - by Geir
    I need to calculate the hourly running costs for my Amazon EC2 instances. This varies even between instances with the same hardware configs (instance types), because I use different Amazon images (AMIs): some plain Windows Server and some Windows Server with SQL Server (both of them have additional costs compared with plain Linux instances). The EC2 Java API has a describeInstances() method which returns Instance objects with metadata such as instance id, instance type (m1.small/large...), state (running, stopped...), public IP, etc. This Instance object also has a .getLicense().getPool(), which according to the Java API should return "The license pool from which this license was used (ex: 'windows')." I thought this is where it might also give 'windows+sqlserver' or something to that effect. The getLicense() method does, however, return null. I've navigated around the EC2 web console without being able to find this information, but I'm hoping that it is possible - otherwise it would mean that you cannot identify the true hourly cost of a particular instance unless you know which AMI was used to create it in the first place (plain Windows Server or Windows Server with SQL Server). Anyone? Thanks :) /Geir
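
    One workaround, sketched with the command-line tools of the same API generation: every instance also reports the ID of the image it was launched from, and for Amazon's own Windows AMIs the image name usually spells out whether SQL Server is bundled. The IDs below are placeholders, and this only helps while the AMI still exists and is named accurately:

        ec2-describe-instances i-12345678     # the INSTANCE line shows which ami-xxxxxxxx it was launched from
        ec2-describe-images ami-12345678      # the IMAGE line carries the AMI name, e.g. ...SQL_2008_R2_Express...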

    Read the article

  • Managing server instance identity on EC2

    - by kikibobo
    I recently brought up a cluster on EC2, and I felt like I had to invent a lot of things. I'm wondering what kinds of tools, patterns, ideas are out there for how to deal with this. Some context: I had 3 different kinds of servers, so first I created AMIs for each of them. The first AMI had zookeeper, so step one in deploying the system was to get the zookeeper server running. My script then made a note of the mapping between EC2's completely arbitrary and unpredictable hostnames, and the zookeeper server. Then as I brought up new instances of the other 2 kinds of servers, the first thing I would do is ssh to the new server, and add the zookeeper server to its /etc/hosts file. Then as the server software on each instance starts up, it can find zookeeper. Obviously this is a problem that lots of people have to solve, and it probably works a little bit differently in different clouds. Are there products that address this concept? I was pretty surprised that EC2 didn't provide some kind of way to tie your own name to its name. Thanks for any ideas.
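
    One lightweight pattern from that era is to stop editing /etc/hosts by hand and instead pass the zookeeper address to each new instance as user-data at launch time, then read it back from the metadata service on boot. A sketch with the classic command-line tools; the AMI ID, key name and hostname are placeholders:

        ec2-run-instances ami-12345678 -k my-key \
            -d "zookeeper=ec2-203-0-113-10.compute-1.amazonaws.com"
        # on the new instance, e.g. from an init script:
        curl -s http://169.254.169.254/latest/user-data    # -> zookeeper=ec2-203-0-113-10...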

    Read the article

  • Nginx 1.6.1: Installed from source but not running correctly

    - by Maca
    I just installed Nginx 1.6.1 from source but it didn't seem to installed correctly. Nginx is running if I service nginx status but when I do nginx -v it outputs command not found. Regular HTML page shows fine and there is no error in the error logs. I am on AWS Ec2 linux AMI. Here is my /etc/init.d/nginx script #!/bin/sh # # nginx - this script starts and stops the nginx daemon # # chkconfig: - 85 15 # description: Nginx is an HTTP(S) server, HTTP(S) reverse \ # proxy and IMAP/POP3 proxy server # processname: nginx # config: /etc/nginx/nginx.conf # config: /etc/sysconfig/nginx # pidfile: /usr/local/nginx/logs/nginx.pid # Source function library. . /etc/rc.d/init.d/functions # Source networking configuration. . /etc/sysconfig/network # Check that networking is up. [ "$NETWORKING" = "no" ] && exit 0 nginx="/usr/local/nginx/sbin/nginx" prog=$(basename $nginx) NGINX_CONF_FILE="/usr/local/nginx/conf/nginx.conf" [ -f /etc/sysconfig/nginx ] && . /etc/sysconfig/nginx lockfile=/usr/local/nginx/logs/nginx.lock make_dirs() { # make required directories user=`nginx -V 2>&1 | grep "configure arguments:" | sed 's/[^*]*--user=\([^ ]*\).*/\1/g' -` options=`$nginx -V 2>&1 | grep 'configure arguments:'` for opt in $options; do if [ `echo $opt | grep '.*-temp-path'` ]; then value=`echo $opt | cut -d "=" -f 2` if [ ! -d "$value" ]; then # echo "creating" $value mkdir -p $value && chown -R $user $value fi fi done } start() { [ -x $nginx ] || exit 5 [ -f $NGINX_CONF_FILE ] || exit 6 make_dirs echo -n $"Starting $prog: " daemon $nginx -c $NGINX_CONF_FILE retval=$? echo [ $retval -eq 0 ] && touch $lockfile return $retval } stop() { echo -n $"Stopping $prog: " killproc $prog -QUIT retval=$? echo [ $retval -eq 0 ] && rm -f $lockfile return $retval } restart() { configtest || return $? stop sleep 1 start } reload() { configtest || return $? echo -n $"Reloading $prog: " killproc $nginx -HUP RETVAL=$? echo } force_reload() { restart } configtest() { $nginx -t -c $NGINX_CONF_FILE } rh_status() { status $prog } rh_status_q() { rh_status >/dev/null 2>&1 } case "$1" in start) rh_status_q && exit 0 $1 ;; stop) rh_status_q || exit 0 $1 ;; restart|configtest) $1 ;; reload) rh_status_q || exit 7 $1 ;; force-reload) force_reload ;; status) rh_status ;; condrestart|try-restart) rh_status_q || exit 0 ;; *) echo $"Usage: $0 {start|stop|status|restart|condrestart|try-restart|reload|force-reload|configtest}" exit 2 esac
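
    The init script above starts nginx through its full path (/usr/local/nginx/sbin/nginx), which is why the service runs even though that directory is not on root's PATH; the same thing explains the "command not found" from nginx -v. A small sketch of the usual fix:

        sudo ln -s /usr/local/nginx/sbin/nginx /usr/sbin/nginx    # or add /usr/local/nginx/sbin to PATH
        nginx -v                                                  # should now print the version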

    Read the article

  • Debian keyring error: "No keyring installed"

    - by donatello
    I have a Debian Squeeze EC2 AMI. On booting up an instance with it and trying to install packages with apt-get I get errors saying there is no keyring installed. Here is the error with apt-get update: root@ip:~# apt-get update Get:1 http://ftp.us.debian.org squeeze Release.gpg [1672 B] Ign http://ftp.us.debian.org/debian/ squeeze/contrib Translation-en Ign http://ftp.us.debian.org/debian/ squeeze/main Translation-en Ign http://ftp.us.debian.org/debian/ squeeze/non-free Translation-en Get:2 http://security.debian.org squeeze/updates Release.gpg [836 B] Ign http://security.debian.org/ squeeze/updates/contrib Translation-en Ign http://security.debian.org/ squeeze/updates/main Translation-en Hit http://ftp.us.debian.org squeeze Release Ign http://ftp.us.debian.org squeeze Release Ign http://security.debian.org/ squeeze/updates/non-free Translation-en Ign http://ftp.us.debian.org squeeze/main Sources/DiffIndex Get:3 http://security.debian.org squeeze/updates Release [86.9 kB] Ign http://security.debian.org squeeze/updates Release Ign http://ftp.us.debian.org squeeze/contrib Sources/DiffIndex Ign http://ftp.us.debian.org squeeze/non-free Sources/DiffIndex Ign http://ftp.us.debian.org squeeze/main amd64 Packages/DiffIndex Ign http://ftp.us.debian.org squeeze/contrib amd64 Packages/DiffIndex Ign http://ftp.us.debian.org squeeze/non-free amd64 Packages/DiffIndex Ign http://security.debian.org squeeze/updates/main Sources/DiffIndex Hit http://ftp.us.debian.org squeeze/main Sources Hit http://ftp.us.debian.org squeeze/contrib Sources Hit http://ftp.us.debian.org squeeze/non-free Sources Hit http://ftp.us.debian.org squeeze/main amd64 Packages Hit http://ftp.us.debian.org squeeze/contrib amd64 Packages Ign http://security.debian.org squeeze/updates/contrib Sources/DiffIndex Ign http://security.debian.org squeeze/updates/non-free Sources/DiffIndex Ign http://security.debian.org squeeze/updates/main amd64 Packages/DiffIndex Ign http://security.debian.org squeeze/updates/contrib amd64 Packages/DiffIndex Ign http://security.debian.org squeeze/updates/non-free amd64 Packages/DiffIndex Hit http://ftp.us.debian.org squeeze/non-free amd64 Packages Get:4 http://backports.debian.org squeeze-backports Release.gpg [836 B] Ign http://backports.debian.org/debian-backports/ squeeze-backports/main Translation-en Hit http://security.debian.org squeeze/updates/main Sources Hit http://security.debian.org squeeze/updates/contrib Sources Hit http://security.debian.org squeeze/updates/non-free Sources Hit http://security.debian.org squeeze/updates/main amd64 Packages Hit http://security.debian.org squeeze/updates/contrib amd64 Packages Hit http://security.debian.org squeeze/updates/non-free amd64 Packages Get:5 http://backports.debian.org squeeze-backports Release [77.6 kB] Ign http://backports.debian.org squeeze-backports Release Hit http://backports.debian.org squeeze-backports/main amd64 Packages/DiffIndex Hit http://backports.debian.org squeeze-backports/main amd64 Packages Fetched 3346 B in 0s (5298 B/s) Reading package lists... Done W: GPG error: http://ftp.us.debian.org squeeze Release: No keyring installed in /etc/apt/trusted.gpg.d/. W: GPG error: http://security.debian.org squeeze/updates Release: No keyring installed in /etc/apt/trusted.gpg.d/. W: GPG error: http://backports.debian.org squeeze-backports Release: No keyring installed in /etc/apt/trusted.gpg.d/. Googling around didn't really help me fix this problem. 
    I tried installing the packages "debian-keyring" and "debian-archive-keyring" but the error does not go away. I'd like to avoid installing untrusted packages. Any help is appreciated! Why does this error happen and where can I learn more?
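
    A sketch of the usual recovery steps; the --reinstall matters because the keyring package can be marked as installed while /etc/apt/trusted.gpg.d/ is still empty (whether that is what happened here is an assumption worth checking first):

        ls /etc/apt/trusted.gpg.d/                               # is the directory actually empty?
        sudo apt-get install --reinstall debian-archive-keyring
        sudo apt-key update
        sudo apt-get update                                      # the GPG warnings should now be gone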

    Read the article

  • How to move a ruby on rails application to a new server

    - by ManiacZX
    I have a rails app on an old Ubuntu server I need to move onto a new machine. I haven't worked with ruby on rails so I don't really know anything about the structure of the app. I want to load this onto an Ubuntu 8.04 AMI on Amazon EC2 and am looking for any information regarding the migration process such as: Do I copy over the entire folder defined as the application root in the mongrel config (for ex: /u/apps/myapp/current) or just certain folders? Am I looking for trouble if I go with the latest versions of ruby and the various gems? Any general gotchas to look out for in the process. Current server information: root@webnode001:/# cat /proc/version Linux version 2.6.15-27-server (buildd@terranova) (gcc version 4.0.3 (Ubuntu 4.0.3-1ubuntu5)) #1 SMP Fri Dec 8 18:43:54 UTC 2006 root@webnode001:/# rails -v Rails 1.2.3 root@webnode001:/# mongrel_rails cluster::configure --version Version 1.0.1 root@webnode001:/# gem -v 0.9.0 root@webnode001:/# gem list -l *** LOCAL GEMS *** actionmailer (1.3.3, 1.2.5) Service layer for easy email delivery and testing. actionpack (1.13.3, 1.12.5) Web-flow and rendering framework putting the VC in MVC. actionwebservice (1.2.3, 1.1.6) Web service support for Action Pack. activerecord (1.15.3, 1.15.2, 1.14.4) Implements the ActiveRecord pattern for ORM. activesupport (1.4.2, 1.4.1, 1.3.1) Support and utility classes used by the Rails framework. cgi_multipart_eof_fix (2.1) Fix an exploitable bug in CGI multipart parsing which affects Ruby <= 1.8.5 when multipart boundary attribute contains a non-halting regular expression string. daemons (1.0.7, 1.0.5, 1.0.4, 1.0.2) A toolkit to create and control daemons in different ways eventmachine (0.7.2, 0.7.0) Ruby/EventMachine socket engine library fastercsv (1.2.0, 1.1.0) FasterCSV is CSV, but faster, smaller, and cleaner. fastthread (1.0) Optimized replacement for thread.rb primitives ferret (0.11.4) Ruby indexing library. gem_plugin (0.2.2, 0.2.1) A plugin system based only on rubygems that uses dependencies only mongrel (1.0.1, 0.3.13.4) A small fast HTTP library and server that runs Rails, Camping, Nitro and Iowa apps. mongrel_cluster (0.2.1) Mongrel plugin that provides commands and Capistrano tasks for managing multiple Mongrel processes. mysql (2.7) MySQL/Ruby provides the same functions for Ruby programs that the MySQL C API provides for C programs. piston (1.3.3) Piston is a utility that enables merge tracking of remote repositories. rails (1.2.3, 1.1.6) Web-application framework with template engine, control-flow layer, and ORM. rake (0.7.3, 0.7.1) Ruby based make-like utility. sources (0.0.1) This package provides download sources for remote gem installation swiftiply (0.5.1) A fast clustering proxy for web applications.
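
    A rough starting point, sketched from the information above: copy the whole application root across and pin the gem versions to match the old box, since a Rails 1.2.3 app generally will not run against current Rails or RubyGems. The hostname and paths are the ones from the question:

        rsync -az root@webnode001:/u/apps/myapp/ /u/apps/myapp/
        gem install rails -v 1.2.3
        gem install mongrel -v 1.0.1
        gem install mongrel_cluster -v 0.2.1
        gem install mysql -v 2.7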

    Read the article

  • What are the pros and cons of AWS Elastic Beanstalk compared with other deployment strategies?

    - by James van Dyke
    I'm pretty new to the whole Netflix OSS stack and deployments in general. As a background for my current level of knowledge ops-wise, my main role is as a front-end application engineer. However, I enjoy the operations side of things, so I'm attempting to setup a new deployment strategy and the tooling for a new project. Our Goals Super easy deploys (we want to push a button to update production) Automated deploys to test environments (using Jenkins) Ease of maintenance (we have an app to write, don't want to spend our time fiddling with production issues) Ability to handle a service oriented architecture (many small apps, various languages and data stores) Enough flexibility to ensure we won't have to change strategies any time soon (we're already trying to get away from RightScale) We're OK with a little more initial setup time if doing so will save us some headaches in the future. So, along these lines, I've been listening to podcasts, watching Ops talks, and reading tons of blog posts and based on our goals and what I've taken to be some forming best practices, we've started forming a plan using Asgard, rolling our package into a jar and rolling that into an AMI. We had this all planned out and like the advantages the process versus using a Chef server and converging instances on the fly (we felt this was error prone given our limited timeline and lack of understanding around a Chef server workflow). However, a coworker did a little looking around on his own and felt like Elastic Beanstalk met our needs. I've looked into it and spun up a test environment with a WAR file and an attached RDS database. Things seem to work and I believe that we can automate deploys to a testing environment using Jenkins via the AWS API. Seems simple enough... perhaps too simple. What I'm wondering is, what's the catch? If Elastic Beanstalk is so simple and effective, why isn't it talked about more? I'm having a hard time finding enough objective opinions and facts about the two different deployment strategies, so I thought I'd ask around. Do you use Elastic Beanstalk? If so, why and what factors lead to that decision? What do you like and dislike? If you don't use Elastic Beanstalk but considered it, what do you use and why didn't you use Elastic Beanstalk? What are the advantages and disadvantages to a Elastic Beanstalk based deployment strategy for an SOA? That is, will Elastic Beanstalk work well with many small applications that rely on each other to work?

    Read the article

  • .htaccess ignored, SPECIFIC to EC2 - not the usual suspects

    - by tedneigerux
    I run 8-10 EC2 based web servers, so my experience is many hours, but is limited to CentOS; specifically Amazon's distribution. I'm installing Apache using yum, so therefore getting Amazon's default compilation of Apache. I want to implement canonical redirects from non-www (bare/root) domain to www.domain.com for SEO using mod_rewrite BUT MY .htaccess FILE IS CONSISTENTLY IGNORED. My troubleshooting steps (outlined below) lead me to believe it's something specific to Amazon's build of Apache. TEST CASE Launch a EC2 Instance, e.g. Amazon Linux AMI 2013.03.1 SSH to the Server Run the commands: $ sudo yum install httpd $ sudo apachectl start $ sudo vi /etc/httpd/conf/httpd.conf $ sudo apachectl restart $ sudo vi /var/www/html/.htaccess In httpd.conf I changed the following, in the DOCROOT section / scope: AllowOverride All In .htaccess, added: (EDIT, I added RewriteEngine On later) RewriteCond %{HTTP_HOST} ^domain\.com$ [NC] RewriteRule ^/(.*) http://www.domain.com/$1 [R=301,L] Permissions on .htaccess are correct, AFAI can tell: $ ls -al /var/www/html/.htaccess -rwxrwxr-x 1 git apache 142 Jun 18 22:58 /var/www/html/.htaccess Other info: $ httpd -v Server version: Apache/2.2.24 (Unix) Server built: May 20 2013 21:12:45 $ httpd -M Loaded Modules: core_module (static) ... rewrite_module (shared) ... version_module (shared) Syntax OK EXPECTED BEHAVIOR $ curl -I domain.com HTTP/1.1 301 Moved Permanently Date: Wed, 19 Jun 2013 12:36:22 GMT Server: Apache/2.2.24 (Amazon) Location: http://www.domain.com/ Connection: close Content-Type: text/html; charset=UTF-8 ACTUAL BEHAVIOR $ curl -I domain.com HTTP/1.1 200 OK Date: Wed, 19 Jun 2013 12:34:10 GMT Server: Apache/2.2.24 (Amazon) Connection: close Content-Type: text/html; charset=UTF-8 TROUBLESHOOTING STEPS In .htaccess, added: BLAH BLAH BLAH ERROR RewriteCond %{HTTP_HOST} ^domain\.com$ [NC] RewriteRule ^/(.*) http://www.domain.com/$1 [R=301,L] My server threw an error 500, so I knew the .htaccess file was processed. As expected, it created an Error log entry: [Wed Jun 19 02:24:19 2013] [alert] [client XXX.XXX.XXX.XXX] /var/www/html/.htaccess: Invalid command 'BLAH BLAH BLAH ERROR', perhaps misspelled or defined by a module not included in the server configuration Since I have root access on the server, I then tried moving my rewrite rule directly to the httpd.conf file. THIS WORKED. This tells us several important things are working. $ curl -I domain.com HTTP/1.1 301 Moved Permanently Date: Wed, 19 Jun 2013 12:36:22 GMT Server: Apache/2.2.24 (Amazon) Location: http://www.domain.com/ Connection: close Content-Type: text/html; charset=UTF-8 HOWEVER, it is bothering me that it didn't work in the .htaccess file. And I have other use cases where I need it to work in .htaccess (e.g. an EC2 instance with named virtual hosts). Thank you in advance for your help.
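
    Since the same rules work when placed in httpd.conf, the usual suspects are the AllowOverride scope and a later configuration file re-tightening it. A couple of checks that often pinpoint an ignored .htaccess on this layout (paths are the Amazon Linux defaults used above):

        # confirm AllowOverride All sits in the <Directory "/var/www/html"> block, not only in <Directory "/">
        grep -n -B2 -A6 'AllowOverride' /etc/httpd/conf/httpd.conf
        # any file in conf.d can silently override it for the same path
        grep -Rn 'AllowOverride' /etc/httpd/conf.d/
        apachectl -t && sudo apachectl restart    # make sure the running config is the one you edited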

    Read the article

  • Running a Mongo Replica Set on Azure VM Roles

    - by Elton Stoneman
    Originally posted on: http://geekswithblogs.net/EltonStoneman/archive/2013/10/15/running-a-mongo-replica-set-on-azure-vm-roles.aspxSetting up a MongoDB Replica Set with a bunch of Azure VMs is straightforward stuff. Here’s a step-by-step which gets you from 0 to fully-redundant 3-node document database in about 30 minutes (most of which will be spent waiting for VMs to fire up). First, create yourself 3 VM roles, which is the minimum number of nodes you need for high availability. You can use any OS that Mongo supports. This guide uses Windows but the only difference will be the mechanism for starting the Mongo service when the VM starts (Windows Service, daemon etc.) While the VMs are provisioning, download and install Mongo locally, so you can set up the replica set with the Mongo shell. We’ll create our replica set from scratch, doing one machine at a time (if you have a single node you want to upgrade to a replica set, it’s the same from step 3 onwards): 1. Setup Mongo Log into the first node, download mongo and unzip it to C:. Rename the folder to remove the version – so you have c:\MongoDB\bin etc. – and create a new folder for the logs, c:\MongoDB\logs. 2. Setup your data disk When you initialize a node in a replica set, Mongo pre-allocates a whole chunk of storage to use for data replication. It will use up to 5% of your data disk, so if you use a Windows VM image with a defsault 120Gb disk and host your data on C:, then Mongo will allocate 6Gb for replication. And that takes a while. Instead you can create yourself a new partition by shrinking down the C: drive in Computer Management, by say 10Gb, and then creating a new logical disk for your data from that spare 10Gb, which will be allocated as E:. Create a new folder, e:\data. 3. Start Mongo When that’s done, start a command line, point to the mongo binaries folder, install Mongo as a Windows Service, running in replica set mode, and start the service: cd c:\mongodb\bin mongod -logpath c:\mongodb\logs\mongod.log -dbpath e:\data -replSet TheReplicaSet –install net start mongodb 4. Open the ports Mongo uses port 27017 by default, so you need to allow access in the machine and in Azure. In the VM, open Windows Firewall and create a new inbound rule to allow access via port 27017. Then in the Azure Management Console for the VM role, under the Configure tab add a new rule, again to allow port 27017. 5. Initialise the replica set Start up your local mongo shell, connecting to your Azure VM, and initiate the replica set: c:\mongodb\bin\mongo sc-xyz-db1.cloudapp.net rs.initiate() This is the bit where the new node (at this point the only node) allocates its replication files, so if your data disk is large, this can take a long time (if you’re using the default C: drive with 120Gb, it may take so long that rs.initiate() never responds. If you’re sat waiting more than 20 minutes, start another instance of the mongo shell pointing to the same machine to check on it). Run rs.conf() and you should see one node configured. 6. Fix the host name for the primary – *don’t miss this one* For the first node in the replica set, Mongo on Windows doesn’t populate the full machine name. Run rs.conf() and the name of the primary is sc-xyz-db1, which isn’t accessible to the outside world. 
The replica set configuration needs the full DNS name of every node, so you need to manually rename it in your shell, which you can do like this: cfg = rs.conf() cfg.members[0].host = ‘sc-xyz-db1.cloudapp.net:27017’ rs.reconfig(cfg) When that returns, rs.conf() will have your full DNS name for the primary, and the other nodes will be able to connect. At this point you have a working database, so you can start adding documents, but there’s no replication yet. 7. Add more nodes For the next two VMs, follow steps 1 through to 4, which will give you a working Mongo database on each node, which you can add to the replica set from the shell with rs.add(), using the full DNS name of the new node and the port you’re using: rs.add(‘sc-xyz-db2.cloudapp.net:27017’) Run rs.status() and you’ll see your new node in STARTUP2 state, which means its initializing and replicating from the PRIMARY. Repeat for your third node: rs.add(‘sc-xyz-db3.cloudapp.net:27017’) When all nodes are finished initializing, you will have a PRIMARY and two SECONDARY nodes showing in rs.status(). Now you have high availability, so you can happily stop db1, and one of the other nodes will become the PRIMARY with no loss of data or service. Note – the process for AWS EC2 is exactly the same, but with one important difference. On the Azure Windows Server 2012 base image, the MongoDB release for 64-bit 2008R2+ works fine, but on the base 2012 AMI that release keeps failing with a UAC permission error. The standard 64-bit release is fine, but it lacks some optimizations that are in the 2008R2+ version.
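
    Step 4 above uses the Windows Firewall UI; the same inbound rule can be added from an elevated command prompt, which is handy when scripting the three nodes. A sketch (the rule name is arbitrary):

        netsh advfirewall firewall add rule name="MongoDB 27017" dir=in action=allow protocol=TCP localport=27017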

    Read the article

  • flickr.photos.search strange behavior with nested xml response

    - by JohnIdol
    I am querying flickr with the following request: view-source:http://www.flickr.com/services/rest/?method=flickr.photos.search&api_key=MY_KEY&text=cork&per_page=12&&format=rest if I put the above url in the browser I get the following as I would expect: <rsp stat="ok"> <photos page="1" pages="22661" perpage="12" total="271924"> <photo id="4587565363" owner="46277999@N05" secret="717118838d" server="4029" farm="5" title="06052010(001)" ispublic="1" isfriend="0" isfamily="0" /> <photo id="4587466773" owner="25901874@N06" secret="32c5a1a57f" server="4002" farm="5" title="black wine cork 2 BW" ispublic="1" isfriend="0" isfamily="0" /> <photo id="4588084730" owner="25901874@N06" secret="b27eef5635" server="4032" farm="5" title="black wine cork 2" ispublic="1" isfriend="0" isfamily="0" /> <photo id="4587457789" owner="43115275@N00" secret="ae0daa0ab6" server="4034" farm="5" title="Matthew" ispublic="1" isfriend="0" isfamily="0" /> <photo id="4587443837" owner="49967920@N07" secret="2b4c1a58de" server="4066" farm="5" title="Two car garage" ispublic="1" isfriend="0" isfamily="0" /> <photo id="4588050906" owner="45827769@N06" secret="72de138f7e" server="4067" farm="5" title="Dabie Mountains, Central China" ispublic="1" isfriend="0" isfamily="0" /> <photo id="4587979300" owner="36531359@N00" secret="e5f8b30734" server="3299" farm="4" title="stroll" ispublic="1" isfriend="0" isfamily="0" /> <photo id="4587304769" owner="7904649@N02" secret="995062f550" server="4034" farm="5" title="IMG_4325" ispublic="1" isfriend="0" isfamily="0" /> <photo id="4587306827" owner="7904649@N02" secret="b92d7ff916" server="4050" farm="5" title="IMG_4329" ispublic="1" isfriend="0" isfamily="0" /> <photo id="4587311657" owner="7904649@N02" secret="bb1b34ccf8" server="4053" farm="5" title="IMG_4336" ispublic="1" isfriend="0" isfamily="0" /> <photo id="4587933110" owner="7904649@N02" secret="f733b1d482" server="3300" farm="4" title="IMG_4334" ispublic="1" isfriend="0" isfamily="0" /> <photo id="4587930650" owner="7904649@N02" secret="9bc202b3ed" server="4018" farm="5" title="IMG_4331" ispublic="1" isfriend="0" isfamily="0" /> </photos> </rsp> Now I've got something very simple PHP to query the exact same url, called from javascript through AJAX - it all works as I can see putting a breakpoint in my javascript function that handles the response: function HandleResponse(response) { document.getElementById('ResponseDiv').innerHTML = response; } Looking at response value in the snippet above I see what I expect (xml formed as above) - then the strangest thing happens: when I inspect the DOM instead of having photo elements all as children of photos they're all nested within each other. What amI missing - How is this possible in light of the fact that the response string is definitely formed as I would expect it to be?

    Read the article

  • Can't launch glassfish on ec2 - can't open port

    - by orange80
    I'm trying to start glassfish on an EBS-based AMI of Ubuntu 10.04 64-bit. I have used glassfish on non-ec2 servers with no problems, but on ec2 I get this message: $ sudo -u glassfish bin/asadmin start-domain domain1 There is a process already using the admin port 4848 -- it probably is another instance of a GlassFish server. Command start-domain failed. I know that ec2 has requires that firewall rules be modified using ec2-authorize to let outside traffic thru the firewall, as I had to do to make ssh work. This still doesn't explain the port error when all I'm trying to do is start glassfish so I can try $ wget localhost:8080and make sure it's working. This is very frustrating and I'd really appreciate any help. Thanks. FINAL UPDATE: Sorry if you came here looking for answers. I never figured out what was causing the problem. I created another fresh instance, installed the same stuff, and Glassfish worked perfectly. Something obviously got boned during installation, but I have no idea what. I guess it will remain a mystery. UPDATE: Here's what I get from netstat: # netstat -nuptl Active Internet connections (only servers) Proto Recv-Q Send-Q Local Address Foreign Address State PID/Program name tcp 0 0 0.0.0.0:22 0.0.0.0:* LISTEN 462/sshd tcp6 0 0 :::22 :::* LISTEN 462/sshd udp 0 0 0.0.0.0:5353 0.0.0.0:* 483/avahi-daemon: r udp 0 0 0.0.0.0:1194 0.0.0.0:* 589/openvpn udp 0 0 0.0.0.0:37940 0.0.0.0:* 483/avahi-daemon: r udp 0 0 0.0.0.0:68 0.0.0.0:* 377/dhclient3 UPDATE: One more thing... I know that the "net.ipv6.bindv6only" kernel option can cause problems with java networking, so I did set this: # sysctl -w net.ipv6.bindv6only=0 UPDATE: I also verified that it has nothing at all to do with the port number (4848). As you can see here, when I changed the admin-listener port in domain.xml to 4949, I get a similar message: # sudo -u glassfish bin/asadmin start-domain domain1 There is a process already using the admin port 4949 -- it probably is another instance of a GlassFish server. Command start-domain failed. UPDATE: Here are the contents of /etc/hosts: 127.0.0.1 localhost # The following lines are desirable for IPv6 capable hosts ::1 ip6-localhost ip6-loopback fe00::0 ip6-localnet ff00::0 ip6-mcastprefix ff02::1 ip6-allnodes ff02::2 ip6-allrouters ff02::3 ip6-allhosts I should mention that I have another Ubuntu Lucid 10.04 64-bit slice that is NOT hosted on ec2, and set it up the exact same way with no problems whatsoever. 
Also server.log doesn't offer much insight either: # cat ./server.log Nov 20, 2010 8:46:49 AM com.sun.enterprise.admin.launcher.GFLauncherLogger info INFO: JVM invocation command line: /usr/lib/jvm/java-6-sun-1.6.0.22/bin/java -cp /opt/glassfishv3/glassfish/modules/glassfish.jar -XX:+UnlockDiagnosticVMOptions -XX:MaxPermSize=192m -XX:NewRatio=2 -XX:+LogVMOutput -XX:LogFile=/opt/glassfishv3/glassfish/domains/domain1/logs/jvm.log -Xmx512m -client -javaagent:/opt/glassfishv3/glassfish/lib/monitor/btrace-agent.jar=unsafe=true,noServer=true -Dosgi.shell.telnet.maxconn=1 -Djdbc.drivers=org.apache.derby.jdbc.ClientDriver -Dfelix.fileinstall.dir=/opt/glassfishv3/glassfish/modules/autostart/ -Djavax.net.ssl.keyStore=/opt/glassfishv3/glassfish/domains/domain1/config/keystore.jks -Dosgi.shell.telnet.port=6666 -Djava.security.policy=/opt/glassfishv3/glassfish/domains/domain1/config/server.policy -Dfelix.fileinstall.poll=5000 -Dcom.sun.aas.instanceRoot=/opt/glassfishv3/glassfish/domains/domain1 -Dcom.sun.enterprise.config.config_environment_factory_class=com.sun.enterprise.config.serverbeans.AppserverConfigEnvironmentFactory -Dosgi.shell.telnet.ip=127.0.0.1 -Djava.endorsed.dirs=/opt/glassfishv3/glassfish/modules/endorsed:/opt/glassfishv3/glassfish/lib/endorsed -Dcom.sun.aas.installRoot=/opt/glassfishv3/glassfish -Djava.ext.dirs=/usr/lib/jvm/java-6-sun-1.6.0.22/lib/ext:/usr/lib/jvm/java-6-sun-1.6.0.22/jre/lib/ext:/opt/glassfishv3/glassfish/domains/domain1/lib/ext -Dfelix.fileinstall.bundles.new.start=true -Djavax.net.ssl.trustStore=/opt/glassfishv3/glassfish/domains/domain1/config/cacerts.jks -Dcom.sun.enterprise.security.httpsOutboundKeyAlias=s1as -Djava.security.auth.login.config=/opt/glassfishv3/glassfish/domains/domain1/config/login.conf -DANTLR_USE_DIRECT_CLASS_LOADING=true -Dfelix.fileinstall.debug=1 -Dorg.glassfish.web.rfc2109_cookie_names_enforced=false -Djava.library.path=/opt/glassfishv3/glassfish/lib:/usr/lib/jvm/java-6-sun-1.6.0.22/jre/lib/amd64/server:/usr/lib/jvm/java-6-sun-1.6.0.22/jre/lib/amd64:/usr/lib/jvm/java-6-sun-1.6.0.22/lib/amd64:/usr/java/packages/lib/amd64:/usr/lib64:/lib64:/lib:/usr/lib com.sun.enterprise.glassfish.bootstrap.ASMain -domainname domain1 -asadmin-args start-domain,,,domain1 -instancename server -verbose false -debug false -asadmin-classpath /opt/glassfishv3/glassfish/modules/admin-cli.jar -asadmin-classname com.sun.enterprise.admin.cli.AsadminMain -upgrade false -domaindir /opt/glassfishv3/glassfish/domains/domain1 -read-stdin true
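
    Since the netstat output above shows nothing actually listening on 4848 (or 4949), the "port in use" message is likely a misdiagnosis by asadmin itself. One cause that gets reported on EC2 is the instance hostname not resolving, which makes asadmin's own connectivity check fail; a few checks worth running, as a sketch:

        hostname                            # e.g. ip-10-0-0-12 or domU-12-31-...
        getent hosts "$(hostname)"          # should resolve; if it prints nothing, add an /etc/hosts entry
        echo "127.0.0.1 $(hostname)" | sudo tee -a /etc/hosts
        sudo netstat -ntlp | grep 4848      # confirm nothing is really bound to the admin port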

    Read the article

< Previous Page | 7 8 9 10 11 12  | Next Page >