Search Results

Search found 2845 results on 114 pages for 'amazon dynamodb'.

Page 58/114

  • Google App Engine or Amazon EC2 for RESTful services and direct access to the datastore

    - by imran
    I'm thinking of building a RESTful app on either App Engine or EC2, developed in Java. I'm interested in opinions and experience of using the two options for this. The primary purpose is to create web services to write and retrieve data from a mobile device, basically creating an API for the service I want to build. It seems to me it would be quicker and cheaper in the beginning to go with Google App Engine, using either Restlet or Grails. But I also think that I could run into problems in the future when I want to do something more advanced and might be restricted by App Engine's environment. I also want to be able to do data analysis on the data in the datastore. It seems that with App Engine this would be hard, as I don't have direct access to the datastore (on Amazon I would still have access to the underlying database if I go with MySQL).

    Read the article

  • PIG doesn't read my custom InputFormat

    - by Simon Guo
    I have a custom MyInputFormat that is supposed to deal with the record-boundary problem for multi-line inputs. I plug MyInputFormat into my UDF load function as follows:

        public class EccUDFLogLoader extends LoadFunc {
            @Override
            public InputFormat getInputFormat() {
                System.out.println("I am in getInputFormat function");
                return new MyInputFormat();
            }
        }

        public class MyInputFormat extends TextInputFormat {
            public RecordReader createRecordReader(InputSplit inputSplit, JobConf jobConf) throws IOException {
                System.out.println("I am in createRecordReader");
                // MyRecordReader is supposed to handle the record boundary
                return new MyRecordReader((FileSplit)inputSplit, jobConf);
            }
        }

    For each mapper, it prints out "I am in getInputFormat function" but not "I am in createRecordReader". Can anyone provide a hint on how to hook up my custom MyInputFormat to PIG's UDF loader? Much thanks. I am using PIG on Amazon EMR.
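
    A hedged guess, not confirmed in the question: Pig's LoadFunc is built on the new Hadoop mapreduce API, where createRecordReader takes a TaskAttemptContext rather than a JobConf, so the method above is an overload that never gets called. A minimal sketch of the signature that would actually override (LineRecordReader stands in for the asker's MyRecordReader, which would also need to extend the new-API RecordReader):

        import org.apache.hadoop.io.LongWritable;
        import org.apache.hadoop.io.Text;
        import org.apache.hadoop.mapreduce.InputSplit;
        import org.apache.hadoop.mapreduce.RecordReader;
        import org.apache.hadoop.mapreduce.TaskAttemptContext;
        import org.apache.hadoop.mapreduce.lib.input.LineRecordReader;
        import org.apache.hadoop.mapreduce.lib.input.TextInputFormat;

        public class MyInputFormat extends TextInputFormat {
            @Override // compiles only if the signature matches the mapreduce API
            public RecordReader<LongWritable, Text> createRecordReader(
                    InputSplit split, TaskAttemptContext context) {
                System.out.println("I am in createRecordReader");
                // Swap in MyRecordReader here; it must also be written against
                // the org.apache.hadoop.mapreduce (not mapred) API.
                return new LineRecordReader();
            }
        }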

    Read the article

  • AWS SES for bulk mail: Is email verification required?

    - by weotch
    We're thinking of moving to Amazon's SES for sending bulk mail. It appears that we have to make a unique API call for each email we want to send, so if there are 20k emails to send, we make 20k API calls. My question is: do we need to verify these email addresses before we send to them? We have an existing database of users and I'd rather the transition to SES be transparent to them. I noticed that SES has an API method for verifying emails. If we aren't required to verify, why would someone use this method?
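
    Not an answer from the thread, but SES's documented model is that sender identities always need verification, while recipient verification only applies in the sandbox; production access lifts the recipient restriction. A sketch of both calls using today's AWS SDK for Java v1 (addresses are placeholders):

        import com.amazonaws.services.simpleemail.AmazonSimpleEmailService;
        import com.amazonaws.services.simpleemail.AmazonSimpleEmailServiceClientBuilder;
        import com.amazonaws.services.simpleemail.model.*;

        public class SesSketch {
            public static void main(String[] args) {
                AmazonSimpleEmailService ses = AmazonSimpleEmailServiceClientBuilder.defaultClient();

                // Verification is for the *sender* identity (and for recipients only in the sandbox).
                ses.verifyEmailIdentity(new VerifyEmailIdentityRequest()
                        .withEmailAddress("noreply@example.com"));

                // One call per message: 20k emails means 20k sendEmail calls.
                ses.sendEmail(new SendEmailRequest()
                        .withSource("noreply@example.com")
                        .withDestination(new Destination().withToAddresses("user@example.com"))
                        .withMessage(new Message()
                                .withSubject(new Content("Hello"))
                                .withBody(new Body(new Content("Plain-text body")))));
            }
        }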

    Read the article

  • What is the cheapest non-colocation way to serve about 10 static files at a rate of 100 megabits per second?

    - by Mark Maunder
    I've looked at Amazon S3 and it costs roughly $4,746 per month for 100 megabits/s (which translates into 31,640 gigabytes of data transferred, at a rate of $0.15 per GB). I haven't found a cheaper "cloud" option. I'm curious if there's any other cloud hosting option out there cheaper than S3. Uptime is not an issue because I can build failover for most things into the browser, e.g. I can use JavaScript to say "if the image didn't load then go to this other URL instead." FYI, I'm currently using a colocation facility which is about 30% cheaper than S3, and I'm familiar with colo prices, so this question is really about "cloud" services, by which I mean services where I don't have to worry about the infrastructure.

    Read the article

  • Database storage for high sample rate data in web app

    - by Jim
    I've got multiple sensors feeding data to my web app. Each channel is 5 samples per second and the data gets uploaded bundled together in 1-minute JSON messages (containing 300 samples). The data will be graphed using Flot at multiple zoom levels, from 1 day down to 1 minute. I'm using Amazon SimpleDB and I'm currently storing the data in the 1-minute chunks I receive it in. This works well at high zoom levels, but for full days there will simply be too many rows to retrieve. The idea I've currently got is that every hour I can crawl through the data, downsample the last hour to 300 samples, and store them in an hour Domain (table, if you like). Does this sound like a reasonable solution? How have others implemented the same sort of systems?
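
    The hourly rollup itself is simple enough to sketch. A minimal illustration in Java (pure aggregation, no SimpleDB calls; storing the 300 averaged points would happen in the hour Domain the asker describes):

        import java.util.Arrays;

        public class HourlyRollup {
            /** Downsample one hour of 5 Hz data (18,000 samples) to 300 points by averaging fixed windows. */
            static double[] rollup(double[] hourOfSamples, int targetPoints) {
                int window = hourOfSamples.length / targetPoints; // 18000 / 300 = 60 samples per point
                double[] out = new double[targetPoints];
                for (int i = 0; i < targetPoints; i++) {
                    double sum = 0;
                    for (int j = 0; j < window; j++) {
                        sum += hourOfSamples[i * window + j];
                    }
                    out[i] = sum / window; // each output point covers 12 seconds
                }
                return out;
            }

            public static void main(String[] args) {
                double[] hour = new double[5 * 3600]; // 5 samples/sec for one hour
                Arrays.fill(hour, 1.0);
                System.out.println(rollup(hour, 300).length); // prints 300
            }
        }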

    Read the article

  • Serving gzipped files from S3 using the Asset Pipeline

    - by kmurph79
    I have a Rails 3.2.3 app on Heroku and I'm using the asset_sync gem to serve my assets from S3, via these instructions. It works great, except S3 is not serving the gzipped CSS/JS files (just the uncompressed versions). I've enabled gzip compression, to no avail:

        config.gzip_compression = true

    According to "Using GZIP with html pages served from Amazon S3", I need to add metadata to the S3 object when uploading. How would I do this in concert with the Asset Pipeline? Thank you for any help.
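
    asset_sync specifics aside, the metadata in question is the object's Content-Encoding header, set at upload time. A sketch of the equivalent direct upload with the AWS SDK for Java (bucket, key, and file names are made up; the Rails-side fix would set this same header during upload):

        import java.io.File;

        import com.amazonaws.services.s3.AmazonS3;
        import com.amazonaws.services.s3.AmazonS3ClientBuilder;
        import com.amazonaws.services.s3.model.ObjectMetadata;
        import com.amazonaws.services.s3.model.PutObjectRequest;

        public class GzipAssetUpload {
            public static void main(String[] args) {
                AmazonS3 s3 = AmazonS3ClientBuilder.defaultClient();

                ObjectMetadata meta = new ObjectMetadata();
                meta.setContentType("text/css");   // what the asset is
                meta.setContentEncoding("gzip");   // how its bytes are encoded on the wire

                // Upload the precompressed file under the uncompressed name.
                s3.putObject(new PutObjectRequest(
                        "my-assets-bucket",
                        "assets/application.css",
                        new File("public/assets/application.css.gz"))
                        .withMetadata(meta));
            }
        }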

    Read the article

  • Why wouldn't an S3 ACL "stick"?

    - by Chris Phillips
    We would like to set an ACL to allow access to one of our buckets from a partner account. We've tested the process on a test account and everything works fine. On our production account/buckets, however, we can set the ACL and see the update, but as soon as we attempt to access the bucket from the other account we get a Forbidden response. Afterwards, when we look at the ACL list for the bucket, the permission is gone. We've tried using both Amazon's new S3 tool in the AWS Management Console and CloudBerry Explorer, and both tools exhibit exactly the same behavior. Using the same process to update an ACL on our test account works as expected (the ACL update "sticks"). What would cause the ACL to not "stick"? Does anyone have any ideas on how to fix or work around the problem?
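
    One way to take the GUI tools out of the equation is to apply the grant directly through the API and re-read it immediately. A sketch with the AWS SDK for Java (bucket name and the partner's canonical user ID are placeholders):

        import com.amazonaws.services.s3.AmazonS3;
        import com.amazonaws.services.s3.AmazonS3ClientBuilder;
        import com.amazonaws.services.s3.model.AccessControlList;
        import com.amazonaws.services.s3.model.CanonicalGrantee;
        import com.amazonaws.services.s3.model.Permission;

        public class BucketAclProbe {
            public static void main(String[] args) {
                AmazonS3 s3 = AmazonS3ClientBuilder.defaultClient();
                String bucket = "production-bucket"; // hypothetical name

                // Read, modify, and write back the full ACL rather than replacing it wholesale.
                AccessControlList acl = s3.getBucketAcl(bucket);
                acl.grantPermission(new CanonicalGrantee("partner-canonical-user-id"), Permission.Read);
                s3.setBucketAcl(bucket, acl);

                // If the grant is already missing on re-read, something else
                // (another tool or a deploy script) is rewriting the ACL.
                System.out.println(s3.getBucketAcl(bucket));
            }
        }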

    Read the article

  • CSS not displayed depending on page

    - by Kanjiroushi
    I have a friend who has a really strange issue with my website. When he clicks on http://www.copeo.fr/ the page displays fine, but when he clicks on a link like www.copeo.fr/user/ the CSS is not applied, even after a refresh. The raw HTML does display. I asked him to open the CSS, which is hosted on Amazon S3 at hcopeoressources.s3.amazonaws.com/style/futurvert/style.css, and it displays fine. The HTML validates on the W3C validator, and so does the CSS. I am lost as to the origin of the issue. Could it be his company's cache? The configuration of IE7 on his machine? If this has happened to anyone else who can explain the issue to me, I am all ears. Thanks
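
    Not a confirmed diagnosis, but a frequent culprit when S3-hosted CSS downloads fine yet is never applied in IE is a Content-Type other than text/css, which IE7 enforces strictly. A quick header check in plain Java:

        import java.net.HttpURLConnection;
        import java.net.URL;

        public class CssHeaderCheck {
            public static void main(String[] args) throws Exception {
                URL url = new URL("http://hcopeoressources.s3.amazonaws.com/style/futurvert/style.css");
                HttpURLConnection conn = (HttpURLConnection) url.openConnection();
                conn.setRequestMethod("HEAD");
                System.out.println("Status: " + conn.getResponseCode());
                // IE refuses to apply a stylesheet served with the wrong MIME type.
                System.out.println("Content-Type: " + conn.getContentType());
                conn.disconnect();
            }
        }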

    Read the article

  • How to host a naked domain on a CDN?

    - by rjw79
    If I have a domain that I wish to serve "naked", e.g. http://examp.le/, and efficiently with a CDN, what are my options? The issue is that the CDNs I looked at all want you to use a CNAME so that they can do geo-IP lookup. A CNAME cannot coexist with other records at the same name, and this breaks some DNS resolvers; a naked domain needs at least SOA and NS records (and usually MX) at the apex. The only solutions are: having A records in your own DNS, thus skipping the geo-IP, or finding a CDN who will allow delegation of the whole domain so they can do geo-IP things for the A record directly. I've tried googling and can't find any CDN that offers this. Any ideas? I looked closely at Amazon CloudFront and Rackspace Cloud Files. I couldn't work it out for those.

    Read the article

  • Cluster of computers for rent?

    - by R S
    I am doing a project at the university which requires running multiple instances (thousands) of a program I've written in C++, each of which runs for quite a while (say 2 hours). The program is very self-contained: it does not require input files, and the only dependency, I think, is Boost. I'm currently using the university-owned computer cluster. However, it's quite old and the job dispatching and monitoring services are pretty bad. So I was wondering whether I can run my jobs elsewhere, for some money. For example, I looked a bit into Google App Engine, but as it seems every job must end after 30 seconds, it is not suitable for me. Maybe Amazon EC2? Do you know of such options?

    Read the article

  • Free service that allows storing game data online?

    - by StackedCrooked
    I have created a small game in Java and I would like to add the ability for a player to publish his high scores online. I'm willing to write the server software myself (it's easy these days with Ruby and Mongrel, or even C++). All I need is some sort of hosting. One solution that immediately comes to mind is Amazon EC2, but that's kind of expensive for my needs. Since the requirements are very minimal (I don't even need a website, just a web service), I think there may be a cheaper solution out there. Does anyone know of a free or cheap provider for this kind of thing?

    Read the article

  • Ruby: Streaming large AWS S3 object freezes

    - by Peter
    Hi, I am using the Ruby aws/s3 library to retrieve files from Amazon S3. I stream an object and write it to file as per the documentation (with debug output every 100 chunks to confirm progress). This works for small files, but randomly freezes when downloading large (150MB) files on an Ubuntu VPS. Fetching the same files (150MB) from my Mac on a much slower connection works just fine. When it hangs there is no error thrown, and the last line of debug output is the 'Finished chunk' line. I've seen it write between 100 and 10,000 chunks before freezing. Has anyone come across this, or have ideas on what the cause might be? Thanks. The code that hangs:

        i = 1
        open(local_file, 'w') do |f|
          AWS::S3::S3Object.value(key, @s3_bucket) do |chunk|
            puts("Writing chunk #{i}")
            f.write chunk.read_body
            puts("Finished chunk #{i}")
            i = i + 1
          end
        end

    Read the article

  • What is the best instance type to use for hosting a website on ec2?

    - by Josh
    Amazon offers two instance types on EC2: 1) On-Demand and 2) Reserved. After reading the docs on these, I don't really understand the difference from an end-user perspective. More specifically, I'd like to know the answer to this question: is one or the other better for web applications? Based on their names and descriptions, it seems as though On-Demand instances may get wiped away from the server altogether if they're not in use, which means that they need to be restarted when a request finally does come in. That seems like a pretty bad thing for a website. Am I just misinterpreting the docs? Thanks!

    Read the article

  • Cannot access data from HBase on Amazon EC2

    - by Najeeb Thalakkatt
    I have a single-node Hadoop machine on which HBase is running, in an Amazon EC2 instance. For some reason the server got restarted, so I needed to start Hadoop and HBase again. After that they work fine, but the old data in HBase can no longer be accessed through the web services. When I use the shell commands it works fine and I get the data. I recreated the scenario on my local server machine, and there it works fine. The version details are as follows: hadoop-0.20.2, hbase-0.90.5, apache-tomcat-7.0.30, on an Amazon EC2 medium instance. I use RESTful web services with Orm-hbase to access the data.

    Read the article

  • What payment gateways do real customers really use when given the choice?

    - by ??????
    I would like to give customers the option of paying however they can, whether that be through a proper gateway (e.g. SagePay) or through something else such as PayPal, Amazon Checkout or Google Checkout. Personally I have not bought anything through Amazon Checkout except on Amazon.co.uk, and my PayPal purchases have been limited. As for Google Checkout, I have no idea what it is or how it works from a consumer perspective. I understand that people buying from smaller sites are happier to pay by PayPal, as they already have an account and trust PayPal. As for Amazon Payments and Google Checkout, do people actually use them if given the choice? There are a lot of people on Kindles these days, happy to buy things via Amazon on their Kindle. Would Amazon Payments make sense to this growing crowd? With too many payment gateways on offer it might be confusing at the checkout. Does anyone know if this is a problem for genuine customers? I also have not seen many 'pay by Amazon Payments' icons on websites (you see PayPal all the time). Does advertising the fact that you can pay by Amazon Payments increase sales, e.g. to Kindle owners that have a nebulous book-buying account that 'their other half doesn't know about'?

    Read the article

  • eZ Components and the AWS PHP SDK: loading both makes eZ Components freak out

    - by David
    Hi, I am trying to work with eZ Components and the AWS PHP SDK at the same time. I have a file called resize.php which just handles resizing images using the eZ Components image transformation tools. I queue the image for resizing in Amazon AWS SQS. If I load the AWS PHP SDK and eZ Components in the same file, PHP always complains about not finding the eZ Components classes. The code looks something like this:

    amazonSQS.php:

        require 'modules/resize.php';
        require 'modules/aws/sdk.class.php';
        $sqs = new AmazonSQS();
        $response = $sqs->send_message($queue_url, $message);

    resize.php:

        function resize_image($filename) {
            $settings = new ezcImageConverterSettings(
                array(
                    //new ezcImageHandlerSettings( 'GD', 'ezcImageGdHandler' ),
                    new ezcImageHandlerSettings( 'ImageMagick', 'ezcImageImagemagickHandler' ),
                )
            );

    Error message:

        Fatal error: Class 'ezcImageConverterSettings' not found in /home/www.com/public_html/modules/resize.php on line 10

    If I call resize.php from another PHP file which does not include the AWS SDK, it works fine. I load eZ Components like this:

        require 'ezc/Base/ezc_bootstrap.php';

    It is installed as a PEAR package. Any ideas, someone?

    Read the article

  • Heroku and Refinerycms: Application failed to start ~ attachment_fu problem

    - by John Deely
    OK, so I'm trying to get Refinery CMS working with Heroku, and I'm new at all of this. I've set up an Amazon S3 account and added the keys and IDs to the amazon_s3.yml files. When launched on Heroku at gart.heroku.com I get the following error:

        App failed to start
        /disk1/home/slugs/141557_e8490b3_d5eb/mnt/vendor/plugins/attachment_fu/lib/technoweenie/attachment_fu/backends/s3_backend.rb:187:in `read': No such file or directory - /disk1/home/slugs/141557_e8490b3_d5eb/mnt/config/amazon_s3.yml (Errno::ENOENT)
          from /disk1/home/slugs/141557_e8490b3_d5eb/mnt/vendor/plugins/attachment_fu/lib/technoweenie/attachment_fu/backends/s3_backend.rb:187:in `included'
          from /disk1/home/slugs/141557_e8490b3_d5eb/mnt/vendor/plugins/attachment_fu/lib/technoweenie/attachment_fu.rb:123:in `include'
          from /disk1/home/slugs/141557_e8490b3_d5eb/mnt/vendor/plugins/attachment_fu/lib/technoweenie/attachment_fu.rb:123:in `has_attachment'
          from /disk1/home/slugs/141557_e8490b3_d5eb/mnt/app/models/image.rb:13
          from /usr/local/lib/ruby/site_ruby/1.8/rubygems/custom_require.rb:31:in `gem_original_require'
          from /usr/local/lib/ruby/site_ruby/1.8/rubygems/custom_require.rb:31:in `require'
          from /usr/local/lib/ruby/gems/1.8/gems/activesupport-2.3.5/lib/active_support/dependencies.rb:158:in `require'
          from /usr/local/lib/ruby/gems/1.8/gems/activesupport-2.3.5/lib/active_support/dependencies.rb:265:in `require_or_load'
          ... 42 levels...
          from /usr/local/lib/ruby/gems/1.8/gems/rack-1.0.1/lib/rack/builder.rb:29:in `instance_eval'
          from /usr/local/lib/ruby/gems/1.8/gems/rack-1.0.1/lib/rack/builder.rb:29:in `initialize'
          from /home/heroku_rack/heroku.ru:1:in `new'
          from /home/heroku_rack/heroku.ru:1

    Line 187 of s3_backend.rb contains:

        @@s3_config = @@s3_config = YAML.load(ERB.new(File.read(@@s3_config_path)).result)[RAILS_ENV].symbolize_keys

    Any help would be great!

    Read the article

  • Deployed a Zend app on EC2 and it's not working

    - by Gublooo
    Hey guys, I'm confused about the problem and not sure if I can provide enough details. I have a PHP app built on the Zend Framework which I've successfully deployed at other hosting companies. I am now trying to move to Amazon EC2. I moved all my code and set my domain to point to the IP address. So far so good. Now when I access my home page, say www.example.com, everything looks good: the home page opens up, which means the IndexController is being called and the index method is properly executed, pulling data from the database and displaying it in the index.phtml page. This leads me to believe that everything is working fine. But every link I click on the home page, whether it's a simple contact-us link or any other action that I try to call directly through the URL, results in:

        404 Not Found
        The requested URL /user/add was not found on this server.
        Apache/2.2.9 (Fedora) Server at www.example.com Port 80

    The interesting thing is that the home page opens up fine when I call www.example.com, but when I give the full path, www.example.com/index/index, I get the same error. I have checked the logs and there are no errors. Has anyone encountered something similar, or any idea if I'm missing a simple step, like a rewrite rule maybe? It's running on LAMP. Any ideas? Thanks

    Read the article

  • Knowing the selections made on a 'multichooser' box in a Mechanical Turk hit (using Command Line Tools)

    - by gveda
    Hi all, I am new to Amazon Mechanical Turk and wanted to create a HIT with a qualification task. I am using the command line tools interface. One of the questions in my qualification task involves users selecting a number of options, so I use a 'multichooser' selection type. Now I want to grade the responses based on the selections, where each selection has a different score: so for example, s1 has a score of 5, s2 of 10, s3 of 6, and so on. If the user selects s1 and s3, he/she gets a score of 11. Unfortunately, doing something like the following does not work:

        <AnswerOption>
          <SelectionIdentifier>s1</SelectionIdentifier>
          <AnswerScore>5</AnswerScore>
        </AnswerOption>
        <AnswerOption>
          <SelectionIdentifier>s2</SelectionIdentifier>
          <AnswerScore>10</AnswerScore>
        </AnswerOption>
        <AnswerOption>
          <SelectionIdentifier>s3</SelectionIdentifier>
          <AnswerScore>6</AnswerScore>
        </AnswerOption>

    If I do this, when I select multiple options I get a score of 0. If I select only one option, say s1, then I get the appropriate score. Can you please suggest how to go about this? I could ask the same question 5 times with the same options, but then users might choose the same answer multiple times - something I wish to avoid. Thanks! Gaurav

    Read the article

  • S3 Backup Memory Usage in Python

    - by danpalmer
    I currently use WebFaction for my hosting, with the basic package that gives us 80MB of RAM. This is more than adequate for our needs at the moment, apart from our backups. We do our own backups to S3 once a day. The backup process is: dump the database, tar.gz all the files into one backup named with the correct date, and upload to S3 using the Python library provided by Amazon. Unfortunately, it appears (although I don't know this for certain) that either my code for reading the file or the S3 code is loading the entire file into memory. As the file is approximately 320MB (for today's backup), it is using about 320MB just for the backup. This causes WebFaction to quit all our processes, meaning the backup doesn't happen and our site goes down. So this is the question: is there any way to not load the whole file into memory, or are there other Python S3 libraries that are much better with RAM usage? Ideally it needs to be about 60MB at the most! If this can't be done, how can I split the file and upload separate parts? Thanks for your help. This is the section of code (in my backup script) that caused the processes to be quit:

        filedata = open(filename, 'rb').read()  # reads the entire file into memory at once
        content_type = mimetypes.guess_type(filename)[0]
        if not content_type:
            content_type = 'text/plain'
        print 'Uploading to S3...'
        response = connection.put(BUCKET_NAME, 'daily/%s' % filename,
                                  S3.S3Object(filedata),
                                  {'x-amz-acl': 'public-read', 'Content-Type': content_type})

    Read the article

  • Multiple elements with the same name with SimpleXML and Java

    - by LouieGeetoo
    I'm trying to use Simple XML to parse an XML document (an ItemLookupResponse for a book from the Amazon Product Advertising API) which contains the following element:

        <ItemAttributes>
          <Author>Shane Conder</Author>
          <Author>Lauren Darcey</Author>
          <Manufacturer>Pearson Educacion</Manufacturer>
          <ProductGroup>Book</ProductGroup>
          <Title>Android Wireless Application Development: Barnes &amp; Noble Special Edition</Title>
        </ItemAttributes>

    My problem is that I don't know how to deal with the multiple possible Author elements. Here's what I have right now for the corresponding POJO (Plain Old Java Object), keeping in mind that it doesn't handle the case of multiple Authors:

        @Element
        public class ItemAttributes {
            @Element
            public String Author;
            @Element
            public String Manufacturer;
            @Element
            public String Title;
        }

    (I don't care about the ProductGroup, so it's not in the class; I'm just setting Simple XML's strict mode to off to allow for that.) I couldn't find an example in the documentation that corresponds to such a case. Using an ElementList with inline=true seemed along the right lines, but I didn't see how to do it for String (as opposed to a separate Author class, which I have no need for and don't see how it would even work). Here's a similar question and answer, but for PHP: "php - simpleXML how to access a specific element with the same name as others?" I don't know what the Java equivalent would be to the accepted answer. Thanks in advance.
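
    For what it's worth, Simple XML does support inline lists of primitives. A hedged sketch of what the POJO might look like (the entry attribute names the repeated element; field names are illustrative):

        import java.util.List;

        import org.simpleframework.xml.Element;
        import org.simpleframework.xml.ElementList;
        import org.simpleframework.xml.Root;

        @Root(strict = false)
        public class ItemAttributes {
            // Collects every <Author> element into a single list of strings.
            @ElementList(entry = "Author", inline = true)
            public List<String> authors;

            @Element(name = "Manufacturer")
            public String manufacturer;

            @Element(name = "Title")
            public String title;
        }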

    Read the article

  • Can't seem to sign AWS CloudFront URL properly

    - by Joe Corkery
    Hi everybody, I found a lot of detailed examples online of how to sign an Amazon CloudFront URL for private content. Unfortunately, whenever I implement these examples my URL doesn't seem to work. The resource path is correct, because I can download the file when it is set for world read, but the URL doesn't work when it is set just for authorized users. The PHP code I am using is below. If anybody has any insight into what I might be doing wrong (I'm guessing it is something obvious that I am just not seeing right now), it would be greatly appreciated.

        function urlCloudFront($resource) {
            $AWS_CF_KEY = 'APKA...';
            $priv_key = file_get_contents(path_to_pem_file);
            $pkeyid = openssl_get_privatekey($priv_key);
            $expires = strtotime("+ 3 hours");
            $policy_str = '{"Statement":[{"Resource":"'.$resource.'","Condition":{"DateLessThan":{"AWS:EpochTime":'.$expires.'}}}]}';
            $policy_str = trim(preg_replace('/\s+/', '', $policy_str));
            $res = openssl_sign($policy_str, $signature, $pkeyid, OPENSSL_ALGO_SHA1);
            $signature_base64 = base64_encode($signature);
            $repl = array('+' => '-', '=' => '_', '/' => '~');
            $signature_base64 = strtr($signature_base64, $repl);
            $url = $resource . '?Expires=' . $expires . '&Signature=' . $signature_base64 . '&Key-Pair-Id=' . $AWS_CF_KEY;
            print '<p><a href="' . $url . '">Download VIDA (CloudFront)</a>';
        }

        urlCloudFront("http://mydistcloud.cloudfront.net/mydir/myfile.tar.gz");

    Thanks.

    Read the article

  • API-based solutions for sending payments to people without bank accounts

    - by Tauren
    I'm looking for inexpensive ways to send payments to hundreds or thousands of individual contractors, even if they do not have a bank account. Currently I only need to support payment in the USA, but it may eventually be international. Here's the scenario: I offer a service that allows an organization or manager-type person to coordinate contractors for very short-term jobs. These jobs are typically only an hour or two in length. A contractor may get only one job over an entire month, several jobs spread out over a month, multiple jobs on a single day, or any other combination. Thus, a single contractor could earn as little as one job's payment up to potentially payment for dozens. Payment for a month could be as little as $10 up to $1000s. Right now, the system provides payroll reports to the manager and it is the manager's responsibility to produce checks, stuff envelopes, and send mail via the US Postal Service. I'd like to remove this burden from the manager and have all the payments taken care of automatically by the system. I'm not sure where to start or what the best options would be. I'm starting to look into the following solutions, but don't know the specifics yet and would like some advice before pursuing them. I'd also like to hear other ideas or suggestions:

      • PayPal (Send Money, Adaptive Payments, x.com, other?)
      • Amazon (Flexible Payments System?)
      • Funding some sort of pre-paid debit card
      • A web service with an API that mails checks for you
      • Direct deposit via a bank API (for users with bank accounts)

    The problem is that many of these contractors may not be able to obtain bank accounts or credit cards within the USA. I don't mind doing a hybrid of solutions, but are there any that would work well with this issue? I want the solution to be easy to use for the contractors, meaning that they can get the money easily (via check in the mail, debit card ATM withdrawal, etc.)

    Read the article

  • How can I lookup data about a book from its barcode number?

    - by Joel Spolsky
    I'm building the world's simplest library application. All I want to be able to do is scan in a book's UPC (barcode) using a typical scanner (which just types the numbers of the barcode into a field) and then use it to look up data about the book... at a minimum, title, author, year published, and either the Dewey Decimal or Library of Congress catalog number. The goal is to print out a tiny sticker ("spine label") with the card catalog number that I can stick on the spine of the book, and then I can sort the books by card catalog number on the shelves in our company library. That way books on similar subjects will tend to be near each other, for example, if you know you're looking for a book about accounting, all you have to do is find SOME book about accounting and you'll see the other half dozen that we have right next to it which makes it convenient to browse the library. There seem to be lots of web APIs to do this, including Amazon and the Library of Congress. But those are all extremely confusing to me. What I really just want is a single higher level function that takes a UPC barcode number and returns some basic data about the book.
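
    One hedged possibility (not mentioned in the question): most book barcodes encode the ISBN, and Open Library exposes a plain HTTP endpoint keyed by ISBN that returns title, authors, publish date and, when it has them, LC and Dewey classifications. A bare-bones fetch in Java; parsing the JSON is left to whatever library is handy:

        import java.io.BufferedReader;
        import java.io.InputStreamReader;
        import java.net.HttpURLConnection;
        import java.net.URL;

        public class IsbnLookup {
            public static void main(String[] args) throws Exception {
                String isbn = "9780321356680"; // sample ISBN-13, as scanned from the barcode
                URL url = new URL("https://openlibrary.org/api/books?bibkeys=ISBN:"
                        + isbn + "&jscmd=data&format=json");
                HttpURLConnection conn = (HttpURLConnection) url.openConnection();
                try (BufferedReader in = new BufferedReader(
                        new InputStreamReader(conn.getInputStream(), "UTF-8"))) {
                    String line;
                    while ((line = in.readLine()) != null) {
                        System.out.println(line); // JSON keyed by "ISBN:<number>"
                    }
                }
            }
        }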

    Read the article

  • PHP: How to get specific data from an array

    - by Giffary
    Hello! I am trying to use the Amazon API with PHP. If I use print_r($parsed_xml->Items->Item->ItemAttributes) it shows me a result like:

        SimpleXMLElement Object
        (
            [Binding] => Electronics
            [Brand] => Canon
            [DisplaySize] => 2.5
            [EAN] => 0013803113662
            [Feature] => Array
                (
                    [0] => High-powered 20x wide-angle optical zoom with Optical Image Stabilizer
                    [1] => Capture 720p HD movies with stereo sound; HDMI output connector for easy playback on your HDTV
                    [2] => 2.5-inch Vari-Angle System LCD; improved Smart AUTO intelligently selects from 22 predefined shooting situations
                    [3] => DIGIC 4 Image Processor; 12.1-megapixel resolution for poster-size, photo-quality prints
                    [4] => Powered by AA batteries (included); capture images to SD/SDHC memory cards (not included)
                )
            [FloppyDiskDriveDescription] => None
            [FormFactor] => Rotating
            [HasRedEyeReduction] => 1
            [IsAutographed] => 0
            [IsMemorabilia] => 0
            [ItemDimensions] => SimpleXMLElement Object
                (
                    [Height] => 340
                    [Length] => 490
                    [Weight] => 124
                    [Width] => 350
                )
            [Label] => Canon
            [LensType] => Zoom lens
            [ListPrice] => SimpleXMLElement Object
                (
                    [Amount] => 60103
                    [CurrencyCode] => USD
                    [FormattedPrice] => $601.03
                )
            [Manufacturer] => Canon
            [MaximumFocalLength] => 100
            [MaximumResolution] => 12.1
            [MinimumFocalLength] => 5
            [Model] => SX20IS
            [MPN] => SX20IS
            [OpticalSensorResolution] => 12.1
            [OpticalZoom] => 20
            [PackageDimensions] => SimpleXMLElement Object
                (
                    [Height] => 460
                    [Length] => 900
                    [Weight] => 242
                    [Width] => 630
                )
            [PackageQuantity] => 1
            [ProductGroup] => Photography
            [ProductTypeName] => CAMERA_DIGITAL
            [ProductTypeSubcategory] => point-and-shoot
            [Publisher] => Canon
            [Studio] => Canon
            [Title] => Canon PowerShot SX20IS 12.1MP Digital Camera with 20x Wide Angle Optical Image Stabilized Zoom and 2.5-inch Articulating LCD
            [UPC] => 013803113662
        )

    My goal is to get only the Feature information, and I tried to use $feature = $parsed_xml->Items->Item->ItemAttributes->Feature, but it doesn't work for me because it shows me only the first feature. How do I get all the Feature information? Please help.

    Read the article
