Search Results

Search found 22711 results on 909 pages for 'amazon product api'.

Page 20/909 | < Previous Page | 16 17 18 19 20 21 22 23 24 25 26 27  | Next Page >

  • Google Fonts API JSON Data in WordPress Options-Framework-Theme

    - by Rob
    I'm developing a child theme off of the new Twenty Twelve theme, using WordPress 3.4.2 and the development version of the Options Theme Framework by Devin Price. Devin's tutorial shows a way to implement 15 Google Web Fonts on the Theme Options page, but not all of them (roughly 560). I know I can create a "manual list" like the one in the tutorial, stating each font with fallbacks, but this is time-consuming and unproductive, as Google may add to, update, change or remove fonts from its list at any time. A list I maintain by hand will ultimately offer fonts that are no longer available (the user assumes a font exists because it appears in the drop-down menu), and it won't pick up new ones, making the list and some selections obsolete.

    On the Google Developer API Web Fonts page, it talks briefly about retrieving a "dynamic list" using JSON/JavaScript. I was wondering how I could pull the Google Web Fonts API into my WordPress Theme Options page so that I'm not maintaining my own list or constantly releasing updates to fix it. Could someone walk me through what I would need to add to my options.php, functions.php, /inc/options-framework.php, or even a new file, to implement this? I've looked at some screencasts, plugins and tutorials on how it works, but none of them are specific enough for people just starting out. Please keep in mind I'm not the best coder... Thank you.
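
    For what it's worth, the "dynamic list" the Developer API page mentions is just a JSON document you can fetch server-side and cache. A minimal sketch of the fetch, in Python purely to show the shape of the data (the WordPress side would do the same in PHP with wp_remote_get; YOUR_API_KEY is a placeholder):

        import json
        import urllib.request

        # Google Web Fonts developer API; returns every family Google serves.
        URL = "https://www.googleapis.com/webfonts/v1/webfonts?key=YOUR_API_KEY"

        def fetch_font_families():
            """Return the current list of Google Web Font family names."""
            with urllib.request.urlopen(URL) as resp:
                data = json.load(resp)
            # Each item also carries 'variants' and 'subsets' if needed.
            return [item["family"] for item in data["items"]]

        if __name__ == "__main__":
            families = fetch_font_families()
            print(len(families), "fonts, e.g.", families[:5])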

    Read the article

  • DB API for shell scripting (any shell)

    - by foampile
    I am faced with some legacy shell scripts that run batch data-processing jobs in Oracle using SQL*Plus. For the most part the data tier does not have to pass retrieved data back to the script for shell-level processing, but in a few cases it does. The problem is that SQL*Plus is really meant to be an end-user app, not an API that other clients can talk to programmatically. That is why people have invented APIs such as DBI/DBD for Perl, JDBC for Java, ODBC and so on.

    The way it is done here is that the scripts invoke SQL*Plus and then parse its output, which is clearly designed for human consumption, using tools like sed and awk. The whole thing is at best a hack and very prone to bugs. Since this client is rather conservative with their technology, they don't want to move their scripts up to Perl or Python, where proper data access APIs exist. So I am wondering whether there are similar APIs for a shell, e.g. ksh or bash. What I would like is an API that returns data in a two-dimensional array of strings (given the lack of typing in shell), so that I can just read DB data directly. The way they do it now is akin to scraping a web page's HTML to get a single stock quote rather than cleanly calling a web service and being done with it. Anybody know of a product I can use? Thanks
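
    For contrast, here is roughly what the sed/awk hack reduces to when written against the "2-D array of strings" interface the question asks for. This is a Python sketch only because shell has no clean way to express the result; the SQL*Plus flags are real, but the connect string and the pipe-delimited SELECT are illustrative assumptions:

        import subprocess

        def query(sql, connect="user/pass@ORCL"):
            """Run sql via SQL*Plus and return rows as lists of strings."""
            # Suppress headings, paging and feedback so only data comes back.
            script = "set pagesize 0 feedback off heading off\n" + sql + ";\nexit\n"
            out = subprocess.run(
                ["sqlplus", "-S", connect],  # -S: silent mode, no banner
                input=script, capture_output=True, text=True, check=True,
            ).stdout
            # Assumes the SELECT itself concatenates columns with '|', e.g.
            #   SELECT empno || '|' || ename FROM emp
            return [line.split("|") for line in out.splitlines() if line.strip()]

        rows = query("SELECT empno || '|' || ename FROM emp")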

    Read the article

  • Balancing dependency injection with public API design

    - by kolektiv
    I've been contemplating how to balance testable design using dependency injection with providing a simple, fixed public API. My dilemma is: people want to be able to write var server = new Server() { ... } and not have to worry about creating the many dependencies, and the graph of dependencies, that a Server(,,,,,,) may have. While developing, I don't worry too much, as I use an IoC/DI framework to handle all that (I'm not using the lifecycle-management aspects of any container, which would complicate things further).

    Now, the dependencies are unlikely to be re-implemented. Componentisation in this case is almost purely for testability (and decent design!) rather than for creating seams for extension. People will, 99.999% of the time, wish to use a default configuration. So: I could hardcode the dependencies. Don't want to do that: we lose our testing! I could provide a default constructor with hard-coded dependencies and one which takes dependencies. That's... messy, and likely to be confusing, but viable. I could make the dependency-receiving constructor internal and make my unit tests a friend assembly (assuming C#), which tidies the public API but leaves a nasty hidden trap lurking for maintenance. Having two constructors which are implicitly rather than explicitly connected would be bad design in general, in my book, but at the moment that's about the least evil I can think of. Opinions? Wisdom?
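
    For what it's worth, the second option usually ends up looking like this: one signature, with production defaults filled in when nothing is injected. A Python sketch of the pattern (class names invented; in C# it would be a parameterless constructor chaining to the injecting one):

        class TcpTransport: pass      # stand-ins for the real
        class JsonCodec: pass         # production dependencies
        class ConsoleLogger: pass
        class FakeTransport: pass     # what a test might inject

        class Server:
            def __init__(self, transport=None, codec=None, logger=None):
                # Each argument defaults to the production implementation,
                # so Server() "just works" for the 99.999% case...
                self.transport = transport or TcpTransport()
                self.codec = codec or JsonCodec()
                self.logger = logger or ConsoleLogger()

        server = Server()                           # public API stays simple
        testee = Server(transport=FakeTransport())  # ...while tests inject fakes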

    Read the article

  • Reflective discovery of an inner class in an API

    - by wassup
    Let me ask you, as this has bothered me for quite a while but appears, subjectively, to be the best solution to my problem: is reflective discovery of an inner class for API purposes really such a bad idea?

    First, let me explain what I mean by "reflective discovery" and all that. I am sketching an API for a Java database system that will be centered around block-based entities (don't ask me what that means: that's a long story), and those entities can be read and returned to the Java code as objects subclassed from the Entity class. I have an Entity.Factory class that, by means of fluent interfaces, takes a Class<? extends Entity> argument and then uses an instance of Section.Builder, Property.Builder, or whatever builder the entity has, to put it into the back-end storage. The idea of registering all entity types and their builders just doesn't appeal to me, so I thought the closest solution that would satisfy my design needs would be to discover, using reflection, all inner classes of each Entity subclass and find the one called Builder.

    Looking for some expert insight :) And if I missed some important design details (which could happen, as I tried to make this question as concise as possible), just tell me and I'll add them.
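
    The discovery itself is tiny; in Java it would be a scan of entityClass.getDeclaredClasses() for a member type named Builder. A sketch of the same idea in Python, where a nested class is just a class attribute (the Entity/Section classes are invented to keep the example self-contained):

        class Entity: pass

        class Section(Entity):
            class Builder:
                def build(self):
                    return Section()

        def find_builder(entity_cls):
            """Reflectively discover the nested Builder of an Entity subclass."""
            builder = getattr(entity_cls, "Builder", None)
            if not isinstance(builder, type):
                raise TypeError(entity_cls.__name__ + " defines no Builder")
            return builder

        section = find_builder(Section)().build()  # -> a new Section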

    Read the article

  • Retexturing a model via API on the web

    - by AndyMcKenna
    I'm looking at creating a site where a user could see our product, configure its options or look, and see an image that represents that. The way I'm doing it now: if you have Piece A selected and then choose Texture X, I have an image on the filesystem that is A with X applied to it, and I just swap out my default image for that specific one. One product has 8 areas, with 10-70 pieces per area, and about 200 textures per piece. Programming the site was pretty simple, but we are getting bogged down in rendering all these piece/texture combinations and entering them into the system.

    What I would love to do is build a model, apply the textures via an API, and render the result to the browser. I would even settle for exporting a flat image and pulling that into the browser. Is that possible with something like SolidWorks, 3ds Max, or something else? If the rendering is too time-intensive, it would still help to batch-create all my images and use them the way I do currently.
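
    Blender is one modelling tool where this is scriptable end to end through its bundled Python API, so a batch run over all piece/texture combinations becomes a loop. A minimal sketch (the object name "PieceA" and the Image Texture node name "BaseTex" are placeholders for whatever the real model uses, and it assumes a node-based material is already set up):

        import bpy  # Blender's Python API; run this inside Blender

        def render_variant(texture_path, out_path):
            """Swap the texture on PieceA's material and render a flat image."""
            mat = bpy.data.objects["PieceA"].active_material
            mat.node_tree.nodes["BaseTex"].image = bpy.data.images.load(texture_path)
            bpy.context.scene.render.filepath = out_path
            bpy.ops.render.render(write_still=True)  # writes out_path to disk

        for i, tex in enumerate(["wood.png", "marble.png"]):
            render_variant(tex, "/tmp/piece_a_%d.png" % i)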

    Read the article

  • Help with Yelp and Google restaurant APIs

    - by chris
    OK, I have looked into this, and I'm not sure if anyone else has experience with it. I'm having tremendous difficulty with the Yelp and Google APIs. To help explain, here is the concept of the website: we pull restaurants based on the user's distance, then randomize them, weighted by the quality of the restaurant, based on feedback from review websites (Yelp, Google, Urbanspoon, Zagat, OpenTable, Kudzu, Yahoo; it doesn't have to be all of them) and feedback from our own users (on the results page for the random restaurant, users can mark it a good or bad recommendation).

    There's a lot we could feed into our formula. Results will depend on whether you're at home or at work. If you're at home, you have more time to drive out to the city to grab dinner or lunch. If you're at work, we have to recommend restaurants nearby, as lunch is typically 30 minutes to an hour: a 30-minute lunch most likely requires take-out or quick service, while with an hour-long lunch break you could dine in at a local fine-dining restaurant.

    So, in a nutshell: a user comes to the website, selects whether they're at home or at work, clicks submit, and a random restaurant is selected for them. If they don't like it, they can click retry and a new restaurant shows up. The issue I am having is using the APIs to gather all the restaurants in the US. I know it can be done, because there are similar websites/apps that pull the restaurants closest to you, such as Ness and Alfred (I believe there are two more, but I can't remember the names). Anyone know how this can be accomplished?
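
    The selection logic, at least, is straightforward once the restaurant data is in hand; the genuinely hard part is that neither Yelp's nor Google's terms allow bulk-downloading "all the restaurants in the US", so real apps query per location at request time. A sketch of the home/work filter plus a rating-weighted random pick (all the data structures here are invented for illustration):

        import random
        from dataclasses import dataclass

        @dataclass
        class Restaurant:
            name: str
            distance_km: float   # from the user's current location
            rating: float        # blended 0-5 score from the review sites
            votes: int           # good minus bad recommendations from our users

        def pick(restaurants, at_work):
            # At work, lunch is short, so stay close; at home, range farther.
            max_km = 3.0 if at_work else 25.0
            nearby = [r for r in restaurants if r.distance_km <= max_km]
            if not nearby:
                return None
            # Randomize, but weight by review score plus our own feedback.
            weights = [max(r.rating + 0.5 * r.votes, 0.1) for r in nearby]
            return random.choices(nearby, weights=weights, k=1)[0]

        places = [Restaurant("Thai Spot", 1.2, 4.5, 3),
                  Restaurant("Steakhouse", 18.0, 4.8, 7)]
        print(pick(places, at_work=True).name)  # only "Thai Spot" is in range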

    Read the article

  • Magento - How to link the Configurable Product image to the Simple Product image?

    - by Niels
    This is the case: I have a configurable product with several simple products, and these simple products need to have the same product image as the configurable product. Right now I have to upload the same image to each simple product over and over again. Is there a way to link the product image of the configurable product to the simple products? Some of my products have 30 simple products in one configurable product, and it is kind of absurd to upload the same image 30 times. I hope someone can help me with this problem! Thanks in advance!

    Read the article

  • Google Maps API DirectionsRendererOptions not working?

    - by YWE
    I am trying to use DirectionsRenderer to display a DirectionsResult without the route list. According to the API version 3 documentation, there is a "hideRouteList" property of the DirectionsRendererOptions object that, when set to true, should hide the route list. I cannot get it to work. Is this a bug, or am I just not coding this correctly? Following is my code.

        <html>
        <head>
          <title>Driving Directions</title>
          <script type="text/javascript"
                  src="http://maps.google.com/maps/api/js?sensor=false"></script>
          <script type="text/javascript">
            function initialize() {
              var dirService = new google.maps.DirectionsService();
              var dirRequest = {
                origin: "350 5th Ave, New York, NY, 10118",
                destination: "1 Wall St, New York, NY",
                travelMode: google.maps.DirectionsTravelMode.DRIVING,
                unitSystem: google.maps.DirectionsUnitSystem.IMPERIAL,
                provideTripAlternatives: true
              };
              dirService.route(dirRequest, showDirections);
            }

            function showDirections(dirResult, dirStatus) {
              if (dirStatus != google.maps.DirectionsStatus.OK) {
                alert('Directions failed: ' + dirStatus);
                return;
              }
              var rendererOptions = { hideRouteList: true };
              var dirRenderer = new google.maps.DirectionsRenderer(rendererOptions);
              dirRenderer.setPanel(document.getElementById('dir-container'));
              dirRenderer.setDirections(dirResult);
            }
          </script>
        </head>
        <body onload="initialize();">
          <div id="dir-container"></div>
        </body>
        </html>

    Read the article

  • Strange stream of HTTP GET requests in Apache logs, from Amazon EC2 instances

    - by Alexandre Boeglin
    I just had a look at my Apache logs, and I see a lot of very similar requests:

        GET / HTTP/1.1
        User-Agent: curl/7.24.0 (i386-redhat-linux-gnu) libcurl/7.24.0 \
          NSS/3.13.5.0 zlib/1.2.5 libidn/1.18 libssh2/1.2.2
        Host: [my_domain].org
        Accept: */*

    There's a steady stream of those, about 2 or 3 per minute. They all request the same domain and resource (there are slight variations in user-agent version numbers), and they come from a lot of different IPv4 and IPv6 addresses, in blocks that belong to Amazon EC2 (in Singapore, Japan, Ireland and the USA). I tried to find an explanation online, or even just similar stories, but couldn't find any.

    Has anyone got a clue as to what this is? It doesn't look malicious per se, but it's annoying me, and I couldn't find any more information about it. I first suspected it could be a bot checking whether my server is still up, but I don't remember subscribing to such a service, I can't see why it would need to check my site twice a minute, and I can't see why it doesn't use a clearly identifying FQDN. Or should I send this question to Amazon, via their abuse contact? Thanks!

    Read the article

  • Attempting to update Amazon Route53 using a script, but domain is not being updated

    - by ks78
    I have several Amazon EC2 instances, running Ubuntu 10.04, with which I'd like to use Amazon's Route53. I set up a script as described in Shlomo Swidler's article, but I'm still missing something. When the script runs, it doesn't return any output, which I initially assumed meant it ran correctly. However, when I check the DNS records using MyR53DNS, there are no entries for my instances. Here's my script:

        #!/bin/tcsh -f
        set root=`dirname $0`
        setenv EC2_HOME /usr/lib/ec2-api-tools
        setenv EC2_CERT /etc/cron.route53/ec2_x509_cert.pem
        setenv EC2_PRIVATE_KEY /etc/cron.route53/ec2_x509_private.pem
        setenv AWS_ACCESS_KEY_ID myaccesskeyid
        setenv AWS_SECRET_ACCESS_KEY mysecretaccesskey

        /user/bin/ec2-describe-instances | \
          perl -ne '/^INSTANCE\s+(i-\S+).*?(\S+\.amazonaws\.com)/ \
            and do { $dns = $2; print "$1 $dns\n" }; /^TAG.+\sShortName\s+(\S+)/ \
            and print "$1 $dns\n"' | \
          perl -ane 'print "$F[0] CNAME $F[1] --replace\n"' | \
          xargs -n 4 $/etc/cron.route53/cli53/cli53.py \
            rrcreate -x 60 mydomain.com

    Does anyone see a problem with this script? If it's not the script, what else could be preventing my Route53 domain from being updated? I am using security groups to IP-restrict the instances. I've tried opening port 53, but that didn't seem to have an effect. Is there another port that Route53 uses? I'd appreciate any help or guidance the ServerFault community can offer. Let me know if you need any further info.

    Read the article

  • Difference in performance: local machine vs Amazon medium instance

    - by user644745
    I see a drastic difference in the performance metrics when I run apache benchmark (ab) against my local machine vs production hosted on an Amazon medium instance. The same concurrency (5) and the same total number of requests (111) were run against both. Amazon has more memory than my local machine, but there are 2 CPUs in my local machine vs 1 CPU in m1.medium. My internet speed is very low at the moment; I am getting a transfer rate of 25.29 KB/s. How can I improve the performance? I also don't know how to interpret Connect, Processing, Waiting and Total in the ab output. Here is localhost:

        Server Hostname:        localhost
        Server Port:            9999
        Document Path:          /
        Document Length:        7631 bytes
        Concurrency Level:      5
        Time taken for tests:   1.424 seconds
        Complete requests:      111
        Failed requests:        102
           (Connect: 0, Receive: 0, Length: 102, Exceptions: 0)
        Write errors:           0
        Total transferred:      860808 bytes
        HTML transferred:       847155 bytes
        Requests per second:    77.95 [#/sec] (mean)
        Time per request:       64.148 [ms] (mean)
        Time per request:       12.830 [ms] (mean, across all concurrent requests)
        Transfer rate:          590.30 [Kbytes/sec] received

        Connection Times (ms)
                      min  mean[+/-sd] median   max
        Connect:        0    0   0.5      0       1
        Processing:    14   63  99.9     43     562
        Waiting:       14   60  96.7     39     560
        Total:         14   63  99.9     43     563

    And this is production:

        Document Path:          /
        Document Length:        7783 bytes
        Concurrency Level:      5
        Time taken for tests:   33.883 seconds
        Complete requests:      111
        Failed requests:        0
        Write errors:           0
        Total transferred:      877566 bytes
        HTML transferred:       863913 bytes
        Requests per second:    3.28 [#/sec] (mean)
        Time per request:       1526.258 [ms] (mean)
        Time per request:       305.252 [ms] (mean, across all concurrent requests)
        Transfer rate:          25.29 [Kbytes/sec] received

        Connection Times (ms)
                      min  mean[+/-sd] median   max
        Connect:      290  297  14.0    293     413
        Processing:   897 1178  63.4   1176    1391
        Waiting:      296  606 135.6    588    1171
        Total:       1191 1475  66.0   1471    1684

    Read the article

  • Shared files folder in Amazon Elastic Beanstalk environment

    - by por
    I'm working on a Drupal application which is planned to be hosted in an Amazon Elastic Beanstalk environment. Basically, Elastic Beanstalk lets the application scale automatically by starting additional web server instances based on predefined rules. The shared database runs on an Amazon RDS instance, which all instances can access properly. The problem is the shared files folder (sites/default/files).

    We're using git as SCM, and with it we're able to deploy new versions by executing $ git aws.push. In the background, Elastic Beanstalk automatically deletes ($ rm -rf) the current codebase from all servers running in the environment and deploys the new version. The plan was to use S3 (s3fs) for shared files in the staging environment and NFS in the production environment, and we've managed to set up the environment to the point where the shared files folder is properly mounted after a reboot. But... the problem is that, in this setup, deploying new versions to running instances fails because $ rm -rf can't remove the mounted directory; as a result, the entire environment goes down and we need to restart it, which isn't exactly an elegant solution.

    Question #1: what would be the proper way to manage shared files in this kind of deployment? Are you running such an environment, and how did you solve the problem?

    Looking at the Elastic Beanstalk Hostmanager code (Ruby), there seems to be a way to hook our functionality (unmount if mounted in pre-deploy, mount in post-deploy) into the Hostmanager (/opt/hostmanager/srv/lib/elasticbeanstalk/hostmanager/applications/phpapplication.rb), but the scripts defined in that file (i.e. /tmp/php_post_deploy_app.sh) don't seem to be working. That might be because our Ruby skills are non-existent.

    Question #2: did you manage to hook your functionality into Hostmanager in a portable way (i.e. without changing the core Hostmanager files)?

    Read the article

  • Creating an EC2 image on Amazon fails at mkfs.ext3

    - by Dave Orr
    I'm trying to create an image of my EC2 instance in Amazon's cloud. It's been a bit of an adventure so far. I did manage to install Amazon's ec2-api-tools, which was harder than it seemed like it should have been. Then I ran:

        ec2-bundle-vol -d /mnt -k pk-{key}.pem -c cert-{cert}.pem -u {uid} -s 1536

    Which returned:

        Copying / into the image file /mnt/image...
        Excluding:
             /sys/kernel/debug
             /sys/kernel/security
             /sys
             /proc
             /dev/pts
             /dev
             /dev
             /media
             /mnt
             /proc
             /sys
             /etc/udev/rules.d/70-persistent-net.rules
             /etc/udev/rules.d/z25_persistent-net.rules
             /mnt/image
             /mnt/img-mnt
        1+0 records in
        1+0 records out
        1048576 bytes (1.0 MB) copied, 0.00677357 s, 155 MB/s
        mkfs.ext3: option requires an argument -- 'L'
        Usage: mkfs.ext3 [-c|-l filename] [-b block-size] [-f fragment-size]
            [-i bytes-per-inode] [-I inode-size] [-J journal-options]
            [-G meta group size] [-N number-of-inodes] [-m reserved-blocks-percentage]
            [-o creator-os] [-g blocks-per-group] [-L volume-label]
            [-M last-mounted-directory] [-O feature[,...]] [-r fs-revision]
            [-E extended-option[,...]] [-T fs-type] [-U UUID] [-jnqvFKSV]
            device [blocks-count]
        ERROR: execution failed: "mkfs.ext3 -F /mnt/image -U 1c001580-9118-4a50-9a25-dcf02be6d25f -L "

    So mkfs.ext3 wants an argument for -L, which is the volume label. But ec2-bundle-vol doesn't seem to take a volume label as an argument, and the docs (http://docs.amazonwebservices.com/AmazonEC2/gsg/2006-06-26/creating-an-image.html) don't seem to think one should be needed. Certainly their sample command:

        # ec2-bundle-vol -d /mnt -k ~root/pk-HKZYKTAIG2ECMXYIBH3HXV4ZBZQ55CLO.pem -u 495219933132 -s 1536

    doesn't specify anything. So... any help? What am I missing?

    Read the article

  • Amazon EC2 - Free memory

    - by Damo
    We have an Amazon EC2 small instance running, and over the past few days we've noticed that its free memory keeps going down. On the small instance we run Apache and Tomcat 6. Tomcat is started with the following JVM parameters:

        -Xms32m -Xmx128m -XX:PermSize=128m -XX:MaxPermSize=256m

    We use Nagios to monitor things like pending updates, free disk space and memory. Everything else is behaving as expected, but our memory is going down all the time. Our app receives approximately half a million hits a day. When I shut down Apache and Tomcat and ran free -m, we had only 594 MB of memory free out of the 1.7 GB of memory. Not much else is running on the small instance, and when running the top command I cannot see where the memory is going.

    The app we run on Tomcat is a Grails webapp. Could there be a memory leak within our application? I read online and folks say that a small Amazon instance is perfect for running Apache and Tomcat. I found a few posts online that showed how to set up Apache and Tomcat to limit the memory usage, and I have already performed those steps. The memory is not being used up as quickly, but it is still decreasing over time. We have other Amazon EC2 small instances running Grails apps and the memory is fairly steady on those nodes, but they would not be receiving as much traffic.

    Just to add: when I run the top command on the problem server, I cannot see where all the memory is being used. Any help with this is greatly appreciated. The output of free -m when run on my server is as follows:

                     total       used       free     shared    buffers     cached
        Mem:          1657       1380        277          0        158        773
        -/+ buffers/cache:        447       1209
        Swap:          895          0        895

    In your opinion, does this look OK? At what stage would the OS give back memory? Would it wait until free memory reaches 0%, or is this OS-dependent?

    Read the article

  • Checking whether product key will work with SBS 2003

    - by Rob Nicholson
    We've recently absorbed a small company that had a Dell PowerEdge server running SBS 2003. For some reason the hard disks have been wiped. We have the product key, from the sticker on the side of the case, but not the installation media:

        Win SBS Std 2003 1-2 CPU 5-CAL OEM software

    We do have a Dell-labelled set of four CDs marked SBS 2003 in our store, and I've built a VM from this media, but it doesn't prompt for the product key during install. Is there any way to ascertain whether this media will work with this product key without going through activation? I know one can activate several times, but I would prefer to check we've got the right media before doing so. Thanks, Rob.

    Read the article

  • Reinstalling Vista product key

    - by Arabella
    I recently formatted my laptop, which came with Vista preinstalled, and installed Windows 7 on the primary partition. I've now installed Vista on a different partition, but it won't activate with my valid product key. I've looked around on here and have seen similar issues raised, but I don't have the telephone activation option (the only option I have is online activation). I am located in South Africa. When I enter the product key from my sticker, it says it is not valid, so I must either try to activate online again or buy a different product key. I have reinstalled Vista on the primary partition several times and activated the key without a problem; this is the first time I am installing it on a different partition.

    Read the article

  • Facebook API call with email to return UID

    - by Jackson
    I'm trying to do a simple API call to Facebook with a user-supplied email address, to return their UID. Do I really need to auth the user before this call is made? Thanks! :) I'm not doing anything else with the UID besides displaying it to the user, which is why I don't really think it's worth authenticating them.

    Read the article

  • Pure front-end JavaScript with Web API versus MVC views with AJAX

    - by eyeballpaul
    This is more of a discussion about where people stand these days on how to split a web application.

    I am used to creating an MVC application with all its views and controllers. I would normally create a full view and pass it back to the browser on a full page request, unless there were specific areas I did not want to populate straight away; for those I would use DOM page-load events to call the server and load other areas via AJAX. Also, when it came to partial page refreshes, I would call an MVC action method that returns an HTML fragment which I could then use to populate parts of the page. This would be for areas I did not want to slow down the initial page load, or areas that fitted better with AJAX calls. One example would be table paging: if you want to move to the next page, I would prefer an AJAX call to fetch that info rather than a full page refresh, but the AJAX call would still return an HTML fragment.

    My question is: are my thoughts on this archaic because I come from a .NET background rather than a pure front-end background? An intelligent front-end developer I work with prefers to do more or less nothing in the MVC views and would rather do everything on the front end, right down to Web API calls populating the page. So rather than calling an MVC action method which returns HTML, he would prefer to return a plain object and use JavaScript to create all the elements of the page.

    The front-end-only way means that any benefits I normally get from MVC model validation, including client-side validation, would be gone, as would the benefits of creating the views with strongly typed HTML templates. I believe it would mean writing the same validation twice, once for the front end and once for the back end. The JavaScript would also need lots of methods for creating all the different parts of the DOM. For example, when adding a new row to a table, I would normally use an MVC partial view for creating the row and return it as part of the AJAX call, which then gets injected into the table. In the pure front-end way, the JavaScript would take in an object (for, say, a product) from the API call and then create a row from that object, creating each individual part of the table row.

    The website in question will have lots of different areas, from administration to forms to product searching. It is not a website that I think needs to be architected as a single-page application. What are everyone's thoughts on this? I am interested to hear from both front-end devs and back-end devs.
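
    To make the contrast concrete, here is a minimal sketch of the two endpoint styles side by side: one returning a server-rendered HTML fragment for the client to inject as-is, one returning plain JSON for client-side JavaScript to turn into DOM elements. Flask is used here only for brevity, and the routes and product data are invented:

        from flask import Flask, jsonify, render_template_string

        app = Flask(__name__)
        ROW = "<tr><td>{{ p['name'] }}</td><td>{{ p['price'] }}</td></tr>"

        def load_product(pid):
            return {"name": "Widget", "price": 9.99}  # stand-in for a DB lookup

        # MVC style: the server renders the table row; the AJAX caller injects it.
        @app.route("/products/<int:pid>/row")
        def product_row(pid):
            return render_template_string(ROW, p=load_product(pid))

        # Web API style: the server returns data only; front-end JS builds the <tr>.
        @app.route("/api/products/<int:pid>")
        def product_json(pid):
            return jsonify(load_product(pid))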

    Read the article

  • How to create RoR-style RESTful routing in ASP.NET MVC Web API

    - by Jas
    How do I configure routing in ASP.NET Web API so that I can code for the following actions in my ApiController-derived class?

        |===========================================================================================|
        | Http Verb | Path             | Action  | Used for                                         |
        |===========================================================================================|
        | GET       | /photos          | index   | display a list of all photos                     |
        | GET       | /photos/new      | new     | return an HTML form for creating a new photo     |
        | POST      | /photos/         | create  | create a new photo                               |
        | GET       | /photos/:id      | show    | display a specific photo                         |
        | GET       | /photos/:id/edit | edit    | return an HTML form for editing a photo          |
        | PUT       | /photos/:id      | update  | update a specific photo                          |
        | DELETE    | /photos/:id      | destroy | delete a specific photo                          |
        |===========================================================================================|

    Read the article

  • Facebook Graph API

    - by LingAi
    Hi all, when I use the Facebook API to retrieve post information, I find that the returned results change all the time. For example, when I retrieved results twice, one minute apart, a record that appeared the first time had disappeared by the second:

        https://graph.facebook.com/search?q=baby&type=post&limit=100&since=2010-05-19&until=2010-05-21

    Does anyone know what is happening? Cheers, LingChen

    Read the article

  • APIs

    - by raghu.yadav
    Let's dump APIs here ....

        // if you want to put/get something in/from the pageFlowScope, use this
        Map pfsMap = AdfFacesContext.getCurrentInstance().getPageFlowScope();
        pfsMap.put(key, value);  // pfsMap.put("#{pageFlowScope.param}", "sample");
        pfsMap.get(key);         // pfsMap.get("#{pageFlowScope.param}")

        // if you want to set a bean's property value, use this
        // (the bean is fetched under the name it is registered with in the task flow)
        MyBackingBean bean = (MyBackingBean) pfsMap.get("my_backing_bean_name");
        bean.setMyParam(newValue);

    Read the article
