Search Results

Search found 31026 results on 1242 pages for 'google cloud platform'.


  • ASP.NET MVC routing issue with Google Chrome client

    - by synergetic
    My Silverlight 4 app is hosted in an ASP.NET MVC 2 web application. It works fine when I browse with Internet Explorer 8. However, Google Chrome (version 5) cannot find the ASP.NET controllers. Specifically, the following ASP.NET controller works both with Chrome and IE. //[OutputCache(NoStore = true, Duration = 0, VaryByParam = "None")] public ContentResult TestMe() { ContentResult result = new ContentResult(); XElement response = new XElement("SvrResponse", new XElement("Data", "my data")); result.Content = response.ToString(); return result; } If I uncomment the [OutputCache] attribute then it works with IE but not with Chrome. Also, I use custom model binding with controllers, so if I write the following: public ContentResult TestMe(UserContext userContext) { ... } it also works with IE, but again not with Chrome, which gives me an error message saying that the resource was not found. Of course, I configured IIS 6 to handle all requests via aspnet_isapi.dll and I have registered the custom model binder in my web app's Global.asax inside the Application_Start() method. Can someone explain to me what might be the cause? Thank you.

    Read the article

  • Google Translation API

    - by Nimesh
    I have text that I would like to translate into Russian. The text has custom tags and multiple <BR> tags. The API behaves oddly with <BR> tags. Are there known issues with <BR> tags? Is there a way around it, or what is the best way to use the Google jQuery translation API to translate the text? The text is <INPUTANSWER PARTID='1'> <SPAN STYLE="FONT: 7pt 'Times New Roman'">&nbsp;&nbsp;&nbsp;&nbsp;&nbsp; </SPAN> Place a <STRONG>90 degree</STRONG> explicit angle constraint to the inside faces of <STRONG>DP-1007:1 </STRONG>and<STRONG>DP-1006:1</STRONG> as shown.</P> <P STYLE="MARGIN-LEFT: 0.5in; TEXT-INDENT: -0.25in"> 2. <SPAN STYLE="FONT: 7pt 'Times New Roman'"> </SPAN> Drive this angle constraint between <STRONG>90 and 100 degrees</STRONG> with an <STRONG>increment</STRONG> <STRONG>of 0.125 degrees.</STRONG> </INPUTANSWER>
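
    A common workaround, independent of which translation endpoint you call, is to shield markup that the translator tends to mangle by swapping it for opaque placeholders before the call and restoring it afterwards. Below is a minimal Python sketch of that idea; translate_text() is a hypothetical stand-in for whatever translation call you actually use, not part of any Google library.

    ```python
    import re

    def protect_tags(text, tag_pattern=r'(?i)<br\s*/?>'):
        """Swap markup the translator mangles (here <BR> tags) for numbered placeholders."""
        tags = []
        def _swap(match):
            tags.append(match.group(0))
            return "{{%d}}" % (len(tags) - 1)   # token the engine is unlikely to translate
        return re.sub(tag_pattern, _swap, text), tags

    def restore_tags(text, tags):
        """Put the original markup back after translation."""
        for i, tag in enumerate(tags):
            text = text.replace("{{%d}}" % i, tag)
        return text

    def translate_html(text, translate_text):
        """translate_text is any callable(str) -> str wrapping the real API request."""
        protected, tags = protect_tags(text)
        return restore_tags(translate_text(protected), tags)
    ```

    Whether a given placeholder survives translation untouched varies by engine and language pair, so it is worth testing a few token formats against your actual content before relying on this.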

    Read the article

  • Struts2 Tiles in Google app engine

    - by user365941
    I am trying to build a Java web application using Struts2 and Tiles on Google App Engine. Below is my tiles.xml file: <!DOCTYPE tiles-definitions PUBLIC "-//Apache Software Foundation//DTD Tiles Configuration 2.0//EN" "http://tiles.apache.org/dtds/tiles-config_2_0.dtd"> <tiles-definitions> <definition name="baseLayout" template="BaseLayout.jsp"> <put-attribute name="title" value="" /> <put-attribute name="header" value="Header.jsp" /> <put-attribute name="body" value="" /> <put-attribute name="footer" value="Footer.jsp" /> </definition> <definition name="/welcome.tiles" extends="baseLayout"> <put-attribute name="title" value="Welcome" /> <put-attribute name="body" value="Welcome.jsp" /> </definition> </tiles-definitions> But when I run the app, I am not getting any error; it just prints "Header.jsp Welcome.jsp Footer.jsp". It does not show the actual JSP pages. Please advise on what needs to be done. Thanks in advance. Regards

    Read the article

  • Refreshing WEB-INF/lib in Google App Engine (with Eclipse)

    - by Adrian Petrescu
    Hi, I've created a new Google App Engine project within Eclipse. I copied several JARs that I need for my application into the WEB-INF/lib directory, and add them to the build path. I make some random calls to these JARs from within the handler, deploy, and everything works fine. However, if I then change one of the JARs outside the project, and copy the new version to WEB-INF/lib (with the same name) and re-deploy, it doesn't seem to be sending the new JAR; everything is still linking to the old one even though it's not even in my WEB-INF/lib anymore. I'm guessing it's being cached by the server or Eclipse is not even realizing something has changed in order to upload the new version. If I just create a new project with the new JAR, everything is fine again (until I have to make another change...) but of course I don't want to have to create a new project for every change to a dependency I make. My question is, how can I make GAE re-upload all the JARs I have from within Eclipse? Thanks in advance, guys :) -Adrian

    Read the article

  • Android application listed as compatible with Sony Xperia S but still filtered from google play

    - by mlidal
    I have published an Android application and some users are complaining that it is listed as not compatible with the Sony Xperia S. According to the developer console, the Xperia S (LT26i) is listed as compatible. Does anyone know of any reason why the app is still filtered from Google Play? I have seen people reporting problems with big apk files. This app is about 20Mb in size, with the largest file being 14Mb. Quite a bit, but not enough to cause problems I think... Here is the output from aapt dump badging:
    package: name='no.bouvet.nrkut' versionCode='4' versionName='1.0'
    sdkVersion:'4'
    targetSdkVersion:'13'
    uses-permission:'android.permission.ACCESS_FINE_LOCATION'
    uses-permission:'android.permission.ACCESS_COARSE_LOCATION'
    uses-permission:'android.permission.ACCESS_WIFI_STATE'
    uses-permission:'android.permission.ACCESS_NETWORK_STATE'
    uses-permission:'android.permission.INTERNET'
    uses-permission:'android.permission.WRITE_EXTERNAL_STORAGE'
    application-label:'UT.no'
    application-icon-120:'res/drawable-ldpi/utno_launcher.png'
    application-icon-160:'res/drawable-mdpi/utno_launcher.png'
    application-icon-240:'res/drawable-hdpi/utno_launcher.png'
    application-icon-320:'res/drawable-xhdpi/utno_launcher.png'
    application: label='UT.no' icon='res/drawable-mdpi/utno_launcher.png'
    launchable-activity: name='no.bouvet.nrkut.MainActivity' label='UT.no' icon=''
    uses-feature:'android.hardware.location'
    uses-feature:'android.hardware.location.gps'
    uses-feature:'android.hardware.location.network'
    uses-feature:'android.hardware.wifi'
    uses-feature:'android.hardware.touchscreen'
    uses-feature:'android.hardware.screen.portrait'
    main
    other-activities
    search
    supports-screens: 'small' 'normal' 'large' 'xlarge'
    supports-any-density: 'true'
    locales: '--_--'
    densities: '120' '160' '240' '320'

    Read the article

  • Google Maps in Drupal: reference to gmap object from JavaScript

    - by user280817
    Is there a way to obtain JavaScript references to the Google maps that are embedded into Drupal pages by the GMap module? I want to be able to manipulate the maps in these pages. I want to pan and zoom them. But I cannot find a reference to an embedded map object. I've dissected the relevant JavaScript objects Drupal.gmap and Drupal.settings.gmap with no success--unless I've overlooked something. The Drupal GMap module doesn't seem to explicitly provide references (within its API) to the GMap objects that it embeds into pages. It just generates themed text which is interpolated into the page. The technique of passing the HTML ID of the map container to either the GMap2 object constructor or the similar Drupal.gmap.getMap() function in order to obtain a map reference doesn't appear to work: Both simply return an instance to a new map, one having the same dimensions and basic characteristics of the original map, but apparently sans all of its overlays (which could contain markers). And I have to call setCenter() on it before I can use it, which initializes the structure, so I know it has no overlays.

    Read the article

  • google chrome extension update text after response callback

    - by Jerome
    I am writing a Google Chrome extension. I have reached the stage where I can pass messages back and forth readily, but I am running into trouble with using the response callback. My background page opens a message page and then the message page requests more information from background. When the message page receives the response I want to replace some of the standard text on the message page with custom text based on the response. Here is the code: chrome.extension.sendRequest({cmd: "sendKeyWords"}, function(response) { keyWordList=response.keyWordsFound; var keyWords=""; for (var i = 0; i FIRST QUESTION: This all seems to work fine but the text on the page doesn't change. This is almost certainly because the callback completes after the page has finished loading, and the rest of the code finishes before the callback completes, too. How do I update the page with the new text? Can I listen for the callback to complete or something like that? SECOND QUESTION: The procedure I am pursuing first opens the message page and then the message page requests the keyword list from background. Since I always want the keyword list, it makes more sense to just send it when I create the tab. Can I do that? Here is the code from background that opens the message page: //when request from detail page to open message page chrome.extension.onRequest.addListener(function(request, sender, sendResponse) { if(request.cmd == "openMessage") { console.log("Received Request to Open Message, Profile Score: "+request.keyWordsFound.length); keyWordList=request.keyWordsFound; chrome.tabs.create({url: request.url}, function(tab){ msgTabId=tab.id; //needed to determine if message tab has later been closed chrome.tabs.executeScript(tab.id, {file: "message.js"}); }); console.log("Opening Message"); } });

    Read the article

  • Efficiently Serving Dynamic Content in Google App Engine

    - by awegawef
    My app on Google App Engine returns content items (just text) and comments on them. It works like this (pseudo-ish code):
        query: get keys of latest content    # query to datastore
        for each item in content:
            if item_dict in memcache:
                use item_dict
            else:
                build_item_dict(item)        # by fetching from datastore
                store item_dict in memcache
        send all item_dicts to template
    Sorry if the code isn't understandable. I get all of the content dictionaries and send them to the template, which uses them to create the webpage. My problem is that if the memcache has expired, then for each item I want to display I have to (1) look up the item in memcache, (2) since no cached entry exists, fetch the item from the datastore, and (3) store the item in memcache. These calls build up quickly. I don't set an expire time for the memcache entries, so this really only happens once in the morning, but the webpage takes long enough to load (~1 sec) that the browser reports it as not existing. Normally, my webpages take about 50ms to load. This approach works decently for frequent visits, but it has its flaws as shown above. How can I remedy this? The entries are dynamic enough that I don't think it would be in my best interest to cache my initial request. Thanks in advance
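
    One way to cut down the per-item round trips is to batch the cache and datastore calls instead of looping over them one at a time. The sketch below is written against the old Python runtime's APIs and assumes content_keys is the list of datastore keys from the first query and build_item_dict() is the existing helper from the pseudocode (shown here taking the fetched entity); it is an illustration, not a drop-in fix.

    ```python
    from google.appengine.api import memcache
    from google.appengine.ext import db

    def get_item_dicts(content_keys):
        cache_keys = [str(k) for k in content_keys]
        cached = memcache.get_multi(cache_keys)            # one round trip for every item

        missing = [k for k in content_keys if str(k) not in cached]
        if missing:
            to_cache = {}
            for entity in db.get(missing):                 # one batch datastore fetch
                if entity is None:
                    continue
                item_dict = build_item_dict(entity)        # your existing helper
                cached[str(entity.key())] = item_dict
                to_cache[str(entity.key())] = item_dict
            memcache.set_multi(to_cache)                   # one round trip to refill the cache

        return [cached[str(k)] for k in content_keys if str(k) in cached]

    # items = get_item_dicts(content_keys)   # then hand items to the template as before
    ```

    Batching this way turns N cache misses into one get_multi, one db.get and one set_multi, which usually keeps the cold-cache page load much closer to the warm-cache time.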

    Read the article

  • translate a PHP $string using google translator API

    - by Toni Michel Caubet
    Hey there! I've been googling for a while for the best way to translate with Google Translate in PHP; I found very different ways, converting URLs or using JS, but I want to do it only with PHP (or with a very simple JS/jQuery solution). Example: //hopefully with $from_lan and $to_lan being like 'en','de', .. or similar function translate($from_lan, $to_lan, $text){ // do return $translated_text; } Can you give me a clue? Or maybe you already have this function. My intention is to use it only for the languages I have not already defined (or keys I haven't defined); that's why I want it so simple, it will only be temporary. EDIT: thanks for your replies, we are now trying these solutions: function auto_translate($from_lan, $to_lan, $text){ // do $json = json_decode(file_get_contents('https://ajax.googleapis.com/ajax/services/language/translate?v=1.0&q=' . urlencode($text) . '&langpair=' . $from_lan . '|' . $to_lan)); $translated_text = $json->responseData->translatedText; return $translated_text; } (there was an extra 'g' on the variables for lang... anyway) It returns: works now :) I don't really understand the function much, so any idea why it is not accepting the object? (now I do) OR: function auto_translate($from_lan, $to_lan, $text){ // do // $json = json_decode(file_get_contents('https://ajax.googleapis.com/ajax/services/language/translate?v=1.0&q=' . urlencode($text) . '&langpair=' . $from_lan . '|' . $to_lan)); // $translated_text = $json['responseData']['translatedText']; error_reporting(1); require_once('GTranslate.php'); try{ $gt = new Gtranslate(); $translated_text = $gt->english_to_german($text); } catch (GTranslateException $ge) { $translated_text = $ge->getMessage(); } return $translated_text; } And this one looks great but it doesn't even give me an error; the page won't load (error_reporting(1) :S). Thanks in advance!

    Read the article

  • Google Maps - custom icons with infoWindows

    - by hfidgen
    Hiya, As far as I can tell, this code is fine, and should display some custom icons with popup HTML windows. But the popups aren't working! Can anyone point out what I'm doing wrong? I can't seem to debug it myself. Thanks! function initialize() { if (GBrowserIsCompatible()) { var map = new GMap2(document.getElementById("map")); map.setCenter(new GLatLng(51.410416, -0.293884), 15); map.addControl(new GSmallMapControl()); map.addControl(new GMapTypeControl()); var i_parking = new GIcon(); i_parking.image = "http://google-maps-icons.googlecode.com/files/parking.png"; i_parking.iconSize = new GSize(32, 37); i_parking.iconAnchor = new GPoint(16, 37); icon_parking = { icon:i_parking }; var marker_office = new GMarker(new GLatLng(51.410416,-0.293884)); var marker_parking1 = new GMarker((new GLatLng(51.410178,-0.292000)),icon_parking); var marker_parking2 = new GMarker((new GLatLng(51.410152,-0.298948)),icon_parking); marker_parking1.openInfoWindowHtml('<strong>On Street Parking</strong><br>Church Road - 40p per hour'); marker_parking2.openInfoWindowHtml('<strong>Multi Storey - Fairfield</strong><br>Upper Car Park - 90p per half hour<br>Lower Car Park - £1.20 per hour'); map.addOverlay(marker_office); map.addOverlay(marker_parking1); map.addOverlay(marker_parking2); } }

    Read the article

  • Google Adwords API response parse

    - by Yun Ling
    I am trying to figure out how to parse the AdWords API query response without exceptions, and one issue that I came across is that sometimes the data itself contains commas besides the commas between columns. Say I do a query on AdGroup, Campaign and Impressions by using <reportDefinition xmlns="https://adwords.google.com/api/adwords/cm/v201209"> <selector> <fields>CampaignName</fields> <fields>AdgroupName</fields> <fields>Impressions</fields> <predicates> <field>Status</field> <operator>IN</operator> <values>ENABLED</values> <values>PAUSED</values> </predicates> </selector> <reportName>Custom Adgroup Performance Report</reportName> <reportType>ADGROUP_PERFORMANCE_REPORT</reportType> <dateRangeType>LAST_7_DAYS</dateRangeType> <downloadFormat>CSV</downloadFormat> </reportDefinition> Since my campaign name has a comma within the string, the report looks like this: "Adroup,Campaign,Impressions, Premiun Beer, Beer, Chicago, 1000" where the adgroup is "premium beer" and the campaign is "Beer,Chicago". That will cause an issue if we parse this information by splitting on commas. Does anyone know how to solve this problem?
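
    If the report download quotes fields that contain commas (as standard CSV quoting does), the robust fix is to read it with a CSV parser that understands quoting rather than splitting each line on commas by hand. A small Python sketch, with a made-up file name for wherever the report was saved:

    ```python
    import csv

    # hypothetical path to the downloaded ADGROUP_PERFORMANCE_REPORT
    with open("adgroup_performance_report.csv", "rb") as report:
        reader = csv.reader(report)             # handles quoted fields like "Beer, Chicago"
        header = next(reader)                   # e.g. AdGroup, Campaign, Impressions
        for row in reader:
            if len(row) != 3:                   # skip summary or blank rows in the report
                continue
            adgroup, campaign, impressions = row
            print adgroup, campaign, impressions
    ```

    If the downloaded report genuinely does not quote such fields, there is no reliable way to re-split the columns afterwards; in that case the usual answer is to request a format that preserves structure (for example XML or TSV) instead of unquoted CSV.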

    Read the article

  • Google App Engine error Object Manager has been closed

    - by newbie
    I got the following error from Google App Engine when I was trying to iterate over a list in a JSP page with EL: Object Manager has been closed I solved the problem with the following code, but I don't think it is a very good solution to this problem: public List<Item> getItems() { PersistenceManager pm = getPersistenceManager(); Query query = pm.newQuery("select from " + Item.class.getName()); List<Item> items = (List<Item>) query.execute(); List<Item> items2 = new ArrayList<Item>(); // This line solved my problem Collections.copy(items, items2); // and this also pm.close(); return (List<Item>) items; } When I tried to use pm.detachCopyAll(items) it gave the same error. I understood that the detachCopyAll() method should do the same thing I did, but that method is part of DataNucleus, so it should be used instead of my own methods. So why doesn't detachCopyAll() work at all?

    Read the article

  • Problems with Getting Remote Contents using Google App Engine

    - by dade
    Here is the client side code. It is running inside a Google Gadget: var params = {}; params[gadgets.io.RequestParameters.CONTENT_TYPE] = gadgets.io.ContentType.JSON; var url = "http://invplatformtest.appspot.com/getrecent/"; gadgets.io.makeRequest(url, response, params); The response function is: function response(obj) { var r = obj.data; alert(r['name']); } On the server end, the Python code sending the JSON is: class GetRecent(webapp.RequestHandler): def get(self): self.response.out.write({'name':'geocities'}) #i know this is where the problem is so how do i encode json in GAE? This is just supposed to send back a JSON-encoded string, but when I run it, the JavaScript throws the following error: r is null alert(r['name']); If I receive just TEXT content and my server sends TEXT, everything works fine. I only get this problem when I am trying to send JSON. Where exactly is the problem? Am I encoding the JSON the wrong way on App Engine? I tried using the JSON library but it looks as if this is not supported. Where is the problem exactly? :(
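
    The handler above writes the Python repr() of a dict (single quotes, no content type), which is not valid JSON, so the gadget's JSON parser gives up and obj.data ends up null. A sketch of the handler serialising properly, assuming the early Python runtime where simplejson ships with the bundled Django (on later runtimes the standard json module is used the same way):

    ```python
    from django.utils import simplejson   # bundled with the early GAE Python SDK
    from google.appengine.ext import webapp

    class GetRecent(webapp.RequestHandler):
        def get(self):
            payload = {'name': 'geocities'}
            self.response.headers['Content-Type'] = 'application/json'
            # produces '{"name": "geocities"}', which gadgets.io can parse as JSON
            self.response.out.write(simplejson.dumps(payload))
    ```

    With double-quoted keys and an application/json content type, gadgets.io.ContentType.JSON should hand the callback a real object instead of null.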

    Read the article

  • Twitter API similar to Google Alert

    - by Felix Perdana
    I am trying to create a web application which has functionality similar to Google Alerts (by similar I mean the user can provide their email address for the alert to be sent to, daily or hourly). The only limitation is that it only gives alerts to users based on a certain keyword or hashtag. I think that I have found the fundamental API needed for this web application. https://dev.twitter.com/docs/api/1/get/search The problem is I still don't know all the web technologies needed for this application to work properly. For example, do I have to store all of the searched keywords in a database? Do I have to keep polling with AJAX requests all the time in order to keep my database updated? What if the keyword the user provided is so popular right now that it might have thousands of tweets in just an hour (not to mention, there might be several emails that request several trending topics)? By the way, I am trying to build this application using PHP. So please let me know what kind of techniques I need to learn for such a web app (and some references maybe)? Any kind of help will be appreciated. Thanks in advance :) Regards, Felix Perdana
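
    The usual shape for this kind of alert service is: store each alert (keyword, email address, frequency) in a database, poll the search API from a server-side cron job rather than from browser AJAX, remember the highest tweet id already seen (since_id) so only new results are fetched, and send a digest email when there is anything new. The question asks about PHP, but the flow is language-agnostic; here is a rough Python sketch against the v1 search endpoint from the linked docs, with hypothetical field names for the stored alerts.

    ```python
    import json
    import smtplib
    import urllib
    import urllib2
    from email.mime.text import MIMEText

    SEARCH_URL = "http://search.twitter.com/search.json"   # v1 search endpoint from the linked docs

    def fetch_new_tweets(keyword, since_id):
        """Return (tweets, newest_id) for a keyword, fetching only tweets newer than since_id."""
        params = {"q": keyword, "rpp": 100}
        if since_id:
            params["since_id"] = since_id
        data = json.load(urllib2.urlopen(SEARCH_URL + "?" + urllib.urlencode(params)))
        results = data.get("results", [])
        newest = max([t["id"] for t in results] + [since_id or 0])
        return results, newest

    def send_digest(address, keyword, tweets):
        body = "\n".join("@%s: %s" % (t["from_user"], t["text"]) for t in tweets)
        msg = MIMEText(body)
        msg["Subject"] = "New tweets for '%s'" % keyword
        msg["To"] = address
        smtplib.SMTP("localhost").sendmail("alerts@example.com", [address], msg.as_string())

    def run_alerts(alerts):
        """alerts: rows like {"email": ..., "keyword": ..., "since_id": ...} loaded from the database."""
        for alert in alerts:
            tweets, newest = fetch_new_tweets(alert["keyword"], alert["since_id"])
            if tweets:
                send_digest(alert["email"], alert["keyword"], tweets)
            alert["since_id"] = newest    # persist this back so the next run only sees new tweets
    ```

    Because the polling runs server-side on a schedule, nothing has to keep AJAX requests open in a browser, and since_id keeps a very popular keyword from re-fetching thousands of tweets you have already mailed out.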

    Read the article

  • jQuery && Google Chrome

    - by Happy
    This script works perfectly in all the browsers, except Google Chrome. $(document).ready(function(){ $(".banners-anim img").each(function(){ var hover_width = $(this).width(); var hover_height = $(this).height(); var unhover_width = (hover_width - 30); $(this).width(unhover_width); var unhover_height = $(this).height(); $(this).closest("li").height(unhover_height); var offset = "-" + ((hover_height - unhover_height)/2) + "px"; $(this).closest("span").css({'position':'absolute', 'left':'0', 'top':'25px', 'width':'100%'}); $(this).hover(function(){ $(this).animate({width: hover_width, marginTop: offset}, "fast") },function(){ $(this).animate({width: unhover_width, marginTop: 0}, "fast") }); }); }); Chrome doesn't recognize changed image attributes. When the width of the img changes, the height also changes, even in browsers other than Chrome: $(this).width(unhover_width); var unhover_height = $(this).height(); But in Chrome, unhover_height gives 0. Full code of this script (html included) - unhover_height Please help to fix this. Thanks.

    Read the article

  • ASP.NET - Google Chrome caching DropDownList selections

    - by Fake
    I'm experiencing what seems to be a caching issue with Google Chrome and Safari on my cart page. In the cart there are 2 dropdown lists. When you hit the checkout button after changing the values in the dropdown lists, it commits what's selected in the lists to the database. It's a little bit hard to explain the unexpected behavior so I will try to write it out step by step with an illustration of my problem. Lets say the first dropdown list has the values of: VALUE1 VALUE2 VALUE3 And the second dropdown list has the values of: DUMBO1 DUMBO2 DUMBO3 I add an item to my cart. Screen Says: VALUE1, DUMBO1 Database Says: VALUE1, DUMBO1 I hit Checkout. Database says: VALUE1, DUMBO1 (I can't see the dropdown lists after I hit checkout because i'm not at the cart page) I hit the back button. Screen Says: VALUE1, DUMBO1 Database Says: VALUE1, DUMBO1 I drop down the VALUE1 combo and select VALUE2, VALUE2 is selected momentarily and then the site posts back and VALUE1 is re-selected in the drop down list (from being reloaded from the DB) MOMENTARILY Screen Says: VALUE2, DUMBO1 Database Says: VALUE1, DUMBO1 THEN AFTER POSTBACK FROM DROPDOWNLIST_SELECTIONCHANGED EVENT Screen Says: VALUE1, DUMBO1 Database Says: VALUE1, DUMBO1 Hit Checkout. Database Says VALUE1 ,DUMBO1 (I can't see the dropdown lists after I hit checkout because i'm not at the cart page) Go back. Screen Says: VALUE2, DUMBO1 Database Says: VALUE1, DUMBO1 So it appears that it's remembering my selection of VALUE2 even though it jumped back to VALUE1 before I checked out. It seems to be a caching problem, however I've got some no-cache code to prevent caching of that page that works great in firefox and internet explorer but seems to be failing in Chrome and Safari. I'm basically returning in the headers for the cart page: no-cache, no-store, and must-revalidate to attempt to prevent caching, but based on this scenario it seems to be caching the page anyway and not reloading it when I hit the back button. I am open to any solutions or suggestions at this point. Thanks!

    Read the article

  • Web scrapping from a Google Chrome extension

    - by limoragni
    I've started to develop a Chrome extension to navigate and perform actions on a website. So far the extension is able to receive a couple of parameters, check a set of radio buttons, fill in a few inputs of a form and then submit it. What I want to do now is to repeat the process, but I'm stuck when the page is reloaded, and I don't know how to make the script react to the completion of the request. The workflow I want to achieve is the following (it is for automatically copying a certain object):
    Popup side:
      1. Enter the number of the Master object to copy
      2. Enter the base name of the copies (for example Mod, so that I can iterate and add mod1, mod2, ... modn)
      3. Enter the number of copies
    Background side:
      1. Select master
      2. Select standard options
      3. Fill in inputs
      4. Submit form
      5. Wait for the page to complete the request and continue to the next copy. (here I need help)
    The problem is the repetition; the rest is taken care of. I assume there must be a way of dealing with requests. Any ideas? By the way, I'm doing it all with the extension and tabs methods of Google Chrome, plus JavaScript and jQuery.

    Read the article

  • SQLAuthority News – Storing Data and Files in Cloud – Dropbox – Personal Technology Tip

    - by pinaldave
    I thought long and hard about doing a Personal Technology Tips series for this blog.  I have so many tips I’d like to share.  I am on my computer almost all day, every day, so I have a treasure trove of interesting tidbits I like to share if given the chance.  The only thing holding me back – which tip to share first?  The first tip obviously has the weight of seeming like the most important.  But this would mean choosing amongst my favorite tricks and shortcuts.  This is a hard task. Source: Dropbox.com My Dropbox I have finally decided, though, and have determined that the first Personal Technology Tip may not be the most secret or even trickier to master – in fact, it is probably the easiest.  My today’s Personal Technology Tip is Dropbox. I hope that all of you are nodding along in recognition right now.  If you do not use Dropbox, or have not even heard of it before, get on the internet and find their site.  You won’t be disappointed.  A quick recap for those in the dark: Dropbox is an online storage site with a lot of additional syncing and cloud-computing capabilities.  Now that we’ve covered the basics, let’s explore some of my favorite options in Dropbox. Collaborate with All The first thing I love about Dropbox is the ability it gives you to collaborate with others.  You can share files easily with other Dropbox users, and they can alter them, share them with you, all while keeping track of different versions in on easy place.  I’d like to see anyone try to accomplish that key idea – “easily” – using e-mail versions and multiple computers.  It’s even difficult to accomplish using a shared network. Afraid that this kind of ease looks too good to be true?  Afraid that maybe there isn’t enough storage space, or the user interface is confusing?  Think again.  There is plenty of space – you can get 2 GB with just a free account, and upgrades are inexpensive and go up to 100 GB of storage.  And the user interface is so easy that anyone can learn to use it. What I use Dropbox for I love Dropbox because I give a lot of presentations and often they are far from home.  I can keep my presentations on Dropbox and have easy access to them anywhere, without needing to have my whole computer with me.  This is just one small way that you can use Dropbox. You can sync your entire hard drive, or hard drives if you have multiple computers (home, work, office, shared), and you can set Dropbox to automatically sync files on a certain timeline, or whenever Dropbox notices that they’ve been changed. Why I love Dropbox Dropbox has plenty of storage, but 2 GB still has a hard time competing with the average desktop’s storage space.  So what if you want to sync most of your files, but only the ones you use the most and share between work and home, and not all your files (especially large files like pictures and videos)?  You can use selective sync to choose which files to sync. Above all, my favorite feature is LanSync.  Dropbox will search your Local Area Network (LAN) for new files and sync them to Dropbox, as well as downloading the new version to all the shared files across the network.  That means that if move around on different computers at work or at home, you will have the same version of the file every time.  Or, other users on the LAN will have access to the new version, which makes collaboration extremely easy. Ref: rzfeeser.com Dropbox has so many other features that I feel like I could create a Personal Technology Tips series devoted entirely to Dropbox.  
    I’m going to create a bullet list here to make things shorter, but I strongly encourage you to look further into these options if it sounds like something you would use.
      • Theft Recovery
      • Home Security
      • File Hosting and Sharing
      • Portable Dropbox
      • Sync your iCal calendar
      • Password Storage
    What is your favorite tool and why? I could go on and on, but I will end here.  In summary – I strongly encourage everyone to investigate Dropbox to see if it’s something they would find useful.  If you use Dropbox and know of a great feature I failed to mention, please share it with me, I’d love to hear how everyone uses this program. Reference: Pinal Dave (http://blog.SQLAuthority.com) Filed under: PostADay, SQL, SQL Authority, SQL Query, SQL Server, SQL Tips and Tricks, SQLAuthority News, T SQL, Technology Tagged: Personal Technology

    Read the article

  • ORACLE RIGHTNOW DYNAMIC AGENT DESKTOP CLOUD SERVICE - Putting the Dynamite into Dynamic Agent Desktop

    - by Andreea Vaduva
    Untitled Document There’s a mountain of evidence to prove that a great contact centre experience results in happy, profitable and loyal customers. The very best Contact Centres are those with high first contact resolution, customer satisfaction and agent productivity. But how many companies really believe they are the best? And how many believe that they can be? We know that with the right tools, companies can aspire to greatness – and achieve it. Core to this is ensuring their agents have the best tools that give them the right information at the right time, so they can focus on the customer and provide a personalised, professional and efficient service. Today there are multiple channels through which customers can communicate with you; phone, web, chat, social to name a few but regardless of how they communicate, customers expect a seamless, quality experience. Most contact centre agents need to switch between lots of different systems to locate the right information. This hampers their productivity, frustrates both the agent and the customer and increases call handling times. With this in mind, Oracle RightNow has designed and refined a suite of add-ins to optimize the Agent Desktop. Each is designed to simplify and adapt the agent experience for any given situation and unify the customer experience across your media channels. Let’s take a brief look at some of the most useful tools available and see how they make a difference. Contextual Workspaces: The screen where agents do their job. Agents don’t want to be slowed down by busy screens, scrolling through endless tabs or links to find what they’re looking for. They want quick, accurate and easy. Contextual Workspaces are fully configurable and through workspace rules apply if, then, else logic to display only the information the agent needs for the issue at hand . Assigned at the Profile level, different levels of agent, from a novice to the most experienced, get a screen that is relevant to their role and responsibilities and ensures their job is done quickly and efficiently the first time round. Agent Scripting: Sometimes, agents need to deliver difficult or sensitive messages while maximising the opportunity to cross-sell and up-sell. After all, contact centres are now increasingly viewed as revenue generators. Containing sophisticated branching logic, scripting helps agents to capture the right level of information and guides the agent step by step, ensuring no mistakes, inconsistencies or missed opportunities. Guided Assistance: This is typically used to solve common troubleshooting issues, displaying a series of question and answer sets in a decision-tree structure. This means agents avoid having to bookmark favourites or rely on written notes. Agents find particular value in these guides - to quickly craft chat and email responses. What’s more, by publishing guides in answers on support pages customers, can resolve issues themselves, without needing to contact your agents. And b ecause it can also accelerate agent ramp-up time, it ensures that even novice agents can solve customer problems like an expert. Desktop Workflow: Take a step back and look at the full customer interaction of your agents. It probably spans multiple systems and multiple tasks. With Desktop Workflows you control the design workflows that span the full customer interaction from start to finish. As sequences of decisions and actions, workflows are unique in that they can create or modify different records and provide automation behind the scenes. 
This means your agents can save time and provide better quality of service by having the tools they need and the relevant information as required. And doing this boosts satisfaction among your customers, your agents and you – so win, win, win! I have highlighted above some of the tools which can be used to optimise the desktop; however, this is by no means an exhaustive list. In approaching your design, it’s important to understand why and how your customers contact you in the first place. Once you have this list of “whys” and “hows”, you can design effective policies and procedures to handle each category of problem, and then implement the right agent desktop user interface to support them. This will avoid duplication and wasted effort. Five Top Tips to take away: Start by working out “why” and “how” customers are contacting you. Implement a clean and relevant agent desktop to support your agents. If your workspaces are getting complicated consider using Desktop Workflow to streamline the interaction. Enhance your Knowledgebase with Guides. Agents can access them proactively and can be published on your web pages for customers to help themselves. Script any complex, critical or sensitive interactions to ensure consistency and accuracy. Desktop optimization is an ongoing process so continue to monitor and incorporate feedback from your agents and your customers to keep your Contact Centre successful.   Want to learn more? Having attending the 3-day Oracle RightNow Customer Service Administration class your next step is to attend the Oracle RightNow Customer Portal Design and 2-day Dynamic Agent Desktop Administration class. Here you’ll learn not only how to leverage the Agent Desktop tools but also how to optimise your self-service pages to enhance your customers’ web experience.   Useful resources: Review the Best Practice Guide Review the tune-up guide   About the Author: Angela Chandler joined Oracle University as a Senior Instructor through the RightNow Customer Experience Acquisition. Her other areas of expertise include Business Intelligence and Knowledge Management.  She currently delivers the following Oracle RightNow courses in the classroom and as a Live Virtual Class: RightNow Customer Service Administration (3 days) RightNow Customer Portal Design and Dynamic Agent Desktop Administration (2 days) RightNow Analytics (2 days) Rightnow Chat Cloud Service Administration (2 days)

    Read the article

  • Best Practices Generating WebService Proxies for Oracle Sales Cloud (Fusion CRM)

    - by asantaga
    I've recently been building a REST Service wrapper for Oracle Sales Cloud and initially all was going well; however, as soon as I added all of my Web Service proxies I started to get weird errors. My project structure looks like this. What I found out was that if I only had the InteractionsService & OpportunityService WebService proxies then all worked OK, but as soon as I added the LocationsService proxy, I would start to see strange JAXB errors. Example of the error message:
    Exception in thread "main" javax.xml.ws.WebServiceException: Unable to create JAXBContext
        at com.sun.xml.ws.model.AbstractSEIModelImpl.createJAXBContext(AbstractSEIModelImpl.java:164)
        at com.sun.xml.ws.model.AbstractSEIModelImpl.postProcess(AbstractSEIModelImpl.java:94)
        at com.sun.xml.ws.model.RuntimeModeler.buildRuntimeModel(RuntimeModeler.java:281)
        at com.sun.xml.ws.client.WSServiceDelegate.buildRuntimeModel(WSServiceDelegate.java:762)
        at weblogic.wsee.jaxws.spi.WLSProvider$ServiceDelegate.buildRuntimeModel(WLSProvider.java:982)
        at com.sun.xml.ws.client.WSServiceDelegate.createSEIPortInfo(WSServiceDelegate.java:746)
        at com.sun.xml.ws.client.WSServiceDelegate.addSEI(WSServiceDelegate.java:737)
        at com.sun.xml.ws.client.WSServiceDelegate.getPort(WSServiceDelegate.java:361)
        at weblogic.wsee.jaxws.spi.WLSProvider$ServiceDelegate.internalGetPort(WLSProvider.java:934)
        at weblogic.wsee.jaxws.spi.WLSProvider$ServiceDelegate$PortClientInstanceFactory.createClientInstance(WLSProvider.java:1039)
        ......
    Looking further down, I see the error message is related to JAXB not being able to find an ObjectFactory for one of its types:
    Caused by: java.security.PrivilegedActionException: com.sun.xml.bind.v2.runtime.IllegalAnnotationsException: 6 counts of IllegalAnnotationExceptions
    There's no ObjectFactory with an @XmlElementDecl for the element {http://xmlns.oracle.com/apps/crmCommon/activities/activitiesService/}AssigneeRsrcOrgId
    this problem is related to the following location:
        at protected javax.xml.bind.JAXBElement com.oracle.xmlns.apps.crmcommon.activities.activitiesservice.ActivityAssignee.assigneeRsrcOrgId
        at com.oracle.xmlns.apps.crmcommon.activities.activitiesservice.ActivityAssignee
    This is very strange... My first thought was that when I generated the WebService proxy I entered the package name as "oracle.demo.pts.fusionproxy.servicename" and left the generated types package blank. This way all the generated types get put into the same package hierarchy and when deployed they get merged... Sounds reasonable, and it appears to work, but not in this case. To resolve this I regenerated the proxy, but this time setting:
        Package name: the name of my package, e.g. oracle.demo.pts.fusionproxy.interactions
        Root Package for Generated Types: the package where the types will be generated to, e.g. oracle.demo.pts.fusionproxy.SalesParty.types
    When I ran the application now, it all worked. Awesome, eh? Alas no, there is a serious side effect. The problem now is that to help coding I've created a collection of helper classes; these helper classes take parameters which use some of the "generic" datatypes, like FindCriteria. For example, this won't work any more: public static FindCriteria createCustomFindCriteria(FindCriteria pFc, String pAttributes) Here lies a gremlin of a problem. I can't use this method anymore, because the FindCriteria datatype is now being defined two, or more, times in the generated code for my project.
    If you leave the Root Package for Generated Types blank it will get generated to com.oracle.xmlns, and if you populate it then it gets generated to your custom package. The two datatypes look the same, sound the same (and if this were a duck it would sound the same), but THEY ARE NOT THE SAME... Speaking to development, they recommend you should not enter anything in the Root Package section, so the mystery thickens as to why it works. Well, after spending some time with some colleagues of mine in development, we've identified the issue. Alas, different parts of Oracle Fusion Development have multiple schemas with the same namespace; when the WebService generator generates its classes it's not seeing the other schemas properly and not generating the ObjectFactories correctly... Thankfully I've found a workaround.
    Solution Overview
      1. When generating the proxies, leave the Root Package for Generated Types BLANK.
      2. When you have finished generating your proxies, use the JAXB tool XJC to generate Java classes for all datatypes.
      3. Create a project within your JDeveloper11g workspace and import the Java classes into this project.
      4. Final bit: within the project dependencies, ensure that the JAXB/XJC generated classes are "FIRST" in the classpath.
    Solution Details
    Generate the WebService SOAP proxies. When generating the proxies your generation dialog should look like this. Ensure the "unwrap parameters" option is selected; if it isn't, that's OK, it simply means that when issuing a "get" you need to extract out the Element.
    Generate the JAXB classes using XJC. XJC provides a command line switch called -wsdl; this (although experimental/beta) accepts an HTTP WSDL and will generate the relevant classes. You can put these into a single batch/shell script:
        xjc -wsdl https://fusionservername:443/appCmmnCompInteractions/InteractionService?wsdl
        xjc -wsdl https://fusionservername:443/opptyMgmtOpportunities/OpportunityService?wsdl
    Create a project in JDeveloper to store the XJC-generated JAXB classes. Within the project folder create a filesystem folder called "src" and copy the generated files into this folder. JDeveloper11g should then see the classes and display them; if it doesn't, try clicking the "refresh" button. In your main project, ensure that the JDeveloper XJC project is selected as a dependency and, IMPORTANT, make sure it is at the top of the list. This ensures that the classes are at the front of the classpath.
    And voilà... Hopefully you won't see any JAXB generation errors and you can use common datatypes interchangeably in your project (e.g. FindCriteria etc.).
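
    If you regenerate these type classes often, the two xjc calls above can be folded into one small script so they are always produced the same way; here is a sketch in Python (the host name is a placeholder, as in the post, and -d simply tells xjc where to write the generated sources):

    ```python
    import subprocess

    # placeholder host, as in the post; add one line per service you proxy
    WSDLS = [
        "https://fusionservername:443/appCmmnCompInteractions/InteractionService?wsdl",
        "https://fusionservername:443/opptyMgmtOpportunities/OpportunityService?wsdl",
    ]

    def generate_jaxb_classes(output_dir="src"):
        """Run JAXB's xjc against each WSDL, writing the generated types into output_dir."""
        for wsdl in WSDLS:
            subprocess.check_call(["xjc", "-wsdl", "-d", output_dir, wsdl])

    if __name__ == "__main__":
        generate_jaxb_classes()
    ```

    Pointing output_dir at the "src" folder of the holding project described above keeps the regenerated classes exactly where JDeveloper11g expects to find them.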

    Read the article

  • High Availability for IaaS, PaaS and SaaS in the Cloud

    - by BuckWoody
    Outages, natural disasters and unforeseen events have proved that even in a distributed architecture, you need to plan for High Availability (HA). In this entry I'll explain a few considerations for HA within Infrastructure-as-a-Service (IaaS), Platform-as-a-Service (PaaS) and Software-as-a-Service (SaaS). In a separate post I'll talk more about Disaster Recovery (DR), since each paradigm has a different way to handle that. Planning for HA in IaaS IaaS involves Virtual Machines - so in effect, an HA strategy here takes on many of the same characteristics as it would on-premises. The primary difference is that the vendor controls the hardware, so you need to verify what they do for things like local redundancy and so on from the hardware perspective. As far as what you can control and plan for, the primary factors fall into three areas: multiple instances, geographical dispersion and task-switching. In almost every cloud vendor I've studied, to ensure your application will be protected by any level of HA, you need to have at least two of the Instances (VM's) running. This makes sense, but you might assume that the vendor just takes care of that for you - they don't. If a single VM goes down (for whatever reason) then the access to it is lost. Depending on multiple factors, you might be able to recover the data, but you should assume that you can't. You should keep a sync to another location (perhaps the vendor's storage system in another geographic datacenter or to a local location) to ensure you can continue to serve your clients. You'll also need to host the same VM's in another geographical location. Everything from a vendor outage to a network path problem could prevent your users from reaching the system, so you need to have multiple locations to handle this. This means that you'll have to figure out how to manage state between the geo's. If the system goes down in the middle of a transaction, you need to figure out what part of the process the system was in, and then re-create or transfer that state to the second set of systems. If you didn't write the software yourself, this is non-trivial. You'll also need a manual or automatic process to detect the failure and re-route the traffic to your secondary location. You could flip a DNS entry (if your application can tolerate that) or invoke another process to alias the first system to the second, such as load-balancing and so on. There are many options, but all of them involve coding the state into the application layer. If you've simply moved a state-ful application to VM's, you may not be able to easily implement an HA solution. Planning for HA in PaaS Implementing HA in PaaS is a bit simpler, since it's built on the concept of stateless applications deployment. Once again, you need at least two copies of each element in the solution (web roles, worker roles, etc.) to remain available in a single datacenter. Also, you need to deploy the application again in a separate geo, but the advantage here is that you could work out a "shared storage" model such that state is auto-balanced across the world. In fact, you don't have to maintain a "DR" site, the alternate location can be live and serving clients, and only take on extra load if the other site is not available. In Windows Azure, you can use the Traffic Manager service top route the requests as a type of auto balancer. Even with these benefits, I recommend a second backup of storage in another geographic location. 
Storage is inexpensive; and that second copy can be used for not only HA but DR. Planning for HA in SaaS In Software-as-a-Service (such as Office 365, or Hadoop in Windows Azure) You have far less control over the HA solution, although you still maintain the responsibility to ensure you have it. Since each SaaS is different, check with the vendor on the solution for HA - and make sure you understand what they do and what you are responsible for. They may have no HA for that solution, or pin it to a particular geo, or perhaps they have a massive HA built in with automatic load balancing (which is often the case).   All of these options (with the exception of SaaS) involve higher costs for the design. Do not sacrifice reliability for cost - that will always cost you more in the end. Build in the redundancy and HA at the very outset of the project - if you try to tack it on later in the process the business will push back and potentially not implement HA. References: http://www.bing.com/search?q=windows+azure+High+Availability  (each type of implementation is different, so I'm routing you to a search on the topic - look for the "Patterns and Practices" results for the area in Azure you're interested in)
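
    As a concrete illustration of the "detect the failure and re-route" step described above for IaaS, the sketch below polls a health URL on the primary deployment and, after a few consecutive failures, calls a switch_traffic() hook; both the URL and the hook are hypothetical stand-ins for whatever your environment uses (a DNS flip, a load-balancer change, a Traffic Manager update).

    ```python
    import time
    import urllib2

    PRIMARY_HEALTH_URL = "https://primary.example.com/health"   # hypothetical endpoint
    FAILURES_BEFORE_FAILOVER = 3

    def primary_is_healthy(timeout=5):
        try:
            return urllib2.urlopen(PRIMARY_HEALTH_URL, timeout=timeout).getcode() == 200
        except Exception:
            return False

    def monitor(switch_traffic, interval=30):
        """Poll the primary; after several consecutive failures, hand traffic to the secondary."""
        failures = 0
        while True:
            failures = 0 if primary_is_healthy() else failures + 1
            if failures >= FAILURES_BEFORE_FAILOVER:
                switch_traffic()      # DNS flip / load balancer / Traffic Manager change
                return
            time.sleep(interval)
    ```

    Whatever mechanism sits behind switch_traffic(), the state-handling caveat above still applies: the secondary can only take over cleanly if the application was built to pick up or recreate in-flight state.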

    Read the article

  • Google I/O 2010 - Keynote Day 2 Android Demo, pt. 4

    Google I/O 2010 - Keynote Day 2 Android Demo, pt. 4 Google I/O 2010 - Keynote Day 2 Android Demo, part 4 Video footage from Day 2 keynote at Google I/O 2010 For Google I/O session videos, presentations, developer interviews and more, go to: code.google.com/io From: GoogleDevelopers Views: 1 0 ratings Time: 10:00 More in Science & Technology

    Read the article

  • Google I/O 2010 - Keynote Day 2 Android Demo, pt. 3

    Google I/O 2010 - Keynote Day 2 Android Demo, pt. 3 Google I/O 2010 - Keynote Day 2 Android Demo, part 3 Video footage from Day 2 keynote at Google I/O 2010 For Google I/O session videos, presentations, developer interviews and more, go to: code.google.com/io From: GoogleDevelopers Views: 2 0 ratings Time: 09:44 More in Science & Technology

    Read the article

  • Google I/O 2010 - Keynote Day 2 Android Demo, pt. 1

    Google I/O 2010 - Keynote Day 2 Android Demo, pt. 1 Google I/O 2010 - Keynote Day 2 Android Demo, part 1 Video footage from Day 2 keynote at Google I/O 2010 For Google I/O session videos, presentations, developer interviews and more, go to: code.google.com/io From: GoogleDevelopers Views: 20 0 ratings Time: 10:23 More in Science & Technology

    Read the article
