Search Results

Search found 34826 results on 1394 pages for 'valid html'.


  • What's wrong with my HtmlHelper?

    - by Dejan.S
    I wrote an HtmlHelper but I cannot get it to work, even though I followed several tutorials. My static MenuItemHelper class:

        public static string MenuItem(this HtmlHelper helper, string linkText, string actionName, string controllerName)
        {
            var currentControllerName = (string)helper.ViewContext.RouteData.Values["controller"];
            var currentActionName = (string)helper.ViewContext.RouteData.Values["action"];
            var sb = new StringBuilder();
            if (currentControllerName.Equals(controllerName, StringComparison.CurrentCultureIgnoreCase)
                && currentActionName.Equals(actionName, StringComparison.CurrentCultureIgnoreCase))
                sb.Append("<li class=\"selected\">");
            else
                sb.Append("<li>");
            sb.Append(helper.ActionLink(linkText, actionName, controllerName));
            sb.Append("</li>");
            return sb.ToString();
        }

    Namespace import:

        <%@ Import Namespace="MYAPP.Web.App.Helpers" %>

    Usage on my master page:

        <%= Html.MenuItem("TEST LINK", "About", "Site") %>

    The error message I get:

        Method not found: 'System.String System.Web.Mvc.Html.LinkExtensions.ActionLink(System.Web.Mvc.HtmlHelper, System.String, System.String, System.String)
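    A note on the likely cause, offered as a guess: a "Method not found" for ActionLink at runtime usually means the helper was compiled against one version of System.Web.Mvc while another is loaded at runtime (in ASP.NET MVC 2, ActionLink returns MvcHtmlString rather than string). A minimal sketch of the helper written against the MVC 2 signature, assuming MVC 2 is the version actually running:

        using System;
        using System.Text;
        using System.Web.Mvc;
        using System.Web.Mvc.Html;

        public static class MenuItemHelper
        {
            // Returns MvcHtmlString so <%= %> and <%: %> both render it correctly in MVC 2.
            public static MvcHtmlString MenuItem(this HtmlHelper helper,
                string linkText, string actionName, string controllerName)
            {
                var currentController = (string)helper.ViewContext.RouteData.Values["controller"];
                var currentAction = (string)helper.ViewContext.RouteData.Values["action"];

                bool isCurrent =
                    string.Equals(currentController, controllerName, StringComparison.OrdinalIgnoreCase) &&
                    string.Equals(currentAction, actionName, StringComparison.OrdinalIgnoreCase);

                var sb = new StringBuilder();
                sb.Append(isCurrent ? "<li class=\"selected\">" : "<li>");
                // ActionLink returns MvcHtmlString here; Append() calls ToString() on it.
                sb.Append(helper.ActionLink(linkText, actionName, controllerName));
                sb.Append("</li>");
                return MvcHtmlString.Create(sb.ToString());
            }
        }

    If the projects in the solution reference mixed MVC assembly versions, aligning them (and the web.config binding redirects) is usually the real fix.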

    Read the article

  • ec2 ami device mapping

    - by hortitude
    I have a large EC2 Ubuntu instance and I'm just looking through the devices. I noticed from the metadata that:

        % curl http://169.254.169.254/latest/meta-data/block-device-mapping/ami
        sda1
        % curl http://169.254.169.254/latest/meta-data/block-device-mapping/ephemeral0
        sdb

    However, when I look at what is actually mounted, there is /dev/xvda1 and /dev/xvdb (and there is no /dev/sd*). I know both naming schemes look valid per the AWS documentation, but it looks like there is a mismatch between the instance metadata and what is actually on the machine. Why don't they match?
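    For what it's worth, the usual explanation is that the metadata reports the names under which the devices were attached (sdX), while recent Ubuntu kernels expose Xen block devices as xvdX; the kernel effectively renames sdX to xvdX. A small sketch, assuming that simple renaming holds, to translate the metadata names into the names the kernel uses:

        # Query the metadata and map sdX -> xvdX (assumes the plain renaming scheme).
        for key in ami ephemeral0; do
          dev=$(curl -s "http://169.254.169.254/latest/meta-data/block-device-mapping/$key")
          echo "$key: metadata=$dev kernel=/dev/${dev/sd/xvd}"
        done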

    Read the article

  • SCJP Book, IO section: Is this a typo or is there a reason it would look like this?

    - by iamchuckb
    My question is about line 19 (the fourth line of the snippet), where the new PrintWriter is created with the FileWriter fw as its constructor argument. I don't understand the point of chaining the BufferedWriter bw to the FileWriter if it isn't used later on in the actual writing. Can Java apply chaining in a way that bw still somehow affects the rest of the program?

        16. try {
        17.     FileWriter fw = new FileWriter(test);
        18.     BufferedWriter bw = new BufferedWriter(fw, 1024);
        19.     PrintWriter out = new PrintWriter(fw);
        20.     out.println("<html><body><h1>");
        21.     out.println(args[0]);
        22.     out.println("</h1></body></html>");
        23.     out.close();
        24.     bw.close();
        25.     fw.close();
        26. } catch(IOException e) {
        27.     e.printStackTrace();
        28. }

    I think it is probably a typo and they meant to use bw as the parameter for PrintWriter out, but like the title says, I'm new to this. Thanks to all in advance.
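    For comparison, here is a small standalone sketch of what the listing presumably intended, with the PrintWriter wrapping the BufferedWriter so the buffer actually sits in the chain (the file name and fallback argument are illustrative):

        import java.io.*;

        public class ChainDemo {
            public static void main(String[] args) throws IOException {
                File test = new File("test.html");
                FileWriter fw = new FileWriter(test);
                BufferedWriter bw = new BufferedWriter(fw, 1024);
                PrintWriter out = new PrintWriter(bw); // wrap bw, not fw
                out.println("<html><body><h1>");
                out.println(args.length > 0 ? args[0] : "hello");
                out.println("</h1></body></html>");
                // Closing the outermost writer flushes and closes the whole chain.
                out.close();
            }
        }

    As printed in the book, bw has no effect on the output at all: PrintWriter writes straight to fw, and the only consequence of creating bw is that closing it later is harmless. So yes, it reads like an erratum.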

    Read the article

  • How to prevent BeautifulSoup from stripping lines

    - by Oli
    I'm trying to convert an online HTML page into text. I have a problem with this structure:

        <div align="justify"><b>Available in <a href="http://www.example.com.be/book.php?number=1"> French</a> and <a href="http://www.example.com.be/book.php?number=5"> English</a>. </div>

    Here is its representation as a Python string:

        '<div align="justify"><b>Available in \r\n<a href="http://www.example.com.be/book.php?number=1">\r\nFrench</a>; \r\n<a href="http://www.example.com.be/book.php?number=5">\r\nEnglish</a>.\r\n</div>'

    When using:

        html_content = get_html_div_from_above()
        para = BeautifulSoup(html_content)
        txt = para.text

    BeautifulSoup translates it (in the txt variable) as:

        u'Available inFrenchandEnglish.'

    It apparently strips each line in the original HTML string. Do you have a clean solution to this problem? Thanks.
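    A minimal sketch of one workaround, assuming BeautifulSoup 4 is available: keep the whitespace that is already in the markup and normalise it yourself instead of relying on .text:

        import re
        from bs4 import BeautifulSoup

        html_content = ('<div align="justify"><b>Available in \r\n'
                        '<a href="http://www.example.com.be/book.php?number=1">\r\nFrench</a> and \r\n'
                        '<a href="http://www.example.com.be/book.php?number=5">\r\nEnglish</a>.\r\n</div>')

        soup = BeautifulSoup(html_content, 'html.parser')
        # get_text() keeps the \r\n runs; collapse every whitespace run to one space.
        text = re.sub(r'\s+', ' ', soup.get_text()).strip()
        print(text)  # roughly: Available in French and English.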

    Read the article

  • Apache + plesk vhost problem: .htaccess ignored!

    - by DaNieL
    Hi guys, I have a problem with a simple Apache configuration. When a user asks for https://mydomain.com I have to redirect to https://www.mydomain.com, because my HTTPS certificate is only valid for the domain with www. I created vhost.conf in my /var/www/vhosts/mydomain.com/conf/ directory, containing:

        <Directory /var/www/vhosts/mydomain.com/httpsdocs>
            AllowOverride All
        </Directory>

    And my .htaccess file in /var/www/vhosts/mydomain.com/httpsdocs/ is:

        RewriteEngine on
        RewriteCond %{HTTPS_HOST} ^mydomain\.com
        RewriteRule ^(.*)$ https://www.mydomain.com/$1 [R=301,L]

    But it seems like the .htaccess is completely ignored. Any idea?
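    Two things worth checking, offered tentatively: %{HTTPS_HOST} is not a standard Apache variable (the requested host lives in %{HTTP_HOST} regardless of the scheme), and for the httpsdocs docroot Plesk normally reads vhost_ssl.conf rather than vhost.conf, after which its web server configuration has to be regenerated. A sketch of the .htaccess with the corrected condition:

        RewriteEngine On
        # %{HTTP_HOST} carries the requested host for both HTTP and HTTPS requests.
        RewriteCond %{HTTP_HOST} ^mydomain\.com$ [NC]
        RewriteRule ^(.*)$ https://www.mydomain.com/$1 [R=301,L]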

    Read the article

  • How to set up a DNS name server to always resolve to a constant IP address for every request

    - by Andy Higgins
    I am looking for a simple DNS name server setup that always returns the same IP address no matter what the request is. The reason is that we are a domain registrar, and when a domain is first registered we need it to have valid name servers (and we don't want to have to create name server records before registering a domain). We will subsequently change the name server records after the domain has been registered. I assume this is possible with BIND, but I was wondering whether there might be a simpler solution using one of the lighter-weight name servers out there. Any suggestions on how to accomplish this in a simple manner will be appreciated.
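    One way to do this with BIND, sketched under the assumption that this server only ever has to answer these placeholder queries: declare a catch-all zone for the root and answer everything with a wildcard record (names and addresses below are placeholders). For something lighter, dnsmasq reportedly does the same with a single line, address=/#/192.0.2.10.

        // named.conf: act as master for a catch-all fake root zone
        zone "." {
            type master;
            file "/etc/bind/db.catchall";
        };

        ; /etc/bind/db.catchall: every queried name gets the same address
        $TTL 300
        @   IN SOA  ns1.registrar.example. hostmaster.registrar.example. (
                    2024010101 3600 900 604800 300 )
        @   IN NS   ns1.registrar.example.
        @   IN A    192.0.2.10
        *   IN A    192.0.2.10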

    Read the article

  • Telerik MVC Grid won't load data into details table (subtable)

    - by henriksen
    I have a list of Plants and associated Projects. I want to output this in a table with all the Plants and use Telerik.Grid so that expanding a Plant shows a Telerik.Grid with the associated Projects. I want the Projects to be loaded dynamically with Ajax. The code for the grid:

        @(Html.Telerik().Grid<PlantDto>()
            .Name("Plants")
            .Columns(columns =>
            {
                columns.Bound(plant => plant.Title);
            })
            .DetailView(details => details.ClientTemplate(
                Html.Telerik().Grid<ProjectDto>()
                    .Name("Plant_<#= Id #>")
                    .DataBinding(dataBinding => dataBinding.Ajax()
                        .Select("ProjectsForPlant", "User", new { plantId = "<#= Id #>" }))
                    .ToHtmlString()
            ))
            .DataBinding(dataBinding => dataBinding.Ajax().Select("PlantsForUser", "User"))
        )

    The initial data (the list of Plants) is loaded into the grid just fine, but when I expand a Plant I just get an empty sub-table. Looking in Firebug there are no calls to the server; the controller action that should serve the list of Projects is never called. Anyone have an idea what it could be?

    Read the article

  • Selenium RC: Selecting elements using the CSS :contains pseudo-class

    - by Andrew
    I would like to assert that a table row contains the data that I expect in two different tables. Using the following HTML as an example:

        <table>
          <tr><th>Table 1</th></tr>
          <tr><td>Row 1 Col 1</td><td>Row 1 Col 2</td></tr>
        </table>
        <table>
          <tr><th>Table 2</th></tr>
          <tr><td>Row 1 Col 1</td><td>different data</td></tr>
        </table>

    The following assertion passes:

        $this->assertElementPresent('css=table:contains(Table 1)');

    However, this one doesn't:

        $this->assertElementPresent('css=table:contains(Table 1) tr:contains(Row 1 Col 1)');

    And ultimately, I need to be able to test that both columns within the table row contain the data that I expect:

        $this->assertElementPresent('css=table:contains(Table 1) tr:contains(Row 1 Col 1):contains(Row 1 Col 2)');
        $this->assertElementPresent('css=table:contains(Table 2) tr:contains(Row 1 Col 1):contains(different data)');

    What am I doing wrong? How can I achieve this?

    Update: Sounds like the problem is a bug in Selenium when trying to select descendants. The only way I was able to get this to work was to add an extra identifier on the table so I could tell which one I was working with:

        /* HTML */
        <table id="table-1">

        /* PHP */
        $this->assertElementPresent("css=#table-1 tr:contains(Row 1 Col 1):contains(Row 1 Col 2)");

    Read the article

  • Anchor as a Submit button

    - by griegs
    I have an MVC 2 application with the following in it:

        <% using (Html.BeginForm("Results", "Quote", FormMethod.Post, new { name = "Results" })) { %>
            <% Html.RenderPartial("Needs", Model.needs); %>
            <div class="But green" style="">
                <a href="." onclick="javascript:document.Results.submit();">Go</a>
            </div>
            <input type="submit" />
        <% } %>

    Pressing the submit button or the anchor both post back to the right ActionResult. However, when I return View(stuff..) from the controller, only the submit button brings me back to the page. When the call finishes after pressing the anchor, I land on an error page telling me the resource cannot be found. I suspect it has something to do with href="." but am unsure what to set it to.
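    A guess at what is happening: the onclick submits the form, but because it never returns false the browser also follows href=".", which issues a plain GET that the POST-only action cannot serve. A minimal sketch of the anchor with the navigation suppressed:

        <a href="#" onclick="document.Results.submit(); return false;">Go</a>

    Keeping the visible <input type="submit" /> as well is a reasonable fallback for users without JavaScript.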

    Read the article

  • eclipse - starting with android sdk

    - by dontHaveName
    I want to start programming for Android. What I have:

    - Windows 7
    - Eclipse Classic 4.2
    - all the required downloads from http://developer.android.com/sdk/installing/adding-packages.html
    - the ADT Plugin

    I want to install the new ADT plugin. At first I tried to install it from http://dl-ssl.google.com/android/eclipse; I added the site, but when I select it there is only "pending.." and nothing loads (maybe an internet connection issue? I have selected Native connection in the preferences). After the pending state it reports:

        Unable to connect to repository http://dl-ssl.google.com/android/eclipse/content.xml
        org.eclipse.equinox.p2.core.ProvidesException

    That's why I downloaded the ADT plugin archive instead. When I select the downloaded ADT plugin, its content loads (developer tools and the NDK plugin), so I select everything and click Next. It loads and reports:

        Cannot complete the install because one or more required items could not be found.
        Software being installed: Android Development Tools 20.0.3.v201208082019-427395
        (com.android.ide.eclipse.adt.feature.group 20.0.3.v201208082019-427395)
        Missing requirement: Android Development Tools 20.0.3.v201208082019-427395
        (com.android.ide.eclipse.adt.feature.group 20.0.3.v201208082019-427395)
        requires 'org.eclipse.wst.sse.core 0.0.0' but it could not be found

    The 'org.eclipse.wst.sse.core 0.0.0' problem is covered here: http://developer.android.com/resources/faq/troubleshooting.html#installeclipsecomponents but there is a solution only for versions 3.3 and 3.4 (I have 4.2). I tried it anyway and looked for updates, but nothing was found. I really don't know where the problem could be. Thanks for any answer. (Sorry for my English.) I will send 1€ to somebody who can solve my problem ;) (I think all my problems are due to the internet connection, but I can't get it configured.)

    Read the article

  • Difference in clientX and clientY when going out of the browser on ie/ff

    - by Py
    I just ran into a little problem with clientX and clientY. I set up a little event handler to detect when the mouse leaves the window and to know where it exits. And there comes the trouble: it works fine in Firefox, but only reports -1 in IE. Does someone know a way to solve that problem easily, without using a framework? A little bit of code to reproduce it:

        <html>
        <head>
        <script type="text/javascript">
        document.onmouseout = function(e) {
            if (!e) var e = window.event;
            var relTarg = e.relatedTarget || e.toElement;
            if (!relTarg) {
                document.getElementById('result1').innerHTML =
                    "e.clientY:" + e.clientY + " e.clientX:" + e.clientX;
            }
        };
        </script>
        </head>
        <body>
        <div id="result1">Not Yet</div>
        </body>
        </html>

    The results if I exit through the left of the window are:

        e.clientY:302 e.clientX:-130   (Firefox)
        e.clientY:-1 e.clientX:-1      (IE)

    Thanks in advance.
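    One framework-free workaround, sketched on the assumption that old IE simply does not report useful coordinates on the exit event: remember the last mousemove position and report that instead when the pointer leaves the window:

        <script type="text/javascript">
        var lastX = 0, lastY = 0;
        document.onmousemove = function (e) {
            e = e || window.event;
            lastX = e.clientX;
            lastY = e.clientY;
        };
        document.onmouseout = function (e) {
            e = e || window.event;
            var relTarg = e.relatedTarget || e.toElement;
            if (!relTarg) {
                document.getElementById('result1').innerHTML =
                    "left near clientX:" + lastX + " clientY:" + lastY;
            }
        };
        </script>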

    Read the article

  • IE "Microsoft JScript runtime error: Object expected"

    - by Stephen Borg
    Hi there, I have a JavaScript problem that only occurs in IE. The error I am getting is "Microsoft JScript runtime error: Object expected" and I have no idea why. It then jumps into the jQuery 1.4.2 file without giving me a proper error message. All I am doing is reading the raw URL on page load, getting a query string parameter named search, and using it in an AJAX call to return products and put them into a DIV. No biggies, but somehow IE is managing to blow my page up :-( Any ideas? Code as follows:

        <script type="text/javascript">
            $(document).ready(function (e) {
                $('.boxLoader').show();

                function getParameterByName(name) {
                    name = name.replace(/[\[]/, "\\\[").replace(/[\]]/, "\\\]");
                    var regexS = "[\\?&]" + name + "=([^&#]*)";
                    var regex = new RegExp(regexS);
                    var results = regex.exec(window.location.href);
                    if (results == null)
                        return "";
                    else
                        return decodeURIComponent(results[1].replace(/\+/g, " "));
                }

                var Search;
                Search = getParameterByName("search");
                $('#searchCriteria').text(Search);

                $.get("/Handlers/processProducts.aspx", { SearchCriteria: Search }, function (data) {
                    $('#innercontent').html(data);
                    $('#innercontent').fadeIn(200);
                    $('.boxLoader').fadeOut(200);
                });

                $('#searchBox').live("click", function () {
                    $.get("/Handlers/processProducts.aspx", { SearchCriteria: $('#searchCriteria').val() }, function (data) {
                        $('#innercontent').html(data);
                        $('#innercontent').fadeIn(200);
                        $('.boxLoader').fadeOut(200);
                    });
                });
            });
        </script>

    Read the article

  • XMLHttpRequest returns null on Chrome

    - by BoltBait
    I have the following code that works fine in IE:

        <HTML>
        <BODY>
        <script language="JavaScript">
            text = "";
            req = new XMLHttpRequest();
            if (req) {
                req.onreadystatechange = processStateChange;
                req.open("GET", "http://www.boltbait.com", true);
                req.send();
            }

            function processStateChange() {
                // is the data ready for use?
                if (req.readyState == 4) {
                    // process my data
                    alert(req.status);
                    alert(req.responseText);
                }
            }
        </script>
        </BODY>
        </HTML>

    In IE, the first alert returns 200 and the second returns the web page. However, in Chrome the first alert returns 0 and the second returns the empty string. My intent is to grab a web page into a string for processing. If I'm not doing this right, how should I be doing this? Thanks.
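    A likely explanation, not a certainty: status 0 with an empty responseText in Chrome is the classic signature of a request blocked by the same-origin policy (IE of that era was more permissive, especially for pages opened from disk). A sketch that works when the target is on the same origin; for foreign pages a server-side proxy or a CORS-enabled endpoint is needed (the path below is hypothetical):

        <script type="text/javascript">
        var req = new XMLHttpRequest();
        req.onreadystatechange = function () {
            if (req.readyState === 4) {
                if (req.status === 200) {
                    alert(req.responseText.length + " bytes received");
                } else {
                    alert("blocked or failed, status=" + req.status);
                }
            }
        };
        // Same-origin path; cross-origin URLs are what come back with status 0 here.
        req.open("GET", "/some/page/on/this/site.html", true);
        req.send();
        </script>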

    Read the article

  • using jquery selector to change attribute in variable returned from ajax request

    - by Blake
    I'm trying to pull in a filename.txt (containing HTML) using Ajax and change the src paths in the data variable before I load it into the target div. If I load it into the div first, the browser requests the broken image, and I don't want that, so I would like to do my processing before I load anything onto the page. I can read the src values fine, but I can't change them: in this example the src values aren't changed. Is there a way to do this with selectors, or can they only modify DOM elements? Otherwise I may have to do a regex replace, but using a selector would be more convenient if possible.

        $.ajax({
            url: getDate + '/' + name + '.txt',
            success: function(data) {
                $('img', data).attr('src', 'new_test_src');
                $('#' + target).fadeOut('slow', function() {
                    $('#' + target).html(data);
                    $('#' + target).fadeIn('slow');
                });
            }
        });

    Some background: I'm building a fully standalone JavaScript template system for a newsletter. Since images and other assets are uploaded via a Drupal web file manager, I want the content creators to keep their paths very short and simple, and I then modify the paths before loading the content. The newsletter will also be distributed on a CD, so I need to change the paths for that case too so they still work.
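    A sketch of one way to do it with selectors: attr() changes DOM elements, never the original string, so build a detached fragment, modify that, and insert the fragment itself rather than data. (As an aside, some browsers may still fetch the original src the moment the markup is parsed into elements; a plain string replace on data before parsing avoids even that.)

        $.ajax({
            url: getDate + '/' + name + '.txt',
            success: function (data) {
                // Parse the response into a detached fragment and rewrite the paths there.
                var $fragment = $('<div/>').html(data);
                $fragment.find('img').attr('src', 'new_test_src');

                $('#' + target).fadeOut('slow', function () {
                    $('#' + target).empty().append($fragment.contents()).fadeIn('slow');
                });
            }
        });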

    Read the article

  • PHP header location redirect causing 500 Internal Server error

    - by Globalz
    Hi, I keep getting a 500 Internal Server Error when the script below reaches header('location:php_email_thankyou.php'). I'm not sure what is causing this, as I can place the header call before or after the if statements and it works fine. In Firebug I see a GET request for the php_email_thankyou.php page; not sure if that means anything.

        <?php
        ini_set('display_errors', 'On');
        error_reporting(E_ALL | E_STRICT);
        include('php/cl/cl_val.php');

        $val = new Validate;
        $print_errors = false;

        if (isset($_POST['email(email)'])) {
            if (isset($_SERVER['HTTP_X_REQUESTED_WITH'])) {
                $validation = $val->clean($_POST);
                if (isset($validation['send'])) {
                    header('location:php_email_thankyou.php');
                    exit();
                } else {
                    print json_encode($validation);
                    exit();
                }
            } else {
                $validation = $val->clean($_POST);
            }
        }
        ?>
        <!DOCTYPE html PUBLIC "-//W3C//DTD XHTML 1.0 Transitional//EN" "http://www.w3.org/TR/xhtml1/DTD/xhtml1-transitional.dtd">
        <html xmlns="http://www.w3.org/1999/xhtml">

    Thanks heaps!
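    It may be worth checking whether the 500 is actually produced by php_email_thankyou.php itself: after a redirect the browser shows the target page's response, which would also explain the GET seen in Firebug. Independently of that, a more defensive version of the redirect (absolute URL, capitalised header name, nothing echoed before it) looks roughly like this; the host name is a placeholder:

        <?php
        // No output (not even whitespace before <?php) may be sent before header().
        header('Location: http://www.example.com/php_email_thankyou.php');
        exit();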

    Read the article

  • JQuery drag, drop and save via cookie - how to?

    - by RussP
    Sorry to be back folks, but you guys and girls seem to know much more about this than I do. Anyhow, here is my question/problem. I want to use drag, drop and sort (the Interface plugin works for me, even though I have read it's out of date; I have looked at jQuery UI, but to be honest it is not clear and appears heavier than Interface to me). How do I set a cookie to save positions from this:

        $(document).ready(function () {
            $('a.closeEl').bind('click', toggleContent);
            $('div.groupWrapper').Sortable({
                accept: 'groupItem',
                helperclass: 'sortHelper',
                activeclass: 'sortableactive',
                hoverclass: 'sortablehover',
                handle: 'div.itemHeader',
                tolerance: 'pointer',
                onChange: function (ser) {
                },
                onStart: function () {
                    $.iAutoscroller.start(this, document.getElementsByTagName('body'));
                },
                onStop: function () {
                    $.iAutoscroller.stop();
                }
            });
        });

        var toggleContent = function (e) {
            var targetContent = $('div.itemContent', this.parentNode.parentNode);
            if (targetContent.css('display') == 'none') {
                targetContent.slideDown(300);
                $(this).html('[-]');
            } else {
                targetContent.slideUp(300);
                $(this).html('[+]');
            }
            return false;
        };

        var ser = function (s) {
            serial = $.SortSerialize(s);
            alert(serial.hash);
        };

    which is the "standard" Interface demo. PLUS: how do I then read that cookie so that when I next visit the page the order is as I set it in the cookie? Hopefully from that I can work out the rest? Thanks for help in advance.
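    A rough sketch of the cookie side, using plain document.cookie so no extra plugin is needed; writeCookie/readCookie and the cookie name are illustrative. The idea is to store serial.hash (or a list of item ids) from the Sortable onChange handler and, on the next page load, read it back and re-append the items in that order before wiring up Sortable:

        function writeCookie(name, value, days) {
            var expires = new Date(new Date().getTime() + days * 86400000).toUTCString();
            document.cookie = name + '=' + encodeURIComponent(value) +
                              '; expires=' + expires + '; path=/';
        }

        function readCookie(name) {
            var match = document.cookie.match(new RegExp('(?:^|; )' + name + '=([^;]*)'));
            return match ? decodeURIComponent(match[1]) : null;
        }

        // In the Sortable onChange handler (mirroring the demo's ser function):
        //     writeCookie('groupOrder', serial.hash, 30);
        // On $(document).ready, before calling Sortable():
        //     var saved = readCookie('groupOrder');
        //     if (saved) { /* split the hash and re-append the items in that order */ }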

    Read the article

  • Facebook not recognising open graph tags

    - by Pratik Poddar
    My object page looks like:

        <html xmlns="http://www.w3.org/1999/xhtml" dir="ltr" lang="en-US" xmlns:fb="https://www.facebook.com/2008/fbml">
        <head prefix="og: http://ogp.me/ns# cliprin: http://ogp.me/ns/apps/cliprin#">
            <meta property="fb:app_id" content="143944345745133" />
            <meta property="og:type" content="cliprin:product" />
            <meta property="og:url" content="https://itsourstudio.com/" />
            <meta property="og:title" content="LED Ice Cubes (Set Of 4)" />
            <meta property="og:sitename" content="Its Our Studio" />
            <meta property="og:image" content="https://s-static.ak.fbcdn.net/images/devsite/attachment_blank.png" />
            <meta property="og:description" content="Blah Blah Blah" />
        </head>
        </html>

    The debugger output for this URL shows og:type as "website" and gives the following warnings:

        Open Graph Warnings That Should Be Fixed
        Inferred Property: The 'og:url' property should be explicitly provided, even if a value can be inferred from other tags.
        Inferred Property: The 'og:title' property should be explicitly provided, even if a value can be inferred from other tags.
        Inferred Property: The 'og:description' property should be explicitly provided, even if a value can be inferred from other tags.
        Inferred Property: The 'og:image' property should be explicitly provided, even if a value can be inferred from other tags.
        Tiny og:image: All the images referenced by og:image must be at least 200px in both dimensions. Please check all the images with the og:image tag in the given URL and ensure that they meet the minimum specification.

    Read the article

  • JQuery tablesorter - Second click on column header doesn't resort

    - by Jonathan
    I'm using tablesorter on a table I added to a view in Django's admin (although I'm not sure this is relevant). I'm extending the HTML header:

        {% block extrahead %}
        <script type="text/javascript" src="http://code.jquery.com/jquery-1.4.2.js"></script>
        <script type="text/javascript" src="http://mysite.com/media/tablesorter/jquery.tablesorter.js"></script>
        <script type="text/javascript">
            $(document).ready(function() {
                $("#myTable").tablesorter();
            });
        </script>
        {% endblock %}

    When I click on a column header, it sorts the table by that column in descending order - that's OK. When I click the same column header a second time, it does not reorder to ascending order. What's wrong with it? The table's HTML looks like:

        <table id="myTable" border="1">
            <thead>
                <tr>
                    <th>column_name_1</th>
                    <th>column_name_2</th>
                    <th>column_name_3</th>
                </tr>
            </thead>
            <tbody>
                {% for item in extra.items %}
                <tr>
                    <td>{{ item.0|safe }} </td>
                    <td>{{ item.1|safe }} </td>
                    <td>{{ item.2|safe }} </td>
                </tr>
                {% endfor %}
            </tbody>
        </table>

    Read the article

  • apache2 Webdav using VirtualDocumentRoot

    - by picca
    I'm trying to get dynamic WebDAV working on my virtual hosts:

        <VirtualHost *:80>
            # http://www.example.com/test.txt -> /var/www/example.com/www/test.txt
            VirtualDocumentRoot /var/www/%-2.0.%-1.0/%-3+/

            <Location /webdav>
                Dav On
                AuthType Basic
                AuthName "example.com"
                AuthUserFile /var/www/[PROBLEM-1]/passwd.dav
                Require valid-user
            </Location>
        </VirtualHost>

    Is there any way I can set the PROBLEM-1 placeholder dynamically based on whatever comes in with HTTP_HOST? More precisely, based on part of it? Examples:

        HTTP_HOST = www.example.com -> PROBLEM-1 = example.com
        HTTP_HOST = example.com     -> PROBLEM-1 = example.com

    What I'm trying to do here is load the DAV passwd file dynamically based on which domain is requested. It is something like "groups", if you wish, so that the owner of domainA is not allowed to access files of domainB. So maybe there is some other solution based on the AuthGroupFile directive?

    Read the article

  • How do I get into a career as a programmer/development DBA?

    - by markle976
    About 8-9 years ago I started getting into programming as a hobby. I started with my TI-86 calculator and then moved into Visual Basic. After about a year I started playing around with HTML and JavaScript. Then I discovered Flash; I programmed with ActionScript 2.0 for about 2 years, which led me to start using ColdFusion. After a while I realized that (a) I am not a designer, and (b) with the way things were going with AJAX, .NET, and PHP there wasn't much future in ColdFusion/ActionScript. I had been working mostly as an administrative assistant, but about 3-4 years ago I got a position where I do some web development and assist the system admin with supporting Windows desktop PCs. I have gotten some decent experience over the past few years, but it has been spread across somewhat disparate areas:

    - about 40% of my time writing PHP/MySQL and HTML/CSS, etc.
    - about 20% helping users with PC questions
    - about 20% doing administrative things (mail merges, Excel, etc.)
    - about 20% managing and creating reports from our Access database

    I have also taught myself many things on my own, and now have a beginner's-level understanding of Windows Server, Java, Linux, Objective-C, SQL Server, C#, C++, Ruby, Mac OS X, VBA, VBScript, and basic IP networks. I feel like I am in a bit of a rut - I want to get my career moving, but I am not sure what I need to do. If I practice with C# and SQL Server Express for a year, will that be enough to get me in the door somewhere? Would it be easier to get a position if I teach myself Linux/Apache, since I have more experience with PHP/MySQL?

    Read the article

  • What is the best way to back up dedicated web server? (Amanda versus Rsync)

    - by Scott
    Hello everyone, I am trying to establish valid backups for my web server. It is a Linux box running CentOS. I have asked around and "rsync" was suggested by some of the Server Fault community. However, my coworker says that this really only copies over the physical files and isn't a usable "snapshot." He suggested using "amanda", saying that it does full server snapshots, which are more what I am accustomed to. I know at my company we have virtual machines that we take snapshots of, and we can restore everything back to just as it was with little effort and little downtime. Is this possible with rsync? Or would I need to create a new server and then migrate the files back and redo various configurations? I would prefer being able to just reset everything to a point in time. Forgive my ignorance; backups are something I have never really had to worry about before.
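    For the rsync side of the question, snapshots are usually approximated with --link-dest, which hard-links unchanged files against the previous run so each dated directory looks like a full copy while taking little extra space. A sketch, with host and paths as placeholders; Amanda (or a VM/LVM snapshot) remains the better fit if you need bootable, point-in-time restores rather than a file-level copy:

        #!/bin/bash
        # Nightly file-level snapshot of the web server (placeholder host/paths).
        TODAY=$(date +%F)
        DEST=/backups/webserver

        rsync -a --delete \
              --exclude=/proc --exclude=/sys --exclude=/dev \
              --link-dest="$DEST/latest" \
              root@web01.example.com:/ "$DEST/$TODAY"

        # Point "latest" at the snapshot we just made.
        ln -sfn "$DEST/$TODAY" "$DEST/latest"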

    Read the article

  • How can I update Firefox add-ons automatically?

    - by Maelstrom
    Similar to this question, is it possible to update installed add-ons via the command line? I'm running YSlow with beacon reporting as a nightly cron job under OS X:

        /Applications/Firefox.app/Contents/MacOS/firefox-bin -no-remote -P YSlow http://www.example.com/ &
        PID=$!
        sleep 300
        kill $PID

    This sends Firefox to the background and grabs the PID, waits 300 seconds (for the page to load), then kills it. If there is an update pending, the browser "hangs" waiting for a confirmation. If I do click on the "install updates" link, everything works, but then Firefox launches a new process - the $! returned by the shell is no longer valid. Can I update an add-on from the command line without confirmation? Can I curl the XPI into a file and install it without confirmation?

    Read the article

  • Why would certain browsers request all pages on my ASP.Net Web site twice?

    - by Deane
    Firefox is issuing duplicate requests to my ASP.NET web site. It will request a page, get the response, then immediately issue the same request again (well, almost the same - see below). This happens on every page of this particular web site (but not any others). IE does not do this, but Chrome does. I have confirmed that there is no Location header in the response, and no JavaScript or meta tag in the page that would cause the page to be re-requested (if any of these were the cause, IE would be re-requesting pages as well). I have confirmed this behavior with multiple Firefox installs on multiple machines; versions vary, but all are 3.x. The only difference between the two requests is the Accept header. For the first request it looks like this:

        Accept: text/html,application/xhtml+xml,application/xml;q=0.9,*/*;q=0.8

    For the second request it looks like this:

        Accept: */*

    The Content-Type response header in all cases is:

        Content-Type: text/html; charset=utf-8

    Something else odd: even though Firefox requests the page twice, it uses the first response and discards the second. I put a counter on a page that increments with every request, and I can watch the responses come back (via the Charles proxy). Firefox will get a "1" the first time and a "2" the second time, yet for some reason it displays the "1". Chrome exhibits this exact same behavior. I suspect it's a protocol-level issue, given the difference in Accept headers, but I've never seen this before.
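    One classic culprit worth ruling out, offered as a guess: an empty src or href somewhere in the rendered page (an <img src=""> is the most common case) makes Firefox and Chrome of that era request the current page's URL again, while IE does not, which matches the browser split described here. Markup like the following would reproduce the pattern:

        <!-- If the page emits something like this, Firefox/Chrome issue a second
             request for the page itself; IE does not. -->
        <img src="" alt="" />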

    Read the article

  • PHP URL parameter appending returns a special character

    - by Alexandre Lavoie
    I'm writing a function to build a URL. Here it is:

        public static function requestContent($p_lParameters)
        {
            $sParameters = "?key=TEST&format=json&jsoncallback=none";

            foreach ($p_lParameters as $sParameterName => $sParameterValue) {
                $sParameters .= "&$sParameterName=$sParameterValue";
            }

            echo "<span style='font-size: 16px;'>URL : http://api.oodle.com/api/v2/listings" . $sParameters . "</span><br />";

            $aXMLData = file_get_contents("http://api.oodle.com/api/v2/listings" . $sParameters);
            return json_decode($aXMLData, true);
        }

    I am calling this function with this array (print_r() result):

        Array ( [region] => canada [category] => housing/sale/home )

    But, very strangely, I get an unexpected character (note the special character in none®ion):

        http://api.oodle.com/api/v2/listings?key=TEST&format=json&jsoncallback=none®ion=canada&category=housing/sale/home

    For information, I use this header:

        <meta http-equiv="Content-Type" content="text/html;charset=UTF-8" />
        <?php header('Content-Type: text/html;charset=UTF-8'); ?>

    EDIT:

        $sRequest = "http://api.oodle.com/api/v2/listings?key=TEST&format=json&jsoncallback=none&region=canada&category=housing/sale/home";
        echo "<span style='font-size: 16px;'>URL : " . $sRequest . "</span><br />";

    also shows the URL with the same problem:

        http://api.oodle.com/api/v2/listings?key=TEST&format=json&jsoncallback=none®ion=canada&category=housing/sale/home

    Thank you for your help!
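    The URL string itself is almost certainly fine: "&reg" is an HTML character entity, so when the URL is echoed into the page the browser renders "&region=" as "®ion=" even without a semicolon, while file_get_contents() still receives the correct characters. A sketch of the same function body with the echo escaped for display, and the query string assembled with http_build_query() (note that it also percent-encodes the values, e.g. the slashes in the category):

        $parameters = array('region' => 'canada', 'category' => 'housing/sale/home');

        $url = 'http://api.oodle.com/api/v2/listings?key=TEST&format=json&jsoncallback=none&'
             . http_build_query($parameters);

        // Escape only for display; pass the raw $url to file_get_contents().
        echo "<span style='font-size: 16px;'>URL : "
           . htmlspecialchars($url, ENT_QUOTES, 'UTF-8') . "</span><br />";

        $aXMLData = file_get_contents($url);
        return json_decode($aXMLData, true);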

    Read the article

  • Chrome extension sendRequest from async callback not working?

    - by Eugene
    I can't figure out what's wrong. onRequest is not triggered by a call from an async callback method; the same request from a content script works. Sample code below.

    background.js:

        ...
        makeAsyncRequest();
        ...
        chrome.extension.onRequest.addListener(function(request, sender, sendResponse) {
            switch (request.id) {
                case "from_content_script":
                    // This works
                    console.log("from_content_script");
                    sendResponse({}); // clean up
                    break;
                case "from_async":
                    // Not working!
                    console.log("from_async");
                    sendResponse({}); // clean up
                    break;
            }
        });

    methods.js:

        makeAsyncRequest = function() {
            ...
            var xhr = new XMLHttpRequest();
            xhr.onreadystatechange = function() {
                if (xhr.readyState == 4) {
                    ...
                    // It works
                    console.log("makeAsyncRequest callback");
                    chrome.extension.sendRequest({id: "from_async"}, function(response) {
                    });
                }
            }
            ...
        };

    UPDATE: the manifest configuration file. Don't know what's wrong here.

        {
            "name": "TestExt",
            "version": "0.0.1",
            "icons": { "48": "img/icon-48-green.gif" },
            "description": "write it later",
            "background_page": "background.html",
            "options_page": "options.html",
            "browser_action": {
                "default_title": "TestExt",
                "default_icon": "img/icon-48-green.gif"
            },
            "permissions": [
                "tabs",
                "http://*/*",
                "https://*/*",
                "file://*/*",
                "webNavigation"
            ]
        }
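    A probable explanation, based on how the messaging API behaves: chrome.extension.sendRequest is not delivered to listeners living in the same page that sent it, so a background page (or any script loaded into it, such as methods.js) cannot message itself; only the content-script case goes through. If both scripts run in the background page, calling a shared function directly is simpler; handleRequest below is an illustrative name:

        // background.html / background.js
        function handleRequest(request) {
            console.log(request.id);   // "from_async"
        }

        // methods.js, inside the XHR callback (same page as background.js):
        xhr.onreadystatechange = function () {
            if (xhr.readyState == 4) {
                handleRequest({ id: "from_async" });   // direct call, no messaging needed
            }
        };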

    Read the article
