Search Results

Search found 20935 results on 838 pages for 'content'.

Page 100/838

  • DataGridView not displaying data in ToolStripControlHost

    - by jblaske
    I'm utilizing the code posted by Jesper Palm here: http://stackoverflow.com/questions/280891/make-user-control-display-outside-of-form-boundry

        /// <summary>
        /// A simple popup window that can host any System.Windows.Forms.Control
        /// </summary>
        public class PopupWindow : System.Windows.Forms.ToolStripDropDown
        {
            private System.Windows.Forms.Control _content;
            private System.Windows.Forms.ToolStripControlHost _host;

            public PopupWindow(System.Windows.Forms.Control content)
            {
                //Basic setup...
                this.AutoSize = false;
                this.DoubleBuffered = true;
                this.ResizeRedraw = true;
                this._content = content;
                this._host = new System.Windows.Forms.ToolStripControlHost(content);

                //Positioning and Sizing
                this.MinimumSize = content.MinimumSize;
                this.MaximumSize = content.Size;
                this.Size = content.Size;
                content.Location = Point.Empty;

                //Add the host to the list
                this.Items.Add(this._host);
            }
        }

    I've translated it to VB:

        Public Class PopupWindow
            Inherits System.Windows.Forms.ToolStripDropDown

            Private _content As System.Windows.Forms.Control
            Private _host As System.Windows.Forms.ToolStripControlHost

            Public Sub New(ByVal content As System.Windows.Forms.Control)
                Me.AutoSize = False
                Me.DoubleBuffered = True
                Me.ResizeRedraw = True
                Me._content = content
                Me._host = New System.Windows.Forms.ToolStripControlHost(content)
                Me.MinimumSize = content.MinimumSize
                Me.MaximumSize = content.MaximumSize
                Me.Size = content.Size
                content.Location = Point.Empty
                Me.Items.Add(Me._host)
            End Sub
        End Class

    It works great with a PictureBox showing its information, but for some reason I cannot get the DataGridView to display anything when it is in the popup. If I pull the grid out of the popup it displays all of its information fine. If I pause during debug, the grid shows that it has all the data in it; it's just not displaying anything. Does anybody have any ideas?

    Read the article

  • Laravel check if id exists?

    - by devt204
    I have two columns in the contents table: 1. id, 2. content. This is what I'm trying to do:

        Route::post('save', function()
        {
            $editor_content = Input::get('editor_content');
            $rules = array('editor_content' => 'required');
            $validator = Validator::make(Input::all(), $rules);

            if ($validator->passes()) {
                //1. check if an id was submitted
                //2. if the id exists, update the content table
                //3. else insert new content

                //create new instance
                $content = new Content;
                // insert the content into the content column
                $content->content = $editor_content;
                //save the content
                $content->save();
                // check if content has an id
                $id = $content->id;
                return Response::json(array('success' => 'sucessfully saved', 'id' => $id));
            }

            if ($validator->fails()) {
                return $validator->messages();
            }
        });

    I want to check whether an id has already been submitted. I'm processing the request via AJAX: if the id exists I want to update the content column, and if it doesn't I want to create a new instance. How do I do it?

    Read the article

  • PHP templated site with file_get_contents links

    - by s32ialx
    OK, so I have a template script my friend built for me; I'll include all the file names. What is not working is that file_get_contents is not grabbing the content: (1) I don't know where the content should be placed, and (2) I want it placed in a directory so that if I change the template, the area where the content goes stays the same. I'm trying to get file_get_contents to load the links (?page=about, ?page=services, etc.) into the contents div in body.tpl. Any help is appreciated.

        /* file.class.php */
        <?php
        $file = new file();

        class file {
            var $path = "templates/clean";
            var $ext = "tpl";

            function loadfile($filename) {
                return file_get_contents($this->path . "/" . $filename . "." . $this->ext);
            }

            function css($val, $content = '', $contentvar = '#CSS#') {
                if (is_array($val)) {
                    $css = 'style="';
                    foreach ($val as $p) {
                        $css .= $p . ";";
                    }
                    $css .= '"';
                } else {
                    $css = 'style="' . $val . '"';
                }
                if ($content != '') {
                    return str_replace($contentvar, ' ' . $css, $content);
                } else {
                    return $css;
                }
            }

            function setsize($content, $width = '-1', $height = '-1', $border = '-1') {
                $css = '';
                if ($width != '-1') { $css = $css . "width=\"" . $width . "\""; }
                if ($height != '-1') { $css = $css . "height=\"" . $height . "\""; }
                if ($border != '-1') { $css = $css . "border=\"" . $border . "\""; }
                return str_replace('#SIZE#', ' ' . $css, $content);
            }

            function setcontent($content, $newcontent, $vartoreplace = '#CONTENT#') {
                $val = str_replace($vartoreplace, $newcontent, $content);
                return $val;
            }

            function p($content) {
                $v = $content;
                $v = str_replace('#CONTENT#', '', $v);
                $v = str_replace('#SIZE#', '', $v);
                print $v;
            }
        }

        if (isset($_GET['page'])) {
            $content = $_GET['page'] . '.php';
        } else {
            $content = 'main.php';
        }
        ?>

    If someone could trim that down so it is JUST the template-required code and the file_get_contents part, that would help.

        /* index.php */
        <?php
        include('classes/file.class.php');

        //load template content
        $header = $file->loadfile('header');
        $body = $file->loadfile('body');
        $footer = $file->loadfile('footer');

        //assign content to multiple variables
        $file->p($header . $body . $footer);
        ?>

        /* header.tpl */
        <!DOCTYPE html PUBLIC "-//W3C//DTD XHTML 1.0 Transitional//EN" "http://www.w3.org/TR/xhtml1/DTD/xhtml1-Transitional.dtd">
        <html xmlns="http://www.w3.org/1999/xhtml" xml:lang="en" lang="en">
        <head>
          <meta http-equiv="Content-Type" content="text/html; charset=iso-8859-1"/>
          <meta name="robots" content="index,follow"/>
          <meta name="distribution" content="global"/>
          <meta name="description" content=""/>
          <meta name="keywords" content=""/>
          <link href="templates/clean/style.css" rel="stylesheet" type="text/css" media="screen" />
          <link rel="stylesheet" href="templates/clean/menu_style.css" type="text/css" />
          <script type="text/javascript" src="http://jqueryjs.googlecode.com/files/jquery-1.3.2.min.js"></script>
        </head>
        <body>
        <div id="header">
          <div id="logo"><a href="index.php" style="height:30px;width:150px;"><img src="images/logo.png" border="0" alt=""/></a></div>
          <div id="menuo">
            <div class="menu">
              <ul id="menu">
                <li><a href="?page=home">Home</a></li>
                <li><a href="?page=about">About Us</a></li>
                <li><a href="?page=services">Services</a>
                  <ul>
                    <li><a href="?page=instore">InStore Repairs</a></li>
                    <li><a href="?page=inhome">InHome Repairs</a></li>
                    <li><a href="?page=website">Website Design</a></li>
                    <li><a href="?page=soon">Comming Soon.</a></li>
                  </ul>
                </li>
                <li><a href="?page=products">Products</a>
                  <ul>
                    <li><a href="?page=pchard">Computer Hardware</a></li>
                    <li><a href="?page=monitor">Monitor's</a></li>
                    <li><a href="?page=laptop">Laptop + Netbooks</a></li>
                    <li><a href="?page=soon">Comming Soon.</a></li>
                  </ul>
                </li>
                <li><a href="?page=contact">Contact</a></li>
              </ul>
            </div>
          </div>
        </div>
        <div id="headerf">
        </div>

        /* body.tpl */
        <div id="bodys">
          <div id="bodt"></div>
          <div id="bodm">
            <div id="contents">
              #CONTENT#
            </div>

        /* footer.tpl */
        <div id="footer">
          <div style="position:absolute; top:4px; left:4px;"><img src="images/ff.png" alt="ok"></div>
          <div style="position:absolute; top:4px; right:5px; color:#FFFFFF;">&copy;2010 <a href="mailto:">Company Name</a></div>
        </div>
        </body>
        </html>

    Read the article

  • Using Visual Basic in Excel to create a Word document, how do I make some text bold?

    - by Ernst
    I've seen this, but it doesn't work for me; I don't get where to change from InsertAfter to TypeText. What should I change in the following to get part of the text bold as desired?

        Sub CreateNewWordDoc()
            Dim wrdDoc As Word.Document
            Dim wrdApp As Word.Application
            Set wrdApp = CreateObject("Word.Application")
            Set wrdDoc = wrdApp.Documents.Add
            With wrdDoc
                .Content.InsertAfter "not bold "
                .Content.Font.Bold = True
                .Content.InsertAfter "should be bold"
                .Content.Font.Bold = False
                .Content.InsertAfter " again not bold, followed by newline"
                .Content.InsertParagraphAfter
                .Content.Font.Bold = True
                .Content.InsertAfter "bold again"
                .Content.Font.Bold = False
                .Content.InsertAfter " and again not bold"
                .Content.InsertParagraphAfter
                .SaveAs ("testword.doc")
                .Close
            End With
            wrdApp.Quit
            Set wrdDoc = Nothing
            Set wrdApp = Nothing
        End Sub

    Thanks, Ernst

    Read the article

  • Need to open an image in a web browser

    - by manish
    The byte.eml file contains a base64-encoded image value, and I am trying to open it in the browser, but the image is not being displayed. Please help me out. This is the code:

        Dim oFile As System.IO.File
        Dim orEAD As System.IO.StreamReader
        orEAD = oFile.OpenText("E:\mailbox\P3_hemantd.mbx\byte.eml")
        Dim content As String
        content = ""
        ''Dim intsinglechr As Integer
        ''Dim csinglechr As String
        While orEAD.Peek <> -1
            content = content & Chr(orEAD.Read)
            content = Replace(content, vbCrLf, "")
            content = Replace(content, vbTab, "")
            content = Replace(content, " ", "")
        End While
        Response.ContentType = "image/jpeg"
        Response.BinaryWrite(Convert.FromBase64String(content))
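
    For comparison, here is a minimal Python sketch of the same idea, assuming the .eml is a standard MIME message with a base64-encoded image part (the path is the one from the question): parse the message and decode only the image payload, rather than base64-decoding the whole raw file, headers included.

        import email

        # Parse the .eml file as a MIME message (path taken from the question).
        with open(r"E:\mailbox\P3_hemantd.mbx\byte.eml", "rb") as f:
            msg = email.message_from_binary_file(f)

        image_bytes = None
        for part in msg.walk():
            # Pick the first image/* part; get_payload(decode=True) undoes the
            # base64 transfer encoding for just that part.
            if part.get_content_maintype() == "image":
                image_bytes = part.get_payload(decode=True)
                break

        if image_bytes is not None:
            with open("decoded.jpg", "wb") as out:
                out.write(image_bytes)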

    Read the article

  • Where to find a template or script with a frame on the left side (list of article headlines) and the content on the right side

    - by Gero
    I am looking for something like the following: http://www.scala-lang.org/api/current/index.html#scala.Any http://resources.arcgis.com/en/help/arcobjects-net/componenthelp/index.html#/Overview/004t00000009000000/ On the left side I want to have/create, in some admin tool, categories and subcategories, and add names/links to the articles. When I click on one of those article links, I would see its content on the right side. Is there any script or template (or whatever) that would allow me to do that?

    Read the article

  • SEO penalty for "duplicate" content when a site's also accessible via another domain name?

    - by tog22
    While testing searches for keywords on my site, I notice that a mirror of it at http://a8.8d.344a.static.theplanet.com/ sometimes appears as the top result rather than my primary domain. It looks like this is an alternative address for my server. Will the presence of identical content at this domain and at my primary domain result in a Google penalty? If so, what can I do about it? Thanks for any help...

    Read the article

  • Copying and Pasting Web Page Content into an Office Application

    - by gcc
    I want to take information from any website and paste it into LibreOffice (text and images). First, I want to record the name/URL, description and some basic information from each website. Afterwards, my intention is to copy/paste the web-page content into LibreOffice in order to analyze it. Can LibreOffice do this, and is it my best option? If not, can you recommend a tool which is available for 12.04?

    Read the article

  • Why is the Ubuntu App Developer website not showing content about desktop development?

    - by Zignd
    It looks like they removed all the content related to desktop development. For example, when you click on the "Get Started" tab there is only information about Ubuntu Touch and its SDK; when you click on the "Resources" tab and then on "Programming languages" you only see C++, JavaScript and QML (no Python, Java, Mono, etc.). You also can't find any information about Quickly: try clicking on "Quickly" under "Resources" at the bottom of the website and you will see a "Page not found" error. Is the site under maintenance, or is it something else?

    Read the article

  • How to build a web service to detect content change(s) at an external website?

    - by Global nomad
    I'm researching ways to build a web service that periodically traverses a predetermined list of web pages (on another, external website) to detect whether a page's content has changed through editing, or whether the page has been deleted. The end goal is to have this web service post push-notification events to mobile devices. FYI, I've searched and read the "Questions with similar titles" here. Thank you for sharing your answers.
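
    A minimal Python sketch of the polling side of such a service, assuming plain HTTP pages and leaving out the push-notification delivery: fetch each URL on a schedule, hash the body, and compare it against the previously stored hash; a 404 signals deletion.

        import hashlib
        import urllib.error
        import urllib.request

        # Hypothetical watch list; in a real service the previous hashes would
        # live in a database rather than in memory.
        WATCHED_URLS = ["http://example.com/page-1", "http://example.com/page-2"]
        previous_hashes = {}

        def check_pages():
            """Return (url, event) tuples for pages that changed or disappeared."""
            events = []
            for url in WATCHED_URLS:
                try:
                    body = urllib.request.urlopen(url, timeout=10).read()
                except urllib.error.HTTPError as err:
                    if err.code == 404:
                        events.append((url, "deleted"))
                    continue
                digest = hashlib.sha256(body).hexdigest()
                if url in previous_hashes and previous_hashes[url] != digest:
                    events.append((url, "changed"))
                previous_hashes[url] = digest
            return events

    In practice you would hash only the extracted main content rather than the raw bytes, since ads, timestamps and rotating markup make whole-page hashes too noisy.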

    Read the article

  • Which Content Management System (CMS)/Wiki should I use?

    - by danlefree
    This is a general, community wiki catch-all question to address non-specific "I need a CMS or Wiki that does x, y, and z..." questions. If your question was closed as a duplicate of this question and you feel that the information provided here does not provide a sufficient answer, please open a discussion on Pro Webmasters Meta. I have a list of features that I want for my website's Content Management System (CMS) - where can I find a [free] script that includes all of them?

    Read the article

  • Do you know of reputable backup software that can capture ONLY file system structure + attributes, WITHOUT file content?

    - by bogdan
    Is there, on Windows, a reputable backup software out there capable of capturing ONLY a file system's directory and file structure, along with each item's attributes, WITHOUT capturing the actual file content (all files should be zero-length in the backup). I thoroughly searched the web for a solution and wasn't able to find one. Scenario when this would be very useful: I have a large drive with a huge amount of files. If the drive dies, I don't care so much about the content in these files (I can always download this content again from the Internet at any time) but I do care HUGELY about the names of the files that were on it, possibly also about their MD5 hashes and other classic file attributes (especially created-date / modified-date). The functionality I need is present to an extent in "media"/file cataloging software (i.e. whereisit) and, to a lesser extent, in a Total Commander set of extensions (DiskDir, DiskDirExtended). The huge drawback with cataloging software is that it's not designed to store previous versions of each item (AFAIK) and, most importantly, it has very weak content backup capabilities. I managed to think of a hack but I hope there's some backup software out there that already has this capability and I just failed to find it, thus this question. The hack: RoboCopy could be used with /CREATE (CREATE directory tree and zero-length files only) or /COPY (what to COPY for files) without the D=Data flag, to clone a directory structure into one where all files are zero-length but have the desired attributes. Then I would backup the cloned directory structure with a reputable backup software. I would really love to avoid a hack like this one, if possible. Thanks, Bogdan
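
    A minimal Python sketch of the same idea as the RoboCopy hack, as a stopgap rather than a real backup product: walk the drive and record each file's path, size and timestamps (the MD5 hashes mentioned above are omitted, and the drive letter and output file are assumptions).

        import csv
        import os

        def snapshot_tree(root, out_csv):
            """Record path, size and timestamps for every file under root, without content."""
            with open(out_csv, "w", newline="", encoding="utf-8") as f:
                writer = csv.writer(f)
                writer.writerow(["path", "size_bytes", "modified", "created"])
                for dirpath, dirnames, filenames in os.walk(root):
                    for name in filenames:
                        full = os.path.join(dirpath, name)
                        try:
                            st = os.stat(full)
                        except OSError:
                            continue  # unreadable entry; skip it rather than abort the walk
                        # On Windows, st_ctime reports the creation time.
                        writer.writerow([full, st.st_size, st.st_mtime, st.st_ctime])

        # Example (hypothetical drive and output file):
        # snapshot_tree("E:\\", "e_drive_structure.csv")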

    Read the article

  • Right-aligning button in a grid with possibly no content - stretch grid to always fill the page

    - by Peter Perhác
    Hello people, I am losing my patience with this. I am working on a Windows Phone 7 application and I can't figure out what layout manager to use to achieve the following.

    Basically, when I use a Grid as the layout root, I can't make the grid stretch to the size of the phone application page. When the main content area is full, all is well and the button sits where I want it to sit. However, when the page content is very short, the grid is only as wide as needed to accommodate its content, and then the button (which I am desperate to keep near the right edge of the screen) moves away from the right edge.

    If I replace the grid and use a vertically oriented stack panel for the layout root, the button sits where I want it, but then the content area is capable of growing beyond the bottom edge. So, when I place a listbox full of items into the main content area, it doesn't adjust its height to be completely in view; the majority of items in that listbox are just rendered below the bottom edge of the display area.

    I have tried using a third-party DockPanel layout manager, docked the button in its top section and set the button's HorizontalAlignment="Right", but the result was the same as with the grid: it also shrinks when there isn't enough content in the content area (or when the title is short). How do I do this then?

    ==EDIT== I tried WPCoder's XAML, only I replaced the dummy text box with what I would have in a real page (a stack panel) and placed a listbox into the ContentPanel grid. I noticed that what I had before and what WPCoder is suggesting is very similar. Here's my current XAML, and the page still doesn't grow to fit the width of the page; I get identical results to what I had before:

        <phone:PhoneApplicationPage
            x:Name="categoriesPage"
            x:Class="CatalogueBrowser.CategoriesPage"
            xmlns="http://schemas.microsoft.com/winfx/2006/xaml/presentation"
            xmlns:x="http://schemas.microsoft.com/winfx/2006/xaml"
            xmlns:phone="clr-namespace:Microsoft.Phone.Controls;assembly=Microsoft.Phone"
            xmlns:shell="clr-namespace:Microsoft.Phone.Shell;assembly=Microsoft.Phone"
            xmlns:d="http://schemas.microsoft.com/expression/blend/2008"
            xmlns:mc="http://schemas.openxmlformats.org/markup-compatibility/2006"
            FontFamily="{StaticResource PhoneFontFamilyNormal}"
            FontSize="{StaticResource PhoneFontSizeNormal}"
            Foreground="{StaticResource PhoneForegroundBrush}"
            SupportedOrientations="PortraitOrLandscape" Orientation="Portrait"
            mc:Ignorable="d" d:DesignWidth="480" d:DesignHeight="768"
            xmlns:ctrls="clr-namespace:Microsoft.Phone.Controls;assembly=Microsoft.Phone.Controls.Toolkit"
            shell:SystemTray.IsVisible="True">

          <Grid x:Name="LayoutRoot" Background="Transparent">
            <Grid.RowDefinitions>
              <RowDefinition Height="Auto"/>
              <RowDefinition Height="*"/>
            </Grid.RowDefinitions>
            <Grid>
              <Grid.ColumnDefinitions>
                <ColumnDefinition Width="*" />
                <ColumnDefinition Width="Auto" />
              </Grid.ColumnDefinitions>
              <StackPanel Orientation="Horizontal" VerticalAlignment="Center" >
                <TextBlock Text="Browsing:" Margin="10,10" Style="{StaticResource PhoneTextTitle3Style}" />
                <TextBlock x:Name="ListTitle" Text="{Binding DisplayName}" Margin="0,10" Style="{StaticResource PhoneTextTitle3Style}" />
              </StackPanel>
              <Button Grid.Column="1" x:Name="btnRefineSearch" Content="Refine Search" Style="{StaticResource buttonBarStyle}" FontSize="14" />
            </Grid>
            <Grid x:Name="ContentPanel" Grid.Row="1">
              <ListBox x:Name="CategoryList" ItemsSource="{Binding Categories}" Style="{StaticResource CatalogueList}" SelectionChanged="CategoryList_SelectionChanged"/>
            </Grid>
          </Grid>
        </phone:PhoneApplicationPage>

    This is what the page with the above XAML markup looks like in the emulator.

    Read the article

  • How can I clear content without getting the dreaded "stop running this script?" dialog?

    - by Cheeso
    I have a div that holds a div, like this:

        <div id='reportHolder' class='column'>
            <div id='report'>
            </div>
        </div>

    Within the inner div, I add a bunch (7-12) of pairs of a and div elements, like this:

        <h4><a>Heading1</a></h4>
        <div> ...content here....</div>

    The total size of the content is maybe 200k. Each div just contains a fragment of HTML. Within it, there are numerous <span> elements, containing other html elements, and they nest, to maybe 5-8 levels deep. Nothing really extraordinary, I don't think. After I add all the content, I then create an accordion, like this:

        $('#report').accordion({collapsible:true, active:false});

    This all works fine. The problem is, when I try to clear or remove the report div, it takes a looooooong time, and I get 3 or 4 popups asking "Do you want to stop running this script?" I have tried several ways.

    Option 1:

        $('#report').accordion('destroy');
        $('#report').remove();
        $("#reportHolder").html("<div id='report'> </div>");

    Option 2:

        $('#report').accordion('destroy');
        $('#report').html('');
        $("#reportHolder").html("<div id='report'> </div>");

    Option 3:

        $('#report').accordion('destroy');
        $("#reportHolder").html("<div id='report'> </div>");

    After getting a suggestion in the comments, I also tried option 4:

        $('#report').accordion('destroy');
        $('#report').empty();
        $("#reportHolder").html("<div id='report'> </div>");

    No matter what, it hangs for a long while. The call to accordion('destroy') seems to not be the source of the delay; it's the erasure of the html content within the report div. This is jQuery 1.3.2. EDIT - fixed code typo. PS: this happens on FF3.5 as well as IE8.

    Questions: What is taking so long? How can I remove content more quickly?

    Addendum: I broke into the debugger in FF, during "option 4", and the stacktrace I see is:

        data()
        trigger()
        triggerHandler()
        add()
        each()
        each()
        add()
        empty()
        each()
        each()
        (?)()        // <<-- this is the call to empty()
        ResetUi()    // <<-- my code onclick

    I don't understand why add() is in the stack. I am removing content, not adding it. I'm afraid that in the context of the remove (all), jQuery does something naive, like grabbing the html content, doing the text replace to remove one html element, then calling .add() to put back what remains. Is there a way to tell jQuery to NOT propagate events when removing HTML content from the DOM?

    Read the article

  • Any way to view dynamic java content ex-post? Browser session still open

    - by Ryan
    I feel like a grandpa from 1996 asking this, but is it at all possible to view a representation of a particular screen that was rendered as part of a java-based online checkout process I executed a couple days ago? I haven't cleared my browser cache or temp files or anything, and I don't think I've restarted the comp or even the browser since. I'm using mac OS X 10.6.8, and the page(s) were viewed with Chrome version 21.0.1180.89 in standard mode (not incognito). Specifically the page in question was part of Verizon Wireless's 'iconic' contract/checkout process, which leads the user through several pages to make selections on various criteria and seems to be based on java. (Obviously I'm a dummy regarding web stuff so the question is probably not very well defined, I'm happy to elaborate). ^This is the tl;dr question. If it belongs on another site please just let me know. This is what I've been able to figure out on my own, for the bored / ultra-helpful / those who could use a laugh at a noob fumbling his way around cache files with no idea what he's doing: The progress through the selection pages is very clear in Chrome's browser history, the sequential pages are: https://www.verizonwireless.com/b2c/accountholder/estore/phoneupgrade?execution=e3s2 https://www.verizonwireless.com/b2c/accountholder/estore/phoneupgrade?execution=e3s3 https://www.verizonwireless.com/b2c/accountholder/estore/phoneupgrade?execution=e3s4 https://www.verizonwireless.com/b2c/accountholder/estore/phoneupgrade?execution=e3s5 https://preorder.verizonwireless.com/iconic/?format=JSON&value={%22action%22:%22START_ORDER%22,%22custType%22:%22EXISTING%22,%22orderType%22:%22UPGRADE%22,%22lookupMtn%22:%22*(NumberA)*%22,%22lineData%22:[{%22mtn%22:%22*(NumberA)*%22,%22upgType%22:%22ALTERNATE_UPGRADE%22,%22eligibleMtn%22:%22*(NumberB)*%22}]} https://preorder.verizonwireless.com/iconic/iconic/secured/screens/IconicOrder.do?format=JSON&value={%22action%22:%22START_ORDER%22,%22custType%22:%22EXISTING%22,%22orderType%22:%22UPGRADE%22,%22lookupMtn%22:%22*(NumberA)*%22,%22lineData%22:[{%22mtn%22:%22*(NumberA)*%22,%22upgType%22:%22ALTERNATE_UPGRADE%22,%22eligibleMtn%22:%22*(NumberB)*%22}]} https://preorder.verizonwireless.com/iconic/iconic/secured/screens/IconicEligibility.do https://preorder.verizonwireless.com/iconic/iconic/secured/screens/IconicDeviceSelection.do https://preorder.verizonwireless.com/iconic/iconic/secured/screens/PlanOptions.do https://preorder.verizonwireless.com/iconic/iconic/secured/screens/IconicFeatures.do https://preorder.verizonwireless.com/iconic/iconic/secured/screens/IconicAccessories.do https://preorder.verizonwireless.com/iconic/iconic/secured/screens/IconicShipmentBilling.do https://preorder.verizonwireless.com/iconic/iconic/secured/screens/IconicReview.do https://preorder.verizonwireless.com/iconic/iconic/secured/screens/IconicPaymentCreditInfo.do https://preorder.verizonwireless.com/iconic/iconic/secured/screens/IconicConfirmation.do The visual representation I would need could come from any of these pages, as the necessary information was shown at the top of each of them (although the two with long URLs were just like redirects or something). Of course, clicking the link to the page in History right now requires a new sign-in and just returns the user to the initial step for doing the process again; it does not pull up a representation of the page as it was seen several days ago. This I understand. 
Instead using Chrome's integrated cache viewer by typing about:cache in the address bar, I can search and find links that appear to be relevant, when I click on the link I just get a http header and a bunch of hexadecimal gobbledygook. I've tried to use the URL at the top of the cache and URLs in the http headers, but they take me to current versions of those pages and not the versions I saw during the checkout process. I tried this with a few of them but stopped because I noticed that it updated the date in the http header to the present moment and I don't want to take chances overwriting the cache files since I don't know what I'm doing. The links to the cache files look like this: https://login.verizonwireless.com/amserver/UI/Login?realm=vzw&goto=https%3A%2F%2Fpreorder.verizonwireless.com%3A443%2Ficonic%2Ficonic%2Fsecured%2Fscreens%2FPlanOptions.do https://preorder.verizonwireless.com/iconic/iconic/screens/customerTypeOverlay.jsp https://verizonwireless.tt.omtrdc.net/m2/verizonwireless/mbox/standard?mboxHost=login.verizonwireless.com&mboxSession=1347776884663-145230&mboxPC=1347609748832-956765.19&mboxPage=1347776884663-145230&screenHeight=1200&screenWidth=1920&browserWidth=1299&browserHeight=868&browserTimeOffset=-420&colorDepth=24&mboxCount=1&mbox=My_Verizon_Global&mboxId=0&mboxTime=1347751684666&mboxURL=https%3A%2F%2Flogin.verizonwireless.com%2Famserver%2FUI%2FLogin%3Frealm%3Dvzw%26goto%3Dhttps%253A%252F%252Fpreorder.verizonwireless.com%253A443%252Ficonic%252Ficonic%252Fsecured%252Fscreens%252FPlanOptions.do&mboxReferrer=&mboxVersion=41 and https://verizonwireless.tt.omtrdc.net/m2/verizonwireless/mbox/standard?mboxHost=login.verizonwireless.com&mboxSession=1347735676953-663794&mboxPC=1347609748832-956765.19&mboxPage=1347738347511-550383&screenHeight=1200&screenWidth=1920&browserWidth=1299&browserHeight=845&browserTimeOffset=-420&colorDepth=24&mboxCount=1&mbox=My_Verizon_Global&mboxId=0&mboxTime=1347713147517&mboxURL=https%3A%2F%2Flogin.verizonwireless.com%2Famserver%2FUI%2FLogin%3Frealm%3Dvzw%26goto%3Dhttps%253A%252F%252Fpreorder.verizonwireless.com%253A443%252Ficonic%252Ficonic%252Fsecured%252Fscreens%252FIconicOrder.do%253Fformat%253DJSON%2526value%253D%257B%252522action%252522%253A%252522START_ORDER%252522%252C%252522custType%252522%253A%252522EXISTING%252522%252C%252522orderType%252522%253A%252522UPGRADE%252522%252C%252522lookupMtn%252522%253A%252522*(NumberA)*%252522%252C%252522lineData%252522%253A%255B%257B%252522mtn%252522%253A%252522*(NumberA)*%252522%252C%252522upgType%252522%253A%252522ALTERNATE_UPGRADE%252522%252C%252522eligibleMtn%252522%253A%252522*(NumberB)*%252522%257D%255D%257D&mboxReferrer=&mboxVersion=41 and the http headers look like this: HTTP/1.1 200 OK Server: VZW Date: Sun, 16 Sep 2012 14:55:48 GMT Cache-control: private Pragma: no-cache Expires: 0 X-dsameversion: VZW Am_client_type: genericHTML Content-type: text/html;charset=ISO-8859-1 Content-Encoding: gzip Content-Length: 6220 and HTTP/1.1 200 OK Cache-Control: no-cache Date: Sun, 16 Sep 2012 16:16:30 GMT Content-Type: text/html Expires: Thu, 01 Jan 1970 00:00:00 GMT Content-Encoding: gzip X-Powered-By: Servlet/2.5 JSP/2.1 and HTTP/1.1 302 Moved Temporarily Server: VZW Date: Sun, 16 Sep 2012 16:29:32 GMT Cache-control: private Pragma: no-cache X-dsameversion: VZW Am_client_type: genericHTML Location: 
https://preorder.verizonwireless.com:443/iconic/iconic/secured/screens/IconicOrder.do?format=JSON&value={%22action%22:%22START_ORDER%22,%22custType%22:%22EXISTING%22,%22orderType%22:%22UPGRADE%22,%22lookupMtn%22:%22*(*(NumberA)*%22,%22lineData%22:[{%22mtn%22:%22*(NumberA)*%22,%22upgType%22:%22ALTERNATE_UPGRADE%22,%22eligibleMtn%22:%22*(NumberB)*%22}]} Content-length: 0 ^^this last one actually returned me to a page in the middle of the process when I used the "Location:" given in this http header rather than the URL at the top of the cache page (and was signed in to Verizon's website through a separate tab), but the page it took me to had already been updated to reflect new information, it wasn't presented as of the time the actions were taken several days ago when the page was originally viewed. (It's clear I can't achieve what I'm looking for by visiting current versions of these pages on the web…I should actually probably disable my network adapter while testing this out). The cache folder seems promising, but I don't know what to make of all that hexadecimal mess - if it contains what I'm looking for and if so, how to view it. Finally, the third thing I've come across is the Google Chrome cache folder on my local machine, at ~/Library/Caches/Google/Chrome/ then there are 'Default' and 'Media Cache' folders within. There are ~4,000 files in the former averaging ~100kb each, and 100 files in the latter averaging ~900kb each. The filenames all start "f_00xxxx" except for files titled data_0 through data_4 in each folder. I'm not sure how to observe the contents of these files and don't really want to start opening them up and potentially overwriting existing cached pages, as I notice there are already some holes in the arrangement of the files which I have never deleted manually. Hopefully this is an easy question to answer for someone who knows this stuff, admittedly web stuff is my weak point. As such, I've spent the past five hours searching around and trying to provide all the information I can. I'm probably asking for a miracle - like can those cached pages full of hexadecimal data be used to recreate the representation of the information that was on screen during the process? Or could screenshots of the previously viewed webpages be lurking in the /Caches folder? I have doubt because the content wasn't viewed at a permanent link, rather it seems like the on-screen information was served by Verizon's db, and probably securely so. I'm just not sure if Chrome saves the visual rendering of the page contents somewhere, even just temporarily. Alternatively I would be happy just to get the raw data that was on the page, even if not a visual representation…I just need to be able to demonstrate the phone line that was referenced on this page: https://preorder.verizonwireless.com/iconic/iconic/secured/screens/IconicFeatures.do . Can anyone point me in the right direction?

    Read the article

  • How to concatenate the contents of all children of a node in XPath?

    - by Brian
    Is it possible with XPath to get a concatenated view of all of the children of a node? I am looking for something like the jQuery .html() method. For example, if I have the following XML:

        <h3 class="title">
          <span class="content">this</span>
          <span class="content"> is</span>
          <span class="content"> some</span>
          <span class="content"> text</span>
        </h3>

    I would like an XPath query on "h3[@class='title']" that would give me "this is some text". That is the real question, but if more context/background is helpful, here it is: I am using XPath and I used this post to help me write some complex XSL. My source XML looks like this:

        <h3 class="title">Title</h3>
        <p>
          <span class="content">Some</span>
          <span class="content"> text</span>
          <span class="content"> for</span>
          <span class="content"> this</span>
          <span class="content"> section</span>
        </p>
        <p>
          <span class="content">Another</span>
          <span class="content"> paragraph</span>
        </p>
        <h3 class="title">
          <span class="content">Title</span>
          <span class="content"> 2</span>
          <span class="content"> is</span>
          <span class="content"> complex</span>
        </h3>
        <p>
          <span class="content">Here</span>
          <span class="content"> is</span>
          <span class="content"> some</span>
          <span class="content"> text</span>
        </p>

    My output XML considers each <h3> as well as all <p> tags until the next <h3>. I wrote the XSL as follows:

        <xsl:template match="h3[@class='title']">
          ...
          <xsl:apply-templates select="following-sibling::p[
            generate-id(preceding-sibling::h3[1][@class='title'][text()=current()/text()])
            = generate-id(current())
          ]"/>
          ...
        </xsl:template>

    The problem is that I use the text() method to identify h3s that are the same. In the example above, the "Title 2 is complex" title's text() method returns whitespace. My thought was to use a method like jQuery's .html() that would return me "Title 2 is complex".
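
    For what it's worth, a minimal Python/lxml sketch of the concatenation being asked for (lxml is used here only as a convenient XPath 1.0 engine): XPath's string() and normalize-space() functions return an element's string-value, i.e. all of its descendant text nodes joined together, which is the ".html()-like" view that text() does not give. In XSLT 1.0 terms, the analogous change would be comparing string(.) or normalize-space(.) instead of text().

        from lxml import html

        SAMPLE = """
        <h3 class="title">
          <span class="content">Title</span>
          <span class="content"> 2</span>
          <span class="content"> is</span>
          <span class="content"> complex</span>
        </h3>
        """

        root = html.fromstring(SAMPLE)

        # normalize-space() of the element's string-value joins every descendant
        # text node and collapses the indentation whitespace between the spans.
        print(root.xpath("normalize-space(//h3[@class='title'])"))  # Title 2 is complex

        # The same result without an XPath string function, by joining text nodes directly.
        h3 = root.xpath("//h3[@class='title']")[0]
        print(" ".join("".join(h3.itertext()).split()))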

    Read the article

  • Extract news links from news website

    - by Ali
    Is there any reliable method to find the collection of links that lead to detail news pages? In other words, after visiting the front page of a news website, I just want the links that refer to a news item. Any solution?
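
    There is no universally reliable rule, but here is a minimal Python sketch of a common heuristic, assuming the requests and bs4 packages are available: collect every anchor on the front page and keep only those whose URL looks like an article (dated paths or long hyphenated slugs), which discards most navigation and category links. The URL pattern is an assumption to tune per site, not a general rule.

        import re
        import requests
        from bs4 import BeautifulSoup
        from urllib.parse import urljoin

        # Heuristic: many news sites put a date or a long hyphenated slug in article URLs.
        ARTICLE_PATTERN = re.compile(r"/20\d{2}/\d{2}/|-[a-z0-9-]{20,}")

        def extract_article_links(front_page_url):
            html = requests.get(front_page_url, timeout=10).text
            soup = BeautifulSoup(html, "html.parser")
            links = set()
            for a in soup.find_all("a", href=True):
                url = urljoin(front_page_url, a["href"])
                if ARTICLE_PATTERN.search(url):
                    links.add(url)
            return sorted(links)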

    Read the article

  • Get the rendered text from HTML (Delphi)

    - by Daisetsu
    I have some HTML and I need to extract the actual written text from the page. So far I have tried using a web browser to render the page, then going to the document property and grabbing the text. This works, but only where the browser is supported (the IE COM object). The problem is that I want this to be able to run under Wine as well, so I need a solution that doesn't use IE COM. There must be a programmatic way to do this that is reasonable.

    Read the article

  • jQuery jqGrid breaks when contentType=application/json?

    - by JK
    I've had to use $.ajaxSetup() to globally change the contentType to application/json:

        $.ajaxSetup({ contentType: "application/json; charset=utf-8" });

    (See this question for why I had to use application/json: http://stackoverflow.com/questions/2792603/aspnet-mvc-why-is-modelstate-isvalid-false-the-x-field-is-required-when-that) But this breaks the jQuery jqGrid with this error:

        Invalid JSON primitive: _search

    The POST data it is trying to send is:

        _search=false&nd=1274042681880&rows=20&page=1&sidx=&sord=asc

    which of course is not in JSON format, so it fails. Is there any way to tell jqGrid what content type to use? I have searched on the jqGrid wiki, but it doesn't have much documentation about anything really: http://www.trirand.com/jqgridwiki/doku.php?do=search&id=contenttype&fulltext=Search

    Read the article

  • Getting BeautifulSoup to find a specific <p>

    - by Ryan
    I'm trying to put together a basic HTML scraper for a variety of scientific journal websites, specifically trying to get the abstract or introductory paragraph. The current journal I'm working on is Nature, and the article I've been using as my sample can be seen at http://www.nature.com/nature/journal/v463/n7284/abs/nature08715.html. I can't get the abstract out of that page, however. I'm searching for everything between the <p class="lead">...</p> tags, but I can't seem to figure out how to isolate them. I thought it would be something simple like:

        from BeautifulSoup import BeautifulSoup
        import re
        import urllib2

        address = "http://www.nature.com/nature/journal/v463/n7284/full/nature08715.html"
        html = urllib2.urlopen(address).read()
        soup = BeautifulSoup(html)
        abstract = soup.find('p', attrs={'class': 'lead'})
        print abstract

    Using Python 2.5 and BeautifulSoup 3.0.8, running this returns None. I have no option of using anything else that needs to be compiled/installed (like lxml). Is BeautifulSoup confused, or am I?
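
    A small Python sketch of how one might narrow this down, using the modern bs4 package and Python 3 purely for illustration (the question itself is pinned to Python 2.5 and BeautifulSoup 3.0.8, and the /abs/ vs /full/ URL difference is something to check rather than a known diagnosis): confirm what was actually downloaded and which paragraph classes the parser saw before concluding the class filter is at fault.

        import urllib.request
        from bs4 import BeautifulSoup

        # URL taken from the question; it may no longer resolve today.
        address = "http://www.nature.com/nature/journal/v463/n7284/full/nature08715.html"
        html = urllib.request.urlopen(address).read()
        print(len(html), "bytes fetched")

        soup = BeautifulSoup(html, "html.parser")

        # Which <p> classes did the parser actually see?
        paragraphs = soup.find_all("p")
        print(len(paragraphs), "paragraphs parsed")
        print({tuple(p.get("class") or ()) for p in paragraphs})

        # The original query, spelled with bs4's class_ shortcut.
        lead = soup.find("p", class_="lead")
        print(lead.get_text(strip=True) if lead else "no <p class='lead'> found")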

    Read the article

  • Response.TransmitFile() with UNC share (ASP.NET)

    - by frankadelic
    In the comments of this page: http://msdn.microsoft.com/en-us/library/12s31dhy.aspx ...it says that TransmitFile() cannot be used with UNC shares. As far as I can tell, this is the case; I get this error in the Event Log when I attempt it:

        TransmitFile failed. File Name: \\myshare1\e$\file.zip, Impersonation Enabled: 0, Token Valid: 1, HRESULT: 0x8007052e

    The suggested alternative is to use WriteFile(); however, this is problematic because it loads the file into memory. In my application, the files are 200MB, so this is not going to scale. Is there a method in ASP.NET for streaming files to users that is both scalable (doesn't read the entire file into RAM or tie up ASP.NET threads) and works with UNC shares? Mapping a network drive as a virtual directory is not an option for us. I would like to avoid copying the file to the local web server as well. Thanks

    Read the article

  • How do you parse HTML in VB.NET?

    - by tooleb
    I would like to know if there is a simple way to parse HTML in VB.NET. I know that HTML is not a strict subset of XML, but it would be nice if it could be treated that way. Is there anything out there that would let me parse HTML in an XML-like way in VB.NET?

    Read the article
