Search Results

Search found 42421 results on 1697 pages for 'page numbers'.

  • Looking for a recommendation on measuring a high availability app that is using a CDN.

    - by T Reddy
    I work for a Fortune 500 company that struggles with accurately measuring performance and availability for high availability applications (i.e., apps that are up 99.5% of the time with 5-second page-to-page navigation). We factor in both scheduled and unscheduled downtime to determine this availability number. However, we recently added a CDN into the mix, which complicates our metrics a bit. The CDN now handles about 75% of our traffic, while sending the remainder to our own servers.

    We attempt to measure what we call a "true user experience" (i.e., our testing scripts emulate a typical user clicking through the application). These monitoring scripts sit outside of our network, which means we're hitting the CDN about 75% of the time. Management has decided that we take the worst-case scenario to measure availability: if our origin servers are having problems but the CDN is serving content just fine, we still take a hit on availability, and the same is true the other way around. My thought is that as long as the "user experience" is successful, we should not unnecessarily punish ourselves. After all, a CDN is there to improve performance and availability!

    I'm wondering if anyone has any knowledge of how other Fortune 500 companies calculate their availability numbers. I look at apple.com, for instance, as an example of a storefront that uses a CDN and never seems to be down (unless there is about to be a major product announcement). It would be great to have some hard, factual data, because I don't believe that we need to unnecessarily hurt ourselves on these metrics. We are making business decisions based on these numbers.

    I can say, however, that because these metrics are visible to management, issues get addressed and resolved pretty fast (read: we cut through the red tape pretty quickly). Unfortunately, as a developer, I don't want management to think that the application is up or down because some external factor (i.e., the CDN) is influencing the numbers. Thoughts? (I mistakenly posted this question on StackOverflow, sorry in advance for the cross-post.)
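
    To make the disagreement concrete, the two ways of counting can be put side by side with a toy calculation. This is purely illustrative: the uptime figures below are made up, and the 75/25 split is the traffic share mentioned above.

        # Illustrative only: compare "worst case" availability with a traffic-weighted
        # "user experience" availability. Uptime figures are invented.
        origin_uptime = 0.994    # fraction of the period the origin servers were healthy
        cdn_uptime = 0.9995      # fraction of the period the CDN was healthy
        cdn_share, origin_share = 0.75, 0.25

        # Worst case: the app counts as down whenever either side is down.
        # If the two kinds of outage never overlap, that is:
        worst_case = origin_uptime + cdn_uptime - 1

        # User experience: a request only fails if the path actually serving it was down.
        user_experience = cdn_share * cdn_uptime + origin_share * origin_uptime

        print(worst_case, user_experience)   # 0.9935 vs. ~0.99813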

  • Excel concatenate strings from cells listed in third cell

    - by Puddingfox
    I have an Excel 2007 workbook that has five columns:

        A. A list of machines
        B. A list of service numbers for each machine
        C. A list of service names for each machine
        ...(nothing here)
        I. A list of Service Numbers
        J. A list of Service Names

    Each machine listed in column A has one or more services running on it from the list in column J. I would like to be able to add services to a machine (i.e. updating the cell in column C) by simply adding another comma-separated number to column B. For example, the first row would look like this, assuming Machine1 has the first three services:

        |    A     |   B   |       C
        | Machine1 | 1,2,3 | HTTP,HTTPS,DNS

    Right now I have to manually update the formula in column C for each change I make. The current formula is:

        =CONCATENATE(J1,",",J2,",",J3)

    I would like to use something like this (please forgive my syntax; I'm a coder and I'm treating cell B1 as if it is an indexed array):

        =CONCATENATE(CELL("J"+B1[0] , "," , "J"+B1[1] , "," "J"+B1[2])

    although having variable numbers of services makes this even more difficult. Is there any way of doing this? For reference, this is columns I and J:

        | I  | J
        | 1  | HTTP
        | 2  | HTTPS
        | 3  | DNS
        | .. | .....
        | 16 | Service16

    I don't know very much about Excel, so any help is greatly appreciated.
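
    Excel 2007 has no built-in function that splits a comma-separated list and joins the corresponding lookups, so one workable route is a small VBA user-defined function. The sketch below is an assumption about the sheet layout (service numbers in column I, names in column J); the function name JoinServices is made up.

        ' Hypothetical UDF: =JoinServices(B1, $I$1:$J$16) returns "HTTP,HTTPS,DNS" for B1 = "1,2,3".
        Public Function JoinServices(serviceNumbers As String, lookupTable As Range) As String
            Dim parts() As String
            Dim i As Integer
            Dim result As String
            parts = Split(serviceNumbers, ",")
            For i = LBound(parts) To UBound(parts)
                If Len(result) > 0 Then result = result & ","
                ' Column 1 of lookupTable holds the service number, column 2 the service name.
                result = result & Application.WorksheetFunction.VLookup( _
                    CLng(Trim(parts(i))), lookupTable, 2, False)
            Next i
            JoinServices = result
        End Function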

  • VCL - configuration for Magento and Varnish 3.0.2

    - by Tomas
    I would like to kindly ask if there's someone who can help me configure Varnish for Magento to reach far more hits. My current ratio from varnishstat is:

        cache_hit=271  cache_miss=926

    I'm asking because I've googled almost every site related to this topic, but 99.9% of the configurations don't work because of outdated code.

    Details of my set-up: I use Varnish on port 80, Apache on port 81, PageCache as the Magento Varnish module, APC for PHP speed and Memcached for dynamic caching. Load speed is about 1.5 s on the home page (Pingdom.com average results) from the USA and 2.5 s from Europe. The servers are located in Toronto, Canada.

    EDIT: This is my full VCL configuration http://pastebin.com/885BzHCs (I just use xxx.xxx.xxx.xxx for my IPs). This is the info from the command (varnishtop -i TxHeader -I Cookie):

        TxHeader Cookie: frontend=965b5...(*lots of numbers); adminhtml=3ae65...(*lots of numbers); EXTERNAL_NO_CACHE=1

    "(*lots of numbers)" is just my addition to the output. Any idea how to avoid Varnish hitting these cookies? (If I understand the problem correctly, the goal is to keep Varnish from seeing the cookie and therefore refusing to cache the home page.) Thank you for any help!
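
    If those frontend/adminhtml session cookies are what keeps requests from being cacheable, the usual Magento-with-Varnish approach is to strip the Cookie header in vcl_recv for everything that isn't session-specific. The sketch below is an assumption, not a drop-in for the pastebin config: the path list has to match your store's URLs, and EXTERNAL_NO_CACHE is the marker Magento's Varnish modules typically set for pages that must stay uncached.

        sub vcl_recv {
            # Leave session-bound areas alone; strip cookies everywhere else so pages can be cached.
            if (req.http.Cookie !~ "EXTERNAL_NO_CACHE" &&
                req.url !~ "^/(index\.php/)?(admin|adminhtml|checkout|customer|wishlist|api)") {
                unset req.http.Cookie;
            }
        }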

  • Can varnish cache files without specific extension or residing in specific directory

    - by pataroulis
    I have a Varnish installation to cache (MANY) images that my service serves. It is about 200 images of around 4k per second, and Varnish happily serves them according to the following rule:

        if (req.request == "GET" && req.url ~ "\.(css|gif|jpg|jpeg|bmp|png|ico|img|tga|wmf)$") {
            remove req.http.cookie;
            return(lookup);
        }

    Now, the thing is that I recently added another service on the same server that creates thumbnails to serve, but it does not add a specific extension. The files are of the following filename pattern:

        http://www.example.com/thumbnails/date-of-thumbnail/xxxxxxxxx.xx

    where xx are numbers, so xxxxxxxxx.xx could be 6482364283.73 (two numbers at the end). (Actually this is the timestamp, so I can keep extra info in the filename.) That has the side effect that Varnish does not cache them and I see them constantly being served by Apache itself.

    Even though I can change the format from now on to create thumbs ending in .jpg, is there a way to change the VCL file of my Varnish daemon to either cache everything under a directory (the thumbnails directory) or everything with two numbers at its extension? Let me know if I can provide any additional info! Thanks!
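
    Both variants fit the shape of the existing rule. A sketch (untested against your full VCL, and the regexes are assumptions about your URL layout) could look like:

        # Cache everything under /thumbnails/, or alternatively anything whose "extension" is all digits.
        if (req.request == "GET" &&
            (req.url ~ "^/thumbnails/" || req.url ~ "\.[0-9]+$")) {
            remove req.http.cookie;
            return(lookup);
        }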

  • How can I stop Excel from eating my delicious CSV files and excreting useless data?

    - by atroon
    I have a database which tracks sales of widgets by serial number. Users enter purchaser data and quantity, and scan each widget into a custom client program. They then finalize the order. This all works flawlessly. Some customers want an Excel-compatible spreadsheet of the widgets they have purchased. We generate this with a PHP script which queries the database and outputs the result as a CSV with the store name and associated data. This works perfectly well too. When opened in a text editor such as Notepad or vi, the file looks like this:

        "Account Number","Store Name","S1","S2","S3","Widget Type","Date"
        "4173","SpeedyCorp","268435459705526269","","268435459705526269","848 Model Widget","2011-01-17"

    As you can see, the serial numbers are present (in this case twice, not all secondary serials are the same) and are long strings of numbers. When this file is opened in Excel, the result becomes:

        Account Number  Store Name  S1           S2  S3           Widget Type       Date
        4173            SpeedyCorp  2.68435E+17      2.68435E+17  848 Model Widget  2011-01-17

    As you may have observed, the serial numbers are enclosed by double quotes. Excel does not seem to respect text qualifiers in .csv files. When importing these files into Access, we have zero difficulty. When opening them as text, no trouble at all. But Excel, without fail, converts these files into useless garbage. Trying to instruct end users in the art of opening a CSV file with a non-default application is becoming, shall we say, tiresome. Is there hope? Is there a setting I've been unable to find? This seems to be the case with Excel 2003, 2007, and 2010.
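
    One common workaround on the generating side, since double-clicking a .csv never gives Excel a chance to ask about column types, is to emit the long serials as a quoted ="..." formula that evaluates to text. The PHP below is only a sketch of that idea; the helper name and the hard-coded row are stand-ins, not your actual export script, and consumers other than Excel will then see the wrapper, so it may be worth offering this as a separate "Excel" download.

        <?php
        // Hypothetical helper: wrap a value as ="..." so Excel keeps it as literal text.
        function excel_text_field($value) {
            return '"=""' . $value . '"""';
        }

        echo '"4173","SpeedyCorp",'
            . excel_text_field('268435459705526269') . ',"",'
            . excel_text_field('268435459705526269')
            . ',"848 Model Widget","2011-01-17"' . "\n";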

  • How to search file for matching whole lines?

    - by WilliamKF
    I have a command which sends to stdout a series of numbers, each on a new line. I need to determine whether a particular number exists in the list. The match needs to be exact, not a substring. For example, a simple approach that does not work would be:

        /run/command/outputting/numbers | grep -c <numberToSearch>

    This gives a false positive on the following list when searching for '456':

        1234567
        98765
        23
        1771

    If the count is non-zero, a match was found; if it is zero, the number is not in the list. The issue with this is that the numberToSearch can match a subsequence of digits on a line, whereas I only want hits on the whole line. I looked at the man page for grep and did not see any way to only match whole lines. Is there a way to do this, or would I be better off using awk or sed or some other tool instead? I need a binary answer as to whether the number being searched for is present or not.
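
    grep does have a whole-line mode, so a sketch of the check (reusing the placeholder command path from above) might be:

        # -F: fixed string, -x: match the whole line, -q: no output, just the exit status
        if /run/command/outputting/numbers | grep -Fxq "456"; then
            echo "number is present"
        else
            echo "number is absent"
        fi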

  • Cannot debug views in MVC2 project, getting "The resource cannot be found" error

    - by schefdev
    I'm running Visual Studio 2008 SP1 on Win7, with MVC2 RTM installed. I created a new MVC2 project using the wizard and am unable to debug specific pages. With WebForms, and even MVC1, I was able to sit on a View page, hit F5, and then have the integrated web server in VS2008 start on the page I was working on. Very handy for building up app logic. When I try this now I get a "The resource cannot be found" error page.

    I retried this just now with a stock new MVC2 Web Application project. Here are the steps I took after creating the new project to reproduce:

        1. Open up project settings. Under the Web subtab, set the Start Action to "Current Page". Leave all the other settings as is.
        2. Open one of the views up (e.g. Account/Register.aspx).
        3. Hit F5 to debug the project.
        4. Note that the browser window which displays shows the error message "The resource cannot be found".

    The link I saw in my browser for this run was:

        http://localhost:49471/Views/Account/Register.aspx

    I did some googling and found suggestions related to ensuring all HTTP server pieces were installed. I double checked and made sure that "HTTP Errors" and "HTTP Redirection" were both installed. If I leave the project setting as it was originally, set to "Specific Page" with nothing in the text box, then routing works and I always get the default home page. I'm hoping this isn't the only option. Thanks!

  • Java (JSP): repeating the contentType header in a "sub-jsp"

    - by Webinator
    What happens when headers are repeated in a .jsp you include in another .jsp? For example, example.jsp starts with this (and includes support-header.jsp):

        <?xml version="1.0" encoding="UTF-8"?>
        <jsp:root version="2.0" xmlns:jsp="http://java.sun.com/JSP/Page">
        <jsp:directive.page contentType="text/html; charset=UTF-8" />
        <div class="content">
            <jsp:include page="support-header.jsp"/>
            ...

    And then support-header.jsp also starts with this:

        <?xml version="1.0" encoding="UTF-8"?>
        <jsp:root version="2.0" xmlns:jsp="http://java.sun.com/JSP/Page">
        <jsp:directive.page contentType="text/html; charset=UTF-8" />
        ...

    Is that a problem? Is it bad practice? What concretely happens when you repeat, several times, a header that only corresponds to one header in the resulting .html page?

  • CGContextDrawPDFPage taking up large amounts of memory

    - by Ed Marty
    I have a PDF file that I want to draw in outline form. I want to draw the first several pages of the document, each in its own UIImage, to use on a button so that when clicked, the main display will navigate to the clicked page. However, CGContextDrawPDFPage seems to be using copious amounts of memory when attempting to draw the page. Even though the image is only supposed to be around 100px tall, the application crashes while drawing one page in particular, which according to Instruments allocates about 13 MB of memory just for that one page. Here's the code for drawing:

        // Note: This is always called in a background thread, but the autorelease pool is set up elsewhere
        + (void)drawPage:(CGPDFPageRef)m_page inRect:(CGRect)rect inContext:(CGContextRef)g
        {
            CGPDFBox box = kCGPDFMediaBox;
            CGAffineTransform t = CGPDFPageGetDrawingTransform(m_page, box, rect, 0, YES);
            CGRect pageRect = CGPDFPageGetBoxRect(m_page, box);

            // Start the drawing
            CGContextSaveGState(g);

            // Clip to our bounding box
            CGContextClipToRect(g, pageRect);

            // Now we have to flip the origin to top-left instead of bottom-left
            // First: flip the y-axis
            CGContextScaleCTM(g, 1, -1);
            // Second: move the origin
            CGContextTranslateCTM(g, 0, -rect.size.height);

            // Now apply the transform to draw the page within the rect
            CGContextConcatCTM(g, t);

            // Finally, draw the page
            // The important bit. Commenting out the following line "fixes" the crashing issue.
            CGContextDrawPDFPage(g, m_page);

            CGContextRestoreGState(g);
        }

    Is there a better way to draw this image that doesn't take up huge amounts of memory?
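
    One way to keep the cost closer to the size of the button image than to the size of the page is to render into a small bitmap context and keep only the resulting UIImage. This is a sketch built on the drawPage:inRect:inContext: method above; the method name thumbnailForPage:size: is made up, and on older iOS versions the UIKit image-context calls should be made on the main thread.

        // Hypothetical wrapper: render one PDF page into a small bitmap and return the UIImage.
        + (UIImage *)thumbnailForPage:(CGPDFPageRef)page size:(CGSize)size
        {
            UIGraphicsBeginImageContextWithOptions(size, YES, 0.0);
            CGContextRef g = UIGraphicsGetCurrentContext();

            // Reuse the existing drawing code, but target a rect no bigger than the thumbnail.
            [self drawPage:page inRect:CGRectMake(0, 0, size.width, size.height) inContext:g];

            UIImage *image = UIGraphicsGetImageFromCurrentImageContext();
            UIGraphicsEndImageContext();
            return image;
        }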

  • How do I use DomainContext.Load in my ViewModel?

    - by kristian
    I'm trying to use RIA Services to provide data to my Silverlight application by calling DomainContext.Load to retrieve a collection of widgets. I want to expose this collection through a property of the ViewModel so I can bind a control to the collection in my page. I think my approach must be fundamentally wrong, because Load is called asynchronously and the data is therefore not available when my page loads and the control tries to bind. Can someone please show me the right way to do this?

    My Silverlight page has the following XAML:

        <navigation:Page x:Class="Demo.UI.Pages.WidgetPage"
            // the usual xmlns stuff here...
            xmlns:local="clr-namespace:Demo.UI.Pages"
            mc:Ignorable="d"
            xmlns:navigation="clr-namespace:System.Windows.Controls;assembly=System.Windows.Controls.Navigation"
            d:DataContext="{d:DesignInstance Type=local:WidgetPageModel, IsDesignTimeCreatable=False}"
            d:DesignWidth="640" d:DesignHeight="480"
            Title="Widget Page">
            <Canvas x:Name="LayoutRoot">
                <ListBox ItemsSource="{Binding RedWidgets}" Width="150" Height="500" />
            </Canvas>
        </navigation:Page>

    My ViewModel looks like this:

        public class WidgetPageModel
        {
            private WidgetDomainContext WidgetContext { get; set; }

            public WidgetPageModel()
            {
                this.WidgetContext = new WidgetDomainContext();
                WidgetContext.Load(WidgetContext.GetAllWidgetsQuery(), false);
            }

            public IEnumerable<Widget> RedWidgets
            {
                get { return this.WidgetContext.Widgets.Where(w => w.Colour == "Red"); }
            }
        }
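
    A common pattern with WCF RIA Services is to accept that Load finishes later and expose a collection that raises change notifications, filling it in the Load callback so the already-bound ListBox picks the items up when they arrive. The sketch below assumes the Load(query, callback, userState) overload and an ObservableCollection; it illustrates the shape of the idea rather than your exact types.

        using System.Collections.ObjectModel;
        using System.Linq;

        public class WidgetPageModel
        {
            private readonly WidgetDomainContext _context = new WidgetDomainContext();

            // The ListBox binds to this; it is empty at first and fills when the async load completes.
            public ObservableCollection<Widget> RedWidgets { get; private set; }

            public WidgetPageModel()
            {
                RedWidgets = new ObservableCollection<Widget>();
                _context.Load(_context.GetAllWidgetsQuery(), loadOperation =>
                {
                    if (!loadOperation.HasError)
                    {
                        foreach (var widget in loadOperation.Entities.Where(w => w.Colour == "Red"))
                        {
                            RedWidgets.Add(widget);
                        }
                    }
                }, null);
            }
        }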

  • Selenium screenshots using rspec

    - by Thomas Albright
    I am trying to capture screenshots on test failure using selenium-client and rspec. I run this command:

        $ spec my_spec.rb \
            --require 'rubygems,selenium/rspec/reporting/selenium_test_report_formatter' \
            --format=Selenium::RSpec::SeleniumTestReportFormatter:./report.html

    It creates the report correctly when everything passes, since no screenshots are required. However, when a test fails, I get this message and the report has blank screenshots:

        WARNING: Could not capture HTML snapshot: execution expired
        WARNING: Could not capture page screenshot: execution expired
        WARNING: Could not capture system screenshot: execution expired
        Problem while capturing system state: execution expired

    What is causing this 'execution expired' error? Am I missing something important in my spec? Here is the code for my_spec.rb:

        require 'rubygems'
        gem "rspec", "=1.2.8"
        gem "selenium-client"
        require "selenium/client"
        require "selenium/rspec/spec_helper"

        describe "Databases" do
          attr_reader :selenium_driver
          alias :page :selenium_driver

          before(:all) do
            @selenium_driver = Selenium::Client::Driver.new \
              :host => "192.168.0.10",
              :port => 4444,
              :browser => "*firefox",
              :url => "http://192.168.0.11/",
              :timeout_in_seconds => 10
          end

          before(:each) do
            @selenium_driver.start_new_browser_session
          end

          # The system capture needs to happen BEFORE closing the Selenium session
          append_after(:each) do
            @selenium_driver.close_current_browser_session
          end

          it "backed up" do
            page.open "/SQLDBDetails.aspx"
            page.click "btnBackup", :wait_for => :page
            page.text?("Pending Backup").should be_true
          end
        end

  • The file is damaged and could not be repaired

    - by acadia
    Hello experts, I am trying to display a PDF file in my ASP.NET page based on the binary data received from an ASP.NET web service. Below is the code. Though I am getting the data from the web service, for some reason, if I run the code below on page load, I get the above-mentioned error. Please help.

        Response.Buffer = True
        Response.ContentType = "application/pdf"
        Response.AddHeader("Content-Disposition", "Inline")

        Dim ws As New imageGenService.Service1
        Dim imagebyte As Byte() = Nothing
        imagebyte = ws.generateSamplePDF()

        If imagebyte IsNot Nothing Then
            '"attachment; filename=Whatever.pdf"
            Dim MemStream As New System.IO.MemoryStream
            Dim doc As New iTextSharp.text.Document
            Dim reader As iTextSharp.text.pdf.PdfReader
            Dim numberOfPages As Integer
            Dim currentPageNumber As Integer
            Dim writer As iTextSharp.text.pdf.PdfWriter = iTextSharp.text.pdf.PdfWriter.GetInstance(doc, MemStream)

            doc.Open()

            Dim cb As iTextSharp.text.pdf.PdfContentByte = writer.DirectContent
            Dim page As iTextSharp.text.pdf.PdfImportedPage
            Dim rotation As Integer

            reader = New iTextSharp.text.pdf.PdfReader(imagebyte)
            numberOfPages = reader.NumberOfPages
            currentPageNumber = 0

            Do While (currentPageNumber < numberOfPages)
                currentPageNumber += 1
                doc.SetPageSize(PageSize.LETTER)
                doc.NewPage()
                page = writer.GetImportedPage(reader, currentPageNumber)
                rotation = reader.GetPageRotation(currentPageNumber)
                If (rotation = 90) Or (rotation = 270) Then
                    cb.AddTemplate(page, 0, -1.0F, 1.0F, 0, 0, reader.GetPageSizeWithRotation(currentPageNumber).Height)
                Else
                    cb.AddTemplate(page, 1.0F, 0, 0, 1.0F, 0, 0)
                End If
            Loop

            If MemStream Is Nothing Then
                Response.Write("No Data is available for output")
            Else
                Response.BinaryWrite(MemStream.GetBuffer())
            End If
        End If
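
    One thing that stands out in the code above is that the iTextSharp Document is never closed before the stream is written out, and MemoryStream.GetBuffer() returns the whole internal buffer, including unused trailing bytes. A hedged adjustment to the tail of the routine (a sketch, not verified against your web service) would be:

        ' Close the document so iTextSharp writes the PDF trailer, then send only the bytes
        ' that were actually written (ToArray) rather than the raw internal buffer (GetBuffer).
        doc.Close()

        If MemStream.Length = 0 Then
            Response.Write("No data is available for output")
        Else
            Response.BinaryWrite(MemStream.ToArray())
            Response.End()
        End If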

  • ASP.NET MVC Session Expiration

    - by Andrew Flanagan
    We have an internal ASP.NET MVC application that requires a logon. Logging on works great and does what's expected. We have a session expiration of 15 minutes. After sitting on a single page for that period of time, the user has lost the session. If they attempt to refresh the current page or browse to another, they will get a logon page. We keep their request stored so once they've logged in they can continue on to the page that they've requested. This works great.

    However, my issue is that on some pages there are AJAX calls. For example, a user may fill out part of a form, wander off and let their session expire. When they come back, the screen is still displayed. If they simply fill in a box (which will make an AJAX call), the AJAX call will return the logon page (inside whatever div the AJAX should have returned the actual results into). This looks horrible.

    I think that the solution is to make the page itself expire (so that when a session is terminated, they are automatically returned to the logon screen without any action on their part). However, I'm wondering if there are opinions/ideas on how best to implement this, specifically in regards to best practices in ASP.NET MVC.
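
    For the "make the page itself expire" idea, a small client-side timer that mirrors the server's 15-minute session window is often enough on its own. The sketch below belongs in the site master page and assumes the default /Account/LogOn route, which may not match this app; a complementary server-side option is to answer AJAX requests from an expired session with a status code instead of logon markup.

        // Redirect to the logon page once the server session has expired, so a stale
        // page never sits there issuing AJAX calls that come back as logon HTML.
        var sessionTimeoutMs = 15 * 60 * 1000;   // keep in sync with the server's session timeout

        setTimeout(function () {
            // /Account/LogOn is the MVC template default; adjust to the app's actual logon route.
            window.location.href = '/Account/LogOn?returnUrl=' +
                encodeURIComponent(window.location.pathname + window.location.search);
        }, sessionTimeoutMs);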

  • Display a Photo Gallery using Asp.Net and SQL

    - by sweetcoder
    I have recently added photos to my SQL database and have displayed them on an *.aspx page using Asp:Image. The ImageUrl for this control points at a separate *.aspx page. It works great for profile pictures.

    I have a new issue at hand. I need each user to be able to have their own photo gallery page. I want the photos to be stored in the SQL database. Storing the photos is not difficult; the issue is displaying them. I want the photos to be laid out in a thumbnail grid fashion, and when the user clicks on a photo, it should bring up the photo on a separate page. What is the best way to do this? Obviously it is not to use Asp:Image. I am curious whether I should use a GridView. If so, how do I do that, and should there be a thumbnail size stored in the database for this? Once a picture is clicked on, how does the other page know to display the correct image? I would think it is not correct to send the photoId through the URL. Below is code from the page I use to display profile pictures:

        protected void Page_Load(object sender, EventArgs e)
        {
            string sql = "SELECT [ProfileImage] FROM [UserProfile] WHERE [UserId] = '" + User.Identity.Name.ToString() + "'";
            string strCon = System.Web.Configuration.WebConfigurationManager.ConnectionStrings["SocialSiteConnectionString"].ConnectionString;
            SqlConnection conn = new SqlConnection(strCon);
            SqlCommand comm = new SqlCommand(sql, conn);
            conn.Open();
            Response.ContentType = "image/jpeg";
            Response.BinaryWrite((byte[])comm.ExecuteScalar());
            conn.Close();
        }
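
    One conventional layout is a data-bound control (a DataList or ListView) whose ItemTemplate renders an image tag pointing at a lightweight handler that streams one photo by id; passing the photo id in the query string is the usual approach, provided the handler checks that the photo belongs to the gallery's owner. The sketch below is an assumption: the handler name and the table/column names are made up, and it uses a parameterized query.

        using System;
        using System.Data.SqlClient;
        using System.Web;
        using System.Web.Configuration;

        // Hypothetical Photo.ashx: an <img src="Photo.ashx?id=123" /> in the gallery grid
        // (or the ImageUrl of an asp:Image in a DataList ItemTemplate) streams one photo.
        public class Photo : IHttpHandler
        {
            public void ProcessRequest(HttpContext context)
            {
                int photoId = int.Parse(context.Request.QueryString["id"]);
                string strCon = WebConfigurationManager.ConnectionStrings["SocialSiteConnectionString"].ConnectionString;

                using (var conn = new SqlConnection(strCon))
                using (var comm = new SqlCommand("SELECT [ImageData] FROM [UserPhoto] WHERE [PhotoId] = @id", conn))
                {
                    comm.Parameters.AddWithValue("@id", photoId);
                    conn.Open();
                    context.Response.ContentType = "image/jpeg";
                    context.Response.BinaryWrite((byte[])comm.ExecuteScalar());
                }
            }

            public bool IsReusable
            {
                get { return true; }
            }
        }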

  • UIScrollView without paging but with touchesmoved

    - by BittenApple
    I have a UIScrollView that I use to display PDF pages in. I don't want to use paging (yet). I want to display content based on the touchesMoved event (so after a horizontal swipe). This works (sort of), but instead of catching a single swipe and showing one page, the swipe seems to get broken into hundreds of pieces and one swipe acts as if you're moving a slider! I have no clue what I'm doing wrong. Here's the experimental "display next page" code, which works on single taps:

        - (void)nacrtajNovuStranicu:(CGContextRef)myContext {
            CGContextTranslateCTM(myContext, 0.0, self.bounds.size.height);
            CGContextScaleCTM(myContext, 1.0, -1.0);
            CGContextSetRGBFillColor(myContext, 255, 255, 255, 1);
            CGContextFillRect(myContext, CGRectMake(0, 0, 320, 412));

            size_t brojStranica = CGPDFDocumentGetNumberOfPages(pdfFajl);
            if (pageNumber < brojStranica) {
                pageNumber++;
            } else {
                // end of the PDF file, don't page any further
            }

            CGPDFPageRef page = CGPDFDocumentGetPage(pdfFajl, pageNumber);
            CGContextSaveGState(myContext);
            CGAffineTransform pdfTransform = CGPDFPageGetDrawingTransform(page, kCGPDFCropBox, self.bounds, 0, true);
            CGContextConcatCTM(myContext, pdfTransform);
            CGContextDrawPDFPage(myContext, page);
            CGContextRestoreGState(myContext);

            // refresh the display
            [self setNeedsDisplay];
        }

    Here's the swiping code:

        - (void)touchesBegan:(NSSet *)touches withEvent:(UIEvent *)event {
            [super touchesBegan:touches withEvent:event];
            UITouch *touch = [touches anyObject];
            gestureStartPoint = [touch locationInView:self];
        }

        - (void)touchesMoved:(NSSet *)touches withEvent:(UIEvent *)event {
            UITouch *touch = [touches anyObject];
            CGPoint currentPosition = [touch locationInView:self];
            CGFloat deltaX = fabsf(gestureStartPoint.x - currentPosition.x);
            CGFloat deltaY = fabsf(gestureStartPoint.y - currentPosition.y);
            if (deltaX >= kMinimumGestureLength && deltaY <= kMaximumVariance) {
                [self nacrtajNovuStranicu:(CGContextRef)UIGraphicsGetCurrentContext()];
            }
        }

    The code sits in a UIView which displays the PDF content. Perhaps I should place it into the UIScrollView, or is the "display next page" code wrong?
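
    Two things are worth noting, offered as a sketch rather than a definitive fix: touchesMoved: fires continuously for the whole drag, so the drawing method runs once per movement sample instead of once per swipe, and UIGraphicsGetCurrentContext() only returns a usable context inside drawRect:, not inside a touch handler. The usual shape of the fix is to decide the gesture once when the finger lifts and let drawRect: do the drawing with the context UIKit supplies:

        // Decide the swipe once, when the finger lifts, instead of on every touchesMoved: callback.
        - (void)touchesEnded:(NSSet *)touches withEvent:(UIEvent *)event {
            UITouch *touch = [touches anyObject];
            CGPoint endPoint = [touch locationInView:self];
            CGFloat deltaX = fabsf(gestureStartPoint.x - endPoint.x);
            CGFloat deltaY = fabsf(gestureStartPoint.y - endPoint.y);

            if (deltaX >= kMinimumGestureLength && deltaY <= kMaximumVariance) {
                pageNumber++;              // advance the state here (with the same bounds check as before)
                [self setNeedsDisplay];    // and let drawRect: render the new page exactly once
            }
        }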

  • ASP.NET MVC pagination problem????

    - by MD_Oppenheimer
    OK, this is starting to get mildly irritating. I tried to implement Twitter-style paging using ASP.NET MVC and jQuery. My problem is that when not using Request.IsAjaxRequest() (for users with JavaScript turned off) it works fine, obviously posting back the whole page. When I run the code for Request.IsAjaxRequest(), it skips entries and does not return results in order. This is the code I have:

        public ActionResult Index(int? startRow)
        {
            StatusUpdatesRepository statusUpdatesRepository = new StatusUpdatesRepository();

            if (!startRow.HasValue)
                startRow = Globals.Settings.StatusUpdatesSection.StatusUpdateCount; // 5, default starting row

            // Retrieve the first page with a page size of entryCount
            int totalItems;
            if (Request.IsAjaxRequest())
            {
                IEnumerable<StatusUpdate> PagedEntries = statusUpdatesRepository.GetLastStatusUpdates(startRow.Value, Globals.Settings.StatusUpdatesSection.StatusUpdateCount, out totalItems);
                if (startRow < totalItems)
                    AddMoreUrlToViewData(startRow.Value);
                return View("StatusUpdates", PagedEntries);
            }

            // Retrieve the first page with a page size of global setting
            // First run: skip 0, take 5
            IEnumerable<StatusUpdate> entries = statusUpdatesRepository.GetLastStatusUpdates(0, startRow.Value, out totalItems);
            if (startRow < totalItems)
                AddMoreUrlToViewData(startRow.Value);
            return View(entries);
        }

        private void AddMoreUrlToViewData(int entryCount)
        {
            ViewData["moreUrl"] = Url.Action("Index", "Home", new { startRow = entryCount + Globals.Settings.StatusUpdatesSection.StatusUpdateCount });
        }

    My GetLastStatusUpdates function:

        public IQueryable<StatusUpdate> GetLastStatusUpdates(int startRowIndex, int maximumRows, out int statusUpdatesCount)
        {
            statusUpdatesCount = db.StatusUpdates.Count();
            return db.StatusUpdates
                .Skip(startRowIndex)
                .Take(maximumRows)
                .OrderByDescending(s => s.AddedDate);
        }

    I'm really fresh out of ideas as to why this is not working properly when responding to a Request.IsAjaxRequest(). When I turn off JavaScript in the browser, the code works perfectly, except that I don't want to repost the whole page.
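
    One detail worth flagging in the repository method: Skip and Take run before OrderByDescending, so the query pages over whatever order the database happens to return and only sorts the slice afterwards, which matches the "skips entries, out of order" symptom. A hedged rewrite (a sketch; it assumes AddedDate is the intended sort key):

        public IQueryable<StatusUpdate> GetLastStatusUpdates(int startRowIndex, int maximumRows, out int statusUpdatesCount)
        {
            statusUpdatesCount = db.StatusUpdates.Count();

            // Order first, then page: paging over an unordered query gives no guarantee
            // about which rows land in which page.
            return db.StatusUpdates
                .OrderByDescending(s => s.AddedDate)
                .Skip(startRowIndex)
                .Take(maximumRows);
        }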

  • Assigning outcome of another JSTL tag as value of one JSTL tag

    - by NoozNooz42
    I've got this, which is working:

        <c:choose>
            <c:when test="${sometest}">
                Hello, world!
            </c:when>
            <c:otherwise>
                <fmt:message key="${page.title}" />
            </c:otherwise>
        </c:choose>

    And I want to change it to this:

        <c:choose>
            <c:when test="${sometest}">
                <c:set var="somevar" scope="page" value="Hello, world!"/>
            </c:when>
            <c:otherwise>
                <c:set var="somevar" scope="page" value="<fmt:message key="${page.title}">"/>
            </c:otherwise>
        </c:choose>

    But of course the following line ain't correct:

        <c:set var="somevar" scope="page" value="<fmt:message key="${page.title}">"/>

    How can I assign to the somevar variable the string resulting from a call to fmt:message?
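
    The fmt taglib can write its result straight into a scoped variable, which avoids nesting one tag inside another's attribute. Assuming standard JSTL fmt semantics, a sketch:

        <c:choose>
            <c:when test="${sometest}">
                <c:set var="somevar" scope="page" value="Hello, world!"/>
            </c:when>
            <c:otherwise>
                <%-- fmt:message supports var/scope, so the looked-up text lands in somevar directly --%>
                <fmt:message key="${page.title}" var="somevar" scope="page"/>
            </c:otherwise>
        </c:choose>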

  • Navigating between pages in a Facebook Platform iframe application

    - by Jimmy Cuadra
    I'm working on a Facebook Platform application that runs in iframe mode, and I'm having trouble understanding how to navigate between pages within the app. Let's say the first page that is loaded within the iframe at my canvas URL is one.html. Within that page, there is a link to two.html that just changes the source of the iframe and doesn't reload the Facebook chrome. When I do this, all the Facebook fb_sig_* query string parameters that Facebook passes to the original page aren't included, and so two.html has no awareness of the connection to Facebook and no ability to make API calls to generate the content for the page.

    One possible solution would be to manually extract all the Facebook parameters from one.html and append them to the link to two.html myself. This seems really ugly, and I figured there had to be a cleaner way. For reference, my application is written in Perl and uses the WWW::Facebook::API module as a client library. I didn't see anything in it that I can use to easily reconstruct the Facebook parameters for use with links in iframe apps.

    Another possible solution would be to store all the Facebook parameters in a session on my server on the first page load, and just use the values in that session on subsequent page views. But what happens if the data I've stored no longer matches what Facebook would have sent if it were a completely new request (i.e. something in the user's Facebook session changed)?

    Is there something obvious I'm missing? What is the standard approach to navigating between pages within an iframe app? Facebook's documentation is atrocious and I haven't been able to find anything that clearly explains how this works. I also realize this wouldn't be an issue with an app using FBML instead of an iframe, but my understanding is that iframe apps are now encouraged over FBML apps, though again this seems ambiguous since so much of Facebook's documentation is outdated and contradictory.

  • Auto load a specific link at various time intervals

    - by user228837
    Here's what I need to do. I'm using Google Chrome. I have a page that auto-reloads every 5 seconds using a script I found by Googling:

        javascript: timeout=prompt("Set timeout [s]");
        current=location.href;
        if(timeout>0)
            setTimeout('reload()',1000*timeout);
        else
            location.replace(current);
        function reload() {
            setTimeout('reload()',1000*timeout);
            fr4me='<frameset cols=\'*\'>\n<frame src=\''+current+'\'/>';
            fr4me+='</frameset>';
            with(document){write(fr4me);void(close())};
        }

    The reason the page auto-reloads every 5 seconds is that I'm waiting for a specific link or URL to appear on the page. It appears at random times. Once I see the link I'm waiting for, I immediately click it. That's fine, but I want more.

    What I want is for the page to auto-reload and auto-detect the link I'm waiting for. Once the script finds the link, it automatically loads that link in a new tab or page. For example, I'm auto-reloading www.example.com and waiting for a specific "BUY NOW" link. When the page auto-reloads, it checks whether there's a "BUY NOW" link. If it sees one, it should automatically open that link. Thanks.
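
    A small addition to the reload loop can scan the freshly loaded document for the link before scheduling the next reload. This is only a sketch: it assumes the link can be recognised by its visible text containing "BUY NOW", and pop-up blocking may force same-tab navigation instead of window.open.

        // Look for the awaited link; if present, open it instead of reloading again.
        function findAndOpenLink() {
            var links = document.getElementsByTagName('a');
            for (var i = 0; i < links.length; i++) {
                if (links[i].innerHTML.indexOf('BUY NOW') !== -1) {
                    // window.open(links[i].href) for a new tab, if the browser allows it;
                    // otherwise navigate the current tab:
                    window.location.href = links[i].href;
                    return true;
                }
            }
            return false;
        }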

  • How to bind form collection back to custom model object that uses 2 custom objects in asp.net mvc?

    - by baijajusav
    What I'm trying to do is rather basic, but I might have my facts mixed up. I have a details page that has a custom class as its model. The custom class uses two custom objects, with yet another custom object as a property of one of the two. The details page outputs a fair amount of information, but allows the user to post a comment. When the user clicks the post button, the page gets posted to a Details action that looks something like this:

        [AcceptVerbs(HttpVerbs.Post)]
        public ActionResult Details(VideoDetailModel vidAndComment) { .... }

    The only fields on the form that are posted are CommentText and VideoId. Here is what the VideoDetailModel looks like:

        public class VideoDetailModel
        {
            public VideoDetailModel()
            {
                Video = new VideoDTO();
                Comment = new CommentDTO();
            }

            public VideoDetailModel(VideoDTO vid)
            {
                Video = vid;
                Comment = new CommentDTO();
            }

            public VideoDTO Video { get; set; }
            public CommentDTO Comment { get; set; }
        }

    VideoDTO has a few properties, but the one I need is VideoId. CommentDTO's pertinent properties include CommentText (which is posting correctly) and a UserDTO object that contains a userId property. Everything other than the CommentText value is not being posted. I also have the following line on the ascx page, but the model value never gets posted to the controller:

        Html.Hidden("Model.Video.VideoId", Model.Video.VideoId);

    I'm really not sure what I'm missing here. I suppose if I added more form fields for the properties I need, they would get posted, but I only need one form entry field for the CommentText. If I could get the same model values that were sent to the page to post back with the page, that would help. I'll be happy to make any clarifications needed here. I'm just at a loss as to what's going on.
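
    Two things about that hidden field are worth checking against how the default model binder matches names, offered as a sketch rather than a confirmed diagnosis: the field name should be the property path on the action's parameter ("Video.VideoId", not "Model.Video.VideoId"), and in an ASPX/ASCX view the helper's return value has to be written to the output or nothing is rendered at all.

        <%-- Field names mirror VideoDetailModel's property paths so the binder can populate them. --%>
        <%= Html.Hidden("Video.VideoId", Model.Video.VideoId) %>
        <%= Html.TextArea("Comment.CommentText") %>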

  • Making an AJAX WCF Web Service request during an Async Postback

    - by nekno
    I want to provide status updates during a long-running task on an ASP.NET WebForms page with AJAX. Is there a way to get the ScriptManager to execute and process a script for a web service request during an async postback?

    I have a script on the page that makes a web service request. It runs on page load and periodically using setInterval(). It's running correctly before the async postback is initiated, but it stops running during the async postback, and doesn't run again until after the async postback completes.

    I have an UpdatePanel with a button to trigger an async postback, which executes the long-running task. I also have an instance of an AJAX WCF web service that is working correctly to fetch data and present it on the page but, like I said, it doesn't fetch and present the data until after the async postback completes. During the async postback, the long-running task sends updates from the page to the web service. The problem is that I can debug and step through the web service and see that the status updates are correctly set, but the updates aren't retrieved by the client script until the async postback completes.

    It seems the ScriptManager is busy executing the async postback, so it doesn't run my other JavaScript via setInterval() until the postback completes. Is there a way to get the ScriptManager, or otherwise, to run the script to fetch data from the WCF web service during the async postback? I've tried various methods of using the PageRequestManager to run the script on the client-side BeginRequest event for the async postback, but it runs the script, then stops processing the code that should be running via setInterval() while the page request executes.

  • Making a concurrent AJAX WCF Web Service request during an Async Postback

    - by nekno
    I want to provide status updates during a long-running task on an ASP.NET WebForms page with AJAX. Is there a way to get the ScriptManager to execute and process a script for a web service request concurrently with an async postback?

    I have a script on the page that makes a web service request. It runs on page load and periodically using setInterval(). It's running correctly before the async postback is initiated, but it stops running during the async postback, and doesn't run again until after the async postback completes.

    I have an UpdatePanel with a button to trigger an async postback, which executes the long-running task. I also have an instance of an AJAX WCF web service that is working correctly to fetch data and present it on the page but, like I said, it doesn't fetch and present the data until after the async postback completes. During the async postback, the long-running task sends updates from the page to the web service. The problem is that I can debug and step through the web service and see that the status updates are correctly set, but the updates aren't retrieved by the client script until the async postback completes.

    It seems the ScriptManager is busy executing the async postback, so it doesn't run my other JavaScript via setInterval() until the postback completes. Is there a way to get the ScriptManager, or otherwise, to run the script to fetch data from the WCF web service during the async postback? I've tried various methods of using the PageRequestManager to run the script on the client-side BeginRequest event for the async postback, but it runs the script, then stops processing the code that should be running via setInterval() while the page request executes.

  • ASP.NET: Problem in donut caching

    - by Shyju
    I have an ASP.NET page where I am trying to do some output caching, but I ran into a problem. My ASPX page has:

        <%@ Page Language="C#" AutoEventWireup="true" CodeBehind="Default.aspx.cs" Inherits="MYProject._Default" %>
        <%@ OutputCache Duration="600" VaryByParam="None" %>
        <%@ Register TagPrefix="MYProjectUC" TagName="PageHeader" Src="~/Lib/UserControls/PageHeader.ascx" %>
        <%@ Register TagPrefix="MYProjectUC" TagName="PageFooter" Src="~/Lib/UserControls/PageFooter.ascx" %>

    and I have used the user control called "PageHeader" in the aspx page. In PageHeader.ascx, I have an ASP.NET Substitution control, where I want to show some links based on the logged-in user:

        <%@ Control Language="C#" AutoEventWireup="true" CodeBehind="PageHeader.ascx.cs" Inherits="MyProject.Lib.UserControls.PageHeader1" %>
        <div class="headerRow">
            <div class="headerLogo">
                <a href="Default.aspx"><img src="Lib/Images/header.gif" alt=""></a>
            </div>
            <div id="divHeaderMenu" runat="server">
                <asp:Substitution ID="subLinks" runat="server" MethodName="GetUserProfileHeaderLinks" />
            </div>
        </div><!--headerRow-->

    In my ascx.cs file, I have a static method which returns a string based on whether the user is logged in or not, using session:

        public static string GetUserProfileHeaderLinks(HttpContext context)
        {
            string strHeaderLinks = string.Empty;
            // check session and return string
            return strHeaderLinks;
        }

    But still the page shows the same content for both a logged-in user and a guest user. My objective is to have the page cached except for the content inside the Substitution control. Any idea how to achieve this? Thanks in advance.
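
    One likely culprit, offered as an assumption rather than a certainty: when a response is served from the output cache, the substitution callback runs very early in the pipeline, before session state is acquired, so a callback that relies on Session has nothing to read. Forms authentication has already run at that point, so keying the links off context.User tends to behave on cached hits. A sketch (the link targets are made up):

        public static string GetUserProfileHeaderLinks(HttpContext context)
        {
            // context.Session may not be available on cached hits; context.User generally is.
            if (context.User != null && context.User.Identity.IsAuthenticated)
            {
                return "<a href=\"Profile.aspx\">My Profile</a> | <a href=\"Logout.aspx\">Log out</a>";
            }
            return "<a href=\"Login.aspx\">Log in</a>";
        }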

  • ASP.NET Content Web Form - content from placeholder disappears

    - by Naeem Sarfraz
    I'm attempting to set a class on the body tag in my ASP.NET site, which uses a master page and content web forms. I simply want to be able to do this by adding a bodycssclass property (see below) to the content web form's page directive. It works through the solution below, but when I attempt to view Default.aspx the Content1 control loses its content. Any ideas why? Here is how I'm doing it.

    I have a master page with the following content:

        <%@ Master Language="C#" ... %>
        <html><head>...</head>
        <body id=ctlBody runat=server>
            <asp:ContentPlaceHolder ID="cphMain" runat="server" />
        </body>
        </html>

    Its code-behind looks like:

        public partial class Site : MasterPageBase
        {
            public override string BodyCssClass
            {
                get { return ctlBody.Attributes["class"]; }
                set { ctlBody.Attributes["class"] = value; }
            }
        }

    and it inherits from:

        public abstract class MasterPageBase : MasterPage
        {
            public abstract string BodyCssClass { get; set; }
        }

    My default.aspx is defined as:

        <%@ Page Title="..." [master page definition etc..] bodycssclass="home" %>

        <asp:Content ID="Content1" ContentPlaceHolderID="cphMain" runat="server">
            Some content
        </asp:Content>

    The code-behind for this file looks like:

        public partial class Default : PageBase { ... }

    and it inherits from:

        public class PageBase : Page
        {
            public string BodyCssClass
            {
                get
                {
                    MasterPageBase mpbCurrent = this.Master as MasterPageBase;
                    return mpbCurrent.BodyCssClass;
                }
                set
                {
                    MasterPageBase mpbCurrent = this.Master as MasterPageBase;
                    mpbCurrent.BodyCssClass = value;
                }
            }
        }
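
    A possible explanation, offered as an assumption: because bodycssclass is set from the @ Page directive, the BodyCssClass setter runs while the page is still being constructed, and touching this.Master that early can cause the master page to be instantiated before the Content controls have been matched to their ContentPlaceHolders, which would leave cphMain empty. A sketch that defers the master-page access until later in the page lifecycle:

        public class PageBase : Page
        {
            private string _bodyCssClass;

            // The directive attribute just stores the value; the master page isn't touched yet.
            public string BodyCssClass
            {
                get { return _bodyCssClass; }
                set { _bodyCssClass = value; }
            }

            protected override void OnLoad(EventArgs e)
            {
                base.OnLoad(e);
                MasterPageBase master = this.Master as MasterPageBase;
                if (master != null && !string.IsNullOrEmpty(_bodyCssClass))
                {
                    master.BodyCssClass = _bodyCssClass;
                }
            }
        }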

  • Problem with response.redirect sending incorrect HTTPMethod

    - by Andy Macnaughton-Jones
    Hi, I've got a strange problem with a Response.Redirect. I'm using VB.NET with the .NET 2 framework (so VS2005 and SP1). I've got a page that I do a form submit on (that's a proper form method="POST" hard-coded onto the page), and that properly posts the page data back to me, which is then processed; so at that point Request.HttpMethod = "POST". As part of that processing, the system determines whether we need to be sent to another URL after processing has completed.

    So if the "GotoPage" parameter has a URL specified, we then do a Response.Redirect(URL, False) (False because we want page processing to complete in order to write some timing logs etc.). The page correctly redirects, but instead of the response having "GET" as the Request.HttpMethod, it has "POST" instead!

    Now, we're using our own custom framework, so we use the HttpRequest method to determine whether a page has been posted back or is being requested with a GET; the "IsPagePostBack" property doesn't work here (that only works when you're using the normal .NET controls and form submissions). In all other instances our code works happily, but what might be causing Request.HttpMethod to not be set correctly? I've tried doing a Response.Clear before the redirect in case headers are being written out beforehand, but to no avail. Any clues?! Thanks, Andy
