Search Results

Search found 48586 results on 1944 pages for 'page performance'.

  • Drupal - Hide a single page from search index

    - by ilowe
    Hi, I've taken over an existing Drupal installation and have been asked to remove a single page from the site search results. I know about the lullabot tutorial through this question: http://stackoverflow.com/questions/1748837/hide-drupal-nodes-from-search, but that talks about excluding a class of content when I really just want to exclude a single page. I've tried manually deleting the node from the search_index table, but that didn't seem to work either. Any recommendations for excluding a single regular content page from the search index?
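
    Since the lullabot approach targets a whole class of content, one narrower option (not from the question) is to filter just that node out of result queries instead of fighting the indexer. A hedged Drupal 6 sketch, assuming a custom module named mymodule and node ID 123 (both placeholders):

        /**
         * Implementation of hook_db_rewrite_sql() (Drupal 6).
         * Core node search runs its query through db_rewrite_sql(), so this
         * WHERE fragment keeps node 123 out of the results.
         */
        function mymodule_db_rewrite_sql($query, $primary_table, $primary_field, $args) {
          if ($primary_field == 'nid') {
            return array('where' => $primary_table . '.nid <> 123');
          }
        }

    Note this filters the node out of any rewritten node query (listings too), so test it; if you do clear index rows by hand instead, clear the matching {search_dataset} entries as well, and expect cron to rebuild them eventually.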

  • Hiding all panels on a web content form within a master page

    - by Jack Marchetti
    I'm trying to hide all panels on a page when a button click occurs. This is on a web content form within a master page. The ContentPlaceHolder is named MainContent, so I have:

        foreach (Control c in Page.Form.FindControl("MainContent").Controls)
        {
            if (c is Panel)
            {
                c.Visible = false;
            }
        }

    This never finds any panels. The panels are within an UpdatePanel, and I tried:

        foreach (Control c in updatePanel.Controls) { }

    and this didn't work either. I also tried:

        foreach (Control c in Page.Controls) { }

    and that didn't work either. Any idea what I'm missing here?
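
    A hedged sketch of the likely fix: Controls only contains direct children, and panels inside an UpdatePanel live down in its template container, so a recursive walk is needed (assumes the usual System.Web.UI and System.Web.UI.WebControls namespaces):

        private void HidePanels(Control root)
        {
            foreach (Control child in root.Controls)
            {
                if (child is Panel)
                {
                    child.Visible = false;
                }
                HidePanels(child);   // descend into containers such as the UpdatePanel
            }
        }

        // From the button click handler:
        // HidePanels(Page.Form.FindControl("MainContent"));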

  • Apache chokes after 300 connections

    - by john titus
    We have an Apache web server in front of Tomcat hosted on EC2; the instance type is extra large with 34GB of memory. Our application deals with a lot of external web services, and we have one very slow external web service that takes almost 300 seconds to respond to requests during peak hours. During peak hours the server chokes at just about 300 httpd processes (ps -ef | grep httpd | wc -l = 300). I have googled and found numerous suggestions, but nothing seems to work. The following is some configuration I have done, taken directly from online resources; I have increased the limits of max connections and max clients in both Apache and Tomcat. Here are the configuration details:

    Apache:

        <IfModule prefork.c>
            StartServers          100
            MinSpareServers       10
            MaxSpareServers       10
            ServerLimit           50000
            MaxClients            50000
            MaxRequestsPerChild   2000
        </IfModule>

    Tomcat:

        <Connector port="8080" protocol="org.apache.coyote.http11.Http11NioProtocol"
                   connectionTimeout="600000" redirectPort="8443" enableLookups="false"
                   maxThreads="1500"
                   compressableMimeType="text/html,text/xml,text/plain,text/css,application/x-javascript,text/vnd.wap.wml,text/vnd.wap.wmlscript,application/xhtml+xml,application/xml-dtd,application/xslt+xml"
                   compression="on"/>

    sysctl.conf:

        net.ipv4.tcp_tw_reuse = 1
        net.ipv4.tcp_tw_recycle = 1
        fs.file-max = 5049800
        vm.min_free_kbytes = 204800
        vm.page-cluster = 20
        vm.swappiness = 90
        net.ipv4.tcp_rfc1337 = 1
        net.ipv4.tcp_max_orphans = 65536
        net.ipv4.ip_local_port_range = 5000 65000
        net.core.somaxconn = 1024

    I have been trying numerous suggestions, but in vain. How do I fix this? I'm sure an m2xlarge server should serve more than 300 requests; I am probably going wrong somewhere in my configuration. The server chokes only during peak hours, when there are 300 concurrent requests waiting for the [300-second-delayed] web service to respond. Please help.
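
    For what it's worth, a hedged sketch of the kind of prefork block usually suggested for this symptom; the numbers are assumptions and should be sized as MaxClients ≈ free RAM / resident size of one httpd process. Check with httpd -V that prefork is really the loaded MPM, and watch the error log for a "server reached MaxClients" warning:

        <IfModule prefork.c>
            StartServers           50
            MinSpareServers        25
            MaxSpareServers        75
            ServerLimit          2000    # hard cap; must be >= MaxClients
            MaxClients           2000    # under prefork, one process per concurrent request
            MaxRequestsPerChild  4000
        </IfModule>

    Under prefork, every request parked for 300 seconds on the slow web service pins a whole httpd process, so 300 stalled calls alone consume 300 processes; a threaded MPM (worker/event) or shorter proxy timeouts relieve that pressure.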

  • how to change the value of a control in a master page

    - by Azhar
    How do I change the value of a control (e.g., a Literal) that sits inside a user control, where that user control lives on a master page, from a content page?

        ((System.Web.UI.UserControl)this.Page.Master.FindControl("ABC")).FindControl("XYZ").Text = "";

    Here ABC is the user control and XYZ is the Literal control.
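
    A hedged sketch of the missing step: FindControl returns a plain Control, which has no Text property, so the result must be cast to Literal first ("ABC" and "XYZ" are the ids from the question):

        var userControl = (System.Web.UI.UserControl)Page.Master.FindControl("ABC");
        var literal = (System.Web.UI.WebControls.Literal)userControl.FindControl("XYZ");
        literal.Text = "new value";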

  • Inserting asynchronously into Oracle, any benefits?

    - by Karl Trumstedt
    I am using ODP.NET for loading data into Oracle, batching inserts into groups of 1,000 rows per call. Are there any performance benefits to calling my load method asynchronously? Say I want to insert 10,000 rows: instead of making 10 calls synchronously, I make 10 calls asynchronously. My database is using ASSM right now, but otherwise plenty of freelists are in use, of course. The database server has several cores as well. My initial tests seem to point to a performance increase, but maybe there is something I cannot see? Potential deadlock or contention issues? Of course, there is the added complexity of handling transactions and such when doing my load this way.
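
    For context, a hedged sketch of how a 1,000-row batch is typically bound in ODP.NET (array binding); the connection string, table, and arrays are placeholders:

        using Oracle.DataAccess.Client;   // ODP.NET

        using (var conn = new OracleConnection(connectionString))
        using (var cmd = conn.CreateCommand())
        {
            conn.Open();
            cmd.CommandText = "INSERT INTO my_table (id, name) VALUES (:id, :name)";
            cmd.ArrayBindCount = ids.Length;   // number of rows in this batch

            cmd.Parameters.Add(new OracleParameter("id", OracleDbType.Int32) { Value = ids });
            cmd.Parameters.Add(new OracleParameter("name", OracleDbType.Varchar2) { Value = names });

            cmd.ExecuteNonQuery();   // one network round trip for the whole batch
        }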

  • Login to website and use cookie to get source for another page

    - by Stu
    I am trying to login to the TV Rage website and get the source code of the My Shows page. I am successfully logging in (I have checked the response from my POST request), but when I then perform a GET request on the My Shows page, I am redirected to the login page. This is the code I am using to login:

        private string LoginToTvRage()
        {
            string loginUrl = "http://www.tvrage.com/login.php";
            string formParams = string.Format("login_name={0}&login_pass={1}", "xxx", "xxxx");
            string cookieHeader;
            WebRequest req = WebRequest.Create(loginUrl);
            req.ContentType = "application/x-www-form-urlencoded";
            req.Method = "POST";
            byte[] bytes = Encoding.ASCII.GetBytes(formParams);
            req.ContentLength = bytes.Length;
            using (Stream os = req.GetRequestStream())
            {
                os.Write(bytes, 0, bytes.Length);
            }
            WebResponse resp = req.GetResponse();
            cookieHeader = resp.Headers["Set-cookie"];
            String responseStream;
            using (StreamReader sr = new StreamReader(resp.GetResponseStream()))
            {
                responseStream = sr.ReadToEnd();
            }
            return cookieHeader;
        }

    I then pass the cookieHeader into this method, which should get the source of the My Shows page:

        private string GetSourceForMyShowsPage(string cookieHeader)
        {
            string pageSource;
            string getUrl = "http://www.tvrage.com/mytvrage.php?page=myshows";
            WebRequest getRequest = WebRequest.Create(getUrl);
            getRequest.Headers.Add("Cookie", cookieHeader);
            WebResponse getResponse = getRequest.GetResponse();
            using (StreamReader sr = new StreamReader(getResponse.GetResponseStream()))
            {
                pageSource = sr.ReadToEnd();
            }
            return pageSource;
        }

    I have been using a previous question as a guide, but I'm at a loss as to why my code isn't working.
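
    A hedged alternative sketch: letting a shared CookieContainer carry the session between the two requests avoids copying the raw Set-Cookie header, which can lose cookie paths and any additional cookies the login sets.

        using System.IO;
        using System.Net;
        using System.Text;

        var cookies = new CookieContainer();

        var login = (HttpWebRequest)WebRequest.Create("http://www.tvrage.com/login.php");
        login.Method = "POST";
        login.ContentType = "application/x-www-form-urlencoded";
        login.CookieContainer = cookies;                    // responses store cookies here
        byte[] body = Encoding.ASCII.GetBytes("login_name=xxx&login_pass=xxxx");
        login.ContentLength = body.Length;
        using (Stream os = login.GetRequestStream())
        {
            os.Write(body, 0, body.Length);
        }
        login.GetResponse().Close();                        // session cookies are now in the container

        var myShows = (HttpWebRequest)WebRequest.Create("http://www.tvrage.com/mytvrage.php?page=myshows");
        myShows.CookieContainer = cookies;                  // same container, cookies sent automatically
        string pageSource;
        using (var sr = new StreamReader(myShows.GetResponse().GetResponseStream()))
        {
            pageSource = sr.ReadToEnd();
        }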

  • VS2008 intellisense performance issue with large number of partial static classes

    - by scebula
    My question is a follow-up to the issue posted here regarding the IntelliSense performance problem when building a large solution in VS2008 that has many partial static classes. Since Microsoft does not seem to be addressing the issue for VS2008, I would like to know whether there are other ways around the problem. Waiting for VS2010 is not an option at this time. The solution proposed in the previous post is not practical, as some of the partial classes may be regenerated, and that would be a maintenance headache.

  • Encrypted HTML page without HTTPS

    - by Tichomir Mitkov
    I have a Perl script that opens this page http://svejo.net/popular/all/new/ and filters out the names of the posts, but apart from the headers, everything seems encrypted; nothing is readable. When I open the same page in a browser, everything looks fine, including the source code. How is it possible for a page to be encrypted for a script but not for a browser? My Perl script sends the same headers as my browser (Google Chrome).

  • Suppress "Done, but with errors on page" in IE

    - by calebthorne
    I have a website using lots of jQuery and JavaScript that produces a "Done, but with errors on page" message in IE's status bar. Everything on the site works perfectly, so I don't want to spend the time troubleshooting the exact error; all I would like to do is suppress the "Done, but with errors on page" message so that clients don't freak out. I tried the following at the top of the page with no success:

        window.onerror = function() { return true; }
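
    A hedged sketch of the usual placement fix: the handler must be installed before any script that can throw, so it belongs in an inline script at the very top of <head>, ahead of every other script include:

        <script type="text/javascript">
            window.onerror = function (message, url, line) {
                // Returning true tells IE the error was handled, which is what
                // suppresses the "Done, but with errors on page" status message.
                return true;
            };
        </script>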

  • How does Page.IsValid work?

    - by Lijo
    I have the following code with a RequiredFieldValidator. The EnableClientScript property is set to "false" on the validation control, and I have also disabled scripting in the browser. I am NOT using Page.IsValid in the code-behind. Still, when I submit without any value in the textbox, I get the error message. From the comments of @Dai, I came to know that this can be an issue if there is any code in Page_Load that executes on a postback: no validation errors are thrown there. (For the button click handler, however, there is no need to check Page.IsValid.)

        if (Page.IsPostBack)
        {
            string value = txtEmpName.Text;
            txtEmpName.Text = value + "Appended";
        }

    QUESTION: Why does server-side validation not happen before Page_Load? Why does it work fine when I use Page.IsValid?

    UPDATE: It seems we need to add if (Page.IsValid) in the button click as well if we are using a CustomValidator with server-side validation. Refer to "CustomValidator not working well". Note: the client-side validation question is here: "Whether to use Page_IsValid or Page_ClientValidate() (for Client Side Events)".

    MARKUP:

        <html xmlns="http://www.w3.org/1999/xhtml">
        <head runat="server">
            <title></title>
            <script type="text/javascript">
                alert('haiii');
            </script>
        </head>
        <body>
            <form id="form1" runat="server">
            <div>
                <asp:ValidationSummary runat="server" ID="vsumAll" DisplayMode="BulletList"
                    CssClass="validationsummary" ValidationGroup="ButtonClick" />
                <asp:TextBox ID="txtEmpName" runat="server"></asp:TextBox>
                <asp:RequiredFieldValidator ID="valEmpName" runat="server"
                    ControlToValidate="txtEmpName" EnableClientScript="false"
                    ErrorMessage="RequiredFieldValidator" Text="*" Display="Dynamic"
                    ValidationGroup="ButtonClick"></asp:RequiredFieldValidator>
                <asp:Button ID="Button1" runat="server" onclick="Button1_Click" Text="Button"
                    ValidationGroup="ButtonClick" />
            </div>
            </form>
        </body>
        </html>

    CODE BEHIND:

        protected void Button1_Click(object sender, EventArgs e)
        {
            string value = txtEmpName.Text;
            SubmitEmployee(value);
        }

    References: Should I always call Page.IsValid?; ASP.NET Validation Controls – Important Points, Tips and Tricks; CustomValidator not working well
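
    For reference, a hedged sketch of the usual workaround: validators run between Page_Load and the control's event handler, so inside Page_Load they have not fired yet; calling Page.Validate() explicitly runs them early (the validation group name matches the markup above):

        protected void Page_Load(object sender, EventArgs e)
        {
            if (Page.IsPostBack)
            {
                Page.Validate("ButtonClick");   // run the validators now, before using their result
                if (Page.IsValid)
                {
                    string value = txtEmpName.Text;
                    txtEmpName.Text = value + "Appended";
                }
            }
        }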

  • Improving Javascript Load Times - Concatenation vs Many + Cache

    - by El Yobo
    I'm wondering which of the following will result in better performance for a page that loads a large amount of JavaScript (jQuery + jQuery UI + various other files). I have gone through most of the YSlow and Google Page Speed material, but am left wondering about a particular detail. A key thing for me here is that the site I'm working on is not on the public net; it's a business-to-business platform where almost all users are repeat visitors (and therefore have caches of the data, which is something YSlow assumes will not be the case for a large number of visitors). First up, the standard approach recommended by tools such as YSlow is to concatenate the JavaScript, compress it, and serve it up in a single file loaded at the end of the page. This approach sounds reasonably effective, but I think a key part of the reasoning here is to improve performance for users without cached data. The system I currently have is something like this:

    - All JavaScript files are compressed and loaded at the bottom of the page.
    - All JavaScript files have far-future cache expiration dates, so (for most users) they will remain in the cache for a long time.
    - Pages only load the JavaScript files they require, rather than one monolithic file, most of which would not be required.

    Now, my understanding is that if the cache expiration date for a JavaScript file has not been reached, the cached version is used immediately; no HTTP request is sent to the server at all. If this is correct, I would assume that having multiple <script> tags causes no performance penalty, as I'm still not making any additional requests on most pages (recalling from above that almost all users have populated caches). In addition to this, not loading the unneeded JS means the browser doesn't have to interpret or execute all that additional code, and as a B2B application, most of our users are unfortunately stuck with IE6 and its painfully slow JS engine. Another benefit is that when code changes, only the affected files need to be fetched again rather than the whole set (granted, it would only need to be fetched once either way, so this is not so much of a benefit). I'm also looking at using LABjs to allow parallel loading of the JS when it's not cached; see the sketch below. So, what do people think is the better approach? In a similar vein, what do you think about the same approach to CSS: is monolithic better?
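
    As mentioned above, a hedged sketch of the LABjs pattern (file paths are placeholders): scripts download in parallel, while .wait() preserves execution order where dependencies require it.

        $LAB
            .script("/js/jquery.min.js").wait()        // jQuery must execute first
            .script("/js/jquery-ui.min.js")
            .script("/js/this-page-only.js")
            .wait(function () {
                // everything above has executed; safe to initialize the page
            });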

  • Proper usage of double and single quotes?

    - by Phox
    I'm talking about the performance increase here. From all I know, you can echo variables inside double quotes ("), like so:

        <?php echo "You are $yourAge years old"; ?>

    whereas single quotes will just output the literal text You are $yourAge years old. But what about the performance difference? I've always gone by the rule that single quotes are faster because the PHP interpreter doesn't have to scan the string for variables, but I'm seeing more and more blog and forum posts on the web saying otherwise. Does anyone actually have any information on this subject, perhaps benchmark tests or something? Cheers.
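
    A hedged micro-benchmark sketch (absolute numbers vary wildly by PHP version and machine, and in practice the difference is usually negligible next to real work):

        <?php
        $yourAge = 30;
        $n = 1000000;

        $start = microtime(true);
        for ($i = 0; $i < $n; $i++) {
            $s = "You are $yourAge years old";            // double quotes, interpolation
        }
        $double = microtime(true) - $start;

        $start = microtime(true);
        for ($i = 0; $i < $n; $i++) {
            $s = 'You are ' . $yourAge . ' years old';    // single quotes + concatenation
        }
        $single = microtime(true) - $start;

        printf("double: %.4fs, single+concat: %.4fs\n", $double, $single);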

  • Sticky notes associated with web page - how to?

    - by volvox
    I have this idea for a project: associated with any web page, I want to create notes that are saved locally in a database and reloaded automatically from that database the next time I visit the same page. Creating the note is easy, but I'm looking for how to link the notes to the web page's URL and how to stay aware of the active web page. Any ideas? (Note: I came across this while searching the internet: http://webkit.org/demos/sticky-notes/, part of the WebKit open source project's demos; it is about what I'm looking for.) Thanks.
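
    A hedged sketch of the URL-keyed lookup at the heart of the idea, with localStorage standing in for the local database (the real project would swap these calls for database reads/writes):

        // Save a note for the current page.
        function saveNote(text) {
            var key = 'notes:' + window.location.href;
            var notes = JSON.parse(localStorage.getItem(key) || '[]');
            notes.push(text);
            localStorage.setItem(key, JSON.stringify(notes));
        }

        // Reload the notes the next time the same page is visited.
        function loadNotes() {
            var key = 'notes:' + window.location.href;
            return JSON.parse(localStorage.getItem(key) || '[]');
        }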

  • Downloading a web page and all of its resource files in Python

    - by Mark
    I want to be able to download a page and all of its associated resources (images, style sheets, script files, etc.) using Python. I am (somewhat) familiar with urllib2 and know how to download individual URLs, but before I go and start hacking at BeautifulSoup + urllib2 I wanted to be sure there isn't already a Python equivalent of "wget --page-requisites http://www.google.com". Specifically, I am interested in gathering statistical information about how long it takes to download an entire web page, including all resources. Thanks, Mark
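
    For reference, a hedged sketch of the BeautifulSoup + urllib2 route the question anticipates (Python 2 era, BeautifulSoup 3); it fetches the page, then every src/href resource it can find, and reports the total time:

        import time
        import urllib2
        import urlparse
        from BeautifulSoup import BeautifulSoup  # BeautifulSoup 3

        def download_with_requisites(url):
            # Fetch a page plus its images/scripts/stylesheets; return seconds taken.
            start = time.time()
            html = urllib2.urlopen(url).read()
            soup = BeautifulSoup(html)
            refs = [tag.get('src') or tag.get('href')
                    for tag in soup.findAll(['img', 'script', 'link'])]
            for ref in filter(None, refs):
                urllib2.urlopen(urlparse.urljoin(url, ref)).read()
            return time.time() - start

        print download_with_requisites('http://www.google.com')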

  • How to reference a popup object when the opener page changes?

    - by achairapart
    This is driving me crazy! Scenario: the main page opens a pop-up window and later calls a function in it:

        newWin = window.open(someurl, 'newWin', options);
        // ...some code later...
        newWin.remoteFunction(options);

    and it's OK. Now, the popup is still open and the main page navigates to Page 2. In Page 2, newWin doesn't exist anymore, and I need to recreate the popup window object reference in order to call the remote function (newWin.remoteFunction) again. I tried something like:

        newWin = open('', 'newWin', options);
        if (!newWin || newWin.closed || !newWin.remoteFunction) {
            newWin = window.open(someurl, 'newWin', options);
        }

    and it works; I can call newWin.remoteFunction again. BUT, for some reason Safari gives focus to the popup window every time the open() method is called, breaking the navigation (I absolutely need the popup working in the background). The only workaround I can think of is to create an interval in the popup with:

        if (window.opener && !window.opener.newWin)
            window.opener.newWin = self;

    and then set another interval in the opener page with some try/catch, but that is inelegant and very inefficient. So, I wonder: is it really so hard to get the popup object reference across different pages in the opener window?
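
    A hedged sketch that formalizes the interval handshake described above (the 500 ms period is arbitrary):

        // In the popup: keep re-registering with whatever page currently
        // occupies the opener window.
        setInterval(function () {
            if (window.opener && !window.opener.closed && !window.opener.newWin) {
                window.opener.newWin = window;
            }
        }, 500);

        // In each opener page: wait for the reference to appear, then call through.
        var poll = setInterval(function () {
            if (window.newWin && window.newWin.remoteFunction) {
                clearInterval(poll);
                window.newWin.remoteFunction(options);
            }
        }, 500);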

  • Drupal: How to Render Results of Form on Same Page as Form

    - by Aaron
    How would I print the results of a form submission on the same page as the form itself? (See the sketch after the code.) Relevant hook_menu:

        $items['admin/content/ncbi_subsites/paths'] = array(
            'title' => 'Paths',
            'description' => 'Paths for a particular subsite',
            'page callback' => 'ncbi_subsites_show_path_page',
            'access arguments' => array('administer site configuration'),
            'type' => MENU_LOCAL_TASK,
        );

    Page callback:

        function ncbi_subsites_show_path_page() {
            $f = drupal_get_form('_ncbi_subsites_show_paths_form');
            return $f;
        }

    Form building function:

        function _ncbi_subsites_show_paths_form() {
            // bunch of code here
            $form['subsite'] = array(
                '#title' => t('Subsites'),
                '#type' => 'select',
                '#description' => 'Choose a subsite to get its paths',
                '#default_value' => 'Choose a subsite',
                '#options' => $tmp,
            );
            $form['showthem'] = array(
                '#type' => 'submit',
                '#value' => 'Show paths',
                '#submit' => array('ncbi_subsites_show_paths_submit'),
            );
            return $form;
        }

    Submit function (validate function skipped for brevity):

        function ncbi_subsites_show_paths_submit(&$form, &$form_state) {
            //dpm($form_state);
            $subsite_name = $form_state['values']['subsite'];
            $subsite = new Subsite($subsite_name); // my own class that I use internally in this module
            $paths = $subsite->normalized_paths;
            // build list
            $list = theme_item_list($paths);
        }

    If I print that $list variable, it is exactly what I want, but I am not sure how to get it onto the page built by ncbi_subsites_show_path_page. Any help is much appreciated!
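
    A hedged Drupal 6 sketch of one way to do it: stash the rendered list in $form_state['storage'] during submit, ask for a rebuild, and let the builder print it above the form (note the builder now takes &$form_state; the 'results' key is a placeholder):

        function ncbi_subsites_show_paths_submit(&$form, &$form_state) {
          $subsite = new Subsite($form_state['values']['subsite']);
          $form_state['storage']['paths_list'] = theme('item_list', $subsite->normalized_paths);
          $form_state['rebuild'] = TRUE;   // redisplay the same form on the same page
        }

        function _ncbi_subsites_show_paths_form(&$form_state) {
          $form = array();
          if (!empty($form_state['storage']['paths_list'])) {
            // A plain markup element, rendered above the select and button.
            $form['results'] = array('#value' => $form_state['storage']['paths_list']);
          }
          // ... existing 'subsite' select and 'showthem' submit button here ...
          return $form;
        }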

  • How to make a page print to a printer just once

    - by menislici
    I have a basic HTML page generated through PHP, and it prints to a printer after a link click using Ben Nadel's print plugin. However, I don't want the user to be able to print the page again. I tried setting the 'print' link to a negative z-index with jQuery after it is clicked, but the user can refresh the page and reuse the link, so it would print again. I also know I could somehow disable refreshing by remapping what F5 does, but that wouldn't save the day, since the user can still refresh through the URL bar, which I can't remove or hide as far as I know. The page runs on localhost, so the client and server are on the same machine; even the browser doesn't matter, since I could use whichever one fits this case.
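
    A hedged sketch of a server-side guard, assuming a session flag (names are placeholders); because the decision lives on the server, refreshing or re-clicking cannot trigger a second print:

        <?php
        session_start();

        $allowPrint = false;
        if (isset($_GET['print']) && empty($_SESSION['printed'])) {
            $_SESSION['printed'] = true;   // remember that this client already printed
            $allowPrint = true;
        }
        ?>
        <?php if ($allowPrint): ?>
        <script type="text/javascript">
            window.print();   // fires once; subsequent loads never reach this branch
        </script>
        <?php endif; ?>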

  • Returning to the index page?

    - by FullmetalBoy
        // POST: /Search/Alternativ1/txtBoxTitle
        [HttpPost]
        public ActionResult Alternativ1(int txtBoxTitle)
        {
            SokningMedAlternativ1 test = new SokningMedAlternativ1();
            if (txtBoxTitle != null)
            {
                var codeModel = test.FilteraBokLista(txtBoxTitle);
            }
            return View(codeModel);
        }

    Problem: I am having trouble finding a way to go back to my index page view (the first page shown when entering the website) when txtBoxTitle is null. My request: how can I return to my index page view automatically if txtBoxTitle contains null? // Fullmetalboy
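
    A hedged sketch of one way out: make the parameter nullable (a plain int can never hold null, so the existing check never fires) and redirect when it is missing; the Home/Index route is an assumption:

        [HttpPost]
        public ActionResult Alternativ1(int? txtBoxTitle)
        {
            if (txtBoxTitle == null)
            {
                // Back to the site's start page.
                return RedirectToAction("Index", "Home");
            }

            var test = new SokningMedAlternativ1();
            var codeModel = test.FilteraBokLista(txtBoxTitle.Value);
            return View(codeModel);
        }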

  • Ajax: Partial refresh of a parent page (update a div) from "lightbox" window

    - by superUntitled
    Is there a way to update information in a div of a parent page from a pop-up/"lightbox" window? I would like to create a pop-up window containing a form that updates a database (currently I am using PHP/MySQL with Prototype). In other words, I would like a user to be able to use a form in a popup window to update the database, and for the changes to be shown on the parent page without that parent page being refreshed. Thanks.
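
    A hedged sketch with Prototype (the library mentioned above), run from the popup when its form is submitted; the URL and element ids are placeholders, and the opener page is assumed to load Prototype too:

        new Ajax.Request('/update.php', {
            method: 'post',
            parameters: $('note-form').serialize(true),   // the popup's form
            onSuccess: function (response) {
                // Reach back into the parent page and refresh one div in place.
                window.opener.$('results').update(response.responseText);
                window.close();
            }
        });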

  • Disable source of the asp.net page

    - by Zerotoinfinite
    Hi all, I have developed my application in ASP.NET 3.5 and C# and deployed it on the internet. When I view the source of a page, I can see all my ASP.NET controls laid out [i.e., my .aspx page]. Is there any way I can hide this so users can't see the source of the page (short of disabling the mouse's right click), or at least display it in pure HTML form so that people cannot identify that I am using ASP.NET? Thanks in advance.
