Search Results

Search found 42685 results on 1708 pages for 'page speed'.

Page 31 of 1708

  • Multicast image restoration with adaptive speed

    - by Clinton Blackmore
    I'm curious to know if there are any tools for restoring disk images (or even transferring files) via multicast -- for any platform, especially if the project has source available -- where the multicast rate adjusts itself on the fly.

    On the Mac, all multicast solutions I am aware of (such as Deploy Studio, and NetRestore before it) make use of multicast ASR (Apple Software Restore), which has one glaring deficiency: you have to set the multicast speed before you start sending a disk image over the network, and that speed is locked in. Either your clients can keep up and restore, or they can't.*

    It seems to me that it must be possible for the multicast server to adjust the data rate, so you basically say "start sending this image", clients connect, and, if they can't keep up, they tell the server so it slows down. (Likewise, I'd expect the server to try speeding up if no client is having difficulty keeping up, and I'd expect to be able to cap the maximum throughput so that other network activities can go on without being starved of resources.)

    So, what sort of tools are out there? For Linux? Windows? Is there something for the Mac I've overlooked? [It just kills me that, by the time you get multicast up and running at a good speed to restore a lab, you could have unicasted the data to all the computers and been done.]

    * There is a little leeway involved. I think individual clients can say, "I missed a little bit of data" and get it, and they can opt to listen in the next time the image is sent over the network, but on the whole, if they missed it the first time around, you have to image the machine again, and there is no time saving.
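
    What the question describes is essentially a feedback-driven (NACK-based) pacing loop on the sender. As a rough illustration only -- the message format, addresses and thresholds below are made up, not taken from ASR or any real imaging tool -- a sender sketch could look like this:

        # Illustrative sketch of adaptive multicast pacing (hypothetical protocol).
        # Clients send "SLOW" datagrams to feedback_sock when they fall behind.
        import socket, time

        MCAST_GROUP, MCAST_PORT = "239.0.0.1", 5000      # assumed addresses
        MIN_RATE, MAX_RATE = 1_000_000, 100_000_000       # bytes/sec floor and cap

        def send_image(chunks, feedback_sock):
            rate = 10_000_000                              # starting rate, bytes/sec
            sock = socket.socket(socket.AF_INET, socket.SOCK_DGRAM)
            for seq, chunk in enumerate(chunks):
                sock.sendto(seq.to_bytes(8, "big") + chunk, (MCAST_GROUP, MCAST_PORT))
                time.sleep(len(chunk) / rate)              # pace at the current rate
                feedback_sock.settimeout(0.0)              # poll for complaints
                try:
                    msg, _ = feedback_sock.recvfrom(64)
                    if msg.startswith(b"SLOW"):            # a client is falling behind
                        rate = max(MIN_RATE, rate // 2)
                except (BlockingIOError, socket.timeout):
                    rate = min(MAX_RATE, int(rate * 1.05)) # no complaints: creep back up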

    Read the article

  • High fan speed with no reason

    - by Klaus
    For a few weeks, the fans of my Lenovo B590 laptop, running Xubuntu 14, have been turning to high speed a few minutes after it is switched on. The fans won't slow down until I turn the computer off. This is quite strange, since:

    - This didn't happen before.
    - The temperatures are quite low (are they?):

        $ sensors
        Adapter: Virtual device
        temp1:         +36.0°C  (crit = +88.0°C)
        temp2:         +30.0°C  (crit = +126.0°C)
        coretemp-isa-0000
        Adapter: ISA adapter
        Physical id 0:  +37.0°C  (high = +72.0°C, crit = +90.0°C)
        Core 0:         +34.0°C  (high = +72.0°C, crit = +90.0°C)
        Core 1:         +31.0°C  (high = +72.0°C, crit = +90.0°C)
        thinkpad-isa-0000
        Adapter: ISA adapter
        fan1:           0 RPM
        pkg-temp-0-virtual-0
        Adapter: Virtual device
        temp1:         +37.0°C

        $ sudo hddtemp /dev/sda
        /dev/sda: ST500LT012-9WS142: 33°C

    - The computer is under low load:

        top - 08:30:15 up 16 min,  2 users,  load average: 0.28, 0.23, 0.23
        Tasks: 197 total,   1 running, 196 sleeping,   0 stopped,   0 zombie
        %Cpu(s):  0.8 us,  0.5 sy,  0.0 ni, 98.7 id,  0.0 wa,  0.0 hi,  0.0 si,  0.0 st
        KiB Mem:   3607944 total,  1973956 used,  1633988 free,    99660 buffers
        KiB Swap:  3744764 total,        0 used,  3744764 free.   789936 cached Mem

    - The BIOS is up to date (and there are no fan settings in it).
    - The fan is clean and dust-free.

    Why would the BIOS turn the fans to high speed when there seems to be no reason for it? It seems that we cannot control the fan manually on this model, so I guess the only solution is to understand why this happens.
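
    One way to narrow this down is to dump everything the kernel's hwmon interface exposes, not just what sensors prints, and watch whether any reading jumps when the fans spin up. A small read-only sketch (standard hwmon sysfs paths; which sensors appear depends on the loaded drivers):

        # Dump every hwmon temperature and fan reading the kernel exposes.
        import glob, os

        for dev in sorted(glob.glob("/sys/class/hwmon/hwmon*")):
            name = open(os.path.join(dev, "name")).read().strip()
            for f in sorted(glob.glob(os.path.join(dev, "temp*_input"))):
                # temp*_input is reported in millidegrees Celsius
                print(name, os.path.basename(f), int(open(f).read()) / 1000.0, "°C")
            for f in sorted(glob.glob(os.path.join(dev, "fan*_input"))):
                print(name, os.path.basename(f), open(f).read().strip(), "RPM")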

    Read the article

  • Internet compression proxy for low speed broadband?

    - by user23150
    I live in a rural location, using high-latency wireless off a local ISP's tower. My speed tests vary day to day, but I can get around 1Mb up/down. The problem is, I work with large files, uploading and downloading (HD videos, development software, etc.). It can be painful to wait sometimes. Plus I do some side contract game development, and it can be very difficult to playtest with other developers (200ms ping is a good day for me).

    Now, obviously it's not going to be easy to solve the latency problem without different wireless hardware. But speed-wise, I am wondering if I can use some kind of compression technology on a proxy. For instance, my work computer has full access to a 26Mb down, 10Mb up connection that is totally unused at night and on weekends. If I could run some kind of compression technology on our server, and use it as a proxy to route to my home computer, I could stand to gain some major speed.

    I realize that by bogging down a system with compression, I could potentially lose whatever speed gain I had. But the proxy server is a quad-core Xeon, and the receiving computer is a pretty decent i7, so that shouldn't be a concern. I found http://toonel.net/ but it seems more geared toward very slow narrowband users, like dial-up. Plus, I would prefer to just be able to point my browser at a proxy server, rather than install software on my client machine.

    EDIT: I thought about my question a little more, and realize I am going to need to install software on my client in order to decompress, and possibly compress (for uploading). That's not a huge deal.
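
    For what it's worth, the general shape of the idea is small: a helper on the fast connection fetches a resource and hands it back compressed. A very rough, illustrative sketch only -- the /fetch?url= endpoint, the port, and the lack of auth, streaming and error handling are all placeholders, and already-compressed content such as HD video will see little benefit:

        # Hypothetical "compressing fetch helper" running on the fast connection.
        import gzip, urllib.parse, urllib.request
        from http.server import BaseHTTPRequestHandler, HTTPServer

        class Compressor(BaseHTTPRequestHandler):
            def do_GET(self):
                query = urllib.parse.parse_qs(urllib.parse.urlparse(self.path).query)
                target = query["url"][0]                      # e.g. /fetch?url=http://...
                body = urllib.request.urlopen(target).read()  # fetch on the fast link
                payload = gzip.compress(body)                 # compress before the slow hop
                self.send_response(200)
                self.send_header("Content-Encoding", "gzip")
                self.send_header("Content-Length", str(len(payload)))
                self.end_headers()
                self.wfile.write(payload)

        HTTPServer(("", 8080), Compressor).serve_forever()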

    Read the article

  • How to avoid movement speed stacking when multiple keys are pressed?

    - by eren_tetik
    I've started a new game which requires no mouse, thus leaving the movement up to the keyboard. I have tried to incorporate 8 directions: up, left, right, up-right and so on. However, when I press more than one arrow key, the movement speed stacks (http://gfycat.com/CircularBewitchedBarebirdbat). How could I counteract this? Here is the relevant part of my code:

        var speed : int = 5;

        function Update () {
            if (Input.GetKey(KeyCode.UpArrow)) {
                transform.Translate(Vector3.forward * speed * Time.deltaTime);
            } else if (Input.GetKey(KeyCode.UpArrow) && Input.GetKey(KeyCode.RightArrow)) {
                transform.Translate(Vector3.forward * speed * Time.deltaTime);
            } else if (Input.GetKey(KeyCode.UpArrow) && Input.GetKey(KeyCode.LeftArrow)) {
                transform.rotation = Quaternion.AngleAxis(315, Vector3.up);
            }
            if (Input.GetKey(KeyCode.DownArrow)) {
                transform.Translate(Vector3.forward * speed * Time.deltaTime);
            }
        }
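
    The usual way to avoid stacked translations is to build a single direction vector from all pressed keys, normalize it, and apply the speed once per frame. A language-neutral sketch of that idea (plain functions, not Unity API):

        # Build one direction vector per frame, normalize it, apply speed once.
        import math

        def movement(up, down, left, right, speed, dt):
            dx = (1 if right else 0) - (1 if left else 0)
            dz = (1 if up else 0) - (1 if down else 0)
            length = math.hypot(dx, dz)
            if length == 0:
                return (0.0, 0.0)
            # Normalizing keeps diagonal movement at the same speed as straight movement.
            return (dx / length * speed * dt, dz / length * speed * dt)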

    Read the article

  • pagination - 10 pages per page

    - by arthur
    I have a pagination script that displays a list of all pages like so:

        prev [1][2][3][4][5][6][7][8][9][10][11][12][13][14] next

    But I would like to only show ten of the numbers at a time:

        prev [3][4][5][6][7][8][9][10][11][12] next

    How can I accomplish this? Here is my code so far:

        <?php
        /* Set current, prev and next page */
        $page = (!isset($_GET['page'])) ? 1 : $_GET['page'];
        $prev = ($page - 1);
        $next = ($page + 1);

        /* Max results per page */
        $max_results = 2;

        /* Calculate the offset */
        $from = (($page * $max_results) - $max_results);

        /* Query the db for total results.
           You need to edit the sql to fit your needs */
        $result = mysql_query("select title from topics");
        $total_results = mysql_num_rows($result);
        $total_pages = ceil($total_results / $max_results);

        $pagination = '';

        /* Create a PREV link if there is one */
        if ($page > 1) {
            $pagination .= '<a href="?page='.$prev.'">Previous</a> ';
        }

        /* Loop through the total pages */
        for ($i = 1; $i <= $total_pages; $i++) {
            if ($page == $i) {
                $pagination .= $i;
            } else {
                $pagination .= '<a href="index.php?page='.$i.'">'.$i.'</a>';
            }
        }

        /* Print NEXT link if there is one */
        if ($page < $total_pages) {
            $pagination .= '<a href="?page='.$next.'"> Next</a>';
        }

        /* Now we have our pagination links in a variable ($pagination) ready to print
           to the page. I put it in a variable because you may want to show them at the
           top and bottom of the page. */

        /* Below is how you query the db for ONLY the results for the current page */
        $result = mysql_query("select * from topics LIMIT $from, $max_results");
        while ($i = mysql_fetch_array($result)) {
            echo $i['title'].'<br />';
        }

        echo $pagination;
        ?>
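
    The missing piece is just the arithmetic for a sliding window around the current page. A language-neutral sketch; the same clamping logic can replace the 1..$total_pages bounds in the for loop above:

        # Sliding window of page links, centred (where possible) on the current page.
        def page_window(current, total_pages, size=10):
            start = max(1, current - size // 2)                    # try to centre the window
            start = min(start, max(1, total_pages - size + 1))     # clamp near the end
            end = min(total_pages, start + size - 1)
            return range(start, end + 1)

        # e.g. page_window(7, 14) -> pages 2..11, page_window(1, 14) -> pages 1..10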

    Read the article

  • How do I prevent a tar pipe from causing swapping?

    - by Jeff Shattock
    I have a rather large filesystem that I need to transfer from one Linux server to another. I figured the best way to do this was via a tar/netcat pipe arrangement, something like

        tar c . | pv | nc blah blah blah

    And it works great, the network stays fairly saturated, life is good. Until the source machine starts swapping.

    The files are on a RAID on the source system, so the read speed is much faster than the write speed on the other end. Since the destination machine hasn't picked up the data yet, the source machine needs to stick it somewhere, so into RAM it goes, until there is no more free RAM. It then starts swapping, which is horribly painful since that machine has its OS installed on a somewhat slow CF card. Both machines have 4GB of physical RAM and 64-bit Ubuntu 9.04 server, with a GigE link between them.

    How do I prevent this swapping? Can I put a "speed limit" on the tar or netcat process so that the transfer speed doesn't overwhelm the write throughput on the destination end? The man pages didn't list anything, but there might be something I'm overlooking.
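
    For reference, pv does have a rate-limit option (-L / --rate-limit) that caps the throughput of the pipe it sits in. The underlying idea is simple enough to sketch: a relay that never passes more than a fixed number of bytes per second, so the sender blocks on the pipe instead of buffering into RAM. A minimal, illustrative version (the 20 MB/s figure is an assumption to be tuned to the destination's write speed):

        # Rate-limited stdin -> stdout relay, the same idea as pv -L.
        import sys, time

        LIMIT = 20 * 1024 * 1024          # bytes/sec
        CHUNK = 64 * 1024
        start, sent = time.monotonic(), 0

        while True:
            buf = sys.stdin.buffer.read(CHUNK)
            if not buf:
                break
            sys.stdout.buffer.write(buf)
            sent += len(buf)
            ahead = sent / LIMIT - (time.monotonic() - start)
            if ahead > 0:
                time.sleep(ahead)         # ahead of schedule; wait it out

    Saved as (say) throttle.py, it would slot into the same pipeline: tar c . | python3 throttle.py | nc blah blah blah.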

    Read the article

  • Horizontal page curl in iPhone - I have - But Valid ?

    - by sagar
    Hello everyone. I was googling for a way to apply a horizontal page curl on iPhone. I tried this approach, but it wasn't appropriate for me (it changes orientation and works in a different orientation). I googled some more and finally found a link (or two) from which I could understand the horizontal page curl. But when I went into the code more deeply, I found some confusing points. Let me list them (please download the attached code -- it's not my code; a direct project link seemed better than pasting multiple snippets here):

        extern NSString *kCAFilterPageCurl; // From QuartzCore.framework

    Is it valid to use this internal variable (as it's mentioned, from QuartzCore)?

        CAFilter *previousFilter = [[CAFilter filterWithType:kCAFilterPageCurl] retain];

    The above statement gives a warning (not an error): no '+filterWithType:' method found. It might be using an internal (Apple private) method.

        [previousFilter setDefaults];

    The above statement gives a warning (not an error): no '-setDefaults:' method found. It might be using an internal (Apple private) method.

    Now my queries: I have the above doubts about the project (it might be using Apple's private methods). Is this code safe for App Store approval? If not, what should be done for a horizontal page curl? In short, I want a horizontal page curl, but I need your suggestions for a proper solution that won't cause me trouble in the future. Thanks in advance for your help. Sagar.

    Read the article

  • How to customize the content of each page using Page Control and UIScrollView?

    - by viper15
    I have a problem customizing each page using a page control and UIScrollView. I'm customizing Apple's PageControl sample. Basically, I would like each page to be different, alternating text and images: page 1 will have all text, page 2 just images, page 3 all text, and so on. This is the original code:

        // Set the label and background color when the view has finished loading.
        - (void)viewDidLoad {
            pageNumberLabel.text = [NSString stringWithFormat:@"Page %d", pageNumber + 1];
            self.view.backgroundColor = [MyViewController pageControlColorWithIndex:pageNumber];
        }

    As you can see, this code shows only "Page 1", "Page 2", etc. as you scroll right. I tried to put in the new code below, but it didn't make any difference, and there's no error. I declared pageText as a UILabel.

        // Set the label and background color when the view has finished loading.
        - (void)viewDidLoad {
            pageNumberLabel.text = [NSString stringWithFormat:@"Page %d", pageNumber + 1];
            self.view.backgroundColor = [MyViewController pageControlColorWithIndex:pageNumber];

            if (pageNumber == 1) {
                pageText.text = @"Text in page 1";
            }
            if (pageNumber == 2) {
                pageText.text = @"Image in page 2";
            }
            if (pageNumber == 3) {
                pageText.text = @"Text in page 3";
            }
        }

    I know this is pretty simple code, but I don't know why it doesn't work. Also, if you have a better way to do it, let me know. Thanks.

    Read the article

  • [LaTeX] positions of page numbers, position of chapter headings, chapters AND Table of Contents, Ref

    - by kaikanmonaco
    I am writing my PhD thesis (120+ pages) in LaTeX, the deadline is approaching, and I am struggling with layout problems. I am using the documentstyle book. I am posting both problems in this one thread because I am not sure whether the solution might be related to both problems or not. The problems are:

    1.) The page numbers are mostly located at the top right of each page (this is correct and where I want them to be). However, only on the first page of chapters and on the first page of what I call "special chapters", the page number is located bottom-centered. By "special chapters" I mean: List of Contents, List of Figures, List of Tables, References, Index. My university will not accept the thesis like this. The page number must ALWAYS be top-right on each page, even if the page is the first page of a chapter or the first page of something like the List of Contents. How can I fix this?

    2.) On the first page of chapters and "special chapters" (List of Contents, ...), the chapter title is located far too low on the page. This is the standard layout of LaTeX with the documentstyle book, I think. However, the chapter title must start at the very top of the page, i.e. at the same height as the normal text on the pages that follow. I mean the chapter title, not the header. I.e., if there is a chapter called "Chapter 1 Dynamics of foobar under mechanical stress", then that text has to start at the top of the page, but right now it starts several centimeters below the top. How can I fix this?

    I have tried all kinds of things to no effect; I'd be very thankful for a solution! Thanks.

    Read the article

  • Since Google reduces the value of links alongside nofollow links, what is an alternative?

    - by SharkTheDark
    Since 2009, Google has counted nofollow links as outgoing links too, and thus reduces the value of the other links on the page. What are some alternatives to stop Google from counting the outside links on my page? What if I make links appear in my page source like this:

        <span hrefs="http://link" rel="nofollow" link="true">Link Name</span>

    and then, in JavaScript, replace the span with an a tag and replace hrefs with href for every span tag that has link="true"? Will this help?

    Read the article

  • Establishing a web page bookmarking process - looking for ideas to improve

    - by Matt
    Like many others, I have a process for bookmarking web pages to read later. My requirements for web page bookmarking are:

    - Ability to bookmark pages must be available from all (within reason) platforms - PC/browser, mobile device, etc.
    - Bookmarks must be centrally stored (implicit from #2) so that I can read the bookmarks from anywhere/any device.
    - Full text of web pages must be stored.

    Bonus features would be:

    - Bookmarks and page content should be full text searchable.
    - Maintain an archive indefinitely.
    - Distinguish between what's read vs. unread.
    - Bookmarked page content is cleaned up, e.g. ads eliminated, unnecessary html removed, pages better formatted for reading.

    My current process (which addresses most of these requirements) is as follows:

    - I set up a Gmail account with 2 labels, "Bookmarks Unread" and "Bookmarks Read".
    - Gmail filters are set up such that, depending on the form of the address (using Gmail's '+string' functionality in addresses), the incoming bookmark gets labeled appropriately.
    - On each of my browsers/devices, I have an address book entry for [email protected] and [email protected]. If I want to clean up the page content, I use the Readability bookmarklet, which does a great job of giving me the essential content only.
    - Anywhere I have Firefox, I use the Send Page by Email extension which, with 2 clicks, allows me to send the cleaned-up Readability page URL and content to one of the above email addresses.
    - Where I don't have Firefox (e.g. iPhone or other mobile device), I use the native ability to send the current link via email (most/all apps have it, including the browser, RSS readers, NYTimes, etc.). In most cases (unless it's built into the particular app), this won't include the page body.

    The process is almost perfect. I've got the central access and ubiquitous access of Gmail as the storage mechanism, full text searchability (due to Gmail, but of course only for the URLs I send from that Firefox extension), a cleaned-up page due to Readability, the ability to read offline (assuming I use an IMAP client against Gmail) and permanent archiving of content, including what's been read vs. unread.

    The missing pieces are:

    - The Send Page by Email Firefox extension seems to only send X bytes of a web page, or some portion. So it limits my full text searchability.
    - Where I don't have Firefox, I can only send the link, so no full text search at all in those cases.

    Instapaper looks like it meets most of my requirements (and bonus items). The only downside to me (personal preference) is that central storage is based on Instapaper vs. something more broad like Gmail, which as a generalized service and with Google behind it pretty much means it's permanent. I'm not too hung up on this, but I would definitely prefer to keep Gmail if possible. An upside of Instapaper is that it does the page clean-up as well as storing the entire page content, unlike my Firefox extension.

    Thoughts on addressing the gaps and improving this process further?

    Read the article

  • Formula-based Excel page headers

    - by Jake Krohn
    I'm using the "Rows to repeat at top" function in Excel's "Page Setup" dialog to ensure that a multi-row header block appears on every printed page of my worksheet. However, I'd like to be able to change certain bits of the header based on the content of the current page. I would simply like to display the value of one cell in the first row that is printed on the page. If this is my header:

        Section: xx

    and the data looks like this (columns are Section and Name):

        1  Foo
        1  Bar
        2  Baz

    I want the "xx" in the header to be "1". If, further down on the next page, the value in the Section column is "3", I want that printed in the header of the next page. I originally thought that using the "OFFSET" function might help, e.g.

        ="Section: "&OFFSET(A2, 1, 0)

    but it only shows the offset from the original placement of the header, thus only working on page 1.

    The end document is a PDF, so right now I'm able to go back in with the "TouchUp Text Tool" in Acrobat and add the numbers page by page. But it gets to be a tedious process with 70+ page reports. Anyone have any better ideas that don't require me mucking up the original Excel document with inserted headers every N lines? This is Excel 2008 for Mac, if it makes a difference.

    Read the article

  • CrossPost access to data

    - by Craig
    Hi, I have a search form on a page that posts back to itself and shows the results; all works fine. I now have a requirement to put the same search form on the site home page. This needs to post back to the search form and run the findResults code. Using the PostBackURL parameter on the home page form's submit button, I can hit the search page OK. However, when using the following code in the Page_Load section of the search page, I hit the problem of not being able to access data from the posting page, as I get the following error message on the line starting "yearList.SelectedValue...":

        'Site._default1.Protected WithEvents yearList As System.Web.UI.WebControls.DropDownList'
        is not accessible in this context because it is 'Protected'

    The code:

        '#################################
        '# Handle form post from Home page
        '#################################
        Dim crossPostBackPage As Site._default1

        If Not (Page.PreviousPage Is Nothing) Then
            If Not (Page.IsCrossPagePostBack) Then
                If (Page.PreviousPage.IsValid) Then
                    crossPostBackPage = CType(PreviousPage, Site._default1)
                    yearList.SelectedValue = crossPostBackPage.yearList.SelectedValue
                    getAvailability()
                End If
            End If
        End If

    As I didn't declare yearList Protected, I don't know where or how to change it. Any advice would be appreciated, Craig

    Read the article

  • Why am I getting domainpark.cgi being called from my website?

    - by Sean
    I used to test my site on www.exampleone.com; I have now moved to the real domain www.realdomain.com, and www.exampleone.com is now parked by 1and1 (the default). When I check which requests are made by www.realdomain.com, I see domainpark.cgi and park.js from Sedo Parking also being requested, as well as the JS that serves the ads by adclicks. How do I get rid of this? It's not on the index page at all, and it's causing a lot of strain and slowing my site down.

    Read the article

  • get data from asp page

    - by sam
    I am wondering if there is any way to grab the HTML that is generated by an ASP page. I am trying to pull a table from the page, and I foolishly tested against a static HTML copy so I would not have to constantly query the server where the page resides while I worked out my code. The JavaScript code I wrote to grab the unlabeled table from the page works. But when I put it into practice with the real page, I found that the ASP page does not return a usable page for a jQuery .get request on the URL. Is there any way to query the page for the table I need so that the ASP page returns a valid page on request? (I am also limited to using JavaScript and Perl for this; the server where this will reside will not run PHP, and I have no desire to learn ASP.NET and add to the issue of proprietary software.)

    Read the article

  • How can I most accurately calculate the execution time of an ASP.NET page while also displaying it o

    - by henningst
    I want to calculate the execution time of my ASP.NET pages and display it on the page. Currently I'm calculating the execution time using a System.Diagnostics.Stopwatch and then storing the value in a log database. The stopwatch is started in OnInit and stopped in OnPreRenderComplete. This seems to be working quite well, and it gives an execution time similar to the one shown in the page trace. The problem now is that I'm not able to display the execution time on the page, because the stopwatch is stopped too late in the life cycle. What is the best way to do this?

    Read the article

  • Will_paginate Plugin on two objects on same page

    - by piemesons
    Hello, I am using the will_paginate plugin on two objects on the same page, like on Stack Overflow: there is a profile page with pagination on two things, questions and answers. The problem I'm having is that when the user clicks page 2 of the questions pagination, the answers pages also update. The reason is that both send the same variable, params[:page]. How do I change this variable so that only one of them is updated, and how do I make sure the user does not lose the other page? I.e., he is on the 3rd page of questions and the 1st page of answers, and now he clicks the 5th page of the questions; the result should be the 3rd page of questions and the 5th page of answers.

    Read the article

  • Joining and compressing all javascript files together - good idea?

    - by Tomáš Zato
    Currently, I avoid loading any unnecessary scripts on individual pages of my site. I have a class that remembers all JavaScript files that were requested during PHP processing and adds them to the HTML. I was just thinking that I could merge the current set of files, save the result in a special directory, and let the browser download just one big file. Since the number of possible combinations is not very high, I would end up with about 10 combined files for different pages. I've never seen that on any site. What are the reasons not to do it? I need very fast page loads.
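
    For illustration only, the merge-and-cache step described above can be quite small. A hedged, language-neutral sketch (the directory layout, the hash-based naming, and how it plugs into the PHP class are all assumptions):

        # Merge one page's script list into a single cached file, named by content
        # hash so a changed combination automatically gets a new URL.
        import hashlib, os

        def combined_script(js_files, out_dir="static/js/combined"):
            blob = b"\n;\n".join(open(f, "rb").read() for f in js_files)  # ';' guards against concatenation issues
            name = hashlib.sha1(blob).hexdigest()[:12] + ".js"
            path = os.path.join(out_dir, name)
            if not os.path.exists(path):                 # reuse an already-built combination
                os.makedirs(out_dir, exist_ok=True)
                with open(path, "wb") as fh:
                    fh.write(blob)
            return path                                  # reference this one file in the HTML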

    Read the article

  • asp.net could a half submitted web page be processed?

    - by c00ke
    I'm having a weird bug in production and just wondering: is it possible for a half-submitted web page to be processed by the server? The page has no view state, just plain old HTML controls, and the data displayed in a repeater is accessed on the back end via Request.Form[name] etc. Is it possible for a request to be truncated, perhaps due to a lost internet connection, and for the page still to be processed by the server? In that case, if a field were not part of the request, could Request.Form[name] result in null? I know I could use Fiddler to modify the request, but unfortunately we are not allowed to change group policy and change the proxy! Many thanks.

    Read the article
