Search Results

Search found 17233 results on 690 pages for 'download speed'.

Page 42/690 | < Previous Page | 38 39 40 41 42 43 44 45 46 47 48 49  | Next Page >

  • Slow Gmail Attachment Download over HTTPS [closed]

    - by Todd
    The problem only affects Gmail. I have tested other HTTP, FTP and HTTPS websites and downloading is fine at the exact same time the problem is affecting a Gmail attachment download. The symptoms include both: a high waiting time before the download starts (on the order of 30 seconds); AND very slow download transfer speeds (on the order of 1 kbps). It's a very strange problem, and I didn't find any recent issues relating to Google Australia. The account works fine at another site with a different ISP, so it's either the computer configuration or the ISP - I lean towards the latter.

    Read the article

  • How to download images using wget from a txt file that contains links

    - by SwanC
    I can download images using wget when I download directly from a website. But I have several links, and I have saved them in a text file. For example:

        wget -r -A.jpg -np www.fragrancenet.com

    There are so many pictures on this website. I have saved the links for the particular pictures I want:

        www.fragrancenet.com/images1
        www.fragrancenet.com/images2
        www.fragrancenet.com/images3

    The links are saved in a text file named images.txt on my computer. How can I download the links in the images.txt text file using wget?
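    A sketch of the usual way to do this (the target directory name is just an example): wget's -i option reads URLs from a file, one per line, so the recursive options aren't needed once the links already point at the exact pictures you want.

        # -i (--input-file) reads one URL per line from images.txt;
        # -P (--directory-prefix) puts the downloaded files into ./images.
        wget -i images.txt -P images/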

    Read the article

  • how to speed up this code? [migrated]

    - by dot
    I have some code that's taking over 3 seconds to complete. I'm just wondering if there's a faster way to do this. I have a string with anywhere from 10 to 70 rows of data. I break it up into an array and then loop through the array to find specific patterns.

        $this->_data = str_replace(chr(27), " ", $this->_data, $count); // strip out esc character
        $this->_data = explode("\r\n", $this->_data);
        $detailsArray = array();
        foreach ($this->_data as $details) {
            $pattern = '/(\s+)([0-9a-z]*)(\s+)(100\/1000T|10|1000SX|\s+)(\s*)(\|)(\s+)(\w+)(\s+)(\w+)(\s+)(\w+)(\s+)(1000FDx|10HDx|100HDx|10FDx|100FDx|\s+)(\s*)(\w+)(\s*)(\w+|\s+)(\s*)(0)/i';
            if (preg_match($pattern, $details, $matches)) {
                array_push($detailsArray, array(
                    'Port'       => $matches[2],
                    'Type'       => $matches[4],
                    'Alert'      => $matches[8],
                    'Enabled'    => $matches[10],
                    'Status'     => $matches[12],
                    'Mode'       => $matches[14],
                    'MDIMode'    => $matches[16],
                    'FlowCtrl'   => $matches[18],
                    'BcastLimit' => $matches[20]
                ));
            } // end if
        } // end foreach
        $this->_data = $detailsArray;

    Just wondering if you think there's a way to make it more efficient. Thanks.
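    One thing worth trying (a sketch, not a guaranteed fix - only profiling will show whether it accounts for the full 3 seconds): PHP's PCRE engine caches compiled patterns, so hoisting the pattern string out of the loop is only a small win, but the regex requires a literal '|' in every matching row, so a cheap strpos() pre-filter can skip most non-matching lines before the backtracking-heavy pattern ever runs.

        $pattern = '/(\s+)([0-9a-z]*)(\s+)(100\/1000T|10|1000SX|\s+)(\s*)(\|)(\s+)(\w+)(\s+)(\w+)(\s+)(\w+)(\s+)(1000FDx|10HDx|100HDx|10FDx|100FDx|\s+)(\s*)(\w+)(\s*)(\w+|\s+)(\s*)(0)/i';
        $detailsArray = array();
        foreach ($this->_data as $details) {
            if (strpos($details, '|') === false) {
                continue; // cannot match: the pattern needs a '|' separator
            }
            if (preg_match($pattern, $details, $matches)) {
                $detailsArray[] = array(
                    'Port' => $matches[2],
                    'Type' => $matches[4],
                    // ... remaining fields exactly as in the original ...
                );
            }
        }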

    Read the article

  • What is recommended minimum object size for gzip performance benefits?

    - by utt73
    I'm working on improving page display times, and one of the methods is to gzip content from the webserver.

    Google recommends: "Note that gzipping is only beneficial for larger resources. Due to the overhead and latency of compression and decompression, you should only gzip files above a certain size threshold; we recommend a minimum range between 150 and 1000 bytes. Gzipping files below 150 bytes can actually make them larger."

    We serve our content through Akamai, using their network for a proxy and CDN. What they've told me: "Following up on your question regarding what is the minimum size Akamai will compress the requested object when sending it to the end user: the minimum size is 860 bytes."

    My reply: What is the reason(s) why Akamai's minimum size is 860 bytes? And why, for example, is this not the case for files Akamai serves for Facebook? (see below) Google recommends gzipping more aggressively, and that seems appropriate on our site, where the most frequent hits, by far, are AJAX calls that are <860 bytes.

    Akamai's response: "The reason 860 bytes is the minimum size for compression is twofold: (1) The overhead of compressing an object under 860 bytes outweighs the performance gain. (2) Objects under 860 bytes can be transmitted via a single packet anyway, so there isn't a compelling reason to compress them."

    So I'm here for some fact checking. Is the 860 byte limit due to packet size the end of this reasoning? Why would high traffic sites push this down to the 150 byte limit... just to save on bandwidth costs (since CDNs base their charges on bandwidth offloaded from origin), or is there a performance gain in doing so?
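    The overhead half of the claim is easy to sanity-check (a quick sketch; Python is used here purely for the experiment and has nothing to do with the Akamai setup): gzip's fixed container alone adds roughly 18 bytes of header and trailer, so very small responses can come back larger.

        import gzip

        payload = b'{"status": "ok", "items": []}'   # a small AJAX-style response, 29 bytes
        compressed = gzip.compress(payload)

        print(len(payload), len(compressed))
        # The gzip header and trailer add ~18 bytes on top of the deflate stream,
        # so a payload this small typically comes out larger than the original.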

    Read the article

  • Lightning fast forum based around metadata / tags? [closed]

    - by Dan W
    Possible Duplicate: What Forum Software should I use? I wonder if anything like this exists. I'd like to add a forum to my site, but instead of the usual forum/subforum/sub-subforum structure, I'd like to use a metadata/tag approach where everything exists in a single directory, and where there's a search field at the top which instantly (<0.5 sec) filters the threads to a particular keyword or keywords. Also, as the admin, I would be able to add highly visible buttons at the top, which can be clicked for the main categories I choose for the forum (nevertheless, users can also add tags to their own threads outside of these default main tags I supply if they wish). This approach, if done properly, is more powerful, efficient, maintenance-free, scalable and friendly than a standard forum, so I was hoping someone had the same idea and made something out of it. It couldn't be that hard. I'd want the speed to be up to (or near) the standard of this: http://forum.dlang.org/ - other forums (e.g. phpBB) are orders of magnitude worse than that in terms of latency (posting or browsing), and I think that is wrong, even in principle ;)

    Read the article

  • What is recommended minimum object size for gzip benefits?

    - by utt73
    I'm working on improving page display times, and one of the methods is to gzip content from the webserver.

    Google recommends: "Note that gzipping is only beneficial for larger resources. Due to the overhead and latency of compression and decompression, you should only gzip files above a certain size threshold; we recommend a minimum range between 150 and 1000 bytes. Gzipping files below 150 bytes can actually make them larger."

    We serve our content through Akamai, using their network for a proxy and CDN. What they've told me: "Following up on your question regarding what is the minimum size Akamai will compress the requested object when sending it to the end user: the minimum size is 860 bytes."

    My reply: What is the reason(s) why Akamai's minimum size is 860 bytes? And why, for example, is this not the case for files Akamai serves for Facebook? (see below) Google recommends gzipping more aggressively, and that seems appropriate on our site, where the most frequent hits, by far, are AJAX calls that are <860 bytes.

    Akamai's response: "The reason 860 bytes is the minimum size for compression is twofold: (1) The overhead of compressing an object under 860 bytes outweighs the performance gain. (2) Objects under 860 bytes can be transmitted via a single packet anyway, so there isn't a compelling reason to compress them."

    So I'm here for some fact checking. Is the 860 byte limit due to packet size the end of this reasoning? Why would high traffic sites push this lower, closer to the 150 byte limit... just to save on bandwidth costs, or is there a performance gain in doing so?

    Read the article

  • Session lost and application ends after file download

    - by Amr ElGarhy
    I have this code at the end of a link button click handler:

        Response.ContentType = "application/zip";
        Response.AppendHeader("content-disposition", "attachment; filename=download.zip");
        Response.TransmitFile(Server.MapPath("download.zip"));
        Response.End();

    It downloads a zip file from an aspx page. On the previous page I set a session variable; after going to this download page and downloading the file, then pressing back, I find the session is null (this happens after downloading more than one time), and Application_End in Global.asax is called. Do you know why this may happen?

    Read the article

  • .NET WebClient tag property, or keeping track of downloads?

    - by GX
    Hello, I am trying to implement an asynchronous file download from a web server using:

        private void btnDownload_Click(object sender, EventArgs e)
        {
            WebClient webClient = new WebClient();
            webClient.Credentials = new NetworkCredential("test", "test");
            webClient.DownloadFileCompleted += new AsyncCompletedEventHandler(Completed);
            webClient.DownloadProgressChanged += new DownloadProgressChangedEventHandler(ProgressChanged);
            webClient.DownloadFileAsync(new Uri("ftp://2.1.1.1:17865/zaz.txt"), @"c:\myfile.txt");
        }

        private void ProgressChanged(object sender, DownloadProgressChangedEventArgs e)
        {
            progressBar1.Value = e.ProgressPercentage;
        }

        private void Completed(object sender, AsyncCompletedEventArgs e)
        {
            MessageBox.Show("Download completed!");
        }

    Now the download works fine. What I would like to know is the following: assuming that I will have more than one download at a time, what is the best way to keep track of each download and report progress separately? I was looking for a Tag property on the WebClient, but could not find one. Thank you.
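    One approach (a sketch, not from the article; the file names are illustrative): WebClient has no Tag property, but the DownloadFileAsync overload that takes a third userToken argument hands that object back to both event handlers as e.UserState, so each download can carry its own identifier.

        // Sketch: one WebClient per file, identified through the userToken argument.
        private void StartDownloads()
        {
            string[] files = { "zaz.txt", "other.txt" };
            foreach (string name in files)
            {
                WebClient webClient = new WebClient();
                webClient.Credentials = new NetworkCredential("test", "test");
                webClient.DownloadProgressChanged += ProgressChanged;
                webClient.DownloadFileCompleted += Completed;
                // The third argument comes back to the handlers as e.UserState.
                webClient.DownloadFileAsync(new Uri("ftp://2.1.1.1:17865/" + name), @"c:\" + name, name);
            }
        }

        private void ProgressChanged(object sender, DownloadProgressChangedEventArgs e)
        {
            string name = (string)e.UserState;   // which download this event belongs to
            // update the progress bar or list row associated with 'name' here
        }

        private void Completed(object sender, AsyncCompletedEventArgs e)
        {
            string name = (string)e.UserState;
            MessageBox.Show(name + " download completed!");
        }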

    Read the article

  • PHP/Apache Deny folder access to user but not to script

    - by Piero
    Hey all, so I have this PHP web app, and one of my folders contains some files that can be downloaded. I have a download script that modifies the headers in order to always offer a download link (instead of showing a picture, for example, when you click on a link a download box pops up). Right now, if you enter a URL like http://www.mywebsite.com/content/ you get the listing of all the downloadable files, and of course you can just download them all without going through the website interface. Personally, I don't think it's a problem, since I often use DownThemAll or other downloading tools, and this type of access is a great time saver... but of course my company does not think so :-p They want people to use the interface in order to view the ads... Would there be a way, maybe with a protected .htaccess, to leave folder access to my download script, but deny access to the users...? I hope I am making sense and you know what I mean :) All help/remarks appreciated!
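    One commonly used arrangement (a sketch, not from the article): deny direct HTTP access to the folder with a .htaccess file, and let the PHP download script keep working by reading the files from the filesystem (e.g. with readfile() after setting the headers), since a script reading from disk never goes through Apache's per-directory access rules for that URL.

        # content/.htaccess -- block direct browser requests to this folder
        # (Apache 2.2 syntax; on Apache 2.4 the equivalent is "Require all denied")
        Order deny,allow
        Deny from all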

    Read the article

  • Download multiple files in background in Android

    - by Addev
    Basically I'm trying to make a little app for watching offline content. So there's a moment where the user selects to download the contents (and the app should download about 300 small files and images). I'd like to show the user how the process is going if he enters the proper activity, showing a list of all the files and telling which have already been downloaded, which are in progress, and which are waiting for download. My problem is that I really don't know what approach to take to achieve this. Since the download should last until finished, I imagine the solution is a Service, but what's best: an IntentService, a bound Service, or a standard Service calling startService() for each download? And how can I keep my objects updated for displaying them later? Should I use a database or objects in memory? Thanks
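    A sketch of one possible approach (everything here is illustrative, not from the article: the extras, action string and broadcast mechanism are assumptions): an IntentService already queues work on a background thread, so it can download the files one by one and broadcast progress that a list activity can pick up; persisting the per-file state to a database would additionally survive process restarts.

        import android.app.IntentService;
        import android.content.Intent;
        import android.support.v4.content.LocalBroadcastManager;

        // Sketch: sequential background downloader reporting progress via local broadcasts.
        public class DownloadService extends IntentService {

            public static final String ACTION_PROGRESS = "com.example.DOWNLOAD_PROGRESS";

            public DownloadService() {
                super("DownloadService");
            }

            @Override
            protected void onHandleIntent(Intent intent) {
                String[] urls = intent.getStringArrayExtra("urls");
                for (int i = 0; i < urls.length; i++) {
                    downloadFile(urls[i]);                    // e.g. HttpURLConnection to internal storage
                    Intent progress = new Intent(ACTION_PROGRESS);
                    progress.putExtra("completed", i + 1);
                    progress.putExtra("total", urls.length);
                    // The activity showing the file list registers a receiver for
                    // ACTION_PROGRESS and updates its adapter (or re-reads the database).
                    LocalBroadcastManager.getInstance(this).sendBroadcast(progress);
                }
            }

            private void downloadFile(String url) {
                // ... open the connection and stream the file to disk ...
            }
        }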

    Read the article

  • how to use javascript to download a file on Chrome without Chrome auto renaming file to "download"? [duplicate]

    - by user3688566
    This question already has an answer here: Is there any way to specify a suggested filename when using data: URI? (11 answers)

    I use JavaScript to generate a file and download it. It seems that depending on the version of Chrome, the downloaded file can be auto-renamed to 'download'. Is there a way to avoid it? This is my code:

        var link = document.createElement("a");
        link.setAttribute("href", 'data:application/octet-stream,' + 'file content here');
        link.setAttribute("download", 'file1.txt');
        link.click();

    This is not a duplicate question because I am using the latest Chrome and the previously suggested hyperlink is exactly what I am using. I think Chrome v34 worked fine, but once my Chrome auto-updated to v35, it went back to the 'download' file name.
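    A workaround that is often suggested for this behaviour (a sketch, not verified against every Chrome version): build the content as a Blob and hand the anchor an object URL instead of a data: URI; the filename from the download attribute is generally honoured for blob: URLs.

        var blob = new Blob(['file content here'], { type: 'application/octet-stream' });
        var link = document.createElement('a');
        link.href = URL.createObjectURL(blob);
        link.download = 'file1.txt';
        document.body.appendChild(link);   // some browser versions want the link in the DOM
        link.click();
        document.body.removeChild(link);
        URL.revokeObjectURL(link.href);    // release the object URL once the click is dispatched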

    Read the article

  • Multicast image restoration with adaptive speed

    - by Clinton Blackmore
    I'm curious to know if there are any tools for restoring disk images (or even transferring files) via multicast -- for any platform, especially if the project has source available -- where the multicast rate adjusts itself on the fly. On the Mac, all multicast solutions I am aware of (such as Deploy Studio, and NetRestore before it) make use of multicast ASR (Apple Software Restore), which has one glaring deficiency -- you have to set the multicast speed before you start sending a disk image over the network, and that speed is locked in. Either your clients can keep up and restore, or they can't.* It seems to me that it must be possible for the multicast server to adjust the data rate, so you basically say "start sending this image", clients connect, and, if they can't keep up, they tell the server so it slows down. (Likewise, I'd expect the server to try speeding up if no client is having difficulty keeping up, and I'd expect to be able to cap the maximum throughput so that other network activities can go on without being starved of resources.) So, what sort of tools are out there? For Linux? Windows? Is there something for the Mac I've overlooked? [It just kills me that, by the time you get multicast up and going at a good speed to restore a lab, you could have unicast the data to all the computers and been done.] * There is a little leeway involved. I think individual clients can say, "I missed a little bit of data" and get it, and they can opt to listen in the next time the image is sent over the network, but on the whole, if they missed it the first go round, you have to image the machine again, and there is no time savings.

    Read the article

  • repo sync "CyanogenMod/android_prebuilt" size and resume capability?

    - by james
    I'm downloading the CyanogenMod 10.1 source on low speed broadband. About 4 GB of source has been downloaded. In that 4 GB there is a big project, "CyanogenMod/android_frameworks_base", which alone took 1 GB of download without any interruption. Now, after 4 GB of download, my internet got disconnected and I had to stop (Ctrl+Z) repo sync while it was downloading the project "CyanogenMod/android_prebuilt". Before I stopped repo sync, android_prebuilt had downloaded up to 250 MB and was at 42 percent. I checked the working folder and there is a file "tmp_pack_df5CKb" of size 250 MB in the path "$WORKING_DIR/.repo/projects/prebuilt.git/objects/pack/". Then I restarted repo sync and it was downloading the android_prebuilt project, but I'm not sure if it was downloading from the start or resuming from 250 MB. While downloading this time, the previous "tmp_pack_df5CKb" isn't deleted and the content is being downloaded to a new file, "tmp_pack_HPfvFG". I heard repo sync cannot be resumed for a project. But here, since the previous file isn't deleted, I want to ask whether android_prebuilt is resuming or downloading from the start again? Now that my high speed internet is over (current speed 256 kbps), I'm not sure I can download the remaining ~4 GB if a single project is 500 MB in size.
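    If the partial pack turns out not to be reusable, one way to limit the damage of further disconnections (a sketch; exact flags depend on the repo version installed) is to sync a single project at a time with one fetch job, so an interruption only costs that project's partial download:

        # -c (--current-branch) fetches less history; -j1 keeps it to one fetch job.
        # Passing the project name limits the sync to that one project.
        repo sync -c -j1 CyanogenMod/android_prebuilt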

    Read the article

  • High fan speed with no reason

    - by Klaus
    For a few weeks, the fans of my Lenovo B590 laptop, running Xubuntu 14, turn to high speed a few minutes after it is turned on. The fans won't slow down until I turn the computer off. This is quite strange, since this didn't happen before, and the temperatures are quite low (are they?):

        $ sensors
        Adapter: Virtual device
        temp1:        +36.0°C  (crit = +88.0°C)
        temp2:        +30.0°C  (crit = +126.0°C)

        coretemp-isa-0000
        Adapter: ISA adapter
        Physical id 0:  +37.0°C  (high = +72.0°C, crit = +90.0°C)
        Core 0:         +34.0°C  (high = +72.0°C, crit = +90.0°C)
        Core 1:         +31.0°C  (high = +72.0°C, crit = +90.0°C)

        thinkpad-isa-0000
        Adapter: ISA adapter
        fan1:           0 RPM

        pkg-temp-0-virtual-0
        Adapter: Virtual device
        temp1:        +37.0°C

        $ sudo hddtemp /dev/sda
        /dev/sda: ST500LT012-9WS142: 33°C

    The computer is under low load:

        top - 08:30:15 up 16 min,  2 users,  load average: 0.28, 0.23, 0.23
        Tasks: 197 total,   1 running, 196 sleeping,   0 stopped,   0 zombie
        %Cpu(s):  0.8 us,  0.5 sy,  0.0 ni, 98.7 id,  0.0 wa,  0.0 hi,  0.0 si,  0.0 st
        KiB Mem:   3607944 total,  1973956 used,  1633988 free,    99660 buffers
        KiB Swap:  3744764 total,        0 used,  3744764 free.   789936 cached Mem

    The BIOS is up to date (and there are no fan settings in it), and the fan is clean and dust-free. Why would the BIOS turn the fans to high speed when there seems to be no reason for it? It seems that the fan cannot be controlled manually on this model, so I guess the only solution is to understand why this happens.

    Read the article

  • Internet compression proxy for low speed broadband?

    - by user23150
    I live in a rural location, using high-latency wireless off a local ISP's tower. My speed tests vary day to day, but I can get around 1 Mb up/down. The problem is, I work with large files, uploading and downloading (HD videos, development software, etc.). It can be painful to wait sometimes. Plus I do some side contract game development, and it can be very difficult to playtest with other developers (200 ms ping is a good day for me). Now, obviously it's not going to be easy to solve the latency problem without different wireless hardware. But speed-wise, I am wondering if I can use some kind of compression technology on a proxy. For instance, my work computer has full access to a 26 Mb down, 10 Mb up connection that is totally unused at night and on weekends. If I could run some kind of compression technology on our server, and use it as a proxy to route to my home computer, I could stand to gain some major speed. I realize that by bogging down a system with compression, I could potentially lose whatever speed gain I had. But the proxy server is a quad core Xeon, and the receiving computer is a pretty decent i7, so that shouldn't be a concern. I found http://toonel.net/ but it seems more geared toward very slow narrowband users, like dial-up. Plus, I would prefer to just be able to point my browser at a proxy server, rather than install software on my client machine. EDIT: I thought about my question a little more, and realize I am going to need to install software on my client in order to decompress, and possibly compress (for uploading). That's not a huge deal.
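    A low-setup sketch of this idea, assuming SSH access to the work machine (the hostname is hypothetical): an SSH tunnel can compress traffic on the wire and expose a local SOCKS proxy the browser can point at, with the SSH client itself being the only software needed on the home machine.

        # Run on the home machine: -C compresses the tunnelled traffic, -D 1080 opens
        # a dynamic (SOCKS) proxy on localhost:1080, -N skips starting a remote shell.
        ssh -C -N -D 1080 user@work-server.example.com
        # Then set the browser's SOCKS proxy to localhost:1080.

    Note that this only helps with compressible content; HD video and other already-compressed files will see little benefit from -C, though the proxy itself still works for them.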

    Read the article

  • How to avoid movement speed stacking when multiple keys are pressed?

    - by eren_tetik
    I've started a new game which requires no mouse, thus leaving the movement up to the keyboard. I have tried to incorporate 8 directions: up, left, right, up-right and so on. However, when I press more than one arrow key, the movement speed stacks (http://gfycat.com/CircularBewitchedBarebirdbat). How could I counteract this? Here is the relevant part of my code:

        var speed : int = 5;

        function Update () {
            if (Input.GetKey(KeyCode.UpArrow)) {
                transform.Translate(Vector3.forward * speed * Time.deltaTime);
            }
            else if (Input.GetKey(KeyCode.UpArrow) && Input.GetKey(KeyCode.RightArrow)) {
                transform.Translate(Vector3.forward * speed * Time.deltaTime);
            }
            else if (Input.GetKey(KeyCode.UpArrow) && Input.GetKey(KeyCode.LeftArrow)) {
                transform.rotation = Quaternion.AngleAxis(315, Vector3.up);
            }
            if (Input.GetKey(KeyCode.DownArrow)) {
                transform.Translate(Vector3.forward * speed * Time.deltaTime);
            }
        }
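    A common way to handle this (a sketch in the same UnityScript style, not taken from the article): accumulate a single direction vector from whichever arrow keys are held, normalize it, and apply one Translate per frame, so holding two keys never adds up to double speed.

        var speed : int = 5;

        function Update () {
            var direction : Vector3 = Vector3.zero;
            if (Input.GetKey(KeyCode.UpArrow))    direction += Vector3.forward;
            if (Input.GetKey(KeyCode.DownArrow))  direction += Vector3.back;
            if (Input.GetKey(KeyCode.LeftArrow))  direction += Vector3.left;
            if (Input.GetKey(KeyCode.RightArrow)) direction += Vector3.right;

            if (direction != Vector3.zero) {
                // normalized keeps diagonal movement at the same speed as straight movement
                transform.Translate(direction.normalized * speed * Time.deltaTime);
            }
        }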

    Read the article
