Search Results

Search found 17233 results on 690 pages for 'download speed'.


  • download file without saving it on the server

    - by lolalola
    Hi, is it possible for users to download a file without it being saved on the server? I am getting data from the database, and I want to serve it as a .doc (MS Word) file.

        if (isset($_POST['save'])) {
            $file = "new_file2.doc";
            $stringData = "Text text text text...."; // this data goes into the new Word file
            $fh = fopen($file, 'w');
            fwrite($fh, $stringData);
            fclose($fh);
            header('Content-Description: File Transfer');
            header('Content-type: application/msword');
            header('Content-Disposition: attachment; filename="'.$file.'"');
            header('Content-Transfer-Encoding: binary');
            header('Expires: 0');
            header('Cache-Control: must-revalidate, post-check=0, pre-check=0');
            header('Pragma: public');
            header('Content-Length: ' . filesize($file));
            ob_clean();
            flush();
            readfile($file);
            unlink($file);
            exit;
        }

    What should the code look like without this: "$fh = fopen($file, 'w'); fwrite($fh, $stringData); fclose($fh);" and this: "unlink($file);"? I hope it is clear what I need.
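
    One possible approach (a minimal sketch, not from the original post): skip the temporary file entirely and send the generated string straight to the browser, using strlen() for the Content-Length, so nothing has to be written to or deleted from the server.

        if (isset($_POST['save'])) {
            $stringData = "Text text text text...."; // content built from the database
            header('Content-Description: File Transfer');
            header('Content-Type: application/msword');
            header('Content-Disposition: attachment; filename="new_file2.doc"');
            header('Expires: 0');
            header('Cache-Control: must-revalidate');
            header('Pragma: public');
            header('Content-Length: ' . strlen($stringData)); // length of the in-memory string, no filesize() needed
            ob_clean();
            flush();
            echo $stringData; // stream the string directly; no file is created on disk
            exit;
        }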

    Read the article

  • download large files using servlet

    - by niks
    I am using Apache Tomcat Server 6 and Java 1.6 and am trying to write large mp3 files to the ServletOutputStream for a user to download. Files range from 50 to 750 MB at the moment. The smaller files aren't causing too much of a problem, but with the larger files I am getting a socket exception: broken pipe.

        File fileMp3 = new File(objDownloadSong.getStrSongFolder() + "/" + strSongIdName);
        FileInputStream fis = new FileInputStream(fileMp3);
        response.setContentType("audio/mpeg");
        response.setHeader("Content-Disposition", "attachment; filename=\"" + strSongName + ".mp3\";");
        response.setContentLength((int) fileMp3.length());
        OutputStream os = response.getOutputStream();
        try {
            int byteRead = 0;
            while ((byteRead = fis.read()) != -1) {
                os.write(byteRead);
            }
            os.flush();
        } catch (Exception excp) {
            downloadComplete = "-1";
            excp.printStackTrace();
        } finally {
            os.close();
            fis.close();
        }

    Read the article

  • Curl download image not working

    - by mark
    I would like to check whether a remote image is not older than 2 days and then download it. At the moment the image is never downloaded. What is wrong here?

        $ch = curl_init($file_source); // the file we are downloading
        curl_setopt($ch, CURLOPT_TIMEOUT, 15);
        curl_setopt($ch, CURLOPT_FOLLOWLOCATION, true);
        curl_setopt($ch, CURLOPT_FILETIME, true);
        curl_setopt($ch, CURLOPT_HEADER, true);
        curl_setopt($ch, CURLOPT_RETURNTRANSFER, false);
        curl_exec($ch);
        $headers = curl_getinfo($ch);
        $last_modified = $headers['filetime'];
        if ($last_modified != -1) {
            if ($last_modified > time() - 86400 * 2) {
                $ch2 = curl_init($file_source);
                $wh = fopen($file_target, 'wb') or errorIMG('003');
                curl_setopt($ch2, CURLOPT_FILE, $wh);
                curl_setopt($ch2, CURLOPT_TIMEOUT, 25);
                curl_setopt($ch2, CURLOPT_FOLLOWLOCATION, true);
                curl_setopt($ch2, CURLOPT_HEADER, true);
                curl_setopt($ch2, CURLOPT_RETURNTRANSFER, true);
                curl_exec($ch2);
                curl_close($ch2);
                fclose($wh);
            }
        }
        curl_close($ch);
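
    One alternative sketch (assuming the remote server honours If-Modified-Since): let cURL apply the time condition itself, so the download request only transfers the image when it has been modified within the last two days.

        $cutoff = time() - 2 * 86400; // two days ago
        $ch = curl_init($file_source);
        $wh = fopen($file_target, 'wb');
        curl_setopt($ch, CURLOPT_FILE, $wh); // write the body straight to the target file
        curl_setopt($ch, CURLOPT_FOLLOWLOCATION, true);
        curl_setopt($ch, CURLOPT_TIMEOUT, 25);
        curl_setopt($ch, CURLOPT_TIMECONDITION, CURL_TIMECOND_IFMODSINCE);
        curl_setopt($ch, CURLOPT_TIMEVALUE, $cutoff); // only transfer if modified after $cutoff
        curl_exec($ch);
        $fetched = (curl_getinfo($ch, CURLINFO_HTTP_CODE) == 200); // 304 means the image is older than two days
        curl_close($ch);
        fclose($wh);
        if (!$fetched) {
            unlink($file_target); // nothing useful was written, remove the empty file
        }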

    Read the article

  • Download Remote File

    - by Abs
    Hello all, I have a function that is passed a link to a remote image. I thought I could just use the extension of the file in the URL to determine the type of image, but some URLs won't have an extension; they probably just push headers to the browser, so there is no extension to parse from the URL. How can I test whether the URL has an extension and, if not, read the headers to determine the file type? Am I overcomplicating things here? Is there an easier way to do this? I am using CodeIgniter; maybe there is something already built in for this? All I really want to do is download an image from a URL with the correct extension. This is what I have so far.

        function get_image($image_link) {
            $remoteFile = $image_link;
            $ext = ''; // some URLs might not have an extension
            $file = fopen($remoteFile, "r");
            if (!$file) {
                return false;
            } else {
                $line = '';
                while (!feof($file)) {
                    $line .= fgets($file, 4096);
                }
                $file_name = time().$ext;
                file_put_contents($file_name, $line);
            }
            fclose($file);
        }

    Thanks all for any help
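
    A sketch of one way to handle it (the function name and extension map are illustrative, not from the original code): fall back to the Content-Type response header when the URL itself has no usable extension.

        function get_image_ext($image_link) {
            // first try the URL itself
            $path = parse_url($image_link, PHP_URL_PATH);
            $ext = strtolower(pathinfo($path, PATHINFO_EXTENSION));
            if ($ext != '') {
                return '.'.$ext;
            }
            // otherwise ask the server what it is sending
            $map = array('image/jpeg' => '.jpg', 'image/png' => '.png', 'image/gif' => '.gif');
            $headers = get_headers($image_link, 1); // associative array of response headers
            $type = isset($headers['Content-Type']) ? $headers['Content-Type'] : '';
            if (is_array($type)) { // redirects can yield several Content-Type values
                $type = end($type);
            }
            return isset($map[$type]) ? $map[$type] : '';
        }

        // usage: $file_name = time() . get_image_ext($image_link);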

    Read the article

  • Nginx - Treats PHP as binary

    - by Think Floyd
    We are running Nginx+FastCGI as the backend for our Drupal site. Everything seems to work fine, except for this one URL: http:///sites/all/modules/tinymce/tinymce/jscripts/tiny_mce/plugins/smimage/index.php (we use the TinyMCE module in Drupal, and the URL above is invoked when a user tries to upload an image). When we were using Apache, everything was working fine. However, nginx treats the URL above as binary and tries to download it. (We've verified that the file pointed to by the URL is a valid PHP file.) Any idea what could be wrong here? I think it's something to do with the nginx configuration, but I'm not entirely sure what. Any help is greatly appreciated. Here's the relevant snippet from the nginx configuration file:

        root /var/www/;
        index index.php;
        if (!-e $request_filename) {
            rewrite ^/(.*)$ /index.php?q=$1 last;
        }
        error_page 404 index.php;
        location ~* \.(engine|inc|info|install|module|profile|po|sh|.*sql|theme|tpl(\.php)?|xtmpl)$|^(code-style\.pl|Entries.*|Repository|Root|Tag|Template)$ {
            deny all;
        }
        location ~* ^.+\.(jpg|jpeg|gif|png|ico)$ {
            access_log off;
            expires 7d;
        }
        location ~* ^.+\.(css|js)$ {
            access_log off;
            expires 7d;
        }
        location ~ .php$ {
            include /etc/nginx/fcgi.conf;
            fastcgi_pass 127.0.0.1:8888;
            fastcgi_index index.php;
            fastcgi_param SCRIPT_FILENAME $document_root$fastcgi_script_name;
            fastcgi_param QUERY_STRING $query_string;
            fastcgi_param REQUEST_METHOD $request_method;
            fastcgi_param CONTENT_TYPE $content_type;
            fastcgi_param CONTENT_LENGTH $content_length;
        }
        location ~ /\.ht {
            deny all;
        }

    Read the article

  • Speed up csv export when using php from mysql database query

    - by John
    Ok, so I've got a web system (built on CodeIgniter and running on MySQL) that allows people to query a database of postal address data by making selections in a series of forms until they arrive at the selection they want, pretty standard stuff. They can then buy that information and download it via the system. The queries run very fast, but when it comes to applying a query to the database and exporting the result to CSV, once the datasets get to around the 30,000 record mark (each row has around 40 columns, of which about 20 are all populated with on average 20 characters of data per cell) it can take 5 or so minutes to export to CSV. So, my question is: what is the main cause of the slowness? Is the resultset from the query so large that it is running into memory issues, and should I therefore allow much more memory for the process? Or is there a much more efficient way of exporting to CSV from a MySQL query that I'm not using? Should I save the contents of the query to a temp table and simply export the temp table to CSV? Or am I going about this all wrong? Also, is the fact that I'm using CodeIgniter's Active Record for this prohibitive, due to the way that it stores the resultset? Any advice is welcome! Thank you for reading!
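
    For comparison, a minimal sketch of a lower-memory export (the table and column names are made up): stream rows straight to the output with an unbuffered query and fputcsv(), instead of building the whole resultset and CSV string in memory first.

        // assumes an already-open mysql connection; 'address_data' is a placeholder table name
        $result = mysql_unbuffered_query("SELECT * FROM address_data WHERE region_id = 5");
        header('Content-Type: text/csv');
        header('Content-Disposition: attachment; filename="export.csv"');
        $out = fopen('php://output', 'w'); // write directly to the response, no temp file or string
        while ($row = mysql_fetch_assoc($result)) {
            fputcsv($out, $row); // one row at a time, so memory use stays flat
        }
        fclose($out);
        exit;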

    Read the article

  • Download Canvas Image Png in Chrome/Safari

    - by user2639176
    Works in Firefox, but won't work in Safari or Chrome.

        function loadimage() {
            var canvas = document.getElementById("canvas");
            if (window.XMLHttpRequest) { // code for IE7+, Firefox, Chrome, Opera, Safari
                xmlhttp = new XMLHttpRequest();
                xmlhttp2 = new XMLHttpRequest();
            } else { // code for IE6, IE5
                xmlhttp = new ActiveXObject("Microsoft.XMLHTTP");
                xmlhttp2 = new ActiveXObject("Microsoft.XMLHTTP");
            }
            xmlhttp.onreadystatechange = function() {
                if (xmlhttp.readyState == 4 && xmlhttp.status == 200) {
                    rasterizeHTML.drawHTML(xmlhttp.responseText, canvas);
                    var t = setTimeout(function(){ copy() }, 3000)
                }
            }
            xmlhttp.open("GET", "/sm/<?=$sm[0];?>", true);
            xmlhttp.send();
        }

        function copy() {
            var canvas = document.getElementById("canvas");
            var img = canvas.toDataURL("image/png");
            document.getElementById('dl').href = img;
            document.getElementById('dl').innerHTML = "Download";
        }

    Now I didn't write this, so I don't know too much JavaScript, but the script works in Firefox. In Chrome I am getting: Uncaught Security Error: An attempt was made to break through the security policy of the user-agent. For toDataURL("image/png")

    Read the article

  • Serving files (800MB) results in an empty file

    - by azz0r
    Hello, with the following code, small files are served fine; however, large files (say, 800MB and above) result in empty files! Would I need to do something with Apache to solve this?

        <?php
        class Model_Download {

            function __construct($path, $file_name) {
                $this->full_path = $path.$file_name;
            }

            public function execute() {
                if ($fd = fopen($this->full_path, "r")) {
                    $fsize = filesize($this->full_path);
                    $path_parts = pathinfo($this->full_path);
                    $ext = strtolower($path_parts["extension"]);
                    switch ($ext) {
                        case "pdf":
                            header("Content-type: application/pdf"); // add here more headers for diff. extensions
                            header("Content-Disposition: attachment; filename=\"".$path_parts["basename"]."\""); // use 'attachment' to force a download
                            break;
                        default;
                            header("Content-type: application/octet-stream");
                            header("Content-Disposition: filename=\"".$path_parts["basename"]."\"");
                            break;
                    }
                    header("Content-length: $fsize");
                    header("Cache-control: private"); // use this to open files directly
                    while (!feof($fd)) {
                        $buffer = fread($fd, 2048);
                        echo $buffer;
                    }
                }
                fclose($fd);
                exit;
            }
        }
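
    A sketch of the kind of changes that often help with very large downloads (the cause is an assumption here — the post doesn't confirm whether the execution time limit or output buffering is to blame):

        function send_file($full_path) {
            set_time_limit(0); // an 800MB transfer easily exceeds the default max_execution_time
            while (ob_get_level()) {
                ob_end_clean(); // drop output buffering so the file is not accumulated in memory
            }
            header('Content-Type: application/octet-stream');
            header('Content-Disposition: attachment; filename="'.basename($full_path).'"');
            header('Content-Length: '.filesize($full_path));
            $fd = fopen($full_path, 'rb');
            while (!feof($fd)) {
                echo fread($fd, 8192);
                flush(); // push each chunk to the client as it is read
            }
            fclose($fd);
            exit;
        }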

    Read the article

  • USB Drive Not recognized

    - by user36582
    My friend's pen drive, which was working very well just a few days ago, is not being recognized after being used on a virus-infected machine. It does not show up in fdisk -l or lsusb. However, in dmesg I can see the following:

        [ 977.300013] usb 5-2: new full speed USB device using uhci_hcd and address 2
        [ 977.420014] usb 5-2: device descriptor read/64, error -71
        [ 977.644023] usb 5-2: device descriptor read/64, error -71
        [ 977.860013] usb 5-2: new full speed USB device using uhci_hcd and address 3
        [ 977.980013] usb 5-2: device descriptor read/64, error -71
        [ 978.204013] usb 5-2: device descriptor read/64, error -71
        [ 978.420013] usb 5-2: new full speed USB device using uhci_hcd and address 4
        [ 978.828015] usb 5-2: device not accepting address 4, error -71
        [ 978.940015] usb 5-2: new full speed USB device using uhci_hcd and address 5
        [ 979.348013] usb 5-2: device not accepting address 5, error -71
        [ 979.348292] hub 5-0:1.0: unable to enumerate USB device on port 2
        [ 1017.848015] usb 5-2: new full speed USB device using uhci_hcd and address 6
        [ 1017.968012] usb 5-2: device descriptor read/64, error -71
        [ 1018.192017] usb 5-2: device descriptor read/64, error -71
        [ 1018.408014] usb 5-2: new full speed USB device using uhci_hcd and address 7
        [ 1018.528012] usb 5-2: device descriptor read/64, error -71
        [ 1018.752023] usb 5-2: device descriptor read/64, error -71
        [ 1018.968012] usb 5-2: new full speed USB device using uhci_hcd and address 8
        [ 1019.376019] usb 5-2: device not accepting address 8, error -71
        [ 1019.488011] usb 5-2: new full speed USB device using uhci_hcd and address 9
        [ 1019.896016] usb 5-2: device not accepting address 9, error -71
        [ 1019.896308] hub 5-0:1.0: unable to enumerate USB device on port 2
        [ 1049.984016] usb 5-1: new full speed USB device using uhci_hcd and address 10
        [ 1050.104014] usb 5-1: device descriptor read/64, error -71
        [ 1050.328014] usb 5-1: device descriptor read/64, error -71
        [ 1050.544014] usb 5-1: new full speed USB device using uhci_hcd and address 11
        [ 1050.664018] usb 5-1: device descriptor read/64, error -71
        [ 1050.888019] usb 5-1: device descriptor read/64, error -71
        [ 1051.104025] usb 5-1: new full speed USB device using uhci_hcd and address 12
        [ 1051.512014] usb 5-1: device not accepting address 12, error -71
        [ 1051.624101] usb 5-1: new full speed USB device using uhci_hcd and address 13
        [ 1052.032014] usb 5-1: device not accepting address 13, error -71
        [ 1052.032991] hub 5-0:1.0: unable to enumerate USB device on port 1

    What do these errors actually mean, and how can I get this pen drive back to work?

    Read the article

  • SqlBulkCopy is slow, doesn't utilize full network speed

    - by Alex
    Hi, for the past couple of weeks I have been creating a generic script that is able to copy databases. The goal is to be able to specify any database on some server and copy it to some other location, and it should only copy the specified content. The exact content to be copied over is specified in a configuration file. This script is going to be used on some 10 different databases and run weekly. In the end we are copying only about 3%-20% of databases which are as large as 500GB. I have been using the SMO assemblies to achieve this. This is my first time working with SMO and it took a while to create a generic way to copy the schema objects, filegroups ...etc. (it actually helped find some bad stored procs). Overall I have a working script which is lacking in performance (and at times times out), and I was hoping you guys would be able to help. When executing the WriteToServer command to copy a large amount of data (6GB) it reaches my timeout period of 1hr. Here is the core code for copying table data. The script is written in PowerShell.

        $query = ("SELECT * FROM $selectedTable " + $global:selectiveTables.Get_Item($selectedTable)).Trim()
        Write-LogOutput "Copying $selectedTable : '$query'"
        $cmd = New-Object Data.SqlClient.SqlCommand -argumentList $query, $source
        $cmd.CommandTimeout = 120;
        $bulkData = ([Data.SqlClient.SqlBulkCopy]$destination)
        $bulkData.DestinationTableName = $selectedTable;
        $bulkData.BulkCopyTimeout = $global:tableCopyDataTimeout # = 3600
        $reader = $cmd.ExecuteReader();
        $bulkData.WriteToServer($reader); # Takes forever here on large tables

    The source and target databases are located on different servers, so I kept track of the network speed as well. The network utilization never went over 1%, which was quite surprising to me. But when I just transfer some large files between the servers, the network utilization spikes up to 10%. I have tried setting $bulkData.BatchSize to 5000 but nothing really changed. Increasing the BulkCopyTimeout to an even greater amount would only solve the timeout. I really would like to know why the network is not being used fully. Has anyone else had this problem? Any suggestions on networking or bulk copy will be appreciated. And please let me know if you need more information. Thanks.

    UPDATE: I have tweaked several options that increase the performance of SqlBulkCopy, such as setting the transaction logging to simple and providing a table lock to SqlBulkCopy instead of the default row lock. Also some tables are better optimized for certain batch sizes. Overall, the duration of the copy was decreased by some 15%. What we will do is execute the copy of each database simultaneously on different servers. But I am still having a timeout issue when copying one of the databases. When copying one of the larger databases, there is a table for which I consistently get the following exception: System.Data.SqlClient.SqlException: Timeout expired. The timeout period elapsed prior to completion of the operation or the server is not responding. It is thrown about 16 after it starts copying the table, which is nowhere near my BulkCopyTimeout. Even though I get the exception, that table is fully copied in the end. Also, if I truncate that table and restart my process for that table only, the table is copied over without any issues. But going through the process of copying that entire database always fails for that one table. I have tried executing the entire process and resetting the connection before copying that faulty table, but it still errored out.
    My SqlBulkCopy and Reader are closed after each table. Any suggestions as to what else could be causing the script to fail at that point each time?

    Read the article

  • how to speed up the code??

    - by kaushik
    i have very huge code about 600 lines plus. cant post the whole thing here. but a particular code snippet is taking so much time,leading to problems. here i post that part of code please tell me what to do speed up the processing.. please suggest the part which may be the reason and measure to improve them if this small part of code is understandable. using_data={} def join_cost(a , b): global using_data #print a #print b save_a=[] save_b=[] print 1 #for i in range(len(m)): #if str(m[i][0])==str(a): save_a=database_index[a] #for i in range(len(m)): # if str(m[i][0])==str(b): #print 'save_a',save_a #print 'save_b',save_b print 2 save_b=database_index[b] using_data[save_a[0]]=save_a s=str(save_a[1]).replace('phone','text') s=str(s)+'.pm' p=os.path.join("c:/begpython/wavnk/",s) x=open(p , 'r') print 3 for i in range(6): x.readline() k2='a' j=0 o=[] while k2 is not '': k2=x.readline() k2=k2.rstrip('\n') oj=k2.split(' ') o=o+[oj] #print o[j] j=j+1 #print j #print o[2][0] temp=long(1232332) end_time=save_a[4] #print end_time k=(j-1) for i in range(k): diff=float(o[i][0])-float(end_time) if diff<0: diff=diff*(-1) if temp>diff: temp=diff pm_row=i #print pm_row #print temp #print o[pm_row] #pm_row=3 q=[] print 4 l=str(p).replace('.pm','.mcep') z=open(l ,'r') for i in range(pm_row): z.readline() k3=z.readline() k3=k3.rstrip('\n') q=k3.split(' ') #print q print 5 s=str(save_b[1]).replace('phone','text') s=str(s)+'.pm' p=os.path.join("c:/begpython/wavnk/",s) x=open(p , 'r') for i in range(6): x.readline() k2='a' j=0 o=[] while k2 is not '': k2=x.readline() k2=k2.rstrip('\n') oj=k2.split(' ') o=o+[oj] #print o[j] j=j+1 #print j #print o[2][0] temp=long(1232332) strt_time=save_b[3] #print strt_time k=(j-1) for i in range(k): diff=float(o[i][0])-float(strt_time) if diff<0: diff=diff*(-1) if temp>diff: temp=diff pm_row=i #print pm_row #print temp #print o[pm_row] #pm_row=3 w=[] l=str(p).replace('.pm','.mcep') z=open(l ,'r') for i in range(pm_row): z.readline() k3=z.readline() k3=k3.rstrip('\n') w=k3.split(' ') #print w cost=0 for i in range(12): #print q[i] #print w[i] h=float(q[i])-float(w[i]) cost=cost+math.pow(h,2) j_cost=math.sqrt(cost) #print cost return j_cost def target_cost(a , b): a=(b+1)*3 b=(a+1)*2 t_cost=(a+b)*5/2 return t_cost r1='shht:ra_77' r2='grx_18' g=[] nodes=[] nodes=nodes+[[r1]] for i in range(len(y_in_db_format)): g=y_in_db_format[i] #print g #print g[0] g.remove(str(g[0])) nodes=nodes+[g] nodes=nodes+[[r2]] print nodes print "lenght of nodes",len(nodes) lists=[] #lists=lists+[r1] for i in range(len(nodes)): for j in range(len(nodes[i])): lists=lists+[nodes[i][j]] #lists=lists+[r2] print lists distance={} for i in range(len(lists)): if i==0: distance[str(lists[i])]=0 else: distance[str(lists[i])]=long(123231223) #print distance group_dist=[] infinity=long(123232323) for i in range(len(nodes)): distances=[] for j in range(len(nodes[i])): #distances=[] if i==0: distances=distances+[[nodes[i][j], 0]] else: distances=distances+[[nodes[i][j],infinity]] group_dist=group_dist+[distances] #print distances print "group_distances",group_dist #print "check",group_dist[0][0][1] #costs={} #for i in range(len(lists)): #if i==0: # costs[str(lists[i])]=1 #else: # costs[str(lists[i])]=get_selfcost(lists[i]) path=[] for i in range(len(nodes)): mini=[] if i!=(len(nodes)-1): #temp=long(123234324) #Now calculate the cost between the current node and each of its neighbour for k in range(len(nodes[(i+1)])): for j in range(len(nodes[i])): current=nodes[i][j] #print "current_node",current 
j_distance=join_cost( current , nodes[i+1][k]) #t_distance=target_cost( current , nodes[i+1][k]) t_distance=34 #print distance #print "distance between current and neighbours",distance total_distance=(.5*(float(group_dist[i][j][1])+float(j_distance))+.5*(float(t_distance))) #print "total distance between the intial_nodes and current neighbour",total_distance if int(group_dist[i+1][k][1]) > int(total_distance): group_dist[i+1][k][1]=total_distance #print "updated distance",group_dist[i+1][k][1] a=current #print "the neighbour",nodes[i+1][k],"updated the value",a mini=mini+[[str(nodes[i+1][k]),a]] print mini

    Read the article

  • select value of td and download content of selected tds

    - by user1272145
    I have this table:

        <table class="results" id="summary_results">
            <tr>
                <td>select all</td>
                <td>name</td>
                <td>id</td>
                <td>address</td>
                <td>url</td>
            </tr>
            <tr>
                <td><input type="checkbox"></td>
                <td>john doe</td>
                <td>1</td>
                <td>33.85 some address</td>
                <td>http://www.domain.com</td>
            </tr>
            <tr>
                <td><input type="checkbox"></td>
                <td>jane doe</td>
                <td>2</td>
                <td>34.85 some address</td>
                <td>http://www.domain2.com</td>
            </tr>
            <tr>
                <td><input type="checkbox"></td>
                <td>sam</td>
                <td>3</td>
                <td>33.86 some address</td>
                <td>http://www.domain3.com</td>
            </tr>
        </table>

    I would like to select all the rows and then download the content of the URLs, knowing that each URL is linked to the id. For example, the first URL would be www.domain.com?id=1&report=report

    Read the article

  • Is it possible to download a large database using mysql query

    - by Rose
    I am downloading files from the server using WinSCP. Is it possible to write a query to download a large database using a MySQL query, or using any other method? I have tried the code below, but I am not able to get the whole database structure.

        <?php
        if (file_exists('backup_sql/my_backup.zip')) {
            unlink('backup_sql/my_backup.zip');
        }
        $tables = '*';
        $host = 'MY HOST NAME';
        $user = 'MY_USERNAME';
        $pass = 'MYPASSWORD';
        $name = 'MY_DB_NAME';
        $link = mysql_connect($host, $user, $pass);
        mysql_select_db($name, $link);
        // get all of the tables
        if ($tables == '*') {
            $tables = array();
            $result = mysql_query('SHOW TABLES');
            while ($row = mysql_fetch_row($result)) {
                $tables[] = $row[0];
            }
        } else {
            $tables = is_array($tables) ? $tables : explode(',', $tables);
        }
        $return = '';
        // cycle through
        foreach ($tables as $table) {
            $result = mysql_query('SELECT * FROM '.$table);
            $num_fields = mysql_num_fields($result);
            //$return .= 'DROP TABLE '.$table.';';
            $row2 = mysql_fetch_row(mysql_query('SHOW CREATE TABLE '.$table));
            $return .= "\n\n".$row2[1].";\n\n";
            for ($i = 0; $i < $num_fields; $i++) {
                while ($row = mysql_fetch_row($result)) {
                    $return .= 'INSERT INTO '.$table.' VALUES(';
                    for ($j = 0; $j < $num_fields; $j++) {
                        $row[$j] = addslashes($row[$j]);
                        //$row[$j] = ereg_replace("\n","\\n",$row[$j]);
                        if (isset($row[$j])) {
                            $return .= '"'.$row[$j].'"';
                        } else {
                            $return .= '""';
                        }
                        if ($j < ($num_fields - 1)) {
                            $return .= ',';
                        }
                    }
                    $return .= ");\n";
                }
            }
            $return .= "\n\n\n";
        }
        $rand_var = time();
        $files_to_zip = array(
            "'backup_sql/db-backup-'.$rand_var.'.sql'",
        );
        $name = 'db-backup-'.$rand_var.'.sql';
        $data = $return;
        ?>

    Anyone, please help me... thank you
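
    If shell access is available on the host (an assumption — the question doesn't say), a sketch of the more usual approach is to let mysqldump produce the complete dump, structure included, and then download that file:

        // reuses the $host, $user, $pass and $name credentials from the script above
        $rand_var = time();
        $dump_file = 'backup_sql/db-backup-'.$rand_var.'.sql.gz';
        $cmd = sprintf(
            'mysqldump --host=%s --user=%s --password=%s %s | gzip > %s',
            escapeshellarg($host),
            escapeshellarg($user),
            escapeshellarg($pass),
            escapeshellarg($name),
            escapeshellarg($dump_file)
        );
        shell_exec($cmd); // writes structure and data, including the CREATE TABLE statements
        // the resulting .sql.gz can then be fetched with WinSCP or served as a download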

    Read the article

  • ACPI Throttling in Ubuntu

    - by Evan
    I'm looking to throttle my CPU through ACPI. I've read up on it, but I keep receiving permission denied messages. I have 8 available throttling states. Here are the outcomes of my attempts:

        evan@evan-laptop:/proc/acpi/processor/CPU0$ echo 3 > /proc/acpi/processor/CPU0/throttling
        bash: /proc/acpi/processor/CPU0/throttling: Permission denied
        evan@evan-laptop:/proc/acpi/processor/CPU0$ sudo echo 3 > /proc/acpi/processor/CPU0/throttling
        bash: /proc/acpi/processor/CPU0/throttling: Permission denied

    EDIT: For reference, I am running Ubuntu Karmic on an Intel Core Duo T2500 with ACPI enabled

    Read the article

  • Which Linux is the most efficient?

    - by quandary
    Simple question: there are a gazillion Linux distributions out there. Which one (distribution, incl. window manager) makes, technically, the most efficient use of my (aging) computer? I have approx. 1 GB RAM, a 1.6 GHz processor, and a 120 GB hard drive. I develop applications (C++/.NET/Mono/ASP/PostgreSQL). Usually, I prefer distros with apt-get. Does anybody know which one takes the most care of my limited RAM, and which one is the fastest/slimmest of them all, that has a decent repo and is damn fast?

    Read the article

  • Bluray Drives: 2x vs 4x vs 6x vs 8x read/write speed

    - by Wesley
    Hi all, I couldn't find a duplicate question, but I was wondering what the differences are between the different read/write speeds for Bluray drives. I'm planning on buying one for a build but don't know whether I can cheap out and get a 2x Bluray drive, or whether I should spend more money on a quality 8x drive. Will I just experience more lag/buffering time for Bluray discs on a 2x drive and none on a 6x or 8x? Thanks in advance.

    Read the article

  • Our wi-fi at work is ridiculously slow, will adding more range extenders improve it?

    - by john
    At work, we have two wireless networks (e.g., Work1 and Work2); Work2 is used downstairs and Work1 upstairs. However, both are notoriously slow. The connection is better when we are wired in, but unfortunately, because our building is very old and our company is growing very fast, most employees are not seated near the walls where the ethernet cables are. I had Cox, our ISP, run a bandwidth utilization test, and it doesn't seem like we are capping out on upstream/downstream, which leads me to believe that it's strictly an issue with the wireless networks (which were set up before I got there). The wireless networks are both Apple AirPort Extremes. Is there anything I can do to improve the situation for everyone? Speeds are extremely slow, and the connection sometimes drops out.

    Read the article

  • Do different operating systems have different read and write speeds?

    - by Ivan
    If I have two different operating systems, such as Windows 8 and Ubuntu, running on the same hardware, will the two operating systems have different read and write speeds? My guess is that there would be minimal difference in read and write speeds to the hard disk between operating systems, since the major limiting factor is seeking; however, different operating systems may use different file systems in an attempt to reduce seek time on the hard disk. Likewise, I'm sure that modern operating systems don't actually write directly to the hard disk, and instead just keep the data in memory, marked with a dirty bit. Are there any studies that show differences in read and write speeds between OSes? Or does the file system used by the OS matter more than the OS itself?

    Read the article

  • Why does my Internet slow to a crawl unless I reboot my router every few days?

    - by Lord Torgamus
    A few weeks ago, I noticed that my Internet connection had slowed down to a crawl. I waited a few days hoping it would go away on its own, but it didn't get better. So I asked this question about how to make it faster. The problem went away after I updated to the latest firmware, so I didn't follow up too carefully. But every few days since then, my Internet has slowed down again. Unlike before, all I have to do to fix it is open the router administration page and press the "Reboot" button. Nothing else seems to work, though I'm sure there are options I haven't tried. If it makes a difference, my girlfriend and I both transfer large amounts of data fairly routinely for school (videoconferencing, downloading entire recorded lectures). The router is a Cisco/Linksys 160N V3 that's about a year old. Most of the time, it deals with just two standard Windows 7 laptops. The only thing I came across while searching for answers/dupes was this question, which seems similar superficially, but probably doesn't have the same root issue. Anyways, it's not resolved. What could be causing these slowdowns, and how can I get rid of them?

    Read the article

  • Any reference for CPU world statistics?

    - by Áxel Costas Pena
    I am looking for any reference about computer power statistics across the world. My main interest is in real computing capabilities, so I'd prefer information about actual processor power, and it would be even better if it also included other critical hardware statistics, like RAM; if that isn't possible, statistics about brand/model distribution would also be useful. I've Googled for some minutes and have found nothing related.

    Read the article

  • How to set expiration date for external files? [closed]

    - by garconcn
    I have a site that includes lots of external files, most of them in GIF format. I have no control over the external files, but I have to use them (with permission). When I check the site using Google PageSpeed, I get a very low score (31) even though the page loads fast. One of the high-priority suggestions is to leverage browser caching by setting an expiration date. However, all the files are external links. I have already set the expiration date for local files.
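
    One possible workaround (a sketch only; the file and parameter names are made up, and whether mirroring is acceptable depends on the permission you have): serve each external image through a small local caching script, so the copy you control can carry the Expires header PageSpeed is asking for.

        // e.g. img.php?src=http%3A%2F%2Fexample.com%2Fbanner.gif (hypothetical)
        $src = $_GET['src']; // in practice, restrict this to a whitelist of allowed hosts
        $cache = 'cache/'.md5($src).'.gif';
        if (!file_exists($cache) || filemtime($cache) < time() - 86400) {
            file_put_contents($cache, file_get_contents($src)); // refresh the local copy once a day
        }
        header('Content-Type: image/gif');
        header('Expires: '.gmdate('D, d M Y H:i:s', time() + 7 * 86400).' GMT'); // one-week expiry
        header('Cache-Control: public, max-age=604800');
        readfile($cache);
        exit;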

    Read the article

  • Compression without Mod_Deflate

    - by pws5068
    Greetings all, After running tests with Google PageSpeed, I believe my site could really benefit from compressing js/html/css/php files. Unfortunately, my host (Host Gator) does not support Mod_Gzip or Mod_Deflate. I was able to enable php compression through the ini file. Is there another way to serve compressed files to browsers that support them, in a manner similar to Mod_Deflate?
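
    If only PHP-level compression is available (as the post suggests), a minimal sketch is to gzip the output of PHP-served pages with ob_gzhandler, assuming zlib.output_compression isn't already turned on in the ini:

        <?php
        // at the very top of the page, before any output is sent
        if (!ini_get('zlib.output_compression')) {
            ob_start('ob_gzhandler'); // compresses the response only for browsers that advertise gzip support
        }
        ?>

    CSS and JS files could be routed through a similar small PHP wrapper that sets the right Content-Type, though that is a workaround rather than a true replacement for mod_deflate.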

    Read the article

  • Is it possible to configure a CDN so that it will step out of the way for a subset of regional IPs?

    - by rwired
    We have a website which targets customers in China, both expat and local Chinese. We have an ICP license which allows us to host in a datacenter inside China. Internet in China is actually as fast as anywhere else (faster than most places actually), so long as the content is served-up within the boundaries of the Great-Firewall. Anything that crosses the wall is horribly slow. The problem is that most expats have some sort of VPN installed so that they can access all the blocked stuff. What this means is that when they access our site, the traffic first has to go out of China through the firewall to their VPN, and then back in. The performance is terrible, worse than if we were just hosting outside of China directly (which we used to do before the ICP was issued). So I want to use a global CDN to mirror the site automatically, but I only want to deliver the content via the CDN if the user's request IP address is outside of China. Inside China I would like the content to be served by our own server. I also want to be careful with the domain names. We currently use www.xxx.com and www.xxx.cn for language selection purposes, as these perform well in SEO on Google (which the expats use), and Baidu (which the locals use). If possible I would like to avoid having one domain on the outside, and the other on the inside since not all expats use a VPN, and some Chinese speakers also use VPNs. Also some of our legitimate customers in both languages are from outside of China. I also don't want to resort to using something like www2.xxx.com/cn for the outside connection if at all possible, since I have worries about duplicate content and canonical URLs ruining our SEO (unless you know of a quick fix for that). CDNs I'm considering are: Google PageSpeed, CloudFlare, Amazon CloudFront. None of which have datacenters inside China. I have complete control of the .com DNS zone records, but the .cn zones are under the control of the domain issuing body in China. I'm not sure at this time if they would allow even a CNAME to point to an IP outside of China (although I don't see why not). They no longer allow outside registrars like they used to.
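
    As a rough illustration of the "step out of the way for Chinese IPs" idea (a sketch only, assuming the PECL geoip extension is available; cdn.xxx.com is a hypothetical CDN hostname):

        $country = geoip_country_code_by_name($_SERVER['REMOTE_ADDR']);
        if ($country === 'CN') {
            $asset_base = 'http://www.xxx.com'; // serve assets from the in-China origin
        } else {
            $asset_base = 'http://cdn.xxx.com'; // everyone else goes through the CDN
        }
        // e.g. <img src="<?php echo $asset_base; ?>/images/logo.png">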

    Read the article
