Search Results

Search found 11646 results on 466 pages for 'progressive download'.


  • Host timeout during file upload/download over SFTP/SSH

    - by kritop
    I tried different clients because I thought it was client-related, but all of them sooner or later disconnect or stop uploading/downloading files, ending in a timeout disconnect. After a reconnect it works again for a while. Really strange; I cannot figure out the reason. I'm on a Mac and the server is a Debian VPS. If you need further information, please ask! I appreciate any tips, because I'm kind of stuck.
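
    These symptoms are typically an idle NAT router or firewall dropping the TCP connection, not a client bug. A minimal client-side sketch, assuming an OpenSSH-based client that reads ~/.ssh/config:

        # ~/.ssh/config -- send a keepalive every 15 s, give up after 4 misses
        Host *
            ServerAliveInterval 15
            ServerAliveCountMax 4

    The server-side equivalent on the Debian VPS would be ClientAliveInterval and ClientAliveCountMax in /etc/ssh/sshd_config.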


  • My PHP homepage's index.php gets downloaded instead of processed on Gandi.net

    - by alekone
    If I go to the homepage of my website http://www.website.com (on a brand new server), the index.php gets downloaded instead of processed. I don't have the same problem in other folders. My .htaccess reads: AddHandler php5-script .php What could this be? I suspect it's something with the PHP config or the .htaccess, but I'm not able to figure it out. Help please! Edit: I don't know if this helps: it's a WordPress installation, and I have this problem only on the public part of the website, not on the admin (which renders correctly).
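
    A hedged thing to try: the handler line alone doesn't guarantee the homepage resolves to index.php, so make the directory index explicit. The handler name below is copied from the question; whether it matches what Gandi's Apache build registers is an assumption worth checking against their docs:

        # .htaccess
        AddHandler php5-script .php
        DirectoryIndex index.php index.html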


  • Change location of RSS Dynamic Desktops

    - by Andy
    I'm currently using CCleaner to take care of my computer, but I also have a dynamic desktop background provided by Bing (I'm running Windows 7 HP), and unfortunately the two conflict. Whenever I 'clean' my computer using CCleaner it messes up my desktop backgrounds, as they are stored in the Temporary Internet Files directory, and for some reason I can't navigate as far as the 'Enclosures' subdirectory to tell CCleaner to exclude it (I can see it in Windows Explorer but not in CCleaner's directory browser). Therefore, I am looking for an alternative solution and wondered if I could change the directory to which the RSS feed's images are downloaded. If anybody knows how to do this, I would be grateful if you could share; equally, I would be grateful if anyone knows another way around CCleaner. Please note that I don't want to stop cleaning the whole of my temporary internet files; I just don't want the downloaded wallpapers to be deleted... Thanks in advance!
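
    One possible workaround, hedged because I can't verify the syntax against every CCleaner release: exclusions can reportedly also be written by hand into a ccleaner.ini beside the executable (portable-style settings), bypassing the directory browser. A sketch with a hypothetical path; substitute the real Enclosures path copied from Windows Explorer's address bar:

        [Options]
        Exclude1=PATH|C:\Users\Andy\AppData\Local\Microsoft\Windows\Temporary Internet Files\Enclosures\|*.*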


  • Can access website but images, CSS stylesheets and JavaScript files do not download

    - by Triztian
    I have this problem and am not sure about its source. Basically the title describes the issue: I can access the webpage and see the HTML structure, but no resources are being downloaded, nor can I access them directly from the browser; that means no JavaScript, no CSS styles and no images. Any solutions? I'm using Tomcat, by the way. EDIT 1: If I access the Tomcat manager from within the server, it also blocks the images. I'm running on Windows Server 2008 R2.
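
    A frequent cause of exactly this pattern is a servlet or filter in the application's WEB-INF/web.xml mapped to /*, which intercepts every request, static files included. A sketch of what to look for (the servlet name is hypothetical):

        <!-- a mapping like this swallows css/js/images -->
        <servlet-mapping>
            <servlet-name>frontController</servlet-name>
            <url-pattern>/*</url-pattern>
        </servlet-mapping>

        <!-- leaving "/" to Tomcat's default servlet lets static files through -->
        <servlet-mapping>
            <servlet-name>default</servlet-name>
            <url-pattern>/</url-pattern>
        </servlet-mapping>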


  • Downloading a large site with wget

    - by Evan Gill
    Hi, I'm trying to mirror a very large site, but wget never seems to finish properly. I am using the command: wget -r -l inf -nc -w 0.5 {the-site} I have downloaded a good portion of the site, but not the whole thing. The content does not change fast enough to bother using time-stamping. After running overnight, this message appears: File `{filename}.html' already there; not retrieving. (repeated for many files) Killed Does anyone know what is happening and how I can fix it?
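
    A bare "Killed" usually means the kernel's out-of-memory killer ended the process rather than wget finishing or crashing on its own; recursive wget keeps the full list of visited URLs in memory, which can grow very large on a huge site. A sketch for confirming and resuming (the URL is a placeholder):

        # did the OOM killer strike? look for wget in the output
        dmesg | grep -i -e oom -e killed

        # resume the mirror, logging to a file instead of the terminal
        wget -r -l inf -nc -w 0.5 -o mirror.log http://example.com/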


  • Mac: Script application downloaded from the Internet

    - by Svish
    I downloaded a PHP framework and have started making a website with it. Sometimes I need to look at the source of that framework, and every time I open a file I haven't opened before I get this message: “somefile.php” is a script application which was downloaded from the Internet. Are you sure you want to open it? That is all well and good, I suppose, but I am getting tired of it. Is there a way I can fix all the files in my web directory so that the OS somehow forgets they came from the Internet, or something like that?
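
    That dialog is driven by the com.apple.quarantine extended attribute that downloaded files carry. A sketch for stripping it from the whole directory (the path is illustrative; the -r flag needs a reasonably recent xattr, otherwise fall back to find):

        # recursive removal on recent OS X
        xattr -dr com.apple.quarantine ~/Sites/mysite

        # older releases without -r
        find ~/Sites/mysite -exec xattr -d com.apple.quarantine {} +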


  • PHP file gets downloaded instead of executed when browsed in any browser

    - by baltusaj
    I have a phpinfo.php file which I am trying to run by browsing to it, but the browser downloads the file instead of executing it. phpinfo.php: <?php phpinfo(); ?> Following this post, I added the following lines to my /etc/apache2/httpd.conf and restarted Apache, but in vain; phpinfo.php still gets downloaded. AddType application/x-httpd-php .php .phtml AddType application/x-httpd-php-source .phps Have I added these lines to the correct file? On an openSUSE forum the following was mentioned; I followed this too, but still had no success, and the same problem persists: "In case the browser wants to save your php files instead of displaying the content, you should enable php support in the /etc/apache2/mod_userdir.conf file. Add the following line to it, just after the line, and restart the server. Include /etc/apache2/conf.d/php5.conf"
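
    On openSUSE, edits to httpd.conf are often trumped by the sysconfig mechanism: unless php5 appears in APACHE_MODULES, the conf.d/php5.conf snippet never loads at all. A sketch, assuming the stock Apache2/PHP5 packages:

        # is mod_php registered?
        grep APACHE_MODULES /etc/sysconfig/apache2

        # if "php5" is missing from the list
        a2enmod php5
        rcapache2 restart    # or: systemctl restart apache2 on newer releases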


  • Something is downloading in the background on Linux

    - by wisdom
    My laptop boots normally into Linux (Linux Mint), and as soon as I open gnome-system-monitor the shock is that something is downloading (the problem appeared two days ago) even though I haven't run anything yet! I ran iftop in a terminal but saw nothing strange (screenshot provided), and I also tried nethogs, which showed nothing at all. More complicated still, when I reboot the same problem is there, so I can't browse the Internet, and no one connected to my network can browse any more; it's absorbing the bandwidth horribly! After many reboots it goes back to a normal state (no background downloading). But I really can't figure out what is going on behind the scenes. Any suggestion or help would be appreciated... thanks!
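
    One way to pin down the culprit when iftop and nethogs come up empty is to list every established connection together with its owning process. A sketch (which tool is installed depends on the Mint release):

        # established TCP connections with PID/program name
        sudo netstat -tpn | grep ESTABLISHED

        # or, with the newer ss, then inspect the process
        sudo ss -tpn state established
        ps -fp <PID>    # <PID> taken from the output above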


  • How to download a full website as PDF?

    - by MartyIX
    I'm trying to make an offline version of a web site and I'm looking for a tool that would do the task automatically for the whole site (circa 1000 pages of HTML + images). Is there anything like that, and free? I know it is quite a challenge for a program, but maybe I'll be lucky :). EDIT: It should be a program for Windows.
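
    One free combination that may fit, both with Windows builds (the URL is a placeholder): mirror the site with wget, then render pages to PDF with wkhtmltopdf, which converts one URL or local HTML file per invocation; batch-converting ~1000 pages would need a small loop around the second command:

        wget -r -l inf -p -k http://example.com/
        wkhtmltopdf example.com/index.html index.pdf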


  • Good Firefox extension to download .flv flash video?

    - by Tamás Szelei
    I'm looking for a decent flash video downloader for Firefox. There are tons of them. I'd like to use it on a custom site with some random flash video player, not YouTube. How can that be accomplished? Edit: None of the methods you guys suggested has worked so far; I looked in the various places where the temporary file might end up, and it just isn't there. It is indeed a flash player, but it seems like it's not caching; is that possible?
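
    If no temporary file ever appears, the player may be pulling the video over RTMP streaming rather than progressive HTTP download, in which case nothing is cached on disk to grab. A hedged sketch with rtmpdump; the stream URL is hypothetical, and the real one has to be dug out of the page source or a network sniffer:

        rtmpdump -r "rtmp://media.example.com/app/mystream" -o video.flv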


  • Why does Outlook Express download old (POP3) mail?

    - by CityNeonRain
    We are using Outlook Express to access mail through a POP3 server. The problem is that every now and then, Outlook Express starts downloading all old emails, which amounts to tens of thousands of emails. This happens on every single computer (three) at the office. Is this caused by our OE configuration? Or by our hosting?


  • Download databasename.bak file

    - by Jordon
    I have downloaded a databasename.bak file from my hosting company. When I try to restore that DB file in SQL Server 2008, it keeps giving me the following error: The media family on device 'C:\go4sharepoint_1384_8481.bak' is incorrectly formed. SQL Server cannot process this media family. RESTORE HEADERONLY is terminating abnormally. (Microsoft SQL Server, Error: 3241) According to this error, and to the following link http://www.sqlcoffee.com/Troubleshooting047.htm it is clear that either the file I am downloading is corrupt or it is getting corrupted on the way. Any idea why I keep receiving this error? I have tried almost everything but am unable to fix this problem; please help me.
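
    Error 3241 typically means either the bytes were mangled in transit (an ASCII-mode FTP transfer is the classic cause) or the backup was taken on a newer SQL Server version than the one restoring it. Two quick checks narrow down whether the local file is readable at all:

        -- can SQL Server read the media header?
        RESTORE HEADERONLY FROM DISK = 'C:\go4sharepoint_1384_8481.bak';

        -- integrity check without actually restoring
        RESTORE VERIFYONLY FROM DISK = 'C:\go4sharepoint_1384_8481.bak';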


  • Save a single web page (with background images) with Wget

    - by mikael
    I want to use Wget to save single web pages (not recursively, not whole sites) for reference, much like Firefox's "Web Page, complete". My first problem: I can't get Wget to save background images specified in the CSS. Even if it did save the background image files, I don't think --convert-links would convert the background-image URLs in the CSS file to point to the locally saved images. Firefox has the same problem. My second problem: if images on the page I want to save are hosted on another server (like ads), they won't be included. --span-hosts doesn't seem to solve that problem with the line below. I'm using: wget --no-parent --timestamping --convert-links --page-requisites --no-directories --no-host-directories -erobots=off http://domain.tld/webpage.html
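
    Two version-dependent details worth checking: wget only learned to parse url(...) references inside CSS in release 1.12, and --span-hosts must be enabled explicitly alongside --page-requisites. A sketch of the classic single-page recipe:

        wget --version | head -n 1    # needs >= 1.12 for CSS background images

        # -E adjust extension, -H span hosts, -k convert links,
        # -K keep originals, -p page requisites
        wget -E -H -k -K -p http://domain.tld/webpage.html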


  • Using wget to recursively download whole FTP directories

    - by user9406
    I want to copy all of the files and folders from one host to another. The files on the old host sit at /var/www/html, and I only have FTP access to that server, so I can't tar all the files. A regular FTP connection to the old host puts me in the /home/admin folder. I tried running the following command from my new server: wget -r ftp://username:[email protected] But all I get is a made-up index.html file. What is the right syntax for using wget recursively over FTP?
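
    Since the FTP login lands in /home/admin, a relative URL never reaches /var/www/html: the path after the host is resolved against the login directory unless the leading slash is escaped as %2F to make it absolute. A sketch with placeholder credentials and host:

        # %2F = literal "/" -> start at the filesystem root, not /home/admin
        # -nH drops the hostname directory, --cut-dirs=2 strips var/www
        wget -r -nH --cut-dirs=2 "ftp://user:[email protected]/%2Fvar/www/html/"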


  • Download discussions from Usenet

    - by user22559
    Hello. Does anyone know a good (and maybe free) Usenet client that would allow me to save all the discussions from one (or more) groups to text files, preferably each post to its own text file? I need this to run some data mining on those discussions. Thanks :)
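
    If no ready-made client fits, a short script may be easier to mine with anyway. A minimal sketch using Python's standard nntplib module; the server and group names are placeholders, and note that nntplib was removed from the standard library in Python 3.13:

        import nntplib

        # connect, pick a group, save the last 100 posts one file each
        with nntplib.NNTP('news.example.com') as nntp:
            resp, count, first, last, name = nntp.group('comp.lang.c')
            for num in range(max(first, last - 99), last + 1):
                try:
                    resp, info = nntp.article(str(num))
                except nntplib.NNTPError:
                    continue    # expired or cancelled article
                with open('%s-%d.txt' % (name, num), 'wb') as f:
                    f.write(b'\r\n'.join(info.lines))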


  • Setting up a download limit for a computer

    - by sprsr
    Hello all, A friend of mine asked me the following: "I have a modem and a housemate. He is using my modem and slowing down my Internet. What I want to do is limit his bandwidth, without using any program like NetLimiter. How can I do that?" What are the ways to do this? Thanks.
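
    Without installing anything on the housemate's machine, the shaping has to happen at the gateway: either a QoS page in the modem/router's firmware, or, if a Linux box routes the traffic, tc. A sketch assuming eth0 faces the LAN and the housemate's IP is 192.168.1.50 (both illustrative):

        # root qdisc with a default class for everyone else
        tc qdisc add dev eth0 root handle 1: htb default 10
        tc class add dev eth0 parent 1: classid 1:10 htb rate 100mbit

        # a capped class for the housemate, then steer his traffic into it
        tc class add dev eth0 parent 1: classid 1:20 htb rate 1mbit ceil 2mbit
        tc filter add dev eth0 parent 1: protocol ip prio 1 u32 \
            match ip dst 192.168.1.50/32 flowid 1:20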


  • What causes PHP pages to consistently download instead of running normally

    - by Jonathan
    Hi, I'm running an Ubuntu Server in a VM to test out different web forum solutions. I have set up ~/public_html/ to be accessible through the Apache2 web server, and that works fine. However, when I go to a .php file in a browser (using my VM's ip-address/~username/phpfile.php) it does not display it as it should; instead it offers to save the file or asks what program to open it with. Interestingly, that dialog box does recognise that it is a PHP file. I have the following version of PHP installed on the system: PHP 5.3.2-1ubuntu4.5 with Suhosin-Patch (cli) (built: Sep 17 2010 13:49:46) Copyright (c) 1997-2009 The PHP Group Zend Engine v2.3.0, Copyright (c) 1998-2010 Zend Technologies And the following server: Server version: Apache/2.2.14 (Ubuntu) Server built: Nov 18 2010 21:19:09 If anyone knows what might be causing this and potential solutions, it would make me very happy :) EDIT: Turns out this behaviour is only apparent on files in the ~/public_html/ directory. All PHP files in /var/www/ work fine. Prizes go to whoever can explain why! :D (And by prizes I just mean a well done, no actual prizes I'm afraid.)
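
    The likely answer to the prize question: Ubuntu's PHP packages deliberately disable the engine for user directories. The stock /etc/apache2/mods-available/php5.conf of that era contains a block like the sketch below; commenting it out and reloading Apache re-enables PHP under ~/public_html:

        <IfModule mod_userdir.c>
            <Directory /home/*/public_html>
                php_admin_value engine Off
            </Directory>
        </IfModule>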

