Search Results

Search found 16316 results on 653 pages for 'force download'.


  • ASP.NET download page

    - by Russel
    Hi, I have a Reports.aspx ASP.NET page that lets users download Excel report files by clicking on several hyperlinks. When a report hyperlink is clicked, I open a new window using the JavaScript window.open method and navigate to the download.aspx page. The code-behind for the download page creates an Excel file on the fly using OpenXML (in memory) and sends it back to the browser. Here is some code from the download.aspx page:

      byte[] outputFileBytes = CreateExcelReport().ToArray();
      Response.Clear();
      Response.BufferOutput = true;
      Response.ContentType = "application/vnd.openxmlformats-officedocument.spreadsheetml.sheet";
      Response.AddHeader("Content-Disposition", string.Format("attachment; filename={0}", "tempReport.xlsx"));
      Response.BinaryWrite(outputFileBytes);
      Response.Flush();
      Response.Close();
      Response.End();

    My problem: some of these reports take a while to generate. I would like to display a loading.gif on my Reports.aspx page while the download.aspx page is being requested, and hide it once the request completes. Is there a way to achieve this, perhaps with some kind of event? I have MooTools at my disposal. Thanks. PS: I know that generating reports like this is not ideal, but that's a different story altogether...

    Read the article

  • Can't get to Pex download (or Visual Studio 2010)

    - by Vaccano
    I know this is probably not a "true" Stack Overflow question, but here it is anyway: is anyone able to get to the MSDN download of Visual Studio 2010? I just get "Your search did not match any products." when I click on the New Downloads - Visual Studio 2010 link. I was able to download Visual Studio on Monday, but it is gone now. That is not really my issue, though. That page is also where the Pex download is; with that page down I can't download Pex either. (I checked with a co-worker, who is having the same issue, before posting, so I know it is not just me.)

    Read the article

  • Cisco ASA 5505 and slow download speeds for Apple devices

    - by James
    For traffic routing through my ASA 5505, downloads for all Apple devices, including an Apple TV, an iPad (gen 1), an iMac, and a MacBook Pro, are very slow. speedof.me shows less than 1 Mbps down (where I should have 20+ Mbps), yet for any Windows-based device the download speeds are in excess of 20 Mbps. The Windows devices, as well as the iMac and MacBook Pro machines, are connected via Ethernet cable. Why are the Apple devices experiencing such pain? Is it an ASA setting, or something else? Thanks.

    Read the article

  • Save Website To Disk

    - by Christian
    Hello everyone! I have a very poor internet connection when I'm at home. The only time I have good internet is at college. When I get home, the most mundane task, like opening a web page, becomes a five-minute stress test. So what I was thinking was to download the web page ahead of time, for example superdickery. I was wondering what the best method would be to download the entire image archive of the page. Would it be illegal if I did this? It's just that I don't want to be frustrated every time I just want to load a simple JPEG image.
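
    For reference, mirroring tools such as wget (with its recursive options) or HTTrack are the usual way to pull a whole site down for offline use. As a minimal sketch of the same idea in Python, assuming a plain-HTML page whose images are reachable from <img> tags (the URL and output paths below are placeholders, not tested against the actual site):

      # Minimal sketch: save every image referenced by <img> tags on one page.
      import os
      import urllib.request
      from html.parser import HTMLParser
      from urllib.parse import urljoin

      class ImageCollector(HTMLParser):
          """Collect the src attribute of every <img> tag."""
          def __init__(self):
              super().__init__()
              self.images = []

          def handle_starttag(self, tag, attrs):
              if tag == "img":
                  for name, value in attrs:
                      if name == "src" and value:
                          self.images.append(value)

      page_url = "http://www.superdickery.com/"  # placeholder URL
      html = urllib.request.urlopen(page_url).read().decode("utf-8", "replace")

      parser = ImageCollector()
      parser.feed(html)

      os.makedirs("images", exist_ok=True)
      for src in parser.images:
          absolute = urljoin(page_url, src)
          target = os.path.join("images", os.path.basename(src) or "unnamed")
          try:
              with urllib.request.urlopen(absolute) as response, open(target, "wb") as out:
                  out.write(response.read())
          except OSError as err:
              print("skipped", absolute, err)

    The sketch fetches only one page; recursing into linked pages is what wget -r automates.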

    Read the article

  • I am getting brute forced, what do I do

    - by Saif Bechan
    My email server is being brute-forced over IMAP and POP3. I have the full ASL package installed, but it just sends me the OSSEC logs. How can I ban the offending IPs? I thought ASL automatically blocked these attacks after a few wrong tries. How can I make it do that?

    Read the article

  • How to use Selector when clicking an item in a list

    - by notenking
    I've run into a problem while developing an Android application. Each list item has two images that can be downloaded from the server: it shows one image normally, and should show the other when the user selects or clicks the item. If I only start downloading the second image from the server when the user clicks, the user never gets to see it, because the click state lasts less time than the download takes. So I want to download both images up front, and when the item is clicked again the app can load the second image from local storage. But I do not know how to load this cached image. Any suggestion would be appreciated. Thanks.

    Read the article

  • IE browser doesn't close and file download popup needs focus

    - by jkranthi
    Hi all, I am trying to click on a link once it becomes active, which then produces a file-download popup. I have two problems here. 1) I start the code and leave it. After a long process it waits for the link to become active; once it is, it clicks the link and a download popup opens (if everything goes well), and then it hangs there (flashing yellow in the taskbar, meaning I have to click on the IE window for it to process whatever is next). Every time the download popup appears I have to click on IE. Is there a way to handle this, or am I doing something wrong? 2) Even when I do click on IE, IE doesn't close even though I call ie.close. My code is below:

      # if the link is active
      ie.link(:text, a).click_no_wait
      prompt_message = "Do you want to open or save this file?"
      window_title = "File Download"
      save_dialog = WIN32OLE.new("AutoItX3.Control")
      save_dialog.WinGetText(window_title)
      save_dialog_obtained = save_dialog.WinWaitActive(window_title)
      save_dialog.WinKill(window_title)
      # end
      # some more code - normal puts statements
      ie.close  # ie is hanging up for some strange reason...?

    Read the article

  • Download all images or create a zip file of all uploads in a gallery

    - by Arpit Vaishnav
    I am working on a photo-sharing site, and I want to offer a way to download all the images in a gallery. The gallery is an association, so I can get all the images via @gallery.uploads. Now what I want is to download all of these files, or, if possible, to create a zip file so users can download one file containing all the uploads inside the gallery. Thanks.
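
    Whatever the framework, the underlying technique is the same: collect the paths of the uploaded files and stream them into a single archive (in Rails, a library such as rubyzip covers this). A minimal Python sketch of the idea, with placeholder paths standing in for wherever the uploads live on disk:

      # Sketch: bundle a gallery's uploaded files into one zip archive.
      import os
      import zipfile

      upload_paths = [
          "uploads/gallery42/photo1.jpg",  # placeholders for @gallery.uploads
          "uploads/gallery42/photo2.jpg",
      ]

      with zipfile.ZipFile("gallery42.zip", "w", zipfile.ZIP_DEFLATED) as archive:
          for path in upload_paths:
              # arcname keeps just the file name, so the zip has a flat layout
              archive.write(path, arcname=os.path.basename(path))

    If galleries change rarely, building the archive once and caching it beats zipping on every request.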

    Read the article

  • Slow download speeds on MacBook Pro

    - by Austin
    Just as the title says, I am getting very low download speeds on my MacBook Pro. I did a speed test at speedtest.net and am getting 7 Mbps down, 0.5 Mbps up. However, actual downloads only seem to reach 270 KB/s max (averaging around 100 KB/s), whether on my school's network or on my home network, wired or wireless. I am on Mac OS X 10.5.8, with Google Chrome. My Ethernet settings (under System Preferences - Network - Ethernet Connection - Advanced - Ethernet) are set to "Configure Automatically", "Speed: 100TX", "Duplex: full-duplex, flow-control", and "MTU: Standard (1500)". As far as I can tell, there are no throttles or anything between here and the ISP, so... any ideas on why I'm getting such low download speeds?

    Read the article

  • Can't download YouTube video

    - by dsaccount1
    I'm having trouble retrieving the YouTube video automatically; here's the code. The problem is this line near the end: download = urllib.request.urlopen(download_url).read()

      # Youtube video download script
      # 10n1z3d[at]w[dot]cn

      import urllib.request
      import sys

      print("\n--------------------------")
      print(" Youtube Video Downloader")
      print("--------------------------\n")

      try:
          video_url = sys.argv[1]
      except:
          video_url = input('[+] Enter video URL: ')

      print("[+] Connecting...")

      try:
          if(video_url.endswith('&feature=related')):
              video_id = video_url.split('www.youtube.com/watch?v=')[1].split('&feature=related')[0]
          elif(video_url.endswith('&feature=dir')):
              video_id = video_url.split('www.youtube.com/watch?v=')[1].split('&feature=dir')[0]
          elif(video_url.endswith('&feature=fvst')):
              video_id = video_url.split('www.youtube.com/watch?v=')[1].split('&feature=fvst')[0]
          elif(video_url.endswith('&feature=channel_page')):
              video_id = video_url.split('www.youtube.com/watch?v=')[1].split('&feature=channel_page')[0]
          else:
              video_id = video_url.split('www.youtube.com/watch?v=')[1]
      except:
          print("[-] Invalid URL.")
          exit(1)

      print("[+] Parsing token...")

      try:
          url = str(urllib.request.urlopen('http://www.youtube.com/get_video_info?&video_id=' + video_id).read())
          token_value = url.split('video_id=' + video_id + '&token=')[1].split('&thumbnail_url')[0]
          download_url = "http://www.youtube.com/get_video?video_id=" + video_id + "&t=" + token_value + "&fmt=18"
      except:
          url = str(urllib.request.urlopen('www.youtube.com/watch?v=' + video_id))
          exit(1)

      v_url = str(urllib.request.urlopen('http://' + video_url).read())
      video_title = v_url.split('"rv.2.title": "')[1].split('", "rv.4.rating"')[0]
      if '&quot;' in video_title:
          video_title = video_title.replace('&quot;', '"')
      elif '&amp;' in video_title:
          video_title = video_title.replace('&amp;', '&')

      print("[+] Downloading " + '"' + video_title + '"...')

      try:
          print(download_url)
          file = open(video_title + '.mp4', 'wb')
          download = urllib.request.urlopen(download_url).read()
          print(download)
          for line in download:
              file.write(line)
          file.close()
      except:
          print("[-] Error downloading. Quitting.")
          exit(1)

      print("\n[+] Done. The video is saved to the current working directory(cwd).\n")
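
    A hedged guess at why that last part fails: in Python 3, urlopen(...).read() returns a bytes object, and iterating over bytes yields integers, so file.write(line) receives an int instead of bytes and raises a TypeError. The read-then-loop is also redundant; the bytes can be written in one call, or streamed in chunks so the whole video never sits in memory:

      # Sketch of a fix for the final block: stream the response in chunks.
      import urllib.request

      def save_video(download_url, filename):
          with urllib.request.urlopen(download_url) as response, open(filename, "wb") as out:
              while True:
                  chunk = response.read(64 * 1024)  # 64 KB at a time
                  if not chunk:
                      break
                  out.write(chunk)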

    Read the article

  • Recover open but deleted file on Linux using ln instead of cp

    - by Yang
    Say I have a file that's still downloading (from a source that's hard to re-download from) but was accidentally deleted from the filesystem namespace (/tmp/blah), and I'd like to recover it. Normally I could just cp /proc/$PID/fd/$FD /tmp/blah, but in this case that would only get me a partial snapshot, since the file is still downloading. Furthermore, once the download completes, the downloading process (e.g. Chrome) will close the FD. Is there any way to recover by inode/create a hard link? Any other solutions? If it makes any difference, I'm mainly concerned with ext4. Thanks in advance.
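
    Two pieces of general Linux behavior worth noting here (not specific to this thread): a hard link can't be re-created from an open fd of an unlinked file with ordinary tools, but opening /proc/$PID/fd/$FD yourself gives you your own handle on the same inode, which keeps the data reachable even after the downloader closes its descriptor. A hedged Python sketch of that approach, with placeholder PID/FD values: open the file, wait for its size to stop growing, then copy.

      # Sketch: keep a deleted-but-open file reachable, then copy it out.
      import os
      import shutil
      import time

      pid, fd = 12345, 67  # placeholders for the downloader's PID and FD
      src = open(f"/proc/{pid}/fd/{fd}", "rb")  # our handle keeps the inode alive

      last_size = -1
      while True:
          size = os.fstat(src.fileno()).st_size
          if size == last_size:
              break  # size stable: assume the download has finished
          last_size = size
          time.sleep(5)

      with open("/tmp/blah.recovered", "wb") as dst:
          shutil.copyfileobj(src, dst)
      src.close()

    The five-second stability check is a heuristic; a stalled download would fool it.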

    Read the article

  • Why can't I download the JDK from the Oracle web site directly without AuthParam?

    - by hugemeow
    That is, why does downloading with the following command fail?

      wget http://download.oracle.com/otn-pub/java/jdk/6u35-b10/jdk-6u35-linux-i586.bin

    The following command works, but that AuthParam may not work after a while. Why?

      wget http://download.oracle.com/otn-pub/java/jdk/6u35-b10/jdk-6u35-linux-i586.bin?AuthParam=1346955572_27e44512fe8ef5cb920c4c329e5f0fd8

    How is this AuthParam option implemented? Why can't I download without this parameter, and why can I only obtain it using a browser? Is a rewrite used on the Oracle server to deal with the wget request? And why does the same command stop working after an hour? Does the value of AuthParam expire, and if so, how does the server check whether it has expired?

      wget http://download.oracle.com/otn-pub/java/jdk/6u35-b10/jdk-6u35-linux-i586.bin?AuthParam=1346955572_27e44512fe8ef5cb920c4c329e5f0fd8
      --2012-09-07 03:51:01-- http://download.oracle.com/otn-pub/java/jdk/6u35-b10/jdk-6u35-linux-i586.bin?AuthParam=1346955572_27e44512fe8ef5cb920c4c329e5f0fd8
      Resolving download.oracle.com... 23.67.251.50, 23.67.251.57
      Connecting to download.oracle.com|23.67.251.50|:80... connected.
      HTTP request sent, awaiting response... 403 Forbidden
      2012-09-07 03:51:01 ERROR 403: Forbidden.

    @KJ-SRS Is it some kind of CGI program that judges whether the AuthParam is right? Is it possible to download the JDK package purely with wget, with no need to fetch that AuthParam in a browser?
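
    A hedged reading of the evidence in the question itself: the token's first field (1346955572) is a Unix timestamp from the day before the 403 was logged, which suggests AuthParam is a time-limited signed token issued after the license click-through, and the server simply rejects tokens older than its cutoff. For later JDK releases, Oracle accepted a license-acceptance cookie in place of the browser step; whether that works for 6u35 is an assumption. A Python sketch of the cookie approach:

      # Hedged sketch: request an Oracle download with the license-acceptance
      # cookie instead of an AuthParam token. Known for later JDK releases;
      # treating it as valid for 6u35 is an assumption.
      import urllib.request

      url = ("http://download.oracle.com/otn-pub/java/jdk/"
             "6u35-b10/jdk-6u35-linux-i586.bin")
      request = urllib.request.Request(
          url,
          headers={"Cookie": "oraclelicense=accept-securebackup-cookie"},
      )
      with urllib.request.urlopen(request) as response:
          with open("jdk-6u35-linux-i586.bin", "wb") as out:
              out.write(response.read())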

    Read the article

  • fail2ban regex working but no action being taken

    - by fpghost
    I have the following snippet of fail2ban configuration on an Ubuntu 13.10 server:

      # jail.conf
      [apache-getphp]
      enabled  = true
      port     = http,https
      filter   = apache-getphp
      action   = iptables-multiport[name=apache-getphp, port="http,https", protocol=tcp]
                 mail-whois[name=apache-getphp, dest=root]
      logpath  = /srv/apache/log/access.log
      maxretry = 1

      # filter.d/apache-getphp.conf
      [Definition]
      failregex = ^<HOST> - - (?:\[[^]]*\] )+\"(GET|POST) /(?i)(PMA|phptest|phpmyadmin|myadmin|mysql|mysqladmin|sqladmin|mypma|admin|xampp|mysqldb|mydb|db|pmadb|phpmyadmin1|phpmyadmin2|cgi-bin)
      ignoreregex =

    I know the regex is good, because if I run the test command on my access.log:

      fail2ban-regex /srv/apache/log/access.log /etc/fail2ban/filter.d/apache-getphp.conf

    I get a SUCCESS result with multiple hits, and in my log I see entries like:

      187.192.89.147 - - [13/Apr/2014:11:36:03 +0100] "GET /phpTest/zologize/axa.php HTTP/1.1" 301 585 "-" "-"
      187.192.89.147 - - [13/Apr/2014:11:36:03 +0100] "GET /phpMyAdmin/scripts/setup.php HTTP/1.1" 301 593 "-" "-"

    Secondly, I know email is configured correctly, as each time I run service fail2ban restart I get an email for each of the filters stopping/starting. However, despite all this, no action seems to be taken when one of these requests comes in: no email with whois, and no entries in iptables. What could possibly be preventing fail2ban from taking action? (Everything looks in order in fail2ban-client -d, and I can see the chains have loaded with iptables -L.)

    Read the article

  • Apache/PHP serving file multiple times

    - by easement
    I have a system with a download.php page. The page takes an id, loads a file based on the DB record, and then serves it up. I've noticed a couple of instances where files are requested multiple times in short time spans (20ms), times too quick for human input. There are plenty of instances where the downloader functions fine. However, in taking a closer look at the downloader's usage, I did see some interesting behavior. For instance, the IP address xxx.xxx.xxx.xxx (which is one in a range owned by xxxxxx.de in Germany) came to the site through Google. They browsed around and then came to the page http://site.com/xxxx/press+125.php. There they issued a request for /download.php?id=/ZZ/n+aH55Y= (a PDF) at 9:04:23 AM. That alone is not a big deal. However, what is interesting is that the server seems to have been quite preoccupied with serving that request. In the logs, the request first completes between 9:09:48 and 9:10:00. It looks like the user must have gotten tired of waiting during that time and requested the document two more times. Between 09:14:47 and 09:15:00 the same request appears again, except it is from 9:04:43 AM, 20 seconds later than the first request. Then it pops up a third time, with a request that started at 09:05:06 completing between 09:19:55 and 09:19:58! I'm suspicious of that document. Looking through the logs I see other instances where it takes the server a little while to handle that specific file. Check out this list of requests from zzz.zzz.zzz.zzz (different than above) for the file /download.php?id=/ZZ/n+aH55Y= (the same document as before):

      Request time   Complete time
      04:32:43       04:33:36
      04:32:50       04:33:36
      04:32:51       04:33:38
      04:33:05       04:33:38
      04:33:34       04:33:42
      04:33:05       04:33:42

    So something is definitely going on. Whether it has to do with this specific document tripping up the server, the download.php page's code, or whether we're just seeing the evidence of some server-level overload as it plays out in real time, I'm not yet sure. In fairness, there are other instances of people downloading /download.php?id=/ZZ/n+aH55Y= (the same PDF) without error. However, it is interesting that the multiple processes only seem to happen with this one file, and then only when it is accessed through the page http://site.com/press+125.php. It bears further investigation whether there's something amiss inside the code that causes the system to fire off multiple download requests that occupy the server. I don't know if press+125.php is a rabbit hole, but it's a weird coincidence. Any ideas? I'm totally out of ideas. Apache maxed out? Things like that.

      // DOWNLOAD.php
      $file = new files();
      $file->comparison_filter("id", "=", $id); // sql to load
      if ($file->load()) {
          $file->serve();
      }

      // FILES
      function serve() {
          if ($this->is_loaded) {
              if (file_exists($this->get_value("filename"))) {
                  if ($this->get_value("content_type") != "") {
                      header("Content-Type: " . $this->get_value("content_type"));
                  }
                  header("Content-Length: " . filesize($this->get_value("filename")));
                  if ($this->get_value("flag_image") == 0 || $this->get_value("flag_image") == false) {
                      header("Cache-Control: private");
                      header("Content-Disposition: attachment; filename=" . urlencode($this->get_value("original_filename")));
                  }
                  set_time_limit(0);
                  @readfile($this->get_value("filename"));
                  exit;
              }
          }
      }
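
    One hedged hypothesis, not from the original post: download accelerators and browser retries issue several overlapping requests for the same URL, and since readfile() streams the whole file for every request, one slow client can keep several PHP processes busy at once. A quick way to probe the endpoint is to send a Range request and see whether it answers 206 Partial Content (ranges honored) or 200 with the full body (every retry re-sends everything). A Python sketch, with a placeholder URL:

      # Sketch: check whether a download endpoint honors HTTP Range requests.
      import urllib.request

      url = "http://site.com/download.php?id=..."  # placeholder
      request = urllib.request.Request(url, headers={"Range": "bytes=0-1023"})
      with urllib.request.urlopen(request) as response:
          # 206 means ranges are honored; 200 means the whole file came back
          print(response.status, response.headers.get("Content-Range"))
          print(len(response.read()), "bytes received")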

    Read the article

  • Why do Cisco IOS routers hang in the middle of large downloads?

    - by cjavapro
    After a few years in use, we have seen Cisco 871 and 851 routers hang if a single download was more than 100 MB. It is intermittent: sometimes the problem goes away, sometimes it happens on very small downloads (just a 10 KB web page). It seems that just about all the downloads eventually finish, but the bigger the download, the longer the hang. Is there a way to resolve this (short of router replacement, which is what we have been doing)?

    Read the article

  • Downloaded CHM is blocked, is there a solution?

    - by David Rutten
    CHM files that are downloaded are often tagged as potentially malicious by Windows, which effectively blocks all the HTML pages inside them. There's an easy fix (just unblock the file after you download it), but I was wondering if there's a better way to provide unblocked CHM files. What if I were to download the CHM file (as a byte stream) from our server inside the application, then write all the data to a file on disk? Would it still be blocked? Is there another/better way still? Edit: Yes, downloading the file using a System.Net.WebClient does solve the problem. But is there still a better way?
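
    Background that explains the behavior (general Windows knowledge, not from the original thread): the "block" is an NTFS alternate data stream named Zone.Identifier that browsers attach to files saved from the Internet zone; files your own code writes to disk never get the stream, which is why the WebClient approach works. Deleting the stream unblocks an already-downloaded file. The question's context is C#, but as a hedged sketch of the idea in Python (placeholder path; Windows with NTFS assumed):

      # Sketch: unblock a downloaded file by deleting its Zone.Identifier
      # alternate data stream (the marker Windows uses to flag downloads).
      import os

      path = r"C:\docs\manual.chm"  # placeholder path
      try:
          os.remove(path + ":Zone.Identifier")  # DeleteFileW accepts stream paths
          print("unblocked", path)
      except FileNotFoundError:
          print("no Zone.Identifier stream; file was not blocked")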

    Read the article

  • KDE on Windows won't download/install

    - by endolith
    No matter which mirror I use, I get this error. I've tried a few times over the last several weeks.

      Download failed
      ---------------------------
      The download of ftp://kde.mirrors.tds.net/pub/kde/stable/4.5.4/win32/libopensp-vc100-1.5.2-bin.tar.bz2 failed with error:
      archive downloaded from ftp://kde.mirrors.tds.net/pub/kde/stable/4.5.4/win32/libopensp-vc100-1.5.2-bin.tar.bz2 checksum error
      ---------------------------
      Retry   Ignore   Cancel

    Should I just ignore this and let it continue? Update: I ran as administrator, changed the install directory to C:\KDE, and ignored this error, and it seemed to install, but then gave me a different error, same file:

      Error
      ---------------------------
      Internal Error - File C:/Temp/KDE/libopensp-vc100-1.5.2-bin.tar.bz2 does not exist
      ---------------------------
      Cancel

    But now programs seem to work! Should I just ignore this error? I can't even find a plain-English explanation of what libopensp is.

    Read the article

  • Linux servers seeing bad download performance behind Sonicwall firewall

    - by Joshua Penix
    I'm working with a pair of co-located CentOS Linux servers sitting behind a Sonicwall PRO 2040 Enhanced firewall running in transparent bridge mode. These servers are having a strange problem downloading files more than a few megabytes in size. For example, if I try to wget or FTP a copy of the Linux kernel from kernel.org, the first ~1-2MB will download at 600+K/s, and then throughput will drop off a cliff to 1K/s. I've reviewed all the firewall configuration settings for anything suspicious, but found nothing. More interestingly, I performed the same download with a Windows server sitting behind the same firewall, and it sailed right through at 600+K/s the whole way. Has anyone seen this? Where should I start looking to troubleshoot this problem?

    Read the article
