Search Results

Search found 29619 results on 1185 pages for 'external script'.


  • What's with the accesses to $random_existing_file/cache/df.php?

    - by Bernd Jendrissek
    Occasionally I eyeball Apache's access_log and lately I've been noticing these accesses to URLs that I don't serve. They're correctly 404'ed, but I'd like to know just who and what is involved here. "Obviously" it's some sort of vulnerability probing; I'd like to know which. (Not that it affects me, but I like to know the score.) Here's an example: 69.89.31.206 - - [28/Nov/2012:17:36:34 +0200] "GET /cvfull.pdf/cache/df.php HTTP/1.1" 404 489 "-" "-" Oddly, all 26 attempts are to either /cache/df.php, or to /cvfull.pdf/cache/df.php - they come in pairs. A few weeks ago it was zx.php, now it's df.php - I'm assuming the target is the same. Perhaps I should be flattered that a script is thinking of hiring me. Seriously, my CV is one of only two PDF files on my site, so I can only guess that non-PDF URLs aren't interesting? I've tried Googling for "cache df php", but my Google-fu is weak at the best of times, so I can only find a few reports of other script attacks. What's the vulnerability being scanned for here?
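
    For anyone wanting to tally such probes in their own logs, a quick sketch assuming Apache's combined log format and a Debian-style log path (adjust both to taste):

        # list 404'ed hits on df.php/zx.php, grouped and counted by client IP
        awk '$9 == 404 {print $1, $7}' /var/log/apache2/access_log \
            | grep -E '/(df|zx)\.php$' | sort | uniq -c | sort -rn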

  • How do I send traffic to specific IP addresses through VPN and others directly to the internet?

    - by keithwarren7
    I am running Windows 7 and using the Cisco VPN adapter to connect to a private network where I access resources with addresses in the 172.x.x.x range. My problem is that when I'm connected to the VPN, all external traffic is routed through it. I want to set things up so that only certain IP addresses go through the VPN, while everything else goes out over the local adapter and out to the internet as normal. How can I do that?
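
    One common client-side approach, sketched with hypothetical addresses (VPN gateway 172.16.0.1, private range 172.16.0.0/12): drop the default route the VPN installs and add a static route for the private range only.

        rem run from an elevated command prompt while the VPN is connected
        route delete 0.0.0.0 mask 0.0.0.0 172.16.0.1
        route add 172.16.0.0 mask 255.240.0.0 172.16.0.1

    Proper split tunneling with the Cisco client is normally pushed from the concentrator side, so this per-session workaround has to be reapplied after each connect.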

  • Configure POP3 Connector for SBS 2008 (Exchange 2007)

    - by MadBoy
    I have a client who has all his mail on a server outside of his company. Right now his Exchange server (on SBS 2008) is configured using the POP3 connector, but the problem is that mail gets deleted from the server when the connector downloads it. Is there a way to make the POP3 connector leave emails on the (external) server while still downloading them for use within Exchange? The client wants to "feel" Exchange before making the move completely, so he would like to play with it for a while longer without losing the mail he has on his server.

  • Can a Barracuda Spam Filter 300 reject mail based on DNS?

    - by user84104
    Can a Barracuda SF 300 reject mail based on DNS? Specifically, can it respond with a 4XX code for mail claiming to be from a domain without a valid MX or A record (similar to postfix's smtpd_sender_restrictions = reject_unknown_sender_domain)? If so, how do I set it? (I realize it's probably something simple I've overlooked.) The Barracuda can resolve using its configured name servers, and those name servers correctly resolve external domains.
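
    For reference, a sketch of the postfix behaviour being asked about (both are standard main.cf parameters; 450 is postfix's default reject code for unknown sender domains):

        smtpd_sender_restrictions = reject_unknown_sender_domain
        unknown_address_reject_code = 450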

  • IIS 7 - 403 Access Denied error on wwwroot trying to redirect to /owa

    - by cparker4486
    I'm trying to set up a redirect from http://mail.mydomain.com to https://mail.mydomain.com/owa. I was unsuccessful doing this with IIS's HTTP Redirect, so I looked at other options. The one I settled on is to create a default document in the wwwroot folder to handle the redirect. I created a file called index.aspx (and added index.aspx to the list of default documents) and put the following code in it:

        <script runat="server">
            private void Page_Load(object sender, System.EventArgs e)
            {
                Response.Status = "301 Moved Permanently";
                Response.AddHeader("Location", "https://mail.mydomain.com/owa");
            }
        </script>

    Instead of getting a redirect I get:

        403 - Forbidden: Access is denied.
        You do not have permission to view this directory or page using the credentials that you supplied.

    I've been trying to find an answer to this but have been unsuccessful so far. One thing I did try was to add the Everyone group to wwwroot with read access. No change. The AppPool for Default Web Site is DefaultAppPool and the Identity is ApplicationPoolIdentity. (I don't know what these things are, but maybe knowing this will help you.) Thanks!
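
    For comparison, the same 301 expressed as IIS 7 configuration rather than code — a sketch of a web.config dropped in wwwroot (this is the stock <httpRedirect> element; whether it coexists happily with the /owa virtual directory would need testing):

        <configuration>
          <system.webServer>
            <httpRedirect enabled="true"
                          destination="https://mail.mydomain.com/owa"
                          httpResponseStatus="Permanent" />
          </system.webServer>
        </configuration>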

  • OpenVPN: ERROR: could not read Auth username from stdin

    - by user56231
    I managed to set up OpenVPN, but now I want to integrate a username/password authentication method. Even though I haven't added auth-nocache to the server config, whenever I try to connect, the client side returns the following message:

        ERROR: could not read Auth username from stdin

    My server.conf file contains basic stuff; everything works up until I try to implement this form of authentication.

        mode server
        dev tun
        proto tcp
        port 1194
        keepalive 10 120
        plugin /usr/lib/openvpn/openvpn-auth-pam.so login
        client-cert-not-required
        username-as-common-name
        auth-user-pass-verify /etc/openvpn/auth.pl via-env
        ca /etc/openvpn/easy-rsa/2.0/keys/ca.crt
        cert /etc/openvpn/easy-rsa/2.0/keys/server.crt
        key /etc/openvpn/easy-rsa/2.0/keys/server.key
        dh /etc/openvpn/easy-rsa/2.0/keys/dh1024.pem
        user nobody
        group nogroup
        server 10.8.0.0 255.255.255.0
        persist-key
        persist-tun
        #persist-local-ip
        status openvpn-status.log
        verb 3
        client-to-client
        push "redirect-gateway def1"
        push "dhcp-option DNS 10.8.0.1"
        log-append /var/log/openvpn
        comp-lzo

    I searched all over the net for a solution, and all the answers seem to be related to the auth-nocache parameter, which I haven't set. The directive auth-user-pass-verify /etc/openvpn/auth.pl via-env points to a script which is executed to perform the authentication; a failed authentication should result in exit 1, while a successful one should result in exit 0. For testing, auth.pl returns exit 0 no matter what the input is, but it seems the file is not even executed before the error is raised. auth.pl contents:

        #!/usr/bin/perl
        my $user = $ENV{username};
        my $passwd = $ENV{password};
        printf("$user : $passwd\n");
        exit 0;

    Any ideas?
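
    Worth noting: that particular error is raised on the client before the server-side script is ever consulted, so the fix usually lives in the client config. A minimal sketch (file names are hypothetical):

        # client.conf (excerpt)
        auth-user-pass                    # prompt for credentials on the client's own terminal
        # or, non-interactively:
        # auth-user-pass credentials.txt  # line 1: username, line 2: password

    Running the client from a GUI or daemon with no usable stdin produces exactly the quoted error when only the bare directive is present.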

  • Is it possible to have a bigger resolution on a 22" FullHD VGA screen?

    - by Igoru
    I'm using a 13" laptop with a FullHD (1920x1080) screen and an external 22" screen, also FullHD. It's quite strange to have a much bigger screen with the same working area, so I was thinking about manually adding a custom resolution to my Linux config. I know how to do that, but I'm not sure what a good resolution to set up would be. Any ideas? Any "don't do that, please" answers? If so, why?
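
    For the mechanics, a sketch of the usual xrandr route (the VGA-1 output name and the 2560x1440 target are assumptions; the modeline is what cvt prints for that mode). Note that an analog VGA link and the monitor's own scaler may simply refuse a mode this far above native:

        cvt 2560 1440 60
        xrandr --newmode "2560x1440_60.00"  312.25  2560 2752 3024 3488  1440 1443 1448 1493 -hsync +vsync
        xrandr --addmode VGA-1 "2560x1440_60.00"
        xrandr --output VGA-1 --mode "2560x1440_60.00"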

  • How to remove all CouchDB versions in Ubuntu 10.04 (server)? (after multiple installs)

    - by DjangoRocks
    Hi all, I have done multiple installs of CouchDB using

        sudo aptitude install couchdb
        sudo apt-get install couchdb

    and, more recently, based on the instructions found at http://wiki.apache.org/couchdb/Installing_on_Ubuntu. May I know how to uninstall or remove all the above installations? Best Regards.

    +++++++++++++++++++UPDATE++++++++++++++++++++++++

    I've tried running the following commands:

        apt-get remove couchdb
        apt-get purge couchdb

    but received the following errors:

        (Reading database ... 39814 files and directories currently installed.)
        Removing couchdb ...
        invoke-rc.d: initscript couchdb, action "stop" failed.
        dpkg: error processing couchdb (--remove):
         subprocess installed pre-removal script returned error exit status 1
        invoke-rc.d: initscript couchdb, action "start" failed.
        dpkg: error while cleaning up:
         subprocess installed post-installation script returned error exit status 1
        Errors were encountered while processing:
         couchdb
        E: Sub-process /usr/bin/dpkg returned an error code (1)

    May I know how to fix this? On issuing the command

        dpkg -l | grep couchdb

    I received the following response:

        rF  couchdb      0.10.0-1ubuntu2   RESTful document oriented database, system daemon
        iF  couchdb-bin  0.10.0-1ubuntu2   RESTful document oriented database, programs

    How do I uninstall CouchDB? I think there's some file corruption?
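
    A common workaround for a package whose pre-removal script keeps failing (not an official procedure; the dpkg info path is the standard one on Ubuntu): make the prerm a no-op, then purge.

        # neutralize the failing pre-removal script, then let dpkg finish
        sudo sh -c 'printf "#!/bin/sh\nexit 0\n" > /var/lib/dpkg/info/couchdb.prerm'
        sudo apt-get purge couchdb couchdb-bin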

  • Mac terminal: run a file of commands

    - by Ilan Tal
    I am coming from Linux and trying to get a Mac to do what I want it to do. The question is what the best tool to use is. I want to mount (and unmount) several remote disks. In a terminal I can do the trick with

        mount -t smbfs //username:pass@addr /Users/me/RemoteDisks/mnt1

    Since I want to mount several disks, I would like to put all of the information into a file, store it in Documents/subfolder and make a link to it on the desktop (or somewhere better, if there is a better place). At the moment I have manually run the appropriate command in the terminal, and the remote disk is mounted and I see its contents. What I need is a one-click method to run a file that mounts all the disks. I tried AppleScript, but it didn't like my commands. I don't know exactly what it is expecting to see, and perhaps AppleScript is the wrong tool. I have no problems in Linux, but the Mac is new to me and I don't know what I should be using. Thanks, Ilan
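
    One Finder-friendly route — a sketch with placeholder servers, shares and mount points: a plain shell script saved with a .command extension is double-clickable and opens in Terminal, which gives the one-click behaviour asked for.

        #!/bin/bash
        # ~/Documents/mount_disks.command - make executable once with:
        #   chmod +x ~/Documents/mount_disks.command
        mkdir -p ~/RemoteDisks/mnt1 ~/RemoteDisks/mnt2
        mount -t smbfs //username:pass@server1/share ~/RemoteDisks/mnt1
        mount -t smbfs //username:pass@server2/share ~/RemoteDisks/mnt2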

  • Unable to locate a specific shape in Visio

    - by Gnanam
    I'm trying to recreate a Visio architecture diagram from an existing image in JPG format. In this architecture diagram there is one specific shape/symbol which I haven't been able to locate in the Visio stencils. Can somebody help me locate this shape/symbol, either in a built-in Visio stencil or in any external stencils/symbols? NOTE: I'm using Visio Professional 2013.

  • How to copy files from HDD to HDD with integrity checking

    - by RafaelM
    I am moving data from an almost-dead HDD to an external USB drive using Linux, because for some reason Windows cannot see the data. I want to copy a large amount of data from the HDD to the USB drive with integrity checking. I thought about copying everything over and then checking with MD5Summer, but this would take a really long time because it's a lot of data and this is not a very powerful PC. What tool can I use to do this on Linux?
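
    A sketch of one way with stock tools (mount points are placeholders): rsync copies, then a second pass with -c re-reads both sides and compares checksums, so anything it lists differs.

        # first pass: copy
        rsync -avh --progress /mnt/dying_hdd/ /mnt/usb_drive/backup/
        # second pass: checksum comparison; any file itemized here is a mismatch
        rsync -avhc --dry-run --itemize-changes /mnt/dying_hdd/ /mnt/usb_drive/backup/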

  • Can't include a JavaScript variable in a PHP mysql_query call?

    - by user198895
    I want the PHP mysql_query call to retrieve user values based on the Agency drop-down value, but I can't get this to work. Am I unable to include the JavaScript variable agency.value in PHP?

        <script type="text/javascript">
        var agency = document.getElementById("agency");
        var user = document.getElementById("user");
        agency.onchange = onchange; // change options when agency is changed
        function onchange() {
            <?php include 'dbConnect.php'; ?>
            <?php $q = mysql_query("select id as UserID, CONCAT(LastName, ', ' , FirstName) as UserName from users where Agency = " . ?>agency.value<?php . " order by UserName");?>
            option_html = "<option value=0 selected>- All Users -</option>";
            <?php while ($row1 = mysql_fetch_array($q)) {?>
                if (agency.value == 0 || agency.value == '<?php echo $row1[AgencyID]; ?>') {
                    option_html += "<option value=<?php echo $row1[UserID]; ?>><?php echo $row1[UserName]; ?></option>";
                }
            <?php } ?>
            user.innerHTML = option_html;
        }
        </script>
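
    In short, no: the PHP has already finished running on the server before agency.value exists in the browser, so the option list has to be rebuilt with a separate request. A hedged sketch of the usual pattern (getUsers.php is a hypothetical endpoint that would run the query server-side and print the <option> markup):

        // inside onchange(): ask the server for the users of the chosen agency
        var xhr = new XMLHttpRequest();
        xhr.open("GET", "getUsers.php?agency=" + encodeURIComponent(agency.value), true);
        xhr.onload = function () {
            user.innerHTML = xhr.responseText;  // endpoint returns ready-made <option> tags
        };
        xhr.send();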

  • Which is faster, copying everything at once or one thing at a time?

    - by fredley
    I am transferring a bunch (20+) of large (1GB+) files to my external flash drive over USB 2.0. Is it quicker to sling them all over at once (starting each transfer without waiting for the previous one to finish, so several run concurrently), or to transfer one, wait for it to finish, then transfer the next? The files are coming from a variety of locations, so I can't do one single big transfer. Are there any other advantages to one way or the other that are worth considering?

  • POST Fail via AJAX Request?

    - by Jascha
    I can't for the life of me figure out why this is happening. This is kind of a repost (submitted to Stack Overflow, but maybe it's a server issue?). I am running a JavaScript log-out function called logOut() that makes a jQuery ajax call to a PHP script...

        function logOut(){
            var data = new Object;
            data.log_out = true;
            $.ajax({
                type: 'POST',
                url: 'http://www.mydomain.com/functions.php',
                data: data,
                success: function() {
                    alert('done');
                }
            });
        }

    The PHP function it calls is here:

        if(isset($_POST['log_out'])){
            $query = "INSERT INTO `token_manager` (`ip_address`) VALUES('logOutSuccess')";
            $connection->runQuery($query); // <-- my own database class...
            // omitted code that clears session etc...
            die();
        }

    Now, 18 hours out of the day this works, but for some reason, every once in a while, the POST data will not trigger my query (this will last about an hour or so). I figured out the POST data is not being set by adding this at the end of my script...

        $query = "INSERT INTO `token_manager` (`ip_address`) VALUES('POST FAIL')";
        $connection->runQuery($query);

    So now I know for certain my log-out function is being skipped, because the table fills with "POST FAIL" rows where "logOutSuccess" rows should appear. I know it is being skipped for two reasons: one, the die() at the end of my first function, and two, if it were a success, a "logOutSuccess" would be registered in the table. Any thoughts? One friend says it's a janky hosting company (hostgator.com). I personally like them because they are cheap and I'm a fan of cPanel. But if that's the case??? Thanks in advance. -J
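
    One small diagnostic worth adding while hunting this (standard jQuery API): an error callback, so a dropped POST is at least visible client-side instead of failing silently.

        $.ajax({
            type: 'POST',
            url: 'http://www.mydomain.com/functions.php',
            data: data,
            success: function() { alert('done'); },
            error: function(xhr, textStatus) {
                // fires on timeouts, network failures and non-2xx responses,
                // none of which the success callback alone will ever reveal
                alert('logout POST failed: ' + textStatus);
            }
        });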

  • Tell VLC where to look for plugins.dat file

    - by puk
    I am trying to build vlc from source (installation script included below), but when I try to run vlc I get the following error:

        main libvlc warning: cannot read /home/user/downloads/vlc3/vlc/src/.libs/vlc/plugins/plugins.dat (No such file or directory)

    Why is it even looking in that nonexistent directory? The plugins.dat file is in /usr/lib/vlc/plugins/. I tried

        export VLC_PLUGIN_PATH=/usr/lib/vlc/plugins/

    but it still looks in the nonexistent path. I could create a symbolic link, but that is a terrible way to do it: if in six months I delete my downloads folder, all of a sudden my vlc will break. Here is the script I am running to install:

        ./configure --enable-rpi-omxil --enable-dvbpsi --enable-x264 --enable-xcb --with-x \
            --enable-xvideo --enable-sdl --enable-avcodec --enable-avformat --enable-swscale \
            --enable-mad --enable-a52 --enable-libmpeg2 --enable-dvdnav --enable-faad \
            --enable-vorbis --enable-ogg --enable-theora --enable-mkv --enable-freetype \
            --enable-fribidi --enable-speex --enable-flac --enable-live555 --enable-caca \
            --enable-skins2 --enable-alsa --enable-ncurses --enable-debug --enable-lirc \
            --enable-shout --enable-taglib --enable-vcdx --enable-realrtsp --enable-svg \
            --enable-dvdread --enable-dc1394 --enable-twolame --enable-dirac --enable-aa \
            --enable-jack --enable-bluray --enable-opencv --enable-sftp --enable-pulse \
            --enable-projectm --enable-vsxu --enable-atmo --enable-glspectrum \
            '--with-extra-libs=/usr/local/lib' '--with-extra-includes=/usr/local/include' \
            '--x-libraries=/usr/local/lib' '--x-includes=/usr/local/include' \
            '--prefix=/usr/local' '--mandir=/usr/local/man' '--infodir=/usr/local/info/'

    EDIT: I am using the following version:

        VLC media player 2.2.0-git Weatherwax (revision 2.1.0-git-1168-g5804dd1)

    and the --plugin-path option is no longer supported.
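
    One hedged avenue: plugins.dat is generated by a helper called vlc-cache-gen that is built alongside vlc, so pointing it at the directory the binary is actually searching may satisfy the lookup. The in-tree paths below are assumptions read off the warning message:

        # run from the top of the build tree; vlc-cache-gen ships with vlc
        ./bin/vlc-cache-gen src/.libs/vlc/plugins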

  • Password-checking program for webmin

    - by Hubert Kario
    I'm trying to perform password quality checks in Webmin using pwqcheck (part of passwdqc). Unfortunately, when I set the "External password-checking program" in the "Users and Groups" module settings to the same value that works for samba's check password script:

        /usr/bin/pwqcheck -1

    I get the following error when I try to create a user (named test-user):

        Failed to save user : pwqcheck: Error parsing parameter "test-user": Invalid parameter

    So, how do I configure Webmin to work with pwqcheck?
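
    A hedged workaround sketch: the error suggests Webmin appends the username as a trailing argument, which pwqcheck does not accept, so a small wrapper (the path is hypothetical) can swallow it while the password still arrives on stdin:

        #!/bin/sh
        # /usr/local/bin/pwqcheck-wrap: discard Webmin's extra arguments and
        # run pwqcheck in single-line mode against the password on stdin
        exec /usr/bin/pwqcheck -1

    Then mark the wrapper executable and point "External password-checking program" at it instead of at pwqcheck directly.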

  • How to declutter and organize the cables on and under my desk?

    - by splattne
    Computer cables and external devices are a continuous source of frustration for everybody who likes a clean working environment. The more devices you add to your home office, the more disastrous the situation under the table becomes: cords falling behind the desk, ugly cables running along the sides and underside of the desk, making it almost impossible to clean and remove the dust. This is not my office, but I've seen similar setups. I'm looking for good tips/products which help me keep all the cables somehow under control and organized. Thanks!

  • Migration from XP to Windows 7 using recovered HD

    - by KenK
    Is it possible to migrate the Windows XP Pro 32-bit operating system, applications, etc. from a USB external drive (the recovered hard drive from that failed system) to my new desktop computer running 64-bit Windows 7? In doing so, I would like to set up the new computer with the OS and all general applications on the primary 1TB hard drive, while the secondary 1TB hard drive holds nothing but graphics programs and gaming applications, complete with their related files and add-ons. What will I need to accomplish this task efficiently and economically?

  • Force view text file instead of download in Firefox?

    - by davr
    Oftentimes I'll click on a random link to a .sh or .py or .cpp or ... file in Firefox, and all I want is to view the code. I don't have a Firefox handler set up for every text-file extension under the sun, and I don't want to have to. Is there an easy way to force Firefox to view the file as text instead of trying to save it (or open it in an external app)?
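
    One zero-configuration trick (standard Firefox behaviour; the URL is a placeholder): prefix the link with view-source:, which renders the raw bytes as text regardless of the served Content-Type.

        view-source:http://example.com/scripts/backup.sh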

  • samba sync password with unix password on debian wheezy

    - by Oz123
    I installed samba on my server and I am trying to write a script to spare me the two steps to add a user, e.g.:

        adduser username
        smbpasswd -a username

    My smb.conf states:

        # This boolean parameter controls whether Samba attempts to sync the Unix
        # password with the SMB password when the encrypted SMB password in the
        # passdb is changed.
        unix password sync = yes

    Further reading brought me to the pdbedit man page, which states:

        -a   This option is used to add a user into the database. This command
             needs a user name specified with the -u switch. When adding a new
             user, pdbedit will also ask for the password to be used.

             Example: pdbedit -a -u sorce
                      new password:
                      retype new password

             Note pdbedit does not call the unix password syncronisation script
             if unix password sync has been set. It only updates the data in the
             Samba user database. If you wish to add a user and synchronise the
             password immediately, use smbpasswd's -a option.

    So... now I decided to try adding a user with smbpasswd. First try, with the unix user still not existing:

        root@raspberrypi:/home/pi# smbpasswd -a newuser
        New SMB password:
        Retype new SMB password:
        Failed to add entry for user newuser.

    Second try, after creating the unix user:

        root@raspberrypi:/home/pi# useradd mag
        root@raspberrypi:/home/pi# smbpasswd -a mag
        New SMB password:
        Retype new SMB password:
        Added user mag.

        # switch to user pi, and try to switch to mag
        root@raspberrypi:/home/pi# su pi
        pi@raspberrypi ~ $ su mag
        Password:
        su: Authentication failure

    So, now I am asking myself: how do I make samba passwords sync with unix passwords? Where are samba passwords stored? Can someone help enlighten me?
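
    A minimal sketch of the two-step helper the poster set out to write (the script name and useradd options are assumptions). Per the quoted man page, smbpasswd -a is the variant that both adds the samba account and, with unix password sync = yes, synchronises the password; the samba accounts themselves live in samba's passdb backend (commonly /var/lib/samba/passdb.tdb on Debian).

        #!/bin/sh
        # addsmbuser.sh <username> - create the unix account, then the samba one
        set -e
        useradd -m "$1"
        smbpasswd -a "$1"   # prompts once; also syncs the unix password when
                            # "unix password sync = yes" is set in smb.conf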

  • Slow Citrix connection related to mapped network drives

    - by George
    I have this weird issue with Citrix being slow (and maybe users just being a little dramatic), but I am curious as to why it happens. Let me give you a little background. Citrix is running off of a Windows 2003 server; TS profiles and the file server were located on the same server until recently, when we moved our file server over to a new server with tons of space. We now have Citrix on one server, TS profiles on another, and the file server on a third. We are using logon scripts to map home drives, a shared drive, etc. Up until we made the file server move, the logon process took several seconds, and most users couldn't even notice the logon script being executed as they logged on. Now it takes upwards of several minutes, and users can see the logon script being executed at a slow pace, one line at a time. The only new variable in this whole scenario is the new file server. All the servers are physically located in the same location and on the same subnet. So, I guess my question is: can anyone explain the sudden sluggishness? And are there any tools I can use to troubleshoot the issue? Thanks!

  • OpenVPN server throws an "access denied" error

    - by HackToHell
    OpenVPN refuses to start up and exits with this error, ever since I upgraded Ubuntu from 11.04 to 11.10:

        Dec 14 19:12:38 oogle ovpn-server[32150]: OpenVPN 2.2.0 i686-linux-gnu [SSL] [LZO2] [EPOLL] [PKCS11] [eurephia] [MH] [PF_INET6] [IPv6 payload 20110424-2 (2.2RC2)] built on Jul 4 2011
        Dec 14 19:12:38 oogle ovpn-server[32150]: NOTE: the current --script-security setting may allow this configuration to call user-defined scripts
        Dec 14 19:12:38 oogle ovpn-server[32150]: Note: cannot open openvpn-status.log for WRITE
        Dec 14 19:12:38 oogle ovpn-server[32150]: Note: cannot open ipp.txt for READ/WRITE
        Dec 14 19:12:38 oogle ovpn-server[32150]: Diffie-Hellman initialized with 1024 bit key
        Dec 14 19:12:38 oogle ovpn-server[32150]: Cannot load private key file server.key: error:0200100D:system library:fopen:Permission denied: error:20074002:BIO routines:FILE_CTRL:system lib: error:140B0002:SSL routines:SSL_CTX_use_PrivateKey_file:system lib
        Dec 14 19:12:38 oogle ovpn-server[32150]: Error: private key password verification failed
        Dec 14 19:12:38 oogle ovpn-server[32150]: Exiting

    (the same sequence then repeats at 19:12:46 from pid 32201)
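
    A hedged diagnostic sketch (paths assumed from a stock /etc/openvpn layout): the fopen() failure happens before any passphrase is checked, so file ownership, the relative server.key path in the config, and a post-upgrade AppArmor denial are the usual suspects.

        # who can actually read the key?
        ls -l /etc/openvpn/server.key
        chown root:root /etc/openvpn/server.key && chmod 600 /etc/openvpn/server.key
        # rule out an AppArmor profile left over from the distro upgrade
        grep -i denied /var/log/syslog | grep -i openvpn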

  • cPanel web server redundancy advice?

    - by crgnz
    At present I operate a (reasonably low-volume) web-hosting service with a CentOS 5.3 server running cPanel/WHM. I would like to implement a level of redundancy such that in the event of server failure, I can restore service with a minimum of effort in less than 60 minutes. I also want to set up a secondary DNS that cPanel will replicate with. My current idea is to kill two birds with one stone:

        - My current server is called "www1".
        - Purchase an identical server (HP DL360 G4) with mirrored disks. Call this server "www2".
        - Install CentOS 5.4 (or perhaps I should install 5.3 to be identical with www1).
        - Install cPanel/WHM on this server and fully license it.
        - Set up www1 and www2 cPanel to replicate DNS with each other.
        - Set up a nightly replication script that does the following:
          a) rsyncs the /home directory from www1 to www2
          b) dumps all MySQL databases on www1 and copies them to a temp folder (with root access only) on www2
          c) triggers a script to run on www2 that restores the MySQL dumps

    Thus each night a fully working copy of all the websites and MySQL databases is copied to www2. I do not have enough knowledge of MySQL replication to understand whether it works safely and transparently with cPanel; thus I propose the mysql dump/copy/restore due to not knowing any better! In the event that www1 dies a horrible death, I envisage that I could log in to www2, change the IP addresses to those that www1 had, and presto, the websites are available again. The advantage of this idea is that it is fairly simple and "low tech" and thus does not require an expert sysadmin to set up and monitor (I am NOT an expert sysadmin). The disadvantage is that up to a full day's worth of data changes would be lost; I think this would be acceptable to the sorts of customers I host at the moment. The other disadvantage would be having to pay for a full cPanel license, but I am comfortable with that cost, so for now all I want to discuss are technical considerations. Is this a sound scheme?
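
    A sketch of the nightly script described in steps (a)-(c) above (hostnames, paths and the root ssh trust between the two boxes are assumptions):

        #!/bin/sh
        # nightly_sync.sh, run from cron on www1
        set -e
        rsync -aH --delete /home/ root@www2:/home/
        mysqldump --all-databases --single-transaction > /root/alldb.sql
        scp /root/alldb.sql root@www2:/root/alldb.sql
        ssh root@www2 'mysql < /root/alldb.sql'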

  • Unusual Apache->Tomcat caching issue.

    - by iftrue
    Right now, I have an Apache setup sitting in front of Tomcat to handle caching. This setup has been given to an external service to manage, and since the transition, I've noticed odd behavior. Specifically, when I request a swf file from the web server, I hit the Apache cache (good), but occasionally I'll receive a truncated file. Once I receive this truncated file, the cache will NOT refresh until I manually delete the cache and let the swf pull down from Tomcat again. The external service claims that the configuration is fine, but I don't see any way this could be happening aside from improper configuration. Now, there are two Apache and two Tomcat servers under a load balancer, and occasionally one Apache cache will break while another does not (leading to 50% of all requests getting bad, truncated data). Where should I start looking to debug this issue? What could POSSIBLY be causing this odd behavior?

    Edit: Inspecting the logs, Tomcat throws this:

        java.io.IOException: Bad file number
            at java.io.FileInputStream.readBytes(Native Method)
            at java.io.FileInputStream.read(FileInputStream.java:199)
            at java.io.BufferedInputStream.read1(BufferedInputStream.java:256)
            at java.io.BufferedInputStream.read(BufferedInputStream.java:317)
            at java.io.FilterInputStream.read(FilterInputStream.java:90)
            at org.apache.catalina.servlets.DefaultServlet.copyRange(DefaultServlet.java:1968)
            at org.apache.catalina.servlets.DefaultServlet.copy(DefaultServlet.java:1714)
            at org.apache.catalina.servlets.DefaultServlet.serveResource(DefaultServlet.java:809)
            at org.apache.catalina.servlets.DefaultServlet.doGet(DefaultServlet.java:325)
            at javax.servlet.http.HttpServlet.service(HttpServlet.java:690)
            at javax.servlet.http.HttpServlet.service(HttpServlet.java:803)
            at org.apache.catalina.core.ApplicationFilterChain.internalDoFilter(ApplicationFilterChain.java:290)
            at org.apache.catalina.core.ApplicationFilterChain.doFilter(ApplicationFilterChain.java:206)
            at org.apache.catalina.core.StandardWrapperValve.invoke(StandardWrapperValve.java:233)
            at org.apache.catalina.core.StandardContextValve.invoke(StandardContextValve.java:175)
            at org.apache.catalina.core.StandardHostValve.invoke(StandardHostValve.java:128)
            at org.apache.catalina.valves.ErrorReportValve.invoke(ErrorReportValve.java:102)
            at org.apache.catalina.valves.AccessLogValve.invoke(AccessLogValve.java:568)
            at org.apache.catalina.core.StandardEngineValve.invoke(StandardEngineValve.java:109)
            at org.apache.catalina.ha.session.JvmRouteBinderValve.invoke(JvmRouteBinderValve.java:209)
            at org.apache.catalina.ha.tcp.ReplicationValve.invoke(ReplicationValve.java:347)
            at org.terracotta.modules.tomcat.tomcat_5_5.SessionValve55.invoke(SessionValve55.java:57)
            at org.apache.catalina.connector.CoyoteAdapter.service(CoyoteAdapter.java:286)
            at org.apache.jk.server.JkCoyoteHandler.invoke(JkCoyoteHandler.java:190)
            at org.apache.jk.common.HandlerRequest.invoke(HandlerRequest.java:283)
            at org.apache.jk.common.ChannelSocket.invoke(ChannelSocket.java:767)
            at org.apache.jk.common.ChannelSocket.processConnection(ChannelSocket.java:697)
            at org.apache.jk.common.ChannelSocket$SocketConnection.runIt(ChannelSocket.java:889)
            at org.apache.tomcat.util.threads.ThreadPool$ControlRunnable.run(ThreadPool.java:690)
            at java.lang.Thread.run(Thread.java:619)

    followed by:

        access_log.2009-12-14.txt:1.2.3.4 - - [14/Dec/2009:00:27:32 -0500] "GET /myApp/mySwf.swf HTTP/1.1" 304 -
        access_log.2009-12-14.txt:1.2.3.4 - - [14/Dec/2009:01:27:33 -0500] "GET /myApp/mySwf.swf HTTP/1.1" 304 -
        access_log.2009-12-14.txt:1.2.3.4 - - [14/Dec/2009:01:39:53 -0500] "GET /myApp/mySwf.swf HTTP/1.1" 304 -
        access_log.2009-12-14.txt:1.2.3.4 - - [14/Dec/2009:02:27:38 -0500] "GET /myApp/mySwf.swf HTTP/1.1" 304 -

    So Apache is caching the bad file size. What could possibly be causing this, and (possibly separate) how do I ensure that this exception does not get written to cache?
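
    A quick way to confirm which layer is serving the truncated copy (hostnames are placeholders): fetch the same object through the cache and straight from one Tomcat, then compare byte-for-byte.

        curl -s -o /tmp/cached.swf http://frontend.example.com/myApp/mySwf.swf
        curl -s -o /tmp/origin.swf http://tomcat1.example.com:8080/myApp/mySwf.swf
        cmp /tmp/cached.swf /tmp/origin.swf && echo "cache intact" || echo "truncated copy served"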
