Search Results

Search found 14653 results on 587 pages for 'disk cache'.

  • AppFabric Cache - An existing connection was forcibly closed by the remote host

    - by Wallace Breza
    I'm trying to get AppFabric cache up and running on my local development environment. I have Windows Server AppFabric Beta 2 Refresh installed, and the cache cluster and host configured and started running on Windows 7 64-bit. I'm running my MVC2 website in a local IIS website under a v4.0 app pool in integrated mode.

        HostName : CachePort    Service Name             Service Status  Version Info
        --------------------    ------------             --------------  ------------
        SN-3TQHQL1:22233        AppFabricCachingService  UP              1 [1,1][1,1]

    I have my web.config configured with the following:

        <configSections>
          <section name="dataCacheClient"
                   type="Microsoft.ApplicationServer.Caching.DataCacheClientSection, Microsoft.ApplicationServer.Caching.Core, Version=1.0.0.0, Culture=neutral, PublicKeyToken=31bf3856ad364e35"
                   allowLocation="true" allowDefinition="Everywhere"/>
        </configSections>
        <dataCacheClient>
          <hosts>
            <host name="SN-3TQHQL1" cachePort="22233" />
          </hosts>
        </dataCacheClient>

    I'm getting an error when I attempt to initialize the DataCacheFactory:

        protected CacheService()
        {
            _cacheFactory = new DataCacheFactory();   // <-- Error here
            _defaultCache = _cacheFactory.GetDefaultCache();
        }

    I'm getting the ASP.NET yellow error screen with the following:

        An existing connection was forcibly closed by the remote host
        Description: An unhandled exception occurred during the execution of the
        current web request. Please review the stack trace for more information
        about the error and where it originated in the code.
        Exception Details: System.Net.Sockets.SocketException: An existing
        connection was forcibly closed by the remote host

        Source Error:
        Line 21: protected CacheService()
        Line 22: {
        Line 23:     _cacheFactory = new DataCacheFactory();
        Line 24:     _defaultCache = _cacheFactory.GetDefaultCache();
        Line 25: }

  • Java OutOfMemoryError due to Linux RAM disk cache not freed

    - by Markus Jevring
    The process will run fine all day, then, bam, without warning, it will throw this error, sometimes seemingly in the middle of doing nothing, and at seemingly random times during the day. I checked to see if anything else was running on the machine, like scheduled backups or something, but found nothing. The machine has enough physical memory (2GB, with about 1GB free for a 300-500MB load), and has a sufficient -Xmx specified. According to our sysadmin, the problem is that the RAM the kernel uses as a disk cache (apparently all but 8MB) is not freed when the JVM needs to allocate memory, so the JVM process throws an OutOfMemoryError. This could be because Java asks the kernel if enough memory is available before allocating and finds that it is insufficient, resulting in a crash. I would like to think, however, that Java simply tries to allocate the memory via the kernel, and when the kernel gets such a request, it makes room for the application by throwing out some of the disk cache. Has anyone else run into this issue, and if so, what was the error, and how did you solve it? We are currently using jdk1.6.0_20 on SLES 10 SP2, Linux 2.6.16.60-0.42.9-smp, in VMware ESX.
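
    For anyone comparing notes, a minimal diagnostic sketch (not from the question; names hypothetical) that logs JVM heap headroom can help tell a genuinely full heap apart from a failed request to the OS for more memory:

        // Logs how much heap the JVM has used, reserved, and can still grow into.
        public class HeapHeadroom {
            public static void main(String[] args) {
                Runtime rt = Runtime.getRuntime();
                long max = rt.maxMemory();      // the -Xmx ceiling
                long total = rt.totalMemory();  // heap currently reserved from the OS
                long free = rt.freeMemory();    // unused space within the reserved heap
                long used = total - free;
                System.out.printf("used=%dMB committed=%dMB max=%dMB headroom=%dMB%n",
                        used >> 20, total >> 20, max >> 20, (max - used) >> 20);
            }
        }

    If headroom is still large when the error hits, the failure is more likely in the JVM's request to the kernel for more memory than in the heap itself.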

  • jQuery image preload/cache halting browser

    - by Nathan Loding
    In short, I have a very large photo gallery and I'm trying to cache as many of the thumbnail images as I can when the first page loads. There could be 1000+ thumbnails. First question -- is it stupid to try to preload/cache that many? Second question -- when the preload() function fires, the entire browser stops responding for a minute or two, at which point the callback fires, so the preload is complete. Is there a way to accomplish "smart preloading" that doesn't degrade the user experience/speed when attempting to load this many objects? The $.preLoadImages function is taken from here: http://binarykitten.me.uk/dev/jq-plugins/107-jquery-image-preloader-plus-callbacks.html Here's how I'm implementing it:

        $(document).ready(function() {
            setTimeout("preload()", 5000);
        });

        function preload() {
            var images = ['image1.jpg', ... 'image1000.jpg'];
            $.preLoadImages(images, function() { alert('done'); });
        }

    1000 images is a lot. Am I asking too much?

  • Solution for cleaning an image cache directory on the SD card

    - by synic
    I've got an app that is heavily based on remote images. They are usually displayed alongside some data in a ListView. A lot of these images are new, and a lot of the old ones will never be seen again. I'm currently storing all of these images on the SD card in a custom cache directory (ala evancharlton's magnatune app). I noticed that after about 10 days, the directory totals ~30MB. This is quite a bit more than I expected, and it leads me to believe that I need to come up with a good solution for cleaning out old files... and I just can't think of a great one. Maybe you can help. These are the ideas that I've had:

    1. Delete old files. When the app starts, start a background thread and delete all files older than X days. This seems to pose a problem, though, in that if the user actively uses the app, this could make the device sluggish if there are hundreds of files to delete.

    2. After creating the files on the SD card, call new File("/path/to/file").deleteOnExit(); This will cause all files to be deleted when the VM exits (I don't even know if this method works on Android). This is acceptable because, even though the files need to be cached for the session, they don't need to be cached for the next session. It seems like this will also slow the device down if there are a lot of files to be deleted when the VM exits.

    3. Delete old files, up to a max number of files. Same as #1, but only delete N files at a time. I don't really like this idea, and if the user was very active, it may never be able to catch up and keep the cache directory clean.

    That's about all I've got. Any suggestions would be appreciated.
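
    A minimal sketch of idea #1, assuming a plain background thread and a last-modified cutoff (class and method names hypothetical; deletion is best-effort):

        import java.io.File;

        // Sketch of idea #1: sweep the cache directory on a background thread,
        // deleting files whose last-modified time is older than a cutoff.
        public class CacheSweeper {
            public static void sweepAsync(final File cacheDir, final int maxAgeDays) {
                new Thread(new Runnable() {
                    public void run() {
                        long cutoff = System.currentTimeMillis()
                                - maxAgeDays * 24L * 60L * 60L * 1000L;
                        File[] files = cacheDir.listFiles();
                        if (files == null) return;  // dir missing or unreadable
                        for (File f : files) {
                            if (f.isFile() && f.lastModified() < cutoff) {
                                f.delete();  // best-effort; ignore failures
                            }
                        }
                    }
                }).start();
            }
        }

    This could be kicked off once from the app's startup path with, say, a 7-day cutoff; whether that is sluggish in practice would need measuring on a real device.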

  • Memory mapping of files and system cache behavior in WinXP

    - by Canopus
    Our application is memory intensive and deals with reading a large number of disk files. The total load can be more than 3 GB. There is a custom memory manager that uses memory mapped files to achieve reading of such huge data. The files are mapped into the process memory space only when needed, and with this the process memory is well under control. But what is observed is that, with memory mapping, the system cache keeps on increasing until it occupies the available physical memory. This leads to the slowing down of the entire system. My question is how to prevent the system cache from hogging the physical memory? I attempted to remove the file buffering (by using FILE_FLAG_NO_BUFFERING), but with this the read operations take a considerable amount of time and slow down the application performance. How can I achieve the scalability without sacrificing much on performance? What are the common techniques used in such cases? I don't have a good understanding of the WinXP OS caching behavior. Any good links explaining the same would also be helpful.
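
    The question concerns the Win32 API, but as an illustration of the same map-only-a-window-at-a-time idea, here is a sketch using Java NIO (the 64 MB window size is arbitrary):

        import java.io.RandomAccessFile;
        import java.nio.MappedByteBuffer;
        import java.nio.channels.FileChannel;

        // Illustration of mapping on demand: map only the window of a large
        // file that is currently needed, instead of the whole file at once.
        public class WindowedReader {
            private static final long WINDOW = 64L << 20;  // 64 MB window (arbitrary)

            public static void main(String[] args) throws Exception {
                RandomAccessFile raf = new RandomAccessFile(args[0], "r");
                FileChannel ch = raf.getChannel();
                long size = ch.size();
                for (long off = 0; off < size; off += WINDOW) {
                    long len = Math.min(WINDOW, size - off);
                    MappedByteBuffer buf = ch.map(FileChannel.MapMode.READ_ONLY, off, len);
                    // ... read from buf; the mapping is released once the buffer
                    // is garbage collected
                }
                raf.close();
            }
        }

    The OS-level cache-pressure question is the same either way: pages backed by the mapped file count as cache until the kernel reclaims them.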

  • Why doesn't String's hashCode() cache 0?

    - by polygenelubricants
    I noticed in the Java 6 source code for String that hashCode only caches values other than 0. The difference in performance is exhibited by the following snippet:

        public class Main {
            static void test(String s) {
                long start = System.currentTimeMillis();
                for (int i = 0; i < 10000000; i++) {
                    s.hashCode();
                }
                System.out.format("Took %d ms.%n", System.currentTimeMillis() - start);
            }

            public static void main(String[] args) {
                String z = "Allocator redistricts; strict allocator redistricts strictly.";
                test(z);
                test(z.toUpperCase());
            }
        }

    Running this in ideone.com gives the following output:

        Took 1470 ms.
        Took 58 ms.

    So my questions are:

    1. Why doesn't String's hashCode() cache 0?
    2. What is the probability that a Java string hashes to 0?
    3. What's the best way to avoid the performance penalty of recomputing the hash value every time for strings that hash to 0?
    4. Is this the best-practice way of caching values? (i.e. cache all except one?)

    For your amusement, each line here is a string that hashes to 0:

        pollinating sandboxes
        amusement & hemophilias
        schoolworks = perversive
        electrolysissweeteners.net
        constitutionalunstableness.net
        grinnerslaphappier.org
        BLEACHINGFEMININELY.NET
        WWW.BUMRACEGOERS.ORG
        WWW.RACCOONPRUDENTIALS.NET
        Microcomputers: the unredeemed lollipop...
        Incentively, my dear, I don't tessellate a derangement.
        A person who never yodelled an apology, never preened vocalizing transsexuals.
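
    On question 4, one alternative is to cache every value, including 0, at the cost of an extra flag field. A minimal sketch (class and field names hypothetical):

        // Sketch: cache every hash value, including 0, by tracking "computed"
        // separately instead of treating 0 as the not-yet-computed sentinel.
        public final class HashedKey {
            private final String value;
            private int hash;          // cached hash, valid once computed is true
            private boolean computed;  // distinguishes "hash is 0" from "not cached"

            public HashedKey(String value) {
                this.value = value;
            }

            @Override
            public int hashCode() {
                if (!computed) {
                    hash = value.hashCode();  // computed at most once, even if it is 0
                    computed = true;
                }
                return hash;
            }

            @Override
            public boolean equals(Object o) {
                return o instanceof HashedKey && value.equals(((HashedKey) o).value);
            }
        }

    The trade-off is one extra field per instance, which is presumably why String itself uses the single-sentinel scheme.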

  • How do I cache jQuery selections?

    - by David
    I need to cache about 100 different selections for animating. The following is sample code. Is there a syntax problem in the second sample? If this isn't the way to cache selections, it's certainly the most popular on the interwebs. So, what am I missing? Note: p in the $.path.bezier(p) below is a correctly declared object passed to jQuery.path.bezier (awesome animation library, by the way).

    This works:

        $(document).ready(function() {
            animate1();
            animate2();
        })

        function animate1() {
            $('#image1').animate({ path: new $.path.bezier(p) }, 3000);
            setTimeout("animate1()", 3000);
        }

        function animate2() {
            $('#image2').animate({ path: new $.path.bezier(p) }, 3000);
            setTimeout("animate2()", 3000);
        }

    This doesn't work:

        var $one = $('#image1'); // problem with syntax here??
        var $two = $('#image2');

        $(document).ready(function() {
            animate1();
            animate2();
        })

        function animate1() {
            $one.animate({ path: new $.path.bezier(p) }, 3000);
            setTimeout("animate1()", 3000);
        }

        function animate2() {
            $two.animate({ path: new $.path.bezier(p) }, 3000);
            setTimeout("animate2()", 3000);
        }

  • Hibernate database integrity with multiple java applications

    - by Austen
    We have 2 java web apps, both read/write, and 3 standalone java read/write applications (one loads questions via email, one processes an xml feed, one sends email to subscribers); all use hibernate and share a common code base. The problem we have recently come across is that questions loaded via email sometimes overwrite questions created in one of the web apps. We originally thought this to be a caching issue. We've tried turning off the second level cache, but this doesn't make a difference. We are not explicitly opening and closing sessions, but rather letting hibernate manage them via Util.getSessionFactory().getCurrentSession(), which, thinking about it, may actually be the issue. We'd rather not set up a clustered 2nd level cache at this stage, as this creates another layer of complexity and we're more than happy with the level of performance we get from the app as a whole. So does implementing an open-session-in-view pattern in the web apps and manually managing the sessions in the standalone apps sound like it would fix this? Or any other suggestions/ideas please?

        <property name="hibernate.transaction.factory_class">org.hibernate.transaction.JDBCTransactionFactory</property>
        <property name="hibernate.current_session_context_class">thread</property>
        <property name="hibernate.cache.use_second_level_cache">false</property>
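
    By way of illustration, manually managed sessions in a standalone app would look roughly like this sketch (Util is the question's own helper class; processOneQuestion and the saveOrUpdate call stand in for a hypothetical unit of work):

        import org.hibernate.Session;
        import org.hibernate.Transaction;

        // Sketch: one explicitly scoped session and transaction per unit of work,
        // instead of relying on getCurrentSession() across the whole process.
        public class EmailLoaderJob {
            public void processOneQuestion(Object question) {
                Session session = Util.getSessionFactory().openSession();
                Transaction tx = null;
                try {
                    tx = session.beginTransaction();
                    session.saveOrUpdate(question);  // hypothetical unit of work
                    tx.commit();
                } catch (RuntimeException e) {
                    if (tx != null) tx.rollback();
                    throw e;
                } finally {
                    session.close();
                }
            }
        }

    Tight session scopes also make stale-object overwrites easier to reason about, since each job reads fresh state per transaction.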

  • What should the name of this class be?

    - by Tim Murphy
    Naming classes is sometimes hard. What do you think the name of this class should be? I originally created the class to use as a cache, but I can see it may have other uses. Example code to use the class:

        Dim cache = New NamePendingDictionary(Of String, Sample)
        Dim value = cache("a", Function() New Sample())

    And here is the class that needs a name:

        ''' <summary>
        ''' Enhancement of <see cref="System.Collections.Generic.Dictionary"/>. See the Item property
        ''' for more details.
        ''' </summary>
        ''' <typeparam name="TKey">The type of the keys in the dictionary.</typeparam>
        ''' <typeparam name="TValue">The type of the values in the dictionary.</typeparam>
        Public Class NamePendingDictionary(Of TKey, TValue)
            Inherits Dictionary(Of TKey, TValue)

            Delegate Function DefaultValue() As TValue

            ''' <summary>
            ''' Gets or sets the value associated with the specified key. If the specified key does not exist
            ''' then <paramref name="createDefaultValue"/> is invoked and added to the dictionary. The created
            ''' value is then returned.
            ''' </summary>
            ''' <param name="key">The key of the value to get.</param>
            ''' <param name="createDefaultValue">
            ''' The delegate to invoke if <paramref name="key"/> does not exist in the dictionary.
            ''' </param>
            ''' <exception cref="T:System.ArgumentNullException"><paramref name="key" /> is null.</exception>
            Default Public Overloads ReadOnly Property Item(ByVal key As TKey, ByVal createDefaultValue As DefaultValue) As TValue
                Get
                    Dim value As TValue
                    If createDefaultValue Is Nothing Then
                        Throw New ArgumentNullException("createDefaultValue")
                    End If
                    If Not Me.TryGetValue(key, value) Then
                        value = createDefaultValue.Invoke()
                        Me.Add(key, value)
                    End If
                    Return value
                End Get
            End Property
        End Class
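
    For comparison, a rough Java analogue of the same look-up-or-create behavior (all names hypothetical; Java has no default indexed properties, so it is an ordinary method):

        import java.util.HashMap;
        import java.util.Map;

        // Rough Java analogue: look the key up, and if it's absent, create the
        // value from a factory, store it, and return it.
        public class GetOrCreateMap<K, V> {
            public interface Factory<V> { V create(); }

            private final Map<K, V> map = new HashMap<K, V>();

            public V get(K key, Factory<V> factory) {
                if (factory == null) throw new IllegalArgumentException("factory");
                V value = map.get(key);
                if (value == null && !map.containsKey(key)) {
                    value = factory.create();
                    map.put(key, value);
                }
                return value;
            }
        }

    Names along the lines of "get-or-create" or "self-populating" describe what the VB class does, which may help with the naming question.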

  • Making IE "forget" information entered in a form when using the back button

    - by typoknig
    I have a page with a form where many of the fields are populated from variables passed in the URL. Those fields are disabled (NON-EDITABLE) and are only there for the user to view. The remaining fields require user input and are NOT disabled (EDITABLE). When the form is submitted, a confirmation page comes up. It may be the case that the user needs to submit several of these forms where the NON-EDITABLE information is identical from form to form, so being able to go back to the form page from the confirmation page would save a lot of time. The way I want this to work is that when a user presses the back button, all the NON-EDITABLE fields are populated, but the EDITABLE fields are blank. This is what Firefox does, but IE8 does not "forget" what has been entered in the EDITABLE fields. To disable the cache, the following appears at the beginning of my page AND at the end of my page:

        <head>
            <meta http-equiv="Pragma" content="no-cache"/>
            <meta http-equiv="Cache-Control" content="no-store"/>
        </head>

    What more must I do to make IE forget what was entered in the EDITABLE fields when the back button is pressed? All of my pages are generated with PHP, if that matters. EDIT: It appears to me that this is a problem of IE caching my page even though I have told it not to. Are my meta tags correct? Do I need to do something else to prevent IE from caching my page?

  • Cannot reactivate RAID-5 volume: The size of the plex member is invalid

    - by Ian Boyd
    We had a 3-drive Windows Server 2008 R2 RAID-5 fail (operating in redundancy mode):

        WDC 1 TB
        WDC 1 TB
        WDC 1 TB

    We removed the failed hard drive and put a WDC 1 TB drive (that we had standing by) into the machine. When launched, Disk Manager asked permission to "initialize" the disk as either:

        Master Boot Record (MBR)
        GUID Partition Table (GPT)

    We initialized the disk as GPT, converted it to dynamic, and tried to use the Repair Volume command - except it was greyed out (which is a terrifying thing on a failed production server hosting 3 virtual servers). So I tried the diskpart command line tool. First we look for our RAID-5 volume that is in Failed Rd mode:

        DISKPART> list volume

          Volume ###  Ltr  Label        Fs    Type       Size     Status     Info
          ----------  ---  -----------  ----  ---------  -------  ---------  ------
          Volume 0     E   VMs (Raid5)  NTFS  RAID-5     1863 GB  Failed Rd
          Volume 1     D                      DVD-ROM       0 B   No Media
          Volume 2         System Rese  NTFS  Partition   100 MB  Healthy    System
          Volume 3     C                NTFS  Partition  1862 GB  Healthy    Boot

    There, Volume 0. Make that our active context:

        DISKPART> select volume 0

        Volume 0 is the selected volume.

    Now we need to find the disk we will be repairing the volume with:

        DISKPART> list disk

          Disk ###  Status   Size     Free     Dyn  Gpt
          --------  -------  -------  -------  ---  ---
          Disk 0    Online    931 GB      0 B   *
          Disk 1    Online    931 GB   931 GB   *
          Disk 2    Online   1863 GB      0 B
          Disk 3    Online    931 GB      0 B   *
          Disk M0   Missing     0 B      0 B   *

    The disk with 931 GB free, Disk 1. Now we just need to repair the volume:

        DISKPART> repair disk=1

        Virtual Disk Service error:
        The size of the plex member is invalid.

  • Upgrading a PERC H310 to a PERC H710 Mini RAID controller on a Dell R620

    - by Gregg Leventhal
    I have an ESXi 5.0 free-license host using an internal datastore (RAID 5, 5 disks) that was configured with a Dell PERC H310 RAID controller. The disk performance was very poor, so I upgraded to the PERC H710 Mini. The IT tech installed the controller and powered the host back on. I had to rescan the controller, and the datastore appeared. Should any settings be changed in the RAID BIOS, or should the default settings be sufficient? Is there anything to be aware of when performing this type of upgrade in order to achieve the maximum performance?

  • Ubuntu hibernate resume fails: "PM: Resume from disk failed"

    - by Neil
    I just upgraded to Ubuntu 10.04 from 9.10, and it's now hiding the hibernate and suspend options. How do I get them back? (The way you fix that is to make sure that your swap partition is in /etc/fstab, that swap is enabled, and that it is big enough; look at /proc/swaps to see if anything is listed.) Now I'm getting this error when I boot after suspending:

        init: ureadahead-other main process (705) terminated with status 4

    Does anyone know how to fix this? I'm using Ubuntu with kernel 2.6.32-22-generic.

  • Why does my ReadyNAS 4200 have a constant beep after a disk failure?

    - by swagner88
    Went to check on a high-pitched constant beep coming from the server room and discovered that all the LED lights on the disks were black except one, which was constant green. After a reboot, nothing changed. The console indicated that that particular drive has failed. Pulled the drive out and BAM, everything is fine. Except the high-pitched beep remains. The plan is currently to replace the drive with a new same-size drive we happen to have purchased for expansion. My question is, what will this do? Will the NAS accept the new drive as a replacement for the failed one and decide to shut up? The manual for the device is almost too straightforward and says that I can just swap drives out as I need, but I find that difficult to believe: http://www.readynas.com/download/documentation/HM/RN4200_HW_24May10.pdf

  • Partition disk for dual boot Windows XP and Windows 7 (shared files)

    - by soupagain
    I wish to dual boot Windows XP and Windows 7, and I wish to have my documents available from both OSes. Do I create a single partition and it just works? 2 partitions, one for each OS? 3 partitions, one for each OS and one for "my documents"? [EDIT] I used 3 partitions, one for each OS and one for docs, which is mapped as a separate drive. Works perfectly. The only thing you need to do is hide the other OS partition, i.e. for Windows XP, hide the Windows 7 partition. You do this from the partition manager.

  • Windows 2003 R2 x86 Mini dump fails to write to disk

    - by Randy K
    I have 3 blade servers that are blue-screening with a 0xC2 error, as far as we can tell randomly. When it started happening I found that the servers weren't set to provide a dump, because they each have 16GB RAM and a 16GB swap file divided over 4 partitions in 4GB files. I set them to provide a small dump file (64K minidump), but the dump files aren't being written. On start up the server event log reports both Event ID 45, "The system could not successfully load the crash dump driver." and Event ID 49, "Configuring the Page file for crash dump failed. Make sure there is a page file on the boot partition and that it is large enough to contain all physical memory." My understanding is that the small dump shouldn't need a swap file large enough for all physical memory, but the error seems to point to this not being the case. The issue, of course, is that the max swap file size is 4GB, so this seems to be impossible. Can anyone point me where to go from here?

  • DBAN not working because disk has bad sectors?

    - by canadiancreed
    Attempting to wipe the drive of a laptop that I have before it's sold, and I normally use DBAN to do so. However, this time it starts and then finishes instantly with the following message:

        DBAN finished with non-fatal errors.
        This is usually caused by disks with bad sectors.

    I have tried multiple flags, such as noverify, to force it to skip this check (the drive doesn't show bad sectors in the OS scan in Windows), but the error always comes back. This is the only time that I've seen this message, as every other of the few drives I've used this software on usually takes 3-5 hours to do its job.

  • Windows XP SP3 on Macbook Has Limited Disk Space

    - by Mikey.B
    Hi Guys, I set up Windows XP SP3 on a 40 GB partition using Boot Camp (partition formatted as NTFS). For some reason, the hard drive properties only show ~3 GB of space available. The funny thing is, though, I've hardly installed anything on the system... and if I open Windows Explorer, select all directories, and check properties, it appears that I'm only using 11 GB of space. What gives? I thought there might be hidden files/folders, so I enabled the option to show them, but still nada. Has anyone come across this before? Any suggestions for how to proceed here?

  • SQL Server & Disk Space

    - by Dismissile
    I created a database on a local SQL Server that we use for development. I created the log and data files on a second hard drive (E:\MSSQL\DATA). I am using this database to do some speed tests, so I created a lot of data (7 million rows). I started running some pretty intensive queries, and to get some test data I ran an update statement that updated all 7 million rows. Now it has taken up all of the space on my C:\, which I don't understand since I put the data files on the E:\. Are there some files on the C:\ that would be growing because I'm running queries on this other database, and if so, how do I stop it? I am done with this database, but I need to get my C:\ back in order. The database's file group was PRIMARY; is this relevant?

  • Looking for cheap Wireless router, with USB for attached USB disk drive

    - by geoffc
    I have an 802.11b router at home (DLink DI-614+, B rev) and it is working perfectly well for me. I want to replace it, though, since it is out of updates and now several years old, and, heck, I want a new toy to configure! I was trying to decide what to get. I could care less about 802.11g or n support, since B is fast enough, but every device in my house is now B/G, so G would be fine for me. N buys me little to nothing. (Small enough house that range is a non-issue.) The feature I realized I want is a USB port for sharing a USB hard drive. I would like to have a central device I could store files on. I do not want to waste the power of an always-running PC to do this, so a router seems like the place to go. I would love it if it could support Vonage VOIP as well; then I could ditch a power brick from a second device (I have the small DLink Vonage VOIP box). All the current examples of this router (with USB drive, yet to find one with VOIP too!) are in the $100+ range and are N, which is silly when a B/G is in the $30 range around here.

  • Solaris: detect hotswap SATA disk insert

    - by growse
    What's the method used on Solaris to get the system to rescan for new disks that have been hot-plugged on a SATA controller? I've got an HP X1600 NAS which had 9 drives configured in a ZFS pool. I've added 3 disks, but the format command still only shows the original 9. When I plugged them in, I saw this:

        cpqary3: [ID 823470 kern.notice] NOTICE: Smart Array P212 Controller
        cpqary3: [ID 823470 kern.notice] Hot-plug drive inserted, Port=1I Box=1 Bay=12
        cpqary3: [ID 479030 kern.notice] Configured Drive ? ....... NO
        cpqary3: [ID 100000 kern.notice]
        cpqary3: [ID 823470 kern.notice] NOTICE: Smart Array P212 Controller
        cpqary3: [ID 823470 kern.notice] Hot-plug drive inserted, Port=1I Box=1 Bay=11
        cpqary3: [ID 479030 kern.notice] Configured Drive ? ....... NO
        cpqary3: [ID 100000 kern.notice]
        cpqary3: [ID 823470 kern.notice] NOTICE: Smart Array P212 Controller
        cpqary3: [ID 823470 kern.notice] Hot-plug drive inserted, Port=1I Box=1 Bay=10
        cpqary3: [ID 479030 kern.notice] Configured Drive ? ....... NO

    But I can't figure out how to get the format command to see them, so I know they've been detected by the system.

  • On-Server Disk Storage vs. SAN Storage

    - by Justin
    Hello, I am looking at buying three servers and trying to figure out which storage solution makes the most sense in terms of performance and cost. The total budget is around $10,000.

    OPTION 1: Dell servers with RAID 10 (4 drives) each, 7200RPM SAS 500GB, for a total capacity of 1TB per server. Each server is approx. $3,000. Total storage across all three servers is then 3TB.

    OPTION 2: The same Dell servers with a cheap single drive, no RAID, for $2,000 each, and a centralized SAN solution. The biggest problem is that I haven't been able to find a SAN solution at a reasonable price. Dell entry-level storage servers are like $15,000. I am thinking just iSCSI, not fiber (too expensive). What do you guys recommend?

  • Cannot ping host: stale ARP cache?

    - by gkchicago
    I am having a strange issue with a Debian box (Lenny/Linux 2.6.26-2-amd64) that has been driving me nuts. On some machines within my network I can ping the host in question just fine; other times I have to manually hard-code the ARP ethernet address for the IP in order to establish connectivity. I've finally worked it down to somehow involving ARP. I just found how to fix it in a way that made it work, but I'm looking for help explaining this issue, and I also don't trust my fix to be permanent. My thought process has been the following, but I just can't make any sense out of it: Could it be the card? (Intel 82555 rev 4) Could it be because there are two network cards? (Default route is eth0) Could it be because of the network aliases? Lenny? AMD x86_64? Argh. Thank you for any insight you might have.

        // Ping doesn't go thru
        [gordon@ubuntu ~]$ ping 192.168.135.101
        PING 192.168.135.101 (192.168.135.101) 56(84) bytes of data.
        --- 192.168.135.101 ping statistics ---
        4 packets transmitted, 0 received, 100% packet loss, time 3014ms

        // Here's the ARP table. Sometimes the .151 address is good; sometimes it
        // also matches the gateway's MAC, like .101 is doing right here.
        [gordon@ubuntu ~]$ cat /proc/net/arp
        IP address       HW type  Flags  HW address         Mask  Device
        192.168.135.15   0x1      0x2    00:0B:DB:2B:24:89  *     eth0
        192.168.135.151  0x1      0x2    00:0B:6A:3A:30:A6  *     eth0
        192.168.135.1    0x1      0x2    00:1A:A2:2D:2A:04  *     eth0
        192.168.135.101  0x1      0x2    00:1A:A2:2D:2A:04  *     eth0

        // Drop the bad ARP table listing and set it manually based on /sbin/ifconfig
        [gordon@ubuntu ~]$ sudo arp -d 192.168.135.101
        [gordon@ubuntu ~]$ sudo arp -s 192.168.135.101 00:0B:6A:3A:30:A6

        // Ping starts going thru..?!?
        [gordon@ubuntu ~]$ ping 192.168.135.101
        PING 192.168.135.101 (192.168.135.101) 56(84) bytes of data.
        64 bytes from 192.168.135.101: icmp_seq=1 ttl=64 time=15.8 ms
        64 bytes from 192.168.135.101: icmp_seq=2 ttl=64 time=15.9 ms
        64 bytes from 192.168.135.101: icmp_seq=3 ttl=64 time=16.0 ms
        64 bytes from 192.168.135.101: icmp_seq=4 ttl=64 time=15.9 ms
        --- 192.168.135.101 ping statistics ---
        4 packets transmitted, 4 received, 0% packet loss, time 3012ms
        rtt min/avg/max/mdev = 15.836/15.943/16.064/0.121 ms

    The following is my network config on this box:

        gordon@db01:~$ /sbin/ifconfig
        eth0      Link encap:Ethernet  HWaddr 00:0b:6a:3a:30:a6
                  inet addr:192.168.135.151  Bcast:192.168.135.255  Mask:255.255.255.0
                  inet6 addr: fe80::20b:6aff:fe3a:30a6/64 Scope:Link
                  UP BROADCAST RUNNING MULTICAST  MTU:1500  Metric:1
                  RX packets:15476725 errors:0 dropped:0 overruns:0 frame:0
                  TX packets:10030036 errors:0 dropped:0 overruns:0 carrier:0
                  collisions:0 txqueuelen:1000
                  RX bytes:18565307359 (17.2 GiB)  TX bytes:3412098075 (3.1 GiB)

        eth0:0    Link encap:Ethernet  HWaddr 00:0b:6a:3a:30:a6
                  inet addr:192.168.135.150  Bcast:192.168.135.255  Mask:255.255.255.0
                  UP BROADCAST RUNNING MULTICAST  MTU:1500  Metric:1

        eth0:1    Link encap:Ethernet  HWaddr 00:0b:6a:3a:30:a6
                  inet addr:192.168.135.101  Bcast:192.168.135.255  Mask:255.255.255.0
                  UP BROADCAST RUNNING MULTICAST  MTU:1500  Metric:1

        eth1      Link encap:Ethernet  HWaddr 00:e0:81:2a:6e:d0
                  inet addr:10.10.62.1  Bcast:10.10.62.255  Mask:255.255.255.0
                  inet6 addr: fe80::2e0:81ff:fe2a:6ed0/64 Scope:Link
                  UP BROADCAST RUNNING MULTICAST  MTU:1500  Metric:1
                  RX packets:10233315 errors:0 dropped:0 overruns:0 frame:0
                  TX packets:19400286 errors:0 dropped:0 overruns:0 carrier:0
                  collisions:0 txqueuelen:1000
                  RX bytes:1112500658 (1.0 GiB)  TX bytes:27952809020 (26.0 GiB)
                  Interrupt:24

        lo        Link encap:Local Loopback
                  inet addr:127.0.0.1  Mask:255.0.0.0
                  inet6 addr: ::1/128 Scope:Host
                  UP LOOPBACK RUNNING  MTU:16436  Metric:1
                  RX packets:387 errors:0 dropped:0 overruns:0 frame:0
                  TX packets:387 errors:0 dropped:0 overruns:0 carrier:0
                  collisions:0 txqueuelen:0
                  RX bytes:41314 (40.3 KiB)  TX bytes:41314 (40.3 KiB)

        gordon@db01:~$ sudo mii-tool -v eth0
        eth0: negotiated 100baseTx-FD, link ok
          product info: Intel 82555 rev 4
          basic mode:   autonegotiation enabled
          basic status: autonegotiation complete, link ok
          capabilities: 100baseTx-FD 100baseTx-HD 10baseT-FD 10baseT-HD
          advertising:  100baseTx-FD 100baseTx-HD 10baseT-FD 10baseT-HD flow-control
          link partner: 100baseTx-FD 100baseTx-HD 10baseT-FD 10baseT-HD

        gordon@db01:~$ sudo route
        Kernel IP routing table
        Destination     Gateway         Genmask         Flags Metric Ref    Use Iface
        localnet        *               255.255.255.0   U     0      0        0 eth0
        10.10.62.0      *               255.255.255.0   U     0      0        0 eth1
        default         192.168.135.1   0.0.0.0         UG    0      0        0 eth0

  • Does MS Forefront TMG cache authentication?

    - by SnOrfus
    I'm testing a client machine that makes requests to a BizTalk server using a Forefront machine as a web proxy. Upon first test I put an invalid name/password into the receive port and received the correct error message (407). Then I set the correct name/password and everything worked correctly. From there, I kept the correct information in the receive port but put an invalid name/password into the send adapter, yet the process completed successfully (it should have failed with 407). I've ensured that both the receive and send ports are not bypassing the proxy for local addresses. So the only thing that seems to make sense is that TMG is caching the authentication request coming from the machine I'm working on. Is this thinking correct, and if so, does anyone know how to disable it in TMG?
