Search Results

Search found 2280 results on 92 pages for 'resource pool'.

Page 27/92 | < Previous Page | 23 24 25 26 27 28 29 30 31 32 33 34  | Next Page >

  • IIS 7.5 doesn't load static html pages

    - by Kizz
    There is an IIS 7.5 freshly installed on a dedicated server. An ASP.NET 4.0 web app was copied to its folder, a new website was created on its own IP on port 80, the IIS_IUSR and IUSR accounts have read/execute rights on the site's folder, and the site is assigned to its own Integrated app pool with .NET 4.0 (I tried a Classic pool with the same results). The problem: when I try to access this web site, the browser only loads content generated by .NET resources such as aspx pages, .axd files, etc. Static images, static js, css and html files are in the page source but IIS doesn't serve them. Dev tools in all browsers complain that all those static resources have been sent by the server with the wrong content type (plain text instead of image, styles, etc). What am I doing wrong?
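
    Wrong content types on every static file usually point at the server's staticContent MIME map rather than at permissions. A hedged guess, since the poster's applicationHost.config isn't shown: confirm the Static Content role service is installed, and check whether the default mappings were overridden. A minimal web.config sketch that restores explicit mappings for one site (extensions and types here are illustrative, not the poster's config):

        <configuration>
          <system.webServer>
            <staticContent>
              <remove fileExtension=".css" />
              <mimeMap fileExtension=".css" mimeType="text/css" />
              <remove fileExtension=".js" />
              <mimeMap fileExtension=".js" mimeType="application/javascript" />
              <remove fileExtension=".png" />
              <mimeMap fileExtension=".png" mimeType="image/png" />
            </staticContent>
          </system.webServer>
        </configuration>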

    Read the article

  • Configuring NAT and static IP on Cisco 877W

    - by David M Williams
    Hi all, I'm having trouble setting up a static IP reservation on a network. What I want to do is assign the IP 192.168.1.105 to MAC address 00:21:5d:2f:58:04 and then port-forward 35394 to it. If it helps, output from show ver says:

        Cisco IOS software, C870 software (C870-ADVSECURITYK9-M), version 12.4(4)T7, release software (fc1)
        ROM: System bootstrap, version 12.3(8r)YI4, release software

    I have done this:

        service dhcp
        ip routing
        ip dhcp excluded-address 192.168.1.1 192.168.1.99
        ip dhcp excluded-address 192.168.1.200 192.168.1.255
        ip dhcp pool ClientDHCP
           network 192.168.1.0 255.255.255.0
           default-router 192.168.1.1
           dns-server 192.168.1.1
           lease 7
        ip dhcp pool NEO
           host 192.168.1.105 255.255.255.0
           hardware-address 0021.5D2F.5804
        ip nat inside source static tcp 192.168.1.105 35394 <PUBLIC_IP> 35394 extendable

    However, the machine is getting assigned IP address 192.168.1.101, not .105... any suggestions? Thanks!
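
    One frequent cause, offered as an assumption since the client OS isn't stated: clients that send DHCP option 61 (a client identifier) will not match a hardware-address binding, so IOS hands them a dynamic lease instead. Windows machines typically send the identifier as 01 followed by the MAC. A sketch of the alternative binding:

        ip dhcp pool NEO
           host 192.168.1.105 255.255.255.0
           ! match on client-identifier (01 + MAC) instead of hardware-address
           client-identifier 0100.215d.2f58.04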

    Read the article

  • IIS/ASP.NET performance incident - Perfmon Current Anonymous Users going through roof but Requests/sec low

    - by Laurence
    Setup: ASP.NET 4.0 website on IIS 6.0 on Windows 2003 64-bit, 8 CPUs, 16GB memory, separate SQL 2005 DB server. We had a serious slowdown today with an otherwise fairly well performing ASP.NET site. For a period of a couple of hours all page requests were taking a very long time to be served - e.g. 30-60s compared to the usual 2s. The w3wp.exe's CPU and memory usage on the webserver was not much higher than normal. The application pool was not in the middle of recycling (and it hadn't recycled for several hours). Bottlenecks in the database were ruled out - no blocks were occurring and query results were being returned quickly. I couldn't make any sense of it and set up the following Perfmon counters:

      - Current Anonymous Users (for the site in question)
      - Get Requests/sec (ditto)
      - Requests/sec for the ASP.NET application running the site

    Get Requests/sec was averaging 100-150. Requests/sec for ASP.NET was averaging 5-10. However, Current Anonymous Users was around 200. And then, as I was watching, Current Anonymous Users began to climb steeply, going up to about 500 within a few minutes. All this time Get Requests/sec and Requests/sec for ASP.NET were, if anything, going down. I did a whole load of things (in a panic!) to try to get the site working, like shutting it down, recycling the app pool, and adding another worker process to the pool. I also extended the expiration time for content (in IIS under HTTP Headers) in an attempt to lower the number of requests for static files (there are a lot of images on the site). The site is now back to normal, and the counters are fairly steady and reading (I added a Current Connections counter):

      - Current Anonymous Users: average 30
      - Get Requests/sec: average 100
      - Requests/sec for ASP.NET: 5
      - Current Connections: average 300

    I have also observed an inverse relationship between Get Requests/sec and Current Anonymous Users. Usually both are fairly steady, but there will be short periods when Get Requests/sec goes down dramatically and Current Anonymous Users goes up in a perfect mirror image. Then they flip back to their usual levels. So, my questions are: Thinking of the original performance issue - if w3wp.exe CPU and memory usage were normal and there was no DB bottleneck, what could explain page requests taking 20 times longer to be served than usual? What other counters should I be looking at if this happens again? What explains the inverse relationship between Get Requests/sec and Current Anonymous Users? What could explain Current Anonymous Users going from 200 to 500 within a few minutes? Many thanks for any insight into this.
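
    Current Anonymous Users counts connections being held open, not requests being processed, so a climb with flat Requests/sec usually means responses are being held open (slow serving or queuing) rather than extra traffic arriving. For the "what should I capture next time" part, a hedged sketch of a counter log - the counter paths are standard Perfmon names, but the Web Service instance name is a placeholder:

        logman create counter SiteSlowdown -si 15 -c ^
          "\Web Service(MySiteInstance)\Current Anonymous Users" ^
          "\Web Service(MySiteInstance)\Get Requests/sec" ^
          "\ASP.NET\Requests Queued" ^
          "\ASP.NET\Request Wait Time" ^
          "\ASP.NET Applications(__Total__)\Requests/Sec"
        logman start SiteSlowdown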

    Read the article

  • How to get old DLLs running on a 64-bit server

    - by quakkels
    Hello all, I'm moving my company's websites from a Windows 2003 x86 server to Windows 2008 x64, which is running IIS 7.5. The problem is that all the DLLs which were running fine on the old server now error out whenever they're called. All I get is a generic error like:

        Server object error 'ASP 0177 : 800401f3'
        Server.CreateObject Failed
        /folder/scriptname.asp, line 24
        800401f3

    The line that errors is:

        '23 lines of comments
        set A0SQL_DATA = server.createobject("olddllname.Data")
        'the rest of the script

    I already have that site running in an app pool that is set to 32-bit mode, but I get the error anyway. Has anyone experienced this? I'm frustrated because all the info I look up says that all I need to do is set the app pool to run in 32-bit mode. I did that and it's still not working. What else could I check?
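
    Error 800401f3 is "invalid class string" - the ProgID wasn't found - which on a freshly built box usually means the legacy COM DLL was never registered, or was registered only in the 64-bit registry view. A sketch, with the path and pool name as placeholders for the poster's actual setup:

        rem register the classic COM DLL with the 32-bit registry view
        C:\Windows\SysWOW64\regsvr32.exe C:\inetpub\legacy\olddllname.dll

        rem confirm the pool really runs 32-bit worker processes
        %windir%\system32\inetsrv\appcmd.exe list apppool "DefaultAppPool" /text:enable32BitAppOnWin64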

    Read the article

  • Drbd Primary/Primary + iSCSI: accessing to different files avoids split brain?

    - by Eddie C.
    I have a question / curiosity about split brain on a DRBD Primary/Primary configuration. Suppose two nodes (hosts), host1 and host2, are configured with DRBD Primary/Primary and two different shares (NFS, CIFS or iSCSI) of a replicated area (say, /drbd):

        /drbd/file1.data
        /drbd/file2.data

    If one pool of clients accessed file1.data only through the host1 share, reading and writing, and another pool accessed file2.data only through the host2 share, would this scenario avoid a split-brain situation in case one node fails, or is that just a conjecture? The final purpose is load balancing between the two nodes in normal conditions, collapsing to one node only in case of failure. Thank you! Eddie

    Read the article

  • Running IIS command on remote server via PowerShell

    - by Paul Hunt
    I am trying to check if an IIS application pool exists on a remote server using a PowerShell script. The command I am running is:

        Test-Path "IIS:\AppPools\DefaultAppPool"

    If I run this script directly on the IIS server in question I get a response of True, so this tells me that I have IIS management correctly configured in PowerShell. However, when I run the following script from a remote server I get a response of False:

        Invoke-Command -ComputerName IISSERVER -ScriptBlock { Test-Path "IIS:\AppPools\DefaultAppPool" }

    I know that PowerShell remoting is correctly configured, because I can run the following command and get a list of files:

        Invoke-Command -ComputerName IISSERVER -ScriptBlock { Get-ChildItem "c:\" }

    So why am I getting the wrong response about the existence of the application pool?
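
    The usual explanation: the IIS: drive only exists once the WebAdministration module is loaded, and an interactive profile may load it for you while a remote session does not - Test-Path against a drive that doesn't exist simply returns False. A sketch of the likely fix:

        # load the IIS provider inside the remote session before touching IIS:\
        Invoke-Command -ComputerName IISSERVER -ScriptBlock {
            Import-Module WebAdministration   # creates the IIS: drive in this session
            Test-Path "IIS:\AppPools\DefaultAppPool"
        }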

    Read the article

  • Tomcat memory usage grows until crash with no GC run

    - by Phil
    I'm administering a server running Tomcat that has been getting a lot of traffic lately. If I monitor memory usage in Task Manager I can see the memory usage growing, and eventually Tomcat crashes around the 1GB mark. Here are the memory-relevant bits I've set in Tomcat Properties (this is a Windows server):

        Initial memory pool: 1024 MB
        Maximum memory pool: 1024 MB
        -XX:MaxPermSize=256M

    The weird thing is, since these problems arose I've deployed Lambda Probe to the Tomcat instance, and the memory usage values I see there are much lower; for example, Task Manager might show 467MB used while the "Total" used in Probe is 212MB. Also, the Maximum Total listed in Probe is 1.29GB, when I would have expected 1GB, the maximum memory set above. If I force the garbage collector to run using Probe, I can keep Tomcat from crashing for a while (indefinitely, AFAIK). So why doesn't the GC run automatically and stop Tomcat from crashing? Thanks.
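
    Part of the discrepancy is expected: Task Manager shows the whole java.exe process (heap plus PermGen plus thread stacks and native allocations), while Probe's "Total" is heap only - and Probe's 1.29GB maximum is roughly the 1GB heap plus the 256MB MaxPermSize. To see whether the collector really isn't running before a crash, a hedged sketch of diagnostic JVM options added to the Tomcat service's Java options (the log path is a placeholder):

        -verbose:gc
        -XX:+PrintGCDetails
        -XX:+PrintGCTimeStamps
        -Xloggc:C:\tomcat\logs\gc.log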

    Read the article

  • Can't get virtual desktops to show up on RDWeb for Server 2012 R2

    - by Scott Chamberlain
    I built a test lab using the Windows Server 2012 R2 Preview. The initial test lab has the following configuration (I have replaced our name with "OurCompanyName" because I would like it if Google searches for our name did not cause people to come to this site; please do the same in any responses). Physical hardware running Windows Server 2012 R2 Preview full GUI, acting as Hyper-V host (joined to the test domain as testVwHost.testVw.OurCompanyName.com), with the following VMs running on it:

      - A VM running 2012 R2 Core acting as domain controller for the forest testVw.OurCompanyName.com (testDC.testVw.OurCompanyName.com)
      - A VM running 2012 R2 Core with nothing running on it, joined to the test domain as testIIS.testVw.OurCompanyName.com
      - A clean install of Windows 7; all that was done to it was that all Windows updates were loaded and sysprep /generalize /oobe /shutdown /mode:vm was run on it
      - A clean install of Windows 8; all that was done to it was that all Windows updates were loaded and sysprep /generalize /oobe /shutdown /mode:vm was run on it

    I then ran "Add Roles and Features" from testVwHost and chose the "Remote Desktop Services Installation", "Standard Deployment", "Virtual machine-based desktop deployment". I chose testIIS for the roles "RD Connection Broker" and "RD Web Access", and testVwHost as "RD Virtualization Host". The install of the roles went fine. I then went to Remote Desktop Services in Server Manager and went to set up Deployment Properties. I set the certificate for all 3 roles to our certificate signed by a CA for *.OurCompanyName.com. I then created a new Virtual Desktop Collection for Windows 7 and Windows 8, and both were created without issue. On the Windows 7 pool I added a RemoteApp to launch WordPad; for Windows 8 I did not add any RemoteApp programs. Everything now appears to be fine from a setup perspective. However, if I go to https://testIIS.testVw.OurCompanyName.com/RDWeb and log in as the user Administrator (or any other user), I don't see the virtual desktops I created, nor the RemoteApp publishing of WordPad. I tried adding a licensing server, using testDC as the server, but that made no difference. What step did I miss in setting this up that is causing this not to show up on RDWeb? If any additional information is needed please let me know. I have tried every possible thing I can think of and I am just groping around in the dark now. (Screenshots in the original post: the virtual machines running on testVwHost, the RD Services configuration screen, the Windows 7 pool, the Windows 8 pool, and the RDWeb page logged in as testVw\Administrator.)

    Read the article

  • Trigger ZFS dedup one-off scan/rededup

    - by Jake Wharton
    I have a ZFS filesystem which has been running for some time, and I recently had the opportunity to upgrade it (finally!) to the latest ZFS version. Our data doesn't scream dedup, but I firmly believe, based on small tests, that we could gain anywhere from 5-10% of our space back for free by utilizing it. I have enabled dedup on the filesystem and new files are slowly being deduplicated, but the majority (95%+) of our data already exists on the filesystem. Short of moving the data off-pool and then recopying it back, is there any way to trigger a dedup scan of the existing data? It doesn't have to be asynchronous or live. (And FYI there isn't enough room on the pool to copy the entire filesystem to another and then just switch the mounts.)
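
    For what it's worth, ZFS has no "rededup" pass: dedup is applied only as blocks are written, so existing data has to be rewritten through the dedup table somehow. A sketch of doing that one dataset at a time (dataset names are placeholders, and this still needs enough free space for the largest single dataset):

        # snapshot, then rewrite the blocks via send/receive on the same pool
        zfs snapshot tank/data@rededup
        zfs send tank/data@rededup | zfs receive tank/data.new
        # after verifying the copy, drop the original and rename
        zfs destroy -r tank/data
        zfs rename tank/data.new tank/data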

    Read the article

  • How do you manage large web farms?

    - by Andrew Katz
    I have a quickly growing web farm running IIS 7 (30+ servers). All servers are identical copies of each other, and all servers are physical. We update the software about once a month, and the current process follows these steps:

      1. Disable the server in the pool on the F5 load balancer.
      2. Disable HTTP keep-alives in IIS so connections drop quickly.
      3. Change the default directory of the website to the new folder containing the new binaries.
      4. Test the server.
      5. Enable HTTP keep-alives.
      6. Enable the server in the F5 pool.
      7. Move to server 2.

    Microsoft used to have Application Center, which was abandoned a while ago. They have made a second attempt with the Web Farm Framework, but this adds as much QA time testing the release package as it saves in the deployment. Has anyone seen a commercial off-the-shelf application that is tailored for managing and deploying to large web farms? Thanks!
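
    Even without a farm product, step 3 above is scriptable per server; a hedged sketch, with the site and path names as placeholders, of repointing the site root with appcmd:

        rem point the site's root virtual directory at the new release folder
        %windir%\system32\inetsrv\appcmd.exe set vdir "Default Web Site/" -physicalPath:"D:\releases\2012-10"

        rem confirm where the site root now points
        %windir%\system32\inetsrv\appcmd.exe list vdir "Default Web Site/" /text:physicalPath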

    Read the article

  • Connection Reset by Peer error with Apache and JBoss 7.1.1

    - by vikingz
    We are seeing errors in some of our QA testing scripts that intermittently throw Connection Reset By Peer errors. The test scripts submit requests via an F5, which forwards requests to Apache (2.2.21) with a mod_jk load balancer, with the following settings for each worker in workers.properties:

        worker.worker1.type=ajp13
        worker.worker1.port=8109
        worker.worker1.lbfactor=1
        worker.worker1.host=skunkhost1.com
        worker.worker1.connection_pool_timeout=30

    Here is what is in the JBoss domain.xml for the AJP pool in JBoss 7.1.1:

        <unbounded-queue-thread-pool name="SKUNKY.APP.AJP">
            <max-threads count="300"/>
            <keepalive-time time="3" unit="minutes"/>
        </unbounded-queue-thread-pool>

    Here is httpd.conf:

        Timeout 300
        KeepAlive On
        KeepAliveTimeout 15
        MaxKeepAliveRequests 100
        TraceEnable Off

    My question is: is it possible that Apache times out and closes the connection while JBoss is still ready and working on the request? What might be causing the Connection Reset By Peer error? What am I missing here? Any help is majorly appreciated!! Sincerely, KK
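
    Resets on an AJP balancer are more often the reverse case: the backend (or a firewall between Apache and JBoss) silently drops an idle pooled connection, and Apache only notices when it tries to reuse it. A hedged sketch of mod_jk worker settings that probe connections before reuse - these are real mod_jk 1.2.27+ properties, but the values are guesses, not tuned for this site:

        # probe the backend before reusing a pooled AJP connection
        worker.worker1.ping_mode=A
        worker.worker1.ping_timeout=10000
        # keep TCP keepalive on pooled sockets so idle drops are detected
        worker.worker1.socket_keepalive=true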

    Read the article

  • Why did my zpool replace never finish and what should I do now?

    - by Josh
    I have a ZFS zpool with two disks in a mirror configuration, da0 and da1. da1 failed, so I replaced it with da2 using:

        zpool replace BearCow da1 da2

    This ran for a few hours, during which zpool status showed that the array was being resilvered. When that finished, zpool status showed that the resilver was completed, but the array was still degraded... I tried a zpool scrub and a zpool clear, but the array still shows as degraded:

        [root@chef] ~# zpool status BearCow
          pool: BearCow
         state: DEGRADED
         scrub: scrub completed after 0h20m with 0 errors on Tue Oct 9 16:13:27 2012
        config:

            NAME           STATE     READ WRITE CKSUM
            BearCow        DEGRADED     0     0     0
              mirror       DEGRADED     0     0     0
                da0        ONLINE       0     0     0
                replacing  DEGRADED     0     0     0
                  da1      OFFLINE      0     0     0
                  da2      ONLINE       0     0     0

        errors: No known data errors

    I can't zpool replace BearCow da1 da2 anymore because da2 is already a member of BearCow... This is FreeBSD (FreeNAS) running ZFS pool version 15. How do I get my array to show as healthy again?
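
    The status output shows the old disk still sitting inside the temporary "replacing" vdev. The usual way to finish a replace stuck in this state - hedged, since it assumes the resilver really did complete cleanly - is to detach the old disk, letting the replacing vdev collapse back into a plain mirror:

        zpool detach BearCow da1
        zpool status BearCow   # should now show a healthy da0/da2 mirror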

    Read the article

  • Cisco 2911 - problem with DHCP

    - by bluszcz
    I am configuring the DHCP daemon on a Cisco 2911. At some point I assigned the address 192.168.50.50 to one box (using a MAC address 'relation'). When I wanted to save the config, I got a warning that the address 192.168.50.50 is already in the pool, which sounds a bit weird - it was the first host I started configuring. I tried the following commands:

        clear ip dhcp server statistics
        clear ip dhcp conflict

    but the first one doesn't output anything (and show statistics shows that there are 2 addresses in the pool), and the second one throws a "there is no conflicted ips" message. How can I force/purge/clear/allow binding 192.168.50.50 to this box?
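
    The "already in pool" complaint usually refers to an existing dynamic lease for that address rather than a conflict. A sketch of clearing the lease so the manual binding can take over (the * form flushes all dynamic bindings, so try the targeted one first):

        clear ip dhcp binding 192.168.50.50
        ! or, if the targeted clear doesn't remove it:
        clear ip dhcp binding *
        show ip dhcp binding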

    Read the article

  • Feeding the kernel's entropy source from other machines and/or increasing its maximum size

    - by David Spillett
    We have had a little trouble with a small box that acts as a VPN endpoint and mail relay for our network, caused by the available entropy for /dev/random being too low (which causes TLS connection attempts by exim to fail). The machine doesn't do anything else, so the normal feed into the entropy pool (interrupt timings from things like disk access) is not enough. As a quick hack I've set up a looping script that reads from /dev/hda at a couple of Mbyte/sec, which keeps it topped up. Other than buying a hardware RNG, is there a clean way of piping in entropy from elsewhere, such as a copy of the data our file server uses for its entropy source? I've spotted several tips for using rng-tools to feed it from /dev/urandom on the same machine, but that "feels dirty". Also, is it possible to increase the maximum pool size? It currently seems to max out at 3585.
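
    On the pool-size question, a hedged note: writing bytes into /dev/random mixes them in but does not credit entropy; crediting requires the RNDADDENTROPY ioctl, which is what tools like rngd use under the hood. The pool's capacity is visible (in bits) under /proc, and on 2.6-era kernels it is fixed rather than tunable:

        # current estimate and the pool's capacity, both in bits
        cat /proc/sys/kernel/random/entropy_avail
        cat /proc/sys/kernel/random/poolsize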

    Read the article

  • What can stop IIS7 from restarting an ASP.NET app when updating a dll in the bin folder?

    - by Carl Björknäs
    We're running ASP.NET 2.0 on MS Server 2008 and IIS 7. During the last few releases the app pool hasn't automatically been restarted after changes in the bin folder. It works like a charm on our test server but not on the live server. The site is browsable, but it runs with the logic of the old version of the updated dll. One of the changes we have made lately is that one of the dlls in the bin folder consists of other dlls that have been merged with ILMerge. Interop.ADODB.dll and Interop.CDO.dll are included in the merged dll. It is the user dll of the merged dll that is updated. What can possibly hinder IIS from restarting the app pool when a file has changed in the bin folder?
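
    A hedged workaround while the cause is tracked down (a bin change normally recycles the ASP.NET app domain rather than the whole pool, so a stale old version usually means the file-change notification didn't fire): force a recycle, or touch web.config, as part of the deployment script. The pool name below is a placeholder:

        rem force a recycle after deploying the merged dll
        %windir%\system32\inetsrv\appcmd.exe recycle apppool /apppool.name:"MyAppPool"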

    Read the article

  • Deploying Sharepoint Features in a Load Balanced Environment

    - by Adam
    Last night we deployed a new set of SharePoint features to a load-balanced environment. For some reason the new features are on 1 box but are not showing in the SharePoint sites on the others. We have 4 servers, and we deployed to the first by pulling it out of rotation, stopping the app pool, deploying our new code and the new features, then firing it back up and adding it to the rotation. For the remaining servers we would only remove the server from rotation, stop the app pool, and deploy the code, NOT the features, then fire it back up and add it to the rotation. Any thoughts on why the features are not showing up on the other servers? Also, any thoughts on forcing the features to show up? Thanks in advance.
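
    A hedged sketch, assuming these are classic feature folders rather than a packaged farm solution: feature activation is recorded farm-wide, but the feature files themselves must exist in the hive on every web front end, so they would need to be copied and installed on each box (the feature folder name is a placeholder):

        stsadm -o installfeature -name MyFeatureFolder -force
        iisreset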

    Read the article

  • Advice for outdoor wifi hardware and topology

    - by Robot
    I haven't set up any wifi networks other than an access point or two at any single location, so I'd like advice on how to set up an outdoor/weatherproof network in an area approximately 150 feet by 200 feet. The interesting thing is there is a pair of pools in the middle of the coverage area. In the original picture: blue is pool, green is coverage area, yellow is the building with wired access. Can anyone advise me on weatherproof APs, antennas, and placement for best coverage of the pool deck? I've looked at the Meraki stuff, but I'm thinking it's overkill.

    Read the article

  • JBoss DataSource - How can ConnectionCount be larger than MaxSize?

    - by Qben
    I am running JBoss 4.0.5GA and I have stumbled upon a strange scenario (in my eyes, anyway). When I decrease the <max-connections> to 1 for a Quartz DataSource and restart the server, everything works fine. When I check the JMX console I can see that ConnectionCount and MaxConnectionInUseCount are both 2. The question is: how can the ConnectionCount be higher than the pool MaxSize (which is 1 in the JMX console, as expected)? As a note, I did this to try to reproduce a production problem I have from time to time, where a Quartz DB connection cannot be retrieved for some odd reason (pool not full).

    Read the article

  • ZFS Configuration advice

    - by rbarrette
    I need some advice on configuring ZFS. Here is what I have in physical disks:

        4x 3 TB
        2x 2 TB
        2x 1 TB

    What is the best configuration for my vdevs and storage pool? I want to maximize space but still maintain redundancy. Should I get 2 more 3TBs and create 2x 3-3TB raidz2 storage pools? Create a 1x 4-3TB raidz2 vdev? Can I put redundancy at the pool level, create individual vdevs for each drive, and then add 2x 1TB+2TB striped vdevs to keep all vdevs the same size? Keep in mind I do need to migrate data from the smaller drives, and I am planning on adding more 3TB drives later on. What do you think?
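
    For scale, a sketch of the single-vdev option as it would look once two more 3 TB disks arrive (device names are placeholders): a raidz2 of six 3 TB disks gives roughly 12 TB usable and survives any two disk failures.

        zpool create tank raidz2 da0 da1 da2 da3 da4 da5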

    Read the article

  • ASP Fails with 500 Error

    - by VinceM
    We have a server set up as an IIS box, with some static pages and a few ASP pages that handle the form submissions. The ASP is really VBScript that sends a CDO message. After moving these pages to the new server the form will not submit; it gives a 500 error, and the following shows in Event Viewer:

        Error: The Template Persistent Cache initialization failed for Application Pool
        'DefaultAppPool' because of the following error: Could not create a Disk Cache
        Sub-directory for the Application Pool. The data may have additional error codes.

    I can't seem to find any info on this anywhere... I was thinking it may have something to do with the fact that we created this server from an image of another server. Thanks for your help in advance... Vince
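
    That event usually means the worker process identity cannot create folders under the ASP template cache directory, which fits permissions being lost when the server was imaged. A hedged sketch - the folder below is the standard ASP disk cache location, but substitute the pool's actual identity (IIS_WPG on IIS 6, IIS_IUSRS on IIS 7+):

        icacls "%systemroot%\system32\inetsrv\ASP Compiled Templates" /grant "IIS_IUSRS:(OI)(CI)F"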

    Read the article

  • scsi and ata entries for same hard drive under /dev/disk/by-id

    - by John Dibling
    I am trying to set up a ZFS pool using 4 bare drives which I have attached to my Ubuntu system via a SATA hot-swap backplane. These are Hitachi SATA drives. When I list the contents of /dev/disk/by-id, I see two entries for each drive:

        root@scorpius:/dev/disk/by-id# ls | grep Hitachi
        ata-Hitachi_HDS5C3030ALA630_MJ1323YNG0ZJ7C
        ata-Hitachi_HDS5C3030ALA630_MJ1323YNG1064C
        ata-Hitachi_HDS5C3030ALA630_MJ1323YNG190AC
        ata-Hitachi_HDS5C3030ALA630_MJ1323YNG1DGPC
        scsi-SATA_Hitachi_HDS5C30_MJ1323YNG0ZJ7C
        scsi-SATA_Hitachi_HDS5C30_MJ1323YNG1064C
        scsi-SATA_Hitachi_HDS5C30_MJ1323YNG190AC
        scsi-SATA_Hitachi_HDS5C30_MJ1323YNG1DGPC

    I know these are the same drives because I wrote down the serial numbers, and all the other drives in this system are either Seagate or WD. The serial number for the first one, for example, is YNG0ZJ7C. Why are there two entries here for each drive? More to the point, when I create my ZFS pool, which one should I use: the scsi- one or the ata- one?
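
    Both names are udev-generated symlinks to the same block device - one produced by the ATA identification rules, the other by the SCSI layer that libata presents SATA disks through - so either works for zpool create. A quick check (the sdX target shown is illustrative):

        # both names -> ../../sdc, i.e. the same kernel device
        ls -l /dev/disk/by-id/ | grep YNG0ZJ7C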

    Read the article

  • What are some good asp.net shared hosting pre-sales questions?

    - by P a u l
    I'm not asking for any host recommendations; those are covered in other questions. What are some good pre-sales questions for ASP.NET shared hosting? They never seem to answer all the questions in their feature lists. So far I have a few:

      - Dedicated application pool?
      - SQL Server Management Studio supported? Is tunneling required?
      - Can I reset my application pool in the control panel?
      - Are PHP and Perl fully supported as well?
      - Are subdomains supported, and will I need a routing script in the root or are they routed automatically?
      - etc.

    Developers have a critical need for good hosting to stage applications. I think this is absolutely developer related and don't want the question on serverfault.

    Read the article

  • MySQL does not start properly

    - by Erik Svenson
    I am using XAMPP on Windows XP. Since I changed the version from 1.7.3 to 1.7.7, MySQL does not start properly. That means the status says it is started, but the safety check says it is not. Because of that I cannot set any password, which is unacceptable. Any idea? Here is mysql_error.log:

        111007  9:42:56 [Note] Plugin 'FEDERATED' is disabled.
        111007  9:42:56 InnoDB: The InnoDB memory heap is disabled
        111007  9:42:56 InnoDB: Mutexes and rw_locks use Windows interlocked functions
        111007  9:42:56 InnoDB: Compressed tables use zlib 1.2.3
        111007  9:42:56 InnoDB: Initializing buffer pool, size = 16.0M
        111007  9:42:56 InnoDB: Completed initialization of buffer pool
        111007  9:42:57 InnoDB: highest supported file format is Barracuda.
        111007  9:42:57 InnoDB: Waiting for the background threads to start
        111007  9:42:58 InnoDB: 1.1.8 started; log sequence number 1595675
        111007  9:42:58 [Note] Event Scheduler: Loaded 0 events
        111007  9:42:58 [Note] mysql\bin\mysqld.exe: ready for connections.
        Version: '5.5.16'  socket: ''  port: 3306  MySQL Community Server (GPL)
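
    Notably, the log shows mysqld coming up cleanly ("ready for connections"), which suggests the server itself is fine and the XAMPP control panel's check is what is failing. A hedged sanity check from a command prompt in the XAMPP mysql\bin folder:

        rem does the server actually answer on 3306?
        mysqladmin -u root -h 127.0.0.1 status
        mysql -u root -e "SELECT VERSION();"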

    Read the article

  • Windows 8 Task Manager RAM Usage Accuracy

    - by user264892
    The new Task Manager has a great UI in Windows 8; however, there are some discrepancies in the data that I cannot account for. Machine: 8 GB of total RAM (this is a physical machine, not a virtual one). The Processes tab shows 45% of memory utilized, but the listed processes do not add up to 3.5 GB of RAM; instead they add up to 0.948 GB. There is no "processes from all users" option. The Performance tab shows:

        In use:          3.6 GB
        Available:       4.4 GB
        Committed:       4.1 / 9.2 GB
        Cached:          3.7 GB
        Paged pool:      376 MB
        Non-paged pool:  135 MB

    My reading of this says I have a lot of "cloaked" processes somewhere eating my RAM. How do I interpret this data, and how do I verify it?
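
    One hedged explanation before assuming cloaked processes: the Processes tab sums private working sets of processes it can see, while "In use" on the Performance tab also counts shared memory, driver pools, and processes in other sessions. A quick PowerShell cross-check (run elevated so other users' processes are visible):

        # total working sets - an over-count vs. the private figure the
        # Processes tab shows, but a useful upper bound for comparison
        Get-Process | Measure-Object WorkingSet64 -Sum |
            ForEach-Object { "{0:N2} GB" -f ($_.Sum / 1GB) }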

    Read the article
