Search Results

Search found 2297 results on 92 pages for 'dmi pool'.


  • ADSL Modem/Router sometimes hands out incorrect IP addresses

    - by Peter Keevill
    My setup is as follows: the main ADSL modem/router (switch) is configured as a DHCP server with the address range 192.168.0.25-60. The office machines are configured with fixed IPs (not in the same address pool, of course) and are hard-wired to this router. A wireless access point (router) is connected to provide Internet access for guests in a separate area; this router is NOT configured as a DHCP server, and wireless authentication is turned off. IP address lease times are set to 4 hours. Sometimes guests can connect to the wireless access point but are not given a valid IP; they get 169.x.x.x addresses. Rebooting their machines does not resolve the problem. The only way to resolve it is to reboot the main ADSL router, which is often frustrating for other users who are successfully connected with a valid IP and default gateway. The problem seems to occur more frequently with Apple/Mac guests, although it also sometimes occurs with Windows machines. I personally use Ubuntu on my laptop and so far have never had any problem connecting and getting a valid IP address in the guest area. One further point which may give a clue: certain guests (always Apple/Mac) get lease times of 90 days. However, this does not exhaust the pool of available addresses, and of course rebooting the router clears the leases until the next time they log in.
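
    A quick way to confirm what lease a client actually received is to inspect the DHCP reply on the client itself; the commands below are illustrative only, and the interface names and lease-file path will vary by machine and release:

        # macOS guest: dump the DHCP packet received on the wireless interface (often en0 or en1)
        ipconfig getpacket en0

        # Ubuntu laptop: the dhclient lease file records the lease-time option
        grep -A5 'lease {' /var/lib/dhcp/dhclient.leases | tail -20

    A 169.254.x.x address means the client never got a DHCP answer at all, whereas a 90-day lease means the router really did hand one out, which points the finger at the DHCP server rather than the wireless access point.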

    Read the article

  • RAID-Z inaccessible after putting one disk offline

    - by varesa
    I have installed FreeNAS on a test server with 3x 1 TB drives, set up in raidz. I tried to offline one of the disks (from the FreeNAS web UI), and the array became degraded, as I think it should. The problem is the array becoming inaccessible after that. I thought a raid like this should be able to run fine with one of the disks missing. At least, very soon after I offlined and pulled out the disk, the iSCSI share disappeared from an ESXi host's datastores. I also ssh'd into the FreeNAS server and tried just executing ls /mnt/raid (/mnt/raid/ being the mount point). The whole terminal froze, not accepting ^C or anything.

        # zpool status -v
          pool: raid
         state: DEGRADED
        status: One or more devices are faulted in response to IO failures.
        action: Make sure the affected devices are connected, then run 'zpool clear'.
           see: http://www.sun.com/msg/ZFS-8000-HC
         scrub: none requested
        config:
            NAME                                            STATE     READ WRITE CKSUM
            raid                                            DEGRADED     1    30     0
              raidz1                                        DEGRADED     4    56     0
                gptid/c8c9e44c-08e1-11e2-9ba6-001b212a83ea  ONLINE       3    60     0
                gptid/c96f32d5-08e1-11e2-9ba6-001b212a83ea  ONLINE       3    63     0
                gptid/ca208205-08e1-11e2-9ba6-001b212a83ea  OFFLINE      0     0     0
        errors: Permanent errors have been detected in the following files:
            /mnt/raid/
            raid/iscsivol:<0x0>
            raid/iscsivol:<0x1>

    Have I misunderstood how raidz works, or is there something else going on? It would not be nice to have the same thing happen on a production system...
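
    For reference, a degraded-but-readable pool is what you would normally expect when a single raidz1 member is offlined; a minimal sequence to reproduce and then recover would look roughly like this (standard ZFS commands, run on the FreeNAS box):

        zpool status raid                                               # healthy: all three members ONLINE
        zpool offline raid gptid/ca208205-08e1-11e2-9ba6-001b212a83ea  # pool should go DEGRADED, not unavailable
        ls /mnt/raid                                                    # data should still be readable
        zpool online raid gptid/ca208205-08e1-11e2-9ba6-001b212a83ea   # after the disk is reconnected
        zpool clear raid                                                # reset error counters
        zpool status -v raid

    The non-zero READ/WRITE counters on the two members still marked ONLINE in the output above suggest the remaining disks (or the controller) were also throwing I/O errors, which would explain the pool faulting rather than simply running degraded.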

    Read the article

  • Getting rid of your server in a small business environment

    - by andygeers
    In a small business environment, is it still necessary to have a central server? Speaking for my own company (a small charity with about 12 employees) we use our server (Windows Server 2003) for the following: Email via Microsoft Exchange Central storage Acting as a print server User authentication / Active Directory There are significant costs associated with running a server like this: Electricity, first for the server itself then for the air conditioning required (this thing pumps out a lot of heat) Noise (of which there is a lot) IT support bills (both Windows Server and Exchange are pretty complicated, and there are many ways they can go wrong) I've found ways to replace many of these functions with cheaper (better?) alternatives: Google Apps / GMail is a clear win for us: we have so many spam related problems it's not even funny, and Outlook is dog slow on our aging computers You can buy networked storage devices with built in print servers, such as the Netgear ReadyNAS™ RND4210 that would allow us to store/share all of our documents, and allow us to access printers over the network The only thing that I can't figure out how to do away with is the authentication side of things - it seems to me that if we got rid of our server, you'd essentially have a bunch of independent PCs that had no shared pool of user accounts / no central administrator. Is that right? Does that matter? Am I missing any other good reasons to keep a central server? Does anybody know of any good, cost-effective ways of achieving the same end but without the expensive central server?

    Read the article

  • Unable to access newly created web site in IIS 7.5

    - by Animesh
    Configuration: 32-bit Windows 7 development machine with IIS 7.5. I created a new web site in IIS, called MVCHOST, to host only MVC sites. The physical path to this website is C:\inetpub\mvcroot, and I created a new v4.0 pool called mvcpool for this purpose. I have given Modify rights to the IIS_WPG, IIS_IUSRS and ASPNET accounts. I created this web site with a host header of "mvchost" and port 80, in the hope of browsing MVC sites as mvchost/mvcapp1 and mvchost/mvcapp2 instead of localhost/mvcapp1 and localhost/mvcapp2. The only binding I set is the default one: http:*:80:mvchost. I have also copied the files iisstart.htm, web.config and welcome.png, and the aspnet_client folder, from wwwroot over to mvcroot. Now when I try to browse this site from IIS Manager, I get the error "This webpage is not available". If I leave out the host header and give some port, say 99, I can access the website at localhost:99. What am I missing here? Why am I unable to access the web site at http://mvchost/?
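
    One thing worth checking, since the post does not say how "mvchost" is supposed to resolve: a host-header binding only matches once the browser can resolve the name to this machine, so on a single development box the name normally has to be added to the hosts file first. A rough sketch, run from an elevated command prompt:

        rem map the host header to the local machine, then verify resolution
        echo 127.0.0.1  mvchost >> %SystemRoot%\System32\drivers\etc\hosts
        ipconfig /flushdns
        ping mvchost
        start http://mvchost/

    If mvchost resolves to 127.0.0.1 and the binding really is http:*:80:mvchost, IIS should route the request to the MVCHOST site.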

    Read the article

  • Can a folder on a NAS be made available as a physical drive in VMWare?

    - by asbjornu
    We are currently in the process of moving from a single web server to two load-balanced web servers and are facing some challenges we don't quite know how to fix. One of these is that the current single server hosts applications that write stuff to disk. The applications running on the server expect that when something is written to disk it will in fact exist later, so it's important that this premise is fulfilled with the dual-server architecture as well. The dual-server setup is a couple of VMware instances with Windows Server 2008 R2 as the guest operating system. Out of the box, these instances do not share any kind of file system, so just moving the applications over would break them, since one instance would write something to the file system that doesn't exist on the other. Thus we need to share a file system between the two virtual servers. Our host has proposed creating a network share on a SAN and mapping this share individually on each virtual machine. This doesn't work too well due to NTFS permissions, etc., because the share needs to be accessed by several independent web applications that won't even be in the same application pool. The only solution that kind of works is to hard-code an "identity" for each web application into its web.config file, but this means passwords in clear text, which doesn't sit well with me. Since the servers are virtual, I'm thinking: wouldn't it be possible to make a NAS area available as a physical disk in the guest operating system somehow? Since VMware has full control of the virtual hardware, you'd think it would be able to "fake" a local hard drive in the virtual machine that in reality is a folder on a NAS, but so far I haven't found anything that states how and if this is possible. So I have to ask the wonderful Server Fault community: can a folder on a NAS be made available as a physical drive (typically D:) in both of the virtual machines?
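
    One common way to get something that looks like a local drive letter out of a NAS, independent of the hypervisor, is to attach an iSCSI LUN from inside each guest. This is a suggestion of mine rather than something from the post, it assumes the NAS can export iSCSI targets, and an NTFS-formatted LUN still cannot be written by both servers at once. A rough sketch using the built-in Windows Server 2008 R2 initiator CLI, with a hypothetical portal address and target name:

        rem register the NAS portal, list its targets, and log in to one
        iscsicli QAddTargetPortal 192.168.10.20
        iscsicli ListTargets
        iscsicli QLoginTarget iqn.2000-01.com.example:nas.webshare
        rem then bring the new disk online and assign D: in Disk Management or diskpart

    For a folder that both web servers must read and write simultaneously, a UNC share (NAS-hosted or DFS) with the application pools running as domain accounts is usually the safer pattern than a shared block device.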

    Read the article

  • How to deploy website in IIS with a host name?

    - by Jayakumar
    I am trying to host my application in IIS. These are the steps that I follow:
    - Publish the code and place it in a path.
    - Open IIS, right-click on "Sites" and select "Add Website".
    - In that dialog, give the site name and select the app pool created for the application.
    - Select the physical path of the published code.
    - Leave the IP and port in the binding section unchanged.
    - Finally, give the host name as fus.km.com.
    When I try to browse the application, the page does not load ("Internet Explorer cannot display the page"). The machine domain is km.com.
    UPDATE: I tried adding the host name to the hosts file and flushing the DNS. The application then asked for user credentials (I use Windows authentication in the application), but it did not log in. On repeated tries it throws the error: "HTTP Error 401.1 - Unauthorized. You do not have permission to view this directory or page using the credentials that you supplied." I tried logging in with a different user but get the same result.
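
    The 401.1 after the hosts-file change is consistent with the NTLM loopback check, which rejects Windows authentication when a site on the local machine is reached through a custom host name; that is my interpretation rather than something stated in the post, and it only applies when browsing from the server itself. The commonly cited workaround (Microsoft KB896861) is either to list the host name explicitly or, on a test box only, to disable the check:

        rem safer, name-specific form
        reg add HKLM\SYSTEM\CurrentControlSet\Control\Lsa\MSV1_0 /v BackConnectionHostNames /t REG_MULTI_SZ /d fus.km.com /f
        rem blunt form, not recommended outside development
        reg add HKLM\SYSTEM\CurrentControlSet\Control\Lsa /v DisableLoopbackCheck /t REG_DWORD /d 1 /f

    A reboot (or at least an IIS restart) is usually needed before the change takes effect, and it is also worth testing the site from another machine in km.com to rule the loopback check in or out.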

    Read the article

  • GlusterFS as elastic file storage?

    - by Christopher Vanderlinden
    Is there any way to run GlusterFS in replicated mode, but with the ability to dynamically scale the volume up and down? Say you have 3 servers all running glusterd; your Gluster volume would have to be set up with replica 3:

        gluster volume create test-volume replica 3 192.168.0.150:/test-volume 192.168.0.151:/test-volume 192.168.0.152:/test-volume

    You would then mount it as, say, /mnt/gfs_test. What happens when I want to add 2 more servers to the storage pool and then also use them in this volume? Is there any easy way to expand the volume AND increase that replica count to 5? My end goal is to run this on EC2 instances, say 3 Apache front ends, with the webroot on the gluster volume mount. My concern is that if I ever need to spin up a server, I would want the server to be not only an additional Apache front end but also another server in the gluster file system, adding to fault tolerance as well as possibly giving a slight boost in read speed. Maybe there are better options that would fit the bill here? Thanks.
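
    For what it's worth, newer GlusterFS releases (3.3 and later, if I remember the version boundary correctly) let you raise the replica count at the moment you add bricks, which is exactly the expand-and-re-replicate step described above. A hedged sketch, with made-up addresses for the two new servers:

        gluster peer probe 192.168.0.153
        gluster peer probe 192.168.0.154
        gluster volume add-brick test-volume replica 5 192.168.0.153:/test-volume 192.168.0.154:/test-volume
        gluster volume heal test-volume full      # push existing data onto the new replicas
        gluster volume info test-volume           # should now report 1 x 5 = 5 bricks

    Going from replica 3 to replica 5 means every byte is written five times, so for the EC2 webroot use case a smaller replica count plus additional clients simply mounting the volume may scale better than making every new front end a brick.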

    Read the article

  • Copy a large LVM volume (14 TB) from one server to another

    - by bruce
    Recently I have had to copy a very large LVM volume (14 TB) from server A to server B. Below are the filesystems of server A and server B.
    Server A:

        [root@AVDVD-Filer ~]# df -h
        Filesystem                         Size  Used  Avail  Use%  Mounted on
        /dev/mapper/vg_avdvdfiler-lv_root  16T   14T   1.5T   91%   /
        tmpfs                              3.0G  0     3.0G   0%    /dev/shm
        /dev/cciss/c0d0p1                  194M  23M   162M   13%   /boot
        /dev/mapper/vg_avdvdfiler-test     2.3T  201M  2.1T   1%    /test
        /dev/sr0                           3.3G  3.3G  0      100%  /mnt

    Server B:

        [root@localhost ~]# df -h
        Filesystem                         Size  Used  Avail  Use%  Mounted on
        /dev/mapper/VolGroup-LogVol00      20G   2.5G  16G    14%   /
        tmpfs                              3.0G  0     3.0G   0%    /dev/shm
        /dev/cciss/c0d0p1                  194M  23M   162M   13%   /boot
        /dev/mapper/VolGroup00-LogVol00    16T   133M  15T    1%    /xiangao/lv1
        /dev/mapper/VolGroup00-LogVol01    4.7T  190M  4.5T   1%    /xiangao/lv2

    I want to copy the LVM volume /dev/mapper/vg_avdvdfiler-lv_root on server A to the LVM volume /dev/mapper/VolGroup00-LogVol00 on server B. Server A and server B are in the same IP segment. The volume on server A holds mostly video files (avi, wmv, mp4, etc.) averaging about 500 MB each. I tried mounting /dev/mapper/vg_avdvdfiler-lv_root from server A on server B over NFS and copying with cp, but that clearly failed because the volume is too big. I don't have a good idea of how to do this and hope to find a good solution here. I'm Chinese and my English is not very good; sorry, and thanks everyone!
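
    Since the data is a huge number of ~500 MB media files rather than one opaque block device, a file-level copy that can be interrupted and resumed is usually the practical route. A hedged sketch, run on server A, with the destination path taken from the df output above and "/path/to/source/" as a placeholder for the actual data directories under the source mount:

        # first pass: -a preserves ownership/permissions, --partial lets an interrupted file resume
        rsync -a --partial --progress /path/to/source/ root@serverB:/xiangao/lv1/
        # re-run the same command after any interruption; only missing or changed files are re-sent

    Because the source volume is mounted at / on server A, you would sync the specific data directories rather than the whole root filesystem. Expect a 14 TB copy over gigabit Ethernet to take on the order of a day and a half even at full wire speed, which is another reason a resumable tool beats a single cp run.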

    Read the article

  • Is there a limit when setting a php_admin_value in php-fpm?

    - by PeeHaa
    I am trying to set a large value in the configuration of a pool in php-fpm, but at some point it just doesn't start anymore.

        php_admin_value[disable_functions] = dl,exec,passthru,shell_exec,system,proc_open,popen,curl_exec,curl_multi_exec,parse_ini_file,show_source,pcntl_exec,include,include_once,require,require_once,posix_mkfifo,posix_getlogin,posix_ttyname,getenv,get_current_use,proc_get_status,get_cfg_va,disk_free_space,disk_total_space,diskfreespace,getcwd,getlastmo,getmygid,getmyinode,getmypid,getmyuid,ini_set,mail,proc_nice,proc_terminate,proc_close,pfsockopen,fsockopen,apache_child_terminate,posix_kill,posix_mkfifo,posix_setpgid,posix_setsid,posix_setuid,fopen,tmpfile,bzopen,gzopen,chgrp,chmod,chown,copy,file_put_contents,lchgrp,lchown,link,mkdi,move_uploaded_file,rename,rmdi,symlink,tempnam,touch,unlink,iptcembed,ftp_get,ftp_nb_get,file_exists,file_get_contents,file,fileatime,filectime,filegroup,fileinode,filemtime,fileowne,fileperms,filesize,filetype,glob,is_di,is_executable,is_file,is_link,is_readable,is_uploaded_file,is_writable,is_writeable,linkinfo,lstat,parse_ini_file,pathinfo,readfile,readlink,realpath,stat,gzfile,create_function

    When trying to restart php-fpm it fails with the following messages:

        Stopping php-fpm:                                          [  OK  ]
        Starting php-fpm: [20-Oct-2013 22:31:52] ERROR: [/etc/php-fpm.d/codepad.conf:235] value is NULL for a ZEND_INI_PARSER_ENTRY
        [20-Oct-2013 22:31:52] ERROR: Unable to include /etc/php-fpm.d/codepad.conf from /etc/php-fpm.conf at line 235
        [20-Oct-2013 22:31:52] ERROR: failed to load configuration file '/etc/php-fpm.conf'
        [20-Oct-2013 22:31:52] ERROR: FPM initialization failed
                                                                   [FAILED]

    When I remove the last disabled function (create_function) it starts again. I also tried with other functions, but that gives the same error, so it's not related to the create_function function. The string is currently just over 1 KB in size, so it looks like I have hit a limit here? Is my assumption correct? Is there a way to overcome this limit? I also tried adding another php_admin_value[disable_functions] line underneath it (hoping it would be appended), but that didn't work (it just used the first one).
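
    My guess, not confirmed by any official documentation I know of, is that the FPM config parser reads each line into a fixed buffer of roughly 1 KiB, which would match the error only appearing once the list grows past that size. A quick way to see how close the value is to 1024 bytes:

        # print the length of everything after the '= ' on the disable_functions line
        awk -F'= ' '/php_admin_value\[disable_functions\]/ {print length($2)}' /etc/php-fpm.d/codepad.conf

    If it really is a line-length limit, one workaround people report is moving the long disable_functions list into the php.ini (or a file in the ini scan directory) used by that FPM master instead of the per-pool config.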

    Read the article

  • How to access a port via OpenVPN only

    - by Andy M
    I've set up an openvpn server alongside an apache website that can only be accessed on port 8100 on the same machine. My /etc/openvpn/server.conf file looks like this:

        port 1194
        proto tcp
        dev tun
        ca ./easy-rsa2/keys/ca.crt
        cert ./easy-rsa2/keys/server.crt
        key ./easy-rsa2/keys/server.key  # This file should be kept secret
        dh ./easy-rsa2/keys/dh1024.pem   # Diffie-Hellman parameter
        server 10.8.0.0 255.255.255.0
        ifconfig-pool-persist ipp.txt
        # make sure clients can still connect to the internet
        push "redirect-gateway def1 bypass-dhcp"
        keepalive 10 120
        comp-lzo
        persist-key
        persist-tun
        status openvpn-status.log
        verb 3

    Now I tried to let only clients connected to the vpn network access the website on apache via port 8100. So I defined a few iptables rules:

        #!/bin/sh
        # My system IP/set ip address of server
        SERVER_IP="192.168.0.2"
        # Flushing all rules
        iptables -F
        iptables -X
        # Setting default filter policy
        iptables -P INPUT DROP
        iptables -P OUTPUT DROP
        iptables -P FORWARD DROP
        # Allow incoming access to port 8100 from OpenVPN 10.8.0.1
        iptables -A INPUT -i tun0 -p tcp --dport 80 -m state --state NEW,ESTABLISHED -j ACCEPT
        iptables -A OUTPUT -o tun0 -p tcp --sport 80 -m state --state ESTABLISHED -j ACCEPT
        # outgoing http
        iptables -A OUTPUT -o tun0 -p tcp --dport 80 -m state --state NEW,ESTABLISHED -j ACCEPT
        iptables -A INPUT -i tun0 -p tcp --sport 80 -m state --state ESTABLISHED -j ACCEPT

    Now when I connect to the server from my client computer and try to access the website on 192.168.0.2:8100, my browser can't open it. Will I have to forward traffic from tun0 to eth0? Or is there anything else I'm missing?
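
    One detail that stands out (my reading, not a confirmed diagnosis): the quoted rules accept traffic on port 80, while Apache is said to listen on 8100, and with a default INPUT policy of DROP any VPN client hitting 192.168.0.2:8100 is simply dropped. A sketch of rules scoped to the VPN subnet and the actual port, with eth0 assumed to be the outside interface:

        # allow VPN clients (10.8.0.0/24 arriving on tun0) to reach Apache on 8100
        iptables -A INPUT  -i tun0 -s 10.8.0.0/24 -p tcp --dport 8100 -m state --state NEW,ESTABLISHED -j ACCEPT
        iptables -A OUTPUT -o tun0 -p tcp --sport 8100 -m state --state ESTABLISHED -j ACCEPT
        # keep the OpenVPN listener itself reachable on the outside interface
        iptables -A INPUT  -i eth0 -p tcp --dport 1194 -m state --state NEW,ESTABLISHED -j ACCEPT
        iptables -A OUTPUT -o eth0 -p tcp --sport 1194 -m state --state ESTABLISHED -j ACCEPT

    No tun0-to-eth0 forwarding is needed just to reach a service running on the VPN server itself; FORWARD rules only come into play if the clients should reach other machines behind it.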

    Read the article

  • ApplicationPoolIdentity IIS 7.5 to SQL Server 2008 R2 not working.

    - by Jack
    I have a small ASP.NET test script that opens a connection to a SQL Server database on another machine in the domain. It isn't working in all cases. Setup: IIS 7.5 under W2K8R2 trying to connect to a remote SQL Server 2008 R2 instance. All machines are in the same domain. Using the ApplicationPoolIdentity for the web site, it fails to connect to the SQL Server with the following: "Login failed for user 'NT AUTHORITY\ANONYMOUS LOGON'. Description: An unhandled exception occurred during the execution of the current web request. Please review the stack trace for more information about the error and where it originated in the code. Exception Details: System.Data.SqlClient.SqlException: Login failed for user 'NT AUTHORITY\ANONYMOUS LOGON'." However, if I switch the Process Model Identity to NETWORK SERVICE or my domain account, the database connection is successful. I've granted the web server's machine account (the DOMAIN\MACHINENAME$ account) access in SQL Server. I am not doing any sort of authentication on the web site; it is just a simple script to open a connection to a database to make sure it works. I have Anonymous Authentication enabled and set to use the application pool identity. How do I make this work? Why is the ApplicationPoolIdentity trying to use ANONYMOUS LOGON? Better yet, how do I make it stop using the anonymous logon?
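
    Background that may help frame this: when a site runs as ApplicationPoolIdentity and talks to a remote SQL Server, it authenticates over the network as the web server's machine account, and ANONYMOUS LOGON usually means that machine-account authentication fell through rather than that a grant is missing. Still, the grant side is quick to double-check; a sketch with hypothetical server, domain and database names:

        sqlcmd -S SQLBOX -E -Q "CREATE LOGIN [MYDOMAIN\WEBSERVER$] FROM WINDOWS"
        sqlcmd -S SQLBOX -E -d MyTestDb -Q "CREATE USER [MYDOMAIN\WEBSERVER$] FOR LOGIN [MYDOMAIN\WEBSERVER$]; EXEC sp_addrolemember 'db_datareader', 'MYDOMAIN\WEBSERVER$'"

    If the grant is already in place, as the post says, the usual remaining suspects are clock skew between the machines or a missing/duplicate SPN on the SQL service account, since either can break Kerberos and leave the connection arriving as an anonymous context.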

    Read the article

  • Getting 404 error on MVC web-site

    - by RB
    I have an IIS 7.5 web site, on Windows Server 2008, with an ASP.NET MVC2 web site deployed to it. The website was built in Visual Studio 2008, targeting .NET 3.5, and IIS 5.1 has been successfully configured to run it as well, for local testing. We've installed the world's simplest MVC application (the one which is created when you create a new MVC2 project in Visual Studio), and we are getting 404s on any page we try to access - e.g. <my_server>/Home/About will generate a 404. I've asked this question on StackOverflow as well, but that was before I knew it was a server issue. I have checked the following things:
    - There are 404 entries in the IIS log, corresponding to each request.
    - The application pool for the web site is set to use the Integrated pipeline.
    - The "customErrors" mode is set to off.
    - .NET 3.5 SP1 is installed.
    - ASP.NET MVC 2 is installed.
    - I've used MVC Diagnostics to confirm all MVC DLLs are being found.
    - ASP.NET is enabled in IIS, which we've demonstrated by running the MVC Diagnostics page.
    - KB 2023146 did highlight that HTTP Redirection was off, so we've turned it on, but no joy.
    Any ideas will be greatly appreciated! Someone did suggest that there might be problems caused by Windows Server 2008 being 64-bit - does anyone know anything about this?
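
    One check the list above doesn't mention, and a frequent cause of MVC 404s on IIS 7.x in my experience (so treat this as a suggestion, not a diagnosis): extensionless URLs like /Home/About only reach the routing module if managed modules run for every request. A minimal web.config fragment to test with:

        <system.webServer>
          <modules runAllManagedModulesForAllRequests="true" />
        </system.webServer>

    On the 64-bit question, MVC2 itself runs fine on x64; problems usually only appear if some referenced native DLL is 32-bit only, in which case enabling 32-bit applications on the application pool is the standard workaround.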

    Read the article

  • Nginx + PHP-FPM Too Many Resources

    - by user3393046
    My server has the following specs - CPU: 6-core Intel(R) Xeon(R) CPU E5-2650 v2 @ 2.60GHz; RAM: 32 GB. I have a problem with nginx + php-fpm: they are using a lot of resources for an unknown reason. Even if I restart nginx and php-fpm, the freshly started processes use a lot of resources. My nginx config is the following:

        user nginx;
        worker_processes auto;
        error_log /var/log/nginx/error.log warn;
        pid /var/run/nginx.pid;
        worker_rlimit_nofile 300000;

        events {
            worker_connections 6000;
            use epoll;
        }

        http {
            include /etc/nginx/mime.types;
            default_type application/octet-stream;
            log_format main '$remote_addr - $remote_user [$time_local] "$request" '
                            '$status $body_bytes_sent "$http_referer" '
                            '"$http_user_agent" "$http_x_forwarded_for"';
            access_log /var/log/nginx/access.log main;
            sendfile on;
            #tcp_nopush on;
            keepalive_timeout 65;
            #gzip on;
            include /etc/nginx/conf.d/*.conf;
        }

    My php-fpm pool config is the following:

        [www]
        user = nginx
        group = nginx
        listen = /var/run/php5-fpm.sock
        listen.owner = nginx
        listen.group = nginx
        listen.allowed_clients = 127.0.0.1
        pm = ondemand
        pm.max_children = 1500;
        pm.process_idle_timeout = 5;
        chdir = /
        security.limit_extensions = .php

    I'm using pm = ondemand since my website has to support many concurrent connections at the same time and I was unable to do that with dynamic/static. I guess this isn't the problem, because as I said earlier, when I restart nginx and php-fpm at the same time they use too many resources without any requests. Here is a screenshot of the CPU usage: http://s28.postimg.org/v54q25zod/Untitled.png
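
    A first step that usually narrows this down (a generic suggestion, not specific to this setup) is to look at which individual processes own the CPU and memory right after a restart, rather than at the aggregate:

        # top consumers by resident memory, then by CPU
        ps -eo pid,rss,pcpu,etime,comm --sort=-rss  | head -n 15
        ps -eo pid,rss,pcpu,etime,comm --sort=-pcpu | head -n 15

    With pm = ondemand there should be almost no php-fpm children at idle, so if hundreds of workers appear immediately after a restart with no traffic, that points at something other than real requests - health checks hammering the socket, a monitoring agent, or an extension/opcode cache misbehaving at startup.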

    Read the article

  • Wireless router setup for 1-1 NAT

    - by Carlos
    What I have:
    - A Linksys WAG160N router with firmware version 2
    - A "pool" of 5 external static IPs provided by my ISP (213.xx.xxx.n)
    - All the required configuration values for the static IPs (subnet mask, gateway and static DNS 1, 2, 3)
    Current WAN configuration: Encapsulation: RFC 2364 PPPoA; Multiplexing: VC; QoS type: UBR; DSL modulation: MultiMode.
    What's connected to the network:
    - 1 x server (that I want to make available to the outside)
    - 5 x desktops with static internal IPs, such as 192.168.0.xx
    - 2 x network printers, also with internal static IPs
    - 2 x laptops
    - 1 x NAS (network attached storage), also on a static IP
    What I want to do: I would like to make the server available from outside the network, for example from your house. The problem is that I'm not really sure how to do this. I have tried following the steps in the Linksys instruction manual but they do not seem to work; once I set it up as described, I lose Internet access and all hell breaks loose. Going into further detail, I would prefer the network to change as little as possible; by this I mean that all the computers stay networked with each other and only the server is accessible from outside the network.
    What I need HELP with: I have read that it is possible to set up 1-1 NAT (I know where it is in the menu but have no clue what it does...) so that I can NAT a single public IP directly to a single private IP (in our case the server). How do I do that? Or is there an alternative?

    Read the article

  • How do I run multiple MVC apps within a subdomain on IIS7?

    - by Matthew Patrick Cashatt
    Hello and thanks for looking. Background: I am currently wrapping up a development contract and the client would like me to push a build of the application to their IIS 7-based server, on which they would like to run multiple MVC apps. One of the issues I have off the bat is that this server is already a subdomain on their larger network. So, if I enter SERVERNAME in my browser, it automatically directs to SERVERNAME.COMPANYNAME.COM. Now, this is just fine if I place my application in the default website/root. In that scenario, clicking a link that requests admin.html directs to SERVERNAME.COMPANYNAME.COM/admin.html as usual. BUT they want me to place the app in a subdomain on this server so that they can also run other apps on the same server. So I assume that I need MYAPP.SERVERNAME.COMPANYNAME.COM, but I have no idea how to do that. Complicating matters is that my app and the future ones they wish to install are all MVC based, which intercepts and rewrites URLs. I assume that takes care of itself if I can just successfully get my app into a subdomain to begin with. What I have tried:
    - Creating a new site on the server in its own app pool
    - Setting the binding for that site to MYAPP.SERVERNAME.COMPANYNAME.COM
    - Setting the binding for that site to MYAPP
    - Setting the binding for that site to MYAPP.SERVERNAME
    - Setting the binding for that site to MYAPP.SERVERNAME.COM
    - Setting the binding for that site to MYAPP.COMPANYNAME.COM
    Nothing is working. Am I missing something simple here? Thanks, Matt
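
    Worth noting as a general point rather than a diagnosis of this particular network: an IIS host-header binding never creates the name, it only selects a site once a request arrives carrying that name, so MYAPP.SERVERNAME.COMPANYNAME.COM also needs a DNS record (or a hosts-file entry for testing) that points at the server. A quick check from any client machine:

        nslookup MYAPP.SERVERNAME.COMPANYNAME.COM
        nslookup SERVERNAME.COMPANYNAME.COM
        rem if the first lookup fails while the second succeeds, ask the client's DNS admin
        rem for an A or CNAME record (possibly a wildcard *.SERVERNAME.COMPANYNAME.COM)
        rem before retrying the bindings listed above

    Once the name resolves to the server, the MYAPP.SERVERNAME.COMPANYNAME.COM binding from the list above should start matching.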

    Read the article

  • Help trying to figure out why IIS7 is crashing / locking up / denying connections

    - by Pure.Krome
    Hi folks, I've got a pretty busy website running on a single web front-end machine, on W2K8 + IIS7. Every now and then - e.g. maybe Monday at 3am, then a few days later at some early-morning time, then nothing for 2 weeks, etc. - the website fails to respond to any client connections; no one can connect to the website. I can remote desktop to the machine with no problems. I restart the app pool (the website is running in integrated mode) - still nothing. I try to get a crash dump of the process (it's around 600 MB, maybe even more) and that fails after about a minute of trying (and I have plenty of disk space). The only way to fix this issue is to manually stop the WWW service and then start it again. The stopping takes a while (a minute?), while starting is nearly instant. I'm at a loss to figure out what part of my code is causing this. At first I thought it might be a stack overflow, because some error might be going to the error page, which in turn errors... rinse, repeat, boom. But I've had a look at the error page and it seems fine. So I'm hoping someone can tell me how to correctly get a proper dump of the IIS process so I can do some more autopsy on it. I would email Tess Ferrandez (the goddess of crash debugging) but I thought I'd try here before I spam her. Does anyone have suggestions for how to start debugging this issue?
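
    Since the process is hung rather than crashing, a hang dump taken with Sysinternals ProcDump while the site is wedged is usually the easiest artifact to collect (my suggestion; the PID and paths below are placeholders):

        rem list worker processes and their app pools to find the stuck w3wp PID
        %windir%\system32\inetsrv\appcmd list wp
        rem write a full memory dump of that PID without killing it
        procdump.exe -accepteula -ma 1234 C:\dumps\w3wp_hang.dmp

    A full dump of a ~600 MB process can take several minutes and needs at least that much free disk, which may be why the earlier attempt appeared to fail after a minute. The resulting .dmp can then be opened in WinDbg or DebugDiag to see what every thread is blocked on.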

    Read the article

  • XP box with 3 NICs on Server 2003 Domain

    - by Hannibal
    I have assigned all three NICs IP addresses that are outside the DHCP pool. Two NICs are connected to one switch, and the third to my second switch. I want to assign one of the two NICs on the first switch to "normal" network activity (e.g. internet access, RDP, etc.). The other two NICs I want to reserve for mirroring ports on their respective switches. While this machine is connected to the domain I can access the internet and Remote Desktop. I have no idea which NIC I am using until I start mirroring a port, at which time, if I happen to be connected through one of the NICs I have dedicated to mirroring, I lose my remote desktop. I am aware that I would have more control over the NICs using Linux. I want to explore Windows solutions before I go that route, because reinstalling another OS would be inconvenient (but not impossible). I would likely use a version of Backtrack (3 or 4, not sure). I would also have to learn how to access the machine remotely, but I've done it before so this would be a minor obstacle. Thank you for your assistance.

    Read the article

  • Generalized strategy for file server virtualization in Xenserver

    - by Jamie
    I'm not shopping as much as I'm looking for some guidance on good idea / bad idea strategies. I'm sure I'm not in the "best practices" budget range. Currently, I have 3 Dell PowerEdges running XenServer in a pool. Each node has an Ubuntu file server, serving about 6 TB. One is the primary, the other two are rsync targets for backup. The 6 TB is stored on their respective local storage disks as an LVM of 3 x 2 TB virtual disks. The file server VM disks are also stored on the node local disks. Each node also runs a smattering of lightweight VMs for web, development, Windows VMs, and stuff like that. Several of those VMs' disks reside on a QNAP NAS to play with live migration. These VMs are often clients of the primary file server (all the mail, web content and user files are stored on the file server, not on the mail, web and Samba VMs). This all works fine, and is a major step up for us. The downside is that the QNAP is a single point of failure, and the only thing the QNAP is doing is serving migratable VM images, not client data. Someday the PowerEdge local arrays will be full, and we will have to reinvent ourselves again. Is it wise to have heavyweight VMs (like the file server, with its 6+ TB disks) on a SAN or NAS? Would it be better to keep the VMs lightweight, have the VM images on a SAN or NAS, and use 2 or more NAS boxes acting as NFS-serving file appliances? A hybrid SAN/NAS that can serve iSCSI for images and NFS for the client VMs? It seems like live migration would be a misnomer if you have to migrate a file server with its entire 6+ TB disk. I recognize there are plenty of ways to skin the cat; we've already skinned it a few ways. What makes sense?

    Read the article

  • Login failed for user 'XXX' on the mirrored sql server

    - by hp17
    Hello, we have 4 web servers that host our ASP.NET (3.5) application. Randomly, we get error messages like: 1) "Login failed for user 'userid'" 2) "A network-related or instance-specific error occurred while establishing a connection to SQL Server. The server was not found or was not accessible. Verify that the instance name is correct and that SQL Server is configured to allow remote connections. (provider: Named Pipes Provider, error: 40 - Could not open a connection to SQL Server)". We are running SQL 2005 and have a principal and a mirror DB (synchronous mirroring). When these exceptions are thrown, I look at the SQL error logs on the mirror DB and see the failed login messages there. The principal DB is running fine and the other web apps are working great. This will happen for maybe 10 minutes, then the app pool recycles and it starts hitting the principal DB again. Is there a configuration I have incorrect? My theory is that our principal DB is forwarding the request to the mirror, but that should never happen. Any help?
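
    One detail worth checking, offered as a hedged guess rather than a confirmed diagnosis: with database mirroring the principal never forwards logins to the mirror; it is the client that fails over when it believes the principal is unreachable, using the failover partner name it learned on a previous connection (or one given explicitly). If the mirror was never provisioned with the same logins, that failover shows up exactly as "Login failed" in the mirror's error log. A typical ADO.NET connection string for a mirrored pair looks roughly like this (server and database names are placeholders):

        Data Source=PRINCIPAL-SRV;Failover Partner=MIRROR-SRV;Initial Catalog=MyAppDb;Integrated Security=True;

    Since logins live in master and are not mirrored, they have to be created on the mirror separately, with the same SID as on the principal, for a client failover to succeed cleanly.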

    Read the article

  • Cisco ASA user authentication options - OpenID, public RSA sig, others?

    - by Ryan
    My organization has a Cisco ASA 5510 which I have made act as a firewall/gateway for one of our offices. Most resources a remote user would come looking for exist inside. I've implemented the usual deal - basic inside networks with outbound NAT, one primary outside interface with some secondary public IPs in the PAT pool for public-facing services, a couple site-to-site IPSec links to other branches, etc. - and I'm working now on VPN. I have the WebVPN (clientless SSL VPN) working and even traversing the site-to-site links. At the moment I'm leaving a legacy OpenVPN AS in place for thick client VPN. What I would like to do is standardize on an authentication method for all VPN then switch to the Cisco's IPSec thick VPN server. I'm trying to figure out what's really possible for authentication for these VPN users (thick client and clientless). My organization uses Google Apps and we already use dotnetopenauth to authenticate users for a couple internal services. I'd like to be able to do the same thing for thin and thick VPN. Alternatively a signature-based solution using RSA public keypairs (ssh-keygen type) would be useful to identify user@hardware. I'm trying to get away from legacy username/password auth especially if it's internal to the Cisco (just another password set to manage and for users to forget). I know I can map against an existing LDAP server but we have LDAP accounts created for only about 10% of the user base (mostly developers for Linux shell access). I guess what I'm looking for is a piece of middleware which appears to the Cisco as an LDAP server but will interface with the user's existing OpenID identity. Nothing I've seen in the Cisco suggests it can do this natively. But RSA public keys would be a runner-up, and much much better than standalone or even LDAP auth. What's really practical here?

    Read the article

  • DD-WRT Access Point as a Router

    - by Dzh
    Following a suggestion on the question I asked on Network Engineering, I am asking it here. This is an extension of my previous question (I think it was deleted), where I claimed that DD-WRT was disabling its DHCP server once connected to the network. I was wrong; it now seems that it is bridging itself with another wireless router connected in parallel. I have two Draytek 2820s and one Netgear WG602v3 with the latest DD-WRT. Let's call one the wired-Draytek; it has wireless disabled. The other one, let's call it the wireless-Draytek, is connected to the wired-Draytek and has wireless with MAC filtering enabled. Once I connect the Netgear to the wired-Draytek, a client that connects to the Netgear is assigned an IP address from the wireless-Draytek. If the client's MAC address is not on the wireless-Draytek's list, the client is unable to obtain an IP address and has no connectivity at all, even with a manually assigned static IP configuration. To illustrate further, this is how the network is set up:

        wired-Draytek ---------- wireless-Draytek
                      \_________ Netgear

    What I wish to have is that the Netgear issues IP addresses from its own IP pool and ignores the MAC filtering rules of the wireless-Draytek. It is puzzling how they bridge themselves (if that is what they are doing) automatically. Thanks. UPDATE: It's not a home network; I gave a somewhat simplified set-up. If there is a better Stack Exchange site to ask this on, please let me know. The Drayteks are running stock firmware; it's only the Netgear that I've flashed, to get more stability. In addition to these routers, I also have three 3Com Baseline 2824 switches, and another Draytek router with a ProSafe FS752TP PoE switch dedicated to VoIP phones. The wired-Draytek has IP 10.0.0.1, with DHCP disabled since an AD DC is issuing IP addresses. The wireless-Draytek has IP 1.1.1.1 and DHCP enabled. The Netgear has its default, 192.168.1.1. As per the suggestion, the specific question is: how do I isolate these two wireless routers?

    Read the article

  • Php5-fpm crashes with many visitors

    - by chillah
    I decided to change my web server from LiteSpeed to Nginx because I had read a lot about how few resources Nginx needs. I'm running a WordPress site with 500 users online. My www.conf in /etc/php5/fpm/pool.d:

        pm.max_children = 200
        pm.start_servers = 20
        pm.min_spare_servers = 20
        pm.max_spare_servers = 60
        ;pm.process_idle_timeout = 10s;
        ;pm.max_requests = 1000

    Since I changed that config to more requests and more children, the site takes longer and longer until it only shows me a blank white page. If I then restart the php5-fpm process, the site runs fine again. CPU usage is at about 5% when php5-fpm has crashed, and about 20 to 90% when it is running. Here's my nginx.conf:

        user www-data;
        worker_processes 2;
        pid /var/run/nginx.pid;

        events {
            worker_connections 3048;
            # multi_accept on;
        }

        http {
            ##
            # Basic Settings
            ##
            sendfile on;
            tcp_nopush on;
            tcp_nodelay on;
            keepalive_timeout 65;
            types_hash_max_size 2048;
            # server_tokens off;

    I have already tried changing the settings, tried all variants, but nothing worked. The machine: dual-core, 4 GB RAM.
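
    With 200 children on a dual-core, 4 GB machine, a common failure mode is that every child ends up waiting on something slow (often the MySQL database behind WordPress) until the pool is exhausted and nginx serves blank or timeout pages; php-fpm's slow log is the quickest way to confirm or rule that out. A hedged addition to the pool config - the directives are standard php-fpm, the paths are placeholders:

        ; log a PHP backtrace for any request stuck longer than 5 seconds
        slowlog = /var/log/php5-fpm.www.slow.log
        request_slowlog_timeout = 5s
        ; kill requests that run away entirely
        request_terminate_timeout = 30s

    If the slow log fills with stacks waiting on database or external HTTP calls, lowering pm.max_children to something the box can actually run in parallel (and fixing the slow dependency) tends to help more than raising it.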

    Read the article

  • How to forward traffic from a specific IP, hardcoded to one IP:port, to another IP:port using iptables

    - by cclark
    Unfortunately we have a client who has hardcoded a device to point at a specific IP and port. We'd like to redirect traffic from their IP to our load balancer, which will send the HTTP POSTs to a pool of servers able to handle that request. I would like existing traffic from all other IPs to be unaffected. I believe iptables is the best way to accomplish this, and I think this command should work:

        /sbin/iptables -t nat -A PREROUTING -s $CUSTIP -j DNAT -p tcp --dport 8080 -d $CURR_SERVER_IP --to-destination $NEW_SERVER_IP:8080

    Unfortunately it isn't working as expected. I'm not sure if I need to add another rule, potentially in the POSTROUTING chain? Below I've substituted the variables with real IPs and tried to replicate the layout in my test environment in incremental steps ($CURR_SERVER_IP = 192.168.2.11, $NEW_SERVER_IP = 192.168.2.12, $CUST_IP = 192.168.0.50).
    Port forward on the same IP - works exactly as expected:

        /sbin/iptables -t nat -A PREROUTING -p tcp -d 192.168.2.11 --dport 16000 -j DNAT --to-destination 192.168.2.11:8080

    IP and port forward to a different machine - connections seem to time out:

        /sbin/iptables -t nat -A PREROUTING -p tcp -d 192.168.2.11 --dport 16000 -j DNAT --to-destination 192.168.2.12:8080

    Restrict the IP and port forward so it only applies to requests from a specific IP - times out as well, probably for the same reason as the previous step:

        /sbin/iptables -t nat -A PREROUTING -p tcp -s 192.168.0.50 -d 192.168.2.11 --dport 16000 -j DNAT --to-destination 192.168.2.12:8080

    Does anyone have any insights or suggestions? Thanks.
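
    The timeout when DNATing to a different machine is the classic asymmetric-return problem: 192.168.2.12 answers the client directly, the client sees replies from an address it never contacted, and the connection hangs. If 192.168.2.11 is not the client's default gateway, the usual fix - sketched below as an assumption about this topology - is to also enable forwarding and source-NAT the redirected flow so the replies come back through 192.168.2.11:

        echo 1 > /proc/sys/net/ipv4/ip_forward
        /sbin/iptables -t nat -A PREROUTING  -p tcp -s 192.168.0.50 -d 192.168.2.11 --dport 16000 -j DNAT --to-destination 192.168.2.12:8080
        /sbin/iptables -A FORWARD -p tcp -d 192.168.2.12 --dport 8080 -j ACCEPT
        /sbin/iptables -t nat -A POSTROUTING -p tcp -d 192.168.2.12 --dport 8080 -j SNAT --to-source 192.168.2.11

    The cost of the SNAT is that the load balancer no longer sees 192.168.0.50 as the source address; keeping the PREROUTING rule restricted with -s 192.168.0.50, as above, is what leaves every other client's traffic untouched.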

    Read the article

  • SQL Server 2008 login problem with ASP.NET application: Failed to open the explicitly specified database

    - by eulerfx
    I am running SQL Server 2008 Express Edition on Windows Server 2008 with an ASP.NET application which must access the server. The ASP.NET application is associated with an application pool that runs under the NetworkService account. This account in turn has a login and a user record on SQL Server in the required database. When I attempt to run the ASP.NET website I get a blank page, and the SQL Server error log shows this informational event record:

        Login failed for user 'NT AUTHORITY\NETWORK SERVICE'. Reason: Failed to open the explicitly specified database. [CLIENT: myLocalMachine]

    The connection string has Trusted_Connection=True; and the required database specified. When I explicitly specify the user name and password I get another login error stating the password is incorrect, even though the same username/password combination works through SQL Server Management Studio. The NETWORK SERVICE account seems to have all the required privileges for the database. Also, I made a test ASP.NET website project which does a simple select from a table in that database, and using the same config file I do not get the error; it seems to work. Could it be something to do with trust levels, since the original ASP.NET web app references various DLLs, including open-source libraries? Also, the application does not seem to be able to write to the event log itself - it throws a security exception - even though everything in the config files, including machine.config, says the app is in full trust.
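
    "Failed to open the explicitly specified database" usually means the login authenticated but could not enter the database named in the connection string - either the name doesn't match exactly, the database is offline, or the login has no user mapping inside it. A quick way to check from the server itself (the instance and database names below are placeholders):

        sqlcmd -S .\SQLEXPRESS -E -Q "SELECT name, state_desc FROM sys.databases"
        sqlcmd -S .\SQLEXPRESS -E -d MyAppDb -Q "SELECT name FROM sys.database_principals WHERE name LIKE '%NETWORK SERVICE%'"

    Given that the simple test site works with the same config file, it is also worth confirming the failing application isn't silently picking up a different connection string, for example one overridden further down web.config or inherited from machine.config.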

    Read the article

  • Server slowdown

    - by Clinton Bosch
    I have a GWT application running on Tomcat on a cloud Linux (Ubuntu) server. Recently I released a new version of the application and suddenly my server response times have gone from 500 ms average to 15 s average. I have run every monitoring tool I know:
    - iostat says my disks are 0.03% utilised
    - mysqltuner.pl says I am OK (output below)
    - top says my processor is 99% idle, with load average: 0.20, 0.31, 0.33
    - memory usage is 50% (-/+ buffers/cache: 3997 3974)
    mysqltuner output:

        [OK] Logged in using credentials from debian maintenance account.
        -------- General Statistics --------------------------------------------------
        [--] Skipped version check for MySQLTuner script
        [OK] Currently running supported MySQL version 5.1.63-0ubuntu0.10.04.1-log
        [OK] Operating on 64-bit architecture
        -------- Storage Engine Statistics -------------------------------------------
        [--] Status: +Archive -BDB -Federated +InnoDB -ISAM -NDBCluster
        [--] Data in MyISAM tables: 370M (Tables: 52)
        [--] Data in InnoDB tables: 697M (Tables: 1749)
        [!!] Total fragmented tables: 1754
        -------- Security Recommendations -------------------------------------------
        [OK] All database users have passwords assigned
        -------- Performance Metrics -------------------------------------------------
        [--] Up for: 19h 25m 41s (1M q [28.122 qps], 1K conn, TX: 2B, RX: 1B)
        [--] Reads / Writes: 98% / 2%
        [--] Total buffers: 1.0G global + 2.7M per thread (500 max threads)
        [OK] Maximum possible memory usage: 2.4G (30% of installed RAM)
        [OK] Slow queries: 0% (1/1M)
        [OK] Highest usage of available connections: 34% (173/500)
        [OK] Key buffer size / total MyISAM indexes: 16.0M/279.0K
        [OK] Key buffer hit rate: 99.9% (50K cached / 40 reads)
        [OK] Query cache efficiency: 61.4% (844K cached / 1M selects)
        [!!] Query cache prunes per day: 553779
        [OK] Sorts requiring temporary tables: 0% (0 temp sorts / 34K sorts)
        [OK] Temporary tables created on disk: 4% (4K on disk / 102K total)
        [OK] Thread cache hit rate: 84% (185 created / 1K connections)
        [!!] Table cache hit rate: 0% (256 open / 27K opened)
        [OK] Open file limit used: 0% (20/2K)
        [OK] Table locks acquired immediately: 100% (692K immediate / 692K locks)
        [OK] InnoDB data size / buffer pool: 697.2M/1.0G
        -------- Recommendations -----------------------------------------------------
        General recommendations:
            Run OPTIMIZE TABLE to defragment tables for better performance
            MySQL started within last 24 hours - recommendations may be inaccurate
            Enable the slow query log to troubleshoot bad queries
            Increase table_cache gradually to avoid file descriptor limits
        Variables to adjust:
            query_cache_size (> 16M)
            table_cache (> 256)
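
    Nothing in that output obviously explains 15-second responses - CPU, disk and MySQL all look mostly idle - so a reasonable next step (a generic suggestion, not specific to this application) is to find out where the time goes per request: on the MySQL side with the slow query log, and on the Tomcat side with a thread dump taken while a slow request is in flight:

        # MySQL 5.1: log anything slower than 1 second; can be enabled without a restart
        mysql -u root -p -e "SET GLOBAL slow_query_log = 1; SET GLOBAL long_query_time = 1;"

        # Tomcat side: a couple of thread dumps ~10s apart while a request hangs
        # (assumes a single Tomcat JVM and that jstack runs as the same user)
        jstack $(pgrep -f catalina) > /tmp/tomcat-threads-$(date +%s).txt

    If the threads turn out to be parked waiting on an external call (an RPC, a DNS lookup, a remote API) rather than on JDBC, that points at the new application version itself rather than at the database or the server.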

    Read the article
