Search Results

  • NSClient++ NRPE issues

    - by Kyle
    I have had NSClient++ working with Nagios for a while now. Recently I started testing Nagwin, out of pure curiosity, just to see how it would work. I stopped checking a test server with my main Nagios config, set NSClient++ to NRPE mode, and pointed Nagwin at it. It worked great for a few hours, then suddenly I started seeing "UNKNOWN: No Handler for that command." I figured it had to be Nagwin's fault since it's so new, so I unloaded NRPEListener.dll to return the server to being monitored by check_NT. However, check_NT no longer works either: my main Nagios server returns timeout errors and is unable to connect at all. My Nagwin server can still connect; the server just doesn't know how to handle the check_NRPE commands, even though it did, with no changes, a few hours earlier.

    I have been working on this for a day now and am fairly certain NSClient++ is to blame. My Nagwin box has stayed connected to a similar server throughout the night without any issues, and my main Nagios config is not having any problems at all. I have also been able to switch another server between being monitored by Nagios and Nagwin without any problems, simply by loading and unloading NRPE.dll. I have tried uninstalling NSClient++ and reinstalling with a fresh configuration, but I still receive the errors. As of now the firewall on the server is off, NSClient++ is set to accept connections from any server, there is no password, SSL is turned off, and the NRPE module is loaded. I should also add that in test mode, NSClient++ is unable to handle check_NRPE commands either.

    Any ideas would be appreciated. I am not an advanced Nagios user, but I do know my way around it and can easily tear it down and set it up again.
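    For reference, a minimal NRPE-mode configuration that should answer check_nrpe looks something like this sketch (assuming the 0.3.x branch and its NSC.ini; section names differ in later releases):

        ; NSC.ini -- just enough for NRPE mode
        [modules]
        NRPEListener.dll

        [Settings]
        ; empty allowed_hosts = accept from anywhere, as described above
        allowed_hosts=
        password=

        [NRPE]
        port=5666
        use_ssl=0
        allow_arguments=1

    And a one-line test from the Nagios/Nagwin side (plugin path may vary; -n matches use_ssl=0, and a working listener answers with its version string):

        /usr/lib/nagios/plugins/check_nrpe -H testserver -n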

  • Is it possible to play multiple audio streams from one "jukebox" to multiple Airport Express devices?

    - by Alex Reynolds
    I have set up a Mac mini as a jukebox that streams audio to an AirPort Express in another room of the house, using the AirPlay/AirTunes feature in iTunes. I control this with the iOS Remote app, and it works great. At present, the Mac mini's copy of iTunes gets taken over by the Remote app while streaming.

    If I set up a second AirPort Express in room B, is there a way to set it up (as well as the jukebox) so that it receives and plays its own unique music stream ("stream B"), separate from what's going on at the Mac mini, or in room A, which is playing stream A?

    To accomplish this, I would be happy to buy a copy of Rogue Amoeba's Airfoil if it will allow sending multiple, separate audio streams from one computer to multiple wireless bridges while using the Remote app (or a Rogue Amoeba equivalent for iOS). However, it is unclear to me from their site documentation whether that is possible.

    I'd prefer to give the points to an answer that solves this problem. If you don't know whether it can be done, or do not think it can be done, please allow others to answer. Thanks for your advice.

  • Outlook errors and blank address book?

    - by Chasester
    I have a user with a fresh install of Windows 7 64-bit, with Office 2010 installed. Everything worked great until we installed Adobe Pro 9 and WordPerfect 12 (we are not using Exchange for Outlook). Then Outlook popped an error saying it could not start. I set it to run in compatibility mode and Outlook started; I then took compatibility mode off and Outlook continued to start without difficulty. However, her address book went blank (the contacts are all still there). In the properties of the Contacts folder, "include this folder in the address book" is checked (and grayed out), and the Outlook Address Book is listed when I look at the account properties.

    I tried creating a new profile, to no avail. I tried creating a new profile and a new PST file, to no avail. I tried uninstalling Office, removing the folders inside Roaming and Local, and reinstalling Office; I got the same "could not start Outlook" error, did the compatibility-mode trick to get it to start, but the address book continues to be empty.

    Has anyone run across this before? I'm thinking there must be some other preferences folder besides those in Roaming and Local, since her signature remained when I reinstalled Office.
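    One place a file-system wipe misses: Outlook 2010 keeps mail profiles in the registry, not under Roaming or Local, which would also explain why the signature (stored separately under %APPDATA%) survived. A fuller scrub might look like this sketch (standard locations, but back up first):

        rem Outlook 2010 profiles live in the registry
        reg export "HKCU\Software\Microsoft\Windows NT\CurrentVersion\Windows Messaging Subsystem\Profiles" "%TEMP%\outlook-profiles.reg"
        reg delete "HKCU\Software\Microsoft\Windows NT\CurrentVersion\Windows Messaging Subsystem\Profiles" /f

        rem Per-user data that survives an Office reinstall:
        rem   %APPDATA%\Microsoft\Outlook      (address book / autocomplete files)
        rem   %APPDATA%\Microsoft\Signatures   (signatures)
        rem   %LOCALAPPDATA%\Microsoft\Outlook (OST/PST cache)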

  • Can I change the "From" name and address without going into Mail.app's preferences?

    - by Arjan van Bentem
    Can I somehow add an editable "From" field to Mail.app, a bit like Virtual Identity for Thunderbird, just as I can change the "Reply To" address on the fly?

    In Mail.app, one can set up multiple email addresses for a single account by entering a comma-separated list like [email protected], [email protected]. Then, when composing a message, Mail offers a dropdown to select a "From" address, and when replying† it automatically selects the right address if it can find a match. Nice, but I'd like to be able to change the "From" on the fly, without going into Account Information.

    Also, previous versions of Mail let one specify multiple full names for a single account: Email Address: Arjan <[email protected]>, Arjan on SU <[email protected]>. Nowadays, Mail only uses the Full Name setting, and even ignores names in the Email Address field when no value for Full Name is set at all. Hence, it would be great if I could change the full name on the fly as well. I have had no luck finding a plugin for Mail yet.

    † When using sub-addressing, anything sent to [email protected] is simply delivered to [email protected]. I'd like to reply with the same full address, rather than just [email protected] when Mail cannot find a match, without going into Account Information first. I sometimes also want to compose a new message with a new sub-addressing address.

  • Ubuntu NBR karmic boot freezes at fsck from util-linux-ng 2.16

    - by BlueBill
    Hi all,

    I have a netbook (an eMachines e250, equivalent to an Acer Aspire One) with Ubuntu NBR 9.10 installed on it. Every other cold boot freezes at the following error message:

        fsck from util-linux-ng 2.16

    There is no disk activity, no activity whatsoever. I have left the machine sitting for over an hour and nothing. It takes a couple of hard resets before it will boot properly. Once it boots, everything works great (wireless, suspend/resume, etc.)!

    I have spent the last couple of weeks researching the problem, and the only thing that seems to work is setting nolapic in the boot string in GRUB; with that, it boots every time. Unfortunately, nolapic disables the second core and causes problems with suspend/resume.

    At first I thought it was an fsck problem with the first partition on the hard disk, as it is a hidden NTFS partition containing the Windows XP recovery information, so in /etc/fstab I set that partition to be ignored by fsck. This didn't seem to do anything. I have these partitions:

        /dev/sda1 - an NTFS recovery partition
        /dev/sda2 - /boot
        /dev/sda3 - swap
        /dev/sda5 - /
        /dev/sda6 - /home

    I am running kernel version 2.6.31-19-generic and have all the patches (as indicated by Update Manager). I also have no splash screen, so I can see the boot progress. I have only been using NBR since January; I have been using Ubuntu on my desktop since last June (2009-06).

    What logs should I be looking at? Is there a log for failed boots?

    Thanks, Troy
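    On the logging question, these are the usual places to look on a karmic system, plus a check of when fsck is next due (a sketch; device names taken from the partition list above):

        # Kernel and boot logs that persist across reboots
        less /var/log/kern.log      # kernel messages, including earlier boots
        less /var/log/dmesg         # ring buffer from the most recent boot
        less /var/log/boot          # console output, if bootlogd is enabled

        # When is fsck next scheduled for the root filesystem?
        sudo tune2fs -l /dev/sda5 | grep -i -e 'mount count' -e 'next check'

        # /etc/fstab: a 0 in the sixth (pass) column keeps fsck away from sda1
        # /dev/sda1   /recovery   ntfs   defaults,noauto   0   0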

  • Auto load balancing a two-node Hyper-V 2008 R2 Enterprise cluster?

    - by Kristofer O
    My setup is a two-node cluster with 72GB of RAM in each node and a ~10TB MD3000i iSCSI SAN. I have about 30 VMs running and keep about 15 on each server, and I live-migrate VMs to the other server if I need to run updates or whatever. Either server is capable of running all the VMs if needed, but CPU usage gets pretty high. Here are my issues:

    I know Hyper-V has a limit of a single live migration at a time, but why doesn't it queue them up and move them one at a time? If I multi-select, I don't get the option to live-migrate them one at a time, and if I'm already migrating one it gives me an error that it's currently migrating a VM. Is there a button I missed that will tell a node to migrate all of its VMs elsewhere?

    Another question: does anyone know the best way to load-balance VMs based on CPU and/or network utilization? I have some VMs that don't do much, and some that thrash the CPU or network. I'd like to balance the load across both servers if at all possible. And is there any way to automate it?

    Last question: if I overcommit my cluster, is there a way to tell the cluster which VMs I want to keep running and to save-state other VMs based on available system resources? Say one node blue-screens and the other node begins starting the VMs up: I want the unimportant ones to shut down or save-state so the important ones can stay running or come back online.

    Thanks just for reading all that. Any help would be great.
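    On the first question, the Failover Clustering PowerShell module can fake the missing queueing, because Move-ClusterVirtualMachineRole blocks until each live migration completes. A rough sketch (node names are placeholders):

        Import-Module FailoverClusters

        $source = "HV01"; $target = "HV02"   # hypothetical node names

        # Live-migrate every VM group off $source, one at a time; the cmdlet
        # doesn't return until each migration finishes, which provides the
        # queueing the Failover Cluster Manager UI lacks.
        Get-ClusterGroup | Where-Object {
            $_.OwnerNode.Name -eq $source -and
            ($_ | Get-ClusterResource | Where-Object { $_.ResourceType.Name -eq "Virtual Machine" })
        } | ForEach-Object {
            Move-ClusterVirtualMachineRole -Name $_.Name -Node $target
        }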

  • Virtual Machine loses network connectivity on Hyper-V Cluster

    - by Chris W
    We're running a number of VMs on a six-node failover cluster of blades using Hyper-V. We have an intermittent issue, every few days at different times rather than at any fixed frequency, of VMs losing network connectivity. Console access to the VM suggests all is fine, and the underlying blade has normal connectivity. To resolve the problem we either have to restart the VM or, more usually, we do a live migration to another blade, which brings connectivity back, and we then migrate it back to the original blade.

    I've had three instances of this happen with a specific VM running on a particular blade; however, it has also happened once with a different VM running on a different blade. All VMs and blades have the same basic setup and are running Windows 2008 R2. Any ideas where I should be looking to diagnose the possible causes of this problem, given that the event logs provide no help?

    Edit: I've checked that each blade is running the latest NIC drivers, and all seem to be fine. Something that is confusing me: a failover or restart of the VM resolves the issue. While I need to work out the underlying issue that is causing the NICs to hang, I'm also concerned that the VM didn't fail over to another node, which would have solved the outage for me. Is there a way to configure the cluster so that it can tell that the VM guest has lost connectivity and fail it over? As things stand, the cluster assumes the VM is running happily, as I presume Hyper-V reports that everything is fine even though there is a problem.
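    On the failover question: the 2008 R2 cluster only health-checks the VM resource itself (is the worker process running), not what the guest can reach, so nothing will ever trip on its own. As a stopgap, a scheduled watchdog along these lines (name and address invented) forces the same migration you are doing by hand:

        Import-Module FailoverClusters

        $guestIp = "10.0.0.50"; $group = "VM-Web01"   # hypothetical
        if (-not (Test-Connection $guestIp -Count 5 -Quiet)) {
            # Moving the group bounces the VM's network path the same way
            # the manual migrations described above did.
            Move-ClusterGroup -Name $group
        }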

  • IIS URL Rewrite HTTP to HTTPS with Port

    - by Andy Arismendi
    My website has two bindings: 1000 and 1443 (ports 80/443 are in use by another website on the same IIS instance). Port 1000 is HTTP; port 1443 is HTTPS. What I want to do is redirect any incoming request for http://server:1000 to https://server:1443. I'm playing around with IIS 7 URL Rewrite Module 2.0, but I'm banging my head against the wall. Any insight is appreciated!

    BTW, the rewrite configuration below works great on a site that has an HTTP binding on port 80 and an HTTPS binding on port 443, but it doesn't work with my ports:

        <?xml version="1.0" encoding="utf-8"?>
        <configuration>
          <system.webServer>
            <rewrite>
              <rules>
                <rule name="HTTP to HTTPS redirect" stopProcessing="true">
                  <match url="(.*)" />
                  <conditions trackAllCaptures="true">
                    <add input="{HTTPS}" pattern="off" />
                  </conditions>
                  <action type="Redirect" redirectType="Found" url="https://{HTTP_HOST}/{R:1}" />
                </rule>
              </rules>
            </rewrite>
          </system.webServer>
        </configuration>
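    One plausible explanation, offered as an assumption rather than a confirmed answer: {HTTP_HOST} includes the non-standard port, so the rule above redirects browsers to https://server:1000/... rather than to port 1443. Spelling the HTTPS port out in the action sidesteps that:

        <action type="Redirect" redirectType="Found"
                url="https://{SERVER_NAME}:1443/{R:1}" />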

  • SharePoint DB issue after DB move to SQL 08

    - by JohnyV
    Recently we moved our SharePoint 2007 database from a SQL Server 2000 server to an x64 SQL Server 2008 server. All seems well; however, there is a problem where SQL Server stops running and the service has to be restarted. The errors mention insufficient internal memory, etc. I have tried starting the database engine with -g384 (which I understand was the default in SQL 2000; I believe 256 is the default for 2008), but this has not rectified the issue.

    I was advised that the issue might be rectified by upgrading to WSS 3.0 SP2; however, when I tried to install it I got another error after the SP2 update and had to revert to a VM snapshot. The error after the service pack was:

        Server error: http://go.microsoft.com/fwlink?LinkID=96177

    So I guess I have a few questions: how can I fix the first issue, and the second? I have checked out many forums and posts and have tried a few things, and still get no joy. Any assistance would be great.

    UPDATE: I have fixed the "Server error: http://go.microsoft.com/fwlink?LinkID=96177" issue: I needed to run the WSS SP2 as well as the Office Servers SP2, then the configuration wizard, and then the MOSS configuration worked. The errors I am getting in SQL are:

        SQL Server was unable to run a new system task, either because there is insufficient memory or the number of configured sessions exceeds the maximum allowed in the server. Verify that the server has adequate memory. Use sp_configure with option 'user connections' to check the maximum number of user connections allowed. Use sys.dm_exec_sessions to check the current number of sessions, including user processes.

        A read operation on a large object failed while sending data to the client. A common cause for this is if the application is running in READ UNCOMMITTED isolation level. The connection will be terminated.

        There is insufficient system memory in resource pool 'internal' to run this query.

    These errors are raised by a user that was created as a service account for SharePoint.
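    The first error message suggests its own diagnostics; as a starting point they look like this (run in SSMS as sysadmin; nothing here is specific to SharePoint):

        -- Settings the error points at
        EXEC sp_configure 'show advanced options', 1; RECONFIGURE;
        EXEC sp_configure 'user connections';         -- 0 = dynamic (the default)
        EXEC sp_configure 'max server memory (MB)';   -- worth capping on a shared box

        -- Current sessions, including user processes
        SELECT COUNT(*) AS sessions,
               SUM(CASE WHEN is_user_process = 1 THEN 1 ELSE 0 END) AS user_sessions
        FROM sys.dm_exec_sessions;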

  • Apache2: enabling the Includes module causes svn access to quit working

    - by Matthew Talbert
    I have dav_svn installed to provide HTTP access to my svn repos. The URL is directly under root, e.g. mywebsite.com/svn/individual-repo. This setup has been working great for some time. Now I need SSI (server-side includes) for a project, so I enabled the module with a2enmod include. Since then, TortoiseSVN can't access the repo; it always returns a 301 permanent redirect. Some playing with it reveals I can access the repo in a browser if I'm sure to include the trailing slash, but it still doesn't work in TortoiseSVN. I've looked at all the FAQs for this problem with TortoiseSVN and Apache, and none of them seem to apply to my problem. Does anyone have any insight?

    I'm running Ubuntu 9.10 with Apache 2.2.12. The only change I've made to my configuration is to enable the Includes mod. Here's my dav_svn conf:

        <Location /svn>
            DAV svn
            SVNParentPath /home/matthew/svn
            AuthType Basic
            AuthName "Subversion repository"
            AuthUserFile /etc/subversion/passwd
            Require valid-user
        </Location>

    and here's the relevant part of my virtual host conf:

        <Location /svn>
            SetHandler None
            Order allow,deny
            Allow from all
        </Location>

    Edit: OK, I've discovered that the real conflict is between the Include module and basic authentication. That is, if I disable the Include module, browse to the Subversion repo, and enter my user/pass for basic authentication, I can browse it just fine. It even continues to work after I re-enable the Include module. However, if I browse with another browser where I'm not already authenticated, then it no longer works.
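    One avenue worth testing (a guess at the interaction, not a verified fix): rather than letting SSI apply broadly, scope it to the directory that needs it, so the INCLUDES output filter never touches /svn responses:

        # Enable SSI only where it is needed
        # (/var/www/ssi-project is a hypothetical path)
        <Directory /var/www/ssi-project>
            Options +Includes
            AddType text/html .shtml
            AddOutputFilter INCLUDES .shtml
        </Directory>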

  • SCCM 2012 - some remote clients unable to download some applications, 401.2 error

    - by growse
    I've got a small SCCM 2012 deployment with about 35 clients attached. Most of these clients are on the same network as the single SCCM host, but three are about 1000 miles away. Oddly, these three clients have stopped being able to download some application packages over BITS. Publishing a new package works for all the other clients, but for these three it never seems to download; if I go to Software Center, it just hangs at "0% downloaded". On the client, DataTransfer.log says (repeatedly):

        CDTSJob::HandleErrors: DTS Job '{2DCBBB4C-6D84-479A-9218-885B72C834B9}' BITS Job '{E78147DD-4A26-4942-B4FD-6EC3EB77EECD}' under user 'S-1-5-18' OldErrorCount 442 NewErrorCount 443 ErrorCode 0x80072EE2  DataTransferService 30/07/2012 09:27:41 2964 (0x0B94)
        CDTSJob::HandleErrors: DTS Job ID='{2DCBBB4C-6D84-479A-9218-885B72C834B9}' URL='http://sccm-host:80/SMS_DP_SMSPKG$/Content_3e7f6982-6346-4f27-ae00-ad5dcb391455.1' ProtType=1  DataTransferService 30/07/2012 09:27:41 2964 (0x0B94)

    Cas.log says (repeatedly):

        Location update from CTM for content Content_3e7f6982-6346-4f27-ae00-ad5dcb391455.1 and request {AD041FCB-03D2-4FE6-A6FA-38A6B80FB2A1}  ContentAccess 30/07/2012 08:33:39 5048 (0x13B8)
        Download location found 0 - http://lonsbrndsccm02.mcs.int.thomsonreuters.com/SMS_DP_SMSPKG$/Content_3e7f6982-6346-4f27-ae00-ad5dcb391455.1  ContentAccess 30/07/2012 08:33:39 5048 (0x13B8)
        Download request only, ignoring location update  ContentAccess 30/07/2012 08:33:39 5048 (0x13B8)

    On the server, I've enabled failed-request log tracing. The raw IIS log says the following:

        2012-07-30 08:28:42 10.13.111.35 GET /SMS_DP_SMSPKG$/Content_3e7f6982-6346-4f27-ae00-ad5dcb391455.1/sccm /NSCP-0.4.0.172-x64.msi 80 - 10.2.27.19 Microsoft+BITS/7.5 401 2 5 293

    That is a 401.2 error, meaning access denied due to invalid credentials. The failed-request log is large, but the punchline is that it throws out an "Unauthorized: Access is denied due to invalid credentials." message.

    All clients are members of the same domain and appear to be (otherwise) working great. I've reinstalled the SCCM client and deleted and re-added the computer to SCCM. Some other packages work fine; the daily anti-malware delta gets downloaded and applied without issue. Why are these packages failing?
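    Given the 401.2, the first thing worth dumping is the authentication configuration on the DP's virtual directory (site name assumed; note the job runs as S-1-5-18, i.e. SYSTEM, so with anonymous auth off the download has to authenticate as the machine account):

        %windir%\system32\inetsrv\appcmd list config "Default Web Site/SMS_DP_SMSPKG$" /section:system.webServer/security/authentication/anonymousAuthentication
        %windir%\system32\inetsrv\appcmd list config "Default Web Site/SMS_DP_SMSPKG$" /section:system.webServer/security/authentication/windowsAuthentication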

  • Apache Reverse Proxy not working inside a VirtualHost running a Mono Web Application

    - by Arwen
    I have a Mono web application running with the virtual host below, on Apache 2.2.20 / Ubuntu 11.10. I tried to add a reverse proxy inside this virtual host so I can make asynchronous (AJAX-type) calls back to this same domain; my asynchronous requests would have problems in many browsers if they called services on another domain (the cross-domain request problem). I want the reverse proxy calls to this other service to go through http://www.whatever.com/monkey/, so I added the <Proxy> directive and the <Location /monkey/> directive to try to make this work.

    It is weird, though: nothing I do seems to have any effect. I can put the exact same markup in my default website's virtual host file and it works great. What is the deal? Are some of these Mono directives causing problems?

        <VirtualHost *:80>
            ServerName www.whatever.com
            ServerAlias whatever.com *.whatever.com
            ServerAdmin [email protected]
            DocumentRoot /home/myuser/web/whatever

            ProxyRequests off
            <Proxy *>
                Order allow,deny
                Allow from all
            </Proxy>
            <Location /monkey/>
                ProxyPass http://www.google.com/
                ProxyPassReverse http://www.google.com/
            </Location>

            MonoServerPath www.whatever.com "/usr/bin/mod-mono-server2"
            MonoSetEnv www.whatever.com MONO_IOMAP=all
            MonoApplications www.whatever.com "/:/home/myuser/web/whatever"

            <Location "/">
                Allow from all
                Order allow,deny
                MonoSetServerAlias www.whatever.com
                SetHandler mono
                SetOutputFilter DEFLATE
                SetEnvIfNoCase Request_URI "\.(?:gif|jpe?g|png)$" no-gzip dont-vary
            </Location>

            <IfModule mod_deflate.c>
                AddOutputFilterByType DEFLATE text/html text/plain text/xml text/javascript
            </IfModule>
        </VirtualHost>
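    A hunch worth testing (not a confirmed answer): the <Location "/"> block hands every request, /monkey/ included, to the mono handler before mod_proxy ever sees it. Taking the handler back inside the proxied path may restore the proxy:

        <Location /monkey/>
            # Take these requests back from mod_mono
            SetHandler None
            ProxyPass http://www.google.com/
            ProxyPassReverse http://www.google.com/
        </Location>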

  • How can I map a Windows group login to the dbo schema in a database?

    - by Christian Hayter
    I have a database for which I want to restrict access to three named individuals. I thought I could do the following:

        1. Create a local Windows group on the database server and add the named individuals to it.
        2. Create a Windows login in SQL Server mapped to the local Windows group.
        3. Map the login to the "dbo" schema in the database, so that the users can access all objects without having to qualify them with the schema name.

    When I try to do step 3, I get the following error:

        Msg 15353, Level 16, State 1, Line 1
        An entity of type database cannot be owned by a role, a group, an approle, or by principals mapped to certificates or asymmetric keys.

    I have tried to do this via the IDE, the sp_changedbowner sproc, and the ALTER AUTHORIZATION command, and I get the same error each time. After searching MSDN and Google, I find that this restriction is by design. Great, that's useful.

    Can anyone tell me: why does this restriction exist? It seems very arbitrary. More importantly, can I accomplish my requirement some other way?

    Other info that might be pertinent: the server is fully up to date with service packs and hotfixes; all objects in the database are owned by the "dbo" schema, and it's not feasible to change that; the database is running in compatibility level 80, and it's not feasible to change that to 90 yet; I am free to make any other changes (within reason, depending on what they are).
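    On the "some other way" part: unqualified object names fall back to the dbo schema during name resolution anyway, so adding the group as an ordinary user with db_owner membership gives the same effective access without anyone owning the database. A sketch (database and group names invented):

        USE MyDatabase;
        CREATE USER [DOMAIN\DevTeam] FOR LOGIN [DOMAIN\DevTeam];
        EXEC sp_addrolemember 'db_owner', 'DOMAIN\DevTeam';
        -- Note: DEFAULT_SCHEMA cannot be set for a Windows group before
        -- SQL Server 2012, but the dbo fallback makes that moot here.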

  • Force Windows Local Subnet Traffic through a Gateway

    - by Beerey
    Hi all,

    We are attempting to route all traffic from a certain machine through a gateway. This works fine for traffic destined for subnets outside the machine's own subnet. However, traffic to machines in the same subnet as the source machine goes through an on-link gateway in Windows. This means the default gateway is ignored and traffic within the subnet (for example, from 192.168.50.10 to 192.168.50.11) flows directly:

        Destination    Netmask          Gateway    Interface       Metric
        192.168.50.0   255.255.255.0    On-link    192.168.50.214  276

    This route can be deleted from Windows, but when the machine is rebooted it always comes back. Adding a persistent static route through the gateway with a lower metric doesn't work, since Windows still tries the on-link gateway after the persistent route fails. Putting each machine in a VLAN isn't an option due to the setup we have. Adding a startup script to delete the route isn't a great option either, since users will have full admin access to the machine and might disable the script. And we cannot transparently intercept all network traffic on the subnet using gratuitous ARPs or transparent proxying, since there are other machines on the subnet that use a different gateway.

    The only way we have gotten it to work is by adding a persistent route through the gateway for the subnet traffic and deleting the on-link route on every reboot. So the question is: is there a way to permanently remove this on-link route? If not, is there a way to otherwise force even local subnet traffic to go through a gateway?
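    For concreteness, the working-but-fragile approach described above boils down to two commands at startup (gateway address assumed):

        rem Pin the subnet route through the gateway, then drop the on-link route
        route -p add 192.168.50.0 mask 255.255.255.0 192.168.50.1 metric 1
        route delete 192.168.50.0 mask 255.255.255.0 192.168.50.214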

  • Outbound HTTP performance tuning recommendations

    - by Richard Gadsden
    I'll detail my exact setup below, but general recommendations for a better web-browsing experience would be useful; a nice checklist of things to try would be great! I have 600 users on a single site with an 8MB leased line, and I get a lot of moans about the performance of "the internet" (i.e. web browsing). What recommendations does the community have for speeding things up without just throwing more bandwidth at it? I expect I will end up buying some more, but good management tips are always valuable.

    My setup is this:

        A Cisco PIX (515E) firewall on the edge of the network. It's just doing some basic NAT and opening up a handful of ports to various bastion hosts (a.k.a. DMZ servers).
        The DMZ is just a switch that the servers are plugged into.
        An ISA 2006 Enterprise array (two servers) connecting the DMZ to the internal LAN, with Websense Web Security filtering HTTP traffic so users can't look at porn or waste bandwidth on YouTube during working hours.

    I've done a few things already. I've just switched my internal DNS over to use root hints, which halved DNS query latency from 500ms to 250ms; well worth doing. I'm trying to cache more aggressively, but so much more of the internet is AJAXy and doesn't cache very well compared to five years ago; plus the 70GB of cache that felt like a lot a few years ago really isn't any more. I'm getting about 45% cache hits by number of requests, but only about 22% by size, i.e. larger objects are less likely to be cached.

    Latency seems to be part of the problem. Is that attributable to the bandwidth problem, or are there things I can look at to reduce latency even on heavily loaded bandwidth?

  • Distributed development staff needing a common IP range

    - by bakasan
    I work on a development staff that is geographically distributed, mostly throughout the state of CA, but several key members also must travel frequently. We rely quite heavily on a third-party provider API for a great deal of our subsystems (I can't get into who it is or what they do). The third party, however, is quite stringent about network access and has no notion of a development sandbox: access is restricted to two or three IP numbers and that's about it. Once we account for our production servers, that leaves us with an IP or two to spare for our dev team, which is still problematic, as people's home IPs change, people travel, we have more than two devs, etc. Wide IP blocks are not permitted by the third party, nor will they allow dynamic-DNS-type services. There is no simple console to swap IPs on the fly either (e.g. if a dev's IP at home changes or they are on the road).

    As none of us are deep network experts, I'm wondering what our viable options are. Are there such things as third-party-hosted VPNs? Generally I think of a VPN as a mechanism to gain access to a home office, but the notion here would be a third-party VPN that we'd all connect to, and we'd register its IP as an origin with our provider.

    We've considered using Amazon EC2 to effectively host a dev environment for each dev and using that to connect. Amazon only gives you so many static IPs, however (I believe 5?), so this would only be a stopgap until our team size outstrips our IP count at Amazon.

    Those were the only viable thoughts I had but, again, I'm far from a networking guy. I tried searching for similar threads, but I'm not even sure I know the right vernacular to look around for.
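    The hosted-VPN idea need not be exotic: one VPN server with a single static egress IP, registered once with the provider, covers every dev wherever they happen to be. The NAT side of that, sketched for a Linux box running OpenVPN on 10.8.0.0/24 (addresses and interface assumed):

        # All VPN clients leave through this host's one public IP
        echo 1 > /proc/sys/net/ipv4/ip_forward
        iptables -t nat -A POSTROUTING -s 10.8.0.0/24 -o eth0 -j MASQUERADE
        # plus a stock OpenVPN server config handing out 10.8.0.0/24 addresses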

  • fail2ban errors on Gentoo

    - by Mark Davidson
    Hi all,

    I've recently set up a new VPS running Gentoo (my first time using the distro, so please forgive me if this is a really easy one) and, as I've done on other servers, installed fail2ban, setting it up to block hosts via iptables after too many unsuccessful SSH logins. However, I'm getting a strange error that I can't quite solve. When I start fail2ban, I get these lines in the error log:

        2009-11-13 18:02:01,290 fail2ban.jail   : INFO   Jail 'ssh-iptables' started
        2009-11-13 18:02:01,480 fail2ban.actions.action: ERROR  iptables -N fail2ban-SSH
            iptables -A fail2ban-SSH -j RETURN
            iptables -I INPUT -p tcp --dport ssh -j fail2ban-SSH returned 100

    If I try to force a ban, these errors show up in the log and the host is not banned:

        2009-11-13 11:23:26,905 fail2ban.actions: WARNING [ssh-iptables] Ban XXX.XXX.XXX.XXX
        2009-11-13 11:23:26,929 fail2ban.actions.action: ERROR  iptables -n -L INPUT | grep -q fail2ban-SSH returned 100
        2009-11-13 11:23:26,930 fail2ban.actions.action: ERROR  Invariant check failed. Trying to restore a sane environment
        2009-11-13 11:23:27,007 fail2ban.actions.action: ERROR  iptables -N fail2ban-SSH
            iptables -A fail2ban-SSH -j RETURN
            iptables -I INPUT -p tcp --dport ssh -j fail2ban-SSH returned 100
        2009-11-13 11:23:27,016 fail2ban.actions.action: ERROR  iptables -n -L INPUT | grep -q fail2ban-SSH returned 100
        2009-11-13 11:23:27,016 fail2ban.actions.action: CRITICAL Unable to restore environment

    My versions are as follows:

        Linux masked 2.6.18-xen-r12 #2 SMP Wed Mar 4 11:45:03 GMT 2009 x86_64 Intel(R) Xeon(R) CPU E5504 @ 2.00GHz GenuineIntel GNU/Linux
        net-analyzer/fail2ban-0.8.4
        net-firewall/iptables-1.4.3.2

    If anyone could shed some light on these errors, that would be great. I did wonder if it was a problem with iptables or some kernel modules, but I can block an IP if I do:

        iptables -I INPUT -s 25.55.55.55 -j DROP

    so that makes me think it's something a bit more unusual. Thanks a lot in advance.
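    Exit code 100 is coming from iptables itself, so replaying fail2ban's actionstart commands by hand should surface the real error text (same chain name as in the log above):

        iptables -N fail2ban-SSH
        iptables -A fail2ban-SSH -j RETURN
        iptables -I INPUT -p tcp --dport ssh -j fail2ban-SSH; echo "exit: $?"
        iptables -n -L INPUT | grep fail2ban-SSH
        # On a 2.6.18 Xen kernel a missing netfilter match module (e.g.
        # xt_tcpudp for the --dport match) is a plausible culprit; that is
        # an assumption, not a diagnosis.
        lsmod | grep -e ip_tables -e xt_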

  • ISA Server 2006 SSL Certificate Dilemma

    - by JohnyD
    I'm making great headway in offering our services over HTTPS, with help from a Go Daddy certificate, later to be upgraded to Thawte SSL123 certs. But I've just run into one whopper of a problem.

    Here's my setup: I run an ISA 2006 firewall. Our web services are distributed over two servers: one is Windows 2000 (www.domain.com) and the other is Windows 2003 (services.domain.com). So I'll need to purchase two certs, for both www and services, import them into IIS 6 on their respective machines, export them with the private key (making sure to "Include all certificates in the certification path if possible"; that had me stumped for a while), and then finally import them into ISA's local computer Personal store.

    The problem I've just run into is that I have separate firewall rules for services.domain.com and www.domain.com, because requests need to be forwarded to different web servers, and each of these firewall rules uses the same HTTP listener. I have just found out that you can only use one certificate per HTTP listener, and to make matters worse you can only have a single listener per IP/port. Is this correct? Can I really only use a single certificate for a single IP address? This would seem to be a severe limitation. Am I wrong? If I'm not, then I've got a whole lot more work ahead of me, as I'll have to set up extra IPs, add them to the firewall's network interface, create new listeners using those IPs, etc.

    Can someone please confirm whether I'm doing this correctly or incorrectly? Once I got my head wrapped around it all it seemed easy... then this. Thanks in advance.
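    If it does come to one IP per listener, binding an extra address to the external NIC is one command per IP (interface name and address below are invented):

        netsh interface ip add address "External" 203.0.113.12 255.255.255.0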

  • Split Excel worksheet into multiple worksheets based on a column with VBA (Redux)

    - by Ceeder
    I'm rather new to VBA, and I've been working with the code generously displayed and explained by Nixda in "Split Excel Worksheet...". My only challenge is that I've been trying desperately to find a way to include the top three rows as a title, but it seems to allow only one. Here's the code I have:

        Dim Titlesheet As Worksheet
        iCol = 23 '### Define your criteria column
        strOutputFolder = (Sheets("Operations").Range("D4")) '### <-- Define your path of output folder

        Set ws = ThisWorkbook.ActiveSheet
        Set rngLast = Columns(iCol).Find("*", Cells(3, iCol), , , xlByColumns, xlPrevious)
        Set Titlesheet = Sheets("Input")

        ws.Columns(iCol).AdvancedFilter Action:=xlFilterInPlace, Unique:=True
        Set rngUnique = Range(Cells(4, iCol), rngLast).SpecialCells(xlCellTypeVisible)
        If Dir(strOutputFolder, vbDirectory) = vbNullString Then MkDir strOutputFolder

        For Each strItem In rngUnique
            If strItem <> "" Then
                Sheets("Input").Select
                Range("A1:V3").Select
                Selection.Copy
                ws.UsedRange.AutoFilter Field:=iCol, Criteria1:=strItem.Value
                Workbooks.Add
                Sheets("Sheet1").Select
                ActiveSheet.PasteSpecial
                ws.UsedRange.SpecialCells(xlCellTypeVisible).Copy Destination:=[A4]
                strFilename = strOutputFolder & "\" & strItem
                ActiveWorkbook.SaveAs Filename:=strFilename, FileFormat:=xlWorkbookNormal
                ActiveWorkbook.Close savechanges:=False
            End If
        Next
        ws.ShowAllData

    Is there something I can change to include these lines? Thanks so much; this code provided by Nixda has taught me a great deal!
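    One possible fix, sketched below and untested (it assumes the three title rows really are Input!A1:V3): apply the filter before copying, and copy with explicit destinations, so the AutoFilter call cannot clear the clipboard between the Copy and the paste:

        For Each strItem In rngUnique
            If strItem <> "" Then
                ws.UsedRange.AutoFilter Field:=iCol, Criteria1:=strItem.Value
                Workbooks.Add
                ' Title rows first, then the filtered block below them
                Titlesheet.Range("A1:V3").Copy Destination:=ActiveSheet.Range("A1")
                ws.UsedRange.SpecialCells(xlCellTypeVisible).Copy Destination:=ActiveSheet.Range("A4")
                strFilename = strOutputFolder & "\" & strItem
                ActiveWorkbook.SaveAs Filename:=strFilename, FileFormat:=xlWorkbookNormal
                ActiveWorkbook.Close savechanges:=False
            End If
        Next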

  • Apache Won't Restart After Compiling PHP with Postgres

    - by gonzofish
    I've compiled PHP (v5.3.1) with Postgres using the following configure:

        ./configure \
            --build=x86_64-redhat-linux-gnu --host=x86_64-redhat-linux-gnu \
            --target=x86_64-redhat-linux-gnu --program-prefix= \
            --prefix=/usr/ --exec-prefix=/usr/ --bindir=/usr/bin/ --sbindir=/usr/sbin/ \
            --sysconfdir=/etc --datadir=/usr/share --includedir=/usr/include/ \
            --libdir=/usr/lib64 --libexecdir=/usr/libexec --localstatedir=/var \
            --sharedstatedir=/usr/com --mandir=/usr/share/man --infodir=/usr/share/info \
            --cache-file=../config.cache --with-libdir=lib64 \
            --with-config-file-path=/etc --with-config-file-scan-dir=/etc/php.d \
            --with-pic --disable-rpath --with-pear --with-pic --with-bz2 \
            --with-exec-dir=/usr/bin --with-freetype-dir=/usr --with-png-dir=/usr \
            --with-xpm-dir=/usr --enable-gd-native-ttf --with-t1lib=/usr \
            --without-gdbm --with-gettext --without-gmp --with-iconv \
            --with-jpeg-dir=/usr --with-openssl --with-zlib --with-layout=GNU \
            --enable-exif --enable-ftp --enable-magic-quotes --enable-sockets \
            --enable-sysvsem --enable-sysvshm --enable-sysvmsg --with-kerberos \
            --enable-ucd-snmp-hack --enable-shmop --enable-calendar \
            --with-libxml-dir=/usr --enable-xml --with-system-tzdata \
            --with-mime-magic=/usr/share/file/magic --with-apxs2=/usr/sbin/apxs \
            --with-mysql=/usr/include/mysql --without-gd \
            --with-dom=/usr/include/libxml2/libxml --disable-dba --without-unixODBC \
            --disable-pdo --enable-xmlreader --enable-xmlwriter \
            --without-sqlite --without-sqlite3 --disable-phar --enable-fileinfo \
            --enable-json --without-pspell --disable-wddx --with-curl=/usr/include/curl \
            --enable-posix --with-mcrypt --enable-mbstring --with-pgsql=/mnt/mv/pgsql

    I'm using Postgres 8.4.0 and Apache 2.2.8. I have the following line in my Apache conf file:

        LoadModule php5_module /usr/lib64/httpd/modules/libphp5.so

    And when I attempt to restart Apache, I get the following error message:

        Starting httpd: httpd: Syntax error on line 205 of /etc/httpd/conf/httpd.conf: Cannot load /usr/lib64/httpd/modules/libphp5.so into server: /usr/lib64/httpd/modules/libphp5.so: undefined symbol: lo_import_with_oid

    Now, I know this is a problem with Postgres and PHP, because lo_import_with_oid is a function in the Postgres source which allows the importing of large objects, and if I remove the --with-pgsql option, PHP and Apache get along great. I've scoured the internet looking for answers all day, but to no avail. Does anyone have ANY insight into what is causing my problems?
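    A detail worth checking, offered as an assumption rather than a verdict: lo_import_with_oid first appeared in libpq 8.4, so this looks like PHP compiled against the 8.4 headers under /mnt/mv/pgsql while Apache resolves an older system libpq at runtime. Quick ways to see which library actually gets loaded:

        # Which libpq does the PHP module bind to at runtime?
        ldd /usr/lib64/httpd/modules/libphp5.so | grep libpq
        ldconfig -p | grep libpq
        # Does that library export the missing symbol? (path is an example)
        nm -D /usr/lib64/libpq.so.5 | grep lo_import_with_oid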

  • Two hosting providers running simultaneously... possible / not possible? good practice / unnecessary?

    - by user29600
    For the sake of their reputations I won't mention names, so I'll just use: the business I worked for previously, "ABC Web Dev", and the hosting company they used, "XYZ Hosting".

    I recently found out that XYZ Hosting had some sort of incident in which they ended up losing a lot of their clients' data, including ABC Web Dev's. ABC Web Dev was able to recover some of their customers' websites by pulling them from their local development computers and putting them up on another hosting provider. They lost a lot of clients because of it, and their reputation was ruined.

    I'm starting my own web dev company, and I don't want to run into this same issue. I'm planning on using Rackspace but, although they are a great company, according to Wikipedia they have still had downtime in the past. I thought it might be a good idea to run two providers at once, to ensure that if anything happened to one, the websites would still be live on the other. I know the websites would have to be served from one provider at a time, but if there's a way to redirect requests to the second server when the first one is down, that would solve my issue. As a note, we will have a staging environment set up locally, which allows quick recovery if a provider does have any issues; however, I'd like to avoid any downtime at all if possible.

    So my questions are: Has anyone tried running two providers simultaneously? Would this be considered good practice, or am I going too far? Is there really any way to run two simultaneously, where one server acts as a backup?

  • Reloading NAT configuration on a running VMWare Server 2.0.2

    - by Jonathan Clarke
    I have a server running VMware Server 2.0.2; the host is Debian Lenny. I have 15-20 virtual machines running, all attached to a single NAT network (named vmnet8). I have configured VMware's NAT (the vmnet-natd daemon) to forward some incoming ports to one of the VMs, since it hosts some publicly accessible services. I did this via the file /etc/vmware/vmnet8/nat/nat.conf, by adding lines like the following:

        80 = 192.168.100.100:80

    This works great: I can reach the web server on the VM at 192.168.100.100 by connecting to the host's IP address. Sometimes I need to add port redirections to this NAT configuration, so I add a line to the configuration file.

    Now for the question: how do I make the natd process take this new configuration into account? Clearly, restarting the host machine does take it into account, and the newly added port is forwarded, but that is not an option on this server. How should one do this without restarting the whole host? Thanks for any ideas!
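    Two things worth trying, neither verified against Server 2.0.2 specifically:

        # vmnet-natd is reported to re-read nat.conf on SIGHUP
        sudo kill -HUP $(pidof vmnet-natd)

        # Heavier fallback: restart VMware host networking services
        # (check the impact on running VMs before doing this in production)
        sudo /etc/init.d/vmware restart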

  • Email output of a PowerShell script

    - by Gordon Carlisle
    I found this wonderful script that outputs the status of the current DFS backlog to the PowerShell console. It works great, but I need the script to email me instead, so I can schedule it to run nightly. I have tried using the Send-MailMessage command but can't get it to work, mainly because my PowerShell skills are very weak; I believe most of the issue revolves around the script using the Write-Host command. While the coloring is nice, I would much rather have it email me the results. I also need the solution to be able to specify a mail server, since the DFS servers don't have email capability themselves. Any help or tips are welcome and appreciated. Here is the code:

        $RGroups = Get-WmiObject -Namespace "root\MicrosoftDFS" -Query "SELECT * FROM DfsrReplicationGroupConfig"
        $ComputerName = $env:ComputerName
        $Succ = 0
        $Warn = 0
        $Err = 0
        foreach ($Group in $RGroups) {
            $RGFoldersWMIQ = "SELECT * FROM DfsrReplicatedFolderConfig WHERE ReplicationGroupGUID='" + $Group.ReplicationGroupGUID + "'"
            $RGFolders = Get-WmiObject -Namespace "root\MicrosoftDFS" -Query $RGFoldersWMIQ
            $RGConnectionsWMIQ = "SELECT * FROM DfsrConnectionConfig WHERE ReplicationGroupGUID='" + $Group.ReplicationGroupGUID + "'"
            $RGConnections = Get-WmiObject -Namespace "root\MicrosoftDFS" -Query $RGConnectionsWMIQ
            foreach ($Connection in $RGConnections) {
                $ConnectionName = $Connection.PartnerName.Trim()
                if ($Connection.Enabled -eq $True) {
                    if (((New-Object System.Net.NetworkInformation.ping).send("$ConnectionName")).Status -eq "Success") {
                        foreach ($Folder in $RGFolders) {
                            $RGName = $Group.ReplicationGroupName
                            $RFName = $Folder.ReplicatedFolderName
                            if ($Connection.Inbound -eq $True) {
                                $SendingMember = $ConnectionName
                                $ReceivingMember = $ComputerName
                                $Direction = "inbound"
                            } else {
                                $SendingMember = $ComputerName
                                $ReceivingMember = $ConnectionName
                                $Direction = "outbound"
                            }
                            $BLCommand = "dfsrdiag Backlog /RGName:'" + $RGName + "' /RFName:'" + $RFName + "' /SendingMember:" + $SendingMember + " /ReceivingMember:" + $ReceivingMember
                            $Backlog = Invoke-Expression -Command $BLCommand
                            $BackLogFilecount = 0
                            foreach ($item in $Backlog) {
                                if ($item -ilike "*Backlog File count*") {
                                    $BacklogFileCount = [int]$Item.Split(":")[1].Trim()
                                }
                            }
                            if ($BacklogFileCount -eq 0) {
                                $Color = "white"
                                $Succ = $Succ + 1
                            } elseif ($BacklogFilecount -lt 10) {
                                $Color = "yellow"
                                $Warn = $Warn + 1
                            } else {
                                $Color = "red"
                                $Err = $Err + 1
                            }
                            Write-Host "$BacklogFileCount files in backlog $SendingMember->$ReceivingMember for $RGName" -fore $Color
                        } # Closing iterate through all folders
                    } # Closing If replies to ping
                } # Closing If Connection enabled
            } # Closing iteration through all connections
        } # Closing iteration through all groups
        Write-Host "$Succ successful, $Warn warnings and $Err errors from $($Succ+$Warn+$Err) replications."

    Thanks, Gordon
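    A minimal way to bolt mail on (a sketch; the server and addresses are placeholders, and Send-MailMessage needs PowerShell 2.0): collect the report lines into a buffer instead of, or alongside, Write-Host, then hand the buffer to Send-MailMessage, whose -SmtpServer parameter means the DFS servers need no local mail capability.

        $report = @()
        # ...inside the folder loop, next to (or instead of) the Write-Host line:
        $report += "$BacklogFileCount files in backlog $SendingMember->$ReceivingMember for $RGName"

        # ...and at the very end of the script:
        $report += "$Succ successful, $Warn warnings and $Err errors from $($Succ+$Warn+$Err) replications."
        Send-MailMessage -SmtpServer "mail.example.com" -From "[email protected]" `
            -To "[email protected]" -Subject "DFS backlog report from $ComputerName" `
            -Body ($report -join "`r`n")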

  • How do I fix libdispatch problem crashing Mac OS X apps?

    - by david-ocallaghan
    In the last day I have started having a lot of brokenness on my Mac (a MacBook Air running Mac OS X 10.6.2 with all software updates). Most noticeably, iTunes no longer syncs with my iPhone; it fails with a crash dialog reporting "AppleMobileDeviceHelper quit unexpectedly" and an error dialog "iTunes was unable to load dataclass information from SyncServices. Reconnect or try again later." I've attempted the fix at support.apple.com/kb/HT1747, but it failed.

    I've also been having problems (at first seemingly unrelated) with the horrible Cisco VPN client, which started giving me this error:

        Error 51: Unable to communicate with the VPN subsystem

    I followed the steps at www.anders.com/cms/192/CiscoVPN/Error.51:.Unable.to.communicate.with.the.VPN.subsystem, which don't seem to work for me, although I can connect if I use the command line with sudo:

        sudo vpnclient connect MyProfile

    I had a look in the Console app at the diagnostic messages and noticed a pattern: a number of apps were reporting "BUG IN CLIENT OF LIBDISPATCH". The affected programs are:

        AppleMobileBackup
        AppleMobileDeviceHelper
        Safari Webpage Preview Fetcher
        cvpnd (the Cisco VPN daemon)

    Of these, only the last is non-Apple software! The common text in the diagnostic messages is:

        Exception Type:  EXC_BAD_INSTRUCTION (SIGILL)
        Exception Codes: 0x0000000000000001, 0x0000000000000000
        Crashed Thread:  1  Dispatch queue: com.apple.libdispatch-manager
        Application Specific Information:
        BUG IN CLIENT OF LIBDISPATCH: Do not close random Unix descriptors

    I'm beginning to wonder if there's a permissions problem or corruption of an important library. I should note that I've rebooted several times and verified the disk permissions and the disk. Any help would be great!

  • Network monitoring tools with API features

    - by Kev
    We use KS-Soft's Advanced Host Monitor package to monitor around 2000 items on our network. We think it's great, the chap who supports it is fantastic, and the product is fast, stable, and mature, but I feel that as we grow as a company it's beginning to show some friction points in the area of integration with our back-office admin systems.

    One of the things we'd like to do is add new tests to whatever monitoring tool we use via an API. For example, when orders for servers come in from our retail interface, the server gets built automatically, and as part of the automated build process we'd like to automatically add new tests to the network monitoring systems. HostMonitor has some support for this via a feature called HM Script, but we're starting to encounter some speed bumps: we can't add new operators/users, and we can't define new "Action Profiles" (the actions to be taken when a test goes good or bad).

    What we love about HostMonitor, though, are the action profiles. For example, if a Windows IIS box goes bad, our action profile for a bad test does something like:

        1. Check the host again (one time).
        2. Wait another 30 seconds, then test again.
        3. Try restarting the app pool on the remote machine (up to two times).
        4. Send an email to ops about the restart failure.
        5. Try restarting IIS on the remote machine (up to four times).
        6. Page the duty admin (up to 5 times; stops after the duty admin ACKs the alert).
        7. Page the backup duty admin (up to 5 times; stops after they ACK the alert).

    I'm starting to look around at other network monitoring tools, and I'm looking for:

        a comprehensive API to add/remove/control tests, test "action profiles", and operators (not just plugins; we need control and admin interfaces)
        the ability to define quite detailed action/escalation profiles (again, via an API)

    I've looked at Nagios and Icinga, but I can't glean from their documentation whether we could have these features or, if we could, how much work would be involved to implement/customise them. Can anyone provide any advice, guidance or experiences?
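    For comparison, the closest Nagios/Icinga analogue to an action profile is an event handler plus notification escalations, all defined in plain config files that a provisioning system can generate (the object names below are invented):

        # Generated per-server by the build pipeline (names hypothetical)
        define service {
            use                    generic-service
            host_name              web042
            service_description    IIS app pool
            check_command          check_http
            event_handler          restart-apppool   ; self-heal before alerting
        }
        define serviceescalation {
            host_name              web042
            service_description    IIS app pool
            first_notification     3
            last_notification      0                 ; 0 = never stop escalating
            notification_interval  10
            contact_groups         duty-admins,backup-duty-admins
        }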
