Search Results

Search found 39047 results on 1562 pages for 'process control'.

Page 586 of 1562

  • Cisco "copy startup-config tftp" results in a 0 byte file on the server?

    - by Geoffrey
    I'm pulling my hair out figuring this out. My startup-config is good, I can view it with a show command. I'm trying to copy it to a tftp server:

        asa5505# copy startup-config tftp
        Address or name of remote host []? ipaddress
        Destination filename [startup-config]? t
        !!
        %Error writing tftp://ipaddress/t (Timed out attempting to connect)

    On my TFTP server (SolarWinds), I get the following:

        binary, PUT. Started file name: C:\TFTP-Root\t
        binary, PUT. File Exists, C:\TFTP-Root\t
        binary, PUT. Deleting Existing File.
        binary, PUT. Interrupted by client, cause: The process cannot access the file 'C:\TFTP-Root\t' because it is being used by another process

    I've used tftpd32 with the same results. I've tried different servers, even one on the same network as the asa ... same results. It'll create a 0 byte file and never do the dump. What's going on? Everything is working normally except for this.

    Read the article

  • Cannot start listening on a certain TCP port, but there's nothing currently listening on it

    - by John Rasch
    I have a Windows Service that uses a WCF service host to listen for connections on TCP port 61000. When I try to start the service, I get the error:

        Service cannot be started. System.ServiceModel.AddressAlreadyInUseException: HTTP could not register URL http://+:61000/ because TCP port 61000 is being used by another application. ---> System.Net.HttpListenerException: The process cannot access the file because it is being used by another process
           at System.Net.HttpListener.AddAll()
           at System.Net.HttpListener.Start()
           at System.ServiceModel.Channels.SharedHttpTransportManager.OnOpen()
           --- End of inner exception stack trace ---
           at System.ServiceModel.Channels.SharedHttpTransportManager.OnOpen()
           at System.ServiceModel.Channels.TransportManager.Open(TransportChannelListener channelListener)
           at System.ServiceModel.Channels.TransportManagerContainer.Open(SelectTransportManagersCallback selectTransportManagerCallback)
           at System.ServiceModel.Channels.HttpChannelListener.OnOpen(TimeSpan timeout)
           at System.ServiceModel.Channels.CommunicationObject.Open(TimeSpan timeout)
           at System.ServiceModel.Dispatcher.ChannelDispatcher.OnOpen(TimeSpan timeout)
           at...

    A quick netstat -a shows there is nothing listening on port 61000. I've also found several posts online that mention reserving namespaces using netsh, but the account that the service runs under has administrator privileges so that shouldn't be necessary. Any other ideas as to why I'm getting this message? This service is running on 64-bit Windows Server 2008 R2 Standard.
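
    (Aside: HTTP.SYS URL reservations, which this exception is about, are listed and managed with netsh; a quick check along these lines would show whether an existing reservation or HTTP.SYS registration is conflicting with the URL. The account name in the last command is only a placeholder.)

        rem List URL reservations and look for anything touching port 61000
        netsh http show urlacl | findstr 61000

        rem Show active HTTP.SYS registrations and request queues
        netsh http show servicestate view=requestq

        rem If the service account needs an explicit reservation, add one (DOMAIN\svcaccount is hypothetical)
        netsh http add urlacl url=http://+:61000/ user=DOMAIN\svcaccount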

    Read the article

  • DFSR NTFS Permissions Not Working!??!

    - by megadood
    I have two Windows 2008 Standard servers running DFSR okay. I can create a file on one server and it is replicated to the other, etc. I have the namespace shared folder on each server shared with Full Control for Administrators and Change/Read permissions for Everyone. I then browse to the folder on server 1, e.g. \\server1\namespace\share\folder1. I right-click the folder and configure the NTFS permissions as I would like, for example Administrators Full Control, one user Read/Write access, and no other users in the user list. I save this and then double-check the second server, e.g. \\server2\namespace\share\folder1. I right-click the same folder name as before and can see the NTFS permissions have replicated accordingly.

    I right-click the folder and go to Properties - Security - Advanced - Effective Permissions and select a user that shouldn't be able to get into that folder, e.g. testuser. It agrees with the NTFS permissions and shows that testuser has no ticks next to any permissions, so they should be denied access. I log on to any network PC or the server as testuser and browse to \\server1\namespace\share\folder1. It lets me straight in, no access denied messages. The same applies to server2. It seems as though all my NTFS permissions are being ignored.

    I have 1 DFS share and then all the subfolders are a mixture of private folders and public folders, so I need the NTFS permissions to work ideally. Any idea what's going on? Is this normal? From my tests all users can access any DFSR folder under the namespace\share, which is quite worrying. Thanks
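
    (As an aside, one way to take the Explorer GUI out of the equation is to dump the ACL that actually landed on each replicated copy from the command line, and to check the test account's group memberships, using the paths and account from the question; a rough sketch:)

        rem Dump the NTFS ACL on both replication members and compare
        icacls \\server1\namespace\share\folder1
        icacls \\server2\namespace\share\folder1

        rem List the domain groups testuser belongs to (nested membership often explains unexpected access)
        net user testuser /domain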

    Read the article

  • Outlook slow to open attachments

    - by Alistair McMillan
    When a colleague tries to open attachments in her email (Outlook 2003 talking to an Exchange 2007 server) they take ages to open. The files are relatively small, all less than 1MB. We've tried creating a new Windows profile for the user and tried creating new Outlook profiles, however that hasn't made any difference. And we've tried accessing her account from someone else's PC, and the attachments open immediately there. The only thing that might provide a clue is that Process Monitor shows Outlook on her PC trying to write the file to a folder within the user's "Temporary Internet Files" folder with FAST I/O DISALLOWED errors. I can't find a lot of useful information on that message online though. What causes the FAST I/O DISALLOWED errors? And would that make opening attachments so incredibly slow that opening a < 1MB file can take a matter of minutes?

    UPDATE: Discovered that this isn't just an issue with Outlook. Other files being accessed over the network show the same FAST I/O DISALLOWED errors in Process Monitor. The problem is just more noticeable with Outlook, because although other applications take a while to open files it isn't a matter of minutes.

    Read the article

  • Outlook 2010, 2007 Sync problems after migration from SMTP to Exchange

    - by kirgy
    Our organization recently switched from an SMTP server to an Exchange server; since then, several users' Outlook clients are not synchronizing their emails as expected with the Exchange server. Our move consisted of adding the new Exchange account alongside the existing SMTP account, then drag-dropping/copy-pasting folders client-side in the Outlook folder pane from the SMTP account to the newly created Exchange account. The problem happens when a user moves an email to a folder from their inbox or another folder. At this point the email disappears client-side in Outlook. Re-syncing the folder, send/receive, closing/opening Outlook and even system reboots do not make this email reappear. The Outlook web interface (OWA) reports the email is in fact in the folder they placed it in, and is not deleted. Doing a "search all mail items" for the emails shows that the email is still there, not deleted or removed. To add to the confusion, when new folders are created and the email is placed in these folders, the synchronization happens without any issue both client side and server side. As the emails are appearing server side, we are fairly confident this is a client-side issue.

    We have tried adding/removing accounts on one system, which resulted in the same issue. This was a very long and slow process due to the sheer volume of emails (20 GB+ for most users). We have tried reinstalling Outlook and restoring accounts from backups, which has not resolved the issue. We also tried upgrading one system from Outlook 2007 to Outlook 2010 which, again, did not resolve the issue. We experienced issues with a lot of emails disappearing during the copy-over process, so I'm not convinced it was the best migration route, but nonetheless we are where we are. Can anyone suggest potential avenues for resolving this issue? Thank you.

    Systems:
        Windows 7 (10 systems)
        Windows XP (2 systems)
        Outlook 2007 (2 systems)
        Outlook 2010 (7 systems)

    Problem Outlook systems:
        Windows XP, Outlook 2007 x 1
        Windows 7, Outlook 2007 x 1
        Windows 7, Outlook 2010 x 2

    Read the article

  • Apache + mod_fcgid + perl = error 500

    - by f-aminov
    Hi guys! I'm trying to set up Apache 2.2 with mod_fcgid and libapache2-mod-perl2 with no luck. I've created a fcgi-bin directory in the root directory of my website and put a test.fcgi file there with the following content:

        #!/usr/bin/perl
        use CGI;
        print "This is test.fcgi!\n";

    While trying to access it via http://www.website.dom/fcgi-bin/test.fcgi I get error 500 (Internal Server Error). Here is my vhost config:

        <VirtualHost 95.131.29.226:8080>
            ServerName website.com
            DocumentRoot /var/www/data/website.com
            SuexecUserGroup user group
            ServerAlias www.website.com
            AddType application/x-httpd-php .php .php3 .php4 .php5 .phtml
            <Directory "/var/www/data/website.com/fcgi-bin/">
                Options +ExecCGI
                Allow from all
                Order allow,deny
                AddHandler fcgid-script .fcgi
            </Directory>
        </VirtualHost>

    fcgid.conf:

        <IfModule mod_fcgid.c>
            AddHandler fcgid-script .fcgi
            SocketPath /var/lib/apache2/fcgid/sock
            IdleTimeout 3600
            ProcessLifeTime 7200
            MaxProcessCount 8
            DefaultMaxClassProcessCount 2
            IPCConnectTimeout 8
            IPCCommTimeout 60
        </IfModule>

    SuExec log:

        [2010-04-06 03:02:47]: uid: (500/equ) gid: (502/equ) cmd: test.fcgi

    Apache error log:

        test! test!
        [Tue Apr 06 03:02:51 2010] [notice] mod_fcgid: process /var/www/data/website.com/fcgi-bin/test.fcgi(26267) exit(communication error), terminated by calling exit(), return code: 0
        [Tue Apr 06 03:02:53 2010] [notice] mod_fcgid: process /var/www/data/website.com/fcgi-bin/test.fcgi(26261) exit(server exited), terminated by calling exit(), return code: 0

    I've no clue why I'm getting error 500, but when I try to access this file from the console ($ perl /var/www/data/website.com/fcgi-bin/test.fcgi) everything works fine without any errors... Any suggestions on how to solve this problem would be greatly appreciated. Thank you!
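
    (One detail that may matter here: the test script never prints a Content-Type header, and a CGI/FastCGI response without headers is a classic cause of a 500 from Apache. A minimal header-emitting version is sketched below; it assumes the CGI::Fast module is installed and is not necessarily the poster's eventual fix.)

        #!/usr/bin/perl
        use strict;
        use warnings;
        use CGI::Fast;

        # Loop over incoming FastCGI requests. A one-shot CGI-style script can also run
        # under mod_fcgid, but it must still emit a header before any body output.
        while (my $q = CGI::Fast->new) {
            print $q->header('text/plain');   # emits "Content-Type: text/plain" plus a blank line
            print "This is test.fcgi!\n";
        }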

    Read the article

  • Windows 7 disk backup and clone for deployment to multiple systems

    - by gregmac
    I'm in the process of deploying some new PCs (there's only 8), all identical hardware. What I'd like to do is install Windows 7 (64-bit), join it to the domain, etc., install a bunch of other software, and then clone that drive to multiple other machines. I'd also like to be able to use it as a backup image, so the machine can be restored back to that image at some future date. I understand this involves at least sysprep, but I am confused after reading some tutorials that talk about using the Windows Automated Installation Kit, or hacks with the registry and custom-built batch files. This process seems overly complex to me: I did something similar 10+ years ago and don't remember it being this bad. Surely things have improved in a decade? There are also some products that involve having network servers running deployment software, network boot, etc.; this is way more than I want to set up. My systems are all identical hardware. Is there a simplified way to clone PCs? Preferably (since I'm a lazy developer, and not an IT admin) I'd like to find some off-the-shelf product that I can run after I get the machine set up, that will spit out a bootable DVD I can run on all the other systems, which will boot up, ask for a computer name, join it to the domain, and that's it. Does such a product exist?
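
    (For reference, the Microsoft-supported core of this is still a single sysprep run on the reference machine before capturing the disk with whatever imaging tool is handy; roughly as below, where the answer file is optional and its path is just a placeholder.)

        rem Run on the reference PC once software is installed, then image the disk after it shuts down
        C:\Windows\System32\sysprep\sysprep.exe /generalize /oobe /shutdown /unattend:C:\unattend.xml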

    Read the article

  • "Run As" dialog always pops up

    - by user12006
    I recently got some malware on a machine that I don't use for much (partly intentional). I've cleaned it, but now every time I open any .exe the 'Run As' dialog pops up asking me which user I want to use to run the program. What causes this, and what's the fix for it?

    Edit: My process to remove the malware was as follows:

        - Disconnected from the network
        - Deleted the DisableTaskMgr reg key
        - Inspected with Process Explorer and Task Manager and noticed that all applications were being run within another executable located in Documents and Settings...\Temp\Some.exe; the system tray application was also in Documents and Settings...\Temp\SomeOther.exe
        - Suspected that a service was in place, as the system tray application would restart if it was killed, but couldn't find any service that I didn't recognize
        - Removed permissions from Some.exe and SomeOther.exe (on those files only)
        - Restarted and deleted Some.exe and SomeOther.exe
        - Deleted startup entries that were created
        - Ran AVG Free and Windows Defender to remove anything else (they would be killed immediately before the two .exe's were removed)
        - Cleaned the registry via CCleaner

    Note that system restores would finish saying something to the effect of 'couldn't restore system: there were no changes made'. I attempted to restore to a week ago, and I only got the malware yesterday.
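
    (One common cause of the 'Run As' prompt appearing for every .exe is a damaged .exe file association left behind by malware. The stock values are sketched below for comparison against the affected machine's registry; treat this as a reference, not a blind import.)

        Windows Registry Editor Version 5.00

        ; Default association for .exe files
        [HKEY_CLASSES_ROOT\.exe]
        @="exefile"

        ; Default "open" command for exefile (malware sometimes replaces or appends to this)
        [HKEY_CLASSES_ROOT\exefile\shell\open\command]
        @="\"%1\" %*"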

    Read the article

  • Why is my RapidSSL certificate chain not trusted on Ubuntu?

    - by olouv
    I have a website that works perfectly with Chrome and other browsers, but I get some errors with PHP in CLI mode, so I'm investigating it, running this:

        openssl s_client -showcerts -verify 32 -connect dev.carlipa-online.com:443

    Quite surprisingly, my HTTPS appears untrusted with a Verify return code: 27 (certificate not trusted). Here is the raw output:

        verify depth is 32
        CONNECTED(00000003)
        depth=2 C = US, O = GeoTrust Inc., CN = GeoTrust Global CA
        verify error:num=20:unable to get local issuer certificate
        verify return:1
        depth=2 C = US, O = GeoTrust Inc., CN = GeoTrust Global CA
        verify error:num=27:certificate not trusted
        verify return:1
        depth=1 C = US, O = "GeoTrust, Inc.", CN = RapidSSL CA
        verify return:1
        depth=0 serialNumber = khKDXfnS0WtB8DgV0CAdsmWrXl-Ia9wZ, C = FR, O = *.carlipa-online.com, OU = GT44535187, OU = See www.rapidssl.com/resources/cps (c)12, OU = Domain Control Validated - RapidSSL(R), CN = *.carlipa-online.com
        verify return:1

    So GeoTrust Global CA appears to be not trusted on the system (Ubuntu 11.10). I added Equifax_Secure_CA to try to solve this... but in that case I get Verify return code: 19 (self signed certificate in certificate chain)! Raw output:

        verify depth is 32
        CONNECTED(00000003)
        depth=3 C = US, O = Equifax, OU = Equifax Secure Certificate Authority
        verify error:num=19:self signed certificate in certificate chain
        verify return:1
        depth=3 C = US, O = Equifax, OU = Equifax Secure Certificate Authority
        verify return:1
        depth=2 C = US, O = GeoTrust Inc., CN = GeoTrust Global CA
        verify return:1
        depth=1 C = US, O = "GeoTrust, Inc.", CN = RapidSSL CA
        verify return:1
        depth=0 serialNumber = khKDXfnS0WtB8DgV0CAdsmWrXl-Ia9wZ, C = FR, O = *.carlipa-online.com, OU = GT44535187, OU = See www.rapidssl.com/resources/cps (c)12, OU = Domain Control Validated - RapidSSL(R), CN = *.carlipa-online.com
        verify return:1

    Edit: Looks like my server does not trust/provide the Equifax Root CA; however, I do correctly have the file in /usr/share/ca-certificates/mozilla/Equifax...
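
    (Worth noting: openssl s_client does not consult the system trust store unless told where it is, so an untrusted verdict from a bare s_client run is not necessarily what PHP sees. Pointing it at the standard Ubuntu CA locations, roughly as below, separates the two cases.)

        # Verify against the hashed CA directory maintained by update-ca-certificates
        openssl s_client -showcerts -verify 32 -CApath /etc/ssl/certs -connect dev.carlipa-online.com:443

        # Or against the single bundled file
        openssl s_client -showcerts -verify 32 -CAfile /etc/ssl/certs/ca-certificates.crt -connect dev.carlipa-online.com:443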

    Read the article

  • Makecert.exe hangs

    - by Robert
    I was following the steps in Scott Hanselman's blog post describing how to create a certificate authority and code signing certificate for PowerShell scripts. Initially, I created the certificate authority and a personal certificate and used it to sign a PowerShell script successfully. All went as described in the blog post. The problem starts (as most do) when I did something that was (probably) stupid, although it seemed reasonable at the time. I wanted to start over and repeat the process again with a clean slate, so from the MMC certificates snap-in console I deleted the personal certificate and the certificate authority I created previously. After that, any time I try to use makecert (just as I did the first time around), it either hangs or faults (which prompts to end or debug). Did I hose something up by deleting via the certificates snap-in? It didn't complain or warn me that it could be potentially hazardous. Is this just coincidence and something else entirely could be hosed? I have Event Log entries from the times when makecert crashed, which all look very similar; here is one:

        Log Name:      Application
        Source:        Application Error
        Date:          8/5/2009 3:55:04 PM
        Event ID:      1000
        Task Category: (100)
        Level:         Error
        Description:   Faulting application makecert.exe, version 6.0.6000.16384, time stamp 0x4545910b, faulting module ntdll.dll, version 6.0.6002.18005, time stamp 0x49e03821, exception code 0xc0000005, fault offset 0x00067409, process id 0xe58, application start time 0x01ca160efdf30625.

    Anyone have any ideas as to what exactly caused this and/or what I can do to fix it? I'm on 32-bit Vista Enterprise with SP2.

    Read the article

  • Pushing updates to live server... FTP isn't cutting it... a better method?

    - by Jenkz
    I'm the lead developer in a team of 2. My partner has only just joined the project and despite using GIT for version control etc, we are still stuck in the dark ages when it comes to code deployment. Currently I make all site updates via FTP (this way I have control / responsibility over everything that goes live), using Filezilla. I've done this for years, but we now have some large PHP classes (300KB), and a lot of traffic. So in short, every time I upload a key class ("general", for example), the site goes down until the file finishes uploading. This is only 5/6 seconds at a time, but this is increasingly unacceptable. I realise I can upload the file under a different name and then rename both files... but really there must be a better way? I've heard about rsyncing code across from another server, but I don't see how this prevents switching to the new file whilst uploading. We only have one server (for DB and Apache) but also use some cloud servers (for openx as an example).
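
    (For context, the usual pattern that avoids serving a half-uploaded class is to push the new code into a fresh directory and then switch a symlink, so the cutover is one near-instant filesystem operation rather than a slow transfer. A rough sketch follows; the host, user and paths are placeholders, not the poster's actual layout.)

        # Push the new code to a versioned release directory (nothing live is touched yet)
        rsync -az --delete ./ deploy@webserver:/var/www/releases/build-123/

        # Flip the document-root symlink to the new release
        ssh deploy@webserver 'ln -sfn /var/www/releases/build-123 /var/www/current'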

    Read the article

  • iSCSI SAN Implementation with several ESXi hosts and two Equallogic SANs

    - by Sergey
    I work for a small state college. We currently have 4 ESXi hosts (all made by Dell), 2 EqualLogic SANs (PS4000 and PS4100) and a bunch of old HP ProCurve switches. The current setup is very far from being redundant and fast, so we want to improve it. I read several threads but got even more confused. The ProCurve switches are 2824s. I know they don't support Jumbo Frames and Flow Control at the same time, but we have plans to upgrade to something like the ProCurve 3500yl. Any suggestions? I heard Dell PowerConnect 6xxx switches are pretty good, but I'm not sure how they compare to the HPs. There will be a 4-port EtherChannel (link aggregation) between the switches, and all control modules on the SANs will be connected to different switches. Is there anything that will make the setup better? Are there better switches than the ProCurve 3500yl that cost less than 5k? What kind of bandwidth can I expect between the ESXi hosts (they will also be connected to the 2824s with multiple cables) and the SANs?

    Read the article

  • Change the Default Date setting in Word 2010

    - by Chris
    I am using Word 2010 and Windows 7. You know how when you start typing a date in Word it will automatically suggest what it thinks you want? Like if I start typing “6/29”, a little grey bubble will display “6/29/13 (Press ENTER to Insert)”. How do I get it so the bubble will display the year in a 4-digit format, such as "6/29/2013 (Press Enter to Insert)"? The below picture is how it looks when typing a date into Word. I have already gone to the Date & Time option under the Insert menu and the date format that I want is already selected. I think this is only for using Quick Parts anyway, so the date automatically updates when you open a document. The Region and Language settings under the Control Panel are correct as well. I thought at one point I found it somewhere under Options, but I am sure I looked through everything many times and I can't find it. I posted this exact question at the Microsoft website and someone replied:

        Go to the Windows Control Panel and click on Clock, Language and Region and then on Change the date, time, or number format and then modify the Short date format so that it is what you want to be used.

    So please don't suggest this again, because in my question I did say that I already tried this and it doesn't work, at least not for Word, in this situation. Thanks.

    Read the article

  • Is it possible to be a professional studying on your own?

    - by Marc Jr
    I read economics at university (nothing to do with Linux, right? :P). I have some basic knowledge of the boot process, compiling the Linux kernel from source, and stuff like that. But of course I still have much to learn; sometimes errors appear and, voilà, I am lost. I have had Ubuntu, Fedora, openSUSE and Arch, and I am using Gentoo now. I'd like to know what you Linux users, professionals, administrators... think is the best way to learn Linux in a professional way. Is studying on my own and passing the LPIC test enough to work in the Linux world, or do I need to go to an IT university? I've heard LFS is a good way of learning about Linux; is that true? I've been thinking about trying LFS to learn more deeply about how Linux works and about writing scripts. Is it possible to do it this way? If anyone has a tip or a good way of doing it, or has done it themselves, any tip is very welcome. Words from a person in love with Linux. :D The best, Marc

    Read the article

  • How can ICS in Windows 7 be managed via command line, scripts, config files, etc.?

    - by Skya
    I've been using ICS successfully for years, but now I'm looking for a way to control it through something other than the GUI in Control Panel\Network and Internet\Network Connections - Connection Properties: I want to do everything that the encircled checkbox does, without touching the GUI. But what does the checkbox do? Microsoft doesn't provide specific information, and the most helpful forum post I've found is from 2003. Assuming that some of the advice is still valid, I've come to the conclusion that ICS is broken down into 6 parts that have to be set up individually:

        - the SharedAccess service
        - interface settings
        - firewall rules
        - a static route
        - dnsproxy
        - autodhcp

    I've already learned that the service can be started/stopped with the command net start/stop sharedAccess and that netsh is a good tool for changing the interface settings and the firewall rules. But I don't understand how ICS handles routing and DNS. All hosts in my network are configured statically, so I don't care much about autodhcp. Thanks for your help!

    EDIT: I've spent the whole day scanning through ProcMon and I've seen reads/writes to both the registry and the filesystem, and it is difficult to determine which parts of that actually make ICS work. I'm trying to look for an API instead. I'm looking into this right now, but I still want to know more about the inner workings.
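
    (On the API front, the checkbox in that dialog drives the INetSharingConfiguration COM interface, exposed through the HNetCfg.HNetShare ProgID, which is scriptable. A rough PowerShell sketch, run elevated and with the connection name as a placeholder:)

        # Hypothetical sketch: toggle ICS on a named connection via the HNetCfg.HNetShare COM object
        $share = New-Object -ComObject HNetCfg.HNetShare
        $conn  = $share.EnumEveryConnection |
                 Where-Object { $share.NetConnectionProps.Invoke($_).Name -eq 'Local Area Connection' }
        $cfg   = $share.INetSharingConfigurationForINetConnection.Invoke($conn)
        $cfg.EnableSharing(0)    # 0 = public (shared) side, 1 = private side
        # $cfg.DisableSharing()  # turns sharing back off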

    Read the article

  • System fans connected to a Gigabyte Z77-D3H motherboard do not increase in speed

    - by Andrew
    The motherboard (Gigabyte Z77-D3H) controls my 3-pin CPU fan just fine. My system fans are a 3-pin fan (plugged into SYS_FAN1) and a 4-pin fan (plugged into SYS_FAN3). All 3 of the system fan headers are 4-pin, but the user manual states that SYS_FAN1 is really a 3-pin header (it can control the speed of a 3-pin fan) and the 4th pin is just reserved. All my fans have a max RPM of 2000. Normally, all the fans run around 1000 RPM when I'm not doing anything intensive. This proves that the motherboard can set the speed. However, when I run Folding@Home and my temperatures increase (around 70C), only the CPU fan increases to around 2000 RPM. The system fans stay around 1000 RPM. Through the BIOS I am able to disable the system fan control, and the system fans then run at max RPM (meaning the motherboard was doing something). I've updated the BIOS to the latest version and tried out SpeedFan, but neither helped my situation. What I'd like is for the system fans to ramp up their RPM as needed. Any thoughts? Tl;dr: My system (case) fans, but not my CPU fan, are always stuck around 1000 RPM out of 2000 no matter the temperature.

    Read the article

  • How do I resolve BSOD: PAGE_FAULT_IN_NONPAGED_AREA?

    - by Burnzy
    I have been troubleshooting this for a few days and cannot fix it anyhow.

    Computer specifications:

        Mobo: ASUS Sabertooth X58 LGA 1366 Intel X58 SATA 6Gb/s USB 3.0 ATX Intel Motherboard
        CPU: Intel(R) Core(TM) i7 CPU 920 (Bloomfield) @ 2.67 (no OC)
        RAM: 6144MB
        GPU: 2x NVIDIA GeForce GTS 250 1GB in SLI (SLI is not enabled at the moment anyway)
        Drives: OCZ RevoDrive OCZSSDPX-1RVD0120 PCI-E x4 120GB PCI Express MLC Internal SSD [RAID-0] (I know this could potentially cause trouble, but I had the BSOD before using this drive); Seagate Barracuda 7200.11 ST31500341AS 1.5TB 7200 RPM 32MB Cache SATA 3.0Gb/s 3.5" Internal Hard Drive - Bare Drive

    Click here for a log of a crash I just had. Click here for a log of a crash I had 30 minutes later; note that it's another driver.

    Some info:

        Occurrence: It seems pretty random so far; I haven't noticed any kind of pattern.

    I tried:

        - Windows memory diagnostic (went smoothly at 1066MHz)
        - As I said, it was still happening on my HDD, so when I bought the RevoDrive I installed a new OS on there and still got the error; I believe it happened when I had no drivers installed at that point (not 100% sure)
        - Changed the following registry value to 1 (true): HKEY_LOCAL_MACHINE\SYSTEM\CurrentControlSet\Control\Session Manager\Memory Management\ClearPageFileAtShutdown
        - Tried lowering the RAM clock even more
        - Made sure the RAM timings were set to the manufacturer's recommended values
        - Verified that the motherboard is in good physical condition (it is, and it's brand new)

    There is one thing to note: when I got the new motherboard, I installed the new drivers WITHOUT formatting, and then I removed the motherboard drivers that I could remove from the Control Panel (pretty much the first things that had been installed). Could this cause an issue even ON THE OTHER drive (the RevoDrive)? Hopefully someone can help me; I am getting tired of this, spending so much money and not being able to get this to work correctly. If you need any other information let me know, thank you!

    Read the article

  • Ubuntu server PPTPD with OS X clients Problems

    - by Nakedsteve
    I'm trying to get a PPTP server running on an Ubuntu server, but I've run into some issues with it. I followed this guide on how to set up pptpd on my server, and everything went smoothly, but when I try to connect with my Mac, it gives me this error: [screenshot]. Here's my configuration: [screenshot]. Does anyone have any idea as to what I'm doing wrong here?

    Update: Here's what the pptpd.log has to say about it:

        steve@debian:~$ sudo tail /var/log/pptpd.log
        sudo: unable to resolve host debian
        Sep 3 21:46:43 debian pptpd[2485]: MGR: Manager process started
        Sep 3 21:46:43 debian pptpd[2485]: MGR: Maximum of 11 connections available
        Sep 3 21:46:43 debian pptpd[2485]: MGR: Couldn't create host socket
        Sep 3 21:46:43 debian pptpd[2485]: createHostSocket: Address already in use
        Sep 3 21:46:56 debian pptpd[2486]: CTRL: Client 192.168.1.101 control connection started
        Sep 3 21:46:56 debian pptpd[2486]: CTRL: Starting call (launching pppd, opening GRE)
        Sep 3 21:46:56 debian pptpd[2486]: GRE: read(fd=6,buffer=204d0,len=8196) from PTY failed: status = -1 error = Input/output error, usually caused by unexpected termination of pppd, check option syntax and pppd logs
        Sep 3 21:46:56 debian pptpd[2486]: CTRL: PTY read or GRE write failed (pty,gre)=(6,7)
        Sep 3 21:46:56 debian pptpd[2486]: CTRL: Reaping child PPP[2487]
        Sep 3 21:46:56 debian pptpd[2486]: CTRL: Client 192.168.1.101 control connection finished

    My pptpd options are:

        asyncmap 0
        noauth
        crtscts
        lock
        hide-password
        modem
        debug
        proxyarp
        lcp-echo-interval 30
        lcp-echo-failure 4
        nopix
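
    (Two things in that log may be worth checking before looking at the Mac side: "Address already in use" usually means another pptpd instance, or some other service, is already bound to TCP 1723, and "unable to resolve host debian" points at a missing /etc/hosts entry. Roughly:)

        # See what already owns TCP 1723, then restart pptpd cleanly
        sudo netstat -tlnp | grep ':1723'
        sudo /etc/init.d/pptpd restart

        # Make sure the server's own hostname resolves locally (add it to /etc/hosts if missing)
        grep debian /etc/hosts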

    Read the article

  • Are web service handler chains possible under IIS / ASP.NET

    - by Mike
    I'm working with a client who wants me to implement a particular design in an IIS/ASP.NET environment. This design was already implemented in Java, but I am not sure it is possible using Microsoft technologies. In a Tomcat/Java environment one can create so-called handler chains. In essence, a handler runs on the server hosting the web service and intercepts the SOAP message coming to the web service. The handler can perform a number of tasks before passing control to the web service, some of which may relate to authentication and authorization. Moreover, one can create handler chains, such that the handlers run in a particular sequence before passing control to the web service. This is a very elegant solution, as certain aspects of authentication and authorization can be performed automatically, without the developer of the client application or of the web service having to invest anything in it. The code for the client application and the web service is not affected. You may find a number of articles on the internet on this subject by searching Google for "web service handler chain". I performed searches for web service handlers in IIS or ASP.NET. I get some hits, but apparently handlers in IIS have another meaning than that described above. My question therefore is: can handler chains (as available in Java and Tomcat) be created in IIS? If so, how (any article, book, forum...)? Either a negative or a positive answer will be greatly appreciated. Mike
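
    (As a point of comparison, the closest IIS/ASP.NET building block is a chain of HTTP modules, or message inspectors in the WCF case, registered in web.config and run in order before the service itself. A hypothetical registration, with made-up type names, might look like this:)

        <!-- Hypothetical web.config fragment: modules run in registration order for every request -->
        <system.webServer>
          <modules>
            <add name="AuthenticationHandler" type="MyCompany.Handlers.AuthenticationModule, MyCompany.Handlers" />
            <add name="AuthorizationHandler"  type="MyCompany.Handlers.AuthorizationModule, MyCompany.Handlers" />
          </modules>
        </system.webServer>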

    Read the article

  • AD server within another network - DNS issues

    - by Harry Muscle
    Here's a quick summary of the environment I support: we have a domain (domain A) that has about 20 client computers. The domain server for this domain and all the clients sit within the network infrastructure of a larger domain (domain B). All the computers get their network settings via DHCP from domain B's servers. I have no control over, and am unable to make changes to, anything to do with domain B. The problem I have is that currently, in order for my domain's (domain A) clients to be able to resolve the domain server and the shares on it, they have their DNS server IP address set to domain A's domain server (via the default GPO). Unfortunately, when a laptop (Windows or Mac) gets taken home, it is still looking for the domain server as its DNS server and obviously can't access the internet correctly outside of our environment. Ideally I need a solution where the machines use domain A's domain server as their DNS when inside the office and use whatever DNS server DHCP gives them when they are outside the office. However, since I have no control over the office DHCP server, I'm not sure how this can be accomplished. Any help and advice that anyone can offer is highly appreciated. Thanks, Harry

    P.S. The solution I'm trying to find needs to require no involvement from the user.

    Read the article

  • Server Names Inside Private Network

    - by thyandrecardoso
    Our office has a private network, where any requests on a (pre-determined) public IP are forwarded to a private IP inside said network. On that private IP, we've got a server running several services, including HTTP servers and SCM systems. We only control our private network and have no control over the public IP configuration. We bought a domain name and pointed it to that public IP, so people can access our services from the outside. But when inside the office, people can't use that DNS name, because the server and any other hosts inside the network share the same public IP! For desktops inside the office network, dealing with names is really easy: one entry in the hosts file and we're done. However, for laptops that keep going in and out and need to access services inside the office, the naming is really annoying. I don't know the "standard" process for dealing with this kind of situation. I've considered installing BIND in the office and making people configure their wireless and wired connections to use that DNS server. What is the correct approach in this situation? If using BIND (or any other DNS server) is the answer, how should I configure it so that people inside the office can use it to get our custom names, and get forwarded to the ISP DNS when trying to reach the internet?
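
    (If BIND does go in, the usual approach is split-horizon DNS: the internal server answers authoritatively for the public domain name with the private IP, and forwards everything else to the ISP. A minimal sketch, with the domain, zone file name and forwarder address as placeholders:)

        // /etc/bind/named.conf.local - internal view of the public name
        zone "example.com" {
            type master;
            file "/etc/bind/db.example.com.internal";   // A records in here point at the private IP
        };

        // /etc/bind/named.conf.options
        options {
            directory "/var/cache/bind";
            forwarders { 203.0.113.53; };   // placeholder for the ISP resolver
            recursion yes;
        };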

    Read the article

  • Windows Vista: Screen remains darkened for 30-60 seconds *after* UAC prompt

    - by sf2k
    Fixing someone's Vista computer. The process:

        1. I click any program or process that opens a User Account Control prompt.
        2. The screen goes dim so you may hit Continue to perform a secure user action.
        3. I click Continue.
        4. The screen goes black for 30 seconds to 1 minute while you wait for it to return.

    In another example, I click Cancel and the screen still goes black for 30 seconds to a minute. In that timeframe a chime goes off while you wait (no chime if it was cancelled). Then the screen comes back to continue with whatever. Something is occurring after the UAC prompt. Considering practically everything is a UAC acceptance, this can get pretty annoying pretty quickly.

    The laptop has an external monitor connected to the regular external video plug, which works fine. The laptop also has an additional IOGEAR USB external video adapter. This is problematic, but the same behaviour occurs when it is unplugged. I've ruled out monitor interference, since the same blackout after the UAC prompt appears with external monitors plugged in or when rebooted with no external monitors. Any suggestions on how to address this problem?

    Read the article

  • "postgres blocked for more than 120 seconds" - is my db still consistent?

    - by nn4l
    I am using an iSCSI volume on an Open-E storage system for several virtual machines running on a XenServer host. Occasionally, when there is very high disk I/O load on the virtual machines (and therefore also on the storage system), I get this error message on the VM consoles:

        [2594520.161701] INFO: task kjournald:117 blocked for more than 120 seconds.
        [2594520.161787] "echo 0 > /proc/sys/kernel/hung_task_timeout_secs" disables this message.
        [2594520.162194] INFO: task flush-202:0:229 blocked for more than 120 seconds.
        [2594520.162274] "echo 0 > /proc/sys/kernel/hung_task_timeout_secs" disables this message.
        [2594520.162801] INFO: task postgres:1567 blocked for more than 120 seconds.
        [2594520.162882] "echo 0 > /proc/sys/kernel/hung_task_timeout_secs" disables this message.

    I understand this message is the kernel reporting that these processes haven't run for 120 seconds, most likely because a disk access to the storage system has not yet been processed. But what is the effect on the processes? For example, will the postgres process eventually write its data when the storage system is idle again after a few minutes, so that all data is still consistent? Or will it abort the write, leaving some tables in an inconsistent state? I certainly expect that the former should be the case - if the disk access is slow, postgres (or any other affected process) should just wait as long as it takes. I can live with the application hanging for a few minutes. But if there is a chance of data corruption then any of these errors is really bad news. Please advise what to do here.

    Read the article

  • Domino nchronos.exe multiple instances causing server to die, and Sametime problems

    - by Kevin
    I've had this problem for a few months now. I thought it started when I installed the Traveler software on the server to add ActiveSync support, but I removed that and the problem still persists. Basically, new instances of "nchronos.exe" keep spawning (and not ending), so over a period of a few days the server eventually gets drowned in nchronos.exe processes, stops responding, and I need to kill Domino. My process count the last time was up at about 330, and when I killed it and restarted Domino my process count went to 160. I'm running Domino 8.5.1 with Fix Pack 2. I don't know if it's relevant, but my Domino server was also acting as a Sametime server. At around the same time that nchronos started playing up, Sametime also stopped working. None of my users can connect to Sametime, and in the Domino log it keeps telling me "stpolicy.exe" has terminated. I've googled for that and tried a few things, but nothing seems to make Sametime work again. Any thoughts?? Cheers, Kevin

    Read the article

  • Why does ATI 5570 HD video card driver installation cause Windows 7 To Blue Screen?

    - by Mort
    This one is for the hive mind. I have a brand new Dell Optiplex 760 workstation with 4 gigabytes of RAM running Windows 7 Professional (32-bit). This is a new box with nothing installed other than what was provided directly by Dell. I installed a Sapphire ATI PCI Express 5570 HD. Upon trying to install the 10.4 Catalyst drivers the system will blue screen. It blue screens during the hardware detection phase of the installation process. I have already performed the following troubleshooting steps:

        - Changed system RAM
        - Installed only 2 gigabytes of RAM
        - Installed different versions of Catalyst drivers (10.4 - 9.12)
        - Tried to install only the video component of the driver (vs the entire Catalyst suite)
        - Made sure Windows 7 was fully updated
        - Flashed the motherboard BIOS to the current version
        - Removed and re-seated the video card
        - Contacted ATI Support (we all know how this went......)
        - Verified the power supply is outputting properly

    The blue screen error (via the Windows BugCheck entry in the event log) is a 0x000000CA and refers to a plug and play error most likely caused by a bad driver. The problem is that the driver installation process never gets far enough to actually install a driver. The resolution center in Windows suggests installing the 10.4 Catalyst driver to resolve the issue (which fails). Looking for some alternative views on how to resolve this.

    Read the article
