Search Results

Search found 18913 results on 757 pages for 'ideas'.


  • OpenLDAP mirror mode replication failing with TLS behind a load balancer

    - by Lynn Owens
    I have two OpenLDAP servers that are both running TLS. They are: ldap1.mydomain.com and ldap2.mydomain.com. I also have a load balancer cluster with a DNS name of its own: ldap.mydomain.com. The SSL certificate has a CN of ldap.mydomain.com, with SANs of ldap1.mydomain.com and ldap2.mydomain.com. Everything works... except mirror mode replication. My mirror mode replication is set up like this: ldap.conf TLS_REQCERT allow cn=config.ldif olcServerID: 1 ldap://ldap1.mydomain.com olcServerID: 2 ldap://ldap2.mydomain.com On ldap1, olcDatabase{1}hdb.ldif olcMirrorMode: TRUE olcSyncrepl: {0}rid=001 provider=ldap://ldap2.mydomain.com bindmethod=simple binddn="cn=me,dc=mydomain,dc=com" credentials="REDACTED" starttls=yes searchbase="dc=mydomain,dc=com" schemachecking=on type=refreshAndPersist retry="60 +" On ldap2, olcDatabase{1}hdb.ldif olcMirrorMode: TRUE olcSyncrepl: {0}rid=001 provider=ldap://ldap1.mydomain.com bindmethod=simple binddn="cn=me,dc=mydomain,dc=com" credentials="REDACTED" starttls=yes searchbase="dc=mydomain,dc=com" schemachecking=on type=refreshAndPersist retry="60 +" Here are the errors I'm getting in syslog: Dec 1 21:05:01 ldap1 slapd[6800]: slap_client_connect: URI=ldap://ldap2.mydomain.com DN="cn=me,dc=mydomain,dc=com" ldap_sasl_bind_s failed (-1) Dec 1 21:05:01 ldap1 slapd[6800]: do_syncrepl: rid=001 rc -1 retrying Dec 1 21:05:08 ldap1 slapd[6800]: conn=1111 fd=20 ACCEPT from IP=ldap.mydomain.com:2295 (IP=ldap1.mydomain.com:636) Dec 1 21:05:08 ldap1 slapd[6800]: conn=1111 fd=20 closed (TLS negotiation failure) Any ideas? I've been working on OpenLDAP for way too long now.
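
    A quick way to isolate whether the TLS failure is specific to syncrepl is to run the same StartTLS bind from each server directly against the other by hostname, bypassing the load balancer. This is only a diagnostic sketch; the bind DN and search base are taken from the question, and -ZZ makes ldapsearch fail outright if StartTLS cannot be negotiated:

      # run on ldap1 against ldap2 (and the reverse on ldap2)
      ldapsearch -H ldap://ldap2.mydomain.com -ZZ -x \
        -D "cn=me,dc=mydomain,dc=com" -W \
        -b "dc=mydomain,dc=com" -s base

    If that fails with a certificate or hostname error, slapd's own client-side TLS settings (the TLS_REQCERT in the ldap.conf that slapd actually reads, or the CA certificate configured in cn=config) are the likelier culprit than the load balancer itself.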

    Read the article

  • GDB breakpoint problems attaching to QEMU

    - by Rickard von Essen
    Hi, I have the following problem. When I connect gdb to qemu for debugging it won't break on breakpoints. I can set breakpoints, break with ctrl-c etc. Any clues how this can be fixed? I have: $ qemu --version QEMU PC emulator version 0.11.0 (qemu-kvm-0.11.0), Copyright (c) 2003-2008 Fabrice Bellard $ gdb --version GNU gdb (GDB) 7.0-ubuntu. This GDB was configured as "x86_64-linux-gnu". This is an example session: (And yes this is pintos) gdb -x src/misc/gdb-macros kernel.o GNU gdb (GDB) 7.0-ubuntu Copyright (snip...) License (snip...) This GDB was configured as "x86_64-linux-gnu". Reading symbols from ../../threads/build/kernel.o...done. (gdb) debugpintos 0x0000fff0 in ?? () (gdb) break main Breakpoint 1 at 0xc01000b6: file ../../threads/init.c, line 68. (gdb) info break Num Type Disp Enb Address What 1 breakpoint keep y 0xc01000b6 in main at ../../threads/init.c:68 (gdb) cont Continuing. Remote connection closed Any ideas are welcome.
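
    One workaround worth trying, sketched below under the assumption that the stock pintos setup exposes qemu's gdb stub on localhost:1234 (which is what the debugpintos macro connects to): with qemu-kvm the guest runs under KVM, and ordinary software breakpoints set before the kernel is fully up are sometimes never hit, while hardware-assisted breakpoints are.

      (gdb) target remote localhost:1234
      (gdb) hbreak main
      (gdb) continue

    Disabling KVM acceleration for the debug run, so qemu falls back to plain emulation, is the other common way to make regular break behave again.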

    Read the article

  • SSTP BPDU with bad TLV and macflap -- info please

    - by Adeodatus
    Hi All, I'm slowly locking down the network I've inherited, and MAC flapping has been a problem in the past with customers doing all kinds of crazy things. That's changing, but I am now encountering this error: Dec 30 18:31:31 10.50.1.50 1565: 001567: Dec 30 18:31:30: %SW_MATM-4-MACFLAP_NOTIF: Host xxxx.xxxx.f681 in vlan 1 is flapping between port Gi0/5 and port Gi0/48 Dec 30 18:43:28 10.50.1.50 1566: 001568: Dec 30 18:43:26: %SPANTREE-2-RECV_BAD_TLV: Received SSTP BPDU with bad TLV on GigabitEthernet0/5 VLAN1. Dec 30 18:48:18 10.50.1.50 1567: 001569: .Dec 30 18:48:17: %SPANTREE-2-RECV_BAD_TLV: Received SSTP BPDU with bad TLV on GigabitEthernet0/5 VLAN1. Unfortunately, that MAC address is the MAC of our core router, the only link to the internet, on port Gi0/48. On the other end of Gi0/5, I have about 50 bridged customer machines connected through a series of managed and unmanaged L2 switches. Yes, on VLAN1 too ... like I said, working on changing this slowly. In the meantime, it has me quite baffled as to how to deal with this and track down the customer or switch that is the problem. What else could be going on with these messages ... the bad TLV is a new one for me. Any ideas? Thank you and Happy New Year to you all!!
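
    A few IOS show commands can help narrow down what Gi0/5 is actually learning and receiving; this is only a sketch using the interface names and MAC address from the log above:

      show mac address-table address xxxx.xxxx.f681
      show spanning-tree interface GigabitEthernet0/5 detail
      show interfaces GigabitEthernet0/5 counters errors

    If the core router's MAC really is being learned on Gi0/5, something on the customer side is looping or bridging traffic back toward you; once the offending device is found, port security or storm control on Gi0/5 can at least keep it from disturbing the uplink.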

    Read the article

  • OpenSolaris / Nexenta problems with NetXen 4-port NIC card (ntxn driver)

    - by ewwhite
    Hello, I'm running NexentaStor Enterprise on an HP ProLiant DL180 G6 server. The onboard NIC interfaces surface as igb0 and igb1 and work well. However, I've added an HP NC375T 4-port network card using the NetXen 3031 chipset. This card should be handled by the ntxn driver in the SUNWntxn package, but that results in "ntxn0: failed to map doorbell" messages upon boot. The network interfaces don't show up. After some research, I found HP's driver package for the card. The release notes for the driver package state: This version of the Driver is supported only on Oracle Solaris 10 5/09 & 10/09. Oracle Solaris 10 5/09 & 10/09 contain an older version of NetXen P3 driver package called SUNWntxn. So, adding another version of NetXen P3 driver package using pkgadd command might result in conflicts with the NetXen driver binary & related files. Users are advised to uninstall native SUNWntxn driver package before installing the new package. The install completes, but I end up with a different set of errors in initializing the card. ifconfig ntxn0 plumb ifconfig: cannot open link "ntxn0": DLPI link does not exist dmesg output: Jan 29 07:20:17 ch-san2 ntxn: [ID 977263 kern.warning] WARNING: Memory not available Jan 29 07:20:17 ch-san2 ntxn: [ID 404858 kern.notice] NOTICE: ntxn0: Mac registration error Trying to manually create the device files: root@ch-san2:/volumes# add_drv -i "4040,100" ntxn ("ntxn") already in use as a driver or alias. Update the driver: root@ch-san2:/volumes# update_drv -f ntxn devfsadm: driver failed to attach: ntxn Warning: Driver (ntxn) successfully added to system but failed to attach Any ideas on how to get this driver working, or should I ditch the card and go with an Intel or something else?
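
    Per the HP release notes quoted above, the bundled SUNWntxn package has to be out of the way before HP's driver will attach. A hedged sketch of that clean-up sequence (the package and driver names are taken from the question; the HP package filename is a placeholder):

      rem_drv ntxn                    # drop the existing driver binding
      pkgrm SUNWntxn                  # remove the bundled NetXen package
      pkgadd -d <HP-ntxn-package>     # install HP's driver package
      devfsadm -i ntxn                # rebuild device nodes for the driver
      ifconfig ntxn0 plumb

    The "Memory not available" warning can also mean the card is asking for more DMA-able memory than the system will give it, so any /etc/system tuning recommended in HP's README is worth applying before giving up on the card.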

    Read the article

  • fedora12, yum not releasing "lock" after performing an action

    - by James.Elsey
    Hello, This problem has been occurring quite frequently recently and I can't seem to find a way of preventing it. Whenever I perform an action with yum such as to install or remove software, it appears to execute successfully but then I'm unable to move onto the next yum command For example, I executed yum remove skype, it appeared to remove ok, but next when I try to yum search skype it appears that yum is still processing, and I have to manually kill that process via kill 1234 (or whatever the PID is) My output is as follows [root@nevada james]# yum remove skype Loaded plugins: presto, refresh-packagekit Setting up Remove Process Resolving Dependencies --> Running transaction check ---> Package skype.i586 0:2.1.0.47-fc10 set to be erased --> Finished Dependency Resolution Dependencies Resolved ================================================================================ Package Arch Version Repository Size ================================================================================ Removing: skype i586 2.1.0.47-fc10 installed 24 M Transaction Summary ================================================================================ Remove 1 Package(s) Reinstall 0 Package(s) Downgrade 0 Package(s) Is this ok [y/N]: y Downloading Packages: Running rpm_check_debug Running Transaction Test Finished Transaction Test Transaction Test Succeeded Running Transaction Erasing : skype-2.1.0.47-fc10.i586 1/1 Removed: skype.i586 0:2.1.0.47-fc10 Complete! [root@nevada james]# yum search skype Loaded plugins: presto, refresh-packagekit Existing lock /var/run/yum.pid: another copy is running as pid 3639. Another app is currently holding the yum lock; waiting for it to exit... The other application is: PackageKit Memory : 79 M RSS (372 MB VSZ) Started: Fri Dec 18 08:39:18 2009 - 00:01 ago State : Sleeping, pid: 3639 Kernel version : 2.6.31.6-166.fc12.x86_64 Any ideas how I can prevent this behaviour? Thanks
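
    Since the lock holder is PackageKit (pulled in by the refresh-packagekit yum plugin shown in the "Loaded plugins" line), one hedged workaround is to disable that plugin so PackageKit stops re-grabbing the yum lock after every transaction. The config path below is the usual Fedora location for yum plugin configuration; adjust it if yours differs:

      # stop the refresh-packagekit plugin from waking PackageKit up
      sed -i 's/^enabled=1/enabled=0/' /etc/yum/pluginconf.d/refresh-packagekit.conf
      # if a stale PackageKit process is already holding the lock, kill it first
      kill 3639

    Waiting it out also works (PackageKit eventually releases the lock), but disabling the plugin avoids the wait every time.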

    Read the article

  • Setting up quotas on 64-bit RHEL6 OS with ext4 filesystem

    - by Rob Mangiafico
    Setting up a new 64-bit RHEL 6 server with an ext4 FS. I have only worked with ext3 and 32-bit RHEL 5 before. No matter what I try, I cannot get it to work. Current settings for mount (from "mount" command): /dev/sda7 on / type ext4 (rw,noatime) proc on /proc type proc (rw) sysfs on /sys type sysfs (rw) devpts on /dev/pts type devpts (rw,gid=5,mode=620) tmpfs on /dev/shm type tmpfs (rw,noexec,nosuid) /dev/sdb1 on /backup type ext4 (rw) /dev/sda1 on /boot type ext4 (rw,noatime) /dev/sda8 on /home type ext4 (rw,noatime,usrjquota=aquota.user,grpjquota=aquota.group,jqfmt=vfsv0) /dev/sda2 on /tmp type ext4 (rw,noexec,noatime) /dev/sda6 on /usr type ext4 (rw,noatime) /dev/sda5 on /var type ext4 (rw,noatime,usrjquota=aquota.user,jqfmt=vfsv0) none on /proc/sys/fs/binfmt_misc type binfmt_misc (rw) Essentially trying to get user/group quotas on /home, and user quotas on /var. Created the aquota.user and aquota.group files on /home and /var: -rw-r--r-- 1 root root 0 Nov 17 13:37 /home/aquota.group -rw-r--r-- 1 root root 0 Nov 17 13:37 /home/aquota.user -rw------- 1 root root 0 Nov 17 11:43 /var/aquota.user When I run quotacheck I get: quotacheck -vguma quotacheck: WARNING - Quotafile /home/aquota.user was probably truncated. Cannot save quota settings... quotacheck: WARNING - Quotafile /home/aquota.group was probably truncated. Cannot save quota settings... quotacheck: WARNING - Quotafile /var/aquota.user was probably truncated. Cannot save quota settings... Then I attempt quotaon and get: quotaon -av quotaon: Cannot find quota file on /home [/dev/sda8] to turn quotas on/off. quotaon: Cannot find quota file on /home [/dev/sda8] to turn quotas on/off. quotaon: Cannot find quota file on /var [/dev/sda5] to turn quotas on/off. quota rpms installed: rpm -qa|grep -i quota quota-3.17-16.el6.x86_64 quota-devel-3.17-16.el6.x86_64 Any ideas what I'm doing wrong or what I should adjust to get quotas to work as they do on ext3/32-bit?
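
    One likely culprit, sketched here as an assumption: those zero-length aquota files were created by hand, and quotacheck wants to create and populate them itself, which is exactly the situation its "probably truncated" warning describes. Per quotacheck(8), -c creates the files, -u/-g cover user and group quotas, and -m skips remounting the filesystem read-only:

      rm /home/aquota.user /home/aquota.group /var/aquota.user
      quotacheck -cugm /home
      quotacheck -cum /var
      quotaon -av

    If quotaon then succeeds, the journaled-quota mount options already in place (usrjquota/grpjquota with jqfmt=vfsv0) should carry over across reboots without further changes.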

    Read the article

  • Mac OS X Server (10.5) mail trapped in queue

    - by Meltemi
    We've got mail accumulating in our Leopard Server's queue and we're not sure exactly why. This machine has required little maintenance over the years, so I'm hoping someone here can spot the obvious and save us some time. Let me know what other information would be helpful. The server appears to be functioning normally except for the "clogged" queue and the following error associated with each "trapped" message. Looking at messages in the queue, each one states something like this: Message ID: 4213C3B8B3F Date: October 27, 2009 11:33:27 AM Size: 1824 Sender: [email protected] Recipient(s) & Status: ---------------------- [email protected]: connect to 127.0.0.1[127.0.0.1]: Connection refused Under Settings > Relay we have checked "Accept SMTP relays only from these hosts and networks": 127.0.0.0/8 10.0.1.0/24 The mail in the queue is addressed to users whose accounts are on this server. Mail.app on the client appears to be functioning normally and checking mail on the server. We did add a virtual domain some time ago, but all that was working fine for some time... This just started happening recently... any ideas? Edit: toggling the filter services on and off seems to have fixed this except for 2 remaining queued messages that show "mail transport unavailable" as an error!?!
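
    Leopard Server's mail service is Postfix underneath, and "connect to 127.0.0.1 ... Connection refused" when delivering to local users usually means the content filter on the loopback (amavisd feeding ClamAV/SpamAssassin) isn't listening, which also fits the fix of toggling the filter services. A hedged way to confirm and then re-flush what's left in the queue (port 10024 is the conventional amavisd port, an assumption here):

      postconf content_filter      # where Postfix is told to hand mail for filtering
      netstat -an | grep 10024     # is anything actually listening there?
      postsuper -r ALL             # requeue every held message
      postqueue -f                 # flush the queue

    The two stragglers showing "mail transport unavailable" should get retried by the requeue once whatever transport they reference is running again.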

    Read the article

  • uninstall google chrome in fedora

    - by tbleckert
    Yesterday I installed Fedora 15 Beta with GNOME 3 - it works well. One problem though is that I installed Chrome 32-bit (which was wrong, should have been the 64-bit version) and now I can't uninstall it. I can't find it in Add/Remove Software, and I also can't install the correct version of Chrome because it complains about my other copy of Chrome. Any ideas how I can remove the existing copy and get the 64-bit version installed? Here's the message I get when trying to install: Test Transaction Errors: file /etc/cron.daily/google-chrome from install of google-chrome-stable-11.0.696.65-84435.x86_64 conflicts with file from package google-chrome-stable-11.0.696.65-84435.i386 file /opt/google/chrome/chrome from install of google-chrome-stable-11.0.696.65-84435.x86_64 conflicts with file from package google-chrome-stable-11.0.696.65-84435.i386 file /opt/google/chrome/chrome-sandbox from install of google-chrome-stable-11.0.696.65-84435.x86_64 conflicts with file from package google-chrome-stable-11.0.696.65-84435.i386 file /opt/google/chrome/libffmpegsumo.so from install of google-chrome-stable-11.0.696.65-84435.x86_64 conflicts with file from package google-chrome-stable-11.0.696.65-84435.i386 file /opt/google/chrome/libpdf.so from install of google-chrome-stable-11.0.696.65-84435.x86_64 conflicts with file from package google-chrome-stable-11.0.696.65-84435.i386 file /opt/google/chrome/libppGoogleNaClPluginChrome.so from install of google-chrome-stable-11.0.696.65-84435.x8...
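
    When Add/Remove Software can't see the package, rpm usually still can, so one hedged way out is to remove the i386 build directly and then install the 64-bit rpm from Google; the package name below is taken from the conflict message, and the download filename is Google's usual one for the stable channel:

      rpm -qa | grep google-chrome       # confirm the exact installed package name
      rpm -e google-chrome-stable-11.0.696.65-84435.i386
      rpm -ivh google-chrome-stable_current_x86_64.rpm

    If rpm -e complains about dependencies, removing it with yum remove google-chrome-stable should work just as well once the exact name is known.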

    Read the article

  • Authenticating Windows 7 against MIT Kerberos 5

    - by tommed
    Hi There, I've been wracking my brains trying to get Windows 7 authenticating against an MIT Kerberos 5 realm (which is running on an Arch Linux server). I've done the following on the server (aka dc1): Installed and configured a NTP time server Installed and configured DHCP and DNS (set up for the domain tnet.loc) Installed Kerberos from source Set up the database Configured the keytab Set up the ACL file with: *@TNET.LOC * Added a policy for my user and my machine: addpol users addpol admin addpol hosts ank -policy users [email protected] ank -policy admin tom/[email protected] ank -policy hosts host/wdesk3.tnet.loc -pw MYPASSWORDHERE I then did the following to the Windows 7 client (aka wdesk3): Made sure the IP address was supplied by my DHCP server and dc1.tnet.loc pings OK Set the internet time server to my Linux server (aka dc1.tnet.loc) Used ksetup to configure the realm: ksetup /SetRealm TNET.LOC ksetup /AddKdc dc1.tnet.loc ksetup /SetComputerPassword MYPASSWORDHERE ksetup /MapUser * * After some googling I found that DES encryption is disabled in Windows 7 by default, so I turned the policy on to support DES encryption over Kerberos Then I rebooted the Windows client However after doing all that I still cannot log in from my Windows client. :( Looking at the logs on the server, the request looks fine and everything works great, I think the issue is that the response from the KDC is not recognized by the Windows client and a generic login error appears: "Login Failure: User name or password is invalid". The log file for the server looks like this (I tail'ed this so I know it's happening when the Windows machine attempts the login): Screen-shot: http://dl.dropbox.com/u/577250/email/login_attempt.png If I supply an invalid realm in the login window I get a completely different error message, so I don't think it's a connection problem from the client to the server? But I can't find any error logs on the Windows machine? (anyone know where these are?) If I try: runas /netonly /user:[email protected] cmd.exe everything works (although nothing appears in the server logs, so I'm wondering if it's not touching the server for this??), but if I run: runas /user:[email protected] cmd.exe I get the same authentication error. Any Kerberos gurus out there who can give me some ideas as to what to try next? Pretty please?
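
    One thing worth checking on the KDC side is which encryption types the user and host principals actually have keys for, since a Windows 7 box without the DES policy only speaks RC4/AES, and a key mismatch produces exactly this kind of generic logon failure. A hedged diagnostic sketch on dc1 (the host principal name is from the question; substitute your own user principal):

      kadmin.local -q "getprinc host/wdesk3.tnet.loc"
      kadmin.local -q "getprinc <your-user-principal>"

    If the listed key enctypes don't include anything the client is willing to use, re-keying the principals after adjusting supported_enctypes in kdc.conf (and re-running ksetup /SetComputerPassword so the host keys match) is the usual next step.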

    Read the article

  • IIS6 Wildcard Mapping to ASP.NET - no file extension results in IIS 404

    - by Ian Robinson
    I'm trying to perform what I understand to be a relatively simple task. I'd like to remove the extensions from the URLs on my website. I have the proper set up in my application to handle and rewrite the URLs - the trouble is I can't get past IIS to actually get to my application without the extensions. The details: I'm running IIS6 on Windows Server 2003. I've gone into the web site for my application, gone to the home directory tab, clicked "Configuration" and added a wildcard map to the following file: c:\windows\microsoft.net\framework\v2.0.50727\aspnet_isapi.dll Which I verified is the same as what is used above in the application extensions portion by .ascx, etc. If I navigate to http://mywebsite.com/Blogs the result is as follows: HTTP/1.1 404 Not Found Content-Length: 1635 Content-Type: text/html Server: Microsoft-IIS/6.0 X-Powered-By: ASP.NET Date: Thu, 14 Jan 2010 15:04:49 GMT Which seems to be a standard IIS 404 message. If I navigate to http://mywebsite.com/Blogs.aspx I get my ASP.NET app.... How can I troubleshoot this? I feel like I've double checked everything a dozen times but to no avail. I must be missing something obvious. Update: Here are the exact instructions given by the asp.net url rewriter that I'm using: IIS 6.0 - Windows 2003 Server open property page for website / virtual directory. click the 'home directory' tab click the 'configuration' button, select the 'mappings' tab click 'insert' next to the 'Wildcard application maps' section browse to the aspnet_isapi.dll (normally at c:\windows\microsoft.net\framework\v2.0.50727\aspnet_isapi.dll) Ensure that 'check that file exists' is unchecked Click OK, OK, OK to close and apply changes Update 2: I have yet to find a resolution for this. The application does not seem to be receiving the request from IIS, any further ideas?

    Read the article

  • Gnome-Applets Package Causing Gnome to Segfault

    - by FranticPedantic
    When I boot Ubuntu I get a bunch of windows saying "The panel encountered a problem while loading: OAFIID:GNOME_ShowDesktopApplet", with a bunch of others (5) besides ShowDesktopApplet. I am able to launch a terminal, and I do have the taskbar at the top, although the bar at the bottom that holds the windows is blank, so when I minimize a window it disappears. Furthermore, when I try to launch most applications (Firefox for example) it segfaults. I have googled the error but it seems to be a common but general issue, and a lot of the solutions that worked for others didn't work for me. I tried deleting the gconf, gnome, and gnome2 folders in my home directory. I tried deleting a bunch of the gnome folders (although I might have done this wrong, I was confused as to which ones). I have also tried to use apt-get and dpkg to re-install ubuntu-desktop and gnome-applets. However, when I try apt-get install ubuntu-desktop I get errors regarding /var/cache/apt/archives/gnome-applets_2.28.0-0ubunt2_amd64.deb. I tried dpkg --configure -a and interestingly I get "Package gnome-applets is not installed". When I try to use apt-get to install it, that same pesky problem pops up: error processing /var/cache/apt/archives/gnome-applets_2.28.0-0ubunt2_amd64.deb (--unpack): subprocess new post-removal script returned error exit status 245; subprocess /usr/bin/dpkg returned an error code (1). I see this file popping up in a lot of the suggested solutions. Any ideas?
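
    It's the failing post-removal script (exit status 245) that keeps wedging both dpkg and apt. A common, if blunt, workaround is to neutralise that maintainer script so the package can be removed and reinstalled cleanly; this is a hedged sketch that assumes the script lives in the standard dpkg location and may not apply if it simply isn't there:

      # make the failing post-removal script a no-op
      printf '#!/bin/sh\nexit 0\n' | sudo tee /var/lib/dpkg/info/gnome-applets.postrm
      sudo chmod 755 /var/lib/dpkg/info/gnome-applets.postrm
      # then let apt sort out the half-configured state and reinstall
      sudo apt-get -f install
      sudo apt-get install gnome-applets ubuntu-desktop

    If that clears the packaging error but applications still segfault, the problem is wider than gnome-applets and a broader package-consistency check is the next stop.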

    Read the article

  • Drop outs when accessing share by DFS name.

    - by Stephen Woolhead
    I have a strange problem, aren't they all! I have a DFS root \\domain\files\vms, it has a single target on a different server than the namespace. I can copy a test file set from the target directly via \\server\vms$\testfiles and all is well, the files copy fine. I have repeated these tests many times. If I try to copy the files from the DFS root I get big pauses in the network traffic, about 50 seconds every couple of minutes, all the traffic just stops for the copy. If I start another copy between the same two machines during this pause, it starts copying fine, so I know it's not an issue with the disks on the server. Every once in a while the copy will fail, no errors, the progress bar will just zip all the way to 100% and the copy dialog will close. Checking the target folder shows that the copy is incomplete. I've moved the LUN to another server and had the same problem. The servers are all 2008 R2, the clients are Vista x64, Windows 7 x64 and 2008 R2, all have the same problem. Anyone got any ideas? Cheers, Stephen More Information: I've been running a NetMon trace on the connection when the file copy fails, and what stands out is that when opening a file the copy completes on, the SMB command looks like this: SMB2: C CREATE (0x5), Name=Training\PDC2008\BB34 Live Services Notifications, Awareness, and Communications.wmv@#422082, Context=DHnQ, Context=MxAc, Context=QFid, Context=RqLs, Mid = 245376 SMB2: R CREATE (0x5), Context=MxAc, Context=RqLs, Context=DHnQ, Context=QFid, FID=0xFFFFFFFF00000015, Mid = 245376 But for the last file, when the copy dialog closes, it looks like this: SMB2: C CREATE (0x5), Name=gt\files\Media\Training\PDC2008\BB36 FAST Building Search-Driven Portals with Microsoft Office SharePoint Server 2007 and Microsoft Silverlight.wmv@#859374, Context=DHnQ, Context=MxAc, Context=QFid, Context=RqLs, Mid = 77 SMB2: R , Mid = 77 - NT Status: System - Error, Code = (58) STATUS_OBJECT_PATH_NOT_FOUND The main difference seems to be in the name, one is relative to the open file share, the other has gained the gt\files\media prefix which is the name of the DFS target. These failures are always preceded by a logoff and logon of the SMB target. Might have to bump this one to PSS.

    Read the article

  • Xenserver 6.2 cannot send alert using gmail smtp

    - by Crimson
    I'm using XenServer 6.2 and have configured ssmtp.conf and mail_alert.conf in order to receive alerts through email. I followed the instructions in the http://support.citrix.com/servlet/KbServlet/download/34969-102-706058/reference.pdf document. I'm using Gmail SMTP to send the emails. When I try [root@xen /]# ssmtp [email protected] from the command line and send the email, there's no problem; it goes right out. But when I set some VMs to generate alerts, the alerts are generated and I see them in XenCenter, but the emails are not sent. I see this in the /var/log/maillog file: May 27 16:17:09 xen sSMTP[30880]: Server didn't like our AUTH LOGIN (530 5.7.0 Must issue a STARTTLS command first. 18sm34990758wju.15 - gsmtp) From the command line everything works fine. This is the log record for the above command-line operation: May 27 15:55:58 xen sSMTP[27763]: Creating SSL connection to host May 27 15:56:01 xen sSMTP[27763]: SSL connection using RC4-SHA May 27 15:56:04 xen sSMTP[27763]: Sent mail for [email protected] (221 2.0.0 closing connection ln3sm34863740wjc.8 - gsmtp) uid=0 username=root outbytes=495 Any ideas?
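
    The "Must issue a STARTTLS command first" rejection means the alert path is reaching smtp.gmail.com but never negotiating STARTTLS, even though the interactive run (which logs "Creating SSL connection") does. For comparison, a minimal /etc/ssmtp/ssmtp.conf that works for Gmail on the submission port looks roughly like the sketch below (all values are placeholders); it is worth confirming the alert mechanism actually reads this same file and not a second copy somewhere else:

      root=[email protected]
      mailhub=smtp.gmail.com:587
      UseSTARTTLS=YES
      AuthUser=[email protected]
      AuthPass=yourpassword
      FromLineOverride=YES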

    Read the article

  • Can't delete C:\Config.Msi\75ce84f.rbf

    - by Hugh Allen
    I can't delete C:\Config.Msi\75ce84f.rbf It's not causing any problems but it's a mystery I'd like to solve, preferably before the next reboot because it's scheduled for deletion then (see pendmoves). it's not readonly, system or hidden it's not in use by another process (according to Process Explorer) the NT security permissions aren't the problem either - I am the owner and have Full Control ; as a double-check, the Effective Permissions tab shows that I have permission to delete. Yet trying to delete the file gives "Access is Denied" from both Explorer and cmd. I can however rename it or move it to another folder on the same drive. I can also read it and Virustotal says it's clean which is what I would expect (it's just a Windows Installer temp file - a copy of some DLL I think). The relevant line from Process Monitor is: 6:52:14.3726983 PM 112 Explorer.EXE SetDispositionInformationFile C:\Config.Msi\75ce84f.rbf CANNOT DELETE Delete: True Write 1232 Background: I'm using XP SP2. I recently repaired my Adobe Reader installation to make it the default browser plugin again instead of Foxit. (there seems to be no UI to do it otherwise?) So the installer did its thing and then asked to reboot. As is my habit when rebooting is inconvenient I declined the offer and ran pendmoves to find out what files the installer had scheduled to move / delete. It wanted to delete two files with .rbf extension (rollback files) located in C:\Config.msi\. (this applies to both even though I've been speaking about one). So I tried to delete them manually and couldn't. Does anyone have any ideas what could be preventing deletion? (and I don't think it's malware even though I'm not running AV at the moment)
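
    Since the file can be renamed and moved but not deleted, and it is already listed by pendmoves, one low-effort option is to let the existing PendingFileRenameOperations entry remove it at the next reboot, or to queue your own delete with the Sysinternals MoveFile tool, sketched here:

      movefile.exe C:\Config.Msi\75ce84f.rbf ""

    An empty destination tells MoveFileEx to delete the file at boot, which sidesteps whatever is refusing the delete right now; Windows Installer's rollback machinery is the usual suspect for .rbf files it still considers part of a pending transaction.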

    Read the article

  • Appcrash and possible malware

    - by Chris Lively
    First off, I'm running MS Intune Endpoint Protection. It is completely up to date. On 10/25 @ 11:53PM I came across a site that caused Intune to freak out: Microsoft Antimalware has detected malware or other potentially unwanted software. For more information please see the following: http://go.microsoft.com/fwlink/?linkid=37020&name=Trojan:Win64/Sirefef.B&threatid=2147646729 Name: Trojan:Win64/Sirefef.B ID: 2147646729 Severity: Severe Category: Trojan Path: file:_C:\Windows\System32\consrv.dll Detection Origin: Local machine Detection Type: Concrete Detection Source: Real-Time Protection User: NT AUTHORITY\SYSTEM Process Name: C:\Windows\explorer.exe Signature Version: AV: 1.115.526.0, AS: 1.115.526.0, NIS: 10.7.0.0 Engine Version: AM: 1.1.7801.0, NIS: 2.0.7707.0 I, of course, elected to simply delete the file. Since then my machine has been randomly giving an error about "Host Process for Windows Services" stopped working. There are generally two different pieces of info: Description Faulting Application Path: C:\Windows\System32\svchost.exe Problem signature Problem Event Name: BEX64 Application Name: svchost.exe Application Version: 6.1.7600.16385 Application Timestamp: 4a5bc3c1 Fault Module Name: StackHash_52d4 Fault Module Version: 0.0.0.0 Fault Module Timestamp: 00000000 Exception Offset: 000062bdabe00000 Exception Code: c0000005 Exception Data: 0000000000000008 OS Version: 6.1.7601.2.1.0.256.27 Locale ID: 1033 Additional Information 1: 52d4 Additional Information 2: 52d47b8b925663f9d6437d7892cdf21b Additional Information 3: ed24 Additional Information 4: ed24528f3b69e8539b5c5c2158896d3e and Description Faulting Application Path: C:\Windows\System32\svchost.exe Problem signature Problem Event Name: APPCRASH Application Name: svchost.exe Application Version: 6.1.7600.16385 Application Timestamp: 4a5bc3c1 Fault Module Name: mshtml.dll Fault Module Version: 9.0.8112.16437 Fault Module Timestamp: 4e5f1784 Exception Code: c0000005 Exception Offset: 00000000002ed3c2 OS Version: 6.1.7601.2.1.0.256.27 Locale ID: 1033 Additional Information 1: 3e9e Additional Information 2: 3e9e8b83f6a5f2a25451516023078a83 Additional Information 3: 432a Additional Information 4: 432a0284c502cce3bbb92a3bd555fe65 Intune claims the machine is clean. I've also tried some of the online scanners like trendmicro, all of which claimed the system is clean. Finally, I tried the "sfc /scannow" and it said all was good. I left my machine on after I left last night and there were about 50 of those messages. Ideas on how to proceed?
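
    Given that the detection was Sirefef (ZeroAccess) and the deleted file was consrv.dll, the svchost/BEX64 crashes may be fallout from the infection having registered that DLL as part of the Windows subsystem, which now fails to load. A hedged way to check is to look for a consrv reference in the subsystem definition (the key below is the standard location; if the value turns out to be modified, fix it from a rescue environment rather than the running system):

      reg query "HKLM\SYSTEM\CurrentControlSet\Control\Session Manager\SubSystems" /v Windows

    If that value mentions consrv instead of the usual winsrv entries, a full offline scan and repair of the value is the usual remediation rather than continuing to clean the live system piecemeal.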

    Read the article

  • Override <customErrors mode="Off"/> message from .NET Framework even when in web.config detailed err

    - by GrZeCh
    Hello, is it possible to override this .NET Framework error: Server Error in '/' Application. Runtime Error Description: An application error occurred on the server. The current custom error settings for this application prevent the details of the application error from being viewed remotely (for security reasons). It could, however, be viewed by browsers running on the local server machine. ... <!-- Web.Config Configuration File --> <configuration> <system.web> <customErrors mode="Off"/> </system.web> </configuration> even if web.config (IIS7.5) is set to <httpErrors errorMode="Detailed"/> ? I'm asking because my default setting for IIS7 is <deployment retail="true" />, so no error is shown, and only adding an additional error-handling module to the website will allow errors generated by this application to be seen. That's why I want to override this message to inform users about it. Any ideas? Thanks

    Read the article

  • ssh tunnel - bind: Cannot assign requested address

    - by JosephK
    Trying to create a socks (-D) ssh tunnel - Linux box to Linux box (both centos): sshd running on remote side ok. From local machine we do / see this: ssh -D 1080 [email protected]. [email protected]'s password: bind: Cannot assign requested address (where 8.8.8.8 is really my server's IP and 'user' is my real username) I am logged into the remote side in this terminal-window. I can verify that the local port was unused prior to this command, and then used by an ssh process, after the command, via: netstat -lnp | grep 1080 So, unlike most googled-responses with this error, the problem would not seem to be the loopback interface assignment. If I try to use this tunnel with a mail client, the local-side permits the attempt (no 'proxy-failed' error), but no data / reply is returned. On the remote side, I do have "PermitTunnel yes" in my sshd_config (though 'yes' should be the default, anyway). Ideas or Clues? Here is the relevant debug-output OpenSSH_5.3p1, OpenSSL 1.0.0-fips 29 Mar 2010 debug1: Reading configuration data /etc/ssh/ssh_config debug1: Applying options for * .... debug1: Authentication succeeded (password). debug1: Local connections to LOCALHOST:1080 forwarded to remote address socks:0 debug1: Local forwarding listening on 127.0.0.1 port 1080. debug1: channel 0: new [port listener] debug1: Local forwarding listening on ::1 port 1080. bind: Cannot assign requested address debug1: channel 1: new [client-session] debug1: Entering interactive session. debug1: Sending environment. debug1: Sending env LANG = en_US.utf8 Other clue: If I run a Virtual Box on the client running Windows, open a tunnel with putty in that box, that tunnel, to the same remote server, works.
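
    The bind error here comes from the second listener: the debug output shows the 127.0.0.1:1080 forward being set up and only the ::1 (IPv6 loopback) bind failing, which by itself shouldn't break the tunnel. Two hedged ways to silence it and pin the SOCKS listener to exactly where the mail client expects it:

      ssh -D 127.0.0.1:1080 [email protected]                 # bind only the IPv4 loopback
      ssh -o AddressFamily=inet -D 1080 [email protected]     # or disable IPv6 for the session

    If the client still gets no reply through the tunnel, double-check that it is configured to use 127.0.0.1:1080 as a SOCKS5 proxy rather than an HTTP proxy.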

    Read the article

  • ASP.NET AJAX, WebSeal Junctions, and Sessions

    - by powella
    I've run across a problem with ASP.NET AJAX (hooked up to WebServices directly) when accessing our site through a WebSeal junction. Listing 11 on this page, http://www.ibm.com/developerworks/tivoli/library/t-ajaxtam/index.html, explains that requests to pages which do not result in a content type of text/html are not sent with cookie data. Hence, no session. ASP.NET AJAX requests are returned with a content type of "application/json; charset=utf-8". As such, the WebSeal junction is not appending the session cookie to the request. This results in our WebService seeing the user as invalid, due to no session information. The junction has been set up properly with the -J parameter (that's an uppercase J, which appends the required script for WebSeal to the bottom of the page - this prevents forcing IE into quirks mode) and we've confirmed that the necessary script exists in the output source. I'm up for any suggestions at this point, as I'm out of ideas. FWIW, the site runs perfectly when not accessed through the WebSeal junction.

    Read the article

  • Indefinite hang when restoring SQL 2005 database on a SQL 2008 server in EC2

    - by erinloy
    I'm trying to restore a 25 GB database backup taken from a Windows 2003/SQL 2005 machine to a Windows 2008/SQL 2008 machine in the Amazon EC2 cloud, using a .bak file and the SQL Management Studio. SQL Management Studio reports the restore reaches 100% complete, and then just hangs indefinitely (24+ hours) using a lot of CPU, until I restart the SQL Server service. Upon restart, SQL again uses a lot of CPU activity for what seems to be an indefinite amount of time, but the DB never comes online. Here are some details: - I have created two EBS volumes, one for DATA and one for LOGS, and I have set the default directories in SQL Server to the \DATA and \LOG directory on these respective volumes. (I wonder if the issue could be related to this, but the DB is too big to restore on the root drive.) - I have given the SQL Server user group full access to these directories. - The server can create a new empty test DB in these directories just fine, and can backup and restore the test DB. - I have tried both restoring of a .bak file and attaching directly to copies of the original .mdf/.ldf files, and the result is the same in both cases. - Both the .bak restore and the .mdf/.ldf attach occur from/to the EBS volumes. - I've also tried the above via SQL script, and "WITH RECOVERY", with no difference in the result, just less UI. - The backup contains two full text indexes. - I have to use "WITH MOVE" for most of the files in the backup. - There's nothing wrong with the backup or .mdf/.ldf files, as this works just fine on a Windows 2003/SQL 2005 machine in the Amazon EC2, but not Windows 2008/SQL 2008. - The DB is NOT marked as "Restoring" in the SQL Management Studio - it is just listed as a normal database, but throws errors when I try to do anything with it (expand the object browser tree, view properties, etc.) Any ideas?
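
    One hedged way to see whether the restore is still doing real work after the "100%" point (typically the data copy is done and it is replaying the log and bringing full-text catalogs online) is to watch the restore's progress and waits from another connection; sys.dm_exec_requests exposes both. A sketch, runnable from a second query window:

      SELECT session_id, command, percent_complete, wait_type, wait_time
      FROM sys.dm_exec_requests
      WHERE command LIKE 'RESTORE%';

    If the row is stuck on a particular wait rather than advancing, the wait_type usually names the bottleneck; on EC2/EBS a long recovery phase after the copy is not unusual for a 25 GB database with full-text indexes.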

    Read the article

  • CHKDSK: What option DOES NOT delete files and turn them into .chk files?

    - by CHKDSKuser
    I had a recent power outage while using my computer, with a 1TB hard drive being directly accessed as the power went out. When the power came back on, and I rebooted my computer, one of my 1TB hard drives would not register with WinXP SP3, and showed a Total Space of 0, and an Available Space of 0. The file system (NTFS) also did not register...every entry for the drive was either blank or zeroed. My assumption is that the file tables were damaged/corrupted because the drive was being directly accessed when the power went out. After doing some research, I ran CHKDSK with whatever default options it runs with (I'm not sure what they are as I didn't see them displayed). Upon completion of CHKDSK, the drive registered with WinXP as a 1TB hard drive, with an accurately-reflected amount of available space. But CHKDSK also deleted about 16GB of files from their original directories, and changed them all into sequentially-named *.chk files. My question is how can CHKDSK be run in a situation like mine where the file tables needed to be restored, but without having CHKDSK delete any files from their original directories, even if they may be damaged/corrupt? I'd simply like to be able to run CHKDSK and have it restore the file tables, and repair bad sector damage, as it did, but not have it do anything else such as delete files and convert them to CHK files. Any ideas? Or is there a CHKDSK alternative that can perform the same functions without the file deletions?

    Read the article

  • Unable to connect to Postgres on Vagrant Box - Connection refused

    - by Ben Miller
    First off, I'm new to Vagrant and Postgres. I created my Vagrant instance using http://files.vagrantup.com/lucid32.box without any trouble. I am able to run vagrant up and vagrant ssh without issue. I followed the instructions at http://blog.crowdint.com/2011/08/11/postgresql-in-vagrant.html with one minor alteration: I installed the "postgresql-8.4-postgis" package instead of "postgresql postgresql-contrib". I started the server using: postgres@lucid32:/home/vagrant$ /etc/init.d/postgresql-8.4 start While connected to the Vagrant instance I can use psql to connect to the instance without issue. In my Vagrantfile I had already added: config.vm.forward_port 5432, 5432 but when I try to run psql from localhost I get: psql: could not connect to server: Connection refused Is the server running locally and accepting connections on Unix domain socket "/tmp/.s.PGSQL.5432"? I'm sure I am missing something simple. Any ideas? Update: I found a reference to an issue like this and the article suggested using psql -U postgres -h localhost; with that I get: psql: server closed the connection unexpectedly This probably means the server terminated abnormally before or while processing the request.
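
    A connection arriving through the forwarded port reaches the guest over TCP, so Postgres inside the VM has to listen on a TCP socket and allow that client; by default the Ubuntu packages listen only on localhost and trust only local connections. A hedged sketch of the two changes inside the VM, using the standard lucid paths for the 8.4 packages:

      # /etc/postgresql/8.4/main/postgresql.conf
      listen_addresses = '*'
      # /etc/postgresql/8.4/main/pg_hba.conf  (add a line like this)
      host    all    all    0.0.0.0/0    md5
      # then restart and retest from the host
      sudo /etc/init.d/postgresql-8.4 restart

    After the restart, psql -h localhost -p 5432 -U postgres from the host should at least get as far as a password prompt or an authentication error, which narrows the problem to pg_hba.conf rather than the port forward.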

    Read the article

  • Time machine disk icon on boot disk

    - by Ben Lings
    The icon for Macintosh HD (my boot disk) shows as a Time Machine disk. There is a file .com.apple.timemachine.supported in the root of the disk. If I delete the file and restart the computer, the icon goes back to a normal HD icon. However, the .com.apple.timemachine.supported file is recreated at some point during boot, because when I log in again the file is back. If I then reboot again, the icon goes back to being a Time Machine one. Any ideas about what is creating this file and why? More importantly - how can I get it to stop? It looks like something thinks the boot disk should be a Time Machine volume, but what? Console.app shows the following messages at approximately hourly intervals: 19/01/2010 19:23:54 /System/Library/CoreServices/backupd[7459] Starting standard backup 19/01/2010 19:23:54 /System/Library/CoreServices/backupd[7459] Cookie file is not readable or does not exist at path: /.<12 hex digits of MAC address for en0> 19/01/2010 19:23:54 /System/Library/CoreServices/backupd[7459] Volume at path / does not appear to be the correct backup volume for this computer. (Cookies do not match) 19/01/2010 19:23:59 /System/Library/CoreServices/backupd[7459] Backup failed with error: 18 Other possibly relevant information: The boot HD isn't the original - the original failed so this is a SuperDuper'd clone of the original drive. I used to use the same disk for a SuperDuper clone as for Time Machine. These are the same symptoms as this and this.
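
    To catch whichever process is recreating the file, a hedged approach is to delete it and then watch filesystem activity for that path; fs_usage ships with OS X and shows the responsible process name:

      sudo rm /.com.apple.timemachine.supported
      sudo fs_usage -w -f filesys | grep timemachine.supported

    Given the backupd log entries, backupd itself believing / is a leftover backup volume from the days when the same disk held both the SuperDuper clone and the Time Machine backups is the obvious suspect; the fs_usage output should confirm or rule that out.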

    Read the article

  • Problem booting virtual machine after converting VMDK to VHD

    - by vg1890
    I used the VMWare VCenter Converter Standalone Client to convert a physical drive on my old PC to a virtual drive. The conversion worked fine and I ended up with a valid VMDK file. Next, I wanted to convert the VMDK to a VHD for use with Microsoft Virtual PC, since that's what I use on my new box. I used WinImage for the conversion and that worked fine, too. I can access the files from the virtual drive through WinImage. However, when I create a new virtual machine using Virtual PC and add the existing VHD file, the machine doesn't boot. The initial boot screen flashes with the amount of RAM and then the screen goes black. If I turn off the VM and reboot in safe mode I can see the drivers being loaded until eventually it gets to crcdisk.sys and hangs indefinitely. Any ideas how to fix this? I'm not opposed to starting over from scratch if there's another method to turn my physical machine into a Virtual PC VM. Thanks! EDIT - I should add that the virtual drive is a system boot drive and not a secondary drive. EDIT - I tried booting from the install CD and doing a repair. The result was that the system could not be repaired due to a "driver error."

    Read the article

  • Unable to execute gs program: No such file or directory

    - by Imran
    I've set up CUPS + Avahi on my NAS box in order to enable AirPrint with my existing network printer. Printing a test page via CUPS and printing using lp both work fine, and I am able to see my printer in the printer list on my iOS device. However, when sending a print job from my iOS device the printer status is set to paused and nothing prints. When checking the error_log I found these lines, which I believe show the cause of the error: D [04/Sep/2012:03:20:25 +0100] [Job 11] Started filter gs (PID 7485) D [04/Sep/2012:03:20:25 +0100] [Job 11] Started filter pstops (PID 7486) D [04/Sep/2012:03:20:25 +0100] [Job 11] Set job-printer-state-message to "Unable to execute gs program: No such file or directory", current level=ERROR D [04/Sep/2012:03:20:25 +0100] [Job 11] PID 7485 (gs) stopped with status 1! D [04/Sep/2012:03:20:25 +0100] [Job 11] PID 7486 (pstops) stopped with status 1! D [04/Sep/2012:03:20:25 +0100] [Job 11] Backend returned status 1 (failed) D [04/Sep/2012:03:20:25 +0100] [Job 11] Printer stopped due to backend errors; please consult the error_log file for details. I have installed Ghostscript, so I'm not quite sure why it's saying it's unable to execute the program, unless there are configurations for gs that I haven't set yet. Any ideas?
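
    Since the job dies in the gs filter stage with "No such file or directory", the first thing to confirm is that a gs binary exists at a path the CUPS filters can actually reach; on a NAS build Ghostscript often lands in /opt or /usr/local while the filters only look on their own limited PATH. A hedged sketch:

      which gs                               # where a login shell finds Ghostscript
      grep ServerBin /etc/cups/cupsd.conf    # where CUPS keeps its filter programs
      ln -s "$(which gs)" /usr/bin/gs        # expose gs somewhere the filters will find it

    Once the filter stops failing, the paused queue can be released with cupsenable or from the CUPS web interface. The iOS jobs likely differ from the lp test because they take a filter chain that needs Ghostscript, while the test page may not.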

    Read the article

  • WebDav rename fails on an Apache mod_dav install behind NginX

    - by The Daemons Advocate
    I'm trying to solve a problem with renaming files over WebDav. Our stack consists of a single machine, serving content through Nginx, Varnish and Apache. When you try to rename a file, the operation fails with the stack that we're currently using. To connect to WebDav, a client program must: Connect over https://host:443 to NginX NginX unwraps and forwards the request to a Varnish server on http://localhost:81 Varnish forwards the request to Apache on http://localhost:82, which offers a session via mod_dav Here's an example of a failed rename: $ cadaver https://webdav.domain/ Authentication required for Webdav on server `webdav.domain': Username: user Password: dav:/> cd sandbox dav:/sandbox/> mkdir test Creating `test': succeeded. dav:/sandbox/> ls Listing collection `/sandbox/': succeeded. Coll: test 0 Mar 12 16:00 dav:/sandbox/> move test newtest Moving `/sandbox/test' to `/sandbox/newtest': redirect to http://webdav.domain/sandbox/test/ dav:/sandbox/> ls Listing collection `/sandbox/': succeeded. Coll: test 0 Mar 12 16:00 For more feedback, the WebDrive windows client logged an error 502 (Bad Gateway) and 303 (?) on the rename operation. The extended logs gave this information: Destination URI refers to different scheme or port (https://hostname:443) (want: http://hostname:82). Some other Restrictions: Investigations into NginX's Webdav modules show that it doesn't really fit our needs, and forwarding webdav traffic to Apache isn't an option because we don't want to enable Apache SSL. Are there any ways to trick mod_dav to forward to another host? I'm open to ideas :).
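
    The "Destination URI refers to different scheme or port" line is the classic failure mode for DAV MOVE/COPY behind an SSL-terminating proxy: the client sends Destination: https://host:443/..., while mod_dav believes it is serving http://host:82 and refuses what looks like a cross-server move. One commonly used workaround, sketched here for the Apache vhost on port 82 (it assumes mod_headers is loaded), is to rewrite the header before mod_dav evaluates it:

      RequestHeader edit Destination ^https: http: early

    Rewriting the Destination header at the Nginx layer would achieve the same thing, but keeping the fix in the Apache config places it next to the mod_dav configuration it exists to satisfy.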

    Read the article
