Search Results

Search found 963 results on 39 pages for 'troubleshoot'.


  • ArchBeat Link-o-Rama Top 10 for November 1, 2012

    - by Bob Rhubart
    Hurricane Sandy Edition. Power outages in the Cleveland area made it impossible to publish posts on Tuesday and Wednesday. In my neighborhood most are still without power. The sound of howling winds that dominated on Monday and Tuesday has been replaced by the sound of portable generators. My internet connection was restored only after AT&T U-Verse crewmen hooked up a portable generator to power the relay station up the street. Bear in mind that Cleveland is 500 miles from the Atlantic coast.

    Mobile Development Platform Strategy Chart: ADF Mobile, WebCenter Sites, Portal, Content and Social. "Unlike desktop web focused efforts, the world of mobile has undergone change at a feverish pace," says social enterprise expert John Brunswick. His extensive post charts various resources that will help you keep up.

    ADF Essentials - The Bare Necessities | Floyd Teter. The experiment is over... And now Oracle ACE Director Floyd Teter shares his impressions after spending some time with Oracle ADF Essentials, the free version of Oracle ADF.

    Expanding the Oracle Enterprise Repository with functional documentation. Capgemini middleware specialist Marc Kuijpers shares information on how Oracle Enterprise Repository can be configured "to contain functional assets, i.e. functional designs, use cases and a logical data model" to aid in SOA governance efforts.

    A review of Oracle SOA Suite 11g Administrator's Handbook | RedStack. "More so than any other single piece of content that I have seen on the topic, it provides the information that a SOA administrator needs to know in order to successfully configure, manage, monitor, troubleshoot and backup an Oracle SOA environment." So says Oracle Fusion Middleware A-Team solution architect Mark Nelson of Oracle SOA Suite 11g Administrator's Handbook, by Ahmed Aboulnaga and Arun Pareek.

    Eating our own dog food – Oracle's internal deployment of Oracle IDM. Oracle Fusion Middleware A-Team member Brian Eidelman recommends the recent podcast on Oracle's internal deployment of Oracle OAM and OID. "This was a big project that involved migrating a bunch of critical, high volume applications to leverage OAM and OID," says Eidelman. "So I suggest you tune in to see and hear more about how we deploy our own software."

    Thought for the Day: "Anyone who says they're not afraid at the time of a hurricane is either a fool or a liar, or a little bit of both." — Anderson Cooper. Source: BrainyQuote

    Read the article

  • New Information Center - Reviewing Security For FMW 11g

    - by Daniel Mortimer
    Announcing ... Information Center: Reviewing Security For Oracle Fusion Middleware 11g [ID 1458051.2] has been published. Screenshot of ID 1458051.2

    What is an Information Center? Information Centers use widgets to aggregate knowledge content, such as support documents, product documentation and support community threads, pertinent to a given task or intent. Some widgets contain static lists; better still, others are dynamic. A dynamic widget uses query criteria to present a list of support documents relevant to the title / subject matter of the widget. The content of a dynamic widget is refreshed automatically every 24 hours. Once you are in an Information Center, you can use the left-hand menu to navigate to the other Task / Intent Information Centers (e.g. "Install and Configure", "Patch", "Troubleshoot", "Upgrade") which are available for the chosen product.

    Are Information Centers easy to find? You can go straight to the new "Reviewing Security" Information Center by using the hyperlink given above. There are, however, two other methods which make Information Centers easier to find: Browse Knowledge and Refine Your Search.

    Browse Knowledge: The "Browse Knowledge" feature is currently found in the "Knowledge" tab page in My Oracle Support. As illustrated by the screenshots below, you can find Information Centers by choosing a product (e.g. "Oracle Fusion Middleware"), a version and an action / intent. If an Information Center exists for your selection, the "Advisor Found" button is enabled. Clicking on this button will take you straight to the desired Information Center. Screenshot - Browse Knowledge 1 Screenshot - Browse Knowledge 2 Screenshot - Browse Knowledge 3

    Refine Your Search: "Refine Your Search" is a dialog which is triggered by certain keywords that you may enter into the Global Search field in the top right-hand corner of My Oracle Support. It works in a similar manner to "Browse Knowledge": choose your product and version; the appropriate Task / Intent should already be selected for you. Thereafter, click the Go button. Screenshot - Refine Your Search 1 Screenshot - Refine Your Search 2 Screenshot - Refine Your Search 3

    Read the article

  • "sr0" I/O read boot error in Xubuntu 12.10

    - by Entropicurity
    Just recently upgraded to Xubuntu 12.10 after having installed 12.04 originally. Ever since then, any time I inserted a disc into my DVD/CD reader/writer I would end up getting a number of error messages from my DVD player/audio player software, typically with any of them either crashing or returning a GStreamer backend error. So doing some research I tried to mount it manually, and it typically returned the error messages you'll see below (using the dmesg | grep "sr0" command):

        [17545.435584] end_request: I/O error, dev sr0, sector 0
        [17545.492968] sr 3:0:0:0: >[sr0]
        [17545.492981] sr 3:0:0:0: >[sr0]
        [17545.492992] sr 3:0:0:0: >[sr0]
        [17545.493003] sr 3:0:0:0: >[sr0] CDB:
        [17545.493020] end_request: I/O error, dev sr0, sector 0
        [17545.493058] EXT3-fs (sr0): error: unable to read superblock
        [17545.553224] sr 3:0:0:0: >[sr0]
        [17545.553237] sr 3:0:0:0: >[sr0]
        [17545.553247] sr 3:0:0:0: >[sr0]
        [17545.553258] sr 3:0:0:0: >[sr0] CDB:
        [17545.553275] end_request: I/O error, dev sr0, sector 0
        [17545.553312] EXT4-fs (sr0): unable to read superblock
        [17545.611482] sr 3:0:0:0: >[sr0]
        [17545.611494] sr 3:0:0:0: >[sr0]
        [17545.611504] sr 3:0:0:0: >[sr0]
        [17545.611514] sr 3:0:0:0: >[sr0] CDB:
        [17545.611531] end_request: I/O error, dev sr0, sector 0
        [17545.611568] FAT-fs (sr0): unable to read boot sector

    When I grepped this, the number of error messages just seemed to spill all over the screen, and they typically all revolve around the I/O error. I then went into VLC and tried to run the disc through there, but for some reason it won't read with the default devices. Instead I have to manually input sr0 into each of the /dev/ references and it will play just fine, until I exit the program, at which point I have to re-input everything. I have already installed all the media dependencies, both open source and restricted, through the Software Center. I just found this strange, as out of the box on 12.04 there wasn't any problem whatsoever. At this point, it could be that all my audio CDs are not in a format that is familiar to the OS (I've read in some places that ext4 has issues with this), but I don't see how that could be a problem when the error messages above show it trying EXT3 as well. I've tried my best to troubleshoot this up till now using existing topics, but it seems that this is a regular problem with a multitude of varying factors that feed into it, including but not limited to read/write problems with the format of the CD and write protection. Is there something integral that I am missing?
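
    One detail worth keeping in mind here: an audio CD contains no mountable filesystem at all, so the kernel's EXT3/EXT4/FAT probes are expected to fail with "unable to read superblock" on audio discs; the real question is whether the drive can read the disc at a lower level. A rough sketch of how to test that, assuming the drive really is /dev/sr0 (cdparanoia is in the Ubuntu repositories):

        # recent kernel chatter about the drive
        dmesg | grep -i sr0 | tail -n 20

        # ask the drive for the audio table of contents; no filesystem needed
        sudo apt-get install cdparanoia
        cdparanoia -vsQ -d /dev/sr0

    If the TOC prints cleanly, the drive and disc are fine and the fault sits in the media stack (GStreamer and friends); if it fails with read errors, suspect the drive, cable or disc instead.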

    Read the article

  • How to get wireless (Alfa) to operate at full power and speed up wireless Internet?

    - by MahboobeAlam
    I am using wireless to connect to the internet (router). My laptop has an Atheros wireless (AR 242x/542x) adapter, but the router is a little bit far away from my room, so I have to use an external wireless adapter, i.e. an Alfa (Realtek 8187), for connectivity. However, since I started using Ubuntu I have noticed that the Alfa is not working at full power, and internet speed in Ubuntu is much slower than in Windows on my laptop. When I am using Windows 7 the LED (bulb) on the Alfa blinks as it should, but in Ubuntu it doesn't blink; rather, it is on but very dim. Connection using the Atheros adapter is also the same (slow)... I have tried 4 methods (found on the Internet) to troubleshoot this matter, but with no success. It seems to me that Ubuntu/Linux is not letting these wireless adapters operate at full strength. iwconfig shows that power management is off for both. So what's the problem? Details:

    ifconfig

        eth0      Link encap:Ethernet  HWaddr 00:1f:16:1e:36:ec
                  UP BROADCAST MULTICAST  MTU:1500  Metric:1
                  RX packets:0 errors:0 dropped:0 overruns:0 frame:0
                  TX packets:0 errors:0 dropped:0 overruns:0 carrier:0
                  collisions:0 txqueuelen:1000
                  RX bytes:0 (0.0 B)  TX bytes:0 (0.0 B)
                  Interrupt:45 Base address:0x6000

        lo        Link encap:Local Loopback
                  inet addr:127.0.0.1  Mask:255.0.0.0
                  inet6 addr: ::1/128 Scope:Host
                  UP LOOPBACK RUNNING  MTU:16436  Metric:1
                  RX packets:1657 errors:0 dropped:0 overruns:0 frame:0
                  TX packets:1657 errors:0 dropped:0 overruns:0 carrier:0
                  collisions:0 txqueuelen:0
                  RX bytes:241697 (241.6 KB)  TX bytes:241697 (241.6 KB)

        wlan0     Link encap:Ethernet  HWaddr 00:22:5f:9b:24:b5
                  inet6 addr: fe80::222:5fff:fe9b:24b5/64 Scope:Link
                  UP BROADCAST MULTICAST  MTU:1500  Metric:1
                  RX packets:715460 errors:0 dropped:0 overruns:0 frame:0
                  TX packets:694246 errors:0 dropped:0 overruns:0 carrier:0
                  collisions:0 txqueuelen:1000
                  RX bytes:493539292 (493.5 MB)  TX bytes:235159393 (235.1 MB)

        wlan1     Link encap:Ethernet  HWaddr 00:c0:ca:42:14:62
                  inet addr:192.168.100.102  Bcast:192.168.100.255  Mask:255.255.255.0
                  inet6 addr: fe80::2c0:caff:fe42:1462/64 Scope:Link
                  UP BROADCAST RUNNING MULTICAST  MTU:1500  Metric:1
                  RX packets:171053 errors:0 dropped:0 overruns:0 frame:0
                  TX packets:181363 errors:0 dropped:0 overruns:0 carrier:0
                  collisions:0 txqueuelen:1000
                  RX bytes:74094659 (74.0 MB)  TX bytes:59474204 (59.4 MB)

    iwconfig

        lo        no wireless extensions.

        eth0      no wireless extensions.

        wlan0     IEEE 802.11bg  ESSID:off/any
                  Mode:Managed  Access Point: Not-Associated  Tx-Power=20 dBm
                  Retry long limit:7  RTS thr:off  Fragment thr:off
                  Power Management:off

        wlan1     IEEE 802.11bg  ESSID:"Zia"
                  Mode:Managed  Frequency:2.412 GHz  Access Point: 00:0D:F0:9C:A6:18
                  Bit Rate=54 Mb/s  Tx-Power=20 dBm
                  Retry long limit:7  RTS thr:off  Fragment thr:off
                  Power Management:off
                  Link Quality=70/70  Signal level=-31 dBm
                  Rx invalid nwid:0  Rx invalid crypt:0  Rx invalid frag:0
                  Tx excessive retries:204  Invalid misc:6610  Missed beacon:0
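
    One hint already in the output above: wlan1 reports Link Quality=70/70, Signal level=-31 dBm and Bit Rate=54 Mb/s, which is about as good as 802.11g gets, so raw transmit power is probably not the bottleneck; Tx-Power=20 dBm is the regulatory ceiling in most domains anyway. A few things worth poking at instead - a sketch, using the interface names from the post (iw comes from the iw package):

        # confirm the regulatory limit in force
        iw reg get

        # make sure the driver is not sleeping between packets
        sudo iwconfig wlan1 power off

        # pin the bitrate rather than letting rate control fall back
        sudo iwconfig wlan1 rate 54M

        # watch the counters while transferring; climbing 'Tx excessive
        # retries' and 'Invalid misc' point at interference or rate-control
        # trouble rather than transmit power
        watch -n 1 'iwconfig wlan1 | tail -n 3'

    The Tx excessive retries:204 and Invalid misc:6610 counters already shown are consistent with retransmissions eating throughput even though the link looks strong.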

    Read the article

  • MSCC: Global Windows Azure Bootcamp - 29th March 2014

    The Mauritius Software Craftsmanship Community proudly presents the Global Windows Azure Bootcamp 2014 in Mauritius. Global Windows Azure Bootcamp 2014 in Mauritius - MSCC together with Microsoft, Ceridian and Emtel. We are very happy and excited about our participation in this global event and would like to draw your attention to the official invitation letter below. Please sign up and RSVP on the official website of the MSCC. Participation is free!

    Call for action: Please create more awareness of this event in Mauritius and use the hashtag #gwabmru as well as the shortened link: http://aka.ms/gwabmru. And remember: Sharing is Caring!

    Official invitation letter to the GWAB 2014 in Mauritius: With over 130 confirmed locations around the globe, the Global Windows Azure Bootcamp is going to be a truly memorable event - and now here's your chance to take part! In April of 2013 we held the first Global Windows Azure Bootcamp at more than 90 locations around the globe! This year we want to again offer a one-day deep-dive class to help thousands of people get up to speed on discovering cloud computing applications for Windows Azure. In addition to this great learning opportunity, the hands-on labs will feature pooling a huge global compute farm to perform diabetes research! In Mauritius, the event will be organised by Microsoft Indian Ocean Islands & French Pacific in partnership with The Mauritius Software Craftsmanship Community (MSCC) and sponsored by Microsoft, Ceridian and Emtel.

    What do I need to bring? You will need to bring your own computer which can run Visual Studio 2012 or 2013 (i.e. Windows, OSX, Ubuntu with virtualization, etc.) and have it preloaded with the following:

    - Visual Studio 2012 or 2013
    - The Windows Azure SDK - http://www.windowsazure.com/en-us/develop/net/

    Optionally (or if you will not be doing just .NET labs), the following can also be installed:

    - Node.js SDK - http://www.windowsazure.com/en-us/develop/nodejs/
    - Java SDK - http://www.windowsazure.com/en-us/develop/java/
    - Doing mobile? Android? iOS? Windows Phone or Windows 8? - http://www.windowsazure.com/en-us/develop/mobile/
    - PHP - http://www.windowsazure.com/en-us/develop/php/

    More info here: http://www.windowsazure.com/en-us/documentation

    Important: Please do the installation upfront as there will be very little time to troubleshoot installations during the day.

    Read the article

  • System freezes while not in use, how do I fix this?

    - by PHLAK
    Bear with me, the following is a bit long-winded. I have Ubuntu 10.10 Desktop 64-bit installed on my laptop and up until a few weeks ago it had been running great. Then one day, while I was not using the laptop, it froze. I was logged in as my user but had locked the screen and closed the lid. I didn't notice that it had frozen until I opened the lid and wiggled the mouse to try and log in. The screen remained black and I got no response. I immediately tried Alt + F2, F3, F4, etc. but got no response. The only thing I could do was hold the power button to power off the machine. The freezing has happened as quickly as within 10-20 minutes of the system being logged off and the lid closed, and as long as 4-6 hours. My machine is NOT configured to go into standby when plugged in, and this has happened both on AC power and battery.

    Troubleshooting I have performed: I uninstalled programs I knew I had installed between when it was working fine and when it started having problems. Those programs were CrashPlan, Shutter and Conky. After uninstalling ALL of these programs the freezing still occurred. Next, I decided to SSH into the machine from my desktop and leave an htop and a tail of the syslog running. Here are screenshots of the last thing shown on both when the system froze: htop, syslog. Here is a dump of my syslog after another freeze. The freeze happened at 9:14 and I didn't notice it until about 10 minutes later and rebooted, hence the 10-minute gap from 9:14 to 9:24. In the above syslog dump I noticed a lot of "NVRM: os_raise_smp_barrier(), invalid context!" messages, and upon investigating learned they came from the proprietary Nvidia driver I had installed. Thinking this could be part of the problem, I uninstalled the Nvidia driver and reverted to using the Nouveau driver. The computer still froze after a few hours. Lastly, thinking the problem could be caused by overheating, I used compressed air to blow out any dust in the CPU vents and all other openings on the laptop. None of the above troubleshooting has helped and the freezing still occurs. What other steps can I take to troubleshoot and/or fix this problem?

    Note: Yesterday X started to eat up a lot of CPU power and eventually froze my system while I was forwarding an X session over SSH (from another PC to my laptop). I'm unsure if this is related or not, as it doesn't match any of the symptoms of the problem above. Aside from this, the system has never frozen while in use, even under heavy load.

    EDIT: I just ran Memtest86+ and it made it through two passes without any errors. Just eliminating possible causes here.
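
    Since an SSH session's htop dies with the freeze, it helps to capture state from a second machine continuously, so whatever arrived last survives the crash. A rough sketch, assuming SSH access from the desktop and that lm-sensors is installed on the laptop (run sudo sensors-detect once first); 'laptop' is a placeholder hostname:

        # stream the laptop's kernel log to the desktop; the local copy
        # keeps everything received up to the moment of the freeze
        ssh user@laptop 'tail -f /var/log/kern.log' | tee freeze-kern.log

        # log temperatures once a minute to test the overheating theory
        ssh user@laptop 'while true; do date; sensors; sleep 60; done' | tee freeze-temps.log

    If the temperature log is flat right up to the freeze, overheating is effectively ruled out along with the RAM.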

    Read the article

  • external USB HD doesn't show up on 'sudo fdisk -l' on one Ubuntu, shows up on another (both 'precise')

    - by Menelaos
    I have a 1.5TB WD external USB HDD and two Ubuntu systems, both 'precise'. When I plug the disk into system A it doesn't show up in the output of sudo fdisk -l, nor is it automatically mounted (note that that wasn't always the case - it used to appear in the past). When I plug it into system B (again Ubuntu precise) it shows up when I do a sudo fdisk -l (output appended at the end) and mounts automatically just fine. What does this discrepancy point to, and what kind of diagnostics should I run / tools should I use to troubleshoot the problem? I followed the suggestion I received to do a sudo tail -f /var/log/syslog and the output is the following when I plug, unplug and replug the USB cable on system A:

        Sep 14 23:27:09 thorin mtp-probe: checking bus 2, device 3: "/sys/devices/pci0000:00/0000:00:13.2/usb2/2-2"
        Sep 14 23:27:09 thorin mtp-probe: bus: 2, device: 3 was not an MTP device
        Sep 14 23:28:01 thorin kernel: [  338.994295] usb 2-2: USB disconnect, device number 3
        Sep 14 23:28:04 thorin kernel: [  341.808139] usb 2-2: new high-speed USB device number 4 using ehci_hcd
        Sep 14 23:28:04 thorin mtp-probe: checking bus 2, device 4: "/sys/devices/pci0000:00/0000:00:13.2/usb2/2-2"
        Sep 14 23:28:04 thorin mtp-probe: bus: 2, device: 4 was not an MTP device
        Sep 14 23:29:54 thorin AptDaemon: INFO: Quitting due to inactivity
        Sep 14 23:29:54 thorin AptDaemon: INFO: Quitting was requested

    (I guess the last two messages are irrelevant.) Output of sudo fdisk -l on system B:

        $ sudo fdisk -l

        Disk /dev/sda: 1500.3 GB, 1500301910016 bytes
        255 heads, 63 sectors/track, 182401 cylinders, total 2930277168 sectors
        Units = sectors of 1 * 512 = 512 bytes
        Sector size (logical/physical): 512 bytes / 512 bytes
        I/O size (minimum/optimal): 512 bytes / 512 bytes
        Disk identifier: 0x00070db4

           Device Boot      Start         End      Blocks   Id  System
        /dev/sda1   *        2048  2905114623  1452556288   83  Linux
        /dev/sda2      2905116670  2930276351    12579841    5  Extended
        /dev/sda5      2905116672  2930276351    12579840   82  Linux swap / Solaris

        Disk /dev/sdb: 250.1 GB, 250059350016 bytes
        255 heads, 63 sectors/track, 30401 cylinders, total 488397168 sectors
        Units = sectors of 1 * 512 = 512 bytes
        Sector size (logical/physical): 512 bytes / 512 bytes
        I/O size (minimum/optimal): 512 bytes / 512 bytes
        Disk identifier: 0x9c849c84

           Device Boot      Start         End      Blocks   Id  System
        /dev/sdb1   *          63   488375999   244187968+   7  HPFS/NTFS/exFAT

        Disk /dev/sdg: 1500.3 GB, 1500299395072 bytes
        255 heads, 63 sectors/track, 182401 cylinders, total 2930272256 sectors
        Units = sectors of 1 * 512 = 512 bytes
        Sector size (logical/physical): 512 bytes / 512 bytes
        I/O size (minimum/optimal): 512 bytes / 512 bytes
        Disk identifier: 0x0003e17f

           Device Boot      Start         End      Blocks   Id  System
        /dev/sdg1            2048  2930272255  1465135104    7  HPFS/NTFS/exFAT
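
    One thing the system A syslog shows is that the USB layer enumerates the device (disconnect, then "new high-speed USB device number 4") but no SCSI disk messages ever follow, which normally means usb-storage never attaches to it. A sketch of diagnostics to narrow that down on system A:

        # is the device visible on the bus at all?
        lsusb

        # watch kernel and udev events live while replugging the drive
        udevadm monitor --kernel --subsystem-match=usb --subsystem-match=block

        # is the usb-storage driver loaded, and did it log anything?
        lsmod | grep usb_storage
        dmesg | egrep -i 'usb-storage|sd[a-z]' | tail -n 30

    If lsusb lists the WD drive but no /dev/sd* node ever appears, comparing the dmesg tail between system A and system B should show where the two machines diverge.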

    Read the article

  • How do I prevent ISPs from killing downloads of files in mid-transfer?

    - by Gorchestopher H
    I run a small website with a few users, low traffic, mostly to share personal mp3 files with a small community. Depending on their ISP, my users can't always download or stream larger files. By larger I mean larger than 1MB. Essentially the host either stops sending, or the client stops receiving: one of the links along the connection chain simply ends its connection before the transfer completes. Traceroute shows no connection issues. There are no connection issues with short transfers that don't take more than a few seconds. It's these 10-second transfers that just get cut off. Just doing a straight download with a direct link can yield this error if you have the wrong ISP. Strangely enough, this is most common with users whose ISPs are essentially independent providers that buy service via a fiber link. Unfortunately these providers aren't very knowledgeable, are unable to do any testing, and insist it's a problem with the host. I have gotten my host to transfer my site to different servers of theirs, to the same effect. Nearly identical sites (affiliate sites actually) experience no such issue. What can I be doing to further troubleshoot this matter? How can I prove that someone is dropping the ball, and identify who that party is? Can I do a 5MB traceroute?

    EDIT: Maybe I can clear up some misconceptions with my question: The files are not very large; they are simply over 2MB. The users do not have "slow" connections; they are at least 5Mbps. This "time out" happens very quickly, in the realm of 5 seconds, so I don't know if it's a timeout or not. The user often gets 1 or 2MB in this chunk of time. I have tried streaming with a Flash player. I have tried saving the target. Forcing the download. I have tried allowing the browser to stream the file. I have tried different browsers (FF, IE, Chrome). Users are able to download identical files when on different hosts.
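
    A way to turn this into evidence: fetch the same file repeatedly from an affected user's machine and record exactly how many bytes arrive before the connection dies; a ranged request is about the closest thing there is to a "5MB traceroute". A sketch with curl - the URL is a placeholder for one of the affected mp3s:

        # log how far each attempt gets before the connection closes
        for i in 1 2 3 4 5; do
          curl -s -o /dev/null -w 'got %{size_download} bytes in %{time_total}s\n' \
            http://example.com/files/sample.mp3
        done

        # request exactly the first 5 MB; a consistent cut-off at the same
        # byte count or elapsed time suggests a middlebox enforcing a limit
        curl -v -r 0-5242879 -o /dev/null http://example.com/files/sample.mp3

    If transfers always die at roughly the same elapsed time rather than the same byte count, that points at a connection-duration limit on the affected ISPs' gear rather than a size limit, which is the kind of concrete detail that gets an upstream provider to take a ticket seriously.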

    Read the article

  • Event-Driven Debugging

    - by Brian Donahue
    Most application troubleshooting involves getting an error, analyzing the error message, and at worst, attaching a debugger to work out the real cause. What is not really covered is how to troubleshoot an application that is not errant, but is having a performance issue - and more than likely in the middle of the night when you are snug in your bed, sawing logs. What you need is an ever-vigilant cyborg who never sleeps to sit in front of your server all night, but as SkyNet is not live yet, you can settle for the next-best thing. Windows provides performance counters and alerts that can tell you when an application reaches an unacceptable threshold of naughty behavior, but although it can tattle on your brainchild, it won't be the child psychiatrist that you need to tell you why he's pulling your server's pigtails and pulling faces at the teacher. What you need is to plug a debugger into performance monitor and have it tell you what's going on with your application at the time. For this purpose, I used Microsoft's MDbgEngine as the basis for an application that will dump a program's stacks; I call it Application Slicer Dicer Wonder Dumper Super Cyborg, or StackOMatic for short. StackOMatic can look at a program's behavior and tell you if the stacks are not moving, but it can also work on the command line to dump all managed methods on the stack at will. Now that there is a command you can use to dump the stacks, all you need to do is politely tell Windows to run it when you're displeased with your creation as it's trashing the CPU of your server at 3 AM. The first step is to create a scheduled task to tell StackOMatic to dump your application. Start Task Scheduler, right-click Task Scheduler Library and then Create Task. For this exercise I'm creating a task that will dump the Red Gate SQL Monitor Base Monitor Service. In the Actions tab, I enter the path to StackOMatic and use the arguments to log the stack dump to a file: /PN:RedGate.Response.Engine.Alerting.Base.Service /OUT:c:\users\administrator\MonitorLog.txt Next, I go into Windows Server 2008's Reliability and Performance Monitor and add a new Data Collector Set. This set will produce an alert on the %Processor Time for the service. When the processor time breaches 50%, it will run the StackDumpBaseService task I created. Whenever the service misbehaves, it will append to the log file. Now when I go to work in the morning, I can see what the service was doing when it overloaded the processor and take action.

    Read the article

  • Mapping Drive Error - System Error 1808

    - by Julian Easterling
    A vendor is attempting to map and preserve a network drive using NT AUTHORITY\SYSTEM, so that it stays persistent when the interactive session on the server is lost. They were able to do this on one server (Windows 2008 R2) but not on a second computer (also Windows 2008 R2).

        D:\PsExec.exe -s cmd.exe

        PsExec v1.98 - Execute processes remotely
        Copyright (C) 2001-2010 Mark Russinovich
        Sysinternals - www.sysinternals.com

        Microsoft Windows [Version 6.1.7600]
        Copyright (c) 2009 Microsoft Corporation. All rights reserved.

        C:\Windows\system32>whoami
        nt authority\system

        C:\Windows\system32>net use
        New connections will be remembered.

        Status       Local     Remote                    Network
        --------------------------------------------------------------------
        OK           X:        \\netapp1\share1          Microsoft Windows Network
        The command completed successfully.

        C:\Windows\system32>net use q: \\netapp1\share1
        System error 1808 has occurred.

        The account used is a computer account. Use your global user account or local user account to access this server.

        C:\Windows\system32>

    I am unsure how to set up a "machine account mapping" which will preserve the drive letter of the NetApp path being mapped, so that the service account running a Windows service can continue to access the share after the interactive logon has expired on the server. Since they were able to do this on one server but not another, I'm not sure how to troubleshoot the problem. Any suggestions?

    Read the article

  • MS NPS denying access, can't validate server certificate

    - by Fred Weston
    At my office we use a Cisco WLC2504 wireless controller, and starting about a week ago we began having problems with users connecting to one of our secure wireless networks. We are running AD on Windows Server 2008 R2 and use Network Policy Server to control access to our wireless network. When I look at the logs in Event Viewer after a failed connection attempt I see an access reject message:

        Reason Code: 262
        Reason: The supplied message is incomplete. The signature was not verified.

    Looking this up on Google I found this article: http://support.microsoft.com/kb/838502. I tried disabling server certificate validation on my computer, and as soon as I did that I was able to connect to the network, so it seems that there is some sort of certificate validation issue. I'm not sure which certificate is unable to be validated or how to fix it. This used to work and stopped suddenly by itself, so I am thinking a certificate may have expired. When I go to NPS > Policies > Network Policies > My policy > Constraints > Auth methods > Microsoft PEAP and view the properties, the certificate specified there expires in 2016, so it doesn't seem as though this could be the problem. Any suggestions on how to troubleshoot this issue?

    Read the article

  • ADFS 2.1 proxy trust establishment error

    - by Tommy Jakobsen
    I'm trying to install an ADFS proxy. On our intranet we have an ADFS 2.1 server running on Windows 2012 which is working fine. Now we're trying to deploy a proxy to this one for internet access, using Windows 2012 R2's Web Application Proxy. I'm getting the following error on the proxy server, event ID 393:

        Message : An error occurred while attempting to establish a trust relationship with the Federation Server. An error occurred when attempting to establish a trust relationship with the federation service. Error: Forbidden
        Context : DeploymentTask
        Status : Error

    I'm not getting any errors on the ADFS server. I've tried different credentials: the ADFS service account, a domain administrator who is a member of the local administrators group on the ADFS server, and the local administrator account on the ADFS server. Same error message. Both ports 80 and 443 are accessible from the proxy server to the internal ADFS server, and I can access the ADFS metadata endpoint from the proxy server. I'm using the same trusted SSL certificate (wildcard) on both machines. Do you have any ideas that can help me troubleshoot this problem?

    Read the article

  • CA SiteMinder Configuration for Ubuntu

    - by Matt Franklin
    I receive the following error when attempting to start Apache through the init.d script:

        apache2: Syntax error on line 186 of /etc/apache2/apache2.conf: Syntax error on line 4 of /etc/apache2/mods-enabled/auth_sm.conf: Cannot load /apps/netegrity/webagent/bin/libmod_sm22.so into server: libsmerrlog.so: cannot open shared object file: No such file or directory

    SiteMinder does not officially support Ubuntu, so I am having trouble finding any configuration documentation to help me troubleshoot this issue. I successfully installed the SiteMinder binaries and registered the trusted host with the server, but I am having trouble getting the Apache module to load correctly. I have added the following lines to a new auth_sm.conf file in /etc/apache2/mods-available and symlinked to it in /etc/apache2/mods-enabled:

        SetEnv LD_LIBRARY_PATH /apps/netegrity/webagent/bin
        SetEnv PATH ${PATH}:${LD_LIBRARY_PATH}
        LoadModule sm_module /apps/netegrity/webagent/bin/libmod_sm22.so
        SmInitFile "/etc/apache2/WebAgent.conf"
        Alias /siteminderagent/pwcgi/ "/apps/netegrity/webagent/pw/"
        <Directory "/apps/netegrity/webagent/pw/">
            Options Indexes MultiViews ExecCGI
            AllowOverride None
            Order allow,deny
            Allow from all
        </Directory>

    UPDATE: Output of ldd libmod_sm22.so:

        $ ldd /apps/netegrity/webagent/bin/libmod_sm22.so
            linux-gate.so.1 => (0xb8075000)
            libsmerrlog.so => /apps/netegrity/webagent/bin/libsmerrlog.so (0xb7ec0000)
            libsmeventlog.so => /apps/netegrity/webagent/bin/libsmeventlog.so (0xb7ebb000)
            libpthread.so.0 => /lib/tls/i686/cmov/libpthread.so.0 (0xb7e9a000)
            libdl.so.2 => /lib/tls/i686/cmov/libdl.so.2 (0xb7e96000)
            librt.so.1 => /lib/tls/i686/cmov/librt.so.1 (0xb7e8d000)
            libstdc++.so.5 => /usr/lib/libstdc++.so.5 (0xb7dd3000)
            libm.so.6 => /lib/tls/i686/cmov/libm.so.6 (0xb7dad000)
            libgcc_s.so.1 => /lib/libgcc_s.so.1 (0xb7d9e000)
            libc.so.6 => /lib/tls/i686/cmov/libc.so.6 (0xb7c3a000)
            libsmcommonutil.so => /apps/netegrity/webagent/bin/libsmcommonutil.so (0xb7c37000)
            /lib/ld-linux.so.2 (0xb8076000)

    UPDATE: The easiest way to set environment variables for the Apache run user in Ubuntu is to edit the /etc/apache2/envvars file and add export statements for any library paths you may need.
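
    For anyone hitting the same wall: SetEnv only sets variables for request handling, not for the dynamic linker at module-load time, so the LD_LIBRARY_PATH line in auth_sm.conf never reaches ld.so and the dependent libsmerrlog.so cannot be found. The envvars route from the update is the right one; a sketch of what that looks like, using the paths from this post:

        # /etc/apache2/envvars is sourced by apache2ctl and the init script,
        # so the linker sees this when loading libmod_sm22.so
        export LD_LIBRARY_PATH=/apps/netegrity/webagent/bin${LD_LIBRARY_PATH:+:$LD_LIBRARY_PATH}

        # restart through the wrapper so envvars is re-read
        sudo /etc/init.d/apache2 restart

    An alternative with the same effect is to drop the directory into /etc/ld.so.conf.d/siteminder.conf and run sudo ldconfig, which makes the SiteMinder libraries resolvable system-wide.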

    Read the article

  • SSL23_WRITE:ssl handshake failure:s23_lib.c:177

    - by Armin
    When attempting to connect to an XMPP server over SSL, openssl fails with the following error:

        3071833836:error:140790E5:SSL routines:SSL23_WRITE:ssl handshake failure:s23_lib.c:177

    I believe that the server uses the RC4-MD5 cipher; here is the full output:

        [root@localhost ~]# openssl s_client -connect 184.106.52.124:5222 -cipher RC4-MD5
        CONNECTED(00000003)
        >>> SSL 2.0 [length 0032], CLIENT-HELLO
            01 03 03 00 09 00 00 00 20 00 00 04 01 00 80 00
            00 ff b0 c9 c2 3f 0b 0e 98 df b4 dc fe b7 e8 8f
            17 9a 34 b5 9b 17 1b 2b ac 01 dc bd 2b a9 2d 18
            44 0c
        3071866604:error:140790E5:SSL routines:SSL23_WRITE:ssl handshake failure:s23_lib.c:177:
        ---
        no peer certificate available
        ---
        No client certificate CA names sent
        ---
        SSL handshake has read 0 bytes and written 52 bytes
        ---
        New, (NONE), Cipher is (NONE)
        Secure Renegotiation IS NOT supported
        Compression: NONE
        Expansion: NONE
        ---

    Using gnutls-cli:

        [root@localhost ~]# gnutls-cli 184.106.52.124 -p 5222
        Resolving '184.106.52.124'...
        Connecting to '184.106.52.124:5222'...
        *** Fatal error: A TLS packet with unexpected length was received.
        *** Handshake has failed
        GNUTLS ERROR: A TLS packet with unexpected length was received.

    Connecting to the same server on port 5223 works fine. Using OpenSSL 1.0.1e-fips on CentOS 6.5 and OpenSSL 1.0.1f on Ubuntu 14.04.1. Any tips on how to troubleshoot this? Thanks in advance.
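
    The failure pattern itself (5223 works, 5222 fails identically for openssl and gnutls) is the classic XMPP port split: 5223 is the legacy direct-TLS port, while 5222 speaks plain XMPP first and only upgrades to TLS via STARTTLS, so a raw TLS handshake on 5222 is answered by an XML stream rather than a TLS server. The OpenSSL 1.0.1 builds mentioned above know how to do the upgrade; a sketch:

        # negotiate XMPP STARTTLS before the TLS handshake
        openssl s_client -connect 184.106.52.124:5222 -starttls xmpp

    (gnutls-cli fails the same way because it, too, starts TLS immediately; openssl's -starttls xmpp is the easier probe here.)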

    Read the article

  • IIS6 Wildcard Mapping to ASP.NET - no file extension results in IIS 404

    - by Ian Robinson
    I'm trying to perform what I understand to be a relatively simple task. I'd like to remove the extensions from the URLs on my website. I have the proper setup in my application to handle and rewrite the URLs - the trouble is I can't get past IIS to actually reach my application without the extensions. The details: I'm running IIS6 on Windows Server 2003. I've gone into the website for my application, gone to the Home Directory tab, clicked "Configuration" and added a wildcard map to the following file:

        c:\windows\microsoft.net\framework\v2.0.50727\aspnet_isapi.dll

    Which I verified is the same as what is used above in the application extensions portion by .ascx, etc. If I navigate to http://mywebsite.com/Blogs the result is as follows:

        HTTP/1.1 404 Not Found
        Content-Length: 1635
        Content-Type: text/html
        Server: Microsoft-IIS/6.0
        X-Powered-By: ASP.NET
        Date: Thu, 14 Jan 2010 15:04:49 GMT

    Which seems to be a standard IIS 404 message. If I navigate to http://mywebsite.com/Blogs.aspx I get my ASP.NET app. How can I troubleshoot this? I feel like I've double-checked everything a dozen times, but to no avail. I must be missing something obvious.

    Update: Here are the exact instructions given by the ASP.NET URL rewriter that I'm using:

    IIS 6.0 - Windows 2003 Server
    1. Open the property page for the website / virtual directory.
    2. Click the 'Home Directory' tab.
    3. Click the 'Configuration' button, select the 'Mappings' tab.
    4. Click 'Insert' next to the 'Wildcard application maps' section.
    5. Browse to the aspnet_isapi.dll (normally at c:\windows\microsoft.net\framework\v2.0.50727\aspnet_isapi.dll).
    6. Ensure that 'check that file exists' is unchecked.
    7. Click OK, OK, OK to close and apply changes.

    Update 2: I have yet to find a resolution for this. The application does not seem to be receiving the request from IIS. Any further ideas?

    Read the article

  • Can connect through Watchguard mobile VPN, but can't ping or access network drives

    - by johnnyb10
    We're having an issue in which some of our employees can no longer connect to our network drives when out of the office. We use Watchguard Mobile VPN (we have a Watchguard Firebox firewall) and the users are able to connect. That is, their status in the VPN client says "Connected" and they have the correct IP address listed as the VPN endpoint. The problem is, when they try to map drives, or even ping the IP address of a server on our network, it fails. Last week, we temporarily switched one of our Comcast modems to our backup DSL modem because the Comcast one was accidentally shut off by Comcast, and the problem seemed to start around then. We've since switched back and the problem persists, so that doesn't seem to have been it (which makes sense). But we also made other changes at the time that might have thrown something off, although we feel like we've checked them all. Plus, some people can successfully connect to network drives through the VPN. Can someone please suggest some steps to help troubleshoot? We've checked the policies on our Watchguard box, and they seem fine. We've looked at the settings on the Mobile VPN client, but nothing seems like a probable cause. Thanks.

    Read the article

  • Intel Wireless 4965AGN not achieving N throughput when connected to an Airport Express N network

    - by BenA
    I have an Intel Wireless WiFi Link 4965AGN adaptor in my laptop (HP Pavillion dv2000 series) which is connecting to a 5GHz-only 802.11n network provided by an Apple Airport Express. The network is using WPA2 encryption. My desktop is also connected to the Airport, via a Linksys WUSB600N USB adaptor. Both are running with the latest drivers, and the Airport is running the latest firmware. The Airport is also configured to use wide channels. The problem I have is that I never get throughput above 4MB/s when transferring files between the two machines. Even a pessimistic calculation shows a 270Mbps network as being capable of transfer rates well above 10MB/s. I'm pretty sure I've isolated the issue to the Intel adaptor, as wiring the desktop to the AP and using the Linksys adaptor on the laptop immediately yielded speeds limited by the 100Mbps ethernet connection. I know that 802.11n is still a draft standard, and so mixing kit from different manufacturers can easily lead to poor results, but I was just wondering if anybody else out there has had success with this Intel adaptor on an N network? Or even better, connecting it to an Airport Express? Can anybody give me any advice on how to troubleshoot this issue? I should also mention that the Airport Express doesn't allow you to manually specify channels when running in N mode, and that I've been able to rule out interference from other wireless LANs by scanning. There aren't any other 5GHz networks in my area. All ideas welcome!

    Update: A while later, I've just updated to the most recent drivers for both the Intel chip in the laptop and the USB adaptor. Unfortunately this hasn't improved things :(. If anybody has any advice it would be gratefully received.

    Read the article

  • Uploadify Flash Uploader and Random UPLOAD_ERR_CANT_WRITE errors

    - by dcneiner
    I am using Uploadify to provide progress bar support for file uploads in a PHP app I built. It works perfectly for a few uploads, then every few uploads it fails and the data from the $_FILES array reveals an UPLOAD_ERR_CANT_WRITE error (error code 7). I ran Paros proxy between my browser and the server to see the difference between a passing and a failing request. The only difference was the content separator for the multi-part post, which changes every time. I would conclude this was purely a server error, except that with a plain-jane form I cannot reproduce the error. I am not a server guy, so please let me know what information is needed to troubleshoot this and I will update the question with those details. I did place these lines in the .htaccess, but to no avail. The site is hosted on Rackspace Cloud Sites so my configuration options are limited:

        <IfModule mod_security.c>
            SecFilterEngine Off
            SecFilterScanPOST Off
        </IfModule>
        php_value upload_max_filesize 10M
        php_value post_max_size 10M
        php_value max_execution_time 200
        php_value max_input_time 200

    Read the article

  • IIS 7.5 Basic authorization issue

    - by Alsin
    When I log on using the correct user name\password (I always copy-paste them) I get a 401.1 error. The user name and password are correct (the user is created locally on the server, not a domain account). I can run a program as this user (runas /noprofile /user:tmp notepad.exe). Basic authentication's default domain is the server name, and the realm is empty. I've saved a FailedReqLogFile. AUTH_BASIC_LOGON_FAILED shows ErrorCode="Logon failure: unknown user name or bad password. (0x8007052e)" and MODULE_SET_RESPONSE_ERROR_STATUS shows ModuleName="BasicAuthenticationModule", Notification="AUTHENTICATE_REQUEST", HttpStatus="401", HttpReason="Unauthorized", HttpSubStatus="1", ErrorCode="Logon failure: unknown user name or bad password. (0x8007052e)", ConfigExceptionInfo="" And one more thing - if I use my domain login\password it works! Basic Authentication is the only enabled authentication in the application... Could you please suggest how I can troubleshoot and fix this issue? Maybe somebody has hit it before... Best regards, Alex

    UPDATE: I get 401.1 when trying to access the site from the local host. I can actually access files from a remote host.

    Read the article

  • Trouble with IIS SMTP relaying to Gmail

    - by saille
    I appreciate that similar questions have been asked about how to set up SMTP relaying with IIS's virtual SMTP server. However, I'm still completely stumped on this problem. Here's the setup: IIS 6.0 SMTP server running on a Win2k3 box with a NAT'ed IP. The company uses Gmail for all email services. An app on the box needs to send email, so normally we'd just set the app up to talk to smtp.gmail.com directly, but this app doesn't support TLS. Easy, we just set up a local SMTP relay, right? So I thought. What we have done so far:

    - Set up the IIS SMTP server to relay to smtp.gmail.com, as per these excellent instructions: http://fmuntean.wordpress.com/2008/10/26/how-to-configure-iis-smtp-server-to-forward-emails-using-a-gmail-account/
    - The local SMTP relay allows anonymous access. Both the local IP and the loopback IP have been explicitly allowed in the Connection... and Relay... dialogs.
    - Tried sending email from 2 different apps via the local SMTP server, but failed (the emails end up in the Queue folder, but never get sent).
    - The IIS logs show the conversation with the local app, but zero conversation happening with smtp.gmail.com.
    - The port used by Gmail is open outbound, and indeed the apps we have that support TLS can send email directly via smtp.gmail.com, so there is no problem with the network.

    At this point I changed the SMTP settings in the IIS SMTP server to use a different external SMTP server and hey presto, the local apps can send email via the local IIS SMTP relay. So smtp.gmail.com fails to work with our IIS SMTP relay, but another third-party SMTP service works fine. We need to use smtp.gmail.com, so how do we troubleshoot this one?
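
    One thing worth double-checking, since the IIS logs show no outbound conversation at all: smtp.gmail.com will not relay without STARTTLS plus authentication, so the virtual server's Outbound Security settings need both TLS and Basic authentication with the Gmail account, and the outbound TCP port must match the port Gmail listens on. A hand-driven session from the Win2k3 box (or a nearby machine) confirms reachability and shows what Gmail requires - a diagnostic sketch, assuming an openssl binary is available:

        # open the smarthost the way a TLS-capable client would
        openssl s_client -connect smtp.gmail.com:587 -starttls smtp -crlf
        # once connected, type:  EHLO myserver.example.com
        # the reply should include 250-AUTH ... lines; if this works but IIS
        # still logs no conversation with Gmail, the SMTP service never even
        # attempted the connection (re-check the smart host name, outbound
        # port and DNS resolution on the box)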

    Read the article

  • Cisco ASA intermittently fails to see traffic

    - by DrStalker
    users
      |
    Mikrotik -- Internet
      |
     ASA
      |
    ServerA and ServerB

    I'm trying to troubleshoot a problem with a new Cisco ASA 5505. The network design is as above - the Mikrotik is the existing router, and ServerA and ServerB used to plug directly into it. ServerA has IP 10.30.1.10, ServerB has IP 10.30.1.11. The ASA is configured with no NAT, an "allow anything" firewall, and uses the Mikrotik as its default gateway. In effect, it is currently a simple IP router; the firewall and VPN stuff will all come later once the basics are working. The problem is that access to ServerA and ServerB is erratic - sometimes it will work, sometimes it will fail. It can fail for either one of the servers only, or for both.

    When it is working:
    - The Mikrotik logs show ping packets being sent out over the proper interface
    - The ASA logs show the incoming connections

    When it is failing:
    - The Mikrotik logs show ping packets being sent out over the proper interface
    - The ASA logs show nothing reaching the ASA. This can fail for one server only (e.g. the Mikrotik is putting out packets to 10.30.1.10 and 10.30.1.11, but the ASA is only seeing packets arrive destined for 10.30.1.11)
    - It can fail for one source only (e.g. ClientA on the users network can ping 10.30.1.11, but ClientB cannot)
    - The problem can also be seen from the Mikrotik router itself; sometimes it can ping ServerA and ServerB, sometimes it can only ping one of them

    What could be causing this? I can't think of any possible cause that is intermittent and could explain why the problem may occur for one destination server and not others. edit: Link to ASA config

    Read the article

  • My computer loses network connectivity every 30 minutes

    - by Logan Garland
    My LAN-connected PC loses network connectivity every 30 minutes. It has a static IP address, and I've checked to make sure there aren't any IP conflicts on my network. If I'm streaming from that PC to my Xbox the stream will be interrupted, and it normally takes about a minute to come back online. The same happens if I'm actually on the PC and just browsing the web. I'm looking for suggestions on how to track down this issue. I've tried checking the available logs on my router to see if there is an issue with DHCP, but have been unsuccessful in finding any evidence. Any suggestions would be helpful. I can't think of any recent changes to my network, PC or software installations that may have caused this. I am a software developer and have intermediate networking knowledge.

    EDIT: During one outage I told Windows to troubleshoot the network problem and it said that it could automatically fix the problem by changing DHCP info. It basically switched my network adapter from a static address to automatically obtaining one. This did fix the issue quicker than just waiting it out, but the outage occurred again 30 minutes later even when leaving those settings.

    Read the article

  • ubuntu 9.04 pptp broken after a power failure

    - by kevin42
    I have a small Ubuntu 9.04 router set up as a NAT box and a PPTP server. After a power failure everything except the PPTP server still works. A Windows client gets to "registering your computer on the network" but then says Error 742: The remote computer does not support the required data encryption type. I did some research and I think the problem is with the ppp_mppe module. When I try to run 'modprobe ppp_mppe' it hangs indefinitely. What would cause this hang? Any ideas how I can troubleshoot this further? Thanks for the help!

    UPDATE: I am still having the problem, however I have found some more information. When the first user tries to connect to pptp, the process list shows modprobe sha1 running, and one instance of modprobe ppp_mppe for each connection attempt. If I killall modprobe at this point, the next connection attempt works, and everything is fine until the next reboot. I'm planning to do a clean install at some point in the future, but I'd really like to get to the real cause of this.
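
    A modprobe that hangs right after an unclean shutdown often points at stale module metadata or a previous load attempt stuck in the kernel, and both are cheap to check. A sketch:

        # are earlier load attempts still wedged?
        ps aux | grep '[m]odprobe'
        lsmod | egrep 'ppp|mppe|sha1'

        # the power failure may have corrupted the module dependency maps;
        # rebuilding them is harmless
        sudo depmod -a

        # confirm the module file itself is readable (rules out fs damage)
        sudo find /lib/modules/$(uname -r) -name 'ppp_mppe*' -exec md5sum {} \;

        # then retry verbosely to see where it stalls
        sudo modprobe -v ppp_mppe

    The sha1 detail in the update is suggestive: ppp_mppe pulls in the kernel's SHA1 crypto module, so if that dependency load wedges, every later modprobe queues behind it, which would match the killall workaround clearing the jam until the next reboot.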

    Read the article

  • How to inspect remote SMTP server's TLS certificate?

    - by Miles Erickson
    We have an Exchange 2007 server running on Windows Server 2008. Our client uses another vendor's mail server. Their security policies require us to use enforced TLS. This was working fine until recently. Now, when Exchange tries to deliver mail to the client's server, it logs the following:

        A secure connection to domain-secured domain 'ourclient.com' on connector 'Default external mail' could not be established because the validation of the Transport Layer Security (TLS) certificate for ourclient.com failed with status 'UntrustedRoot'. Contact the administrator of ourclient.com to resolve the problem, or remove the domain from the domain-secured list.

    Removing ourclient.com from the TLSSendDomainSecureList causes messages to be delivered successfully using opportunistic TLS, but this is a temporary workaround at best. The client is an extremely large, security-sensitive international corporation. Our IT contact there claims to be unaware of any changes to their TLS certificate. I have asked him repeatedly to please identify the authority that generated the certificate so that I can troubleshoot the validation error, but so far he has been unable to provide an answer. For all I know, our client could have replaced their valid TLS certificate with one from an in-house certificate authority. Does anyone know a way to manually inspect a remote SMTP server's TLS certificate, as one can do for a remote HTTPS server's certificate in a web browser? It could be very helpful to determine who issued the certificate and compare that information against the list of trusted root certificates on our Exchange server.
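
    openssl's s_client can speak just enough SMTP to trigger STARTTLS and then dump the certificate chain, which answers both who issued the certificate and when it expires. A sketch - the host name is a placeholder, so look up the client's real MX record first:

        # find the client's inbound mail server
        nslookup -type=mx ourclient.com

        # open an SMTP session, upgrade with STARTTLS, and show the chain
        openssl s_client -connect mail.ourclient.com:25 -starttls smtp -showcerts

        # or summarize just the issuer, subject and validity dates
        openssl s_client -connect mail.ourclient.com:25 -starttls smtp </dev/null 2>/dev/null \
          | openssl x509 -noout -issuer -subject -dates

    If the issuer line names a CA that isn't in the Exchange server's trusted root store (an in-house CA, say), that confirms the UntrustedRoot status and gives you something concrete to send back to the client's IT contact.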

    Read the article

  • Puppet Agent fails sporadically, with either timeout or "Could not find class" error

    - by smokris
    I have puppet master running on a Xen dom0, and 3 domUs syncing to it via an hourly crontab puppet agent --test. About 80% of the time, the puppet agent --test completes successfully:

        info: Retrieving plugin
        info: Caching catalog for test3
        info: Applying configuration version '1333319732'
        notice: Finished catalog run in 5.08 seconds

    The other 20% of the time, it fails midway, with errors such as the following:

        err: Could not retrieve catalog from remote server: Error 400 on SERVER: Could not find class iptables for test1 at /etc/puppet/manifests/site.pp:1 on node test1
        warning: Not using cache on failed catalog
        err: Could not retrieve catalog; skipping run

    or

        info: Retrieving plugin
        info: Caching catalog for test2
        info: Applying configuration version '1333319732'
        notice: Finished catalog run in 24.73 seconds
        err: Could not send report: Error 500 on SERVER: Internal Server Error
        private method `gsub' called for WEBrick::HTTPStatus::RequestTimeout:Class
        WEBrick/1.3.1 (Ruby/1.8.5/2006-08-25) OpenSSL/0.9.8e-rhel5 at puppet:8140

    or

        info: Retrieving plugin
        err: Could not retrieve catalog from remote server: execution expired
        warning: Not using cache on failed catalog
        err: Could not retrieve catalog; skipping run

    or

        info: Retrieving plugin
        info: Caching catalog for test3
        info: Applying configuration version '1333319732'
        notice: Finished catalog run in 9.47 seconds
        err: Could not send report: Error 408 on SERVER: Request Timeout

    During this time, I've not made any changes to the Puppet configuration — it just sporadically fails. I'm running puppet-2.7.12 on CentOS, and followed the setup instructions described on http://docs.puppetlabs.com/learning/agent_master_basic.html. Any ideas about how I can troubleshoot this?
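
    The WEBrick banner in the 500 error is a strong hint: a stock puppet master runs under WEBrick, which handles one request at a time, so when all three agents fire on the same cron minute two of them queue behind the first and can time out (the 408 and 'execution expired' flavors), and a request that lands while the master is busy re-parsing manifests can surface as the spurious 'Could not find class' flavor. Two cheap mitigations worth trying - a sketch, assuming the agents run from cron as shown; the offsets are arbitrary:

        # per-agent crontab: stagger the runs and give the master more time
        # (SHELL=/bin/bash so $RANDOM works; or just hard-code minutes 0/20/40)
        SHELL=/bin/bash
        0 * * * * sleep $((RANDOM % 300)); puppet agent --test --configtimeout=300 >/dev/null 2>&1

    If the failures stop once the runs no longer overlap, the longer-term fix in the 2.7 era is to move the master off WEBrick onto Apache with Passenger, which handles concurrent agent check-ins properly.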

    Read the article
