Search Results

Search found 7628 results on 306 pages for 'internal communications'.


  • Moving from single-site to multi-site Active Directory has broken OWA proxying

    - by messick
    Originally we had the following setup: OfficeExch01 has the Mailbox Role and CAS Role, and is in the office. CoLoExch01 had just the CAS Role; it is internet-facing and in a CoLo. Three AD domain controllers are in the default site. Users could go to https://webmail.whatever.com/owa, get proxied to OfficeExch01, and everything was great. Well, we recently set up a separate AD site and put a domain controller and the ColoExch01 server in the new site. I also made that remote DC a Global Catalog. Now, users get the following error: Outlook Web Access is not available. If the problem continues, contact technical support for your organization and tell them the following: There is no Microsoft Exchange Client Access server that has the necessary configuration in the Active Directory site where the mailbox is stored. I also see event 41 errors in the logs: The Client Access server "https://webmail.xxxxxxx.com/owa" attempted to proxy Outlook Web Access traffic for mailbox "/o=XXXXX/ou=Exchange Administrative Group (FYDIBOHF23SPDLT)/cn=Recipients/cn=xxxxxxk". This failed because no Client Access server with an Outlook Web Access virtual directory configured for Kerberos authentication could be found in the Active Directory site of the mailbox. The simplest way to configure an Outlook Web Access virtual directory for Kerberos authentication is to set it to use Integrated Windows authentication by using the Set-OwaVirtualDirectory cmdlet in the Exchange Management Shell, or by using the Exchange Management Console. If you already have a Client Access server deployed in the target Active Directory site with an Outlook Web Access virtual directory configured for Kerberos authentication, the proxying Client Access server may not be finding that target Client Access server because it does not have an internalUrl parameter configured. You can configure the internalUrl parameter for the Outlook Web Access virtual directory on the Client Access server in the target Active Directory site by using the Set-OwaVirtualDirectory cmdlet. Looking this up, I see a lot of talk about ExternalURL and InternalURL settings. However, everything worked great until we made the new AD site. I also made sure the internal CAS server's /owa virtual directory is set to use Integrated Authentication. Is there something I need to do to allow Exchange to see that I've made these AD changes?
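
    For reference, the two remedies the event text suggests can be applied from the Exchange Management Shell. A minimal sketch; the server name and URL below are placeholders for whatever the CAS in question is actually called:

        # Inspect what the /owa virtual directory currently advertises
        Get-OwaVirtualDirectory -Server OFFICEEXCH01 | fl Name,InternalUrl,*Authentication*

        # Enable Integrated Windows (Kerberos) authentication on the /owa vdir
        Set-OwaVirtualDirectory -Identity "OFFICEEXCH01\owa (Default Web Site)" -WindowsAuthentication $true

        # Give the proxying CAS an InternalUrl to locate the target by
        Set-OwaVirtualDirectory -Identity "OFFICEEXCH01\owa (Default Web Site)" -InternalUrl "https://officeexch01.corp.example.com/owa"

    After changes like these, an iisreset on the CAS is usually needed before the proxying behaviour is re-evaluated.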

    Read the article

  • mobile broadband recommendations for Lenovo T500

    - by Justin Grant
    I use a Lenovo T500 primarily in and around San Francisco, although I do some travel in the US on business and occasionally to Europe/Asia. I'm looking for a good mobile broadband option for my T500. I am admittedly baffled by the various mobile-broadband choices (3G vs. 4G, WiMax vs. LTE vs. MIMO vs. ..., etc.). My priorities are (in this order): Compatible with the Lenovo T500 and Windows 7. I realize only the AT&T accessory card is listed on Lenovo's site, but I've also heard that other cards will work in my T500 too, like the WiMax/WiFi combo card, so I'm interested in what actually works, not necessarily only what Lenovo is promoting. Reliable coverage in large US cities, especially the SF Bay Area. My iPhone has lousy coverage in many spots, so I'd be nervous about an AT&T 3G option unless the problem is with the iPhone and not AT&T's network. I'm OK with non-great coverage outside major US cities, since I don't do much travel in those areas. Speed. Faster is better. Internal card. I'd slightly prefer something I could install inside my T500 instead of a dongle on the side that might break off, although this is my lowest priority so it's not a big deal. Price. I don't want to pay over $100/month. I've tried lots of Googling and haven't come up with clear answers. I've seen lots of general overviews without recommendations, and lots of passionate opinions which don't feel objective (and don't help me understand compatibility with my hardware & geography). Can you recommend a good, objective guide online, ideally for Lenovo although a general guide is OK too, which can help me figure out which option is the best one for me? I'd also be interested in your own personal experiences of using mobile broadband with a Lenovo T500. I'll accept the answer which gets me closest to making a decision.

    Read the article

  • Authenticating with libpam-mysql and libnss-mysql (CentOS)

    - by Chris
    I'm trying to get MySQL to function as a backend for authenticating users on CentOS 6.3. So far I have successfully installed and configured libnss-mysql. I can test this by doing: # groups testuser testuser : sftp Testuser is a member of the sftp group; in fact, all MySQL-based user accounts will be hardcoded to it. The sftp group is chrooted and forced to use internal-sftp, so they cannot do anything but access their home directory. Then I configured pam-mysql and PAM to allow MySQL logins. This also works... when SELinux is not enforcing. When I do setenforce 1, users can no longer log in. Error: Permission denied, please try again. This is my pam_mysql.conf file: users.host=localhost users.db_user=nss-pam-user users.db_passwd=*********** users.database=sftpusers users.table=users users.user_column=username users.password_column=password users.password_crypt=6 verbose=1 My /etc/pam.d/sshd: #%PAM-1.0 auth sufficient pam_sepermit.so auth include password-auth auth required pam_mysql.so config_file=/etc/pam_mysql.conf account sufficient pam_nologin.so account include password-auth account required pam_mysql.so config_file=/etc/pam_mysql.conf password include password-auth # pam_selinux.so close should be the first session rule session required pam_selinux.so close session required pam_loginuid.so # pam_selinux.so open should only be followed by sessions to be executed in the user context session required pam_selinux.so open env_params session optional pam_keyinit.so force revoke session include password-auth And to be complete, the contents of some log files. /var/log/secure: Nov 20 14:52:20 hostname unix_chkpwd[4891]: check pass; user unknown Nov 20 14:52:20 hostname unix_chkpwd[4891]: password check failed for user (testuser) Nov 20 14:52:20 hostname sshd[4880]: pam_unix(sshd:auth): authentication failure; logname= uid=0 euid=0 tty=ssh ruser= rhost=192.168.10.107 user=testuser Nov 20 14:52:22 sftpusers sshd[4880]: Failed password for testuser from 192.168.10.107 port 51849 ssh2 /var/log/audit/audit.log: type=USER_AUTH msg=audit(1353420107.070:812): user pid=5285 uid=0 auid=500 ses=24 subj=unconfined_u:system_r:sshd_t:s0-s0:c0.c1023 msg='op=pubkey acct="testuser" exe="/usr/sbin/sshd" hostname=? addr=192.168.10.107 terminal=ssh res=failed' type=USER_AUTH msg=audit(1353420112.312:813): user pid=5285 uid=0 auid=500 ses=24 subj=unconfined_u:system_r:sshd_t:s0-s0:c0.c1023 msg='op=PAM:authentication acct="testuser" exe="/usr/sbin/sshd" hostname=192.168.10.107 addr=192.168.10.107 terminal=ssh res=failed' type=USER_AUTH msg=audit(1353420112.456:814): user pid=5285 uid=0 auid=500 ses=24 subj=unconfined_u:system_r:sshd_t:s0-s0:c0.c1023 msg='op=password acct="testuser" exe="/usr/sbin/sshd" hostname=? addr=192.168.10.107 terminal=ssh res=failed' I tried to let audit2why explain the problem, but it remains silent even though there are some errors. Does anyone see the problem? Thanks! EDIT: Turns out it's almost working: with setenforce 0 I can mkdir foobar, but if I do a single ls I get an error: Received message too long 16777216
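
    Since everything works under setenforce 0, the standard next step is to ask the audit subsystem exactly which access SELinux is denying and, if appropriate, build a local policy module from it. A minimal sketch, assuming the usual policycoreutils audit tools are installed; the module name pam_mysql_local is just a placeholder:

        # Show recent AVC denials (more thorough than eyeballing audit.log)
        ausearch -m avc,user_avc -ts recent

        # dontaudit rules can hide the relevant denial, which would explain
        # why audit2why stays silent; disable them temporarily:
        semodule -DB

        # Once the denial is visible, generate and load a local policy module
        grep -i denied /var/log/audit/audit.log | audit2allow -M pam_mysql_local
        semodule -i pam_mysql_local.pp

        # Rebuild with dontaudit rules re-enabled afterwards
        semodule -B

    The trailing "Received message too long 16777216" error is a separate, classic SFTP symptom: something in the login shell's startup files prints to stdout, and the SFTP client parses the first few bytes of that output as an (enormous) packet length.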

    Read the article

  • Why are the analogue stereo input and output of my M-Audio 24/96 soundcard not available to me in Ubuntu Lucid

    - by user37968
    I have installed Lucid on an old Mac PowerPC G4 desktop with an M-Audio Audiophile 24/96 soundcard. The only inputs and outputs I can select in the audio preferences are digital ones, for the digital input and output. "lspci -v" shows the card as so: 0001:10:13.0 Multimedia audio controller: VIA Technologies Inc. ICE1712 [Envy24] PCI Multi-Channel I/O Controller (rev 02) Subsystem: VIA Technologies Inc. Device d634 Flags: bus master, medium devsel, latency 16, IRQ 53 I/O ports at 0440 [size=32] I/O ports at 04b0 [size=16] I/O ports at 04a0 [size=16] I/O ports at 0400 [size=64] Capabilities: <access denied> Kernel driver in use: ICE1712 Kernel modules: snd-ice1712 "cat /proc/asound/cards" as so: 0 [Tumbler ]: PMac Tumbler - PowerMac Tumbler PowerMac Tumbler (Dev 21) Sub-frame 0 1 [M2496 ]: ICE1712 - M Audio Audiophile 24/96 M Audio Audiophile 24/96 at 0x440, irq 53 "aplay -L" shows these as listed: pulse Playback/recording through the PulseAudio sound server front:CARD=Tumbler,DEV=0 PowerMac Tumbler, PowerMac Tumbler Front speakers front:CARD=M2496,DEV=0 M Audio Audiophile 24/96, ICE1712 multi Front speakers surround40:CARD=M2496,DEV=0 M Audio Audiophile 24/96, ICE1712 multi 4.0 Surround output to Front and Rear speakers surround41:CARD=M2496,DEV=0 M Audio Audiophile 24/96, ICE1712 multi 4.1 Surround output to Front, Rear and Subwoofer speakers surround50:CARD=M2496,DEV=0 M Audio Audiophile 24/96, ICE1712 multi 5.0 Surround output to Front, Center and Rear speakers surround51:CARD=M2496,DEV=0 M Audio Audiophile 24/96, ICE1712 multi 5.1 Surround output to Front, Center, Rear and Subwoofer speakers iec958:CARD=M2496,DEV=0 M Audio Audiophile 24/96, ICE1712 multi IEC958 (S/PDIF) Digital Audio Output I believe it is a problem with detecting the analogue input/output. Sometimes I can get sound from the device, but it is a sheet of white noise, and tinkering makes it go away again. I don't know if that is a separate problem or if it is linked to not being able to see the analogue inputs/outputs in the sound preferences. Any help would be greatly appreciated. As for the white noise: I have installed the Envy24 control panel and spent lots of time playing with the settings, but when I can get the white noise I can never get it to a quality where I can actually hear what is being played. The internal speaker plays audio fine, and plugging in an NI Audio 4DJ via USB also plays sound, although with some static, but I believe that is due to an underpowered USB2 PCI expansion card not being able to get enough electricity to the device. Separately, I have seen other people with problems with this device, so it may be a bug in the driver, but that is another matter. I would like to get the M-Audio card working so I can begin to enjoy my music once again. As a note, I do not currently have any audio equipment capable of sending or receiving audio via the digital inputs and output, so I cannot check whether they are working. The sound preferences show a wide range of digital in and out options with various surround sound options, but no analogue ins and outs.
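
    On ice1712 cards, whether the analogue side is usable is largely a question of the card's mixer routing, so it is worth checking what ALSA itself exposes before blaming the sound preferences dialog. A minimal sketch against card 1 (the M2496, per the /proc/asound/cards output above); exact control names vary by driver version:

        # The card's native routing matrix (from alsa-tools), or the generic mixer
        envy24control
        alsamixer -c 1

        # List mixer controls from the shell
        amixer -c 1 scontrols

        # Test the analogue front output directly, bypassing PulseAudio
        speaker-test -D front:CARD=M2496,DEV=0 -c 2 -t wav

    If speaker-test comes out clean, the analogue path works at the ALSA layer, and the missing entries (and possibly the white noise) are a PulseAudio profile problem rather than a driver one.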

    Read the article

  • How do I combine static and dynamic DHCP leases on a Cisco router?

    - by Brad
    Basically, what I need is super similar to the unanswered Cisco forum question below: https://supportforums.cisco.com/message/3139749#3139749 I have a Cisco 850 Series router. I have configured a DHCP pool for the 10.0.0.0/24 network. I have excluded 10.0.0.1 - 10.0.0.99 from the DHCP pool. I want to add static DHCP mappings for a few devices, and I want DHCP to statically assign them the addresses of my choice below 100. Actually, I don't care what addresses I statically assign. They can be anything in the pool for all I care, I just want it to work. Why are you doing this? Just statically assign the IPs on the devices! I don't want to do this because I have some laptop users. They could obviously only use that static IP here. That wouldn't be a problem if they could be bothered to change a location setting or something. They can't. So it HAS to be DHCP. It also has to be static IPs because I need to forward ports to them. I know, I know, this is weird, but it's an apartment LAN/WLAN, so this isn't exactly a typical use case. Relevant sections of the config below: ip dhcp excluded-address 10.0.0.1 10.0.0.99 ! ip dhcp pool Internal-net import all network 10.0.0.0 255.255.255.0 default-router 10.0.0.1 domain-name 1770.local lease 7 ! ip dhcp pool static-pool import all origin file flash://staticmap default-router 10.0.0.1 domain-name 1770.local Contents of staticmap: *time* Aug 5 2010 09:00 AM *version* 2 !IP address Type Hardware address Lease expiration 10.0.0.100/24 1 001f.5b3e.d50a Infinite *end* You can see here I was trying addresses outside the excluded-address range to see if that would make any difference. My testing machine's MAC: mainframe:~ brad$ ifconfig en1 en1: flags=8863<UP,BROADCAST,SMART,RUNNING,SIMPLEX,MULTICAST> mtu 1500 ether 00:1f:5b:3e:d5:0a What shows up in the DHCP binding table: basestar#show ip dhcp binding Bindings from all pools not associated with VRF: IP address Client-ID/ Lease expiration Type Hardware address/ User name 10.0.0.112 0100.1f5b.3ed5.0a Aug 12 2010 10:06 AM Automatic What's up with the funny-looking MAC in the DHCP binding table?? Is what I'm trying to accomplish basically impossible? Am I going about this the wrong way? All I want is to be able to forward some ports to specific devices. The way I would do this with a consumer router is what I'm trying to do here: assign static DHCP to those devices, then configure PAT for ports on those addresses.
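
    The funny-looking MAC is actually the clue: DHCP clients commonly identify themselves with a client identifier made of the media type byte (01 for Ethernet) prepended to the MAC, which is exactly why the binding shows 0100.1f5b.3ed5.0a for 00:1f:5b:3e:d5:0a, and an IOS static mapping has to match what the client sends. One approach that avoids the staticmap file entirely is a per-host pool keyed on that client identifier; a minimal sketch, with the pool name and address as placeholders:

        ip dhcp pool LAPTOP-BRAD
           host 10.0.0.50 255.255.255.0
           client-identifier 0100.1f5b.3ed5.0a
           default-router 10.0.0.1
           domain-name 1770.local

    A manual binding like this takes precedence over the network pool for that client, and the host address must not sit inside an excluded-address range. If the file-based import is kept instead, the entry's type and value would need to correspond to the client identifier (the 01-prefixed form) rather than the bare hardware address.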

    Read the article

  • External USB HD issues with a twist (works on Windows 7 but not XP)

    - by Eruditass
    I have this older external USB HD, 160 GB. I was using it to copy my Steam games to another computer. On the source computer, Windows 7 64-bit, everything worked fine. The drive reported no errors, had no hiccups, etc. Plugging it into the Windows XP 32-bit computer, it worked fine for looking through the files and moving files around on it (no real reading/writing, just modifying the filesystem table). However, when copying files from it to my internal HD, after anywhere from a couple of seconds to tens of minutes (seemingly random times), the USB device becomes unrecognized and it reports a delayed write error. Events in the system log go like this, chronologically: (number of times displayed)xSource (Event ID): "message" 2xdisk (51): An error was detected on device \Device\Harddisk1\D during a paging operation. 1xftdisk (57): The system failed to flush data to the transaction log. Corruption may occur. 1xApplication popup (26): Windows - Delayed Write Failed : Windows was unable to save all the data for the file E:\$Mft. The data has been lost. This error may be caused by a failure of your computer hardware or network connection. Please try to save this file elsewhere. 1xntfs (50): {Delayed Write Failed} Windows was unable to save all the data for the file . The data has been lost. This error may be caused by a failure of your computer hardware or network connection. Please try to save this file elsewhere. These repeat for a while, then there are 10+ disk or ftdisk messages. Other notes: This occurs on random files at random times. The problem cannot be replicated on the Windows 7 source machine when copying from the HD to a different location on its local disk. chkdsk /f was run and found no errors. chkdsk /f/r has the delayed write issue. The drive was set to quick removal; setting it to performance in Device Manager yielded the same result. I am not writing anything to the USB external drive, so I am not sure why there is even a delayed write error (writing file access times?). The local Windows XP disk was chkdsk'd without problems. The Windows XP machine has no problems with other USB HDs. Various USB ports were attempted. Rebooting did not help. Occurs with SyncToy as well as Windows Explorer. SMART status is good on both the local drive and the external one. Lack of gaming is making me cranky.

    Read the article

  • Exchange 2007 and migrating only some users under a shared domain name

    - by DomoDomo
    I'm in the process of moving two law firms to hosted Exchange 2007, a service that the consulting company I work for offers. Let's call these two firms Crane Law and Poole Law. These two firms were ONE firm just six months ago, but split. So they have three email domains: Old firm: craneandpoole.com New firm 1: cranelaw.com New firm 2: poolelaw.com Both Firm 1 & Firm 2 use craneandpoole.com email addresses; as for the other two domains, only people who work at the respective firm use that firm's domain name, natch. Currently these two firms are still using the same pre-split internal Exchange 2007 server, where the MX records for all three domains point. Here's the problem. I'm not moving both companies at the same time. I'm moving Crane Law two weeks before Poole Law. During these two weeks, both companies need to be able to: Continue to receive emails addressed to craneandpoole.com Send emails between firms, using cranelaw.com and poolelaw.com accounts I also have a third problem: I'd like to set up all three domains in my hosting infrastructure way ahead of time, to make my own life easier. What would solve all my problems is some way to tell Exchange 2007: even though this domain exists locally, forward the message to the outside world using the public MX record as the basis for where to send it (or if I could somehow create a static route for it, that would work too). If this doesn't work, to address point #1, when I migrate Crane Law I will delete all references locally to cranelaw.com on their current Exchange server, and set up individual forwards for each of their craneandpoole.com mailboxes to forward to our hosted Exchange server. This will also take care of point #2, since cranelaw.com won't be there locally: when poolelaw.com tries to send to cranelaw.com, public MX records will be used for mail routing decisions and the mail will go to my hosted Exchange. The bummer of that, though, is that I won't be able to set up poolelaw.com ahead of time in hosted Exchange; I'll have to wait and do it on the day :( Sorry for the long and confusing post. Just wondering if there is a better or simpler way to do what I want? Three-tier forests and that kind of thing are out; this is just a two-week window where they won't be in the same place.
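
    The behaviour described ("this domain exists locally, but forward unresolved mail to the outside world") has a name in Exchange 2007: an internal relay accepted domain. A minimal sketch of the idea in the Exchange Management Shell, assuming the domains are currently authoritative accepted domains on the old server:

        # Stop claiming final authority for the shared/departing domains
        Set-AcceptedDomain -Identity "craneandpoole.com" -DomainType InternalRelay
        Set-AcceptedDomain -Identity "cranelaw.com" -DomainType InternalRelay

    With an internal relay domain, recipients that exist locally still get local delivery, while mail to unknown recipients in that domain is handed to a send connector instead of being rejected, which is what the two-week window needs; a send connector covering that address space has to exist for the relayed mail to route.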

    Read the article

  • Connecting multiple ColdFusion 10 instances to a single Apache 2.2 server

    - by Adam Cameron
    This is on Windows 7 Home Premium edition. I have got two ColdFusion 10 (updater 2) instances: "cfusion" (the default one), and "scratch". I have got a single instance of Apache 2.2 running. Within Apache, I have set up two virtual hosts, each of which needs to be served by a different ColdFusion instance. Each of the CF instances serves files fine via Tomcat's internal web server. Apache serves vanilla HTML files fine too. So both CF instances, and both virtual hosts separately work OK. I can get wsconfig.exe to connect either one of the CF instances to the Apache server, and serve CF files via Apache & that instance. However I cannot find a way of connecting the second CF instance to Apache as well, so that both CF instances are conected, each serving one of the virtual hosts. WSConfig doesn't seem to understand the notion of "multiple CF instances", and the changes it makes to the httpd.conf (via mod_jk.conf) does not seem to be implemented in such a way as to accommodate multiple CF instances talking to a single Apache instance, or multiple virtual hosts. I freely admit to not being confident enough with how mod_jk (or even really httpd.conf) works to be able to guess if I can change stuff to make it work. If I try to add the second CF instance using WSConfig, I just get a message "the web server is already configured for ColdFusion". Be that as it may... not the instance of ColdFusion I want to connect it to! If I remove the existing connector to whichever instance is already connected, I can then connect the other one no problems. Not that this helps, but it demonstrates that the CF instance can connect to Apache. This all used to be fairly straight fwd under older versions of CF and JRun :-( The only docs I have found are on the "Connect multiple Apache virtual hosts on a web server to a single ColdFusion server" page, but that specifically only deals with a single CF instance. There is no equivalent page for multiple CF instances. I'm kinda hoping I can move some of the mod_jk config into my virtual host entries in httpd-vhosts.conf (this is how it used to work for JRun), but I've no idea what to put where. I think I've covered all the necessary info here? If not, sing out and I'll add more. Thanks. PS: tried to specifically tag this as "ColdFusion-10" as the answer will be different from previous CF versions, but it won't let me cos my rep on this site is too low (odd how it doesn't consider my rep from other S/O sites...). If someone with sufficient rep can add it, that'd be cool: it's probably a valid tag to have. Ta.
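
    For what it's worth, plain mod_jk does support this shape of deployment: one AJP worker per ColdFusion/Tomcat instance in workers.properties, then per-virtual-host JkMount directives instead of the global ones wsconfig writes. A minimal hand-written sketch; the AJP ports are placeholders and have to match each instance's server.xml:

        # workers.properties
        worker.list=cfusion,scratch
        worker.cfusion.type=ajp13
        worker.cfusion.host=localhost
        worker.cfusion.port=8012
        worker.scratch.type=ajp13
        worker.scratch.host=localhost
        worker.scratch.port=8013

        # httpd-vhosts.conf
        <VirtualHost *:80>
            ServerName site-one.local
            JkMount /*.cfm cfusion
            JkMount /*.cfc cfusion
        </VirtualHost>

        <VirtualHost *:80>
            ServerName site-two.local
            JkMount /*.cfm scratch
            JkMount /*.cfc scratch
        </VirtualHost>

    JkMount is valid inside <VirtualHost> blocks (JkMountCopy controls whether global mounts are inherited), so each host can be pinned to its own instance; the wsconfig-generated mod_jk.conf would need its global JkMount lines removed so it stops claiming every request for one instance.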

    Read the article

  • Win7 x64 unresponsive for a minute or so. HD failing?

    - by Gaia
    On a fully updated Win7 x64, every so often the system stalls for a minute or so. This has been going on for a couple of months now. By stalling I mean the mouse responds and I can move windows around, but any window, any program, that is open becomes whitish when I select it, AND any new programs will not open. It doesn't matter what kind of program it is. When the stall stops, all the clicks I made (opening new programs, for example) take effect. Nothing shows up consistently (as in every time this happens) in the event log. Today, though, I was able to find something, but it doesn't reveal much other than that the "system was unresponsive". It's a 7009 for "A timeout was reached (30000 milliseconds) while waiting for the Windows Error Reporting Service service to connect." It doesn't matter if I have any USB devices plugged in or not. I've run Microsoft Security Essentials and Malwarebytes. While the machine is unresponsive, I've noticed that Drive D (the other partition on the single internal HD in this laptop) is displayed oddly in Explorer; this never occurs with Drive C or any other drive on the machine. [The post included screenshots of the SMART report for the physical drive and of a read benchmark by HD Tune 5 Pro, probably the most telling piece of the puzzle.] Isn't this alone enough to see there is a problem with the drive, regardless of whether the unresponsiveness is caused by such a purported problem? Here is a short hardware report: Computer: LENOVO ThinkPad T520 CPU: Intel Core i5-2520M (Sandy Bridge-MB SV, J1) 2500 MHz (25.00x100.0) @ 797 MHz (8.00x99.7) Motherboard: LENOVO 423946U Chipset: Intel QM67 (Cougar Point) [B3] Memory: 8192 MBytes @ 664 MHz, 9.0-9-9-24 - 4096 MB PC10600 DDR3 SDRAM - Samsung M471B5273CH0-CH9 - 4096 MB PC10600 DDR3 SDRAM - Patriot Memory (PDP Systems) PSD34G13332S Graphics: Intel Sandy Bridge-MB GT2+ - Integrated Graphics Controller [D2/J1/Q0] [Lenovo] Intel HD Graphics 3000 (Sandy Bridge GT2+), 3937912 KB Drive: ST320LT007, 312.6 GB, Serial ATA 3Gb/s Sound: Intel Cougar Point PCH - High Definition Audio Controller [B2] Network: Intel 82579LM (Lewisville) Gigabit Ethernet Controller Network: Intel Centrino Advanced-N 6205 AGN 2x2 HMC OS: Microsoft Windows 7 Professional (x64) Build 7601 The drive is less than 1 year old. Do I have a defective drive? Seagate Tools diag says there is nothing wrong with the drive... UPDATE: I noticed that the Windows Error Reporting service entered the running state, then the stopped state, and the gap between the two events was exactly 2 minutes. Which error it was trying to report I don't know. I checked the "Reliability Monitor" and it shows no errors to be reported. I've disabled the Windows Error Reporting service to see if the problem stops.

    Read the article

  • Mercurial browser on Windows 2003 takes several refreshes before displaying repositories

    - by Tim Murphy
    When attempting to browse my Mercurial repositories, it usually takes several refreshes before the repository list is displayed. The configuration is as follows: Windows Server 2003 (dedicated machine hosted by http://www.server4you.com/); site has anonymous password protection with self-signed SSL; Mercurial 1.5.3; Python 2.6.5; Python for Windows 32 extensions build 214 for py2.6; isapi-wsgi 0.4.2. The repositories are being served via ISAPI using the standard hgwebdir_wsgi.py file (copy below). Other problems with the repository server: Before doing a clone/push/etc. I have to browse the repositories first, otherwise hg on my local machine cannot locate the site. I have one repository with a large changeset that, after a minute or so, throws the error "abort: error: An existing connection was forcibly closed by the remote host". Will be asking another question for that problem. What can I do to start tracking down this problem?

    hgwebdir_wsgi.py:

        # Configuration file location
        hgweb_config = r'C:\Public\Mercurial\WebSite\hgweb.config'

        # Global settings for IIS path translation
        path_strip = 0   # Strip this many path elements off (when using url rewrite)
        path_prefix = 0  # This many path elements are prefixes (depends on the
                         # virtual path of the IIS application).

        import sys

        # Adjust python path if this is not a system-wide install
        #sys.path.insert(0, r'c:\path\to\python\lib')

        # Enable tracing. Run 'python -m win32traceutil' to debug
        if hasattr(sys, 'isapidllhandle'):
            import win32traceutil

        # To serve pages in local charset instead of UTF-8, remove the two lines below
        import os
        os.environ['HGENCODING'] = 'UTF-8'

        import isapi_wsgi
        from mercurial import demandimport; demandimport.enable()
        from mercurial.hgweb.hgwebdir_mod import hgwebdir

        # Example tweak: Replace isapi_wsgi's handler to provide better error message
        # Other stuff could also be done here, like logging errors etc.
        class WsgiHandler(isapi_wsgi.IsapiWsgiHandler):
            error_status = '500 Internal Server Error'  # less silly error message

        isapi_wsgi.IsapiWsgiHandler = WsgiHandler

        # Only create the hgwebdir instance once
        application = hgwebdir(hgweb_config)

        def handler(environ, start_response):
            # Translate IIS's weird URLs
            url = environ['SCRIPT_NAME'] + environ['PATH_INFO']
            paths = url[1:].split('/')[path_strip:]
            script_name = '/' + '/'.join(paths[:path_prefix])
            path_info = '/'.join(paths[path_prefix:])
            if path_info:
                path_info = '/' + path_info
            environ['SCRIPT_NAME'] = script_name
            environ['PATH_INFO'] = path_info
            return application(environ, start_response)

        def __ExtensionFactory__():
            return isapi_wsgi.ISAPISimpleHandler(handler)

        if __name__ == '__main__':
            from isapi.install import *
            params = ISAPIParameters()
            HandleCommandLine(params)

    hgweb.config:

        [paths]
        / = C:\Public\Mercurial\Repositories\*

        [web]
        allow_archive = bz2 gz zip  ; Allows archive downloads.
        allow_push = ########       ; Users that are allowed to push.

    Read the article

  • Cisco ASA SSL VPN options?

    - by JonH
    Disclaimer: I am not a network admin, so I may be wrong here, but I thought asking would help. I'm a developer, mainly on the .NET framework, as well as helping get a mobile intranet app working. Because this app is only allowed to be used on our network, I can easily run it over our wireless network connection within our building. All is fine and dandy, but we'd also like to be able to run this mobile app at, say, a customer plant using VPN software. I thought surely this could be easy, as we exclusively use Samsung S4 phones, so I downloaded Cisco's AnyConnect for Samsung client to let us VPN... it's right on the Play Store. Sure enough, it doesn't work. I mention it to our network admin, who says it's not possible since we have old technology that doesn't support SSL. He mentions we'd have to upgrade all of our hardware, the firewall, etc. to get this to work. We really need VPN on our phones, not only for this app but for other internal apps, etc. He did mention the following: We can't upgrade the software on our ASA, because we don't have enough memory for the new version (the ASA is very old). We can't add more memory, so we would have to get a new firewall, which I have been told I cannot do. In addition he also mentioned: The Samsung AnyConnect client uses SSL to connect. With the current (old) version of software that our firewall is running, the SSL connections are unreliable. We need different hardware in order to upgrade our firewall, which we are unable to attain at this time. This is the same reason that Windows 8 clients are not able to connect. I am curious, hence me asking. VPNs seem to be fairly simple to set up. What other options do I have, aside from making this a public site or web service that consumes this data over the internet, which is a complete no-no? What can we do to make this work without that much effort or cost?

    Read the article

  • UDP expected behaviour not matching test results

    - by ernst
    I have a local network topology that is structured as follows: three hosts and a switch in the middle. I am using a switch that supports 10/100/1000 Mbit/s full/half duplex connections. I have configured the hosts with static IPs 172.16.0.1-2-3/25. This is the output of ifconfig: eth0 Link encap: Ethernet HWaddr ***** inet addr:172.16.0.3 Bcast:172.16.0.127 Mask:255.255.255.128 UP BROADCAST MULTICAST MTU:1500 Metric:1 RX packets:0 errors:0 dropped:0 overruns:0 frame:0 TX packets:0 errors:0 dropped:0 overruns:0 carrier:0 collisions:0 txqueuelen:1000 RX bytes:0 (0.0 B) TX bytes:0 (0.0 B) Interrupt:16 The output on H1 and H2 matches this exactly. They are mutually reachable, since I have tested the network with ping. I have forced the ethernet interface to work at 10M with ethtool -s eth0 speed 10 duplex full autoneg on. This is the output of ethtool eth0: Supported ports: [ TP ] Supported link modes: 10baseT/Half 10baseT/Full 100baseT/Half 100baseT/Full 1000baseT/Half 1000baseT/Full Supported pause frame use: No Supports auto-negotiation: Yes Advertised link modes: 10baseT/Full Advertised pause frame use: Symmetric Advertised auto-negotiation: Yes Speed: 10Mb/s Duplex: Full Port: Twisted Pair PHYAD: 1 Transceiver: internal Auto-negotiation: on MDI-X: Unknown Supports Wake-on: g Wake-on: d Current message level: 0x000000ff (255) drv probe link timer ifdown ifup rx_err tx_err Link detected: yes I am doing an experimental test using nttcp to calculate the GOODPUT in the case where H1 and H2 send data to H3 at the same time. Since the three links are forced to the same capability, and the aggregate arrival rate at H3 would be 10M from H1 + 10M from H2 = 20M, I would expect a bottleneck effect and, due to the unreliable nature of UDP, packet loss. But this doesn't happen: the output of the nttcp application shows the same number of bytes sent and received. This is the output of nttcp on H3: nttcp -T -r -u 172.16.0.2 & nttcp -T -r -u 172.16.0.1 [1] 4071 Bytes Real s CPU s Real-MBit/s CPU-MBit/s Calls Real-C/s CPU-C/s l 8388608 13.74 0.05 4.8848 1398.0140 2049 149.14 42684.8 Bytes Real s CPU s Real-MBit/s CPU-MBit/s Calls Real-C/s CPU-C/s l 8388608 14.02 0.05 4.7872 1398.0140 2049 146.17 42684.8 1 8388608 13.56 0.06 4.9500 1118.4065 2051 151.28 34181.1 1 8388608 13.89 0.06 4.8310 1198.3084 2051 147.65 36623.0 How is this possible? Am I missing something? Any help will be gratefully appreciated. Best regards
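
    Worth noting: the posted numbers may already contain the answer. Each sender only achieved roughly 4.8-4.9 Mbit/s of goodput, so the combined arrival rate at H3 was about 9.7 Mbit/s, which fits inside its 10 Mbit/s link; nothing had to be dropped. A way to double-check is to repeat the test with a tool that counts datagram loss explicitly; a minimal sketch with iperf3 (note iperf3 serves one test at a time, hence two server instances on different ports):

        # On H3
        iperf3 -s -p 5201 &
        iperf3 -s -p 5202 &

        # On H1 and H2 simultaneously, each pushing 10 Mbit/s of UDP for 30 s
        iperf3 -c 172.16.0.3 -p 5201 -u -b 10M -t 30
        iperf3 -c 172.16.0.3 -p 5202 -u -b 10M -t 30

    The server-side report lists lost/total datagrams and jitter, which makes a genuine bottleneck at H3's 10M link immediately visible.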

    Read the article

  • My hosting server is giving memory allocation errors

    - by Usman
    I have hosted my website on a shared-hosting Linux server. Although at most 10-15 visitors come to my website daily, my WordPress website most of the time gives a 500 Internal Server Error. I accessed my server's error log, and the following errors are showing: [Tue Dec 04 08:57:45 2012] [error] [client 117.205.74.227] (12)Cannot allocate memory: couldn't create child process: /opt/suphp/sbin/suphp for /home/grasphub/public_html/index.php [Tue Dec 04 08:57:45 2012] [error] [client 117.205.74.227] (12)Cannot allocate memory: couldn't create child process: /opt/suphp/sbin/suphp for /home/grasphub/public_html/index.php [Tue Dec 04 08:57:43 2012] [error] [client 117.205.74.227] (12)Cannot allocate memory: couldn't create child process: /opt/suphp/sbin/suphp for /home/grasphub/public_html/index.php [Tue Dec 04 08:57:43 2012] [error] [client 117.205.74.227] (12)Cannot allocate memory: couldn't create child process: /opt/suphp/sbin/suphp for /home/grasphub/public_html/index.php [Tue Dec 04 08:57:42 2012] [error] [client 117.205.74.227] (12)Cannot allocate memory: couldn't create child process: /opt/suphp/sbin/suphp for /home/grasphub/public_html/index.php [Tue Dec 04 08:57:42 2012] [error] [client 117.205.74.227] (12)Cannot allocate memory: couldn't create child process: /opt/suphp/sbin/suphp for /home/grasphub/public_html/index.php [Tue Dec 04 08:57:41 2012] [error] [client 117.205.74.227] (12)Cannot allocate memory: couldn't create child process: /opt/suphp/sbin/suphp for /home/grasphub/public_html/index.php [Tue Dec 04 08:57:41 2012] [error] [client 117.205.74.227] (12)Cannot allocate memory: couldn't create child process: /opt/suphp/sbin/suphp for /home/grasphub/public_html/index.php [Tue Dec 04 08:57:40 2012] [error] [client 117.205.74.227] (12)Cannot allocate memory: couldn't create child process: /opt/suphp/sbin/suphp for /home/grasphub/public_html/index.php [Tue Dec 04 08:57:40 2012] [error] [client 117.205.74.227] (12)Cannot allocate memory: couldn't create child process: /opt/suphp/sbin/suphp for /home/grasphub/public_html/index.php [Tue Dec 04 08:57:32 2012] [error] [client 117.205.74.227] (12)Cannot allocate memory: couldn't create child process: /opt/suphp/sbin/suphp for /home/grasphub/public_html/index.php [Tue Dec 04 08:57:31 2012] [error] [client 117.205.74.227] (12)Cannot allocate memory: couldn't create child process: /opt/suphp/sbin/suphp for /home/grasphub/public_html/index.php [Tue Dec 04 08:57:29 2012] [error] [client 117.205.74.227] (12)Cannot allocate memory: couldn't create child process: /opt/suphp/sbin/suphp for /home/grasphub/public_html/index.php [Tue Dec 04 08:57:29 2012] [error] [client 117.205.74.227] (12)Cannot allocate memory: couldn't create child process: /opt/suphp/sbin/suphp for /home/grasphub/public_html/index.php [Tue Dec 04 08:57:26 2012] [error] [client 117.205.74.227] (12)Cannot allocate memory: couldn't create child process: /opt/suphp/sbin/suphp for /home/grasphub/public_html/index.php Is my hosting service really bad? And is there any solution? Thanks in advance.

    Read the article

  • Is my Cisco switch port bad?

    - by ewwhite
    I've been chasing a packet-loss and network stability issue for a handful of end-users on an internal network for the past few days... These issues surfaced last week, however the location was struck by lightning six weeks ago. I was seeing 5-10% packet loss between a stack of four Cisco 2960's and several PC's and phones on the other side of a 77-meter run. The PC's were run inline with the phones over a trunked link (switchport configuration pastebin). We were seeing dropped calls and interruptions in client-server applications and Microsoft Exchange connectivity. I tried the usual troubleshooting steps remotely, having a local technician do the following during breaks in user and production activity: change cables between the wall jack and device. change patch cables between the patch panel and switch port(s). try different switch ports within the 2960 stack. change end-user devices with known-good equipment (new phones, different PC's). clear switch port interface counters and monitor incrementing errors closely. (Pastebin output of sh int) Pored over the device logs and Observium RRD graphs. No link up/down issues from the switch side. change power strips on the end-user side. test cable runs from the Cisco 2960 using test cable-diagnostics tdr int Gi4/0/9 (clean)* test cable runs with a Tripp-Lite cable tester. (clean) run diagnostics on the switch stack members. (clean) In the end, it took three changes of switch ports to find a stable solution. The only logical conclusion is that a few Cisco 2960 switch ports are bad or flaky... Not dead, but not consistent in behavior either. I'm not used to seeing individual ports die in this manner. What else can I test or check to determine if these devices are bad? Is it common for single ports to have problems, rather than a contiguous bank of ports? BTW - show cable-diagnostics tdr int Gi4/0/14 is very cool... Interface Speed Local pair Pair length Remote pair Pair status --------- ----- ---------- ------------------ ----------- -------------------- Gi4/0/14 1000M Pair A 79 +/- 0 meters Pair B Normal Pair B 75 +/- 0 meters Pair A Normal Pair C 77 +/- 0 meters Pair D Normal Pair D 79 +/- 0 meters Pair C Normal
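
    A couple of lower-level readouts can help separate a flaky PHY from a bad cable path when the TDR comes back clean; a minimal sketch (2960-style IOS, with the suspect port as the example):

        show interfaces Gi4/0/9 counters errors
        show controllers ethernet-controller Gi4/0/9 phy
        clear counters Gi4/0/9

    Rising FCS/alignment/runt counters against a clean TDR tend to implicate the port electronics or the patch-panel termination rather than the horizontal run. Lightning damage in particular is notorious for producing exactly this pattern: ports that pass self-test but degrade intermittently under load, sometimes in small groups because adjacent ports often share a quad-PHY chip.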

    Read the article

  • IIS6 Virtual Directory 500 Error on Remote Share

    - by David Boike
    We have our servers at the server farm in a domain. Let's call it LIVE. Our developer computers live in a completely separate corporate domain, miles and miles away. Let's call it CORP. We have a large central storage unit (unix) that houses images and other media needed by many webservers in the server farm. The IIS application pools run as (let's say) LIVE\MediaUser and use those credentials to connect to a central storage share as a virtual directory, retrieve the images, and serve them as if they were local on each server. The problem is in development, on my development machine. I log in as CORP\MyName. My IIS 6 application pool runs as Network Service. I can't run it as a user from the LIVE domain because my machine isn't (and cannot be) joined to that domain. I try to create a virtual directory, point it to the same network directory, click Connect As, uncheck the "Always use the authenticated user's credentials when validating access to the network directory" checkbox so that I can enter the login info, enter the credentials for LIVE\MediaUser, click OK, verify the password, etc. This doesn't work. I get "HTTP Error 500 - Internal server error" from IIS. The IIS log file reports sc-status = 500, sc-substatus = 16, and sc-win32-status = 1326. The documentation says this means "UNC authorization credentials are incorrect" and the Win32 status means "Logon failure: unknown user name or bad password." This would be all well and good if it were anywhere close to accurate. I double- and triple-checked it. Tried multiple known-good logins. The IIS manager allows me to view the file tree in its window; it's only the browser that kicks me out. I even tried going to the virtual directory's Directory Security tab, and under Authentication and Access Control, I tried using the same LIVE domain username for the anonymous access credential. No luck. I'm not trying to run any ASP, ASP.NET, or other dynamic anything out of the virtual directory. I just want IIS to be able to load static images, css, and js files. If anyone has some bright ideas I would be most appreciative!

    Read the article

  • Event ID 9331 MSExchangeSA & Event ID 9335 MSExchangeSA

    - by George
    I get these two Exchange 2010 Global Address Book-related event IDs: Event ID 9331 MSExchangeSA: OABGen encountered error 80004005 (internal ID 50101f1) accessing the public folder database while generating the offline address list for address list '/'. -\Default Offline Address List and Event ID 9335 MSExchangeSA: OABGen encountered error 80004005 while cleaning the offline address list public folders under /o=xxxxx xxxx/cn=addrlists/cn=oabs/cn=Default Offline Address List. Please make sure the public folder database is mounted and replicas exist of the offline address list folders. No offline address lists have been generated. Please check the event log for more information. -\Default Offline Address List It is Exchange 2010 SP2 sitting on Windows 2008 Enterprise Edition. Essentially the issue is that the Global Address List is not being updated on Outlook clients. We are using Outlook 2007 and 2010. So far I have tried running the following command: Update-FileDistributionService -Identity ExchangeServer -Type "OAB" And I tried this solution as well: 1) Make sure the Microsoft Exchange System Attendant is running. It will be set to start automatically by default, but it doesn't. This is a known issue. Start this service manually. When running, you will not get an error when trying to update the GAL. 2) "Apply" any changes made to any address lists before the GAL will update Outlook properly. In Organization Configuration - Mailbox in the EMC, view the properties of the default Global Address Book in the Offline Address Book tab. In the properties window, select the Address Lists tab. This shows which address lists make up the GAL. 3) Close the properties window and select the Address Lists tab in Organization Configuration - Mailbox. Right-click each address list used by the default GAL and click "Apply" (make sure the "Immediately" radio button is checked). 4) Last, go back to the Offline Address Book tab, right-click the GAL and select "Update". After a few send/receives in the Outlook clients, their Global Address List should update to show the latest changes. Neither one of those solutions helped, so I am not really sure what to do here. Also, I am aware of changing the registry on each local computer, but that would be close to impossible, as we have 8 offices in 3 different countries. Any suggestions? EDIT 7.XII.2012 @ 10.35: I forgot to mention that we did rebuild the address book, and that didn't help.
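
    Since both events point at the public folder database (public folder distribution of the OAB exists for Outlook 2003-era clients), it may be worth checking the distribution settings and forcing a rebuild from the shell. A minimal sketch; the OAB name is taken from the events above:

        # Which distribution mechanisms are enabled for the OAB?
        Get-OfflineAddressBook | fl Name,PublicFolderDistributionEnabled,WebDistributionEnabled,VirtualDirectories

        # Is there a mounted public folder database at all?
        Get-PublicFolderDatabase | fl Name,Mounted

        # Force regeneration, then redistribution to the CAS virtual directories
        Update-OfflineAddressBook "Default Offline Address List"
        Update-FileDistributionService -Identity ExchangeServer -Type "OAB"

    In an Outlook 2007/2010-only shop, the usual way to silence 9331/9335 when no public folder database (or no OAB replica) exists is to turn off public folder distribution on the OAB and rely on web distribution alone.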

    Read the article

  • Why might one host be unable to access the Internet, when it can ping the router and when all other hosts can?

    - by user1444233
    I have a Draytek Vigor 2830n. It's kicking out a 192.168.3.0 LAN. It performs load-balancing across dual WAN ports, although I've turned off the second WAN to simplify testing. There are many hosts on the LAN. All IPs are allocated through DHCP, most freely allocated from the pool, but one or two are bound to NIC MAC addresses. All hosts can access the Internet, save one. That host (192.168.3.100, or 'dot100' for short) gets allocated an IP address (and the right gateway address, DNS server addresses, subnet etc.). dot100 can ping itself. It can ping the gateway, and access the latter's web interface via port 80. It's responsive and loss-free (a sustained ping over a couple of minutes reports no data loss). Yet, for some reason that evades me, dot100 can't ping an external IP address or domain name. I suspect it's never been able to, because it was getting some Internet access from a second adaptor (different subnet), but that's now been turned off, which exposed the problem. On dot100, I've tried: two operating systems (Windows 8 and Knoppix), to rule out anti-virus programs etc.; two physical adaptors; two cables on each adaptor; two IPs (e.g. .100 and .103 assigned by MAC, and .26 from the pool); and both dynamic and assigned (MAC-bound) DHCP-allocated IPs. None of these experiments yielded any variation in the result. dot100 is a crucial host. It's a file server for the network, so I need it to be reliably allocated a consistent IP. Can anyone offer a potential solution or a way forward with the analysis, please? My guess: my analysis so far leads me to believe it's a router issue. I've checked the web interface very carefully. There are no filters set up in Firewall - General Setup or Filter Setup. I suspect a corrupted internal routing table, but the web UI shows this as the routing table: Key: C - connected, S - static, R - RIP, * - default, ~ - private * 0.0.0.0/ 0.0.0.0 via 62.XX.XX.X WAN1 * 62.XX.XX.X/ 255.255.255.255 via 62.XX.XX.X WAN1 S 82.YY.YYY.YYY/ 255.255.255.255 via 82.YY.YYY.YYY WAN1 C 192.168.1.0/ 255.255.255.0 directly connected WAN2 C~ 192.168.3.0/ 255.255.255.0 directly connected LAN2
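
    When a host can reach the gateway but nothing beyond it, the quickest way to localise the failure is to watch where packets stop; a minimal sketch from dot100 (Windows commands shown, with traceroute/ip route as the Knoppix equivalents):

        tracert -d 8.8.8.8     (does hop 1 answer while hop 2 never appears?)
        route print            (does the 0.0.0.0/0 route point at the Vigor's LAN address?)
        ping 8.8.8.8           (bypass DNS entirely)
        nslookup google.com    (separate a DNS failure from an IP failure)

    If hop 1 replies but nothing past it does, for exactly one source IP, the router is forwarding selectively; on a Vigor, the usual suspects beyond Filter Setup would be the Bind IP to MAC list, Bandwidth Management / Sessions Limit, and the dual-WAN load-balance policy, any of which can single out one address.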

    Read the article

  • Need advice on which PCI SATA Controller Card to Purchase

    - by Matt1776
    I have a major issue with the build of a machine I am trying to get up and running. My goal is to create a file server that will service the needs of my software development, personal media storage and streaming/media server needs, as well as provide a strong platform for backing up all this data in a routine, cron-job-oriented, German-efficiency sort of way. The issue is a simple one: all my drives are SATA drives and my motherboard controller only has 4 ports. Solving the issue has proven to be an unmitigated nightmare. I would like advice on the purchase of a 4-port internal SATA / 2-port external eSATA PCI SATA controller card that has the following features and/or advantages: It must function. If I plug it in and attach drives, I expect my system to still make it to the operating system login screen. It must function on CentOS, and I mean it must function WELL and with MINIMAL hassle. If hassle is unavoidable, there shall be CLEAR-CUT and EASY-TO-FOLLOW instructions on how to install drivers and other supporting software. I neither need nor want fakeRAID; I will be setting up any RAID configurations from within the operating system. Now, if I am able to find such a mythical device, I would be eternally grateful to whoever would be able to point me in the right direction, a direction which I assume will be paved with yellow bricks. I am prepared to pay a considerable sum of money (as SATA controller cards go), so paying anywhere between 60 and 120 dollars will not be an issue whatsoever. Does such a magical device exist? The following link shows an "example" of the type of thing I am looking for; however, I have no way of verifying that, once I plug this baby in and attach the drives, my system will still continue to function, or that once I've made it to the OS, I will be able to install whatever drivers or software programs I need to make it work with relative ease. It doesn't have to be dog-shit simple, but it cannot involve kernels or brain surgery. http://www.amazon.com/gp/product/B00552PLN4/ref=pd_lpo_k2_dp_sr_1?pf_rd_p=486539851&pf_rd_s=lpo-top-stripe-1&pf_rd_t=201&pf_rd_i=B003GSGMPU&pf_rd_m=ATVPDKIKX0DER&pf_rd_r=1HJG60XTZFJ48Z173HKY So does anyone have a suggestion regarding the subject I am asking about, i.e. PCI SATA controller cards? It would help if you've had experience with the component before; that is, after all, why I am asking here: for those who have experience that I do not have. Bear in mind that this is for a home setup and that I do not have a company credit card. I have a budget with a 'relative' upper limit of about $150.00.

    Read the article

  • Is a DHCP lease expiring years from now okay?

    - by sharptooth
    I'm reviewing Azure web role logs and there's output from ipconfig /all: IPv4 Address. . . . . . . . . . . : 10.61.145.37(Preferred) Subnet Mask . . . . . . . . . . . : 255.255.254.0 Lease Obtained. . . . . . . . . . : Monday, September 24, 2012 12:26:00 PM Lease Expires . . . . . . . . . . : Thursday, October 31, 2148 6:55:12 PM You see, the lease expires in the year 2148, but my VM will likely not run for more than one month: when I deploy the new version of my code I first deploy it to new VMs, then switch traffic, then release the old VMs. In general such a usage pattern is normal; VMs typically live from several dozen minutes to several weeks on Azure. I suspect a lease that long will cause problems on the internal Azure network sooner or later. Is such a long DHCP lease okay, or is it likely a misconfiguration?

    Read the article

  • Nagios shell script cannot be executed

    - by MeinAccount
    I'm trying to monitor GitLab with nagios. I've created the following command definition and shell script but when checking the service I'm receiving the following e-mail. How can I solve this? The file is executable. [...] nagios : 3 incorrect password attempts ; TTY=unknown ; PWD=/ ; USER=git ; COMMAND=/bin/bash -c /var/lib/nagios/custom_plugins/check_gitlab.sh Command definition: define command { command_name custom_check_gitlab command_line /var/lib/nagios/custom_plugins/check_gitlab.sh } Shell script: #! /bin/sh # [...] RAILS_ENV="production" # Script variable names should be lower-case not to conflict with internal /bin/sh variables such as PATH, EDITOR or SHELL. app_root="/home/git/gitlab" app_user="git" unicorn_conf="$app_root/config/unicorn.rb" pid_path="$app_root/tmp/pids" socket_path="$app_root/tmp/sockets" web_server_pid_path="$pid_path/unicorn.pid" sidekiq_pid_path="$pid_path/sidekiq.pid" ### Here ends user configuration ### # Switch to the app_user if it is not he/she who is running the script. if [ "$USER" != "$app_user" ]; then sudo -u "$app_user" -H -i $0 "$@"; exit; fi # Switch to the gitlab path, if it fails exit with an error. if ! cd "$app_root" ; then echo "Failed to cd into $app_root, exiting!"; exit 1 fi ### Init Script functions check_pids(){ if ! mkdir -p "$pid_path"; then echo "Could not create the path $pid_path needed to store the pids." exit 1 fi # If there exists a file which should hold the value of the Unicorn pid: read it. if [ -f "$web_server_pid_path" ]; then wpid=$(cat "$web_server_pid_path") else wpid=0 fi if [ -f "$sidekiq_pid_path" ]; then spid=$(cat "$sidekiq_pid_path") else spid=0 fi } # Checks whether the different parts of the service are already running or not. check_status(){ check_pids # If the web server is running kill -0 $wpid returns true, or rather 0. # Checks of *_status should only check for == 0 or != 0, never anything else. if [ $wpid -ne 0 ]; then kill -0 "$wpid" 2>/dev/null web_status="$?" else web_status="-1" fi if [ $spid -ne 0 ]; then kill -0 "$spid" 2>/dev/null sidekiq_status="$?" else sidekiq_status="-1" fi } check_pids check_status if [ "$web_status" != "0" -a "$sidekiq_status" != "0" ]; then echo "GitLab is not running." exit 2 fi if [ "$web_status" != "0" ]; then printf "The GitLab Unicorn webserver is \033[31mnot running\033[0m.\n" exit 1 fi if [ "$sidekiq_status" != "0" ]; then printf "The GitLab Sidekiq job dispatcher is \033[31mnot running\033[0m.\n" exit 1 fi if [ "$web_status" = "0" -a "$sidekiq_status" = "0" ]; then printf "GitLab and all it's components are \033[32mup and running\033[0m.\n" exit 0 fi

    Read the article

  • Adaptec 5805 not recognized after reboot

    - by Rakedko ShotGuns
    After rebooting the system, the controller is not recognized; it only works if the computer is fully shut down and powered off first. I have recently updated the firmware to "Adaptec RAID 5805 Firmware Build 18948". How do I fix the problem?

    Configuration summary
    ---------------------------
    1. Server name.....................raid_test
       Adaptec Storage Manager agent...7.31.00 (18856)
       Adaptec Storage Manager console.7.31.00 (18856)
       Number of controllers...........1
       Operating system................Windows

    Configuration information for controller 1
    -------------------------------------------------------
    Type............................Controller
    Model...........................Adaptec 5805
    Controller number...............1
    Physical slot...................2
    Installed memory size...........512 MB
    Serial number...................8C4510C6C9E
    Boot ROM........................5.2-0 (18948)
    Firmware........................5.2-0 (18948)
    Device driver...................5.2-0 (16119)
    Controller status...............Optimal
    Battery status..................Charging
    Battery temperature.............Normal
    Battery charge amount (%).......37
    Estimated charge remaining......0 days, 16 hours, 12 minutes
    Background consistency check....Disabled
    Copy back.......................Disabled
    Controller temperature..........Normal (40C / 104F)
    Default logical drive task priority High
    Performance mode................Dynamic
    Number of logical devices.......1
    Number of hot-spare drives......0
    Number of ready drives..........0
    Number of drive(s) assigned to MaxCache cache0
    Maximum drives allowed for MaxCache cache8
    MaxCache Read Cache Pool Size...0 GB
    NCQ status......................Enabled
    Stay awake status...............Disabled
    Internal drive spinup limit.....0
    External drive spinup limit.....0
    Phy 0...........................No device attached
    Phy 1...........................No device attached
    Phy 2...........................No device attached
    Phy 3...........................1.50 Gb/s
    Phy 4...........................No device attached
    Phy 5...........................No device attached
    Phy 6...........................No device attached
    Phy 7...........................No device attached
    Statistics version..............2.0
    SSD Cache size..................0
    Pages on fetch list.............0
    Fetch list candidates...........0
    Candidate replacements..........0
    69319...........................31293
    Logical device..................0
    Logical device name.............
    RAID level......................Simple volume
    Data space......................148,916 GB
    Date created....................09/19/2012
    Interface type..................Serial ATA
    State...........................Optimal
    Read-cache mode.................Enabled
    Preferred MaxCache read cache settingEnabled
    Actual MaxCache read cache setting Disabled
    Write-cache mode................Enabled (write-back)
    Write-cache setting.............Enabled (write-back)
    Partitioned.....................Yes
    Protected by hot spare..........No
    Bootable........................Yes
    Bad stripes.....................No
    Power Status....................Disabled
    Power State.....................Active
    Reduce RPM timer................Never
    Power off timer.................Never
    Verify timer....................Never
    Segment 0.......................Present: controller 1, connector 0, device 0, S/N 9RX3KZMT
    Overall host IOs................99075
    Overall MB......................4411203
    DRAM cache hits.................71929
    SSD cache hits..................0
    Uncached IOs....................29239
    Overall disk failures...........0
    DRAM cache full hits............71929
    DRAM cache fetch / flush wait...0
    DRAM cache hybrid reads.........3476
    DRAM cache flushes..............--
    Read hits.......................0
    Write hits......................0
    Valid Pages.....................0
    Updates on writes...............0
    Invalidations by large writes...0
    Invalidations by R/W balance....0
    Invalidations by replacement....0
    Invalidations by other..........0
    Page Fetches....................0
    0...............................0
    73..............................10822
    8...............................3
    46138...........................4916
    27184...........................15226
    20875...........................323
    16982...........................1771
    1563............................5317
    1948............................2969

    Serial attached SCSI
    -----------------------
    Type............................Disk drive
    Vendor..........................Unknown
    Model...........................ST3160815AS
    Serial Number...................9RX3KZMT
    Firmware level..................3.AAD
    Reported channel................0
    Reported SCSI device ID.........0
    Interface type..................Serial ATA
    Size............................149,05 GB
    Negotiated transfer speed.......1.50 Gb/s
    State...........................Optimal
    S.M.A.R.T. error................No
    Write-cache mode................Write back
    Hardware errors.................0
    Medium errors...................0
    Parity errors...................0
    Link failures...................0
    Aborted commands................0
    S.M.A.R.T. warnings.............0
    Solid-state disk (non-spinning).false
    MaxCache cache capable..........false
    MaxCache cache assigned.........false
    NCQ status......................Enabled
    Phy 0...........................1.50 Gb/s
    Power State.....................Full rpm
    Supported power states..........Full rpm, Powered off
    0x01............................113
    0x03............................98
    0x04............................99
    0x05............................100
    0x07............................83
    0x09............................75
    0x0A............................100
    0x0C............................99
    0xBB............................100
    0xBD............................100
    0xBE............................61
    0xC2............................39
    0xC3............................69
    0xC5............................100
    0xC6............................100
    0xC7............................200
    0xC8............................100
    0xCA............................100
    Aborted commands................0
    Link failures...................0
    Medium errors...................0
    Parity errors...................0
    Hardware errors.................0
    SMART errors....................0
    End of the configuration information for controller 1


  • Port forwarding using a BT Home Hub 2.0 (Supplied to new BT Infinity Customers in the UK)

    - by Jasarien
    I don't usually have trouble with port forwarding; I've done it successfully on a number of different routers, including Linksys, Belkin, Netgear and Apple (Time Capsule / AirPort Extreme). So I'm quite confused here. I had been using my Apple Time Capsule as my router for a few years, with several port mappings all working fine, but it died recently, so I've had to fall back on the BT Home Hub 2.0 that was supplied with my BT Infinity broadband subscription.

    The forwarding interface on the Home Hub is simplified for the most part: you select an application or game and assign it to a particular computer on the network, chosen from a list the Home Hub has 'discovered'. My Mac Pro has a manually assigned static IP of 192.168.1.4, and my router is static at 192.168.1. I have chosen SSH from the list of applications and assigned it to my Mac Pro (the only computer in the list currently).

    The Home Hub also has a dynamic DNS feature, which I have set to keep my external IP address updated on my hostname. This is how I had things set up with previous routers, and it has never caused trouble before. I am able to ping my hostname (and external IP) from outside the network and get a response, but when I try to connect over SSH, the connection times out.

    The Home Hub also has "Firewall settings". The currently selected setting is: "Default: Allow all outgoing connections and block all incoming traffic. Games and application sharing is allowed." I've tried changing it to: "Disabled: All traffic is allowed to pass through your BT Home Hub to your devices. Note: you'll still need to use the games and application sharing feature to make sure that certain applications work properly." The connection still times out, which is frustrating.

    The OS X firewall on my Mac is disabled, so I don't think that's in the way. I have also tried setting up the port forwarding manually instead of relying on the preset "SSH" option (in case it's not using the port I expect): I created my own "application" (as the Home Hub calls it) and forwarded external port 22 TCP to internal port 22 TCP on 192.168.1.4, but that gives the same result: unable to connect.

    Next, with both the router's firewall and OS X's firewall disabled, I ran the Shields Up test (https://www.grc.com/x/ne.dll?bh0bkyd2). The result was that all my service ports (0 - 1055) are in 'stealth' mode, i.e. as far as any outsider is concerned, nothing even exists at my IP. Strange.

    The only thing that seems to work is setting my Mac Pro as the DMZ, which I don't want to do for obvious reasons. Any help with this would be extremely appreciated, thanks.
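
    One way to test the forward itself, independent of any ssh client quirks, is a bare TCP connection attempt against the external hostname. Note that this has to be run from outside the LAN (a tethered phone, a remote shell), because many consumer routers don't support NAT loopback, so testing the external address from inside the network can fail even when the forward is working. Below is a minimal sketch; the hostname is a placeholder for whatever name the Home Hub's dynamic DNS feature keeps updated.

        import socket

        HOST = "example.dyndns.org"  # placeholder for the dynamic-DNS hostname
        PORT = 22                    # the externally forwarded SSH port

        def tcp_reachable(host: str, port: int, timeout: float = 5.0) -> bool:
            """Return True if a plain TCP handshake to host:port succeeds."""
            try:
                with socket.create_connection((host, port), timeout=timeout):
                    return True
            except OSError as exc:
                print(f"{host}:{port} not reachable: {exc}")
                return False

        if __name__ == "__main__":
            if tcp_reachable(HOST, PORT):
                print("Handshake succeeded: the forward delivers traffic to a listener.")
            else:
                print("No handshake: packets are dropped before reaching the Mac Pro.")

    A timeout here, combined with the all-stealth Shields Up result, would point at the router still dropping inbound port 22 despite the forwarding rule, rather than at anything on the Mac.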


  • How do large blobs affect SQL delete performance, and how can I mitigate the impact?

    - by Max Pollack
    I'm currently experiencing a strange issue that doesn't mesh with my understanding of SQL Server. We use SQL Server as the file storage for our internal storage service, and our database has about half a million rows in it. Most of the files (86%) are 1 MB or under, but even on fresh copies of our database, where we simply populate the table with data for the purposes of a test, rows with large amounts of data stored in a BLOB frequently cause timeouts when our SQL Server is under load.

    My understanding of how SQL Server deletes rows is that it's a garbage-collection process: the row is marked as a ghost, and the ghost cleanup process physically removes it later, after the changes have been copied to the transaction log. This suggests to me that, regardless of the size of the data in the BLOB, row deletion should be close to instantaneous. However, when deleting these rows we are definitely experiencing large numbers of timeouts and astoundingly low performance. In our test data set, it's files over 30 MB that cause the issue. This is an edge case; we don't encounter these often, and even though we're looking into SQL FILESTREAM as a solution to some of our problems, we're trying to narrow down where these issues originate.

    We ARE performing our deletes inside of a transaction. We're also performing updates to metadata such as file-size stats, but these live in a separate table away from the file data itself. Hierarchy data is stored in the table that contains the file information. Really, in the end it's not so much what we're doing around the deletes that matters; we just can't find any references to poor delete performance on rows that contain a large amount of BLOB data. We're trying to determine whether this is even an avenue worth exploring, or whether it has to be one of our processes around the delete that's causing the issue.

    Are there any situations in which this could occur? Is it common for a database server to reach the point of complete timeouts when many of these deletes occur simultaneously? Is there a way to combat this issue if it exists? (Cross-posted from Stack Overflow.)
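
    Whatever the root cause turns out to be, one common mitigation is to keep each delete transaction small, so that the logging and deferred ghost/LOB cleanup for large rows is spread across many short transactions instead of one long one that holds locks while other sessions time out. The sketch below is an illustration under stated assumptions, not the poster's code: dbo.FileStore, the IsExpired predicate, and the connection string are all hypothetical stand-ins, since the question doesn't name the real schema.

        import pyodbc

        # Hypothetical connection string and schema; adjust for the real database.
        CONN_STR = (
            "DRIVER={ODBC Driver 17 for SQL Server};"
            "SERVER=localhost;DATABASE=FileStorage;Trusted_Connection=yes;"
        )
        BATCH_SIZE = 500  # small enough that each transaction commits quickly

        def delete_in_batches() -> int:
            """Delete qualifying rows a batch at a time, committing between batches."""
            conn = pyodbc.connect(CONN_STR)  # pyodbc uses implicit transactions
            cursor = conn.cursor()
            total = 0
            while True:
                # TOP caps how many ghosted rows (and their LOB pages) any one
                # transaction must log and later clean up.
                cursor.execute(
                    f"DELETE TOP ({BATCH_SIZE}) FROM dbo.FileStore WHERE IsExpired = 1"
                )
                deleted = cursor.rowcount
                conn.commit()  # release locks between batches
                total += deleted
                if deleted < BATCH_SIZE:
                    break
            conn.close()
            return total

        if __name__ == "__main__":
            print(f"Deleted {delete_in_batches()} rows")

    Committing every few hundred rows keeps lock durations and log reservations short, which tends to reduce the timeouts other sessions see while the LOB pages are deallocated in the background.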


  • Cannot connect to WEBrick on home network

    - by Chris Stewart
    I'm an Android developer, and my applications often require server-side code. I typically use Ruby on Rails for the web app, and during development I run the server on my local machine (Mac OS X) with WEBrick. In the morning when I get to the office, I run ifconfig in the console to see what IP my laptop has been given that day, and I use that IP in my Android app when making requests to the web app in question. This all works fine while I'm in the office.

    When I get home and attempt the same thing (find my laptop's IP via ifconfig, set it in my app's config file), the destination can never be found. To rule out my app as a factor, I tried visiting the web server's IP (e.g., http://192.168.1.4:3000) from my phone's browser, and it cannot connect. From my laptop, which is running the web server, it works fine. From another machine on the same network, it also fails to connect.

    Given this, I think I've narrowed it down to some kind of configuration in my home network, but I frankly have no idea what the cause could be. I don't have anything special at home: a basic Verizon FiOS router/modem with everything connected via Wi-Fi (Wi-Fi for both phone and laptop at work as well, FYI). I've tried disabling the firewall on my Verizon router, enabling port forwarding for port 3000, and just about everything else I could think of, and nothing has changed. Dear Server Fault geniuses, please help a poor developer out. :)

    Edit: Some follow-up items. My Mac's firewall is not active, and all incoming requests are allowed. I've also verified that my phone and laptop are on the same network (192.168.1.4 Mac, 192.168.1.9 phone). I have no idea why this isn't working.

    Edit 2: I went into System Preferences, enabled Web Sharing, and tried to view that site from my phone; it didn't connect either. So it's not WEBrick or related to Rails. The firewall on my machine is off and the firewall on my router is off.

    Edit 3: Some progress. I set up port forwarding for port 3000 to my laptop, found my external IP, and used that instead; it connected fine. So there's definitely something not quite set up correctly on my internal network.
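
    Since Edit 2 already rules out WEBrick and Rails, one more way to isolate the network is to stand up a throwaway listener that is definitely bound to all interfaces and probe it from the phone. (It's worth knowing that some Rails versions bind the development server to 127.0.0.1 by default; starting it with "rails server -b 0.0.0.0" forces it onto all interfaces, though that doesn't appear to be the issue here.) A minimal sketch, using Python purely for brevity and the same port 3000 as the question:

        # Throwaway listener bound to all interfaces, to separate "the network
        # blocks LAN clients" from "the app isn't listening where I think it is".
        from http.server import HTTPServer, SimpleHTTPRequestHandler

        HOST = "0.0.0.0"  # all interfaces; "127.0.0.1" would refuse the phone
        PORT = 3000       # same port the WEBrick server would use

        if __name__ == "__main__":
            server = HTTPServer((HOST, PORT), SimpleHTTPRequestHandler)
            print(f"Serving on {HOST}:{PORT}; from another device, "
                  f"try http://<laptop-LAN-IP>:{PORT}/")
            server.serve_forever()

    If the phone cannot reach even this listener while the port-forwarded external address works (as in Edit 3), a wireless isolation or similar client-to-client blocking setting on the FiOS router is a plausible suspect worth checking.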

