Search Results

Search found 19093 results on 764 pages for 'max path'.


  • Hyper-V virtual machine can't be migrated to a specific host in the cluster

    - by Massimo
    I have a three-node Hyper-V cluster running on Windows Server 2008 R2 which is working quite flawlessly: there are no errors, live migration works, all hosts can and will happily run all virtual machines, and so on. But one specific virtual machine is trying to make me go mad: it works on two nodes of the cluster, but not on the third one. Whenever I try to move the VM to that node, be it in a live migration or with the VM powered off, it always fails. In the event log of the host these events are logged: Source: Hyper-V-VMMS Event ID: 16300 Cannot load a virtual machine configuration: General access denied error (0x80070005) (Virtual machine ID <GUID>) Source: Hyper-V-VMMS Event ID: 20100 The Virtual Machine Management Service failed to register the configuration for the virtual machine '<GUID>' at 'C:\ClusterStorage\<PATH>\<VM>': General access denied error (0x80070005) Source: Hyper-V-High-Availability Event ID: 21102 'Virtual Machine Configuration <VM>' failed to register the virtual machine with the virtual machine management service. All other VMs can be moved to/from the offending host, and the offending VM can be moved between the other two hosts. Also, this is not a storage problem, because there are other VMs in the same cluster volume, and the host has no trouble running them. What's going on here?
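
    A frequent cause of access denied (0x80070005) on a single node is a missing ACL for the virtual machine's security identifier on that VM's files. As a hedged diagnostic sketch (the path and GUID below are just the placeholders from the events above, not real values), one could inspect and, if needed, re-grant that permission with icacls from an elevated prompt on the failing node; whether this is the actual root cause here cannot be told from the events alone:

        icacls "C:\ClusterStorage\<PATH>\<VM>"
        icacls "C:\ClusterStorage\<PATH>\<VM>" /grant "NT VIRTUAL MACHINE\<GUID>":(F) /T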

    Read the article

  • How to get the permissions right for /dev/raw1394

    - by Mark0978
    I recently upgraded one of my Ubuntu machines to Karmic and I'm having trouble getting the permissions of /dev/raw1394 set to 0666. The only thing this machine is used for is recording audio from a firepod which uses /dev/raw1394 via jackd, and there are no other FireWire devices connected, so security around this device is not really an issue. If I run as root, everything works as expected, but I have some folks that run the recorder that I don't want to have root access. However, I can't figure out which lines set up the permissions. I've tried this: /etc/udev/permissions.d/raw1394.rules:raw1394:root:root:0666 And I have this setup (default install) /lib/udev/rules.d/75-persistent-net-generator.rules:SUBSYSTEMS=="ieee1394", ENV{COMMENT}="Firewire device $attr{host_id})" /lib/udev/rules.d/75-cd-aliases-generator.rules:# the "path" of usb/ieee1394 devices changes frequently, use "id" /lib/udev/rules.d/75-cd-aliases-generator.rules:ACTION=="add", SUBSYSTEM=="block", SUBSYSTEMS=="usb|ieee1394", ENV{ID_CDROM}=="?*", ENV{GENERATED}!="?*", \ /lib/udev/rules.d/60-persistent-storage-tape.rules:KERNEL=="st*[0-9]|nst*[0-9]", ATTRS{ieee1394_id}=="?*", ENV{ID_SERIAL}="$attr{ieee1394_id}", ENV{ID_BUS}="ieee1394" /lib/udev/rules.d/50-udev-default.rules:# FireWire (deprecated dv1394 and video1394 drivers) /lib/udev/rules.d/50-udev-default.rules:KERNEL=="dv1394-[0-9]*", NAME="dv1394/%n", GROUP="video" /lib/udev/rules.d/50-udev-default.rules:KERNEL=="video1394-[0-9]*", NAME="video1394/%n", GROUP="video" /lib/udev/rules.d/60-persistent-storage.rules:KERNEL=="sd*[!0-9]|sr*", ATTRS{ieee1394_id}=="?*", SYMLINK+="disk/by-id/ieee1394-$attr{ieee1394_id}" /lib/udev/rules.d/60-persistent-storage.rules:KERNEL=="sd*[0-9]", ATTRS{ieee1394_id}=="?*", SYMLINK+="disk/by-id/ieee1394-$attr{ieee1394_id}-part%n" And I find these lines in /var/log/syslog Apr 30 09:11:30 record kernel: [ 3.284010] ieee1394: Node added: ID:BUS[0-00:1023] GUID[000a9200c7062266] Apr 30 09:11:30 record kernel: [ 3.284195] ieee1394: Host added: ID:BUS[0-01:1023] GUID[00d0035600a97b9f] Apr 30 09:11:30 record kernel: [ 18.372791] ieee1394: raw1394: /dev/raw1394 device initialized What I can't figure out is which line actually creates that raw1394 device in the first place. How do you get /dev/raw1394 to have permissions 0666?
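
    For reference, on a udev-based release the usual way to force a device node's group and mode is a small local rule keyed on the kernel device name; the file name and group below are illustrative assumptions, not taken from the poster's system:

        # /etc/udev/rules.d/99-raw1394.rules (hypothetical local rule file)
        KERNEL=="raw1394", GROUP="video", MODE="0666"

    After adding such a rule, the rules need to be reloaded (udevadm control --reload-rules) and the device re-triggered or the machine rebooted before the node picks up the new mode.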

    Read the article

  • ISA Server 2006 SSL Certificate Dilemma

    - by JohnyD
    I've been making great headway in offering our services over https with help from a Go Daddy certificate, later to be upgraded to Thawte SSL123 certs. But, I've just run into one whopper of a problem. Here's my setup: I run an ISA 2006 firewall. Our web services are distributed over 2 servers. One is Windows 2000 (www.domain.com) and the other is Windows 2003 (services.domain.com). So, I'll need to purchase 2 certs for both www and services, import them into IIS6 on their respective machines, then export them with the private key (making sure to Include all certificates in the certification path if possible... that had me stumped for a while), and then to finally import them into ISA's local computer Personal store. The problem I've just run into is that I have separate firewall rules for services.domain.com and www.domain.com... because requests need to be forwarded to different web servers. Each of these firewall rules uses the same httplistener. I have just found out that you can only use 1 certificate per httplistener. To make matters worse you can only have a single httplistener per ip / port. Is this correct? I can only use a single certificate for a single ip address? This would seem to be a severe limitation. Am I wrong? If I'm not then I've got a whole lot more work ahead of me as I'll have to set up extra ip's, add them to the firewall's network interface, create new listeners using that ip, etc... Can someone please confirm that I'm doing this correctly / incorrectly? Once I got my head wrapped around it all it seemed easy... then this. Thanks in advance.
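
    If the answer does turn out to be one listener (and therefore one certificate) per IP address and port, adding an extra address to the firewall's external interface is at least quick; the interface name and address in this sketch are placeholders, not values from the question:

        netsh interface ip add address "External" 203.0.113.10 255.255.255.0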

    Read the article

  • TicTac Photo and Windows 7

    - by Ben
    Hello, My wife has been creating a TicTac Photo album. I had to upgrade to Windows 7 as I'd had enough of Vista, so I backed up the TicTac Photo file and the photos to an external hard disk and performed a fresh install of Win7. Now here is the problem. TicTac Photo says it can't find the photos in the album. The locations were as follows: Vista: C:\Users\Kelly\Pictures Win 7: C:\Users\Kelly\My Pictures When I try to create a Pictures folder under Kelly it pops up a message about merging the two folders and simply moves the pictures to the My Pictures folder. Does anyone know a way to make a folder called Pictures so I can eliminate the file path problem and then try again with TicTac Photo support to get them to fix my file? My wife is going to kill me as it's our wedding album and she has spent upwards of 30 hrs designing it, and me upgrading to Win 7 means it's all my fault. She does not understand file paths etc. I'm going to try to open the album file in a text editor and see if I can see anything, but thought I would ask here as well. Any help appreciated.
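
    One possible workaround, assuming the album file stores absolute paths under C:\Users\Kelly\Pictures, is an NTFS junction so the old path resolves to the existing My Pictures folder. This is only a sketch to try from an elevated command prompt, not a fix confirmed by TicTac Photo support:

        mklink /J "C:\Users\Kelly\Pictures" "C:\Users\Kelly\My Pictures"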

    Read the article

  • ISCSI Target Ubuntu

    - by erai
    I'm trying to set up iscsitarget on Ubuntu 12.04 but I can't connect to it. On the Windows machine it says "Target Error" with no other output. My ietd.conf is Target iqn.2012-06.com.org:virtual_machines.lun Lun 0 Type=fileio,Path=/media/volume0/storlun0.bin When I run iscsiadm -m discovery -t st -p localhost the output is iscsiadm: Connection to Discovery Address 127.0.0.1 failed iscsiadm: Login I/O error, failed to receive a PDU iscsiadm: retrying discovery login to 127.0.0.1 iscsiadm: Connection to Discovery Address 127.0.0.1 closed iscsiadm: Login I/O error, failed to receive a PDU iscsiadm: retrying discovery login to 127.0.0.1 iscsiadm: Connection to Discovery Address 127.0.0.1 failed iscsiadm: Login I/O error, failed to receive a PDU iscsiadm: retrying discovery login to 127.0.0.1 iscsiadm: Connection to Discovery Address 127.0.0.1 failed iscsiadm: Login I/O error, failed to receive a PDU iscsiadm: retrying discovery login to 127.0.0.1 iscsiadm: Connection to Discovery Address 127.0.0.1 failed iscsiadm: Login I/O error, failed to receive a PDU iscsiadm: retrying discovery login to 127.0.0.1 iscsiadm: connection login retries (reopen_max) 5 exceeded iscsiadm: Could not perform SendTargets discovery. dmesg output: [ 3324.804665] iscsi_trgt: Removing all connections, sessions and targets [ 3325.875343] iSCSI Enterprise Target Software - version 1.4.20.3 [ 3325.875415] iscsi_trgt: Registered io type fileio [ 3325.875420] iscsi_trgt: Registered io type blockio [ 3325.875425] iscsi_trgt: Registered io type nullio
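
    Since discovery fails even against localhost, a first hedged check is whether ietd is enabled, running and listening on the standard iSCSI port (3260); these are generic diagnostics, not output from the poster's box:

        grep ISCSITARGET_ENABLE /etc/default/iscsitarget   # on Ubuntu's packaging this must be "true" or the daemon never starts
        ps aux | grep '[i]etd'
        netstat -tlnp | grep 3260
        sudo service iscsitarget restart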

    Read the article

  • Split Excel worksheet into multiple worksheets based on a column with VBA (Redux)

    - by Ceeder
    I'm rather new to VBA and I've been working with the code generously displayed and explained by Nixda: Split Excel Worksheet... My only challenge is I've been trying desperately to find a way to include the top 3 rows as a title, but it seems to only allow for one. Here's the code I have: Dim Titlesheet As Worksheet iCol = 23 '### Define your criteria column strOutputFolder = (Sheets("Operations").Range("D4")) '### <--Define your path of output folder Set ws = ThisWorkbook.ActiveSheet Set rngLast = Columns(iCol).Find("*", Cells(3, iCol), , , xlByColumns, xlPrevious) Set Titlesheet = Sheets("Input") ws.Columns(iCol).AdvancedFilter Action:=xlFilterInPlace, Unique:=True Set rngUnique = Range(Cells(4, iCol), rngLast).SpecialCells(xlCellTypeVisible) If Dir(strOutputFolder, vbDirectory) = vbNullString Then MkDir strOutputFolder For Each strItem In rngUnique If strItem < "" Then Sheets("Input").Select Range("A1:V3").Select Selection.Copy ws.UsedRange.AutoFilter Field:=iCol, Criteria1:=strItem.Value Workbooks.Add Sheets("Sheet1").Select ActiveSheet.PasteSpecial ws.UsedRange.SpecialCells(xlCellTypeVisible).Copy Destination:=[A4] strFilename = strOutputFolder & "\" & strItem ActiveWorkbook.SaveAs Filename:=strFilename, FileFormat:=xlWorkbookNormal ActiveWorkbook.Close savechanges:=False End If Next ws.ShowAllData Is there something I can change to include these lines? Thanks so much, this code provided by Nixda has taught me a great deal!
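
    As a rough sketch of the missing piece (using the names from the code above, untested against the rest of the workbook), the three title rows could be copied straight from the Input sheet into each new workbook before the filtered data is pasted below them:

        ' After Workbooks.Add / Sheets("Sheet1").Select:
        Titlesheet.Range("A1:V3").Copy Destination:=ActiveSheet.Range("A1")
        ' ...then paste the filtered rows starting at row 4, as the code already does:
        ws.UsedRange.SpecialCells(xlCellTypeVisible).Copy Destination:=ActiveSheet.Range("A4")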

    Read the article

  • Apache Won't Restart After Compiling PHP with Postgres

    - by gonzofish
    I've compiled PHP (v5.3.1) with Postgres using the following configure: ./configure \ --build=x86_64-redhat-linux-gnu \ --host=x86_64-redhat-linux-gnu \ --target=x86_64-redhat-linux-gnu \ --program-prefix= \ --prefix=/usr/ \ --exec-prefix=/usr/ \ --bindir=/usr/bin/ \ --sbindir=/usr/sbin/ \ --sysconfdir=/etc \ --datadir=/usr/share \ --includedir=/usr/include/ \ --libdir=/usr/lib64 \ --libexecdir=/usr/libexec \ --localstatedir=/var \ --sharedstatedir=/usr/com \ --mandir=/usr/share/man \ --infodir=/usr/share/info \ --cache-file=../config.cache \ --with-libdir=lib64 \ --with-config-file-path=/etc \ --with-config-file-scan-dir=/etc/php.d \ --with-pic \ --disable-rpath \ --with-pear \ --with-pic \ --with-bz2 \ --with-exec-dir=/usr/bin \ --with-freetype-dir=/usr \ --with-png-dir=/usr \ --with-xpm-dir=/usr \ --enable-gd-native-ttf \ --with-t1lib=/usr \ --without-gdbm \ --with-gettext \ --without-gmp \ --with-iconv \ --with-jpeg-dir=/usr \ --with-openssl \ --with-zlib \ --with-layout=GNU \ --enable-exif \ --enable-ftp \ --enable-magic-quotes \ --enable-sockets \ --enable-sysvsem \ --enable-sysvshm \ --enable-sysvmsg \ --with-kerberos \ --enable-ucd-snmp-hack \ --enable-shmop \ --enable-calendar \ --with-libxml-dir=/usr \ --enable-xml \ --with-system-tzdata \ --with-mime-magic=/usr/share/file/magic \ --with-apxs2=/usr/sbin/apxs \ --with-mysql=/usr/include/mysql \ --without-gd \ --with-dom=/usr/include/libxml2/libxml \ --disable-dba \ --without-unixODBC \ --disable-pdo \ --enable-xmlreader \ --enable-xmlwriter \ --without-sqlite \ --without-sqlite3 \ --disable-phar \ --enable-fileinfo \ --enable-json \ --without-pspell \ --disable-wddx \ --with-curl=/usr/include/curl \ --enable-posix \ --with-mcrypt \ --enable-mbstring \ --with-pgsql=/mnt/mv/pgsql I'm using Postgres 8.4.0 and Apache 2.2.8; I have the following line in my Apache conf file: LoadModule php5_module /usr/lib64/httpd/modules/libphp5.so And when I attempt to restart Apache, I get the following error message: Starting httpd: httpd: Syntax error on line 205 of /etc/httpd/conf/httpd.conf: Cannot load /usr/lib64/httpd/modules/libphp5.so into server: /usr/lib64/httpd/modules/libphp5.so: undefined symbol: lo_import_with_oid Now, I know that this is a problem with Postgres with PHP because lo_import_with_oid is a function in the Postgres source which allows the importing of large objects; also, if I remove the --with-pgsql option, PHP and Apache get along great. I've scoured the Internet looking for answers all day, but to no avail. Does anyone have ANY insight into what is causing my problems.
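
    One hedged check is which libpq the compiled module actually resolves at load time, since lo_import_with_oid only appears in newer (8.4-era) client libraries; the module path is the one from the error message, and the Postgres path is the one passed to --with-pgsql (adjust lib vs lib64 as needed):

        ldd /usr/lib64/httpd/modules/libphp5.so | grep libpq
        ls -l /mnt/mv/pgsql/lib/libpq.so*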

    Read the article

  • Openconnect for Cisco VPN doesn't recognize private key file - asn1 encoding routines:ASN1_CHECK_TLEN:wrong tag

    - by Alexander Skwar
    I'm trying to use my Synology DS212 NAS box to also act as a VPN gateway to my company's VPN. Sadly, they only use Cisco ASA and, to complicate stuff even further, we've got to use personal certificates (which is of course more secure, but more complicated to get going…). So I compiled OpenConnect v4.06 from http://www.infradead.org/openconnect/. As a very basic test, I tried to build a connection by manually invoking openconnect, passing along the key and cert files, like so: /lib/ld-linux.so.3 --library-path /opt/lib \ /opt/openconnect/sbin/openconnect \ --certificate=$VPN_CFG/alexander.crt \ --sslkey=$VPN_CFG/alexander.key \ --cafile=$VPN_CFG/Company_VPN_CA.crt \ --user=alexander --verbose <ip>:443 It fails :( Attempting to connect to <ip>:443 Using certificate file $VPN_CFG/alexander.crt Using client certificate '/[email protected]/OU=Company VPN' 5919:error:0D0680A8:asn1 encoding routines:ASN1_CHECK_TLEN:wrong tag:tasn_dec.c:1315: Loading private key failed (see above errors) Loading certificate failed. Aborting. Failed to open HTTPS connection to <ip> Failed to obtain WebVPN cookie When I run the same command with the same cert/key files on an Ubuntu 12.04 box, it works: openconnect \ --certificate=$VPN_CFG/alexander.crt \ --sslkey=$VPN_CFG/alexander.key \ --cafile=$VPN_CFG/Company_VPN_CA.crt \ --user=alexander --verbose <ip>:443 Attempting to connect to <ip>:443 Using certificate file $VPN_CFG/alexander.crt Extra cert from cafile: '/CN=Company AG VPN CA/O=Company AG/L=Zurich/ST=ZH/C=CH' SSL negotiation with <ip> Server certificate verify failed: self signed certificate Certificate from VPN server "<ip>" failed verification. Reason: self signed certificate Enter 'yes' to accept, 'no' to abort; anything else to view: yes Connected to HTTPS on <ip> GET https://<ip>/ […] Well… The error on the NAS is this: 5919:error:0D0680A8:asn1 encoding routines:ASN1_CHECK_TLEN:wrong tag:tasn_dec.c:1315: Any ideas what's causing this? On Syno, I use OpenConnect 4.06. On Ubuntu, I just compiled and installed OpenConnect 4.06 to a custom location as well. Thanks, Alexander
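
    The wrong tag error from the ASN.1 decoder often just means that this particular OpenSSL build does not like the key file's encoding (for example DER, or a PKCS#8 wrapper, where it expects a traditional PEM RSA key). A hedged way to test that theory is to rewrite the key on the Ubuntu box and retry on the Synology with the converted copy:

        openssl rsa -in alexander.key -out alexander-rsa.key
        # if the key turns out to be DER-encoded rather than PEM:
        openssl rsa -inform DER -in alexander.key -out alexander-rsa.key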

    Read the article

  • restrict access to IIS virtual directory from root website

    - by Senthil
    I have two domains (domain1.com and domain2.com). Both of them use the same Windows hosting server with IIS7. One of the domains is being called the "primary domain" by my hosting provider (GoDaddy) and it always points to the root folder that I was given. For the other domain, I have created a virtual directory in IIS and pointed it there. The folder structure is like this - root/ --Default.aspx --SomeFile.aspx --domain2folder/ ----Default.aspx ----Domain2SomeFile.aspx So, if I type domain1.com, I see the regular Default.aspx. But if I type domain2.com, I am shown the contents of domain2folder as if it were a separate web application - I think that is what IIS virtual directory is meant for. Well and good. But the problem is, when I type http://domain1.com/domain2folder, I see domain2's website! But I don't want that to be shown when I use the path like that from domain1. Users should only be able to see those contents if they use domain2.com. How can I do that? Hope I am making sense. Thanks.
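
    Assuming the URL Rewrite module is (or can be) installed on this IIS7 host, one hedged approach is a rule in the root site's web.config that refuses requests for the virtual directory when the host header is domain1.com; the rule below is a sketch and has not been tested against this exact layout:

        <system.webServer>
          <rewrite>
            <rules>
              <rule name="Block domain2folder on domain1" stopProcessing="true">
                <match url="^domain2folder(/.*)?$" />
                <conditions>
                  <add input="{HTTP_HOST}" pattern="^(www\.)?domain1\.com$" />
                </conditions>
                <action type="CustomResponse" statusCode="404" statusReason="Not Found" statusDescription="Not Found" />
              </rule>
            </rules>
          </rewrite>
        </system.webServer>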

    Read the article

  • Should I use /etc/bind/zones/ or /var/cache/bind/?

    - by nbolton
    Each tutorial seems to have a different opinion on this. For my ISC BIND zones, should I use /etc/bind/zones/ or /var/cache/bind/? In the last install, I used /var/cache/bind/ but only because I was guided to do so; however I just spotted a pid file in there for this new Debian install, so I figured that using the "working directory" to store zone files probably wasn't the best idea. It seems that many admins use this so they don't have to type the full path when declaring a new zone. For example: file "/etc/bind/zones/db.foobar.com"; Instead of: file "db.foobar.com"; Is obviously easier to type, but is it good or bad practice? Some may also suggest setting the working directory to /etc/bind/zones: options { // directory "/var/cache/bind"; directory "/etc/bind/zones"; } ... but something tells me this isn't good practice, since the pid file would be created there I assume (unless it's just in /var/cache/bind by coincidence). I took a look at the manpage but it didn't seem to say what the directory option was for; any ideas exactly what it was designed for?
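
    For what it's worth, the two conventions can be combined: keep /var/cache/bind as named's working directory and point each zone at an absolute path under /etc/bind/zones. A minimal sketch (zone name taken from the example above):

        options {
            directory "/var/cache/bind";    // named's working directory; relative paths resolve here
        };

        zone "foobar.com" {
            type master;
            file "/etc/bind/zones/db.foobar.com";
        };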

    Read the article

  • How can I close a port that appears to be orphaned by Xvfb?

    - by Jim Fiorato
    I'm running Xvfb on a FC8 Amazon EC2 image. On occasion Xvfb will crash (unable at the moment to find out the reason for the crash), and after crashing the TCP port will appear to be orphaned. I'm unable to get a PID to kill any process that may be using it. I'm starting Xvfb with: Xvfb :7 -screen 0 1024x768x24 & Examples of what I'm working with are below, the Xvfb port is (was) 6007: # netstat -ap Active Internet connections (servers and established) Proto Recv-Q Send-Q Local Address Foreign Address State PID/Program name tcp 0 0 *:ssh *:* LISTEN 1894/sshd tcp 0 0 *:6007 *:* LISTEN - tcp 0 352 ip-10-84-69-165.ec2.int:ssh c-71-194-253-238.hsd1:51689 ESTABLISHED 2981/0 udp 0 0 *:bootpc *:* 1817/dhclient udp 0 0 *:bootpc *:* 1463/dhclient Active UNIX domain sockets (servers and established) Proto RefCnt Flags Type State I-Node PID/Program name Path unix 2 [ ] DGRAM 871 668/udevd @/org/kernel/udev/udevd unix 2 [ ACC ] STREAM LISTENING 5385 1880/dbus-daemon /var/run/dbus/system_bus_socket unix 6 [ ] DGRAM 5353 1867/rsyslogd /dev/log unix 2 [ ] DGRAM 11861 2981/0 unix 2 [ ] DGRAM 5461 1974/crond unix 2 [ ] DGRAM 5451 1904/console-kit-da unix 3 [ ] STREAM CONNECTED 5438 1880/dbus-daemon /var/run/dbus/system_bus_socket unix 3 [ ] STREAM CONNECTED 5437 1904/console-kit-da unix 3 [ ] STREAM CONNECTED 5396 1880/dbus-daemon unix 3 [ ] STREAM CONNECTED 5395 1880/dbus-daemon unix 2 [ ] DGRAM 5361 1871/rklogd # lsof -i COMMAND PID USER FD TYPE DEVICE SIZE NODE NAME dhclient 1463 root 3u IPv4 4704 UDP *:bootpc dhclient 1817 root 4u IPv4 5173 UDP *:bootpc sshd 1894 root 3u IPv4 5414 TCP *:ssh (LISTEN) sshd 2981 root 3u IPv4 11825 TCP ip-10-84-69-165.ec2.internal:ssh->c-71-194-253-238.hsd1.il.comcast.net:51689 (ESTABLISHED) Attempting to force the port closed with iptables doesn't seem to work either. iptables -A INPUT -p tcp --dport 6007 -j DROP I'm at a loss as to how to reclaim/free the port. From what I can tell, this port will remain in this state until the EC2 instance is shut down. So, how can I close this port so I can restart Xvfb?
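
    Two hedged things to try before concluding the socket is truly orphaned: fuser can sometimes name (and kill) an owner that netstat reports as "-", and it is worth re-checking as root, since netstat only shows PIDs for sockets it is allowed to inspect. The port is the one from the question:

        fuser -v -n tcp 6007
        fuser -k -n tcp 6007    # kill the owning process, if one is found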

    Read the article

  • Why are symbolic links not working in MySQL?

    - by Eno
    I'm having an issue; I searched a lot but I'm not sure if it's related to a previous security patch. On the latest version of MySQL on Debian Lenny (5.0.51a-24) I need to share one table between two databases, which are in the same path (/var/lib/mysql/db1 & db2). I created symbolic links for db2 pointing to the table in db1. When I query the same table from db2 I get this: 'ERROR 1030 (HY000): Got error 140 from storage engine' This is how it looks: test-lan:/var/lib/mysql/test3# ls -alh drwx------ 2 mysql mysql 4.0K 2010-08-30 13:28 . drwxr-xr-x 6 mysql mysql 4.0K 2010-08-30 13:29 .. lrwxrwxrwx 1 mysql mysql 28 2010-08-30 13:28 blbl.frm -> /var/lib/mysql/test/blbl.frm lrwxrwxrwx 1 mysql mysql 28 2010-08-30 13:28 blbl.MYD -> /var/lib/mysql/test/blbl.MYD lrwxrwxrwx 1 mysql mysql 28 2010-08-30 13:28 blbl.MYI -> /var/lib/mysql/test/blbl.MYI -rw-rw---- 1 mysql mysql 65 2010-08-30 13:24 db.opt I really need those symlinks; is there a way to make them work like before? (The old MySQL server is fine.) Thanks,
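
    Two quick hedged checks on the Lenny box: what MySQL itself says error 140 means, and whether symlink support is enabled at all in this build/configuration:

        perror 140
        mysql -e "SHOW VARIABLES LIKE 'have_symlink';"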

    Read the article

  • Python easy_install confused on Mac OS X

    - by slf
    environment info: $ echo $PATH /opt/local/bin:/opt/local/sbin:/sw/bin:/sw/sbin:/usr/bin:/bin:/usr/sbin:/sbin:/usr/local/bin:/usr/X11/bin:/usr/X11R6/bin:/opt/local/bin:/Developer/Platforms/iPhoneOS.platform/Developer/usr/bin:~/.utility_scripts $ which easy_install /usr/bin/easy_install specifically, let's try the simplejson module (I know it's the same thing as import json in 2.6, but that isn't the point) $ sudo easy_install simplejson Searching for simplejson Reading http://pypi.python.org/simple/simplejson/ Reading http://undefined.org/python/#simplejson Best match: simplejson 2.1.0 Downloading http://pypi.python.org/packages/source/s/simplejson/simplejson-2.1.0.tar.gz#md5=3ea565fd1216462162c6929b264cf365 Processing simplejson-2.1.0.tar.gz Running simplejson-2.1.0/setup.py -q bdist_egg --dist-dir /tmp/easy_install-Ojv_yS/simplejson-2.1.0/egg-dist-tmp-AypFWa The required version of setuptools (>=0.6c11) is not available, and can't be installed while this script is running. Please install a more recent version first, using 'easy_install -U setuptools'. (Currently using setuptools 0.6c9 (/System/Library/Frameworks/Python.framework/Versions/2.6/Extras/lib/python)) error: Setup script exited with 2 ok, so I'll update setuptools... $ sudo easy_install -U setuptools Searching for setuptools Reading http://pypi.python.org/simple/setuptools/ Best match: setuptools 0.6c11 Processing setuptools-0.6c11-py2.6.egg setuptools 0.6c11 is already the active version in easy-install.pth Installing easy_install script to /usr/local/bin Installing easy_install-2.6 script to /usr/local/bin Using /Library/Python/2.6/site-packages/setuptools-0.6c11-py2.6.egg Processing dependencies for setuptools Finished processing dependencies for setuptools I'm not going to speculate, but this could have been caused by any number of environment changes like the Leopard - Snow Leopard upgrade, MacPorts or Fink updates, or multiple Google App Engine updates.
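
    Note that the setuptools update installed its easy_install script to /usr/local/bin, while the shell is still picking up /usr/bin/easy_install (and /usr/bin comes before /usr/local/bin in the PATH shown above). A hedged check of which scripts and interpreters are actually involved:

        which -a easy_install
        head -1 /usr/bin/easy_install /usr/local/bin/easy_install    # the shebang shows which Python each one uses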

    Read the article

  • Always failed in connecting to the Outlook Anywhere through TMG 2010 with certificate ?

    - by Albert Widjaja
    Hi, I have successfully published Exchange Activesync using TMG 2010 and OWA internally only but somehow when I tried to publish the Outlook Anywhere it failed ( as can be seen from the https://www.testexchangeconnectivity.com ) Settings: IIS 7 settings, I have unchecked the require SSL and "Ignore" the client certificate Exchange CAS settings: ServerName : ExCAS02-VM SSLOffloading : True ExternalHostname : activesync.domain.com ClientAuthenticationMethod : Basic IISAuthenticationMethods : {Basic} MetabasePath : IIS://ExCAS02-VM.domainad.com/W3SVC/1/ROOT/Rpc Path : C:\Windows\System32\RpcProxy Server : ExCAS02-VM AdminDisplayName : ExchangeVersion : 0.1 (8.0.535.0) Name : Rpc (Default Web Site) DistinguishedName : CN=Rpc (Default Web Site),CN=HTTP,CN=Protocols,CN=ExCAS02-VM,CN=Servers,CN=Exchange Administrative....... Identity : ExCAS02-VM\Rpc (Default Web Site) Guid : 59873fe5-3e09-456e-9540-f67abc893f5e ObjectCategory : domainad.com/Configuration/Schema/ms-Exch-Rpc-Http-Virtual-Directory ObjectClass : {top, msExchVirtualDirectory, msExchRpcHttpVirtualDirectory} WhenChanged : 18/02/2011 4:31:54 PM WhenCreated : 18/02/2011 4:30:27 PM OriginatingServer : ADDC01.domainad.com IsValid : True Test-OutlookWebServices settings: 1013 Error When contacting https://activesync.domain.com/Rpc received the error The remote server returned an error: (500) Internal Server Error. 1017 Error [EXPR]-Error when contacting the RPC/HTTP service at https://activesync.domain.com/Rpc. The elapsed time was 0 milliseconds. https://www.testexchangeconnectivity.com testing result: Checking the IIS configuration for client certificate authentication. Client certificate authentication was detected. Additional Details Accept/Require client certificates were found. Set the IIS configuration to Ignore Client Certificates if you aren't using this type of authentication. environment: Windows Server 2008 (HT-CAS) Exchange Server 2007 SP1 TMG 2010 Standard Outlook 2007 client SP2. Any kind of help would be greatly appreciated. Thanks.

    Read the article

  • Split horizon, route filtering, and having RIPv2 announce a non-attached route to host

    - by Paul
    Routers A, B & C live at 10.1.1.1, 10.1.1.2 and 10.1.1.3 on a /24 metro Ethernet subnet. Each router also has its own private subnet on another interface. Router B's private subnet links thru a firewall to a 10.20.20.0 network at another organization. Router B redistributes to A and C several static routes for hosts on 10.20.20.0. However, a new host 10.20.20.5/32 must be reached via a different path that goes through router C. I know that C can advertise this host-based route with no problem, but I'd like to keep all my 10.20.20.x static routes in one place. So, how can B tell A via RIPv2 to send packets for 10.20.20.5/32 to C? So far it looks like I need no ip split-horizon on router B's 10.1.1.2 interface, perhaps because B has already learned from C other routes with a next hop of 10.1.1.3. But how does RIPv2 split horizon with no auto-summary and network 10.0.0.0 really work? If B learns a route to ANY 10.x.x.x network or host from A or C, is that enough for split horizon to keep it from redistributing ip route 10.20.20.5 255.255.255.255 10.1.1.3? And if I want to suspend split horizon only for this one new host, how do I filter out the mess of regurgitated routes that B advertises when I try no ip split-horizon? Thanks much.
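
    For concreteness, a hedged IOS-style sketch of the pieces being discussed, with invented interface and list numbers: the host route on B, split horizon disabled only on the shared interface, and an outbound distribute-list as one way to control what gets re-advertised there once split horizon is off (the access-list shown permits only 10.20.20.0/24 and is purely illustrative):

        ip route 10.20.20.5 255.255.255.255 10.1.1.3
        !
        interface FastEthernet0/0
         no ip split-horizon
        !
        router rip
         version 2
         no auto-summary
         network 10.0.0.0
         distribute-list 10 out FastEthernet0/0
        !
        access-list 10 permit 10.20.20.0 0.0.0.255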

    Read the article

  • IIS 7.0 404 Custom Error Page and web.config

    - by Colin
    I am having trouble with a custom 404 error page. I have a domain running a .NET proj with it's own error handling. I have a web.config running for the domain which contains: <customErrors mode="RemoteOnly"> <error statusCode="500" redirect="/Error"/> <error statusCode="404" redirect="/404"/> </customErrors> On a sub dir of that domain I am ignoring all routes there by doing routes.IgnoreRoute("Assets/{*pathInfo}"); in the .NET proj and I want to put a custom 404 error page on that and any sub dir's of Assets. The sub dir contains static content like images, css, js etc etc. So in the Error Pages section of IIS I put a redirect to an absolute URL. The web.config for that dir looks like the following: <system.webServer> <httpErrors> <remove statusCode="404" subStatusCode="-1" /> <error statusCode="404" prefixLanguageFilePath="" path="http://mydomain.com/404" responseMode="Redirect" /> </httpErrors> </system.webServer> But I navigate to an unknown URL under that dir and yet I still see the default IIS 404 page. I am also seeing an alert in IIS that reads: You have configured detailed error messages to be returned for both local and remote requests. When this option is selected, custom error configuration is not used. Does this have anything to do with the customErrors mode="RemoteOnly" in the site web.config? I have tried to overwrite the customErrors in the sub dir web.config but nothing changes. Any help would be appreciated. Thanks.

    Read the article

  • Making OpenSSL work on PHP Windows 2008 server with FastCGI

    - by KacieHouser
    I have been researching all day. Here is what I have done: In C:/PHP/php.ini and C:/PHP/php-cgi-fcgi.ini I have made the extension_dir = "C:/PHP/ext" I uncommented extension=php_openssl.dll I went to http://windows.php.net/download/ and got the thread safe version with the PHP 5.4 (5.4.8) version of DLL's In C:/PHP/ext I replaced the php_openssl.dll with the one I downloaded In System32 and SysWOW64 I added the following DLL's ssleay.dll libeay.dll I restarted the IIS server in the Server Manager under Web Server and stopped and started the World Wide Web Publishing Service That didn't work, so I tried same thing with the unthreaded versions. I still get: Fatal error: Call to undefined function ftp_ssl_connect() in C:\inetpub\wwwroot\REMOVED_dev\save_data.php on line 5 Here are related things from phpinfo(): System Windows NT DEV-WEB1 6.1 build 7601 (Windows Server 2008 R2 Standard Edition Service Pack 1) i586 Compiler MSVC9 (Visual C++ 2008) Architecture x86 Configure Command cscript /nologo configure.js "--enable-snapshot-build" "--enable-debug-pack" "--disable-zts" "--disable-isapi" "--disable-nsapi" "--without-mssql" "--without-pdo-mssql" "--without-pi3web" "--with-pdo-oci=C:\php-sdk\oracle\instantclient10\sdk,shared" "--with-oci8=C:\php-sdk\oracle\instantclient10\sdk,shared" "--with-oci8-11g=C:\php-sdk\oracle\instantclient11\sdk,shared" "--with-enchant=shared" "--enable-object-out-dir=../obj/" "--enable-com-dotnet" "--with-mcrypt=static" "--disable-static-analyze" "--with-pgo" Server API CGI/FastCGI Configuration File (php.ini) Path C:\Windows Loaded Configuration File C:\PHP\php-cgi-fcgi.ini Scan this dir for additional .ini files (none) Additional .ini files parsed (none) Registered PHP Streams php, file, glob, data, http, ftp, zip, compress.zlib, compress.bzip2, https, ftps, sqlsrv, phar Registered Stream Socket Transports tcp, udp, ssl, sslv3, sslv2, tls FTP support enabled Protocols dict, file, ftp, ftps, gopher, http, https, imap, imaps, ldap, pop3, pop3s, rtsp, scp, sftp, smtp, smtps, telnet, tftp openssl OpenSSL support enabled OpenSSL Library Version OpenSSL 0.9.8t 18 Jan 2012 OpenSSL Header Version OpenSSL 0.9.8x 10 May 2012 What am I missing here?

    Read the article

  • Default or fink python and lxml under 10.6.8

    - by songdogtech
    Ah, confusion. I'm trying to install a python library called lxml as needed by a python script. I've been through numerous SU questions and answers. I haven't been able to make much progress. I run easy_install lxml and get: Processing lxml-3.0.1-py2.6-macosx-10.6-universal.egg lxml 3.0.1 is already the active version in easy-install.pth Using /Library/Python/2.6/site-packages/lxml-3.0.1-py2.6-macosx-10.6-universal.egg Processing dependencies for lxml Finished processing dependencies for lxml but when I run my script, I get: File "scraper.py", line 3, in import lxml.html File "/Library/Python/2.6/site-packages/lxml-3.0.1-py2.6-macosx-10.6-universal.egg/lxml/html/init.py", line 42, in from lxml import etree ImportError: dlopen(/Library/Python/2.6/site-packages/lxml-3.0.1-py2.6-macosx-10.6-universal.egg/lxml/etree.so, 2): Symbol not found: _htmlParseChunk Referenced from: /Library/Python/2.6/site-packages/lxml-3.0.1-py2.6-macosx-10.6-universal.egg/lxml/etree.so Expected in: flat namespace in /Library/Python/2.6/site-packages/lxml-3.0.1-py2.6-macosx-10.6-universal.egg/lxml/etree.so I think that maybe I'm not using the correct python install? I've installed python with fink, but should I use OS X's python? This is in my .profile: test -r /sw/bin/init.sh && . /sw/bin/init.sh which points to the fink install. echo $PATH gives me: /sw/bin:/sw/sbin:/usr/bin:/bin:/usr/sbin:/sbin:/usr/local/bin:/usr/X11/bin:/usr/X11R6/bin Should I change that to point to Snow Leopard's python? (Which is 2.6.1) In Library/, there is: which are the lxml libraries I need, it appears, as well as requests. And whereis python gives me /usr/bin/python What do I do? How do I get python to use these libraries? And which python?
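
    Given the mix of fink and Apple Pythons, a hedged first step is to confirm which interpreter the script and easy_install each resolve to, and then install lxml with that same interpreter's own easy_install (or adjust the PATH so both agree):

        which -a python easy_install
        python -c "import sys; print sys.executable; print sys.version"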

    Read the article

  • Apache HTTPD - Segmentation fault when loading mod_jk module

    - by hansengel
    I just set up mod_jk with my Apache httpd 2.0.52 installation, but now when I try to start Apache, it has a segmentation fault. I've checked that I am using the mod_jk compiled for 2.0.x.. built against the same version I have, in fact. I've also verified that the path I'm giving to LoadModule is correct, and the permissions and the ownership of the file are the same as the rest of the modules'. When I remove the "LoadModule" command for mod_jk from my httpd.conf, there is no segmentation fault. Nothing shows in Apache's error logs. I have tried restarting the server with this module using both service httpd restart and httpd. These are the last few lines returned of strace httpd -X: gettimeofday({1292100295, 434487}, NULL) = 0 socket(PF_INET6, SOCK_STREAM, IPPROTO_IP) = -1 EAFNOSUPPORT (Address family not supported by protocol) socket(PF_NETLINK, SOCK_RAW, 0) = 3 bind(3, {sa_family=AF_NETLINK, pid=0, groups=00000000}, 12) = 0 getsockname(3, {sa_family=AF_NETLINK, pid=22378, groups=00000000}, [12]) = 0 time(NULL) = 1292100295 sendto(3, "\24\0\0\0\26\0\1\3\307\342\3M\0\0\0\0\0\305\333\267", 20, 0, {sa_family=AF_NETLINK, pid=0, groups=00000000}, 12) = 20 recvmsg(3, {msg_name(12)={sa_family=AF_NETLINK, pid=0, groups=00000000}, msg_iov(1)=[{"<\0\0\0\24\0\2\0\307\342\3MjW\0\0\2\10\200\376\1\0\0\0"..., 4096}], msg_controllen=0, msg_flags=0}, 0) = 664 recvmsg(3, {msg_name(12)={sa_family=AF_NETLINK, pid=0, groups=00000000}, msg_iov(1)=[{"\24\0\0\0\3\0\2\0\307\342\3MjW\0\0\0\0\0\0\1\0\0\0\10\0"..., 4096}], msg_controllen=0, msg_flags=0}, 0) = 20 close(3) = 0 socket(PF_INET, SOCK_STREAM, IPPROTO_IP) = 3 --- SIGSEGV (Segmentation fault) @ 0 (0) --- +++ killed by SIGSEGV +++ Process 22378 detached Has anyone had a similar problem using Apache 2.0.52 with mod_jk? I might try downloading and building the source for the Apache server and mod_jk myself if there isn't a discovered fix for this.
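
    Two hedged sanity checks before rebuilding anything: confirm the module's architecture matches the httpd binary, and confirm which httpd build the module has to match (the module path below is a placeholder for whatever the LoadModule line points at):

        file /path/to/mod_jk.so $(which httpd)    # both should report the same architecture
        httpd -V                                  # server version and MPM the module must have been built against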

    Read the article

  • OpenPGP does not work in my Thunderbird installation

    - by zerozero
    Hello community, as mentioned above, I have run into serious trouble. Here are the versions of the related software: SuSE 11.2, Thunderbird v3.1.6 (released October 27, 2010), Firefox 3.6 (v3.6.12). I created an installation with its own partition both for the user and for the Thunderbird mail. For a new installation on other hardware I took these partitions. When I wanted to read the emails, I got an error message like this: The GPG agent for your GnuPG version 2.0.12 couldn't be started. Further, I got an error message about access to Enigmail's services: the file jar:file:///usr/lib/mozilla/extensions/{3550f703-e582-4d05-9a08-453d09bdfdc6}/{847b3a00-7ab1-11d4-8f02-006008948af5}/chrome/enigmail.jar!/locale/de-DE/enigmail/help/initError.html couldn't be found. I found out that this path comes neither from Thunderbird nor from Firefox, but from Enigmail. I installed several (un)packing programs; the only effect was that the OpenPGP entry appeared in the Thunderbird menu. The errors described above repeat on every attempt to read an email. I deleted and re-installed Enigmail, but the errors don't disappear. What can I do to get rid of these error messages? Thanks in advance

    Read the article

  • Multiple SVN repos on Debian HTTPd vhost setup

    - by Jonathon Reinhart
    I would like to have my svn/http server setup so I can access multiple repositories via a "svn" subdomain: https://svn.example.com/repo1 https://svn.example.com/repo2 I am using Debian 6, and already have multiple vhosts set up via the standard sites-available method. Resources and their problems: How To: subversion SVN with Apache2 and DAV This one doesn't deal with a server with multiple vhosts. Installing and Configuring Subversion This one only considers one subversion repository. This one does show putting the SVN DAV <Location> in the svn vhost file. However, it doesn't say whether to put it inside or outside the <VirtualHost> tag. Does this really limit the subversion access to just that vhost? I just tried, and can access /foorepo from any subdomain. Setting Up Subversion And Trac As Virtual Hosts On An Ubuntu Server This one appears to be very close, but I can still access repos from any vhost. In other words, it doesn't matter what subdomain I specify, as long as the path matches the repo name. Doesn't make any sense. And yes, my <Location> tag is inside the <VirtualHost>. A lot of these articles seem to have been written in 2006 or earlier, and don't necessarily conform to the configuration methods that newer distros are using. Can anyone guide me in the right direction?
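
    A hedged sketch of the usual sites-available layout for this, with the DAV block inside the svn vhost so it only answers on that name (paths and the auth section are illustrative, not taken from the question):

        <VirtualHost *:443>
            ServerName svn.example.com
            # SSLEngine on, certificate directives as in the other vhosts...

            <Location />
                DAV svn
                SVNParentPath /var/lib/svn
                SVNListParentPath on
                AuthType Basic
                AuthName "Subversion"
                AuthUserFile /etc/apache2/dav_svn.passwd
                Require valid-user
            </Location>
        </VirtualHost>

    If repositories still answer on every vhost, it is worth checking whether a dav_svn Location block was also enabled globally in /etc/apache2/mods-available/dav_svn.conf, since anything there applies to all vhosts.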

    Read the article

  • How can I install Satchmo?

    - by Jonathan Hayward
    I am trying to install Satchmo 0.9 on an Ubuntu 9.10 32-bit guest off of the instructions at http://bitbucket.org/chris1610/satchmo/downloads/Satchmo.pdf. I run into difficulties at 2.1.2: pip install -r http://bitbucket.org/chris1610/satchmo/raw/tip/scripts/requirements.txt pip install -e hg+http://bitbucket.org/chris1610/satchmo/@v0.9#egg=satchmo The first command fails because a compile error for how it's trying to build PIL. So I ran an "aptitude install python-imaging", locally copy the first line's requirements.text, and remove the line that's unsuccessfully trying to build PIL. The first line completes without reported error, as does the second. The next step tells me to change directory to the /path/to/new/store, and run: python clonesatchmo.py A little bit of trouble here; I am told that clonesatchmo.py will be in /bin by now, and it isn't there, but I put some Satchmo stuff under /usr/local, create a symlink in /bin, and run: python /bin/clonesatchmo.py This gives: jonathan@ubuntu:~/store$ python /bin/clonesatchmo.py Creating the Satchmo Application Traceback (most recent call last): File "/bin/clonesatchmo.py", line 108, in <module> create_satchmo_site(opts.site_name) File "/bin/clonesatchmo.py", line 47, in create_satchmo_site import satchmo_skeleton ImportError: No module named satchmo_skeleton A find after apparently checking out the repository reveals that there is no file with a name like satchmo*skeleton* on my system. I thought that bash might be prone to take part of the second pip invocation's URL as the beginning of a comment; I tried both: pip install -e hg+http://bitbucket.org/chris1610/satchmo/@v0.9\#egg=satchmo pip install -e hg+http://bitbucket.org/chris1610/satchmo/@v0.9#egg=satchmo Neither way of doing it seems to take care of the import error mentioned above. How can I get a Satchmo installation under Ubuntu, or at least enough of a Satchmo installation that I am able to start with a skeleton of a store and then flesh it out the way I want? Thanks, Jonathan

    Read the article

  • Upgrading Visio 2000 to Visio 2007

    - by dirtside
    I have Microsoft Visio 2000 SR 1, and recently purchased Microsoft Office Visio Standard 2007 with the understanding (supported by the product info and some other research) that I'd be able to upgrade. However, when I install 2007, it tells me it can't find a previous install of Visio, but... it's right there! Here's the exact message: "Setup can't find a version of Microsoft Office on your computer. If Office is installed on a disk or network share, click the browse button to select the appropriate disk or share... (etc.)" No matter which directory or drive I pick (various Office installs, the old Visio install, various subdirectories) it gives the following message: "The path you have chosen does not point at a qualifying upgradeable product. Click 'Retry' to try again or 'Cancel' to quit setup." Any ideas? This is a legit copy of Visio 2007 (purchased from Amazon) and the copy of Visio 2000 is legit as well. I'm not sure what exactly the installer is looking for that it would consider a "qualifying upgradeable product". A specific file?

    Read the article

  • EFI pxe network boot error

    - by Lee
    Asking this on both [serverfault][1] and [superuser][2]. When attempting to network boot RHEL 5.4 on an old ia64 machine I get the following error: ![alt text][3] So I've basically followed the tutorial here: [http://www-uxsup.csx.cam.ac.uk/pub/doc/suse/sles9/adminguide-sles9/ch04s03.html][4] DHCPD, TFTPD etc. are already set up and working with standard x86 PXE clients. I've unpacked the boot.img file into /tftpboot/ia64/ and passed the path to the elilo.efi file via DHCP with the filename ""; option. Changing this filename generates a PXE file not found error (see below), so I assume that PXE has found the file... ![alt text][5] The only thing wrong I can find in the logs is: Jan 6 19:49:31 dhcphost in.tftpd[31379]: tftp: client does not accept options Any ideas? I'm sure I hit a problem like this a few years ago but I can't remember the fix :) Thanks in advance! [1]: http://serverfault.com/questions/100188/efi-pxe-network-boot-error [2]: http://superuser.com/questions/92295/efi-pxe-network-boot-error [3]: http://i.imgur.com/Zx1Jy.png [4]: http://www-uxsup.csx.cam.ac.uk/pub/doc/suse/sles9/adminguide-sles9/ch04s03.html [5]: http://i.imgur.com/CEzGf.jpg
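
    For reference, a hedged dhcpd.conf fragment for pointing an ia64 EFI client at elilo, with placeholder MAC and addresses (the filename is relative to the tftp root):

        host ia64box {
            hardware ethernet 00:11:22:33:44:55;
            fixed-address 192.0.2.50;
            next-server 192.0.2.1;        # the tftp server
            filename "ia64/elilo.efi";
        }

    The "client does not accept options" line from in.tftpd is usually informational and not fatal on its own.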

    Read the article

  • iptables CLUSTERIP won't work

    - by Rad Akefirad
    We have some requirements which are explained here. We tried to satisfy them without any success as described. Here is the brief information. Requirements: 1. High Availability 2. Load Balancing Current Configuration: Server #1: one static (real) IP for each 10.17.243.11 Server #2: one static (real) IP for each 10.17.243.12 Cluster (virtual and shared among all servers) IP: 10.17.243.15 I tried to use CLUSTERIP to have the cluster IP by the following: on the server #1 iptables -I INPUT -i eth0 -d 10.17.243.15 -j CLUSTERIP --new --hashmode sourceip --clustermac 01:00:5E:00:00:20 --total-nodes 2 --local-node 1 on the server #2 iptables -I INPUT -i eth0 -d 10.17.243.15 -j CLUSTERIP --new --hashmode sourceip --clustermac 01:00:5E:00:00:20 --total-nodes 2 --local-node 2 When we try to ping 10.17.243.15 there is no reply. And the web service (tomcat on port 8080) is not accessible either. However we managed to get the packets on both servers by using TCPDUMP. Some useful information: iptables rules (iptables -L -n -v): Chain INPUT (policy ACCEPT 21775 packets, 1470K bytes) pkts bytes target prot opt in out source destination 0 0 CLUSTERIP all -- eth0 * 0.0.0.0/0 10.17.243.15 CLUSTERIP hashmode=sourceip clustermac=01:00:5E:00:00:20 total_nodes=2 local_node=1 hash_init=0 Chain FORWARD (policy ACCEPT 0 packets, 0 bytes) pkts bytes target prot opt in out source destination Chain OUTPUT (policy ACCEPT 14078 packets, 44M bytes) pkts bytes target prot opt in out source destination Log messages: ... kernel: [ 7.329017] e1000e: eth3 NIC Link is Up 100 Mbps Full Duplex, Flow Control: None ... kernel: [ 7.329133] e1000e 0000:05:00.0: eth3: 10/100 speed: disabling TSO ... kernel: [ 7.329567] ADDRCONF(NETDEV_CHANGE): eth3: link becomes ready ... kernel: [ 71.333285] ip_tables: (C) 2000-2006 Netfilter Core Team ... kernel: [ 71.341804] nf_conntrack version 0.5.0 (16384 buckets, 65536 max) ... kernel: [ 71.343168] ipt_CLUSTERIP: ClusterIP Version 0.8 loaded successfully ... kernel: [ 108.456043] device eth0 entered promiscuous mode ... kernel: [ 112.678859] device eth0 left promiscuous mode ... kernel: [ 117.916050] device eth0 entered promiscuous mode ... kernel: [ 140.168848] device eth0 left promiscuous mode TCPDUMP while pinging: tcpdump: listening on eth0, link-type EN10MB (Ethernet), capture size 65535 bytes 12:11:55.335528 IP (tos 0x0, ttl 64, id 0, offset 0, flags [DF], proto ICMP (1), length 84) 10.17.243.1 > 10.17.243.15: ICMP echo request, id 16162, seq 2390, length 64 12:11:56.335778 IP (tos 0x0, ttl 64, id 0, offset 0, flags [DF], proto ICMP (1), length 84) 10.17.243.1 > 10.17.243.15: ICMP echo request, id 16162, seq 2391, length 64 12:11:57.336010 IP (tos 0x0, ttl 64, id 0, offset 0, flags [DF], proto ICMP (1), length 84) 10.17.243.1 > 10.17.243.15: ICMP echo request, id 16162, seq 2392, length 64 12:11:58.336287 IP (tos 0x0, ttl 64, id 0, offset 0, flags [DF], proto ICMP (1), length 84) 10.17.243.1 > 10.17.243.15: ICMP echo request, id 16162, seq 2393, length 64 And there is no ping reply as I said. Does anyone know which part I missed? Thanks in advance.
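
    One hedged check is whether each node has actually claimed its node number for the cluster IP; the CLUSTERIP target exposes and accepts this through procfs (run on each server, with the node number as configured above):

        cat /proc/net/ipt_CLUSTERIP/10.17.243.15           # should list the node number(s) this host answers for
        echo "+1" > /proc/net/ipt_CLUSTERIP/10.17.243.15   # claim node 1 here if the list is empty (use +2 on server #2)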

    Read the article
