Search Results

Search found 12484 results on 500 pages for 'seraphims host'.


  • Connecting to external MySQL DB from a web server not running MySQL

    - by jrb04c
    While I've been working with MySQL for years, this is the first time I've run across this very newbie-esque issue. Due to a client demand, I must host their website files (PHP) on an IIS server that is not running MySQL (instead, they are running MSSQL). However, I have developed the site using a MySQL database which is located on an external host (Rackspace Cloud). Obviously, my mysql_connect function is now bombing because MySQL is not running on localhost. Question: Is it even possible to hit an external MySQL database if localhost is not running MySQL? Apologies for the rookie question, and many thanks in advance.
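
    In general, the MySQL client protocol runs over plain TCP, so a client library can reach a server on any reachable host; nothing requires MySQL to be installed on the web server itself, and PHP's mysql_connect accepts a remote host name the same way. A minimal sketch of the idea in Python with MySQLdb, where the host, credentials and database name are placeholders:

      # Minimal sketch: connect to a MySQL server on another machine.
      # Host, credentials and database name are placeholders.
      import MySQLdb

      conn = MySQLdb.connect(
          host="mysql.example-cloud.com",  # remote MySQL host (hypothetical)
          port=3306,
          user="appuser",
          passwd="secret",
          db="clientsite",
      )
      cur = conn.cursor()
      cur.execute("SELECT VERSION()")
      print cur.fetchone()[0]
      conn.close()

    The only real requirement is that the remote host (Rackspace Cloud here) allows inbound connections on its MySQL port and that the account is permitted to log in from the web server's address.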

  • Gmail domain account over IMAP not working when authentication is via SAML

    - by mscd000
    I have a Python script that interfaces with Gmail accounts and allows searches, etc. This works on normal accounts (ending in @gmail.com) but not on domain accounts. In this case authentication is done via SAML, and IMAP is enabled on the Gmail domain account. The instructions from Google on how to configure IMAP only seem to work for @gmail.com accounts. I've tried authenticating to IMAP using user and user@admin, with imap.gmail.com as the host as well as my domain's email host, and authentication is not working. Is there a specific IMAP host from Gmail for domain accounts, or another way to get IMAP on Gmail domain accounts? Thanks, Rodolfo
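
    For reference, the IMAP endpoint Google documents for hosted (Google Apps) domains is the same imap.gmail.com:993 used for @gmail.com, with the full address as the login name. A minimal Python sketch; the address and search criterion are placeholders, and if the domain only signs in through SAML single sign-on, a plain password login may be rejected regardless, which could itself be the underlying problem:

      # Sketch: IMAP against a hosted-domain (Google Apps) account.
      # Address and search criterion below are placeholders.
      import getpass
      import imaplib

      m = imaplib.IMAP4_SSL("imap.gmail.com", 993)
      m.login("user@yourdomain.com", getpass.getpass())  # full address, not just "user"
      m.select("INBOX")
      typ, data = m.search(None, '(SUBJECT "invoice")')  # example criterion
      print typ, data[0].split()
      m.logout()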

  • Why does a module compile by itself but fail when used from elsewhere?

    - by chrisribe
    I have a Perl module that appears to compile fine by itself, but is causing other programs to fail compilation when it is included:

      me@host:~/code $ perl -c -Imodules modules/Rebat/Store.pm
      modules/Rebat/Store.pm syntax OK
      me@host:~/code $ perl -c -Imodules bin/rebat-report-status
      Attempt to reload Rebat/Store.pm aborted
      Compilation failed in require at bin/rebat-report-status line 4.
      BEGIN failed--compilation aborted at bin/rebat-report-status line 4.

    The first few lines of rebat-report-status are:

      ...
      3 use Rebat;
      4 use Rebat::Store;
      5 use strict;
      ...

  • Troubleshooting latency spikes on ESXi NFS datastores

    - by exo_cw
    I'm experiencing fsync latencies of around five seconds on NFS datastores in ESXi, triggered by certain VMs. I suspect this might be caused by VMs using NCQ/TCQ, as this does not happen with virtual IDE drives. This can be reproduced using fsync-tester (by Ted Ts'o) and ioping. For example using a Grml live system with a 8GB disk: Linux 2.6.33-grml64: root@dynip211 /mnt/sda # ./fsync-tester fsync time: 5.0391 fsync time: 5.0438 fsync time: 5.0300 fsync time: 0.0231 fsync time: 0.0243 fsync time: 5.0382 fsync time: 5.0400 [... goes on like this ...] That is 5 seconds, not milliseconds. This is even creating IO-latencies on a different VM running on the same host and datastore: root@grml /mnt/sda/ioping-0.5 # ./ioping -i 0.3 -p 20 . 4096 bytes from . (reiserfs /dev/sda): request=1 time=7.2 ms 4096 bytes from . (reiserfs /dev/sda): request=2 time=0.9 ms 4096 bytes from . (reiserfs /dev/sda): request=3 time=0.9 ms 4096 bytes from . (reiserfs /dev/sda): request=4 time=0.9 ms 4096 bytes from . (reiserfs /dev/sda): request=5 time=4809.0 ms 4096 bytes from . (reiserfs /dev/sda): request=6 time=1.0 ms 4096 bytes from . (reiserfs /dev/sda): request=7 time=1.2 ms 4096 bytes from . (reiserfs /dev/sda): request=8 time=1.1 ms 4096 bytes from . (reiserfs /dev/sda): request=9 time=1.3 ms 4096 bytes from . (reiserfs /dev/sda): request=10 time=1.2 ms 4096 bytes from . (reiserfs /dev/sda): request=11 time=1.0 ms 4096 bytes from . (reiserfs /dev/sda): request=12 time=4950.0 ms When I move the first VM to local storage it looks perfectly normal: root@dynip211 /mnt/sda # ./fsync-tester fsync time: 0.0191 fsync time: 0.0201 fsync time: 0.0203 fsync time: 0.0206 fsync time: 0.0192 fsync time: 0.0231 fsync time: 0.0201 [... tried that for one hour: no spike ...] Things I've tried that made no difference: Tested several ESXi Builds: 381591, 348481, 260247 Tested on different hardware, different Intel and AMD boxes Tested with different NFS servers, all show the same behavior: OpenIndiana b147 (ZFS sync always or disabled: no difference) OpenIndiana b148 (ZFS sync always or disabled: no difference) Linux 2.6.32 (sync or async: no difference) It makes no difference if the NFS server is on the same machine (as a virtual storage appliance) or on a different host Guest OS tested, showing problems: Windows 7 64 Bit (using CrystalDiskMark, latency spikes happen mostly during preparing phase) Linux 2.6.32 (fsync-tester + ioping) Linux 2.6.38 (fsync-tester + ioping) I could not reproduce this problem on Linux 2.6.18 VMs. Another workaround is to use virtual IDE disks (vs SCSI/SAS), but that is limiting performance and the number of drives per VM. Update 2011-06-30: The latency spikes seem to happen more often if the application writes in multiple small blocks before fsync. For example fsync-tester does this (strace output): pwrite(3, "aaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaa"..., 1048576, 0) = 1048576 fsync(3) = 0 ioping does this while preparing the file: [lots of pwrites] pwrite(3, "********************************"..., 4096, 1036288) = 4096 pwrite(3, "********************************"..., 4096, 1040384) = 4096 pwrite(3, "********************************"..., 4096, 1044480) = 4096 fsync(3) = 0 The setup phase of ioping almost always hangs, while fsync-tester sometimes works fine. Is someone capable of updating fsync-tester to write multiple small blocks? My C skills suck ;) Update 2011-07-02: This problem does not occur with iSCSI. I tried this with the OpenIndiana COMSTAR iSCSI server. 
But iSCSI does not give you easy access to the VMDK files so you can move them between hosts with snapshots and rsync. Update 2011-07-06: This is part of a wireshark capture, captured by a third VM on the same vSwitch. This all happens on the same host, no physical network involved. I've started ioping around time 20. There were no packets sent until the five second delay was over: No. Time Source Destination Protocol Info 1082 16.164096 192.168.250.10 192.168.250.20 NFS V3 WRITE Call (Reply In 1085), FH:0x3eb56466 Offset:0 Len:84 FILE_SYNC 1083 16.164112 192.168.250.10 192.168.250.20 NFS V3 WRITE Call (Reply In 1086), FH:0x3eb56f66 Offset:0 Len:84 FILE_SYNC 1084 16.166060 192.168.250.20 192.168.250.10 TCP nfs > iclcnet-locate [ACK] Seq=445 Ack=1057 Win=32806 Len=0 TSV=432016 TSER=769110 1085 16.167678 192.168.250.20 192.168.250.10 NFS V3 WRITE Reply (Call In 1082) Len:84 FILE_SYNC 1086 16.168280 192.168.250.20 192.168.250.10 NFS V3 WRITE Reply (Call In 1083) Len:84 FILE_SYNC 1087 16.168417 192.168.250.10 192.168.250.20 TCP iclcnet-locate > nfs [ACK] Seq=1057 Ack=773 Win=4163 Len=0 TSV=769110 TSER=432016 1088 23.163028 192.168.250.10 192.168.250.20 NFS V3 GETATTR Call (Reply In 1089), FH:0x0bb04963 1089 23.164541 192.168.250.20 192.168.250.10 NFS V3 GETATTR Reply (Call In 1088) Directory mode:0777 uid:0 gid:0 1090 23.274252 192.168.250.10 192.168.250.20 TCP iclcnet-locate > nfs [ACK] Seq=1185 Ack=889 Win=4163 Len=0 TSV=769821 TSER=432716 1091 24.924188 192.168.250.10 192.168.250.20 RPC Continuation 1092 24.924210 192.168.250.10 192.168.250.20 RPC Continuation 1093 24.924216 192.168.250.10 192.168.250.20 RPC Continuation 1094 24.924225 192.168.250.10 192.168.250.20 RPC Continuation 1095 24.924555 192.168.250.20 192.168.250.10 TCP nfs > iclcnet_svinfo [ACK] Seq=6893 Ack=1118613 Win=32625 Len=0 TSV=432892 TSER=769986 1096 24.924626 192.168.250.10 192.168.250.20 RPC Continuation 1097 24.924635 192.168.250.10 192.168.250.20 RPC Continuation 1098 24.924643 192.168.250.10 192.168.250.20 RPC Continuation 1099 24.924649 192.168.250.10 192.168.250.20 RPC Continuation 1100 24.924653 192.168.250.10 192.168.250.20 RPC Continuation 2nd Update 2011-07-06: There seems to be some influence from TCP window sizes. I was not able to reproduce this problem using FreeNAS (based on FreeBSD) as a NFS server. The wireshark captures showed TCP window updates to 29127 bytes in regular intervals. I did not see them with OpenIndiana, which uses larger window sizes by default. I can no longer reproduce this problem if I set the following options in OpenIndiana and restart the NFS server: ndd -set /dev/tcp tcp_recv_hiwat 8192 # default is 128000 ndd -set /dev/tcp tcp_max_buf 1048575 # default is 1048576 But this kills performance: Writing from /dev/zero to a file with dd_rescue goes from 170MB/s to 80MB/s. Update 2011-07-07: I've uploaded this tcpdump capture (can be analyzed with wireshark). In this case 192.168.250.2 is the NFS server (OpenIndiana b148) and 192.168.250.10 is the ESXi host. Things I've tested during this capture: Started "ioping -w 5 -i 0.2 ." at time 30, 5 second hang in setup, completed at time 40. Started "ioping -w 5 -i 0.2 ." at time 60, 5 second hang in setup, completed at time 70. 
Started "fsync-tester" at time 90, with the following output, stopped at time 120: fsync time: 0.0248 fsync time: 5.0197 fsync time: 5.0287 fsync time: 5.0242 fsync time: 5.0225 fsync time: 0.0209 2nd Update 2011-07-07: Tested another NFS server VM, this time NexentaStor 3.0.5 community edition: Shows the same problems. Update 2011-07-31: I can also reproduce this problem on the new ESXi build 4.1.0.433742.

  • telnetlib python example

    - by de1337ed
    So I'm trying this really simple example given by the Python docs:

      import getpass
      import sys
      import telnetlib

      HOST = "<HOST_IP>"
      user = raw_input("Enter your remote account: ")
      password = getpass.getpass()

      tn = telnetlib.Telnet(HOST)
      tn.read_until("login: ")
      tn.write(user + "\n")
      if password:
          tn.read_until("Password: ")
          tn.write(password + "\n")
      tn.write("ls\n")
      tn.write("exit\n")
      print tn.read_all()

    My issue is that it hangs at the end of the read_all()... It doesn't print anything out. I've never used this module before so I'm trying to get this really basic example to work before continuing. BTW, I'm using python 2.4. Thank you.
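
    One hedged debugging sketch (not from the original docs example): read_all() only returns once the server closes the connection, so if the "login: " or "Password: " prompt string never actually matches what the server sends, the credentials are never written and the session just sits there. telnetlib's debug output and the optional timeout argument to read_until() make the stall visible; the prompt strings and host below are assumptions to adjust.

      import getpass
      import telnetlib

      HOST = "192.0.2.10"                   # placeholder -- use the real host IP
      PROMPTS = ("login: ", "Password: ")   # assumed prompts; match the server's exactly

      user = raw_input("Enter your remote account: ")
      password = getpass.getpass()

      tn = telnetlib.Telnet(HOST)
      tn.set_debuglevel(1)                  # echo the whole conversation to stdout

      # read_until() returns whatever has arrived when the timeout expires,
      # so a wrong prompt string no longer blocks forever.
      print repr(tn.read_until(PROMPTS[0], 10))
      tn.write(user + "\n")
      if password:
          print repr(tn.read_until(PROMPTS[1], 10))
          tn.write(password + "\n")

      tn.write("ls\n")
      tn.write("exit\n")                    # read_all() returns after the far end closes
      print tn.read_all()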

  • Access Git Repository using Eclipse and Netbeans Plugins with LDAP Users

    - by ukrania
    Hello everyone! I've configured a Git server. I need to use SSH because I've defined permissions using users of my domain, via LDAP; only users with permissions can read a project. So the links to access my repositories look like this: ssh://[email protected]@hostname/var/git/repo.git When I clone, commit or push a project using Linux git commands or using TortoiseGit on Windows, there is no problem; everything works as expected. However, I've tried to clone a project using plugins from Eclipse (EGit) and NetBeans (NBGit), with no success. It seems that they can't recognize the host. I've accessed it using a user from the server (not from the domain) and it cloned the project perfectly. It seems that the plugins assume that the host is everything after the first @. Do you know how I can solve this problem? Are there any other Git plugins for those IDEs? Thanks for your answers. Best Regards, ukrania
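
    One avenue worth trying, offered here as an assumption rather than something confirmed in the thread: take the embedded @ of the LDAP user name out of the URL, either by percent-encoding it as %40 or by moving the user name into the OpenSSH client configuration so that the plugins only ever see a host alias and a path. A sketch with placeholder names:

      # ~/.ssh/config on the client (alias, host and user are hypothetical)
      Host gitserver
          HostName hostname
          User username@domain.com

    The clone URL then becomes ssh://gitserver/var/git/repo.git, with no @ in it at all. Whether EGit and NBGit honor ~/.ssh/config, and whether they accept %40 in the URL, are themselves assumptions to verify.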

  • Serving files inside of directories with ruby and mongrel

    - by AdamB
    I made a simple web server in Ruby using the Mongrel gem. It works great for files in a single directory, but I run into some path issues when inside a subdirectory. Let's say we have 2 files in a directory named "dir1": "index.html" and "style.css". If we visit http://host/dir1/index.html, the file is found, but style.css isn't, because it's being loaded from http://host/style.css instead of from inside the directory. index.html references style.css as if it were in the same directory as itself:

      <link href="style.css" rel="stylesheet" type="text/css" />

    I can get the file if I enter the full path of style.css (/dir1/style.css), but the server doesn't seem to remember that it's already inside a directory and that a filename with no path in front of it should be served from the current directory. This is the Ruby code I'm using to serve out the files:

      require 'rubygems'
      require 'mongrel'

      server = Mongrel::HttpServer.new("0.0.0.0", "3000")
      server.register("/", Mongrel::DirHandler.new("."))
      server.run.join

  • Hosting WCF services

    - by Jayesh
    Hi, I have been working on a Silverlight app that consumes a WCF service. In Visual Studio, as a matter of simplicity, I created the WCF service in the project itself (as in, I didn't host it in IIS, but let the built-in web dev server in VS do it for me). It works well; now I want to deploy it on IIS 7.0. Can you tell me whether I would need to host the service independently and then deploy the rest, or whether, if I just publish the website, the service will be hosted too and the Silverlight client will be able to communicate with it? Please help! Thanks

  • PHP Retrieve Cookie which was not set with setcookie

    - by Martin
    I have the following problem: a third-party software sets a cookie with the login credentials and then forwards the user to my app. I now need to retrieve the cookie values. The problem is, I can do this easily in the frontend/AS3 with

      var ticket : String = CookieUtil.getCookie( 'ticket' ).toString();

    but in PHP, the cookie is not within the $_COOKIES array. The cookie's values are:

      Name: ticket
      Domain: .www.myserver.com
      Path: /
      Send for: encrypted connections only
      Expires: at end of session

    The one I can see, and set earlier in PHP, is:

      Name: myCookie
      Host: www.myserver.com
      Path: /
      Send for: any type of connection
      Expires: at end of session

    Actually, since host/domain are both the same, it should be visible in the PHP script, since it is running on this domain. Any thoughts? Thanks, Martin

  • using Awk inside Perl script

    - by papoyan
    My first question on Stack Overflow! I'm having trouble using the following code inside my Perl script; any advice on how to correct the syntax is really appreciated.

      # If I execute in bash, it's working just fine
      bash$ whois google.com | egrep "\w+([._-]\w)*@\w+([._-]\w)*\.\w{2,4}" |awk ' {for (i=1;i<=NF;i++) {if ( $i ~ /[[:alpha:]]@[[:alpha:]]/ ) { print $i}}}'|head -n1
      [email protected]
      #-----------------------------------
      #but this doesn't work
      bash$ ./email.pl google.com
      awk: {for (i=1;i<=NF;i++) {if ( ~ /[[:alpha:]]@[[:alpha:]]/ ) { print }}}
      awk: ^ syntax error
      # Here is my script
      bash$ cat email.pl
      ####\#!/usr/bin/perl
      $input = lc shift @ARGV;
      $host = $input;
      my $email = `whois $host | egrep "\w+([._-]\w)*@\w+([._-]\w)*\.\w{2,4}" |awk ' {for (i=1;i<=NF;i++) {if ( $i ~ /[[:alpha:]]@[[:alpha:]]/ ) { print $i}}}'|head -1`;
      print my $email;
      bash$

    Thank you in advance!
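
    A side observation grounded in the error output above: in the failing run, the $i has vanished from the awk program ("if ( ~ /.../ ) { print }"), which is exactly what happens when Perl interpolates $i (and finds it undefined) inside the backtick-quoted command. One way to sidestep that layer of quoting entirely is to do the pattern match in the calling program instead of in an embedded awk; purely as an illustration, here is the same whois-and-extract idea in Python (the regex is deliberately simplified, not the one above):

      # Illustration only: pull the first e-mail-looking token out of whois
      # output, without nesting egrep/awk inside a quoted shell command.
      import re
      import subprocess
      import sys

      host = sys.argv[1].lower()
      out = subprocess.Popen(["whois", host], stdout=subprocess.PIPE).communicate()[0]
      match = re.search(r"[\w.+-]+@[\w.-]+\.\w{2,4}", out)
      if match:
          print match.group(0)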

  • System.Net.Dns.GetHostAddresses("")

    - by dbasnett
    Yesterday s**ked, and today ain't (sic) looking better. I have an application I have been working on and it can be slow to start when my ISP is down because of DNS. My ISP was down for 3 hours yesterday, so I didn't think much about this piece of code I had added, until I found that it is always slow to start. This code is supposed to return your IP address and my reading of the link suggests that should be immediate, but it isn't, at least on my machine. Oh, and yesterday before the internet went down, I upgraded (oymoron) to XP SP3, and have had other problems. So my questions / request: 1. Am I doing this right? 2. If you run this on your machine does it take 39 seconds to return your IP address? It does on mine. One other note, I did a packet capture and the first request did NOT go on the wire, but the second did, and was answered quickly. So the question is what happened in XP SP3 that I am missing, besides a brain. One last note. If I resolve a FQDN all is well. Public Class Form1 'http://msdn.microsoft.com/en-us/library/system.net.dns.gethostaddresses.aspx ' 'excerpt 'The GetHostAddresses method queries a DNS server 'for the IP addresses associated with a host name. ' 'If hostNameOrAddress is an IP address, this address 'is returned without querying the DNS server. ' 'When an empty string is passed as the host name, 'this method returns the IPv4 addresses of the local host Private Sub Button1_Click(ByVal sender As System.Object, _ ByVal e As System.EventArgs) Handles Button1.Click Dim stpw As New Stopwatch stpw.Reset() stpw.Start() 'originally Dns.GetHostEntry, but slow also Dim myIPs() As System.Net.IPAddress = System.Net.Dns.GetHostAddresses("") stpw.Stop() Debug.WriteLine("'" & stpw.Elapsed.TotalSeconds) If myIPs.Length > 0 Then Debug.WriteLine("'" & myIPs(0).ToString) 'debug '39.8990525 '192.168.1.2 stpw.Reset() stpw.Start() 'originally Dns.GetHostEntry, but slow also myIPs = System.Net.Dns.GetHostAddresses("www.vbforums.com") stpw.Stop() Debug.WriteLine("'" & stpw.Elapsed.TotalSeconds) If myIPs.Length > 0 Then Debug.WriteLine("'" & myIPs(0).ToString) 'debug '0.042212 '63.236.73.220 End Sub End Class

  • Hosting a WCF service in a Windows service. Works, but I can't reach it from Silverlight.

    - by BaBu
    I've made myself an application-hosted WCF service for my Silverlight application to consume. When I host the WCF service in a forms application, everything works fine. But when I host my WCF service in a Windows service (as explained brilliantly here), I get the dreaded 'NotFound' WebException when I call it from Silverlight. The 'WCF Test Client' tool does not complain, and I'm exposing a decent clientaccesspolicy.xml at the root. Yet Silverlight won't have it. I'm stumped. Anybody have an idea about what could be going on here?

  • Codeigniter or PHP Amazon API help

    - by faya
    Hello, I have a problem searching through amazon web servise using PHP in my CodeIgniter. I get InvalidParameter timestamp is not in ISO-8601 format response from the server. But I don't think that timestamp is the problem,because I have tryed to compare with given date format from http://associates-amazon.s3.amazonaws.com/signed-requests/helper/index.html and it seems its fine. Could anyone help? Here is my code: $private_key = 'XXXXXXXXXXXXXXXX'; // Took out real secret key $method = "GET"; $host = "ecs.amazonaws.com"; $uri = "/onca/xml"; $timeStamp = gmdate("Y-m-d\TH:i:s.000\Z"); $timeStamp = str_replace(":", "%3A", $timeStamp); $params["AWSAccesskeyId"] = "XXXXXXXXXXXX"; // Took out real access key $params["ItemPage"] = $item_page; $params["Keywords"] = $keywords; $params["ResponseGroup"] = "Medium2%2525COffers"; $params["SearchIndex"] = "Books"; $params["Operation"] = "ItemSearch"; $params["Service"] = "AWSECommerceService"; $params["Timestamp"] = $timeStamp; $params["Version"] = "2009-03-31"; ksort($params); $canonicalized_query = array(); foreach ($params as $param=>$value) { $param = str_replace("%7E", "~", rawurlencode($param)); $value = str_replace("%7E", "~", rawurlencode($value)); $canonicalized_query[] = $param. "=". $value; } $canonicalized_query = implode("&", $canonicalized_query); $string_to_sign = $method."\n\r".$host."\n\r".$uri."\n\r".$canonicalized_query; $signature = base64_encode(hash_hmac("sha256",$string_to_sign, $private_key, True)); $signature = str_replace("%7E", "~", rawurlencode($signature)); $request = "http://".$host.$uri."?".$canonicalized_query."&Signature=".$signature; $response = @file_get_contents($request); if ($response === False) { return "response fail"; } else { $parsed_xml = simplexml_load_string($response); if ($parsed_xml === False) { return "parse fail"; } else { return $parsed_xml; } } P.S. - Personally I think that something is wrong in the generation of the from the $string_to_sign when hashing it.
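
    For comparison, here is the same style of signed request sketched in Python (the access key, secret and search parameters are placeholders). Two details worth checking the PHP above against Amazon's signed-request documentation: the string to sign joins its four parts with "\n" rather than "\n\r", and each parameter value, the timestamp included, is URL-encoded exactly once before signing.

      # Sketch of a Product Advertising API ItemSearch request signature
      # (Signature Version 2 style). Keys and parameters are placeholders.
      import base64
      import hashlib
      import hmac
      import time
      import urllib

      ACCESS_KEY = "AKIAEXAMPLE"
      SECRET_KEY = "secret-key-example"
      HOST = "ecs.amazonaws.com"
      URI = "/onca/xml"

      params = {
          "Service": "AWSECommerceService",
          "Operation": "ItemSearch",
          "AWSAccessKeyId": ACCESS_KEY,
          "SearchIndex": "Books",
          "Keywords": "codeigniter",
          "ResponseGroup": "Medium,Offers",
          "Version": "2009-03-31",
          "Timestamp": time.strftime("%Y-%m-%dT%H:%M:%SZ", time.gmtime()),
      }

      # Percent-encode each key and value once, then sort by key.
      query = "&".join(
          "%s=%s" % (urllib.quote(k, safe="-_.~"), urllib.quote(str(v), safe="-_.~"))
          for k, v in sorted(params.items())
      )

      string_to_sign = "GET\n%s\n%s\n%s" % (HOST, URI, query)   # joined by "\n"
      digest = hmac.new(SECRET_KEY, string_to_sign, hashlib.sha256).digest()
      signature = urllib.quote(base64.b64encode(digest), safe="-_.~")

      print "http://%s%s?%s&Signature=%s" % (HOST, URI, query, signature)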

  • ESX3.5 Cluster & MD3000i -- Both servers see iSCSI Targets, Only one server can use partition.

    - by GruffTech
    Alright. First and foremost, Warning. This is a bigger-then-normal question. I like to be thorough and try to eliminate all possible "easymode" answers, as well as give everyone a feel of what i've tried. I've included several images of our setup and the problem it is having.. TLDR Version: So I've followed the guides located here: ESX Deployment Guide V1 this is the guide Dell has sent me to setup two ESX3.5 servers mounting a Dell MD3000i. It doesn't work. Both servers can't use the same storage partition on the MD3000. Both servers see it, but only one server can actually use it. (that server being whatever server created the partition on the target.) Both ESX servers are members of the Host Group. Full Version I have 2 ESX3.5 Servers (10.0.7.102, also called EPI2, and 10.0.7.103, also called EPI3.) connected to a iSCSI SAN Device (Dell MD3000i). Both ESX servers can "scan" the SAN and see the LUNS. Part One: MD3000i Storage On the MD3000i, Both servers are in my host group. I have two partitions, VM1 and VM2, both 1.6TB (vmware doesn't like anything past 2tb.) And you can even see that the ESX servers are targetting the MD3000 just fine. Part Two: The ESX Servers Figure 1. So as you can see above, Both ESX Servers (10.0.7.102 and 10.0.7.103) are able to see and scan the MD3000i SAN. Figure 2. Above is the storage both servers see. I created the storage partition on EPI2 (102). I then Extended the partition to include the second LUN for a grand total of 3.27 TB of storage. When i "rescan" on 103 (the server not mounting the partition), I get the below log in log/messages. Mar 11 10:41:18 epi3 kernel: scsi1: remove-single-device 0 0 0 failed, device busy(4). being the only line that grabs my attentions. (EPI3 is the server name) Mar 11 10:41:04 epi3 vmkiscsid[5436]: Connected to Discovery Address 192.168.130.101 Mar 11 10:41:04 epi3 vmkiscsid[5437]: Connected to Discovery Address 192.168.130.102 Mar 11 10:41:04 epi3 vmkiscsid[5438]: Connected to Discovery Address 192.168.131.101 Mar 11 10:41:04 epi3 vmkiscsid[5439]: Connected to Discovery Address 192.168.131.102 Mar 11 10:41:17 epi3 kernel: scsi singledevice 2 0 0 0 Mar 11 10:41:17 epi3 kernel: Vendor: DELL Model: MD3000i Rev: 0735 Mar 11 10:41:17 epi3 kernel: Type: Direct-Access ANSI SCSI revision: 05 Mar 11 10:41:17 epi3 kernel: VMWARE SCSI Id: Supported VPD pages for sdb : 0x0 0x80 0x83 0x85 0x86 0x87 0xc0 0xc1 0xc2 0xc3 0xc4 0xc8 0xc9 0xca 0xd0 Mar 11 10:41:17 epi3 kernel: VMWARE SCSI Id: Device id info for sdb: 0x1 0x3 0x0 0x10 0x60 0x1 0xe4 0xf0 0x0 0x1a 0x1a 0xa2 0x0 0x0 0x15 0xe2 0x4d 0x75 0xf6 0x99 0x53 0x98 0x0 0x54 0x69 0x71 0x6e 0x2e 0x31 0x39 0x38 0x34 0x2d 0x30 0x35 0x2e 0x63 0x6f 0x6d 0x2e 0x64 0x65 0x6c 0x6c 0x3a 0x70 0x6f 0x77 0x65 0x72 0x76 0x61 0x75 0x6c 0x74 0x2e 0x36 0x30 0x30 0x31 0x65 0x34 0x66 0x30 0x30 0x30 0x31 0x61 0x31 0x61 0x61 0x32 0x30 0x30 0x30 0x30 0x30 0x30 0x30 0x30 0x34 0x37 0x39 0x30 0x36 0x32 0x32 0x65 0x2c 0x74 0x2c 0x30 0x78 0x30 0x30 0x30 0x31 0x30 0x30 0x30 0x30 0x30 0x30 0x30 0x32 0x0 0x0 0x0 0x51 0x94 0x0 0x4 0x0 0x0 0x80 0x1 0x53 0xa8 0x0 0x44 0x69 0x71 0x6e 0x2e 0x31 0x39 0x38 0x34 0x2d 0x30 0x35 0x2e 0x63 0x6f 0x6d 0x2e 0x64 0x65 0x6c 0x6c 0x3a 0x70 0x6f 0x77 0x65 0x72 0x76 0x61 0x75 0x6c 0x74 0x2e 0x36 0x30 0x30 0x31 0x65 0x34 0x66 0x30 0x30 0x30 0x31 0x61 0x31 0x61 0x61 0x32 0x30 0x30 0x30 0x30 0x30 0x30 0x30 0x30 0x34 0x37 0x39 0x30 0x36 0x32 0x32 0x65 0x0 0x0 0x0 0x0 Mar 11 10:41:17 epi3 kernel: VMWARE SCSI Id: Id for sdb 0x60 0x01 0xe4 0xf0 0x00 0x1a 0x1a 0xa2 0x00 0x00 0x15 0xe2 
0x4d 0x75 0xf6 0x99 0x4d 0x44 0x33 0x30 0x30 0x30 Mar 11 10:41:17 epi3 kernel: VMWARE: Unique Device attached as scsi disk sdb at scsi2, channel 0, id 0, lun 0 Mar 11 10:41:17 epi3 kernel: Attached scsi disk sdb at scsi2, channel 0, id 0, lun 0 Mar 11 10:41:17 epi3 kernel: scan_scsis starting finish Mar 11 10:41:17 epi3 kernel: SCSI device sdb: 3509329920 512-byte hdwr sectors (1797751 MB) Mar 11 10:41:17 epi3 kernel: sdb: sdb1 Mar 11 10:41:17 epi3 kernel: scan_scsis done with finish Mar 11 10:41:17 epi3 kernel: scsi singledevice 2 0 0 1 Mar 11 10:41:17 epi3 kernel: Vendor: DELL Model: MD3000i Rev: 0735 Mar 11 10:41:17 epi3 kernel: Type: Direct-Access ANSI SCSI revision: 05 Mar 11 10:41:18 epi3 kernel: VMWARE SCSI Id: Supported VPD pages for sdc : 0x0 0x80 0x83 0x85 0x86 0x87 0xc0 0xc1 0xc2 0xc3 0xc4 0xc8 0xc9 0xca 0xd0 Mar 11 10:41:18 epi3 kernel: VMWARE SCSI Id: Device id info for sdc: 0x1 0x3 0x0 0x10 0x60 0x1 0xe4 0xf0 0x0 0x1a 0x1a 0x86 0x0 0x0 0xd 0xb7 0x4d 0x75 0xf2 0x77 0x53 0x98 0x0 0x54 0x69 0x71 0x6e 0x2e 0x31 0x39 0x38 0x34 0x2d 0x30 0x35 0x2e 0x63 0x6f 0x6d 0x2e 0x64 0x65 0x6c 0x6c 0x3a 0x70 0x6f 0x77 0x65 0x72 0x76 0x61 0x75 0x6c 0x74 0x2e 0x36 0x30 0x30 0x31 0x65 0x34 0x66 0x30 0x30 0x30 0x31 0x61 0x31 0x61 0x61 0x32 0x30 0x30 0x30 0x30 0x30 0x30 0x30 0x30 0x34 0x37 0x39 0x30 0x36 0x32 0x32 0x65 0x2c 0x74 0x2c 0x30 0x78 0x30 0x30 0x30 0x31 0x30 0x30 0x30 0x30 0x30 0x30 0x30 0x32 0x0 0x0 0x0 0x51 0x94 0x0 0x4 0x0 0x0 0x80 0x1 0x53 0xa8 0x0 0x44 0x69 0x71 0x6e 0x2e 0x31 0x39 0x38 0x34 0x2d 0x30 0x35 0x2e 0x63 0x6f 0x6d 0x2e 0x64 0x65 0x6c 0x6c 0x3a 0x70 0x6f 0x77 0x65 0x72 0x76 0x61 0x75 0x6c 0x74 0x2e 0x36 0x30 0x30 0x31 0x65 0x34 0x66 0x30 0x30 0x30 0x31 0x61 0x31 0x61 0x61 0x32 0x30 0x30 0x30 0x30 0x30 0x30 0x30 0x30 0x34 0x37 0x39 0x30 0x36 0x32 0x32 0x65 0x0 0x0 0x0 0x0 Mar 11 10:41:18 epi3 kernel: VMWARE SCSI Id: Id for sdc 0x60 0x01 0xe4 0xf0 0x00 0x1a 0x1a 0x86 0x00 0x00 0x0d 0xb7 0x4d 0x75 0xf2 0x77 0x4d 0x44 0x33 0x30 0x30 0x30 Mar 11 10:41:18 epi3 kernel: VMWARE: Unique Device attached as scsi disk sdc at scsi2, channel 0, id 0, lun 1 Mar 11 10:41:18 epi3 kernel: Attached scsi disk sdc at scsi2, channel 0, id 0, lun 1 Mar 11 10:41:18 epi3 kernel: scan_scsis starting finish Mar 11 10:41:18 epi3 kernel: SCSI device sdc: 3509329920 512-byte hdwr sectors (1797751 MB) Mar 11 10:41:18 epi3 kernel: sdc: sdc1 Mar 11 10:41:18 epi3 kernel: scan_scsis done with finish Mar 11 10:41:18 epi3 kernel: scsi1: remove-single-device 0 0 0 failed, device busy(4). Mar 11 10:41:18 epi3 kernel: scsi singledevice 1 0 0 0 Things I've Tried: Removing iSCSI targets from only 103, disabling iSCSI, rebooting, enabled iSCSI, re-adding targets, rescan. Same result. Removing partition on 102, Formatted partition on 103 instead. Same result, except flipped. 103 can use storage, 102 can not. Starting Over. Removing all iSCSI Targets on both ESX Boxes, disabling iSCSI, turning off the firewall for iSCSI, rebooting ESX. Then on the MD3000, Removed the Host Group, Removed the Host-to-Virtual Mappings, Restarted the SAN. Followed the Documentation again, same result. Both servers see the storage, but only one server can use it. Disabling and Re-enabling VMware DRS and HA. Same result. Flat-out turning off VMware DRS and HA, and doing the "start over" step to see if maybe that borked it. Same Result. I'm kinda loosing my mind here, Everything i read online says "just partition it and if the ESX boxes can see the targets, it just works".... well crap. Any ideas, any other things to try? 
Can anyone atleast point me in the right direction? I'm really tired of working from 1am til 4am (our maintenance hours)

  • client side application not working as intended while using AsyncSocket

    - by Miraaj
    Hi All, I am using AsyncSocket class in a simple client-server application. As a first step I want that - as soon as connection is established between client and server, client transmit a welcome message - "connected to xyz server" to server and server displays it in textview. //The code in ClientController class is: -(void)awakeFromNib{ NSError *error = nil; if (![connectSocket connectToHost:@"192.168.0.32" onPort:25242 error:&error]) { NSLog(@"Error starting client: %@", error); return; } NSLog(@"xyz chat client started on port %hu",[connectSocket localPort]); } - (void)onSocket:(AsyncSocket *)sock didConnectToHost:(NSString *)host port:(UInt16)port{ [sock writeData:[@"connected to xyz server" dataUsingEncoding:NSUTF8StringEncoding] withTimeout:30.0 tag:0]; } - (void)onSocket:(AsyncSocket *)sock didReadData:(NSData *)data withTag:(long)tag{ // some relevant code goes here } - (void)onSocket:(AsyncSocket *)sock didWriteDataWithTag:(long)tag{ NSLog(@"within didWriteDataWithTag:"); // getting this message, means it should have written something to remote socket but // delegate- onSocket:didReadData:withTag: at server side is not getting invoked } // The code in ServerController class is: - (IBAction)startStop:(id)sender{ NSLog(@"startStopAction"); if(!isRunning) { NSError *error = nil; if(![listenSocket acceptOnPort:INPUT_PORT error:&error]) { NSLog(@"Error starting server: %@", error); return; } NSLog(@"Echo server started on port %hu",[listenSocket localPort]); isRunning = YES; [sender setTitle:@"Stop"]; } else { // Stop accepting connections [listenSocket disconnect]; // Stop any client connections int i; for(i = 0; i < [connectedSockets count]; i++) { // Call disconnect on the socket, // which will invoke the onSocketDidDisconnect: method, // which will remove the socket from the list. [[connectedSockets objectAtIndex:i] disconnect]; } NSLog(@"Stopped Echo server"); isRunning = false; [sender setTitle:@"Start"]; } } - (void)onSocket:(AsyncSocket *)sock didAcceptNewSocket:(AsyncSocket *)newSocket{ [connectedSockets addObject:newSocket]; } - (void)onSocket:(AsyncSocket *)sock didConnectToHost:(NSString *)host port:(UInt16)port{ NSLog(@"Accepted client %@:%hu", host, port); // it is getting displayed [sock readDataToData:[AsyncSocket CRLFData] withTimeout:READ_TIMEOUT tag:0]; } - (void)onSocket:(AsyncSocket *)sock didReadData:(NSData *)data withTag:(long)tag{ NSString *msgReceived = [[[NSString alloc] initWithData:data encoding:NSUTF8StringEncoding] autorelease]; NSLog(@"msgReceived in didReadData- %@",msgReceived); // it is not getting displayed [outputView insertText:msgReceived]; [sock readDataToData:[AsyncSocket CRLFData] withTimeout:READ_TIMEOUT tag:0]; } Can anyone suggest me where I may be wrong?? Thanks in advance...... Miraaj

  • code to send email

    - by Alexander
    What am I doing wrong here?

      private void SendMail(string from, string body)
      {
          string mailServerName = "plus.pop.mail.yahoo.com";
          MailMessage message = new MailMessage(from, "[email protected]", "feedback", body);
          SmtpClient mailClient = new SmtpClient();
          mailClient.Host = mailServerName;
          mailClient.Send(message);
          message.Dispose();
      }

    I got the following error:

      A connection attempt failed because the connected party did not properly respond after a period of time, or established connection failed because connected host has failed to respond 209.191.108.191:25

  • Using Jquery to load Facebook fb:profile-pic - not working in IE

    - by Gublooo
    Hey... I followed the tips given on Loading Facebook fb:profile via ajax. In my app, I am basically loading more comments posted by the user via jQuery and, along with each comment, showing the user's picture using the fb:profile-pic tag. This is a sample of how I'm building the string via jQuery:

      newcomment += "<fb:profile-pic uid='"+data.fb_userid+"' height='50' width='50' /";
      newcomment += ""+data.user_name+": ";
      newcomment += ""+data.user_comment+": ";
      $("#morecomments").append(newcomment);

    The profile-pic was not being displayed, so after reading the above link I added:

      if ( FB.XFBML.Host.parseDomTree ) setTimeout( FB.XFBML.Host.parseDomTree, 0 );

    The weird thing now is that it's working in Chrome and Firefox but not in IE. Can't figure out why. Any help will be appreciated. Thanks

  • setting permissions of python module (python setup install)

    - by SetJmp
    I am configuring a distutils-based setup.py for a Python module that is to be installed on a heterogeneous set of resources. Due to the heterogeneity, the location where the module is installed is not the same on each host; distutils picks the host-specific location. I find that the module is installed without o+rx permissions when using distutils (in spite of setting umask ahead of running setup.py). One solution is to manually correct this problem, however I would like an automated means that works on heterogeneous install targets. For example, is there a way to extract the final location of the installation from within setup.py? Any other suggestions? Thanks! SetJmp
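
    On the specific question of finding the final install location from inside setup.py: one hedged possibility (an assumption, not something from the thread) is to subclass the distutils install command, whose install_lib attribute holds the resolved, host-specific destination once options are finalized, and whose get_outputs() lists the files the installation actually wrote; permissions can then be widened after the normal installation has run. A sketch with a placeholder module name:

      # Sketch: setup.py whose install step re-opens o+rx on whatever it
      # just installed. Package/module names below are placeholders.
      import os
      import stat
      from distutils.core import setup
      from distutils.command.install import install

      class InstallWithPermissions(install):
          def run(self):
              install.run(self)
              # self.install_lib is the resolved, host-specific destination;
              # get_outputs() lists the files this installation wrote.
              print "installed under:", self.install_lib
              for path in self.get_outputs():
                  mode = os.stat(path).st_mode
                  os.chmod(path, mode | stat.S_IROTH | stat.S_IXOTH)

      setup(
          name="mymodule",
          version="0.1",
          py_modules=["mymodule"],
          cmdclass={"install": InstallWithPermissions},
      )

    Parent directories may still need the same treatment, and whether plain files really need the extra execute bit is a policy call; o+r alone may be enough for files, with o+rx reserved for directories.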

  • Action Mailer Confirmation Emails - Ruby on Rails...

    - by bgadoci
    I have successfully installed the Clearance Gem from ThoughtBot. Clearance sends a confirmation email upon a new sign_up and suggests adding: config.action_mailer.default_url_options = { :host => 'localhost:3000' } to your /environments/test.rb and development.rb. I have tried this and also config.action_mailer.default_url_options = { :host => '127.0.0.1', :port => 3000 } But can't seem to get rails to send the email. As I am new to both Ruby and Rails, I am wondering if there is some step/config that ThoughtBot is assuming I have already done to send emails. What am I doing wrong/missing? UPDATE: Just added notifier.rb class Notifier < ActionMailer::Base recipients recipient.email_address_with_name bcc ["[email protected]"] from "[email protected]" subject "New account information" body :account => recipient end end

  • msmq binding wcf

    - by pdiddy
    I have some messages in my queue. Now I notice that after 3 tries the service host faults. Is this normal behavior? Where does the count of 3 come from? I thought it came from receiveRetryCount, but I set that one to 1. I have 20 messages in my queue waiting to be processed. The WCF operation that is responsible for processing the message supports transactions, so if it can't process the message it throws, so that the message stays in the queue. I didn't think that it would fault the ServiceHost after a number of retries; is this behavior documented somewhere? I'm running the MSMQ service on my Windows XP machine. I'm mostly interested in documentation indicating that the service host will fault after a number of retries. Is this actually the case?

  • xDebug can't find my files. It always looks in localhost

    - by sleeper
    I'm running Win7 64-bit, NetBeans 7.1.1 and WampServer 2.2 (which includes xDebug). I've configured php.ini (xdebug.remote_enable=on, etc.). I create a directory (a virtual host called example.dev) and add a test file (c:/wamp/example/test-xdebug.php). I run debug in NetBeans and the following URL is opened: http://localhost/example/test-xdebug.php?XDEBUG_SESSION_START=netbeans-xdebug This fails; the browser coughs up the following error message: "Not Found. The requested URL /example/test-xdebug.php was not found on this server." When I use the correct virtual host path, xDebug runs flawlessly: http://example.dev/test-xdebug.php?XDEBUG_SESSION_START=netbeans-xdebug I've tried every configuration I could think of. If this is a php.ini config issue, I sure as heck can't find it. If it's a NetBeans issue, there is no option/interface to modify it (that I can find). Please illuminate! thanks sleeper

  • VC7.1 C1204 internal compiler error

    - by Nathan Ernst
    I'm working on modifying Firaxis' Civilization 4 core game DLL. The host application is built using VC7, hence the constraint (source not provided for the host EXE). I've been working on rewriting a large chunk of the code (focusing on low-hanging performance issues & memory leaks). I recently ran into an internal compiler error when trying to mod the code to use an array class instead of dynamically allocated 2-d arrays, I was going to use matrices from the boost lib (Civ4 is already using boost, so why not?). Basically, the issue comes down to: if I include "boost/numeric/ublas/matrix.hpp", I run into an internal compiler error C1204. MSDN has this to say: MSDN C1204 KB has this to say: KB 883655 So, I'm curious, is it possible to solve this error without a KB/SP being applied and dramatically reducing the complexity of the code? Additionally, as VC7 is no longer "supported", does anyone have a valid (supported) link for a VC7 service pack?

  • solved: passenger(mod_rails) fails to start puppet master under nginx

    - by Anadi Misra
    On the server [root@bangvmpllDA02 logs]# ruby -v ruby 1.8.7 (2011-06-30 patchlevel 352) [x86_64-linux] [root@bangvmpllDA02 logs]# puppet --version 3.0.1 and [root@bangvmpllDA02 logs]# service nginx configtest nginx: the configuration file /apps/nginx/nginx.conf syntax is ok nginx: configuration file /apps/nginx/nginx.conf test is successful [root@bangvmpllDA02 logs]# service nginx status nginx (pid 25923 25921 25920 25917 25908) is running... [root@bangvmpllDA02 logs]# however none of my agents are able to connect to the master, they all fail with errors like so [amisr1@blramisr195602 ~]$ puppet agent --test --verbose --server bangvmpllda02.XXX.com Info: Creating a new SSL certificate request for blramisr195602.XXX.com Info: Certificate Request fingerprint (SHA256): 26:EB:08:1F:82:32:E4:03:7A:64:8E:30:A3:99:93:26:E6:66:B9:B0:49:B6:08:F9:67:CA:1B:0C:00:B9:1D:41 Error: Could not request certificate: Error 405 on SERVER: <html> <head><title>405 Not Allowed</title></head> <body bgcolor="white"> <center><h1>405 Not Allowed</h1></center> <hr><center>nginx</center> </body> </html> Exiting; failed to retrieve certificate and waitforcert is disabled when I check logs on puppet master [root@bangvmpllDA02 logs]# tail puppet_access.log [05/Dec/2012:17:45:18 +0530] "GET /production/certificate/ca? HTTP/1.1" 404 162 "-" "Ruby" [05/Dec/2012:18:32:23 +0530] "PUT /production/certificate_request/sl63anadi.XXX.com HTTP/1.1" 405 166 "-" "-" [05/Dec/2012:18:33:33 +0530] "GET /production/certificate/sl63anadi.XXX.com? HTTP/1.1" 404 162 "-" "-" [05/Dec/2012:18:33:33 +0530] "GET /production/certificate_request/sl63anadi.XXX.com? HTTP/1.1" 404 162 "-" "-" [05/Dec/2012:18:33:33 +0530] "PUT /production/certificate_request/sl63anadi.XXX.com HTTP/1.1" 405 166 "-" "-" and the error logs show that nginx is not really able to process the request well 2012/12/05 18:33:33 [error] 25920#0: *23 open() "/etc/puppet/rack/public/production/certificate/sl63anadi.XXX.com" failed (2: No such file or directory), client: 10.209.47.26, server: , request: "GET /production/certificate/sl63anadi.XXX.com? HTTP/1.1", host: "bangvmpllda02.XXX.com:8140" 2012/12/05 18:33:33 [error] 25920#0: *24 open() "/etc/puppet/rack/public/production/certificate_request/sl63anadi.XXX.com" failed (2: No such file or directory), client: 10.209.47.26, server: , request: "GET /production/certificate_request/sl63anadi.XXX.com? HTTP/1.1", host: "bangvmpllda02.XXX.com:8140" 2012/12/05 18:47:56 [error] 25923#0: *27 open() "/etc/puppet/rack/public/production/certificate/ca" failed (2: No such file or directory), client: 10.209.47.31, server: , request: "GET /production/certificate/ca? HTTP/1.1", host: "bangvmpllda02.XXX.com:8140" 2012/12/05 18:47:56 [error] 25923#0: *28 open() "/etc/puppet/rack/public/production/certificate_request/blramisr195602.XXX.com" failed (2: No such file or directory), client: 10.209.47.31, server: , request: "GET /production/certificate_request/blramisr195602.XXX.com? 
HTTP/1.1", host: "bangvmpllda02.XXX.com:8140" Passenger does not show any application groups either [root@bangvmpllDA02 nginx]# passenger-status ----------- General information ----------- max = 15 count = 0 active = 0 inactive = 0 Waiting on global queue: 0 ----------- Application groups ----------- [root@bangvmpllDA02 nginx]# here's my nginx configuration [root@bangvmpllDA02 logs]# cat ../nginx.conf user puppet; worker_processes 4; #error_log logs/error.log; #error_log logs/error.log notice; error_log logs/error.log info; #pid logs/nginx.pid; events { use epoll; worker_connections 1024; } http { include mime.types; default_type application/octet-stream; log_format main '$remote_addr - $remote_user [$time_local] "$request" ' '$status $body_bytes_sent "$http_referer" ' '"$http_user_agent" "$http_x_forwarded_for"'; access_log logs/access.log main; sendfile on; #tcp_nopush on; server_tokens off; #keepalive_timeout 0; keepalive_timeout 120; gzip on; gzip_http_version 1.1; gzip_disable "msie6"; gzip_vary on; gzip_min_length 1100; gzip_buffers 64 8k; gzip_comp_level 3; gzip_proxied any; gzip_types text/plain text/css application/x-javascript text/xml application/xml; server { listen 80; server_name bangvmpllda02.XXXX.com; charset utf-8; #access_log logs/http.access.log main; location / { root html; index index.html index.htm index.php; } #error_page 404 /404.html; # redirect server error pages to the static page /50x.html # error_page 500 502 503 504 /50x.html; location = /50x.html { root html; } # proxy the PHP scripts to Apache listening on 127.0.0.1:80 # #location ~ \.php$ { # proxy_pass http://127.0.0.1; #} # pass the PHP scripts to FastCGI server listening on 127.0.0.1:9000 # location ~ \.php$ { root html; fastcgi_pass unix:/var/run/php-fpm/php-fpm.sock; fastcgi_index index.php; fastcgi_param SCRIPT_FILENAME $document_root$fastcgi_script_name; fastcgi_param SCRIPT_NAME $fastcgi_script_name; include fastcgi_params; } # deny access to .htaccess files, if Apache's document root # concurs with nginx's one # location ~ /\.ht { access_log off; log_not_found off; deny all; } location ~* \.(jpg|jpeg|gif|png|css|js|ico|xml)$ { access_log off; log_not_found off; expires 2d; } } # Passenger needed for puppet passenger_root /usr/lib/ruby/gems/1.8/gems/passenger-3.0.18; passenger_ruby /usr/bin/ruby; passenger_max_pool_size 15; server { ssl on; listen 8140 default ssl; server_name bangvmpllda02.XXXX.com; passenger_enabled on; passenger_set_cgi_param HTTP_X_CLIENT_DN $ssl_client_s_dn; passenger_set_cgi_param HTTP_X_CLIENT_VERIFY $ssl_client_verify; passenger_min_instances 5; access_log logs/puppet_access.log; error_log logs/puppet_error.log; root /etc/puppet/rack/public; ssl_certificate /var/lib/puppet/ssl/certs/bangvmpllda02.XXX.com.pem; ssl_certificate_key /var/lib/puppet/ssl/private_keys/bangvmpllda02.XXX.com.pem; ssl_crl /var/lib/puppet/ssl/ca/ca_crl.pem; ssl_client_certificate /var/lib/puppet/ssl/certs/ca.pem; ssl_ciphers SSLv2:-LOW:-EXPORT:RC4+RSA; ssl_prefer_server_ciphers on; ssl_verify_client optional; ssl_verify_depth 1; ssl_session_cache shared:SSL:128m; ssl_session_timeout 5m; } } and the puppet.conf [main] # The Puppet log directory. # The default value is '$vardir/log'. logdir = /var/log/puppet # Where Puppet PID files are kept. # The default value is '$vardir/run'. 
rundir = /var/run/puppet dns_alt_names = devops.XXXX.com,devops confdir = /etc/puppet vardir = /var/lib/puppet storeconfigs = true storeconfigs_backend = puppetdb thin_storeconfigs = false async_storeconfigs = false ssl_client_header = SSL_CLIENT_S_D ssl_client_verify_header = SSL_CLIENT_VERIFY # Where SSL certificates are kept. # The default value is '$confdir/ssl'. ssldir = $vardir/ssl any ideas where am I going wrong? I checkthe directory permissions; /usr/share/puppet, /etc/puppet and /var/lib/puppet (and files inside them) are owned by puppet user. Solved The simple solution to my complicated problem was that I had placed the config.ru in wrong place moved it to /etc/puppet/rack , it was in /etc/puppet/rack/public Well!!! :-/

  • Problems using wxWidgets (wxMSW) within multiple DLL instances

    Preface I'm developing VST-plugins which are DLL-based software modules and loaded by VST-supporting host applications. To open a VST-plugin the host applications loads the VST-DLL and calls an appropriate function of the plugin while providing a native window handle, which the plugin can use to draw it's GUI. I managed to port my original VSTGUI code to the wxWidgets-Framework and now all my plugins run under wxMSW and wxMac but I still have problems under wxMSW to find a correct way to open and close the plugins and I am not sure if this is a wxMSW-only issue. Problem If I use any VST-host application I can open and close multiple instances of one of my VST-plugins without any problems. As soon as I open another of my VST-plugins besides my first VST-plugin and then close all instances of my first VST-plugin the application crashes after a short amount of time within the wxEventHandlerr::ProcessEvent function telling me that the wxTheApp object isn't valid any longer during execution of wxTheApp-FilterEvent (see below). So it seems to be that the wxTheApp objects was deleted after closing all instances of the first plugin and is no longer available for the second plugin. bool wxEvtHandler::ProcessEvent(wxEvent& event) { // allow the application to hook into event processing if ( wxTheApp ) { int rc = wxTheApp->FilterEvent(event); if ( rc != -1 ) { wxASSERT_MSG( rc == 1 || rc == 0, _T("unexpected wxApp::FilterEvent return value") ); return rc != 0; } //else: proceed normally } .... } Preconditions 1.) All my VST-plugins a dynamically linked against the C-Runtime and wxWidgets libraries. With regard to the wxWidgets forum this seemed to be the best way to run multiple instances of the software side by side. 2.) The DllMain of each VST-Plugin is defined as follows: // WXW #include "wx/app.h" #include "wx/defs.h" #include "wx/gdicmn.h" #include "wx/image.h" #ifdef __WXMSW__ #include <windows.h> #include "wx/msw/winundef.h" BOOL APIENTRY DllMain ( HANDLE hModule, DWORD ul_reason_for_call, LPVOID lpReserved ) { switch (ul_reason_for_call) { case DLL_PROCESS_ATTACH: { wxInitialize(); ::wxInitAllImageHandlers(); break; } case DLL_THREAD_ATTACH: break; case DLL_THREAD_DETACH: break; case DLL_PROCESS_DETACH: wxUninitialize(); break; } return TRUE; } #endif // __WXMSW__ class Application : public wxApp {}; IMPLEMENT_APP_NO_MAIN(Application) Question How can I prevent this behavior respectively how can I properly handle the wxTheApp object if I have multiple instances of different VST-plugins (DLL-modules), which are dynamically linked against the C-Runtime and wxWidgets libraries? Best reagards, Steffen
