Search Results

Search found 3710 results on 149 pages for 'protocol analyzer'.

Page 28 of 149

  • Installing Glassfish 3.1 on Ubuntu 10.10 Server

    - by andand
    I've used the directions here to successfully install Glassfish 3.0.1 on a virtualized (VirtualBox and VMWare) Ubuntu 10.10 Server instance without any real difficulty that wasn't resolved by more closely following the directions. However, when I try applying them to Glassfish 3.1, I seem to keep getting stuck at section 6, "Security configuration before first startup". In particular, there are some differences I noted: 1) There are two keys in the default keystore. The 's1as' key is still there, but another named 'glassfish-instance' is also there. When I saw this, I deleted and recreated them both along with a 'myAlias' key which I was going to use where needed. 2) When turning security on, it seems like part of the server thinks it's on, but other parts don't. For instance: $ /home/glassfish/bin/asadmin set server-config.network-config.protocols.protocol.admin-listener.security-enabled=true server-config.network-config.protocols.protocol.admin-listener.security-enabled=true Command set executed successfully. $ /home/glassfish/bin/asadmin get server-config.network-config.protocols.protocol.admin-listener.security-enabled server-config.network-config.protocols.protocol.admin-listener.security-enabled=true Command get executed successfully. $ /home/glassfish/bin/asadmin --secure list-jvm-options It appears that server [localhost:4848] does not accept secure connections. Retry with --secure=false. javax.net.ssl.SSLHandshakeException: Remote host closed connection during handshake Command list-jvm-options failed. $ /home/glassfish/bin/asadmin --secure=false list-jvm-options -XX:MaxPermSize=192m -client -Djavax.management.builder.initial=com.sun.enterprise.v3.admin.AppServerMBeanServerBuilder -XX:+UnlockDiagnosticVMOptions -Djava.endorsed.dirs=${com.sun.aas.installRoot}/modules/endorsed${path.separator}${com.sun.aas.installRoot}/lib/endorsed -Djava.security.policy=${com.sun.aas.instanceRoot}/config/server.policy -Djava.security.auth.login.config=${com.sun.aas.instanceRoot}/config/login.conf -Dcom.sun.enterprise.security.httpsOutboundKeyAlias=s1as -Xmx512m -Djavax.net.ssl.keyStore=${com.sun.aas.instanceRoot}/config/keystore.jks -Djavax.net.ssl.trustStore=${com.sun.aas.instanceRoot}/config/cacerts.jks -Djava.ext.dirs=${com.sun.aas.javaRoot}/lib/ext${path.separator}${com.sun.aas.javaRoot}/jre/lib/ext${path.separator}${com.sun.aas.instanceRoot}/lib/ext -Djdbc.drivers=org.apache.derby.jdbc.ClientDriver -DANTLR_USE_DIRECT_CLASS_LOADING=true -Dcom.sun.enterprise.config.config_environment_factory_class=com.sun.enterprise.config.serverbeans.AppserverConfigEnvironmentFactory -Dorg.glassfish.additionalOSGiBundlesToStart=org.apache.felix.shell,org.apache.felix.gogo.runtime,org.apache.felix.gogo.shell,org.apache.felix.gogo.command -Dosgi.shell.telnet.port=6666 -Dosgi.shell.telnet.maxconn=1 -Dosgi.shell.telnet.ip=127.0.0.1 -Dgosh.args=--nointeractive -Dfelix.fileinstall.dir=${com.sun.aas.installRoot}/modules/autostart/ -Dfelix.fileinstall.poll=5000 -Dfelix.fileinstall.log.level=2 -Dfelix.fileinstall.bundles.new.start=true -Dfelix.fileinstall.bundles.startTransient=true -Dfelix.fileinstall.disableConfigSave=false -XX:NewRatio=2 Command list-jvm-options executed successfully. Also, the admin console responds only to http (not https) requests. Thoughts?
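
    A possible avenue to explore (an assumption, not verified against this exact install): GlassFish 3.1 introduced the secure-admin feature, which expects the stock 's1as' and 'glassfish-instance' aliases and is the supported way to put the admin listener on HTTPS, so hand-editing the listener's security-enabled flag can leave the DAS only half switched over. A minimal sketch:

        # Assumes a default GlassFish 3.1 layout under /home/glassfish; adjust paths to taste.
        /home/glassfish/bin/asadmin enable-secure-admin
        /home/glassfish/bin/asadmin restart-domain
        # Afterwards the console should answer on https://localhost:4848 and
        # 'asadmin --secure list-jvm-options' should no longer be refused.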

    Read the article

  • Spanning-tree setup with incompatible switches

    - by wfaulk
    I have a set of eight HP ProCurve 2910al-48G Ethernet switches at my datacenter that are set up in a star topology with no physical loops. I want to partially mesh the switches for redundancy and manage the loops with a spanning-tree protocol. However, our connection to the datacenter is provided by two uplinks, each to a Cisco 3750. The datacenter's switches are handling the redundant connection using PVST spanning-tree, which is a Cisco-proprietary spanning-tree implementation that my HP switches do not support. It appears that my switches are not participating in the datacenter's spanning-tree domain, but are blindly passing the BPDUs between the two switchports on my side, which enables the datacenter's switches to recognize the loop and put one of the uplinks into the Blocking state. This is somewhat supposition, but I can confirm that, while my switches say that both of the uplink ports are forwarding, only one is passing any real quantity of data. (I am assuming that I cannot get the datacenter to move away from PVST. I don't know that I'd want them to make that significant of a change anyway.) The datacenter has also sent me this output from their switches (which I have expurgated of any identifiable info): 3750G-1#sh spanning-tree vlan nnn VLAN0nnn Spanning tree enabled protocol ieee Root ID Priority 10 Address 00d0.0114.xxxx Cost 4 Port 5 (GigabitEthernet1/0/5) Hello Time 2 sec Max Age 20 sec Forward Delay 15 sec Bridge ID Priority 32mmm (priority 32768 sys-id-ext nnn) Address 0018.73d3.yyyy Hello Time 2 sec Max Age 20 sec Forward Delay 15 sec Aging Time 300 sec Interface Role Sts Cost Prio.Nbr Type ------------------- ---- --- --------- -------- -------------------------------- Gi1/0/5 Root FWD 4 128.5 P2p Gi1/0/6 Altn BLK 4 128.6 P2p Gi1/0/8 Altn BLK 4 128.8 P2p and: 3750G-2#sh spanning-tree vlan nnn VLAN0nnn Spanning tree enabled protocol ieee Root ID Priority 10 Address 00d0.0114.xxxx Cost 4 Port 6 (GigabitEthernet1/0/6) Hello Time 2 sec Max Age 20 sec Forward Delay 15 sec Bridge ID Priority 32mmm (priority 32768 sys-id-ext nnn) Address 000f.f71e.zzzz Hello Time 2 sec Max Age 20 sec Forward Delay 15 sec Aging Time 300 sec Interface Role Sts Cost Prio.Nbr Type ------------------- ---- --- --------- -------- -------------------------------- Gi1/0/1 Desg FWD 4 128.1 P2p Gi1/0/5 Altn BLK 4 128.5 P2p Gi1/0/6 Root FWD 4 128.6 P2p Gi1/0/8 Desg FWD 4 128.8 P2p The uplinks to my switches are on Gi1/0/8 on both of their switches. The uplink ports are configured with a single tagged VLAN. I am also using a number of other tagged VLANs in my switch infrastructure. And, to be clear, I am passing the tagged VLAN I'm receiving from the datacenter to other ports on other switches in my infrastructure. My question is: how do I configure my switches so that I can use a spanning tree protocol inside my switch infrastructure without breaking the datacenter's spanning tree that I cannot participate in?

    Read the article

  • SSH: Connection Reset by Peer

    - by hopeless
    I have a Solaris 10 server on another network. I can ping it and telnet to it, but ssh doesn't connect. PuTTY log contains nothing of interest (they both negotiate to ssh v2) and then I get "Event Log: Network error: Software caused connection abort". ssh is definitely running: svcs -a | grep ssh online 12:12:04 svc:/network/ssh:default Here's an extract from the server's /var/adm/messages (anonymised) Jun 8 19:51:05 ******* sshd[26391]: [ID 800047 auth.crit] fatal: Read from socket failed: Connection reset by peer However, if I telnet to the box, I can log in over ssh locally. I can also ssh to other (non-Solaris) machines on that network fine so I don't believe that it's a network issue (though, since I'm a few hundred miles away, I can't be sure). The server's firewall is disabled, so that shouldn't be a problem: root@******** # svcs -a | grep -i ipf disabled Apr_27 svc:/network/ipfilter:default Any ideas what I should start checking? Update: Based on the feedback below, I've run sshd in debug mode. Here's the client output: $ ssh -vvv root@machine -p 32222 OpenSSH_5.0p1, OpenSSL 0.9.8h 28 May 2008 debug2: ssh_connect: needpriv 0 debug1: Connecting to machine [X.X.X.X] port 32222. debug1: Connection established. debug1: identity file /home/lawrencj/.ssh/identity type -1 debug1: identity file /home/lawrencj/.ssh/id_rsa type -1 debug1: identity file /home/lawrencj/.ssh/id_dsa type -1 debug1: Remote protocol version 2.0, remote software version Sun_SSH_1.1 debug1: no match: Sun_SSH_1.1 debug1: Enabling compatibility mode for protocol 2.0 debug1: Local version string SSH-2.0-OpenSSH_5.0 debug2: fd 3 setting O_NONBLOCK debug1: SSH2_MSG_KEXINIT sent Read from socket failed: Connection reset by peer And here's the server output: root@machine # /usr/lib/ssh/sshd -d -p 32222 debug1: sshd version Sun_SSH_1.1 debug1: read PEM private key done: type RSA debug1: private host key: #0 type 1 RSA debug1: read PEM private key done: type DSA debug1: private host key: #1 type 2 DSA debug1: Bind to port 32222 on ::. Server listening on :: port 32222. debug1: Bind to port 32222 on 0.0.0.0. Server listening on 0.0.0.0 port 32222. debug1: Server will not fork when running in debugging mode. Connection from 1.2.3.4 port 2652 debug1: Client protocol version 2.0; client software version OpenSSH_5.0 debug1: match: OpenSSH_5.0 pat OpenSSH* debug1: Enabling compatibility mode for protocol 2.0 debug1: Local version string SSH-2.0-Sun_SSH_1.1 debug1: list_hostkey_types: ssh-rsa,ssh-dss debug1: Failed to acquire GSS-API credentials for any mechanisms (No credentials were supplied, or the credentials were unavailable or inaccessible Unknown code 0 ) debug1: SSH2_MSG_KEXINIT sent Read from socket failed: Connection reset by peer debug1: Calling cleanup 0x4584c(0x0) This line seems a likely candidate: debug1: Failed to acquire GSS-API credentials for any mechanisms (No credentials were supplied, or the credentials were unavailable or inaccessible
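
    If the GSS-API line is indeed the culprit, one low-risk experiment (a sketch; option names differ between OpenSSH builds and Sun SSH, so treat them as assumptions to check against the man pages) is to take GSS-API out of the negotiation and see whether the handshake completes:

        # Client side: retry once with GSS-API authentication disabled
        ssh -o GSSAPIAuthentication=no -p 32222 root@machine

        # Server side (sshd_config on the Solaris box), then restart svc:/network/ssh:
        GSSAPIAuthentication no
        GSSAPIKeyExchange no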

    Read the article

  • Tunneling in IPv6:

    - by JoesyXHN
    Hi, The concept of 6to4 tunneling is to encapsulate and decapsulate an IPv6 packet so it can cross an IPv4 network. The encapsulation process is: [IPv6 header][Transport Header][Application Protocol data] = encapsulation: [IPv4 Header][IPv6 header][Transport Header][Application Protocol data] I am talking about this infrastructure: Host A (IPv6) - Router R1 (dual stack) - IPv4 network - Router R2 (dual stack) - Host B (IPv6). Regarding the IPv4 header added by the encapsulation: whose IPv4 header is this among Host A, Router R1, Router R2 and Host B? Thanks in advance.
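
    For a router-to-router tunnel like this one, the outer IPv4 header belongs to the tunnel endpoints: R1 builds it with its own IPv4 address as the source and R2's IPv4 address as the destination, and R2 strips it off again. The hosts never appear in it. Roughly:

        [IPv4: src = R1, dst = R2][IPv6: src = Host A, dst = Host B][Transport Header][Application Protocol data]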

    Read the article

  • Apache DirectorySlash ignores X-Forwarded-Proto header

    - by Sharique Abdullah
    It seems that Apache's DirectorySlash directive is causing issues when used behind an Amazon ELB with the HTTPS protocol. So, say I access the URL https://myserver.com/svn/MyProject; it redirects me to http://myserver.com/svn/MyProject/ instead. My ELB configuration forwards port 443 to port 80 on Apache, but Apache should be aware of the X-Forwarded-Proto header in the request and thus keep the protocol as https in the redirect URL too. Any thoughts?
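
    DirectorySlash builds its redirect from the scheme of the connection Apache itself saw, so the ELB header never enters into it. One workaround sketch (assuming mod_rewrite is enabled; the pattern below is illustrative and would need adjusting to the real paths) is to issue the trailing-slash redirect yourself whenever the ELB marked the request as HTTPS:

        RewriteEngine On
        # If the original request was HTTPS and the repository path lacks a trailing
        # slash, redirect to the https:// form before DirectorySlash gets a chance to.
        RewriteCond %{HTTP:X-Forwarded-Proto} =https
        RewriteRule ^(/svn/[^/]+)$ https://%{HTTP_HOST}$1/ [R=301,L]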

    Read the article

  • how to block https sites on netgear router?

    - by Karthick88it
    I am using a NETGEAR Wireless-N 300 router to share internet connectivity among a couple of people. I have a problem: at my company I blocked facebook.com, but users can still access it over the HTTPS protocol. I blocked some of Facebook's IPs, but they have a lot of IPs. Please, how do I block Facebook over HTTPS? Can anybody help me create a rule to block HTTPS traffic? For example, I need to block: https://www.facebook.com/ Many thanks, Karthick

    Read the article

  • RDP 6.1.7600 Bug

    - by Jacob
    I've noticed a bug with RDP 6.1.7600. When I try to remote control another user on our Windows Server 2003 machine, I get an error that says "because of a protocol error, this session will be disconnected" and my RDP session is logged out. The error only happens when I try to control a Neoware thin client. Is it because the RDP protocol versions are different?

    Read the article

  • Why won't vyatta allow SMTP through my firewall?

    - by Solignis
    I am setting up a Vyatta router on VMware ESXi, but I seem to have hit a major snag: I could not get my firewall and NAT to work correctly. I am not sure what was wrong with NAT but it "seems" to be working now. But the firewall is not allowing traffic from my WAN interface (eth0) to my LAN (eth1). I can confirm it's the firewall because I disabled all firewall rules and everything worked with just NAT. If I put the firewalls (WAN and LAN) back in place, nothing can get through to port 25. I am not really sure what the issue could be; I am using pretty basic firewall rules. I wrote the rules while looking at the Vyatta docs, so unless there is something odd with the documentation they "should" be working. Here are my NAT rules so far: vyatta@gateway# show service nat rule 20 { description "Zimbra SNAT #1" outbound-interface eth0 outside-address { address 74.XXX.XXX.XXX } source { address 10.0.0.17 } type source } rule 21 { description "Zimbra SMTP #1" destination { address 74.XXX.XXX.XXX port 25 } inbound-interface eth0 inside-address { address 10.0.0.17 } protocol tcp type destination } rule 100 { description "Default LAN -> WAN" outbound-interface eth0 outside-address { address 74.XXX.XXX.XXX } source { address 10.0.0.0/24 } type source } Then here are my firewall rules; this is where I believe the problem is. vyatta@gateway# show firewall all-ping enable broadcast-ping disable conntrack-expect-table-size 4096 conntrack-hash-size 4096 conntrack-table-size 32768 conntrack-tcp-loose enable ipv6-receive-redirects disable ipv6-src-route disable ip-src-route disable log-martians enable name LAN_in { rule 100 { action accept description "Default LAN -> any" protocol all source { address 10.0.0.0/24 } } } name LAN_out { } name LOCAL { rule 100 { action accept state { established enable } } } name WAN_in { rule 20 { action accept description "Allow SMTP connections to MX01" destination { address 74.XXX.XXX.XXX port 25 } protocol tcp } rule 100 { action accept description "Allow established connections back through" state { established enable } } } name WAN_out { } receive-redirects disable send-redirects enable source-validation disable syn-cookies enable SIDENOTE To test for open ports I have been using this website, http://www.yougetsignal.com/tools/open-ports/, which showed port 25 as open without the firewall rules and closed with the firewall rules. UPDATE Just to see if the firewall was working properly I made a rule to block SSH from the WAN interface. When I checked for port 22 on my primary WAN address it said it was still open even though I outright blocked the port. Here is the rule I used: rule 21 { action reject destination { address 74.219.80.163 port 22 } protocol tcp } So now I am convinced either I am doing something wrong or the firewall is not working like it should.
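
    One thing worth checking (an assumption about processing order, not verified on this particular Vyatta release): destination NAT is applied before the WAN_in ruleset is evaluated, so by the time rule 20 sees the packet its destination is already the translated 10.0.0.17 rather than 74.XXX.XXX.XXX, and the accept never matches. A sketch of the adjusted rule:

        rule 20 {
            action accept
            description "Allow SMTP connections to MX01 (post-DNAT address)"
            destination {
                address 10.0.0.17
                port 25
            }
            protocol tcp
        }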

    Read the article

  • Active Directory : AdExplorer and LDAP browsers

    - by webwesen
    I can access my corp AD with SysInternals' "AdExplorer" with no problems. However, when I try to use a generic LDAP browser (ldp.exe in my example) to access the same AD directory, I can't get the required protocol/auth method right. I think I have tried them all. What protocol/settings does AdExplorer use by default?

    Read the article

  • ssh client problem: Connection reset by peer

    - by yonix
    I'm having a really annoying problem on my Ubuntu laptop. I noticed it today, after upgrading to Ubuntu 11.04, although I'm not entirely sure this is the cause as I played with my ssh keys a few days ago. The problem is, whenever I try to ssh to ANY host I get the following error: Read from socket failed: Connection reset by peer running with -vvv gives the following output: OpenSSH_5.8p1 Debian-1ubuntu3, OpenSSL 0.9.8o 01 Jun 2010 debug1: Reading configuration data /etc/ssh/ssh_config debug1: Applying options for * debug2: ssh_connect: needpriv 0 debug1: Connecting to hostname [10.0.0.2] port 22. debug1: Connection established. debug1: permanently_set_uid: 0/0 debug1: identity file /root/.ssh/id_rsa type -1 debug1: identity file /root/.ssh/id_rsa-cert type -1 debug1: identity file /root/.ssh/id_dsa type -1 debug1: identity file /root/.ssh/id_dsa-cert type -1 debug1: identity file /root/.ssh/id_ecdsa type -1 debug1: identity file /root/.ssh/id_ecdsa-cert type -1 debug1: Remote protocol version 1.99, remote software version OpenSSH_4.2 debug1: match: OpenSSH_4.2 pat OpenSSH_4* debug1: Enabling compatibility mode for protocol 2.0 debug1: Local version string SSH-2.0-OpenSSH_5.8p1 Debian-1ubuntu3 debug2: fd 3 setting O_NONBLOCK debug3: load_hostkeys: loading entries for host "hostname" from file "/root/.ssh/known_hosts" debug3: load_hostkeys: loaded 0 keys debug1: SSH2_MSG_KEXINIT sent Read from socket failed: Connection reset by peer My /etc/ssh/ssh_config: Host * SendEnv LANG LC_* HashKnownHosts yes GSSAPIAuthentication no GSSAPIDelegateCredentials no I can connect to my laptop from any other server via ssh, and I can also ssh localhost from my laptop successfully. I can connect to all these other server from other laptops, and I don't see anything in the logs of the other servers regarding my failed attempt. I tried to stop iptables, didn't help. I tried several tricks I could find online with my /etc/ssh/ssh_config, but I was unsuccessful in solving the problem... Any ideas? Edit: This is the log from one of the hosts I try to connect to: May 1 19:15:23 localhost sshd[2845]: debug1: Forked child 2847. 
May 1 19:15:23 localhost sshd[2845]: debug3: send_rexec_state: entering fd = 8 config len 577 May 1 19:15:23 localhost sshd[2845]: debug3: ssh_msg_send: type 0 May 1 19:15:23 localhost sshd[2845]: debug3: send_rexec_state: done May 1 19:15:23 localhost sshd[2847]: debug1: rexec start in 5 out 5 newsock 5 pipe 7 sock 8 May 1 19:15:23 localhost sshd[2847]: debug1: inetd sockets after dupping: 3, 3 May 1 19:15:23 localhost sshd[2847]: Connection from 10.0.0.7 port 55747 May 1 19:15:23 localhost sshd[2847]: debug1: Client protocol version 2.0; client software version OpenSSH_5.8p1 Debian-1ubuntu3 May 1 19:15:23 localhost sshd[2847]: debug1: match: OpenSSH_5.8p1 Debian-1ubuntu3 pat OpenSSH* May 1 19:15:23 localhost sshd[2847]: debug1: Enabling compatibility mode for protocol 2.0 May 1 19:15:23 localhost sshd[2847]: debug1: Local version string SSH-2.0-OpenSSH_5.3 May 1 19:15:23 localhost sshd[2847]: debug2: fd 3 setting O_NONBLOCK May 1 19:15:23 localhost sshd[2847]: debug2: Network child is on pid 2848 May 1 19:15:23 localhost sshd[2847]: debug3: preauth child monitor started May 1 19:15:23 localhost sshd[2847]: debug3: mm_request_receive entering May 1 19:15:23 localhost sshd[2848]: debug3: privsep user:group 74:74 May 1 19:15:23 localhost sshd[2848]: debug1: permanently_set_uid: 74/74 May 1 19:15:23 localhost sshd[2848]: debug1: list_hostkey_types: ssh-rsa,ssh-dss May 1 19:15:23 localhost sshd[2848]: debug1: SSH2_MSG_KEXINIT sent May 1 19:15:23 localhost sshd[2848]: debug3: Wrote 784 bytes for a total of 805 May 1 19:15:23 localhost sshd[2848]: fatal: Read from socket failed: Connection reset by peer

    Read the article

  • Where can I get an open Diameter server application to install

    - by EricJLN
    I need to learn about the Diameter Protocol and its use in different devices. I want to install a Diameter Server, some kind of client that uses Diameter Protocol to authenticate, and then start tweaking things. http://www.opendiameter.org has gone dark (although the sourceforge page still exists). I can't figure out how to install OpenBlox (http://www.traffixsystems.com/OpenBloXDiameterStack.html). Where can I find a Diameter server and some kind of client application to test it with?

    Read the article

  • pretty-printing IP packets

    - by pts
    I'm receiving IP packets using the SLIP protocol, and I'd like to pretty-print them similarly to how tcpdump does it. My program is able to decode the SLIP protocol and create a single string containing an IP packet if necessary. I couldn't find any relevant tcpdump command-line flags except for -r. The file format is documented at http://www.tcpdump.org/pcap/pcap.html , but it looks a bit too complicated. Is there a Linux tool for pretty-printing raw IP packets?
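
    If no ready-made pretty-printer turns up, the pcap container itself is small enough to write by hand and then feed to tcpdump -r (or Wireshark). A minimal Python sketch, assuming the packets are raw IP datagrams (pcap link type 101, LINKTYPE_RAW):

        import struct
        import time

        LINKTYPE_RAW = 101  # pcap link-layer type for raw IPv4/IPv6 packets

        def write_pcap(filename, packets):
            """Write an iterable of raw IP packets (byte strings) to a pcap file."""
            with open(filename, 'wb') as f:
                # Global header: magic, version 2.4, tz offset, sigfigs, snaplen, linktype
                f.write(struct.pack('<IHHiIII', 0xa1b2c3d4, 2, 4, 0, 0, 65535, LINKTYPE_RAW))
                for pkt in packets:
                    ts = time.time()
                    sec, usec = int(ts), int((ts - int(ts)) * 1000000)
                    # Per-packet header: ts_sec, ts_usec, captured length, original length
                    f.write(struct.pack('<IIII', sec, usec, len(pkt), len(pkt)))
                    f.write(pkt)

        # write_pcap('slip.pcap', decoded_ip_packets)  # then: tcpdump -v -r slip.pcap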

    Read the article

  • IIS 7 - allow http for part of site, https for rest?

    - by Martin Clarke
    In IIS 7, is there a way to set two urls on the same site to allow http and https, and the rest to be https only? - http://mysite/url1 or https://mysite/url1 is accepted and stays on that protocol. - http://mysite/url2 or https://mysite/url2 is accepted and stays on that protocol. - any other item, i.e. http://mysite/whatever redirects to https://mysite/whatever - https://mysite/whatever is accepted. Edited because first question wasn't clear enough.
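
    One way to express this (a sketch that assumes the IIS URL Rewrite module is installed; the rule name and patterns are illustrative) is a single site-level redirect rule that forces HTTPS for everything except the two whitelisted paths:

        <rewrite>
          <rules>
            <rule name="Force HTTPS except url1 and url2" stopProcessing="true">
              <match url=".*" />
              <conditions>
                <add input="{HTTPS}" pattern="off" />
                <add input="{REQUEST_URI}" pattern="^/url1" negate="true" />
                <add input="{REQUEST_URI}" pattern="^/url2" negate="true" />
              </conditions>
              <action type="Redirect" url="https://{HTTP_HOST}{REQUEST_URI}" redirectType="Permanent" />
            </rule>
          </rules>
        </rewrite>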

    Read the article

  • Error Building Project With NSXMLParserDelegate.

    - by fuzzygoat
    TurbineXMLParser.h #import <Foundation/Foundation.h> @interface TurbineXMLParser : NSObject <NSXMLParserDelegate> { ... TurbineXMLParser.m #import "TurbineXMLParser.h" I have just added a new class to my current project that I previously tested in a single file. When I try and build the project I get the error: error: cannot find protocol declaration for 'NSXMLParserDelegate' I did a bit of searching and tried adding the following ... TurbineXMLParser.h #import <Foundation/Foundation.h> @protocol NSXMLParserDelegate; @interface TurbineXMLParser : NSObject <NSXMLParserDelegate> { ... but still get the warning: warning: no definition of protocol 'NSXMLParserDelegate' is found any help would be much appreciated gary

    Read the article

  • chat application: pubsubhubbub vs xmpp

    - by sofia
    I'm unsure about the best stack to build a chat application. Currently I'm thinking of two main options: facebook tornado cons: does not use the main chat protocol XMPP but PubSubHubbub pros: I really like its simplicity for development (webserver + webframework); PubSubHubbub also seems simpler as a protocol than XMPP; and I know Python xmpp + BOSH, Punjab, ejabberd cons: don't know Erlang; overall seems a bit harder to develop pros: uses the XMPP protocol The chat app will need to have the following: Private messages Public rooms Private rooms Chat history for rooms (not forever, just the last n messages) html embedding url to chat room Both options seem scalable so that's not really my worry (we're thinking of running the app in Amazon's EC2 as well). I know there's a project that builds an XMPP server using Tornado but it's not ready for production use and our deadline isn't that big. Basically my main worry is ease of development vs somehow regretting later using PubSubHubbub to develop a chat app, but I read somewhere that PubSubHubbub might eventually replace XMPP as REST replaced SOAP - so what do you think?

    Read the article

  • How can I effectively test a scripting engine?

    - by ChaosPandion
    I have been working on an ECMAScript implementation and I am currently working on polishing up the project. As a part of this, I have been writing tests like the following: [TestMethod] public void ArrayReduceTest() { var engine = new Engine(); var request = new ExecScriptRequest(@" var a = [1, 2, 3, 4, 5]; a.reduce(function(p, c, i, o) { return p + c; }); "); var response = (ExecScriptResponse)engine.PostWithReply(request); Assert.AreEqual((double)response.Data, 15D); } The problem is that there are so many points of failure in this test and similar tests that it almost doesn't seem worth it. It almost seems like my effort would be better spent reducing coupling between modules. To write a true unit test I would have to assume something like this: [TestMethod] public void CommentTest() { const string toParse = "/*First Line\r\nSecond Line*/"; var analyzer = new LexicalAnalyzer(toParse); { Assert.IsInstanceOfType(analyzer.Next(), typeof(MultiLineComment)); Assert.AreEqual(analyzer.Current.Value, "First Line\r\nSecond Line"); } } Doing this would require me to write thousands of tests which once again does not seem worth it.

    Read the article

  • Gmail 3-legged OAuth access -- Zend_Mail_Protocol_Exception

    - by tchaymore
    I'm trying to access Gmail by using the three-legged OAuth PHP code provided by Google ('google-mail-xoauth-tools') here: http://code.google.com/apis/gmail/oauth/code.html. I have my domain registered and everything seems to go fine with OAuth, but after I authorize access I get this error: Fatal error: Uncaught exception 'Zend_Mail_Protocol_Exception' with message 'cannot connect to host; error = Connection refused (errno = 111 )' in /home/tchaymor/public_html/gmail/Zend/Mail/Protocol/Imap.php:100 Stack trace: #0 /home/tchaymor/public_html/gmail/Zend/Mail/Protocol/Imap.php(61): Zend_Mail_Protocol_Imap->connect('imap.gmail.com', '993', true) #1 /home/tchaymor/public_html/gmail/three-legged.php(170): Zend_Mail_Protocol_Imap->__construct('imap.gmail.com', '993', true) #2 {main} thrown in /home/tchaymor/public_html/gmail/Zend/Mail/Protocol/Imap.php on line 100 This is my first time using OAuth with any Google products, so it could be something totally brainless I'm missing. Any suggestions would be most welcome (as would suggestions for easier alternatives). I'm more on the designer rather than the coder end, so the simpler the better.
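
    "Connection refused" happens before any OAuth or IMAP logic runs, so the first thing to rule out is whether the web host allows outbound connections to imap.gmail.com on port 993 at all (many shared hosts block outgoing mail/SSL ports). A quick check from a shell on the same server, assuming openssl is available there:

        openssl s_client -connect imap.gmail.com:993
        # A healthy result prints Google's certificate chain followed by an IMAP greeting
        # such as "* OK Gimap ready ..."; "connect: Connection refused" points at the host's firewall.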

    Read the article

  • Core Location Best Placement and User Interruption

    - by b.dot
    Hi All, My application uses Core Location in three different views. It's working perfectly. In my first view, I subclass the CLLocationManager and use protocol methods for location updates to my calling class. Before I install the framework and code in my other classes, I was wondering: Is the protocol method the best way? What happens to the Core Location execution if the user exits the view or quits the app while it's trying to get a location fix? Is the location task terminated with the GPS system turned off immediately? If the user simply switches to another view, is it OK to assume that I can start Core Location in the next view without regard to the last? Where should the first update location call be placed? Should the application delegate instantiate the CLLocationManager class using the protocol so that it can update any of the views chosen, or should each class instantiate the manager? Any feedback would be appreciated. Thanks.

    Read the article

  • SqlPlus on mac osx 10.6 doesn't work

    - by lesce
    When I try to run this #sqlplus system@orcl it gives me this error SQL*Plus: Release 10.1.0.3.0 - Production on Tue Apr 20 02:24:41 2010 Copyright (c) 1982, 2004, Oracle. All rights reserved. Enter password: ERROR: ORA-12154: TNS:could not resolve the connect identifier specified The Oracle server is working; I can connect through SQL Developer. My .profile looks like this export PATH=/opt/local/bin:/opt/local/sbin:$PATH # Setting PATH for Python 3.1 # The orginal version is saved in .profile.pysave PATH="/Library/Frameworks/Python.framework/Versions/3.1/bin:${PATH}" DYLD_LIBRARY_PATH=/Users/lesce/instantclient export TNS_ADMIN=/Users/lesce/instantclient export ORACLE_SID="orcl" export DYLD_LIBRARY_PATH export PATH=$PATH:/Users/lesce/instantclient tnsnames.ora ORCL = (DESCRIPTION = (ADDRESS = (PROTOCOL = TCP)(HOST = localhost)(PORT = 1521)) (CONNECT_DATA = (SERVER = DEDICATED) (SERVICE_NAME = orcl) ) ) listener.ora SID_LIST_LISTENER = (SID_LIST = (SID_DESC = (SID_NAME = PLSExtProc) (ORACLE_HOME = /Users/oracle/oracle/product/10.2.0/db_1) (PROGRAM = extproc) ) (SID_DESC = (SID_NAME = orcl) (ORACLE_HOME = /Users/oracle/oracle/product/10.2.0/db_1) ) ) LISTENER = (DESCRIPTION_LIST = (DESCRIPTION = (ADDRESS = (PROTOCOL = TCP)(HOST = localhost)(PORT = 1521)) (ADDRESS = (PROTOCOL = IPC)(KEY = EXTPROC0)) ) ) My SQL Developer configuration username : sys role : sysdba connection type : basic hostname : localhost port : 1521 sid : orcl
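
    ORA-12154 means sqlplus never resolved the 'orcl' alias, which usually comes down to the shell that runs sqlplus not seeing TNS_ADMIN or the tnsnames.ora entry not being picked up. Two quick checks worth trying (a sketch; the EZCONNECT form assumes the 10g instant client accepts it, which it normally does):

        # Bypass tnsnames.ora entirely with the EZCONNECT syntax
        sqlplus system@//localhost:1521/orcl

        # Confirm the environment the sqlplus shell actually sees
        echo $TNS_ADMIN
        ls -l $TNS_ADMIN/tnsnames.ora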

    Read the article

  • How do you build a Request-Response service using Asyncore in Python?

    - by Casey
    I have a 3rd-party protocol module (SNMP) that is built on top of asyncore. The asyncore interface is used to process response messages. What is the proper technique to design a client that generates the request side of the protocol while the asyncore main loop is running? I can think of two options right now: Use the loop/timeout parameters of asyncore.loop() to allow my client program time to send the appropriate request. Create a client asyncore dispatcher that will be executed in the same asyncore processing loop as the receiver. What is the best option? I'm working on the 2nd solution, because the protocol API does not give me direct access to the asyncore parameters. Please correct me if I've misunderstood the proper technique for utilizing asyncore.
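
    The second option is the idiomatic asyncore answer: the request side becomes just another dispatcher registered in the same (default) channel map, so a single asyncore.loop() call drives both it and the module's response handling. A minimal sketch of such a client, with a hypothetical host, port and payload:

        import asyncore
        import socket

        class RequestClient(asyncore.dispatcher):
            """Sends one request and accumulates the reply inside the shared asyncore loop."""

            def __init__(self, host, port, request):
                asyncore.dispatcher.__init__(self)
                self.create_socket(socket.AF_INET, socket.SOCK_STREAM)
                self.connect((host, port))
                self.out_buffer = request
                self.response = ''

            def writable(self):
                # Only ask for write events while request data remains unsent
                return len(self.out_buffer) > 0

            def handle_write(self):
                sent = self.send(self.out_buffer)
                self.out_buffer = self.out_buffer[sent:]

            def handle_read(self):
                self.response += self.recv(8192)

            def handle_close(self):
                self.close()

        # The SNMP module has already registered its own dispatchers; creating this
        # one before calling asyncore.loop() lets one loop service both directions.
        client = RequestClient('192.0.2.10', 8080, 'GET / HTTP/1.0\r\n\r\n')
        asyncore.loop(timeout=1)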

    Read the article

  • Lucene and Special Characters

    - by Brandon
    I am using Lucene.Net 2.0 to index some fields from a database table. One of the fields is a 'Name' field which allows special characters. When I perform a search, it does not find my document that contains a term with special characters. I index my field as such: Directory DALDirectory = FSDirectory.GetDirectory(@"C:\Indexes\Name", false); Analyzer analyzer = new StandardAnalyzer(); IndexWriter indexWriter = new IndexWriter(DALDirectory, analyzer, true, IndexWriter.MaxFieldLength.UNLIMITED); Document doc = new Document(); doc.Add(new Field("Name", "Test (Test)", Field.Store.YES, Field.Index.TOKENIZED)); indexWriter.AddDocument(doc); indexWriter.Optimize(); indexWriter.Close(); And I search doing the following: value = value.Trim().ToLower(); value = QueryParser.Escape(value); Query searchQuery = new TermQuery(new Term(field, value)); Searcher searcher = new IndexSearcher(DALDirectory); TopDocCollector collector = new TopDocCollector(searcher.MaxDoc()); searcher.Search(searchQuery, collector); ScoreDoc[] hits = collector.TopDocs().scoreDocs; If I perform a search for field as 'Name' and value as 'Test', it finds the document. If I perform the same search as 'Name' and value as 'Test (Test)', then it does not find the document. Even more strange, if I remove the QueryParser.Escape line do a search for a GUID (which, of course, contains hyphens) it finds documents where the GUID value matches, but performing the same search with the value as 'Test (Test)' still yields no results. I am unsure what I am doing wrong. I am using the QueryParser.Escape method to escape the special characters and am storing the field and searching by the Lucene.Net's examples. Any thoughts?

    Read the article

  • Error while importing SSL into jboss 4.2 ?

    - by worldpython
    I've tried to set up a .keystore on JBoss 4.2 following this documentation from the JBoss community http://community.jboss.org/wiki/sslsetup but the JBoss console generates this error LifecycleException: service.getName(): "jboss.web"; Protocol handler start failed: java.io.FileNotFoundException: C:\Documents and Settings\mebada\.keystore (The system cannot find the file specified) even though I specify the location of the keystore in server.xml <Connector className = "org.apache.coyote.tomcat4.CoyoteConnector" address="${jboss.bind.address}" port = "8443" protocol="HTTP/1.1" SSLEnabled="true" scheme = "https" secure = "true"> <Factory className = "org.apache.coyote.tomcat4.CoyoteServerSocketFactory" keystoreFile="D:/Projects/Demo/jboss-4.2.3.GA/jboss-4.2.3.GA/server/default/conf/server.keystore" keystorePass="tc-ssl" protocol = "TLS"></Factory> Any help? Thanks in advance
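
    One detail that stands out (an assumption drawn from the error falling back to ${user.home}\.keystore): the nested Factory element is the old Tomcat 4 syntax, while JBoss 4.2 ships JBoss Web (Tomcat 6 based), which expects the keystore attributes directly on the Connector, along these lines:

        <Connector port="8443" protocol="HTTP/1.1" SSLEnabled="true"
                   scheme="https" secure="true" clientAuth="false" sslProtocol="TLS"
                   keystoreFile="D:/Projects/Demo/jboss-4.2.3.GA/jboss-4.2.3.GA/server/default/conf/server.keystore"
                   keystorePass="tc-ssl" />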

    Read the article

  • Implement a vpn

    - by jackson
    I want to build a client-server application (client.exe) that does the following: when the clients run it, they are placed in a VPN and can communicate with each other within one application. For example: clients run client.exe and they can see each other in LAN ONLY in Starcraft. From what I have read, the right type of VPN for this situation is the Secure Socket Tunneling Protocol: "Secure socket tunneling protocol, also referred to as SSTP, is by definition an application-layer protocol. It is designed to employ a synchronous communication in a back and forth motion between two programs. It allows many application endpoints over one network connection, between peer nodes, thereby enabling efficient usage of the communication resources that are available to that network." Question: I don't have experience with network programming, so my question for those who have: is this the right approach? PS1: I don't want something ready-made like OpenVPN; I am doing this as a learning exercise. PS2: the application targets Windows and I plan to use .NET. Thanks for reading the whole story; I am waiting for your replies.

    Read the article

  • Parsing Chunk of Data into Hash of Array With Perl

    - by neversaint
    I have data that looks like this: #info #info2 1:SRX004541 Submitter: UT-MGS, UT-MGS Study: Glossina morsitans transcript sequencing project(SRP000741) Sample: Glossina morsitans(SRS002835) Instrument: Illumina Genome Analyzer Total: 1 run, 8.3M spots, 299.9M bases Run #1: SRR016086, 8330172 spots, 299886192 bases 2:SRX004540 Submitter: UT-MGS Study: Anopheles stephensi transcript sequencing project(SRP000747) Sample: Anopheles stephensi(SRS002864) Instrument: Solexa 1G Genome Analyzer Total: 1 run, 8.4M spots, 401M bases Run #1: SRR017875, 8354743 spots, 401027664 bases 3:SRX002521 Submitter: UT-MGS Study: Massive transcriptional start site mapping of human cells under hypoxic conditions.(SRP000403) Sample: Human DLD-1 tissue culture cell line(SRS001843) Instrument: Solexa 1G Genome Analyzer Total: 6 runs, 27.1M spots, 977M bases Run #1: SRR013356, 4801519 spots, 172854684 bases Run #2: SRR013357, 3603355 spots, 129720780 bases Run #3: SRR013358, 3459692 spots, 124548912 bases Run #4: SRR013360, 5219342 spots, 187896312 bases Run #5: SRR013361, 5140152 spots, 185045472 bases Run #6: SRR013370, 4916054 spots, 176977944 bases What I want to do is create a hash of arrays, with the first line of each chunk as the key and the SRR accession from each line matching "^Run" as the array members: $VAR = { 'SRX004541' => ['SRR016086'], # etc } But my construct below doesn't work, and I don't see why. There must also be a better way to do it. use Data::Dumper; my %bighash; my $head = ""; my @temp = (); while ( <> ) { chomp; next if (/^\#/); if ( /^\d{1,2}:(\w+)/ ) { print "$1\n"; $head = $1; } elsif (/^Run \#\d+: (\w+),.*/){ print "\t$1\n"; push @temp, $1; } elsif (/^$/) { push @{$bighash{$head}}, [@temp]; @temp =(); } } print Dumper \%bighash ;

    Read the article
