Search Results

Search found 545 results on 22 pages for 'pwd'.

  • php_ibm_db2.dll on IIS 7.5 using PHP 5.3 error message

    - by grmbl
    I'm trying to use the ibm_db2 extension to access an iSeries DB2 database. This is the test code (taken from here):

        <?php
        $database = 'ALI452BFAL'; // library
        $user     = 'STN452';
        $password = '**********';
        $hostname = 'myserverip';
        $port     = 50000;
        $conn_string = "DRIVER={IBM DB2 ODBC DRIVER};DATABASE=$database;" .
                       "HOSTNAME=$hostname;PORT=$port;PROTOCOL=TCPIP;UID=$user;PWD=$password;";
        $conn = db2_connect($conn_string, '', '');
        if ($conn) {
            print "ok";
            db2_close($conn);
        } else {
            echo db2_conn_error() . '<br>' . db2_conn_errormsg();
        }
        ?>

    I have installed a very basic package containing the DB2 driver (IBM Data Server Driver for ODBC, CLI, and .NET.msi) and added it as an extension. This is my result:

        08001 [IBM][CLI Driver] SQL30081N A communication error has been detected.
        Communication protocol being used: "TCP/IP". Communication API being used: "SOCKETS".
        Location where the error was detected: "10.10.0.120".
        Communication function detecting the error: "connect".
        Protocol specific error code(s): "10061", "", "".
        SQLSTATE=08001 SQLCODE=-30081

    Has anybody tried this before?
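    For reference, protocol-specific error 10061 is the Windows "connection refused" code, which points at the network path (nothing listening on that host/port, or a firewall rejecting the connection) rather than at the PHP code itself. A quick reachability check can be scripted; this is a minimal sketch in Python, with the host and port copied from the connection string above as placeholders:

        import socket

        host = "myserverip"   # placeholder: the HOSTNAME from the connection string
        port = 50000          # placeholder: the PORT from the connection string

        try:
            # Attempt a plain TCP connection, nothing DB2-specific.
            with socket.create_connection((host, port), timeout=5):
                print("TCP connection to %s:%d succeeded" % (host, port))
        except OSError as exc:
            print("TCP connection to %s:%d failed: %s" % (host, port, exc))

    If this fails the same way, the problem is connectivity (port, firewall, or the DB2 listener), not the ibm_db2 extension.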

  • FTP not listing directory NcFTP PASV

    - by Jacob Talbot
    I am attempting to set up Multicraft on my server. Everything is running smoothly, except that the FTP server won't allow anyone to connect from a remote FTP client, whereas net2ftp works fine from a remote location. I have disabled iptables as well, and still no luck either way. I have included the transcript from my FTP client, Transmit, below to give you an idea of what's going on.

        Transmit 4.1.7 (x86_64) Session Transcript [Version 10.8.2 (Build 12C54)] (21/10/12 11:23 PM)
        LibNcFTP 3.2.3 (July 23, 2009) compiled for UNIX
        220: Multicraft 1.7.1 FTP server
        Connected to ateam.bn-mc.net.
        Cmd: USER jacob.9
        331: Username ok, send password.
        Cmd: PASS xxxxxxxx
        230: Login successful
        Cmd: TYPE A
        200: Type set to: ASCII.
        Logged in to ateam.bn-mc.net as jacob.9.
        Cmd: SYST
        215: UNIX Type: L8
        Cmd: FEAT
        211: Features supported: EPRT EPSV MDTM MLSD MLST type*;perm*;size*;modify*;unique*;unix.mode;unix.uid;unix.gid; REST STREAM SIZE TVFS UTF8 End FEAT.
        Cmd: OPTS UTF8 ON
        200: OK
        Cmd: PWD
        257: "/" is the current directory.
        Cmd: PASV
        Could not read reply from control connection -- timed out. (SReadline 1)
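    The session dies exactly at PASV, which usually points at the passive data connection (the server's passive port range not reachable from outside, or the server advertising an unreachable address) rather than at authentication. A quick client-side comparison of passive and active mode can be scripted; this is a minimal sketch in Python, with the credentials as placeholders:

        from ftplib import FTP

        HOST = "ateam.bn-mc.net"   # from the transcript above
        USER = "jacob.9"           # placeholder credentials
        PASSWD = "secret"

        for passive in (True, False):
            ftp = FTP()
            ftp.connect(HOST, 21, timeout=15)
            ftp.login(USER, PASSWD)
            ftp.set_pasv(passive)      # True = PASV, False = PORT (active)
            try:
                names = ftp.nlst()
                print("passive=%s: OK, %d entries" % (passive, len(names)))
            except Exception as exc:
                print("passive=%s: failed (%s)" % (passive, exc))
            finally:
                ftp.close()

    If active mode lists fine while passive times out, the passive port range configured in Multicraft (and any NAT/firewall in front of it) is the thing to check.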

  • Apache 2.2 + mod_fcgid + PHP 5.4: (104) Connection reset by peer

    - by Michele Piccirillo
    On a Debian 6 VPS, I'm running PHP 5.4 via mod_fcgid on a couple of different virtual hosts, managed by Virtualmin GPL. At random I get 500 Internal Server Errors; restarting Apache brings everything back to normal. Examining the logs, I find messages of this kind:

        [Thu Oct 04 15:39:35 2012] [warn] [client 173.252.100.117] (104)Connection reset by peer: mod_fcgid: error reading data from FastCGI server
        [Thu Oct 04 15:39:35 2012] [error] [client 173.252.100.117] Premature end of script headers: index.php

    Any ideas about what is happening? UPDATE: I found a similar question whose author reported having solved the problem by disabling APC. I tried following that advice, but I'm still getting the same errors.

    VirtualHost configuration:

        SuexecUserGroup "#1000" "#1000"
        ServerName example.com
        DocumentRoot /home/example/public_html
        ScriptAlias /cgi-bin/ /home/example/cgi-bin/
        DirectoryIndex index.html index.htm index.php index.php4 index.php5
        <Directory /home/example/public_html>
            Options -Indexes +IncludesNOEXEC +FollowSymLinks +ExecCGI
            allow from all
            AllowOverride All
            AddHandler fcgid-script .php
            AddHandler fcgid-script .php5
            FCGIWrapper /home/example/fcgi-bin/php5.fcgi .php
            FCGIWrapper /home/example/fcgi-bin/php5.fcgi .php5
        </Directory>
        <Directory /home/example/cgi-bin>
            allow from all
        </Directory>
        RemoveHandler .php
        RemoveHandler .php5
        IPCCommTimeout 61
        FcgidMaxRequestLen 1073741824

    php5.fcgi:

        #!/bin/bash
        PHPRC=$PWD/../etc/php5
        export PHPRC
        umask 022
        export PHP_FCGI_CHILDREN
        PHP_FCGI_MAX_REQUESTS=99999
        export PHP_FCGI_MAX_REQUESTS
        SCRIPT_FILENAME=$PATH_TRANSLATED
        export SCRIPT_FILENAME
        exec /usr/bin/php5-cgi

    Package versions:

        webmin-virtual-server/virtualmin-universal 3.94.gpl-2
        apache2/squeeze 2.2.16-6+squeeze8
        libapache2-mod-fcgid/squeeze 1:2.3.6-1+squeeze1
        php5 5.4.7-1~dotdeb.0
        php5-apc 5.4.7-1~dotdeb.0

  • Anonymous FTP upload on CentOS 5.2

    - by Craig
    I need to allow users to upload files to an FTP server anonymously. They should not be able to see any other files, or download files. It is a CentOS 5.2 server. I have a separate partition for the upload area (mounted at /ftp). I have tried to set up vsftpd and followed all the instructions/advice I could find, but when a user logs in and tries to transfer a file, it throws a "553 could not create file." error. If I do a 'pwd' it shows the directory as "/" rather than the anon_root of "/ftp/anonymous". Any attempt to change the remote directory ends with "550 Failed to change directory.". I have a subdirectory "/ftp/anonymous/incoming" that is writable for the uploads. SELinux is in permissive mode. I am running version 2.0.5 release 16.el5 of vsftpd. Here is the vsftpd.conf file:

        anonymous_enable=YES
        local_enable=YES
        write_enable=YES
        local_umask=002
        anon_umask=007
        file_open_mode=0666
        anon_upload_enable=YES
        anon_mkdir_write_enable=NO
        dirmessage_enable=YES
        xferlog_enable=YES
        connect_from_port_20=YES
        chown_uploads=YES
        chown_username=inftpadm
        xferlog_std_format=YES
        nopriv_user=nobody
        listen=YES
        pam_service_name=vsftpd
        userlist_enable=YES
        tcp_wrappers=YES
        ftp_username=inftpadm
        anon_root=/ftp/anonymous
        anon_other_write_enable=NO
        anon_mkdir_write_enable=NO
        anon_world_readable_only=NO
        dirlist_enable=YES

    Can anyone help?
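    As a way to reproduce the behaviour independently of any particular FTP client, the anonymous login and upload can be scripted; this is a minimal sketch in Python (the hostname is a placeholder for the CentOS box):

        import io
        from ftplib import FTP

        ftp = FTP("ftp.example.com", timeout=15)  # placeholder hostname
        ftp.login()                               # anonymous login
        print("landed in:", ftp.pwd())            # should be the anon_root if it is being applied
        ftp.cwd("incoming")                       # the writable subdirectory
        data = io.BytesIO(b"upload test\n")
        print(ftp.storbinary("STOR test.txt", data))
        ftp.quit()

    If pwd() reports "/" and cwd("incoming") fails with 550, that matches the symptom described above: the anon_root setting is apparently not taking effect for the anonymous session.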

  • SFTP, Chroot problems on Redhat

    - by Curtis_w
    I'm having problems setting up SFTP with a ChrootDirectory. I've done an equivalent setup on other distros, but for some reason I cannot get it to work on a Red Hat AMI. The changes to my sshd_config file are:

        Subsystem sftp internal-sftp
        Match Group ftponly
            PasswordAuthentication yes
            X11Forwarding no
            ChrootDirectory %h
            ForceCommand internal-sftp
            AllowTcpForwarding no

    The users concerned have their homes at /home/user, owned by root. After connecting with a user in the ftponly group, I'm dropped into / without permissions for anything, and am unable to do anything:

        sftp bob@localhost
        Connecting to localhost...
        bob@localhost's password:
        sftp> pwd
        Remote working directory: /

    I can connect normally with users not in the ftponly group. The OpenSSH version is 5.3. I've experimented with different permissions, as well as having users own their own home directory (which gives a "Write failed: Broken pipe" error), and so far nothing has worked. I'm sure it's a permissions error, or something equally trivial, but at this point my eyes are beginning to glaze over, and any help would be greatly appreciated.

    EDIT: James and Madhatter, thanks for clarifying. I was confused by chroot dropping me in /... I just didn't think it through properly. I've added the appropriate directories and permissions to get read access. One other key part was enabling write access to chrooted homes:

        setsebool -P ssh_chroot_rw_homedirs on

    I think I'm all set now. Thanks for the help.

  • Help with pure-ftp

    - by hai
    I set up Pure-FTPd on FreeBSD behind a firewall. Pure-FTPd is configured for passive mode FTP (port range 50400-50600), and the firewall has ports 50400-50600 open (both IN and OUT). But when I try to connect with an FTP client, the connection fails. The error status is:

        Status: Connecting to 210.245.89.95:21...
        Status: Connection established, waiting for welcome message...
        Response: 220---------- Welcome to Pure-FTPd [privsep] ----------
        Response: 220-You are user number 1 of 50 allowed.
        Response: 220-Local time is now 13:20. Server port: 21.
        Response: 220-IPv6 connections are also welcome on this server.
        Response: 220 You will be disconnected after 15 minutes of inactivity.
        Command: USER bk
        Response: 331 User bk OK. Password required
        Command: PASS
        Response: 230 OK. Current directory is /
        Command: SYST
        Response: 215 UNIX Type: L8
        Command: FEAT
        Response: 211-Extensions supported:
        Response: EPRT
        Response: IDLE
        Response: MDTM
        Response: SIZE
        Response: REST STREAM
        Response: MLST type;size*;sizd*;modify*;UNIX.mode*;UNIX.uid*;UNIX.gid*;unique*;
        Response: MLSD
        Response: ESTA
        Response: PASV
        Response: EPSV
        Response: SPSV
        Response: ESTP
        Response: 211 End.
        Status: Connected
        Status: Retrieving directory listing...
        Command: PWD
        Response: 257 "/" is your current location
        Command: TYPE I
        Response: 200 TYPE is now 8-bit binary
        Command: PASV
        Response: 227 Entering Passive Mode (210,245,88,98,138,1)
        Command: MLSD
        Error: Connection timed out
        Error: Failed to retrieve directory listing
        Status: Connecting to 210.245.88.98:21...
        Status: Connection established, waiting for welcome message...

    Help me.

  • How to disable SSH local port forwarding?

    - by SCO
    I have a server running Ubuntu and the OpenSSH daemon. Let's call it S1. I use this server from client machines (let's call one of them C1) to create an SSH reverse tunnel by using remote port forwarding, e.g.:

        ssh -R 1234:localhost:23 login@S1

    On S1 I use the default sshd_config file. From what I can see, anyone having the right credentials {login, pwd} on S1 can log into S1 and do either remote port forwarding or local port forwarding. Such credentials could be a certificate in the future, so in my understanding anyone grabbing the certificate can log into S1 from anywhere else (not necessarily C1) and hence create local port forwardings. To me, allowing local port forwarding is too dangerous, since it effectively creates a kind of public proxy. I'm looking for a way to disable only -L forwardings. I tried the following, but this disables both local and remote forwarding:

        AllowTcpForwarding No

    I also tried the following; this will only allow -L to SX:1. It's better than nothing, but still not what I need, which is a "none" option.

        PermitOpen SX:1

    So I'm wondering if there is a way to forbid all local port forwards, by writing something like:

        PermitOpen none:none

    Is the following a nice idea?

        PermitOpen localhost:1

  • Defining Virtual and Real User Directories with Dovecot & Postfix

    - by blankabout
    Following a wobble described in this question, we now have virtual and real users authenticating with Dovecot. The problem now is that the real users (who have been on the system for years) can no longer access their mail. I'm guessing that this is because Dovecot is configured to point to the virtual mailboxes but not to the real mailboxes. These are snippets from the config files:

    /etc/dovecot/dovecot.conf:

        !include conf.d/*.conf

    /etc/dovecot/conf.d/10-auth.conf:

        passdb {
          driver = passwd-file
          # Path for passwd-file. Also set the default password scheme.
          args = scheme=cram-md5 /etc/cram-md5.pwd
        }
        userdb {
          driver=static
          #args = mail_uid=dovecot mail_gid=dovecot /etc/dovecot/userdb
          args = uid=vmail gid=vmail home=/var/spool/vhosts/%d/%n /etc/dovecot/userdb
        }

    and this user entry:

        [email protected]:::::/var/spool/vhosts/virtualdomain.com/:/bin/false::

    We think the problem is that the Dovecot file 10-auth.conf does not contain a method of accessing the mailboxes of the real users. We have looked around on this site and dovecot.org, and done the usual googling, but cannot find anywhere that describes how to set up virtual users alongside legacy real users. Any help would be appreciated, especially by our real users who would like the contents of their inboxes to be available! If any further config is required, please let me know.

  • Ubuntu 9.10 RSA authentication: ssh fails, filezilla runs fine

    - by MariusPontmercy
    This is quite a mystery to me. I usually use passwordless RSA authentication to log into my remote *nix servers with ssh and sftp, and I never had any problem until now. I cannot connect to an Ubuntu 9.10 machine:

        user@myclient$ ssh -i .ssh/Ganymede_key [email protected]
        [...]
        debug1: Host 'ganymede.server.com' is known and matches the RSA host key.
        debug1: Found key in /home/user/.ssh/known_hosts:14
        debug2: bits set: 494/1024
        debug1: ssh_rsa_verify: signature correct
        debug2: kex_derive_keys
        debug2: set_newkeys: mode 1
        debug1: SSH2_MSG_NEWKEYS sent
        debug1: expecting SSH2_MSG_NEWKEYS
        debug2: set_newkeys: mode 0
        debug1: SSH2_MSG_NEWKEYS received
        debug1: SSH2_MSG_SERVICE_REQUEST sent
        debug2: service_accept: ssh-userauth
        debug1: SSH2_MSG_SERVICE_ACCEPT received
        debug2: key: .ssh/Ganymede_key (0xb96a0ef8)
        debug2: key: .ssh/Ganymede_key ((nil))
        debug1: Authentications that can continue: publickey,password,keyboard-interactive
        debug1: Next authentication method: publickey
        debug1: Offering public key: .ssh/Ganymede_key
        debug2: we sent a publickey packet, wait for reply
        debug1: Authentications that can continue: publickey,password,keyboard-interactive
        debug1: Trying private key: .ssh/Ganymede_key
        debug1: read PEM private key done: type RSA
        debug2: we sent a publickey packet, wait for reply
        debug1: Authentications that can continue: publickey,password,keyboard-interactive
        debug2: we did not send a packet, disable method
        debug1: Next authentication method: keyboard-interactive
        debug2: userauth_kbdint
        debug2: we sent a keyboard-interactive packet, wait for reply
        debug2: input_userauth_info_req
        debug2: input_userauth_info_req: num_prompts 1

    Then it falls back to password authentication. If I disable password authentication on the remote machine, my connection attempt simply fails with "Permission denied (publickey).". The same happens for sftp from the command line. The "funny" thing is that the exact same RSA key works like a charm with a FileZilla sftp session instead:

        12:08:00 Trace: Offered public key from "/home/user/.filezilla/keys/Ganymede_key"
        12:08:00 Trace: Offer of public key accepted, trying to authenticate using it.
        12:08:01 Trace: Access granted
        12:08:01 Trace: Opened channel for session
        12:08:01 Trace: Started a shell/command
        12:08:01 Status: Connected to ganymede.server.com
        12:08:02 Trace: CSftpControlSocket::ConnectParseResponse()
        12:08:02 Trace: CSftpControlSocket::ResetOperation(0)
        12:08:02 Trace: CControlSocket::ResetOperation(0)
        12:08:02 Status: Retrieving directory listing...
        12:08:02 Trace: CSftpControlSocket::SendNextCommand()
        12:08:02 Trace: CSftpControlSocket::ChangeDirSend()
        12:08:02 Command: pwd
        12:08:02 Response: Current directory is: "/root"
        12:08:02 Trace: CSftpControlSocket::ResetOperation(0)
        12:08:02 Trace: CControlSocket::ResetOperation(0)
        12:08:02 Trace: CSftpControlSocket::ParseSubcommandResult(0)
        12:08:02 Trace: CSftpControlSocket::ListSubcommandResult()
        12:08:02 Trace: CSftpControlSocket::ResetOperation(0)
        12:08:02 Trace: CControlSocket::ResetOperation(0)
        12:08:02 Status: Directory listing successful

    Any thoughts? M
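    One way to take both clients out of the equation and exercise the same key from a script is sketched below, using the third-party paramiko library (an assumption on my part, it is not mentioned in the question); the username is only a guess based on the FileZilla log landing in /root:

        import paramiko  # third-party: pip install paramiko

        HOST = "ganymede.server.com"
        KEYFILE = "/home/user/.ssh/Ganymede_key"   # the same key offered by ssh above

        client = paramiko.SSHClient()
        client.set_missing_host_key_policy(paramiko.AutoAddPolicy())
        client.connect(HOST, username="root",       # username is a guess; adjust as needed
                       key_filename=KEYFILE,
                       allow_agent=False, look_for_keys=False)
        _, stdout, _ = client.exec_command("id")
        print(stdout.read().decode())
        client.close()

    If this authenticates while plain ssh does not, comparing the server-side auth.log for the two attempts (and the permissions on the remote ~/.ssh and authorized_keys) would be the next step.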

  • Wireshark WPA 4-way handshake

    - by cYrus
    From this wiki page: WPA and WPA2 use keys derived from an EAPOL handshake to encrypt traffic. Unless all four handshake packets are present for the session you're trying to decrypt, Wireshark won't be able to decrypt the traffic. You can use the display filter eapol to locate EAPOL packets in your capture. I've noticed that the decryption works with (1, 2, 4) too, but not with (1, 2, 3). As far as I know the first two packets are enough, at least for what concern unicast traffic. Can someone please explain exactly how does Wireshark deal with that, in other words why does only the former sequence work, given that the fourth packet is just an acknowledgement? Also, is it guaranteed that the (1, 2, 4) will always work when (1, 2, 3, 4) works? Test case This is the gzipped handshake (1, 2, 4) and an ecrypted ARP packet (SSID: SSID, password: password) in base64 encoding: H4sICEarjU8AA2hhbmRzaGFrZS5jYXAAu3J400ImBhYGGPj/n4GhHkhfXNHr37KQgWEqAwQzMAgx 6HkAKbFWzgUMhxgZGDiYrjIwKGUqcW5g4Ldd3rcFQn5IXbWKGaiso4+RmSH+H0MngwLUZMarj4Rn S8vInf5yfO7mgrMyr9g/Jpa9XVbRdaxH58v1fO3vDCQDkCNv7mFgWMsAwXBHMoEceQ3kSMZbDFDn ITk1gBnJkeX/GDkRjmyccfus4BKl75HC2cnW1eXrjExNf66uYz+VGLl+snrF7j2EnHQy3JjDKPb9 3fOd9zT0TmofYZC4K8YQ8IkR6JaAT0zIJMjxtWaMmCEMdvwNnI5PYEYJYSTHM5EegqhggYbFhgsJ 9gJXy42PMx9JzYKEcFkcG0MJULYE2ZEGrZwHIMnASwc1GSw4mmH1JCCNQYEF7C7tjasVT+0/J3LP gie59HFL+5RDIdmZ8rGMEldN5s668eb/tp8vQ+7OrT9jPj/B7425QIGJI3Pft72dLxav8BefvcGU 7+kfABxJX+SjAgAA Decode with: $ base64 -d | gunzip > handshake.cap Run tshark to see if it correctly decrypt the ARP packet: $ tshark -r handshake.cap -o wlan.enable_decryption:TRUE -o wlan.wep_key1:wpa-pwd:password:SSID It should print: 1 0.000000 D-Link_a7:8e:b4 - HonHaiPr_22:09:b0 EAPOL Key 2 0.006997 HonHaiPr_22:09:b0 - D-Link_a7:8e:b4 EAPOL Key 3 0.038137 HonHaiPr_22:09:b0 - D-Link_a7:8e:b4 EAPOL Key 4 0.376050 ZyxelCom_68:3a:e4 - HonHaiPr_22:09:b0 ARP 192.168.1.1 is at 00:a0:c5:68:3a:e4
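    If the shell pipeline above is inconvenient (for example on Windows), the capture file can also be reconstructed with a few lines of Python; the string below is only a stand-in for the full base64 text quoted above:

        import base64
        import gzip

        # Paste the complete base64 text from the question between the quotes.
        B64_TEXT = """
        H4sICEarjU8AA2hhbmRzaGFrZS5jYXAA...
        """

        # b64decode discards the embedded newlines/whitespace by default.
        with open("handshake.cap", "wb") as fh:
            fh.write(gzip.decompress(base64.b64decode(B64_TEXT)))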

  • Setting XFCE terminal PS1 value and making it permanent

    - by Matt
    I'm trying to add the value PS1='\u@\h: \w\$ ' to my terminal in XFCE. I added the line to (what I think is) the correct area in /etc/profile. The relevant segment is:

        # Set a default shell prompt:
        #PS1='`hostname`:`pwd`# '
        PS1='\u@\h: \w\$ '
        if [ "$SHELL" = "/bin/pdksh" ]; then
        # PS1='! $ '
          PS1='\u@\h: \w\$ '
        elif [ "$SHELL" = "/bin/ksh" ]; then
        # PS1='! ${PWD/#$HOME/~}$ '
          PS1='\u@\h: \w\$ '
        elif [ "$SHELL" = "/bin/zsh" ]; then
        # PS1='%n@%m:%~%# '
          PS1='\u@\h: \w\$ '
        elif [ "$SHELL" = "/bin/ash" ]; then
        # PS1='$ '
          PS1='\u@\h: \w\$ '
        else
          PS1='\u@\h: \w\$ '
        fi

    Most of that was already there; I just commented out the existing value and added the one I want. By manually opening the terminal and doing . profile, I can load these values, but they don't stick - I close the terminal and reopen, and I'm back to sh-4.1$. Maybe I'm doing this in the wrong place, but how can I make that value stick? All the info I've found on Google is Fedora/Ubuntu-specific; I use Slackware. Any help on this matter would be greatly appreciated.

  • Setup ejabberd with SQL Server 2008

    - by wonster
    Here's what I have got so far: Windows 2008 Server 64-bit, with the latest version of ejabberd installed (ejabberd-2.1.8-windows-installer.exe). The Windows service starts up fine but seems ineffective; however, the start & stop scripts work. I am able to log in to the admin page, which so far doesn't seem that versatile. I opened up ports 5222, 5226 and 5280 for my workstation to talk to the server. I've got the Spark and Jabbear Windows clients to register, log in and instant message with multiple accounts using the server. After confirming that I've got the very basics working, I decided to make use of SQL Server 2008 as the database. Reason? Mainly that I am very comfortable with SQL Server: I can deal with redundancy, failover and data analysis easily, and I'm not sure ejabberd's built-in DB provides all that. Following the instructions from ejabberd's documentation, I set up a system DSN that points to another physical database. The DSN checks out fine (I tried both Named Pipes and TCP/IP). I modified ejabberd.cfg: commented the line %%{auth_method, internal}, uncommented the line {auth_method, odbc}, and uncommented and modified {odbc_server, "DSN=ejabberd;UID=somelogin;PWD=somepassword"}. After making these changes, I restarted. No errors are found in the log files, but the Jabber clients are no longer able to register new accounts. I'm not sure where to look for errors besides the /logs/ folder, as I'm new to all this, and I am basically stuck at this last step. Has anyone got this setup to work recently? Some of the posts I've found around are years old and of no help. I can't be the only one setting up ejabberd with MS SQL. Any help would be appreciated!
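    One way to rule the DSN itself in or out, independently of ejabberd, is to open it from a short script. This is a minimal sketch using the third-party pyodbc module (an assumption, not something the ejabberd setup requires), reusing the same DSN string as the odbc_server line in ejabberd.cfg:

        import pyodbc  # third-party: pip install pyodbc

        # Same connection string as the odbc_server setting above.
        conn = pyodbc.connect("DSN=ejabberd;UID=somelogin;PWD=somepassword")
        cursor = conn.cursor()
        cursor.execute("SELECT @@VERSION")
        print(cursor.fetchone()[0])
        conn.close()

    If this fails, the problem is the DSN, driver or permissions; if it succeeds, the next thing to check is whether ejabberd's SQL schema has actually been created in that database.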

  • Glassfish and SSL [closed]

    - by Richard
    I'm struggling to get SSL working on GlassFish 3.1.1. I've been following tutorials like http://javadude.wordpress.com/2010/04/06/getting-started-with-glassfish-v3-and-ssl/ and SO posts like this: Issues with setting up SSL on Glassfish v3. The above links are for information only; I've summarised what I've done below. As far as I can tell I'm doing everything correctly, but I'm getting this error: "SSL configuration is invalid due to No available certificate or key corresponds to the SSL cipher suites which are enabled". Some background on what I have done: my cert is from GoDaddy. I generated the CSR from a new keystore (keystore.jks), then imported the resulting certs back into the same keystore and set the keystore password to the same password as the GF master password. Then I created a new SSL listener in GF and pointed it at my keystore file (which I copied into domains/domain1/config). I set the Nickname to the alias of my cert (which is something like 'mydomain.org', i.e. the name I get when I run keytool -list). In the ciphers section on the network listeners page I leave the defaults in place (empty, which I believe means all ciphers are available). In domain.xml I've replaced all instances of s1as with 'mydomain.org'. This is the question: what exactly is causing the error highlighted? I'm guessing it's a mismatch between my listener config and the aliases in my keystore, or something similar, but I'm not really sure what. Thanks

  • Strange ssh login

    - by Hikaru
    I am running a Debian server and I have received a strange email warning about an SSH login. It says that user "mail" logged in using SSH from a remote address. Environment info:

        USER=mail
        SSH_CLIENT=92.46.127.173 40814 22
        MAIL=/var/mail/mail
        HOME=/var/mail
        SSH_TTY=/dev/pts/7
        LOGNAME=mail
        TERM=xterm
        PATH=/usr/local/bin:/usr/bin:/bin:/usr/bin/X11:/usr/games
        LANG=en_US.UTF-8
        SHELL=/bin/sh
        KRB5CCNAME=FILE:/tmp/krb5cc_8
        PWD=/var/mail
        SSH_CONNECTION=92.46.127.173 40814 my-ip-here 22

    I looked in /etc/shadow and found that no password is set for mail:

        mail:*:15316:0:99999:7:::

    I found these lines for the login in auth.log:

        Jun 3 02:57:09 gw sshd[2090]: pam_winbind(sshd:auth): getting password (0x00000388)
        Jun 3 02:57:09 gw sshd[2090]: pam_winbind(sshd:auth): pam_get_item returned a password
        Jun 3 02:57:09 gw sshd[2091]: pam_winbind(sshd:auth): user 'mail' granted access
        Jun 3 02:57:09 gw sshd[2091]: Accepted password for mail from 92.46.127.173 port 45194 ssh2
        Jun 3 02:57:09 gw sshd[2091]: pam_unix(sshd:session): session opened for user mail by (uid=0)
        Jun 3 02:57:10 gw CRON[2051]: pam_unix(cron:session): session closed for user root

    There are also lots of authentication failures for this user, and there are no lines with a COMMAND string for it. Nothing was found with rkhunter or by inspecting processes with "ps aux", and no suspicious connections were found with "netstat" (as far as I can see). Can anyone tell me how this is possible and what else should be done? Thanks in advance.

  • Customizing tmux status to represent current working directory and files

    - by user69397
    I've been playing with this for a couple of days, so I'm sure I'm missing something simple. Love tmux. I'm using it for development and have so many windows that I need a better way of distinguishing them in the status bar and in the buffer list. Seeing a list of "bash" and "vim" isn't really helpful at all, and since they're all on the same host I don't care about the hostname right now. I'd like to show the current working directory and the file being worked on. For example, when I view the list of buffers I currently see:

        (0) 0: vim [100x44] (1 panes) "murph"
        (1) 1: vim [100x44] (1 panes) "murph"
        (2) 2: bash- [100x44] (1 panes) "murph"
        (3) 3: bash* [100x44] (1 panes) "murph"

    Here's what I'd like to see:

        0:vim main.py ~/devl/project1
        1:vim index.html ~/devl/samples/staticfiles
        2:bash ~/devl/sandbox
        3:bash ~/.vimrc

    I'd like to see similar info in the status bar for each individual window. While I am able to get PWD to show up in the status bar of a window, it's only the working directory from where tmux was launched, which is no help as I change directories. I'm hoping this can be done without a bunch of scripts. Thanks all.

  • cannot get mssql working with sql server 2005

    - by Ryan
    I'm a MySQL/Apache user trying my hand at IIS and SQL Server, so please have patience if this is a stupid question. I'm using IIS 7.5, PHP 5.3.13 and SQL Server 2005. IIS is running on port 90; not sure if that will make a difference or not. I know my SQL Server is running because I can explore/connect to it in Server Management Studio. I know PHP is configured properly, because //localhost:90/phpinfo.php works fine. I updated the extension line in php.ini to:

        extension=ext/php_msql.dll

    EDIT: However, when I run phpinfo(), under the "configure command" row this is present: --without-mssql. I found/downloaded the ntwdblib.dll and placed it in both sys32 and the PHP root. All these things were supposed to fix the issue, and they haven't. This is the code I'm using, straight from php.net:

        <?php
        // Server in this format: <computer>\<instance name> or
        // <server>,<port> when using a non default port number
        $server = 'localhost';

        // Connect to MSSQL
        $link = mssql_connect($server, 'uname', 'pwd');

        if (!$link) {
            die('Something went wrong while connecting to MSSQL');
        }
        ?>

    Obviously I'm using a real username and password, but when I load the file in my browser, I receive a 500 error. Upon checking the log, this is what is displayed:

        2012-06-25 12:41:29 ::1 GET /test.php - 90 - ::1 Mozilla/5.0+(Windows+NT+6.1;+WOW64)+AppleWebKit/536.5+(KHTML,+like+Gecko)+Chrome/19.0.1084.56+Safari/536.5 500 0 0 5

    That (to me) doesn't help much. What am I doing wrong? Thank you

  • What ports, besides 80, need to be available to send (only send) email using phpmailer to gmail over SSL?

    - by Wobblefoot
    Using PHPMailer I keep getting a 110 timeout and "Unable to connect to host" when sending email from my web server. The authentication details are right and they work on another server I have (login, pwd, ports etc., and the Gmail account is set up for SSL connections on 465), but it's failing on my new server. FIREWALL: I allow related/established, port 80 and a port for SSH on INPUT, then this on OUTPUT:

        7906  474K DROP   tcp -- any any anywhere              anywhere              tcp dpt:smtp
           0     0 ACCEPT tcp -- any any localhost.localdomain yw-in-f109.1e100.net  tcp dpt:submission
           0     0 ACCEPT tcp -- any any localhost.localdomain gx-in-f109.1e100.net  tcp dpt:ssmtp
           0     0 DROP   tcp -- any any anywhere              anywhere              tcp dpt:submission
           9   540 DROP   tcp -- any any anywhere              anywhere              tcp dpt:ssmtp

    This OUTPUT chain works on my other server, and disabling it doesn't get mail delivered either. WEB SERVER: Varnish (80), Nginx (8088), Drupal 7, PHP5-FPM, APC, MySQL. All works beautifully, except for outgoing email. What else could it be? I understand PHPMailer does NOT require a local MTA or procmail (this is sort of the point - I don't want the security or admin overhead of a full-blown MTA on my web server). Am I wrong? Do I need an MTA as well? What local ports and programs are used to authenticate over SSL and route mail using PHPMailer? Any ideas at all greatly appreciated - I've wasted a day on this nonsense already!
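    To separate a firewall/routing problem from a PHPMailer problem, the same SSL connection can be attempted directly from the box. Sending to Gmail over SMTPS generally only needs outbound TCP 465 to smtp.gmail.com (plus DNS) to be allowed; this is a minimal sketch in Python, with the credentials as placeholders:

        import smtplib

        GMAIL_USER = "user@example.com"   # placeholder account
        GMAIL_PASS = "app-password"       # placeholder password

        server = smtplib.SMTP_SSL("smtp.gmail.com", 465, timeout=15)
        server.login(GMAIL_USER, GMAIL_PASS)
        server.sendmail(GMAIL_USER, [GMAIL_USER],
                        "Subject: outbound test\r\n\r\nSent from the web server.")
        server.quit()

    If this also times out, the OUTPUT rules (or upstream filtering by the hosting provider) are blocking the connection, and no PHP-side change will help.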

  • The instruction at “0x7c910a19” referenced memory at “oxffffffff”. The memory could not be “read”

    - by ClareBear
    Hello guys/girls, I have a small issue: I receive the error quoted in the title just before the .vbs terminates, and I don't know why it is thrown. Below is the process of the .vbs file:

        Call ImportTransactions()
        Call UpdateTransactions()

        Function ImportTransactions()
            Dim objConnection, objCommand, objRecordset, strOracle
            Dim strSQL, objRecordsetInsert
            Set objConnection = CreateObject("ADODB.Connection")
            objConnection.Open "DSN=*****;UID=*****;PWD==*****;"
            Set objCommand = CreateObject("ADODB.Command")
            Set objRecordset = CreateObject("ADODB.Recordset")
            strOracle = "SELECT query here from Oracle database"
            objCommand.CommandText = strOracle
            objCommand.CommandType = 1
            objCommand.CommandTimeout = 0
            Set objCommand.ActiveConnection = objConnection
            objRecordset.cursorType = 0
            objRecordset.cursorlocation = 3
            objRecordset.Open objCommand, , 1, 3
            If objRecordset.EOF = False Then
                Do Until objRecordset.EOF = True
                    strSQL = "INSERT query here into SQL database"
                    strSQL = Query(strSQL)
                    Call RunSQL(strSQL, objRecordsetInsert, False, conTimeOut, conServer, conDatabase, conUsername, conPassword)
                    objRecordset.MoveNext
                Loop
            End If
            objRecordset.Close()
            Set objRecordset = Nothing
            Set objRecordsetInsert = Nothing
        End Function

        Function UpdateTransactions()
            Dim strSQLUpdateVAT, strSQLUpdateCodes
            Dim objRecordsetVAT, objRecordsetUpdateCodes
            strSQLUpdateVAT = "UPDATE query here SET [value:costing output] = ([value:costing output] * -1)"
            Call RunSQL(strSQLUpdateVAT, objRecordsetVAT, False, conTimeOut, conServer, conDatabase, conUsername, conPassword)
            strSQLUpdateCodes = "UPDATE query here SET [value:costing output] = ([value:costing output] * -1) different WHERE clause"
            Call RunSQL(strSQLUpdateCodes, objRecordsetUpdateCodes, False, conTimeOut, conServer, conDatabase, conUsername, conPassword)
            Set objRecordsetVAT = Nothing
            Set objRecordsetUpdateCodes = Nothing
        End Function

    It does both the import and the update, and seems to throw the error afterwards. If I comment out ImportTransactions it doesn't throw an error; however, I have produced similar code for another .vbs file and that does not throw any errors. Thanks in advance for any help, Clare

  • Snort's problems in generating alerts from the DARPA 1998 intrusion detection dataset

    - by manofseven2
    Hi. I'm working on the DARPA 1998 intrusion detection dataset. When I run Snort on this dataset (the outside.tcpdump file), Snort doesn't generate the complete list of alerts: it starts from the last few hours of the tcpdump file and generates alerts only for that section, while all packets from the first hours are ignored. Another problem is the timestamp of the generated alerts: when I run Snort on a specific day of the dataset, Snort inserts an incorrect timestamp for those alerts. The configuration, command line and other information about my setup are below. Snort version: 2.8.6. Operating system: Windows XP. Rule version: snortrules-snapshot-2860_s.tar.gz.
    Command line: snort_2.8.6 -c D:\programs\Snort_2.8.6\snort\etc\snort.conf -r d:\users\amir\docs\darpa\training_data\week_3\monday\outside.tcpdump -l D:\users\amir\current-task\research\thesis\snort\890230
    Snort.config: # Setup the network addresses you are protecting var HOME_NET any # Set up the external network addresses. Leave as "any" in most situations var EXTERNAL_NET any # List of DNS servers on your network var DNS_SERVERS $HOME_NET # List of SMTP servers on your network var SMTP_SERVERS $HOME_NET # List of web servers on your network var HTTP_SERVERS $HOME_NET # List of sql servers on your network var SQL_SERVERS $HOME_NET # List of telnet servers on your network var TELNET_SERVERS $HOME_NET # List of ssh servers on your network var SSH_SERVERS $HOME_NET # List of ports you run web servers on portvar HTTP_PORTS [80,1220,2301,3128,7777,7779,8000,8008,8028,8080,8180,8888,9999] # List of ports you want to look for SHELLCODE on.
portvar SHELLCODE_PORTS !80 # List of ports you might see oracle attacks on portvar ORACLE_PORTS 1024: # List of ports you want to look for SSH connections on: portvar SSH_PORTS 22 # other variables, these should not be modified var AIM_SERVERS [64.12.24.0/23,64.12.28.0/23,64.12.161.0/24,64.12.163.0/24,64.12.200.0/24,205.188.3.0/24,205.188.5.0/24,205.188.7.0/24,205.188.9.0/24,205.188.153.0/24,205.188.179.0/24,205.188.248.0/24] var RULE_PATH ../rules var SO_RULE_PATH ../so_rules var PREPROC_RULE_PATH ../preproc_rules # Stop generic decode events: config disable_decode_alerts # Stop Alerts on experimental TCP options config disable_tcpopt_experimental_alerts # Stop Alerts on obsolete TCP options config disable_tcpopt_obsolete_alerts # Stop Alerts on T/TCP alerts config disable_tcpopt_ttcp_alerts # Stop Alerts on all other TCPOption type events: config disable_tcpopt_alerts # Stop Alerts on invalid ip options config disable_ipopt_alerts # Alert if value in length field (IP, TCP, UDP) is greater th elength of the packet # config enable_decode_oversized_alerts # Same as above, but drop packet if in Inline mode (requires enable_decode_oversized_alerts) # config enable_decode_oversized_drops # Configure IP / TCP checksum mode config checksum_mode: all config pcre_match_limit: 1500 config pcre_match_limit_recursion: 1500 # Configure the detection engine See the Snort Manual, Configuring Snort - Includes - Config config detection: search-method ac-split search-optimize max-pattern-len 20 # Configure the event queue. For more information, see README.event_queue config event_queue: max_queue 8 log 3 order_events content_length dynamicpreprocessor directory D:\programs\Snort_2.8.6\snort\lib\snort_dynamicpreprocessor dynamicengine D:\programs\Snort_2.8.6\snort\lib\snort_dynamicengine\sf_engine.dll # path to dynamic rules libraries #dynamicdetection directory /usr/local/lib/snort_dynamicrules preprocessor frag3_global: max_frags 65536 preprocessor frag3_engine: policy windows detect_anomalies overlap_limit 10 min_fragment_length 100 timeout 180 preprocessor stream5_global: max_tcp 8192, track_tcp yes, track_udp yes, track_icmp no preprocessor stream5_tcp: policy windows, detect_anomalies, require_3whs 180, \ overlap_limit 10, small_segments 3 bytes 150, timeout 180, \ ports client 21 22 23 25 42 53 79 109 110 111 113 119 135 136 137 139 143 \ 161 445 513 514 587 593 691 1433 1521 2100 3306 6665 6666 6667 6668 6669 \ 7000 32770 32771 32772 32773 32774 32775 32776 32777 32778 32779, \ ports both 80 443 465 563 636 989 992 993 994 995 1220 2301 3128 6907 7702 7777 7779 7801 7900 7901 7902 7903 7904 7905 \ 7906 7908 7909 7910 7911 7912 7913 7914 7915 7916 7917 7918 7919 7920 8000 8008 8028 8080 8180 8888 9999 preprocessor stream5_udp: timeout 180 preprocessor http_inspect: global iis_unicode_map unicode.map 1252 compress_depth 20480 decompress_depth 20480 preprocessor http_inspect_server: server default \ chunk_length 500000 \ server_flow_depth 0 \ client_flow_depth 0 \ post_depth 65495 \ oversize_dir_length 500 \ max_header_length 750 \ max_headers 100 \ ports { 80 1220 2301 3128 7777 7779 8000 8008 8028 8080 8180 8888 9999 } \ non_rfc_char { 0x00 0x01 0x02 0x03 0x04 0x05 0x06 0x07 } \ enable_cookie \ extended_response_inspection \ inspect_gzip \ apache_whitespace no \ ascii no \ bare_byte no \ directory no \ double_decode no \ iis_backslash no \ iis_delimiter no \ iis_unicode no \ multi_slash no \ non_strict \ u_encode yes \ webroot no preprocessor rpc_decode: 111 32770 32771 32772 32773 32774 32775 32776 
32777 32778 32779 no_alert_multiple_requests no_alert_large_fragments no_alert_incomplete preprocessor bo preprocessor ftp_telnet: global inspection_type stateful encrypted_traffic no preprocessor ftp_telnet_protocol: telnet \ ayt_attack_thresh 20 \ normalize ports { 23 } \ detect_anomalies preprocessor ftp_telnet_protocol: ftp server default \ def_max_param_len 100 \ ports { 21 2100 3535 } \ telnet_cmds yes \ ignore_telnet_erase_cmds yes \ ftp_cmds { ABOR ACCT ADAT ALLO APPE AUTH CCC CDUP } \ ftp_cmds { CEL CLNT CMD CONF CWD DELE ENC EPRT } \ ftp_cmds { EPSV ESTA ESTP FEAT HELP LANG LIST LPRT } \ ftp_cmds { LPSV MACB MAIL MDTM MIC MKD MLSD MLST } \ ftp_cmds { MODE NLST NOOP OPTS PASS PASV PBSZ PORT } \ ftp_cmds { PROT PWD QUIT REIN REST RETR RMD RNFR } \ ftp_cmds { RNTO SDUP SITE SIZE SMNT STAT STOR STOU } \ ftp_cmds { STRU SYST TEST TYPE USER XCUP XCRC XCWD } \ ftp_cmds { XMAS XMD5 XMKD XPWD XRCP XRMD XRSQ XSEM } \ ftp_cmds { XSEN XSHA1 XSHA256 } \ alt_max_param_len 0 { ABOR CCC CDUP ESTA FEAT LPSV NOOP PASV PWD QUIT REIN STOU SYST XCUP XPWD } \ alt_max_param_len 200 { ALLO APPE CMD HELP NLST RETR RNFR STOR STOU XMKD } \ alt_max_param_len 256 { CWD RNTO } \ alt_max_param_len 400 { PORT } \ alt_max_param_len 512 { SIZE } \ chk_str_fmt { ACCT ADAT ALLO APPE AUTH CEL CLNT CMD } \ chk_str_fmt { CONF CWD DELE ENC EPRT EPSV ESTP HELP } \ chk_str_fmt { LANG LIST LPRT MACB MAIL MDTM MIC MKD } \ chk_str_fmt { MLSD MLST MODE NLST OPTS PASS PBSZ PORT } \ chk_str_fmt { PROT REST RETR RMD RNFR RNTO SDUP SITE } \ chk_str_fmt { SIZE SMNT STAT STOR STRU TEST TYPE USER } \ chk_str_fmt { XCRC XCWD XMAS XMD5 XMKD XRCP XRMD XRSQ } \ chk_str_fmt { XSEM XSEN XSHA1 XSHA256 } \ cmd_validity ALLO \ cmd_validity EPSV \ cmd_validity MACB \ cmd_validity MDTM \ cmd_validity MODE \ cmd_validity PORT \ cmd_validity PROT \ cmd_validity STRU \ cmd_validity TYPE preprocessor ftp_telnet_protocol: ftp client default \ max_resp_len 256 \ bounce yes \ ignore_telnet_erase_cmds yes \ telnet_cmds yes preprocessor smtp: ports { 25 465 587 691 } \ inspection_type stateful \ normalize cmds \ normalize_cmds { MAIL RCPT HELP HELO ETRN EHLO EXPN VRFY ATRN SIZE BDAT DEBUG EMAL ESAM ESND ESOM EVFY IDENT NOOP RSET SEND SAML SOML AUTH TURN DATA QUIT ONEX QUEU STARTTLS TICK TIME TURNME VERB X-EXPS X-LINK2STATE XADR XAUTH XCIR XEXCH50 XGEN XLICENSE XQUE XSTA XTRN XUSR } \ max_command_line_len 512 \ max_header_line_len 1000 \ max_response_line_len 512 \ alt_max_command_line_len 260 { MAIL } \ alt_max_command_line_len 300 { RCPT } \ alt_max_command_line_len 500 { HELP HELO ETRN EHLO } \ alt_max_command_line_len 255 { EXPN VRFY ATRN SIZE BDAT DEBUG EMAL ESAM ESND ESOM EVFY IDENT NOOP RSET } \ alt_max_command_line_len 246 { SEND SAML SOML AUTH TURN ETRN DATA RSET QUIT ONEX QUEU STARTTLS TICK TIME TURNME VERB X-EXPS X-LINK2STATE XADR XAUTH XCIR XEXCH50 XGEN XLICENSE XQUE XSTA XTRN XUSR } \ valid_cmds { MAIL RCPT HELP HELO ETRN EHLO EXPN VRFY ATRN SIZE BDAT DEBUG EMAL ESAM ESND ESOM EVFY IDENT NOOP RSET SEND SAML SOML AUTH TURN DATA QUIT ONEX QUEU STARTTLS TICK TIME TURNME VERB X-EXPS X-LINK2STATE XADR XAUTH XCIR XEXCH50 XGEN XLICENSE XQUE XSTA XTRN XUSR } \ xlink2state { enabled } preprocessor ssh: server_ports { 22 } \ autodetect \ max_client_bytes 19600 \ max_encrypted_packets 20 \ max_server_version_len 100 \ enable_respoverflow enable_ssh1crc32 \ enable_srvoverflow enable_protomismatch preprocessor dcerpc2: memcap 102400, events [co ] preprocessor dcerpc2_server: default, policy WinXP, \ detect [smb [139,445], tcp 135, udp 135, 
rpc-over-http-server 593], \ autodetect [tcp 1025:, udp 1025:, rpc-over-http-server 1025:], \ smb_max_chain 3 preprocessor dns: ports { 53 } enable_rdata_overflow preprocessor ssl: ports { 443 465 563 636 989 992 993 994 995 7801 7702 7900 7901 7902 7903 7904 7905 7906 6907 7908 7909 7910 7911 7912 7913 7914 7915 7916 7917 7918 7919 7920 }, trustservers, noinspect_encrypted # SDF sensitive data preprocessor. For more information see README.sensitive_data preprocessor sensitive_data: alert_threshold 25 output alert_full: alert.log output database: log, mysql, user=root password=123456 dbname=snort host=localhost include classification.config include reference.config include $RULE_PATH/local.rules include $RULE_PATH/attack-responses.rules include $RULE_PATH/backdoor.rules include $RULE_PATH/bad-traffic.rules include $RULE_PATH/chat.rules include $RULE_PATH/content-replace.rules include $RULE_PATH/ddos.rules include $RULE_PATH/dns.rules include $RULE_PATH/dos.rules include $RULE_PATH/exploit.rules include $RULE_PATH/finger.rules include $RULE_PATH/ftp.rules include $RULE_PATH/icmp.rules include $RULE_PATH/icmp-info.rules include $RULE_PATH/imap.rules include $RULE_PATH/info.rules include $RULE_PATH/misc.rules include $RULE_PATH/multimedia.rules include $RULE_PATH/mysql.rules include $RULE_PATH/netbios.rules include $RULE_PATH/nntp.rules include $RULE_PATH/oracle.rules include $RULE_PATH/other-ids.rules include $RULE_PATH/p2p.rules include $RULE_PATH/policy.rules include $RULE_PATH/pop2.rules include $RULE_PATH/pop3.rules include $RULE_PATH/rpc.rules include $RULE_PATH/rservices.rules include $RULE_PATH/scada.rules include $RULE_PATH/scan.rules include $RULE_PATH/shellcode.rules include $RULE_PATH/smtp.rules include $RULE_PATH/snmp.rules include $RULE_PATH/specific-threats.rules include $RULE_PATH/spyware-put.rules include $RULE_PATH/sql.rules include $RULE_PATH/telnet.rules include $RULE_PATH/tftp.rules include $RULE_PATH/virus.rules include $RULE_PATH/voip.rules include $RULE_PATH/web-activex.rules include $RULE_PATH/web-attacks.rules include $RULE_PATH/web-cgi.rules include $RULE_PATH/web-client.rules include $RULE_PATH/web-coldfusion.rules include $RULE_PATH/web-frontpage.rules include $RULE_PATH/web-iis.rules include $RULE_PATH/web-misc.rules include $RULE_PATH/web-php.rules include $RULE_PATH/x11.rules include threshold.conf -————————————————————————————- Can anyone help me to solve this problem? Thanks.

  • Client no longer getting data from Web Service after introducing targetNamespace in XSD

    - by Laurence
    Sorry if there is way too much info in this post – there’s a load of story before I get to the actual problem. I thought I‘d include everything that might be relevant as I don’t have much clue what is wrong. I had a working web service and client (both written with VS 2008 in C#) for passing product data to an e-commerce site. The XSD started like this: <xs:schema id="Ecommerce" elementFormDefault="qualified" xmlns:mstns="http://tempuri.org/Ecommerce.xsd" xmlns:xs="http://www.w3.org/2001/XMLSchema"> <xs:element name="eur"> <xs:complexType> <xs:sequence> <xs:element ref="sec" minOccurs="1" maxOccurs="1"/> </xs:sequence> etc Here’s a sample document sent from client to service: <eur xmlns:xsd="http://www.w3.org/2001/XMLSchema" xmlns:xsi="http://www.w3.org/2001/XMLSchema-instance" class="ECommerce_WebService" type="product" method="GetLastDateSent" chunk_no="1" total_chunks="1" date_stamp="2010-03-10T17:16:34.523" version="1.1"> <sec guid="BFBACB3C-4C17-4786-ACCF-96BFDBF32DA5" company_name="Company" version="1.1"> <data /> </sec> </eur> Then, I had to give the service a targetNamespace. Actually I don’t know if I “had” to set it, but I added (to the same VS project) some code to act as a client to a completely unrelated service (which also had no namespace), and the project would not build until I gave my service a namespace. Now the XSD starts like this: <xs:schema id="Ecommerce" elementFormDefault="qualified" xmlns:mstns="http://tempuri.org/Ecommerce.xsd" xmlns:xs="http://www.w3.org/2001/XMLSchema" targetNamespace="http://www.company.com/ecommerce" xmlns:ecom="http://www. company.com/ecommerce"> <xs:element name="eur"> <xs:complexType> <xs:sequence> <xs:element ref="ecom:sec" minOccurs="1" maxOccurs="1" /> </xs:sequence> etc As you can see above I also updated all the xs:element ref attributes to give them the “ecom” prefix. Now the project builds again. I found the client needed some modification after this. The client uses a SQL stored procedure to generate the XML. This is then de-serialised into an object of the correct type for the service’s “get_data” method. The object’s type used to be “eur” but after updating the web reference to the service, it became “get_dataEur”. And sure enough the parent element in the XML had to be changed to “get_dataEur” to be accepted. Then bizarrely I also had to put the xmlns attribute containing my namespace on the “sec” element (the immediate child of the parent element) rather than the parent element. 
Here’s a sample document now sent from client to service: <get_dataEur xmlns:xsd="http://www.w3.org/2001/XMLSchema" xmlns:xsi="http://www.w3.org/2001/XMLSchema-instance" class="ECommerce_WebService" type="product" method="GetLastDateSent" chunk_no="1" total_chunks="1" date_stamp="2010-03-10T18:23:20.653" version="1.1"> <sec xmlns="http://www.company.com/ecommerce" guid="BFBACB3C-4C17-4786-ACCF-96BFDBF32DA5" company_name="Company" version="1.1"> <data /> </sec> </get_dataEur> If in the service’s get_data method I then serialize the incoming object I see this (the parent element is “eur” and the xmlns attribute is on the parent element): <eur xmlns:xsi="http://www.w3.org/2001/XMLSchema-instance" xmlns:xsd="http://www.w3.org/2001/XMLSchema" xmlns="http://www.company.com/ecommerce" class="ECommerce_WebService" type="product" method="GetLastDateSent" chunk_no="1" total_chunks="1" date_stamp="2010-03-10T18:23:20.653" version="1.1"> <sec guid="BFBACB3C-4C17-4786-ACCF-96BFDBF32DA5" company_name="Company" version="1.1"> <data /> </sec> </eur> The service then prepares a reply to go back to the client. The XML looks like this (the important data being sent back is the date_stamp attribute in the last_sent element): <eur xmlns="http://www.company.com/ecommerce" xmlns:xsd="http://www.w3.org/2001/XMLSchema" xmlns:xsi="http://www.w3.org/2001/XMLSchema-instance" class="ECommerce_WebService" type="product" method="GetLastDateSent" chunk_no="1" total_chunks="1" date_stamp="2010-03-10T18:22:57.530" version="1.1"> <sec version="1.1" xmlns=""> <data> <last_sent date_stamp="2010-02-25T15:15:10.193" /> </data> </sec> </eur> Now finally, here’s the problem!!! The client does not see any data – all it sees is the parent element with nothing inside it. If I serialize the reply object in the client code it looks like this: <get_dataResponseEur xmlns:xsi="http://www.w3.org/2001/XMLSchema-instance" xmlns:xsd="http://www.w3.org/2001/XMLSchema" class="ECommerce_WebService" type="product" method="GetLastDateSent" chunk_no="1" total_chunks="1" date_stamp="2010-03-10T18:22:57.53" version="1.1" /> So, my questions are: why isn’t my client seeing the contents of the reply document? how do I fix it? why do I have to put the xmlns attribute on a child element rather than the parent element in the outgoing document? Here’s a bit more possibly relevant info: The client code (pre-namespace) called the service method like this: XmlSerializer serializer = new XmlSerializer(typeof(eur)); XmlReader reader = xml.CreateReader(); eur eur = (eur)serializer.Deserialize(reader); service.Credentials = new NetworkCredential(login, pwd); service.Url = url; rc = service.get_data(ref eur); After the namespace was added I had to change it to this: XmlSerializer serializer = new XmlSerializer(typeof(get_dataEur)); XmlReader reader = xml.CreateReader(); get_dataEur eur = (get_dataEur)serializer.Deserialize(reader); get_dataResponseEur eur1 = new get_dataResponseEur(); service.Credentials = new NetworkCredential(login, pwd); service.Url = url; rc = service.get_data(eur, out eur1);

  • Node.js Adventure - Storage Services and Service Runtime

    - by Shaun
    When I described on how to host a Node.js application on Windows Azure, one of questions might be raised about how to consume the vary Windows Azure services, such as the storage, service bus, access control, etc.. Interact with windows azure services is available in Node.js through the Windows Azure Node.js SDK, which is a module available in NPM. In this post I would like to describe on how to use Windows Azure Storage (a.k.a. WAS) as well as the service runtime.   Consume Windows Azure Storage Let’s firstly have a look on how to consume WAS through Node.js. As we know in the previous post we can host Node.js application on Windows Azure Web Site (a.k.a. WAWS) as well as Windows Azure Cloud Service (a.k.a. WACS). In theory, WAWS is also built on top of WACS worker roles with some more features. Hence in this post I will only demonstrate for hosting in WACS worker role. The Node.js code can be used when consuming WAS when hosted on WAWS. But since there’s no roles in WAWS, the code for consuming service runtime mentioned in the next section cannot be used for WAWS node application. We can use the solution that I created in my last post. Alternatively we can create a new windows azure project in Visual Studio with a worker role, add the “node.exe” and “index.js” and install “express” and “node-sqlserver” modules, make all files as “Copy always”. In order to use windows azure services we need to have Windows Azure Node.js SDK, as knows as a module named “azure” which can be installed through NPM. Once we downloaded and installed, we need to include them in our worker role project and make them as “Copy always”. You can use my “Copy all always” tool mentioned in my last post to update the currently worker role project file. You can also find the source code of this tool here. The source code of Windows Azure SDK for Node.js can be found in its GitHub page. It contains two parts. One is a CLI tool which provides a cross platform command line package for Mac and Linux to manage WAWS and Windows Azure Virtual Machines (a.k.a. WAVM). The other is a library for managing and consuming vary windows azure services includes tables, blobs, queues, service bus and the service runtime. I will not cover all of them but will only demonstrate on how to use tables and service runtime information in this post. You can find the full document of this SDK here. Back to Visual Studio and open the “index.js”, let’s continue our application from the last post, which was working against Windows Azure SQL Database (a.k.a. WASD). The code should looks like this. 
1: var express = require("express"); 2: var sql = require("node-sqlserver"); 3:  4: var connectionString = "Driver={SQL Server Native Client 10.0};Server=tcp:ac6271ya9e.database.windows.net,1433;Database=synctile;Uid=shaunxu@ac6271ya9e;Pwd={PASSWORD};Encrypt=yes;Connection Timeout=30;"; 5: var port = 80; 6:  7: var app = express(); 8:  9: app.configure(function () { 10: app.use(express.bodyParser()); 11: }); 12:  13: app.get("/", function (req, res) { 14: sql.open(connectionString, function (err, conn) { 15: if (err) { 16: console.log(err); 17: res.send(500, "Cannot open connection."); 18: } 19: else { 20: conn.queryRaw("SELECT * FROM [Resource]", function (err, results) { 21: if (err) { 22: console.log(err); 23: res.send(500, "Cannot retrieve records."); 24: } 25: else { 26: res.json(results); 27: } 28: }); 29: } 30: }); 31: }); 32:  33: app.get("/text/:key/:culture", function (req, res) { 34: sql.open(connectionString, function (err, conn) { 35: if (err) { 36: console.log(err); 37: res.send(500, "Cannot open connection."); 38: } 39: else { 40: var key = req.params.key; 41: var culture = req.params.culture; 42: var command = "SELECT * FROM [Resource] WHERE [Key] = '" + key + "' AND [Culture] = '" + culture + "'"; 43: conn.queryRaw(command, function (err, results) { 44: if (err) { 45: console.log(err); 46: res.send(500, "Cannot retrieve records."); 47: } 48: else { 49: res.json(results); 50: } 51: }); 52: } 53: }); 54: }); 55:  56: app.get("/sproc/:key/:culture", function (req, res) { 57: sql.open(connectionString, function (err, conn) { 58: if (err) { 59: console.log(err); 60: res.send(500, "Cannot open connection."); 61: } 62: else { 63: var key = req.params.key; 64: var culture = req.params.culture; 65: var command = "EXEC GetItem '" + key + "', '" + culture + "'"; 66: conn.queryRaw(command, function (err, results) { 67: if (err) { 68: console.log(err); 69: res.send(500, "Cannot retrieve records."); 70: } 71: else { 72: res.json(results); 73: } 74: }); 75: } 76: }); 77: }); 78:  79: app.post("/new", function (req, res) { 80: var key = req.body.key; 81: var culture = req.body.culture; 82: var val = req.body.val; 83:  84: sql.open(connectionString, function (err, conn) { 85: if (err) { 86: console.log(err); 87: res.send(500, "Cannot open connection."); 88: } 89: else { 90: var command = "INSERT INTO [Resource] VALUES ('" + key + "', '" + culture + "', N'" + val + "')"; 91: conn.queryRaw(command, function (err, results) { 92: if (err) { 93: console.log(err); 94: res.send(500, "Cannot retrieve records."); 95: } 96: else { 97: res.send(200, "Inserted Successful"); 98: } 99: }); 100: } 101: }); 102: }); 103:  104: app.listen(port); Now let’s create a new function, copy the records from WASD to table service. 1. Delete the table named “resource”. 2. Create a new table named “resource”. These 2 steps ensures that we have an empty table. 3. Load all records from the “resource” table in WASD. 4. For each records loaded from WASD, insert them into the table one by one. 5. Prompt to user when finished. In order to use table service we need the storage account and key, which can be found from the developer portal. Just select the storage account and click the Manage Keys button. Then create two local variants in our Node.js application for the storage account name and key. Since we need to use WAS we need to import the azure module. Also I created another variant stored the table name. In order to work with table service I need to create the storage client for table service. 
This is very similar as the Windows Azure SDK for .NET. As the code below I created a new variant named “client” and use “createTableService”, specified my storage account name and key. 1: var azure = require("azure"); 2: var storageAccountName = "synctile"; 3: var storageAccountKey = "/cOy9L7xysXOgPYU9FjDvjrRAhaMX/5tnOpcjqloPNDJYucbgTy7MOrAW7CbUg6PjaDdmyl+6pkwUnKETsPVNw=="; 4: var tableName = "resource"; 5: var client = azure.createTableService(storageAccountName, storageAccountKey); Now create a new function for URL “/was/init” so that we can trigger it through browser. Then in this function we will firstly load all records from WASD. 1: app.get("/was/init", function (req, res) { 2: // load all records from windows azure sql database 3: sql.open(connectionString, function (err, conn) { 4: if (err) { 5: console.log(err); 6: res.send(500, "Cannot open connection."); 7: } 8: else { 9: conn.queryRaw("SELECT * FROM [Resource]", function (err, results) { 10: if (err) { 11: console.log(err); 12: res.send(500, "Cannot retrieve records."); 13: } 14: else { 15: if (results.rows.length > 0) { 16: // begin to transform the records into table service 17: } 18: } 19: }); 20: } 21: }); 22: }); When we succeed loaded all records we can start to transform them into table service. First I need to recreate the table in table service. This can be done by deleting and creating the table through table client I had just created previously. 1: app.get("/was/init", function (req, res) { 2: // load all records from windows azure sql database 3: sql.open(connectionString, function (err, conn) { 4: if (err) { 5: console.log(err); 6: res.send(500, "Cannot open connection."); 7: } 8: else { 9: conn.queryRaw("SELECT * FROM [Resource]", function (err, results) { 10: if (err) { 11: console.log(err); 12: res.send(500, "Cannot retrieve records."); 13: } 14: else { 15: if (results.rows.length > 0) { 16: // begin to transform the records into table service 17: // recreate the table named 'resource' 18: client.deleteTable(tableName, function (error) { 19: client.createTableIfNotExists(tableName, function (error) { 20: if (error) { 21: error["target"] = "createTableIfNotExists"; 22: res.send(500, error); 23: } 24: else { 25: // transform the records 26: } 27: }); 28: }); 29: } 30: } 31: }); 32: } 33: }); 34: }); As you can see, the azure SDK provide its methods in callback pattern. In fact, almost all modules in Node.js use the callback pattern. For example, when I deleted a table I invoked “deleteTable” method, provided the name of the table and a callback function which will be performed when the table had been deleted or failed. Underlying, the azure module will perform the table deletion operation in POSIX async threads pool asynchronously. And once it’s done the callback function will be performed. This is the reason we need to nest the table creation code inside the deletion function. If we perform the table creation code after the deletion code then they will be invoked in parallel. Next, for each records in WASD I created an entity and then insert into the table service. Finally I send the response to the browser. Can you find a bug in the code below? I will describe it later in this post. 
1: app.get("/was/init", function (req, res) { 2: // load all records from windows azure sql database 3: sql.open(connectionString, function (err, conn) { 4: if (err) { 5: console.log(err); 6: res.send(500, "Cannot open connection."); 7: } 8: else { 9: conn.queryRaw("SELECT * FROM [Resource]", function (err, results) { 10: if (err) { 11: console.log(err); 12: res.send(500, "Cannot retrieve records."); 13: } 14: else { 15: if (results.rows.length > 0) { 16: // begin to transform the records into table service 17: // recreate the table named 'resource' 18: client.deleteTable(tableName, function (error) { 19: client.createTableIfNotExists(tableName, function (error) { 20: if (error) { 21: error["target"] = "createTableIfNotExists"; 22: res.send(500, error); 23: } 24: else { 25: // transform the records 26: for (var i = 0; i < results.rows.length; i++) { 27: var entity = { 28: "PartitionKey": results.rows[i][1], 29: "RowKey": results.rows[i][0], 30: "Value": results.rows[i][2] 31: }; 32: client.insertEntity(tableName, entity, function (error) { 33: if (error) { 34: error["target"] = "insertEntity"; 35: res.send(500, error); 36: } 37: else { 38: console.log("entity inserted"); 39: } 40: }); 41: } 42: // send the 43: console.log("all done"); 44: res.send(200, "All done!"); 45: } 46: }); 47: }); 48: } 49: } 50: }); 51: } 52: }); 53: }); Now we can publish it to the cloud and have a try. But normally we’d better test it at the local emulator first. In Node.js SDK there are three build-in properties which provides the account name, key and host address for local storage emulator. We can use them to initialize our table service client. We also need to change the SQL connection string to let it use my local database. The code will be changed as below. 1: // windows azure sql database 2: //var connectionString = "Driver={SQL Server Native Client 10.0};Server=tcp:ac6271ya9e.database.windows.net,1433;Database=synctile;Uid=shaunxu@ac6271ya9e;Pwd=eszqu94XZY;Encrypt=yes;Connection Timeout=30;"; 3: // sql server 4: var connectionString = "Driver={SQL Server Native Client 11.0};Server={.};Database={Caspar};Trusted_Connection={Yes};"; 5:  6: var azure = require("azure"); 7: var storageAccountName = "synctile"; 8: var storageAccountKey = "/cOy9L7xysXOgPYU9FjDvjrRAhaMX/5tnOpcjqloPNDJYucbgTy7MOrAW7CbUg6PjaDdmyl+6pkwUnKETsPVNw=="; 9: var tableName = "resource"; 10: // windows azure storage 11: //var client = azure.createTableService(storageAccountName, storageAccountKey); 12: // local storage emulator 13: var client = azure.createTableService(azure.ServiceClient.DEVSTORE_STORAGE_ACCOUNT, azure.ServiceClient.DEVSTORE_STORAGE_ACCESS_KEY, azure.ServiceClient.DEVSTORE_TABLE_HOST); Now let’s run the application and navigate to “localhost:12345/was/init” as I hosted it on port 12345. We can find it transformed the data from my local database to local table service. Everything looks fine. But there is a bug in my code. If we have a look on the Node.js command window we will find that it sent response before all records had been inserted, which is not what I expected. The reason is that, as I mentioned before, Node.js perform all IO operations in non-blocking model. When we inserted the records we executed the table service insert method in parallel, and the operation of sending response was also executed in parallel, even though I wrote it at the end of my logic. 
The correct logic should be: once all entities have been copied to the table service with no errors, send the response to the browser; otherwise send an error message to the browser. To do so I need to import another module named "async", which helps us coordinate asynchronous code. Install the module and import it at the beginning of the code. Then we can use its "forEach" method for the asynchronous code that inserts the table entities. The first argument of "forEach" is the array to be processed, the second argument is the operation to perform for each item in the array, and the third argument is invoked once all items have been processed or an error has occurred — this is where we can send our response to the browser.

app.get("/was/init", function (req, res) {
    // load all records from windows azure sql database
    sql.open(connectionString, function (err, conn) {
        if (err) {
            console.log(err);
            res.send(500, "Cannot open connection.");
        }
        else {
            conn.queryRaw("SELECT * FROM [Resource]", function (err, results) {
                if (err) {
                    console.log(err);
                    res.send(500, "Cannot retrieve records.");
                }
                else {
                    if (results.rows.length > 0) {
                        // begin to transform the records into table service
                        // recreate the table named 'resource'
                        client.deleteTable(tableName, function (error) {
                            client.createTableIfNotExists(tableName, function (error) {
                                if (error) {
                                    error["target"] = "createTableIfNotExists";
                                    res.send(500, error);
                                }
                                else {
                                    async.forEach(results.rows,
                                        // transform the records
                                        function (row, callback) {
                                            var entity = {
                                                "PartitionKey": row[1],
                                                "RowKey": row[0],
                                                "Value": row[2]
                                            };
                                            client.insertEntity(tableName, entity, function (error) {
                                                if (error) {
                                                    callback(error);
                                                }
                                                else {
                                                    console.log("entity inserted.");
                                                    callback(null);
                                                }
                                            });
                                        },
                                        // send response
                                        function (error) {
                                            if (error) {
                                                error["target"] = "insertEntity";
                                                res.send(500, error);
                                            }
                                            else {
                                                console.log("all done");
                                                res.send(200, "All done!");
                                            }
                                        }
                                    );
                                }
                            });
                        });
                    }
                }
            });
        }
    });
});

Run it locally and now we can see the response is sent only after all entities have been inserted. Querying entities against the table service is simple as well: just use the "queryEntity" method of the table service client, providing the partition key and row key. We can also supply more complex query criteria. In the code below I query an entity by partition key and row key, and return the proper localization value in the response.

app.get("/was/:key/:culture", function (req, res) {
    var key = req.params.key;
    var culture = req.params.culture;
    client.queryEntity(tableName, culture, key, function (error, entity) {
        if (error) {
            res.send(500, error);
        }
        else {
            res.json(entity);
        }
    });
});

Then I tested it on the local emulator. Finally, if we want to publish this application to the cloud we should change the database connection string and storage account. For more information about how to consume the blob and queue services, as well as the service bus, please refer to the MSDN page.

Consume Service Runtime

As I mentioned above, before we publish our application to the cloud we need to change the connection string and account information in our code.
But if you have played with WACS you will know that the service runtime provides the ability to retrieve configuration settings, endpoints and local resource information at runtime. This means we can define these values in the CSCFG and CSDEF files and let the runtime retrieve the proper values. For example, we can add some role settings through the property window of the role, specifying the connection string and storage account for both cloud and local. We can also use the endpoint defined in the role environment in our Node.js application. In the Node.js SDK we can get an object from "azure.RoleEnvironment" which provides the functionality to retrieve the configuration settings, endpoints, and so on. In the code below I defined the connection string variables and then used the SDK to retrieve them and initialize the table client.

var connectionString = "";
var storageAccountName = "";
var storageAccountKey = "";
var tableName = "";
var client;

azure.RoleEnvironment.getConfigurationSettings(function (error, settings) {
    if (error) {
        console.log("ERROR: getConfigurationSettings");
        console.log(JSON.stringify(error));
    }
    else {
        console.log(JSON.stringify(settings));
        connectionString = settings["SqlConnectionString"];
        storageAccountName = settings["StorageAccountName"];
        storageAccountKey = settings["StorageAccountKey"];
        tableName = settings["TableName"];

        console.log("connectionString = %s", connectionString);
        console.log("storageAccountName = %s", storageAccountName);
        console.log("storageAccountKey = %s", storageAccountKey);
        console.log("tableName = %s", tableName);

        client = azure.createTableService(storageAccountName, storageAccountKey);
    }
});

In this way we don't need to amend the code when switching configurations between the local and cloud environments, since the service runtime takes care of it. At the end of the code we also listen on the port retrieved from the SDK.

azure.RoleEnvironment.getCurrentRoleInstance(function (error, instance) {
    if (error) {
        console.log("ERROR: getCurrentRoleInstance");
        console.log(JSON.stringify(error));
    }
    else {
        console.log(JSON.stringify(instance));
        if (instance["endpoints"] && instance["endpoints"]["nodejs"]) {
            var endpoint = instance["endpoints"]["nodejs"];
            app.listen(endpoint["port"]);
        }
        else {
            app.listen(8080);
        }
    }
});

But if we test the application right now we will find that it cannot retrieve any values from the service runtime. This is because, by default, the entry point of this role is the worker role class. In the Windows Azure environment the service runtime opens a named pipe to the entry point instance so that it can connect to the runtime and retrieve values. In this case, since the entry point was the worker role class and Node.js was launched inside the role, the named pipe was established between our worker role class and the service runtime, so our Node.js application cannot use it. To fix this problem we need to open the CSDEF file under the azure project, add a new element named Runtime, and then add an element named EntryPoint which specifies the Node.js command line (a sketch of such a fragment follows below). With this in place the Node.js application has the connection to the service runtime and is able to read the configurations. Starting Node.js in the local emulator we can see that it retrieved the connection string and storage account for the local environment.
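(For reference, a minimal sketch of what such a CSDEF fragment might look like — this is not taken from the original post; the role name and the startup script path are assumptions and should match your own project.)

    <WorkerRole name="WorkerRole1">
      <Runtime>
        <EntryPoint>
          <!-- Assumed script name; point commandLine at your Node.js entry script. -->
          <ProgramEntryPoint commandLine="node.exe server.js" setReadyOnProcessStart="true" />
        </EntryPoint>
      </Runtime>
      <!-- endpoints, configuration settings, etc. -->
    </WorkerRole>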
And if we publish our application to azure, it works with WASD and the storage service through the cloud configurations.

Summary

In this post I demonstrated how to use the Windows Azure SDK for Node.js to interact with the storage service, especially the table service. I also demonstrated how to use the WACS service runtime to retrieve configuration settings and endpoint information, and showed that in order to make the service runtime available to my Node.js application I needed to create an entry point element in the CSDEF file and set "node.exe" as the entry point.

I used five posts to introduce and demonstrate how to run a Node.js application on the Windows platform, and how to use Windows Azure Web Site and a Windows Azure Cloud Service worker role to host a Node.js application. I also described how to work with other services provided by the Windows Azure platform through the Windows Azure SDK for Node.js.

Node.js is a very new and young network application platform. But since it is simple to learn and deploy, and it uses a single-threaded, non-blocking IO model, Node.js has become more and more popular for web application and web service development, especially for IO-intensive projects. And since Node.js is very good at scaling out, it is particularly useful on a cloud computing platform. Using Node.js on the Windows platform is new, too. The modules for SQL database and the Windows Azure SDK are still under development and enhancement: "node-sqlserver" does not support SQL parameters yet, while "azure" does support using a storage connection string to create the storage client. Microsoft is working on making them easier to use and on adding more features and functionality.

PS: you can download the source code here. You can download the source code of my "Copy all always" tool here.

Hope this helps,
Shaun

All documents and related graphics and code are provided "AS IS" without warranty of any kind. Copyright © Shaun Ziyan Xu. This work is licensed under the Creative Commons License.

    Read the article

  • Create a WCF REST Client Proxy Programmatically (in C#)

    - by Tawani
    I am trying to create a REST client proxy programmatically in C# using the code below, but I keep getting a CommunicationException error. Am I missing something?

    public static class WebProxyFactory
    {
        public static T Create<T>(string url) where T : class
        {
            ServicePointManager.Expect100Continue = false;
            WebHttpBinding binding = new WebHttpBinding();
            binding.MaxReceivedMessageSize = 1000000;
            WebChannelFactory<T> factory = new WebChannelFactory<T>(binding, new Uri(url));
            T proxy = factory.CreateChannel();
            return proxy;
        }

        public static T Create<T>(string url, string userName, string password) where T : class
        {
            ServicePointManager.Expect100Continue = false;
            WebHttpBinding binding = new WebHttpBinding();
            binding.Security.Mode = WebHttpSecurityMode.TransportCredentialOnly;
            binding.Security.Transport.ClientCredentialType = HttpClientCredentialType.Basic;
            binding.UseDefaultWebProxy = false;
            binding.MaxReceivedMessageSize = 1000000;
            WebChannelFactory<T> factory = new WebChannelFactory<T>(binding, new Uri(url));
            ClientCredentials credentials = factory.Credentials;
            credentials.UserName.UserName = userName;
            credentials.UserName.Password = password;
            T proxy = factory.CreateChannel();
            return proxy;
        }
    }

    So that I can use it as follows:

    IMyRestService proxy = WebProxyFactory.Create<IMyRestService>(url, usr, pwd);
    var result = proxy.GetSomthing(); // Fails right here

    Read the article

  • Getting the JSESSIONID from the response Headers in C#

    - by acadia
    Hello, in my C# Windows application I am building a web request and getting the response back:

    Uri uri = null;
    string workplaceURL = "http://filenet:9081/WorkPlaceXT";
    uri = new Uri(workplaceURL + "/setCredentials?op=getUserToken&userId=" + encodeLabel(userName) + "&password=" + encodeLabel(pwd) + "&verify=true");
    System.Net.WebRequest webRequest = System.Net.WebRequest.Create(uri);
    System.Net.WebResponse webResponse = webRequest.GetResponse();
    StreamReader streamReader = new StreamReader(webResponse.GetResponseStream());

    and I am getting the headers back as shown below:

    ?webResponse.Headers
    {ResultXml: <?xml version="1.0"?><response><errorcode>0</errorcode><description>Success.</description></response>
    Content-Language: en-US
    Content-Length: 201
    Cache-Control: no-cache="set-cookie, set-cookie2"
    Date: Thu, 03 Jun 2010 16:10:12 GMT
    Expires: Thu, 01 Dec 1994 16:00:00 GMT
    Set-Cookie: JSESSIONID=0000GiPPR9PPceZSv6d0FC4-vcT:-1; Path=/
    Server: WebSphere Application Server/6.1
    }
    (the base System.Collections.Specialized.NameValueCollection shows the same headers)

    How do I fetch just the JSESSIONID? I need to pass the JSESSIONID to a different URL. Please help.

    Read the article

  • Use MTOM/streaming from C# calling a webservice in java exposed via jaxws

    - by raticulin
    We have this webservice created with jax-ws:

    @WebService(name = "Mywebser", targetNamespace = "http://namespace")
    @MTOM(threshold = 2048)
    @SOAPBinding(style = SOAPBinding.Style.DOCUMENT, use = SOAPBinding.Use.LITERAL, parameterStyle = SOAPBinding.ParameterStyle.WRAPPED)
    public class Mywebser {
        @WebMethod(operationName = "doStreaming", action = "urn:doStreaming")
        @WebResult(name = "return")
        public ResultInfo doStreaming(String username, String pwd, @XmlMimeType("application/octet-stream") DataHandler data, boolean overw) {
            ...
        }
    }

    The generated client side looks like this:

    @WebMethod(action = "urn:doStreaming")
    @WebResult(targetNamespace = "")
    @RequestWrapper(localName = "doStreaming", targetNamespace = "http://namespace", className = "com.mypack.client.doStreaming")
    @ResponseWrapper(localName = "doStreamingResponse", targetNamespace = "http://namespace", className = "com.mypack.client.doStreamingResponse")
    public ResultInfo doStreaming(
        @WebParam(name = "arg0", targetNamespace = "") String arg0,
        @WebParam(name = "arg1", targetNamespace = "") String arg1,
        @WebParam(name = "arg2", targetNamespace = "") DataHandler arg2,
        @WebParam(name = "arg3", targetNamespace = "") boolean arg3);

    Used this way it streams properly (verified: we can pass an 80 MB argument even when the JVM is allowed less memory):

    MywebserService serv = ...;
    Mywebser wso = serv.getMywebserPort(new MTOMFeature());
    Map<String, Object> ctxt = ((BindingProvider) wso).getRequestContext();
    ctxt.put(JAXWSProperties.HTTP_CLIENT_STREAMING_CHUNK_SIZE, 8192);
    DataHandler dataHandler = new DataHandler(new FileDataSource("c:\\temp\\A.dat"));
    arcres = wso.doStreaming("a", "b", dataHandler, true);

    We generate a client for .NET with Visual Studio 2008 using "Add Web Reference" and get this C# code:

    [System.Web.Services.Protocols.SoapDocumentMethodAttribute("urn:doStreaming", RequestNamespace="http://namespace", ResponseNamespace="http://namespace", Use=System.Web.Services.Description.SoapBindingUse.Literal, ParameterStyle=System.Web.Services.Protocols.SoapParameterStyle.Wrapped)]
    [return: System.Xml.Serialization.XmlElementAttribute("return", Form=System.Xml.Schema.XmlSchemaForm.Unqualified)]
    public ResultInfo doStreaming(
        [System.Xml.Serialization.XmlElementAttribute(Form=System.Xml.Schema.XmlSchemaForm.Unqualified)] string arg0,
        [System.Xml.Serialization.XmlElementAttribute(Form=System.Xml.Schema.XmlSchemaForm.Unqualified)] string arg1,
        [System.Xml.Serialization.XmlElementAttribute(Form=System.Xml.Schema.XmlSchemaForm.Unqualified, DataType="base64Binary")] byte[] arg2,
        [System.Xml.Serialization.XmlElementAttribute(Form=System.Xml.Schema.XmlSchemaForm.Unqualified)] bool arg3)

    Apparently this is not using streaming? The base64Binary type of arg2 does not seem right — in Java it is a DataHandler. By testing with low memory on the Java side we can see it is not streaming, as it fails with an OOM. Does anyone know if this is possible, and if so, how? Our environment: server: JDK 1.6, JAX-WS 2.1.7; client: C# 2.0, Visual Studio 2008.

    Read the article

< Previous Page | 12 13 14 15 16 17 18 19 20 21 22  | Next Page >