Search Results

Search found 12215 results on 489 pages for 'identity column'.


  • Sshfs is not working

    - by Devrim
    Hi, When I run sshpass -p 'mypass' sshfs 'root'@'68.19.40.16':/ '/dir' -o StrictHostKeyChecking=no,debug It successfully mounts but it runs on foreground. When I run without 'debug' parameter, it doesn't mount at all. Server is ubuntu 8.04 Any ideas why? UPDATE: When I run the command as ROOT it does mount. It doesn't work with other users. here is the output of an unsuccessful mount $ sshpass -p 'pass' sshfs 'root'@'68.1.1.1':/ '/s6' -o StrictHostKeyChecking=no,sshfs_debug,loglevel=debug debug1: Reading configuration data /etc/ssh/ssh_config debug1: Applying options for * debug1: Connecting to 68.1.1.1 [68.1.1.1] port 22. debug1: Connection established. debug1: identity file /var/www/vhosts/devrim.kodingen.com/.ssh/id_rsa type -1 debug1: identity file /var/www/vhosts/devrim.kodingen.com/.ssh/id_dsa type -1 debug1: Remote protocol version 2.0, remote software version OpenSSH_5.1p1 Debian-5 debug1: match: OpenSSH_5.1p1 Debian-5 pat OpenSSH* debug1: Enabling compatibility mode for protocol 2.0 debug1: Local version string SSH-2.0-OpenSSH_4.7p1 Debian-8ubuntu1.2 debug1: SSH2_MSG_KEXINIT sent debug1: SSH2_MSG_KEXINIT received debug1: kex: server->client aes128-cbc hmac-md5 none debug1: kex: client->server aes128-cbc hmac-md5 none debug1: SSH2_MSG_KEX_DH_GEX_REQUEST(1024<1024<8192) sent debug1: expecting SSH2_MSG_KEX_DH_GEX_GROUP debug1: SSH2_MSG_KEX_DH_GEX_INIT sent debug1: expecting SSH2_MSG_KEX_DH_GEX_REPLY Warning: Permanently added '68.1.1.1' (RSA) to the list of known hosts. debug1: ssh_rsa_verify: signature correct debug1: SSH2_MSG_NEWKEYS sent debug1: expecting SSH2_MSG_NEWKEYS debug1: SSH2_MSG_NEWKEYS received debug1: SSH2_MSG_SERVICE_REQUEST sent debug1: SSH2_MSG_SERVICE_ACCEPT received debug1: Authentications that can continue: publickey,password debug1: Next authentication method: publickey debug1: Trying private key: /var/www/vhosts/devrim.kodingen.com/.ssh/id_rsa debug1: Trying private key: /var/www/vhosts/devrim.kodingen.com/.ssh/id_dsa debug1: Next authentication method: password debug1: Authentication succeeded (password). debug1: channel 0: new [client-session] debug1: Entering interactive session. debug1: Sending environment. debug1: Sending env LANG = en_GB.UTF-8 debug1: Sending subsystem: sftp Server version: 3 debug1: channel 0: free: client-session, nchannels 1 debug1: fd 0 clearing O_NONBLOCK debug1: Killed by signal 1.


  • Problem with creation of scheduled task from IIS6 on SR2003

    - by Morten Louw Nielsen
    Hi, I have also posted this question on Stack Overflow, but will also try here, since it might be more system-related. I am writing a web application using .NET. The web app creates scheduled tasks using the System.Diagnostics.Process class, calling SCHTASKS.EXE with parameters. I have changed the identity on the app pool to a specific domain user. The domain user is a local administrator on all four web servers. From webserver01 I am creating tasks on webserver01 through webserver04. It works perfectly for 3-5 days, but then it breaks. It gives me the following error message in a message box: "The application failed to initialize properly (0xc0000142). Click on OK to terminate the application." If the system is in the broken state and I change the identity of the app pool to Domain Administrator, it works. As soon as I change it back to my domain user, it breaks again. If I reboot the server, it works again for the same number of days, but then breaks again. It seems like a permission-related problem. I just don't understand why it sometimes works and sometimes doesn't. I hope someone out there has seen this problem! Looking forward to hearing from you! Kind regards, Morten, Denmark
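    For reference, a minimal illustrative sketch of the kind of SCHTASKS.EXE call described above, made through System.Diagnostics.Process; the server name, task name, command, and schedule below are placeholders rather than values from the original post.

        using System;
        using System.Diagnostics;

        class ScheduledTaskCreator
        {
            // Shells out to SCHTASKS.EXE to create a task on a remote server.
            // This runs under the web application's app pool identity, which is
            // why that identity needs rights on the target machine.
            static int CreateDailyTask(string server, string taskName, string command)
            {
                var startInfo = new ProcessStartInfo
                {
                    FileName = "SCHTASKS.EXE",
                    Arguments = string.Format(
                        "/Create /S {0} /TN \"{1}\" /TR \"{2}\" /SC DAILY /ST 02:00",
                        server, taskName, command),
                    UseShellExecute = false,
                    RedirectStandardOutput = true,
                    RedirectStandardError = true,
                    CreateNoWindow = true
                };

                using (Process process = Process.Start(startInfo))
                {
                    string output = process.StandardOutput.ReadToEnd();
                    string errors = process.StandardError.ReadToEnd();
                    process.WaitForExit();
                    return process.ExitCode;   // non-zero means SCHTASKS reported a failure
                }
            }
        }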


  • Can't log in via ssh after upgrading to Ubuntu 12.10

    - by user42899
    I have an Ubuntu 12.04LTS instance on AWS EC2 and I upgraded it to 12.10 following the instructions at https://help.ubuntu.com/community/QuantalUpgrades. After upgrading I can no longer ssh into my VM. It isn't accepting my ssh key and my password is also rejected. The VM is running, reachable, and SSH is started. The problem seems to be about the authentication part. SSH has been the only way for me to access that VM. What are my options? ubuntu@alice:~$ ssh -v -i .ssh/sos.pem [email protected] OpenSSH_5.9p1 Debian-5ubuntu1, OpenSSL 1.0.1 14 Mar 2012 debug1: Reading configuration data /home/ubuntu/.ssh/config debug1: Reading configuration data /etc/ssh/ssh_config debug1: /etc/ssh/ssh_config line 19: Applying options for * debug1: Connecting to www.hostname.com [37.37.37.37] port 22. debug1: Connection established. debug1: identity file .ssh/sos.pem type -1 debug1: identity file .ssh/sos.pem-cert type -1 debug1: Remote protocol version 2.0, remote software version OpenSSH_5.9p1 Debian-5ubuntu1 debug1: match: OpenSSH_5.9p1 Debian-5ubuntu1 pat OpenSSH* debug1: Enabling compatibility mode for protocol 2.0 debug1: Local version string SSH-2.0-OpenSSH_5.9p1 Debian-5ubuntu1 debug1: SSH2_MSG_KEXINIT sent debug1: SSH2_MSG_KEXINIT received debug1: kex: server->client aes128-ctr hmac-md5 none debug1: kex: client->server aes128-ctr hmac-md5 none debug1: sending SSH2_MSG_KEX_ECDH_INIT debug1: expecting SSH2_MSG_KEX_ECDH_REPLY debug1: Server host key: RSA 33:33:33:33:33:33:33:33:33:33:33:33:33:33 debug1: Host '[www.hostname.com]:22' is known and matches the RSA host key. debug1: Found key in /home/ubuntu/.ssh/known_hosts:12 debug1: ssh_rsa_verify: signature correct debug1: SSH2_MSG_NEWKEYS sent debug1: expecting SSH2_MSG_NEWKEYS debug1: SSH2_MSG_NEWKEYS received debug1: Roaming not allowed by server debug1: SSH2_MSG_SERVICE_REQUEST sent debug1: SSH2_MSG_SERVICE_ACCEPT received debug1: Authentications that can continue: publickey,password debug1: Next authentication method: publickey debug1: Trying private key: .ssh/sos.pem debug1: read PEM private key done: type RSA debug1: Authentications that can continue: publickey,password debug1: Next authentication method: password [email protected]'s password: debug1: Authentications that can continue: publickey,password Permission denied, please try again.


  • SSH login very slow on OS X Leopard

    - by acjohnson55
    My SSH sessions take a very long time to initiate. This applies for logins with and without passwords, interactive and non-interactive. I have tried setting 'GSSAPIAuthentication no' and 'IPQoS 0x00' on the client side, and 'UseDNS no' on the server side, but no dice. I'm really stumped and frustrated. The worst part is that it SFTP takes forever to establish connections too, making file transfer much longer than it would be otherwise. I thought the problem might be something with PAM, because of where the hang is in the sshd log below, so I tried commenting out each line one-by-one in the /etc/pam.d/sshd file. Some caused login to be impossible, some had no apparent effect. I can't really tell if PAM is stalling for other services, but I can say that su'ing into my account from another account with 'su -l' has no apparent delay. I tried creating a new user account, just to see if there was something wrong with my existing account, and the same problem persisted. Any ideas of what's going on? On the client side, the most verbose mode outputs (redacted where reasonable): OpenSSH_5.9p1, OpenSSL 0.9.8r 8 Feb 2011 debug1: Reading configuration data ... debug1: ... line 1: Applying options for ... debug1: Reading configuration data /etc/ssh_config debug1: /etc/ssh_config line 20: Applying options for * debug1: /etc/ssh_config line 53: Applying options for * debug2: ssh_connect: needpriv 0 debug1: Connecting to ... [x.x.x.x] port 22. debug1: Connection established. debug1: identity file /.../.ssh/id_rsa type -1 debug1: identity file /.../.ssh/id_rsa-cert type -1 debug3: Incorrect RSA1 identifier debug3: Could not load "/.../.ssh/id_dsa" as a RSA1 public key debug1: identity file /.../.ssh/id_dsa type 2 debug1: identity file /.../.ssh/id_dsa-cert type -1 debug1: Remote protocol version 2.0, remote software version OpenSSH_5.2 debug1: match: OpenSSH_5.2 pat OpenSSH* debug1: Enabling compatibility mode for protocol 2.0 debug1: Local version string SSH-2.0-OpenSSH_5.9 debug2: fd 3 setting O_NONBLOCK debug3: load_hostkeys: loading entries for host "..." 
from file "/.../.ssh/known_hosts" debug3: load_hostkeys: found key type RSA in file /.../.ssh/known_hosts:9 debug3: load_hostkeys: loaded 1 keys debug3: order_hostkeyalgs: prefer hostkeyalgs: [email protected],[email protected],ssh-rsa debug1: SSH2_MSG_KEXINIT sent debug1: SSH2_MSG_KEXINIT received debug2: kex_parse_kexinit: diffie-hellman-group-exchange-sha256,diffie-hellman-group-exchange-sha1,diffie-hellman-group14-sha1,diffie-hellman-group1-sha1 debug2: kex_parse_kexinit: [email protected],[email protected],ssh-rsa,[email protected],[email protected],ssh-dss debug2: kex_parse_kexinit: aes128-ctr,aes192-ctr,aes256-ctr,arcfour256,arcfour128,aes128-cbc,3des-cbc,blowfish-cbc,cast128-cbc,aes192-cbc,aes256-cbc,arcfour,[email protected] debug2: kex_parse_kexinit: aes128-ctr,aes192-ctr,aes256-ctr,arcfour256,arcfour128,aes128-cbc,3des-cbc,blowfish-cbc,cast128-cbc,aes192-cbc,aes256-cbc,arcfour,[email protected] debug2: kex_parse_kexinit: hmac-md5,hmac-sha1,[email protected],hmac-sha2-256,hmac-sha2-256-96,hmac-sha2-512,hmac-sha2-512-96,hmac-ripemd160,[email protected],hmac-sha1-96,hmac-md5-96 debug2: kex_parse_kexinit: hmac-md5,hmac-sha1,[email protected],hmac-sha2-256,hmac-sha2-256-96,hmac-sha2-512,hmac-sha2-512-96,hmac-ripemd160,[email protected],hmac-sha1-96,hmac-md5-96 debug2: kex_parse_kexinit: none,[email protected],zlib debug2: kex_parse_kexinit: none,[email protected],zlib debug2: kex_parse_kexinit: debug2: kex_parse_kexinit: debug2: kex_parse_kexinit: first_kex_follows 0 debug2: kex_parse_kexinit: reserved 0 debug2: kex_parse_kexinit: diffie-hellman-group-exchange-sha256,diffie-hellman-group-exchange-sha1,diffie-hellman-group14-sha1,diffie-hellman-group1-sha1 debug2: kex_parse_kexinit: ssh-rsa,ssh-dss debug2: kex_parse_kexinit: aes128-ctr,aes192-ctr,aes256-ctr,arcfour256,arcfour128,aes128-cbc,3des-cbc,blowfish-cbc,cast128-cbc,aes192-cbc,aes256-cbc,arcfour,[email protected] debug2: kex_parse_kexinit: aes128-ctr,aes192-ctr,aes256-ctr,arcfour256,arcfour128,aes128-cbc,3des-cbc,blowfish-cbc,cast128-cbc,aes192-cbc,aes256-cbc,arcfour,[email protected] debug2: kex_parse_kexinit: hmac-md5,hmac-sha1,[email protected],hmac-ripemd160,[email protected],hmac-sha1-96,hmac-md5-96 debug2: kex_parse_kexinit: hmac-md5,hmac-sha1,[email protected],hmac-ripemd160,[email protected],hmac-sha1-96,hmac-md5-96 debug2: kex_parse_kexinit: none,[email protected] debug2: kex_parse_kexinit: none,[email protected] debug2: kex_parse_kexinit: debug2: kex_parse_kexinit: debug2: kex_parse_kexinit: first_kex_follows 0 debug2: kex_parse_kexinit: reserved 0 debug2: mac_setup: found hmac-md5 debug1: kex: server->client aes128-ctr hmac-md5 none debug2: mac_setup: found hmac-md5 debug1: kex: client->server aes128-ctr hmac-md5 none debug1: SSH2_MSG_KEX_DH_GEX_REQUEST(1024<1024<8192) sent debug1: expecting SSH2_MSG_KEX_DH_GEX_GROUP debug2: dh_gen_key: priv key bits set: 136/256 debug2: bits set: 523/1024 debug1: SSH2_MSG_KEX_DH_GEX_INIT sent debug1: expecting SSH2_MSG_KEX_DH_GEX_REPLY debug1: Server host key: RSA ... debug3: load_hostkeys: loading entries for host "..." from file "/.../.ssh/known_hosts" debug3: load_hostkeys: found key type RSA in file /.../.ssh/known_hosts:9 debug3: load_hostkeys: loaded 1 keys debug3: load_hostkeys: loading entries for host "x.x.x.x" from file "/.../.ssh/known_hosts" debug3: load_hostkeys: found key type RSA in file /.../.ssh/known_hosts:9 debug3: load_hostkeys: loaded 1 keys debug1: Host '...' is known and matches the RSA host key. 
debug1: Found key in /.../.ssh/known_hosts:9 debug2: bits set: 492/1024 debug1: ssh_rsa_verify: signature correct debug2: kex_derive_keys debug2: set_newkeys: mode 1 debug1: SSH2_MSG_NEWKEYS sent debug1: expecting SSH2_MSG_NEWKEYS debug2: set_newkeys: mode 0 debug1: SSH2_MSG_NEWKEYS received debug1: Roaming not allowed by server debug1: SSH2_MSG_SERVICE_REQUEST sent debug2: service_accept: ssh-userauth debug1: SSH2_MSG_SERVICE_ACCEPT received debug2: key: /.../.ssh/id_dsa (0x7f8b7b41d6c0) debug2: key: /.../.ssh/id_rsa (0x0) debug1: Authentications that can continue: publickey,password,keyboard-interactive debug3: start over, passed a different list publickey,password,keyboard-interactive debug3: preferred publickey,keyboard-interactive,password debug3: authmethod_lookup publickey debug3: remaining preferred: keyboard-interactive,password debug3: authmethod_is_enabled publickey debug1: Next authentication method: publickey debug1: Offering DSA public key: /.../.ssh/id_dsa debug3: send_pubkey_test debug2: we sent a publickey packet, wait for reply debug1: Server accepts key: pkalg ssh-dss blen 434 debug2: input_userauth_pk_ok: fp ... debug3: sign_and_send_pubkey: DSA ... debug1: Authentication succeeded (publickey). Authenticated to ... ([x.x.x.x]:22). debug1: channel 0: new [client-session] debug3: ssh_session2_open: channel_new: 0 debug2: channel 0: send open debug1: Requesting [email protected] debug1: Entering interactive session. ****** Hangs here ****** debug2: callback start debug2: client_session2_setup: id 0 debug2: fd 3 setting TCP_NODELAY debug2: channel 0: request pty-req confirm 1 debug1: Sending environment. debug3: Ignored env TERM_PROGRAM debug3: Ignored env SHELL debug3: Ignored env TERM debug3: Ignored env TMPDIR debug3: Ignored env Apple_PubSub_Socket_Render debug3: Ignored env TERM_PROGRAM_VERSION debug3: Ignored env TERM_SESSION_ID debug3: Ignored env USER debug3: Ignored env COMMAND_MODE debug3: Ignored env SSH_AUTH_SOCK debug3: Ignored env Apple_Ubiquity_Message debug3: Ignored env __CF_USER_TEXT_ENCODING debug3: Ignored env PATH debug3: Ignored env MKL_NUM_THREADS debug3: Ignored env PWD debug1: Sending env LANG = en_US.UTF-8 debug2: channel 0: request env confirm 0 debug3: Ignored env HOME debug3: Ignored env SHLVL debug3: Ignored env DYLD_LIBRARY_PATH debug3: Ignored env PYTHONPATH debug3: Ignored env LOGNAME debug3: Ignored env DISPLAY debug3: Ignored env SECURITYSESSIONID debug3: Ignored env _ debug2: channel 0: request shell confirm 1 debug2: callback done debug2: channel 0: open confirm rwindow 0 rmax 32768 debug2: channel_input_status_confirm: type 99 id 0 debug2: PTY allocation request accepted on channel 0 debug2: channel 0: rcvd adjust 2097152 debug2: channel_input_status_confirm: type 99 id 0 debug2: shell request accepted on channel 0 On the server side, the debug output looks like: Sep 16 18:46:40 ... sshd[31435]: debug1: inetd sockets after dupping: 3, 4 Sep 16 18:46:40 ... sshd[31435]: Connection from x.x.x.x port 52758 Sep 16 18:46:40 ... sshd[31435]: debug1: Current Session ID is 56AC0FB0 / Session Attributes are 00008000 Sep 16 18:46:40 ... sshd[31435]: debug1: Running in inetd mode in a non-root session... assuming inetd created the session for us. Sep 16 18:46:40 ... sshd[31435]: debug1: Client protocol version 2.0; client software version OpenSSH_5.9 Sep 16 18:46:40 ... sshd[31435]: debug1: match: OpenSSH_5.9 pat OpenSSH* Sep 16 18:46:40 ... sshd[31435]: debug1: Enabling compatibility mode for protocol 2.0 Sep 16 18:46:40 ... 
sshd[31435]: debug1: Local version string SSH-2.0-OpenSSH_5.2 Sep 16 18:46:40 ... sshd[31435]: debug1: Checking with Service ACLs for ssh login restrictions Sep 16 18:46:40 ... sshd[31435]: debug1: call to mbr_user_name_to_uuid with <...> suceeded to retrieve user_uuid Sep 16 18:46:40 ... sshd[31435]: debug1: Call to mbr_check_service_membership failed with status <0> Sep 16 18:46:40 ... sshd[31435]: debug1: PAM: initializing for "..." Sep 16 18:46:40 ... sshd[31435]: debug1: PAM: setting PAM_RHOST to "x.x.x.x" Sep 16 18:46:40 ... sshd[31435]: Failed none for ... from x.x.x.x port 52758 ssh2 Sep 16 18:46:40 ... sshd[31435]: debug1: temporarily_use_uid: 509/20 (e=0/0) Sep 16 18:46:40 ... sshd[31435]: debug1: trying public key file /.../.ssh/authorized_keys Sep 16 18:46:40 ... sshd[31435]: debug1: restore_uid: 0/0 Sep 16 18:46:40 ... sshd[31435]: debug1: temporarily_use_uid: 509/20 (e=0/0) Sep 16 18:46:40 ... sshd[31435]: debug1: trying public key file /.../.ssh/authorized_keys2 Sep 16 18:46:40 ... sshd[31435]: debug1: fd 5 clearing O_NONBLOCK Sep 16 18:46:40 ... sshd[31435]: debug1: matching key found: file /.../.ssh/authorized_keys2, line 1 Sep 16 18:46:40 ... sshd[31435]: Found matching DSA key: ... Sep 16 18:46:40 ... sshd[31435]: debug1: restore_uid: 0/0 Sep 16 18:46:40 ... sshd[31435]: debug1: temporarily_use_uid: 509/20 (e=0/0) Sep 16 18:46:40 ... sshd[31435]: debug1: trying public key file /.../.ssh/authorized_keys Sep 16 18:46:40 ... sshd[31435]: debug1: restore_uid: 0/0 Sep 16 18:46:40 ... sshd[31435]: debug1: temporarily_use_uid: 509/20 (e=0/0) Sep 16 18:46:40 ... sshd[31435]: debug1: trying public key file /.../.ssh/authorized_keys2 Sep 16 18:46:40 ... sshd[31435]: debug1: fd 5 clearing O_NONBLOCK Sep 16 18:46:40 ... sshd[31435]: debug1: matching key found: file /.../.ssh/authorized_keys2, line 1 Sep 16 18:46:40 ... sshd[31435]: Found matching DSA key: ... Sep 16 18:46:40 ... sshd[31435]: debug1: restore_uid: 0/0 Sep 16 18:46:40 ... sshd[31435]: debug1: ssh_dss_verify: signature correct Sep 16 18:46:40 ... sshd[31435]: debug1: do_pam_account: called Sep 16 18:46:40 ... sshd[31435]: Accepted publickey for ... from x.x.x.x port 52758 ssh2 Sep 16 18:46:40 ... sshd[31435]: debug1: monitor_child_preauth: ... has been authenticated by privileged process Sep 16 18:46:40 ... sshd[31435]: debug1: PAM: establishing credentials ***** Hangs here ***** Sep 16 18:46:54 ... sshd[31435]: User child is on pid 31654 Sep 16 18:46:54 ... sshd[31654]: debug1: PAM: establishing credentials Sep 16 18:46:54 ... sshd[31654]: debug1: permanently_set_uid: 509/20 Sep 16 18:46:54 ... sshd[31654]: debug1: Entering interactive session for SSH2. Sep 16 18:46:54 ... sshd[31654]: debug1: server_init_dispatch_20 Sep 16 18:46:54 ... sshd[31654]: debug1: server_input_channel_open: ctype session rchan 0 win 1048576 max 16384 Sep 16 18:46:54 ... sshd[31654]: debug1: input_session_request Sep 16 18:46:54 ... sshd[31654]: debug1: channel 0: new [server-session] Sep 16 18:46:54 ... sshd[31654]: debug1: session_new: session 0 Sep 16 18:46:54 ... sshd[31654]: debug1: session_open: channel 0 Sep 16 18:46:54 ... sshd[31654]: debug1: session_open: session 0: link with channel 0 Sep 16 18:46:54 ... sshd[31654]: debug1: server_input_channel_open: confirm session Sep 16 18:46:54 ... sshd[31654]: debug1: server_input_global_request: rtype [email protected] want_reply 0 Sep 16 18:46:54 ... sshd[31654]: debug1: server_input_channel_req: channel 0 request pty-req reply 1 Sep 16 18:46:54 ... 
sshd[31654]: debug1: session_by_channel: session 0 channel 0 Sep 16 18:46:54 ... sshd[31654]: debug1: session_input_channel_req: session 0 req pty-req Sep 16 18:46:54 ... sshd[31654]: debug1: Allocating pty. Sep 16 18:46:54 ... sshd[31435]: debug1: session_new: session 0 Sep 16 18:46:54 ... sshd[31654]: debug1: session_pty_req: session 0 alloc /dev/ttys008 Sep 16 18:46:54 ... sshd[31654]: debug1: server_input_channel_req: channel 0 request env reply 0 Sep 16 18:46:54 ... sshd[31654]: debug1: session_by_channel: session 0 channel 0 Sep 16 18:46:54 ... sshd[31654]: debug1: session_input_channel_req: session 0 req env Sep 16 18:46:54 ... sshd[31654]: debug1: server_input_channel_req: channel 0 request shell reply 1 Sep 16 18:46:54 ... sshd[31654]: debug1: session_by_channel: session 0 channel 0 Sep 16 18:46:54 ... sshd[31654]: debug1: session_input_channel_req: session 0 req shell Sep 16 18:46:54 ... sshd[31655]: debug1: Setting controlling tty using TIOCSCTTY.


  • Routing a single request through multiple nginx backend apps

    - by Jonathan Oliver
    I wanted to get an idea if anything like the following scenario was possible: Nginx handles a request and routes it to some kind of authentication application where cookies and/or other kinds of security identifiers are interpreted and verified. The app perhaps makes a few additions to the request (appending authenticated headers). Failing authentication returns an HTTP 401. Nginx then takes the request and routes it through an authorization application which determines, based upon identity, the HTTP verb (put, delete, get, etc.), and the URL in question, whether the actor/agent/user has permission to perform the intended action. Perhaps the authorization application modifies the request somewhat by appending another header, for example. Failing authorization returns 403. (Wash, rinse, repeat the proxy pattern for any number of services that want to participate in the request in some fashion.) Finally, Nginx routes the request into the actual application code, where the request is inspected and the requested operations are executed according to the URL in question, and where the identity of the user can be captured and understood by the application by looking at the altered HTTP request. Ideally, Nginx could do this natively or with a plugin. Any ideas? The alternative I've considered is having Nginx hand off the initial request to the authentication application and then have this application proxy the request back through to Nginx (whether on the same box or another box). I know there are a number of application frameworks (Django, RoR, etc.) that can do a lot of this stuff "in process", but I was trying to make things a little more generic and self-contained, where different applications could "hook" the HTTP pipeline of Nginx and then participate in, short-circuit, or even modify the request accordingly. If Nginx can't do this, is anyone aware of other web servers that will perform in the manner described above?


  • Bad sectors, S.M.A.R.T., SpinRite, firmware on platter, and drive ID questions

    - by Christopher Galpin
    Is it possible for S.M.A.R.T. to give false readings (say I was fiddling with lots of recovery programs, transfers, and so on and so forth), or is it absolutely a read-only, direct correlation to the physical status of a drive? Does SpinRite level 5 "recover bad sectors" operate on those marked at the factory? Are they on the same level as your generic bad sector, with SpinRite thus having full access? (Also, I'm curious whether SMART's bad sector count is zeroed afterward or whether it includes factory-marked sectors.) The main firmware of some drives, like a WD Passport, is stored on the platter. How is it protected? Is it through marking those sectors as bad? If so, I'm wondering if SpinRite's sector recovery could bring about firmware corruption on these drives. Is the failure of a drive to report valid identity information (hdparm -I /dev/xx) consistent with corrupted firmware, or just general disk failure? I may be misunderstanding the role of firmware here. I feel I've read that a drive's identity information is on the platter, just like the partition tables and so on. Is this true? (Apologies if this is more appropriate for SuperUser.)


  • ssh without password does not work for some users

    - by joshxdr
    I have a new RHEL4 Linux box that I am using to copy data to old Solaris 2.6 and RHEL3 Linux boxes with scp. I have found that with the same setup, it works for some users but not for others. For user jane, this works fine: jane@host1$ ssh -v remhost debug1: Next authentication method: publickey debug1: Trying private key: /mnt/home/osborjo/.ssh/identity debug1: Offering public key: /mnt/home/osborjo/.ssh/id_rsa debug1: Server accepts key: pkalg ssh-rsa blen 277 debug1: read PEM private key done: type RSA debug1: Authentication succeeded (publickey). For user jack it does not: jack@host1$ ssh -v remhost debug1: Next authentication method: publickey debug1: Trying private key: /mnt/home/oper1/.ssh/identity debug1: Offering public key: /mnt/home/oper1/.ssh/id_rsa debug1: Authentications that can continue: publickey,password,keyboard-interactive I have looked at the permissions for all the keys and files; they look the same. Since I am using home directories mounted by NFS, the keys for both the remote host and the local host are in the same directory. This is how things look for jane: jane@host1$ ls -l $HOME/.ssh -rw-rw-r-- 1 jane operator 394 Jan 27 16:28 authorized_keys -rw------- 1 jane operator 1675 Jan 27 16:27 id_rsa -rw-r--r-- 1 jane operator 394 Jan 27 16:27 id_rsa.pub -rw-rw-r-- 1 jane operator 1205 Jan 27 16:46 known_hosts For user jack: jack@host1$ ls -l $HOME/.ssh -rw-rw-r-- 1 jack engineer 394 Jan 27 16:28 authorized_keys -rw------- 1 jack engineer 1675 Jan 27 16:27 id_rsa -rw-r--r-- 1 jack engineer 394 Jan 27 16:27 id_rsa.pub -rw-rw-r-- 1 jack engineer 1205 Jan 27 16:46 known_hosts As a last-ditch effort, I copied the authorized_keys, id_rsa, and id_rsa.pub from jane to jack, and changed the username in authorized_keys and id_rsa.pub with vi. It still did not work. It seems there is something different between the two users, but I cannot figure out what it is.


  • Setting up Live @ EDU

    - by user73721
    [PROBLEM] Hello everyone. I have a small issue here. We are trying to get our Exchange accounts for students only ported over from an Exchange Server 2003 installation to the Microsoft cloud service known as Live @ EDU. The problem we are having is that in order to do this we need to install two pieces of software: 1) OLSync, and 2) Microsoft Identity Lifecycle Manager. On the "Download the Galsync.msi here" page, the "here" link takes you to a page that needs a login for an admin account for Live @ EDU. That part works. However, once logged in, it redirects to a page that states: https://connect.microsoft.com/site185/Downloads/DownloadDetails.aspx?DownloadID=26407 Page Not Found The content that you requested cannot be found or you do not have permission to view it. If you believe you have reached this page in error, click the Help link at the top of the page to report the issue and include this ID in your e-mail: afa16bf4-3df0-437c-893a-8005f978c96c [WHAT I NEED] I need to download that file. Does anyone know of an alternative location for that installation file? I also need to obtain Identity Lifecycle Manager (ILM) 2007, Feature Pack 1 (FP1). If anyone has any helpful information, that would be fantastic! As well, if anyone has completed a migration of accounts from an on-site Exchange 2003 server to the Microsoft Live @ EDU servers, any general guidance would be helpful! Thanks in advance.


  • ADFS v.2.0 transitive trust in a federation scenario

    - by masi
    Currently I'm working with ADFS to establish a federated trust between two separate domains. My question is simple: does ADFS v2.0 support transitive trust across federated identity providers? I know that ADFS v1.0 does not, as stated in this document on page 9. But when looking at the claims rules that come with ADFS 2.0 it seems to be possible, as a Microsoft partner confirmed. However, the documentation on this topic is a mess! I simply could not find any ADFS v2.0 statements on this topic (if you have any documentation on this, please help me out!). To be more clear, let's assume this scenario: federation provider (A) trusts federation provider (B), which trusts identity provider (C). So, does (A) trust identities coming from (C) across (B)? Also, if it is possible, there are some things that I'm especially interested in: Is it possible to restrict the transitive trust in ADFS in any way? If so, how? How does the transitive trust affect the Issuer and OriginalIssuer properties of the claims? If transitive trust is used together with claims transformations, and provider (B) transforms incoming claims from (C) into (new) claims of the same type and value, how would this affect the Issuer and OriginalIssuer properties?


  • User authentication -- username mismatch in IIS in ASP.NET application

    - by Cory Larson
    Last week, an employee's Active Directory username was changed (or a new one was created for them). For the purposes of this example, let's assume these usernames: Old: Domain\11111 New: Domain\22222 When this user now logs in using their new username, and attempts to browse to any one of a number of ASP.NET applications using only Windows Authentication (no Anonymous enabled), the system authenticates but our next layer of database-driven permissions prevents them from being authorized. We tracked it down to a mismatch of usernames between their logon account and who IIS thinks they are. Below are the outputs of several ASP.NET variables from apps running in a Windows 2008 IIS7.5 environment: Request.ServerVariables["AUTH_TYPE"]: Negotiate Request.ServerVariables["AUTH_USER"]: Domain\11111 Request.ServerVariables["LOGON_USER"]: Domain\22222 Request.ServerVariables["REMOTE_USER"]: Domain\11111 HttpContext.Current.User.Identity.Name: Domain\11111 System.Threading.Thread.CurrentPrincipal.Identity.Name: Domain\11111 From the above, I can see that only the LOGON_USER server variable has the correct value, which is the account the user used to log on to their machine. However, we use the "AUTH_USER" variable for looking up the database permissions. In a separate testing environment (completely different server: Windows 2003, IIS6), all of the above variables show "Domain\22222". So this seems to be a server-specific issue, like the credentials are somehow getting cached either on their machine or on the server (the former seems more plausible). So the question is: how do I confirm whether it's the user's machine or the server that is botching the request? How should I go about fixing this? I looked at the following two resources and will be giving the first one a try shortly: http://www.interworks.com/blogs/jvalente/2010/02/02/removing-saved-credentials-passwords-windows-xp-windows-vista-or-windows-7 http://stackoverflow.com/questions/2325005/classic-asp-request-servervariableslogon-user-returning-wrong-username/5299080#5299080 Thanks.
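    As a diagnostic aid, a small illustrative sketch that logs the identity values compared above side by side; it assumes nothing beyond the standard ASP.NET request APIs already named in the question.

        // Drop this into any page or action on the affected server to capture the mismatch.
        var request = System.Web.HttpContext.Current.Request;
        var report = new System.Text.StringBuilder();
        report.AppendLine("AUTH_USER:   " + request.ServerVariables["AUTH_USER"]);
        report.AppendLine("LOGON_USER:  " + request.ServerVariables["LOGON_USER"]);
        report.AppendLine("REMOTE_USER: " + request.ServerVariables["REMOTE_USER"]);
        report.AppendLine("Identity:    " + System.Web.HttpContext.Current.User.Identity.Name);
        System.Diagnostics.Trace.WriteLine(report.ToString());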


  • Deleting "undeletable" files in Vista

    - by Nik Reiman
    I recently upgraded my workstation from XP SP3 to Vista Business, and during the upgrade Windows moved my old C:\Windows directory to C:\Windows.old. I got all of the stuff I needed out of that folder, but there are six "undeletable" files there so I cannot remove it. They are: Windows.old\Program1\Adobe\Reader 9.0\Resource\CMap\Identity-H Windows.old\Program1\Adobe\Reader 9.0\Resource\CMap\Identity-V Windows.old\Program1\Common Files\Adobe\Acrobat\ActiveX\AcroIEHelper.dll Windows.old\Program1\Common Files\Adobe\Acrobat\ActiveX\AcroIEHelperShim.dll Windows.old\Program1\Common Files\Adobe\Acrobat\ActiveX\AcroPDF.dll Windows.old\Program1\Common Files\Adobe\Acrobat\ActiveX\pdfshell.dll Whenever I try to delete the files either through explorer or a command line, I get a permission denied error. I have tried to grant myself full permission on the files, but again, permission denied. I don't even have acrobat installed on my Vista machine, and I uninstalled Adobe updater. However, I still can't manage to get rid of these files. How do I nuke them for good? Edit: I was able to take ownership of the files, but I still can't delete them. Renaming them did not work, as I was denied permission to do that as well. I'll try booting up in safe mode and getting rid of them there. Edit II: Booting up into safe mode did not allow me to delete the files. Bummer.


  • ApplicationPoolIdentity IIS 7.5 to SQL Server 2008 R2 not working.

    - by Jack
    I have a small ASP.NET test script that opens a connection to a SQL Server database on another machine in the domain. It isn't working in all cases. Setup: IIS 7.5 under W2K8R2 trying to connect to a remote SQL Server 2008 R2 instance. All machines are in the same domain. Using the ApplicationPoolIdentity for the web site, it fails to connect to the SQL Server with the following: Login failed for user 'NT AUTHORITY\ANONYMOUS LOGON'. Description: An unhandled exception occurred during the execution of the current web request. Please review the stack trace for more information about the error and where it originated in the code. Exception Details: System.Data.SqlClient.SqlException: Login failed for user 'NT AUTHORITY\ANONYMOUS LOGON'. However, if I switch the Process Model Identity to NETWORK SERVICE or my domain account, the database connection is successful. I've granted the machine account (domain\machinename$) access in SQL Server. I am not doing any sort of authentication on the web site; it is just a simple script to open a connection to a database to make sure it works. I have Anonymous Authentication enabled and set to use the application pool identity. How do I make this work? Why is the ApplicationPoolIdentity trying to use ANONYMOUS LOGON? Better yet, how do I make it stop using the anonymous logon?
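    For reference, a minimal illustrative sketch of the kind of connectivity test described above, using integrated security so SQL Server sees whatever Windows identity the app pool presents; the server and database names are placeholders, not values from the original post.

        using System.Data.SqlClient;

        // Opens a connection as the app pool's Windows identity and asks SQL Server
        // which login it actually sees. Server and database names are placeholders.
        var builder = new SqlConnectionStringBuilder
        {
            DataSource = "REMOTESQL01",
            InitialCatalog = "TestDb",
            IntegratedSecurity = true
        };

        using (var connection = new SqlConnection(builder.ConnectionString))
        using (var command = new SqlCommand("SELECT SUSER_SNAME();", connection))
        {
            connection.Open();   // throws "Login failed for user 'NT AUTHORITY\ANONYMOUS LOGON'"
                                 // when the pool identity is not passed through to SQL Server
            var loginSeenBySqlServer = (string)command.ExecuteScalar();
        }

    Comparing the SUSER_SNAME() result when running as NETWORK SERVICE versus ApplicationPoolIdentity usually shows whether the machine account or ANONYMOUS LOGON is actually reaching SQL Server.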


  • Authentication required by wireless network

    - by Roman
    I would like to use a wireless network from Ubuntu. In the network drop-down menu I select a network (this is a university network; I have an account there). Then I get a window with the following fields: Wireless Security: [WPA&WPA2 Enterprise] Authentication: [Tunneled TLS] Anonymous Identity: [] CA Certificate: [(None)] Inner Authentication: [some letters] User Name: [] Password: [] I enter my user name and password, do not change the default values, and leave "Anonymous Identity" blank. As a result I get "Authentication required by wireless network". How can I solve this problem? I think it is important to note that our system administrator tried to find some files (which probably need to be used as the "CA Certificate"). He said that he does not know where this file is located on Ubuntu (he supports only Windows). So probably this is the direction I need to go: I need to find this file. But maybe I am wrong. Maybe something else needs to be done. Could you please help me with that?


  • Perfectly reproducible select statement default ordering issue

    - by Dave
    Hi, I've recently been chasing an issue with a client's db... solution found, but impossible to recreate. Essentially, we're doing a Select * from MyTable where ArbitraryColumn = 75, where MyTable has an identity column called 'MyIdentityColumn', incremented by one on each insert. Naturally, and normally, I would assume that the order returned would be the order in which the rows were inserted (a bad assumption, but one which was forced onto me through an inherited application - which has been patched). Essentially, I would like suggestions as to why the ordering differs: the database was restored to my local machine (same OS, same SQL server version - 200 sp3, same collation) from the same backup that was restored as a test DB on the client site. When I perform the above select locally, I get the rows in order of insert (i.e. identity column ordered ascending). On the client, it seems random (but the same 'random' order each time)... A few other points: I have the same collation on my test server as the client; the same DB backup restored to a test DB only I can access; the same SQL server version and service pack; the same OS; and the test DB is a new DB - new log and MDF... I have the problem 'solved' by adding an explicit order by clause, but I want to understand the cause of the issue, given that my exact attempts to recreate it have been futile while it remains perfectly recreatable on the client server... Thanks in advance, Dave
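    Illustratively, a minimal sketch of the fix mentioned above - an explicit ORDER BY instead of reliance on insert order - issued from ADO.NET using the placeholder table and column names from the question; the connection string is a stand-in.

        using System.Data.SqlClient;

        // Without an ORDER BY, SQL Server gives no guarantee about row order, even when
        // rows were inserted in identity order; the explicit clause is the reliable fix.
        const string sql =
            "SELECT * FROM MyTable " +
            "WHERE ArbitraryColumn = @value " +
            "ORDER BY MyIdentityColumn;";

        using (var connection = new SqlConnection("<connection string here>"))
        using (var command = new SqlCommand(sql, connection))
        {
            command.Parameters.AddWithValue("@value", 75);
            connection.Open();
            using (var reader = command.ExecuteReader())
            {
                while (reader.Read())
                {
                    // rows arrive in ascending identity order
                }
            }
        }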


  • Exchange 2010 OWA - Open Other User Mailbox

    - by Benjamin Jones
    I just started working for this small firm (30 people) a little while ago, replacing their system admin. The first thing I noticed was that Exchange Server 2010 was WAY out of date. Believe it or not, they did not have SP1 installed. So after I installed and configured Exchange 2010 SP3 and redirected OWA, I noticed something in OWA: I could add ANYONE's user mailbox WITHOUT granting mailbox permission. I created a couple of test users; same thing. I even had another employee provide me access to their OWA and they could open anyone's inbox without granting permission. I don't want to play the blame game, but I was SHOCKED that this was going on. Luckily, being such a small company, I'll be able to cover this mistake that I did not create, BUT HOW? My guess is that I need to find out where the past system admin went wrong in providing Full Access permission. Or could this be an Auto-Mapping issue? I found this article: http://technet.microsoft.com/en-us/library/hh529943.aspx This might work:

        $FixAutoMapping = Get-MailboxPermission sharedmailbox |where {$_.AccessRights -eq "FullAccess" -and $_.IsInherited -eq $false}
        $FixAutoMapping | Remove-MailboxPermission
        $FixAutoMapping | ForEach {Add-MailboxPermission -Identity $_.Identity -User $_.User -AccessRights:FullAccess -AutoMapping $false}

    However, how do I run the above code in PowerShell? Again, I was thrown into this situation and I'm just trying to iron out this tangled mess.


  • SimpleMembership, Membership Providers, Universal Providers and the new ASP.NET 4.5 Web Forms and ASP.NET MVC 4 templates

    - by Jon Galloway
    The ASP.NET MVC 4 Internet template adds some new, very useful features which are built on top of SimpleMembership. These changes add some great features, like a much simpler and extensible membership API and support for OAuth. However, the new account management features require SimpleMembership and won't work against existing ASP.NET Membership Providers. I'll start with a summary of top things you need to know, then dig into a lot more detail. Summary: SimpleMembership has been designed as a replacement for traditional the previous ASP.NET Role and Membership provider system SimpleMembership solves common problems people ran into with the Membership provider system and was designed for modern user / membership / storage needs SimpleMembership integrates with the previous membership system, but you can't use a MembershipProvider with SimpleMembership The new ASP.NET MVC 4 Internet application template AccountController requires SimpleMembership and is not compatible with previous MembershipProviders You can continue to use existing ASP.NET Role and Membership providers in ASP.NET 4.5 and ASP.NET MVC 4 - just not with the ASP.NET MVC 4 AccountController The existing ASP.NET Role and Membership provider system remains supported as is part of the ASP.NET core ASP.NET 4.5 Web Forms does not use SimpleMembership; it implements OAuth on top of ASP.NET Membership The ASP.NET Web Site Administration Tool (WSAT) is not compatible with SimpleMembership The following is the result of a few conversations with Erik Porter (PM for ASP.NET MVC) to make sure I had some the overall details straight, combined with a lot of time digging around in ILSpy and Visual Studio's assembly browsing tools. SimpleMembership: The future of membership for ASP.NET The ASP.NET Membership system was introduces with ASP.NET 2.0 back in 2005. It was designed to solve common site membership requirements at the time, which generally involved username / password based registration and profile storage in SQL Server. It was designed with a few extensibility mechanisms - notably a provider system (which allowed you override some specifics like backing storage) and the ability to store additional profile information (although the additional  profile information was packed into a single column which usually required access through the API). While it's sometimes frustrating to work with, it's held up for seven years - probably since it handles the main use case (username / password based membership in a SQL Server database) smoothly and can be adapted to most other needs (again, often frustrating, but it can work). The ASP.NET Web Pages and WebMatrix efforts allowed the team an opportunity to take a new look at a lot of things - e.g. the Razor syntax started with ASP.NET Web Pages, not ASP.NET MVC. The ASP.NET Web Pages team designed SimpleMembership to (wait for it) simplify the task of dealing with membership. As Matthew Osborn said in his post Using SimpleMembership With ASP.NET WebPages: With the introduction of ASP.NET WebPages and the WebMatrix stack our team has really be focusing on making things simpler for the developer. Based on a lot of customer feedback one of the areas that we wanted to improve was the built in security in ASP.NET. So with this release we took that time to create a new built in (and default for ASP.NET WebPages) security provider. I say provider because the new stuff is still built on the existing ASP.NET framework. So what do we call this new hotness that we have created? 
Well, none other than SimpleMembership. SimpleMembership is an umbrella term for both SimpleMembership and SimpleRoles. Part of simplifying membership involved fixing some common problems with ASP.NET Membership. Problems with ASP.NET Membership ASP.NET Membership was very obviously designed around a set of assumptions: Users and user information would most likely be stored in a full SQL Server database or in Active Directory User and profile information would be optimized around a set of common attributes (UserName, Password, IsApproved, CreationDate, Comment, Role membership...) and other user profile information would be accessed through a profile provider Some problems fall out of these assumptions. Requires Full SQL Server for default cases The default, and most fully featured providers ASP.NET Membership providers (SQL Membership Provider, SQL Role Provider, SQL Profile Provider) require full SQL Server. They depend on stored procedure support, and they rely on SQL Server cache dependencies, they depend on agents for clean up and maintenance. So the main SQL Server based providers don't work well on SQL Server CE, won't work out of the box on SQL Azure, etc. Note: Cory Fowler recently let me know about these Updated ASP.net scripts for use with Microsoft SQL Azure which do support membership, personalization, profile, and roles. But the fact that we need a support page with a set of separate SQL scripts underscores the underlying problem. Aha, you say! Jon's forgetting the Universal Providers, a.k.a. System.Web.Providers! Hold on a bit, we'll get to those... Custom Membership Providers have to work with a SQL-Server-centric API If you want to work with another database or other membership storage system, you need to to inherit from the provider base classes and override a bunch of methods which are tightly focused on storing a MembershipUser in a relational database. It can be done (and you can often find pretty good ones that have already been written), but it's a good amount of work and often leaves you with ugly code that has a bunch of System.NotImplementedException fun since there are a lot of methods that just don't apply. Designed around a specific view of users, roles and profiles The existing providers are focused on traditional membership - a user has a username and a password, some specific roles on the site (e.g. administrator, premium user), and may have some additional "nice to have" optional information that can be accessed via an API in your application. This doesn't fit well with some modern usage patterns: In OAuth and OpenID, the user doesn't have a password Often these kinds of scenarios map better to user claims or rights instead of monolithic user roles For many sites, profile or other non-traditional information is very important and needs to come from somewhere other than an API call that maps to a database blob What would work a lot better here is a system in which you were able to define your users, rights, and other attributes however you wanted and the membership system worked with your model - not the other way around. Requires specific schema, overflow in blob columns I've already mentioned this a few times, but it bears calling out separately - ASP.NET Membership focuses on SQL Server storage, and that storage is based on a very specific database schema. SimpleMembership as a better membership system As you might have guessed, SimpleMembership was designed to address the above problems. 
Works with your Schema As Matthew Osborn explains in his Using SimpleMembership With ASP.NET WebPages post, SimpleMembership is designed to integrate with your database schema: All SimpleMembership requires is that there are two columns on your users table so that we can hook up to it – an “ID” column and a “username” column. The important part here is that they can be named whatever you want. For instance username doesn't have to be an alias it could be an email column you just have to tell SimpleMembership to treat that as the “username” used to log in. Matthew's example shows using a very simple user table named Users (it could be named anything) with a UserID and Username column, then a bunch of other columns he wanted in his app. Then we point SimpleMemberhip at that table with a one-liner: WebSecurity.InitializeDatabaseFile("SecurityDemo.sdf", "Users", "UserID", "Username", true); No other tables are needed, the table can be named anything we want, and can have pretty much any schema we want as long as we've got an ID and something that we can map to a username. Broaden database support to the whole SQL Server family While SimpleMembership is not database agnostic, it works across the SQL Server family. It continues to support full SQL Server, but it also works with SQL Azure, SQL Server CE, SQL Server Express, and LocalDB. Everything's implemented as SQL calls rather than requiring stored procedures, views, agents, and change notifications. Note that SimpleMembership still requires some flavor of SQL Server - it won't work with MySQL, NoSQL databases, etc. You can take a look at the code in WebMatrix.WebData.dll using a tool like ILSpy if you'd like to see why - there places where SQL Server specific SQL statements are being executed, especially when creating and initializing tables. It seems like you might be able to work with another database if you created the tables separately, but I haven't tried it and it's not supported at this point. Note: I'm thinking it would be possible for SimpleMembership (or something compatible) to run Entity Framework so it would work with any database EF supports. That seems useful to me - thoughts? Note: SimpleMembership has the same database support - anything in the SQL Server family - that Universal Providers brings to the ASP.NET Membership system. Easy to with Entity Framework Code First The problem with with ASP.NET Membership's system for storing additional account information is that it's the gate keeper. That means you're stuck with its schema and accessing profile information through its API. SimpleMembership flips that around by allowing you to use any table as a user store. That means you're in control of the user profile information, and you can access it however you'd like - it's just data. Let's look at a practical based on the AccountModel.cs class in an ASP.NET MVC 4 Internet project. Here I'm adding a Birthday property to the UserProfile class. [Table("UserProfile")] public class UserProfile { [Key] [DatabaseGeneratedAttribute(DatabaseGeneratedOption.Identity)] public int UserId { get; set; } public string UserName { get; set; } public DateTime Birthday { get; set; } } Now if I want to access that information, I can just grab the account by username and read the value. 
var context = new UsersContext(); var username = User.Identity.Name; var user = context.UserProfiles.SingleOrDefault(u => u.UserName == username); var birthday = user.Birthday; So instead of thinking of SimpleMembership as a big membership API, think of it as something that handles membership based on your user database. In SimpleMembership, everything's keyed off a user row in a table you define rather than a bunch of entries in membership tables that were out of your control. How SimpleMembership integrates with ASP.NET Membership Okay, enough sales pitch (and hopefully background) on why things have changed. How does this affect you? Let's start with a diagram to show the relationship (note: I've simplified by removing a few classes to show the important relationships): So SimpleMembershipProvider is an implementaiton of an ExtendedMembershipProvider, which inherits from MembershipProvider and adds some other account / OAuth related things. Here's what ExtendedMembershipProvider adds to MembershipProvider: The important thing to take away here is that a SimpleMembershipProvider is a MembershipProvider, but a MembershipProvider is not a SimpleMembershipProvider. This distinction is important in practice: you cannot use an existing MembershipProvider (including the Universal Providers found in System.Web.Providers) with an API that requires a SimpleMembershipProvider, including any of the calls in WebMatrix.WebData.WebSecurity or Microsoft.Web.WebPages.OAuth.OAuthWebSecurity. However, that's as far as it goes. Membership Providers still work if you're accessing them through the standard Membership API, and all of the core stuff  - including the AuthorizeAttribute, role enforcement, etc. - will work just fine and without any change. Let's look at how that affects you in terms of the new templates. Membership in the ASP.NET MVC 4 project templates ASP.NET MVC 4 offers six Project Templates: Empty - Really empty, just the assemblies, folder structure and a tiny bit of basic configuration. Basic - Like Empty, but with a bit of UI preconfigured (css / images / bundling). Internet - This has both a Home and Account controller and associated views. The Account Controller supports registration and login via either local accounts and via OAuth / OpenID providers. Intranet - Like the Internet template, but it's preconfigured for Windows Authentication. Mobile - This is preconfigured using jQuery Mobile and is intended for mobile-only sites. Web API - This is preconfigured for a service backend built on ASP.NET Web API. Out of these templates, only one (the Internet template) uses SimpleMembership. ASP.NET MVC 4 Basic template The Basic template has configuration in place to use ASP.NET Membership with the Universal Providers. 
You can see that configuration in the ASP.NET MVC 4 Basic template's web.config: <profile defaultProvider="DefaultProfileProvider"> <providers> <add name="DefaultProfileProvider" type="System.Web.Providers.DefaultProfileProvider, System.Web.Providers, Version=1.0.0.0, Culture=neutral, PublicKeyToken=31bf3856ad364e35" connectionStringName="DefaultConnection" applicationName="/" /> </providers> </profile> <membership defaultProvider="DefaultMembershipProvider"> <providers> <add name="DefaultMembershipProvider" type="System.Web.Providers.DefaultMembershipProvider, System.Web.Providers, Version=1.0.0.0, Culture=neutral, PublicKeyToken=31bf3856ad364e35" connectionStringName="DefaultConnection" enablePasswordRetrieval="false" enablePasswordReset="true" requiresQuestionAndAnswer="false" requiresUniqueEmail="false" maxInvalidPasswordAttempts="5" minRequiredPasswordLength="6" minRequiredNonalphanumericCharacters="0" passwordAttemptWindow="10" applicationName="/" /> </providers> </membership> <roleManager defaultProvider="DefaultRoleProvider"> <providers> <add name="DefaultRoleProvider" type="System.Web.Providers.DefaultRoleProvider, System.Web.Providers, Version=1.0.0.0, Culture=neutral, PublicKeyToken=31bf3856ad364e35" connectionStringName="DefaultConnection" applicationName="/" /> </providers> </roleManager> <sessionState mode="InProc" customProvider="DefaultSessionProvider"> <providers> <add name="DefaultSessionProvider" type="System.Web.Providers.DefaultSessionStateProvider, System.Web.Providers, Version=1.0.0.0, Culture=neutral, PublicKeyToken=31bf3856ad364e35" connectionStringName="DefaultConnection" /> </providers> </sessionState> This means that it's business as usual for the Basic template as far as ASP.NET Membership works. ASP.NET MVC 4 Internet template The Internet template has a few things set up to bootstrap SimpleMembership: \Models\AccountModels.cs defines a basic user account and includes data annotations to define keys and such \Filters\InitializeSimpleMembershipAttribute.cs creates the membership database using the above model, then calls WebSecurity.InitializeDatabaseConnection which verifies that the underlying tables are in place and marks initialization as complete (for the application's lifetime) \Controllers\AccountController.cs makes heavy use of OAuthWebSecurity (for OAuth account registration / login / management) and WebSecurity. WebSecurity provides account management services for ASP.NET MVC (and Web Pages) WebSecurity can work with any ExtendedMembershipProvider. There's one in the box (SimpleMembershipProvider) but you can write your own. Since a standard MembershipProvider is not an ExtendedMembershipProvider, WebSecurity will throw exceptions if the default membership provider is a MembershipProvider rather than an ExtendedMembershipProvider. Practical example: Create a new ASP.NET MVC 4 application using the Internet application template Install the Microsoft ASP.NET Universal Providers for LocalDB NuGet package Run the application, click on Register, add a username and password, and click submit You'll get the following execption in AccountController.cs::Register: To call this method, the "Membership.Provider" property must be an instance of "ExtendedMembershipProvider". This occurs because the ASP.NET Universal Providers packages include a web.config transform that will update your web.config to add the Universal Provider configuration I showed in the Basic template example above. 
When WebSecurity tries to use the configured ASP.NET Membership Provider, it checks if it can be cast to an ExtendedMembershipProvider before doing anything else. So, what do you do? Options: If you want to use the new AccountController, you'll either need to use the SimpleMembershipProvider or another valid ExtendedMembershipProvider. This is pretty straightforward. If you want to use an existing ASP.NET Membership Provider in ASP.NET MVC 4, you can't use the new AccountController. You can do a few things: Replace  the AccountController.cs and AccountModels.cs in an ASP.NET MVC 4 Internet project with one from an ASP.NET MVC 3 application (you of course won't have OAuth support). Then, if you want, you can go through and remove other things that were built around SimpleMembership - the OAuth partial view, the NuGet packages (e.g. the DotNetOpenAuthAuth package, etc.) Use an ASP.NET MVC 4 Internet application template and add in a Universal Providers NuGet package. Then copy in the AccountController and AccountModel classes. Create an ASP.NET MVC 3 project and upgrade it to ASP.NET MVC 4 using the steps shown in the ASP.NET MVC 4 release notes. None of these are particularly elegant or simple. Maybe we (or just me?) can do something to make this simpler - perhaps a NuGet package. However, this should be an edge case - hopefully the cases where you'd need to create a new ASP.NET but use legacy ASP.NET Membership Providers should be pretty rare. Please let me (or, preferably the team) know if that's an incorrect assumption. Membership in the ASP.NET 4.5 project template ASP.NET 4.5 Web Forms took a different approach which builds off ASP.NET Membership. Instead of using the WebMatrix security assemblies, Web Forms uses Microsoft.AspNet.Membership.OpenAuth assembly. I'm no expert on this, but from a bit of time in ILSpy and Visual Studio's (very pretty) dependency graphs, this uses a Membership Adapter to save OAuth data into an EF managed database while still running on top of ASP.NET Membership. Note: There may be a way to use this in ASP.NET MVC 4, although it would probably take some plumbing work to hook it up. How does this fit in with Universal Providers (System.Web.Providers)? Just to summarize: Universal Providers are intended for cases where you have an existing ASP.NET Membership Provider and you want to use it with another SQL Server database backend (other than SQL Server). It doesn't require agents to handle expired session cleanup and other background tasks, it piggybacks these tasks on other calls. Universal Providers are not really, strictly speaking, universal - at least to my way of thinking. They only work with databases in the SQL Server family. Universal Providers do not work with Simple Membership. The Universal Providers packages include some web config transforms which you would normally want when you're using them. What about the Web Site Administration Tool? Visual Studio includes tooling to launch the Web Site Administration Tool (WSAT) to configure users and roles in your application. WSAT is built to work with ASP.NET Membership, and is not compatible with Simple Membership. There are two main options there: Use the WebSecurity and OAuthWebSecurity API to manage the users and roles Create a web admin using the above APIs Since SimpleMembership runs on top of your database, you can update your users as you would any other data - via EF or even in direct database edits (in development, of course)
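    To make the flow concrete, a brief illustrative sketch of the WebSecurity calls the Internet template's AccountController builds on, assuming the UserProfile table described in the post; the connection string name and credentials below are placeholders.

        using WebMatrix.WebData;

        // One-time setup (the Internet template does this in InitializeSimpleMembershipAttribute):
        // point SimpleMembership at your own users table and tell it which columns to use.
        WebSecurity.InitializeDatabaseConnection(
            "DefaultConnection",   // connection string name
            "UserProfile",         // your user table
            "UserId",              // ID column
            "UserName",            // username column
            autoCreateTables: true);

        // Local account registration and login, keyed off rows in that table.
        WebSecurity.CreateUserAndAccount("alice", "p@ssw0rd");        // placeholder credentials
        bool loggedIn = WebSecurity.Login("alice", "p@ssw0rd", persistCookie: false);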

    Read the article

  • Improving Partitioned Table Join Performance

    - by Paul White
    The query optimizer does not always choose an optimal strategy when joining partitioned tables. This post looks at an example, showing how a manual rewrite of the query can almost double performance, while reducing the memory grant to almost nothing. Test Data The two tables in this example use a common partition scheme. The partition function uses 41 equal-size partitions: CREATE PARTITION FUNCTION PFT (integer) AS RANGE RIGHT FOR VALUES ( 125000, 250000, 375000, 500000, 625000, 750000, 875000, 1000000, 1125000, 1250000, 1375000, 1500000, 1625000, 1750000, 1875000, 2000000, 2125000, 2250000, 2375000, 2500000, 2625000, 2750000, 2875000, 3000000, 3125000, 3250000, 3375000, 3500000, 3625000, 3750000, 3875000, 4000000, 4125000, 4250000, 4375000, 4500000, 4625000, 4750000, 4875000, 5000000 ); GO CREATE PARTITION SCHEME PST AS PARTITION PFT ALL TO ([PRIMARY]); The two tables are: CREATE TABLE dbo.T1 ( TID integer NOT NULL IDENTITY(0,1), Column1 integer NOT NULL, Padding binary(100) NOT NULL DEFAULT 0x, CONSTRAINT PK_T1 PRIMARY KEY CLUSTERED (TID) ON PST (TID) ); CREATE TABLE dbo.T2 ( TID integer NOT NULL, Column1 integer NOT NULL, Padding binary(100) NOT NULL DEFAULT 0x, CONSTRAINT PK_T2 PRIMARY KEY CLUSTERED (TID, Column1) ON PST (TID) ); The next script loads 5 million rows into T1 with a pseudo-random value between 1 and 5 for Column1. The table is partitioned on the IDENTITY column TID: INSERT dbo.T1 WITH (TABLOCKX) (Column1) SELECT (ABS(CHECKSUM(NEWID())) % 5) + 1 FROM dbo.Numbers AS N WHERE n BETWEEN 1 AND 5000000; In case you don’t already have an auxiliary table of numbers lying around, here’s a script to create one with 10 million rows: CREATE TABLE dbo.Numbers (n bigint PRIMARY KEY); WITH L0 AS(SELECT 1 AS c UNION ALL SELECT 1), L1 AS(SELECT 1 AS c FROM L0 AS A CROSS JOIN L0 AS B), L2 AS(SELECT 1 AS c FROM L1 AS A CROSS JOIN L1 AS B), L3 AS(SELECT 1 AS c FROM L2 AS A CROSS JOIN L2 AS B), L4 AS(SELECT 1 AS c FROM L3 AS A CROSS JOIN L3 AS B), L5 AS(SELECT 1 AS c FROM L4 AS A CROSS JOIN L4 AS B), Nums AS(SELECT ROW_NUMBER() OVER (ORDER BY (SELECT NULL)) AS n FROM L5) INSERT dbo.Numbers WITH (TABLOCKX) SELECT TOP (10000000) n FROM Nums ORDER BY n OPTION (MAXDOP 1); Table T1 contains data like this: Next we load data into table T2. The relationship between the two tables is that table 2 contains ‘n’ rows for each row in table 1, where ‘n’ is determined by the value in Column1 of table T1. There is nothing particularly special about the data or distribution, by the way. INSERT dbo.T2 WITH (TABLOCKX) (TID, Column1) SELECT T.TID, N.n FROM dbo.T1 AS T JOIN dbo.Numbers AS N ON N.n >= 1 AND N.n <= T.Column1; Table T2 ends up containing about 15 million rows: The primary key for table T2 is a combination of TID and Column1. The data is partitioned according to the value in column TID alone. Partition Distribution The following query shows the number of rows in each partition of table T1: SELECT PartitionID = CA1.P, NumRows = COUNT_BIG(*) FROM dbo.T1 AS T CROSS APPLY (VALUES ($PARTITION.PFT(TID))) AS CA1 (P) GROUP BY CA1.P ORDER BY CA1.P; There are 40 partitions containing 125,000 rows (40 * 125k = 5m rows). The rightmost partition remains empty. The next query shows the distribution for table 2: SELECT PartitionID = CA1.P, NumRows = COUNT_BIG(*) FROM dbo.T2 AS T CROSS APPLY (VALUES ($PARTITION.PFT(TID))) AS CA1 (P) GROUP BY CA1.P ORDER BY CA1.P; There are roughly 375,000 rows in each partition (the rightmost partition is also empty): Ok, that’s the test data done.
Test Query and Execution Plan The task is to count the rows resulting from joining tables 1 and 2 on the TID column: SET STATISTICS IO ON; DECLARE @s datetime2 = SYSUTCDATETIME();   SELECT COUNT_BIG(*) FROM dbo.T1 AS T1 JOIN dbo.T2 AS T2 ON T2.TID = T1.TID;   SELECT DATEDIFF(Millisecond, @s, SYSUTCDATETIME()); SET STATISTICS IO OFF; The optimizer chooses a plan using parallel hash join, and partial aggregation: The Plan Explorer plan tree view shows accurate cardinality estimates and an even distribution of rows across threads (click to enlarge the image): With a warm data cache, the STATISTICS IO output shows that no physical I/O was needed, and all 41 partitions were touched: Running the query without actual execution plan or STATISTICS IO information for maximum performance, the query returns in around 2600ms. Execution Plan Analysis The first step toward improving on the execution plan produced by the query optimizer is to understand how it works, at least in outline. The two parallel Clustered Index Scans use multiple threads to read rows from tables T1 and T2. Parallel scan uses a demand-based scheme where threads are given page(s) to scan from the table as needed. This arrangement has certain important advantages, but does result in an unpredictable distribution of rows amongst threads. The point is that multiple threads cooperate to scan the whole table, but it is impossible to predict which rows end up on which threads. For correct results from the parallel hash join, the execution plan has to ensure that rows from T1 and T2 that might join are processed on the same thread. For example, if a row from T1 with join key value ‘1234’ is placed in thread 5’s hash table, the execution plan must guarantee that any rows from T2 that also have join key value ‘1234’ probe thread 5’s hash table for matches. The way this guarantee is enforced in this parallel hash join plan is by repartitioning rows to threads after each parallel scan. The two repartitioning exchanges route rows to threads using a hash function over the hash join keys. The two repartitioning exchanges use the same hash function so rows from T1 and T2 with the same join key must end up on the same hash join thread. Expensive Exchanges This business of repartitioning rows between threads can be very expensive, especially if a large number of rows is involved. The execution plan selected by the optimizer moves 5 million rows through one repartitioning exchange and around 15 million across the other. As a first step toward removing these exchanges, consider the execution plan selected by the optimizer if we join just one partition from each table, disallowing parallelism: SELECT COUNT_BIG(*) FROM dbo.T1 AS T1 JOIN dbo.T2 AS T2 ON T2.TID = T1.TID WHERE $PARTITION.PFT(T1.TID) = 1 AND $PARTITION.PFT(T2.TID) = 1 OPTION (MAXDOP 1); The optimizer has chosen a (one-to-many) merge join instead of a hash join. The single-partition query completes in around 100ms. If everything scaled linearly, we would expect that extending this strategy to all 40 populated partitions would result in an execution time around 4000ms. Using parallelism could reduce that further, perhaps to be competitive with the parallel hash join chosen by the optimizer. This raises a question. If the most efficient way to join one partition from each of the tables is to use a merge join, why does the optimizer not choose a merge join for the full query? 
Forcing a Merge Join Let’s force the optimizer to use a merge join on the test query using a hint: SELECT COUNT_BIG(*) FROM dbo.T1 AS T1 JOIN dbo.T2 AS T2 ON T2.TID = T1.TID OPTION (MERGE JOIN); This is the execution plan selected by the optimizer: This plan results in the same number of logical reads reported previously, but instead of 2600ms the query takes 5000ms. The natural explanation for this drop in performance is that the merge join plan is only using a single thread, whereas the parallel hash join plan could use multiple threads. Parallel Merge Join We can get a parallel merge join plan using the same query hint as before, and adding trace flag 8649: SELECT COUNT_BIG(*) FROM dbo.T1 AS T1 JOIN dbo.T2 AS T2 ON T2.TID = T1.TID OPTION (MERGE JOIN, QUERYTRACEON 8649); The execution plan is: This looks promising. It uses a similar strategy to distribute work across threads as seen for the parallel hash join. In practice though, performance is disappointing. On a typical run, the parallel merge plan runs for around 8400ms; slower than the single-threaded merge join plan (5000ms) and much worse than the 2600ms for the parallel hash join. We seem to be going backwards! The logical reads for the parallel merge are still exactly the same as before, with no physical IOs. The cardinality estimates and thread distribution are also still very good (click to enlarge): A big clue to the reason for the poor performance is shown in the wait statistics (captured by Plan Explorer Pro): CXPACKET waits require careful interpretation, and are most often benign, but in this case excessive waiting occurs at the repartitioning exchanges. Unlike the parallel hash join, the repartitioning exchanges in this plan are order-preserving ‘merging’ exchanges (because merge join requires ordered inputs): Parallelism works best when threads can just grab any available unit of work and get on with processing it. Preserving order introduces inter-thread dependencies that can easily lead to significant waits occurring. In extreme cases, these dependencies can result in an intra-query deadlock, though the details of that will have to wait for another time to explore in detail. The potential for waits and deadlocks leads the query optimizer to cost parallel merge join relatively highly, especially as the degree of parallelism (DOP) increases. This high costing resulted in the optimizer choosing a serial merge join rather than parallel in this case. The test results certainly confirm its reasoning. Collocated Joins In SQL Server 2008 and later, the optimizer has another available strategy when joining tables that share a common partition scheme. This strategy is a collocated join, also known as a per-partition join. It can be applied in both serial and parallel execution plans, though it is limited to 2-way joins in the current optimizer. Whether the optimizer chooses a collocated join or not depends on cost estimation. The primary benefits of a collocated join are that it eliminates an exchange and requires less memory, as we will see next. Costing and Plan Selection The query optimizer did consider a collocated join for our original query, but it was rejected on cost grounds. The parallel hash join with repartitioning exchanges appeared to be a cheaper option. There is no query hint to force a collocated join, so we have to mess with the costing framework to produce one for our test query.
Pretending that IOs cost 50 times more than usual is enough to convince the optimizer to use collocated join with our test query: -- Pretend IOs are 50x cost temporarily DBCC SETIOWEIGHT(50); -- Co-located hash join SELECT COUNT_BIG(*) FROM dbo.T1 AS T1 JOIN dbo.T2 AS T2 ON T2.TID = T1.TID OPTION (RECOMPILE); -- Reset IO costing DBCC SETIOWEIGHT(1); Collocated Join Plan The estimated execution plan for the collocated join is: The Constant Scan contains one row for each partition of the shared partitioning scheme, from 1 to 41. The hash repartitioning exchanges seen previously are replaced by a single Distribute Streams exchange using Demand partitioning. Demand partitioning means that the next partition id is given to the next parallel thread that asks for one. My test machine has eight logical processors, and all are available for SQL Server to use. As a result, there are eight threads in the single parallel branch in this plan, each processing one partition from each table at a time. Once a thread finishes processing a partition, it grabs a new partition number from the Distribute Streams exchange…and so on until all partitions have been processed. It is important to understand that the parallel scans in this plan are different from the parallel hash join plan. Although the scans have the same parallelism icon, tables T1 and T2 are not being co-operatively scanned by multiple threads in the same way. Each thread reads a single partition of T1 and performs a hash match join with the same partition from table T2. The properties of the two Clustered Index Scans show a Seek Predicate (unusual for a scan!) limiting the rows to a single partition: The crucial point is that the join between T1 and T2 is on TID, and TID is the partitioning column for both tables. A thread that processes partition ‘n’ is guaranteed to see all rows that can possibly join on TID for that partition. In addition, no other thread will see rows from that partition, so this removes the need for repartitioning exchanges. CPU and Memory Efficiency Improvements The collocated join has removed two expensive repartitioning exchanges and added a single exchange processing 41 rows (one for each partition id). Remember, the parallel hash join plan exchanges had to process 5 million and 15 million rows. The amount of processor time spent on exchanges will be much lower in the collocated join plan. In addition, the collocated join plan has a maximum of 8 threads processing single partitions at any one time. The 41 partitions will all be processed eventually, but a new partition is not started until a thread asks for it. Threads can reuse hash table memory for the new partition. The parallel hash join plan also had 8 hash tables, but with all 5,000,000 build rows loaded at the same time. The collocated plan needs memory for only 8 * 125,000 = 1,000,000 rows at any one time. Collocated Hash Join Performance The collocated join plan has disappointing performance in this case. The query runs for around 25,300ms despite the same IO statistics as usual. This is much the worst result so far, so what went wrong? It turns out that cardinality estimation for the single partition scans of table T1 is slightly low. The properties of the Clustered Index Scan of T1 (graphic immediately above) show the estimation was for 121,951 rows.
This is a small shortfall compared with the 125,000 rows actually encountered, but it was enough to cause the hash join to spill to physical tempdb: A level 1 spill doesn’t sound too bad, until you realize that the spill to tempdb probably occurs for each of the 41 partitions. As a side note, the cardinality estimation error is a little surprising because the system tables accurately show there are 125,000 rows in every partition of T1. Unfortunately, the optimizer uses regular column and index statistics to derive cardinality estimates here rather than system table information (e.g. sys.partitions). Collocated Merge Join We will never know how well the collocated parallel hash join plan might have worked without the cardinality estimation error (and the resulting 41 spills to tempdb) but we do know: Merge join does not require a memory grant; and Merge join was the optimizer’s preferred join option for a single partition join Putting this all together, what we would really like to see is the same collocated join strategy, but using merge join instead of hash join. Unfortunately, the current query optimizer cannot produce a collocated merge join; it only knows how to do collocated hash join. So where does this leave us? CROSS APPLY sys.partitions We can try to write our own collocated join query. We can use sys.partitions to find the partition numbers, and CROSS APPLY to get a count per partition, with a final step to sum the partial counts. The following query implements this idea: SELECT row_count = SUM(Subtotals.cnt) FROM ( -- Partition numbers SELECT p.partition_number FROM sys.partitions AS p WHERE p.[object_id] = OBJECT_ID(N'T1', N'U') AND p.index_id = 1 ) AS P CROSS APPLY ( -- Count per collocated join SELECT cnt = COUNT_BIG(*) FROM dbo.T1 AS T1 JOIN dbo.T2 AS T2 ON T2.TID = T1.TID WHERE $PARTITION.PFT(T1.TID) = p.partition_number AND $PARTITION.PFT(T2.TID) = p.partition_number ) AS SubTotals; The estimated plan is: The cardinality estimates aren’t all that good here, especially the estimate for the scan of the system table underlying the sys.partitions view. Nevertheless, the plan shape is heading toward where we would like to be. Each partition number from the system table results in a per-partition scan of T1 and T2, a one-to-many Merge Join, and a Stream Aggregate to compute the partial counts. The final Stream Aggregate just sums the partial counts. Execution time for this query is around 3,500ms, with the same IO statistics as always. This compares favourably with 5,000ms for the serial plan produced by the optimizer with the OPTION (MERGE JOIN) hint. This is another case of the sum of the parts being less than the whole – summing 41 partial counts from 41 single-partition merge joins is faster than a single merge join and count over all partitions. Even so, this single-threaded collocated merge join is not as quick as the original parallel hash join plan, which executed in 2,600ms. On the positive side, our collocated merge join uses only one logical processor and requires no memory grant. The parallel hash join plan used 16 threads and reserved 569 MB of memory:   Using a Temporary Table Our collocated merge join plan should benefit from parallelism. The reason parallelism is not being used is that the query references a system table. 
We can work around that by writing the partition numbers to a temporary table (or table variable): SET STATISTICS IO ON; DECLARE @s datetime2 = SYSUTCDATETIME(); CREATE TABLE #P ( partition_number integer PRIMARY KEY); INSERT #P (partition_number) SELECT p.partition_number FROM sys.partitions AS p WHERE p.[object_id] = OBJECT_ID(N'T1', N'U') AND p.index_id = 1; SELECT row_count = SUM(Subtotals.cnt) FROM #P AS p CROSS APPLY ( SELECT cnt = COUNT_BIG(*) FROM dbo.T1 AS T1 JOIN dbo.T2 AS T2 ON T2.TID = T1.TID WHERE $PARTITION.PFT(T1.TID) = p.partition_number AND $PARTITION.PFT(T2.TID) = p.partition_number ) AS SubTotals; DROP TABLE #P; SELECT DATEDIFF(Millisecond, @s, SYSUTCDATETIME()); SET STATISTICS IO OFF; Using the temporary table adds a few logical reads, but the overall execution time is still around 3500ms, indistinguishable from the same query without the temporary table. The problem is that the query optimizer still doesn’t choose a parallel plan for this query, though the removal of the system table reference means that it could if it chose to: In fact the optimizer did enter the parallel plan phase of query optimization (running search 1 for a second time): Unfortunately, the parallel plan found seemed to be more expensive than the serial plan. This is a crazy result, caused by the optimizer’s cost model not reducing operator CPU costs on the inner side of a nested loops join. Don’t get me started on that, we’ll be here all night. In this plan, everything expensive happens on the inner side of a nested loops join. Without a CPU cost reduction to compensate for the added cost of exchange operators, candidate parallel plans always look more expensive to the optimizer than the equivalent serial plan. Parallel Collocated Merge Join We can produce the desired parallel plan using trace flag 8649 again: SELECT row_count = SUM(Subtotals.cnt) FROM #P AS p CROSS APPLY ( SELECT cnt = COUNT_BIG(*) FROM dbo.T1 AS T1 JOIN dbo.T2 AS T2 ON T2.TID = T1.TID WHERE $PARTITION.PFT(T1.TID) = p.partition_number AND $PARTITION.PFT(T2.TID) = p.partition_number ) AS SubTotals OPTION (QUERYTRACEON 8649); The actual execution plan is: One difference between this plan and the collocated hash join plan is that a Repartition Streams exchange operator is used instead of Distribute Streams. The effect is similar, though not quite identical. The Repartition uses round-robin partitioning, meaning the next partition id is pushed to the next thread in sequence. The Distribute Streams exchange seen earlier used Demand partitioning, meaning the next partition id is pulled across the exchange by the next thread that is ready for more work. There are subtle performance implications for each partitioning option, but going into that would again take us too far off the main point of this post. Performance The important thing is the performance of this parallel collocated merge join – just 1350ms on a typical run. The list below shows all the alternatives from this post (all timings include creation, population, and deletion of the temporary table where appropriate) from quickest to slowest: Collocated parallel merge join: 1350ms Parallel hash join: 2600ms Collocated serial merge join: 3500ms Serial merge join: 5000ms Parallel merge join: 8400ms Collocated parallel hash join: 25,300ms (hash spill per partition) The parallel collocated merge join requires no memory grant (aside from a paltry 1.2MB used for exchange buffers).
This plan uses 16 threads at DOP 8; but 8 of those are (rather pointlessly) allocated to the parallel scan of the temporary table. These are minor concerns, but it turns out there is a way to address them if it bothers you. Parallel Collocated Merge Join with Demand Partitioning This final tweak replaces the temporary table with a hard-coded list of partition ids (dynamic SQL could be used to generate this query from sys.partitions): SELECT row_count = SUM(Subtotals.cnt) FROM ( VALUES (1),(2),(3),(4),(5),(6),(7),(8),(9),(10), (11),(12),(13),(14),(15),(16),(17),(18),(19),(20), (21),(22),(23),(24),(25),(26),(27),(28),(29),(30), (31),(32),(33),(34),(35),(36),(37),(38),(39),(40),(41) ) AS P (partition_number) CROSS APPLY ( SELECT cnt = COUNT_BIG(*) FROM dbo.T1 AS T1 JOIN dbo.T2 AS T2 ON T2.TID = T1.TID WHERE $PARTITION.PFT(T1.TID) = p.partition_number AND $PARTITION.PFT(T2.TID) = p.partition_number ) AS SubTotals OPTION (QUERYTRACEON 8649); The actual execution plan is: The parallel collocated hash join plan is reproduced below for comparison: The manual rewrite has another advantage that has not been mentioned so far: the partial counts (per partition) can be computed earlier than the partial counts (per thread) in the optimizer’s collocated join plan. The earlier aggregation is performed by the extra Stream Aggregate under the nested loops join. The performance of the parallel collocated merge join is unchanged at around 1350ms. Final Words It is a shame that the current query optimizer does not consider a collocated merge join (Connect item closed as Won’t Fix). The example used in this post showed an improvement in execution time from 2600ms to 1350ms using a modestly-sized data set and limited parallelism. In addition, the memory requirement for the query was almost completely eliminated  – down from 569MB to 1.2MB. The problem with the parallel hash join selected by the optimizer is that it attempts to process the full data set all at once (albeit using eight threads). It requires a large memory grant to hold all 5 million rows from table T1 across the eight hash tables, and does not take advantage of the divide-and-conquer opportunity offered by the common partitioning. The great thing about the collocated join strategies is that each parallel thread works on a single partition from both tables, reading rows, performing the join, and computing a per-partition subtotal, before moving on to a new partition. From a thread’s point of view… If you have trouble visualizing what is happening from just looking at the parallel collocated merge join execution plan, let’s look at it again, but from the point of view of just one thread operating between the two Parallelism (exchange) operators. Our thread picks up a single partition id from the Distribute Streams exchange, and starts a merge join using ordered rows from partition 1 of table T1 and partition 1 of table T2. By definition, this is all happening on a single thread. As rows join, they are added to a (per-partition) count in the Stream Aggregate immediately above the Merge Join. Eventually, either T1 (partition 1) or T2 (partition 1) runs out of rows and the merge join stops. The per-partition count from the aggregate passes on through the Nested Loops join to another Stream Aggregate, which is maintaining a per-thread subtotal. Our same thread now picks up a new partition id from the exchange (say it gets id 9 this time). 
The count in the per-partition aggregate is reset to zero, and the processing of partition 9 of both tables proceeds just as it did for partition 1, and on the same thread. Each thread picks up a single partition id and processes all the data for that partition, completely independently from other threads working on other partitions. One thread might eventually process partitions (1, 9, 17, 25, 33, 41) while another is concurrently processing partitions (2, 10, 18, 26, 34) and so on for the other six threads at DOP 8. The point is that all 8 threads can execute independently and concurrently, continuing to process new partitions until the wider job (of which the thread has no knowledge!) is done. This divide-and-conquer technique can be much more efficient than simply splitting the entire workload across eight threads all at once. Related Reading Understanding and Using Parallelism in SQL Server Parallel Execution Plans Suck © 2013 Paul White – All Rights Reserved Twitter: @SQL_Kiwi
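As a footnote to the hard-coded VALUES list used in the final rewrite above: the post mentions that dynamic SQL could generate that query from sys.partitions. A rough, untested sketch of that idea, reusing the object names from this post, might look like the following (trace flag 8649 still requires the appropriate permissions, as before):

    DECLARE @ids nvarchar(max) =
        STUFF(
        (
            SELECT N',(' + CONVERT(nvarchar(10), p.partition_number) + N')'
            FROM sys.partitions AS p
            WHERE p.[object_id] = OBJECT_ID(N'dbo.T1', N'U')
            AND p.index_id = 1
            ORDER BY p.partition_number
            FOR XML PATH('')
        ), 1, 1, N'');

    DECLARE @sql nvarchar(max) = N'
    SELECT row_count = SUM(Subtotals.cnt)
    FROM (VALUES ' + @ids + N') AS P (partition_number)
    CROSS APPLY
    (
        SELECT cnt = COUNT_BIG(*)
        FROM dbo.T1 AS T1
        JOIN dbo.T2 AS T2 ON T2.TID = T1.TID
        WHERE $PARTITION.PFT(T1.TID) = P.partition_number
        AND $PARTITION.PFT(T2.TID) = P.partition_number
    ) AS Subtotals
    OPTION (QUERYTRACEON 8649);';

    EXECUTE sys.sp_executesql @sql;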

    Read the article

  • What causes this org.hibernate.MappingException?

    - by stacker
    I'm trying to configure an EJB3 sample application; its entities were mapped to Postgres, and now I want the app to run on JBoss 4.3 and Informix using JPA. If the DDL creation <property name="hibernate.hbm2ddl.auto" value="create"/> is active, this error appears: WARN [ServiceController] Problem starting service persistence.units:ear=weblog.ear,jar=weblog.jar,unitName=weblog javax.persistence.PersistenceException: [PersistenceUnit: weblog] Unable to build EntityManagerFactory at org.hibernate.ejb.Ejb3Configuration.buildEntityManagerFactory(Ejb3Configuration.java:677) at org.hibernate.ejb.HibernatePersistence.createContainerEntityManagerFactory(HibernatePersistence.java:132) at org.jboss.ejb3.entity.PersistenceUnitDeployment.start(PersistenceUnitDeployment.java:246) followed by Caused by: org.hibernate.MappingException: No Dialect mapping for JDBC type: 2005 at org.hibernate.dialect.TypeNames.get(TypeNames.java:56) at org.hibernate.dialect.TypeNames.get(TypeNames.java:81) at org.hibernate.dialect.Dialect.getTypeName(Dialect.java:291) at org.hibernate.mapping.Column.getSqlType(Column.java:182) at org.hibernate.mapping.Table.sqlCreateString(Table.java:394) at org.hibernate.cfg.Configuration.generateSchemaCreationScript(Configuration.java:854) at org.hibernate.tool.hbm2ddl.SchemaExport.<init>(SchemaExport.java:74) at org.hibernate.impl.SessionFactoryImpl.<init>(SessionFactoryImpl.java:311) at org.hibernate.cfg.Configuration.buildSessionFactory(Configuration.java:1300) at org.hibernate.cfg.AnnotationConfiguration.buildSessionFactory(AnnotationConfiguration.java:874) at org.hibernate.ejb.Ejb3Configuration.buildEntityManagerFactory(Ejb3Configuration.java:669) What does JDBC type: 2005 mean? Any idea how I can track down the entity/column that causes the problem? Thanks
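    JDBC type 2005 is java.sql.Types.CLOB, so the configured Informix dialect has no SQL type name registered for a CLOB column - typically a String or char[] property mapped with @Lob, which is a good place to start searching the entities. A hedged workaround, assuming the stock InformixDialect is otherwise right for the target server, is a small custom dialect that registers the missing mapping; the Informix type names used here ("clob"/"blob") are an assumption to verify against your server:

        import java.sql.Types;
        import org.hibernate.dialect.InformixDialect;

        public class InformixClobDialect extends InformixDialect {
            public InformixClobDialect() {
                super();
                // 2005 = Types.CLOB, 2004 = Types.BLOB
                registerColumnType(Types.CLOB, "clob");
                registerColumnType(Types.BLOB, "blob");
            }
        }

    You would then point Hibernate at it with <property name="hibernate.dialect" value="com.example.InformixClobDialect"/> (package name hypothetical) in persistence.xml.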

    Read the article

  • JavaScriptSerializer().Serialize(Entity Framework object)

    - by loviji
    Maybe it is not so problematic for you, but I'm trying JSON serialization for the first time, and I have also read other articles on Stack Overflow. I have created an Entity Framework data model, then I get all data from the object with this method: private uqsEntities _db = new uqsEntities(); //get all data from table sysMainTableColumns where tableName=paramtableName public List<sysMainTableColumns> getDataAboutMainTable(string tableName) { return (from column in _db.sysMainTableColumns where column.TableName==tableName select column).ToList(); } My web service: public string getDataAboutMainTable() { penta.DAC.Tables dictTable = new penta.DAC.Tables(); var result = dictTable.getDataAboutMainTable("1"); return new JavaScriptSerializer().Serialize(result); } and the jQuery ajax method: $('#loadData').click(function() { $.ajax({ type: "POST", url: "WS/ConstructorWS.asmx/getDataAboutMainTable", data: "{}", contentType: "application/json; charset=utf-8", dataType: "json", success: function(msg) { $("#jsonResponse").html(msg); var data = eval("(" + msg + ")"); //do something with data }, error: function(msg) { } }); }); The problem is with data; the code fails there, and I think I'm not using the JavaScriptSerializer().Serialize() method correctly. Please tell me, what mistake did I make in the C# code?
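    Since the actual exception isn't shown, this is only a guess, but two things commonly bite here: Entity Framework entities drag navigation properties (and lazy-loading internals) into serialization, and an ASMX service called via $.ajax with JSON needs the [ScriptService] attribute and wraps its response in a "d" property. A hedched sketch of a serializer-friendly shape follows; the projected property names are placeholders for whatever sysMainTableColumns really exposes:

        using System.Linq;
        using System.Web.Script.Serialization;
        using System.Web.Script.Services;
        using System.Web.Services;

        [ScriptService]
        public class ConstructorWS : WebService
        {
            [WebMethod]
            public string getDataAboutMainTable()
            {
                var dictTable = new penta.DAC.Tables();

                // Project to a plain shape so the serializer never touches EF navigation properties
                var result = dictTable.getDataAboutMainTable("1")
                    .Select(c => new
                    {
                        c.TableName,
                        c.ColumnName,   // placeholder - use the real entity properties
                        c.DataType      // placeholder
                    })
                    .ToList();

                return new JavaScriptSerializer().Serialize(result);
            }
        }

    On the client side the ScriptService response arrives as an object, so the success callback would read msg.d, and since that string is already JSON, JSON.parse(msg.d) is safer than eval.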

    Read the article

  • WPF Toolkit DataGridCell Style DataTrigger

    - by KrisTrip
    I am trying to change the color of a cell to Yellow if the value has been updated in the DataGrid. My XAML: <toolkit:DataGrid x:Name="TheGrid" ItemsSource="{Binding}" IsReadOnly="False" CanUserAddRows="False" CanUserResizeRows="False" AutoGenerateColumns="False" CanUserSortColumns="False" SelectionUnit="CellOrRowHeader" EnableColumnVirtualization="True" VerticalScrollBarVisibility="Auto" HorizontalScrollBarVisibility="Auto"> <toolkit:DataGrid.CellStyle> <Style TargetType="{x:Type toolkit:DataGridCell}"> <Style.Triggers> <DataTrigger Binding="{Binding IsDirty}" Value="True"> <Setter Property="Background" Value="Yellow"/> </DataTrigger> </Style.Triggers> </Style> </toolkit:DataGrid.CellStyle> </toolkit:DataGrid> The grid is bound to a List of arrays (displaying a table of values kind of like Excel would). Each value in the array is a custom object that contains an IsDirty dependency property. The IsDirty property gets set when the value is changed. When I run this: change a value in column 1 = whole row goes yellow; change a value in any other column = nothing happens. I want only the changed cell to go yellow no matter which column it's in. Do you see anything wrong with my XAML?
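    Not an authoritative fix, but one likely culprit: the DataContext of a DataGridCell is the whole row object (here, the array), so a single grid-level CellStyle binding to IsDirty cannot tell one cell from another. A sketch of an alternative - assuming the columns are declared explicitly and the row is indexable - gives each column its own CellStyle whose trigger binds to that column's element; index 0 and the Value property shown below are placeholders, and each column would use its own index:

        <toolkit:DataGridTextColumn Header="Col0" Binding="{Binding [0].Value}">
          <toolkit:DataGridTextColumn.CellStyle>
            <Style TargetType="{x:Type toolkit:DataGridCell}">
              <Style.Triggers>
                <!-- [0] is the element this column displays; repeat per column with its own index -->
                <DataTrigger Binding="{Binding [0].IsDirty}" Value="True">
                  <Setter Property="Background" Value="Yellow"/>
                </DataTrigger>
              </Style.Triggers>
            </Style>
          </toolkit:DataGridTextColumn.CellStyle>
        </toolkit:DataGridTextColumn>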

    Read the article

  • Listview of items from a object selected in another listview

    - by Ingó Vals
    OK, the title may be a little confusing. I have a database with the table Companies which has a one-to-many relationship with another table Divisions (so each company can have many divisions), and each division will have many employees. I have a ListView of the companies. What I want is that when I choose a company from the ListView, another ListView of divisions within that company appears below it. Then I pick a division and another ListView of employees within that division appears below that. You get the picture. Is there any way to do this mostly inside the XAML code declaratively? I'm using LINQ, so the Company entity objects have a property named Division which, if I understand LINQ correctly, should include Division objects of the divisions connected to the company. So after getting all the companies and setting them as the ItemsSource of CompanyListView, this is where I currently am. <ListView x:Name="CompanyListView" DisplayMemberPath="CompanyName" Grid.Row="0" Grid.Column="0" /> <ListView DataContext="{Binding ElementName=CompanyListView, Path=SelectedItem}" DisplayMemberPath="Division.DivisionName" Grid.Row="1" Grid.Column="0" /> I know I'm way off but I was hoping that by putting something specific in the DataContext and DisplayMemberPath I could get this to work. If not, then I guess I have to capture the Id of the company and handle a selection event or something. Another issue, but related: in the second column beside the ListView I want to have a details/edit view for the selected item. So when only a company is selected, details about it will appear; then when a division under the company is picked, it will show that instead. Any ideas?
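    A sketch of the chained-ListView idea: bind each lower ListView's ItemsSource to the SelectedItem of the one above it, rather than setting DataContext plus a dotted DisplayMemberPath. The Division/DivisionName names come from the question; the Employees collection and the Name property are assumptions:

        <ListView x:Name="CompanyListView" DisplayMemberPath="CompanyName" Grid.Row="0" Grid.Column="0" />

        <ListView x:Name="DivisionListView"
                  ItemsSource="{Binding ElementName=CompanyListView, Path=SelectedItem.Division}"
                  DisplayMemberPath="DivisionName"
                  Grid.Row="1" Grid.Column="0" />

        <ListView x:Name="EmployeeListView"
                  ItemsSource="{Binding ElementName=DivisionListView, Path=SelectedItem.Employees}"
                  DisplayMemberPath="Name"
                  Grid.Row="2" Grid.Column="0" />

        <!-- Details pane in the second column, bound to whichever selection should drive it -->
        <ContentControl Grid.Column="1" Content="{Binding ElementName=DivisionListView, Path=SelectedItem}" />

    Because the bindings are element-to-element, no code-behind or selection events are needed; each list refreshes automatically when the selection above it changes.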

    Read the article

  • How to bind collection to WPF:DataGridComboBoxColumn

    - by everwicked
    Admittedly I am new to WPF, but I have looked and looked and can't find a solution to this problem. I have a simple object like: class Item { .... public String Measure { get; set; } public String[] Measures {get; } } which I am trying to bind to a DataGrid with two text columns and a combo box column. For the combo box column, property Measure is the current selection and Measures the possible values. My XAML is: <DataGridComboBoxColumn Header="Measure" Width="Auto" SelectedItemBinding="{Binding Path=Measure}" ItemsSource="{Binding Path=Measures}"/> </DataGrid.Columns> </DataGrid> The text columns are displayed just fine but the combobox is not - the values are not displayed at all. The binding error is: System.Windows.Data Error: 2 : Cannot find governing FrameworkElement or FrameworkContentElement for target element. BindingExpression:Path=Measures; DataItem=null; target element is 'DataGridComboBoxColumn' (HashCode=11497055); target property is 'ItemsSource' (type 'IEnumerable') How do I fix this??? Thanks
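    The error happens because DataGrid columns are not part of the visual tree, so the column-level ItemsSource binding has no DataContext to resolve Measures against. One common workaround, sketched here against the question's property names, is to move the ItemsSource binding into ElementStyle/EditingElementStyle setters, which are applied to the actual ComboBox created for each row (whose DataContext is the row item):

        <DataGridComboBoxColumn Header="Measure" Width="Auto"
                                SelectedItemBinding="{Binding Path=Measure}">
          <DataGridComboBoxColumn.ElementStyle>
            <Style TargetType="ComboBox">
              <!-- The generated ComboBox has the row item as DataContext, so this binding resolves -->
              <Setter Property="ItemsSource" Value="{Binding Measures}" />
            </Style>
          </DataGridComboBoxColumn.ElementStyle>
          <DataGridComboBoxColumn.EditingElementStyle>
            <Style TargetType="ComboBox">
              <Setter Property="ItemsSource" Value="{Binding Measures}" />
            </Style>
          </DataGridComboBoxColumn.EditingElementStyle>
        </DataGridComboBoxColumn>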

    Read the article

  • Java RegEx API "Look-behind group does not have an obvious maximum length near index ..."

    - by Foo Inc
    Hello, I'm on to some SQL where clause parsing and designed a working RegEx to find a column outside string literals using "Rad Software Regular Expression Desginer" which is using the .NET API. To make sure the designed RegEx works with Java too, I tested it by using the API of course (1.5 and 1.6). But guess what, it won't work. I got the message "Look-behind group does not have an obvious maximum length near index 28". The string that I'm trying to get parsed is Column_1='test''the''stuff''all''day''long' AND Column_2='000' AND TheVeryColumnIWantToFind = 'Column_1=''test''''the''''stuff''''all''''day''''long'' AND Column_2=''000'' AND TheVeryColumnIWantToFind = '' TheVeryColumnIWantToFind = '' AND (Column_3 is null or Column_3 = ''Not interesting'') AND ''1'' = ''1''' AND (Column_3 is null or Column_3 = 'Still not interesting') AND '1' = '1' As you may have guessed, I tried to create some kind of worst case to ensure the RegEx won't fail on more complicated SQL where clauses. The RegEx itself looks like this (?i:(?<!=\s*'(?:[^']|(?:''))*)((?<=\s*)TheVeryColumnIWantToFind(?=(?:\s+|=)))) I'm not sure if there is a more elegant RegEx (there'll most likely be one), but that's not important right now as it does the trick. To explain the RegEx in a few words: If it finds the column I'm after, it does a negative look-behind to figure out if the column name is used in a string literal. If so, it won't match. If not, it'll match. Back to the question. As I mentioned before, it won't work with Java. What will work and result in what I want? I found out, that Java does not seem to support unlimited look-behinds but still I couldn't get it to work. Isn't it right that a look-behind is always putting a limit up on itself from the search offset to the current search position? So it would result in something like "position - offset"?
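    Java only accepts look-behind groups whose maximum length it can compute, and both \s* and (?:[^']|(?:''))* inside the look-behinds are unbounded, which is exactly what the "does not have an obvious maximum length" message complains about. A hedged rework - assuming string literals and whitespace runs in the where clause never exceed the bounds chosen here, and dropping the (?<=\s*) group since a zero-width look-behind of \s* always matches anyway - would be:

        import java.util.regex.Matcher;
        import java.util.regex.Pattern;

        public class ColumnFinder {
            public static void main(String[] args) {
                String where = "...";  // the where clause being parsed

                // Same intent as the .NET pattern, but every look-behind repetition has an explicit upper bound
                Pattern p = Pattern.compile(
                    "(?i)(?<!=\\s{0,100}'(?:[^']|''){0,2000})TheVeryColumnIWantToFind(?=\\s+|=)");

                Matcher m = p.matcher(where);
                while (m.find()) {
                    System.out.println("Found at index " + m.start());
                }
            }
        }

    The 100/2000 bounds are arbitrary; pick values comfortably larger than any literal your where clauses can contain, since Java's look-behind will only look back that far.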

    Read the article

  • Google Apps Script - Sending Google Spreadsheet Row to Email

    - by AME
    Hi, I am trying to write a script using Google Apps Script (JavaScript) for a Google spreadsheet. I am trying to do the same thing that is shown in this tutorial (http://www.google.com/google-d-s/scripts/sending_emails.html), but each row in my spreadsheet has 24 columns. I would like to send out the contents of each row as an email. Here is the code I am trying to use: function sendEmails() { var sheet = SpreadsheetApp.getActiveSheet(); var dataRange = sheet.getRange("A2:X31") // Fetch values for each row in the Range. var data = dataRange.getValues(); for (i in data) { var row = data[i]; var i = i + 1; var emailAddress = row[0]; // First column var message = row[1]; // Second column var subject = "Sending emails from a Spreadsheet"; MailApp.sendEmail(emailAddress, subject, message); } } The result is an email with the contents in the "B" column only. Can someone help me change this code to get all of the contents in each row (columns A-X)? Thanks,
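    A minimal sketch of one way to do it - joining columns A-X into the message body, and dropping the stray var i = i + 1 line, which clobbers the loop variable; labels or a nicer layout could be added as needed:

        function sendEmails() {
          var sheet = SpreadsheetApp.getActiveSheet();
          var dataRange = sheet.getRange("A2:X31");
          var data = dataRange.getValues();

          for (var i = 0; i < data.length; i++) {
            var row = data[i];
            var emailAddress = row[0];                        // column A
            var subject = "Sending emails from a Spreadsheet";
            var message = row.join("\n");                     // columns A-X, one value per line
            MailApp.sendEmail(emailAddress, subject, message);
          }
        }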

    Read the article

  • How to display a flyout menu on clicking an image in Grid header in MVC?

    - by Vincent
    A grid is displayed using the following code in MVC. <%= this.Grid( "routes-" + mc , "Testing", Model.Routes, new GridBuilder<RouteModel>() .Column( "chk", () => Html.Image( Url.Content( "down.gif" ), "Select or deselect tests" ), (m) => "<input type='hidden' name='routeLinkId' value='"+ m.RouteLinkID +"' />" + Html.CheckBox( "drugRoute" + mc + m.RouteLinkID, m.IsChecked ) ) .Column( "route clip-text", "Route", (m) => Html.Encode( m.Route ) ) .Column( "groups clip-text", "Groups", (m) => "" + Html.Encode(m.Group) + "" ) .ToGrid() ) %> On clicking the image in the header I need to display a flyout menu. The code for the flyout menu is below. <div id="group-menu-<%= mc %>" class="flyout-menu" title="Select tests"> <div> <p>Select</p> <ul> <li><button type="button" name="one">One</button></li> <li><button type="button" name="two">two</button></li> </ul> <p>Unselect</p> <ul> <li><button type="button" name="unselect-one">one</button></li> <li><button type="button" name="unselect-two">two</button></li> </ul> </div> </div> Now how do I display this menu on clicking the image in the header?
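    The question doesn't show any script wiring, so this is only a sketch: a jQuery handler in the same view that toggles the flyout when the header image is clicked. The selectors are assumptions based on the markup above (the grid id routes-<%= mc %> and the Html.Image rendered in the header cell), so adjust them to the actual rendered HTML:

        $(function () {
            $("#routes-<%= mc %> thead img").click(function (e) {
                var menu = $("#group-menu-<%= mc %>");
                var pos = $(this).offset();

                // Show the flyout just under the clicked header image
                menu.css({ position: "absolute", left: pos.left, top: pos.top + $(this).outerHeight() })
                    .toggle();

                e.stopPropagation();
            });

            // Clicking anywhere else hides any open flyout
            $(document).click(function () {
                $(".flyout-menu").hide();
            });
        });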

    Read the article
