Search Results

Search found 3169 results on 127 pages for 'identity ranges'.


  • What are the app pool identity and account for anonymous access for?

    - by apollodude217
    I understand that both of these are identity settings, but I don't know what each one actually does--i.e. what one is for vs. what the other is for. (I usually set them to the same account anyway.) If you're not sure which accounts I'm talking about, in the IIS manager: right-click the app pool in question, go to Properties, and click the Identity tab to see the App Pool Identity; right-click a Web site, go to Properties - Directory Security, and click Edit under Anonymous Access and authentication control to view the Account for anonymous access.

    Read the article

  • How do HTTP proxy caches decide between serving identity- vs. gzip-encoded resources?

    - by mrclay
    An HTTP server uses content negotiation to serve a single URL identity- or gzip-encoded based on the client's Accept-Encoding header. Now say we have a proxy cache like squid between clients and the httpd. If the proxy has cached both encodings of a URL, how does it determine which to serve? The non-gzip instance (not originally served with Vary) can be served to any client, but the encoded instances (having Vary: Accept-Encoding) can only be sent to clients whose Accept-Encoding header value is identical to the one used in the original request. E.g. Opera sends "deflate, gzip, x-gzip, identity, *;q=0" but IE8 sends "gzip, deflate". According to the spec, then, caches shouldn't share content-encoded responses between the two browsers. Is this true?
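    A toy sketch (not squid's actual implementation) of how a Vary-aware cache might derive its key: the URL alone keys the un-varied response, while the URL plus the request's value for each header listed in Vary keys each encoded variant. Under such a scheme Opera's and IE8's differing Accept-Encoding strings map to different cache entries, which is why a strict reading of the spec prevents sharing them.

    using System;
    using System.Collections.Generic;
    using System.Linq;

    static class VaryCacheKey
    {
        // Build a cache key from the URL plus the request's value for each header named in Vary.
        public static string For(string url, IDictionary<string, string> requestHeaders, string varyHeader)
        {
            if (string.IsNullOrEmpty(varyHeader))
                return url;                                   // no Vary: one shared entry for all clients

            var parts = varyHeader.Split(',')
                .Select(h => h.Trim())
                .Select(h => h + "=" + (requestHeaders.TryGetValue(h, out var v) ? v : ""));

            return url + "|" + string.Join("|", parts);       // e.g. ...|Accept-Encoding=gzip, deflate
        }
    }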

    Read the article

  • Should I Use GUID or IDENTITY as Thread Number?

    - by user311509
    offerID is the thread number that identifies each posted thread. I see that in forums, posts are represented by what look like random numbers. Is this achieved with IDENTITY? If not, please advise. nvarchar(max) will carry all kinds of text along with HTML tags.
    CREATE TABLE Offer (
        offerID int IDENTITY (4382,15) PRIMARY KEY,
        memberID int NOT NULL REFERENCES Member(memberID),
        title nvarchar(200) NOT NULL,
        thread nvarchar(max) NOT NULL,
        . . .
    );

    Read the article

  • Insert into ... Select *, how to ignore identity?

    - by Haoest
    I have a temp table with the exact structure of a concrete table T. It was created like this: select top 0 * into #tmp from T. After processing and filling content into #tmp, I want to copy the content back to T like this: insert into T select * from #tmp. This is okay as long as T doesn't have an identity column, but in my case it does. Is there any way I can ignore the auto-increment identity column from #tmp when I copy to T? My motivation is to avoid having to spell out every column name in the Insert Into list. EDIT: toggling identity_insert wouldn't work, because the pkeys in #tmp may collide with those in T if rows were inserted into T outside of my script--that is, if #tmp has auto-incremented the pkey to sync with T's in the first place.

    Read the article

  • Is it a bad idea to run an asp.net app pool with the same identity as IIS's anon user?

    - by Andrew Bullock
    Subject says it all really. Thinking in security terms, I want to give each site on my server its own user account, so that they can't access each other's data. I also want to use integrated authentication for SQL so I don't have any passwords knocking about in connection strings. Is it a bad idea to use the same account for the app pool identity and the anonymous user account for IIS (I'm interested in answers for both v6 and v7)? Edit: I've seen this post describing how IIS7 allows you to automatically use the same account, but the question of whether it's a good idea or not remains ;) If so, why? Thanks

    Read the article

  • WIF-less claim extraction from ACS: JWT

    - by Elton Stoneman
    ACS support for JWT still shows as "beta", but it meets the spec and it works nicely, so it's becoming the preferred option as SWT is losing favour. (Note that currently ACS doesn’t support JWT encryption, if you want encrypted tokens you need to go SAML). In my last post I covered pulling claims from an ACS token without WIF, using the SWT format. The JWT format is a little more complex, but you can still inspect claims just with string manipulation. The incoming token from ACS is still presented in the BinarySecurityToken element of the XML payload, with a TokenType of urn:ietf:params:oauth:token-type:jwt: <t:RequestSecurityTokenResponse xmlns:t="http://schemas.xmlsoap.org/ws/2005/02/trust">   <t:Lifetime>     <wsu:Created xmlns:wsu="http://docs.oasis-open.org/wss/2004/01/oasis-200401-wss-wssecurity-utility-1.0.xsd">2012-08-31T07:39:55.337Z</wsu:Created>     <wsu:Expires xmlns:wsu="http://docs.oasis-open.org/wss/2004/01/oasis-200401-wss-wssecurity-utility-1.0.xsd">2012-08-31T09:19:55.337Z</wsu:Expires>   </t:Lifetime>   <wsp:AppliesTo xmlns:wsp="http://schemas.xmlsoap.org/ws/2004/09/policy">     <EndpointReference xmlns="http://www.w3.org/2005/08/addressing">       <Address>http://localhost/x.y.z</Address>     </EndpointReference>   </wsp:AppliesTo>   <t:RequestedSecurityToken>     <wsse:BinarySecurityToken wsu:Id="_1eeb5cf4-b40b-40f2-89e0-a3343f6bd985-6A15D1EED0CDB0D8FA48C7D566232154" ValueType="urn:ietf:params:oauth:token-type:jwt" EncodingType="http://docs.oasis-open.org/wss/2004/01/oasis-200401-wss-soap-message-security-1.0#Base64Binary" xmlns:wsu="http://docs.oasis-open.org/wss/2004/01/oasis-200401-wss-wssecurity-utility-1.0.xsd" xmlns:wsse="http://docs.oasis-open.org/wss/2004/01/oasis-200401-wss-wssecurity-secext-1.0.xsd">[ base64string ] </wsse:BinarySecurityToken>   </t:RequestedSecurityToken>   <t:TokenType>urn:ietf:params:oauth:token-type:jwt</t:TokenType>   <t:RequestType>http://schemas.xmlsoap.org/ws/2005/02/trust/Issue</t:RequestType>   <t:KeyType>http://schemas.xmlsoap.org/ws/2005/05/identity/NoProofKey</t:KeyType> </t:RequestSecurityTokenResponse> The token as a whole needs to be base-64 decoded. The decoded value contains a header, payload and signature, dot-separated; the parts are also base-64, but they need to be decoded using a no-padding algorithm (implementation and more details in this MSDN article on validating an Exchange 2013 identity token). The values are then in JSON; the header contains the token type and the hashing algorithm: "{"typ":"JWT","alg":"HS256"}" The payload contains the same data as in the SWT, but JSON rather than querystring format: {"aud":"http://localhost/x.y.z" "iss":"https://adfstest-bhw.accesscontrol.windows.net/" "nbf":1346398795 "exp":1346404795 "http://schemas.microsoft.com/ws/2008/06/identity/claims/authenticationinstant":"2012-08-31T07:39:53.652Z" "http://schemas.microsoft.com/ws/2008/06/identity/claims/authenticationmethod":"http://schemas.microsoft.com/ws/2008/06/identity/authenticationmethod/windows" "http://schemas.microsoft.com/ws/2008/06/identity/claims/windowsaccountname":"xyz" "http://schemas.xmlsoap.org/ws/2005/05/identity/claims/emailaddress":"[email protected]" "http://schemas.xmlsoap.org/ws/2005/05/identity/claims/upn":"[email protected]" "identityprovider":"http://fs.svc.x.y.z.com/adfs/services/trust"} The signature is in the third part of the token. Unlike SWT which is fixed to HMAC-SHA-256, JWT can support other protocols (the one in use is specified as the "alg" value in the header). 
How to: Validate an Exchange 2013 identity token contains an implementation of a JWT parser and validator; apart from the custom base-64 decoding part, it’s very similar to SWT extraction. I've wrapped the basic SWT and JWT handling in a ClaimInspector.aspx page on GitHub here: SWT and JWT claim inspector. You can drop it into any ASP.NET site and set the URL to be your redirect page in ACS. Switch ACS between issuing SWT and JWT, and the same page lets you inspect the claims that come out.
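    For illustration, here is a minimal sketch of the string-manipulation approach described above: base64-decode the BinarySecurityToken content, split the resulting JWT on the dots, and base64url-decode the header and payload. This is not the article's ClaimInspector code; the class and method names are placeholders, and signature verification is deliberately left out.

    using System;
    using System.Text;

    static class JwtInspector
    {
        // Decode a base64url string (no padding, '-'/'_' instead of '+'/'/').
        static string Base64UrlDecode(string input)
        {
            var s = input.Replace('-', '+').Replace('_', '/');
            switch (s.Length % 4)
            {
                case 2: s += "=="; break;
                case 3: s += "="; break;
            }
            return Encoding.UTF8.GetString(Convert.FromBase64String(s));
        }

        public static void Inspect(string binarySecurityTokenContent)
        {
            // Outer decode: the BinarySecurityToken content is ordinary base64 of the JWT string.
            var jwt = Encoding.UTF8.GetString(Convert.FromBase64String(binarySecurityTokenContent));

            // The JWT itself is header.payload.signature, each part base64url encoded.
            var parts = jwt.Split('.');
            Console.WriteLine(Base64UrlDecode(parts[0]));   // e.g. {"typ":"JWT","alg":"HS256"}
            Console.WriteLine(Base64UrlDecode(parts[1]));   // the JSON claim set shown above
            // parts[2] is the signature; for "alg":"HS256", verify it with HMAC-SHA-256 over parts[0] + "." + parts[1]
        }
    }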

    Read the article

  • SSHing thru an HTTP proxy

    - by Siler
    Typical scenario: I'm trying to SSH thru a corporate HTTP proxy to a remote machine using corkscrew, and I get: ssh_exchange_identification: Connection closed by remote host Obviously, there's a lot of reasons this might be happening - the proxy might not allow this, the remote box might not be running sshd, etc. So, I tried to tunnel manually via telnet: $ telnet proxy.evilcorporation.com 82 Trying XX.XX.XX.XX... Connected to proxy.evilcorporation.com. Escape character is '^]'. CONNECT myremotehost.com:22 HTTP/1.1 HTTP/1.1 200 Connection established So, unless I'm mistaken... it looks like the connection is working. So, why then, doesn't it work via corkscrew? ssh -vvv [email protected] -p 22 -o "ProxyCommand corkscrew proxy.evilcorporation.com 82 myremotehost.com 22" OpenSSH_6.6, OpenSSL 1.0.1f 6 Jan 2014 debug1: Reading configuration data /etc/ssh/ssh_config debug1: /etc/ssh/ssh_config line 19: Applying options for * debug1: Executing proxy command: exec corkscrew proxy.evilcorporation.com 82 myremotehost.com 22 debug1: permanently_set_uid: 0/0 debug1: permanently_drop_suid: 0 debug1: identity file /root/.ssh/id_rsa type -1 debug1: identity file /root/.ssh/id_rsa-cert type -1 debug1: identity file /root/.ssh/id_dsa type -1 debug1: identity file /root/.ssh/id_dsa-cert type -1 debug1: identity file /root/.ssh/id_ecdsa type -1 debug1: identity file /root/.ssh/id_ecdsa-cert type -1 debug1: identity file /root/.ssh/id_ed25519 type -1 debug1: identity file /root/.ssh/id_ed25519-cert type -1 debug1: Enabling compatibility mode for protocol 2.0 debug1: Local version string SSH-2.0-OpenSSH_6.6p1 Ubuntu-2ubuntu1 ssh_exchange_identification: Connection closed by remote host

    Read the article

  • Can't ssh from CentOS 6.5 to SUSE LINUX 10.1

    - by Pavel Tankov
    We have a quite old installation of SUSE LINUX 10.1 (i586) in the office. The problem shortly: I can successfully ssh to it from machines in the same LAN (192.168.1.0) and not from others (that are in 10.23.0.0). The SuSE has SSH server openssh-4.2p1-18.12. I have ruled out the firewall and hosts.allow and hosts.deny files. When my ssh login attempt fails, here is what the logs say: on the client: $ ssh -vvv 192.168.1.5 OpenSSH_5.3p1, OpenSSL 1.0.1e-fips 11 Feb 2013 debug1: Reading configuration data /etc/ssh/ssh_config debug1: Applying options for * debug2: ssh_connect: needpriv 0 debug1: Connecting to 192.168.1.5 [192.168.1.5] port 22. debug1: Connection established. debug1: identity file /home/nbuild/.ssh/identity type -1 debug1: identity file /home/nbuild/.ssh/identity-cert type -1 debug1: identity file /home/nbuild/.ssh/id_rsa type -1 debug1: identity file /home/nbuild/.ssh/id_rsa-cert type -1 debug1: identity file /home/nbuild/.ssh/id_dsa type -1 debug1: identity file /home/nbuild/.ssh/id_dsa-cert type -1 on the server: Aug 21 16:34:25 serverhost sshd[20736]: debug3: fd 4 is not O_NONBLOCK Aug 21 16:34:25 serverhost sshd[20736]: debug1: Forked child 20739. Aug 21 16:34:25 serverhost sshd[20736]: debug3: send_rexec_state: entering fd = 7 config len 403 Aug 21 16:34:25 serverhost sshd[20736]: debug3: ssh_msg_send: type 0 Aug 21 16:34:25 serverhost sshd[20736]: debug3: send_rexec_state: done Aug 21 16:34:25 serverhost sshd[20739]: debug1: rexec start in 4 out 4 newsock 4 pipe 6 sock 7 Aug 21 16:34:25 serverhost sshd[20739]: debug1: inetd sockets after dupping: 3, 3 Aug 21 16:34:25 serverhost sshd[20739]: debug3: Normalising mapped IPv4 in IPv6 address Aug 21 16:34:25 serverhost sshd[20739]: Connection from 10.23.1.11 port 44340 The above log on the server is when I enable DEBUG3 log level. However, with the default log level (INFO), the only thing the server logs is this: Aug 21 16:38:32 serverhost sshd[20749]: Did not receive identification string from 10.23.1.11 Any hints? I feel I've tried everything already.

    Read the article

  • Can a Highcharts range selector use non-date linear ranges?

    - by Simon
    I am using the HighStock JS lib to produce a chart that uses a linear series (not a time-series) for the xAxis. I'd still like to use the range-selector in order to zoom to pre-determined ranges within my linear series. Is this possible? For example; say my xAxis has a series: [[121,616],[122,600],[123,605],[124,585.5],[125,575.5],[126,580.5],[127,582],[128,582],[129,584],[130,583]] I'd like to use the range selector to zoom to the last n in the series.

    Read the article

  • Can EC2 instances be set up to come from different IP ranges?

    - by Joshua Frank
    I need to run a web crawler and I want to do it from EC2 because I want the HTTP requests to come from different IP ranges so I don't get blocked. So I thought distributing this on EC2 instances might help, but I can't find any information about what the outbound IP range will be. I don't want to go to the trouble of figuring out the extra complexity of EC2 and distributed data, only to find that all the instances use the same address block and I get blocked by the server anyway. NOTE: This isn't for a DoS attack or anything. I'm trying to harvest data for a legitimate business purpose, I'm respecting robots.txt, and I'm only making one request per second, but the host is still shutting me down. Edit: Commenter Paul Dixon suggests that the act of blocking even my modest crawl indicates that the host doesn't want me to crawl them and therefore that I shouldn't do it (even assuming I can work around the blocking). Do people agree with this?

    Read the article

  • Recursive algorithm for coalescing / collapsing list of dates into ranges.

    - by Dycey
    Given a list of dates: 12/07/2010, 13/07/2010, 14/07/2010, 15/07/2010, 12/08/2010, 13/08/2010, 14/08/2010, 15/08/2010, 19/08/2010, 20/08/2010, 21/08/2010 -- I'm looking for pointers towards a recursive pseudocode algorithm (which I can translate into a FileMaker custom function) for producing a list of ranges, i.e. 12/07/2010 to 15/07/2010, 12/08/2010 to 15/08/2010, 19/08/2010 to 20/08/2010. The list is presorted and de-duplicated. I've tried starting from both the first value and working forwards, and the last value and working backwards, but I just can't seem to get it to work. Having one of those frustrating days... It would be nice if the signature were something like CollapseDateList( dateList, separator, ellipsis ) :-)
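    A rough sketch of one way to do this (iterative rather than recursive, and in C# rather than FileMaker, purely to illustrate the walk through the sorted list; the separator and ellipsis arguments mirror the signature suggested above):

    using System;
    using System.Collections.Generic;
    using System.Linq;

    static class DateRangeCollapser
    {
        public static string Collapse(IList<DateTime> dates, string separator, string ellipsis)
        {
            if (dates.Count == 0) return string.Empty;

            var ranges = new List<string>();
            DateTime start = dates[0], prev = dates[0];

            foreach (var d in dates.Skip(1))
            {
                if (d == prev.AddDays(1)) { prev = d; continue; }   // still contiguous, extend the run
                ranges.Add(Format(start, prev, ellipsis));           // close the current run
                start = prev = d;                                    // start a new run
            }
            ranges.Add(Format(start, prev, ellipsis));               // close the final run

            return string.Join(separator, ranges);
        }

        static string Format(DateTime from, DateTime to, string ellipsis) =>
            from == to ? from.ToString("dd/MM/yyyy")
                       : from.ToString("dd/MM/yyyy") + ellipsis + to.ToString("dd/MM/yyyy");
    }

    Called as Collapse(dates, ", ", " to "), it joins each contiguous run of dates into a single "from to to" range.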

    Read the article

  • Using Python, how do I cut a wav file between certain time ranges?

    - by kaushik
    How do I cut segments between certain time ranges out of multiple wav files and paste the segments together into a single wav file with continuous time? My idea was to store the contents of each wav file as an array, cut the required segments from the array, copy them into another file, and convert the result back into wav format. But I have no idea how to code this in Python, as I am a beginner. Please help; any alternative methods which serve the purpose are also welcome. Thanks in advance.

    Read the article

  • What permission(s) does an application pool identity require to manage other application pools?

    - by Mr Shoubs
    I have a web site (used to manage various parts of our software) that needs the permissions required to start/stop other application pools. I've created a user and set the app pool identity to that custom account; however, the web app still can't start/stop the app pools. I get the following error:
    System.UnauthorizedAccessException: Filename: redirection.config Error: Cannot read configuration file due to insufficient permissions
        at Microsoft.Web.Administration.Interop.AppHostWritableAdminManager.GetAdminSection(String bstrSectionName, String bstrSectionPath)
        at Microsoft.Web.Administration.Configuration.GetSectionInternal(ConfigurationSection section, String sectionPath, String locationPath)
        at Microsoft.Web.Administration.ServerManager.get_ApplicationPoolsSection()
        at Microsoft.Web.Administration.ServerManager.get_ApplicationPools()
    Discussion here suggests setting the application pool to Local System or Administrator; this does work, but I don't want to do it for security reasons (external support will need access to this site). I did give the user higher permissions (as suggested here), starting by making it part of the local administrators group, but initially this didn't work, and giving the user read/write/mod permission on C:\Windows\System32\inetsrv\config also didn't work. I must have done something wrong, as local administrator now works; however, this still isn't what I want. So can anyone suggest the permissions I need to add to this user, and how I can apply them? An answer to my problem (but a different question) is here, but to clarify: I think I need to give an individual user "IIS Runtime Operation Permissions" -- does anyone know how to do this, if indeed these are the permissions I require?
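    For context, a site doing this kind of management is presumably calling Microsoft.Web.Administration along these lines (a hedged sketch, not the asker's actual code; the pool name is a placeholder). It is this ServerManager call chain that has to read the IIS configuration files, which is why the stack trace above fails on redirection.config.

    using Microsoft.Web.Administration;

    public static class PoolManager
    {
        public static void Restart(string poolName)
        {
            // ServerManager reads the IIS configuration (including redirection.config),
            // so the app pool identity needs read access to %windir%\System32\inetsrv\config.
            using (var serverManager = new ServerManager())
            {
                var pool = serverManager.ApplicationPools[poolName];
                if (pool == null) return;

                pool.Recycle();   // or pool.Stop() / pool.Start()
            }
        }
    }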

    Read the article

  • ASP.NET WebAPI Security 4: Examples for various Authentication Scenarios

    - by Your DisplayName here!
    The Thinktecture.IdentityModel.Http repository includes a number of samples for the various authentication scenarios. All the clients follow a basic pattern: Acquire client credential (a single token, multiple tokens, username/password). Call Service. The service simply enumerates the claims it finds on the request and returns them to the client. I won’t show that part of the code, but rather focus on the step 1 and 2. Basic Authentication This is the most basic (pun inteneded) scenario. My library contains a class that can create the Basic Authentication header value. Simply set username and password and you are good to go. var client = new HttpClient { BaseAddress = _baseAddress }; client.DefaultRequestHeaders.Authorization = new BasicAuthenticationHeaderValue("alice", "alice"); var response = client.GetAsync("identity").Result; response.EnsureSuccessStatusCode();   SAML Authentication To integrate a Web API with an existing enterprise identity provider like ADFS, you can use SAML tokens. This is certainly not the most efficient way of calling a “lightweight service” ;) But very useful if that’s what it takes to get the job done. private static string GetIdentityToken() {     var factory = new WSTrustChannelFactory(         new WindowsWSTrustBinding(SecurityMode.Transport),         _idpEndpoint);     factory.TrustVersion = TrustVersion.WSTrust13;     var rst = new RequestSecurityToken     {         RequestType = RequestTypes.Issue,         KeyType = KeyTypes.Bearer,         AppliesTo = new EndpointAddress(Constants.Realm)     };     var token = factory.CreateChannel().Issue(rst) as GenericXmlSecurityToken;     return token.TokenXml.OuterXml; } private static Identity CallService(string saml) {     var client = new HttpClient { BaseAddress = _baseAddress };     client.DefaultRequestHeaders.Authorization = new AuthenticationHeaderValue("SAML", saml);     var response = client.GetAsync("identity").Result;     response.EnsureSuccessStatusCode();     return response.Content.ReadAsAsync<Identity>().Result; }   SAML to SWT conversion using the Azure Access Control Service Another possible options for integrating SAML based identity providers is to use an intermediary service that allows converting the SAML token to the more compact SWT (Simple Web Token) format. This way you only need to roundtrip the SAML once and can use the SWT afterwards. The code for the conversion uses the ACS OAuth2 endpoint. The OAuth2Client class is part of my library. private static string GetServiceTokenOAuth2(string samlToken) {     var client = new OAuth2Client(_acsOAuth2Endpoint);     return client.RequestAccessTokenAssertion(         samlToken,         SecurityTokenTypes.Saml2TokenProfile11,         Constants.Realm).AccessToken; }   SWT Authentication When you have an identity provider that directly supports a (simple) web token, you can acquire the token directly without the conversion step. Thinktecture.IdentityServer e.g. supports the OAuth2 resource owner credential profile to issue SWT tokens. 
private static string GetIdentityToken() {     var client = new OAuth2Client(_oauth2Address);     var response = client.RequestAccessTokenUserName("bob", "abc!123", Constants.Realm);     return response.AccessToken; } private static Identity CallService(string swt) {     var client = new HttpClient { BaseAddress = _baseAddress };     client.DefaultRequestHeaders.Authorization = new AuthenticationHeaderValue("Bearer", swt);     var response = client.GetAsync("identity").Result;     response.EnsureSuccessStatusCode();     return response.Content.ReadAsAsync<Identity>().Result; }   So you can see that it’s pretty straightforward to implement various authentication scenarios using WebAPI and my authentication library. Stay tuned for more client samples!

    Read the article

  • Data Source Security Part 4

    - by Steve Felts
    So far, I have covered the Client Identity and Oracle Proxy Session features, with WLS or database credentials. This article will cover one more feature, Identity-based pooling. Then there is one more topic to cover - how these options play with transactions.
    Identity-based Connection Pooling
    An identity-based pool creates a heterogeneous pool of connections. This allows applications to use a JDBC connection with a specific DBMS credential by pooling physical connections with different DBMS credentials. The DBMS credential is based on either the WebLogic user mapped to a database user or the database user directly, based on the “use database credentials” setting as described earlier. Using this feature with “use database credentials” enabled seems to be what is proposed in the JDBC standard, basically a heterogeneous pool with users specified by getConnection(user, password). The allocation of connections is more complex if the Enable Identity Based Connection Pooling attribute is enabled on the data source. When an application requests a database connection, the WebLogic Server instance selects an existing physical connection or creates a new physical connection with the requested DBMS identity. The following section describes how heterogeneous connections are created:
    1. At connection pool initialization, the physical JDBC connections, based on the configured or default “initial capacity”, are created with the configured default DBMS credential of the data source.
    2. An application tries to get a connection from a data source.
    3a. If “use database credentials” is not enabled, the user specified in getConnection is mapped to a DBMS credential, as described earlier. If the credential map doesn’t have a matching user, the default DBMS credential from the datasource descriptor is used.
    3b. If “use database credentials” is enabled, the user and password specified in getConnection are used directly.
    4. The connection pool is searched for a connection with a matching DBMS credential.
    5. If a match is found, the connection is reserved and returned to the application.
    6. If no match is found, a connection is created or reused based on the maximum capacity of the pool:
    - If the maximum capacity has not been reached, a new connection is created with the DBMS credential, reserved, and returned to the application.
    - If the pool has reached maximum capacity, a physical connection is selected from the pool based on the least recently used (LRU) algorithm and destroyed. A new connection is created with the DBMS credential, reserved, and returned to the application.
    It should be clear that finding a matching connection is more expensive than in a homogeneous pool, and destroying a connection and getting a new one is very expensive. If you can use a normal homogeneous pool or one of the light-weight options (client identity or an Oracle proxy connection), those should be used instead of identity-based pooling. Regardless of how physical connections are created, each physical connection in the pool has its own DBMS credential information maintained by the pool. Once a physical connection is reserved by the pool, it does not change its DBMS credential even if the current thread changes its WebLogic user credential and continues to use the same connection. To configure this feature, select Enable Identity Based Connection Pooling.
See http://docs.oracle.com/cd/E24329_01/apirefs.1211/e24401/taskhelp/jdbc/jdbc_datasources/EnableIdentityBasedConnectionPooling.html  "Enable identity-based connection pooling for a JDBC data source" in Oracle WebLogic Server Administration Console Help. You must make the following changes to use Logging Last Resource (LLR) transaction optimization with Identity-based Pooling to get around the problem that multiple users will be accessing the associated transaction table.- You must configure a custom schema for LLR using a fully qualified LLR table name. All LLR connections will then use the named schema rather than the default schema when accessing the LLR transaction table.  - Use database specific administration tools to grant permission to access the named LLR table to all users that could access this table via a global transaction. By default, the LLR table is created during boot by the user configured for the connection in the data source. In most cases, the database will only allow access to this user and not allow access to mapped users. Connections within Transactions Now that we have covered the behavior of all of these various options, it’s time to discuss the exception to all of the rules.  When you get a connection within a transaction, it is associated with the transaction context on a particular WLS instance. When getting a connection with a data source configured with non-XA LLR or 1PC (using the JTS driver) with global transactions, the first connection obtained within the transaction is returned on subsequent connection requests regardless of the values of username/password specified and independent of the associated proxy user session, if any. The connection must be shared among all users of the connection when using LLR or 1PC. For XA data sources, the first connection obtained within the global transaction is returned on subsequent connection requests within the application server, regardless of the values of username/password specified and independent of the associated proxy user session, if any.  The connection must be shared among all users of the connection within a global transaction within the application server/JVM.

    Read the article

  • Cannot SSH after resetting firewall on VPS

    - by Thomas Buckley
    I'm having trouble trying to SSH to my Debian 5 VPS with blacknight. It was working fine until I did the following: Logged into 'Parallels Infrastructure Manager' - Container - Firewall - Set to 'Normal Firewall settings'. It told me there was an error with the IPTables and offered the option again with a checkbox to 'reset' firewall settings, I selected this. I can see that that the default rules are been applied ( anything from anyone on any port and allowing anything to happen). Whenever I attempt to SSH I get the following debug info: thomas@localmachine:~/.ssh$ ssh -v thomas@hostname OpenSSH_5.8p1 Debian-7ubuntu1, OpenSSL 1.0.0e 6 Sep 2011 debug1: Reading configuration data /etc/ssh/ssh_config debug1: Applying options for * debug1: Connecting to hostname [***********] port 22. debug1: Connection established. debug1: identity file /home/thomas/.ssh/id_rsa type 1 debug1: Checking blacklist file /usr/share/ssh/blacklist.RSA-4096 debug1: Checking blacklist file /etc/ssh/blacklist.RSA-4096 debug1: identity file /home/thomas/.ssh/id_rsa-cert type -1 debug1: identity file /home/thomas/.ssh/id_dsa type -1 debug1: identity file /home/thomas/.ssh/id_dsa-cert type -1 debug1: identity file /home/thomas/.ssh/id_ecdsa type -1 debug1: identity file /home/thomas/.ssh/id_ecdsa-cert type -1 debug1: Remote protocol version 2.0, remote software version OpenSSH_5.1p1 Debian-5 debug1: match: OpenSSH_5.1p1 Debian-5 pat OpenSSH* debug1: Enabling compatibility mode for protocol 2.0 debug1: Local version string SSH-2.0-OpenSSH_5.8p1 Debian-7ubuntu1 debug1: SSH2_MSG_KEXINIT sent debug1: SSH2_MSG_KEXINIT received debug1: kex: server->client aes128-ctr hmac-md5 none debug1: kex: client->server aes128-ctr hmac-md5 none debug1: SSH2_MSG_KEX_DH_GEX_REQUEST(1024<1024<8192) sent debug1: expecting SSH2_MSG_KEX_DH_GEX_GROUP debug1: SSH2_MSG_KEX_DH_GEX_INIT sent debug1: expecting SSH2_MSG_KEX_DH_GEX_REPLY debug1: Server host key: RSA ************************************* debug1: Host 'hostname' is known and matches the RSA host key. debug1: Found key in /home/thomas/.ssh/known_hosts:2 debug1: ssh_rsa_verify: signature correct debug1: SSH2_MSG_NEWKEYS sent debug1: expecting SSH2_MSG_NEWKEYS debug1: SSH2_MSG_NEWKEYS received debug1: Roaming not allowed by server debug1: SSH2_MSG_SERVICE_REQUEST sent debug1: SSH2_MSG_SERVICE_ACCEPT received debug1: Authentications that can continue: publickey debug1: Next authentication method: publickey debug1: Offering RSA public key: /home/thomas/.ssh/id_rsa debug1: Authentications that can continue: publickey debug1: Trying private key: /home/thomas/.ssh/id_dsa debug1: Trying private key: /home/thomas/.ssh/id_ecdsa debug1: No more authentication methods to try. Permission denied (publickey). I had my public/private RSA keys set up and working fine before I reset the firewall settings. I had also made the following changes to my /etc/ssh/sshd_config file on the VPS: PermitRootLogin no PasswordAuthentication no X11Forwarding no UsePAM no UseDNS no AllowUsers thomas Could it be something to do with the SSH server & client having different versions between my local machine and VPS? Any help appreciated. Output with ssh -vvv thomas@localcomputer:~/.ssh$ ssh -vvv thomas@**************** OpenSSH_5.8p1 Debian-7ubuntu1, OpenSSL 1.0.0e 6 Sep 2011 debug1: Reading configuration data /etc/ssh/ssh_config debug1: Applying options for * debug2: ssh_connect: needpriv 0 debug1: Connecting to ************ [*************] port 22. debug1: Connection established. 
debug3: Incorrect RSA1 identifier debug3: Could not load "/home/thomas/.ssh/id_rsa" as a RSA1 public key debug2: key_type_from_name: unknown key type '-----BEGIN' debug3: key_read: missing keytype debug2: key_type_from_name: unknown key type 'Proc-Type:' debug3: key_read: missing keytype debug2: key_type_from_name: unknown key type 'DEK-Info:' debug3: key_read: missing keytype debug3: key_read: missing whitespace debug3: key_read: missing whitespace debug3: key_read: missing whitespace debug3: key_read: missing whitespace debug3: key_read: missing whitespace debug3: key_read: missing whitespace debug3: key_read: missing whitespace debug3: key_read: missing whitespace debug3: key_read: missing whitespace debug3: key_read: missing whitespace debug3: key_read: missing whitespace debug3: key_read: missing whitespace debug3: key_read: missing whitespace debug3: key_read: missing whitespace debug3: key_read: missing whitespace debug3: key_read: missing whitespace debug3: key_read: missing whitespace debug3: key_read: missing whitespace debug3: key_read: missing whitespace debug3: key_read: missing whitespace debug3: key_read: missing whitespace debug3: key_read: missing whitespace debug3: key_read: missing whitespace debug3: key_read: missing whitespace debug3: key_read: missing whitespace debug3: key_read: missing whitespace debug3: key_read: missing whitespace debug3: key_read: missing whitespace debug3: key_read: missing whitespace debug3: key_read: missing whitespace debug3: key_read: missing whitespace debug3: key_read: missing whitespace debug3: key_read: missing whitespace debug3: key_read: missing whitespace debug3: key_read: missing whitespace debug3: key_read: missing whitespace debug3: key_read: missing whitespace debug3: key_read: missing whitespace debug3: key_read: missing whitespace debug3: key_read: missing whitespace debug3: key_read: missing whitespace debug3: key_read: missing whitespace debug3: key_read: missing whitespace debug3: key_read: missing whitespace debug3: key_read: missing whitespace debug3: key_read: missing whitespace debug3: key_read: missing whitespace debug3: key_read: missing whitespace debug3: key_read: missing whitespace debug2: key_type_from_name: unknown key type '-----END' debug3: key_read: missing keytype debug1: identity file /home/thomas/.ssh/id_rsa type 1 debug1: Checking blacklist file /usr/share/ssh/blacklist.RSA-4096 debug1: Checking blacklist file /etc/ssh/blacklist.RSA-4096 debug1: identity file /home/thomas/.ssh/id_rsa-cert type -1 debug1: identity file /home/thomas/.ssh/id_dsa type -1 debug1: identity file /home/thomas/.ssh/id_dsa-cert type -1 debug1: identity file /home/thomas/.ssh/id_ecdsa type -1 debug1: identity file /home/thomas/.ssh/id_ecdsa-cert type -1 debug1: Remote protocol version 2.0, remote software version OpenSSH_5.1p1 Debian-5 debug1: match: OpenSSH_5.1p1 Debian-5 pat OpenSSH* debug1: Enabling compatibility mode for protocol 2.0 debug1: Local version string SSH-2.0-OpenSSH_5.8p1 Debian-7ubuntu1 debug2: fd 3 setting O_NONBLOCK debug3: load_hostkeys: loading entries for host "*****************" from file "/home/thomas/.ssh/known_hosts" debug3: load_hostkeys: found key type RSA in file /home/thomas/.ssh/known_hosts:1 debug3: load_hostkeys: loaded 1 keys debug3: order_hostkeyalgs: prefer hostkeyalgs: [email protected],[email protected],ssh-rsa debug1: SSH2_MSG_KEXINIT sent debug1: SSH2_MSG_KEXINIT received debug2: kex_parse_kexinit: 
ecdh-sha2-nistp256,ecdh-sha2-nistp384,ecdh-sha2-nistp521,diffie-hellman-group-exchange-sha256,diffie-hellman-group-exchange-sha1,diffie-hellman-group14-sha1,diffie-hellman-group1-sha1 debug2: kex_parse_kexinit: [email protected],[email protected],ssh-rsa,[email protected],[email protected],[email protected],[email protected],[email protected],ecdsa-sha2-nistp256,ecdsa-sha2-nistp384,ecdsa-sha2-nistp521,ssh-dss debug2: kex_parse_kexinit: aes128-ctr,aes192-ctr,aes256-ctr,arcfour256,arcfour128,aes128-cbc,3des-cbc,blowfish-cbc,cast128-cbc,aes192-cbc,aes256-cbc,arcfour,[email protected] debug2: kex_parse_kexinit: aes128-ctr,aes192-ctr,aes256-ctr,arcfour256,arcfour128,aes128-cbc,3des-cbc,blowfish-cbc,cast128-cbc,aes192-cbc,aes256-cbc,arcfour,[email protected] debug2: kex_parse_kexinit: hmac-md5,hmac-sha1,[email protected],hmac-ripemd160,[email protected],hmac-sha1-96,hmac-md5-96 debug2: kex_parse_kexinit: hmac-md5,hmac-sha1,[email protected],hmac-ripemd160,[email protected],hmac-sha1-96,hmac-md5-96 debug2: kex_parse_kexinit: none,[email protected],zlib debug2: kex_parse_kexinit: none,[email protected],zlib debug2: kex_parse_kexinit: debug2: kex_parse_kexinit: debug2: kex_parse_kexinit: first_kex_follows 0 debug2: kex_parse_kexinit: reserved 0 debug2: kex_parse_kexinit: diffie-hellman-group-exchange-sha256,diffie-hellman-group-exchange-sha1,diffie-hellman-group14-sha1,diffie-hellman-group1-sha1 debug2: kex_parse_kexinit: ssh-rsa,ssh-dss debug2: kex_parse_kexinit: aes128-cbc,3des-cbc,blowfish-cbc,cast128-cbc,arcfour128,arcfour256,arcfour,aes192-cbc,aes256-cbc,[email protected],aes128-ctr,aes192-ctr,aes256-ctr debug2: kex_parse_kexinit: aes128-cbc,3des-cbc,blowfish-cbc,cast128-cbc,arcfour128,arcfour256,arcfour,aes192-cbc,aes256-cbc,[email protected],aes128-ctr,aes192-ctr,aes256-ctr debug2: kex_parse_kexinit: hmac-md5,hmac-sha1,[email protected],hmac-ripemd160,[email protected],hmac-sha1-96,hmac-md5-96 debug2: kex_parse_kexinit: hmac-md5,hmac-sha1,[email protected],hmac-ripemd160,[email protected],hmac-sha1-96,hmac-md5-96 debug2: kex_parse_kexinit: none,[email protected] debug2: kex_parse_kexinit: none,[email protected] debug2: kex_parse_kexinit: debug2: kex_parse_kexinit: debug2: kex_parse_kexinit: first_kex_follows 0 debug2: kex_parse_kexinit: reserved 0 debug2: mac_setup: found hmac-md5 debug1: kex: server->client aes128-ctr hmac-md5 none debug2: mac_setup: found hmac-md5 debug1: kex: client->server aes128-ctr hmac-md5 none debug1: SSH2_MSG_KEX_DH_GEX_REQUEST(1024<1024<8192) sent debug1: expecting SSH2_MSG_KEX_DH_GEX_GROUP debug2: dh_gen_key: priv key bits set: 127/256 debug2: bits set: 498/1024 debug1: SSH2_MSG_KEX_DH_GEX_INIT sent debug1: expecting SSH2_MSG_KEX_DH_GEX_REPLY debug1: Server host key: RSA *********************************************************** debug3: load_hostkeys: loading entries for host "*********************" from file "/home/thomas/.ssh/known_hosts" debug3: load_hostkeys: found key type RSA in file /home/thomas/.ssh/known_hosts:1 debug3: load_hostkeys: loaded 1 keys debug1: Host '****************' is known and matches the RSA host key. 
debug1: Found key in /home/thomas/.ssh/known_hosts:1 debug2: bits set: 516/1024 debug1: ssh_rsa_verify: signature correct debug2: kex_derive_keys debug2: set_newkeys: mode 1 debug1: SSH2_MSG_NEWKEYS sent debug1: expecting SSH2_MSG_NEWKEYS debug2: set_newkeys: mode 0 debug1: SSH2_MSG_NEWKEYS received debug1: Roaming not allowed by server debug1: SSH2_MSG_SERVICE_REQUEST sent debug2: service_accept: ssh-userauth debug1: SSH2_MSG_SERVICE_ACCEPT received debug2: key: /home/thomas/.ssh/id_rsa (0x7fa7028b6010) debug2: key: /home/thomas/.ssh/id_dsa ((nil)) debug2: key: /home/thomas/.ssh/id_ecdsa ((nil)) debug1: Authentications that can continue: publickey debug3: start over, passed a different list publickey debug3: preferred gssapi-keyex,gssapi-with-mic,publickey,keyboard-interactive,password debug3: authmethod_lookup publickey debug3: remaining preferred: keyboard-interactive,password debug3: authmethod_is_enabled publickey debug1: Next authentication method: publickey debug1: Offering RSA public key: /home/thomas/.ssh/id_rsa debug3: send_pubkey_test debug2: we sent a publickey packet, wait for reply debug1: Authentications that can continue: publickey debug1: Trying private key: /home/thomas/.ssh/id_dsa debug3: no such identity: /home/thomas/.ssh/id_dsa debug1: Trying private key: /home/thomas/.ssh/id_ecdsa debug3: no such identity: /home/thomas/.ssh/id_ecdsa debug2: we did not send a packet, disable method debug1: No more authentication methods to try. Permission denied (publickey). sshd_config # Package generated configuration file # See the sshd(8) manpage for details # What ports, IPs and protocols we listen for Port 22 # Use these options to restrict which interfaces/protocols sshd will bind to #ListenAddress :: #ListenAddress 0.0.0.0 Protocol 2 # HostKeys for protocol version 2 HostKey /etc/ssh/ssh_host_rsa_key HostKey /etc/ssh/ssh_host_dsa_key #Privilege Separation is turned on for security UsePrivilegeSeparation yes # Lifetime and size of ephemeral version 1 server key KeyRegenerationInterval 3600 ServerKeyBits 768 # Logging SyslogFacility AUTH LogLevel INFO # Authentication: LoginGraceTime 120 PermitRootLogin no StrictModes yes RSAAuthentication yes PubkeyAuthentication yes #AuthorizedKeysFile %h/.ssh/authorized_keys # Don't read the user's ~/.rhosts and ~/.shosts files IgnoreRhosts yes # For this to work you will also need host keys in /etc/ssh_known_hosts RhostsRSAAuthentication no # similar for protocol version 2 HostbasedAuthentication no # Uncomment if you don't trust ~/.ssh/known_hosts for RhostsRSAAuthentication #IgnoreUserKnownHosts yes # To enable empty passwords, change to yes (NOT RECOMMENDED) PermitEmptyPasswords no # Change to yes to enable challenge-response passwords (beware issues with # some PAM modules and threads) C hallengeResponseAuthentication no # Change to no to disable tunnelled clear text passwords PasswordAuthentication no # Kerberos options #KerberosAuthentication no #KerberosGetAFSToken no #KerberosOrLocalPasswd yes #KerberosTicketCleanup yes # GSSAPI options #GSSAPIAuthentication no #GSSAPICleanupCredentials yes X11Forwarding no X11DisplayOffset 10 PrintMotd no PrintLastLog yes TCPKeepAlive yes #UseLogin no #MaxStartups 10:30:60 #Banner /etc/issue.net # Allow client to pass locale environment variables AcceptEnv LANG LC_* Subsystem sftp /usr/lib/openssh/sftp-server UsePAM no UseDNS no AllowUsers thomas Thanks

    Read the article

  • Identity alternative for SQL Azure Federation : are Azure Queues or Service Bus Queues a good choice?

    - by JYL
    Like many developers, I'm looking for a way to migrate my existing app to SQL Azure Federations, and replacing the Identity columns (the primary keys of my tables) is a big problem. For many reasons, I do NOT want to use GUIDs for my primary keys (please don't open the GUID debate, it's not my question: I just don't want a GUID, period). So I need to build a key provider to replace the "identity" feature of a standard SQL database. I'm using Entity Framework, so I can easily find one place to set the Id value just before the insert (by overriding the SaveChanges method of my ObjectContext class). I just need to find a "not too complicated" implementation for getting the current Id which is "farm-ready". I've read the SO post "ID Generation for Sharded Database (Azure Federated Database)" and "Synchronizing Multiple Nodes in Windows Azure" from MSDN Magazine, but that solution sounds a bit complicated for me. I'm thinking about (automatically) creating one Azure queue for each SQL table, each containing a pre-loaded list of consecutive integers. When I want an Id value, I just get a message from the queue (which becomes invisible and is deleted on the way), which gives me the next available Id. About the choice between "Windows Azure Queues" and "Windows Azure Service Bus Queues", I prefer "Windows Azure Queues", due to the "high" latency of Service Bus Queues. I don't think the lack of an ordering guarantee in Azure Queues is a problem. What do you think about this idea of using Azure Queues to provide Id values? Do you see any argument for giving up on the idea? Do you have a better idea, or even a good practice, for providing integer ids in SQL Azure Federation databases? Thanks.
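    For what it's worth, a minimal sketch of the idea being described, assuming the classic Windows Azure storage client library (Microsoft.WindowsAzure.Storage); the queue naming, connection string, and the QueueIdProvider class itself are placeholders, and pre-loading each queue with consecutive integers is assumed to happen elsewhere:

    using System;
    using Microsoft.WindowsAzure.Storage;
    using Microsoft.WindowsAzure.Storage.Queue;

    public class QueueIdProvider
    {
        private readonly CloudQueue _queue;

        public QueueIdProvider(string connectionString, string tableName)
        {
            var account = CloudStorageAccount.Parse(connectionString);
            var client = account.CreateCloudQueueClient();
            // one queue per SQL table, e.g. "ids-offer"
            _queue = client.GetQueueReference("ids-" + tableName.ToLowerInvariant());
        }

        // Pops the next pre-loaded integer off the queue. The message becomes invisible
        // as soon as it is retrieved and is then deleted, so two callers cannot get the same Id.
        public long NextId()
        {
            var message = _queue.GetMessage();
            if (message == null)
                throw new InvalidOperationException("Id queue is empty - refill it.");

            var id = long.Parse(message.AsString);
            _queue.DeleteMessage(message);
            return id;
        }
    }

    In the SaveChanges override you would then call NextId() once for each added entity just before the insert, as described above.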

    Read the article

  • How to avoid overlapping date ranges when using a grouping clause?

    - by k rey
    I have a situation where I need to find time spans between value changes. I tried a simple group by clause but it eliminates overlapping changes. Consider the following example: create table #items ( code varchar(4) , class varchar(4) , txdate datetime ) insert into #items (code, class, txdate) values ('A', 'C', '2010-01-01'); insert into #items (code, class, txdate) values ('A', 'C', '2010-01-02'); insert into #items (code, class, txdate) values ('A', 'C', '2010-01-03'); insert into #items (code, class, txdate) values ('A', 'D', '2010-01-04'); insert into #items (code, class, txdate) values ('A', 'D', '2010-01-05'); insert into #items (code, class, txdate) values ('A', 'C', '2010-01-06'); insert into #items (code, class, txdate) values ('A', 'C', '2010-01-07'); insert into #items (code, class, txdate) values ('A', 'D', '2010-01-08'); insert into #items (code, class, txdate) values ('A', 'D', '2010-01-09'); select code , class , min(txdate) mindate , max(txdate) maxdate from #items group by code, class This returns the following results (notice the overlapping date ranges): |code|class|mindate |maxdate | ---------------------------------- |A |C |2010-01-01|2010-01-07| |A |D |2010-01-04|2010-01-09| I would like to have the query return the following: |code|class|mindate |maxdate | ---------------------------------- |A |C |2010-01-01|2010-01-03| |A |D |2010-01-04|2010-01-05| |A |C |2010-01-06|2010-01-07| |A |D |2010-01-08|2010-01-09| Any ideas and suggestions?

    Read the article

  • Set Context User Principal for Customized Authentication in SignalR

    - by Shaun
    Originally posted on: http://geekswithblogs.net/shaunxu/archive/2014/05/27/set-context-user-principal-for-customized-authentication-in-signalr.aspxCurrently I'm working on a single page application project which is built on AngularJS and ASP.NET WebAPI. When I need to implement some features that needs real-time communication and push notifications from server side I decided to use SignalR. SignalR is a project currently developed by Microsoft to build web-based, read-time communication application. You can find it here. With a lot of introductions and guides it's not a difficult task to use SignalR with ASP.NET WebAPI and AngularJS. I followed this and this even though it's based on SignalR 1. But when I tried to implement the authentication for my SignalR I was struggled 2 days and finally I got a solution by myself. This might not be the best one but it actually solved all my problem.   In many articles it's said that you don't need to worry about the authentication of SignalR since it uses the web application authentication. For example if your web application utilizes form authentication, SignalR will use the user principal your web application authentication module resolved, check if the principal exist and authenticated. But in my solution my ASP.NET WebAPI, which is hosting SignalR as well, utilizes OAuth Bearer authentication. So when the SignalR connection was established the context user principal was empty. So I need to authentication and pass the principal by myself.   Firstly I need to create a class which delivered from "AuthorizeAttribute", that will takes the responsible for authenticate when SignalR connection established and any method was invoked. 1: public class QueryStringBearerAuthorizeAttribute : AuthorizeAttribute 2: { 3: public override bool AuthorizeHubConnection(HubDescriptor hubDescriptor, IRequest request) 4: { 5: } 6:  7: public override bool AuthorizeHubMethodInvocation(IHubIncomingInvokerContext hubIncomingInvokerContext, bool appliesToMethod) 8: { 9: } 10: } The method "AuthorizeHubConnection" will be invoked when any SignalR connection was established. And here I'm going to retrieve the Bearer token from query string, try to decrypt and recover the login user's claims. 1: public override bool AuthorizeHubConnection(HubDescriptor hubDescriptor, IRequest request) 2: { 3: var dataProtectionProvider = new DpapiDataProtectionProvider(); 4: var secureDataFormat = new TicketDataFormat(dataProtectionProvider.Create()); 5: // authenticate by using bearer token in query string 6: var token = request.QueryString.Get(WebApiConfig.AuthenticationType); 7: var ticket = secureDataFormat.Unprotect(token); 8: if (ticket != null && ticket.Identity != null && ticket.Identity.IsAuthenticated) 9: { 10: // set the authenticated user principal into environment so that it can be used in the future 11: request.Environment["server.User"] = new ClaimsPrincipal(ticket.Identity); 12: return true; 13: } 14: else 15: { 16: return false; 17: } 18: } In the code above I created "TicketDataFormat" instance, which must be same as the one I used to generate the Bearer token when user logged in. Then I retrieve the token from request query string and unprotect it. If I got a valid ticket with identity and it's authenticated this means it's a valid token. Then I pass the user principal into request's environment property which can be used in nearly future. 
Since my website was built in AngularJS so the SignalR client was in pure JavaScript, and it's not support to set customized HTTP headers in SignalR JavaScript client, I have to pass the Bearer token through request query string. This is not a restriction of SignalR, but a restriction of WebSocket. For security reason WebSocket doesn't allow client to set customized HTTP headers from browser. Next, I need to implement the authentication logic in method "AuthorizeHubMethodInvocation" which will be invoked when any SignalR method was invoked. 1: public override bool AuthorizeHubMethodInvocation(IHubIncomingInvokerContext hubIncomingInvokerContext, bool appliesToMethod) 2: { 3: var connectionId = hubIncomingInvokerContext.Hub.Context.ConnectionId; 4: // check the authenticated user principal from environment 5: var environment = hubIncomingInvokerContext.Hub.Context.Request.Environment; 6: var principal = environment["server.User"] as ClaimsPrincipal; 7: if (principal != null && principal.Identity != null && principal.Identity.IsAuthenticated) 8: { 9: // create a new HubCallerContext instance with the principal generated from token 10: // and replace the current context so that in hubs we can retrieve current user identity 11: hubIncomingInvokerContext.Hub.Context = new HubCallerContext(new ServerRequest(environment), connectionId); 12: return true; 13: } 14: else 15: { 16: return false; 17: } 18: } Since I had passed the user principal into request environment in previous method, I can simply check if it exists and valid. If so, what I need is to pass the principal into context so that SignalR hub can use. Since the "User" property is all read-only in "hubIncomingInvokerContext", I have to create a new "ServerRequest" instance with principal assigned, and set to "hubIncomingInvokerContext.Hub.Context". After that, we can retrieve the principal in my Hubs through "Context.User" as below. 1: public class DefaultHub : Hub 2: { 3: public object Initialize(string host, string service, JObject payload) 4: { 5: var connectionId = Context.ConnectionId; 6: ... ... 7: var domain = string.Empty; 8: var identity = Context.User.Identity as ClaimsIdentity; 9: if (identity != null) 10: { 11: var claim = identity.FindFirst("Domain"); 12: if (claim != null) 13: { 14: domain = claim.Value; 15: } 16: } 17: ... ... 18: } 19: } Finally I just need to add my "QueryStringBearerAuthorizeAttribute" into the SignalR pipeline. 1: app.Map("/signalr", map => 2: { 3: // Setup the CORS middleware to run before SignalR. 4: // By default this will allow all origins. You can 5: // configure the set of origins and/or http verbs by 6: // providing a cors options with a different policy. 7: map.UseCors(CorsOptions.AllowAll); 8: var hubConfiguration = new HubConfiguration 9: { 10: // You can enable JSONP by uncommenting line below. 11: // JSONP requests are insecure but some older browsers (and some 12: // versions of IE) require JSONP to work cross domain 13: // EnableJSONP = true 14: EnableJavaScriptProxies = false 15: }; 16: // Require authentication for all hubs 17: var authorizer = new QueryStringBearerAuthorizeAttribute(); 18: var module = new AuthorizeModule(authorizer, authorizer); 19: GlobalHost.HubPipeline.AddModule(module); 20: // Run the SignalR pipeline. We're not using MapSignalR 21: // since this branch already runs under the "/signalr" path. 22: map.RunSignalR(hubConfiguration); 23: }); On the client side should pass the Bearer token through query string before I started the connection as below. 
1: self.connection = $.hubConnection(signalrEndpoint); 2: self.proxy = self.connection.createHubProxy(hubName); 3: self.proxy.on(notifyEventName, function (event, payload) { 4: options.handler(event, payload); 5: }); 6: // add the authentication token to query string 7: // we cannot use http headers since web socket protocol doesn't support 8: self.connection.qs = { Bearer: AuthService.getToken() }; 9: // connection to hub 10: self.connection.start(); Hope this helps, Shaun All documents and related graphics, codes are provided "AS IS" without warranty of any kind. Copyright © Shaun Ziyan Xu. This work is licensed under the Creative Commons License.

    Read the article

  • Reaver keeps repeating the same PIN

    - by Umair Ayub
    I have been trying to Hack a WPA2 Wifi and so far I am stuck with it. Problem is that it keeps trying same pin over and over again. Here is the last REAVER command I entered. reaver -i mon0 -b 2C:AB:25:51:F1:CF -vv -c 1 -S -L -f It does this (only one PIN again and again) [+] Switching mon0 to channel 1 [+] Waiting for beacon from 2C:AB:25:51:F1:CF [+] Associated with 2C:AB:25:51:F1:CF (ESSID: PTCL-BB) [+] Trying pin 12345670 [+] Sending EAPOL START request [+] Received identity request [+] Sending identity response [!] WARNING: Receive timeout occurred [+] Sending WSC NACK [!] WPS transaction failed (code: 0x02), re-trying last pin [+] Trying pin 12345670 [+] Sending EAPOL START request [+] Received identity request [+] Sending identity response [+] Received identity request [+] Sending identity response ^C [+] Nothing done, nothing to save.

    Read the article

  • ssh refuses to authenticate keys

    - by MixturaDementiae
    So I am setting up a connection between my machine [fedora 17] and a virtual machine running in Virtual Box in which is running CentOS 5. I have installed openssh from the repositories on CentOS, and I have configured everything as it follows: Protocol 2 HostKey /etc/ssh/ssh_host_rsa_key HostKey /etc/ssh/ssh_host_dsa_key SyslogFacility AUTHPRIV PermitRootLogin yes RSAAuthentication yes PubkeyAuthentication yes AuthorizedKeysFile /home/pigreco/.ssh/authorized_keys PasswordAuthentication no ChallengeResponseAuthentication yes GSSAPIAuthentication yes GSSAPICleanupCredentials yes UsePAM yes AcceptEnv LANG LC_CTYPE LC_NUMERIC LC_TIME LC_COLLATE LC_MONETARY LC_MESSAGES AcceptEnv LC_PAPER LC_NAME LC_ADDRESS LC_TELEPHONE LC_MEASUREMENT AcceptEnv LC_IDENTIFICATION LC_ALL LANGUAGE AcceptEnv XMODIFIERS X11Forwarding yes Subsystem sftp /usr/libexec/openssh/sftp-server this is the configuration file sshd_config on the server i.e. on the CentOS. Moreover I have created a public/private key pair as usual on the .ssh/ folder in my home directory in my OS, i.e. Fedora, and then I've copied with scp the id_rsa.pub to the server and then I have appended its content to the file .ssh/authorized_keys on the server machine. The error that I get is the following: OpenSSH_5.9p1, OpenSSL 1.0.0j-fips 10 May 2012 debug1: Reading configuration data /etc/ssh/ssh_config debug1: /etc/ssh/ssh_config line 50: Applying options for * debug1: Connecting to 192.168.100.13 [192.168.100.13] port 22. debug1: Connection established. debug1: identity file /home/mayhem/.ssh/identity type -1 debug1: identity file /home/mayhem/.ssh/identity-cert type -1 debug1: identity file /home/mayhem/.ssh/id_rsa type 1 debug1: identity file /home/mayhem/.ssh/id_rsa-cert type -1 debug1: identity file /home/mayhem/.ssh/id_dsa type -1 debug1: identity file /home/mayhem/.ssh/id_dsa-cert type -1 debug1: Remote protocol version 2.0, remote software version OpenSSH_5.3 debug1: match: OpenSSH_5.3 pat OpenSSH* debug1: Enabling compatibility mode for protocol 2.0 debug1: Local version string SSH-2.0-OpenSSH_5.9 debug1: SSH2_MSG_KEXINIT sent debug1: SSH2_MSG_KEXINIT received debug1: kex: server->client aes128-ctr hmac-md5 none debug1: kex: client->server aes128-ctr hmac-md5 none debug1: SSH2_MSG_KEX_DH_GEX_REQUEST(1024<1024<8192) sent debug1: expecting SSH2_MSG_KEX_DH_GEX_GROUP debug1: SSH2_MSG_KEX_DH_GEX_INIT sent debug1: expecting SSH2_MSG_KEX_DH_GEX_REPLY debug1: Server host key: RSA 16:e5:72:d1:37:94:1b:5e:3d:3a:e5:da:6f:df:0c:08 debug1: Host '192.168.100.13' is known and matches the RSA host key. debug1: Found key in /home/mayhem/.ssh/known_hosts:1 debug1: ssh_rsa_verify: signature correct debug1: SSH2_MSG_NEWKEYS sent debug1: expecting SSH2_MSG_NEWKEYS debug1: SSH2_MSG_NEWKEYS received debug1: Roaming not allowed by server debug1: SSH2_MSG_SERVICE_REQUEST sent debug1: SSH2_MSG_SERVICE_ACCEPT received debug1: Authentications that can continue: publickey,gssapi-keyex,gssapi-with-mic,keyboard-interactive debug1: Next authentication method: gssapi-keyex debug1: No valid Key exchange context debug1: Next authentication method: gssapi-with-mic debug1: Unspecified GSS failure. Minor code may provide more information Cannot determine realm for numeric host address debug1: Unspecified GSS failure. Minor code may provide more information Cannot determine realm for numeric host address debug1: Unspecified GSS failure. Minor code may provide more information debug1: Unspecified GSS failure. 
Minor code may provide more information Cannot determine realm for numeric host address debug1: Next authentication method: publickey debug1: Offering RSA public key: /home/mayhem/.ssh/id_rsa debug1: Server accepts key: pkalg ssh-rsa blen 279 Agent admitted failure to sign using the key. debug1: Trying private key: /home/mayhem/.ssh/identity debug1: Trying private key: /home/mayhem/.ssh/id_dsa debug1: Next authentication method: keyboard-interactive Do you have some good suggestion of what I can do? thank you

    Read the article

  • VPN Authentication Credentials (Local/Remote Identifiers) For Remote Access VPN

    - by thatidiotguy
    So I am trying to set up a remote access VPN using the free ShrewSoft VPN client: https://www.shrew.net/software. I want to use a PSK as the authentication mechanism, combined with XAuth so that a connection requires a valid username/pass combo. Under the Authentication tab, however, this particular VPN client is asking for a Local Identity and a Remote Identity. The options for Local Identity Type are: Fully Qualified Domain Name, User Fully Qualified Domain Name, IP Address, Key Identifier. The options for Remote Identity are: Any, Fully Qualified Domain Name, User Fully Qualified Domain Name, IP Address, Key Identifier. My current thinking is that I can use the Fully Qualified Domain Name provided by the remote firewall for the Remote Identity, but I do not know what it wants for the Local Identity. Just to stress: I am not trying to set up a site-to-site VPN. Can anybody shed any light on what I am missing here? A screenshot can be provided if that would be helpful. The current error I am getting during the connection is: IKE Responder: Proposed IKE ID mismatch

    Read the article

  • PowerShell Code Snippets for SharePoint2010 Developers

    - by ybbest
    Install solution to SharePoint Farm and activate Feature to a site collection #Please specify the solution package path. $SolutionPackagePath = “C:\ybbest\myForm.xsn” Add-SPSolution -LiteralPath $SolutionPackagePath #Please specify the site collection url. $SiteCollectionUrl=”http:// ybbest /” # Install the solution package to the SharePoint Farm Install-SPSolution -Identity ybbest.wsp -GACDeployment #Activate features in the solution package to a Site Collection Enable-SPFeature -Identity 8ed800a2-3494-4cba-adf1-ed8714cb062d -Url $SiteCollectionUrl Retract solution from SharePoint Farm and deactivate Feature to a site collection #Deactivate features from a Site Collection Disable-SPFeature -Identity 8ed800a2-3494-4cba-adf1-ed8714cb062d -Url http:// ybbest / # Uninstall the solution package to the SharePoint Farm Uninstall-SPSolution -Identity ybbest.wsp # Remove the solution package to the SharePoint Farm Remove-SPSolution -Identity ybbest.wsp Install Admin Approved InfoPath form #Please specify the template path. $InfopathFormTemplatePath = “C:\ybbest\myForm.xsn” #Please specify the site collection url. $SiteCollectionUrl=”http:// ybbest /” #Install InfoPath to the SharePoint Farm $formTemplate=Install-SPInfoPathFormTemplate -Path $InfopathFormTemplatePath #Activate InfoPath form to Site Collection Enable-SPInfoPathFormTemplate -Identity $formTemplate -Site $SiteCollectionUrl References http://technet.microsoft.com/en-us/library/ee806878.aspx http://www.wssdemo.com/Lists/PowerShell/Commands.aspx

    Read the article

  • Can't ssh to instance

    - by megas
    I have a linode instance, I was successfully connecting to it via ssh. But I've decided to rebuild my instance and then I can not connect to that instance via ssh. The linode works correctly because I can get access via Lish (lonode ssh) I've tried to clear known_hosts with: ssh-keygen -R 212.71.xxx.xx But I still getting message: ssh [email protected] -v OpenSSH_5.9p1 Debian-5ubuntu1.1, OpenSSL 1.0.1 14 Mar 2012 debug1: Reading configuration data /etc/ssh/ssh_config debug1: /etc/ssh/ssh_config line 19: Applying options for * debug1: Connecting to 212.71.238.74 [212.71.238.74] port 22. debug1: Connection established. debug1: identity file /home/megas/.ssh/id_rsa type 1 debug1: Checking blacklist file /usr/share/ssh/blacklist.RSA-2048 debug1: Checking blacklist file /etc/ssh/blacklist.RSA-2048 debug1: identity file /home/megas/.ssh/id_rsa-cert type -1 debug1: identity file /home/megas/.ssh/id_dsa type -1 debug1: identity file /home/megas/.ssh/id_dsa-cert type -1 debug1: identity file /home/megas/.ssh/id_ecdsa type -1 debug1: identity file /home/megas/.ssh/id_ecdsa-cert type -1 debug1: Remote protocol version 2.0, remote software version OpenSSH_5.9p1 Debian-5ubuntu1.1 debug1: match: OpenSSH_5.9p1 Debian-5ubuntu1.1 pat OpenSSH* debug1: Enabling compatibility mode for protocol 2.0 debug1: Local version string SSH-2.0-OpenSSH_5.9p1 Debian-5ubuntu1.1 debug1: SSH2_MSG_KEXINIT sent debug1: SSH2_MSG_KEXINIT received debug1: kex: server->client aes128-ctr hmac-md5 none debug1: kex: client->server aes128-ctr hmac-md5 none debug1: sending SSH2_MSG_KEX_ECDH_INIT debug1: expecting SSH2_MSG_KEX_ECDH_REPLY debug1: Server host key: ECDSA c5:c3:a7:c0:5a:25:a1:64:c4:04:0c:42:bb:46:f6:96 debug1: Host '212.71.238.74' is known and matches the ECDSA host key. debug1: Found key in /home/megas/.ssh/known_hosts:1 debug1: ssh_ecdsa_verify: signature correct debug1: SSH2_MSG_NEWKEYS sent debug1: expecting SSH2_MSG_NEWKEYS debug1: SSH2_MSG_NEWKEYS received debug1: Roaming not allowed by server debug1: SSH2_MSG_SERVICE_REQUEST sent debug1: SSH2_MSG_SERVICE_ACCEPT received debug1: Authentications that can continue: publickey,password debug1: Next authentication method: publickey debug1: Offering RSA public key: /home/megas/.ssh/id_rsa debug1: Authentications that can continue: publickey,password debug1: Trying private key: /home/megas/.ssh/id_dsa debug1: Trying private key: /home/megas/.ssh/id_ecdsa debug1: No more authentication methods to try. Permission denied (publickey,password). How to resolve this problem? Thanks

    Read the article
