Search Results


  • nginx reverse proxy cannot access Apache virtual hosts

    - by Sc0rian
    I am setting up nginx as a reverse proxy. The server runs DirectAdmin on a LAMP stack. I have nginx running on port 81, and I can access all my sites (including the virtual IPs) on port 81. However, when I forward traffic from port 80 to 81, the virtual IPs show a page saying "Apache is running normally". The main server IP is fine, and I can still access the virtual IPs on 81.

    [root@~]# netstat -an | grep LISTEN | egrep ":80|:81"
    tcp 0 0 <virtual ip>:81 0.0.0.0:* LISTEN
    tcp 0 0 <virtual ip>:81 0.0.0.0:* LISTEN
    tcp 0 0 <serverip>:81 0.0.0.0:* LISTEN
    tcp 0 0 :::80 :::* LISTEN

    apache 24090 0.6 1.3 29252 13612 ? S 18:34 0:00 /usr/sbin/httpd -k start -DSSL
    apache 24092 0.9 2.1 39584 22056 ? S 18:34 0:00 /usr/sbin/httpd -k start -DSSL
    apache 24096 0.2 1.9 35892 20256 ? S 18:34 0:00 /usr/sbin/httpd -k start -DSSL
    apache 24120 0.3 1.7 35752 17840 ? S 18:34 0:00 /usr/sbin/httpd -k start -DSSL
    apache 24495 0.0 1.4 30892 14756 ? S 18:35 0:00 /usr/sbin/httpd -k start -DSSL
    apache 24496 1.0 2.1 39892 22164 ? S 18:35 0:00 /usr/sbin/httpd -k start -DSSL
    apache 24516 1.5 3.6 55496 38040 ? S 18:35 0:00 /usr/sbin/httpd -k start -DSSL
    apache 24519 0.1 1.2 28996 13224 ? S 18:35 0:00 /usr/sbin/httpd -k start -DSSL
    apache 24521 2.7 4.0 58244 41984 ? S 18:35 0:00 /usr/sbin/httpd -k start -DSSL
    apache 24522 0.0 1.2 29124 12672 ? S 18:35 0:00 /usr/sbin/httpd -k start -DSSL
    apache 24524 0.0 1.1 28740 12364 ? S 18:35 0:00 /usr/sbin/httpd -k start -DSSL
    apache 24535 1.1 1.7 36008 17876 ? S 18:35 0:00 /usr/sbin/httpd -k start -DSSL
    apache 24536 0.0 1.1 28592 12084 ? S 18:35 0:00 /usr/sbin/httpd -k start -DSSL
    apache 24537 0.0 1.1 28592 12112 ? S 18:35 0:00 /usr/sbin/httpd -k start -DSSL
    apache 24539 0.0 0.0 0 0 ? Z 18:35 0:00 [httpd] <defunct>
    apache 24540 0.0 1.1 28592 11540 ? S 18:35 0:00 /usr/sbin/httpd -k start -DSSL
    apache 24541 0.0 1.1 28592 11548 ? S 18:35 0:00 /usr/sbin/httpd -k start -DSSL
    root 24548 0.0 0.0 4132 752 pts/0 R+ 18:35 0:00 egrep apache|nginx
    root 28238 0.0 0.0 19576 284 ? Ss May29 0:00 nginx: master process /usr/local/nginx/sbin/nginx -c /usr/local/nginx/conf/nginx.conf
    apache 28239 0.0 0.0 19888 804 ? S May29 0:00 nginx: worker process
    apache 28240 0.0 0.0 19888 548 ? S May29 0:00 nginx: worker process
    apache 28241 0.0 0.0 19736 484 ? S May29 0:00 nginx: cache manager process

    Here is my nginx conf (cat /usr/local/nginx/conf/nginx.conf):

    user apache apache;
    worker_processes 2; # Set it according to what your CPU has: 4 cores = 4
    worker_rlimit_nofile 8192;
    pid /var/run/nginx.pid;
    events {
        worker_connections 1024;
    }
    http {
        include mime.types;
        default_type application/octet-stream;
        log_format main '$remote_addr - $remote_user [$time_local] '
                        '"$request" $status $body_bytes_sent "$http_referer" '
                        '"$http_user_agent" "$http_x_forwarded_for"';
        server_tokens off;
        access_log /var/log/nginx_access.log main;
        error_log /var/log/nginx_error.log debug;
        server_names_hash_bucket_size 64;
        sendfile on;
        tcp_nopush on;
        tcp_nodelay off;
        keepalive_timeout 30;
        gzip on;
        gzip_comp_level 9;
        gzip_proxied any;
        proxy_buffering on;
        proxy_cache_path /usr/local/nginx/proxy_temp levels=1:2 keys_zone=one:15m inactive=7d max_size=1000m;
        proxy_buffer_size 16k;
        proxy_buffers 100 8k;
        proxy_connect_timeout 60;
        proxy_send_timeout 60;
        proxy_read_timeout 60;
        server {
            listen <server ip>:81 default rcvbuf=8192 sndbuf=16384 backlog=32000; # Real IP here
            server_name <server host name> _; # "_" handles all hosts that are not described by server_name
            charset off;
            access_log /var/log/nginx_host_general.access.log main;
            location / {
                proxy_set_header Host $host;
                proxy_set_header X-Real-IP $remote_addr;
                proxy_set_header X-Forwarded-For $proxy_add_x_forwarded_for;
                proxy_pass http://<server ip>; # Real IP here
                client_max_body_size 16m;
                client_body_buffer_size 128k;
                proxy_buffering on;
                proxy_connect_timeout 90;
                proxy_send_timeout 90;
                proxy_read_timeout 120;
                proxy_buffer_size 16k;
                proxy_buffers 32 32k;
                proxy_busy_buffers_size 64k;
                proxy_temp_file_write_size 64k;
            }
            location /nginx_status {
                stub_status on;
                access_log off;
                allow 127.0.0.1;
                deny all;
            }
        }
        include /usr/local/nginx/vhosts/*.conf;
    }

    Here is my vhost conf (# cat /usr/local/nginx/vhosts/1.conf):

    server {
        listen <virt ip>:81 default rcvbuf=8192 sndbuf=16384 backlog=32000; # Real IP here
        server_name <virt domain name>.com ; # "_" handles all hosts that are not described by server_name
        charset off;
        access_log /var/log/nginx_host_general.access.log main;
        location / {
            proxy_set_header Host $host;
            proxy_set_header X-Real-IP $remote_addr;
            proxy_set_header X-Forwarded-For $proxy_add_x_forwarded_for;
            proxy_pass http://<virt ip>; # Real IP here
            client_max_body_size 16m;
            client_body_buffer_size 128k;
            proxy_buffering on;
            proxy_connect_timeout 90;
            proxy_send_timeout 90;
            proxy_read_timeout 120;
            proxy_buffer_size 16k;
            proxy_buffers 32 32k;
            proxy_busy_buffers_size 64k;
            proxy_temp_file_write_size 64k;
        }
    }

    Apache config:

    <VirtualHost xxxxxx:80 >
        ServerName www.<domain>.com
        ServerAlias www.<domain>.com <domain>.com
        ServerAdmin webmaster@<domain>.com
        DocumentRoot /home/<domain>/domains/<domain>.com/public_html
        ScriptAlias /cgi-bin/ /home/<domain>/domains/<domain>.com/public_html/cgi-bin/
        UseCanonicalName OFF
        <IfModule !mod_ruid2.c>
            SuexecUserGroup <domain> <domain>
        </IfModule>
        <IfModule mod_ruid2.c>
            RMode config
            RUidGid <domain> <domain>
            RGroups apache access
        </IfModule>
        CustomLog /var/log/httpd/domains/<domain>.com.bytes bytes
        CustomLog /var/log/httpd/domains/<domain>.com.log combined
        ErrorLog /var/log/httpd/domains/<domain>.com.error.log
        <Directory /home/<domain>/domains/<domain>.com/public_html>
            Options +Includes -Indexes
            php_admin_flag engine ON
            php_admin_value sendmail_path '/usr/sbin/sendmail -t -i -f <domain>@<domain>.com'
        </Directory>
    </VirtualHost>

    The name-based virtual host listing:

    <virtual ip address>:80 is a NameVirtualHost
        default server www.xx.com (/usr/local/directadmin/data/users/xx/httpd.conf:16)
        port 80 namevhost www.xx.com (/usr/local/directadmin/data/users/xx/httpd.conf:16)
        port 80 namevhost www.xx.co.uk (/usr/local/directadmin/data/users/xx/httpd.conf:107)
        port 80 namevhost www.xx.co.uk (/usr/local/directadmin/data/users/xx/httpd.conf:151)
        port 80 namevhost www.xx.co.uk (/usr/local/directadmin/data/users/xx/httpd.conf:195)
    <virtual ip address>:443 is a NameVirtualHost
        default server www.xx.com (/usr/local/directadmin/data/users/xx/httpd.conf:61)
        port 443 namevhost www.xx.com (/usr/local/directadmin/data/users/xx/httpd.conf:61)
    <server ip>:80 is a NameVirtualHost
        default server localhost (/etc/httpd/conf/extra/httpd-vhosts.conf:29)
        port 80 namevhost localhost (/etc/httpd/conf/extra/httpd-vhosts.conf:29)
        port 80 namevhost www.xx.co.uk (/usr/local/directadmin/data/users/admin/httpd.conf:16)
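
    For context, the 80-to-81 forward itself isn't shown above. A common way to do it on a setup like this is an iptables REDIRECT in the nat table; the rule below is a sketch of that typical approach, not the asker's actual configuration:

        # Redirect inbound HTTP to nginx on port 81. PREROUTING only applies to
        # packets arriving from the network, so nginx's own proxy_pass connections
        # to Apache on port 80 are not redirected back into nginx.
        iptables -t nat -A PREROUTING -p tcp --dport 80 -j REDIRECT --to-ports 81

    Note that a rule like this matches any destination IP, so requests to the virtual IPs get handed to nginx the same way as requests to the main server IP.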


  • cannot access localhost using IP

    - by Robert
    I have done a small web development project using Eclipse. It runs well in the browser with the URL localhost:8080/myproject/home.html. But if I want to access it from another machine (laptop, mobile, etc. on the same wifi) it is not possible; it is not able to connect. After Googling for a while, I found out that I have to use the IP address instead of 'localhost'. So I tried 10.0.0.4:8080/myproject/home.html, but it still does not work. In fact, I am unable to open that URL even on the same machine (where localhost:8080/myproject/home.html works fine). I also added a new Inbound rule in the Control Panel firewall settings, allowing access to all ports for protocol TCP. I still cannot run the application with the URL 10.0.0.4:8080/myproject/home.html (on the same machine as well as from the laptop and mobile). FYI, I am using Eclipse Indigo and Apache Tomcat 6.0, and the server.xml contents are as below:

    <?xml version="1.0" encoding="UTF-8"?>
    <!-- Licensed to the Apache Software Foundation (ASF) under one or more contributor
         license agreements. See the NOTICE file distributed with this work for additional
         information regarding copyright ownership. The ASF licenses this file to You under
         the Apache License, Version 2.0 (the "License"); you may not use this file except
         in compliance with the License. You may obtain a copy of the License at
         http://www.apache.org/licenses/LICENSE-2.0
         Unless required by applicable law or agreed to in writing, software distributed
         under the License is distributed on an "AS IS" BASIS, WITHOUT WARRANTIES OR
         CONDITIONS OF ANY KIND, either express or implied. See the License for the
         specific language governing permissions and limitations under the License. -->
    <!-- Note: A "Server" is not itself a "Container", so you may not define subcomponents
         such as "Valves" at this level. Documentation at /docs/config/server.html -->
    <Server port="8005" shutdown="SHUTDOWN">
      <!-- APR library loader. Documentation at /docs/apr.html -->
      <Listener SSLEngine="on" className="org.apache.catalina.core.AprLifecycleListener"/>
      <!-- Initialize Jasper prior to webapps being loaded. Documentation at /docs/jasper-howto.html -->
      <Listener className="org.apache.catalina.core.JasperListener"/>
      <!-- Prevent memory leaks due to use of particular java/javax APIs -->
      <Listener className="org.apache.catalina.core.JreMemoryLeakPreventionListener"/>
      <!-- JMX Support for the Tomcat server. Documentation at /docs/non-existent.html -->
      <Listener className="org.apache.catalina.mbeans.ServerLifecycleListener"/>
      <Listener className="org.apache.catalina.mbeans.GlobalResourcesLifecycleListener"/>
      <!-- Global JNDI resources. Documentation at /docs/jndi-resources-howto.html -->
      <GlobalNamingResources>
        <!-- Editable user database that can also be used by UserDatabaseRealm to authenticate users -->
        <Resource auth="Container" description="User database that can be updated and saved"
                  factory="org.apache.catalina.users.MemoryUserDatabaseFactory"
                  name="UserDatabase" pathname="conf/tomcat-users.xml"
                  type="org.apache.catalina.UserDatabase"/>
      </GlobalNamingResources>
      <!-- A "Service" is a collection of one or more "Connectors" that share a single
           "Container". Note: A "Service" is not itself a "Container", so you may not define
           subcomponents such as "Valves" at this level.
           Documentation at /docs/config/service.html -->
      <Service name="Catalina">
        <!-- The connectors can use a shared executor; you can define one or more named thread pools -->
        <!-- <Executor name="tomcatThreadPool" namePrefix="catalina-exec-"
                       maxThreads="150" minSpareThreads="4"/> -->
        <!-- A "Connector" represents an endpoint by which requests are received and
             responses are returned. Documentation at:
             Java HTTP Connector: /docs/config/http.html (blocking & non-blocking)
             Java AJP Connector: /docs/config/ajp.html
             APR (HTTP/AJP) Connector: /docs/apr.html
             Define a non-SSL HTTP/1.1 Connector on port 8080 -->
        <Connector port="8080" protocol="HTTP/1.1" address="10.0.0.4"
                   connectionTimeout="20000" redirectPort="8443" />
        <!-- A "Connector" using the shared thread pool -->
        <!-- <Connector executor="tomcatThreadPool" port="8080" protocol="HTTP/1.1"
                        connectionTimeout="20000" redirectPort="8443" /> -->
        <!-- Define a SSL HTTP/1.1 Connector on port 8443. This connector uses the JSSE
             configuration; when using APR, the connector should be using the OpenSSL
             style configuration described in the APR documentation -->
        <!-- <Connector port="8443" protocol="HTTP/1.1" SSLEnabled="true"
                        maxThreads="150" scheme="https" secure="true"
                        clientAuth="false" sslProtocol="TLS" /> -->
        <!-- Define an AJP 1.3 Connector on port 8009 -->
        <Connector port="8009" protocol="AJP/1.3" redirectPort="8443"/>
        <!-- An Engine represents the entry point (within Catalina) that processes every
             request. The Engine implementation for Tomcat stand alone analyzes the HTTP
             headers included with the request, and passes them on to the appropriate
             Host (virtual host). Documentation at /docs/config/engine.html -->
        <!-- You should set jvmRoute to support load-balancing via AJP, i.e.:
             <Engine name="Catalina" defaultHost="localhost" jvmRoute="jvm1"> -->
        <Engine defaultHost="localhost" name="Catalina">
          <!-- For clustering, please take a look at documentation at:
               /docs/cluster-howto.html (simple how to)
               /docs/config/cluster.html (reference documentation) -->
          <!-- <Cluster className="org.apache.catalina.ha.tcp.SimpleTcpCluster"/> -->
          <!-- The request dumper valve dumps useful debugging information about the
               request and response data received and sent by Tomcat.
               Documentation at /docs/config/valve.html -->
          <!-- <Valve className="org.apache.catalina.valves.RequestDumperValve"/> -->
          <!-- This Realm uses the UserDatabase configured in the global JNDI resources
               under the key "UserDatabase". Any edits that are performed against this
               UserDatabase are immediately available for use by the Realm. -->
          <Realm className="org.apache.catalina.realm.UserDatabaseRealm" resourceName="UserDatabase"/>
          <!-- Define the default virtual host.
               Note: XML Schema validation will not work with Xerces 2.2. -->
          <Host appBase="webapps" autoDeploy="true" name="localhost" unpackWARs="true"
                xmlNamespaceAware="false" xmlValidation="false">
            <!-- SingleSignOn valve, share authentication between web applications.
                 Documentation at /docs/config/valve.html -->
            <!-- <Valve className="org.apache.catalina.authenticator.SingleSignOn" /> -->
            <!-- Access log processes all example.
                 Documentation at /docs/config/valve.html -->
            <!-- <Valve className="org.apache.catalina.valves.AccessLogValve" directory="logs"
                        prefix="localhost_access_log." suffix=".txt"
                        pattern="common" resolveHosts="false"/> -->
            <Context docBase="myproject" path="/myproject" reloadable="true"
                     source="org.eclipse.jst.jee.server:myproject"/>
          </Host>
        </Engine>
      </Service>
    </Server>
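
    A note on that Connector, assuming the server.xml shown is the one actually in effect: Tomcat binds the HTTP connector only to the address given in its address attribute, so with address="10.0.0.4" nothing answers on localhost, and nothing answers on the LAN IP either if the machine's actual address is not (or is no longer, e.g. after a DHCP lease change) 10.0.0.4. Leaving the attribute off makes Tomcat 6 bind to all local interfaces, which is the usual way to make an app reachable both locally and from other devices; a minimal sketch of the same connector without the pinned address:

        <!-- Sketch: with no address attribute, the connector binds to all local
             interfaces (0.0.0.0), so localhost:8080 and <LAN IP>:8080 both work -->
        <Connector port="8080" protocol="HTTP/1.1"
                   connectionTimeout="20000" redirectPort="8443" />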


  • WCF WS-Security and WSE Nonce Authentication

    - by Rick Strahl
    WCF makes it fairly easy to access WS-* Web Services, except when you run into a service format that it doesn't support. Even then WCF provides a huge amount of flexibility to make the service clients work; however, finding the proper interfaces to make that happen is not easy, and the process is for the most part undocumented unless you're lucky enough to run into a blog, forum or StackOverflow post on the matter. This is definitely true for the Password Nonce as part of the WS-Security/WSE protocol, which is not natively supported in WCF. Specifically, I had a need to create a WCF message on the client that includes a WS-Security header that looks like this from their spec document:

    <soapenv:Header>
      <wsse:Security soapenv:mustUnderstand="1" xmlns:wsse="http://docs.oasis-open.org/wss/2004/01/oasis-200401-wss-wssecurity-secext-1.0.xsd">
        <wsse:UsernameToken wsu:Id="UsernameToken-8" xmlns:wsu="http://docs.oasis-open.org/wss/2004/01/oasis-200401-wss-wssecurity-utility-1.0.xsd">
          <wsse:Username>TeStUsErNaMe1</wsse:Username>
          <wsse:Password Type="http://docs.oasis-open.org/wss/2004/01/oasis-200401-wss-username-token-profile-1.0#PasswordText">TeStPaSsWoRd1</wsse:Password>
          <wsse:Nonce EncodingType="http://docs.oasis-open.org/wss/2004/01/oasis-200401-wss-soap-message-security-1.0#Base64Binary">f8nUe3YupTU5ISdCy3X9Gg==</wsse:Nonce>
          <wsu:Created>2011-05-04T19:01:40.981Z</wsu:Created>
        </wsse:UsernameToken>
      </wsse:Security>
    </soapenv:Header>

    Specifically, the Nonce and Created keys are what WCF doesn't create or have a built-in formatting for.

    Why is there a nonce?

    My first thought here was WTF? The username and password are there in clear text, so what does the Nonce accomplish? The Nonce and Created keys are part of the WSE Security specification and are meant to allow the server to detect and prevent replay attacks. The hashed nonce should be unique per request, which the server can store and check for before running another request, thus ensuring that a request is not replayed with exactly the same values.

    Basic Service Reference Import - not much Luck

    The first thing I did was to simply import the service as a Service Reference. The Add Service Reference import automatically detects that WS-Security is required and appropriately adds the WS-Security to the basicHttpBinding in the config file:

    <?xml version="1.0" encoding="utf-8" ?>
    <configuration>
      <system.serviceModel>
        <bindings>
          <basicHttpBinding>
            <binding name="RealTimeOnlineSoapBinding">
              <security mode="Transport" />
            </binding>
            <binding name="RealTimeOnlineSoapBinding1" />
          </basicHttpBinding>
        </bindings>
        <client>
          <endpoint address="https://notarealurl.com:443/services/RealTimeOnline"
                    binding="basicHttpBinding"
                    bindingConfiguration="RealTimeOnlineSoapBinding"
                    contract="RealTimeOnline.RealTimeOnline"
                    name="RealTimeOnline" />
        </client>
      </system.serviceModel>
    </configuration>

    If I run this as is, using code like this:

    var client = new RealTimeOnlineClient();
    client.ClientCredentials.UserName.UserName = "TheUsername";
    client.ClientCredentials.UserName.Password = "ThePassword";

    … I get nothing in terms of WS-Security headers. The request is sent, but the binding expects transport level security to be applied, rather than message level security.
    To fix this so that a WS-Security message header is sent, the security mode can be changed to:

    <security mode="TransportWithMessageCredential" />

    Now if I re-run, I at least get a WS-Security header, which looks like this:

    <s:Envelope xmlns:s="http://schemas.xmlsoap.org/soap/envelope/" xmlns:u="http://docs.oasis-open.org/wss/2004/01/oasis-200401-wss-wssecurity-utility-1.0.xsd">
      <s:Header>
        <o:Security s:mustUnderstand="1" xmlns:o="http://docs.oasis-open.org/wss/2004/01/oasis-200401-wss-wssecurity-secext-1.0.xsd">
          <u:Timestamp u:Id="_0">
            <u:Created>2012-11-24T02:55:18.011Z</u:Created>
            <u:Expires>2012-11-24T03:00:18.011Z</u:Expires>
          </u:Timestamp>
          <o:UsernameToken u:Id="uuid-18c215d4-1106-40a5-8dd1-c81fdddf19d3-1">
            <o:Username>TheUserName</o:Username>
            <o:Password Type="http://docs.oasis-open.org/wss/2004/01/oasis-200401-wss-username-token-profile-1.0#PasswordText">ThePassword</o:Password>
          </o:UsernameToken>
        </o:Security>
      </s:Header>

    Closer! Now the WS-Security header is there, along with a timestamp field (which might not be accepted by some services expecting WS-Security), but there's no Nonce or Created timestamp as required by my original service.

    Using a CustomBinding instead

    My next try was to go with a CustomBinding instead of basicHttpBinding, as it allows a bit more control over the protocol and transport configurations for the binding. Specifically, I can explicitly specify the message protocol(s) used. Using configuration file settings, here's what the config file looks like:

    <?xml version="1.0"?>
    <configuration>
      <system.serviceModel>
        <bindings>
          <customBinding>
            <binding name="CustomSoapBinding">
              <security includeTimestamp="false"
                        authenticationMode="UserNameOverTransport"
                        defaultAlgorithmSuite="Basic256"
                        requireDerivedKeys="false"
                        messageSecurityVersion="WSSecurity10WSTrustFebruary2005WSSecureConversationFebruary2005WSSecurityPolicy11BasicSecurityProfile10">
              </security>
              <textMessageEncoding messageVersion="Soap11"></textMessageEncoding>
              <httpsTransport maxReceivedMessageSize="2000000000"/>
            </binding>
          </customBinding>
        </bindings>
        <client>
          <endpoint address="https://notrealurl.com:443/services/RealTimeOnline"
                    binding="customBinding"
                    bindingConfiguration="CustomSoapBinding"
                    contract="RealTimeOnline.RealTimeOnline"
                    name="RealTimeOnline" />
        </client>
      </system.serviceModel>
      <startup>
        <supportedRuntime version="v4.0" sku=".NETFramework,Version=v4.0"/>
      </startup>
    </configuration>

    This ends up creating a cleaner header that's missing the timestamp field, which can cause problems for some services. The WS-Security header output generated with the above looks like this:

    <s:Header>
      <o:Security s:mustUnderstand="1" xmlns:o="http://docs.oasis-open.org/wss/2004/01/oasis-200401-wss-wssecurity-secext-1.0.xsd">
        <o:UsernameToken u:Id="uuid-291622ca-4c11-460f-9886-ac1c78813b24-1">
          <o:Username>TheUsername</o:Username>
          <o:Password Type="http://docs.oasis-open.org/wss/2004/01/oasis-200401-wss-username-token-profile-1.0#PasswordText">ThePassword</o:Password>
        </o:UsernameToken>
      </o:Security>
    </s:Header>

    This is closer, as it includes only the username and password. The key here is the protocol for WS-Security:

    messageSecurityVersion="WSSecurity10WSTrustFebruary2005WSSecureConversationFebruary2005WSSecurityPolicy11BasicSecurityProfile10"

    which explicitly specifies the protocol version. There are several variants of this specification, but none of them seem to support the nonce, unfortunately.
    This protocol does allow for optional omission of the Nonce and Created timestamp (which effectively makes those keys optional). Some services I tried that requested a Nonce actually worked with just this protocol where the default basicHttpBinding failed to connect, so this is a possible solution for access to some services. Unfortunately, for my target service that was not an option: the nonce has to be there.

    Creating Custom ClientCredentials

    As it turns out, WCF doesn't have support for the Digest Nonce as part of WS-Security, and so as far as I can tell there's no way to do it just with configuration settings. I did a bunch of research trying to find workarounds for this, and I did find a couple of entries on StackOverflow as well as on the MSDN forums. However, none of these are particularly clear, and I ended up using bits and pieces of several of them to arrive at a working solution in the end.

    http://stackoverflow.com/questions/896901/wcf-adding-nonce-to-usernametoken
    http://social.msdn.microsoft.com/Forums/en-US/wcf/thread/4df3354f-0627-42d9-b5fb-6e880b60f8ee

    The latter forum message is the more useful of the two (the last message on the thread in particular), and it has most of the information required to make this work. But it took some experimentation for me to get this right, so I'll recount the process here maybe a bit more comprehensively. In order for this to work, a number of classes have to be overridden:

    ClientCredentials
    ClientCredentialsSecurityTokenManager
    WSSecurityTokenSerializer

    The idea is that we need to create a custom ClientCredentials class to hold the custom properties so they can be set from the UI or via configuration settings. The TokenManager and TokenSerializer are mainly required to allow the custom credentials class to flow through the WCF pipeline and eventually provide custom serialization.
    Here are the three classes required and their full implementations:

    public class CustomCredentials : ClientCredentials
    {
        public CustomCredentials() { }

        protected CustomCredentials(CustomCredentials cc) : base(cc) { }

        public override System.IdentityModel.Selectors.SecurityTokenManager CreateSecurityTokenManager()
        {
            return new CustomSecurityTokenManager(this);
        }

        protected override ClientCredentials CloneCore()
        {
            return new CustomCredentials(this);
        }
    }

    public class CustomSecurityTokenManager : ClientCredentialsSecurityTokenManager
    {
        public CustomSecurityTokenManager(CustomCredentials cred) : base(cred) { }

        public override System.IdentityModel.Selectors.SecurityTokenSerializer CreateSecurityTokenSerializer(System.IdentityModel.Selectors.SecurityTokenVersion version)
        {
            return new CustomTokenSerializer(System.ServiceModel.Security.SecurityVersion.WSSecurity11);
        }
    }

    public class CustomTokenSerializer : WSSecurityTokenSerializer
    {
        public CustomTokenSerializer(SecurityVersion sv) : base(sv) { }

        protected override void WriteTokenCore(System.Xml.XmlWriter writer,
                                               System.IdentityModel.Tokens.SecurityToken token)
        {
            UserNameSecurityToken userToken = token as UserNameSecurityToken;

            string tokennamespace = "o";

            DateTime created = DateTime.Now;
            string createdStr = created.ToString("yyyy-MM-ddThh:mm:ss.fffZ");

            // unique Nonce value - encode with SHA-1 for 'randomness'
            // in theory the nonce could just be the GUID by itself
            string phrase = Guid.NewGuid().ToString();
            var nonce = GetSHA1String(phrase);

            // in this case password is plain text
            // for digest mode password needs to be encoded as:
            // PasswordAsDigest = Base64(SHA-1(Nonce + Created + Password))
            // and profile needs to change to
            //string password = GetSHA1String(nonce + createdStr + userToken.Password);
            string password = userToken.Password;

            writer.WriteRaw(string.Format(
                "<{0}:UsernameToken u:Id=\"" + token.Id +
                "\" xmlns:u=\"http://docs.oasis-open.org/wss/2004/01/oasis-200401-wss-wssecurity-utility-1.0.xsd\">" +
                "<{0}:Username>" + userToken.UserName + "</{0}:Username>" +
                "<{0}:Password Type=\"http://docs.oasis-open.org/wss/2004/01/oasis-200401-wss-username-token-profile-1.0#PasswordText\">" +
                password + "</{0}:Password>" +
                "<{0}:Nonce EncodingType=\"http://docs.oasis-open.org/wss/2004/01/oasis-200401-wss-soap-message-security-1.0#Base64Binary\">" +
                nonce + "</{0}:Nonce>" +
                "<u:Created>" + createdStr + "</u:Created></{0}:UsernameToken>",
                tokennamespace));
        }

        protected string GetSHA1String(string phrase)
        {
            SHA1CryptoServiceProvider sha1Hasher = new SHA1CryptoServiceProvider();
            byte[] hashedDataBytes = sha1Hasher.ComputeHash(Encoding.UTF8.GetBytes(phrase));
            return Convert.ToBase64String(hashedDataBytes);
        }
    }

    Realistically, only the CustomTokenSerializer has any significant code in it. The code there deals with actually serializing the custom credentials using low-level XML semantics by writing output into an XML writer. I can't take credit for this code - most of it comes from the MSDN forum post mentioned earlier. I made a few adjustments to simplify the nonce generation and also added some notes to allow for PasswordDigest generation. Per spec, the nonce is nothing more than a unique value that's supposed to be 'random'. I'm thinking that this value can be any string that's unique, and a GUID on its own probably would have sufficed. Comments on other posts that GUIDs can potentially be guessed are highly exaggerated, to say the least, IMHO.
    To satisfy even that aspect, though, I added the SHA-1 hashing and Base64 encoding to give a more random value that would be impossible to 'guess'. The original example from the forum post used another level of encoding and decoding to string in between, but that really didn't accomplish anything except extra overhead. The header output generated from this looks like this:

    <s:Header>
      <o:Security s:mustUnderstand="1" xmlns:o="http://docs.oasis-open.org/wss/2004/01/oasis-200401-wss-wssecurity-secext-1.0.xsd">
        <o:UsernameToken u:Id="uuid-f43d8b0d-0ebb-482e-998d-f544401a3c91-1" xmlns:u="http://docs.oasis-open.org/wss/2004/01/oasis-200401-wss-wssecurity-utility-1.0.xsd">
          <o:Username>TheUsername</o:Username>
          <o:Password Type="http://docs.oasis-open.org/wss/2004/01/oasis-200401-wss-username-token-profile-1.0#PasswordText">ThePassword</o:Password>
          <o:Nonce EncodingType="http://docs.oasis-open.org/wss/2004/01/oasis-200401-wss-soap-message-security-1.0#Base64Binary">PjVE24TC6HtdAnsf3U9c5WMsECY=</o:Nonce>
          <u:Created>2012-11-23T07:10:04.670Z</u:Created>
        </o:UsernameToken>
      </o:Security>
    </s:Header>

    which is exactly as it should be.

    Password Digest?

    In my case the password is passed in plain text over an SSL connection, so there's no digest required and I was done with the code above. Since I don't have a service handy that requires a password digest, I had no way of testing the code for the digest implementation, but here is how it is likely to work. If you need to pass a digest-encoded password, things are a little bit trickier. The password type namespace needs to change to:

    http://docs.oasis-open.org/wss/2004/01/oasis-200401-wss-username-token-profile-1.0#Digest

    and then the password value needs to be encoded. The format for password digest encoding is this:

    Base64(SHA-1(Nonce + Created + Password))

    and it can be handled in the code above with this code (that's commented in the snippet above):

    string password = GetSHA1String(nonce + createdStr + userToken.Password);

    The entire WriteTokenCore method for digest code looks like this:

    protected override void WriteTokenCore(System.Xml.XmlWriter writer,
                                           System.IdentityModel.Tokens.SecurityToken token)
    {
        UserNameSecurityToken userToken = token as UserNameSecurityToken;

        string tokennamespace = "o";

        DateTime created = DateTime.Now;
        string createdStr = created.ToString("yyyy-MM-ddThh:mm:ss.fffZ");

        // unique Nonce value - encode with SHA-1 for 'randomness'
        // in theory the nonce could just be the GUID by itself
        string phrase = Guid.NewGuid().ToString();
        var nonce = GetSHA1String(phrase);

        string password = GetSHA1String(nonce + createdStr + userToken.Password);

        writer.WriteRaw(string.Format(
            "<{0}:UsernameToken u:Id=\"" + token.Id +
            "\" xmlns:u=\"http://docs.oasis-open.org/wss/2004/01/oasis-200401-wss-wssecurity-utility-1.0.xsd\">" +
            "<{0}:Username>" + userToken.UserName + "</{0}:Username>" +
            "<{0}:Password Type=\"http://docs.oasis-open.org/wss/2004/01/oasis-200401-wss-username-token-profile-1.0#Digest\">" +
            password + "</{0}:Password>" +
            "<{0}:Nonce EncodingType=\"http://docs.oasis-open.org/wss/2004/01/oasis-200401-wss-soap-message-security-1.0#Base64Binary\">" +
            nonce + "</{0}:Nonce>" +
            "<u:Created>" + createdStr + "</u:Created></{0}:UsernameToken>",
            tokennamespace));
    }

    I had no service to connect to to try out Digest auth - if you end up needing it and get it to work, please drop a comment…
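
    Just to make that formula concrete, here's a minimal standalone sketch of the digest calculation, using the same string concatenation the serializer above uses. The input values are made-up samples, not output from a real service:

    using System;
    using System.Security.Cryptography;
    using System.Text;

    class DigestSample
    {
        static void Main()
        {
            // made-up sample inputs, just to exercise the formula
            string nonce = "PjVE24TC6HtdAnsf3U9c5WMsECY=";
            string created = "2012-11-23T07:10:04.670Z";
            string password = "ThePassword";

            // Base64(SHA-1(Nonce + Created + Password))
            using (var sha1 = SHA1.Create())
            {
                byte[] hash = sha1.ComputeHash(Encoding.UTF8.GetBytes(nonce + created + password));
                Console.WriteLine(Convert.ToBase64String(hash));
            }
        }
    }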
    How to use the custom Credentials

    The easiest way to use the custom credentials is to create the client in code. Here's a factory method I use to create an instance of my service client:

    public static RealTimeOnlineClient CreateRealTimeOnlineProxy(string url,
                                                                 string username,
                                                                 string password)
    {
        if (string.IsNullOrEmpty(url))
            url = "https://notrealurl.com:443/cows/services/RealTimeOnline";

        CustomBinding binding = new CustomBinding();

        var security = TransportSecurityBindingElement.CreateUserNameOverTransportBindingElement();
        security.IncludeTimestamp = false;
        security.DefaultAlgorithmSuite = SecurityAlgorithmSuite.Basic256;
        security.MessageSecurityVersion =
            MessageSecurityVersion.WSSecurity10WSTrustFebruary2005WSSecureConversationFebruary2005WSSecurityPolicy11BasicSecurityProfile10;

        var encoding = new TextMessageEncodingBindingElement();
        encoding.MessageVersion = MessageVersion.Soap11;

        var transport = new HttpsTransportBindingElement();
        transport.MaxReceivedMessageSize = 20000000; // 20 megs

        binding.Elements.Add(security);
        binding.Elements.Add(encoding);
        binding.Elements.Add(transport);

        RealTimeOnlineClient client = new RealTimeOnlineClient(binding,
            new EndpointAddress(url));

        // to use the full client credential with Nonce uncomment this code:
        // it looks like this might not be required - the service seems to work without it
        client.ChannelFactory.Endpoint.Behaviors.Remove<System.ServiceModel.Description.ClientCredentials>();
        client.ChannelFactory.Endpoint.Behaviors.Add(new CustomCredentials());

        client.ClientCredentials.UserName.UserName = username;
        client.ClientCredentials.UserName.Password = password;

        return client;
    }

    This returns a service client that's ready to call other service methods. The key item in this code is the ChannelFactory endpoint behavior modification that first removes the original ClientCredentials and then adds the new one. The ClientCredentials property on the client is read-only, and this is the way it has to be added.
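
    Calling it then looks like any other WCF proxy use; note that GetQuote below is a hypothetical stand-in, since the actual RealTimeOnline contract methods aren't shown here:

    var client = CreateRealTimeOnlineProxy(null, "TheUsername", "ThePassword");
    var result = client.GetQuote("MSFT"); // GetQuote is a placeholder for a real contract method
    client.Close();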
    Summary

    It's a bummer that WCF doesn't support WSE Security authentication with nonce values out of the box. From reading the comments in posts and articles while trying to find a solution, I gather that this feature was omitted by design, as the protocol is considered insecure. While I agree that plain text passwords are rarely a good idea, even when they go over a secured SSL connection as they do with WSE Security, there are unfortunately quite a few services (mostly Java services, I suspect) that use this protocol. I've run into this twice now, and trying to find a solution online I can see that this is not an isolated problem - many others seem to have struggled with it. There are about a dozen questions about this on StackOverflow, all with varying incomplete answers. Hopefully this post provides a little more coherent content in one place.

    Again I marvel at WCF and the breadth of support for protocol features it has in a single tool. And even when it can't handle something, there are ways to get it working via extensibility. But at the same time I marvel at how freaking difficult it is to arrive at these solutions. I mean, there's no way I could have ever figured this out on my own. It takes somebody working on the WCF team, or at least somebody very, very intricately involved in the innards of WCF, to figure out the interconnection of the various objects to do this from scratch. Luckily this is an older problem that has been discussed extensively online, and I was able to cobble together a solution from the online content. I'm glad it worked out that way, but it feels dirty and incomplete in that there's a whole learning path that was omitted to get here… Man am I glad I'm not dealing with SOAP services much anymore. REST service security, even when using some sort of federation, is a piece of cake by comparison :-) I'm sure once standards bodies get involved we'll be right back in security standard hell…

    © Rick Strahl, West Wind Technologies, 2005-2012
    Posted in WCF, Web Services


  • SQLAuthority News – SafePeak’s SQL Server Performance Contest – Winners

    - by pinaldave
    SafePeak, the automated SQL performance acceleration and performance tuning software vendor, announced the winners of their SQL Performance Contest 2011. The contest was quite unique: the writer of the best, most interesting and most community-liked "performance story" would win an expensive gadget. The judges were the community DBAs, who could participate by "Like"-ing stories and could also win expensive prizes. Robert Pearl, SQL MVP, was the contest supervisor. I liked most of the stories, so I contacted SafePeak and suggested participating in the give-away, and they gladly accepted. The winner of the best story is Jason Brimhall (USA), with a story about a proc with a fair amount of business logic. Congratulations Jason! The 3 participants who won the second prize, a $100 amazon.com gift card, are: Michael Corey (USA), Hakim Ali (USA) and Alex Bernal (USA). And 5 participants won a printed copy of my book (SQL Wait Stats Joes 2 Pros: SQL Performance Tuning Techniques Using Wait Statistics, Types & Queues): Patrick Kansa (USA), Wagner Bianchi (USA), Riyas.V.K (India), Farzana Patwa (USA) and Wagner Crivelini (Brazil). The winners are welcome to send SafePeak their mail address to receive the prizes (to "info 'at' safepeak.com"). The SafePeak team also asked me to welcome you all to continue sending stories, simply because they (and we all) like to read interesting stuff, as well as to send them ideas for future contests. You can do it from here: www.safepeak.com/SQL-Performance-Contest-2011/Submit-Story

    Congratulations to everybody! I found this very funny video about SafePeak; it looks like someone (maybe the vendor) played with videos once and created this non-commercial-like video. SafePeak dynamic caching is an immediate plug-n-play performance acceleration and scalability solution for cloud, hosted and business SQL Server applications. By caching the result sets of queries and stored procedures in memory, while keeping all those caches correct and up to date using unique patent-pending technology, SafePeak can fix the SQL performance problems and bottlenecks of most applications - most importantly, without actual code changes. By the way, I checked their website prior to this contest announcement and noticed that they are running a special end-of-year promotion these days, with discounts between 30% and 45%. Since the installation is quick and full testing can be done within a couple of days, those who have the need (performance problems) and budget leftovers should hurry. A free fully functional trial is here: www.safepeak.com/download, while those who want to start with a quote should ping here: www.safepeak.com/quote. Good luck!

    Reference: Pinal Dave (http://blog.SQLAuthority.com)

    Filed under: PostADay, SQL, SQL Authority, SQL Performance, SQL Puzzle, SQL Query, SQL Server, SQL Tips and Tricks, T SQL, Technology


  • Why do I disconnect every few seconds when using a USB wireless adapter?

    - by Rev3rse
    I know this site is for Ubuntu questions, but Mint and Ubuntu are very similar, and I had the same problem on Ubuntu too, so I think this is the right place for my question anyway. I don't have experience with drivers and such things. After installing Linux on my machine (I did a dist-upgrade, by the way) everything seemed to be great, because I didn't have to install any drivers. After a while I realized that my connection stops after a few minutes (it actually shows that I'm connected, but it's not), so I have to reconnect, and after a few minutes it disconnects again. I'm using an Alfa USB wireless adapter AWUS036H, and my Linux Mint version is 11. I think the driver I'm using is the Realtek one. I searched the Internet and found nothing. Below are the outputs of a few things people usually ask for. Note: I'm NOT using a laptop. dmesg: [19445.604448] [UFW BLOCK] IN=wlan0 OUT= MAC=00:c0:ca:44:62:d1:00:24:c8:4b:46:e0:08:00 SRC=2.174.220.77 DST=192.168.1.6 LEN=52 TOS=0x00 PREC=0x00 TTL=104 ID=10466 DF PROTO=TCP SPT=55150 DPT=6881 WINDOW=8192 RES=0x00 SYN URGP=0 [19448.164050] [UFW BLOCK] IN=wlan0 OUT= MAC=00:c0:ca:44:62:d1:00:24:c8:4b:46:e0:08:00 SRC=192.168.1.254 DST=192.168.1.6 LEN=56 TOS=0x00 PREC=0x00 TTL=255 ID=41982 PROTO=ICMP TYPE=3 CODE=0 [SRC=192.168.1.6 DST=91.189.88.33 LEN=52 TOS=0x00 PREC=0x00 TTL=63 ID=7566 DF PROTO=TCP INCOMPLETE [8 bytes] ] [19465.079565] [UFW BLOCK] IN=wlan0 OUT= MAC=00:c0:ca:44:62:d1:00:24:c8:4b:46:e0:08:00 SRC=80.128.216.31 DST=192.168.1.6 LEN=48 TOS=0x00 PREC=0x00 TTL=113 ID=5100 DF PROTO=TCP SPT=50169 DPT=6881 WINDOW=8192 RES=0x00 SYN URGP=0 [19486.270328] [UFW BLOCK] IN=wlan0 OUT= MAC=00:c0:ca:44:62:d1:00:24:c8:4b:46:e0:08:00 SRC=90.130.13.122 DST=192.168.1.6 LEN=48 TOS=0x00 PREC=0x00 TTL=109 ID=22207 PROTO=UDP SPT=6881 DPT=6881 LEN=28 [19497.480522] wlan0: deauthenticating from 00:24:c8:4b:46:e0 by local choice (reason=3) [19497.593276] cfg80211: All devices are disconnected, going to restore regulatory settings [19497.593282] cfg80211: Restoring regulatory settings [19497.593346] cfg80211: Calling CRDA to update world regulatory domain [19497.638740] cfg80211: Updating information on frequency 2412 MHz for a 20 MHz width channel with regulatory rule: [19497.638745] cfg80211: 2402000 KHz - 2472000 KHz @ KHz), (300 mBi, 2000 mBm) [19497.638749] cfg80211: Updating information on frequency 2417 MHz for a 20 MHz width channel with regulatory rule: [19497.638753] cfg80211: 2402000 KHz - 2472000 KHz @ KHz), (300 mBi, 2000 mBm) [19497.638756] cfg80211: Updating information on frequency 2422 MHz for a 20 MHz width channel with regulatory rule: [19497.638760] cfg80211: 2402000 KHz - 2472000 KHz @ KHz), (300 mBi, 2000 mBm) [19497.638763] cfg80211: Updating information on frequency 2427 MHz for a 20 MHz width channel with regulatory rule: [19497.638766] cfg80211: 2402000 KHz - 2472000 KHz @ KHz), (300 mBi, 2000 mBm) [19497.638770] cfg80211: Updating information on frequency 2432 MHz for a 20 MHz width channel with regulatory rule: [19497.638773] cfg80211: 2402000 KHz - 2472000 KHz @ KHz), (300 mBi, 2000 mBm) [19497.638776] cfg80211: Updating information on frequency 2437 MHz for a 20 MHz width channel with regulatory rule: [19497.638780] cfg80211: 2402000 KHz - 2472000 KHz @ KHz), (300 mBi, 2000 mBm) [19497.638783] cfg80211: Updating information on frequency 2442 MHz for a 20 MHz width channel with regulatory rule: [19497.638787] cfg80211: 2402000 KHz - 2472000 KHz @ KHz), (300 mBi, 2000 mBm) [19497.638790] cfg80211: Updating information on frequency 2447 MHz for a 20 MHz width channel with
regulatory rule: [19497.638794] cfg80211: 2402000 KHz - 2472000 KHz @ KHz), (300 mBi, 2000 mBm) [19497.638797] cfg80211: Updating information on frequency 2452 MHz for a 20 MHz width channel with regulatory rule: [19497.638801] cfg80211: 2402000 KHz - 2472000 KHz @ KHz), (300 mBi, 2000 mBm) [19497.638804] cfg80211: Updating information on frequency 2457 MHz for a 20 MHz width channel with regulatory rule: [19497.638807] cfg80211: 2402000 KHz - 2472000 KHz @ KHz), (300 mBi, 2000 mBm) [19497.638811] cfg80211: Updating information on frequency 2462 MHz for a 20 MHz width channel with regulatory rule: [19497.638814] cfg80211: 2402000 KHz - 2472000 KHz @ KHz), (300 mBi, 2000 mBm) [19497.638817] cfg80211: Updating information on frequency 2467 MHz for a 20 MHz width channel with regulatory rule: [19497.638821] cfg80211: 2457000 KHz - 2482000 KHz @ KHz), (300 mBi, 2000 mBm) [19497.638824] cfg80211: Updating information on frequency 2472 MHz for a 20 MHz width channel with regulatory rule: [19497.638828] cfg80211: 2457000 KHz - 2482000 KHz @ KHz), (300 mBi, 2000 mBm) [19497.638831] cfg80211: Updating information on frequency 2484 MHz for a 20 MHz width channel with regulatory rule: [19497.638835] cfg80211: 2474000 KHz - 2494000 KHz @ KHz), (300 mBi, 2000 mBm) [19497.638838] cfg80211: World regulatory domain updated: [19497.638841] cfg80211: (start_freq - end_freq @ bandwidth), (max_antenna_gain, max_eirp) [19497.638845] cfg80211: (2402000 KHz - 2472000 KHz @ 40000 KHz), (300 mBi, 2000 mBm) [19497.638848] cfg80211: (2457000 KHz - 2482000 KHz @ 20000 KHz), (300 mBi, 2000 mBm) [19497.638852] cfg80211: (2474000 KHz - 2494000 KHz @ 20000 KHz), (300 mBi, 2000 mBm) [19497.638855] cfg80211: (5170000 KHz - 5250000 KHz @ 40000 KHz), (300 mBi, 2000 mBm) [19497.638859] cfg80211: (5735000 KHz - 5835000 KHz @ 40000 KHz), (300 mBi, 2000 mBm) [19513.145150] wlan0: authenticate with 00:24:c8:4b:46:e0 (try 1) [19513.146910] wlan0: authenticated [19513.252775] wlan0: associate with 00:24:c8:4b:46:e0 (try 1) [19513.255149] wlan0: RX AssocResp from 00:24:c8:4b:46:e0 (capab=0x411 status=0 aid=2) [19513.255154] wlan0: associated [19515.675091] [UFW BLOCK] IN=wlan0 OUT= MAC=00:c0:ca:44:62:d1:00:24:c8:4b:46:e0:08:00 SRC=91.79.8.40 DST=192.168.1.6 LEN=48 TOS=0x00 PREC=0x20 TTL=110 ID=42720 DF PROTO=TCP SPT=1945 DPT=6881 WINDOW=65535 RES=0x00 SYN URGP=0 [19525.684312] [UFW BLOCK] IN=wlan0 OUT= MAC=00:c0:ca:44:62:d1:00:24:c8:4b:46:e0:08:00 SRC=78.13.80.169 DST=192.168.1.6 LEN=48 TOS=0x00 PREC=0x00 TTL=109 ID=49890 DF PROTO=TCP SPT=53401 DPT=6881 WINDOW=16384 RES=0x00 SYN URGP=0 [19551.856766] [UFW BLOCK] IN=wlan0 OUT= MAC=00:c0:ca:44:62:d1:00:24:c8:4b:46:e0:08:00 SRC=85.228.39.93 DST=192.168.1.6 LEN=48 TOS=0x00 PREC=0x00 TTL=103 ID=1162 PROTO=UDP SPT=6881 DPT=6881 LEN=28 [19564.623005] [UFW BLOCK] IN=wlan0 OUT= MAC=00:c0:ca:44:62:d1:00:24:c8:4b:46:e0:08:00 SRC=90.202.21.238 DST=192.168.1.6 LEN=48 TOS=0x00 PREC=0x00 TTL=114 ID=17881 PROTO=UDP SPT=6881 DPT=6881 LEN=28 [19584.855364] [UFW BLOCK] IN=wlan0 OUT= MAC=00:c0:ca:44:62:d1:00:24:c8:4b:46:e0:08:00 SRC=2.49.151.87 DST=192.168.1.6 LEN=48 TOS=0x00 PREC=0x00 TTL=117 ID=31716 PROTO=UDP SPT=6881 DPT=6881 LEN=28 [19604.688647] [UFW BLOCK] IN=wlan0 OUT= MAC=00:c0:ca:44:62:d1:00:24:c8:4b:46:e0:08:00 SRC=109.225.124.155 DST=192.168.1.6 LEN=48 TOS=0x00 PREC=0x00 TTL=112 ID=6656 PROTO=UDP SPT=6881 DPT=6881 LEN=28 [19626.362529] [UFW BLOCK] IN=wlan0 OUT= MAC=00:c0:ca:44:62:d1:00:24:c8:4b:46:e0:08:00 SRC=81.184.50.41 DST=192.168.1.6 LEN=48 TOS=0x00 PREC=0x00 TTL=114 ID=23241 DF 
PROTO=TCP SPT=1416 DPT=6881 WINDOW=65535 RES=0x00 SYN URGP=0 [19645.040906] [UFW BLOCK] IN=wlan0 OUT= MAC=00:c0:ca:44:62:d1:00:24:c8:4b:46:e0:08:00 SRC=92.250.245.244 DST=192.168.1.6 LEN=48 TOS=0x00 PREC=0x00 TTL=51 ID=0 DF PROTO=TCP SPT=50061 DPT=6881 WINDOW=16384 RES=0x00 SYN URGP=0 [19665.212659] [UFW BLOCK] IN=wlan0 OUT= MAC=00:c0:ca:44:62:d1:00:24:c8:4b:46:e0:08:00 SRC=87.183.3.18 DST=192.168.1.6 LEN=52 TOS=0x00 PREC=0x00 TTL=111 ID=1689 DF PROTO=TCP SPT=62817 DPT=6881 WINDOW=8192 RES=0x00 SYN URGP=0 [19685.036415] [UFW BLOCK] IN=wlan0 OUT= MAC=00:c0:ca:44:62:d1:00:24:c8:4b:46:e0:08:00 SRC=78.13.80.169 DST=192.168.1.6 LEN=48 TOS=0x00 PREC=0x00 TTL=109 ID=50638 DF PROTO=TCP SPT=49624 DPT=6881 WINDOW=16384 RES=0x00 SYN URGP=0 [19705.487915] [UFW BLOCK] IN=wlan0 OUT= MAC=00:c0:ca:44:62:d1:00:24:c8:4b:46:e0:08:00 SRC=217.122.17.82 DST=192.168.1.6 LEN=56 TOS=0x00 PREC=0x00 TTL=112 ID=19070 DF PROTO=TCP SPT=54795 DPT=6881 WINDOW=8192 RES=0x00 SYN URGP=0 [19726.779185] [UFW BLOCK] IN=wlan0 OUT= MAC=00:c0:ca:44:62:d1:00:24:c8:4b:46:e0:08:00 SRC=80.88.116.239 DST=192.168.1.6 LEN=48 TOS=0x00 PREC=0x00 TTL=109 ID=32168 DF PROTO=TCP SPT=57330 DPT=6881 WINDOW=8192 RES=0x00 SYN URGP=0 [19744.755673] [UFW BLOCK] IN=wlan0 OUT= MAC=00:c0:ca:44:62:d1:00:24:c8:4b:46:e0:08:00 SRC=109.124.5.43 DST=192.168.1.6 LEN=48 TOS=0x00 PREC=0x00 TTL=113 ID=2288 DF PROTO=TCP SPT=6475 DPT=6881 WINDOW=65535 RES=0x00 SYN URGP=0 [19764.449183] [UFW BLOCK] IN=wlan0 OUT= MAC=00:c0:ca:44:62:d1:00:24:c8:4b:46:e0:08:00 SRC=79.216.35.19 DST=192.168.1.6 LEN=48 TOS=0x00 PREC=0x00 TTL=113 ID=4281 PROTO=UDP SPT=6881 DPT=6881 LEN=28 [19784.456189] [UFW BLOCK] IN=wlan0 OUT= MAC=00:c0:ca:44:62:d1:00:24:c8:4b:46:e0:08:00 SRC=81.82.25.149 DST=192.168.1.6 LEN=52 TOS=0x00 PREC=0x00 TTL=114 ID=1866 DF PROTO=TCP SPT=59507 DPT=6881 WINDOW=8192 RES=0x00 SYN URGP=0 [19804.836687] [UFW BLOCK] IN=wlan0 OUT= MAC=00:c0:ca:44:62:d1:00:24:c8:4b:46:e0:08:00 SRC=81.56.199.3 DST=192.168.1.6 LEN=48 TOS=0x00 PREC=0x00 TTL=108 ID=14749 PROTO=UDP SPT=6881 DPT=6881 LEN=28 [19824.812685] [UFW BLOCK] IN=wlan0 OUT= MAC=00:c0:ca:44:62:d1:00:24:c8:4b:46:e0:08:00 SRC=186.28.7.159 DST=192.168.1.6 LEN=48 TOS=0x00 PREC=0x00 TTL=107 ID=44686 PROTO=UDP SPT=23418 DPT=6881 LEN=28 [19847.683314] [UFW BLOCK] IN=wlan0 OUT= MAC=00:c0:ca:44:62:d1:00:24:c8:4b:46:e0:08:00 SRC=78.13.80.169 DST=192.168.1.6 LEN=48 TOS=0x00 PREC=0x00 TTL=108 ID=63046 DF PROTO=TCP SPT=52192 DPT=6881 WINDOW=16384 RES=0x00 SYN URGP=0 [19884.711455] [UFW BLOCK] IN=wlan0 OUT= MAC=00:c0:ca:44:62:d1:00:24:c8:4b:46:e0:08:00 SRC=84.146.24.238 DST=192.168.1.6 LEN=48 TOS=0x00 PREC=0x00 TTL=113 ID=27914 PROTO=UDP SPT=6881 DPT=6881 LEN=28 [19884.983589] [UFW BLOCK] IN=wlan0 OUT= MAC=00:c0:ca:44:62:d1:00:24:c8:4b:46:e0:08:00 SRC=2.107.130.61 DST=192.168.1.6 LEN=48 TOS=0x00 PREC=0x00 TTL=112 ID=7742 PROTO=UDP SPT=6881 DPT=6881 LEN=28 [19905.681078] [UFW BLOCK] IN=wlan0 OUT= MAC=00:c0:ca:44:62:d1:00:24:c8:4b:46:e0:08:00 SRC=95.21.11.121 DST=192.168.1.6 LEN=48 TOS=0x00 PREC=0x00 TTL=114 ID=31775 PROTO=UDP SPT=6881 DPT=6881 LEN=28 [19926.035707] [UFW BLOCK] IN=wlan0 OUT= MAC=00:c0:ca:44:62:d1:00:24:c8:4b:46:e0:08:00 SRC=109.76.132.55 DST=192.168.1.6 LEN=48 TOS=0x00 PREC=0x00 TTL=113 ID=28140 DF PROTO=TCP SPT=51905 DPT=6881 WINDOW=8192 RES=0x00 SYN URGP=0 [19945.668326] [UFW BLOCK] IN=wlan0 OUT= MAC=00:c0:ca:44:62:d1:00:24:c8:4b:46:e0:08:00 SRC=188.92.0.197 DST=192.168.1.6 LEN=48 TOS=0x00 PREC=0x00 TTL=113 ID=7865 PROTO=UDP SPT=6881 DPT=6881 LEN=28 [19967.200339] [UFW BLOCK] IN=wlan0 OUT= 
MAC=00:c0:ca:44:62:d1:00:24:c8:4b:46:e0:08:00 SRC=83.252.102.172 DST=192.168.1.6 LEN=52 TOS=0x00 PREC=0x00 TTL=105 ID=28408 DF PROTO=TCP SPT=63505 DPT=6881 WINDOW=8192 RES=0x00 SYN URGP=0 [19999.752732] [UFW BLOCK] IN=wlan0 OUT= MAC=00:c0:ca:44:62:d1:00:24:c8:4b:46:e0:08:00 SRC=79.166.171.200 DST=192.168.1.6 LEN=48 TOS=0x00 PREC=0x00 TTL=110 ID=36405 PROTO=UDP SPT=6881 DPT=6881 LEN=28 [20007.928719] [UFW BLOCK] IN=wlan0 OUT= MAC=00:c0:ca:44:62:d1:00:24:c8:4b:46:e0:08:00 SRC=79.235.59.16 DST=192.168.1.6 LEN=48 TOS=0x00 PREC=0x00 TTL=112 ID=46415 DF PROTO=TCP SPT=4537 DPT=6881 WINDOW=16384 RES=0x00 SYN URGP=0 [20026.181726] [UFW BLOCK] IN=wlan0 OUT= MAC=00:c0:ca:44:62:d1:00:24:c8:4b:46:e0:08:00 SRC=81.182.169.36 DST=192.168.1.6 LEN=48 TOS=0x00 PREC=0x00 TTL=106 ID=25126 PROTO=UDP SPT=6881 DPT=6881 LEN=28 [20048.845358] [UFW BLOCK] IN=wlan0 OUT= MAC=00:c0:ca:44:62:d1:00:24:c8:4b:46:e0:08:00 SRC=87.66.118.104 DST=192.168.1.6 LEN=48 TOS=0x00 PREC=0x00 TTL=111 ID=18068 DF PROTO=TCP SPT=49928 DPT=6881 WINDOW=8192 RES=0x00 SYN URGP=0 [20064.341857] [UFW BLOCK] IN=wlan0 OUT= MAC=00:c0:ca:44:62:d1:00:24:c8:4b:46:e0:08:00 SRC=77.2.63.153 DST=192.168.1.6 LEN=48 TOS=0x00 PREC=0x00 TTL=107 ID=7242 PROTO=UDP SPT=6881 DPT=6881 LEN=28 [20090.093490] [UFW BLOCK] IN=wlan0 OUT= MAC=00:c0:ca:44:62:d1:00:24:c8:4b:46:e0:08:00 SRC=93.16.17.210 DST=192.168.1.6 LEN=48 TOS=0x00 PREC=0x00 TTL=108 ID=894 PROTO=UDP SPT=6881 DPT=6881 LEN=28 [20104.443995] [UFW BLOCK] IN=wlan0 OUT= MAC=00:c0:ca:44:62:d1:00:24:c8:4b:46:e0:08:00 SRC=89.83.235.99 DST=192.168.1.6 LEN=52 TOS=0x00 PREC=0x00 TTL=114 ID=17295 DF PROTO=TCP SPT=58979 DPT=6881 WINDOW=8192 RES=0x00 SYN URGP=0 [20128.625374] [UFW BLOCK] IN=wlan0 OUT= MAC=00:c0:ca:44:62:d1:00:24:c8:4b:46:e0:08:00 SRC=81.62.91.79 DST=192.168.1.6 LEN=48 TOS=0x00 PREC=0x00 TTL=107 ID=21793 DF PROTO=TCP SPT=51446 DPT=6881 WINDOW=8192 RES=0x00 SYN URGP=0 [20151.055506] [UFW BLOCK] IN=wlan0 OUT= MAC=00:c0:ca:44:62:d1:00:24:c8:4b:46:e0:08:00 SRC=84.135.217.213 DST=192.168.1.6 LEN=52 TOS=0x00 PREC=0x00 TTL=112 ID=32452 DF PROTO=TCP SPT=55136 DPT=6881 WINDOW=8192 RES=0x00 SYN URGP=0 [20164.618874] [UFW BLOCK] IN=wlan0 OUT= MAC=00:c0:ca:44:62:d1:00:24:c8:4b:46:e0:08:00 SRC=91.79.8.40 DST=192.168.1.6 LEN=48 TOS=0x00 PREC=0x20 TTL=110 ID=47784 DF PROTO=TCP SPT=2422 DPT=6881 WINDOW=65535 RES=0x00 SYN URGP=0 [20184.337745] [UFW BLOCK] IN=wlan0 OUT= MAC=00:c0:ca:44:62:d1:00:24:c8:4b:46:e0:08:00 SRC=83.252.212.71 DST=192.168.1.6 LEN=48 TOS=0x00 PREC=0x00 TTL=107 ID=14544 PROTO=UDP SPT=6881 DPT=6881 LEN=28 [20205.007512] [UFW BLOCK] IN=wlan0 OUT= MAC=00:c0:ca:44:62:d1:00:24:c8:4b:46:e0:08:00 SRC=91.62.158.247 DST=192.168.1.6 LEN=48 TOS=0x00 PREC=0x00 TTL=110 ID=21562 DF PROTO=TCP SPT=3933 DPT=6881 WINDOW=65535 RES=0x00 SYN URGP=0 [20225.204018] [UFW BLOCK] IN=wlan0 OUT= MAC=00:c0:ca:44:62:d1:00:24:c8:4b:46:e0:08:00 SRC=84.146.24.238 DST=192.168.1.6 LEN=52 TOS=0x00 PREC=0x00 TTL=113 ID=15045 DF PROTO=TCP SPT=49630 DPT=6881 WINDOW=8192 RES=0x00 SYN URGP=0 [20244.842290] [UFW BLOCK] IN=wlan0 OUT= MAC=00:c0:ca:44:62:d1:00:24:c8:4b:46:e0:08:00 SRC=82.82.190.168 DST=192.168.1.6 LEN=48 TOS=0x00 PREC=0x00 TTL=112 ID=23741 DF PROTO=TCP SPT=50766 DPT=6881 WINDOW=8192 RES=0x00 SYN URGP=0 [20266.701649] [UFW BLOCK] IN=wlan0 OUT= MAC=00:c0:ca:44:62:d1:00:24:c8:4b:46:e0:08:00 SRC=88.153.108.124 DST=192.168.1.6 LEN=48 TOS=0x02 PREC=0x00 TTL=111 ID=206 DF PROTO=TCP SPT=2451 DPT=6881 WINDOW=65535 RES=0x00 SYN URGP=0 [20286.305414] [UFW BLOCK] IN=wlan0 OUT= MAC=00:c0:ca:44:62:d1:00:24:c8:4b:46:e0:08:00 SRC=78.240.86.73 
DST=192.168.1.6 LEN=52 TOS=0x00 PREC=0x00 TTL=107 ID=325 DF PROTO=TCP SPT=65184 DPT=6881 WINDOW=8192 RES=0x00 SYN URGP=0
[20294.293989] [UFW BLOCK] IN=wlan0 OUT= MAC=00:c0:ca:44:62:d1:00:24:c8:4b:46:e0:08:00 SRC=192.168.1.254 DST=192.168.1.6 LEN=56 TOS=0x00 PREC=0x00 TTL=255 ID=43133 PROTO=ICMP TYPE=3 CODE=0 [SRC=192.168.1.6 DST=91.189.88.33 LEN=52 TOS=0x00 PREC=0x00 TTL=63 ID=56899 DF PROTO=TCP INCOMPLETE [8 bytes] ]
[20294.297015] [UFW BLOCK] IN=wlan0 OUT= MAC=00:c0:ca:44:62:d1:00:24:c8:4b:46:e0:08:00 SRC=192.168.1.254 DST=192.168.1.6 LEN=56 TOS=0x00 PREC=0x00 TTL=255 ID=43134 PROTO=ICMP TYPE=3 CODE=0 [SRC=192.168.1.6 DST=91.189.88.40 LEN=52 TOS=0x00 PREC=0x00 TTL=63 ID=12080 DF PROTO=TCP INCOMPLETE [8 bytes] ]
[20294.297242] [UFW BLOCK] IN=wlan0 OUT= MAC=00:c0:ca:44:62:d1:00:24:c8:4b:46:e0:08:00 SRC=192.168.1.254 DST=192.168.1.6 LEN=56 TOS=0x00 PREC=0x00 TTL=255 ID=43135 PROTO=ICMP TYPE=3 CODE=0 [SRC=192.168.1.6 DST=91.189.88.33 LEN=52 TOS=0x00 PREC=0x00 TTL=63 ID=25195 DF PROTO=TCP INCOMPLETE [8 bytes] ]
[20295.478338] wlan0: deauthenticating from 00:24:c8:4b:46:e0 by local choice (reason=3)
[20295.552735] cfg80211: All devices are disconnected, going to restore regulatory settings
[20295.552742] cfg80211: Restoring regulatory settings
[20295.552748] cfg80211: Calling CRDA to update world regulatory domain
[20295.680635] cfg80211: Updating information on frequency 2412 MHz for a 20 MHz width channel with regulatory rule:
[20295.680641] cfg80211: 2402000 KHz - 2472000 KHz @ KHz), (300 mBi, 2000 mBm)
[20295.680644] cfg80211: Updating information on frequency 2417 MHz for a 20 MHz width channel with regulatory rule:
[20295.680648] cfg80211: 2402000 KHz - 2472000 KHz @ KHz), (300 mBi, 2000 mBm)
[20295.680652] cfg80211: Updating information on frequency 2422 MHz for a 20 MHz width channel with regulatory rule:
[20295.680655] cfg80211: 2402000 KHz - 2472000 KHz @ KHz), (300 mBi, 2000 mBm)
[20295.680658] cfg80211: Updating information on frequency 2427 MHz for a 20 MHz width channel with regulatory rule:
[20295.680662] cfg80211: 2402000 KHz - 2472000 KHz @ KHz), (300 mBi, 2000 mBm)
[20295.680665] cfg80211: Updating information on frequency 2432 MHz for a 20 MHz width channel with regulatory rule:
[20295.680669] cfg80211: 2402000 KHz - 2472000 KHz @ KHz), (300 mBi, 2000 mBm)
[20295.680672] cfg80211: Updating information on frequency 2437 MHz for a 20 MHz width channel with regulatory rule:
[20295.680676] cfg80211: 2402000 KHz - 2472000 KHz @ KHz), (300 mBi, 2000 mBm)
[20295.680679] cfg80211: Updating information on frequency 2442 MHz for a 20 MHz width channel with regulatory rule:
[20295.680683] cfg80211: 2402000 KHz - 2472000 KHz @ KHz), (300 mBi, 2000 mBm)
[20295.680687] cfg80211: Updating information on frequency 2447 MHz for a 20 MHz width channel with regulatory rule:
[20295.680690] cfg80211: 2402000 KHz - 2472000 KHz @ KHz), (300 mBi, 2000 mBm)
[20295.680693] cfg80211: Updating information on frequency 2452 MHz for a 20 MHz width channel with regulatory rule:
[20295.680697] cfg80211: 2402000 KHz - 2472000 KHz @ KHz), (300 mBi, 2000 mBm)
[20295.680700] cfg80211: Updating information on frequency 2457 MHz for a 20 MHz width channel with regulatory rule:
[20295.680704] cfg80211: 2402000 KHz - 2472000 KHz @ KHz), (300 mBi, 2000 mBm)
[20295.680708] cfg80211: Updating information on frequency 2462 MHz for a 20 MHz width channel with regulatory rule:
[20295.680711] cfg80211: 2402000 KHz - 2472000 KHz @ KHz), (300 mBi, 2000 mBm)
[20295.680715] cfg80211: Updating information on frequency 2467 MHz for a 20 MHz width channel with regulatory rule:
[20295.680718] cfg80211: 2457000 KHz - 2482000 KHz @ KHz), (300 mBi, 2000 mBm)
[20295.680722] cfg80211: Updating information on frequency 2472 MHz for a 20 MHz width channel with regulatory rule:
[20295.680725] cfg80211: 2457000 KHz - 2482000 KHz @ KHz), (300 mBi, 2000 mBm)
[20295.680728] cfg80211: Updating information on frequency 2484 MHz for a 20 MHz width channel with regulatory rule:
[20295.680732] cfg80211: 2474000 KHz - 2494000 KHz @ KHz), (300 mBi, 2000 mBm)
[20295.680736] cfg80211: World regulatory domain updated:
[20295.680738] cfg80211: (start_freq - end_freq @ bandwidth), (max_antenna_gain, max_eirp)
[20295.680742] cfg80211: (2402000 KHz - 2472000 KHz @ 40000 KHz), (300 mBi, 2000 mBm)
[20295.680745] cfg80211: (2457000 KHz - 2482000 KHz @ 20000 KHz), (300 mBi, 2000 mBm)
[20295.680749] cfg80211: (2474000 KHz - 2494000 KHz @ 20000 KHz), (300 mBi, 2000 mBm)
[20295.680752] cfg80211: (5170000 KHz - 5250000 KHz @ 40000 KHz), (300 mBi, 2000 mBm)
[20295.680756] cfg80211: (5735000 KHz - 5835000 KHz @ 40000 KHz), (300 mBi, 2000 mBm)
[20306.009341] wlan0: authenticate with 00:24:c8:4b:46:e0 (try 1)
[20306.011225] wlan0: authenticated
[20306.118095] wlan0: associate with 00:24:c8:4b:46:e0 (try 1)
[20306.120963] wlan0: RX AssocResp from 00:24:c8:4b:46:e0 (capab=0x411 status=0 aid=2)
[20306.120967] wlan0: associated
[20307.364427] [UFW BLOCK] IN=wlan0 OUT= MAC=00:c0:ca:44:62:d1:00:24:c8:4b:46:e0:08:00 SRC=87.91.101.130 DST=192.168.1.6 LEN=64 TOS=0x00 PREC=0x00 TTL=49 ID=36839 DF PROTO=TCP SPT=62492 DPT=6881 WINDOW=65535 RES=0x00 SYN URGP=0
[20310.914290] [UFW BLOCK] IN=wlan0 OUT= MAC=00:c0:ca:44:62:d1:00:24:c8:4b:46:e0:08:00 SRC=192.168.1.254 DST=192.168.1.6 LEN=56 TOS=0x00 PREC=0x00 TTL=255 ID=43180 PROTO=ICMP TYPE=3 CODE=0 [SRC=192.168.1.6 DST=91.189.88.33 LEN=52 TOS=0x00 PREC=0x00 TTL=63 ID=56900 DF PROTO=TCP INCOMPLETE [8 bytes] ]
[20310.936634] [UFW BLOCK] IN=wlan0 OUT= MAC=00:c0:ca:44:62:d1:00:24:c8:4b:46:e0:08:00 SRC=192.168.1.254 DST=192.168.1.6 LEN=56 TOS=0x00 PREC=0x00 TTL=255 ID=43181 PROTO=ICMP TYPE=3 CODE=0 [SRC=192.168.1.6 DST=91.189.88.40 LEN=52 TOS=0x00 PREC=0x00 TTL=63 ID=12081 DF PROTO=TCP INCOMPLETE [8 bytes] ]
[20310.939017] [UFW BLOCK] IN=wlan0 OUT= MAC=00:c0:ca:44:62:d1:00:24:c8:4b:46:e0:08:00 SRC=192.168.1.254 DST=192.168.1.6 LEN=56 TOS=0x00 PREC=0x00 TTL=255 ID=43182 PROTO=ICMP TYPE=3 CODE=0 [SRC=192.168.1.6 DST=91.189.88.33 LEN=52 TOS=0x00 PREC=0x00 TTL=63 ID=25196 DF PROTO=TCP INCOMPLETE [8 bytes] ]
[20325.941050] [UFW BLOCK] IN=wlan0 OUT= MAC=00:c0:ca:44:62:d1:00:24:c8:4b:46:e0:08:00 SRC=217.118.78.99 DST=192.168.1.6 LEN=48 TOS=0x00 PREC=0x00 TTL=113 ID=4407 PROTO=UDP SPT=2970 DPT=6881 LEN=28
[20328.801724] [UFW BLOCK] IN=wlan0 OUT= MAC=00:c0:ca:44:62:d1:00:24:c8:4b:46:e0:08:00 SRC=192.168.1.254 DST=192.168.1.6 LEN=56 TOS=0x00 PREC=0x00 TTL=255 ID=43196 PROTO=ICMP TYPE=3 CODE=0 [SRC=192.168.1.6 DST=91.189.88.33 LEN=52 TOS=0x00 PREC=0x00 TTL=63 ID=56901 DF PROTO=TCP INCOMPLETE [8 bytes] ]
...

inxi -N
Network: Card-1 Realtek RTL8101E/RTL8102E PCI Express Fast Ethernet controller driver r8169
         Card-2 Realtek RTL-8139/8139C/8139C+ driver 8139too

/usr/lib/linuxmint/mintWifi/mintWifi.py
-------------------------
* I. scanning WIFI PCI devices...
-------------------------
* II. querying ndiswrapper...
-------------------------
* III. querying iwconfig...
lo        no wireless extensions.
eth0      no wireless extensions.
eth1      no wireless extensions.
wlan0     IEEE 802.11bg  ESSID:"Home"
          Mode:Managed  Frequency:2.437 GHz  Access Point: 00:24:C8:4B:46:E0
          Bit Rate=54 Mb/s   Tx-Power=20 dBm
          Retry long limit:7   RTS thr:off   Fragment thr:off
          Power Management:off
          Link Quality=68/70  Signal level=-42 dBm
          Rx invalid nwid:0  Rx invalid crypt:0  Rx invalid frag:0
          Tx excessive retries:0  Invalid misc:1132   Missed beacon:0
-------------------------
* IV. querying ifconfig...
eth0      Link encap:Ethernet  HWaddr 00:1f:d0:c9:b8:8e
          UP BROADCAST MULTICAST  MTU:1500  Metric:1
          RX packets:0 errors:0 dropped:0 overruns:0 frame:0
          TX packets:0 errors:0 dropped:0 overruns:0 carrier:0
          collisions:0 txqueuelen:1000
          RX bytes:0 (0.0 B)  TX bytes:0 (0.0 B)
          Interrupt:43 Base address:0x4000
eth1      Link encap:Ethernet  HWaddr 00:0e:2e:77:88:16
          UP BROADCAST MULTICAST  MTU:1500  Metric:1
          RX packets:0 errors:0 dropped:0 overruns:0 frame:0
          TX packets:0 errors:0 dropped:0 overruns:0 carrier:0
          collisions:0 txqueuelen:1000
          RX bytes:0 (0.0 B)  TX bytes:0 (0.0 B)
          Interrupt:19 Base address:0xd000
lo        Link encap:Local Loopback
          inet addr:127.0.0.1  Mask:255.0.0.0
          inet6 addr: ::1/128 Scope:Host
          UP LOOPBACK RUNNING  MTU:16436  Metric:1
          RX packets:10696 errors:0 dropped:0 overruns:0 frame:0
          TX packets:10696 errors:0 dropped:0 overruns:0 carrier:0
          collisions:0 txqueuelen:0
          RX bytes:3823011 (3.8 MB)  TX bytes:3823011 (3.8 MB)
wlan0     Link encap:Ethernet  HWaddr 00:c0:ca:44:62:d1
          inet addr:192.168.1.6  Bcast:255.255.255.255  Mask:255.255.255.0
          inet6 addr: fe80::2c0:caff:fe44:62d1/64 Scope:Link
          UP BROADCAST RUNNING MULTICAST  MTU:1500  Metric:1
          RX packets:90424 errors:0 dropped:0 overruns:0 frame:0
          TX packets:65201 errors:0 dropped:0 overruns:0 carrier:0
          collisions:0 txqueuelen:1000
          RX bytes:98024465 (98.0 MB)  TX bytes:10345450 (10.3 MB)
-------------------------
* V. querying DHCP...

lspci
00:00.0 Host bridge: Intel Corporation 82G33/G31/P35/P31 Express DRAM Controller (rev 10)
00:01.0 PCI bridge: Intel Corporation 82G33/G31/P35/P31 Express PCI Express Root Port (rev 10)
00:1b.0 Audio device: Intel Corporation N10/ICH 7 Family High Definition Audio Controller (rev 01)
00:1c.0 PCI bridge: Intel Corporation N10/ICH 7 Family PCI Express Port 1 (rev 01)
00:1c.1 PCI bridge: Intel Corporation N10/ICH 7 Family PCI Express Port 2 (rev 01)
00:1d.0 USB Controller: Intel Corporation N10/ICH 7 Family USB UHCI Controller #1 (rev 01)
00:1d.1 USB Controller: Intel Corporation N10/ICH 7 Family USB UHCI Controller #2 (rev 01)
00:1d.2 USB Controller: Intel Corporation N10/ICH 7 Family USB UHCI Controller #3 (rev 01)
00:1d.3 USB Controller: Intel Corporation N10/ICH 7 Family USB UHCI Controller #4 (rev 01)
00:1d.7 USB Controller: Intel Corporation N10/ICH 7 Family USB2 EHCI Controller (rev 01)
00:1e.0 PCI bridge: Intel Corporation 82801 PCI Bridge (rev e1)
00:1f.0 ISA bridge: Intel Corporation 82801GB/GR (ICH7 Family) LPC Interface Bridge (rev 01)
00:1f.2 IDE interface: Intel Corporation N10/ICH7 Family SATA IDE Controller (rev 01)
00:1f.3 SMBus: Intel Corporation N10/ICH 7 Family SMBus Controller (rev 01)
01:00.0 VGA compatible controller: nVidia Corporation G96 [GeForce 9400 GT] (rev a1)
03:00.0 Ethernet controller: Realtek Semiconductor Co., Ltd. RTL8101E/RTL8102E PCI Express Fast Ethernet controller (rev 02)
04:01.0 Ethernet controller: Realtek Semiconductor Co., Ltd. RTL-8139/8139C/8139C+ (rev 10)

lsmod
Module                  Size  Used by
ipt_REJECT             12512  1
ipt_LOG                12784  5
xt_limit               12541  7
xt_tcpudp              12531  8
ipt_addrtype           12535  4
xt_state               12514  7
ip6table_filter        12711  1
ip6_tables             22545  1 ip6table_filter
nf_nat_irc             12542  0
nf_conntrack_irc       13138  1 nf_nat_irc
nf_nat_ftp             12548  0
nf_nat                 24827  2 nf_nat_irc,nf_nat_ftp
nf_conntrack_ipv4      19024  9 nf_nat
nf_defrag_ipv4         12649  1 nf_conntrack_ipv4
nf_conntrack_ftp       13106  1 nf_nat_ftp
nf_conntrack           69744  7 xt_state,nf_nat_irc,nf_conntrack_irc,nf_nat_ftp,nf_nat,nf_conntrack_ipv4,nf_conntrack_ftp
iptable_filter         12706  1
ip_tables              18125  1 iptable_filter
x_tables               21907  10 ipt_REJECT,ipt_LOG,xt_limit,xt_tcpudp,ipt_addrtype,xt_state,ip6table_filter,ip6_tables,iptable_filter,ip_tables
nls_utf8               12493  10
udf                    83795  1
crc_itu_t              12627  1 udf
usb_storage            43946  1
uas                    17676  0
snd_seq_dummy          12686  0
cryptd                 19801  0
aes_i586               16956  1
aes_generic            38023  1 aes_i586
binfmt_misc            13213  1
dm_crypt               22463  0
vesafb                 13449  1
nvidia               9766978  44
arc4                   12473  2
rtl8187                56206  0
mac80211              257001  1 rtl8187
cfg80211              156212  2 rtl8187,mac80211
ppdev                  12849  0
snd_hda_codec_realtek 255882  1
parport_pc             32111  1
psmouse                73312  0
eeprom_93cx6           12653  1 rtl8187
snd_hda_intel          24113  5
snd_hda_codec          90901  2 snd_hda_codec_realtek,snd_hda_intel
snd_hwdep              13274  1 snd_hda_codec
snd_pcm                80042  3 snd_hda_intel,snd_hda_codec
snd_seq_midi           13132  0
snd_rawmidi            25269  1 snd_seq_midi
snd_seq_midi_event     14475  1 snd_seq_midi
snd_seq                51291  3 snd_seq_dummy,snd_seq_midi,snd_seq_midi_event
snd_timer              28659  2 snd_pcm,snd_seq
snd_seq_device         14110  4 snd_seq_dummy,snd_seq_midi,snd_rawmidi,snd_seq
joydev                 17322  0
snd                    55295  18 snd_hda_codec_realtek,snd_hda_intel,snd_hda_codec,snd_hwdep,snd_pcm,snd_rawmidi,snd_seq,snd_timer,snd_seq_device
serio_raw              12990  0
soundcore              12600  1 snd
snd_page_alloc         14073  2 snd_hda_intel,snd_pcm
lp                     13349  0
parport                36746  3 ppdev,parport_pc,lp
usbhid                 41704  0
hid                    77084  1 usbhid
dm_raid45              88410  0
xor                    21860  1 dm_raid45
btrfs                 527388  0
zlib_deflate           26594  1 btrfs
libcrc32c              12543  1 btrfs
8139too                23208  0
8139cp                 22497  0
r8169                  42534  0
floppy                 60032  0

    Read the article

  • Special Activities in the OTN Lounge

    - by Bob Rhubart
    What is the OTN Lounge? It's the place for Oracle OpenWorld and JavaOne attendees to hang out, get off their feet, rest up between sessions, recharge a laptop, tablet, or phone, connect with other community members, pick the brains of subject matter experts and community leaders, enjoy some refreshments (coffee and soft drinks in the morning, beer in the afternoon), and avoid the crowds by watching keynote presentations on a plasma screen. But in addition to general chillaxin', the OTN Lounge also hosts several special activities throughout the week… OTN Lounge Special Activities Sunday Oracle Social Network Developer Challenge Kick-off (7:00pm - 8:30pm) Want to learn more about Oracle Social Network? Love working with APIs? Enter the Oracle Social Network Developer Challenge and build your dream integration with Oracle's secure, purposeful social network for business. Demonstrate your skills, work with the latest and greatest, and compete for $500 in Amazon gift cards. Go to theappslab.com/osnregisterr, read and agree to the terms and rules, register yourself with your name, corporate email address, and company, watch your inbox for a confirmation email from Oracle Social Network, start coding (individuals or teams welcome), and show off your work to the judges in the OTN Lounge on Wednesday, 4:00pm - 6:00pm. Monday (Lounge hours: 8:00am - 7:00pm) RAC Attack (9:00am - 1:00pm) Learn about Oracle Real Application Clusters (RAC) in this collaborative event. You'll work with experts from the IOUG RAC SIG to get an Oracle Database 11gR2 RAC cluster running inside a virtual machine. For more information: RAC Attack at Oracle OpenWorld (Pythian Blog), RAC Attack - Oracle Cluster Database at Home/Events (WikiBooks). Oracle Social Network Developer Challenge Office Hours (4:00pm - 8:00pm) Meet the people behind Oracle Social Network. Tuesday (Lounge hours: 8:00am - 7:00pm) RAC Attack (9:00am - 1:00pm); Oracle Social Network Developer Challenge Office Hours (4:30pm - 8:00pm); Oracle Database / Oracle Fusion Middleware Tweet Meet (4:30pm - 6:00pm) Free as in beer! Oracle Database and Oracle Fusion Middleware tweeters, gather in the OTN Lounge for refreshments and conversation with fellow tweeters and Oracle Database and Middleware experts. Wednesday (Lounge hours: 8:00am - 6:00pm) RAC Attack (9:00am - 1:00pm); Oracle Social Network Developer Challenge Judging (4:00pm - 6:00pm); Oracle ADF / Oracle Fusion Middleware Meet-up (4:30pm - 5:30pm) Join other Oracle ADF and Oracle Fusion Middleware developers and meet the product managers and engineers behind Oracle ADF, ADF Mobile, and ADF Essentials. Did we mention free beer? Thursday (Lounge hours: 8:00am - 2:00pm) RAC Attack (9:00am - 1:00pm) The OTN Lounge is located in the Howard St. tent, located by no small coincidence on Howard St. between 3rd and 4th, directly between Moscone North and Moscone South. An Oracle OpenWorld or JavaOne conference badge is required for access to the OTN Lounge.

    Read the article

  • View Maps and Get Directions in Google Chrome

    - by Asian Angel
    Every so often we all need to look at a map for reference purposes or to get directions. If you are looking for a great quick-reference app, then join us as we look at the Mini Google Maps extension for Google Chrome. Mini Google Maps in Action While this may look like a rather basic map extension, there is more to it than meets the eye at first glance. Here is the default view when you open Mini Google Maps for the first time. Things that we really liked about this extension: three different map views available (Map, Satellite, & Terrain); three different viewing sizes available (and the extension remembers your chosen size); and the ability to get directions in combination with a map. We decided to try each of the viewing sizes available… here you can see the “Medium Setting”. Notice that the scale stays the same but you get more territory to view. Then the “Large Setting”… which we infinitely preferred to the others. Once again, look at the amount of territory included by default… very nice. Switching over to the “Satellite View”… Followed by the “Terrain View”. For our first example we decided to peek at Vancouver, British Columbia. After zooming out a little bit we had a very nice looking map. For the next test we asked for directions from Vancouver to Toronto. Both the directions and the map turned out very well. And just for fun we looked up Paris, France with the “Satellite View”. Conclusion If you find yourself needing to view a map or get directions often, then the Mini Google Maps extension will be a very useful tool for you. Links Download the Mini Google Maps extension (Google Chrome Extensions)

    Read the article

  • SQL SERVER – Difference Between DATETIME and DATETIME2

    - by pinaldave
    Yesterday I wrote a very quick blog post on SQL SERVER – Difference Between GETDATE and SYSDATETIME, and it received a tremendous response. I suggest you read that blog post before continuing with today's post. I had asked people to honestly take part and share their views about the two system functions. There were a few emails, as well as a few comments on the blog post, asking how I came to know the difference between them. The answer is real-world issues. I was called in for a performance tuning consultancy where a developer asked me a very strange question. Here is the situation he was facing. The system had a single table with two different datetime columns: one column was datelastmodified and the second was datefirstmodified. One of the columns was DATETIME and the other was DATETIME2, and the developer was populating both with SYSDATETIME. He had always assumed that the values inserted into the table would be the same. The table was only accessed by INSERT statements; the application never ran updates against it. One fine day he ran DISTINCT on both of these columns and was in for a surprise. He had always thought both columns would hold the same data, but in fact they held very different data. He presented this scenario to me. I said this could not be possible, but when I looked at the resultset, I had to agree with him. Here is a simple script to demonstrate the problem he was facing (this is just a sample of the original table):

    DECLARE @Interval INT
    SET @Interval = 10000
    CREATE TABLE #TimeTable (FirstDate DATETIME, LastDate DATETIME2)
    WHILE (@Interval > 0)
    BEGIN
    INSERT #TimeTable (FirstDate, LastDate)
    VALUES (SYSDATETIME(), SYSDATETIME())
    SET @Interval = @Interval - 1
    END
    GO
    SELECT COUNT(DISTINCT FirstDate) D_GETDATE, COUNT(DISTINCT LastDate) D_SYSGETDATE
    FROM #TimeTable
    GO
    SELECT DISTINCT a.FirstDate, b.LastDate
    FROM #TimeTable a
    INNER JOIN #TimeTable b ON a.FirstDate = b.LastDate
    GO
    SELECT *
    FROM #TimeTable
    GO
    DROP TABLE #TimeTable
    GO

    Let us see the resultset. You can clearly see from the result that SYSDATETIME() does not populate the same value in both of the fields. In fact, the value is either rounded down or rounded up in the column which is DATETIME. Even though we are populating the same value, the values end up totally different in the two columns, causing the self join to fail and DISTINCT to report different values. The best policy is: if you are using DATETIME, use GETDATE(), and if you are using DATETIME2, use SYSDATETIME() to populate them with the current date and time, so that the precision is accurately addressed. As DATETIME2 was introduced in SQL Server 2008, the above script will only work with SQL Server 2008 and later versions. I hope I have answered a few of the questions asked yesterday. Reference: Pinal Dave (http://www.SQLAuthority.com) Filed under: Pinal Dave, SQL, SQL Authority, SQL DateTime, SQL Optimization, SQL Performance, SQL Query, SQL Scripts, SQL Server, SQL Tips and Tricks, T SQL, Technology
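    As an aside (this sketch is mine, not part of Pinal's original script): the collapse happens because DATETIME stores time in 1/300-second units, so values are rounded onto .000/.003/.007 millisecond boundaries, while DATETIME2 keeps up to 100 ns of precision. A minimal C# emulation of that rounding shows how two values that are distinct at DATETIME2 precision become equal at DATETIME precision:

    using System;

    class DateTimeRounding
    {
        // Emulates SQL Server DATETIME's documented storage granularity of
        // 1/300 of a second (values land on .000/.003/.007 boundaries).
        static DateTime ToSqlDateTimePrecision(DateTime value)
        {
            // Convert to 1/300-second units, rounding to the nearest unit.
            double units = value.Ticks * 300.0 / TimeSpan.TicksPerSecond;
            long rounded = (long)Math.Round(units, MidpointRounding.AwayFromZero);
            long ticks = (long)Math.Round(rounded * (double)TimeSpan.TicksPerSecond / 300.0);
            return new DateTime(ticks, value.Kind);
        }

        static void Main()
        {
            DateTime baseTime = new DateTime(2010, 1, 1, 0, 0, 0);
            DateTime a = baseTime.AddMilliseconds(3); // distinct at DATETIME2 precision
            DateTime b = baseTime.AddMilliseconds(4); // distinct at DATETIME2 precision

            // Both print the same .0033333 value: at DATETIME precision the two
            // collapse, which is why the DISTINCT counts above come out different.
            Console.WriteLine(ToSqlDateTimePrecision(a).ToString("HH:mm:ss.fffffff"));
            Console.WriteLine(ToSqlDateTimePrecision(b).ToString("HH:mm:ss.fffffff"));
        }
    }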

    Read the article

  • Windows Azure: Backup Services Release, Hyper-V Recovery Manager, VM Enhancements, Enhanced Enterprise Management Support

    - by ScottGu
    This morning we released a huge set of updates to Windows Azure.  These new capabilities include: Backup Services: General Availability of Windows Azure Backup Services Hyper-V Recovery Manager: Public preview of Windows Azure Hyper-V Recovery Manager Virtual Machines: Delete Attached Disks, Availability Set Warnings, SQL AlwaysOn Configuration Active Directory: Securely manage hundreds of SaaS applications Enterprise Management: Use Active Directory to Better Manage Windows Azure Windows Azure SDK 2.2: A massive update of our SDK + Visual Studio tooling support All of these improvements are now available to use immediately.  Below are more details about them. Backup Service: General Availability Release of Windows Azure Backup Today we are releasing Windows Azure Backup Service as a general availability service.  This release is now live in production, backed by an enterprise SLA, supported by Microsoft Support, and is ready to use for production scenarios. Windows Azure Backup is a cloud-based backup solution for Windows Server which allows files and folders to be backed up and recovered from the cloud, and provides off-site protection against data loss. The service provides IT administrators and developers with the option to back up and protect critical data in an easily recoverable way from any location with no upfront hardware cost. Windows Azure Backup is built on the Windows Azure platform and uses Windows Azure blob storage for storing customer data. Windows Server uses the downloadable Windows Azure Backup Agent to transfer file and folder data securely and efficiently to the Windows Azure Backup Service. Along with providing cloud backup for Windows Server, the Windows Azure Backup Service also provides the capability to back up data from System Center Data Protection Manager and Windows Server Essentials to the cloud. All data is encrypted onsite before it is sent to the cloud, and customers retain and manage the encryption key (meaning the data is stored fully encrypted and can’t be decrypted by anyone but yourself). Getting Started To get started with the Windows Azure Backup Service, create a new Backup Vault within the Windows Azure Management Portal by clicking New->Data Services->Recovery Services->Backup Vault. Once the backup vault is created you’ll be presented with a simple tutorial that will help guide you on how to register your Windows Servers with it. Once the servers you want to back up are registered, you can use the appropriate local management interface (such as the Microsoft Management Console snap-in, System Center Data Protection Manager Console, or Windows Server Essentials Dashboard) to configure the scheduled backups and to optionally initiate recoveries. You can follow these tutorials to learn more about how to do this: Tutorial: Schedule Backups Using the Windows Azure Backup Agent This tutorial helps you with setting up a backup schedule for your registered Windows Servers. Additionally, it also explains how to use Windows PowerShell cmdlets to set up a custom backup schedule. Tutorial: Recover Files and Folders Using the Windows Azure Backup Agent This tutorial helps you with recovering data from a backup. Additionally, it also explains how to use Windows PowerShell cmdlets to do the same tasks. Below are some of the key benefits the Windows Azure Backup Service provides: Simple configuration and management. Windows Azure Backup Service integrates with the familiar Windows Server Backup utility in Windows Server, the Data Protection Manager component in System Center and Windows Server Essentials, in order to provide a seamless backup and recovery experience to a local disk, or to the cloud. Block level incremental backups. The Windows Azure Backup Agent performs incremental backups by tracking file and block level changes and only transferring the changed blocks, hence reducing the storage and bandwidth utilization. Different point-in-time versions of the backups use storage efficiently by only storing the changed blocks between these versions. Data compression, encryption and throttling. The Windows Azure Backup Agent ensures that data is compressed and encrypted on the server before being sent to the Windows Azure Backup Service over the network. As a result, the Windows Azure Backup Service only stores encrypted data in the cloud storage. The encryption key is not available to the Windows Azure Backup Service, and as a result the data is never decrypted in the service. Also, users can set up throttling and configure how the Windows Azure Backup service utilizes the network bandwidth when backing up or restoring information. Data integrity is verified in the cloud. In addition to the secure backups, the backed up data is also automatically checked for integrity once the backup is done. As a result, any corruptions which may arise due to data transfer can be easily identified and are fixed automatically. Configurable retention policies for storing data in the cloud. The Windows Azure Backup Service accepts and implements retention policies to recycle backups that exceed the desired retention range, thereby meeting business policies and managing backup costs. Hyper-V Recovery Manager: Now Available in Public Preview I’m excited to also announce the public preview of a new Windows Azure Service – the Windows Azure Hyper-V Recovery Manager (HRM). Windows Azure Hyper-V Recovery Manager helps protect your business critical services by coordinating the replication and recovery of System Center Virtual Machine Manager 2012 SP1 and System Center Virtual Machine Manager 2012 R2 private clouds at a secondary location. With automated protection, asynchronous ongoing replication, and orderly recovery, the Hyper-V Recovery Manager service can help you implement Disaster Recovery and restore important services accurately, consistently, and with minimal downtime. Application data in Hyper-V Recovery Manager scenarios always travels on your on-premises replication channel. Only metadata (such as names of logical clouds, virtual machines, networks etc.) that is needed for orchestration is sent to Azure. All traffic sent to/from Azure is encrypted. You can begin using Windows Azure Hyper-V Recovery Manager today by clicking New->Data Services->Recovery Services->Hyper-V Recovery Manager within the Windows Azure Management Portal.  You can read more about Windows Azure Hyper-V Recovery Manager in Brad Anderson’s 9-part series, Transform the datacenter. To learn more about setting up Hyper-V Recovery Manager follow our detailed step-by-step guide. Virtual Machines: Delete Attached Disks, Availability Set Warnings, SQL AlwaysOn Today’s Windows Azure release includes a number of nice updates to Windows Azure Virtual Machines. These improvements include: Ability to Delete both VM Instances + Attached Disks in One Operation Prior to today’s release, when you deleted VMs within Windows Azure we would delete the VM instance – but not delete the drives attached to the VM.  You had to manually delete these yourself from the storage account.  With today’s update we’ve added a convenience option that now allows you to either retain or delete the attached disks when you delete the VM. We’ve also added the ability to delete a cloud service, its deployments, and its role instances with a single action. This can either be a cloud service that has production and staging deployments with web and worker roles, or a cloud service that contains virtual machines.  To do this, simply select the Cloud Service within the Windows Azure Management Portal and click the “Delete” button. Warnings on Availability Sets with Only One Virtual Machine In Them One of the nice features that Windows Azure Virtual Machines supports is the concept of “Availability Sets”.  An “availability set” allows you to define a tier/role (e.g. webfrontends, databaseservers, etc) that you can map Virtual Machines into – and when you do this Windows Azure separates them across fault domains and ensures that at least one of them is always available during servicing operations.  This enables you to deploy applications in a high availability way. One issue we’ve seen some customers run into is where they define an availability set, but then forget to map more than one VM into it (which defeats the purpose of having an availability set).  With today’s release we now display a warning in the Windows Azure Management Portal if you have only one virtual machine deployed in an availability set, to help highlight this. You can learn more about configuring the availability of your virtual machines here. Configuring SQL Server Always On SQL Server Always On is a great feature that you can use with Windows Azure to enable high availability and DR scenarios with SQL Server. Today’s Windows Azure release makes it even easier to configure SQL Server Always On by enabling “Direct Server Return” endpoints to be configured and managed within the Windows Azure Management Portal.  Previously, setting this up required using PowerShell to complete the endpoint configuration.  Starting today you can enable this simply by checking the “Direct Server Return” checkbox. You can learn more about how to use direct server return for SQL Server AlwaysOn availability groups here. Active Directory: Application Access Enhancements This summer we released our initial preview of our Application Access Enhancements for Windows Azure Active Directory.  This service enables you to securely implement single-sign-on (SSO) support against SaaS applications (including Office 365, SalesForce, Workday, Box, Google Apps, GitHub, etc) as well as LOB based applications (including ones built with the new Windows Azure AD support we shipped last week with ASP.NET and VS 2013). Since the initial preview we’ve enhanced our SAML federation capabilities, integrated our new password vaulting system, and shipped multi-factor authentication support. We’ve also turned on our outbound identity provisioning system and have it working with hundreds of additional SaaS applications. Earlier this month we published an update on dates and pricing for when the service will be released in general availability form.
    In this blog post we announced our intention to release the service in general availability form by the end of the year.  We also announced that the below features would be available in a free tier with it: SSO to every SaaS app we integrate with – Users can Single Sign On to any app we are integrated with at no charge. This includes all the top SaaS apps and every app in our application gallery, whether they use federation or password vaulting. Application access assignment and removal – IT admins can assign access privileges to web applications to the users in their active directory, assuring that every employee has access to the SaaS apps they need. And when a user leaves the company or changes jobs, the admin can just as easily remove their access privileges, assuring data security and minimizing IP loss. User provisioning (and de-provisioning) – IT admins will be able to automatically provision users in 3rd party SaaS applications like Box, Salesforce.com, GoToMeeting, DropBox and others. We are working with key partners in the ecosystem to establish these connections, meaning you no longer have to continually update user records in multiple systems. Security and auditing reports – Security is a key priority for us. With the free version of these enhancements you'll get access to our standard set of access reports, giving you visibility into which users are using which applications, when they were using them and where they are using them from. In addition, we'll alert you to unusual usage patterns, for instance when a user logs in from multiple locations at the same time. Our Application Access Panel – Users are logging in from every type of device, including Windows, iOS, & Android. Not all of these devices handle authentication in the same manner, but the user doesn't care. They need to access their apps from the devices they love. Our Application Access Panel will support the ability for users to access and launch their apps from any device, anywhere. You can learn more about our plans for application management with Windows Azure Active Directory here.  Try out the preview and start using it today. Enterprise Management: Use Active Directory to Better Manage Windows Azure Windows Azure Active Directory provides the ability to manage your organization in a directory which is hosted entirely in the cloud, or alternatively kept in sync with an on-premises Windows Server Active Directory solution (allowing you to seamlessly integrate with the directory you already have).  With today's Windows Azure release we are integrating Windows Azure Active Directory even more within the core Windows Azure management experience, and enabling an even richer enterprise security offering.  Specifically: 1) All Windows Azure accounts now have a default Windows Azure Active Directory created for them.  You can create and map any users you want into this directory, and grant administrative rights to manage resources in Windows Azure to these users. 2) You can keep this directory entirely hosted in the cloud – or optionally sync it with your on-premises Windows Server Active Directory.  Both options are free.  The latter approach is ideal for companies that wish to use their corporate user identities to sign in and manage Windows Azure resources.  It also ensures that if an employee leaves an organization, his or her access control rights to the company's Windows Azure resources are immediately revoked. 3) The Windows Azure Service Management APIs have been updated to support using Windows Azure Active Directory credentials to sign in and perform management operations.  Prior to today's release customers had to download and use management certificates (which were not scoped to individual users) to perform management operations.  We still support this management certificate approach (don't worry – nothing will stop working).  But we think the new Windows Azure Active Directory authentication support enables an even easier and more secure way for customers to manage resources going forward.  4) The Windows Azure SDK 2.2 release (which is also shipping today) includes built-in support for the new Service Management APIs that authenticate with Windows Azure Active Directory, and now allows you to create and manage Windows Azure applications and resources directly within Visual Studio using your Active Directory credentials.  This, combined with updated PowerShell scripts that also support Active Directory, enables an end-to-end enterprise authentication story with Windows Azure. Below are some details on how all of this works: Subscriptions within a Directory As part of today's update, we have associated all existing Windows Azure accounts with a Windows Azure Active Directory (and created one for you if you don't already have one). When you log in to the Windows Azure Management Portal you'll now see the directory name in the URI of the browser.  For example, I have a "scottgu" directory that my subscriptions are hosted within. Note that you can continue to use Microsoft Accounts (formerly known as Microsoft Live IDs) to sign into Windows Azure.  These map just fine to a Windows Azure Active Directory – so there is no need to create new usernames that are specific to a directory if you don't want to.  In the scenario above I'm actually logged in using my @hotmail.com based Microsoft ID which is now mapped to a "scottgu" active directory that was created for me.  By default everything will continue to work just like it used to before. Manage your Directory You can manage an Active Directory (including the one we now create for you by default) by clicking the "Active Directory" tab in the left-hand side of the portal.  This will list all of the directories in your account.  Clicking one the first time will display a getting started page that provides documentation and links to perform common tasks with it. You can use the built-in directory management support within the Windows Azure Management Portal to add/remove/manage users within the directory, enable multi-factor authentication, associate a custom domain (e.g. mycompanyname.com) with the directory, and/or rename the directory to whatever friendly name you want (just click the configure tab to do this).  You can also set up the directory to automatically sync with an on-premises Active Directory using the "Directory Integration" tab. Note that users within a directory by default do not have admin rights to log in to or manage Windows Azure based resources.  You still need to explicitly grant them co-admin permissions on a subscription for them to log in or manage resources in Windows Azure.  You can do this by clicking the Settings tab on the left-hand side of the portal and then by clicking the administrators tab within it. Sign-In Integration within Visual Studio If you install the new Windows Azure SDK 2.2 release, you can now connect to Windows Azure from directly inside Visual Studio without having to download any management certificates.  You can now just right-click on the "Windows Azure" icon within the Server Explorer and choose the "Connect to Windows Azure" context menu option to do so. Doing this will prompt you to enter the email address of the username you wish to sign in with (make sure this account is a user in your directory with co-admin rights on a subscription). You can use either a Microsoft Account (e.g. Windows Live ID) or an Active Directory based Organizational account as the email.  The dialog will update with an appropriate login prompt depending on which type of email address you enter. Once you sign in you'll see the Windows Azure resources that you have permissions to manage show up automatically within the Visual Studio server explorer and be available to start using. No downloading of management certificates required.  All of the authentication was handled using your Windows Azure Active Directory! Manage Subscriptions across Multiple Directories If you already have multiple directories and multiple subscriptions within your Windows Azure account, we have done our best to create a good default mapping of your subscriptions->directories as part of today's update.  If you don't like the default subscription-to-directory mapping we have done, you can click the Settings tab in the left-hand navigation of the Windows Azure Management Portal and browse to the Subscriptions tab within it. If you want to map a subscription under a different directory in your account, simply select the subscription from the list, and then click the "Edit Directory" button to choose which directory to map it to.  Mapping a subscription to a different directory takes only seconds and will not cause any of the resources within the subscription to recycle or stop working.  We've made the directory->subscription mapping process self-service so that you always have complete control and can map things however you want. Filtering By Directory and Subscription Within the Windows Azure Management Portal you can filter resources in the portal by subscription (allowing you to show/hide different subscriptions).  If you have subscriptions mapped to multiple directory tenants, we also now have a filter drop-down that allows you to filter the subscription list by directory tenant.  This filter is only available if you have multiple subscriptions mapped to multiple directories within your Windows Azure account. Windows Azure SDK 2.2 Today we are also releasing a major update of our Windows Azure SDK.  The Windows Azure SDK 2.2 release adds some great new features including: Visual Studio 2013 Support Integrated Windows Azure Sign-In support within Visual Studio Remote Debugging Cloud Services with Visual Studio Firewall Management support within Visual Studio for SQL Databases Visual Studio 2013 RTM VM Images for MSDN Subscribers Windows Azure Management Libraries for .NET Updated Windows Azure PowerShell Cmdlets and ScriptCenter I'll post a follow-up blog shortly with more details about all of the above.
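    A quick illustration of points 3 and 4 above, using the new Management Libraries for .NET that ship with SDK 2.2. This is a hedged sketch of my own rather than an excerpt from the SDK docs: the subscription ID and the Windows Azure Active Directory access token are placeholders you would supply yourself (the token is typically acquired interactively, for example via the Active Directory Authentication Library):

    using System;
    using Microsoft.WindowsAzure;                    // TokenCloudCredentials
    using Microsoft.WindowsAzure.Management.Compute; // ComputeManagementClient

    class ListCloudServices
    {
        static void Main()
        {
            // Placeholders: your subscription ID and an access token issued by
            // Windows Azure Active Directory. No management certificate needed -
            // the token itself carries the (per-user) identity.
            var credentials = new TokenCloudCredentials(
                "your-subscription-id",
                "aad-access-token");

            using (var compute = new ComputeManagementClient(credentials))
            {
                // Enumerate the cloud services this signed-in user may manage.
                foreach (var service in compute.HostedServices.List())
                {
                    Console.WriteLine(service.ServiceName);
                }
            }
        }
    }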
Additional Updates In addition to the above enhancements, today’s release also includes a number of additional improvements: AutoScale: Richer time and date based scheduling support (set different rules on different dates) AutoScale: Ability to Scale to Zero Virtual Machines (very useful for Dev/Test scenarios) AutoScale: Support for time-based scheduling of Mobile Service AutoScale rules Operation Logs: Auditing support for Service Bus management operations Today we also shipped a major update to the Windows Azure SDK – Windows Azure SDK 2.2.  It has so much goodness in it that I have a whole second blog post coming shortly on it! :-) Summary Today’s Windows Azure release enables a bunch of great new scenarios, and enables a much richer enterprise authentication offering. If you don’t already have a Windows Azure account, you can sign-up for a free trial and start using all of the above features today.  Then visit the Windows Azure Developer Center to learn more about how to build apps with it. Hope this helps, Scott P.S. In addition to blogging, I am also now using Twitter for quick updates and to share links. Follow me at: twitter.com/scottgu

    Read the article

  • Serious about Embedded: Java Embedded @ JavaOne 2012

    - by terrencebarr
    It bears repeating: More than ever, the Java platform is the best technology for many embedded use cases. Java’s platform independence, high level of functionality, security, and developer productivity address the key pain points in building embedded solutions. Transitioning from 16 to 32 bit or even 64 bit? Need to support multiple architectures and operating systems with a single code base? Want to scale on multi-core systems? Require a proven security model? Dynamically deploy and manage software on your devices? Cut time to market by leveraging code, expertise, and tools from a large developer ecosystem? Looking for back-end services, integration, and management? The Java platform has got you covered. Java already powers around 10 billion devices worldwide, with traditional desktops and servers being only a small portion of that. And the ‘Internet of Things‘ is just really starting to explode … it is estimated that within five years, intelligent and connected embedded devices will outnumber desktops and mobile phones combined, and will generate the majority of the traffic on the Internet. Is your platform and services strategy ready for the coming disruptions and opportunities? It should come as no surprise that Oracle is keenly focused on Java for Embedded. At JavaOne 2012 San Francisco the dedicated track for Java ME, Java Card, and Embedded keeps growing, with 52 sessions, tutorials, Hands-on-Labs, and BOFs scheduled for this track alone, plus keynotes, demos, booths, and a variety of other embedded content. To further prove Oracle’s commitment, in 2012 for the first time there will be a dedicated sub-conference focused on the business aspects of embedded Java: Java Embedded @ JavaOne. This conference will run for two days in parallel to JavaOne in San Francisco, will have its own business-oriented track and content, and targets C-level executives, architects, business leaders, and decision makers. Registration and Call For Papers for Java Embedded @ JavaOne are now live. We expect a lot of interest in this new event and space is limited, so be sure to submit your paper and register soon. Hope to see you there! Cheers, – Terrence Filed under: Mobile & Embedded Tagged: ARM, Call for Papers, Embedded Java, Java Embedded, Java Embedded @ JavaOne, Java ME, Java SE Embedded, Java SE for Embedded, JavaOne San Francisco, PowerPC

    Read the article

  • How to create item in SharePoint2010 document library using SharePoint Web service

    - by ybbest
    Today, I'd like to show you how to create an item in a SharePoint 2010 document library using the SharePoint web services. Originally, I thought I could use WebSvcLists (lists.asmx), which provides methods for working with lists and list data. However, after a bit of Googling, I realized that I needed to use WebSvcCopy (copy.asmx). Here is the code used:

    private const string siteUrl = "http://ybbest";

    private static void Main(string[] args)
    {
        using (CopyWSProxyWrapper copyWSProxyWrapper = new CopyWSProxyWrapper(siteUrl))
        {
            copyWSProxyWrapper.UploadFile(
                "TestDoc2.pdf",
                new[] { string.Format("{0}/Shared Documents/TestDoc2.pdf", siteUrl) },
                Resource.TestDoc,
                GetFieldInfos().ToArray());
        }
    }

    private static List<FieldInformation> GetFieldInfos()
    {
        var fieldInfos = new List<FieldInformation>();
        // The InternalName, DisplayName and FieldType are all required to make it work.
        fieldInfos.Add(new FieldInformation
        {
            InternalName = "Title",
            Value = "TestDoc2.pdf",
            DisplayName = "Title",
            Type = FieldType.Text
        });
        return fieldInfos;
    }

    Here is the code for the proxy wrapper:

    public class CopyWSProxyWrapper : IDisposable
    {
        private readonly string siteUrl;

        public CopyWSProxyWrapper(string siteUrl)
        {
            this.siteUrl = siteUrl;
        }

        private readonly CopySoapClient proxy = new CopySoapClient();

        public void UploadFile(string fileName, string[] destinationUrls, byte[] fileContents, FieldInformation[] fieldInformations)
        {
            // Note: this local client is what actually performs the call;
            // the class-level proxy field above is only closed on Dispose.
            using (CopySoapClient client = new CopySoapClient())
            {
                client.Endpoint.Address = new EndpointAddress(String.Format("{0}/_vti_bin/copy.asmx", siteUrl));
                client.ClientCredentials.Windows.ClientCredential = CredentialCache.DefaultNetworkCredentials;
                client.ClientCredentials.Windows.AllowedImpersonationLevel = TokenImpersonationLevel.Impersonation;
                CopyResult[] copyResults = null;
                try
                {
                    client.CopyIntoItems(fileName, destinationUrls, fieldInformations, fileContents, out copyResults);
                }
                catch (Exception e)
                {
                    System.Console.WriteLine(e);
                }
                if (copyResults != null)
                    System.Console.WriteLine(copyResults[0].ErrorMessage);
                System.Console.ReadLine();
            }
        }

        public void Dispose()
        {
            proxy.Close();
        }
    }

    You can download the source code here. ****** Update ****** It seems to be a bug that you cannot set the content type when creating a document item using copy.asmx. In SharePoint 2007 the field type was Choice; however, in SharePoint 2010 it is actually Computed. I have tried using the Computed field type with no luck. I have also tried sending the ContentTypeId, and this does not work. You might have to write your own web service to handle this. You can check my previous blog on how to get started with your own custom WCF service in SharePoint 2010 here. References: SharePoint 2010 Web Services SharePoint 2007 Web Services SharePoint MSDN Forum
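    One possible workaround for the content type problem described in the update above (my own hedged sketch, not something from ybbest's post): upload the document with copy.asmx as shown, then set the ContentType field in a second call through the lists.asmx UpdateListItems method with a CAML batch. The list title, item ID and content type name here are placeholders, and this assumes a WCF service reference whose generated proxy exposes UpdateListItems(string, XmlElement):

    using System.Xml;

    public static void SetContentType(ListsSoapClient listsClient)
    {
        // CAML batch that updates the ContentType field of list item 1.
        // The item ID and content type name are placeholders; look up the
        // real ID of the uploaded document first (e.g. via GetListItems).
        var batch = new XmlDocument();
        batch.LoadXml(
            "<Batch OnError=\"Continue\">" +
            "<Method ID=\"1\" Cmd=\"Update\">" +
            "<Field Name=\"ID\">1</Field>" +
            "<Field Name=\"ContentType\">My Document Type</Field>" +
            "</Method>" +
            "</Batch>");

        // "Shared Documents" is the list title; a list GUID also works here.
        listsClient.UpdateListItems("Shared Documents", batch.DocumentElement);
    }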

    Read the article

  • Data Masking Pack 12.1.0.3 Certified with E-Business Suite 12.1.3

    - by Elke Phelps (Oracle Development)
    I'm pleased to announce the certification of the E-Business Suite 12.1.3 Data Masking Template for the Data Masking Pack with Enterprise Manager Cloud Control 12.1.0.3. You can use the Oracle Data Masking Pack with Oracle Enterprise Manager Cloud Control 12c to scramble sensitive data in cloned E-Business Suite environments. You may scramble data in E-Business Suite cloned environments with EM 12.1.0.3 using the following template: E-Business Suite 12.1.3 Data Masking Template for Data Masking Pack with EM12c (Patch 18462641) What does data masking do in E-Business Suite environments? Application data masking does the following: De-identify the data: Scramble identifiers of individuals, also known as personally identifiable information or PII. Examples include information such as name, account, address, location, and driver's license number. Mask sensitive data: Mask data that, if associated with personally identifiable information (PII), would cause privacy concerns. Examples include compensation, health and employment information. Maintain data validity: Provide a fully functional application. How can EBS customers use data masking? The Oracle E-Business Suite Template for Data Masking Pack can be used in situations where confidential or regulated data needs to be shared with other non-production users who need access to some of the original data, but not necessarily every table. Examples of non-production users include internal application developers or external business partners such as offshore testing companies, suppliers or customers. Due to data dependencies, scrambling E-Business Suite data is not a trivial task. The data needs to be scrubbed in such a way that allows the application to continue to function. The template works with the Oracle Data Masking Pack and Oracle Enterprise Manager to obscure sensitive E-Business Suite information that is copied from production to non-production environments. The Oracle E-Business Suite Template for Data Masking Pack is applied to a non-production environment with the Enterprise Manager Data Masking Pack. When applied, the Oracle E-Business Suite Template for Data Masking Pack will create an irreversibly scrambled version of your production database for development and testing. Is there a charge for this? Yes. You must purchase licenses for the Oracle Data Masking Pack to use the Oracle E-Business Suite 12.1.3 template. The Oracle E-Business Suite 12.1.3 Template for the Data Masking Pack is included with the Oracle Data Masking Pack license. You can contact your Oracle account manager for more details about licensing. References Additional details and requirements are provided in the following My Oracle Support Note: Using Oracle E-Business Suite Release 12.1.3 Template for the Data Masking Pack with Oracle Enterprise Manager 12.1 Data Masking Tool (Note 1481916.1) Masking Sensitive Data in the Oracle Database Real Application Testing User's Guide 11g Release 2 (11.2) Related Articles Scrambling Sensitive Data in E-Business Suite E-Business Suite 12.1.3 Data Masking Certified with Enterprise Manager 12c

    Read the article

  • Disable Opera Thumbnail Previews on Windows 7 Taskbar

    - by Asian Angel
    If you are one of the people who do not care for the Taskbar Thumbnail Previews in Windows 7, then we have a quick and easy way for you to turn them off in the Opera browser. Before Here is our Opera browser with four tabs full of HTG Network goodness… Hovering the mouse over the Taskbar icon gives a nice preview of each tab's content. Looking closer you can see the fanned edge on the Taskbar icon indicating that there are multiple tabs open. This is all good, but what if you just want something simpler? Disabling the Previews If you want to disable the Taskbar Thumbnail Previews in Opera, you will need to type opera:config in the Address Bar and press Enter. Once you have done that, you will see a condensed listing of all of Opera's preferences. There is one preference category that we need to look for… User Prefs. Note: While a Quick Find search could be conducted for the entry that needs to be modified, we have chosen to show the full method here. After scrolling down and finding the User Prefs category you will need to expand the section. Notice the size of the scrollbar in comparison with the screenshot above… there is quite a lot that you can look at and finesse in Opera if desired. Scroll down until you find the Use Windows 7 Taskbar Thumbnails entry. Uncheck the box, but do not close the opera:config tab yet… or your changes will not take effect. Scroll down once more until you reach the end of the User Prefs category and click Save. With this particular modification you will need to restart Opera after clicking OK. After restarting Opera the Taskbar icon and Taskbar Thumbnail Preview will revert to the minimal Windows 7 default as shown here. You can see Opera's Tab Bar in the thumbnail and the Taskbar icon no longer has a “fanned edge”. Conclusion If you want to disable Opera's Taskbar Thumbnail Previews on your Windows 7 system, then this quick modification will help get it sorted out in just a few moments.
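    As a side note, opera:config is essentially a front-end to Opera's operaprefs.ini file, so the same change can typically be made with Opera closed by editing that file directly. This is a hedged sketch only: the file sits in your Opera profile directory, whose exact location varies by installation, and the setting name below is assumed from the opera:config entry used above:

    [User Prefs]
    Use Windows 7 Taskbar Thumbnails=0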

    Read the article

  • Force.com presents Database.com SQL Azure/Amazon RDS unfazed

    - by Sarang
    At the Dreamforce 2010 event in San Francisco, Force.com unveiled the next big thing in its Fat SaaS portfolio: "Database.com". I am still wondering how much they would've shelled out for that domain name. Now why would an already established SaaS player foray into a key building block like the database, potentially allowing enterprises to build apps that do not utilize the Force.com stack? One key reason is being seen as the Fat SaaS player with every trick in the SaaS space under its belt. You want CRM? Come hither. Want a custom development PaaS-like solution? Welcome home (VMForce). Want all your apps to talk to a cloud DB and minimize latency by having it reside closer to your cloud apps? You've come to the right place, sire! The other is potentially killing the foray of other DB players (not surprisingly, the Database.com offering is a highly customized and scalable Oracle database) into the lucrative SaaS DB marketplace. The feature set promised looks great out of the box for someone who likes to visualize cool new architectures. The ground realities are certainly going to be a lot different considering the SOAP/REST style access patterns in lieu of the comfortable old shoe of SQL. Microsoft suffered heavily with its SDS (SQL Data Services) offering in early 2009 and had to pull the plug on the product, only to reintroduce it as a simple SQL Server in the cloud, SQL Windows Azure. Though MSFT is playing it cool by providing OData semantics to work with SQL Windows Azure, satisfying at least some needs of web-style access to a DB. The other features, like social data models including profiles, status updates, and feeds, seem interesting as well (although I believe social is just one of the aspects of large scale collaborative computing). All these features start "free" for devs, which is good news, but the good news stops here. The overall pricing model of $ per user per transactions/month is highly disproportionate compared to Amazon RDS (based on MySQL) or SQL Windows Azure (based on MSSQL). Roger Jennings of OakLeaf did an interesting comparison based on 3, 10, 100, and 500 users, and it turns out that Database.com, going by current understanding, is way too expensive for the services on offer. The offering may not impact the decision for DotNet shops mulling their cloud strategy, or even some Java/MySQL shops thinking about Amazon RDS; however, for enterprises that have already invested in other Force.com offerings this could be a very important piece in the cloud strategy jigsaw. One which would address a key cloud DB issue, latency; for them at least it will help to have the DB in the neighborhood. The tooling and "SQL like" access provider drivers (think ODBC/JDBC) will be available later this year. Progress Software has already announced their JDBC driver stack for Database.com. It remains to be seen how effective the overall solution proves to be in the longer run, but for starters it's an important decision towards consolidating Force.com's already strong positioning in the SaaS space. As always, contrasting views are welcome! :)

    Read the article

  • Quick guide to Oracle IRM 11g: Classification design

    - by Simon Thorpe
    Quick guide to Oracle IRM 11g index This is the final article in the quick guide to Oracle IRM. If you've followed everything prior, you will now have a fully functional and tested Information Rights Management service. It doesn't matter if you've been following the 10g or 11g guide, as this next article is common to both. Contents Why this is the most important part... Understanding the classification and standard rights model Identifying business use cases Creating an effective IRM classification model One single classification across the entire business A context for each and every possible granular use case What makes a good context? Deciding on the use of roles in the context Reviewing the features and security for context roles Summary Why this is the most important part... Now the real work begins; installing and getting an IRM system running is as simple as following instructions. However, to actually have IRM technology easily protecting your most sensitive information without interfering with your users' existing daily workflows, and to be able to scale IRM across the entire business, requires thought about how confidential documents are created, used and distributed. This article is going to give you the information you need to ask the business the right questions so that you can deploy your IRM service successfully. The IRM team here at Oracle has over 10 years of experience in helping customers, and it is important you understand the following to be successful in securing access to your most confidential information. Whatever you are trying to secure, be it mergers and acquisitions information, engineering intellectual property, health care documentation or financial reports, and no matter what type of user is going to access the information, be they employees, contractors or customers, there are common goals you are always trying to achieve. Secure the content at the earliest point possible, and do it automatically. Removing the dependency on the user to decide to secure the content reduces the risk of mistakes significantly and therefore results in a more secure deployment. K.I.S.S. (Keep It Simple, Stupid): reduce complexity in the rights/classification model. Oracle IRM lets you make changes to access to documents even after they are secured, which allows you to start with a simple model and then introduce complexity once you've understood how the technology is going to be used in the business. After an initial learning period you can review your implementation and start to make informed decisions based on user feedback and administration experience. Clearly communicate to the user, when appropriate, any changes to their existing work practice. You must make every effort to make the transition to sealed content as simple as possible. For external users you must help them understand why you are securing the documents and inform them of the value of the technology to both your business and them. Before getting into the detail, I must pay homage to Martin White, Vice President of client services in SealedMedia, the company Oracle acquired and who created Oracle IRM. In the SealedMedia years Martin was involved with every single customer and was key to the design of certain aspects of the IRM technology, specifically the context model we will be discussing here. Listening carefully to customers and understanding the flexibility of the IRM technology, Martin taught me all the skills of helping customers build scalable, effective and simple to use IRM deployments. No matter how well the engineering department designed the software, badly designed and poorly executed projects can result in solutions that are difficult to use and manage, and ultimately insecure. The advice and information that follows was born with Martin, and he's still delivering IRM consulting with customers; he can be found at www.thinkers.co.uk. It is thanks to Martin and others that Oracle not only has the most advanced, scalable and usable document security solution on the market, but Oracle and their partners have the most experience in delivering successful document security solutions. Understanding the classification and standard rights model The goal of any successful IRM deployment is to balance the increase in security the technology brings without over-complicating the way people use secured content, and to avoid a significant increase in administration and maintenance. With Oracle it is possible to automate the protection of content, deploy the desktop software transparently and use authentication methods such that users can open newly secured content initially unaware the document is any different to an insecure one. That is, until of course they attempt to do something for which they don't have any rights, such as copy and paste to an insecure application or try to print. Central to achieving this objective is creating a classification model that is simple to understand and use but also provides the right level of complexity to meet the business needs. In Oracle IRM the term used for each classification is a "context". A context defines the relationship between: a group of related documents; the people that use the documents; the roles that these people perform; and the rights that these people need to perform their role. The context is the key to the success of Oracle IRM. It provides the separation of the role and rights of a user from the content itself. Documents are sealed to contexts, but none of the rights, user or group information is stored within the content itself. Sealing only places information about the location of the IRM server that sealed it, the context applied to the document and a few other pieces of metadata that pertain only to the document. This important separation of rights from content means that millions of documents can be secured against a single classification, and a user needs only one right assigned to be able to access all documents (the sketch after this paragraph models the separation). If you have followed all the previous articles in this guide, you will be ready to start defining contexts to which your sensitive information will be protected. But before you even start with IRM, you need to understand how your own business uses and creates sensitive documents and emails. Identifying business use cases Oracle is able to support multiple classification systems, but usually there is one single initial need for the technology which drives a deployment. This need might be to protect sensitive mergers and acquisitions information, engineering intellectual property, or financial documents. For this and every subsequent use case you must understand how users create and work with documents, to whom they are distributed and how the recipients should interact with them.
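    To make the separation of rights from content more concrete, here is a small illustrative C# model (my own sketch, not the Oracle IRM API): the sealed file carries only the server location, a context reference and document metadata, while users, roles and rights live on the server against the context:

    using System;
    using System.Collections.Generic;

    // What sealing embeds in the file: no user, group or rights data at all.
    class SealedDocumentHeader
    {
        public Uri IrmServer;            // where rights are checked
        public Guid ContextId;           // the classification the file is sealed to
        public string DocumentMetadata;  // details that pertain only to the document
    }

    // What the IRM server holds against each context.
    class Context
    {
        public Guid Id;
        public string Name; // e.g. "Top secret engineering research"

        // Role -> rights, e.g. "Reader" -> { "Open" }, "Contributor" -> { "Open", "Edit", "Print" }
        public Dictionary<string, HashSet<string>> Roles = new Dictionary<string, HashSet<string>>();

        // User -> role; one assignment covers every document sealed to this context.
        public Dictionary<string, string> Members = new Dictionary<string, string>();

        // Rights checks resolve user -> role -> rights via the context, so
        // millions of sealed documents need no per-document changes.
        public bool HasRight(string user, string right)
        {
            string role;
            return Members.TryGetValue(user, out role) && Roles[role].Contains(right);
        }
    }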
A successful IRM deployment should start with one well-identified use case (we go through some examples towards the end of this article). Then, after letting this use case play out in the business, you learn how your users work with content, how well your communication to the business worked and whether the classification system you deployed delivered the right balance. It is at this point you can start rolling the technology out further.

Creating an effective IRM classification model

Once you have selected the initial use case you will address with IRM, you need to design a classification model that defines the access to secured documents within that use case. In Oracle IRM there is an inbuilt classification system called the "context" model. In Oracle IRM 11g it is possible to extend the server to support any rights classification model, but the majority of users who are not using an application integration (such as Oracle IRM within Oracle Beehive) are likely to be starting out with the built-in context model. Before looking at creating a classification system with IRM, it is worth reviewing some recognized standards and methods for creating and implementing security policy. A very useful set of documents are the ISO 17799 guidelines and the SANS security policy templates.

The first task is to create a context against which documents are to be secured. A context consists of a group of related documents (all top secret engineering research), a list of roles (contributors and readers) which define how users can access documents, and a list of users (research engineers) who have been given a role allowing them to interact with sealed content. Before even creating the first context, it is wise to decide on a philosophy which will dictate the level of granularity. The question is: where do you start? At a department level? By project? By technology? First consider the two ends of the spectrum...

One single classification across the entire business

Imagine that instead of having separate contexts, one for engineering intellectual property, one for your financial data, one for human resources personally identifiable information, you create one context for all documents across the entire business. Whilst you may have immediate objections, there are some significant benefits in considering this.

Document security classification decisions are simple. You only have one context to choose from!
User provisioning is simple: just make sure everyone has a role in the only context in the business.
Administration is very low: if you assign rights to groups from the business user repository, you probably never have to touch IRM administration again.

There are however some obvious downsides to this model.

All users have access to all IRM-secured content. So potentially a sales person could access sensitive mergers and acquisitions documents, if they can get their hands on a copy that is.
You cannot delegate control of different documents to different parts of the business; this may not satisfy your regulatory requirements for the separation and delegation of duties.
Changing a user's role affects every single document ever secured.

Even though it is very unlikely a business would ever use one single context to secure all their sensitive information, thinking about this scenario raises one very important point. Just having one single context and securing all confidential documents to it, whilst incurring some of the problems detailed above, has one huge value.
Once secured, IRM-protected content can ONLY be accessed by authorized users. Just think of all the sensitive documents in your business today, and imagine if you could ensure that only the people you trust could open them. Even if an employee lost a laptop or someone accidentally sent an email to the wrong recipient, only the right people could open that file.

A context for each and every possible granular use case

Now let's think about the total opposite of a single-context design. What if you created a context for each and every defined business need, and created multiple contexts within this for each level of granularity? Let's take a use case where we need to protect engineering intellectual property. Imagine we have 6 different engineering groups, and in each we have a research department, a design department and manufacturing. The company information security policy defines 3 levels of information sensitivity: restricted, confidential and top secret. Then let's say that each group and department needs to define access to information for both internal and external users. Finally, add into the mix that they want to review the rights model for each context every financial quarter. This would result in a huge number of contexts. For example, let's just look at the resulting contexts for one engineering group.

Q1FY2010 Restricted Internal - Engineering Group 1 - Research
Q1FY2010 Restricted Internal - Engineering Group 1 - Design
Q1FY2010 Restricted Internal - Engineering Group 1 - Manufacturing
Q1FY2010 Restricted External - Engineering Group 1 - Research
Q1FY2010 Restricted External - Engineering Group 1 - Design
Q1FY2010 Restricted External - Engineering Group 1 - Manufacturing
Q1FY2010 Confidential Internal - Engineering Group 1 - Research
Q1FY2010 Confidential Internal - Engineering Group 1 - Design
Q1FY2010 Confidential Internal - Engineering Group 1 - Manufacturing
Q1FY2010 Confidential External - Engineering Group 1 - Research
Q1FY2010 Confidential External - Engineering Group 1 - Design
Q1FY2010 Confidential External - Engineering Group 1 - Manufacturing
Q1FY2010 Top Secret Internal - Engineering Group 1 - Research
Q1FY2010 Top Secret Internal - Engineering Group 1 - Design
Q1FY2010 Top Secret Internal - Engineering Group 1 - Manufacturing
Q1FY2010 Top Secret External - Engineering Group 1 - Research
Q1FY2010 Top Secret External - Engineering Group 1 - Design
Q1FY2010 Top Secret External - Engineering Group 1 - Manufacturing

That is 18 contexts for a single engineering group; multiply by 6 groups and you have 108. And because the rights model is reviewed quarterly, each group is creating or reviewing another 18 every 3 months, so after a year a single group has accumulated 72 contexts. What would be the advantages of such a complex classification model?

You can satisfy very granular rights requirements; for example, only an authorized engineering group 1 researcher can create a top secret report for internal access, and his role will be reviewed on a very frequent basis.
Your business may have very complex rights requirements, and mapping them directly to IRM may seem an obvious exercise.

The disadvantages of such a classification model are significant...

Huge administrative overhead. Someone in the business must manage, review and administer each of these contexts. If the engineering group had a single administrator, they would have 72 classifications to preside over each year.
From an end user's perspective, life will be very confusing. Imagine if a user has rights in just 6 of these contexts.
They may be able to print content from one but not another, or be able to edit content in 2 contexts but not the other 4. Such confusion at the end user level causes frustration and resistance to the use of the technology.
Increased synchronization complexity. Imagine a user who after 3 years in the company ends up with over 300 rights in many different contexts across the business. This would result in long synchronization times as the client software updates all your offline rights.
Hard to understand who can do what with what. Imagine being the VP of engineering and, as part of an internal security audit, being asked the question, "What rights do researchers have to our top secret information?" In this complex model the answer is not simple; it would depend on many roles in many contexts.

Of course this example is extreme, but it highlights that trying to build many barriers in your business can result in a nightmare of administration and confusion amongst users. In the real world what we need is a balance of the two. We need to seek an optimum number of contexts. Too many contexts are unmanageable, and too few contexts do not give fine enough granularity.

What makes a good context?

Good context design derives mainly from how well you understand your business requirements to secure access to confidential information. Some customers I have worked with can tell me exactly the documents they wish to secure and know exactly who should be opening them. However, there are some customers who know only of the government regulation that requires them to control access to certain types of information; they don't actually know where the documents are, how they are created, or exactly who should have access. Therefore you need to know how to ask the business the right questions that lead to the information which helps you define a context. First ask these questions about a set of documents:

What is the topic?
Who are the legitimate contributors on this topic?
Who is the authorized readership?

If the answer to any one of these is significantly different, then it probably merits a separate context. Remember that sealed documents are inherently secure and as such cannot leak to your competitors, so it is better to seal to a broad context than not to seal at all. Simplicity is key here. Always revert to the first extreme example of a single classification, then work towards essential complexity. If there is any doubt, always prefer fewer contexts. Remember, Oracle IRM allows you to change your mind later on. You can implement a design now and continue to change and refine it as you learn how the technology is used. It is easy to go from a simple model to a more complex one; it is much harder to take a complex model that is already embedded in the work practice of users and try to simplify it. It is also wise to take a single use case and address this first with the business. Don't try to tackle many different problems from the outset. Do one, learn from the process, refine it, and then take what you have learned into the next use case; refine and continue. Once you have a good grasp of the technology and understand how your business will use it, you can then start rolling the technology out wider across the business.

Deciding on the use of roles in the context

Once you have decided on that first initial use case and a context to create, let's look at the details you need to decide upon.
For each context, identify the following.

Administrative roles:
Business owner - the person who makes decisions about who may or may not see content in this context. This is often the person who wanted to use IRM and drove the business purchase, and usually the person with the most at risk when sensitive information is lost.
Point of contact - the person who will handle requests for access to content. Sometimes the same as the business owner, sometimes a trusted secretary or administrator.
Context administrator - the person who will enact the decisions of the business owner. Sometimes the point of contact, sometimes a trusted IT person.

Document-related roles:
Contributors - the people who create and edit documents in this context.
Reviewers - the people who are involved in reviewing documents but are not trusted to secure information to this classification. This role is not always necessary. (See the later discussion on Published-work and Work-in-Progress.)
Readers - the people who read documents from this context.

Some people may have several of the roles above, which is fine. What you are trying to do is understand and define how the business interacts with your sensitive information. These roles map directly to roles available in Oracle IRM.

Reviewing the features and security for context roles

At this point we have decided on a classification of information, and we understand what roles people in the business will play when administering this classification and how they will interact with content. The final piece of the puzzle in getting the information for our first context is to look at the permissions people will have to sealed documents. First, think about why you are protecting the documents in the first place. It is to prevent the loss or leaking of information to the wrong people, and to control the information, making sure that people only access the latest versions of documents. You are not using Oracle IRM to prevent authorized people from doing legitimate work. This is an important point: with IRM you can erect many barriers to prevent access to content, yet with too many restrictions authorized users will often find ways to circumvent the technology and end up distributing unprotected originals. Because IRM is a security technology, it is easy to get carried away restricting different groups. However, I would highly recommend starting with a simple solution with few restrictions. Ensure that everyone who reasonably needs to read documents can do so from the outset. Remember that with Oracle IRM you can change rights to content whenever you wish and tighten security later. Always return to the fact that the greatest value IRM brings is that ONLY authorized users can access secured content; remember that simple "one context for the entire business" model. At the start of the deployment you really need to aim for user acceptance, and therefore a simple model is more likely to succeed. As time passes and users understand how IRM works, you can start to introduce more restrictions and complexity. Another key aspect to focus on is handling exceptions. If you decide on a context model where engineering can only access engineering information and sales can only access sales data, act quickly when a sales manager needs legitimate access to a set of engineering documents. Having a quick and effective process for permitting people with legitimate needs to obtain appropriate access will be rewarded with acceptance from the user community.
These use cases can often be satisfied by integrating IRM with a good Identity & Access Management technology, which simplifies the process of assigning users the correct business roles.

The big print issue...

Printing is often an issue of contention: users love to print, but the business wants to ensure sensitive information remains in the controlled digital world. There are many cases of physical document loss causing a business pain, and it is often overlooked that IRM can help with this issue by limiting the ability to generate physical copies of digital content. However, it can be hard to maintain a balance between security and usability when it comes to printing. Consider the following points when deciding whether to give print rights.

Oracle IRM sealed documents can contain watermarks that expose information about the user, the time and location of access and the classification of the document. This information would reside in the printed copy, making it easier to trace who printed it.
Printed documents are slower to distribute than their digital counterparts, so time-sensitive information in printed format may present a lower risk.
Print activity is audited, so you can monitor and react to users abusing print rights.

Summary

In summary, it is important to think carefully about the way you create your context model. As you ask the business these questions you may get a variety of different requirements. There may be special projects that require a context just for sensitive information created during the lifetime of the project. There may be a department that requires that all information in the group be secured, and you might have a few senior executives who wish to use IRM to exchange a small number of highly sensitive documents with a very small number of people. Oracle IRM, with its very flexible context classification system, can support all of these use cases. The trick is introducing the complexity needed to deliver them at the right level. In another article I'm working on, I will go through some examples of how Oracle IRM might map to existing business use cases. But for now, this article covers all the important questions you need to get your IRM service deployed and successfully protecting your most sensitive information.
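To make the questions and roles discussed above concrete, here is one hypothetical context definition for the engineering use case used as an example earlier; every name in it is illustrative rather than prescriptive:

Context: Engineering Group 1 - Top Secret Research
  Topic: unreleased research reports and test data for Group 1
  Business owner: VP of Engineering
  Point of contact: engineering program administrator
  Context administrator: engineering IT support lead
  Contributors: Group 1 research engineers
  Readers: Group 1 engineering management
  Reviewers: not used initially; added later only if draft circulation demands it

If the design department's answers to "What is the topic?", "Who are the legitimate contributors?" and "Who is the authorized readership?" differ significantly from the above, that difference is the signal to create a second context rather than stretch this one.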

    Read the article

  • Unable to ssh out anywhere - ssh_exchange_identification

    - by Chowlett
I have a setup where I'm running Ubuntu 11.10 as a VirtualBox guest under a Windows 7 host, behind a restrictive corporate firewall. I have set up NAT from the host port 22 to Ubuntu's port 22; IT inform me that they have opened port 22 outbound for the host machine's IP address. I have run ssh-keygen -t rsa, and am trying to test the setup by connecting to github and another known ssh server. In both cases the connection is refused with ssh_exchange_identification: Connection closed by remote host. The full -vvv log is below. Is this possibly still due to the corporate firewall? If so, what else might I need to request from them? Any other ideas what might be wrong and how to fix it?

~$ ssh -Tvvv [email protected]
OpenSSH_5.8p1 Debian-7ubuntu1, OpenSSL 1.0.0e 6 Sep 2011
debug1: Reading configuration data /etc/ssh/ssh_config
debug1: Applying options for *
debug2: ssh_connect: needpriv 0
debug1: Connecting to github.com [207.97.227.239] port 22.
debug1: Connection established.
debug3: Incorrect RSA1 identifier
debug3: Could not load "/home/chris/.ssh/id_rsa" as a RSA1 public key
debug2: key_type_from_name: unknown key type '-----BEGIN'
debug3: key_read: missing keytype
debug2: key_type_from_name: unknown key type 'Proc-Type:'
debug3: key_read: missing keytype
debug2: key_type_from_name: unknown key type 'DEK-Info:'
debug3: key_read: missing keytype
debug3: key_read: missing whitespace
debug3: key_read: missing whitespace
debug3: key_read: missing whitespace
debug3: key_read: missing whitespace
debug3: key_read: missing whitespace
debug3: key_read: missing whitespace
debug3: key_read: missing whitespace
debug3: key_read: missing whitespace
debug3: key_read: missing whitespace
debug3: key_read: missing whitespace
debug3: key_read: missing whitespace
debug3: key_read: missing whitespace
debug3: key_read: missing whitespace
debug3: key_read: missing whitespace
debug3: key_read: missing whitespace
debug3: key_read: missing whitespace
debug3: key_read: missing whitespace
debug3: key_read: missing whitespace
debug3: key_read: missing whitespace
debug3: key_read: missing whitespace
debug3: key_read: missing whitespace
debug3: key_read: missing whitespace
debug3: key_read: missing whitespace
debug3: key_read: missing whitespace
debug3: key_read: missing whitespace
debug2: key_type_from_name: unknown key type '-----END'
debug3: key_read: missing keytype
debug1: identity file /home/chris/.ssh/id_rsa type 1
debug1: Checking blacklist file /usr/share/ssh/blacklist.RSA-2048
debug1: Checking blacklist file /etc/ssh/blacklist.RSA-2048
debug1: identity file /home/chris/.ssh/id_rsa-cert type -1
debug1: identity file /home/chris/.ssh/id_dsa type -1
debug1: identity file /home/chris/.ssh/id_dsa-cert type -1
debug1: identity file /home/chris/.ssh/id_ecdsa type -1
debug1: identity file /home/chris/.ssh/id_ecdsa-cert type -1
ssh_exchange_identification: Connection closed by remote host

Edit: Requested diagnostics:

~$ ls -la ~/.ssh
total 16
drwx------  2 chris chris 4096 2012-03-30 13:12 .
drwxr-xr-x 29 chris chris 4096 2012-03-30 13:25 ..
-rw-------  1 chris chris 1766 2012-03-30 13:12 id_rsa
-rw-r--r--  1 chris chris  409 2012-03-30 13:12 id_rsa.pub
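One quick way to narrow this down, offered here as a suggestion beyond the original question: an unmodified SSH server sends a banner such as SSH-2.0-OpenSSH_... as soon as the TCP connection opens, so checking for that banner separates firewall interception from any key or VirtualBox NAT problem. A minimal check, assuming netcat is installed:

$ nc -v github.com 22
# If the firewall passes SSH through untouched, an "SSH-2.0-..." banner
# should appear immediately; no banner, or an instant close, points at
# the firewall/proxy rather than the local setup.

$ ssh -T -p 443 [email protected]
# GitHub also accepts SSH on port 443 via ssh.github.com, which often
# works where corporate firewalls only allow web ports outbound.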

    Read the article

  • Teamviewer 8 on Kubuntu 13.04 won't start

    - by kirokko
The problem is I can't run TeamViewer on Kubuntu. The problem has existed for me since 12.10 and, as I remember, it occurred with version 7 as well. I downloaded the official package from the official web site, for a 64-bit system, installed it, then installed all dependencies (apt-get install -f). When I start it, a window with the License agreement appears and I can't agree to it, because I don't see anything; even the mouse cursor is invisible over the window area. Here's the trace of teamviewer from the console:

kirokko ~ $ teamviewer
Init...
Checking setup...
Launching TeamViewer...
fixme:service:scmdatabase_autostart_services Auto-start service L"MountMgr" failed to start: 2
fixme:service:scmdatabase_autostart_services Auto-start service L"PlugPlay" failed to start: 2
fixme:actctx:parse_depend_manifests Could not find dependent assembly L"Microsoft.Windows.Common-Controls" (6.0.0.0)
fixme:heap:HeapSetInformation (nil) 1 (nil) 0
fixme:ole:CoInitializeSecurity ((nil),-1,(nil),(nil),0,3,(nil),0,(nil)) - stub!
fixme:heap:HeapSetInformation (nil) 1 (nil) 0
fixme:process:SetProcessShutdownParameters (00000100, 00000000): partial stub.
fixme:resource:GetGuiResources (0xffffffff,0): stub
fixme:win:EnumDisplayDevicesW ((null),0,0x32dc60,0x00000000), stub!
fixme:win:EnumDisplayDevicesW (L"\\\\.\\DISPLAY1",0,0x32d918,0x00000000), stub!
fixme:win:EnumDisplayDevicesW ((null),1,0x32dc60,0x00000000), stub!
fixme:winhttp:WinHttpDetectAutoProxyConfigUrl discovery via DHCP not supported
fixme:msg:ChangeWindowMessageFilter 233 00000001
fixme:msg:ChangeWindowMessageFilter 4a 00000001
fixme:msg:ChangeWindowMessageFilter 407 00000001
fixme:msg:ChangeWindowMessageFilter 49 00000001
fixme:bitmap:CreateBitmapIndirect planes = 0
fixme:bitmap:CreateBitmapIndirect planes = 0
fixme:wtsapi:WTSRegisterSessionNotification Stub 0x1005a 0x00000000
err:ole:marshal_object couldn't get IPSFactory buffer for interface {00000131-0000-0000-c000-000000000046}
err:ole:marshal_object couldn't get IPSFactory buffer for interface {00000122-0000-0000-c000-000000000046}
err:ole:StdMarshalImpl_MarshalInterface Failed to create ifstub, hres=0x80040155
err:ole:CoMarshalInterface Failed to marshal the interface {00000122-0000-0000-c000-000000000046}, 80040155
fixme:msg:ChangeWindowMessageFilter c04f 00000001
fixme:richedit:ME_HandleMessage EM_SETFONTSIZE: stub
fixme:dbghelp:elf_search_auxv can't find symbol in module
wine: Unhandled page fault on read access to 0xffffffff at address 0xf7585c5a (thread 0009), starting debugger...
err:seh:start_debugger Couldn't start debugger ("winedbg --auto 8 5552") (2)
Read the Wine Developers Guide on how to set up winedbg or another debugger

What's the problem? The same problem occurred when I had Ubuntu 12.10 installed, and again when I installed Mint 14 KDE (Kubuntu 12.10). Now I have moved to Kubuntu 13.04 and the problem still exists.

    Read the article

  • Looking for Your Next Challenge...Don't Stretch Too Far

    - by david.talamelli
In my role as a Recruiter at Oracle I receive a large number of resumes from people who are interested in working with us. People contact me for a number of reasons: it can be about a specific role that we may be hiring for, or they may send me an email asking if there are any suitable roles for them. Sometimes when I speak to people, we have roles available that are similar to the roles they are actually in now. Sometimes people are interested in making this type of sideways move if their motivation to change jobs is not necessarily increased responsibility or career advancement (for example: money, redundancy, work environment). However, there are times when, after walking through a specific role with a candidate, they may say to me: "You know, that is very similar to the role that I am doing now. I would not want to move unless my next role presents me with the next challenge in my career." This is a fair statement: if a person is looking to change jobs for the next step in their career, they should be looking at suitable opportunities that will address that need. In this instance a sideways step will not really present any new challenges or responsibilities; the main change would be the company they are working for. Candidates looking for a new role because they want to move up the ladder should be looking for a role that offers them the next level of responsibility. I think the best job changes for people who are looking for career advancement are the roles that stretch someone outside of their comfort zone, but do not stretch them so much that they can't cope with the added responsibilities and pressure. In my head I often think of this in terms of an elastic band: you can stretch it, but only so much before it snaps. That is what you should be looking for, to be stretched but not so much that you snap. If you are, for example, in an individual contributor role and would like to move into a management role, you may not be quite ready to take on a role managing a large workforce or requiring significant people management experience. While your intentions may be right, your lack of management experience may put you outside the scope of candidates who could be successful in this type of role. In this example you can move from an individual contributor role to a management role, but it may need to be managing a smaller team rather than a larger one. While you are trying to make this transition you can try to pick up some responsibilities in your current role that will give you the skills and experience you need for your next role. Never be afraid to put your hand up to help on a new project or piece of work. You never know when that newly gained experience may come in handy in your career. This article was originally posted on David Talamelli's Blog - David's Journal on Tap

    Read the article

  • SQL SERVER – 4 Tips for ETL Software IDE Developers

    - by pinaldave
In a previous blog, I introduced the notion of Semantic Types. To an end user, a seamlessly integrated semantic typing engine significantly increases the ease of use of an ETL IDE (integrated development environment, or developer studio). This led me to think about other ease-of-use issues I have encountered while building ETL applications. When I get stumped while programming, I find myself asking variations on these questions: "How do I...?" "Now what?" "Why isn't this working?" "Why do I have to redo the work I just did?" It seems to me that a good ETL IDE will anticipate these questions and seek to answer them before they are even asked. So here are my tips to help software vendors build developer IDEs that actually make development easier.

How do I...? While developing an ETL application, have you ever asked yourself: "How do I set up the connection to my SQL Server database?", "How do I import my table definitions from Access?", etc. An easy answer might be "read the manual", but sometimes product manuals are not robust or easily accessible. So integrating robust how-to instructions directly into your ETL studio would help users get the information they need at the time they need it.

Now what? IDEs in general know where you last clicked or performed an action using an input device such as a keyboard, so they should be able to reasonably predict the design context you are in and suggest the next steps accordingly. Context-sensitive suggestions based on the state of the user's work will help users move forward in ETL application development.

Why isn't this working? Or why do I have to wait until I compile to be told about a critical design issue? If an ETL IDE is smart enough to signal to users what in their design structures is left to be completed or has been completed incorrectly, then the developer can spend much less time in the design → compile → error-correct loop. Just-in-time validation helps users detect and correct programming errors earlier in the ETL development life cycle.

Why do I have to redo the work I just did? In ETL development, schemas, transformation rules, connectivity objects, etc. can be reused in various situations. Using mouse-clicks to build and manage libraries of reusable design objects implies that the application development effort should decrease over time as the library acquires more objects.

I met a great company at SQL PASS that is trying to address many of these usability issues. Check them out at www.expressor-software.com. What other ease-of-use suggestions do you have for ETL software vendors? Please post your valuable comments.

Reference: Pinal Dave (http://blog.SQLAuthority.com) Filed under: Best Practices, Pinal Dave, PostADay, SQL, SQL Authority, SQL Query, SQL Server, SQL Tips and Tricks, T SQL, Technology Tagged: ETL

    Read the article

  • CDN CNAMEs not resolving to customer origin

    - by Donald Jenkins
I have set up an Edgecast CDN to mirror all my static content. Because I use the root of my domain (donaldjenkins.com) to host my main site, which uses Google Analytics and therefore sets cookies, I've stored the corresponding static files in a separate cookieless domain (donaldjenkins.info) which is used only for this purpose. I've set it up (using this guide for general guidance) with the following structure, based on a combination of customer origin and CDN origin, to make the most of the chosen short domain name and provide meaningful URLs:

http://donaldjenkins.info:80 is set as the customer origin for the content stored in the CDN at the directory http://wac.62E0.edgecastcdn.net/8062E0/donaldjenkins.info; I've then set up various subdomains of a separate domain, the conveniently-named cdn.dj, as CDN-origin edge CNAMEs for each of the corresponding static content types:

js.cdn.dj points to the origin directory http://wac.62E0.edgecastcdn.net/0062E0/donaldjenkins.info/js;
css.cdn.dj points to the origin directory http://wac.62E0.edgecastcdn.net/0062E0/donaldjenkins.info/css;
images.cdn.dj points to the origin directory http://wac.62E0.edgecastcdn.net/0062E0/donaldjenkins.info/images

and so on. This results in some pretty nice, short, clear URLs. The DNS zone file for cdn.dj (yes, it's a real domain name registered in Djibouti) is set properly:

cdn.dj         43200 IN A     205.186.157.162
css.cdn.dj     43200 IN CNAME wac.62E0.edgecastcdn.net.
images.cdn.dj  43200 IN CNAME wac.62E0.edgecastcdn.net.
js.cdn.dj      43200 IN CNAME wac.62E0.edgecastcdn.net.

The DNS resolves to the Edgecast URL:

$ host js.cdn.dj
js.cdn.dj is an alias for wac.62E0.edgecastcdn.net.
wac.62E0.edgecastcdn.net is an alias for gs1.wac.edgecastcdn.net.
gs1.wac.edgecastcdn.net has address 93.184.220.20

But whenever I try to fetch a file in any of the directories to which the CNAME assets map, I get a 404:

$ curl http://js.cdn.dj/combined.js
<?xml version="1.0" encoding="iso-8859-1"?>
<!DOCTYPE html PUBLIC "-//W3C//DTD XHTML 1.0 Transitional//EN" "http://www.w3.org/TR/xhtml1/DTD/xhtml1-transitional.dtd">
<html xmlns="http://www.w3.org/1999/xhtml" xml:lang="en" lang="en">
<head>
<title>404 - Not Found</title>
</head>
<body>
<h1>404 - Not Found</h1>
</body>
</html>

despite the fact that the corresponding customer origin file exists: running

$ curl http://donaldjenkins.info/js/combined.js

fetches the content of the combined.js file. Yet more than enough time has passed for the DNS to propagate since I set up the CDN. There's obviously some glaring mistake in the above-described setup, and I'm a bit of a novice with CDNs, but any suggestions would be gratefully received.
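Two checks worth running to isolate where the 404 comes from; these curl invocations simply reuse the hostnames and paths quoted above (note that the question mixes /8062E0/ and /0062E0/ path prefixes, so it may be worth testing both):

$ curl -H "Host: js.cdn.dj" http://gs1.wac.edgecastcdn.net/combined.js
# Presents the edge CNAME as the Host header directly to the edge server.
# A 404 here too suggests the edge-CNAME-to-origin-directory mapping is
# at fault; a success suggests the js.cdn.dj configuration itself.

$ curl "http://wac.62E0.edgecastcdn.net/0062E0/donaldjenkins.info/js/combined.js"
# Requests the file through the CDN origin path, bypassing the edge
# CNAME entirely, to confirm the CDN can pull from the customer origin.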

    Read the article

  • Add Enhanced Balloon Tooltips to Firefox

    - by Asian Angel
The default balloon tooltip in Firefox does well at times, but there are instances when more information would be much better. The Tooltip Plus extension for Firefox will give your browser that nice extra information boost.

Before & After

For our example we have placed the "before & after" shots together for better comparison. First off, we started with the How-To Geek logo. Note: the extension does not display the original URL behind shortened URLs. Next we moved on to a permanently linked article title, the "Reviews Tab" in the How-To Geek website toolbar, the article tags listing just beneath the HTG website toolbar, and the link for subscribing to our RSS feed. In each instance you could actually see the address behind the links. The Tooltip Plus extension will also help out with images in webpages (including "Alt Text" if present). Notice that the link for the image is now available for you to view.

Options

The options are extremely simple to work with. Decide if you want a document icon to display, the size of the icon, and whether you would like "Alt Text" for images to be displayed.

Conclusion

The Tooltip Plus extension does one thing and does it very well: it gives you that extra bit of information when you need it.

Links

Download the Tooltip Plus extension (Mozilla Add-ons)

    Read the article

  • To 'seal' or to 'wrap': that is the question ...

    - by Simon Thorpe
If you follow this blog you will already have a good idea of what Oracle Information Rights Management (IRM) does. By encrypting documents, Oracle IRM secures and tracks all copies of those documents, everywhere they are shared, stored and used, inside and outside your firewall. Unlike earlier encryption products, authorized end users can transparently use IRM-encrypted documents within standard desktop applications such as Microsoft Office, Adobe Reader, Internet Explorer, etc. without first having to manually decrypt the documents. Oracle refers to this encryption process as 'sealing', and it is thanks to the freely available Oracle IRM Desktop that end users can transparently open 'sealed' documents within desktop applications without needing to know they are encrypted and without being able to save them out in unencrypted form. So Oracle IRM provides an amazing, unprecedented capability to secure and track every copy of your most sensitive information, even enabling end user access to be revoked long after the documents have been copied to home computers or burnt to CDs/DVDs.

But what doesn't it do? The main limitation of Oracle IRM (and IRM products in general) is format and platform support. Oracle IRM supports by far the broadest range of desktop applications, and the deepest range of application versions, compared to other IRM vendors. This is important because you don't want to exclude sensitive business processes from being 'sealed' just because the file format is not supported or users cannot upgrade to the latest version of Microsoft Office or Adobe Reader. But even the Oracle IRM Desktop can only open 'sealed' documents on Windows, and does not, for example, currently support CAD (although this is coming in a future release). IRM products from other vendors are much more restrictive.

To address this limitation Oracle has just made available the Oracle IRM Wrapper, an all-format, any-platform encryption/decryption utility. It uses the same core Oracle IRM web services and classification-based rights model to manually encrypt and decrypt files of any format on any Java-capable operating system. The encryption envelope is the same, and it uses the same role- and classification-based rights as 'sealing', but before you can use 'wrapped' files you must manually decrypt them. Essentially it is old-school manual encryption/decryption using the modern classification-based rights model of Oracle IRM. So if you want to share sensitive CAD documents, ZIP archives, media files, etc. with a partner, and you already have Oracle IRM, it's time to get 'wrapping'!

Please note that the Oracle IRM Wrapper is made available as a free sample application (with full source code) and is not formally supported by Oracle. However, it is informally supported by its author, Martin Lambert, who also created the widely-used Oracle IRM Hot Folder automated sealing application.

    Read the article

  • Extending NerdDinner: Adding Geolocated Flair

    - by Jon Galloway
NerdDinner is a website with the audacious goal of "Organizing the world's nerds and helping them eat in packs." Because nerds aren't likely to socialize with others unless a website tells them to do it. Scott Hanselman showed off a lot of the cool features we've added to NerdDinner lately during his popular talk at MIX10, Beyond File | New Company: From Cheesy Sample to Social Platform. Did you miss it? Go ahead and watch it, I'll wait.

One of the features we wanted to add was flair. You know about flair, right? It's a way to let folks who like your site show it off on their own site. For example, here's my StackOverflow flair:

Great! So how could we add some of this flair stuff to NerdDinner?

What do we want to show?

If we're going to encourage our users to give up a bit of their beautiful website to show off a bit of ours, we need to think about what they'll want to show. For instance, my StackOverflow flair is all about me, not StackOverflow. So how will this apply to NerdDinner? Since NerdDinner is all about organizing local dinners, in order for the flair to be useful it needs to make sense for the person viewing the web page. If someone from Egypt visits my blog, they should see information about NerdDinners in Egypt. That's geolocation: localizing site content based on where the browser's sitting, and it makes sense for flair as well as for entire websites. So we'll set up a simple little callout that prompts them to host a dinner in their area:

Hopefully our flair works and there is a dinner near your viewers, so they'll see another view which lists upcoming dinners near them:

The Geolocation Part

Generally website geolocation is done by mapping the requestor's IP address to a geographic area. It's not an exact science, but I've always found it to be pretty accurate. There are (at least) three ways to handle it:

You pay somebody like MaxMind for a database (with regular updates) that sits on your server, and you use their API to do lookups. I used this on a pretty big project a few years ago and it worked well.
You use the HTML 5 Geolocation API or Google Gears or some other browser-based solution. I think those are cool (I use Google Gears a lot), but they're both in flux right now and I don't think either has a wide enough install base yet to rely on them. You might want to, but I've heard you do all kinds of crazy stuff, and sometimes it gets you in trouble. I don't mean talk out of line, but we all laugh behind your back a bit. But, hey, it's up to you. It's your flair or whatever.
There are some free web services out there that will take an IP address and give you location information. Easy, and works for everyone. That's what we're doing.

I looked at a few different services and settled on IPInfoDB. It's free, has a great API, and even returns JSON, which is handy for Javascript use. The IP query is pretty simple.
We hit a URL like this:

http://ipinfodb.com/ip_query.php?ip=74.125.45.100&timezone=false

... and we get an XML response back like this:

<?xml version="1.0" encoding="UTF-8"?>
<Response>
  <Ip>74.125.45.100</Ip>
  <Status>OK</Status>
  <CountryCode>US</CountryCode>
  <CountryName>United States</CountryName>
  <RegionCode>06</RegionCode>
  <RegionName>California</RegionName>
  <City>Mountain View</City>
  <ZipPostalCode>94043</ZipPostalCode>
  <Latitude>37.4192</Latitude>
  <Longitude>-122.057</Longitude>
</Response>

So we'll build some data transfer classes to hold the location information, like this:

public class LocationInfo
{
    public string Country { get; set; }
    public string RegionName { get; set; }
    public string City { get; set; }
    public string ZipPostalCode { get; set; }
    public LatLong Position { get; set; }
}

public class LatLong
{
    public float Lat { get; set; }
    public float Long { get; set; }
}

And now hitting the service is pretty simple:

public static LocationInfo HostIpToPlaceName(string ip)
{
    string url = "http://ipinfodb.com/ip_query.php?ip={0}&timezone=false";
    url = String.Format(url, ip);
    var result = XDocument.Load(url);
    var location = (from x in result.Descendants("Response")
                    select new LocationInfo
                    {
                        City = (string)x.Element("City"),
                        RegionName = (string)x.Element("RegionName"),
                        Country = (string)x.Element("CountryName"),
                        ZipPostalCode = (string)x.Element("ZipPostalCode"),
                        Position = new LatLong
                        {
                            Lat = (float)x.Element("Latitude"),
                            Long = (float)x.Element("Longitude")
                        }
                    }).First();
    return location;
}

Getting The User's IP

Okay, but first we need the end user's IP, and you'd think it would be as simple as reading the value from HttpContext:

HttpContext.Current.Request.UserHostAddress

But you'd be wrong. Sorry. UserHostAddress just wraps HttpContext.Current.Request.ServerVariables["REMOTE_ADDR"], but that doesn't get you the IP for users behind a proxy. That's in another header, "HTTP_X_FORWARDED_FOR". So you can either hit a wrapper and then check a header, or just check two headers. I went for uniformity:

string SourceIP = string.IsNullOrEmpty(Request.ServerVariables["HTTP_X_FORWARDED_FOR"])
    ? Request.ServerVariables["REMOTE_ADDR"]
    : Request.ServerVariables["HTTP_X_FORWARDED_FOR"];

We're almost set to wrap this up, but first let's talk about our views. Yes, views, because we'll have two.

Selecting the View

We wanted to make it easy for people to include the flair in their sites, so we looked around at how other people were doing this. The StackOverflow folks have a pretty good flair system, which allows you to include the flair in your site as either an IFRAME reference or a Javascript include. We'll do both. We have a ServicesController to handle use of the site information outside of NerdDinner.com, so this fits in pretty well there. We'll be displaying the same information for both HTML and Javascript flair, so we can use one Flair controller action which will return a different view depending on the requested format. Here's the general flow for our controller action:

1. Get the user's IP
2. Translate it to a location
3. Grab the top three upcoming dinners that are near that location
4. Select the view based on the format (defaulted to "html")
5. Return a FlairViewModel which contains the list of dinners and the location information

public ActionResult Flair(string format = "html")
{
    string SourceIP = string.IsNullOrEmpty(
        Request.ServerVariables["HTTP_X_FORWARDED_FOR"]) ?
        Request.ServerVariables["REMOTE_ADDR"] :
        Request.ServerVariables["HTTP_X_FORWARDED_FOR"];

    var location = GeolocationService.HostIpToPlaceName(SourceIP);
    var dinners = dinnerRepository.
        FindByLocation(location.Position.Lat, location.Position.Long).
        OrderByDescending(p => p.EventDate).Take(3);

    // Select the view we'll return.
    // Using a switch because we'll add in JSON and other formats later.
    string view;
    switch (format.ToLower())
    {
        case "javascript":
            view = "JavascriptFlair";
            break;
        default:
            view = "Flair";
            break;
    }

    return View(
        view,
        new FlairViewModel
        {
            Dinners = dinners.ToList(),
            LocationName = string.IsNullOrEmpty(location.City)
                ? "you"
                : String.Format("{0}, {1}", location.City, location.RegionName)
        }
    );
}

Note: I'm not in love with the logic here, but it seems like overkill to extract the switch statement away when we'll probably just have two or three views. What do you think?

The HTML View

The HTML version of the view is pretty simple; the only thing of any real interest here is the use of an extension method to truncate strings that would cause the titles to wrap.

public static string Truncate(this string s, int maxLength)
{
    if (string.IsNullOrEmpty(s) || maxLength <= 0)
        return string.Empty;
    else if (s.Length > maxLength)
        return s.Substring(0, maxLength) + "...";
    else
        return s;
}

So here's how the HTML view ends up looking:

<%@ Page Title="" Language="C#" Inherits="System.Web.Mvc.ViewPage<FlairViewModel>" %>
<%@ Import Namespace="NerdDinner.Helpers" %>
<%@ Import Namespace="NerdDinner.Models" %>
<!DOCTYPE html PUBLIC "-//W3C//DTD XHTML 1.0 Strict//EN" "http://www.w3.org/TR/xhtml1/DTD/xhtml1-strict.dtd">
<html xmlns="http://www.w3.org/1999/xhtml">
<head>
    <title>Nerd Dinner</title>
    <link href="/Content/Flair.css" rel="stylesheet" type="text/css" />
</head>
<body>
    <div id="nd-wrapper">
        <h2 id="nd-header">NerdDinner.com</h2>
        <div id="nd-outer">
        <% if (Model.Dinners.Count == 0) { %>
            <div id="nd-bummer">
                Looks like there's no Nerd Dinners near <%: Model.LocationName %> in the near future. Why not <a target="_blank" href="http://www.nerddinner.com/Dinners/Create">host one</a>?</div>
        <% } else { %>
            <h3>Dinners Near You</h3>
            <ul>
            <% foreach (var item in Model.Dinners) { %>
                <li>
                    <%: Html.ActionLink(String.Format("{0} with {1} on {2}", item.Title.Truncate(20), item.HostedBy, item.EventDate.ToShortDateString()), "Details", "Dinners", new { id = item.DinnerID }, new { target = "_blank" })%></li>
            <% } %>
            </ul>
        <% } %>
            <div id="nd-footer">
                More dinners and fun at <a target="_blank" href="http://nrddnr.com">http://nrddnr.com</a></div>
        </div>
    </div>
</body>
</html>

You'd include this in a page using an IFRAME, like this:

<IFRAME height=230 marginHeight=0 src="http://nerddinner.com/services/flair" frameBorder=0 width=160 marginWidth=0 scrolling=no></IFRAME>

The Javascript view

The Javascript flair is written so you can include it in a webpage with a simple script include, like this:

<script type="text/javascript" src="http://nerddinner.com/services/flair?format=javascript"></script>

The goal of this view is very similar to the HTML embed view, with a few exceptions:

We're creating a script element and adding it to the head of the document, which will then document.write out the content. Note that you have to consider whether your users will actually have a <head> element in their documents, but for website flair use cases I think that's a safe bet.
Since the content is being added to the existing page rather than shown in an IFRAME, all links need to be absolute.
That means we can't use Html.ActionLink, since it generates relative routes.
We need to escape everything since it's being written out as strings.
We need to set the content type to application/x-javascript. The easiest way to do that is to use the <%@ Page ContentType %> directive.

<%@ Page Language="C#" Inherits="System.Web.Mvc.ViewPage<NerdDinner.Models.FlairViewModel>" ContentType="application/x-javascript" %>
<%@ Import Namespace="NerdDinner.Helpers" %>
<%@ Import Namespace="NerdDinner.Models" %>
document.write('<script>var link = document.createElement(\"link\");link.href = \"http://nerddinner.com/content/Flair.css\";link.rel = \"stylesheet\";link.type = \"text/css\";var head = document.getElementsByTagName(\"head\")[0];head.appendChild(link);</script>');
document.write('<div id=\"nd-wrapper\"><h2 id=\"nd-header\">NerdDinner.com</h2><div id=\"nd-outer\">');
<% if (Model.Dinners.Count == 0) { %>
document.write('<div id=\"nd-bummer\">Looks like there\'s no Nerd Dinners near <%: Model.LocationName %> in the near future. Why not <a target=\"_blank\" href=\"http://www.nerddinner.com/Dinners/Create\">host one</a>?</div>');
<% } else { %>
document.write('<h3>Dinners Near You</h3><ul>');
<% foreach (var item in Model.Dinners) { %>
document.write('<li><a target=\"_blank\" href=\"http://nrddnr.com/<%: item.DinnerID %>\"><%: item.Title.Truncate(20) %> with <%: item.HostedBy %> on <%: item.EventDate.ToShortDateString() %></a></li>');
<% } %>
document.write('</ul>');
<% } %>
document.write('<div id=\"nd-footer\"> More dinners and fun at <a target=\"_blank\" href=\"http://nrddnr.com\">http://nrddnr.com</a></div></div></div>');

Getting IPs for Testing

There are a variety of online services that will translate a location to an IP, which were handy for testing these out. I found http://www.itouchmap.com/latlong.html to be most useful, but I'm open to suggestions if you know of something better.

Next steps

I think the next step here is to minimize load, you know, in case people start actually using this flair. There are two places to think about: the NerdDinner.com servers, and the services we're using for geolocation. I usually think about caching as a first attack on server load, but that's less helpful here since every user will have a different IP. Instead, I'd look at taking advantage of Asynchronous Controller Actions, a cool new feature in ASP.NET MVC 2. Async Actions let you call a potentially long-running webservice without tying up a thread on the server while waiting for the response. There's some good info on that in the MSDN documentation, and Dino Esposito wrote a great article on Asynchronous ASP.NET Pages in the April 2010 issue of MSDN Magazine. But let's think of the children, shall we? What about ipinfodb.com? Well, they don't have specific daily limits, but they do throttle you if you put a lot of traffic on them. From their FAQ:

We do not have a specific daily limit but queries that are at a rate faster than 2 per second will be put in "queue". If you stay below 2 queries/second everything will be normal. If you go over the limit, you will still get an answer for all queries but they will be slowed down to about 1 per second. This should not affect most users but for high volume websites, you can either use our IP database on your server or we can whitelist your IP for 5$/month (simply use the donate form and leave a comment with your server IP).
Good programming practices such as not querying our API for all page views (you can store the data in a cookie or a database) will also help not reaching the limit.

So the first step there is to save the geolocation information in a time-limited cookie, which will allow us to look up the local dinners immediately without having to hit the geolocation service.
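To illustrate that cookie idea, here is a minimal sketch of a helper that could sit in the ServicesController and wrap the HostIpToPlaceName call from earlier. The cookie name, the pipe-delimited format and the one-week lifetime are all assumptions made for this example (System.Web and System.Globalization supply HttpUtility, HttpCookie and CultureInfo):

// Sketch only: cache the ipinfodb lookup in a cookie for a week so repeat
// visitors don't trigger a web service call on every page view. The cookie
// name "NerdDinnerLocation" and its format are invented for this example.
private LocationInfo GetLocationWithCookieCache(string ip)
{
    var cookie = Request.Cookies["NerdDinnerLocation"];
    if (cookie != null && !String.IsNullOrEmpty(cookie.Value))
    {
        // Format written below: lat|long|city|region. Country and postal
        // code aren't cached because the flair views don't use them.
        var parts = HttpUtility.UrlDecode(cookie.Value).Split('|');
        if (parts.Length == 4)
        {
            return new LocationInfo
            {
                City = parts[2],
                RegionName = parts[3],
                Position = new LatLong
                {
                    Lat = float.Parse(parts[0], CultureInfo.InvariantCulture),
                    Long = float.Parse(parts[1], CultureInfo.InvariantCulture)
                }
            };
        }
    }

    // Cache miss: hit the geolocation service, then remember the answer.
    var location = GeolocationService.HostIpToPlaceName(ip);
    string value = String.Format(CultureInfo.InvariantCulture, "{0}|{1}|{2}|{3}",
        location.Position.Lat, location.Position.Long,
        location.City, location.RegionName);
    Response.Cookies.Add(new HttpCookie("NerdDinnerLocation", HttpUtility.UrlEncode(value))
    {
        Expires = DateTime.Now.AddDays(7)
    });
    return location;
}

The Flair action would then call GetLocationWithCookieCache(SourceIP) instead of calling the geolocation service directly.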

    Read the article

  • Introducing AutoVue Document Print Service

    - by celine.beck
We recently announced the availability of our new AutoVue Document Print Service products. For more information, please read the article entitled Print Any Document Type with AutoVue Document Print Services that was posted on our blog. The AutoVue Document Print Service products help address a simple-sounding yet very common challenge: printing and batch printing documents. The AutoVue Document Print Service is a Web-Services-based interface which allows developers to complement their print server solutions by leveraging AutoVue's printing capabilities within broader enterprise applications like Asset Lifecycle Management, Product Lifecycle Management, Enterprise Content Management solutions, etc. This means that you can leverage the AutoVue Document Print Service products as part of your printing solution to automate the printing of virtually any document type required in any business process. Clients that consume AutoVue's Document Print Service can be written in any language (for example Java or .NET) as long as they understand Web Services Description Language (WSDL) and communicate using Simple Object Access Protocol (SOAP). The print solution consists of three main components, as described in the diagram below:

a print server (not included in the AutoVue Document Print Service offering) that will interact with your application to identify the files that need to be printed, the printer to send each file to, as well as the print options needed for each file (paper size, page orientation, etc.), and collate the print job requests. The print server will also take care of calling the AutoVue Document Print Service to perform the actual printing;
the AutoVue Document Print Service, which sends files to a printer for printing. The AutoVue Document Print Service products leverage AutoVue's format- and platform-agnostic technology to let you print or batch-print virtually any type of file, without requiring the authoring application installed on your machine;
and printers.

As shown above, you can trigger printing from your application either programmatically, through automated business processes, or manually, through human interaction. If documents that need to be printed from your application are stored inside a content repository/Document Management System (DMS) such as Oracle Universal Content Management (UCM), then the print server will need to identify the list of documents and pass the ID of each document to the AutoVue DPS to print. In this case, AutoVue DPS leverages the AutoVue VueLink integration (note: AutoVue VueLink integrations are pre-packaged AutoVue integrations with most common enterprise systems; check our website for more information on the subject) to fetch documents out of the document management system for printing. In lieu of the AutoVue VueLink integration, you can also leverage the AutoVue Integration Software Development Kit (iSDK) to build your own connector. If the documents you need to print from your application are not stored in a content management system, the print server will need to ensure that files are made available to the AutoVue Document Print Service. The print server could, for example, fetch the files out of your application, or an extension to the application could be developed to fetch the files and make them available to the AutoVue DPS. More information on methods to pass file information to the AutoVue Document Print Service products can be found in the AutoVue Document Print Service Overview documentation available on the Oracle Technology Network.
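Since the service is consumed over plain SOAP, a .NET print server component might call it roughly as sketched below. To be clear, this is not taken from the product: the proxy class, operation and option names here (DocumentPrintServiceClient, SubmitPrintJob, PrintOptions) are hypothetical placeholders; the real names come from the WSDL published by your AutoVue Document Print Service installation, from which a proxy can be generated (for example with svcutil.exe or "Add Service Reference" in Visual Studio):

// Hypothetical consumer code: every AutoVue-specific name below is a
// placeholder. Regenerate the proxy from the real WSDL and substitute
// the actual class, operation and option names it exposes.
var client = new DocumentPrintServiceClient();   // proxy generated from the WSDL

var options = new PrintOptions                   // placeholder option type
{
    PaperSize = "A4",
    Orientation = "Landscape",
    Copies = 2
};

// Submit one file from the batch the print server has collated.
client.SubmitPrintJob(@"\\fileserver\drawings\pump-assembly.dwg",
                      "EngineeringPlotter01", options);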
Related article: Print Any Document Type with AutoVue Document Print Services

    Read the article

  • Oracle Database 12c: Partner Material

    - by Thanos Terentes Printzios
Oracle Database 12c offers the latest innovation from Oracle Database Server Technologies with a new Multitenant Architecture, which can help accelerate database consolidation and Cloud projects. The primary resource for partners on Database 12c is of course the Oracle Database 12c Knowledge Zone, where you can get up to speed on the latest Database 12c enhancements so you can sell, implement and support it. Resources and material on Oracle Database 12c can be found all around Oracle.com, and even hidden in AR posters like the one on the left. Here are some additional resources for you:

Oracle Database 12c: Interactive Quick Reference is a multimedia tool for various terms and concepts used in the Oracle Database 12c release. This reference was built as a multimedia web page which provides descriptions of the database architectural components and references to relevant documentation. Overall, it is a nice little tool which may help you quickly find a view you are searching for or get more information about background processes in Oracle Database 12c. Use this tool to find valuable information for any complex concept or product in an intuitive and useful manner.

Oracle Database 12c Learning Library contains several technical trainings (2-day DBA, Multitenant Architecture, etc.) but also videos/demos, learning paths by role and a lot more.

Get ready and become an Oracle Database 12c Specialized Partner with the Oracle Database 12c Specialization for Partners. Review the specialization criteria and your company's status, and apply for an Oracle Database 12c Specialization. Access our OPN training repository to get prepared for the exams.

"Oracle Database 12c: Plug into the Cloud!" Marketing Kit includes a great selection of assets to help Oracle partners in their marketing activities to promote solutions that leverage all the new features of Oracle Database 12c. In the package you will find assets (templates, invitation texts, presentations, telemarketing script, ...) to be used for your demand generation activities; a full set of presentations with the value propositions for customers; and sales enablement and sales support material. Review the kit and start planning your marketing activities around Database 12c.

Oracle Database 12c Quick Reference Guide (PDF) and Oracle Database 12c – Partner FAQ (PDF)

Partners that need further assistance with Database 12c can always contact us at partner.imc-AT-beehiveonline.oracle-DOT-com or locally address one of the Oracle ECEMEA Partner Hubs for assistance.

    Read the article

< Previous Page | 461 462 463 464 465 466 467 468 469 470 471 472  | Next Page >