Search Results

Search found 5290 results on 212 pages for 'refresh rate'.


  • ASA 5505 stops local internet when connected to VPN

    - by g18c
    Hi, I have a Cisco ASA 5505 running firmware 8.2(5) which hosts an internal LAN on 192.168.30.0/24. I used the VPN Wizard to set up L2TP access, and I can connect fine from a Windows box and ping hosts behind the router. However, when connected to the VPN I can no longer ping out to the internet or browse web pages. I would like to be on the VPN and browse the internet at the same time - I understand this is called split tunneling (I have ticked the setting in the wizard, but to no effect). How do I set this up? Alternatively, if split tunneling is a pain to configure, giving the connected VPN client internet access through the ASA's WAN IP would be OK. Thanks, Chris

        names
        !
        interface Ethernet0/0
         switchport access vlan 2
        !
        interface Ethernet0/1
        !
        interface Vlan1
         nameif inside
         security-level 100
         ip address 192.168.30.1 255.255.255.0
        !
        interface Vlan2
         nameif outside
         security-level 0
         ip address 208.74.158.58 255.255.255.252
        !
        ftp mode passive
        access-list inside_nat0_outbound extended permit ip any 10.10.10.0 255.255.255.128
        access-list inside_nat0_outbound extended permit ip 192.168.30.0 255.255.255.0 192.168.30.192 255.255.255.192
        access-list DefaultRAGroup_splitTunnelAcl standard permit 192.168.30.0 255.255.255.0
        access-list DefaultRAGroup_splitTunnelAcl_1 standard permit 192.168.30.0 255.255.255.0
        pager lines 24
        logging asdm informational
        mtu inside 1500
        mtu outside 1500
        ip local pool LANVPNPOOL 192.168.30.220-192.168.30.249 mask 255.255.255.0
        icmp unreachable rate-limit 1 burst-size 1
        no asdm history enable
        arp timeout 14400
        global (outside) 1 interface
        nat (inside) 0 access-list inside_nat0_outbound
        nat (inside) 1 192.168.30.0 255.255.255.0
        route outside 0.0.0.0 0.0.0.0 208.74.158.57 1
        timeout xlate 3:00:00
        timeout conn 1:00:00 half-closed 0:10:00 udp 0:02:00 icmp 0:00:02
        timeout sunrpc 0:10:00 h323 0:05:00 h225 1:00:00 mgcp 0:05:00 mgcp-pat 0:05:00
        timeout sip 0:30:00 sip_media 0:02:00 sip-invite 0:03:00 sip-disconnect 0:02:00
        timeout sip-provisional-media 0:02:00 uauth 0:05:00 absolute
        timeout tcp-proxy-reassembly 0:01:00
        timeout floating-conn 0:00:00
        dynamic-access-policy-record DfltAccessPolicy
        http server enable
        http 192.168.30.0 255.255.255.0 inside
        snmp-server enable traps snmp authentication linkup linkdown coldstart
        crypto ipsec transform-set ESP-AES-256-MD5 esp-aes-256 esp-md5-hmac
        crypto ipsec transform-set ESP-DES-SHA esp-des esp-sha-hmac
        crypto ipsec transform-set ESP-3DES-SHA esp-3des esp-sha-hmac
        crypto ipsec transform-set ESP-DES-MD5 esp-des esp-md5-hmac
        crypto ipsec transform-set ESP-AES-192-MD5 esp-aes-192 esp-md5-hmac
        crypto ipsec transform-set ESP-3DES-MD5 esp-3des esp-md5-hmac
        crypto ipsec transform-set ESP-AES-256-SHA esp-aes-256 esp-sha-hmac
        crypto ipsec transform-set ESP-AES-128-SHA esp-aes esp-sha-hmac
        crypto ipsec transform-set ESP-AES-192-SHA esp-aes-192 esp-sha-hmac
        crypto ipsec transform-set ESP-AES-128-MD5 esp-aes esp-md5-hmac
        crypto ipsec transform-set TRANS_ESP_3DES_SHA esp-3des esp-sha-hmac
        crypto ipsec transform-set TRANS_ESP_3DES_SHA mode transport
        crypto ipsec security-association lifetime seconds 28800
        crypto ipsec security-association lifetime kilobytes 4608000
        crypto dynamic-map SYSTEM_DEFAULT_CRYPTO_MAP 65535 set transform-set ESP-AES-128-SHA ESP-AES-128-MD5 ESP-AES-192-SHA ESP-AES-192-MD5 ESP-AES-256-SHA ESP-AES-256-MD5 ESP-3DES-SHA ESP-3DES-MD5 ESP-DES-SHA ESP-DES-MD5 TRANS_ESP_3DES_SHA
        crypto map outside_map 65535 ipsec-isakmp dynamic SYSTEM_DEFAULT_CRYPTO_MAP
        crypto map outside_map interface outside
        crypto isakmp enable outside
        crypto isakmp policy 10
         authentication pre-share
         encryption 3des
         hash sha
         group 2
         lifetime 86400
        telnet timeout 5
        ssh timeout 5
        console timeout 0
        dhcpd auto_config outside
        !
        threat-detection basic-threat
        threat-detection statistics access-list
        no threat-detection statistics tcp-intercept
        webvpn
        group-policy DefaultRAGroup internal
        group-policy DefaultRAGroup attributes
         dns-server value 192.168.30.3
         vpn-tunnel-protocol l2tp-ipsec
         split-tunnel-policy tunnelspecified
         split-tunnel-network-list value DefaultRAGroup_splitTunnelAcl_1
        username user password Cj7W5X7wERleAewO8ENYtg== nt-encrypted privilege 0
        tunnel-group DefaultRAGroup general-attributes
         address-pool LANVPNPOOL
         default-group-policy DefaultRAGroup
        tunnel-group DefaultRAGroup ipsec-attributes
         pre-shared-key *****
        tunnel-group DefaultRAGroup ppp-attributes
         no authentication chap
         authentication ms-chap-v2
        !
        class-map inspection_default
         match default-inspection-traffic
        !
        !
        policy-map type inspect dns preset_dns_map
         parameters
          message-length maximum client auto
          message-length maximum 512
        policy-map global_policy
         class inspection_default
          inspect dns preset_dns_map
          inspect ftp
          inspect h323 h225
          inspect h323 ras
          inspect rsh
          inspect rtsp
          inspect esmtp
          inspect sqlnet
          inspect skinny
          inspect sunrpc
          inspect xdmcp
          inspect sip
          inspect netbios
          inspect tftp
          inspect ip-options
        !
        service-policy global_policy global
        prompt hostname context
        : end
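    Two notes that may help, hedged. First, the built-in Windows L2TP client often ignores the ASA's split-tunnel list entirely, so the usual client-side half of the fix is unticking "Use default gateway on remote network" in the connection's TCP/IP settings. Second, for the fallback goal (internet via the ASA's WAN IP), a sketch only - 8.2-era syntax, worth verifying before pasting:

        ! let VPN traffic enter and leave the same (outside) interface
        same-security-traffic permit intra-interface
        ! NAT the L2TP pool (it sits inside 192.168.30.192/26) as it hairpins
        ! back out; 'global (outside) 1 interface' already exists above
        nat (outside) 1 192.168.30.192 255.255.255.192 outside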


  • What is the best config for nginx worker_rlimit_nofile and worker_connections (28672)?

    - by Binh Nguyen
    I have an issue with slow web browser response (especially on IE): requests sometimes time out, and a single 301 redirect can hang for up to 20 seconds. Testing with IE's F12 developer tools, the wait/start time is reported as very long, but once the connection is made the page elements download and display quickly (tested at xaluan.com). It mostly happens when there are more than 2100 active users on the site (per Google Analytics real-time). The server runs CentOS 5 with nginx and Apache on 32 CPU cores, 96 GB RAM, and RAID 10 SAS disks. Following is my config:

        user nobody;
        # no need for more workers in the proxy mode
        worker_processes 28;                  # old 32; good at 24
        error_log /var/log/nginx/error.log;   # old: info appended at end
        worker_rlimit_nofile 22528;

        events {
            worker_connections 22528;
            use epoll;  # you should use epoll here for Linux kernels 2.6.x
        }

        http {
            server_name_in_redirect off;
            server_names_hash_max_size 10240;
            server_names_hash_bucket_size 1024;
            include mime.types;
            default_type application/octet-stream;
            server_tokens off;
            disable_symlinks off;
            sendfile on;
            tcp_nopush on;
            tcp_nodelay on;
            keepalive_timeout 25;   # old 5
            gzip on;
            gzip_vary on;
            gzip_disable "MSIE [1-6]\.";
            gzip_proxied any;
            gzip_http_version 1.1;
            gzip_min_length 1000;
            gzip_comp_level 6;
            gzip_buffers 16 8k;
            ignore_invalid_headers on;
            client_header_timeout 1m;   # 3m
            client_body_timeout 1m;     # 3m
            send_timeout 1m;            # 3m
            reset_timedout_connection on;
            connection_pool_size 256;
            client_header_buffer_size 256k;
            large_client_header_buffers 4 256k;
            client_max_body_size 100M;
            client_body_buffer_size 256k;
            request_pool_size 32k;
            output_buffers 4 32k;
            postpone_output 1460;
            proxy_temp_path /tmp/nginx_proxy/;
            client_body_in_file_only on;
            log_format bytes_log "$msec $bytes_sent .";
            limit_conn_zone $binary_remote_addr zone=limit_per_ip:1m;
            limit_conn limit_per_ip 20;
            limit_req_zone $binary_remote_addr zone=allips:5m rate=200r/s;
            limit_req zone=allips burst=200 nodelay;
            include "/etc/nginx/vhosts/*";
        }

    I have played around with the worker config:

    1. Tried increasing it, as someone suggested: worker_rlimit_nofile = worker_connections = worker_processes * 1024 = 32768.
    2. Tried setting worker_processes = 28 with the other worker settings at 22528, and other combinations too, but that didn't work either - sometimes it made the server load climb very quickly.
    3. Tried commenting out worker_rlimit_nofile so it falls back to the system limit. That seemed to improve the response-time issue a bit, but it also drives the load up quickly at peak time.

    Please help. Thanks. PS: here is the Apache config as well, in case you spot something there:

        Listen 0.0.0.0:8081
        User nobody
        Group nobody
        ExtendedStatus On
        ServerAdmin [email protected]
        ServerName server.xaluan.com
        LogLevel warn
        # These can be set in WHM under 'Apache Global Configuration'
        Timeout 100
        TraceEnable Off
        ServerSignature Off
        ServerTokens ProductOnly
        FileETag None
        StartServers 15
        <IfModule prefork.c>
            MinSpareServers 20
            MaxSpareServers 50
            #MaxSpareServers 40
        </IfModule>
        ServerLimit 1572
        MaxClients 1572
        MaxRequestsPerChild 4000
        # MaxRequestsPerChild 3000
        KeepAlive On
        KeepAliveTimeout 3
        MaxKeepAliveRequests 300
        #MaxKeepAliveRequests 130
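    A sizing note (an assumption based on common practice, not a measured fix for this box): in proxy mode each client request can hold two descriptors at once - one to the client and one upstream to Apache - so worker_rlimit_nofile is usually set to roughly twice worker_connections, and the OS-wide limits have to be raised to match or workers run out of descriptors mid-burst. A sketch:

        # nginx.conf -- descriptor budget sketch (verify against your box)
        worker_processes     28;
        worker_rlimit_nofile 45056;     # ~2 x worker_connections, per worker

        events {
            worker_connections 22528;
            use epoll;
        }

    On the host, that means something like fs.file-max in /etc/sysctl.conf plus a matching nofile entry in /etc/security/limits.conf for the nginx user, so the per-worker limit can actually be granted.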


  • Postfix not sending/allowing receiving of messages after server (hardware) changed

    - by 537mfb
    We had an old notebook running Ubuntu 12.04 working as a web/FTP/mail server. It worked, but since the notebook was pretty old and unreliable, a desktop was bought to replace it before it stopped working altogether. Due to issues with the new desktop's video card we couldn't use Ubuntu 12.04, so we installed Ubuntu 13.10 and went about configuring it. Since we removed the notebook from the network, we kept the same computer name and local IP address to make things as close to the old server as possible, configuration-wise. However, something has gone wrong: Postfix throws error 451 4.3.0 (lookup failure) on every attempt to send mail, and no mail can be received either. Our main.cf is a copy of the one we were using (and which worked) on the old server (note that we use EHCP):

        # See /usr/share/postfix/main.cf.dist for a commented, more complete version

        # Debian specific:  Specifying a file name will cause the first
        # line of that file to be used as the name.  The Debian default
        # is /etc/mailname.
        #myorigin = /etc/mailname

        smtpd_banner = $myhostname ESMTP $mail_name powered by Easy Hosting Control Panel (ehcp) on Ubuntu, www.ehcp.net
        biff = no

        # appending .domain is the MUA's job.
        append_dot_mydomain = no

        # Uncomment the next line to generate "delayed mail" warnings
        #delay_warning_time = 4h

        readme_directory = no
        myhostname = m21-traducoes.com.pt
        relayhost =
        mydestination = localhost, 89.152.248.139
        mynetworks = 127.0.0.0/8, 192.168.0.0/16, 172.16.0.0/16, 10.0.0.0/8, 89.152.248.0/24
        virtual_alias_domains =
        virtual_alias_maps = proxy:mysql:/etc/postfix/mysql-virtual_forwardings.cf, proxy:mysql:/etc/postfix/mysql-virtual_email2email.cf
        transport_maps = proxy:mysql:/etc/postfix/mysql-virtual_transports.cf
        virtual_mailbox_domains = proxy:mysql:/etc/postfix/mysql-virtual_domains.cf
        virtual_mailbox_maps = proxy:mysql:/etc/postfix/mysql-virtual_mailboxes.cf
        virtual_mailbox_base = /home/vmail
        virtual_uid_maps = static:5000
        virtual_gid_maps = static:5000
        smtpd_sasl_auth_enable = yes
        smtpd_sasl_security_options = noanonymous
        broken_sasl_auth_clients = yes
        smtpd_recipient_restrictions = permit_mynetworks,permit_sasl_authenticated,check_client_access hash:/var/lib/pop-before-smtp/hosts,reject_unauth_destination
        smtp_use_tls = yes
        smtpd_use_tls = yes
        smtpd_tls_auth_only = no
        smtpd_tls_CAfile = /etc/postfix/cacert.pem
        smtpd_tls_cert_file = /etc/postfix/smtpd.cert
        smtpd_tls_key_file = /etc/postfix/smtpd.key
        smtpd_tls_loglevel = 1
        smtpd_tls_received_header = yes
        smtpd_tls_session_cache_timeout = 3600s
        tls_random_source = dev:/dev/urandom
        virtual_create_maildirsize = yes
        virtual_mailbox_extended = yes
        virtual_mailbox_limit_maps = proxy:mysql:/etc/postfix/mysql-virtual_mailbox_limit_maps.cf
        virtual_mailbox_limit_override = yes
        virtual_maildir_limit_message = "The user you are trying to reach is over quota."
        virtual_overquota_bounce = yes
        debug_peer_list =
        sender_canonical_maps =
        debug_peer_level = 1
        proxy_read_maps = $local_recipient_maps $mydestination $virtual_alias_maps $virtual_alias_domains $virtual_mailbox_maps $virtual_mailbox_domains $relay_recipient_maps $canonical_maps $sender_canonical_maps $recipient_canonical_maps $relocated_maps $mynetworks $virtual_mailbox_limit_maps $transport_maps
        alias_maps = hash:/etc/aliases
        smtpd_relay_restrictions = permit_mynetworks, permit_sasl_authenticated,check_client_access hash:/var/lib/pop-before-smtp/hosts,reject_unauth_destination
        smtpd_destination_concurrency_limit = 2
        smtpd_destination_rate_delay = 1s
        smtpd_extra_recipient_limit = 10
        disable_vrfy_command = yes
        smtpd_delay_reject = yes
        smtpd_helo_required = yes
        smtpd_error_sleep_time = 1s
        smtpd_soft_error_limit = 10
        smtpd_hard_error_limit = 20

    This configuration was working before, but now every time I try to send a mail in SquirrelMail it reports:

        Message not sent. Server replied: Requested action aborted: error in processing
        451 4.3.0 <[email protected]>: Temporary lookup failure

    and I can't send mail to it from outside either. Any ideas?

    EDIT: Here are the issues MXToolBox reports for my domain (answering, hopefully, @Teun Vink):

                   BlackList   Mail Server   Web Server   DNS
        Errors         4            0             2         0
        Warnings       0            0             0         3
        Passed         0            6             3        12

    So the domain is on some blacklist, but that doesn't explain the error at all. No mail server issues were found (except that it's not working). The two web server errors are because I don't have HTTPS working (no SSL certificate), so that test fails. The three DNS warnings were already there when it was working on the other machine and relate to things I can't control:

        SOA Refresh Value is outside of the recommended range
        SOA Expire Value out of recommended range
        SOA NXDOMAIN Value too high

    I've searched, and as far as I can tell only the people who sold us the domain can change those values, and they won't.

    EDIT 2: I half-solved the issue. On the new machine postfix was installed but postfix-mysql wasn't, so Postfix couldn't connect to the database (rookie mistake). After fixing that I can now send mail to the outside without any issues, but I am still not able to receive mail from outside. The sender doesn't get any non-delivery warning, but the message never lands in the inbox and the log shows:

        Nov 13 15:11:57 m21-traducoes postfix/smtpd[5872]: NOQUEUE: reject: RCPT from relay4.ptmail.sapo.pt[212.55.154.24]: 451 4.3.5 <relay4.ptmail.sapo.pt[212.55.154.24]>: Client host rejected: Server configuration error; from=<[email protected]> to=<[email protected]> proto=SMTP helo=<sapo.pt>
        Nov 13 15:11:57 m21-traducoes postfix/smtpd[5872]: disconnect from relay4.ptmail.sapo.pt[212.55.154.24]
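    For anyone chasing the same pair of symptoms: the leftover inbound 451 4.3.5 "Server configuration error" typically means smtpd hit a lookup table it cannot open - and both smtpd_recipient_restrictions and smtpd_relay_restrictions above reference check_client_access hash:/var/lib/pop-before-smtp/hosts, which won't exist if the pop-before-smtp setup didn't make it onto the new machine. A hedged checklist:

        # the package that provides the mysql: map type (the half already fixed)
        sudo apt-get install postfix-mysql

        # test one MySQL map by hand (use mysql:, not proxy:mysql:, for manual tests)
        postmap -q m21-traducoes.com.pt mysql:/etc/postfix/mysql-virtual_domains.cf

        # does the hash map referenced by check_client_access actually exist?
        ls -l /var/lib/pop-before-smtp/hosts.db

        sudo service postfix reload
        tail -f /var/log/mail.log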


  • Bind9 on Debian not responding

    - by Marc
    I'm trying to set up a web server with BIND9 and Apache2 on Debian 6. I am trying to learn to do it manually, so I have no control panels, just the command line. I have a domain name, let's call it www.example.com, and I want virtual hosts set up so that I can have multiple websites with different names on my server. I have ns1.example.com and ns2.example.com registered at my server's IP (123.456.789.12). Below is my BIND9 named.conf.options:

        options {
            directory "/var/cache/bind";

            // If there is a firewall between you and nameservers you want
            // to talk to, you may need to fix the firewall to allow multiple
            // ports to talk.  See http://www.kb.cert.org/vuls/id/800113

            // If your ISP provided one or more IP addresses for stable
            // nameservers, you probably want to use them as forwarders.
            // Uncomment the following block, and insert the addresses replacing
            // the all-0's placeholder.

            // forwarders {
            //     0.0.0.0;
            // };

            auth-nxdomain no;    # conform to RFC1035
            listen-on-v6 { any; };
        };

    This is the default; I'm not sure if I was supposed to edit it, so I didn't. Here is my named.conf.default-zones:

        // prime the server with knowledge of the root servers
        zone "." {
            type hint;
            file "/etc/bind/db.root";
        };

        // be authoritative for the localhost forward and reverse zones, and for
        // broadcast zones as per RFC 1912

        zone "localhost" {
            type master;
            file "/etc/bind/db.local";
        };

        zone "127.in-addr.arpa" {
            type master;
            file "/etc/bind/db.127";
        };

        zone "0.in-addr.arpa" {
            type master;
            file "/etc/bind/db.0";
        };

        zone "255.in-addr.arpa" {
            type master;
            file "/etc/bind/db.255";
        };

        zone "example.com.com" {
            type master;
            file "etc/bind/example.com.db";
        };

    named.conf.local is an empty file with a comment saying to do local configuration there. example.com.db looks like this:

        ; BIND data file for mywebsite.com
        ;
        $ORIGIN example.com.
        $TTL    604800
        @       IN      SOA     ns1.example.com. [email protected]. (
                                2009120101      ; Serial
                                604800          ; Refresh
                                86400           ; Retry
                                2419200         ; Expire
                                604800 )        ; Negative Cache TTL
        ;
                IN      NS      ns1.example.com.
                IN      NS      ns2.example.com.
                IN      MX      10 mail.example.com.
        localhost       IN      A       127.0.0.1
        example.com.    IN      A       123.456.789.12
        ns1             IN      A       123.456.789.12
        ns2             IN      A       123.456.789.12
        www             IN      A       123.456.789.12
        ftp             IN      A       123.456.789.12
        mail            IN      A       123.456.789.12
        boards          IN      CNAME   www

    These are all settings I've found in various tutorials. Now when I go to intoDNS I get:

        You should already know that your NS records at your nameservers are missing, so here it is again:
        ns1.example.com
        ns2.example.com

    Can someone help me? I'm not sure what I'm doing wrong.
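    Two things in the config above look like likely culprits (worth verifying, since intoDNS only reports the symptom): the zone is declared as "example.com.com" rather than "example.com", and the zone file path is missing its leading slash. A corrected stanza would be:

        zone "example.com" {
            type master;
            file "/etc/bind/example.com.db";
        };

    Running named-checkconf and named-checkzone example.com /etc/bind/example.com.db catches both kinds of mistake before reloading BIND.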


  • duplicate cache pages: Varnish

    - by Sukhjinder Singh
    Recently we configured Varnish on our server. It was set up successfully, but we noticed that if we open any page in multiple browsers, Varnish sends the request to Apache whether the page is cached or not, and refreshing twice in each browser creates duplicate copies of the same page. What should happen: once a page is cached by Varnish, subsequent requests should be served from Varnish itself, whether we open the same page in another browser or from a different IP address. Following is my default.vcl file:

        backend default {
            .host = "127.0.0.1";
            .port = "80";
        }

        sub vcl_recv {
            if (req.url ~ "^/search/.*$") {
            } else {
                set req.url = regsub(req.url, "\?.*", "");
            }
            if (req.restarts == 0) {
                if (req.http.x-forwarded-for) {
                    set req.http.X-Forwarded-For = req.http.X-Forwarded-For + ", " + client.ip;
                } else {
                    set req.http.X-Forwarded-For = client.ip;
                }
            }
            if (!req.backend.healthy) {
                unset req.http.Cookie;
            }
            set req.grace = 6h;
            if (req.url ~ "^/status\.php$" ||
                req.url ~ "^/update\.php$" ||
                req.url ~ "^/admin$" ||
                req.url ~ "^/admin/.*$" ||
                req.url ~ "^/flag/.*$" ||
                req.url ~ "^.*/ajax/.*$" ||
                req.url ~ "^.*/ahah/.*$") {
                return (pass);
            }
            if (req.url ~ "(?i)\.(pdf|asc|dat|txt|doc|xls|ppt|tgz|csv|png|gif|jpeg|jpg|ico|swf|css|js)(\?.*)?$") {
                unset req.http.Cookie;
            }
            if (req.http.Cookie) {
                set req.http.Cookie = ";" + req.http.Cookie;
                set req.http.Cookie = regsuball(req.http.Cookie, "; +", ";");
                set req.http.Cookie = regsuball(req.http.Cookie, ";(SESS[a-z0-9]+|SSESS[a-z0-9]+|NO_CACHE)=", "; \1=");
                set req.http.Cookie = regsuball(req.http.Cookie, ";[^ ][^;]*", "");
                set req.http.Cookie = regsuball(req.http.Cookie, "^[; ]+|[; ]+$", "");
                if (req.http.Cookie == "") {
                    unset req.http.Cookie;
                } else {
                    return (pass);
                }
            }
            if (req.request != "GET" && req.request != "HEAD" &&
                req.request != "PUT" && req.request != "POST" &&
                req.request != "TRACE" && req.request != "OPTIONS" &&
                req.request != "DELETE") {
                /* Non-RFC2616 or CONNECT which is weird. */
                return (pipe);
            }
            if (req.request != "GET" && req.request != "HEAD") {
                return (pass);
            }
            if (req.http.Accept-Encoding) {
                if (req.url ~ "\.(jpg|png|gif|gz|tgz|bz2|tbz|mp3|ogg)$") {
                    # No point in compressing these
                    remove req.http.Accept-Encoding;
                } else if (req.http.Accept-Encoding ~ "gzip") {
                    set req.http.Accept-Encoding = "gzip";
                } else if (req.http.Accept-Encoding ~ "deflate") {
                    set req.http.Accept-Encoding = "deflate";
                } else {
                    # unknown algorithm
                    remove req.http.Accept-Encoding;
                }
            }
            return (lookup);
        }

        sub vcl_deliver {
            if (obj.hits > 0) {
                set resp.http.X-Varnish-Cache = "HIT";
            } else {
                set resp.http.X-Varnish-Cache = "MISS";
            }
        }

        sub vcl_fetch {
            if (beresp.status == 404 || beresp.status == 301 || beresp.status == 500) {
                set beresp.ttl = 10m;
            }
            if (req.url ~ "(?i)\.(pdf|asc|dat|txt|doc|xls|ppt|tgz|csv|png|gif|jpeg|jpg|ico|swf|css|js)(\?.*)?$") {
                unset beresp.http.set-cookie;
            }
            set beresp.grace = 6h;
        }

        sub vcl_hash {
            hash_data(req.url);
            if (req.http.host) {
                hash_data(req.http.host);
            } else {
                hash_data(server.ip);
            }
            return (hash);
        }

        sub vcl_pipe {
            set req.http.connection = "close";
        }

        sub vcl_hit {
            if (req.request == "PURGE") {
                ban_url(req.url);
                error 200 "Purged";
            }
            if (!obj.ttl > 0s) {
                return (pass);
            }
        }

        sub vcl_miss {
            if (req.request == "PURGE") {
                error 200 "Not in cache";
            }
        }
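    Two backend responses are worth ruling out first, since either one makes Varnish fetch (or store) a separate object per browser even with the vcl_hash above: a Set-Cookie header on the page, and a Vary header that includes User-Agent. A quick check (Varnish 3-era tooling to match this VCL; /some/page is a placeholder):

        # watch cache decisions for one page while refreshing in two browsers
        varnishlog -m 'RxURL:^/some/page$'

        # see whether the backend marks the page uncacheable or varies it
        curl -sI http://127.0.0.1/some/page | egrep -i 'set-cookie|vary|cache-control|age'

    If Vary: User-Agent shows up, normalizing or unsetting req.http.User-Agent in vcl_recv usually restores shared hits.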


  • MySQL reserves too much RAM

    - by Buddy
    I have a cheap VPS with 128 MB RAM and 256 MB burst. MySQL starts and reserves about 110 MB but uses no more than 20 MB of it. My VPS control panel shows that I'm using 127 MB (I'm also running nginx and Sphinx). I know it shows reserved RAM, but when I go over 128 MB my VPS reboots automatically every 4 hours. So I want to force MySQL to reserve less RAM. How can I do that? I did some tweaks in my.cnf, but they didn't help much.

    top output:

          PID USER      PR  NI  VIRT  RES  SHR S %CPU %MEM    TIME+  COMMAND
            1 root      15   0  2156  668  572 S  0.0  0.3   0:00.03 init
        11311 root      15   0 11212  356  228 S  0.0  0.1   0:00.00 vzctl
        11312 root      18   0  3712 1484 1248 S  0.0  0.6   0:00.01 bash
        11347 root      18   0  2284  916  732 R  0.0  0.3   0:00.00 top
        13978 root      17  -4  2248  552  344 S  0.0  0.2   0:00.00 udevd
        14262 root      15   0  1812  564  472 S  0.0  0.2   0:00.03 syslogd
        14293 sphinx    15   0 11816 1172  672 S  0.0  0.4   0:00.07 searchd
        14305 root      25   0  7192 1036  636 S  0.0  0.4   0:00.00 sshd
        14321 root      25   0  2832  836  668 S  0.0  0.3   0:00.00 xinetd
        15389 root      18   0  3708 1300 1132 S  0.0  0.5   0:00.00 mysqld_safe
        15441 mysql     15   0  113m  16m 4440 S  0.0  6.4   0:00.15 mysqld
        15489 root      21   0 13056 1456  340 S  0.0  0.6   0:00.00 nginx
        15490 nginx     18   0 13328 2388  992 S  0.0  0.9   0:00.06 nginx
        15507 nginx     25   0 19520 5888 4244 S  0.0  2.2   0:00.00 php-cgi
        15508 nginx     18   0 19636 4876 2748 S  0.0  1.9   0:00.12 php-cgi
        15509 nginx     15   0 19668 4872 2716 S  0.0  1.9   0:00.11 php-cgi
        15518 root      18   0  4492 1116  568 S  0.0  0.4   0:00.01 crond

    MySQL tuner:

        >>  MySQLTuner 1.0.1 - Major Hayden <[email protected]>
        >>  Bug reports, feature requests, and downloads at http://mysqltuner.com/
        >>  Run with '--help' for additional options and output filtering
        Please enter your MySQL administrative login: root
        Please enter your MySQL administrative password:

        -------- General Statistics --------------------------------------------------
        [--] Skipped version check for MySQLTuner script
        [OK] Currently running supported MySQL version 5.0.77
        [OK] Operating on 32-bit architecture with less than 2GB RAM

        -------- Storage Engine Statistics -------------------------------------------
        [--] Status: -Archive -BDB -Federated +InnoDB -ISAM -NDBCluster
        [--] Data in InnoDB tables: 1M (Tables: 1)
        [OK] Total fragmented tables: 0

        -------- Performance Metrics -------------------------------------------------
        [--] Up for: 38m 43s (37 q [0.016 qps], 20 conn, TX: 4M, RX: 3K)
        [--] Reads / Writes: 100% / 0%
        [--] Total buffers: 28.1M global + 832.0K per thread (100 max threads)
        [OK] Maximum possible memory usage: 109.4M (42% of installed RAM)
        [OK] Slow queries: 0% (0/37)
        [OK] Highest usage of available connections: 1% (1/100)
        [OK] Key buffer size / total MyISAM indexes: 128.0K/64.0K
        [OK] Query cache efficiency: 42.1% (8 cached / 19 selects)
        [OK] Query cache prunes per day: 0
        [!!] Temporary tables created on disk: 27% (3 on disk / 11 total)
        [!!] Thread cache is disabled
        [OK] Table cache hit rate: 57% (8 open / 14 opened)
        [OK] Open file limit used: 1% (12/1K)
        [OK] Table locks acquired immediately: 100% (22 immediate / 22 locks)
        [!!] Connections aborted: 10%
        [OK] InnoDB data size / buffer pool: 1.5M/8.0M

        -------- Recommendations -----------------------------------------------------
        General recommendations:
            MySQL started within last 24 hours - recommendations may be inaccurate
            Enable the slow query log to troubleshoot bad queries
            When making adjustments, make tmp_table_size/max_heap_table_size equal
            Reduce your SELECT DISTINCT queries without LIMIT clauses
            Set thread_cache_size to 4 as a starting value
            Your applications are not closing MySQL connections properly
        Variables to adjust:
            tmp_table_size (> 32M)
            max_heap_table_size (> 16M)
            thread_cache_size (start at 4)

    I think if I do what MySQLTuner says, MySQL will use even more RAM.
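    A sketch of a low-memory my.cnf for a box like this (values are assumptions aimed at shrinking the tuner's "maximum possible memory usage" figure, which is roughly global buffers + max_connections x per-thread buffers; not tuned for the workload):

        [mysqld]
        # the tuner shows only 1 of 100 connections ever used, and the
        # per-thread buffers below multiply by this number
        max_connections         = 10

        # global buffers, sized to the tiny data set reported
        # (64K of MyISAM indexes, 1.5M of InnoDB data)
        key_buffer_size         = 1M
        query_cache_size        = 4M
        innodb_buffer_pool_size = 4M

        # per-connection buffers
        sort_buffer_size        = 256K
        read_buffer_size        = 128K
        read_rnd_buffer_size    = 128K
        thread_stack            = 128K

    With numbers like these the reserved footprint should land well under the 128 MB guaranteed RAM, at the cost of throughput you likely don't need at 0.016 qps.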


  • Got VPN L2L connected between a site & HQ but no traffic, using ASA5505 on both ends

    - by vinlata
    Hi, could anyone see what I did wrong here? This is the configuration of site1, connecting to HQ over a site-to-site (L2L) VPN between two ASA 5505s. I can get connected, but it seems no traffic is going (allowed) between them. Could it be a NAT issue? Any help would be much appreciated. Thanks.

        interface Vlan1
         nameif inside
         security-level 100
         ip address 172.30.205.1 255.255.255.0
        !
        interface Vlan2
         nameif outside
         security-level 0
         ip address pppoe setroute
        !
        interface Ethernet0/0
         switchport access vlan 2
        !
        interface Ethernet0/1
        !
        interface Ethernet0/2
         shutdown
        !
        interface Ethernet0/3
         shutdown
        !
        interface Ethernet0/4
         shutdown
        !
        interface Ethernet0/5
         shutdown
        !
        interface Ethernet0/6
         shutdown
        !
        interface Ethernet0/7
         shutdown
        !
        passwd .dIuXDIYzD6RSHz7 encrypted
        ftp mode passive
        dns server-group DefaultDNS
         domain-name errg.net
        object-group network HQ
         network-object 172.22.0.0 255.255.0.0
         network-object 172.22.0.0 255.255.128.0
         network-object 172.22.0.0 255.255.255.128
         network-object 172.22.1.0 255.255.255.128
         network-object 172.22.1.0 255.255.255.0
        access-list inside_access_in extended permit ip any any
        access-list outside_access_in extended permit icmp any any echo-reply
        access-list outside_20_cryptomap extended permit ip 172.30.205.0 255.255.255.0 object-group HQ
        access-list inside_nat0_outbound extended permit ip 172.30.205.0 255.255.255.0 object-group HQ
        access-list policy-nat extended permit ip 172.30.205.0 255.255.255.0 172.22.0.0 255.255.0.0
        pager lines 24
        logging asdm informational
        mtu inside 1500
        mtu outside 1500
        icmp unreachable rate-limit 1 burst-size 1
        no asdm history enable
        arp timeout 14400
        nat-control
        global (outside) 1 interface
        nat (inside) 0 access-list inside_nat0_outbound
        nat (inside) 1 0.0.0.0 0.0.0.0
        static (inside,outside) 172.30.205.0 access-list policy-nat
        access-group inside_access_in in interface inside
        access-group outside_access_in in interface outside
        timeout xlate 3:00:00
        timeout conn 1:00:00 half-closed 0:10:00 udp 0:02:00 icmp 0:00:02
        timeout sunrpc 0:10:00 h323 0:05:00 h225 1:00:00 mgcp 0:05:00 mgcp-pat 0:05:00
        timeout sip 0:30:00 sip_media 0:02:00 sip-invite 0:03:00 sip-disconnect 0:02:00
        timeout uauth 0:05:00 absolute
        username errgadmin password Os98gTdF8BZ0X2Px encrypted privilege 15
        http server enable
        http 64.42.2.224 255.255.255.240 outside
        http 172.22.0.0 255.255.0.0 outside
        no snmp-server location
        no snmp-server contact
        snmp-server enable traps snmp authentication linkup linkdown coldstart
        crypto ipsec transform-set ESP-3DES-SHA esp-3des esp-sha-hmac
        crypto map outside_map 190 match address outside_20_cryptomap
        crypto map outside_map 190 set pfs
        crypto map outside_map 190 set peer 66.7.249.109
        crypto map outside_map 190 set transform-set ESP-3DES-SHA
        crypto map outside_map 190 set phase1-mode aggressive
        crypto map outside_map interface outside
        crypto isakmp enable outside
        crypto isakmp policy 30
         authentication pre-share
         encryption 3des
         hash sha
         group 2
         lifetime 86400
        crypto isakmp policy 65535
         authentication pre-share
         encryption 3des
         hash sha
         group 2
         lifetime 86400
        crypto isakmp nat-traversal 190
        crypto isakmp ipsec-over-tcp port 10000
        tunnel-group 66.7.249.109 type ipsec-l2l
        tunnel-group 66.7.249.109 ipsec-attributes
         pre-shared-key *
        telnet timeout 5
        ssh 172.30.205.0 255.255.255.0 inside
        ssh 172.22.0.0 255.255.0.0 outside
        ssh 64.42.2.224 255.255.255.240 outside
        ssh 172.25.0.0 255.255.128.0 outside
        ssh timeout 5
        console timeout 0
        management-access inside
        vpdn group PPPoEx request dialout pppoe
        vpdn group PPPoEx localname [email protected]
        vpdn group PPPoEx ppp authentication pap
        vpdn username [email protected] password *********
        dhcpd address 172.30.205.100-172.30.205.131 inside
        dhcpd dns 172.22.0.133 68.94.156.1 interface inside
        dhcpd wins 172.22.0.133 interface inside
        dhcpd domain errg.net interface inside
        dhcpd enable inside
        !
        class-map inspection_default
         match default-inspection-traffic
        !
        policy-map type inspect dns preset_dns_map
         parameters
          message-length maximum 512
        policy-map global_policy
         class inspection_default
          inspect dns preset_dns_map
          inspect ftp
          inspect h323 h225
          inspect h323 ras
          inspect netbios
          inspect rsh
          inspect rtsp
          inspect skinny
          inspect esmtp
          inspect sqlnet
          inspect sunrpc
          inspect tftp
          inspect sip
          inspect xdmcp
        !
        end
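    Two standard checks usually isolate "tunnel up, no traffic" on an ASA (stock ASA CLI commands; the inside host and HQ address below are placeholders):

        ! do the SA counters move? look at #pkts encaps/decaps per direction
        show crypto ipsec sa peer 66.7.249.109

        ! trace a simulated packet from inside toward HQ and see which
        ! NAT/crypto rule catches it, or where it is dropped
        packet-tracer input inside icmp 172.30.205.100 8 0 172.22.0.10 detailed

    If encaps counts rise but decaps stay at zero, the traffic is dying on the HQ side (often a missing mirror-image crypto ACL or NAT exemption there). Note too that this config carries both a nat 0 exemption and a policy static for the same inside-to-HQ traffic, which is worth untangling.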


  • website connection reset on first load

    - by Tar
    I'm using nginx with php-cgi. Lately a problem has arisen: if you don't view my site for a while, say 3-4 minutes, and then open it again, the first request you send returns "connection reset by peer" in the browser. If you refresh, operation is normal for all subsequent requests. This happens every time, it isn't just an isolated incident, and it happens to everyone using my site. I've tried restarting nginx and php-cgi, but to no avail. Does anyone know what the problem could be? I can provide whatever information is necessary. It's worth noting that there's nothing in the error log besides a message about the client closing the connection early.

    nginx.conf:

        user nobody;
        worker_processes 4;

        error_log /var/log/nginx/error.log;
        pid /var/run/nginx.pid;

        events {
            worker_connections 2048;
        }

        http {
            include /etc/nginx/mime.types;
            error_page 404 /404.html;
            error_page 403 /403.html;
            error_page 444 /444.html;
            error_page 502 /502.html;
            default_type application/octet-stream;
            log_format main '$remote_addr - $remote_user [$time_local] "$request" '
                            '$status $body_bytes_sent "$http_referer" '
                            '"$http_user_agent" "$http_x_forwarded_for"';
            access_log /var/log/nginx/access.log main;
            large_client_header_buffers 8 8k;
            sendfile on;
            tcp_nopush on;
            tcp_nodelay on;
            keepalive_timeout 30;
            server_tokens off;
            gzip on;
            gzip_proxied any;
            gzip_comp_level 6;
            gzip_buffers 64 8k;
            gzip_min_length 1024;
            gzip_http_version 1.1;
            gzip_types text/plain text/css application/json application/x-javascript text/xml application/xml application/xml+rss text/javascript;
            include /etc/nginx/conf.d/*.conf;
        }

    default.conf:

        server {
            listen 80;
            server_name domain.com;
            error_log /var/log/nginx/error.log debug;
            access_log /var/log/nginx/access.log;

            location / {
                if ($request_method !~ ^(GET|HEAD|POST)$ ) {
                    return 444;
                }
                if ($http_user_agent ~* Havij|hvj|acunetix|wget|HTtrack) {
                    return 403;
                }
                root /home/admin06/public_html;
                autoindex off;
                index index.php;
            }

            # Images and static content is treated different
            location ~* ^.+.(jpg|jpeg|gif|css|png|js|ico|xml)$ {
                access_log off;
                expires 30d;
                root /home/admin06/public_html;
            }

            location /nginx_status {
                stub_status on;
                access_log off;
                deny all;
            }

            location ~ \.php$ {
                fastcgi_split_path_info ^(.+\.php)(/.*)$;
                #try_files $uri =404;
                fastcgi_pass backend;
                fastcgi_index index.php;
                fastcgi_param SCRIPT_FILENAME /home/site/public_html$fastcgi_script_name;
                include fastcgi_params;
                fastcgi_param QUERY_STRING $query_string;
                fastcgi_param REQUEST_METHOD $request_method;
                fastcgi_param CONTENT_TYPE $content_type;
                fastcgi_param CONTENT_LENGTH $content_length;
                fastcgi_intercept_errors on;
                fastcgi_ignore_client_abort off;
                fastcgi_connect_timeout 60;
                fastcgi_send_timeout 60;
                fastcgi_read_timeout 60;
                fastcgi_buffer_size 128k;
                fastcgi_buffers 4 256k;
                fastcgi_busy_buffers_size 256k;
                fastcgi_temp_file_write_size 256k;
            }

            ## Disable viewing .htaccess & .htpassword
            location ~ /\.ht {
                deny all;
            }
            location ~ error_log {
                deny all;
            }
            location ~ access_log {
                deny all;
            }
            location ~ \.cgi {
                deny all;
            }
            location ~ \.db {
                deny all;
            }
        }
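    Given the 3-4 minute idle window, one likely direction (an assumption - the upstream block isn't shown above): the php-cgi pool recycles or drops its listener while idle, so the first visitor after a quiet spell hits a dead socket and eats the reset. A sketch of an upstream with fast failover and a spare pool (names and ports are hypothetical):

        # /etc/nginx/conf.d/upstream.conf -- sketch only
        upstream backend {
            server 127.0.0.1:9000 max_fails=1 fail_timeout=5s;
            server 127.0.0.1:9001 backup;   # hypothetical spare php-cgi pool
        }

    It's also worth running php-cgi under a supervisor that respawns children (or switching to php-fpm), and checking PHP_FCGI_MAX_REQUESTS against the traffic pattern: if all children hit the limit and exit together during a lull, the next request lands on nothing.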


  • DNS server BIND is not working [closed]

    - by user1742080
    I just installed BIND on RHEL 6 and pointed a domain to that server, but when I ping the domain it returns error 1214. Here is my named.conf:

        //
        // named.conf
        //
        // Provided by Red Hat bind package to configure the ISC BIND named(8) DNS
        // server as a caching only nameserver (as a localhost DNS resolver only).
        //
        // See /usr/share/doc/bind*/sample/ for example named configuration files.
        //

        options {
            listen-on port 53 { any; };
            listen-on-v6 port 53 { ::1; };
            directory       "/var/named";
            dump-file       "/var/named/data/cache_dump.db";
            statistics-file "/var/named/data/named_stats.txt";
            memstatistics-file "/var/named/data/named_mem_stats.txt";
            allow-query     { any; };
            recursion yes;

            dnssec-enable yes;
            dnssec-validation yes;
            dnssec-lookaside auto;

            /* Path to ISC DLV key */
            bindkeys-file "/etc/named.iscdlv.key";

            managed-keys-directory "/var/named/dynamic";
        };

        logging {
            channel default_debug {
                file "data/named.run";
                severity dynamic;
            };
        };

        zone "." IN {
            type hint;
            file "named.ca";
        };

        include "/etc/named.rfc1912.zones";
        include "/etc/named.root.key";

        zone "mydomain.com"{
            type master;
            file "/var/named/data/named.mydomain.com";
            allow-update { none; };
        };

    And the content of /var/named/data/named.mydomain.com:

        $TTL 38400

        mydomain.com.   IN  SOA  ns1.mydomain.com. milad.yahoo.com. (
                            2012101201  ; serial number YYMMDDNN
                            28800       ; Refresh
                            7200        ; Retry
                            864000      ; Expire
                            38400       ; Min TTL
                        )

        mydomain.com.       IN  A   1.2.3.4
        www                 IN  A   1.2.3.4
        ns1.mydomain.com.   IN  A   1.2.3.4
        ns2.mydomain.com.   IN  A   1.2.3.4
        mydomain.com.       IN  NS  ns1.mydomain.com.
        mydomain.com.       IN  NS  ns2.mydomain.com.

    And I'm sure the named service is running:

        [root@server ~]# service named status
        version: 9.8.2rc1-RedHat-9.8.2-0.10.rc1.el6_3.3
        CPUs found: 8
        worker threads: 8
        number of zones: 20
        debug level: 0
        xfers running: 0
        xfers deferred: 0
        soa queries in progress: 0
        query logging is OFF
        recursive clients: 0/0/1000
        tcp clients: 0/100
        server is up and running
        named (pid 26299) is running...
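    Worth checking before touching the zone itself (hedged: the usual culprits on a fresh RHEL 6 BIND install are the firewall and querying the box through some other resolver rather than directly): whether named answers when asked directly, and whether port 53 is open in iptables.

        # ask the server directly, bypassing whatever resolver ping uses
        dig @127.0.0.1 mydomain.com A      # on the server itself
        dig @1.2.3.4 mydomain.com A        # from outside (1.2.3.4 = the server IP)

        # open DNS in the RHEL 6 firewall (a sketch; adapt to your ruleset)
        iptables -I INPUT -p udp --dport 53 -j ACCEPT
        iptables -I INPUT -p tcp --dport 53 -j ACCEPT
        service iptables save

    If even dig @127.0.0.1 fails, named-checkzone mydomain.com /var/named/data/named.mydomain.com will say whether the zone loaded at all.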


  • Windows 7 disk errors after a few hours of runtime

    - by GFK
    I'm having trouble understanding what is going on with my work PC. Whenever I boot it, it runs fine for a while, then starts to randomly show disk errors. The displayed error often contains the message "not enough storage is available to process this command", although it can differ depending on the application that fails. This has been happening for weeks now and is getting worse. This is what troubles me:

    - It never seems to impact critical parts of the system (no BSOD, no freeze).
    - Only some applications seem impacted, refusing to function correctly after a while: Outlook 2010 cannot download RSS feeds anymore, Firefox 6 and IE9 cannot download anything bigger than 3 MB without failing, Windows Update fails, all MSI installers fail, Visual Studio 2010 starts failing in weird ways...
    - It only happens after a while of use (typically 3 hours, though installing a program or compiling several times seems to shorten that).
    - Rebooting solves it (temporarily).

    The system: the OS is Windows 7 Pro Spanish SP1, 32-bit, on an HP Compaq 6000 Pro with 4 GB memory (only 3.4 GB usable since the system is 32-bit) and one 500 GB hard drive. Installed applications include Visual Studio 2010, SQL Server 2008 R2, VMware Workstation 7, Microsoft Security Essentials, and Office 2010. Shutting down all related services and processes doesn't seem to change anything.

    The diagnostics I've run so far:

    - Hard drive: 465 GB, 165 GB free.
    - Process Explorer: physical and virtual memory seem OK (pagefile is 5.3 GB, physical memory usage 70%, system commit 39%).
    - Windows Memory Diagnostic tool: OK.
    - CHKDSK returned:

            488282111 KB total disk space.
            281668248 KB in 265779 files.
               150188 KB in 62949 indexes.
                    0 KB in bad sectors.
               571755 KB in use by the system.
                65536 KB occupied by the log file.
            205891920 KB available on disk.

      For non-Spanish speakers: that means all OK.
    - SMART diagnostic tools (DiskCheckup) report all values normal, and temperatures are in the normal range (HWiNFO).
    - The event viewer doesn't seem to contain any significant message.
    - Ran CCleaner 3, without any noticeable effect.

    I was thinking about some file-number limit (between Visual Studio projects and other applications there are around 300,000 files on the hard drive), but I couldn't find any. It's possible something related to the temporary folders is involved (it's the only explanation I have for why applications fail but Windows doesn't), but I cannot confirm that. The only thing I cannot find out is whether chkdsk reporting 65 MB for the log is normal; it seems that since Vista it always reports this. Any other cleaning/diagnostic tool you might know of?

    Edit: I ran several other tools since I first published the question:

    - Seagate SeaTools (the HD manufacturer's analysis tool): complete test run OK.
    - Intel Rapid 10.1 (the HD controller manufacturer's troubleshooting tool): the HD is OK.
    - Microsoft Desktop Heap Monitor:

            Desktop Heap Information Monitor Tool (Version 8.1.2925.0)
            Copyright (c) Microsoft Corporation. All rights reserved.

            Session ID:    1 Total Desktop: ( 46464 KB - 11 desktops)

            WinStation\Desktop                        Heap Size(KB)  Used Rate(%)
            WinSta0\Winlogon (s1)                               128           3.6
            WinSta0\Disconnect (s1)                              64           3.8
            WinSta0\Default (s1)                              20480           3.0
            msswindowstation\mssrestricteddesk (s0)            1024           0.2
            __X78B95_89_IW__A8D9S1_42_ID (s0)                  1024           0.2
            Service-0x0-3e5$\Default (s0)                      1024           0.6
            Service-0x0-3e4$\Default (s0)                      1024           0.3
            Service-0x0-3e7$\Default (s0)                      1024           2.1
            WinSta0\Winlogon (s0)                               128           1.9
            WinSta0\Disconnect (s0)                              64           3.8
            WinSta0\Default (s0)                              20480           0.0

      All OK, desktop heap usage < 5%.

    Edit 2: I tried totally resetting my account by creating a new one, logging in under the new one, deleting the first one (local rights and files), then logging back in with the deleted (domain) account. No luck. Also, I found the error is most often "not enough storage is available to process this command". Searching the internet, I found an old troubleshooting tip (setting a registry key to raise the IRP stack limit, whatever that is) which did not change anything.
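    For reference, the registry tweak mentioned above usually looks like this (a sketch; IRPStackSize belongs to the Server service and targets the "not enough storage" wording specifically, which is why it gets suggested so often, even though it evidently didn't help here):

        Windows Registry Editor Version 5.00

        ; raise the I/O request packet stack size for LanmanServer
        ; (default is 15; 0x21 = 33 is the commonly suggested ceiling)
        [HKEY_LOCAL_MACHINE\SYSTEM\CurrentControlSet\Services\LanmanServer\Parameters]
        "IRPStackSize"=dword:00000021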


  • Distributed and/or Parallel SSIS processing

    - by Jeff
    Background: our company hosts SaaS DSS applications, where clients provide us data daily and/or weekly, which we process and merge into their existing database. During business hours, load on the servers is pretty minimal, as it's mostly users running simple pre-defined queries via the website or drill-through reports that mostly hit the SSAS OLAP cube.

    I manage the IT Operations team, and so far this has presented an interesting scaling issue for us. For our daily-refreshed clients, the server is only "busy" for about 4-6 hours at night; for our weekly-refresh clients, the server is only "busy" for maybe 8-10 hours per week! We've done our best to distribute the load with simple methods, spreading the daily clients evenly among the servers so we're not processing them back-to-back overnight. But long-term this scaling strategy creates two notable issues. First, it consumes a pretty immense amount of hardware that sits idle for large periods of time. Second, it takes significant production-support overhead to "schedule" the ETL so jobs don't overlap, and to move clients/schedules around when they outgrow the resources on a particular server or allocated time slot.

    As the title would imply, one option we've tried is running multiple SSIS packages in parallel, but in most cases this has yielded VERY inconsistent results. The most common failures are DTExec, SQL, and SSAS fighting for physical memory and throwing out-of-memory errors, and ETLs running 3, 4, 5x longer than expected. So from my practical experience thus far, it seems like running multiple ETL packages on the same hardware isn't a good idea, but I can't be the first person who doesn't want to scale multiple ETLs around manual scheduling and sequential processing. One option we've considered is virtualizing the servers, which obviously doesn't give you any additional resources, but moves the resource contention onto the hypervisor, which (from my experience) seems to manage simultaneous CPU/RAM/disk I/O a little more gracefully than letting DTExec, SQL, and SSAS battle it out within Windows.

    Question to the forum: are we missing something obvious here? Are there tools out there that can help manage running multiple SSIS packages on the same hardware? Would it be more "efficient" in terms of parallel execution if, instead of running DTExec, SQL, and SSAS on the same machine (with every machine running that configuration), we ran in sets of three machines, with SSIS on one, SQL on another, and SSAS on a third? Obviously that would only make sense if we could process more than the three ETLs we were able to process on the machines independently. Another option we've considered is completely re-architecting our SSIS package to have one "master" package for all clients that attempts to intelligently choose a server based on how "busy" it already is in terms of CPU/memory/disk utilization, but that would be a herculean effort, and it seems like we're trying to reinvent something you would think someone would sell (although I haven't had any luck finding it). So in summary: are we missing an obvious solution, and does anyone know of any tools (free or for purchase, doesn't matter) that facilitate running multiple SSIS ETL packages in parallel and on multiple servers? (What I would call a "queue & node based" system, but that's not an official term.)
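    One concrete knob on the memory-contention point above, for what it's worth (the value is an assumption; size it to your hardware): capping the SQL Server instance keeps the relational engine from starving DTExec and SSAS, since SSIS pipeline buffers and the SSAS cache live outside SQL's buffer pool; SSAS has its own Memory\TotalMemoryLimit setting worth capping in parallel.

        -- cap SQL Server so DTExec and SSAS keep headroom during ETL windows
        EXEC sys.sp_configure N'show advanced options', 1;
        RECONFIGURE;
        EXEC sys.sp_configure N'max server memory (MB)', 16384;  -- assumed value
        RECONFIGURE;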
    Ultimately, VMware's Distributed Resource Scheduler addresses this: you simply run a consistent number of clients per VM that you know will never conflict scheduling-wise, then leave it up to VMware to move the VMs around to balance out hardware usage. I'm definitely not against using VMware to do this, but since we're a 100% Microsoft app stack, it seems like -someone- out there would have solved this problem at the application layer instead of the hypervisor layer, by checking resource utilization at the OS, SQL, and SSAS levels. I'm open to ANY discussion on this, and remember, no suggestion is too crazy or radical! :-) Right now VMware is the only option we've found to get away from "manually" balancing our resources, so any suggestions that leave us on a pure Microsoft stack would be great. Thanks guys, Jeff


  • How do I connect my Windows XP laptop to the internet?

    - by rubysiddhi
    Hello fellow super users,

    The Past: I have an Acer TravelMate 2300 laptop running Windows XP. Six months ago I moved into a new apartment and got a new internet connection set up. After the connection was installed I reinstalled Windows XP, wiping the drive clean and losing all the original Acer software and drivers. Once XP was reinstalled I had to find all the drivers again to get the TravelMate connected to the internet. So, using my Vista laptop, which was connected fine, I went to the Acer TravelMate series drivers download page to download the necessary drivers, transferred them to the XP machine, and installed them as best I could (there were no easy instructions, so I just had to find all the executables and run them). I eventually got connected to the internet, but not exactly in the way I had hoped for.

    The Present: to be connected to the internet I need an Ethernet cord connecting my computer (via the Ethernet port) to my router. This is a problem, since it defeats the purpose of having a wireless LAN card in the laptop. One of the programs I downloaded from the Acer page was the Acer Wireless LAN Configuration Utility, which shows the network I am currently connected to and all the networks I could potentially connect to; it reminds me of XP's own Wireless Network Connection window, where you can see available wireless networks, refresh the list, and connect to one. I should mention that my ISP set up a security-enabled wireless network with WPA, which requires a network key. I guess my Vista computer has the key entered already; the problem is that I do not know what the key is. Now, obviously, you would say to just contact my ISP to get the key, and I will, but there is one extra weird issue: I am able to connect to another unsecured wireless network in the Wireless Network Connection utility, but only as long as my Ethernet cable is plugged in. So this is not really wireless, is it? And it indicates that even if I do get the network key from my ISP, I will only solve one of my two problems: I would still only get online while connected to the router via the Ethernet cable.

    The Main Questions: how do I enable my Acer IPN2220 wireless LAN card so that I can use the laptop from anywhere within my apartment? Or should I first get the network key from my ISP to access the security-enabled wireless network, and then deal with getting the IPN2220 working?

    Hard & Learned vs. Easy & Stupid: of course contacting the ISP would be easier - have them just come in here and do their thing. The problem with that is that they do not speak English (yeah, I'm in Poland), and it would be a hell of a time trying to understand what they are doing (and uncomfortable looking over their shoulder). Also, I want to learn how to do this task myself so that I can fix the problem if it ever happens again - you know, be more self-sufficient.

    I look forward to helpful replies. Thanks, Xaviour
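    One thing worth checking before calling the ISP (hedged: XP-era behaviour, and it may not apply to this particular card): third-party WLAN utilities like Acer's sometimes stop XP's own Wireless Zero Configuration service, and without it the built-in wireless UI never scans or associates.

        rem run from a Command Prompt on the XP laptop
        sc config wzcsvc start= auto
        net start wzcsvc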


  • I have finally traded my Blackberry in for a Droid!

    - by Bob Porter
    Over the years I have used a number of different types of phones: Windows Mobile, Blackberry, Nokia, and now Android. Until the Blackberry, which was my last phone (and I still have one issued by my office), I had never found a phone that "just worked", especially with email and messaging. The Blackberry did, and does, excel at those functions. My last personal phone was a Storm 1, Blackberry's first touch-screen phone; the Storm 2 was an improved version that fixed some screen-press detection issues from the first model and added Wifi.

    Over the last few years I have watched others acquire and fall in love with their 'Droids, including a number of iPhone users, which surprised me. Our office has until recently only supported Blackberry phones, adding iPhones within the last year or so. When I spoke with our internal telecom folks, they confirmed they were evaluating Android phones but felt they still were not secure enough out of the box for corporate use and SOX compliance. That being said, as a personal phone, the Droid rocks!

    I am impressed with its speed, the number of apps available, and the overall design. It is not as "flashy" as an iPhone, but it does everything that I care about and more. The model I bought is the Motorola Droid 2 Global from Verizon. It is currently running Android 2.2 for its OS, with 2.3 just around the corner. It has 8 gigs of internal flash memory and can handle up to a 32 gig SD card (I currently have two 8 gig cards, one for backups, and have ordered a 16 gig card!). Being a geek at heart, I "rooted" the phone, which means I gained superuser access to the OS - and that opens a number of doors for further modifications down the road. Also being a geek meant I had already set up a development environment and built and deployed the obligatory "Hello Droid" application.
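    For the curious, that first app amounts to very little code - something like this (a from-memory sketch against the Android 2.2-era SDK; the package name is invented, and the UI is built in code so no layout XML is needed):

        package com.example.hellodroid; // hypothetical package name

        import android.app.Activity;
        import android.os.Bundle;
        import android.widget.TextView;

        public class HelloDroid extends Activity {
            @Override
            public void onCreate(Bundle savedInstanceState) {
                super.onCreate(savedInstanceState);
                // simplest possible UI: a single TextView built in code
                TextView tv = new TextView(this);
                tv.setText("Hello, Droid!");
                setContentView(tv);
            }
        }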
    I will be writing about my development experiences with this new platform here often. To start off, I thought I would share my current application list to give you an idea of what I am using:

    Zedge: http://market.android.com/details?id=net.zedge.android
    XDA: http://market.android.com/details?id=com.quoord.tapatalkxda.activity
    WRAL.com: http://market.android.com/details?id=com.mylocaltv.wral
    Wireless Tether: http://market.android.com/details?id=android.tether
    Winamp: http://market.android.com/details?id=com.nullsoft.winamp
    Win7 Clock: http://market.android.com/details?id=com.androidapps.widget.toggles.win7
    Wifi Analyzer: http://market.android.com/details?id=com.farproc.wifi.analyzer
    WeatherBug: http://market.android.com/details?id=com.aws.android
    Weather Widget Forecast Addon: http://market.android.com/details?id=com.androidapps.weather.forecastaddon
    Weather & Toggle Widgets: http://market.android.com/details?id=com.androidapps.widget.weather2
    Vlingo: http://market.android.com/details?id=com.vlingo.client
    VirtualTENHO-G: http://market.android.com/details?id=jp.bustercurry.virtualtenho_g
    Twitter: http://market.android.com/details?id=com.twitter.android
    TweetDeck: http://market.android.com/details?id=com.thedeck.android.app
    Tricorder: http://market.android.com/details?id=org.hermit.tricorder
    Titanium Backup PRO: http://market.android.com/details?id=com.keramidas.TitaniumBackupPro
    Titanium Backup: http://market.android.com/details?id=com.keramidas.TitaniumBackup
    Terminal Emulator: http://market.android.com/details?id=jackpal.androidterm
    Talking Tom Free: http://market.android.com/details?id=com.outfit7.talkingtom
    Stock Blue: http://market.android.com/details?id=org.adw.theme.stockblue
    ST: Red Alert Free: http://market.android.com/details?id=com.oldplanets.redalertwallpaper
    ST: Red Alert: http://market.android.com/details?id=com.oldplanets.redalertwallpaperplus
    Solitaire: http://market.android.com/details?id=com.kmagic.solitaire
    Skype: http://market.android.com/details?id=com.skype.raider
    Silent Time Lite: http://market.android.com/details?id=com.QuiteHypnotic.SilentTime
    ShopSavvy: http://market.android.com/details?id=com.biggu.shopsavvy
    Shopper: http://market.android.com/details?id=com.google.android.apps.shopper
    Shiny clock: http://market.android.com/details?id=com.androidapps.clock.shiny
    ShareMyApps: http://market.android.com/details?id=com.mattlary.shareMyApps
    Sense Glass ADW Theme: http://market.android.com/details?id=com.dtanquary.senseglassadwtheme
    ROM Manager: http://market.android.com/details?id=com.koushikdutta.rommanager
    Roboform Bookmarklet Installer: http://market.android.com/details?id=roboformBookmarkletInstaller.android.com
    RealCalc: http://market.android.com/details?id=uk.co.nickfines.RealCalc
    Package Buddy: http://market.android.com/details?id=com.psyrus.packagebuddy
    Overstock: http://market.android.com/details?id=com.overstock
    OMGPOP Toggle: http://market.android.com/details?id=com.androidapps.widget.toggle.omgpop
    OI File Manager: http://market.android.com/details?id=org.openintents.filemanager
    nook: http://market.android.com/details?id=bn.ereader
    MyAtlas-Google Maps Navigation ext: http://market.android.com/details?id=com.adaptdroid.navbookfree3
    MSN Droid: http://market.android.com/details?id=msn.droid.im
    Matrix Live Wallpaper: http://market.android.com/details?id=com.jarodyv.livewallpaper.matrix
    LogMeIn: http://market.android.com/details?id=com.logmein.ignitionpro.android
    Liveshare: http://market.android.com/details?id=com.cooliris.app.liveshare
    Kobo: http://market.android.com/details?id=com.kobobooks.android
    Instant Heart Rate: http://market.android.com/details?id=si.modula.android.instantheartrate
    IMDb: http://market.android.com/details?id=com.imdb.mobile
    Home Plus Weather: http://market.android.com/details?id=com.androidapps.widget.skin.weather.homeplus
    Handcent SMS: http://market.android.com/details?id=com.handcent.nextsms
    H7C Clock: http://market.android.com/details?id=com.androidapps.widget.clock.skin.h7c
    GTasks: http://market.android.com/details?id=org.dayup.gtask
    GPS Status: http://market.android.com/details?id=com.eclipsim.gpsstatus2
    Google Voice: http://market.android.com/details?id=com.google.android.apps.googlevoice
    Google Sky Map: http://market.android.com/details?id=com.google.android.stardroid
    Google Reader: http://market.android.com/details?id=com.google.android.apps.reader
    GoMarks: http://market.android.com/details?id=com.androappsdev.gomarks
    Goggles: http://market.android.com/details?id=com.google.android.apps.unveil
    Glossy Black Weather: http://market.android.com/details?id=com.androidapps.widget.weather.skin.glossyblack
    Fox News: http://market.android.com/details?id=com.foxnews.android
    Foursquare: http://market.android.com/details?id=com.joelapenna.foursquared
    FBReader: http://market.android.com/details?id=org.geometerplus.zlibrary.ui.android
    Fandango: http://market.android.com/details?id=com.fandango
    Facebook: http://market.android.com/details?id=com.facebook.katana
    Extensive Notes Pro: http://market.android.com/details?id=com.flufflydelusions.app.extensive_notes_donate
    Expense Manager: http://market.android.com/details?id=com.expensemanager
    Espresso UI (LightShow w/ Slide): http://market.android.com/details?id=com.jaguirre.slide.lightshow
    Engadget: http://market.android.com/details?id=com.aol.mobile.engadget
    Earth: http://market.android.com/details?id=com.google.earth
    Drudge: http://market.android.com/details?id=com.iavian.dreport
    Dropbox: http://market.android.com/details?id=com.dropbox.android
    DroidForums: http://market.android.com/details?id=com.quoord.tapatalkdrodiforums.activity
    DroidArmor ADW: http://market.android.com/details?id=mobi.addesigns.droidarmorADW
    Droid Weather Icons: http://market.android.com/details?id=com.androidapps.widget.weather.skins.white
    Droid 2 Bootstrapper: http://market.android.com/details?id=com.koushikdutta.droid2.bootstrap
    doubleTwist: http://market.android.com/details?id=com.doubleTwist.androidPlayer
    Documents To Go: http://market.android.com/details?id=com.dataviz.docstogo
    Digital Clock Widget: http://market.android.com/details?id=com.maize.digitalClock
    Desk Home: http://market.android.com/details?id=com.cowbellsoftware.deskdock
    Default Clock: http://market.android.com/details?id=com.androidapps.widget.clock.skins.defaultclock
    Daily Expense Manager: http://market.android.com/details?id=com.techahead.ExpenseManager
    ConnectBot: http://market.android.com/details?id=org.connectbot
    Colorized Weather Icons: http://market.android.com/details?id=com.androidapps.widget.weather.colorized
    Chrome to Phone: http://market.android.com/details?id=com.google.android.apps.chrometophone
    CardStar: http://market.android.com/details?id=com.cardstar.android
    Books: http://market.android.com/details?id=com.google.android.apps.books
    Black Ipad Toggle: http://market.android.com/details?id=com.androidapps.toggle.widget.skin.blackipad
    Black Glass ADW Theme: http://market.android.com/details?id=com.dtanquary.blackglassadwtheme
    Bing: http://market.android.com/details?id=com.microsoft.mobileexperiences.bing
    BeyondPod Unlock Key: http://market.android.com/details?id=mobi.beyondpod.unlockkey
    BeyondPod: http://market.android.com/details?id=mobi.beyondpod
    BeejiveIM: http://market.android.com/details?id=com.beejive.im
    Beautiful Widgets Animations Addon: http://market.android.com/details?id=com.levelup.bw.forecast
    Beautiful Widgets: http://market.android.com/details?id=com.levelup.beautifulwidgets
    Beautiful Live Weather: http://market.android.com/details?id=com.levelup.beautifullive
    BBC News: http://market.android.com/details?id=net.jimblackler.newswidget
    Barnacle Wifi Tether: http://market.android.com/details?id=net.szym.barnacle
    Barcode Scanner: http://market.android.com/details?id=com.google.zxing.client.android
    ASTRO SMB Module: http://market.android.com/details?id=com.metago.astro.smb
    ASTRO Pro: http://market.android.com/details?id=com.metago.astro.pro
    ASTRO Bluetooth Module: http://market.android.com/details?id=com.metago.astro.network.bluetooth
    ASTRO: http://market.android.com/details?id=com.metago.astro
    AppBrain App Market: http://market.android.com/details?id=com.appspot.swisscodemonkeys.apps
    App Drawer Icon Pack: http://market.android.com/details?id=com.adwtheme.appdrawericonpack
    androidVNC: http://market.android.com/details?id=android.androidVNC
    AndroidGuys: http://market.android.com/details?id=com.handmark.mpp.AndroidGuys
    Android System Info: http://market.android.com/details?id=com.electricsheep.asi
    AndFTP: http://market.android.com/details?id=lysesoft.andftp
    ADWTheme Red: http://market.android.com/details?id=adw.theme.red
    ADWLauncher EX: http://market.android.com/details?id=org.adwfreak.launcher
    ADW.Theme.One: http://market.android.com/details?id=org.adw.theme.one
    ADW.Faded theme: http://market.android.com/details?id=com.xrcore.adwtheme.faded
    ADW Gingerbread: http://market.android.com/details?id=me.robertburns.android.adwtheme.gingerbread
    Advanced Task Killer Free: http://market.android.com/details?id=com.rechild.advancedtaskkiller
    Adobe Reader: http://market.android.com/details?id=com.adobe.reader
    Adobe Flash Player 10.1: http://market.android.com/details?id=com.adobe.flashplayer
    Adobe AIR: http://market.android.com/details?id=com.adobe.air
    3G Auto OnOff: http://market.android.com/details?id=com.yuantuo

    --- Generated by ShareMyApps http://market.android.com/details?id=com.mattlary.shareMyApps

    Sent from my Droid

    Read the article

  • Conversation as User Assistance

    - by ultan o'broin
    Applications User Experience members (Erika Webb, Laurie Pattison, and I) attended the User Assistance Europe Conference in Stockholm, Sweden. We were impressed with the thought leadership and practical application of ideas in Anne Gentle's keynote address "Social Web Strategies for Documentation". After the conference, we spoke with Anne to explore the ideas further. Anne Gentle (left) with Applications User Experience Senior Director Laurie Pattison. In Anne's book called Conversation and Community: The Social Web for Documentation, she explains how user assistance is undergoing a seismic shift. The direction is away from the old print manuals and online help concept towards a web-based, user community-driven solution using social media tools. User experience professionals now have a vast range of such tools to start and nurture this "conversation": blogs, wikis, forums, social networking sites, microblogging systems, image and video sharing sites, virtual worlds, podcasts, instant messaging, mashups, and so on. That user communities are a rich source of user assistance is not a surprise, but the extent of available assistance is. For example, we know from the Consortium for Service Innovation that there has been an 'explosion' of user-generated content on the web. User-initiated community conversations provide as much as 30 times the number of official help desk solutions for consortium members! The growing reliance on user community solutions is clearly a user experience issue. Anne says that user assistance as conversation "means getting closer to users and helping them perform well. User-centered design has been touted as one of the most important ideas developed in the last 20 years of workplace writing. Now writers can take the idea of user-centered design a step further by starting conversations with users and enabling user assistance in interactions." Some of Anne's favorite examples of this paradigm shift from the world of traditional documentation to community conversation include: Writer Bob Bringhurst's blog about Adobe InDesign and InCopy products and Adobe's community help The Microsoft Development Network Community Center The former Sun (now Oracle) OpenDS wiki, NetBeans Ruby and other community approaches to engage diverse audiences using screencasts, wikis, and blogs. Cisco's customer support wiki, EMC's community, as well as Symantec and Intuit's approaches The efforts of Ubuntu, Mozilla, and the FLOSS community generally Adobe Writer Bob Bringhurst's Blog Oracle is not without a user community conversation too. Besides the community discussions and blogs around documentation offerings, we have the My Oracle Support Community forums, Oracle Technology Network (OTN) communities, wiki, blogs, and so on. We have the great work done by our user groups and customer councils. Employees like David Haimes reach out, and enthusiastic non-employee gurus like Chet Justice (OracleNerd), Floyd Teter and Eddie Awad provide great "how-to" information too. But what does this paradigm shift mean for existing technical writers as users turn away from the traditional printable PDF manual deliverables? We asked Anne after the conference. The writer role becomes one of conversation initiator or enabler. The role evolves, along with the process, as the users define their concept of user assistance and terms of engagement with the product instead of having it pre-determined. It is largely a case now of "inventing the job while you're doing it, instead of being hired for it," Anne said. 
There is less emphasis on formal titles. Anne mentions that her own title is "Content Stacker" at OpenStack; others use titles such as "Content Curator" or "Community Lead". However, the role remains one essentially about communications, "but of a new type--interacting with users, moderating, curating content, instead of sitting down to write a manual from start to finish." Clearly then, this role is open to more than professional technical writers. Product managers who write blogs, developers who moderate forums, support professionals who update wikis, rock star programmers with a penchant for YouTube are ideal. Anyone with the product knowledge, empathy for the user, and flair for relationships on the social web can join in. Some even perform these roles already but do not realize it. Anne feels the technical communicator space will move from hiring new community conversation professionals (who are already active in the space through blogging, tweets, wikis, and so on) to retraining some existing writers over time. Our own research reveals that the established proponents of community user assistance even set employee performance objectives for internal content curators about the amount of community content delivered by people outside the organization! To take advantage of the conversations on the web as user assistance, enterprises must first establish where on the spectrum their community lies. "What is the line between community willingness to contribute and the enterprise objectives?" Anne asked. "The relationship with users must be managed and also measured." Anne believes that the process can start with a "just do it" approach. Begin by reaching out to existing user groups, individual bloggers and tweeters, forum posters, early adopter program participants, conference attendees, customer advisory board members, and so on. Use analytical tools to measure the level of conversation about your products and services to show a return on investment (ROI), winning management support. Anne emphasized that success with the community model is dependent on lowering the technical and motivational barriers so that users can readily contribute to the conversation. Simple tools must be provided, and guidelines, if any, must be straightforward but not mandatory. The conversational approach is one where traditional style and branding guides do not necessarily apply. Tools and infrastructure help users to create content easily, to search and find the information online, read it, rate it, translate it, and participate further in the content's evolution. Recognizing contributors by using ratings on forums, giving out Twitter kudos, conference invitations, visits to headquarters, free products, preview releases, and so on, also encourages the adoption of the conversation model. The move to conversation as user assistance is not free, but there is a business ROI. The conversational model means that customer service is enhanced, as user experience moves from a functional to a valued, emotional level. Studies show a positive correlation between loyalty and financial performance (Consortium for Service Innovation, 2010), and as customer experience and loyalty become key differentiators, user experience professionals cannot afford to ignore the model's possibilities. The digital universe (measured at 1.2 million petabytes in 2010) is doubling every 12 to 18 months, and 70 percent of that universe consists of user-generated content (IDC, 2010). Conversation as user assistance cannot be ignored but must be embraced. 
It is a time to manage for abundance, not scarcity. Besides, the conversation approach certainly sounds more interesting, rewarding, and fun than the traditional model! I would like to thank Anne for her time and thoughts, and recommend that all user assistance professionals read her book. You can follow Anne on Twitter at: http://www.twitter.com/annegentle. Oracle's Acrolinx IQ deployment was used to author this article.

    Read the article

  • I’m sorry RPGs, it’s not you, it’s me: The birth of my game idea

    - by George Clingerman
    One of the things I’ve had to give up in order to have some development time at night is gaming. It’s something I refused to admit for years but I’ve just had to face the facts. I’m no longer a gamer. I just don’t have hours and hours of free time to pour into gaming and when I do have hours and hours of free time I want to pour them into game development. That doesn’t mean I don’t game at all! I play games pretty much every day. It just means I’ve moved more into the casual game realm. It’s all I have time for when juggling priorities in my life. That means that games like Gears of War 2 sit shrink wrapped on my shelf and although I popped Dragon Age into my Xbox 360 one time, I barely made it through the opening sequence and haven’t had time to sit down and play again. Instead I’m playing short games like Jamestown, Atom Zombie Smasher, Fortix or if I have time to jump in and play a few rounds maybe some Monday Night Combat or Team Fortress 2. These are games I can instantly get into and play for just a short period of time and then walk away. Breath of Death VII saved my life: Back in the day (way, way back in the day) I used to be a pretty big RPG fan. Not big by a lot of RPG gamers' standards (most of the RPGs RPG fans talk about I’ve never heard of) but I used to LOVE to play them on the NES, SNES and Genesis and considered that my genre. Final Fantasy, Shining in the Darkness, Bard’s Tale, Faxanadu, Shadowrun, Ultima, Dragon Warrior, Chrono Trigger, Phantasy Star, Shining Force and well the list could go on but those are the ones I remember off the top of my head. I loved playing RPGs and they were my games of choice. After my first son was born (this was just about 12 years ago), I tried to continue playing RPGs and purchased games like Baldur’s Gate I & II, Neverwinter Nights, Fable, then a few of the Final Fantasy’s then Kingdom Hearts. I kept buying these games and then only playing for about fifteen minutes and never getting back to them. I still loved RPGs but they just no longer fit into my life (I still haven’t accepted that since I still purchased Dragon Age II for some reason and convinced myself I’d find the time). Adding three more sons to the mix (that’s 4 total) didn’t help much in finding more RPG time (except for Breath of Death VII and other XBLIG RPG titles, thanks guys!) All work and no RPG: A few months ago I was sitting thinking about the lack of RPGs in my life and talking to my wife about why I wish RPGs were different and easier for a dad like me to get into. She seemed like she was listening, so I started listing all the things that made them impossible for me to play. Here’s a short list I came up with. They take 15 billion hours to complete. I have a few minutes at a time I can grab to play them if I want to have time to code. At that rate it would take me 9 trillion years to beat just one RPG. There are such long spans of time between when I can play them that I forget what I was even doing, so I have to spend most of the playtime I have just figuring that out and then my play time is over. Repeat. I’ll never finish one and since it takes so long to get to the fun part in an RPG, I’m never having fun. RPGs aren’t fun if you don’t have hours to play them at a time. As you can see based on my science and math, RPGs aren’t fun for me any more. From there my brain started toying around with ideas of RPGs that would work for me. They would have to be a short RPG, you know one you could beat in a single play session. A dad-sized play session. 
I started thinking, wouldn’t it be awesome if there was a fifteen minute RPG? That got me laughing and I took that as a good sign that it sounded fun and so I thought about it a little more. I immediately discarded the idea of doing a real RPG. I’m sure a short RPG like that could be done but it wasn’t the vibe that I had in my head. No this was going to be something that just had the core essence of an RPG. In reality what I’d be making would be more of an arcade style game. One with high scores and lots of crazy action on the screen. And that’s when it hit me. It would be a speed run RPG. That’s the basics of the game I’m working on. The Elevator Pitch: It’s a 2D top-down, RPG-themed arcade game focused on speed. It sounds like an RPG, smells like an RPG but it’s merely emulating an RPG. The game is focused on fun and mayhem in RPG form with players leveling up in seconds instead of hours and rushing to finish quests as quickly as possible because they’ve only got fifteen minutes before EVIL overtakes the world. If the player takes longer than fifteen minutes, it’s game over, man. One to four player co-operative play to really see just how fast players can level up and beat the game. Gamers will compete on leaderboards for bragging rights for fastest 1, 2, 3, and 4 player speed runs, lowest leveled characters to beat the game, highest leveled characters to beat the game and so on. Times will be tracked for everything from how long a player sat distributing stats, equipping items, talking to NPCs to running around the level. These stats will be shown at the end of each quest/level so the players can work on improving their speed run for that part of the game next time around. It’s the perfect RPG for those of us who only have fifteen minutes of game time! Where I’m at: I’m still at the prototyping stage, attempting to put all the basic framework pieces in place that will at minimum give me one level to rush through. I’ve been working on this prototype for about a month now though so I’m going to have to step it up a bit or I’m not going to get finished in time (remember I’ve only got 85 days left!) Lots of the game code is in place (although pretty sloppy) but I still can’t play through that first quest/level just yet. That’s my goal to finish up by the end of next Sunday (3/25/2012). You can all hold me to that and cheer me on or heckle me throughout the week. Either way that should help me stay a bit more motivated and focused. In my head this feels like it’s going to be a fun game so I’m looking forward to seeing how it actually plays!

    Read the article

  • How to subscribe to the free Oracle Linux errata yum repositories

    - by Lenz Grimmer
    Now that updates and errata for Oracle Linux are available for free (both as in beer and freedom), here's a quick HOWTO on how to subscribe your Oracle Linux system to the newly added yum repositories on our public yum server, assuming that you just installed Oracle Linux from scratch, e.g. by using the installation media (ISO images) available from the Oracle Software Delivery Cloud. You need to download the appropriate yum repository configuration file from the public yum server and install it in the yum repository directory. For Oracle Linux 6, the process would look as follows: as the root user, run the following command: [root@oraclelinux62 ~]# wget http://public-yum.oracle.com/public-yum-ol6.repo \ -P /etc/yum.repos.d/ --2012-03-23 00:18:25-- http://public-yum.oracle.com/public-yum-ol6.repo Resolving public-yum.oracle.com... 141.146.44.34 Connecting to public-yum.oracle.com|141.146.44.34|:80... connected. HTTP request sent, awaiting response... 200 OK Length: 1461 (1.4K) [text/plain] Saving to: “/etc/yum.repos.d/public-yum-ol6.repo” 100%[=================================================>] 1,461 --.-K/s in 0s 2012-03-23 00:18:26 (37.1 MB/s) - “/etc/yum.repos.d/public-yum-ol6.repo” saved [1461/1461] For Oracle Linux 5, the file name would be public-yum-ol5.repo in the URL above instead. The "_latest" repositories that contain the errata packages are already enabled by default — you can simply pull in all available updates by running "yum update" next: [root@oraclelinux62 ~]# yum update Loaded plugins: refresh-packagekit, security ol6_latest | 1.1 kB 00:00 ol6_latest/primary | 15 MB 00:42 ol6_latest 14643/14643 Setting up Update Process Resolving Dependencies --> Running transaction check ---> Package at.x86_64 0:3.1.10-43.el6 will be updated ---> Package at.x86_64 0:3.1.10-43.el6_2.1 will be an update ---> Package autofs.x86_64 1:5.0.5-39.el6 will be updated ---> Package autofs.x86_64 1:5.0.5-39.el6_2.1 will be an update ---> Package bind-libs.x86_64 32:9.7.3-8.P3.el6 will be updated ---> Package bind-libs.x86_64 32:9.7.3-8.P3.el6_2.2 will be an update ---> Package bind-utils.x86_64 32:9.7.3-8.P3.el6 will be updated ---> Package bind-utils.x86_64 32:9.7.3-8.P3.el6_2.2 will be an update ---> Package cvs.x86_64 0:1.11.23-11.el6_0.1 will be updated ---> Package cvs.x86_64 0:1.11.23-11.el6_2.1 will be an update [...] ---> Package yum.noarch 0:3.2.29-22.0.1.el6 will be updated ---> Package yum.noarch 0:3.2.29-22.0.2.el6_2.2 will be an update ---> Package yum-plugin-security.noarch 0:1.1.30-10.el6 will be updated ---> Package yum-plugin-security.noarch 0:1.1.30-10.0.1.el6 will be an update ---> Package yum-utils.noarch 0:1.1.30-10.el6 will be updated ---> Package yum-utils.noarch 0:1.1.30-10.0.1.el6 will be an update --> Finished Dependency Resolution Dependencies Resolved ===================================================================================== Package Arch Version Repository Size ===================================================================================== Installing: kernel x86_64 2.6.32-220.7.1.el6 ol6_latest 24 M kernel-uek x86_64 2.6.32-300.11.1.el6uek ol6_latest 21 M kernel-uek-devel x86_64 2.6.32-300.11.1.el6uek ol6_latest 6.3 M Updating: at x86_64 3.1.10-43.el6_2.1 ol6_latest 60 k autofs x86_64 1:5.0.5-39.el6_2.1 ol6_latest 470 k bind-libs x86_64 32:9.7.3-8.P3.el6_2.2 ol6_latest 839 k bind-utils x86_64 32:9.7.3-8.P3.el6_2.2 ol6_latest 178 k cvs x86_64 1.11.23-11.el6_2.1 ol6_latest 711 k [...] 
xulrunner x86_64 10.0.3-1.0.1.el6_2 ol6_latest 12 M yelp x86_64 2.28.1-13.el6_2 ol6_latest 778 k yum noarch 3.2.29-22.0.2.el6_2.2 ol6_latest 987 k yum-plugin-security noarch 1.1.30-10.0.1.el6 ol6_latest 36 k yum-utils noarch 1.1.30-10.0.1.el6 ol6_latest 94 k Transaction Summary ===================================================================================== Install 3 Package(s) Upgrade 96 Package(s) Total download size: 173 M Is this ok [y/N]: y Downloading Packages: (1/99): at-3.1.10-43.el6_2.1.x86_64.rpm | 60 kB 00:00 (2/99): autofs-5.0.5-39.el6_2.1.x86_64.rpm | 470 kB 00:01 (3/99): bind-libs-9.7.3-8.P3.el6_2.2.x86_64.rpm | 839 kB 00:02 (4/99): bind-utils-9.7.3-8.P3.el6_2.2.x86_64.rpm | 178 kB 00:00 [...] (96/99): yelp-2.28.1-13.el6_2.x86_64.rpm | 778 kB 00:02 (97/99): yum-3.2.29-22.0.2.el6_2.2.noarch.rpm | 987 kB 00:03 (98/99): yum-plugin-security-1.1.30-10.0.1.el6.noarch.rpm | 36 kB 00:00 (99/99): yum-utils-1.1.30-10.0.1.el6.noarch.rpm | 94 kB 00:00 ------------------------------------------------------------------------------------- Total 306 kB/s | 173 MB 09:38 warning: rpmts_HdrFromFdno: Header V3 RSA/SHA256 Signature, key ID ec551f03: NOKEY Retrieving key from http://public-yum.oracle.com/RPM-GPG-KEY-oracle-ol6 Importing GPG key 0xEC551F03: Userid: "Oracle OSS group (Open Source Software group) " From : http://public-yum.oracle.com/RPM-GPG-KEY-oracle-ol6 Is this ok [y/N]: y Running rpm_check_debug Running Transaction Test Transaction Test Succeeded Running Transaction Updating : yum-3.2.29-22.0.2.el6_2.2.noarch 1/195 Updating : xorg-x11-server-common-1.10.4-6.el6_2.3.x86_64 2/195 Updating : kernel-uek-headers-2.6.32-300.11.1.el6uek.x86_64 3/195 Updating : 12:dhcp-common-4.1.1-25.P1.el6_2.1.x86_64 4/195 Updating : tzdata-java-2011n-2.el6.noarch 5/195 Updating : tzdata-2011n-2.el6.noarch 6/195 Updating : glibc-common-2.12-1.47.el6_2.9.x86_64 7/195 Updating : glibc-2.12-1.47.el6_2.9.x86_64 8/195 [...] Cleanup : kernel-firmware-2.6.32-220.el6.noarch 191/195 Cleanup : kernel-uek-firmware-2.6.32-300.3.1.el6uek.noarch 192/195 Cleanup : glibc-common-2.12-1.47.el6.x86_64 193/195 Cleanup : glibc-2.12-1.47.el6.x86_64 194/195 Cleanup : tzdata-2011l-4.el6.noarch 195/195 Installed: kernel.x86_64 0:2.6.32-220.7.1.el6 kernel-uek.x86_64 0:2.6.32-300.11.1.el6uek kernel-uek-devel.x86_64 0:2.6.32-300.11.1.el6uek Updated: at.x86_64 0:3.1.10-43.el6_2.1 autofs.x86_64 1:5.0.5-39.el6_2.1 bind-libs.x86_64 32:9.7.3-8.P3.el6_2.2 bind-utils.x86_64 32:9.7.3-8.P3.el6_2.2 cvs.x86_64 0:1.11.23-11.el6_2.1 dhclient.x86_64 12:4.1.1-25.P1.el6_2.1 [...] xorg-x11-server-common.x86_64 0:1.10.4-6.el6_2.3 xulrunner.x86_64 0:10.0.3-1.0.1.el6_2 yelp.x86_64 0:2.28.1-13.el6_2 yum.noarch 0:3.2.29-22.0.2.el6_2.2 yum-plugin-security.noarch 0:1.1.30-10.0.1.el6 yum-utils.noarch 0:1.1.30-10.0.1.el6 Complete! At this point, your system is fully up to date. As the kernel was updated as well, a reboot is the recommended next action. If you want to install the latest release of the Unbreakable Enterprise Kernel Release 2 as well, you need to edit the .repo file and enable the respective yum repository (e.g. "ol6_UEK_latest" for Oracle Linux 6 and "ol5_UEK_latest" for Oracle Linux 5) manually, by setting enabled to "1". The next yum update run will download and install the second release of the Unbreakable Enterprise Kernel, which will be enabled after the next reboot. -Lenz
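For reference, here is a minimal sketch of that last step done from the shell rather than a text editor. The repository ids come from the article above; yum-config-manager ships in the yum-utils package (updated in the transcript above), and the sed one-liner is an alternative if that tool is not installed:

    # Enable the UEK repository on Oracle Linux 6 (use ol5_UEK_latest on Oracle Linux 5)
    yum-config-manager --enable ol6_UEK_latest

    # Alternative: flip the flag directly in the repo file
    sed -i '/\[ol6_UEK_latest\]/,/^\[/ s/enabled=0/enabled=1/' /etc/yum.repos.d/public-yum-ol6.repo

    # Pull in the new kernel; reboot afterwards to activate it
    yum update && reboot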

    Read the article

  • SQL Server and Hyper-V Dynamic Memory Part 3

    - by SQLOS Team
    In parts 1 and 2 of this series we looked at the basics of Hyper-V Dynamic Memory and SQL Server memory management. In this part Serdar looks at configuration guidelines for SQL Server memory management. Part 3: Configuration Guidelines for Hyper-V Dynamic Memory and SQL Server. Now that we understand SQL Server Memory Management and Hyper-V Dynamic Memory basics, let’s take a look at general configuration guidelines in order to utilize the benefits of Hyper-V Dynamic Memory in your SQL Server VMs. Requirements. Host Operating System Requirements: The Hyper-V Dynamic Memory feature was introduced with Windows Server 2008 R2 SP1. Therefore, in order to use Dynamic Memory for your virtual machines, you need to have Windows Server 2008 R2 SP1 or Microsoft Hyper-V Server 2008 R2 SP1 on your Hyper-V host. Guest Operating System Requirements: In addition to this, Dynamic Memory is only supported in the Standard, Web, Enterprise and Datacenter editions of Windows running inside VMs. Make sure that your VM is running one of these editions. For additional requirements on each operating system see “Dynamic Memory Configuration Guidelines” here. SQL Server Requirements: All versions of SQL Server support Hyper-V Dynamic Memory. However, only certain editions of SQL Server are aware of dynamically changing system memory. To have a truly dynamic environment for your SQL Server VMs make sure that you are running one of the SQL Server editions listed below: SQL Server 2005 Enterprise; SQL Server 2008 Enterprise / Datacenter editions; SQL Server 2008 R2 Enterprise / Datacenter editions. Configuration guidelines for other versions of SQL Server are covered below in the FAQ section. Guidelines for configuring Dynamic Memory Parameters. Here is how to configure Dynamic Memory for your SQL VMs in a nutshell (parameter: recommendation): Startup RAM: 1 GB + SQL Min Server Memory. Maximum RAM: > SQL Max Server Memory. Memory Buffer %: 5. Memory Weight: based on performance needs. Startup RAM: In order to ensure that your SQL Server VMs can start correctly, ensure that Startup RAM is higher than the configured SQL Min Server Memory for your VMs. Otherwise the SQL Server service will need to do paging in order to start, since it will not be able to see enough memory during startup. Also note that Startup Memory will always be reserved for your VMs. This will guarantee a certain level of performance for your SQL Servers; however, setting this too high will limit the consolidation benefits you’ll get out of your virtualization environment. Maximum RAM: This one is obvious. If you’ve configured SQL Max Server Memory for your SQL Server, make sure that the Dynamic Memory Maximum RAM configuration is higher than this value. Otherwise your SQL Server will not grow to memory values higher than the value configured for Dynamic Memory. Memory Buffer %: Memory buffer configuration is used to provision file cache to virtual machines in order to improve performance. Due to the fact that SQL Server manages its own buffer pool, the Memory Buffer setting should be configured to the lowest value possible, 5%. Configuring a higher memory buffer will prevent low resource notifications from the Windows Memory Manager and it will prevent reclaiming memory from SQL Server VMs. Memory Weight: Memory weight configuration defines the importance of memory to a VM. Configure higher values for the VMs that have higher performance requirements. 
VMs with higher memory weight will have more memory under high memory pressure conditions on your host. Questions and Answers: Q1 – Which SQL Server memory model is best for Dynamic Memory? The best SQL Server memory model for Dynamic Memory is the “Locked Page Memory Model”. This memory model ensures that SQL Server memory is never paged out and it’s also adaptive to dynamically changing memory in the system. This will be extremely useful when Dynamic Memory is attempting to remove memory from SQL Server VMs, ensuring no SQL Server memory is paged out. You can find instructions on configuring the “Locked Page Memory Model” for your SQL Servers here. Q2 – What about other SQL Server Editions, how should I configure Dynamic Memory for them? Other editions of SQL Server do not adapt to dynamically changing environments. They will determine how much memory they should allocate during startup and don’t change this value afterwards. Therefore make sure that you configure a higher startup memory for your VM, because that will be all the memory that SQL Server utilizes. Tune Maximum Memory and Memory Buffer based on the other workloads running on the system. If there are no other workloads consider using Static Memory for these editions. Q3 – What if I have multiple SQL Server instances in a VM? Having multiple SQL Server instances in a VM is not a general recommendation for predictable performance, manageability and isolation. In order to achieve a predictable behavior make sure that you configure SQL Min Server Memory and SQL Max Server Memory for each instance in the VM. And make sure that: Dynamic Memory Startup Memory is greater than the sum of the SQL Min Server Memory values for the instances in the VM, and Dynamic Memory Maximum Memory is greater than the sum of the SQL Max Server Memory values for the instances in the VM. Q4 – I’m using Large Page Memory Model for my SQL Server. Can I still use Dynamic Memory? The short answer is no. SQL Server does not dynamically change its memory size when configured with the Large Page Memory Model. In virtualized environments Hyper-V provides large page support by default. Most of the time, the Large Page Memory Model doesn’t bring any benefits to a SQL Server if it’s running in virtualized environments. Q5 – How do I monitor SQL performance when I’m trying Dynamic Memory on my VMs? Use the performance counters below to monitor memory performance for SQL Server: Process - Working Set: This counter is available in the VM via process performance counters. It represents the actual amount of physical memory being used by the SQL Server process in the VM. SQL Server – Buffer Cache Hit Ratio: This counter is available in the VM via SQL Server counters. This represents the paging being done by SQL Server. A rate of 90% or higher is desirable. Conclusion: These blog posts are a quick start to a story that will be developing more in the near future. We’re still continuing our testing and investigations to provide more detailed configuration guidelines and example performance numbers in a white paper in the upcoming months. Now it’s time to give SQL Server and Hyper-V Dynamic Memory a try. Use these guidelines to kick-start your environment. See what you think about it and let us know of your experiences. - Serdar Sutay Originally posted at http://blogs.msdn.com/b/sqlosteam/
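To make the parameter guidelines above concrete, here is a hypothetical worked example (the numbers are illustrative only, derived from the formulas in the table): for a VM running a single SQL Server 2008 R2 Enterprise instance configured with SQL Min Server Memory = 2 GB and SQL Max Server Memory = 8 GB, the recommendations work out to Startup RAM = 3 GB (1 GB + the 2 GB minimum), Maximum RAM = anything above 8 GB (say 12 GB), Memory Buffer = 5%, and a Memory Weight chosen relative to the other VMs on the host.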

    Read the article

  • A Guided Tour of Complexity

    - by JoshReuben
    I just re-read Complexity – A Guided Tour by Melanie Mitchell, protégée of Douglas Hofstadter (author of “Gödel, Escher, Bach”) http://www.amazon.com/Complexity-Guided-Tour-Melanie-Mitchell/dp/0199798109/ref=sr_1_1?ie=UTF8&qid=1339744329&sr=8-1 Here are some notes and links: The field evolved from Cybernetics, General Systems Theory, and Synergetics. Some interesting transdisciplinary fields to investigate: Chaos Theory - http://en.wikipedia.org/wiki/Chaos_theory – small differences in initial conditions (such as those due to rounding errors in numerical computation) yield widely diverging outcomes for chaotic systems, rendering long-term prediction impossible. System Dynamics / Cybernetics - http://en.wikipedia.org/wiki/System_Dynamics – study of how feedback changes system behavior. Network Theory - http://en.wikipedia.org/wiki/Network_theory – leverage Graph Theory to analyze symmetric / asymmetric relations between discrete objects. Algebraic Topology - http://en.wikipedia.org/wiki/Algebraic_topology – leverage abstract algebra to analyze topological spaces. There are limits to deterministic systems & to computation. Chaos Theory definitely applies to training an ANN (artificial neural network) – different weights will emerge depending upon the random selection of the training set. In recursive Non-Linear systems http://en.wikipedia.org/wiki/Nonlinear_system – output is not directly inferable from input. E.g. a Logistic map: X_{t+1} = R X_t (1 - X_t). Different types of bifurcations, attractor states and oscillations may occur – e.g. a Lorenz Attractor http://en.wikipedia.org/wiki/Lorenz_system Feigenbaum Constants http://en.wikipedia.org/wiki/Feigenbaum_constants express ratios in a bifurcation diagram for a non-linear map – the ratio of successive period-doubling bifurcation intervals converges to 4.6692016. Maxwell’s Demon - http://en.wikipedia.org/wiki/Maxwell%27s_demon - the Second Law of Thermodynamics has only a statistical certainty – the universe (and thus information) tends towards entropy. While any computation can theoretically be done without expending energy, with finite memory, the act of erasing memory is permanent and increases entropy. Life & thought are a counter-example to the universe’s tendency towards entropy. Leo Szilard and later Claude Shannon came up with the Information Theory of Entropy - http://en.wikipedia.org/wiki/Entropy_(information_theory) whereby Shannon entropy quantifies the expected value of a message’s information in bits in order to determine channel capacity and leverage Coding Theory (compression analysis). Ludwig Boltzmann came up with Statistical Mechanics - http://en.wikipedia.org/wiki/Statistical_mechanics – whereby our Newtonian perception of continuous reality is a probabilistic and statistical aggregate of many discrete quantum microstates. This is relevant for Quantum Information Theory http://en.wikipedia.org/wiki/Quantum_information and the Physics of Information - http://en.wikipedia.org/wiki/Physical_information. Hilbert’s Problems http://en.wikipedia.org/wiki/Hilbert's_problems pondered whether mathematics is complete, consistent, and decidable (the Decision Problem – http://en.wikipedia.org/wiki/Entscheidungsproblem – is there always an algorithm that can determine whether a statement is true). Godel’s Incompleteness Theorems http://en.wikipedia.org/wiki/G%C3%B6del's_incompleteness_theorems proved that mathematics cannot be both complete and consistent (e.g. “This statement is not provable”). 
Turing, through the use of Turing Machines (http://en.wikipedia.org/wiki/Turing_machine – symbol processors that can prove mathematical statements) and Universal Turing Machines (http://en.wikipedia.org/wiki/Universal_Turing_machine – Turing Machines that can emulate any other Turing Machine by accepting programs as well as data as input symbols), showed that computation is limited by demonstrating the Halting Problem http://en.wikipedia.org/wiki/Halting_problem (it is not possible to know when a program will complete – you cannot build an infinite loop detector). You may be used to thinking of 1 / 2 / 3 dimensional systems, but Fractal http://en.wikipedia.org/wiki/Fractal systems are defined by self-similarity & have non-integer Hausdorff Dimensions!!! http://en.wikipedia.org/wiki/List_of_fractals_by_Hausdorff_dimension – the fractal dimension quantifies the number of copies of a self similar object at each level of detail – e.g. Koch Snowflake - http://en.wikipedia.org/wiki/Koch_snowflake Definitions of complexity: size, Shannon entropy, Algorithmic Information Content (http://en.wikipedia.org/wiki/Algorithmic_information_theory - size of shortest program that can generate a description of an object), logical depth (amount of info processed), thermodynamic depth (resources required). Complexity is statistical and fractal. John Von Neumann’s other machine was the Self-Reproducing Automaton http://en.wikipedia.org/wiki/Self-replicating_machine . Cellular Automata http://en.wikipedia.org/wiki/Cellular_automaton are an alternative form of Universal Turing machine to traditional Von Neumann machines, where grid cells are locally synchronized with their neighbors according to a rule. Conway’s Game of Life http://en.wikipedia.org/wiki/Conway's_Game_of_Life demonstrates various emergent constructs such as “Glider Guns” and “Spaceships”. Cellular Automatons are not practical because logical ops require a large number of cells – wasteful & inefficient. There are no compilers or general programming languages available for Cellular Automatons (as far as I am aware). Random Boolean Networks http://en.wikipedia.org/wiki/Boolean_network are extensions of cellular automata where nodes are connected at random (not to spatial neighbors) and each node has its own rule –> they demonstrate the emergence of complex & self organized behavior. Stephen Wolfram’s (creator of Mathematica, so give him the benefit of the doubt) New Kind of Science http://en.wikipedia.org/wiki/A_New_Kind_of_Science proposes the universe may be a discrete Finite State Automaton http://en.wikipedia.org/wiki/Finite-state_machine whereby reality emerges from simple rules. I am 2/3 through this book. It is feasible that the universe is quantum discrete at the Planck scale and that it computes itself – Digital Physics: http://en.wikipedia.org/wiki/Digital_physics – a simulated reality? Anyway, all behavior is supposedly derived from simple algorithmic rules & falls into 4 patterns: uniform, nested / cyclical, random (Rule 30 http://en.wikipedia.org/wiki/Rule_30) & mixed (Rule 110 - http://en.wikipedia.org/wiki/Rule_110 localized structures – it is this that is interesting). Interaction between colliding propagating signal inputs is then information processing. Wolfram proposes the Principle of Computational Equivalence - http://mathworld.wolfram.com/PrincipleofComputationalEquivalence.html - all processes that are not obviously simple can be viewed as computations of equivalent sophistication. 
Meaning in information may emerge from analogy & conceptual slippages – see the CopyCat program: http://cognitrn.psych.indiana.edu/rgoldsto/courses/concepts/copycat.pdf Scale Free Networks http://en.wikipedia.org/wiki/Scale-free_network have a distribution governed by a Power Law (http://en.wikipedia.org/wiki/Power_law - much more common than Normal Distribution). They are characterized by hubs (resilience to random deletion of nodes), heterogeneity of degree values, self similarity, & small world structure. They grow via preferential attachment http://en.wikipedia.org/wiki/Preferential_attachment – tipping points triggered by positive feedback loops. 2 theories of cascading system failures in complex systems are Self-Organized Criticality http://en.wikipedia.org/wiki/Self-organized_criticality and Highly Optimized Tolerance http://en.wikipedia.org/wiki/Highly_optimized_tolerance. Computational Mechanics http://en.wikipedia.org/wiki/Computational_mechanics – use of computational methods to study phenomena governed by the principles of mechanics. This book is a great intuition pump, but does not cover the more mathematical subject of Computational Complexity Theory – http://en.wikipedia.org/wiki/Computational_complexity_theory I am currently reading this book on this subject: http://www.amazon.com/Computational-Complexity-Christos-H-Papadimitriou/dp/0201530821/ref=pd_sim_b_1   stay tuned for that review!
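As a quick illustration of the logistic map and its sensitivity to initial conditions mentioned above, here is a minimal shell sketch (constants chosen arbitrarily for demonstration): it iterates X_{t+1} = R X_t (1 - X_t) for two starting points that differ by one part in a million; in the chaotic regime (R = 3.9) the trajectories visibly diverge within a few dozen steps.

    # Iterate the logistic map for two nearby starting points and print the gap
    awk 'BEGIN {
      r = 3.9; x = 0.200000; y = 0.200001;
      for (t = 1; t <= 40; t++) {
        x = r * x * (1 - x);
        y = r * y * (1 - y);
        printf "t=%2d  x=%.6f  y=%.6f  diff=%+.6f\n", t, x, y, x - y;
      }
    }'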

    Read the article

  • HDMI video connection cuts top and bottom borders of screen

    - by Luis Alvarado
    Ok, this is an extension of another problem I had with a VGA connection and an Nvidia GeForce GT 440 card. Here goes the explanation of this particular problem: I have a Soneview 32" TV. This TV has many connections including VGA (first reason I bought it), HDMI (second reason, but I did not have an HDMI cable at that time) and DVI. I have had this TV for a little over a month now; actually I got it to celebrate the release of Ubuntu 11.10 and started using it exactly on that date (I know, too much of a fan there, but hey, I like geek stuff). I started using it with the VGA cable. After 2 weeks I bought an Nvidia GT440 card. The previous 9500GT was working correctly with no problems whatsoever. I installed the GT440 and the first problem that I encountered using this latest card is mentioned here: Nvidia GT 440 black screen problem when loading lightdm greeter. The solution to this problem was to actually disconnect and then reconnect the VGA cable. This would result in the screen showing me the lightdm screen for my login. If I did not disconnect then connect the cable I could be there forever thinking that there is no video signal. I got tired of looking for answers that did not work and for solutions that made me literally have to install Ubuntu again. I just went and bought an HDMI cable and swapped the VGA one for it. It worked and I did not have to disconnect/connect the cable, but now I have this problem at any resolution. My normal resolution is 1920x1080 (this TV is 1080 HD) so over VGA I could use this resolution with no problem, but on HDMI I am getting the borders cut off. Here is a pic: As you can see from the pic, the Launcher icons show less than 50% of their width. Forget about the top and bottom parts; I can access them with the mouse but I cannot see them on the screen. It is like they are outside of the TV's view. Basically there are something like 20 to 30 pixels gone from all sides. I searched around and came to running xrandr --verbose to see what it could detect from the TV. 
I got this: cyrex@cyrex:~$ xrandr --verbose xrandr: Failed to get size of gamma for output default Screen 0: minimum 320 x 175, current 1920 x 1080, maximum 1920 x 1080 default connected 1920x1080+0+0 (0x164) normal (normal) 0mm x 0mm Identifier: 0x163 Timestamp: 465485 Subpixel: unknown Clones: CRTC: 0 CRTCs: 0 Transform: 1.000000 0.000000 0.000000 0.000000 1.000000 0.000000 0.000000 0.000000 1.000000 filter: 1920x1080 (0x164) 103.7MHz *current h: width 1920 start 0 end 0 total 1920 skew 0 clock 54.0KHz v: height 1080 start 0 end 0 total 1080 clock 50.0Hz 1920x1080 (0x165) 105.8MHz h: width 1920 start 0 end 0 total 1920 skew 0 clock 55.1KHz v: height 1080 start 0 end 0 total 1080 clock 51.0Hz 1920x1080 (0x166) 107.8MHz h: width 1920 start 0 end 0 total 1920 skew 0 clock 56.2KHz v: height 1080 start 0 end 0 total 1080 clock 52.0Hz 1920x1080 (0x167) 109.9MHz h: width 1920 start 0 end 0 total 1920 skew 0 clock 57.2KHz v: height 1080 start 0 end 0 total 1080 clock 53.0Hz 1920x1080 (0x168) 112.0MHz h: width 1920 start 0 end 0 total 1920 skew 0 clock 58.3KHz v: height 1080 start 0 end 0 total 1080 clock 54.0Hz 1920x1080 (0x169) 114.0MHz h: width 1920 start 0 end 0 total 1920 skew 0 clock 59.4KHz v: height 1080 start 0 end 0 total 1080 clock 55.0Hz 1680x1050 (0x16a) 98.8MHz h: width 1680 start 0 end 0 total 1680 skew 0 clock 58.8KHz v: height 1050 start 0 end 0 total 1050 clock 56.0Hz 1680x1050 (0x16b) 100.5MHz h: width 1680 start 0 end 0 total 1680 skew 0 clock 59.9KHz v: height 1050 start 0 end 0 total 1050 clock 57.0Hz 1600x1024 (0x16c) 95.0MHz h: width 1600 start 0 end 0 total 1600 skew 0 clock 59.4KHz v: height 1024 start 0 end 0 total 1024 clock 58.0Hz 1440x900 (0x16d) 76.5MHz h: width 1440 start 0 end 0 total 1440 skew 0 clock 53.1KHz v: height 900 start 0 end 0 total 900 clock 59.0Hz 1360x768 (0x171) 65.8MHz h: width 1360 start 0 end 0 total 1360 skew 0 clock 48.4KHz v: height 768 start 0 end 0 total 768 clock 63.0Hz 1360x768 (0x172) 66.8MHz h: width 1360 start 0 end 0 total 1360 skew 0 clock 49.2KHz v: height 768 start 0 end 0 total 768 clock 64.0Hz 1280x1024 (0x173) 85.2MHz h: width 1280 start 0 end 0 total 1280 skew 0 clock 66.6KHz v: height 1024 start 0 end 0 total 1024 clock 65.0Hz 1280x960 (0x176) 83.6MHz h: width 1280 start 0 end 0 total 1280 skew 0 clock 65.3KHz v: height 960 start 0 end 0 total 960 clock 68.0Hz 1280x960 (0x177) 84.8MHz h: width 1280 start 0 end 0 total 1280 skew 0 clock 66.2KHz v: height 960 start 0 end 0 total 960 clock 69.0Hz 1280x720 (0x178) 64.5MHz h: width 1280 start 0 end 0 total 1280 skew 0 clock 50.4KHz v: height 720 start 0 end 0 total 720 clock 70.0Hz 1280x720 (0x179) 65.4MHz h: width 1280 start 0 end 0 total 1280 skew 0 clock 51.1KHz v: height 720 start 0 end 0 total 720 clock 71.0Hz 1280x720 (0x17a) 66.4MHz h: width 1280 start 0 end 0 total 1280 skew 0 clock 51.8KHz v: height 720 start 0 end 0 total 720 clock 72.0Hz 1152x864 (0x17b) 72.7MHz h: width 1152 start 0 end 0 total 1152 skew 0 clock 63.1KHz v: height 864 start 0 end 0 total 864 clock 73.0Hz 1152x864 (0x17c) 73.7MHz h: width 1152 start 0 end 0 total 1152 skew 0 clock 63.9KHz v: height 864 start 0 end 0 total 864 clock 74.0Hz ....Many Resolutions later... 
320x200 (0x1d1) 10.2MHz h: width 320 start 0 end 0 total 320 skew 0 clock 31.8KHz v: height 200 start 0 end 0 total 200 clock 159.0Hz 320x175 (0x1d2) 9.0MHz h: width 320 start 0 end 0 total 320 skew 0 clock 28.0KHz v: height 175 start 0 end 0 total 175 clock 160.0Hz 1920x1080 (0x1dd) 333.8MHz h: width 1920 start 0 end 0 total 1920 skew 0 clock 173.9KHz v: height 1080 start 0 end 0 total 1080 clock 161.0Hz If it helps, the Refresh Rate at 1920x1080 is 60. There is a flickering effect at this resolution using HDMI but not VGA, which I imagine is related to the borders-cut-off issue I am asking about here. I have also done the following, but this will only solve the problem at resolutions lower than 1920x1080 or on other TVs (my father has a Sony TV where this problem is also solved): NVIDIA WAY: Go to Nvidia-Settings and there will be an option that will have more features if an HDMI cable is connected. In the next pic the option is DFP-1 (CNDLCD), but this name changes depending on what device the PC is connected to: Uncheck Force Full GPU Scaling. What this will do for resolutions LOWER than 1920x1080 (at least in my case) is solve the flickering problem and fix the borders cut by the monitor (see the command-line sketch at the end of this post). Save the changes to the xorg.conf file after changing to a resolution acceptable to your eyes. TV WAY: If your TV has an OSD menu and this menu has options for scanning the screen resolution or auto-adjusting to it, disable them. Specifically the option about SCAN. If you have an option for AV Mode, disable it. Basically disable any option that needs to scan and scale the resolution. Test one by one. In the case of my father's TV this did it. In my case, the Nvidia setting solved it for lower resolutions. NOTE: In case this is not solved in the next couple of weeks I will add this as the answer, but take into consideration that the issue is still active at 1920x1080.
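For the record, here is a command-line sketch of the NVIDIA WAY above. Treat it as exploratory: attribute names vary between driver releases, and OverscanCompensation in particular is an assumption to verify against your own query output first.

    # List the attributes the NVIDIA driver exposes and look for scaling/overscan knobs
    nvidia-settings -q all | grep -i -E 'overscan|scaling'

    # If an overscan-compensation attribute shows up, it can be set directly, e.g.:
    # nvidia-settings -a OverscanCompensation=48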

    Read the article

  • Community Conversation

    - by ultan o'broin
    Applications User Experience members (Erika Webb, Laurie Pattison, and I) attended the User Assistance Europe Conference in Stockholm, Sweden. We were impressed with the thought leadership and practical application of ideas in Anne Gentle's keynote address "Social Web Strategies for Documentation". After the conference, we spoke with Anne to explore the ideas further. Applications User Experience Senior Director Laurie Pattison (left) with Anne Gentle at the User Assistance Europe Conference. In Anne's book called Conversation and Community: The Social Web for Documentation, she explains how user assistance is undergoing a seismic shift. The direction is away from the old print manuals and online help concept towards a web-based, user community-driven solution using social media tools. User experience professionals now have a vast range of such tools to start and nurture this "conversation": blogs, wikis, forums, social networking sites, microblogging systems, image and video sharing sites, virtual worlds, podcasts, instant messaging, mashups, and so on. That user communities are a rich source of user assistance is not a surprise, but the extent of available assistance is. For example, we know from the Consortium for Service Innovation that there has been an 'explosion' of user-generated content on the web. User-initiated community conversations provide as much as 30 times the number of official help desk solutions for consortium members! The growing reliance on user community solutions is clearly a user experience issue. Anne says that user assistance as conversation "means getting closer to users and helping them perform well. User-centered design has been touted as one of the most important ideas developed in the last 20 years of workplace writing. Now writers can take the idea of user-centered design a step further by starting conversations with users and enabling user assistance in interactions." Some of Anne's favorite examples of this paradigm shift from the world of traditional documentation to community conversation include: * Writer Bob Bringhurst's blog about Adobe InDesign and InCopy products and Adobe's community help * The Microsoft Development Network Community Center * The former Sun (now Oracle) OpenDS wiki, NetBeans Ruby and other community approaches to engage diverse audiences using screencasts, wikis, and blogs. * Cisco's customer support wiki, EMC's community, as well as Symantec and Intuit's approaches * The efforts of Ubuntu, Mozilla, and the FLOSS community generally Adobe Writer Bob Bringhurst's Blog Oracle is not without a user community conversation too. Besides the community discussions and blogs around documentation offerings, we have the My Oracle Support Community forums, Oracle Technology Network (OTN) communities, wiki, blogs, and so on. We have the great work done by our user groups and customer councils. Employees like David Haimes are reaching out, and enthusiastic non-employee gurus like Chet Justice (OracleNerd), Floyd Teter and Eddie Awad provide great "how-to" information too. But what does this paradigm shift mean for existing technical writers as users turn away from the traditional printable PDF manual deliverables? We asked Anne after the conference. The writer role becomes one of conversation initiator or enabler. The role evolves, along with the process, as the users define their concept of user assistance and terms of engagement with the product instead of having it pre-determined. 
It is largely a case now of "inventing the job while you're doing it, instead of being hired for it," Anne said. There is less emphasis on formal titles. Anne mentions that her own title is "Content Stacker" at OpenStack; others use titles such as "Content Curator" or "Community Lead". However, the role remains one essentially about communications, "but of a new type--interacting with users, moderating, curating content, instead of sitting down to write a manual from start to finish." Clearly then, this role is open to more than professional technical writers. Product managers who write blogs, developers who moderate forums, support professionals who update wikis, rock star programmers with a penchant for YouTube are ideal. Anyone with the product knowledge, empathy for the user, and flair for relationships on the social web can join in. Some even perform these roles already but do not realize it. Anne feels the technical communicator space will move from hiring new community conversation professionals (who are already active in the space through blogging, tweets, wikis, and so on) to retraining some existing writers over time. Our own research reveals that the established proponents of community user assistance even set employee performance objectives for internal content curators about the amount of community content delivered by people outside the organization! To take advantage of the conversations on the web as user assistance, enterprises must first establish where on the spectrum their community lies. "What is the line between community willingness to contribute and the enterprise objectives?" Anne asked. "The relationship with users must be managed and also measured." Anne believes that the process can start with a "just do it" approach. Begin by reaching out to existing user groups, individual bloggers and tweeters, forum posters, early adopter program participants, conference attendees, customer advisory board members, and so on. Use analytical tools to measure the level of conversation about your products and services to show a return on investment (ROI), winning management support. Anne emphasized that success with the community model is dependent on lowering the technical and motivational barriers so that users can readily contribute to the conversation. Simple tools must be provided, and guidelines, if any, must be straightforward but not mandatory. The conversational approach is one where traditional style and branding guides do not necessarily apply. Tools and infrastructure help users to create content easily, to search and find the information online, read it, rate it, translate it, and participate further in the content's evolution. Recognizing contributors by using ratings on forums, giving out Twitter kudos, conference invitations, visits to headquarters, free products, preview releases, and so on, also encourages the adoption of the conversation model. The move to conversation as user assistance is not free, but there is a business ROI. The conversational model means that customer service is enhanced, as user experience moves from a functional to a valued, emotional level. Studies show a positive correlation between loyalty and financial performance (Consortium for Service Innovation, 2010), and as customer experience and loyalty become key differentiators, user experience professionals cannot afford to ignore the model's possibilities. 
The digital universe (measured at 1.2 million petabytes in 2010) is doubling every 12 to 18 months, and 70 percent of that universe consists of user-generated content (IDC, 2010). Conversation as user assistance cannot be ignored but must be embraced. It is a time to manage for abundance, not scarcity. Besides, the conversation approach certainly sounds more interesting, rewarding, and fun than the traditional model! I would like to thank Anne for her time and thoughts, and recommend that all user assistance professionals read her book. You can follow Anne on Twitter at: http://www.twitter.com/annegentle. Oracle's Acrolinx IQ deployment was used to author this article.

    Read the article

  • LLBLGen Pro v3.1 released!

    - by FransBouma
    Yesterday we released LLBLGen Pro v3.1! Version 3.1 comes with new features and enhancements, which I'll describe briefly below. v3.1 is a free upgrade for v3.x licensees. What's new / changed? Designer: Extensible Import system. An extensible import system has been added to the designer to import project data from external sources. Importers are plug-ins which import project meta-data (like entity definitions, mappings and relational model data) from an external source into the loaded project. In v3.1, an importer plug-in for importing project elements from existing LLBLGen Pro v3.x project files has been included. You can use this importer to create source projects from which you import parts of models to build your actual project with. Model-only relationships. In v3.1, relationships of the type 1:1, m:1 and 1:n can be marked as model-only. A model-only relationship isn't required to have a backing foreign key constraint in the relational model data. They're ideal for projects which have to work with relational databases where changes can't always be made or some relationships can't be added (e.g. the ones which are important for the entity model, but are not allowed to be added to the relational model for some reason). Custom field ordering. Although fields in an entity definition don't really have an ordering, it can be important for some situations to have the entity fields in a given order, e.g. when you use compound primary keys. Field ordering can be defined using a pop-up dialog which can be opened in various ways, e.g. inside the project explorer, model view and entity editor. It can also be set automatically during refreshes based on new settings. Command line relational model data refresher tool, CliRefresher.exe. The command line refresh tool shipped with v2.6 is now available for v3.1 as well. Navigation enhancements in various designer elements. It's now easier to find elements like entities, typed views etc. in the project explorer from editors, to navigate to related entities in the project explorer by right clicking a relationship, navigate to the super-type in the project explorer when right-clicking an entity and navigate to the sub-type in the project explorer when right-clicking a sub-type node in the project explorer. Minor visual enhancements / tweaks. LLBLGen Pro Runtime Framework: Entity creation is now up to 30% faster and takes 5% less memory. Creating an entity object has been optimized further by tweaks inside the framework to make instantiating an entity object up to 30% faster. It now also takes up to 5% less memory than in v3.0. Prefetch Path node merging is now up to 20-25% faster. Setting entity references required the creation of a new relationship object. As this relationship object is always used internally it could be cached (as it's used for syncing only). This increases performance by 20-25% in the merging functionality. Entity fetches are now up to 20% faster. A large number of tweaks have been applied to make entity fetches up to 20% faster than in v3.0. Full WCF RIA support. It's now possible to use your LLBLGen Pro runtime framework powered domain layer in a WCF RIA application using the VS.NET tools for WCF RIA services. WCF RIA services is a Microsoft technology for .NET 4 and typically used within Silverlight applications. SQL Server DQE compatibility level is now per instance. (Usable in Adapter). 
It's now possible to set the compatibility level of the SQL Server Dynamic Query Engine (DQE) per instance of the DQE instead of the global setting it was before. The global setting is still available and is used as the default value for the compatibility level per-instance. You can use this to switch between CE Desktop and normal SQL Server compatibility per DataAccessAdapter instance. Support for COUNT_BIG aggregate function (SQL Server specific). The aggregate function COUNT_BIG has been added to the list of available aggregate functions to be used in the framework. Minor changes / tweaks I'm especially pleased with the import system, as that makes working with entity models a lot easier. The import system lets you import from another LLBLGen Pro v3 project any entity definition, mapping and / or meta-data like table definitions. This way you can build repository projects where you store model fragments, e.g. the building blocks for a customer-order system, a user credential model etc., any model you can think of. In most projects, you'll recognize that some parts of your new model look familiar. In these cases it would have been easier if you would have been able to import these parts from projects you had pre-created. With LLBLGen Pro v3.1 you can. For example, say you have an Oracle schema called CRM which contains the bread 'n' butter customer-order-product kind of model. You create an entity model from that schema and save it in a project file. Now you start working on another project for another customer and you have to use SQL Server. You also start using model-first development, so develop the entity model from scratch as there's no existing database. As this customer also requires some CRM like entity model, you import the entities from your saved Oracle project into this new SQL Server targeting project. Because you don't work with Oracle this time, you don't import the relational meta-data, just the entities, their relationships and possibly their inheritance hierarchies, if any. As they're now entities in your project you can change them a bit to match the new customer's requirements. This can save you a lot of time, because you can re-use pre-fab model fragments for new projects. In the example above there are no tables yet (as you work model first) so using the forward mapping capabilities of LLBLGen Pro v3 creates the tables, PK constraints, Unique Constraints and FK constraints for you. This way you can build a nice repository of model fragments which you can re-use in new projects.

    Read the article

  • Flashing your Windows Phone for Dummies

    - by Martin Hinshelwood
    The rate at which vendors release new updates for the HD2 is ridiculously slow. You have to wait for Microsoft to release the new OS, then you wait for HTC to build it into a ROM, and then you have to wait up to 6 months for your operator to badly customise it for their network. Once Windows Phone 7 is released this problem should go away, as Microsoft is likely to be able to update the phone over the air, but what do we do until then? I want Windows Mobile 6.5.5 now! I'm an early adopter. If there is a new version of something then that's the version I want. As long as you accept that you are using something on a "let the early adopter beware" basis, and accept that there may be bugs, sometimes serious crippling bugs, then go for it. Note that I won't be responsible if you end up bricking your phone; unlocking or flashing your radio or ROM can be risky. If you follow the instructions then you should be fine; I've flashed my phones (SPV, M300, M1000, M2000, M3100, TyTN, TyTN 2, HD2) hundreds of times without any problems! I have been using Windows Mobile 6.5.5 since before it was called 6.5.5, and for long enough that I don't even remember when I first started using it. I was using it on my HTC TyTN 2 before I got an HD2 a couple of months before Christmas, and the first custom ROMs for the HD2 arrived a couple of months after that. I always update to the latest ROM that I like, and occasionally I go back to the stock ROMs to have a look see, but I am always disappointed.

    Terms:
    - Soft Reset: Same as pulling out the battery; it is like a reboot for your phone.
    - Hard Reset: Reinstalls the operating system from the image that is stored on the phone.
    - ROM: The image that is loaded onto your phone; it is used to reinstall your phone whenever you do a "hard reset".
    - Stock ROM: A ROM from the original vendor, so HTC.
    - Cook a ROM: "Cooking" a ROM is the process a ROM developer goes through to take all of the parts (OS, drivers and applications) that make up a running phone and compile them into a ROM.
    - ROM Kitchen: A place where you get an SDK and all the component parts of the phone (OS, drivers and applications). There are usually lots of tools for making it easier to compile and build the image.
    - Flashing: The process of updating one of the layers of your phone with a new layer.
    - Bricked: This is what happens when flashing goes wrong. Your phone is now good for only one thing... stopping paper blowing away in a windy place.

    You can "cook" your own ROM using one of the many good "ROM kitchens", or you can use a ROM built and tested by someone else. I have cooked my own ROM before, and while the tutorials are good, it is a lot of hassle. You can only flash ROMs built specifically for your phone, and XDA Developers is the best place to look for one. It has a forum based structure and you can find your phone quite easily. XDA Developer Forum Installing a new ROM does have its risks. In the past there have been stories about phones being "bricked", but I have not heard of a bricked phone for quite some years. If you follow the instructions carefully you should not have any problems. Note: most of the tools are written by people for whom English is not their first language, so you will need to concentrate hard to understand some of the instructions. Have you ever read a manual that was literally translated from another language? Enough said...

    There are a number of layers on your phone that you will need to know about:
    - SPL: This is the lowest level, like a BIOS on a PC, and is the operating system's gateway to the hardware.
    - Radio: I think of this as the hardware drivers; you will need a different Radio for CDMA than for GSM networks.
    - ROM: This is like your Windows CD, but it is stored internally on the phone.

    Flashing your phone consists of replacing one image with another, then wiping your phone so it automatically reinstalls from the new image. Sometimes when you download an image, whether it is for a Radio or for a ROM, you only get a file called *.nbh. What do you do with this? You need an RUU application to push that image to your phone. The RUUs are different per phone, but there is a CustomRUU for the HD2 that will update your phone with any *.nbh placed in the same directory. Download and Instructions for CustomRUU

    #1 Flash HardSPL. An SPL is kind of like a BIOS, and the default one has checks to make sure that you are only installing a signed ROM. This prevents you from installing one that comes from any source other than the vendor. NOTE: Installing a HardSPL invalidates your warranty, so remember to flash your phone with a "stock" vendor ROM before trying to send your phone in for repairs. Is the warranty reinstated when you go back to a stock ROM? I don't know... Updating your SPL to a HardSPL effectively unlocks your phone so you can install anything you like. I would recommend HardSPL2. Download and Instructions for HardSPL2

    #2 Task29. One of the problems that has been seen on the HD2 when flashing new ROMs is that things are left over from the old ROM. For a while the recommendation was to flash a stock ROM first, but some clever cookies have come up with "Task29", which formats your phone first. After running this your phone will be blank and will only boot to the white HTC logo and no further. You should follow the instructions and reboot (remove the battery), then hold down the "volume down" button while turning your HD2 on to enter the bootloader. From here you can run CustomRUU once the USB message appears. Download and Instructions for Task29

    #3 Flash Radio. You may need to play around with this one; there is no good or bad version and the latest is not always the best. You know that annoying thing when you hit "end call" on your phone and nothing happens? Well, that's down to the Radio. Get this version right for you and you may even be able to make calls from a Windows Mobile as well. Download There are no instructions here, but the process is the same as for the ROM, except you use this *.nbh file.

    #4 Flash ROM. If you have gotten this far then you are probably a pro by now. Just download the latest ROM below and flash it to your phone. I have been really impressed by the Artemis line of ROMs, but it is in no way the only choice. I like this one as the developer builds them as close to the stock ROM as possible while updating to the latest of everything. Download and Instructions for Artemis HD2 vXX

    Conclusion: While updating your ROM is not for the faint hearted, it provides more options than the stock ROMs and quicker feature updates than waiting...

    Read the article

  • EM12c Release 4: New EMCLI Verbs

    - by SubinDaniVarughese
    Here are the new EM CLI verbs in Enterprise Manager 12c Release 4 (12.1.0.4). These help you write new scripts, or enhance your existing scripts, for further automation.

    Basic Administration Verbs
    - invoke_ws - Invoke an EM web service.

    ADM Verbs
    - associate_target_to_adm - Associate a target to an application data model.
    - export_adm - Export an Application Data Model to a specified .xml file.
    - import_adm - Import an Application Data Model from a specified .xml file.
    - list_adms - List the names, target names and application suites of existing Application Data Models.
    - verify_adm - Submit an application data model verify job for the specified target.

    Agent Update Verbs
    - get_agent_update_status - Show Agent Update results.
    - get_not_updatable_agents - Show not-updatable agents.
    - get_updatable_agents - Show updatable agents.
    - update_agents - Perform Agent Update prerequisites and submit the Agent Update job.

    BI Publisher Reports Verbs
    - grant_bipublisher_roles - Grant access to the BI Publisher catalog and features.
    - revoke_bipublisher_roles - Revoke access to the BI Publisher catalog and features.

    Blackout Verbs
    - create_rbk - Create a retroactive blackout.

    CFW Verbs
    - cancel_cloud_service_requests - Cancel cloud service requests.
    - delete_cloud_service_instances - Delete cloud service instances.
    - delete_cloud_user_objects - Delete cloud user objects.
    - get_cloud_service_instances - Get information about cloud service instances.
    - get_cloud_service_requests - Get information about cloud requests.
    - get_cloud_user_objects - Get information about cloud user objects.

    Chargeback Verbs
    - add_chargeback_entity - Add the given entity to Chargeback.
    - assign_charge_plan - Assign a plan to a chargeback entity.
    - assign_cost_center - Assign a cost center to a chargeback entity.
    - create_charge_entity_type - Create a charge entity type.
    - export_charge_plans - Export charge plans metadata to a file.
    - export_custom_charge_items - Export user-defined charge items to a file.
    - import_charge_plans - Import charge plans metadata from a given file.
    - import_custom_charge_items - Import user-defined charge items metadata from a given file.
    - list_charge_plans - List the charge plans in Chargeback.
    - list_chargeback_entities - List all the entities in Chargeback.
    - list_chargeback_entity_types - List all the entity types that are supported in Chargeback.
    - list_cost_centers - List the cost centers in Chargeback.
    - remove_chargeback_entity - Remove the given entity from Chargeback.
    - unassign_charge_plan - Unassign the plan associated with a chargeback entity.
    - unassign_cost_center - Unassign the cost center associated with a chargeback entity.

    Configuration/Association History Verbs
    - disable_config_history - Disable configuration history computation for a target type.
    - enable_config_history - Enable configuration history computation for a target type.
    - set_config_history_retention_period - Set the amount of time for which Configuration History is retained.

    Configuration Compare Verbs
    - config_compare - Submit the configuration comparison job.
    - get_config_templates - Get all the comparison templates from the repository.

    Compliance Verbs
    - fix_compliance_state - Fix compliance state by removing references to deleted targets.

    Credential Verbs
    - update_credential_set

    Data Subset Verbs
    - export_subset_definition - Export the specified subset definition as an XML file at the specified directory path.
    - generate_subset - Generate a subset using the specified subset definition and target database.
    - import_subset_definition - Import a subset definition from the specified XML file.
    - import_subset_dump - Import a dump file into the specified target database.
    - list_subset_definitions - Get the list of subset definitions, ADMs and target names.

    Delete Pluggable Database Job Verbs
    - delete_pluggable_database - Delete a pluggable database.

    Deployment Procedure Verbs
    - get_runtime_data - Get the runtime data of an execution.

    Discover and Push to Agents Verbs
    - generate_discovery_input - Generate a Discovery Input file for discovering Auto-Discovered Domains.
    - refresh_fa - Refresh a Fusion Instance.
    - run_fa_diagnostics - Run Fusion Applications Diagnostics.

    Fusion Middleware Provisioning Verbs
    - create_fmw_domain_profile - Create a Fusion Middleware Provisioning Profile from a WebLogic Domain.
    - create_fmw_home_profile - Create a Fusion Middleware Provisioning Profile from an Oracle Home.
    - create_inst_media_profile - Create a Fusion Middleware Provisioning Profile from Installation Media.

    Gold Agent Image Verbs
    - create_gold_agent_image - Create a gold agent image.
    - decouple_gold_agent_image - Decouple the agent from a gold agent image.
    - delete_gold_agent_image - Delete a gold agent image.
    - get_gold_agent_image_activity_status - Get gold agent image activity status.
    - get_gold_agent_image_details - Get the gold agent image details.
    - list_agents_on_gold_image - List agents on a gold agent image.
    - list_gold_agent_image_activities - List gold agent image activities.
    - list_gold_agent_image_series - List gold agent image series.
    - list_gold_agent_images - List the available gold agent images.
    - promote_gold_agent_image - Promote a gold agent image.
    - stage_gold_agent_image - Stage a gold agent image.

    Incident Rules Verbs
    - add_target_to_rule_set - Add a target to an enterprise rule set.
    - delete_incident_record - Delete one or more open incidents.
    - remove_target_from_rule_set - Remove a target from an enterprise rule set.

    Job Verbs
    - export_jobs - Export job details to an XML file.
    - import_jobs - Import job definitions from an XML file.
    - job_input_file - Supply details for a job verb in a property file.
    - resume_job - Resume a job or set of jobs.
    - suspend_job - Suspend a job or set of jobs.

    Oracle Database as a Service Verbs
    - config_db_service_target - Configure a DB Service target for OPC.

    Privilege Delegation Settings Verbs
    - clear_default_privilege_delegation_setting - Clear the default privilege delegation setting for a given list of platforms.
    - set_default_privilege_delegation_setting - Set the default privilege delegation setting for a given list of platforms.
    - test_privilege_delegation_setting - Test a Privilege Delegation Setting on a host.

    SSA Verbs
    - cleanup_dbaas_requests - Submit a cleanup request for a failed request.
    - create_dbaas_quota - Create a Database Quota for an SSA User Role.
    - create_service_template - Create a Service Template.
    - delete_dbaas_quota - Delete the Database Quota setup for an SSA User Role.
    - delete_service_template - Delete a given service template.
    - get_dbaas_quota - List the Database Quota setup for all SSA User Roles.
    - get_dbaas_request_settings - List the Database Request Settings.
    - get_service_template_detail - Get details of a given service template.
    - get_service_templates - Get the list of available service templates.
    - rename_service_template - Rename a given service template.
    - update_dbaas_quota - Update the Database Quota for an SSA User Role.
    - update_dbaas_request_settings - Update the Database Request Settings.
    - update_service_template - Update a given service template.

    Saved Configurations Verbs
    - get_saved_configs - Get the saved configurations from the repository.

    Server Generated Alert Metric Verbs
    - validate_server_generated_alerts - Validate server generated alert metrics.

    Services Verbs
    - edit_sl_rule - Edit the service level rule for the specified service.

    Siebel Verbs
    - list_siebel_enterprises - List Siebel enterprises currently monitored in EM.
    - list_siebel_servers - List Siebel servers under a specified Siebel enterprise.
    - update_siebel - Update a Siebel enterprise or its underlying servers.

    SiteGuard Verbs
    - add_siteguard_aux_hosts - Associate new auxiliary hosts to the system.
    - configure_siteguard_lag - Configure apply lag and transport lag limits for databases.
    - delete_siteguard_aux_host - Delete an auxiliary host associated with a site.
    - delete_siteguard_lag - Erase the apply lag or transport lag limit for databases.
    - get_siteguard_aux_hosts - Get all auxiliary hosts associated with a site.
    - get_siteguard_health_checks - Show the schedule of health checks.
    - get_siteguard_lag - Show the apply lag or transport lag limit for databases.
    - schedule_siteguard_health_checks - Schedule health checks for an operation plan.
    - stop_siteguard_health_checks - Stop all future health check executions of an operation plan.
    - update_siteguard_lag - Update the apply lag and transport lag limits for databases.

    Software Library Verbs
    - stage_swlib_entity_files - Stage files of an entity from the Software Library to a host target.

    Target Data Verbs
    - create_assoc - Create target associations.
    - delete_assoc - Delete target associations.
    - list_allowed_pairs - List allowed association types for the specified source and destination.
    - list_assoc - List associations between source and destination targets.
    - manage_agent_partnership - Manage partnership between agents; used for explicitly assigning agent partnerships.

    Trace Reports
    - generate_ui_trace_report - Generate and download a UI page performance report (to identify slow rendering pages).

    VI EMCLI Verbs
    - add_virtual_platform - Add Oracle Virtual Platform(s).
    - modify_virtual_platform - Modify an Oracle Virtual Platform.

    To get more details about each verb, execute:
    $ emcli help <verb_name>
    Example: $ emcli help list_assoc

    New resources in the list verb:

    Certificates
    - WLSCertificateDetails

    Credential Resource Group
    - PreferredCredentialsDefaultSystemScope - Preferred credentials (System Scope)
    - PreferredCredentialsSystemScope - Target preferred credential

    Privilege Delegation Settings
    - TargetPrivilegeDelegationSettingDetails - List privilege delegation setting details on a host.
    - TargetPrivilegeDelegationSetting - List privilege delegation settings on a host.
    - PrivilegeDelegationSettings - List all Privilege Delegation Settings.
    - PrivilegeDelegationSettingDetails - List details of Privilege Delegation Settings.

    To get more details about each resource, execute:
    $ emcli list -resource="<resource_name>" -help
    Example: $ emcli list -resource="PrivilegeDelegationSettings" -help

    Deprecated Verbs:

    Agent Administration Verbs
    - resecure_agent - Resecure an agent.

    To get the complete list of verbs, execute:
    $ emcli help

    Read the article

  • 13.10 - Weird WiFi connection problems - WMP300N - Broadcom BCM4321

    - by user1898041
    Just installed 13.10 on my desktop and I really like it. After having problems getting the wifi to work, I installed it connected to the internet with an ethernet cable and added the 3rd party software and updates as per the installation procedure. After installation completed, I saw the wifi icon in the upper right hand corner, but it was not seeing any wifi networks. Some Googling brought me to the 'Additional Drivers' application. It found the WMP300N Broadcom BCM4321 based PCI wifi card and installed the proprietary Broadcom STA wireless driver, which may have been installed before; I'm not sure. Here is the weird part: when I start my system, wifi seems to be in some sort of suspended state where the system sees that the card exists but the card will not detect any wifi networks. It will work after booting once I open the 'Additional Drivers' application and then start Firefox. I know it seems weird, but this is the process I've got down to get the card to recognize wifi networks. After those applications are open for a few seconds, the card starts to function like normal (although maintaining the wifi connection is a problem, most likely a separate issue). The reason this is a problem is that this is supposed to be a headless box managed through SSH. Here are the readouts from the common network diagnosis programs BEFORE I open 'Additional Drivers' and Firefox. All commands were done with sudo.

    lspci:
    00:00.0 Host bridge: Intel Corporation 82G35 Express DRAM Controller (rev 03)
    00:01.0 PCI bridge: Intel Corporation 82G35 Express PCI Express Root Port (rev 03)
    00:1a.0 USB controller: Intel Corporation 82801I (ICH9 Family) USB UHCI Controller #4 (rev 02)
    00:1a.1 USB controller: Intel Corporation 82801I (ICH9 Family) USB UHCI Controller #5 (rev 02)
    00:1a.2 USB controller: Intel Corporation 82801I (ICH9 Family) USB UHCI Controller #6 (rev 02)
    00:1a.7 USB controller: Intel Corporation 82801I (ICH9 Family) USB2 EHCI Controller #2 (rev 02)
    00:1b.0 Audio device: Intel Corporation 82801I (ICH9 Family) HD Audio Controller (rev 02)
    00:1c.0 PCI bridge: Intel Corporation 82801I (ICH9 Family) PCI Express Port 1 (rev 02)
    00:1c.4 PCI bridge: Intel Corporation 82801I (ICH9 Family) PCI Express Port 5 (rev 02)
    00:1c.5 PCI bridge: Intel Corporation 82801I (ICH9 Family) PCI Express Port 6 (rev 02)
    00:1d.0 USB controller: Intel Corporation 82801I (ICH9 Family) USB UHCI Controller #1 (rev 02)
    00:1d.1 USB controller: Intel Corporation 82801I (ICH9 Family) USB UHCI Controller #2 (rev 02)
    00:1d.2 USB controller: Intel Corporation 82801I (ICH9 Family) USB UHCI Controller #3 (rev 02)
    00:1d.7 USB controller: Intel Corporation 82801I (ICH9 Family) USB2 EHCI Controller #1 (rev 02)
    00:1e.0 PCI bridge: Intel Corporation 82801 PCI Bridge (rev 92)
    00:1f.0 ISA bridge: Intel Corporation 82801IR (ICH9R) LPC Interface Controller (rev 02)
    00:1f.2 SATA controller: Intel Corporation 82801IR/IO/IH (ICH9R/DO/DH) 6 port SATA Controller [AHCI mode] (rev 02)
    00:1f.3 SMBus: Intel Corporation 82801I (ICH9 Family) SMBus Controller (rev 02)
    01:00.0 VGA compatible controller: NVIDIA Corporation GT216 [GeForce GT 220] (rev a2)
    01:00.1 Audio device: NVIDIA Corporation High Definition Audio Controller (rev a1)
    02:00.0 Ethernet controller: Qualcomm Atheros Attansic L1 Gigabit Ethernet (rev b0)
    03:00.0 IDE interface: JMicron Technology Corp. JMB368 IDE controller
    05:00.0 Network controller: Broadcom Corporation BCM4321 802.11b/g/n (rev 01)
    05:03.0 FireWire (IEEE 1394): VIA Technologies, Inc. VT6306/7/8 [Fire II(M)] IEEE 1394 OHCI Controller (rev c0)

    lshw:
    *-network
         description: Ethernet interface
         product: Attansic L1 Gigabit Ethernet
         vendor: Qualcomm Atheros
         physical id: 0
         bus info: pci@0000:02:00.0
         logical name: eth0
         version: b0
         serial: 00:22:15:00:a8:12
         capacity: 1Gbit/s
         width: 64 bits
         clock: 33MHz
         capabilities: pm msi pciexpress bus_master cap_list rom ethernet physical tp 10bt 10bt-fd 100bt 100bt-fd 1000bt-fd autonegotiation
         configuration: autonegotiation=on broadcast=yes driver=atl1 driverversion=2.1.3 latency=0 link=no multicast=yes port=twisted pair
         resources: irq:46 memory:feac0000-feafffff memory:feaa0000-feabffff
    *-network
         description: Wireless interface
         product: BCM4321 802.11b/g/n
         vendor: Broadcom Corporation
         physical id: 0
         bus info: pci@0000:05:00.0
         logical name: eth1
         version: 01
         serial: 00:23:69:d8:2b:16
         width: 32 bits
         clock: 33MHz
         capabilities: bus_master ethernet physical wireless
         configuration: broadcast=yes driver=wl0 driverversion=6.30.223.141 (r415941) latency=64 multicast=yes wireless=IEEE 802.11abg
         resources: irq:16 memory:febfc000-febfffff

    ifconfig:
    eth0      Link encap:Ethernet  HWaddr 00:22:15:00:a8:12
              UP BROADCAST MULTICAST  MTU:1500  Metric:1
              RX packets:0 errors:0 dropped:0 overruns:0 frame:0
              TX packets:0 errors:0 dropped:0 overruns:0 carrier:0
              collisions:0 txqueuelen:1000
              RX bytes:0 (0.0 B)  TX bytes:0 (0.0 B)
    eth1      Link encap:Ethernet  HWaddr 00:23:69:d8:2b:16
              inet6 addr: fe80::223:69ff:fed8:2b16/64 Scope:Link
              UP BROADCAST MULTICAST  MTU:1500  Metric:1
              RX packets:0 errors:0 dropped:0 overruns:0 frame:0
              TX packets:0 errors:0 dropped:0 overruns:0 carrier:0
              collisions:0 txqueuelen:1000
              RX bytes:0 (0.0 B)  TX bytes:0 (0.0 B)
              Interrupt:16 Base address:0xc000
    lo        Link encap:Local Loopback
              inet addr:127.0.0.1  Mask:255.0.0.0
              inet6 addr: ::1/128 Scope:Host
              UP LOOPBACK RUNNING  MTU:65536  Metric:1
              RX packets:24 errors:0 dropped:0 overruns:0 frame:0
              TX packets:24 errors:0 dropped:0 overruns:0 carrier:0
              collisions:0 txqueuelen:0
              RX bytes:1856 (1.8 KB)  TX bytes:1856 (1.8 KB)

    iwconfig:
    eth1      IEEE 802.11abg  ESSID:off/any
              Mode:Managed  Access Point: Not-Associated  Tx-Power=200 dBm
              Retry long limit:7  RTS thr:off  Fragment thr:off
              Encryption key:off
              Power Management:off

    iwlist scan:
    eth1      No scan results

    Here are the outputs from the same commands AFTER I open 'Additional Drivers' and Firefox:

    lspci: identical to the output above.

    lshw: identical to the output above, except the wireless interface's configuration line now includes an IP address:
         configuration: broadcast=yes driver=wl0 driverversion=6.30.223.141 (r415941) ip=192.168.1.103 latency=64 multicast=yes wireless=IEEE 802.11abg

    ifconfig:
    eth0      Link encap:Ethernet  HWaddr 00:22:15:00:a8:12
              UP BROADCAST MULTICAST  MTU:1500  Metric:1
              RX packets:0 errors:0 dropped:0 overruns:0 frame:0
              TX packets:0 errors:0 dropped:0 overruns:0 carrier:0
              collisions:0 txqueuelen:1000
              RX bytes:0 (0.0 B)  TX bytes:0 (0.0 B)
    eth1      Link encap:Ethernet  HWaddr 00:23:69:d8:2b:16
              inet addr:192.168.1.103  Bcast:192.168.1.255  Mask:255.255.255.0
              inet6 addr: fe80::223:69ff:fed8:2b16/64 Scope:Link
              UP BROADCAST MULTICAST  MTU:1500  Metric:1
              RX packets:85 errors:0 dropped:0 overruns:0 frame:11901
              TX packets:132 errors:0 dropped:0 overruns:0 carrier:0
              collisions:0 txqueuelen:1000
              RX bytes:52641 (52.6 KB)  TX bytes:19058 (19.0 KB)
              Interrupt:16 Base address:0xc000
    lo        Link encap:Local Loopback
              inet addr:127.0.0.1  Mask:255.0.0.0
              inet6 addr: ::1/128 Scope:Host
              UP LOOPBACK RUNNING  MTU:65536  Metric:1
              RX packets:76 errors:0 dropped:0 overruns:0 frame:0
              TX packets:76 errors:0 dropped:0 overruns:0 carrier:0
              collisions:0 txqueuelen:0
              RX bytes:6084 (6.0 KB)  TX bytes:6084 (6.0 KB)

    iwconfig:
    eth1      IEEE 802.11abg  ESSID:"BU"
              Mode:Managed  Frequency:2.447 GHz  Access Point: 00:26:F2:1F:81:02
              Bit Rate=54 Mb/s  Tx-Power=200 dBm
              Retry long limit:7  RTS thr:off  Fragment thr:off
              Encryption key:off
              Power Management:off
              Link Quality=59/70  Signal level=-51 dBm
              Rx invalid nwid:0  Rx invalid crypt:0  Rx invalid frag:0
              Tx excessive retries:0  Invalid misc:0  Missed beacon:0

    iwlist scan:
    A LOT OF SSIDs FOUND!

    I'd like to get this problem fixed, but I'm not quite sure where to go. I've been Googling a lot and can't seem to find anyone else with this problem.

    Read the article

  • CDN on Hosted Service in Windows Azure

    - by Shaun
    Yesterday I told Wang Tao, an annoying colleague sitting beside me, how to make the static content of his website, which had just been published on Windows Azure, CDN-enabled. The approach would be:
    - Move the static content (the images, CSS files, etc.) into the blob storage.
    - Enable the CDN on his storage account.
    - Change the URLs of those static files to the CDN URL.
    I think these are the very common steps when using the CDN. But this morning I found that the new Windows Azure SDK 1.4 and the new Windows Azure Developer Portal had just been announced at the Windows Azure Blog. One of the new features in this release concerns the CDN: we can now enable the CDN not only for a storage account, but for a hosted service as well. With this new feature, the steps I mentioned above become a lot simpler.

    Enable CDN for a Hosted Service

    To enable the CDN for a hosted service we just need to log on to the Windows Azure Developer Portal. Under the "Hosted Services, Storage Accounts & CDN" item we will find a new menu on the left hand side named "CDN", where we can manage the CDN for storage accounts and hosted services. As we can see, the hosted services and storage accounts are all listed in my subscriptions. Enabling a CDN for a hosted service is very simple: just select a hosted service and click the New Endpoint button on top. In this dialog we can select the subscription and the storage account or hosted service for which we want the CDN enabled. If we selected a hosted service, like I did in the image above, the "Source URL for the CDN endpoint" will be shown automatically. This means the Windows Azure platform will treat all content under the "/cdn" folder as CDN-enabled. We cannot change this value at the moment. The following 3 checkboxes next to the URL are:
    - Enable CDN: Enable or disable the CDN.
    - HTTPS: Check this if we need to use HTTPS connections.
    - Query String: Check this if we are caching content from a hosted service and we are using query strings to specify the content to be retrieved.
    Just click the "Create" button to let Windows Azure create the CDN for our hosted service. The CDN would be available within 60 minutes, as Microsoft mentioned. My experience is that it could be used after about 15 minutes, and we can find the CDN URL in the portal as well.

    Put the Content in the CDN in a Hosted Service

    Let's create a simple Windows Azure project in Visual Studio with an MVC 2 Web Role. Since the source URL of the CDN endpoint we created above is under the "/cdn" folder, in Visual Studio we create a folder under the website named "cdn" and put some static files there. All these files will then be cached by the CDN if we use the CDN endpoint. The CDN of the hosted service can also cache a kind of "dynamic" result when the Query String feature is enabled. We create a controller named CdnController and a GetNumber action in it. The routed URL of this controller would be /Cdn/GetNumber, which can be CDN-ed as well since the URL places it under the "/cdn" folder. In the GetNumber action we just put a number value, specified by a parameter, into the view model, so the URL could be like /Cdn/GetNumber?number=2.
    using System;
    using System.Collections.Generic;
    using System.Linq;
    using System.Web;
    using System.Web.Mvc;

    namespace MvcWebRole1.Controllers
    {
        public class CdnController : Controller
        {
            //
            // GET: /Cdn/

            public ActionResult GetNumber(int number)
            {
                // Put the number from the query string into the view model.
                return View(number);
            }
        }
    }

    And we add a view to display the number, which is super simple.

    <%@ Page Title="" Language="C#" MasterPageFile="~/Views/Shared/Site.Master" Inherits="System.Web.Mvc.ViewPage<int>" %>

    <asp:Content ID="Content1" ContentPlaceHolderID="TitleContent" runat="server">
        GetNumber
    </asp:Content>

    <asp:Content ID="Content2" ContentPlaceHolderID="MainContent" runat="server">
        <h2>The number is: <%: Model.ToString() %></h2>
    </asp:Content>

    Since this action is under the CdnController, its URL is under the "/cdn" folder, which means it can be CDN-ed. And since we checked "Query String", the content of this dynamic page will be cached by its query string. So if I use the CDN URL, http://az25311.vo.msecnd.net/GetNumber?number=2, the CDN will first check whether there's any content cached with the key "GetNumber?number=2". If yes, the CDN will return the content directly; otherwise it will connect to the hosted service, http://aurora-sys.cloudapp.net/Cdn/GetNumber?number=2, send the result back to the browser and cache it in the CDN. But note that the query string is treated as a plain string when used as the key of the CDN element. This means the URLs below would be cached as 2 separate elements in the CDN (a small helper sketch for avoiding this follows at the end of this post):
    http://az25311.vo.msecnd.net/GetNumber?number=2&page=1
    http://az25311.vo.msecnd.net/GetNumber?page=1&number=2
    The final step is to upload the project onto Azure.

    Test the Hosted Service CDN

    After publishing the project on Azure, we can use the CDN in the website. The CDN endpoint we had created is az25311.vo.msecnd.net, so all files under the "/cdn" folder can be requested with it. Let's have a try with the sample.htm and c_great_wall.jpg static files. We can also request the dynamic page GetNumber, with its query string, through the CDN endpoint. And if we refresh this page it will be shown very quickly, since the content comes from the CDN without any MVC server-side processing. The style of this page is missing, though. This is because the CSS file was not included in the "/cdn" folder, so the page cannot retrieve it from the CDN URL.

    Summary

    In this post I introduced the new feature in the Windows Azure CDN that comes with the release of Windows Azure SDK 1.4 and the new Developer Portal. With the CDN of the Hosted Service we can just put the static resources under a "/cdn" folder so that the CDN can cache them automatically, with no need to put them into the blob storage. It also supports caching dynamic content with the Query String feature, so we can cache some parts of a web page by using user controls and the CDN. For example, we can cache the log on user control in the master page so that the log on part will load super-fast. There are some other new features within this release you can find here. And for more detailed information about the Windows Azure CDN please have a look here as well.

    Hope this helps, Shaun

    All documents and related graphics, codes are provided "AS IS" without warranty of any kind. Copyright © Shaun Ziyan Xu. This work is licensed under the Creative Commons License.
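    As an aside on the query string behaviour mentioned above, one way to avoid caching the same content twice is to always generate CDN URLs with the parameters in a canonical order. Below is a minimal sketch of such a helper; it is my own illustration, not part of the Azure SDK, and the endpoint name is just the sample one used in this post.

        using System;
        using System.Collections.Generic;
        using System.Linq;

        public static class CdnUrlHelper
        {
            // Hypothetical helper (not part of the Azure SDK): builds a CDN URL with
            // query parameters sorted by name, so "number=2&page=1" and
            // "page=1&number=2" always map to one CDN cache element.
            public static string Build(string cdnEndpoint, string path, IDictionary<string, string> parameters)
            {
                var query = string.Join("&", parameters
                    .OrderBy(p => p.Key, StringComparer.Ordinal)
                    .Select(p => Uri.EscapeDataString(p.Key) + "=" + Uri.EscapeDataString(p.Value)));
                return "http://" + cdnEndpoint + "/" + path.TrimStart('/')
                       + (query.Length > 0 ? "?" + query : string.Empty);
            }
        }

        // Usage: both calls yield http://az25311.vo.msecnd.net/GetNumber?number=2&page=1
        // CdnUrlHelper.Build("az25311.vo.msecnd.net", "GetNumber",
        //     new Dictionary<string, string> { { "page", "1" }, { "number", "2" } });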

    Read the article

< Previous Page | 190 191 192 193 194 195 196 197 198 199 200 201  | Next Page >