Search Results


  • WordPress not resizing images with Nginx + php-fpm and other issues

    - by Julian Fernandes
    Recently I set up an Ubuntu 12.04 VPS (512 MB RAM, 1 GHz CPU) running Nginx + php-fpm + Varnish + APC + Percona's MySQL server + CloudFlare Pro for our Ubuntu LoCo Team's WordPress blog. The blog gets about 3-4k daily hits and uses about 180 MB of RAM and 8-20% CPU. Everything seems to be working insanely fast; page load is really good, about 16x faster than any of our competitors. But there is one problem: when we upload an image, WordPress doesn't resize it, so all we can do is insert the full-size image in the post. If the image is, say, 30 KB, it resizes fine, but if it is 100 KB or more, it doesn't. In the Nginx error log I see this:

        upstream timed out (110: Connection timed out) while reading response header from upstream, client: 150.162.216.64, server: www.ubuntubrsc.com, request: "POST /wp-admin/async-upload.php HTTP/1.1", upstream: "fastcgi://unix:/var/run/php5-fpm.sock:", host: "www.ubuntubrsc.com", referrer: "http://www.ubuntubrsc.com/wp-admin/media-upload.php?post_id=2668&"

    It seems related to the issue, but I'm not sure. Once that timeout happens, I start getting it when trying to view a post, too:

        upstream timed out (110: Connection timed out) while reading response header from upstream, client: 150.162.216.64, server: www.ubuntubrsc.com, request: "GET /tutoriais-gimp-6-adicionando-aplicando-novos-pinceis.html HTTP/1.1", upstream: "fastcgi://unix:/var/run/php5-fpm.sock:", host: "www.ubuntubrsc.com", referrer: "http://www.ubuntubrsc.com/"

    Only a restart of php5-fpm fixes it. I tried increasing some timeouts, but it didn't work, so I guess it's some kind of limit I haven't figured out yet. Could someone help me with it, please?

    /etc/nginx/nginx.conf:

        user www-data;
        worker_processes 1;
        pid /var/run/nginx.pid;

        events {
            worker_connections 1024;
            use epoll;
            multi_accept on;
        }

        http {
            ## Basic Settings
            sendfile on;
            tcp_nopush on;
            tcp_nodelay off;
            keepalive_timeout 15;
            keepalive_requests 2000;
            types_hash_max_size 2048;
            server_tokens off;
            server_name_in_redirect off;
            open_file_cache max=1000 inactive=300s;
            open_file_cache_valid 360s;
            open_file_cache_min_uses 2;
            open_file_cache_errors off;
            server_names_hash_bucket_size 64;
            client_body_buffer_size 128K;
            client_header_buffer_size 1k;
            client_max_body_size 2m;
            large_client_header_buffers 4 8k;
            client_body_timeout 10m;
            client_header_timeout 10m;
            send_timeout 10m;
            include /etc/nginx/mime.types;
            default_type application/octet-stream;

            ## Logging Settings
            error_log /var/log/nginx/error.log;
            access_log off;

            ## CloudFlare's IPs (uncomment when site goes live)
            set_real_ip_from 204.93.240.0/24;
            set_real_ip_from 204.93.177.0/24;
            set_real_ip_from 199.27.128.0/21;
            set_real_ip_from 173.245.48.0/20;
            set_real_ip_from 103.22.200.0/22;
            set_real_ip_from 141.101.64.0/18;
            set_real_ip_from 108.162.192.0/18;
            set_real_ip_from 190.93.240.0/20;
            real_ip_header CF-Connecting-IP;
            set_real_ip_from 127.0.0.1/32;

            ## Gzip Settings
            gzip on;
            gzip_disable "msie6";
            gzip_vary on;
            gzip_proxied any;
            gzip_comp_level 9;
            gzip_min_length 1000;
            gzip_proxied expired no-cache no-store private auth;
            gzip_buffers 32 8k;
            # gzip_http_version 1.1;
            gzip_types text/plain text/css application/json application/x-javascript text/xml application/xml application/xml+rss text/javascript;

            ## nginx-naxsi config (uncomment if you installed nginx-naxsi)
            #include /etc/nginx/naxsi_core.rules;

            ## nginx-passenger config (uncomment if you installed nginx-passenger)
            #passenger_root /usr;
            #passenger_ruby /usr/bin/ruby;

            ## Virtual Host Configs
            include /etc/nginx/conf.d/*.conf;
            include /etc/nginx/sites-enabled/*;
        }

    /etc/nginx/fastcgi_params:

        fastcgi_param QUERY_STRING $query_string;
        fastcgi_param REQUEST_METHOD $request_method;
        fastcgi_param CONTENT_TYPE $content_type;
        fastcgi_param CONTENT_LENGTH $content_length;
        fastcgi_param SCRIPT_FILENAME $request_filename;
        fastcgi_param SCRIPT_NAME $fastcgi_script_name;
        fastcgi_param REQUEST_URI $request_uri;
        fastcgi_param DOCUMENT_URI $document_uri;
        fastcgi_param DOCUMENT_ROOT $document_root;
        fastcgi_param SERVER_PROTOCOL $server_protocol;
        fastcgi_param GATEWAY_INTERFACE CGI/1.1;
        fastcgi_param SERVER_SOFTWARE nginx/$nginx_version;
        fastcgi_param REMOTE_ADDR $remote_addr;
        fastcgi_param REMOTE_PORT $remote_port;
        fastcgi_param SERVER_ADDR $server_addr;
        fastcgi_param SERVER_PORT $server_port;
        fastcgi_param SERVER_NAME $server_name;
        fastcgi_param HTTPS $https;
        fastcgi_send_timeout 180;
        fastcgi_read_timeout 180;
        fastcgi_buffer_size 128k;
        fastcgi_buffers 256 4k;
        # PHP only, required if PHP was built with --enable-force-cgi-redirect
        fastcgi_param REDIRECT_STATUS 200;

    /etc/nginx/sites-available/default:

        ## DEFAULT HANDLER
        ## ubuntubrsc.com
        server {
            listen 8080;

            # Make site available from main domain
            server_name www.ubuntubrsc.com;

            # Root directory
            root /var/www;
            index index.php index.html index.htm;
            include /var/www/nginx.conf;
            access_log off;

            location / {
                try_files $uri $uri/ /index.php?q=$uri&$args;
            }
            location = /favicon.ico { log_not_found off; access_log off; }
            location = /robots.txt { allow all; log_not_found off; access_log off; }
            location ~ /\. { deny all; access_log off; log_not_found off; }
            location ~* ^/wp-content/uploads/.*.php$ { deny all; access_log off; log_not_found off; }

            rewrite /wp-admin$ $scheme://$host$uri/ permanent;
            error_page 404 = @wordpress;
            log_not_found off;

            location @wordpress {
                include /etc/nginx/fastcgi_params;
                fastcgi_pass unix:/var/run/php5-fpm.sock;
                fastcgi_param SCRIPT_NAME /index.php;
                fastcgi_param SCRIPT_FILENAME $document_root/index.php;
            }
            location ~ \.php$ {
                try_files $uri =404;
                include /etc/nginx/fastcgi_params;
                fastcgi_index index.php;
                fastcgi_param SCRIPT_FILENAME $document_root$fastcgi_script_name;
                if (-f $request_filename) {
                    fastcgi_pass unix:/var/run/php5-fpm.sock;
                }
            }
        }

        server {
            listen 8080;
            server_name ubuntubrsc.* www.ubuntubrsc.net www.ubuntubrsc.org www.ubuntubrsc.com.br www.ubuntubrsc.info www.ubuntubrsc.in;
            return 301 $scheme://www.ubuntubrsc.com$request_uri;
        }

    /var/www/nginx.conf:

        # BEGIN W3TC Minify cache
        location ~ /wp-content/w3tc/min.*\.js$ {
            types {}
            default_type application/x-javascript;
            expires modified 31536000s;
            add_header X-Powered-By "W3 Total Cache/0.9.2.5b";
            add_header Vary "Accept-Encoding";
            add_header Pragma "public";
            add_header Cache-Control "max-age=31536000, public, must-revalidate, proxy-revalidate";
        }
        location ~ /wp-content/w3tc/min.*\.css$ {
            types {}
            default_type text/css;
            expires modified 31536000s;
            add_header X-Powered-By "W3 Total Cache/0.9.2.5b";
            add_header Vary "Accept-Encoding";
            add_header Pragma "public";
            add_header Cache-Control "max-age=31536000, public, must-revalidate, proxy-revalidate";
        }
        location ~ /wp-content/w3tc/min.*js\.gzip$ {
            gzip off;
            types {}
            default_type application/x-javascript;
            expires modified 31536000s;
            add_header X-Powered-By "W3 Total Cache/0.9.2.5b";
            add_header Vary "Accept-Encoding";
            add_header Pragma "public";
            add_header Cache-Control "max-age=31536000, public, must-revalidate, proxy-revalidate";
            add_header Content-Encoding gzip;
        }
        location ~ /wp-content/w3tc/min.*css\.gzip$ {
            gzip off;
            types {}
            default_type text/css;
            expires modified 31536000s;
            add_header X-Powered-By "W3 Total Cache/0.9.2.5b";
            add_header Vary "Accept-Encoding";
            add_header Pragma "public";
            add_header Cache-Control "max-age=31536000, public, must-revalidate, proxy-revalidate";
            add_header Content-Encoding gzip;
        }
        # END W3TC Minify cache

        # BEGIN W3TC Browser Cache
        gzip on;
        gzip_types text/css application/x-javascript text/x-component text/richtext image/svg+xml text/plain text/xsd text/xsl text/xml image/x-icon;
        location ~ \.(css|js|htc)$ {
            expires 31536000s;
            add_header Pragma "public";
            add_header Cache-Control "max-age=31536000, public, must-revalidate, proxy-revalidate";
            add_header X-Powered-By "W3 Total Cache/0.9.2.5b";
        }
        location ~ \.(html|htm|rtf|rtx|svg|svgz|txt|xsd|xsl|xml)$ {
            expires 3600s;
            add_header Pragma "public";
            add_header Cache-Control "max-age=3600, public, must-revalidate, proxy-revalidate";
            add_header X-Powered-By "W3 Total Cache/0.9.2.5b";
            try_files $uri $uri/ $uri.html /index.php?$args;
        }
        location ~ \.(asf|asx|wax|wmv|wmx|avi|bmp|class|divx|doc|docx|eot|exe|gif|gz|gzip|ico|jpg|jpeg|jpe|mdb|mid|midi|mov|qt|mp3|m4a|mp4|m4v|mpeg|mpg|mpe|mpp|otf|odb|odc|odf|odg|odp|ods|odt|ogg|pdf|png|pot|pps|ppt|pptx|ra|ram|svg|svgz|swf|tar|tif|tiff|ttf|ttc|wav|wma|wri|xla|xls|xlsx|xlt|xlw|zip)$ {
            expires 31536000s;
            add_header Pragma "public";
            add_header Cache-Control "max-age=31536000, public, must-revalidate, proxy-revalidate";
            add_header X-Powered-By "W3 Total Cache/0.9.2.5b";
        }
        # END W3TC Browser Cache

        # BEGIN W3TC Minify core
        rewrite ^/wp-content/w3tc/min/w3tc_rewrite_test$ /wp-content/w3tc/min/index.php?w3tc_rewrite_test=1 last;
        set $w3tc_enc "";
        if ($http_accept_encoding ~ gzip) { set $w3tc_enc .gzip; }
        if (-f $request_filename$w3tc_enc) { rewrite (.*) $1$w3tc_enc break; }
        rewrite ^/wp-content/w3tc/min/(.+\.(css|js))$ /wp-content/w3tc/min/index.php?file=$1 last;
        # END W3TC Minify core

        # BEGIN W3TC Skip 404 error handling by WordPress for static files
        if (-f $request_filename) { break; }
        if (-d $request_filename) { break; }
        if ($request_uri ~ "(robots\.txt|sitemap(_index)?\.xml(\.gz)?|[a-z0-9_\-]+-sitemap([0-9]+)?\.xml(\.gz)?)") { break; }
        if ($request_uri ~* \.(css|js|htc|htm|rtf|rtx|svg|svgz|txt|xsd|xsl|xml|asf|asx|wax|wmv|wmx|avi|bmp|class|divx|doc|docx|eot|exe|gif|gz|gzip|ico|jpg|jpeg|jpe|mdb|mid|midi|mov|qt|mp3|m4a|mp4|m4v|mpeg|mpg|mpe|mpp|otf|odb|odc|odf|odg|odp|ods|odt|ogg|pdf|png|pot|pps|ppt|pptx|ra|ram|svg|svgz|swf|tar|tif|tiff|ttf|ttc|wav|wma|wri|xla|xls|xlsx|xlt|xlw|zip)$) { return 404; }
        # END W3TC Skip 404 error handling by WordPress for static files

        # BEGIN Better WP Security
        location ~ /\.ht { deny all; }
        location ~ wp-config.php { deny all; }
        location ~ readme.html { deny all; }
        location ~ readme.txt { deny all; }
        location ~ /install.php { deny all; }
        set $susquery 0;
        set $rule_2 0;
        set $rule_3 0;
        rewrite ^wp-includes/(.*).php /not_found last;
        rewrite ^/wp-admin/includes(.*)$ /not_found last;
        if ($request_method ~* "^(TRACE|DELETE|TRACK)") { return 403; }
        set $rule_0 0;
        if ($request_method ~ "POST") { set $rule_0 1; }
        if ($uri ~ "^(.*)wp-comments-post.php*") { set $rule_0 2$rule_0; }
        if ($http_user_agent ~ "^$") { set $rule_0 4$rule_0; }
        if ($rule_0 = "421") { return 403; }
        if ($args ~* "\.\./") { set $susquery 1; }
        if ($args ~* "boot.ini") { set $susquery 1; }
        if ($args ~* "tag=") { set $susquery 1; }
        if ($args ~* "ftp:") { set $susquery 1; }
        if ($args ~* "http:") { set $susquery 1; }
        if ($args ~* "https:") { set $susquery 1; }
        if ($args ~* "(<|%3C).*script.*(>|%3E)") { set $susquery 1; }
        if ($args ~* "mosConfig_[a-zA-Z_]{1,21}(=|%3D)") { set $susquery 1; }
        if ($args ~* "base64_encode") { set $susquery 1; }
        if ($args ~* "(%24&x)") { set $susquery 1; }
        if ($args ~* "(\[|\]|\(|\)|<|>|ê|\"|;|\?|\*|=$)") { set $susquery 1; }
        if ($args ~* "(&#x22;|&#x27;|&#x3C;|&#x3E;|&#x5C;|&#x7B;|&#x7C;|%24&x)") { set $susquery 1; }
        if ($args ~* "(%0|%A|%B|%C|%D|%E|%F|127.0)") { set $susquery 1; }
        if ($args ~* "(globals|encode|localhost|loopback)") { set $susquery 1; }
        if ($args ~* "(request|select|insert|concat|union|declare)") { set $susquery 1; }
        if ($http_cookie !~* "wordpress_logged_in_") { set $susquery "${susquery}2"; set $rule_2 1; set $rule_3 1; }
        if ($susquery = 12) { return 403; }
        # END Better WP Security

    /etc/php5/fpm/php-fpm.conf:

        pid = /var/run/php5-fpm.pid
        error_log = /var/log/php5-fpm.log
        emergency_restart_threshold = 3
        emergency_restart_interval = 1m
        process_control_timeout = 10s
        events.mechanism = epoll

    /etc/php5/fpm/php.ini (only the options I changed):

        open_basedir = "/var/www/"
        disable_functions = pcntl_alarm,pcntl_fork,pcntl_waitpid,pcntl_wait,pcntl_wifexited,pcntl_wifstopped,pcntl_wifsignaled,pcntl_wexitstatus,pcntl_wtermsig,pcntl_wstopsig,pcntl_signal,pcntl_signal_dispatch,pcntl_get_last_error,pcntl_strerror,pcntl_sigprocmask,pcntl_sigwaitinfo,pcntl_sigtimedwait,pcntl_exec,pcntl_getpriority,pcntl_setpriority,dl,system,shell_exec,fsockopen,parse_ini_file,passthru,popen,proc_open,proc_close,shell_exec,show_source,symlink,proc_close,proc_get_status,proc_nice,proc_open,proc_terminate,shell_exec,highlight_file,escapeshellcmd,define_syslog_variables,posix_uname,posix_getpwuid,apache_child_terminate,posix_kill,posix_mkfifo,posix_setpgid,posix_setsid,posix_setuid,escapeshellarg,posix_uname,ftp_exec,ftp_connect,ftp_login,ftp_get,ftp_put,ftp_nb_fput,ftp_raw,ftp_rawlist,ini_alter,ini_restore,inject_code,syslog,openlog,define_syslog_variables,apache_setenv,mysql_pconnect,eval,phpAds_XmlRpc,phpAds_remoteInfo,phpAds_xmlrpcEncode,phpAds_xmlrpcDecode,xmlrpc_entity_decode,fp,fput,virtual,show_source,pclose,readfile,wget
        expose_php = off
        max_execution_time = 30
        max_input_time = 60
        memory_limit = 128M
        display_errors = Off
        post_max_size = 2M
        allow_url_fopen = off
        default_socket_timeout = 60

    APC settings:

        [APC]
        apc.enabled = 1
        apc.shm_segments = 1
        apc.shm_size = 64M
        apc.optimization = 0
        apc.num_files_hint = 4096
        apc.ttl = 60
        apc.user_ttl = 7200
        apc.gc_ttl = 0
        apc.cache_by_default = 1
        apc.filters = ""
        apc.mmap_file_mask = "/tmp/apc.XXXXXX"
        apc.slam_defense = 0
        apc.file_update_protection = 2
        apc.enable_cli = 0
        apc.max_file_size = 10M
        apc.stat = 1
        apc.write_lock = 1
        apc.report_autofilter = 0
        apc.include_once_override = 0
        apc.localcache = 0
        apc.localcache.size = 512
        apc.coredump_unmap = 0
        apc.stat_ctime = 0

    /etc/php5/fpm/pool.d/www.conf:

        user = www-data
        group = www-data
        listen = /var/run/php5-fpm.sock
        listen.owner = www-data
        listen.group = www-data
        listen.mode = 0666
        pm = ondemand
        pm.max_children = 5
        pm.process_idle_timeout = 3s
        pm.max_requests = 50

    I also started getting 404 errors on the front page when I use W3 Total Cache's Page Cache (Disk Enhanced). It worked fine until some days ago, and then, out of nowhere, it started happening. Tonight I will disable my mobile plugin and activate only W3 Total Cache to see if there is a conflict between them. And to top it all off, I have been getting this error:

        PHP Warning: apc_store(): Unable to allocate memory for pool. in /var/www/wp-content/plugins/w3-total-cache/lib/W3/Cache/Apc.php on line 41

    I have already modified my APC settings, but no success. So... could anyone help me with these issues, please? Oh, in case it helps, I installed PHP like this:

        sudo apt-get install php5-fpm php5-suhosin php-apc php5-gd php5-imagick php5-curl

    and Nginx from the official PPA. Sorry for my bad English and thanks for your time, people! (:
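
    A hedged aside for readers who land here with the same symptom: the pool above runs pm = ondemand with only 5 children recycled every 50 requests, and php.ini caps max_execution_time at 30s, so a slow GD resize on a 1 GHz VPS could plausibly outlive its worker and leave the upstream hanging until php5-fpm is restarted. The values below are illustrative guesses to experiment with, not a tested fix:

        ; /etc/php5/fpm/pool.d/www.conf -- illustrative values only
        pm.max_children = 10              ; more headroom for concurrent wp-admin + front-end requests
        pm.max_requests = 500             ; 50 recycles workers very aggressively
        request_terminate_timeout = 300s  ; kill a stuck resize instead of wedging the pool

        ; /etc/php5/fpm/php.ini
        max_execution_time = 300          ; resizing large images can exceed 30s on a small VPS

        ; APC: the "Unable to allocate memory for pool" warning usually means the
        ; shared memory segment is full; 64M may be too small with W3TC caching enabled.
        apc.shm_size = 128M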


  • On StringComparison Values

    - by Jesse
    When you use the .NET Framework's String.Equals and String.Compare methods, do you use an overload that accepts a StringComparison enumeration value? If not, you should be, because the value provided for that StringComparison argument can have a big impact on the results of your string comparison. The StringComparison enumeration defines values that fall into three major categories:

    - Culture-sensitive comparison using a specific culture, defaulting to the Thread.CurrentThread.CurrentCulture value (StringComparison.CurrentCulture and StringComparison.CurrentCultureIgnoreCase)
    - Invariant culture comparison (StringComparison.InvariantCulture and StringComparison.InvariantCultureIgnoreCase)
    - Ordinal (byte-by-byte) comparison (StringComparison.Ordinal and StringComparison.OrdinalIgnoreCase)

    There is a lot of great material available that details the technical ins and outs of these different string comparison approaches. If you're at all interested in the topic, these two MSDN articles are worth a read:

    - Best Practices For Using Strings in the .NET Framework: http://msdn.microsoft.com/en-us/library/dd465121.aspx
    - How To Compare Strings: http://msdn.microsoft.com/en-us/library/cc165449.aspx

    Those articles cover the technical details of string comparison well enough that I'm not going to reiterate them here, other than to say that the upshot is that you typically want to use the culture-sensitive comparison whenever you're comparing strings that were entered by or will be displayed to users, and the ordinal comparison in nearly all other cases. So where does that leave the invariant culture comparisons? The "Best Practices For Using Strings in the .NET Framework" article has the following to say:

    "On balance, the invariant culture has very few properties that make it useful for comparison. It does comparison in a linguistically relevant manner, which prevents it from guaranteeing full symbolic equivalence, but it is not the choice for display in any culture. One of the few reasons to use StringComparison.InvariantCulture for comparison is to persist ordered data for a cross-culturally identical display. For example, if a large data file that contains a list of sorted identifiers for display accompanies an application, adding to this list would require an insertion with invariant-style sorting."

    I don't know about you, but I feel like that paragraph is a bit lacking. Are there really any "real world" reasons to use the invariant culture comparison? I think the answer to this question is "yes", but in order to understand why, we should first think about what the invariant culture comparison really does. The invariant culture comparison is really just a culture-sensitive comparison using a special invariant culture (Michael Kaplan has a great post on the history of the invariant culture on his blog: http://blogs.msdn.com/b/michkap/archive/2004/12/29/344136.aspx). This means that the invariant culture comparison will apply the linguistic customs defined by the invariant culture, which are guaranteed not to differ between different machines or execution contexts. This sort of consistency does prove useful if you need to maintain a list of strings that are sorted in a meaningful and consistent way regardless of the user viewing them or the machine on which they are being viewed.

    Example: Prototype Names

    Let's say that you work for a large multi-national toy company with branch offices in 10 different countries. Each year the company works on 15-25 new toy prototypes, each of which is assigned a "code name" while it is under development. Coming up with fun new code names is a big part of the company culture that everyone really enjoys, so to be fair, the CEO of the company spent a lot of time coming up with a prototype naming scheme that would be fun for everyone to participate in, fair to all of the different branch locations, and accessible to all members of the organization regardless of the country they are from and the language they speak:

    - Each new prototype gets a code name that begins with the letter following that of the previously created name, using the alphabetical order of the Latin/Roman alphabet. Each new year, prototype names start back at "A".
    - The country that leads the prototype development effort gets to choose the name in its native language. (An appropriate romanization system will be used for countries where the primary language is not written in the Latin/Roman alphabet. For example, the Pinyin system could be used for Chinese.)
    - To avoid repeating names, a list of all current and past prototype names will be maintained on each branch location's company intranet site.

    Assuming that maintaining a single pre-sorted list is not feasible among all of the highly distributed intranet implementations, what string comparison method would you use to sort each year's list of prototype names so that the list is both meaningful and consistent regardless of the country within which it is being viewed?

    Sorting the list with a culture-sensitive comparison using the default culture configured on each country's intranet server would probably work most of the time, but subtle differences between cultures could mean that two different people would see a list that was sorted slightly differently. The CEO wants the prototype names to be a unifying aspect of company culture and is adamant that everyone see the same list sorted in the same order, and there's no way to guarantee a consistent sort across different cultures using the culture-sensitive string comparison rules. The culture-sensitive sort would produce a meaningful list for the specific user viewing it, but it wouldn't always be consistent between different users.

    Sorting with the ordinal comparison would certainly be consistent regardless of the user viewing it, but would it be meaningful? Let's say that the current year's prototype name list looks like this:

        Antílope (Spanish)
        Babouin (French)
        Čahoun (Czech)
        Diamond (English)
        Flosse (German)

    If you were to sort this list using ordinal rules you'd end up with:

        Antílope
        Babouin
        Diamond
        Flosse
        Čahoun

    This sort is no good because the entry for "C" appears at the bottom of the list, after "F". This is because the Czech entry for the letter "C" makes use of a diacritic (accent mark). The ordinal string comparison does a byte-by-byte comparison of the code points that make up each character in the string, and the code point for the "C" with the diacritic mark is higher than that of any letter without a diacritic mark, which pushes that entry to the bottom of the sorted list. The CEO wants each country to be able to create prototype names in its native language, which means we need to allow for names that might begin with letters that have diacritics, so ordinal sorting kills the meaningfulness of the list. As it turns out, this situation is actually well-suited for the invariant culture comparison. The invariant culture accounts for linguistically relevant factors like the use of diacritics but will provide a consistent sort across all machines that perform the sort. Now that we've walked through this example, the following line from the "Best Practices For Using Strings in the .NET Framework" article makes a lot more sense:

    "One of the few reasons to use StringComparison.InvariantCulture for comparison is to persist ordered data for a cross-culturally identical display."

    That line describes the prototype name example perfectly: we need a way to persist ordered data for a cross-culturally identical display. While this example is 100% made up, I think it illustrates that there are indeed real-world situations where the invariant culture comparison is useful.
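
    To make the difference concrete, here is a minimal console sketch (my own illustration, not from the original post; the exact orderings depend on the runtime's culture tables):

        using System;
        using System.Collections.Generic;

        class PrototypeSort
        {
            static void Main()
            {
                var names = new List<string> { "Antílope", "Babouin", "Čahoun", "Diamond", "Flosse" };

                // Ordinal: raw code-point order, so "Čahoun" (Č is U+010C) sinks below "Flosse".
                names.Sort(StringComparer.Ordinal);
                Console.WriteLine(string.Join(", ", names));
                // Antílope, Babouin, Diamond, Flosse, Čahoun

                // Invariant culture: linguistically aware and identical on every machine.
                names.Sort(StringComparer.InvariantCulture);
                Console.WriteLine(string.Join(", ", names));
                // Antílope, Babouin, Čahoun, Diamond, Flosse
            }
        }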


  • TiVo Follow-up…Training Opportunities

    - by MightyZot
    A few posts ago I talked about my experience with TiVo Customer Service. While I didn’t receive bad service per se, I felt like the reps could have communicated better. I made the argument that it should be just as easy to leave a company as it is to engage with a company, even though my intention is to remain a TiVo fan. I worked for DataStorm Technologies in the early 90s. I pointed out to another developer that we were leaving files behind in our installations. My opinion was that, if the customer is uninstalling our application, there should be no trace of it left after uninstall except for the customer’s data. He replied with, “screw ‘em. They’re leaving us. Why do we care if we left anything behind?” Wow. Surely there is a lot of arrogance in that statement. Think about this…how often do you change your services, devices, or whatever?  Personally, I change things up about once every two or three years. If I don’t change things up, I at least think about it. So, every two or three years there is an opportunity for you (as a vendor or business) to sell me something. (That opportunity actually exists all the time, because there are many of these two or three year periods overlapping.) Likewise, you have the opportunity to win back my business every two or three years as well. Customer service on exit is just as important as customer service during engagement because, every so often, you have another chance to gain back my loyalty. If you screw that up on exit, your chances are close to zero. In addition, you need to consider all of the potential or existing customers that are part of or affected by my social organizations. “Melissa” at TiVo gave me a call last week and set up some time to talk about my experience. We talked yesterday and she gave me a few moments to pontificate about my thoughts on the importance of a complete customer experience. She had listened to my customer support calls and agreed that I had made it clear that I intended to remain a TiVo customer even though suddenLink is handling my subscription. She said that suddenLink is a very important partner for them and, of course, they want to do everything they can to support TiVo / suddenLink customers.  “Melissa” also said that they had turned this experience into a training opportunity for the reps involved. I hope that is true, because that “programmer arrogance” that I mentioned above (which was somewhat pervasive back then) may be part of the reason why that company is no longer around. Good job “Melissa”!  And, like I said, I am still a TiVo fan. In fact, we love our new TiVo and many of the great new features. In addition, if you’re one of the two people that read these posts, please remember that these are just opinions. Your experiences may be, and likely will be, completely unique to you.


  • Podcast Show Notes: Collaborate 10 Wrap-Up - Part 1

    - by Bob Rhubart
    OK, I know last week I promised you a program featuring Oracle ACE Directors Mike van Alst (IT-Eye) and Jordan Braunstein (TUSC) and The Definitive Guide to SOA: Oracle Service Bus author Jeff Davies. But things happen. In this case, what happened was Collaborate 10 in Las Vegas. Prior to the event I asked Oracle ACE Director and OAUG board member Floyd Teter to see if he could round up a couple of people at the event for an impromptu interview over Skype (I was here in Cleveland) to get their impressions of the event.

    Listen to Part 1

    Floyd, armed with his brand new iPad, went above and beyond the call of duty. At the appointed hour, which turned out to be about an hour after the close of Collaborate 10, Floyd had gathered nine other people to join him in a meeting room somewhere in the Mandalay Bay Convention Center. Here's the entire roster:

    - Floyd Teter - Project Manager at Jet Propulsion Lab, OAUG Board (Blog | Twitter | LinkedIn | Oracle Mix | Oracle ACE Profile)
    - Mark Rittman - EMEA Technical Director and Co-Founder, Rittman Mead, ODTUG Board (Blog | Twitter | LinkedIn | Oracle Mix | Oracle ACE Profile)
    - Chet Justice - OBI Consultant at BI Wizards (Blog | Twitter | LinkedIn | Oracle Mix | Oracle ACE Profile)
    - Elke Phelps - Oracle Applications DBA at Humana, OAUG SIG Chair (Blog | LinkedIn | Oracle Mix | Book | Oracle ACE Profile)
    - Paul Jackson - Oracle Applications DBA at Humana (Blog | LinkedIn | Oracle Mix | Book)
    - Srini Chavali - Enterprise Database & Tools Leader at Cummins, Inc (Blog | LinkedIn | Oracle Mix)
    - Dave Ferguson - President, Oracle Applications Users Group (LinkedIn | OAUG Profile)
    - John King - Owner, King Training Resources (Website | LinkedIn | Oracle Mix)
    - Gavyn Whyte - Project Portfolio Manager at iFactory Consulting (Blog | Twitter | LinkedIn | Oracle Mix)
    - John Nicholson - Channels & Alliances at Greenlight Technologies (Website | LinkedIn)

    Big thanks to Floyd for assembling the panelists and handling the on-scene MC/hosting duties.

    Listen to Part 1

    On a technical note, this discussion was conducted over Skype, using Floyd's iPad, placed in the middle of the table. During the call the audio was fantastic; the iPad did a remarkable job. Sadly, the Technology Gods were not smiling on me that day. The audio set-up that I tested successfully before the call failed to deliver when we first connected: I could hear the folks in Vegas, but they couldn't hear me. A frantic, last-minute adjustment appeared to have fixed that problem, and the audio in my headphones from both sides of the conversation was loud and clear. It wasn't until I listened to the playback that I realized that something was wrong. So the audio for the Vegas side of the discussion has about the same fidelity as a cell phone. It's listenable, but disappointing when compared to what it sounded like during the discussion. Still, this was a one-shot deal, and the roster of panelists and the resulting conversation were too good and too much fun to scrap just because of an unfortunate technical glitch.

    Part 2 of this Collaborate 10 Wrap-Up will run next week. After that, it's back on track with the previously scheduled program. So stay tuned.


  • PHP Web Services - Nice try

    Thanks to its membership in the O'Reilly User Group Programme, the Mauritius Software Craftsmanship Community (short: MSCC) recently received a welcome package with several book titles. Among them is the latest publication by Lorna Jane Mitchell, 'PHP Web Services: APIs for the Modern Web'. Following is the book review I put on Amazon:

    Nice try!

    Initially, I was astonished that a small book like 'PHP Web Services' would be able to cover all the interesting topics about APIs and Web Services, independently of whether they are written in PHP or not. And unfortunately, the title isn't able to live up to the reader's (or at least my) expectations. In its slight defense, there is no usual paragraph about the intended audience of the book, but I still have to admit that the first half (chapters 1 to 8) is well written and Lorna has her points on the various technologies. Also, the code samples in PHP are clean and easy to understand. With the chapter 'Debugging Web Services' the book started to change my mind about the clarity of advice and the instructions on designing and developing good APIs. Possibly this is related to the fact that I have been used to other tools for years, like Telerik Fiddler as an HTTP proxy for tracing and inspecting any kind of request/response handling, including localhost monitoring, SSL certificate acceptance, and the ability to debug mobile devices, especially iOS-based ones. Compared to Charles, Fiddler is available for free.

    What really threw me off is the following statement in chapter 10 about Service Type Decisions: "For users who have larger systems using technology stacks such as Java, C++, or .NET, it may be easier for them to integrate with a SOAP service." WHAT? A couple of pages earlier the author recommends staying away from 'old-fashioned' API styles like SOAP (if possible). And on top of that, I wonder why there is so much documentation on developing RESTful Web Services based on Web API: the ASP.NET stack has clearly been moving away from SOAP to JSON and REST for years! Honestly, as a software developer on the .NET stack, this leaves mixed feelings after all. As for the remaining chapters, I simply consider them 'blah blah' without any real value and with lots of theoretical advice.

    Related to chapter 13 about 'Documentation', I just had the 'pleasure' of writing a C#-based client against a Java-based SOAP Web Service. Personally, I take the WSDL as the master reference in the first place, and Visual Studio generates all the stub types involved in the communication. During implementation and testing I came across a 'java.lang.NullPointerException' in various methods and for various method parameters. The WSDL and the generated types were declared as Nullable, so nothing to worry about, right? Well, I logged a support ticket, and guess what the response to that scenario was? "The service definition in the WSDL is wrong, please refer to the documentation in order to use the methods and parameters correctly" - no comment!

    Lorna's title is a quick read, and in some areas she has good advice on designing and implementing Web Services and APIs. But roughly 100 pages aren't enough to cover a vast topic like that. After all, nice try, and I'm looking forward to an improved second edition.

    Honestly, I never thought that I would end up writing a poor review. In general, it's a good book, but it clearly lacks depth, the PHP code samples are incomplete (closing tags missing), and there are too many assumptions and theoretical statements.
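
    For context, this is the kind of REST-client code the early chapters cover; a minimal sketch of my own, not an excerpt from the book, and the endpoint URL is a placeholder:

        <?php
        // Minimal JSON-over-HTTP client sketch; https://api.example.com is a placeholder.
        $ch = curl_init('https://api.example.com/books?q=web+services');
        curl_setopt($ch, CURLOPT_RETURNTRANSFER, true);
        curl_setopt($ch, CURLOPT_HTTPHEADER, array('Accept: application/json'));

        $body = curl_exec($ch);
        if ($body === false) {
            die('Request failed: ' . curl_error($ch));
        }
        curl_close($ch);

        // Decode the JSON response into an associative array.
        $data = json_decode($body, true);
        var_dump($data);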


  • how to uninstall mariadb and re-install mysql ? Mysql install turns into mariadb install

    - by Suma
    I recently upgraded my CentOS system via the desktop. Mistake! I had MariaDB and phpMyAdmin working just fine before, but after the upgrade they stopped. I frantically googled and tried to follow some tutorials about MariaDB and MySQL reinstalls until I came to this one: http://centosforge.com/node/how-replace-mysql-mariadb-centos-6-including-mysql-uninstall-instructions-and-yum-install

    I executed this command to remove all of MySQL:

        yum remove mysql-server mysql-libs mysql-devel mysql*

    and then tried to reinstall MySQL as below; it fails with the following errors:

        [root@localhost ~]# yum install mysql-server mysql mysql-devel
        Loaded plugins: fastestmirror
        Loading mirror speeds from cached hostfile
         * base: centos.serverspace.co.uk
         * extras: centos.serverspace.co.uk
         * rpmforge: www.mirrorservice.org
         * updates: mirror.rmg.io
        Setting up Install Process
        Package mysql-server is obsoleted by MariaDB-server, trying to install MariaDB-server-5.5.29-1.i686 instead
        Package mysql is obsoleted by MariaDB-server, trying to install MariaDB-server-5.5.29-1.i686 instead
        Package mysql-devel is obsoleted by MariaDB-devel, trying to install MariaDB-devel-5.5.29-1.i686 instead
        Resolving Dependencies
        --> Running transaction check
        ---> Package MariaDB-devel.i686 0:5.5.29-1 set to be updated
        --> Processing Dependency: MariaDB-common for package: MariaDB-devel
        ---> Package MariaDB-server.i686 0:5.5.29-1 set to be updated
        --> Processing Dependency: libssl.so.10 for package: MariaDB-server
        --> Processing Dependency: libcrypto.so.10 for package: MariaDB-server
        --> Running transaction check
        ---> Package MariaDB-common.i686 0:5.5.29-1 set to be updated
        --> Processing Dependency: MariaDB-compat for package: MariaDB-common
        ---> Package MariaDB-server.i686 0:5.5.29-1 set to be updated
        --> Processing Dependency: libssl.so.10 for package: MariaDB-server
        --> Processing Dependency: libcrypto.so.10 for package: MariaDB-server
        --> Running transaction check
        ---> Package MariaDB-compat.i686 0:5.5.29-1 set to be updated
        ---> Package MariaDB-server.i686 0:5.5.29-1 set to be updated
        --> Processing Dependency: libssl.so.10 for package: MariaDB-server
        --> Processing Dependency: libcrypto.so.10 for package: MariaDB-server
        --> Finished Dependency Resolution
        MariaDB-server-5.5.29-1.i686 from mariadb has depsolving problems
          --> Missing Dependency: libcrypto.so.10 is needed by package MariaDB-server-5.5.29-1.i686 (mariadb)
        MariaDB-server-5.5.29-1.i686 from mariadb has depsolving problems
          --> Missing Dependency: libssl.so.10 is needed by package MariaDB-server-5.5.29-1.i686 (mariadb)
        Error: Missing Dependency: libcrypto.so.10 is needed by package MariaDB-server-5.5.29-1.i686 (mariadb)
        Error: Missing Dependency: libssl.so.10 is needed by package MariaDB-server-5.5.29-1.i686 (mariadb)
         You could try using --skip-broken to work around the problem
         You could try running: package-cleanup --problems
                                package-cleanup --dupes
                                rpm -Va --nofiles --nodigest
        [root@localhost ~]

    If I now try to install libssl.so.10, I get asked to install glibc libraries 2.17 and 2.7. Other discussions have said to steer clear of these, as replacing glibc can break the whole system. I tried downloading 2.17 anyway and it's huge; it took ages to unpack.

    Could someone please help me completely remove MariaDB and install MySQL, so that I don't get the above errors and get pushed over to MariaDB when I run:

        yum install mysql-server mysql mysql-devel

    There is plenty of material on how to install MariaDB, but none I have found so far plainly explains how to go back to MySQL.
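
    In case it helps the next reader: the "obsoleted by MariaDB-server" lines mean a MariaDB yum repository is still enabled, so yum keeps substituting MariaDB for MySQL. A hedged sketch of the usual way out; the .repo filename is a guess, so check the directory first:

        ls /etc/yum.repos.d/                        # find the MariaDB .repo file added earlier
        sudo mv /etc/yum.repos.d/MariaDB.repo ~/MariaDB.repo.disabled
        sudo yum clean all                          # flush cached repo metadata
        sudo yum remove "MariaDB-*"                 # remove any MariaDB packages already pulled in
        sudo yum install mysql-server mysql mysql-devel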


  • SQL SERVER – Expanding Views – Contest Win Joes 2 Pros Combo (USD 198) – Day 4 of 5

    - by pinaldave
    In August 2011 we ran a contest where we gave away one book every day for an entire month. The contest was extremely successful: lots of people participated and lots of books were given away. I have received many questions asking whether we are doing something similar this month. Absolutely; but instead of running a month-long contest, we are doing something more interesting. We are giving away gifts worth USD 198 every day this week: the Joes 2 Pros 5-volume (book) SQL 2008 Development Certification Training Kit, one copy in India and one in the USA, for a total of two giveaways each day (worth USD 198). All the gifts are sponsored by Koenig Training Solution and Joes 2 Pros. The books are available here: Amazon | Flipkart | Indiaplaza

    How to Win:
    - Read the question
    - Read the hints
    - Answer the quiz in the contact form in the following format: Question Answer, Name of the country (the contest is open to USA and India residents only)
    - 2 winners will be randomly selected and announced on August 20th

    Question of the Day: Which of the following keywords will force a query to use an index created on a view?
    a) ENCRYPTION
    b) SCHEMABINDING
    c) NOEXPAND
    d) CHECK OPTION

    Query Hints: BIG HINT POST

    Usually, the assumption is that a query on the table will use the index created on the table, and a query on the view will use the index created on the view. However, that is a misconception; it does not happen this way. In fact, if you look at the image, you will find that both of them (table and view) use the index created on the table. The index created on the view is not used. The reason, as listed in BOL: the cost of using the indexed view may exceed the cost of getting the data from the base tables, or the query may be so simple that a query plan against the base tables is fast and easy to find. This often happens when the indexed view is defined on small tables. You can use the NOEXPAND hint if you want to force the query processor to use the indexed view. This may require you to rewrite your query if you don't initially reference the view explicitly. You can get the actual cost of the query with NOEXPAND and compare it to the actual cost of the query plan that doesn't reference the view. If they are close, this may give you the confidence that the decision of whether or not to use the indexed view doesn't matter.

    Additional Hints: I have previously discussed various concepts from SQL Server Joes 2 Pros Volume 4:
    - SQL Joes 2 Pros Development Series – Structured Error Handling
    - SQL Joes 2 Pros Development Series – SQL Server Error Messages
    - SQL Joes 2 Pros Development Series – Table-Valued Functions
    - SQL Joes 2 Pros Development Series – Table-Valued Store Procedure Parameters
    - SQL Joes 2 Pros Development Series – Easy Introduction to CHECK Options
    - SQL Joes 2 Pros Development Series – Introduction to Views
    - SQL Joes 2 Pros Development Series – All about SQL Constraints

    Next Step: Answer the quiz in the contact form in the following format: Question Answer, Name of the country (the contest is open for USA and India).

    Bonus Winner: Leave a comment with your favorite article from the "additional hints" section and you may be eligible for a surprise gift. There is no country restriction for this bonus contest. Do mention why you liked any particular blog post, and I will announce the winner of the same along with the main contest.

    Reference: Pinal Dave (http://blog.sqlauthority.com) Filed under: Joes 2 Pros, PostADay, SQL, SQL Authority, SQL Puzzle, SQL Query, SQL Server, SQL Tips and Tricks, T SQL, Technology
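
    To see the hint from the question in action, here is a minimal T-SQL sketch; the table, view, and index names are invented for illustration:

        -- Indexed view over a hypothetical dbo.SalesOrders table
        CREATE VIEW dbo.vSalesTotals
        WITH SCHEMABINDING
        AS
        SELECT ProductID, SUM(Qty) AS TotalQty, COUNT_BIG(*) AS RowCnt
        FROM dbo.SalesOrders
        GROUP BY ProductID;
        GO
        CREATE UNIQUE CLUSTERED INDEX IX_vSalesTotals ON dbo.vSalesTotals (ProductID);
        GO
        -- Without a hint the optimizer may expand the view and use the base table's index;
        -- NOEXPAND forces it to use the index created on the view.
        SELECT ProductID, TotalQty
        FROM dbo.vSalesTotals WITH (NOEXPAND);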


  • System.Threading.ThreadAbortException executing WCF service

    - by SURESH GIRIRAJAN
    On one of our prod servers we recently ran into an issue: after updating the web.config and trying to browse the service, the service stopped responding and we started getting the following warning in the application log. Our service is a WCF service (a BizTalk orchestration exposed as a service). We have other prod servers where we never ran into this issue, so what's different about this server? After going through a lot of forum posts, I came across a Microsoft service pack and a hotfix related to FCN (file change notification). But I didn't want to apply a patch on just this server, because then we would need to apply it to all the other servers too. So the solution is simple: I dropped the existing website, created a new site with a different name and the updated web.config, and browsed the service. Then I dropped that site, recreated the original website, and it worked fine without any issue.

    Event Viewer:

        Event Type:        Warning
        Event Source:      ASP.NET 2.0.50727.0
        Event Category:    Web Event
        Event ID:          1309
        Date:              6/6/2011
        Time:              5:41:42 PM
        User:              N/A
        Computer:          PRODP02
        Description:
        Event code: 3005
        Event message: An unhandled exception has occurred.
        Event time: 6/6/2011 5:41:42 PM
        Event time (UTC): 6/6/2011 9:41:42 PM
        Event ID: a71769f42b304355a58c482bfec267f2
        Event sequence: 3
        Event occurrence: 1
        Event detail code: 0

        Application information:
            Application domain: /LM/W3SVC/518296899/ROOT/PortArrivals-2-129518698821558995
            Trust level: Full
            Application Virtual Path: /TESTSVC
            Application Path: D:\inetpub\wwwroot\RFID\TESTSVC\
            Machine name: PRODP02

        Process information:
            Process ID: 8752
            Process name: w3wp.exe
            Account name: domain\BizTalk_Svc_Hostlso

        Exception information:
            Exception type: ThreadAbortException
            Exception message: Thread was being aborted.

        Request information:
            Request URL: http://localhost:81/TESTSVC/TESTSVCS.svc
            Request path: /TESTSVC/TESTSVCS.svc
            User host address: 127.0.0.1
            User:
            Is authenticated: False
            Authentication Type:
            Thread account name: domain\BizTalk_Svc_Hostlso

        Thread information:
            Thread ID: 22
            Thread account name: domain\BizTalk_Svc_Hostlso
            Is impersonating: False
            Stack trace:
                at System.Web.HttpApplication.ExecuteStep(IExecutionStep step, Boolean& completedSynchronously)
                at System.Web.HttpApplication.ApplicationStepManager.ResumeSteps(Exception error)
                at System.Web.HttpApplication.System.Web.IHttpAsyncHandler.BeginProcessRequest(HttpContext context, AsyncCallback cb, Object extraData)
                at System.Web.HttpRuntime.ProcessRequestInternal(HttpWorkerRequest wr)

        <Description>Handling an exception.</Description>
        <AppDomain>/LM/W3SVC/518296899/ROOT/TESTSVC-6-129518741899334691</AppDomain>
        <Exception>
          <ExceptionType>System.Threading.ThreadAbortException, mscorlib, Version=2.0.0.0, Culture=neutral, PublicKeyToken=b77a5c561934e089</ExceptionType>
          <Message>Thread was being aborted.</Message>
          <StackTrace>
            at System.Threading.Monitor.Enter(Object obj)
            at System.ServiceModel.ServiceHostingEnvironment.HostingManager.EnsureServiceAvailable(String normalizedVirtualPath)
            at System.ServiceModel.ServiceHostingEnvironment.EnsureServiceAvailableFast(String relativeVirtualPath)
            at System.ServiceModel.Activation.HostedHttpRequestAsyncResult.HandleRequest()
            at System.ServiceModel.Activation.HostedHttpRequestAsyncResult.BeginRequest()
            at System.ServiceModel.Activation.HostedHttpRequestAsyncResult.OnBeginRequest(Object state)
            at System.ServiceModel.Channels.IOThreadScheduler.CriticalHelper.WorkItem.Invoke2()
            at System.ServiceModel.Channels.IOThreadScheduler.CriticalHelper.WorkItem.Invoke()
            at System.ServiceModel.Channels.IOThreadScheduler.CriticalHelper.ProcessCallbacks()
            at System.ServiceModel.Channels.IOThreadScheduler.CriticalHelper.CompletionCallback(Object state)
            at System.ServiceModel.Channels.IOThreadScheduler.CriticalHelper.ScheduledOverlapped.IOCallback(UInt32 errorCode, UInt32 numBytes, NativeOverlapped* nativeOverlapped)
            at System.ServiceModel.Diagnostics.Utility.IOCompletionThunk.UnhandledExceptionFrame(UInt32 error, UInt32 bytesRead, NativeOverlapped* nativeOverlapped)
          </StackTrace>
          <ExceptionString>System.Threading.ThreadAbortException: Thread was being aborted.
            at System.Threading.Monitor.Enter(Object obj)
            at System.ServiceModel.ServiceHostingEnvironment.HostingManager.EnsureServiceAvailable(String normalizedVirtualPath)
            at System.ServiceModel.ServiceHostingEnvironment.EnsureServiceAvailableFast(String relativeVirtualPath)
            at System.ServiceModel.Activation.HostedHttpRequestAsyncResult.HandleRequest()
            at System.ServiceModel.Activation.HostedHttpRequestAsyncResult.BeginRequest()
            at System.ServiceModel.Activation.HostedHttpRequestAsyncResult.OnBeginRequest(Object state)
            at System.ServiceModel.Channels.IOThreadScheduler.CriticalHelper.WorkItem.Invoke2()
            at System.ServiceModel.Channels.IOThreadScheduler.CriticalHelper.WorkItem.Invoke()
            at System.ServiceModel.Channels.IOThreadScheduler.CriticalHelper.ProcessCallbacks()
            at System.ServiceModel.Channels.IOThreadScheduler.CriticalHelper.CompletionCallback(Object state)
            at System.ServiceModel.Channels.IOThreadScheduler.CriticalHelper.ScheduledOverlapped.IOCallback(UInt32 errorCode, UInt32 numBytes, NativeOverlapped* nativeOverlapped)
            at System.ServiceModel.Diagnostics.Utility.IOCompletionThunk.UnhandledExceptionFrame(UInt32 error, UInt32 bytesRead, NativeOverlapped* nativeOverlapped)</ExceptionString>
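
    For anyone who cannot drop and recreate the site: since the hang lives in the worker process, recycling the application pool usually clears it as well. A hedged example, assuming IIS 7 or later and a pool name of "MyAppPool" (substitute your own):

        %windir%\system32\inetsrv\appcmd recycle apppool "MyAppPool"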


  • Oracle @ E20 Conference Boston - Building Social Business

    - by Michael Snow
    Oracle WebCenter is The Engagement Platform Powering Exceptional Experiences for Employees, Partners and Customers

    The way we work is changing rapidly, offering an enormous competitive advantage to those who embrace the new tools that enable contextual, agile and simplified information exchange and collaboration to distributed workforces and networks of partners and customers. As many of you are aware, Enterprise 2.0 is the term for the technologies and business practices that liberate the workforce from the constraints of legacy communication and productivity tools like email. It provides business managers with access to the right information at the right time through a web of inter-connected applications, services and devices. Enterprise 2.0 makes accessible the collective intelligence of many, translating to a huge competitive advantage in the form of increased innovation, productivity and agility.

    The Enterprise 2.0 Conference takes a strategic perspective, emphasizing the bigger picture implications of the technology and the exploration of what is at stake for organizations trying to change not only tools, but also culture and process. Beyond discussion of the "why", there will also be in-depth opportunities for learning the "how" that will help you bring Enterprise 2.0 to your business. You won't want to miss this opportunity to learn and hear from leading experts in the fields of technology for business, collaboration, culture change and collective intelligence.

    Oracle was a proud Gold sponsor of the Enterprise 2.0 Conference, taking place this past week in Boston. For those of you that weren't able to make it, we've made the Oracle Social Network presentation session available here and have posted the slides below.

    The following is intended to outline our general product direction. It is intended for information purposes only, and may not be incorporated into any contract. It is not a commitment to deliver any material, code, or functionality, and should not be relied upon in making purchasing decisions. The development, release, and timing of any features or functionality described for Oracle's products remains at the sole discretion of Oracle.


  • Parent Objects

    - by Ali Bahrami
    Support for Parent Objects was added in Solaris 11 Update 1. The following material is adapted from the PSARC arc case, and the Solaris Linker and Libraries Manual.

    A "plugin" is a shared object, usually loaded via dlopen(), that is used by a program in order to allow the end user to add functionality to the program. Examples of plugins include those used by web browsers (flash, acrobat, etc), as well as mdb and elfedit modules. The object that loads the plugin at runtime is called the "parent object". Unlike most object dependencies, the parent is not identified by name, but by its status as the object doing the load.

    Historically, building a good plugin has been more complicated than it should be:

    - A parent and its plugin usually share a 2-way dependency: the plugin provides one or more routines for the parent to call, and the parent supplies support routines for use by the plugin for things like memory allocation and error reporting.
    - It is a best practice to build all objects, including plugins, with the -z defs option, in order to ensure that the object specifies all of its dependencies, and is self contained. However: the parent is usually an executable, which cannot be linked to via the usual library mechanisms provided by the link editor. And even if the parent is a shared object, which could be a normal library dependency to the plugin, it may be desirable to build plugins that can be used by more than one parent, in which case embedding a NEEDED dependency entry for one of the parents is undesirable.
    - The usual way to build a high quality plugin with -z defs uses a special mapfile provided by the parent. This mapfile defines the parent routines, specifying the PARENT attribute (see example below). This works, but is inconvenient and error prone. The symbol table in the parent already describes what it makes available to plugins; ideally the plugin would obtain that information directly rather than from a separate mapfile.

    The new -z parent option to ld allows a plugin to link to the parent and access the parent symbol table. This differs from a typical dependency: no NEEDED record is created, and the relationship is recorded as a logical connection to the parent, rather than as an explicit object name. However, it operates in the same manner as any other dependency in terms of making symbols available to the plugin.

    When the -z parent option is used, the link-editor records the basename of the parent object in the dynamic section, using the new tag DT_SUNW_PARENT. This is an informational tag, which is not used by the runtime linker to locate the parent, but which is available for diagnostic purposes. The ld(1) manpage documentation for the -z parent option is:

        -z parent=object
            Specifies a "parent object", which can be an executable or shared
            object, against which to link the output object. This option is
            typically used when creating "plugin" shared objects intended to be
            loaded by an executable at runtime via the dlopen() function. The
            symbol table from the parent object is used to satisfy references
            from the plugin object. The use of the -z parent option makes
            symbols from the object calling dlopen() available to the plugin.

    Example

    For this example, we use a main program and a plugin. The parent provides a function named parent_callback() for the plugin to call.
    The plugin provides a function named plugin_func() to the parent:

        % cat main.c
        #include <stdio.h>
        #include <dlfcn.h>
        #include <link.h>

        void
        parent_callback(void)
        {
                printf("plugin_func() has called parent_callback()\n");
        }

        int
        main(int argc, char **argv)
        {
                typedef void plugin_func_t(void);
                void *hdl;
                plugin_func_t *plugin_func;

                if (argc != 2) {
                        fprintf(stderr, "usage: main plugin\n");
                        return (1);
                }

                if ((hdl = dlopen(argv[1], RTLD_LAZY)) == NULL) {
                        fprintf(stderr, "unable to load plugin: %s\n", dlerror());
                        return (1);
                }

                plugin_func = (plugin_func_t *) dlsym(hdl, "plugin_func");
                if (plugin_func == NULL) {
                        fprintf(stderr, "unable to find plugin_func: %s\n", dlerror());
                        return (1);
                }
                (*plugin_func)();
                return (0);
        }

        % cat plugin.c
        #include <stdio.h>

        extern void parent_callback(void);

        void
        plugin_func(void)
        {
                printf("parent has called plugin_func() from plugin.so\n");
                parent_callback();
        }

    Building this in the traditional manner, without -zdefs:

        % cc -o main main.c
        % cc -G -o plugin.so plugin.c
        % ./main ./plugin.so
        parent has called plugin_func() from plugin.so
        plugin_func() has called parent_callback()

    As noted above, when building any shared object, the -z defs option is recommended, in order to ensure that the object is self contained and specifies all of its dependencies. However, the use of -z defs prevents the plugin object from linking, due to the unsatisfied symbol from the parent object:

        % cc -zdefs -G -o plugin.so plugin.c
        Undefined                       first referenced
         symbol                             in file
        parent_callback                     plugin.o
        ld: fatal: symbol referencing errors. No output written to plugin.so

    A mapfile can be used to specify to ld that the parent_callback symbol is supplied by the parent object:

        % cat plugin.mapfile
        $mapfile_version 2
        SYMBOL_SCOPE {
                global:
                        parent_callback { FLAGS = PARENT };
        };

        % cc -zdefs -Mplugin.mapfile -G -o plugin.so plugin.c

    However, the -z parent option to ld is the most direct solution to this problem, allowing the plugin to actually link against the parent object, and obtain the available symbols from it. An added benefit of using -z parent instead of a mapfile is that the name of the parent object is recorded in the dynamic section of the plugin, and can be displayed by the file utility:

        % cc -zdefs -zparent=main -G -o plugin.so plugin.c
        % elfdump -d plugin.so | grep PARENT
               [0]  SUNW_PARENT       0xcc        main
        % file plugin.so
        plugin.so: ELF 32-bit LSB dynamic lib 80386 Version 1, parent main, dynamically linked, not stripped
        % ./main ./plugin.so
        parent has called plugin_func() from plugin.so
        plugin_func() has called parent_callback()

    We can also observe this in elfedit plugins on Solaris systems running Solaris 11 Update 1 or newer:

        % file /usr/lib/elfedit/dyn.so
        /usr/lib/elfedit/dyn.so: ELF 32-bit LSB dynamic lib 80386 Version 1, parent elfedit, dynamically linked, not stripped, no debugging information available

    Related Other Work

    The GNU ld has an option named --just-symbols that can be used in a similar manner:

        --just-symbols=filename
            Read symbol names and their addresses from filename, but do not
            relocate it or include it in the output. This allows your output
            file to refer symbolically to absolute locations of memory defined
            in other programs. You may use this option more than once.

    -z parent is a higher level operation aimed specifically at simplifying the construction of high quality plugins. Although it employs the same operation, it differs from --just-symbols in 2 significant ways:

    - There can only be one parent.
    - The parent is recorded in the created object, and can be displayed by 'file', or other similar tools.

    Read the article

  • IIS Logfile Visualization with XNA

    - by BobPalmer
    In my office, I have a wall mounted monitor whose whole purpose in life is to display perfmon stats from our various servers. And on a fairly regular basis, I have folks walk by asking what the lines mean. After providing the requisite explanation about CPU utilization, disk I/O bottlenecks, etc., this is usually followed by some blank stares from the user in question, and a distillation of all of our engineering wizardry down to the phrase 'So when the red line goes up that's bad then?'

    This of course would not do. So I talked to my friends and our network admin about an option to show something more eye-catching and visual, with which we could get, at a glance, a feel for what was up with our site. He initially pointed me to a video showing GLTail and Chipmunk done in Ruby. Realizing this was both awesome, and that I needed an excuse to do something in XNA, I decided to knock out a proof of concept for something very similar, but with a few tweaks.

    Here's a link to a video of the current prototype: http://www.youtube.com/watch?v=jM_PWZbtH2I

    Essentially this app opens up a log file (even an active one) and begins pulling out the lines of text. (Here's a good Code Project link that covers how to do tail reading from an active text file: http://www.codeproject.com/KB/files/tail.aspx). As new data is added, a bubble is generated in the application - a GET statement comes from the left, and a POST from the right. I then run it through a series of expression checkers, and based on the kind of statement and the pattern, a bubble of an appropriate color is generated. For example, if I get a 500, a huge red bubble pops out. Others are based on the part of the system the page is from - i.e. green bubbles are from our claims management subsystem, and blue bubbles are from the pages our scheduling staff use to schedule patients. Others include the purple bubbles for security and login, and yellow bubbles for some miscellaneous pages. The little grey bubbles represent things like images, JS, CSS, etc. - and their small size makes them work like grease to keep the larger page bubbles moving. The app is also smart enough that if it is starting to bog down with handling the physics and interactions, it will suspend new bubbles until enough have dropped off that performance can resume (you can see this slight stuttering in the sample video).

    The net result is that anyone will be able to look up at the wall monitor and instantly get a quick feel for how things are going on the floor. Website slow? You can get a feel for both volume and utilized modules with one glance. Website crashing? Look for a wall of giant red bubbles. No activity at all? Maybe the site is down. Now couple this with utilization within a farm, and cross referenced with a second app showing the same kind of data from your SQL database...

    As for the app itself, it's a Windows XNA project with the code in C#. The physics are handled by the Farseer physics engine for XNA (http://www.codeplex.com/FarseerPhysics), which is just pure goodness. The samples are great, and I had the app up and working in two evenings (half of that was fine tuning, and the other was me coding with a kid in my lap). My next steps include wiring this to SQL (I have some ideas...), and adding a nice configuration module. For example, you could use polygons, etc. to tie to your regex - or more entertaining things like having a little human ragdoll to represent a user login.

    Once that's wrapped up and I have a chance to complete some hardening, I will be releasing the whole thing into the wild as open source. Feel free to ping me if you have any questions! -Bob
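    To make the tail-reading piece concrete, here is a minimal C# sketch of following a growing log file. This is illustrative only - it is not Bob's code, and the class and variable names are invented - but it shows the core trick from the Code Project article: open the file with FileShare.ReadWrite so the reader can coexist with the process that is still writing the log.

        using System;
        using System.IO;
        using System.Threading;

        class LogTail
        {
            // Follow an actively-written log file and print lines appended after startup.
            static void Main(string[] args)
            {
                string path = args[0];
                using (var stream = new FileStream(path, FileMode.Open, FileAccess.Read, FileShare.ReadWrite))
                using (var reader = new StreamReader(stream))
                {
                    stream.Seek(0, SeekOrigin.End);      // skip existing content; start at the tail
                    while (true)
                    {
                        string line = reader.ReadLine();
                        if (line != null)
                            Console.WriteLine(line);     // hand off to regex classifiers / bubble generator
                        else
                            Thread.Sleep(100);           // no new data yet; wait briefly and retry
                    }
                }
            }
        }

    Each line yielded by a loop like this would then be matched against the expression checkers described above to pick a bubble color and size.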

    Read the article

  • Cutting Edge versus Just Average? Your SOA, Got BPM? by Mala Ramakrishnan

    - by JuergenKress
    Service Oriented Architecture (SOA) has completely transformed IT from the time it was introduced well over a decade ago. Organizations have been re-plumbing their infrastructure for reusability, efficiency and gain, and succeeding with it. Best practices have emerged, and people and technology have matured. We have got better at delivering mission critical applications and services on a stable platform. Yet there is this one secret that sets some SOA customers apart from the others: these companies grow and revolutionize their business, and not just transform their IT infrastructure.

    The differences seem subtle to an untrained eye examining these organizations externally. And from within the company, it's a bit like an ant sitting on an elephant, hard to differentiate between the IT trunk and the business tail. What is it that some organizations do differently that makes them succeed beyond SOA? These organizations pull business people more and more into their IT decisions. They insist on understanding process over services. They don't settle easily when bridging business metrics and IT performance. They anguish over business requirements not translating seamlessly and quickly into IT. IT is not just an enabler but a pillar that revolutionizes their business.

    Okay, I'll give it to you. These organizations layer Business Process Management (BPM) on top of their SOA. Think about the lifeblood business processes in your own organization. If you are FedEx, this would be shipping and handling. If you are Stanford Hospital, this would be patient case-management: from on-boarding through discharge and follow-up care. If you are Wells Fargo, this would be loan origination. Now think about how your SOA ties into your business process. Can you decouple your business processes from your SOA so that the two can transform and change independent of each other? Can you forecast success metrics for your business process, make the changes across the board, and then look back over different periods of time to see if you are on track? Are your critical business processes entrenched in the minds of a few experts in your organization, or does everyone from the receptionist to your enterprise architect to your CEO understand what they can do to revolutionize them?

    Business Process Management is a superset of SOA. It is the process of getting your business to articulate business value and metrics, and have it implemented in IT without any loss in translation. It is the act of extracting the business processes from the minds of experts and the IT applications in your organization, and valuing them as assets for performance and gain. BPM is stepping outside your SOA and moving your organization to the next level of innovation. Oracle is accelerating BPM across industries with the latest launch. Join us to understand how BPM can give your organization a cutting edge over your SOA.

    SOA & BPM Partner Community: For regular information on Oracle SOA Suite become a member of the SOA & BPM Partner Community; for registration please visit www.oracle.com/goto/emea/soa (OPN account required). If you need support with your account please contact the Oracle Partner Business Center.

    Read the article

  • How to future-proof my touch-enabled web application?

    - by Rice Flour Cookies
    I recently went out and purchased a touch-screen monitor with the intention of learning how to program touch-enabled web applications. I had reviewed the MDN documentation about touch events, as well as the W3C specification.

    To get started, I wrote a very short test page with two event handlers: one for the mousedown event and one for the touchstart event. I fired up the web page in IE and touched the document and found that only the mousedown event fired. I saw the same behavior with Firefox, only to find out later that Firefox can be set to enable the touchstart event using about:config. When touch events are enabled, the touchstart event fires, but not mousedown. Chrome was even stranger: it fired both events when I touched the document: touchstart and mousedown, in that order. Only on my Android phone does it appear to be the case that only the touchstart event fires when I touch the document.

    I did a Google search and ended up on two interesting pages. First, I found the page on CanIUse for touch events: http://caniuse.com/#feat=touch

    CanIUse clearly indicates that IE does not support touch events as of this writing, and Firefox only supports touch events if they are manually enabled. Furthermore, all four browsers I mentioned treat the touch in a completely different way. It boils down to this:

    IE: simulated mouse click
    Firefox with touch disabled: simulated mouse click
    Firefox with touch enabled: touch event
    Chrome: touch event and simulated mouse click
    Android: touch event

    What is more frustrating is that Google also found a Microsoft page called RethinkIE. RethinkIE brags about touch support in IE; as a matter of fact, one of their slogans is "Touch the Web". It links to a number of touch-based applications. I followed some of these links, and as best I can tell, it's just as CanIUse described: no proper touch support, just simulated mouse clicks.

    The MDN (https://developer.mozilla.org/en-US/docs/Web/API/Touch) and W3C (http://www.w3.org/TR/touch-events/) documentation describe a far richer interface; an interface that doesn't just simulate mouse clicks, but keeps track of multiple touches at once, the contact area, rotation, and force of each touch, and unique identifiers for each touch so that they can be tracked individually. I don't see how simulated mouse clicks can ever match the above-described functionality, which, once again, is part of the W3C specification, although it is listed as "non-normative", meaning that a browser can claim to be standards-compliant without implementing it. (Why bother making it part of the standard, then?)

    What motivated my research is that I've written an HTML5 application that doesn't work on Android because Android doesn't fire mouse events. I'm now afraid to try to implement touch for my application because the browsers all behave so differently. I imagine that at some time in the future the browsers might start handling touch similarly, but how can I tell how they might be handled in the future, short of writing code to handle the behavior of each individual browser? Is it possible to write code today that will work with touch-enabled browsers for years to come? If so, how?

    Read the article

  • top tweets WebLogic Partner Community – June 2013

    - by JuergenKress
    Send us your tweets @wlscommunity #WebLogicCommunity and follow us on twitter http://twitter.com/wlscommunity. Please feel free to send us your news!

    Lucas Jellema: Getting started with Java EE 7: The Tutorial http://docs.oracle.com/javaee/7/tutorial/doc/home.htm
    Simon Haslam: I'm looking forward to starting a "WLS on ODA" proof of concept - some ideas for testing: http://www.veriton.co.uk/roller/fmw/entry/virtualised_oda_proof_of_concept
    Frank Munz: It's not too late - I just submitted two presentations about #OracleWebLogic and #Coherence for the @DOAGeV conference in Nürnberg. Did you?
    Arun Gupta: Tyrus 1.0 User Guide: https://tyrus.java.net/documentation/1.0/user-guide.html #WebSocket #JavaEE7 #GlassFish
    Arun Gupta: #JavaEE7 Launch Webinar Technical Breakout replays on Youtube: http://bit.ly/12uUicT JSON 1.0, EJB 3.2, Batch 1.0 and more coming!
    OracleBlogs: FREE Virtual Developer Day: Java SE, Java EE, Java Embedded on Jun 19th and 25th http://ow.ly/2xBkwV
    Markus Eisele: #Oracle #JavaSE Critical Patch Update Pre-Release Announcement - June 2013 http://www.oracle.com/technetwork/topics/security/javacpujun2013-1899847.html #security
    OracleSupport_WLS: Simple Custom #JMX MBeans with #WebLogic 12c and #Spring http://pub.vitrue.com/3kEr
    Oracle Technet: Building Java HTML5/WebSocket Applications with JSR 356 - 4pm - Grand Ballroom Salon A/B #qconnewyork
    WebLogic Community: Oracle Fusion Middleware (OFM) 11g (11.1.1.7) Starter Kit available & Customizable Demos http://wp.me/p1LMIb-BK
    Oracle Technet: #Java EE 7: Moving Java Forward for the Enterprise | @java http://pub.vitrue.com/tHiM
    OTNArchBeat: Oracle Forms to ADF Modernization Reference - Convero (AMEC) Project | @AndrejusB http://pub.vitrue.com/lZPR
    WebLogic Community: ExaLogic In Memory Applications & Whitepapers Building Large Scale E-Commerce Platforms & Rethink the Entire Application Lifecycle…
    WebLogic Community: Coherence YouTube videos http://wp.me/p1LMIb-BG
    Arun Gupta: WARNING: Next 2 days are going to be loaded with #JavaEE7 launch related tweets, and offline next week!
    JDeveloper & ADF: Using Contextual Event in Oracle ADF http://dlvr.it/3Vpybr
    Oracle WebLogic: Check out new blog on #hybrid_cloud & why choice is important http://bit.ly/1b1QGhL
    Andrejus Baranovskis: Oracle Forms to ADF Modernization Reference - Convero (AMEC) Project http://fb.me/1M9iWNmAw
    WebLogic Community: WebLogic on Oracle Database Appliance by Frances Zhao http://wp.me/p1LMIb-BE
    OTNArchBeat: New: A-Team Chronicles >> A great resource for technical content covering Oracle Fusion Middleware / Fusion Apps http://pub.vitrue.com/qbzS
    Oracle for Partners: Take Java To The Edge: Java Virtual Developer Day – June 19 & June 25 http://bit.ly/19fGlSX
    Adam Bien: Looking forward to tomorrow's #javaee7 + #angularjs #html5 marriage at #jpoint. See you there: http://www.jpoint.nl/meetingpoint/editie-2013#sessie-1
    shay shmeltzer: There is a new patch for the #Oracle #ADF Mobile extension - use help->check for updates to get it.
    Frank Munz: Not using @OracleWebLogic 12c yet? Australia does! Reviews from my @AUSOUG workshops in Brisbane, Adelaide and Perth. http://goo.gl/BfVc4
    Arun Gupta: WebSocket, Server-Sent Events, #JavaEE7 sessions accepted at #jaxlondon ... that's gonna be at least third trip to London this year!
    WebLogic Community: SPARC T5-8 Delivers Best Single System SPECjEnterprise2010 Benchmark running WebLogic 12c http://wp.me/p1LMIb-BC
    WebLogic Community: The Ultimate Java EE Event - 16 power workshops covering all the important Java EE topics http://wp.me/p1LMIb-BY
    Oracle WebLogic: New blog post: Using try-with-resources with JDBC objects http://ow.ly/2xryb5
    JDeveloper & ADF: Switching Lists of Values http://dlvr.it/3PbCkw
    WebLogic Community: YouTube channel Learning Oracle's ADF http://wp.me/p1LMIb-zA
    Markus Eisele: [GER] RT @heisedc: #Java development in #Oracle's Public #Cloud http://heise.de/-1866388/ftw
    OracleBlogs: Coherence Incubator & Community Source Code & Release Documentation http://ow.ly/2x2fXK
    chriscmuir: New blog post: Migrating ADF Mobile apps from 1.0 to 1.1 https://blogs.oracle.com/onesizedoesntfitall/entry/migrating_adf_mobile_apps_from
    JDeveloper & ADF: ADF JavaScript Partitioning for Performance http://dlvr.it/3Trw15
    WebLogic Community: WebLogic Server Security Workshop June 27th 2013 Germany http://wp.me/p1LMIb-C7
    WebLogic Community: Oracle Optimized Solution for WebLogic Server 12c http://wp.me/p1LMIb-BA
    WebLogic Community: Virtualize and Run Your Forms Applications in the Cloud - Now On Demand http://wp.me/p1LMIb-By
    Lucas Jellema: Interesting presentation on various aspects of end user assistance in Fusion Applications (ADF based): http://www.slideshare.net/uobroin/ouag-ireland-final2012slideshare
    Adam Bien: Summer Of JavaEE Workshops And Gigs: Free Hacking night: 11.06.2013, Utrecht. JavaEE 7 Meets HTML 5 and AngularJ... http://bit.ly/11XRjt4
    WebLogic Community: Real World ADF Design & Architecture Principles Trainings Germany, Poland & Portugal http://wp.me/p1LMIb-Bw
    Oracle for Partners: JAVA Virtual Developer Day – June 19 & June 25 - Watch educational content and engage with Oracle experts online https://oracle.6connex.com/portal/java2013/login/?langR=en_US&mcc=OPNNSL
    Markus Eisele: [blog] Java EE 7 is final. Thoughts, Insights and further Pointers. http://dlvr.it/3SrxnB #javaee7
    WebLogic Community: Oracle takes the top spot for market share in the Application Server Market Segment for 2012 http://wp.me/p1LMIb-Bu
    OTNArchBeat: Oracle ACE Director @LucasJellema is "very pleasantly surprised" with the new ADF Academy. http://pub.vitrue.com/8fad
    chriscmuir: Sell out crowd for our ADF architecture course in Munich #adfarch pic.twitter.com/zhNtQJ25JV
    Markus Eisele: [blog] New German Article: Java 7 Update 21 Security Improvements http://dlvr.it/3Sc8V9 #java #heise #security
    Markus Eisele: [blog] New German Article: Oracle Java Cloud Service http://dlvr.it/3Sc20V #java #heise #OracleCloud
    OracleSupport_WLS: Troubleshooting and Tuning with #WebLogic - Developer Webcast now available on #Youtube http://pub.vitrue.com/GSOy
    Andrejus Baranovskis: New ADF Academy - Impressive Concept for ADF eLearning http://fb.me/2kYSMKKR5
    OracleSupport_WLS: Removing a #weblogic domain properly http://pub.vitrue.com/ZndM
    WebLogic Community: WebLogic Partner Community Newsletter May 2013 http://wp.me/p1LMIb-Bp
    Oracle WebLogic: Blog: Troubleshooting tools Part 3 - Heap Dumps #Oracle #WebLogic Read the series http://bit.ly/14CQSD2
    Oracle WebLogic: Blog: #WebLogic_Server on #Oracle_Database_Appliance - How to conjure a WebLogic cluster - http://bit.ly/11fciHA
    Oracle WebLogic: Check out new cool features in Oracle Traffic Director - http://bit.ly/11fbz9h
    WebLogic Community: Additional new material WebLogic Community April 2013 http://wp.me/p1LMIb-zM
    WebLogic Community: New WebLogic references - we want yours http://wp.me/p1LMIb-zK
    OracleSupport_WLS: #Weblogic Session Replication jsession ID and F5 http://pub.vitrue.com/dWZp
    OracleBlogs: top tweets WebLogic Partner Community May 2013 http://ow.ly/2xc8M5
    WebLogic Community: Welcome to the Spring edition of Oracle Scene http://wp.me/p1LMIb-zE
    Andreas Koop: [blog post] ADF: Static Values View Object does not show any values (solved) http://bit.ly/14RDZ8p
    OracleBlogs: ADF Mobile - accessing the SQLite database http://ow.ly/2x85r0
    OracleSupport_WLS: Youtube channel - Troubleshooting and Tuning with #WebLogic. #JRockit #SOAP #JRF http://pub.vitrue.com/qMxu
    Arun Gupta: Next Java Magazine is all about #JavaEE7... productivity, HTML5, WebSocket, Batch & more. Subscribe http://ow.ly/lkD5D (@Oraclejavamag)
    Oracle WebLogic: How to configure a #WebLogic cluster on #Oracle_Database_Appliance? It's easy, read how. http://bit.ly/11fciHA
    Oracle WebLogic: Blog: How to use Heap Dumps to troubleshoot memory leaks - #Oracle #WebLogic_Server http://bit.ly/14CQSD2
    OracleBlogs: Over 100 Images To Be Added to NetBeans Platform Showcase http://ow.ly/2x7Fvp
    Lucas Jellema: A new release of the ADF EMG Task Flow Tester is now available for both JDeveloper 11 R1 and R2. https://java.net/projects/adf-task-flow-tester/pages/GettingStarted

    WebLogic Partner Community: For regular information become a member of the WebLogic Partner Community; please visit http://www.oracle.com/partners/goto/wls-emea (OPN account required). If you need support with your account please contact the Oracle Partner Business Center.

    Read the article

  • Visual Studio Exceptions dialogs

    - by Daniel Moth
    Previously I covered step 1 of live debugging: start and attach. Once the debugger is attached, you want to go to step 2 of live debugging, which is to break. One way to break under the debugger is to do nothing, and just wait for an exception to occur in your code. This is true for all types of code that you debug in Visual Studio, so let's consider the following piece of C# code:

         3: static void Main()
         4: {
         5:     try
         6:     {
         7:         int i = 0;
         8:         int r = 5 / i;
         9:     }
        10:     catch (System.DivideByZeroException) { /* gulp. sue me. */ }
        11:     System.Console.ReadLine();
        12: }

    If you run this under the debugger, do you expect an exception on line 8? It is a trick question: you have to know whether I have configured the debugger to break when exceptions are thrown (first-chance exceptions) or only when they are unhandled. The place you do that is the Exceptions dialog, which is accessible from the Debug->Exceptions menu and on my installation looks like this: Note that I have checked all CLR exceptions. I could have expanded (like shown for the C++ case in my screenshot) and selected specific exceptions. To read more about this dialog, please read the corresponding Exception Handling debugging MSDN topic and all its subtopics.

    So, for the code above, the debugger will break execution due to the thrown exception (exactly as if the try..catch was not there), and I see the following Exception Thrown dialog. Note the following:

    - I can hit continue (or hit break and then later continue) and the program will continue fine, since I have a catch handler.
    - If this was an unhandled exception, then that is what the dialog would say (instead of first chance exception), and continuing would crash the app.
    - That hyperlinked text ("Open Exception Settings") opens the Exceptions dialog I described further up.
    - The coolest thing to note is the checkbox - this is new in this latest release of Visual Studio: it is a shortcut to the checkbox in the Exceptions dialog, so you don't have to open it to change this setting for this specific exception - you can toggle that option right from this dialog.

    Finally, if you try the code above on your system, you may observe a couple of differences from my screenshots. The first is that you may have an additional column of checkboxes in the Exceptions dialog. The second is that the last dialog I shared may look different to you. It all depends on the Debug->Options settings, and the two relevant settings are these: the Exception Assistant is what configures the look of the UI when the debugger wants to indicate an exception to you, and the Just My Code setting controls the extra column in the Exceptions dialog. You can read more about those options on MSDN: How to break on User-Unhandled exceptions (plus Gregg's post) and Exception Assistant.

    Before I leave you to go play with this stuff a bit more, please note that this level of debugging is now available for JavaScript too, and if you are looking at the Exceptions dialog and wondering what the "GPU Memory Access Exceptions" node is about, stay tuned on the C++ AMP blog ;-)

    Comments about this post by Daniel Moth welcome at the original blog.
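    As a side note, the "first chance" concept that the dialog surfaces can also be observed programmatically. Here is a small hedged C# sketch - not from Daniel's post - that uses the .NET 4 AppDomain.FirstChanceException event to watch the same divide-by-zero fire as a first-chance exception and then get handled:

        using System;
        using System.Runtime.ExceptionServices;

        class FirstChanceDemo
        {
            static void Main()
            {
                // Raised for every thrown exception, handled or not - the "first chance".
                AppDomain.CurrentDomain.FirstChanceException +=
                    (sender, e) => Console.WriteLine("first chance: " + e.Exception.GetType().Name);

                try
                {
                    int i = 0;
                    int r = 5 / i;   // same divide-by-zero as the sample above
                }
                catch (DivideByZeroException) { /* handled, so execution continues */ }

                Console.WriteLine("still running - the exception was handled");
            }
        }

    Running this prints the first-chance notification followed by the final message, mirroring what the debugger's Exception Thrown dialog reports when you hit Continue.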

    Read the article

  • Does Test Driven Development (TDD) improve Quality and Correctness? (Part 1)

    - by David V. Corbin
    Since the dawn of the computer age, various methodologies have been introduced to improve quality and reduce cost. In this posting, I will be sharing my experiences with Test Driven Development, both its benefits and limitations.

    To start this topic, we need to agree on what TDD is. The first step is to define each of the three words as used in this context:

    Test - An item or action which measures something in some quantifiable form.
    Driven - The primary motivation or focus of a series of activities (process).
    Development - All phases of a software project/product, from concept through delivery.

    The above are very simple definitions that result in the following: "TDD is a process where the primary focus is on measuring and quantifying all aspects of the creation of a (software) product."

    There are many places where TDD is used outside of software development, even though it is not known by this name. Consider the (conventional) education process that most of us grew up on. The focus was to get the best grades as measured by different tests. Many of these tests measured rote memorization and not understanding of the subject matter. The result of this is that many people graduated with high scores but without "quality and correctness" in their ability to utilize the subject matter (of course, the flip side is true where certain people DID understand the material but were not very good at taking this type of test).

    Returning to software development, let us look at some common scenarios. While these items are generally applicable regardless of platform, language and tools, the remainder of this post will utilize Microsoft Visual Studio and Team Foundation Server (TFS) for examples.

    It should be realized that everyone does at least some aspect of TDD. At the most rudimentary level, getting a program to compile involves a "pass/fail" measurement (is the syntax valid) that drives the ability to proceed further (run the program). Other developers may create "Unit Tests" in the belief that having a test for every method/property of a class and good code coverage is the goal of TDD. These items may be helpful and even important, but really only address a small aspect of the overall effort.

    To see TDD in a bigger view, let's identify the various activities that are part of the Software Development LifeCycle. These are going to be presented in a Waterfall style for simplicity, but each item also occurs within iterative methodologies such as Agile/Scrum. The key ones here are:

    Requirements Gathering
    Architecture
    Design
    Implementation
    Quality Assurance

    Can each of these items be subjected to a process which establishes metrics (quantified metrics) that reflect both the quality and correctness of each item? It should be clear that conventional Unit Tests do not apply to all of these items; at best they can verify that a local aspect (e.g. a Class/Method) of implementation matches the (test writer's perspective of) the appropriate design document. So what can we do?

    For each area, the goal is to create tests that are quantifiable and durable. The ability to quantify the measurements (beyond a simple pass/fail) is critical to tracking progress (eventually measuring the level of success that has been achieved) and for providing clear information on what items need to be addressed (along with the appropriate time to address them - in varying levels of detail). Durability is important so that the test can be reapplied (ideally in an automated fashion) over the entire cycle.

    Returning for a moment back to our "education example", one must also be careful of how the tests are organized and how the measurements are taken. If a test is in a multiple choice format, there is a significant statistical probability that a correct answer might be the result of a random guess. Also, in many situations, having the student simply provide a final answer can obscure many important elements. For example, on a math test, having the student simply provide a numeric answer (rather than showing the methodology) may result in a complete mismatch between the process and the result. It is hard to determine which is worse: the student who makes a simple arithmetic error at one step of a long process (resulting in a wrong answer), or the student who (without providing the "workflow") uses a completely invalid approach yet still comes up with the right number.

    The "Wrong Process"/"Right Answer" is probably the single biggest problem in software development. Even very simple items can suffer from this. As an example, consider the following code for a "straight line" calculation. Is it correct? (for integral points)

        int Solve(int m, int b, int x) { return m * x + b; }

    Most people would respond "Yes". But let's take the question one step further... Is it correct for all possible values of m, b, x??? (no fair if you cheated by being focused on the bolded text!)

    Without additional information regarding constraints on "the possible values of m, b, x", the answer must be NO: there is the risk of overflow/wraparound that will produce an incorrect result! To properly answer this question (i.e. test the code), one MUST be able to backtrack from the implementation through the design and architecture all the way back to the requirements. And the requirement itself must be tested against the stakeholder(s). It is only when the bounding conditions are defined that it is possible to determine if the code is "Correct" and has "Quality". Yet how many of us (myself included) have written such code without even thinking about it. In many cases we (think we) "know" what the bounds are, and that the code will be correct. As we all know, requirements change, "code reuse" causes implementations to be applied to different scenarios, etc. This leads directly to the types of system failures that plague so many projects. (A short sketch of this exact overflow trap follows at the end of this post.)

    This approach to TDD is much more holistic than ones which start by focusing on the details. The fundamental concepts still apply:

    Each item should be tested.
    The test should be defined/implemented before (or concurrent with) the definition/implementation of the actual item.

    We also add concepts that expand the scope and alter the style by recognizing:

    There are many things besides "lines of code" that benefit from testing (measuring/evaluating in a formal way).
    Correctness and Quality cannot be solely measured by "correct results".

    In future parts, we will examine in greater detail some of the techniques that can be applied to each of these areas....
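    To make the overflow point concrete, here is a short hedged C# sketch (illustrative only, not from the article) showing how the naive Solve() quietly produces a wrong answer for out-of-bounds inputs, while a checked version at least turns the wrong answer into a visible failure:

        using System;

        class StraightLine
        {
            // Naive version: silently wraps around on overflow.
            static int Solve(int m, int b, int x) { return m * x + b; }

            // Defensive version: makes the bounding assumption explicit.
            static int SolveChecked(int m, int b, int x)
            {
                checked { return m * x + b; }   // throws OverflowException instead of wrapping
            }

            static void Main()
            {
                Console.WriteLine(Solve(2, 1, 3));             // 7, as expected
                Console.WriteLine(Solve(int.MaxValue, 0, 2));  // -2: wrong answer, no error
                try
                {
                    SolveChecked(int.MaxValue, 0, 2);
                }
                catch (OverflowException)
                {
                    Console.WriteLine("overflow detected");    // the failure is now visible
                }
            }
        }

    A test that only asserted Solve(2, 1, 3) == 7 would pass both versions; only a test derived from the stated bounds of m, b and x can tell them apart, which is exactly the backtracking-to-requirements argument made above.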

    Read the article

  • MySQL Cluster 7.3 Labs Release – Foreign Keys Are In!

    - by Mat Keep
    Summary (aka TL;DR): Support for Foreign Key constraints has been one of the most requested feature enhancements for MySQL Cluster. We are therefore extremely excited to announce that Foreign Keys are part of the first Labs Release of MySQL Cluster 7.3 - available for download, evaluation and feedback now! (Select the mysql-cluster-7.3-labs-June-2012 build.)

    In this blog, I will attempt to discuss the design rationale, implementation, configuration and steps to get started in evaluating the first MySQL Cluster 7.3 Labs Release.

    Pace of Innovation

    It was only a couple of months ago that we announced the General Availability (GA) of MySQL Cluster 7.2, delivering 1 billion Queries per Minute, with 70x higher cross-shard JOIN performance, a Memcached NoSQL key-value API and cross-data center replication. This release has been a huge hit, with downloads and deployments quickly reaching record levels. The announcement of the first MySQL Cluster 7.3 Early Access labs release at today's MySQL Innovation Day event demonstrates the continued pace in Cluster development, and provides an opportunity for the community to evaluate and give feedback on the new features they want to see.

    What's the Plan for MySQL Cluster 7.3?

    Well, Foreign Keys, as you may have gathered by now (!), and this is the focus of this first Labs Release. As with MySQL Cluster 7.2, we plan to publish a series of preview releases for 7.3 that will incrementally add new candidate features for a final GA release (subject to the usual safe harbor statement below*), including:

    - New NoSQL APIs;
    - Features to automate the configuration and provisioning of multi-node clusters, on premise or in the cloud;
    - Performance and scalability enhancements;
    - Taking advantage of features in the latest MySQL 5.x Server GA.

    Design Rationale

    MySQL Cluster is designed as a "Not-Only-SQL" database. It combines attributes that enable users to blend the best of both relational and NoSQL technologies into solutions that deliver web scalability with 99.999% availability and real-time performance, including:

    - Concurrent NoSQL and SQL access to the database;
    - Auto-sharding with simple scale-out across commodity hardware;
    - Multi-master replication with failover and recovery both within and across data centers;
    - Shared-nothing architecture with no single point of failure;
    - Online scaling and schema changes;
    - ACID compliance and support for complex queries, across shards.

    Native support for Foreign Key constraints enables users to extend the benefits of MySQL Cluster into a broader range of use-cases, including:

    - Packaged applications in areas such as eCommerce and Web Content Management that prescribe databases with Foreign Key support.
    - In-house developments benefiting from Foreign Key constraints to simplify data models and eliminate the additional application logic needed to maintain data consistency and integrity between tables.

    Implementation

    The Foreign Key functionality is implemented directly within MySQL Cluster's data nodes, allowing any client API accessing the cluster to benefit from it - whether using SQL or one of the NoSQL interfaces (Memcached, C++, Java, JPA or HTTP/REST). The core referential actions defined in the SQL:2003 standard are implemented:

    CASCADE
    RESTRICT
    NO ACTION
    SET NULL

    In addition, the MySQL Cluster implementation supports the online adding and dropping of Foreign Keys, ensuring the Cluster continues to serve both read and write requests during the operation.

    An important difference to note compared with the Foreign Key implementation in InnoDB is that MySQL Cluster does not support the updating of Primary Keys from within the data nodes themselves - instead the UPDATE is emulated with a DELETE followed by an INSERT operation. Therefore an UPDATE operation will return an error if the parent reference is using a Primary Key, unless using the CASCADE action, in which case the delete operation will result in the corresponding rows in the child table being deleted. The Engineering team plans to change this behavior in a subsequent preview release. Also note that when using InnoDB, "NO ACTION" is identical to "RESTRICT". In the case of MySQL Cluster, "NO ACTION" means "deferred check": the constraint is checked before commit, allowing user-defined triggers to automatically make changes in order to satisfy the Foreign Key constraints.

    Configuration

    There is nothing special you have to do here - Foreign Key constraint checking is enabled by default. If you intend to migrate existing tables from another database or storage engine, for example from InnoDB, there are a couple of best practices to observe:

    1. Analyze the structure of the Foreign Key graph and run the ALTER TABLE ENGINE=NDB statements in the correct sequence to ensure constraints are enforced.
    2. Alternatively, drop the Foreign Key constraints prior to the import process and then recreate them when complete.

    Getting Started

    Read this blog for a demonstration of using Foreign Keys with MySQL Cluster. You can download the MySQL Cluster 7.3 Labs Release with Foreign Keys today (select the mysql-cluster-7.3-labs-June-2012 build). If you are new to MySQL Cluster, the Getting Started guide will walk you through installing an evaluation cluster on a single host (these guides reflect MySQL Cluster 7.2, but apply equally well to 7.3). Post any questions to the MySQL Cluster forum, where our Engineering team will attempt to assist you. Post any bugs you find to the MySQL bug tracking system (select MySQL Cluster from the Category drop-down menu). And if you have any feedback, please post it to the Comments section of this blog. (A short client-side sketch follows at the end of this entry.)

    Summary

    MySQL Cluster 7.2 is the GA, production-ready release of MySQL Cluster. This first Labs Release of MySQL Cluster 7.3 gives you the opportunity to preview and evaluate future developments in the MySQL Cluster database, and we are very excited to be able to share that with you. Let us know how you get along with MySQL Cluster 7.3, and what other features you want to see in future releases.

    * Safe Harbor Statement

    This information is intended to outline our general product direction. It is intended for information purposes only, and may not be incorporated into any contract. It is not a commitment to deliver any material, code, or functionality, and should not be relied upon in making purchasing decisions.
The development, release, and timing of any features or functionality described for Oracle’s products remains at the sole discretion of Oracle.
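    For readers who want to try the feature from client code before reading further, here is a minimal hedged sketch - table names, credentials and server address are invented for illustration - using Connector/NET to create an NDB parent/child pair and exercise the ON DELETE CASCADE action described above:

        using System;
        using MySql.Data.MySqlClient;   // Connector/NET

        class ForeignKeyDemo
        {
            static void Run(MySqlConnection conn, string sql)
            {
                using (var cmd = new MySqlCommand(sql, conn))
                    cmd.ExecuteNonQuery();
            }

            static void Main()
            {
                // Point this at a SQL node of a running MySQL Cluster 7.3 installation.
                var connStr = "server=localhost;database=test;uid=demo;pwd=demo";
                using (var conn = new MySqlConnection(connStr))
                {
                    conn.Open();
                    Run(conn, "CREATE TABLE parent (id INT PRIMARY KEY) ENGINE=NDB");
                    Run(conn, "CREATE TABLE child (id INT PRIMARY KEY, parent_id INT, " +
                              "FOREIGN KEY (parent_id) REFERENCES parent(id) " +
                              "ON DELETE CASCADE) ENGINE=NDB");
                    Run(conn, "INSERT INTO parent VALUES (1)");
                    Run(conn, "INSERT INTO child VALUES (10, 1)");
                    // Deleting the parent row should cascade and remove the child row too.
                    Run(conn, "DELETE FROM parent WHERE id = 1");
                    Console.WriteLine("cascade delete completed");
                }
            }
        }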

    Read the article

  • Silverlight Cream for December 27, 2010 -- #1016

    - by Dave Campbell
    In this Issue: Sacha Barber, David Anson, Jesse Liberty, Shawn Wildermuth, Jeff Blankenburg(-2-), Martin Krüger, Ryan Alford(-2-), Michael Crump, Peter Kuhn(-2-).

    Above the Fold:
    Silverlight: "Part 4 of 4 : Tips/Tricks for Silverlight Developers" Michael Crump
    WP7: "Navigating with the WebBrowser Control on WP7" Shawn Wildermuth

    Shoutouts: John Papa posted that the open call is up for MIX11 presenters: Your Chance to Speak at MIX11

    From SilverlightCream.com:

    Aspect Examples (INotifyPropertyChanged via aspects) - If you're wanting to read a really in-depth discussion of aspect oriented programming (AOP), check out the article Sacha Barber has up at CodeProject discussing INPC via aspects.

    How to: Localize a Windows Phone 7 application that uses the Windows Phone Toolkit into different languages - David Anson has a nice tutorial up on localizing your WP7 app, including using the Toolkit and controls such as DatePicker... remember we're talking localized Windows Phone.

    From Scratch – Animation Part 1 - Jesse Liberty continues in his 'From Scratch' series with this first post on WP7 Animation... good stuff, Jesse!

    Navigating with the WebBrowser Control on WP7 - In building his latest WP7 app, Shawn Wildermuth ran into some obscure errors surrounding browser.InvokeScript. He lists the simple solution and his back, refresh, and forward button functionality for us.

    What I Learned In WP7 – Issue #7 - In the time I was out, Jeff Blankenburg got ahead of me, so I'll catch up 2 at a time... in this number 7 he discusses making videos of your apps, links to the Learn Visual Studio series, and his new website.

    What I Learned In WP7 – Issue #8 - Jeff Blankenburg's number 8 is a very cool tip on using the return key on the keyboard to handle the loss of focus and handling of text typed into a textbox.

    Resize of a grid by using thumb controls - Martin Krüger has a sample in the Expression Gallery of a grid that is resizable by using 'thumb controls' at the 4 corners... all source, so check it out!

    Silverlight 4 – Productivity Power Tools and EF4 - Ryan Alford found a very interesting bug associated with EF4 and the Productivity Power Tools, and the way to get out of it is just weird as well.

    Silverlight 4 – Toolkit and Theming - Ryan Alford also had a problem adding a theme from the Toolkit, and what all you might have to do to get around this one....

    Part 4 of 4 : Tips/Tricks for Silverlight Developers - Michael Crump has part 4 of his series on Silverlight Development tips and tricks. This is numbers 16 through 20 and covers topics such as Version information, Using Lambdas, Specifying a development port, Disabling ChildWindow Close button, and XAML cleanup.

    The XML content importer and Windows Phone 7 - Peter Kuhn wanted to use the XML content importer with a WP7 app and ran into problems implementing the process and a lack of documentation as well... he pounded through it all and has a class he's sharing for loading sounds via XML file settings.

    WP7 snippet: analyzing the hyperlink button style - In a second post, Peter Kuhn responds to a forum discussion about the styles for the hyperlink button in WP7 and why they're different than SL4... and styles-to-go to get all the hyperlink goodness you want... wrapped text, or even non-text content.

    Stay in the 'Light!

    Read the article

  • Microsoft Business Intelligence Seminar 2011

    - by DavidWimbush
    I was lucky enough to attend the maiden presentation of this at Microsoft Reading yesterday. It was pretty gripping stuff, not only because of what was said but also because of what could only be hinted at. Here's what I took away from the day. (Disclaimer: I'm not a BI guru, just a reasonably experienced BI developer, so I may have misunderstood or misinterpreted a few things, particularly when so much of the talk was about the vision and subtle hints of what is coming. Please comment if you think I've got anything wrong. I'm also not going to even try to cover Master Data Services, as I struggled to imagine how you would actually use it.)

    I was a bit worried when I learned that the whole day was going to be presented by one guy, but Rafal Lukawiecki is a very engaging speaker. He's going to be presenting this about 20 times around the world over the coming months. If you get a chance to hear him speak, I say go for it. No doubt some of the hints will become clearer as Denali gets closer to RTM.

    Firstly, things are definitely happening in the SQL Server Reporting and BI world. Traditionally IT would build a data warehouse, then cubes on top of that, and then publish them in a structured and controlled way. But, just as with many IT projects in general, by the time it's finished the business has moved on and the system no longer meets their requirements. This is not sustainable, and something more agile is needed, but there has to be some control. Apparently we're going to be hearing the catchphrase 'Balancing agility with control' a lot.

    More users want more access to more data. Can they define what they want? Of course not, but they'll recognise it when they see it. It's estimated that only 28% of potential BI users have meaningful access to the data they need, so there is a real pent-up demand. The answer looks like: give them some self-service tools so they can experiment and see what works, and then IT can help to support the results.

    It's estimated that 32% of Excel users are comfortable with its analysis tools such as pivot tables. It's the power user's preferred tool. Why fight it? That's why PowerPivot is an Excel add-in, and that's why they released a Data Mining add-in for it as well.

    It does appear that the strategy is going to be to use Reporting Services (in SharePoint mode), PowerPivot, and possibly something new (smiles and hints but no details) to create reports and explore data. Everything will be published and managed in SharePoint, which gives users the ability to mash up, share and socialise what they've found out. SharePoint also gives IT tools to understand what people are looking at and where to concentrate effort. If PowerPivot report X becomes widely used, it's time to check that it shows what they think it does and perhaps get it a bit more under central control.

    There was more SharePoint detail that went slightly over my head regarding where Excel Services and Excel Web Application fit in, the differences between them, and the suggestion that it is likely they will one day become one (but not in the immediate future).

    That basic pattern is set to be expanded upon by further exploiting Vertipaq (the columnar indexing engine that enables PowerPivot to store and process a lot of data fast and in a small memory footprint) to provide scalability 'from the desktop to the data centre', and some yet-to-be-detailed advances in 'frictionless deployment' (part of which is about making the difference between local and the cloud pretty much irrelevant).

    It looks like Excel is becoming Microsoft's primary BI client. It already has:

    - the ability to consume cubes
    - strong visualisation tools
    - slicers (which are part of Excel, not PowerPivot)
    - a data mining add-in
    - PowerPivot

    A major hurdle for self-service BI is presenting the data in a consumable format. You can't just give users PowerPivot and a server with a copy of the OLTP database(s). Building cubes is labour intensive and doesn't always give the user what they need. This is where the BI Semantic Model (BISM) comes in. I gather it's a layer of metadata you define that can combine multiple data sources (and types of data source) into a clear 'interface' that users can work with. It comes with a new query language called DAX. SSAS cubes are unlikely to go away overnight because, with their pre-calculated results, they are still the most efficient way to work with really big data sets.

    A few other random titbits that came up:

    - Reporting Services is going to get some good new stuff in Denali. Keep an eye on www.projectbotticelli.com for the slides. You can also view last year's seminar sessions, which covered a lot of the same ground as far as the overall strategy is concerned. They plan to add more material as Denali's features are publicly exposed.
    - Check out the PASS keynote address for a showing of Yahoo's SQL BI servers. Apparently they wheeled the rack out on stage still plugged in and running!
    - Check out the Excel 2010 Data Mining Add-Ins. 32 bit only at present, but 64 bit is on the way.
    - There are lots of data sets, many of them free, at the Windows Azure Marketplace Data Market (where you can also get ESRI shape files).
    - If you haven't already seen it, have a look at the Silverlight Pivot Viewer (http://weblogs.asp.net/scottgu/archive/2010/06/29/silverlight-pivotviewer-now-available.aspx).
    - The Bing Maps Data Connector is worth a look if you're into spatial stuff (http://www.bing.com/community/site_blogs/b/maps/archive/2010/07/13/data-connector-sql-server-2008-spatial-amp-bing-maps.aspx).

    Read the article

  • Guessing Excel Data Types

    - by AjarnMark
    Note to Self

    HKEY_LOCAL_MACHINE\SOFTWARE\Microsoft\Jet\4.0\Engines\Excel: TypeGuessRows = 0 means scan everything.

    Note to Others

    About 10 years ago I stumbled across this bit of information just when I needed it, and it saved my project. Then a few years later, when it would have been nice (but not critical), I could not find it again anywhere. Well, now I have stumbled across it again, and to preserve my future self from nightmares and sudden baldness due to pulling my hair out, I have decided to blog it in the hopes that I can find it again this way.

    Here's the story... When you query data from an Excel spreadsheet, such as with old-fashioned DTS packages in SQL 2000 (my first reference) or simply with an OLEDB Data Adapter from ASP.NET (recent task), and if you are using the Microsoft Jet 4.0 driver (newer ones may deal with this differently), then you can get funny results where the query reports back that a cell value is null even when you know it contains data.

    What happens is that Excel doesn't really have data types. While you can format information in cells to appear like certain data types (e.g. Date, Time, Decimal, Text, etc.), that is not really defining the cell as being of a certain type like we think of when working with databases. But, presumably to make things more convenient for the user (programmer), when you issue a query against Excel, the query processor tries to guess what type of data is contained in each column and returns it in an appropriate manner. This is all well and good IF your data is consistent in every row and matches what the processor guessed. And, for efficiency's sake, when the query processor is trying to figure out each column's data type, it does so by analyzing only the first 8 rows of data (default setting).

    Now here's the problem. Suppose that your spreadsheet contains information about clothing, and one of the columns is Size. Now suppose that in the first 8 rows all of your sizes look like 32, 34, 18, 10, and so on, using numbers, but then somewhere after the 8th row you have some rows with sizes like S, M, L, XL. What happens is that by examining only the first 8 rows, the query processor inferred that the column contained numerical data, and then when it hits the non-numerical data in later rows, it comes back blank. Major bummer, and a real pain to track down if you don't know that Excel is doing this, because you study the spreadsheet and say, "the data is RIGHT THERE! WHY doesn't the query see it?!?!" And the hair-pulling begins.

    So, what's a developer to do? One option is to go to the registry setting noted above and change the DWORD value of TypeGuessRows from the default of 8 to 0 (zero). Setting this value to zero will force Jet to scan every row in the spreadsheet before making its determination as to what type of data the column contains. And that means that in the example above, it would have treated the column as a string rather than as numeric, and presto! your query now returns all of the values that you know are in there.

    Of course, there is a caveat... if you are querying large spreadsheets, making Jet scan every row can be quite a performance hit. You could enter a different number (more than 8) that you believe is a better sampling of rows to make the guess, but you still have the possibility that every row scanned looks alike, but that later rows are different, and that you might get blanks when there really is data there. That's the type of gamble I really don't like to take with my data.

    Anyone with a better approach, or with experience with more recent drivers that have a better way of handling data types, please chime in!
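    For anyone hitting this from C#, here is a hedged sketch of the OLEDB route described above - the file path, sheet name and column name are invented. Note that the TypeGuessRows registry value governs the sampling no matter what the code does; the IMEX=1 extended property only changes how mixed-type columns are treated once the scan detects them.

        using System;
        using System.Data.OleDb;

        class ExcelProbe
        {
            static void Main()
            {
                // Jet 4.0 provider; TypeGuessRows (registry) controls how many rows it samples.
                var connStr = "Provider=Microsoft.Jet.OLEDB.4.0;" +
                              "Data Source=C:\\data\\clothing.xls;" +
                              "Extended Properties=\"Excel 8.0;HDR=Yes;IMEX=1\"";
                using (var conn = new OleDbConnection(connStr))
                using (var cmd = new OleDbCommand("SELECT * FROM [Sheet1$]", conn))
                {
                    conn.Open();
                    using (var reader = cmd.ExecuteReader())
                    {
                        while (reader.Read())
                        {
                            // A 'Size' cell that defies the guessed column type comes back as DBNull.
                            object size = reader["Size"];
                            Console.WriteLine(size == DBNull.Value ? "(blank!)" : size.ToString());
                        }
                    }
                }
            }
        }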

    Read the article

  • SQL SERVER – Data Sources and Data Sets in Reporting Services SSRS

    - by Pinal Dave
    This example is from Beginning SSRS by Kathi Kellenberger. Supporting files are available with a free download from the www.Joes2Pros.com web site.

    Connecting to Your Data

    When I was a child, the telephone book was an important part of my life. Maybe I was just a nerd, but I enjoyed getting a new book every year to page through to learn about the businesses in my small town or to discover where some of my school acquaintances lived. It was also the source of maps to my town's neighborhoods and the towns that surrounded me. To make a phone call, I would need a telephone number. In order to find a telephone number, I had to know how to use the telephone book. That seems pretty simple, but it resembles connecting to any data: you have to know where the data is and how to interact with it.

    A data source is the connection information that the report uses to connect to the database. You have two choices when creating a data source: whether to embed it in the report or to make it a shared resource usable by many reports.

    Data Sources and Data Sets

    A few basic terms will make the upcoming choices make more sense. What database on what server do you want to connect to? It would be better to just ask "what is your data source?" The connection you need to make to get your report's data is called a data source. If you connect to a data source (like the JProCo database), there may be hundreds of tables. You probably only want data from just a few tables. This means you want to write a specific query against this data source. A query on a data source to get just the records you need for an SSRS report is called a data set.

    Creating a Local Data Source

    You can embed a connection from your report directly to your JProCo database which (let's say) is installed on a server named Reno. If you move JProCo to a new server named Tampa, then you need to update the data source. If you have 10 reports in one project that were all pointing to the JProCo database on the Reno server, then each of them would need to be updated. It's possible to make a project-level data source and have each report use that. This means one change can fix all 10 reports at once. This would be called a shared data source.

    Creating a Shared Data Source

    The best advice I can give you is to create shared data sources. The reason I recommend this is that if a database moves to a new server, you will have just one place in Report Manager to make the server name change. That one change will update the connection information in all the reports that use that data source.

    To get started, you will start with a fresh project. Go to Start > All Programs > SQL Server 2012 > Microsoft SQL Server Data Tools to launch SSDT. Once SSDT is running, click New Project to create a new project. Once the New Project dialog box appears, fill in the form. Be sure to select Report Server Project this time - not the wizard. Click OK to dismiss the New Project dialog box. You should now have an empty project, as shown in the Solution Explorer.

    A report is meant to show you data. Where is the data? The first task is to create a shared data source. Right-click on the Shared Data Sources folder and choose Add New Data Source. The Shared Data Source Properties dialog box will launch, where you can fill in a name for the data source. By default, it is named DataSource1. The best practice is to give the data source a more meaningful name. It is possible that you will have projects with more than one data source and, by naming them, you can tell one from another. Type the name JProCo for the data source name and click the Edit button to configure the database connection properties.

    If you take a look at the types of data sources you can choose, you will see that SSRS works with many data platforms including Oracle, XML, and Teradata. Make sure SQL Server is selected before continuing. For this post, I am assuming that you are using a local SQL Server and that you can use your Windows account to log in to the SQL Server. If, for some reason, you must use SQL Server Authentication, choose that option and fill in your SQL Server account credentials. Otherwise, just accept Windows Authentication.

    If your database server was installed locally and with the default instance, just type in Localhost for the Server name, then select the JProCo database from the database list. If you have installed a named instance of SQL Server, you will have to specify the server name like this: Localhost\InstanceName, replacing the InstanceName with whatever your instance name is. If you are not sure about the named instance, launch the SQL Server Configuration Manager, found at Start > All Programs > Microsoft SQL Server 2012 > Configuration Tools. If you have a named instance, the name will be shown in parentheses. A default instance of SQL Server will display MSSQLSERVER; a named instance will display the name chosen during installation.

    Once you get the connection properties filled in, click OK to dismiss the Connection Properties dialog box, and OK again to dismiss the Shared Data Source Properties. You now have a data source in the Solution Explorer.

    What's Next

    I really need to thank Kathi Kellenberger and Rick Morelan for sharing this material for this 5-day series of posts on SSRS. To get really comfortable with SSRS you will get to know the different SSDT windows, build reports on your own (without the wizards), add report headers and footers, accept user input, and create levels, charts, or even maps for visual appeal. You might be surprised to know that a small 230-page book starts from the very beginning and covers the steps to do all these items. Beginning SSRS 2012 is a small, easy-to-follow book, so you can learn SSRS for less than $20. See Joes2Pros.com for more on this and other books. If you want to learn SSRS in simple words, I strongly recommend you get the Beginning SSRS book from Joes 2 Pros.

    Reference: Pinal Dave (http://blog.sqlauthority.com)
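    Incidentally, the same connection details can be sanity-checked from a scratch C# program before being committed to a shared data source. A hedged sketch, using the server and database names from the walkthrough above:

        using System;
        using System.Data.SqlClient;

        class ConnectionCheck
        {
            static void Main()
            {
                // Windows Authentication against the local default instance, per the example.
                var connStr = "Server=Localhost;Database=JProCo;Integrated Security=true";
                using (var conn = new SqlConnection(connStr))
                {
                    conn.Open();
                    Console.WriteLine("Connected to " + conn.Database + " on " + conn.DataSource);
                }
            }
        }

    If this opens without error, the same server and database values can be entered in the Shared Data Source dialog with confidence.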

    Read the article

  • Archbeat Link-O-Rama Top 10 Facebook Faves for October 20-26, 2013

    - by OTN ArchBeat
    Here's this week's list of the Top 10 items shared on the OTN ArchBeat Facebook Page from October 27 - November 2, 2013.

    Visualizing and Process (Twitter) Events in Real Time with Oracle Coherence | Noah Arliss
    This OTN Virtual Developer Day session explores in detail how to create a dynamic HTML5 Web application that interacts with Oracle Coherence as it's processing events in real time, using the Avatar project and Oracle Coherence's Live Events feature. Part of OTN Virtual Developer Day: Harnessing the Power of Oracle WebLogic and Oracle Coherence, November 5, 2013. 9am to 1pm PT / 12pm to 4pm ET / 1pm to 5pm BRT. Register now!

    HTML5 Application Development with Oracle WebLogic Server | Doug Clarke
    This free OTN Virtual Developer Day session covers the support for WebSockets, RESTful data services, and JSON infrastructure available in Oracle WebLogic Server. Part of OTN Virtual Developer Day: Harnessing the Power of Oracle WebLogic and Oracle Coherence, November 5, 2013. 9am to 1pm PT / 12pm to 4pm ET / 1pm to 5pm BRT. Register now!

    Video: ADF BC and REST services | Frederic Desbiens
    Spend a few minutes with Oracle ADF principal product manager Frederic Desbiens and learn how to publish ADF Business Components as RESTful web services.

    One Client Two Clusters | David Felcey
    "Sometimes its desirable to have a client connect to multiple clusters, either because the data is dispersed or for instance the clusters are in different locations for high availability," says David Felcey. David shows you how in this post, which includes a simple example.

    Exceptions Handling and Notifications in ODI | Christophe Dupupet
    Oracle Fusion Middleware A-Team director Christophe Dupupet reviews the techniques that are available in Oracle Data Integrator to guarantee that the appropriate individuals are notified in the event that ODI processes are impacted by network outages or other mishaps.

    Securing WebSocket applications on Glassfish | Pavel Bucek
    WebSocket is a key capability standardized into Java EE 7. Many developers wonder how WebSockets can be secured. One very nice characteristic for WebSocket is that it in fact completely piggybacks on HTTP. In this post Pavel Bucek demonstrates how to secure WebSocket endpoints in GlassFish using TLS/SSL.

    Oracle Coherence, Split-Brain and Recovery Protocols In Detail | Ricardo Ferreira
    Ricardo Ferreira's article "provides a high level conceptual overview of Split-Brain scenarios in distributed systems," focusing on a "specific example of cluster communication failure and recovery in Oracle Coherence."

    Non-programmatic Authentication Using Login Form in JSF (For WebCenter & ADF) | JayJay Zheng
    Oracle ACE JayJay Zheng shares an approach that "avoids the programmatic authentication and works great for having a custom login page developed in WebCenter Portal integrated with OAM authentication."

    Tech Article: SOA in Real Life: Mobile Solutions
    The latest article in the Industrial SOA series looks at mobile computing and how companies are developing SOA to go. http://pub.vitrue.com/PUxT

    The ACE Director Thing | Dr. Frank Munz
    Frank Munz finally gets around to blogging about achieving Oracle ACE Director status and shares some interesting insight into what will change—and what won't—thanks to that new status. A good, short read for those interested in learning more about the Oracle ACE program.

    Thought for the Day

    "Even if you're on the right track, you'll get run over if you just sit there."
— Will Rogers, American humorist (November 4, 1879 – August 15, 1935) Source: brainyquote.com

    Read the article

  • The Business of Winning Innovation: An Exclusive Blog Series

    - by Kerrie Foy
    "The Business of Winning Innovation” is a series of articles authored by Oracle Agile PLM experts on what it takes to make innovation a successful and lucrative competitive advantage. Our customers have proven Agile PLM applications to be enormously flexible and comprehensive, so we’ve launched this article series to showcase some of the most fascinating, value-packed use cases. In this article by Keith Colonna, we kick-off the series by taking a look at the science side of innovation within the Consumer Products industry and how PLM can help companies innovate faster, cheaper, smarter. This article will review how innovation has become the lifeline for growth within consumer products companies and how certain companies are “winning” by creating a competitive advantage for themselves by taking a more enterprise-wide,systematic approach to “innovation”.   Managing the Science of Innovation within the Consumer Products Industry By: Keith Colonna, Value Chain Solution Manager, Oracle The consumer products (CP) industry is very mature and competitive. Most companies within this industry have saturated North America (NA) with their products thus maximizing their NA growth potential. Future growth is expected to come from either expansion outside of North America and/or by way of new ideas and products. Innovation plays an integral role in both of these strategies, whether you’re innovating business processes or the products themselves, and may cause several challenges for the typical CP company, Becoming more innovative is both an art and a science. Most CP companies are very good at the art of coming up with new innovative ideas, but many struggle with perfecting the science aspect that involves the best practice processes that help companies quickly turn ideas into sellable products and services. Symptoms and Causes of Business Pain Struggles associated with the science of innovation show up in a variety of ways, like: · Establishing and storing innovative product ideas and data · Funneling these ideas to the chosen few · Time to market cycle time and on-time launch rates · Success rates, or how often the best idea gets chosen · Imperfect decision making (i.e. the ability to kill projects that are not projected to be winners) · Achieving financial goals · Return on R&D investment · Communicating internally and externally as more outsource partners are added globally · Knowing your new product pipeline and project status These challenges (and others) can be consolidated into three root causes: A lack of visibility Poor data with limited access The inability to truly collaborate enterprise-wide throughout your extended value chain Choose the Right Remedy Product Lifecycle Management (PLM) solutions are uniquely designed to help companies solve these types challenges and their root causes. However, PLM solutions can vary widely in terms of configurability, functionality, time-to-value, etc. Business leaders should evaluate PLM solution in terms of their own business drivers and long-term vision to determine the right fit. Many of these solutions are point solutions that can help you cure only one or two business pains in the short term. Others have been designed to serve other industries with different needs. Then there are those solutions that demo well but are owned by companies that are either unable or unwilling to continuously improve their solution to stay abreast of the ever changing needs of the CP industry to grow through innovation. 
What the Right PLM Solution Should Do for You Based on more than twenty years working in the CP industry, I recommend investing in a single solution that can help you solve all of the issues associated with the science of innovation in a totally integrated fashion. By integration I mean the (1) integration of the all of the processes associated with the development, maintenance and delivery of your product data, and (2) the integration, or harmonization of this product data with other downstream sources, like ERP, product catalogues and the GS1 Global Data Synchronization Network (or GDSN, which is now a CP industry requirement for doing business with most retailers). The right PLM solution should help you: Increase Revenue. A best practice PLM solution should help a company grow its revenues by consolidating product development cycle-time and helping companies get new and improved products to market sooner. PLM should also eliminate many of the root causes for a product being returned, refused and/or reclaimed (which takes away from top-line growth) by creating an enterprise-wide, collaborative, workflow-driven environment. Reduce Costs. A strong PLM solution should help shave many unnecessary costs that companies typically take for granted. Rationalizing SKU’s, components (ingredients and packaging) and suppliers is a major opportunity at most companies that PLM should help address. A natural outcome of this rationalization is lower direct material spend and a reduction of inventory. Another cost cutting opportunity comes with PLM when it helps companies avoid certain costs associated with process inefficiencies that lead to scrap, rework, excess and obsolete inventory, poor end of life administration, higher cost of quality and regulatory and increased expediting. Mitigate Risk. Risks are the hardest to quantify but can be the most costly to a company. Food safety, recalls, line shutdowns, customer dissatisfaction and, worst of all, the potential tarnishing of your brands are a few of the debilitating risks that CP companies deal with on a daily basis. These risks are so uniquely severe that they require an enterprise PLM solution specifically designed for the CP industry that safeguards product information and processes while still allowing the art of innovation to flourish. Many CP companies have already created a winning advantage by leveraging a single, best practice PLM solution to establish an enterprise-wide, systematic approach to innovation. Oracle’s Answer for the Consumer Products Industry Oracle is dedicated to solving the growth and innovation challenges facing the CP industry. Oracle’s Agile Product Lifecycle Management for Process solution was originally developed with and for CP companies and is driven by a specialized development staff solely focused on maintaining and continuously improving the solution per the latest industry requirements. Agile PLM for Process helps CP companies handle all of the processes associated with managing the science of the innovation process, including: specification management, new product development/project and portfolio management, formulation optimization, supplier management, and quality and regulatory compliance to name a few. And as I mentioned earlier, integration is absolutely critical. Many Oracle CP customers, both with Oracle ERP systems and non-Oracle ERP systems, report benefits from Oracle’s Agile PLM for Process. 
In future articles we will explain in greater detail how both existing Oracle customers (like Gallo, Smuckers, Land-O-Lakes and Starbucks) and new Oracle customers (like ConAgra, Tyson, McDonalds and Heinz) have all realized the benefits of Agile PLM for Process and its integration to their ERP systems. More to Come Stay tuned for more articles in our blog series “The Business of Winning Innovation.” While we will also feature articles focused on other industries, look forward to more on how Agile PLM for Process addresses innovation challenges facing the CP industry. Additional topics include: Innovation Data Management (IDM), New Product Development (NPD), Product Quality Management (PQM), Menu Management,Private Label Management, and more! . Watch this video for more info about Agile PLM for Process

    Read the article

  • #altnetseattle &ndash; REST Services

    - by GeekAgilistMercenary
    Below are the notes I made in the REST Architecture Session I helped kick off with Andrew.
    - RSS, ATOM, and the like are needed for better discovery; i.e., there is still a need for some type of discovery.
    - Modeling behaviors in a RESTful way is difficult, e.g. invoking some type of state change against an object. A GET is easy: the representation comes back as-is. But what about a POST, which often changes some state?
    - A challenge is doing multiple, stateful workflows. How does batch work? Maybe model the batch as a resource (see the sketch below).
    - Frameworks aren’t particularly part of REST; REST is REST. But the point was argued that REST is modeled as, or is part of modeling, a state machine of some sort.
    - Nothing is 100% reliable with REST; comparisons were drawn with TCP/IP. The communications succeed with sufficient probability, but the possibility of failure has to be built into the usage model of REST.
    - Ruby on Rails and other RESTful frameworks were discussed: what their issues are and what they do. ATOM feeds, object serialization, using LINQ to XML with this. No state machine libraries.
    - Idempotency around REST, and single-change POSTs, are inherent in the architecture.
    - In REST, one of the constraints is the uniform interface for interaction with the system, limiting what can be done on the resources. Disagreement: there is no agreed-upon set of REST verbs.
    - Sam Ruby, on RESTful services: expanding the verbs beyond REST/HTTP pushes you off the web. Of the existing verbs, POST leaves the most up for debate.
    - Robert Reem used a Factory to deal with the POST and handle the new state, the POST identifying what it just did by its return. Different states are put into the POST so that prospective new verbs can be used to advantage, without creating new verbs for REST/HTTP and without breaking universal clients.
    - The biggest issue with REST services is their lack of state, yet it is also one of their biggest strengths. The client takes up the often onerous task of handling all state, state machines, and other extraneous resource management. All the GETs, POSTs, DELETEs, and PUTs get pushed into abstraction. My 2 cents is that this in a way ends up pushing a huge proprietary burden onto the clients of REST services, often undermining the point of REST, which is to be simple and to the point.
    - WADL does provide discovery and some state control (sort of?). Statement made: "WADL isn't needed; the JSON, XML, or other data returned to the client handles this."
    I then applied the law of 2 feet rule for myself and headed off to finish up these notes, post to the Wiki, and figure out what I was going to do next. For the original Wiki entry check it out here. I will be adding more to this post in a subsequent post. Please do feel free to post your thoughts and ideas about this, as I am sure everyone in the session will have more to elaborate on.
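    To make the "model the batch as a resource" idea concrete, here is a minimal JAX-RS sketch (my own illustration, not something shown in the session; the BatchResource class, the /batches path, and the in-memory job map are all hypothetical). POST creates a new batch job and reports what it just did through the 201 Created response and its Location header, while GET stays safe and idempotent, so a client can poll the job's state any number of times without changing it:

    import java.net.URI;
    import java.util.Map;
    import java.util.UUID;
    import java.util.concurrent.ConcurrentHashMap;
    import javax.ws.rs.*;
    import javax.ws.rs.core.MediaType;
    import javax.ws.rs.core.Response;

    // A batch job modeled as an addressable resource (illustrative only).
    @Path("/batches")
    @Produces(MediaType.APPLICATION_JSON)
    public class BatchResource {

        // In-memory store standing in for a real job queue.
        private static final Map<String, String> JOBS = new ConcurrentHashMap<>();

        // POST is the non-idempotent verb: each call creates a new job, and the
        // response itself (201 + Location) tells the client what state change
        // just happened.
        @POST
        @Consumes(MediaType.APPLICATION_JSON)
        public Response submitBatch(String payload) {
            String id = UUID.randomUUID().toString();
            JOBS.put(id, "PENDING");
            return Response.created(URI.create("/batches/" + id)).build();
        }

        // GET is safe and idempotent: polling the job's status leaves server
        // state untouched, which is what makes client retries harmless.
        @GET
        @Path("{id}")
        public Response getBatchStatus(@PathParam("id") String id) {
            String status = JOBS.get(id);
            if (status == null) {
                return Response.status(Response.Status.NOT_FOUND).build();
            }
            return Response.ok("{\"id\":\"" + id + "\",\"status\":\"" + status + "\"}").build();
        }
    }

    The same shape also speaks to the reliability note above: because the GET is idempotent, a client that never hears back can simply re-poll, and the one unavoidable uncertainty (did my POST land?) is confined to a single request.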

    Read the article

  • Taking Your Business Scorecard Golfing

    - by tobyehatch
    Our workplace world is definitely changing. Not only are we taking work home, but we are working during odd hours in some very strange places. I had the pleasure of interviewing Jacques Vigeant, Product Strategy Manager for Oracle Business Intelligence and Enterprise Performance Management, on a podcast, and he enlightened me about how our mobile devices and business scorecards are enabling us to be more accountable and keep a watchful eye on business, even while on the golf course.
    Business scorecards have been around for many years, so I asked Jacques if he felt they had changed significantly due to technology. His answer was, “Yes, and no.” Jacques agreed that scorecard enthusiasts are still passionate about executing the company strategy and monitoring Key Performance Indicators (KPIs), but scorecards and Business Intelligence (BI) as a whole have changed. He explained that five to six years ago, people did BI work at the office and, for the most part, disconnected from their computer and workplace when they went home, with the exception of checking email and making a phone call or two. But now, that is no longer the case. People are virtually always connected with work and, more importantly, expect their BI and scorecards to be ‘always on,’ regardless of whether they are at their desk or somewhere else.
    Basically, the BI paradigm has changed from a ‘pull’ model, where employees are at their desks querying or pulling information from the system, to a ‘push’ model, where employees expect their BI and scorecard systems to reach out (or push information) to them when there is something of note to learn or something on which they need to take action. A minimal sketch of this push pattern appears after this entry.
    I found this very interesting. However, mobile devices do have their limitations with respect to screen size: does it really make sense to look at your strategy/scorecard on tiny devices? What kind of scorecard activities can you really expect to be able to do? Jacques’ answer was very logical. “When you think of a scorecard, it is really comprised of an organization of KPIs that are aligned with the strategic objectives of your company. KPIs are the heart of how you will execute your strategy. So, if you decompose that a little more, each KPI is well defined with the thresholds that you should keep an eye on and who is responsible for them. When we talk about scorecarding on a phone, we aren’t talking about surfing the strategy and exploring the strategy map like we do on the desktop. In a scorecarding context, we use the phone more as an alerting mechanism or simple monitoring device for your KPIs.”
    Jacques gave a great example of an inventory manager who took part of an afternoon off to go golfing before winter finally hit, and while on the front nine holes, his phone vibrated. His scorecard was alerting him that the inventory level for one of the products was below a threshold that he had set. From his phone, he had set up three options within Oracle Scorecard and Strategy Management (OSSM) for this type of situation:
    1. Contact the warehouse manager directly by phone and work it out (standard phone function)
    2. Tap/hold the KPI and add an annotation to the KPI in OSSM using the dictation capabilities of the phone, then deal with it more fully when he gets back to the office
    3. Tap/hold the KPI and invoke a business process from OSSM to transfer product from another warehouse with higher stock levels to the one that needs it
    Being on a phone should still give you options to quickly deal with situations as needed, but mobile phones are not designed for, nor should they try to replicate, the full desktop experience. We covered other interesting subjects in the interview, including how Oracle is keeping pace with mobile innovation and new devices such as Google Glasses, Galaxy Gear, Pebble Watches and more, and how Oracle is handling mobile security, which is great news for our mobile workforce. To listen to the entire Podcast, click here. To learn more about Oracle Scorecard and Strategy Management, click here.
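    For readers who want the push model in concrete terms, here is a minimal sketch of a threshold-based KPI alerter (entirely illustrative: KpiMonitor, the Kpi record, and the five-minute polling interval are my own assumptions, not OSSM's API or architecture). The server side watches KPI values and reaches out only when a threshold is crossed, so the owner never has to pull up a dashboard:

    import java.util.List;
    import java.util.concurrent.Executors;
    import java.util.concurrent.ScheduledExecutorService;
    import java.util.concurrent.TimeUnit;
    import java.util.function.DoubleSupplier;

    // Hypothetical KPI monitor illustrating the push model: the system checks
    // thresholds server-side and notifies owners, instead of waiting to be queried.
    public class KpiMonitor {

        // A KPI definition: name, a supplier of its current value, the lower
        // threshold to watch, and the owner to notify on a breach.
        record Kpi(String name, DoubleSupplier value, double minThreshold, String owner) {}

        private final ScheduledExecutorService scheduler =
                Executors.newSingleThreadScheduledExecutor();

        public void watch(List<Kpi> kpis) {
            // Evaluate every KPI on a fixed schedule; owners only hear about breaches.
            scheduler.scheduleAtFixedRate(() -> {
                for (Kpi kpi : kpis) {
                    double current = kpi.value().getAsDouble();
                    if (current < kpi.minThreshold()) {
                        pushAlert(kpi, current);
                    }
                }
            }, 0, 5, TimeUnit.MINUTES);
        }

        // Stand-in for a real mobile push channel (APNs, SMS, email, etc.).
        private void pushAlert(Kpi kpi, double current) {
            System.out.printf("ALERT to %s: %s is %.1f, below threshold %.1f%n",
                    kpi.owner(), kpi.name(), current, kpi.minThreshold());
        }
    }

    Wired to an inventory-level KPI with the warehouse manager as its owner, this is the skeleton of the golf-course scenario above: the phone vibrates because the monitor pushed, not because anyone was watching a screen.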

    Read the article

< Previous Page | 180 181 182 183 184 185 186 187 188 189 190 191  | Next Page >