Search Results

Search found 3402 results on 137 pages for 'statistical analysis soft'.


  • OpenVPN on ec2 bridged mode connects but no Ping, DNS or forwarding

    - by michael
    I am trying to use OpenVPN to access the internet over a secure connection. I have openVPN configured and running on Amazon EC2 in bridge mode with client certs. I can successfully connect from the client, but I cannot get access to the internet or ping anything from the client I checked the following and everything seems to shows a successful connection between the vpn client/server and UDP traffic on 1194 [server] sudo tcpdump -i eth0 udp port 1194 (shows UDP traffic after establishing connection) [server] sudo iptables -L Chain INPUT (policy ACCEPT) target prot opt source destination ACCEPT all -- anywhere anywhere ACCEPT all -- anywhere anywhere Chain FORWARD (policy ACCEPT) target prot opt source destination ACCEPT all -- anywhere anywhere Chain OUTPUT (policy ACCEPT) target prot opt source destination [server] sudo iptables -L -t nat Chain PREROUTING (policy ACCEPT) target prot opt source destination Chain POSTROUTING (policy ACCEPT) target prot opt source destination MASQUERADE all -- ip-W-X-Y-0.us-west-1.compute.internal/24 anywhere Chain OUTPUT (policy ACCEPT) target prot opt source destination [server] openvpn.log Wed Oct 19 03:11:26 2011 localhost/a.b.c.d:61905 [localhost] Inactivity timeout (--ping-restart), restarting Wed Oct 19 03:11:26 2011 localhost/a.b.c.d:61905 SIGUSR1[soft,ping-restart] received, client-instance restarting Wed Oct 19 03:41:31 2011 MULTI: multi_create_instance called Wed Oct 19 03:41:31 2011 a.b.c.d:57889 Re-using SSL/TLS context Wed Oct 19 03:41:31 2011 a.b.c.d:57889 LZO compression initialized Wed Oct 19 03:41:31 2011 a.b.c.d:57889 Control Channel MTU parms [ L:1574 D:166 EF:66 EB:0 ET:0 EL:0 ] Wed Oct 19 03:41:31 2011 a.b.c.d:57889 Data Channel MTU parms [ L:1574 D:1450 EF:42 EB:135 ET:32 EL:0 AF:3/1 ] Wed Oct 19 03:41:31 2011 a.b.c.d:57889 Local Options hash (VER=V4): '360696c5' Wed Oct 19 03:41:31 2011 a.b.c.d:57889 Expected Remote Options hash (VER=V4): '13a273ba' Wed Oct 19 03:41:31 2011 a.b.c.d:57889 TLS: Initial packet from [AF_INET]a.b.c.d:57889, sid=dd886604 ab6ebb38 Wed Oct 19 03:41:35 2011 a.b.c.d:57889 VERIFY OK: depth=1, /C=US/ST=CA/L=SanFrancisco/O=EXAMPLE/CN=EXAMPLE_CA/[email protected] Wed Oct 19 03:41:35 2011 a.b.c.d:57889 VERIFY OK: depth=0, /C=US/ST=CA/L=SanFrancisco/O=EXAMPLE/CN=localhost/[email protected] Wed Oct 19 03:41:37 2011 a.b.c.d:57889 Data Channel Encrypt: Cipher 'BF-CBC' initialized with 128 bit key Wed Oct 19 03:41:37 2011 a.b.c.d:57889 Data Channel Encrypt: Using 160 bit message hash 'SHA1' for HMAC authentication Wed Oct 19 03:41:37 2011 a.b.c.d:57889 Data Channel Decrypt: Cipher 'BF-CBC' initialized with 128 bit key Wed Oct 19 03:41:37 2011 a.b.c.d:57889 Data Channel Decrypt: Using 160 bit message hash 'SHA1' for HMAC authentication Wed Oct 19 03:41:37 2011 a.b.c.d:57889 Control Channel: TLSv1, cipher TLSv1/SSLv3 DHE-RSA-AES256-SHA, 1024 bit RSA Wed Oct 19 03:41:37 2011 a.b.c.d:57889 [localhost] Peer Connection Initiated with [AF_INET]a.b.c.d:57889 Wed Oct 19 03:41:39 2011 localhost/a.b.c.d:57889 PUSH: Received control message: 'PUSH_REQUEST' Wed Oct 19 03:41:39 2011 localhost/a.b.c.d:57889 SENT CONTROL [localhost]: 'PUSH_REPLY,redirect-gateway def1 bypass-dhcp,route-gateway W.X.Y.Z,ping 10,ping-restart 120,ifconfig W.X.Y.Z 255.255.255.0' (status=1) Wed Oct 19 03:41:40 2011 localhost/a.b.c.d:57889 MULTI: Learn: (IPV6) -> localhost/a.b.c.d:57889 [client] tracert google.com Tracing route to google.com [74.125.71.104] over a maximum of 30 hops: 1 347 ms 349 ms 348 ms PC [w.X.Y.Z] 2 * * * Request timed out. 
I can also successfully ping the server IP address from the client, and ping google.com from an SSH shell on the server. What am I doing wrong? Here is my config (Note: W.X.Y.Z == amazon EC2 private ipaddress) bridge config on br0 ifconfig eth0 0.0.0.0 promisc up brctl addbr br0 brctl addif br0 eth0 ifconfig br0 W.X.Y.X netmask 255.255.255.0 broadcast W.X.Y.255 up route add default gw W.X.Y.1 br0 /etc/openvpn/server.conf (from https://help.ubuntu.com/10.04/serverguide/C/openvpn.html) local W.X.Y.Z dev tap0 up "/etc/openvpn/up.sh br0" down "/etc/openvpn/down.sh br0" ;server W.X.Y.0 255.255.255.0 server-bridge W.X.Y.Z 255.255.255.0 W.X.Y.105 W.X.Y.200 ;push "route W.X.Y.0 255.255.255.0" push "redirect-gateway def1 bypass-dhcp" push "dhcp-option DNS 208.67.222.222" push "dhcp-option DNS 208.67.220.220" tls-auth ta.key 0 # This file is secret user nobody group nogroup log-append openvpn.log iptables config sudo iptables -A INPUT -i tap0 -j ACCEPT sudo iptables -A INPUT -i br0 -j ACCEPT sudo iptables -A FORWARD -i br0 -j ACCEPT sudo iptables -t nat -A POSTROUTING -s W.X.Y.0/24 -o eth0 -j MASQUERADE echo 1 > /proc/sys/net/ipv4/ip_forward Routing Tables added route -n Kernel IP routing table Destination Gateway Genmask Flags Metric Ref Use Iface W.X.Y.0 0.0.0.0 255.255.255.0 U 0 0 0 br0 0.0.0.0 W.X.Y.1 0.0.0.0 UG 0 0 0 br0 C:>route print =========================================================================== Interface List 32...00 ff ac d6 f7 04 ......TAP-Win32 Adapter V9 15...00 14 d1 e9 57 49 ......Microsoft Virtual WiFi Miniport Adapter #2 14...00 14 d1 e9 57 49 ......Realtek RTL8191SU Wireless LAN 802.11n USB 2.0 Net work Adapter 10...00 1f d0 50 1b ca ......Realtek PCIe GBE Family Controller 1...........................Software Loopback Interface 1 11...00 00 00 00 00 00 00 e0 Teredo Tunneling Pseudo-Interface 16...00 00 00 00 00 00 00 e0 Microsoft ISATAP Adapter 17...00 00 00 00 00 00 00 e0 Microsoft ISATAP Adapter #2 18...00 00 00 00 00 00 00 e0 Microsoft ISATAP Adapter #3 36...00 00 00 00 00 00 00 e0 Microsoft ISATAP Adapter #5 =========================================================================== IPv4 Route Table =========================================================================== Active Routes: Network Destination Netmask Gateway Interface Metric 0.0.0.0 0.0.0.0 10.1.2.1 10.1.2.201 25 10.1.2.0 255.255.255.0 On-link 10.1.2.201 281 10.1.2.201 255.255.255.255 On-link 10.1.2.201 281 10.1.2.255 255.255.255.255 On-link 10.1.2.201 281 127.0.0.0 255.0.0.0 On-link 127.0.0.1 306 127.0.0.1 255.255.255.255 On-link 127.0.0.1 306 127.255.255.255 255.255.255.255 On-link 127.0.0.1 306 224.0.0.0 240.0.0.0 On-link 127.0.0.1 306 224.0.0.0 240.0.0.0 On-link 10.1.2.201 281 255.255.255.255 255.255.255.255 On-link 127.0.0.1 306 255.255.255.255 255.255.255.255 On-link 10.1.2.201 281 =========================================================================== Persistent Routes: Network Address Netmask Gateway Address Metric 0.0.0.0 0.0.0.0 10.1.2.1 Default =========================================================================== C:>tracert google.com Tracing route to google.com [74.125.71.147] over a maximum of 30 hops: 1 344 ms 345 ms 343 ms PC [W.X.Y.221] 2 * * * Request timed out.
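    A few checks that often explain exactly this symptom (the VPN connects but nothing forwards) are sketched below. This is a hedged diagnostic checklist rather than a confirmed fix; the interface names and the W.X.Y.0/24 subnet are simply carried over from the question. In particular, echo 1 > /proc/sys/net/ipv4/ip_forward does not survive a reboot, the POSTROUTING packet counters tell you whether the MASQUERADE rule ever matches (with eth0 enslaved to br0, the outgoing interface netfilter sees may be br0 rather than eth0), and EC2 drops forwarded traffic unless the instance's source/destination check is disabled.

        # confirm forwarding is enabled now and will persist across reboots
        sysctl net.ipv4.ip_forward
        echo 'net.ipv4.ip_forward = 1' | sudo tee /etc/sysctl.d/99-forward.conf
        sudo sysctl -p /etc/sysctl.d/99-forward.conf

        # watch the packet counters while a client runs ping/tracert;
        # if the MASQUERADE counter stays at 0, the rule never matches
        sudo iptables -vnL FORWARD
        sudo iptables -t nat -vnL POSTROUTING

        # if nothing matches on -o eth0, try the bridge as the out interface
        sudo iptables -t nat -A POSTROUTING -s W.X.Y.0/24 -o br0 -j MASQUERADE

        # EC2 only: allow the instance to forward traffic not addressed to
        # its own IP (assumes the modern AWS CLI; i-xxxxxxxx is a placeholder)
        aws ec2 modify-instance-attribute --instance-id i-xxxxxxxx --no-source-dest-check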


  • Spotlight on Claims: Serving Customers Under Extreme Conditions

    - by [email protected]
    Oracle Insurance's director of marketing for EMEA, John Sinclair, recently attended the CII Spotlight on Claims event in London. Bad weather and its implications for the insurance industry have become very topical as the frequency and diversity of natural disasters - including rain, wind and snow - have surged across Europe this winter. On England's wettest day on record, the county of Cumbria was flooded with 12 inches of rain within 24 hours. Freezing temperatures wreaked havoc on European travel, causing high-speed TGV trains to break down and stranding hundreds of passengers in a tunnel under the English Channel all night long without heat or electricity. A storm named Xynthia thrashed France and surrounding countries with hurricane force, flooding ports and killing 51 people. After the Spring Equinox, insurers may have thought the worst had passed. Then along came Eyjafjallajökull, spewing out vast quantities of volcanic ash in what is turning out to be one of the most costly natural disasters in history. Such extreme events challenge insurance companies' ability to serve their customers just when customers need their help most. When you add economic downturn and competitive pressures to the mix, insurers are further stretched and required to continually learn and innovate to meet high customer expectations with reduced budgets. These and other issues were hot topics of discussion at the recent "Spotlight on Claims" seminar in London, focused on how weather is affecting claims and the insurance industry. The event was organized by the CII (Chartered Insurance Institute), a group with 90,000 members. The CII has been at the forefront of setting professional standards for the insurance industry for over a century. Insurers came to the conference to hear how they could better serve their customers under extreme weather conditions, learn from the experience of their peers, and hear about technological breakthroughs in climate modeling, geographic intelligence and IT. Customer case studies at the conference highlighted the importance of effective and constant communication in handling the overflow of catastrophe-related claims. First and foremost is the need to rapidly establish initial communication with claimants to build their confidence in a positive outcome. Ongoing communication then needs to continue throughout the claims cycle to manage expectations and maintain ownership of the process from start to finish. Strong internal communication to support frontline staff was also deemed critical to successful crisis management, as was communication with the broader insurance ecosystem to tap into extended resources and business intelligence. Advances in technology - such as web-based systems to access policies and enter first notice of loss in the field, as well as customer-focused self-service portals and multichannel alerts - are instrumental in improving customer satisfaction and helping insurers to deal with the claims surge, which can often reach four or more times normal workloads. Dynamic models of the global climate system can now be used to better understand weather-related risks, and as these models mature it is hoped that they will soon become more accurate in predicting the timing of catastrophic events. Geographic intelligence is also being used within a claims environment to better assess loss reserves and detect fraud. 
Despite these advances in dealing with catastrophes and predicting their occurrence, there will never be a substitute for qualified front line staff to deal with customers. In light of pressures to streamline efficiency, there was debate as to whether outsourcing was the solution, or whether it was better to build on the people you have. In the final analysis, nearly everybody agreed that in the future insurance companies would have to work better and smarter to keep on top. An appeal was also made for greater collaboration amongst industry participants in dealing with the extreme conditions and systematic stress brought on by natural disasters. It was pointed out that the public oftentimes judged the industry as a whole rather than the individual carriers when it comes to freakish events, and that all would benefit at such times from the pooling of limited resources and professional skills rather than competing in silos for competitive advantage - especially the end customer. One case study that stood out was on how The Motorists Insurance Group was able to power through one of the most devastating catastrophes in recent years - Hurricane Ike. The keys to Motorists' success were superior people, processes and technology. They did a lot of upfront planning and invested in their people, creating a healthy team environment that delivered "max service" even when they were experiencing the same level of devastation as the rest of the population. Processes were rapidly adapted to meet the challenge of the catastrophe and continually adapted to Ike's specific conditions as they evolved. Technology was fundamental to the execution of their strategy, enabling them anywhere access, on the fly reassigning of resources and rapid training to augment the work force. You can learn more about the Motorists experience by watching this video. John Sinclair is marketing director for Oracle Insurance in EMEA. He has more than 20 years of experience in insurance and financial services.


  • Analysing and measuring the performance of a .NET application (survey results)

    - by Laila
    Back in December last year, I asked myself: could it be that .NET developers think that you need three days and a PhD to do performance profiling on their code? What if developers are shunning profilers because they perceive them as too complex to use? If so, then what method do they use to measure and analyse the performance of their .NET applications? Do they even care about performance? So, a few weeks ago, I decided to get a 1-minute survey up and running in the hopes that some good, hard data would clear the matter up once and for all. I posted the survey on Simple Talk and got help from a few people to promote it. The survey consisted of 3 simple questions: Amazingly, 533 developers took the time to respond - which means I had enough data to get representative results! So before I go any further, I would like to thank all of you who contributed, because I now have some pretty good answers to the troubling questions I was asking myself. To thank you properly, I thought I would share some of the results with you. First of all, application performance is indeed important to most of you. In fact, performance is an intrinsic part of the development cycle for a good 40% of you, which is much higher than I had anticipated, I have to admit. (I know, "Have a little faith Laila!") When asked what tool you use to measure and analyse application performance, I found that nearly half of the respondents use logging statements, a third use performance counters, and 70% of respondents use a profiler of some sort (a 3rd party performance profilers, the CLR profiler or the Visual Studio profiler). The importance attributed to logging statements did surprise me a little. I am still not sure why somebody would go to the trouble of manually instrumenting code in order to measure its performance, instead of just using a profiler. I personally find the process of annotating code, calculating times from log files, and relating it all back to your source terrifyingly laborious. Not to mention that you then need to remember to turn it all off later! Even when you have logging in place throughout all your code anyway, you still have a fair amount of potentially error-prone calculation to sift through the results; in addition, you'll only get method-level rather than line-level timings, and you won't get timings from any framework or library methods you don't have source for. To top it all, we all know that bottlenecks are rarely where you would expect them to be, so you could be wasting time looking for a performance problem in the wrong place. On the other hand, profilers do all the work for you: they automatically collect the CPU and wall-clock timings, and present the results from method timing all the way down to individual lines of code. Maybe I'm missing a trick. I would love to know about the types of scenarios where you actively prefer to use logging statements. Finally, while a third of the respondents didn't have a strong opinion about code performance profilers, those who had an opinion thought that they were mainly complex to use and time consuming. Three respondents in particular summarised this perfectly: "sometimes, they are rather complex to use, adding an additional time-sink to the process of trying to resolve the existing problem". "they are simple to use, but the results are hard to understand" "Complex to find the more advanced things, easy to find some low hanging fruit". 
These results confirmed my suspicions: Profilers are seen to be designed for more advanced users who can use them effectively and make sense of the results. I found yet more interesting information when I started comparing samples of "developers for whom performance is an important part of the dev cycle", with those "to whom performance is only looked at in times of crisis", and "developers to whom performance is not important, as long as the app works". See the three graphs below. Sample of developers to whom performance is an important part of the dev cycle: Sample of developers to whom performance is important only in times of crisis: Sample of developers to whom performance is not important, as long as the app works: As you can see, there is a strong correlation between the usage of a profiler and the importance attributed to performance: indeed, the more important performance is to a development team, the more likely they are to use a profiler. In addition, developers to whom performance is an important part of the dev cycle have a higher tendency to use a much wider range of methods for performance measurement and analysis. And, unsurprisingly, the less important performance is, the less varied the methods of measurement are. So all in all, to come back to my random questions: .NET developers do care about performance. Those who care the most use a wider range of performance measurement methods than those who care less. But overall, logging statements, performance counters and third party performance profilers are the performance measurement methods of choice for most developers. Finally, although most of you find code profilers complex to use, those of you who care the most about performance tend to use profilers more than those of you to whom performance is not so important.


  • CentOS 5.4 NFS v4 client file permissions differ from original files & NFS Share file contents

    - by p4guru
    Having a strange problem with NFS share and file permissions on the 1 out of the 2 NFS clients, web1 has file permissions issues but web2 is fine. web1 and web2 are load balanced web servers. So questions are: how do I ensure NFS share file contents retain the same permissions for user/group as the original files on web1 server like they do on web2 server ? how do I reverse what I did on web1, i tried unmount command and said command not found ? Information: I'm using 3 dedicated server setup. All 3 servers CentOS 5.4 64bit based. servers are as follows: web1 - nfs client with file permissions issues web2 - nfs client file permissions are OKAY db1 - nfs share at /nfsroot web2 nfs client was setup by my web host, while web1 was setup by me. I did the following commands on web1 and it worked with updating db1 nfsroot share at /nfsroot/site_css with latest files on web1 but the file permissions don't stick even if i use tar with -p command to perserve file permissions ? cd /home/username/public_html/forums/script/ tar -zcp site_css/ > site_css.tar.gz mount -t nfs4 nfsshareipaddress:/site_css /home/username/public_html/forums/scripts/site_css/ -o rw,soft cd /home/username/public_html/forums/script/ tar -zxf site_css.tar.gz But checking on web1 file permissions no longer username user/group but owned by nobody ? but web2 file permissions correct ? This is only a problem for web1 while web2 is correct ? Looks like numeric ids aren't the same ? Not sure how to correct this ? web1 with incorrect user/group of nobody ls -alh /home/username/public_html/forums/scripts/site_css total 48K drwxrwxrwx 2 nobody nobody 4.0K Feb 22 02:37 ./ drwxr-xr-x 3 username username 4.0K Feb 22 02:43 ../ -rw-r--r-- 1 nobody nobody 1 Nov 30 2006 index.html -rw-r--r-- 1 nobody nobody 5.8K Feb 22 02:37 style-057c3df0-00011.css -rw-r--r-- 1 nobody nobody 5.8K Feb 22 02:37 style-95001864-00002.css -rw-r--r-- 1 nobody nobody 5.8K Feb 18 05:37 style-b1879ba7-00002.css -rw-r--r-- 1 nobody nobody 5.8K Feb 18 05:37 style-cc2f96c9-00011.css web1 numeric ids ls -n /home/username/public_html/forums/scripts/site_css total 48 drwxrwxrwx 2 99 99 4096 Feb 22 02:37 ./ drwxr-xr-x 3 503 500 4096 Feb 22 02:43 ../ -rw-r--r-- 1 99 99 1 Nov 30 2006 index.html -rw-r--r-- 1 99 99 5876 Feb 22 02:37 style-057c3df0-00011.css -rw-r--r-- 1 99 99 5877 Feb 22 02:37 style-95001864-00002.css -rw-r--r-- 1 99 99 5877 Feb 18 05:37 style-b1879ba7-00002.css -rw-r--r-- 1 99 99 5876 Feb 18 05:37 style-cc2f96c9-00011.css web2 correct username user/group permissions ls -alh /home/username/public_html/forums/scripts/site_css total 48K drwxrwxrwx 2 root root 4.0K Feb 22 02:37 ./ drwxr-xr-x 3 username username 4.0K Dec 2 14:51 ../ -rw-r--r-- 1 username username 1 Nov 30 2006 index.html -rw-r--r-- 1 username username 5.8K Feb 22 02:37 style-057c3df0-00011.css -rw-r--r-- 1 username username 5.8K Feb 22 02:37 style-95001864-00002.css -rw-r--r-- 1 username username 5.8K Feb 18 05:37 style-b1879ba7-00002.css -rw-r--r-- 1 username username 5.8K Feb 18 05:37 style-cc2f96c9-00011.css web2 numeric ids ls -n /home/username/public_html/forums/scripts/site_css total 48 drwxrwxrwx 2 503 500 4096 Feb 22 02:37 ./ drwxr-xr-x 3 503 500 4096 Dec 2 14:51 ../ -rw-r--r-- 1 503 500 1 Nov 30 2006 index.html -rw-r--r-- 1 503 500 5876 Feb 22 02:37 style-057c3df0-00011.css -rw-r--r-- 1 503 500 5877 Feb 22 02:37 style-95001864-00002.css -rw-r--r-- 1 503 500 5877 Feb 18 05:37 style-b1879ba7-00002.css -rw-r--r-- 1 503 500 5876 Feb 18 05:37 style-cc2f96c9-00011.css I checked db1 
/nfsroot/site_css and user/group ownership was incorrect for newer files dated feb22 owned by root and not username ? on db1 originally incorrect root assigned user/group for new feb22 dated files ls -alh /nfsroot/site_css total 44K drwxrwxrwx 2 root root 4.0K Feb 22 02:37 . drwxr-xr-x 17 root root 4.0K Feb 17 12:06 .. -rw-r--r-- 1 root root 1 Nov 30 2006 index.html -rw-r--r-- 1 root root 5.8K Feb 22 02:37 style-057c3df0-00011.css -rw-r--r-- 1 root root 5.8K Feb 22 02:37 style-95001864-00002.css -rw------- 1 username nfs 5.8K Feb 18 05:37 style-b1879ba7-00002.css -rw------- 1 username nfs 5.8K Feb 18 05:37 style-cc2f96c9-00011.css Then I chmod them all on db1 and chown to set to right ownership on db1 so it looks like below on db1 once corrected the newer feb22 dated files ls -alh /nfsroot/site_css total 44K drwxrwxrwx 2 root root 4.0K Feb 22 02:37 . drwxr-xr-x 17 root root 4.0K Feb 17 12:06 .. -rw-r--r-- 1 username username 1 Nov 30 2006 index.html -rw-r--r-- 1 username username 5.8K Feb 22 02:37 style-057c3df0-00011.css -rw-r--r-- 1 username username 5.8K Feb 22 02:37 style-95001864-00002.css -rw-r--r-- 1 username username 5.8K Feb 18 05:37 style-b1879ba7-00002.css -rw-r--r-- 1 username username 5.8K Feb 18 05:37 style-cc2f96c9-00011.css but still web1 shows owned by nobody ? while web2 shows correct permissions ? web1 still with incorrect user/group of nobody not matching what web2 and db1 are set to ? ls -alh /home/username/public_html/forums/scripts/site_css total 48K drwxrwxrwx 2 nobody nobody 4.0K Feb 22 02:37 ./ drwxr-xr-x 3 username username 4.0K Feb 22 02:43 ../ -rw-r--r-- 1 nobody nobody 1 Nov 30 2006 index.html -rw-r--r-- 1 nobody nobody 5.8K Feb 22 02:37 style-057c3df0-00011.css -rw-r--r-- 1 nobody nobody 5.8K Feb 22 02:37 style-95001864-00002.css -rw-r--r-- 1 nobody nobody 5.8K Feb 18 05:37 style-b1879ba7-00002.css -rw-r--r-- 1 nobody nobody 5.8K Feb 18 05:37 style-cc2f96c9-00011.css Just so confusing so any help is very very much appreciated! thanks
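    The pattern here (correct numeric IDs on the server, everything mapped to nobody/99 on just one client) is the classic NFSv4 ID-mapping symptom: when a client's idmapd domain does not match the server's, owner names cannot be translated and fall back to the anonymous user. A hedged sketch of what to compare and how to redo the mount is below; the domain value is a placeholder, the paths are taken from the question, and the command for undoing a mount is spelled umount (which is why "unmount" reported command not found).

        # undo the mount on web1
        sudo umount /home/username/public_html/forums/scripts/site_css

        # NFSv4 sends owners as user@domain, so the Domain= line in
        # /etc/idmapd.conf must be identical on db1, web1 and web2
        grep -i '^Domain' /etc/idmapd.conf      # run on all three servers
        # e.g. set on each host:  Domain = example.com

        # restart the mapping daemon (CentOS 5 service name) and remount
        sudo service rpcidmapd restart
        sudo mount -t nfs4 nfsshareipaddress:/site_css \
            /home/username/public_html/forums/scripts/site_css -o rw,soft

    It is also worth confirming that the username account exists with the same UID/GID (503/500 in the listings above) on every host, since tar -p preserves numeric IDs rather than names.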


  • FairScheduling Conventions in Hadoop

    - by dan.mcclary
    While scheduling and resource allocation controls have been present in Hadoop since 0.20, a lot of people haven't discovered or utilized them in their initial investigations of the Hadoop ecosystem. We could chalk this up to many things: organizations are still determining what their dataflow and analysis workloads will comprise; small deployments under test aren't likely to show the signs of strain that would send someone looking for resource allocation options; and the alternative scheduling options that ship with Hadoop -- the FairScheduler and the CapacityScheduler -- are not placed in the most prominent position within the Hadoop documentation. However, for production deployments, it's wise to start with at least the foundations of scheduling in place so that you can tune the cluster as workloads emerge. To do that, we have to ask ourselves something about what the off-the-rack scheduling options are. We have some choices: the FairScheduler, which will work to ensure resource allocations are enforced on a per-job basis; the CapacityScheduler, which will ensure resource allocations are enforced on a per-queue basis; or writing your own implementation of the abstract class org.apache.hadoop.mapred.TaskScheduler, which is an option but usually overkill. If you're going to have several concurrent users and leverage the more interactive aspects of the Hadoop environment (e.g. Pig and Hive scripting), the FairScheduler is definitely the way to go. In particular, we can define user-specific pools so that default users get their fair share, and specific users are given the resources their workloads require. To enable fair scheduling, we're going to need to do a couple of things. First, we need to tell the JobTracker that we want to use fair scheduling and where we're going to be defining our allocations. We do this by adding the following to the mapred-site.xml file in HADOOP_HOME/conf:

        <property>
          <name>mapred.jobtracker.taskScheduler</name>
          <value>org.apache.hadoop.mapred.FairScheduler</value>
        </property>
        <property>
          <name>mapred.fairscheduler.allocation.file</name>
          <value>/path/to/allocations.xml</value>
        </property>
        <property>
          <name>mapred.fairscheduler.poolnameproperty</name>
          <value>pool.name</value>
        </property>
        <property>
          <name>pool.name</name>
          <value>${user.name}</value>
        </property>

    What we've done here is simply tell the JobTracker that we'd like task scheduling to use the FairScheduler class rather than a single FIFO queue. Moreover, we're going to be defining our resource pools and allocations in a file called allocations.xml. For reference, the allocation file is read every 15s or so, which allows for tuning allocations without having to take down the JobTracker. Our allocation file is now going to look a little like this:

        <?xml version="1.0"?>
        <allocations>
          <pool name="dan">
            <minMaps>5</minMaps>
            <minReduces>5</minReduces>
            <maxMaps>25</maxMaps>
            <maxReduces>25</maxReduces>
            <minSharePreemptionTimeout>300</minSharePreemptionTimeout>
          </pool>
          <user name="dan">
            <maxRunningJobs>6</maxRunningJobs>
          </user>
          <userMaxJobsDefault>3</userMaxJobsDefault>
          <fairSharePreemptionTimeout>600</fairSharePreemptionTimeout>
        </allocations>

    In this case, I've explicitly set my username to have upper and lower bounds on the maps and reduces, and allotted myself double the default number of running jobs. Now, if I run Hive or Pig jobs from either the console or via the Hue web interface, I'll be treated "fairly" by the JobTracker. 
There's a lot more tweaking that can be done to the allocations file, so it's best to dig down into the description and start trying out allocations that might fit your workload.
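    Two operational details are easy to miss when trying this out (a hedged sketch, with the paths and host name below being assumptions rather than anything from the post): in 0.20.x the FairScheduler ships as a contrib jar that has to be on the JobTracker's classpath, and the JobTracker must be restarted to pick up the mapred-site.xml change, even though the allocations file itself is re-read automatically as described above. The scheduler also publishes its pools and shares on the JobTracker web UI.

        # put the contrib jar on the JobTracker classpath (path may vary by distro)
        cp $HADOOP_HOME/contrib/fairscheduler/hadoop-*-fairscheduler*.jar $HADOOP_HOME/lib/

        # restart the JobTracker so the new taskScheduler setting takes effect
        $HADOOP_HOME/bin/hadoop-daemon.sh stop jobtracker
        $HADOOP_HOME/bin/hadoop-daemon.sh start jobtracker

        # the FairScheduler status page lists each pool's demand and fair share
        curl http://jobtracker-host:50030/scheduler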


  • Oracle Delivers Latest Release of Oracle Enterprise Manager 12c

    - by Scott McNeil
    Richer Service Catalog for Database and Middleware as a Service; Enhanced Database and Middleware Management Help Drive Enterprise-Scale Private Cloud Adoption News Summary IT organizations are adopting private clouds as a stepping-stone to business-driven, self-service IT. Successful implementations hinge on the ability to efficiently deploy and manage cloud services at enterprise scale. Having a complete cloud management solution integrated with an enterprise-class technology stack is a fundamental requirement for IT. Oracle Enterprise Manager 12c Release 4 meets that requirement by helping businesses become more agile and responsive, while reducing cost, complexity, and risk. News Facts Oracle Enterprise Manager 12c Release 4, available today, lets organizations rapidly adopt Oracle-based, enterprise-scale private clouds. New capabilities provide advanced technology stack management, secure database administration, and enterprise service governance, enabling Oracle customers and partners to maximize database and application performance and drive innovation using self-service IT platforms. The enhancements have been driven by customers and the growing Oracle Enterprise Manager Ecosystem, comprised of more than 750 Oracle PartnerNetwork (OPN) Specialized partners. Oracle and its partners and customers have built over 140 plug-ins and connectors for Oracle Enterprise Manager. Watch the video highlights. Automation for Broader Cloud Services Oracle Enterprise Manager 12c Release 4 allows for a rapid enterprise-wide adoption of database, middleware and infrastructure services in the private cloud, driven by an enhanced API-enabled service catalog. The release features “push button” style provisioning of complete environments such as SOA and Oracle Active Data Guard, and fast data cloning that enables rapid deployment and testing of enterprise applications. Out-of-the-box capabilities to detect data and configuration vulnerabilities provide enhanced cloud service governance along with greater operational control through a flexible and extensible showback mechanism. Enhanced Database Management A new performance warehouse enables predictive database diagnostics and trend analysis and helps identify database problems before they occur. New enterprise data-governance capabilities enhance security by helping systematically discover and protect sensitive data. Step-by-step orchestration of upgrades with the ability to rollback changes enables faster adoption of Oracle Database 12c. Expanded Fusion Middleware Management A new consolidated view of Oracle Fusion Middleware 12c deployments with a guided management capability lets administrators apply best management practices to diverse middleware environments and identify performance issues quickly. A Java VM Diagnostics as a Service feature allows governed access to diagnostics data for IT workers across multiple disciplines for accelerated DevOps resolutions of defects and performance optimization. New automated provisioning for SOA lets middleware administrators perform mass SOA provisioning with ease. Superior Enterprise-Grade Management Private roles and preferred credentials have been added to Oracle Enterprise Manager to provide additional fine-grained security for organizations with complex access control requirements. A new security console provides a single point of control for managing the security of Oracle Enterprise Manager environments. 
Support for the latest industry standard SNMP v3 protocol, including encryption, enables more secure heterogeneous management. “Smart monitoring” adapts to observed environmental changes and adds self-management capabilities to help Oracle Enterprise Manager run at peak performance, while demanding less IT supervision. Supporting Quotes “Lawrence Livermore National Laboratory has a strong tradition of technology breakthroughs and leadership. As a member of Oracle’s Customer Advisory Board for Oracle Enterprise Manager, we have consistently provided feedback and guidance in the areas of enterprise-scale cloud, self-diagnosability, and secure administration for the product,” said Tim Frazier, CIO, NIF and Photon Sciences, Lawrence Livermore National Laboratory. “We intend to take advantage of the Release 4 features that support enterprise-scale availability and fine-grained security capabilities for private cloud deployments.” “IDC's most recent CloudTrack survey shows that most enterprises plan to adopt hybrid cloud architectures over the next three years,” said Mary Johnston Turner, Research Vice President, Enterprise System Management Software, IDC. “These organizations plan to deploy a wide range of workloads into cloud environments including mission critical database and middleware services that require high levels of fault tolerance and disaster recovery. Such capabilities were traditionally custom configured for each application but cloud offers the possibility to incorporate such properties within the service definition, enabling organizations to adopt cloud without compromise. With the latest release of Oracle Enterprise Manager 12c, Oracle is providing customers with an out-of-the-box experience for delivering highly-resilient cloud services for databases and applications.” “Since its inception, Oracle has been leading the way in innovative, scalable and high performance solutions for the enterprise. With this release of Oracle Enterprise Manager, we are extending this leadership by providing enterprise-scale capabilities for planning, delivering, and managing private clouds. We call this ‘zero-to-cloud – accelerated.’ These enhancements help our customers to expedite their adoption of cloud computing and prepares them for the next generation of self-service IT,” said Prakash Ramamurthy, senior vice president of Systems and Cloud Management at Oracle. Supporting Resources Oracle Enterprise Manager 12c Video: Cerner Delivers High Performance Private Cloud Video: BIAS Achieves Outstanding Results with Private Cloud Press Release Stay Connected: Twitter | Facebook | YouTube | Linkedin | Newsletter Download the Oracle Enterprise Manager 12c Mobile app


  • Camera Projection back Into 3D world, offset error

    - by Anthony
    I'm using XNA to simulate a robot in a 3D world and then do image analysis on what the camera sees. I have my camera looking down in front of the direction that the robot is going, and I have the robot detecting white pixels. I'm trying to take the white pixels that it finds and project them back into the 3D world so that I can see if it is actually detecting the correct pixels. I almost have it working, but there is an offset between where the white is in in the World and were I put my orange triangles (which represent what the robot things is white). /// <summary> /// Takes a bool map of and makes vertex positions based on the map. /// </summary> /// <param name="c"> The bool map</param> private void ProjectBoolMapOnGroundAnthony2(bool[,] c) { float triangleSize = 0.04f; // Point of interest in World W cordinate system. Vector3 pointOfInterest_W = Vector3.Zero; // Point of interest in Robot Cordinate system R Vector3 pointOfInterest_R = Vector3.Zero; // alpha is the angle from the robot camera to where it is looking in the center. //double alpha = Math.Atan(1.8f / 1); /// Matrix representation of the view determined by the position, target, and updirection. Matrix View = ((SimulationMain)Game).mainRobot.robotCameraView.View; /// Matrix representation of the view determined by the angle of the field of view (Pi/4), aspectRatio, nearest plane visible (1), and farthest plane visible (1200) Matrix Projection = ((SimulationMain)Game).mainRobot.robotCameraView.Projection; /// Matrix representing how the real world cordinates differ from that of the rendering by the camera. Matrix World = ((SimulationMain)Game).mainRobot.robotCameraView.World; Plane groundPlan = new Plane(Vector3.UnitZ, 0.0f); for (int x = 0; x < this.screenWidth; x++) { for (int y = 0; y < this.screenHeight; ) { if (c[x, y] == true && this.count1D < 62000) { int j = 1; Vector3 nearPlanePoint = Game.GraphicsDevice.Viewport.Unproject(new Vector3(x, y, 0), Projection, View, World); Vector3 farPlanePoint = Game.GraphicsDevice.Viewport.Unproject(new Vector3(x, y, 1), Projection, View, World); //Vector3 pointOfInterest_W = Vector3.in Ray ray = new Ray(nearPlanePoint, farPlanePoint); pointOfInterest_W = ray.Position + ray.Direction * (float) ray.Intersects(groundPlan); this.vertexArray2[this.count1D + 0].Position.X = pointOfInterest_W.X - triangleSize; this.vertexArray2[this.count1D + 0].Position.Y = pointOfInterest_W.Y - triangleSize * j; this.vertexArray2[this.count1D + 0].Position.Z = pointOfInterest_W.Z; this.vertexArray2[this.count1D + 0].Color = Color.DarkOrange; // Put another vertex a the position but +1 in the X direction triangleSize //this.vertexArray2[this.count1D + 1].Position.X = pointOnGroud.X + 3; //this.vertexArray2[this.count1D + 1].Position.Y = pointOnGroud.Y + j; this.vertexArray2[this.count1D + 1].Position.X = pointOfInterest_W.X; this.vertexArray2[this.count1D + 1].Position.Y = pointOfInterest_W.Y + triangleSize * j; this.vertexArray2[this.count1D + 1].Position.Z = pointOfInterest_W.Z; this.vertexArray2[this.count1D + 1].Color = Color.Red; // Put another vertex a the position but +1 in the X direction //this.vertexArray2[this.count1D + 0].Position.X = pointOnGroud.X; //this.vertexArray2[this.count1D + 0].Position.Y = pointOnGroud.Y + 3 + j; this.vertexArray2[this.count1D + 2].Position.X = pointOfInterest_W.X + triangleSize; this.vertexArray2[this.count1D + 2].Position.Y = pointOfInterest_W.Y - triangleSize * j; this.vertexArray2[this.count1D + 2].Position.Z = pointOfInterest_W.Z; 
this.vertexArray2[this.count1D + 2].Color = Color.Orange; this.count1D += 3; y += j; } else { y++; } } } } The world is a grass texture with lines on it. The world plane is normal at (0,0,1). Any ideas on why there is an offset? Any Ideas? Thanks for the help, Anthony G.


  • SATA errors reported during boot: exception Emask 0x40 SAct 0x0 SErr 0x80800 action 0x0

    - by digby280
    I have noticed some error during the Linux boot. They seem to continue to occur after the boot adding lines to the log every few seconds. Once booted this normally does not appear to be causing any problems. However, around 1 in 10 boots results in a kernel panic and the computer has on two or three occasions suddenly rebooted after being powered on for a number of hours. I presume the cause of the reboot is a kernel panic as well. I am running Ubuntu 11.10 and I have had Ubuntu installed on the computer for around a year. I have googled around and not found anything useful. I have provided the kernel log lines and the output of smartctl. Can anyone explain exactly what these errors mean, or better still how to resolve them? Apr 2 16:51:27 dell580 kernel: [ 19.831140] EXT4-fs (sdb2): re-mounted. Opts: errors=remount-ro,user_xattr,commit=0 Apr 2 16:51:27 dell580 kernel: [ 19.934194] tg3 0000:03:00.0: eth0: Link is down Apr 2 16:51:28 dell580 kernel: [ 20.929468] tg3 0000:03:00.0: eth0: Link is up at 100 Mbps, full duplex Apr 2 16:51:28 dell580 kernel: [ 20.929471] tg3 0000:03:00.0: eth0: Flow control is on for TX and on for RX Apr 2 16:51:28 dell580 kernel: [ 20.929727] ADDRCONF(NETDEV_CHANGE): eth0: link becomes ready Apr 2 16:51:29 dell580 kernel: [ 21.609381] EXT4-fs (sdb2): re-mounted. Opts: errors=remount-ro,user_xattr,commit=0 Apr 2 16:51:29 dell580 kernel: [ 21.616515] ata2.01: exception Emask 0x40 SAct 0x0 SErr 0x80800 action 0x0 Apr 2 16:51:29 dell580 kernel: [ 21.616519] ata2.01: SError: { HostInt 10B8B } Apr 2 16:51:29 dell580 kernel: [ 21.616525] ata2.00: hard resetting link Apr 2 16:51:29 dell580 kernel: [ 21.934036] ata2.01: hard resetting link Apr 2 16:51:29 dell580 kernel: [ 22.408890] ata2.00: SATA link up 1.5 Gbps (SStatus 113 SControl 300) Apr 2 16:51:29 dell580 kernel: [ 22.408907] ata2.01: SATA link up 3.0 Gbps (SStatus 123 SControl 300) Apr 2 16:51:29 dell580 kernel: [ 22.440934] ata2.00: configured for UDMA/100 Apr 2 16:51:29 dell580 kernel: [ 22.449040] ata2.01: configured for UDMA/133 Apr 2 16:51:29 dell580 kernel: [ 22.449818] ata2: EH complete Apr 2 16:51:33 dell580 kernel: [ 26.122664] ata2.01: exception Emask 0x40 SAct 0x0 SErr 0x80800 action 0x0 Apr 2 16:51:33 dell580 kernel: [ 26.122670] ata2.01: SError: { HostInt 10B8B } Apr 2 16:51:33 dell580 kernel: [ 26.122677] ata2.00: hard resetting link Apr 2 16:51:33 dell580 kernel: [ 26.442684] ata2.01: hard resetting link Apr 2 16:51:34 dell580 kernel: [ 26.925545] ata2.00: SATA link up 1.5 Gbps (SStatus 113 SControl 300) Apr 2 16:51:34 dell580 kernel: [ 26.925561] ata2.01: SATA link up 3.0 Gbps (SStatus 123 SControl 300) Apr 2 16:51:34 dell580 kernel: [ 26.961542] ata2.00: configured for UDMA/100 Apr 2 16:51:34 dell580 kernel: [ 26.969616] ata2.01: configured for UDMA/133 Apr 2 16:51:34 dell580 kernel: [ 26.970400] ata2: EH complete Apr 2 16:51:35 dell580 kernel: [ 28.111180] ata2.01: exception Emask 0x40 SAct 0x0 SErr 0x80800 action 0x0 Apr 2 16:51:35 dell580 kernel: [ 28.111184] ata2.01: SError: { HostInt 10B8B } Apr 2 16:51:35 dell580 kernel: [ 28.111191] ata2.00: hard resetting link Apr 2 16:51:35 dell580 kernel: [ 28.429674] ata2.01: hard resetting link Apr 2 16:51:36 dell580 kernel: [ 28.904557] ata2.00: SATA link up 1.5 Gbps (SStatus 113 SControl 300) Apr 2 16:51:36 dell580 kernel: [ 28.904572] ata2.01: SATA link up 3.0 Gbps (SStatus 123 SControl 300) Apr 2 16:51:36 dell580 kernel: [ 28.936609] ata2.00: configured for UDMA/100 Apr 2 16:51:36 dell580 kernel: [ 28.944692] ata2.01: configured for UDMA/133 
Apr 2 16:51:36 dell580 kernel: [ 28.945464] ata2: EH complete Apr 2 16:51:38 dell580 kernel: [ 31.581756] eth0: no IPv6 routers present Apr 2 16:51:38 dell580 kernel: [ 32.103066] ata2.01: exception Emask 0x40 SAct 0x0 SErr 0x80800 action 0x0 Apr 2 16:51:38 dell580 kernel: [ 32.103074] ata2.01: SError: { HostInt 10B8B } Apr 2 16:51:38 dell580 kernel: [ 32.103085] ata2.00: hard resetting link Apr 2 16:51:38 dell580 kernel: [ 32.419669] ata2.01: hard resetting link Apr 2 16:51:39 dell580 kernel: [ 32.894518] ata2.00: SATA link up 1.5 Gbps (SStatus 113 SControl 300) Apr 2 16:51:39 dell580 kernel: [ 32.894533] ata2.01: SATA link up 3.0 Gbps (SStatus 123 SControl 300) Apr 2 16:51:39 dell580 kernel: [ 32.926536] ata2.00: configured for UDMA/100 Apr 2 16:51:39 dell580 kernel: [ 32.934715] ata2.01: configured for UDMA/133 Apr 2 16:51:39 dell580 kernel: [ 32.935578] ata2: EH complete Here's the output of smartctl for the drive. smartctl 5.41 2011-06-09 r3365 [x86_64-linux-3.0.0-17-generic] (local build) Copyright (C) 2002-11 by Bruce Allen, http://smartmontools.sourceforge.net === START OF INFORMATION SECTION === Model Family: SAMSUNG SpinPoint F1 DT Device Model: SAMSUNG HD103UJ Serial Number: S13PJ90QC19706 LU WWN Device Id: 5 0000f0 00b1c7960 Firmware Version: 1AA01113 User Capacity: 1,000,204,886,016 bytes [1.00 TB] Sector Size: 512 bytes logical/physical Device is: In smartctl database [for details use: -P show] ATA Version is: 8 ATA Standard is: ATA-8-ACS revision 3b Local Time is: Mon Apr 2 17:13:48 2012 BST SMART support is: Available - device has SMART capability. SMART support is: Enabled === START OF READ SMART DATA SECTION === SMART overall-health self-assessment test result: PASSED General SMART Values: Offline data collection status: (0x00) Offline data collection activity was never started. Auto Offline Data Collection: Disabled. Self-test execution status: ( 41) The self-test routine was interrupted by the host with a hard or soft reset. Total time to complete Offline data collection: (11772) seconds. Offline data collection capabilities: (0x7b) SMART execute Offline immediate. Auto Offline data collection on/off support. Suspend Offline collection upon new command. Offline surface scan supported. Self-test supported. Conveyance Self-test supported. Selective Self-test supported. SMART capabilities: (0x0003) Saves SMART data before entering power-saving mode. Supports SMART auto save timer. Error logging capability: (0x01) Error logging supported. General Purpose Logging supported. Short self-test routine recommended polling time: ( 2) minutes. Extended self-test routine recommended polling time: ( 197) minutes. Conveyance self-test routine recommended polling time: ( 21) minutes. SCT capabilities: (0x003f) SCT Status supported. SCT Error Recovery Control supported. SCT Feature Control supported. SCT Data Table supported. 
SMART Attributes Data Structure revision number: 16 Vendor Specific SMART Attributes with Thresholds: ID# ATTRIBUTE_NAME FLAG VALUE WORST THRESH TYPE UPDATED WHEN_FAILED RAW_VALUE 1 Raw_Read_Error_Rate 0x000f 100 100 051 Pre-fail Always - 0 3 Spin_Up_Time 0x0007 076 076 011 Pre-fail Always - 7940 4 Start_Stop_Count 0x0032 099 099 000 Old_age Always - 521 5 Reallocated_Sector_Ct 0x0033 100 100 010 Pre-fail Always - 0 7 Seek_Error_Rate 0x000f 253 253 051 Pre-fail Always - 0 8 Seek_Time_Performance 0x0025 100 100 015 Pre-fail Offline - 0 9 Power_On_Hours 0x0032 100 100 000 Old_age Always - 642 10 Spin_Retry_Count 0x0033 100 100 051 Pre-fail Always - 0 11 Calibration_Retry_Count 0x0012 100 100 000 Old_age Always - 0 12 Power_Cycle_Count 0x0032 100 100 000 Old_age Always - 482 13 Read_Soft_Error_Rate 0x000e 100 100 000 Old_age Always - 0 183 Runtime_Bad_Block 0x0032 100 100 000 Old_age Always - 759 184 End-to-End_Error 0x0033 100 100 000 Pre-fail Always - 0 187 Reported_Uncorrect 0x0032 100 100 000 Old_age Always - 0 188 Command_Timeout 0x0032 100 100 000 Old_age Always - 0 190 Airflow_Temperature_Cel 0x0022 073 069 000 Old_age Always - 27 (Min/Max 16/27) 194 Temperature_Celsius 0x0022 073 067 000 Old_age Always - 27 (Min/Max 16/28) 195 Hardware_ECC_Recovered 0x001a 100 100 000 Old_age Always - 320028 196 Reallocated_Event_Count 0x0032 100 100 000 Old_age Always - 0 197 Current_Pending_Sector 0x0012 100 100 000 Old_age Always - 0 198 Offline_Uncorrectable 0x0030 100 100 000 Old_age Offline - 0 199 UDMA_CRC_Error_Count 0x003e 099 099 000 Old_age Always - 1494 200 Multi_Zone_Error_Rate 0x000a 100 100 000 Old_age Always - 0 201 Soft_Read_Error_Rate 0x000a 253 253 000 Old_age Always - 0 SMART Error Log Version: 1 ATA Error Count: 211 (device log contains only the most recent five errors) CR = Command Register [HEX] FR = Features Register [HEX] SC = Sector Count Register [HEX] SN = Sector Number Register [HEX] CL = Cylinder Low Register [HEX] CH = Cylinder High Register [HEX] DH = Device/Head Register [HEX] DC = Device Command Register [HEX] ER = Error register [HEX] ST = Status register [HEX] Powered_Up_Time is measured from power on, and printed as DDd+hh:mm:SS.sss where DD=days, hh=hours, mm=minutes, SS=sec, and sss=millisec. It "wraps" after 49.710 days. Error 211 occurred at disk power-on lifetime: 0 hours (0 days + 0 hours) When the command that caused the error occurred, the device was active or idle. After command completion occurred, registers were: ER ST SC SN CL CH DH -- -- -- -- -- -- -- 84 51 0f 31 63 8f e1 Error: ICRC, ABRT 15 sectors at LBA = 0x018f6331 = 26174257 Commands leading to the command that caused the error were: CR FR SC SN CL CH DH DC Powered_Up_Time Command/Feature_Name -- -- -- -- -- -- -- -- ---------------- -------------------- c8 00 00 40 62 8f e1 08 00:01:00.460 READ DMA c8 00 20 00 7c 30 e0 08 00:01:00.450 READ DMA c8 00 00 10 49 8f e1 08 00:01:00.440 READ DMA c8 00 e0 20 d0 30 e0 08 00:01:00.420 READ DMA c8 00 00 c0 59 90 e1 08 00:01:00.400 READ DMA Error 210 occurred at disk power-on lifetime: 0 hours (0 days + 0 hours) When the command that caused the error occurred, the device was active or idle. 
After command completion occurred, registers were: ER ST SC SN CL CH DH -- -- -- -- -- -- -- 84 51 cf e9 cf 66 e0 Error: ICRC, ABRT 207 sectors at LBA = 0x0066cfe9 = 6737897 Commands leading to the command that caused the error were: CR FR SC SN CL CH DH DC Powered_Up_Time Command/Feature_Name -- -- -- -- -- -- -- -- ---------------- -------------------- c8 00 00 b8 cf 66 e0 08 00:08:29.780 READ DMA c8 00 60 60 c9 18 e0 08 00:08:29.770 READ DMA c8 00 40 20 c9 18 e0 08 00:08:29.770 READ DMA c8 00 20 00 c9 18 e0 08 00:08:29.760 READ DMA c8 00 20 98 cf 66 e0 08 00:08:29.750 READ DMA Error 209 occurred at disk power-on lifetime: 0 hours (0 days + 0 hours) When the command that caused the error occurred, the device was active or idle. After command completion occurred, registers were: ER ST SC SN CL CH DH -- -- -- -- -- -- -- 84 51 2f d1 74 e0 e0 Error: ICRC, ABRT 47 sectors at LBA = 0x00e074d1 = 14709969 Commands leading to the command that caused the error were: CR FR SC SN CL CH DH DC Powered_Up_Time Command/Feature_Name -- -- -- -- -- -- -- -- ---------------- -------------------- c8 00 00 00 74 e0 e0 08 00:00:30.940 READ DMA c8 00 20 18 36 de e0 08 00:00:30.930 READ DMA c8 00 08 48 f1 dd e0 08 00:00:30.930 READ DMA c8 00 08 a8 f0 dd e0 08 00:00:30.930 READ DMA c8 00 08 90 f0 dd e0 08 00:00:30.930 READ DMA Error 208 occurred at disk power-on lifetime: 0 hours (0 days + 0 hours) When the command that caused the error occurred, the device was active or idle. After command completion occurred, registers were: ER ST SC SN CL CH DH -- -- -- -- -- -- -- 84 51 7f 21 88 9d e0 Error: ICRC, ABRT 127 sectors at LBA = 0x009d8821 = 10324001 Commands leading to the command that caused the error were: CR FR SC SN CL CH DH DC Powered_Up_Time Command/Feature_Name -- -- -- -- -- -- -- -- ---------------- -------------------- c8 00 a0 00 88 9d e0 08 00:00:27.610 READ DMA c8 00 58 a8 e7 9c e0 08 00:00:27.610 READ DMA c8 00 00 28 e6 9c e0 08 00:00:27.610 READ DMA c8 00 00 e0 e4 9c e0 08 00:00:27.610 READ DMA c8 00 00 90 e0 9c e0 08 00:00:27.600 READ DMA Error 207 occurred at disk power-on lifetime: 0 hours (0 days + 0 hours) When the command that caused the error occurred, the device was active or idle. After command completion occurred, registers were: ER ST SC SN CL CH DH -- -- -- -- -- -- -- 04 51 26 6a 6a c3 e0 Error: ABRT at LBA = 0x00c36a6a = 12806762 Commands leading to the command that caused the error were: CR FR SC SN CL CH DH DC Powered_Up_Time Command/Feature_Name -- -- -- -- -- -- -- -- ---------------- -------------------- ca 00 00 90 69 c3 e0 08 00:29:39.350 WRITE DMA ca 00 40 90 68 c3 e0 08 00:29:39.350 WRITE DMA ca 00 40 50 65 c3 e0 08 00:29:39.350 WRITE DMA ca 00 40 d0 64 c3 e0 08 00:29:39.350 WRITE DMA ca 00 40 90 63 c3 e0 08 00:29:39.350 WRITE DMA SMART Self-test log structure revision number 1 Num Test_Description Status Remaining LifeTime(hours) LBA_of_first_error # 1 Short offline Interrupted (host reset) 90% 638 - # 2 Short offline Interrupted (host reset) 90% 638 - # 3 Extended offline Interrupted (host reset) 90% 638 - # 4 Short offline Interrupted (host reset) 90% 638 - # 5 Extended offline Interrupted (host reset) 90% 638 - SMART Selective self-test log data structure revision number 1 SPAN MIN_LBA MAX_LBA CURRENT_TEST_STATUS 1 0 0 Not_testing 2 0 0 Not_testing 3 0 0 Not_testing 4 0 0 Not_testing 5 0 0 Not_testing Selective self-test flags (0x0): After scanning selected spans, do NOT read-scan remainder of disk. 
If Selective self-test is pending on power-up, resume after 0 minute delay.
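    For what it's worth, the signature here (ICRC/ABRT errors, a steadily rising UDMA_CRC_Error_Count, HostInt SErr bits, yet zero reallocated or pending sectors) usually points at link-level corruption - cable, connector or controller - rather than a failing disk surface. A hedged checklist is sketched below; /dev/sdb is an assumption based on the sdb2 remount messages in the same log, so substitute the device that actually sits behind ata2.01.

        # reseat or replace the SATA cable, then watch whether the CRC
        # counter keeps climbing
        sudo smartctl -A /dev/sdb | grep -i crc

        # let a long self-test run to completion (every test in the log
        # above was interrupted by a host reset)
        sudo smartctl -t long /dev/sdb
        sudo smartctl -l selftest /dev/sdb

        # as an experiment, pin the link to 1.5 Gbps via the kernel command
        # line (edit /etc/default/grub, run sudo update-grub, then reboot)
        #   GRUB_CMDLINE_LINUX_DEFAULT="quiet splash libata.force=1.5Gbps"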


  • WPA2 authentication fails on Ubuntu 12.04 using Rosewill RNX-N1

    - by user94156
    Decided to reduce the clutter in the house and replace a wired connection with a wireless one on my wife's system using USB network device Rosewill RNX-X1. I can see and connect to unprotected network, but WPA2 authentication repeatedly fails. RNX-X1 works on other systems (including TV); also have 2 of 'em and tried each. Worth noting that I recently switched from Comcast to CenturyLink and so switched routers. The system connected successfully to previous router (Linksys EA4500) using WPA2. Would think it is the router (Actiontec C1000A) but all other devices (TV, iPad, Windows, Blackberry, and Squeezebox) connect ok. Would appreciate some diagnostic guidance and insight (phrased for a newbie!) Tests to date: sudo lshw -class network *-network description: Ethernet interface product: RTL8111/8168B PCI Express Gigabit Ethernet controller vendor: Realtek Semiconductor Co., Ltd. physical id: 0 bus info: pci@0000:03:00.0 logical name: eth0 version: 01 serial: 00:e0:4d:30:40:a1 size: 10Mbit/s capacity: 1Gbit/s width: 64 bits clock: 33MHz capabilities: pm vpd msi pciexpress bus_master cap_list rom ethernet physical tp mii 10bt 10bt-fd 100bt 100bt-fd 1000bt 1000bt-fd autonegotiation configuration: autonegotiation=on broadcast=yes driver=r8169 driverversion=2.3LK-NAPI duplex=half firmware=N/A latency=0 link=no multicast=yes port=MII speed=10Mbit/s resources: irq:47 ioport:ac00(size=256) memory:fdcff000-fdcfffff memory:fdb00000-fdb1ffff *-network description: Wireless interface physical id: 1 bus info: usb@1:2 logical name: wlan1 serial: 00:02:6f:bd:30:a0 capabilities: ethernet physical wireless configuration: broadcast=yes driver=rt2800usb driverversion=3.2.0-31-generic firmware=0.29 link=no multicast=yes wireless=IEEE 802.11bgn sudo lspci -v 00:00.0 RAM memory: NVIDIA Corporation MCP67 Memory Controller (rev a2) Subsystem: Biostar Microtech Int'l Corp Device 3409 Flags: bus master, 66MHz, fast devsel, latency 0 Capabilities: [44] HyperTransport: Slave or Primary Interface Capabilities: [dc] HyperTransport: MSI Mapping Enable+ Fixed- 00:01.0 ISA bridge: NVIDIA Corporation MCP67 ISA Bridge (rev a2) Subsystem: Biostar Microtech Int'l Corp Device 3409 Flags: bus master, 66MHz, fast devsel, latency 0 00:01.1 SMBus: NVIDIA Corporation MCP67 SMBus (rev a2) Subsystem: Biostar Microtech Int'l Corp Device 3409 Flags: 66MHz, fast devsel, IRQ 11 I/O ports at fc00 [size=64] I/O ports at 1c00 [size=64] I/O ports at 1c40 [size=64] Capabilities: [44] Power Management version 2 Kernel driver in use: nForce2_smbus Kernel modules: i2c-nforce2 00:01.2 RAM memory: NVIDIA Corporation MCP67 Memory Controller (rev a2) Flags: 66MHz, fast devsel 00:02.0 USB controller: NVIDIA Corporation MCP67 OHCI USB 1.1 Controller (rev a2) (prog-if 10 [OHCI]) Subsystem: Biostar Microtech Int'l Corp Device 3409 Flags: bus master, 66MHz, fast devsel, latency 0, IRQ 23 Memory at fe02f000 (32-bit, non-prefetchable) [size=4K] Capabilities: [44] Power Management version 2 Kernel driver in use: ohci_hcd 00:02.1 USB controller: NVIDIA Corporation MCP67 EHCI USB 2.0 Controller (rev a2) (prog-if 20 [EHCI]) Subsystem: Biostar Microtech Int'l Corp Device 3409 Flags: bus master, 66MHz, fast devsel, latency 0, IRQ 22 Memory at fe02e000 (32-bit, non-prefetchable) [size=256] Capabilities: [44] Debug port: BAR=1 offset=0098 Capabilities: [80] Power Management version 2 Kernel driver in use: ehci_hcd 00:04.0 USB controller: NVIDIA Corporation MCP67 OHCI USB 1.1 Controller (rev a2) (prog-if 10 [OHCI]) Subsystem: Biostar Microtech Int'l Corp Device 
3409 Flags: bus master, 66MHz, fast devsel, latency 0, IRQ 21 Memory at fe02d000 (32-bit, non-prefetchable) [size=4K] Capabilities: [44] Power Management version 2 Kernel driver in use: ohci_hcd 00:04.1 USB controller: NVIDIA Corporation MCP67 EHCI USB 2.0 Controller (rev a2) (prog-if 20 [EHCI]) Subsystem: Biostar Microtech Int'l Corp Device 3409 Flags: bus master, 66MHz, fast devsel, latency 0, IRQ 20 Memory at fe02c000 (32-bit, non-prefetchable) [size=256] Capabilities: [44] Debug port: BAR=1 offset=0098 Capabilities: [80] Power Management version 2 Kernel driver in use: ehci_hcd 00:06.0 IDE interface: NVIDIA Corporation MCP67 IDE Controller (rev a1) (prog-if 8a [Master SecP PriP]) Subsystem: Biostar Microtech Int'l Corp Device 3409 Flags: bus master, 66MHz, fast devsel, latency 0 [virtual] Memory at 000001f0 (32-bit, non-prefetchable) [size=8] [virtual] Memory at 000003f0 (type 3, non-prefetchable) [size=1] [virtual] Memory at 00000170 (32-bit, non-prefetchable) [size=8] [virtual] Memory at 00000370 (type 3, non-prefetchable) [size=1] I/O ports at f000 [size=16] Capabilities: [44] Power Management version 2 Kernel driver in use: pata_amd Kernel modules: pata_amd 00:07.0 Audio device: NVIDIA Corporation MCP67 High Definition Audio (rev a1) Subsystem: Biostar Microtech Int'l Corp Device 820c Flags: bus master, 66MHz, fast devsel, latency 0, IRQ 22 Memory at fe024000 (32-bit, non-prefetchable) [size=16K] Capabilities: [44] Power Management version 2 Capabilities: [50] MSI: Enable- Count=1/1 Maskable+ 64bit+ Capabilities: [6c] HyperTransport: MSI Mapping Enable- Fixed+ Kernel driver in use: snd_hda_intel Kernel modules: snd-hda-intel 00:08.0 PCI bridge: NVIDIA Corporation MCP67 PCI Bridge (rev a2) (prog-if 01 [Subtractive decode]) Flags: bus master, 66MHz, fast devsel, latency 0 Bus: primary=00, secondary=01, subordinate=01, sec-latency=32 I/O behind bridge: 0000c000-0000cfff Memory behind bridge: fdf00000-fdffffff Prefetchable memory behind bridge: fd000000-fd0fffff Capabilities: [b8] Subsystem: NVIDIA Corporation Device cb84 Capabilities: [8c] HyperTransport: MSI Mapping Enable- Fixed- 00:09.0 IDE interface: NVIDIA Corporation MCP67 AHCI Controller (rev a2) (prog-if 85 [Master SecO PriO]) Subsystem: Biostar Microtech Int'l Corp Device 5407 Flags: bus master, 66MHz, fast devsel, latency 0, IRQ 23 I/O ports at 09f0 [size=8] I/O ports at 0bf0 [size=4] I/O ports at 0970 [size=8] I/O ports at 0b70 [size=4] I/O ports at dc00 [size=16] Memory at fe02a000 (32-bit, non-prefetchable) [size=8K] Capabilities: [44] Power Management version 2 Capabilities: [8c] SATA HBA v1.0 Capabilities: [b0] MSI: Enable- Count=1/8 Maskable- 64bit+ Capabilities: [cc] HyperTransport: MSI Mapping Enable- Fixed+ Kernel driver in use: ahci 00:0b.0 PCI bridge: NVIDIA Corporation MCP67 PCI Express Bridge (rev a2) (prog-if 00 [Normal decode]) Flags: bus master, fast devsel, latency 0 Bus: primary=00, secondary=02, subordinate=02, sec-latency=0 I/O behind bridge: 0000b000-0000bfff Memory behind bridge: fde00000-fdefffff Prefetchable memory behind bridge: 00000000fdd00000-00000000fddfffff Capabilities: [40] Subsystem: NVIDIA Corporation Device 0000 Capabilities: [48] Power Management version 2 Capabilities: [50] MSI: Enable+ Count=1/2 Maskable- 64bit+ Capabilities: [60] HyperTransport: MSI Mapping Enable- Fixed- Capabilities: [80] Express Root Port (Slot+), MSI 00 Capabilities: [100] Virtual Channel Kernel driver in use: pcieport Kernel modules: shpchp 00:0c.0 PCI bridge: NVIDIA Corporation MCP67 PCI Express Bridge (rev a2) 
(prog-if 00 [Normal decode]) Flags: bus master, fast devsel, latency 0 Bus: primary=00, secondary=03, subordinate=03, sec-latency=0 I/O behind bridge: 0000a000-0000afff Memory behind bridge: fdc00000-fdcfffff Prefetchable memory behind bridge: 00000000fdb00000-00000000fdbfffff Capabilities: [40] Subsystem: NVIDIA Corporation Device 0000 Capabilities: [48] Power Management version 2 Capabilities: [50] MSI: Enable+ Count=1/2 Maskable- 64bit+ Capabilities: [60] HyperTransport: MSI Mapping Enable- Fixed- Capabilities: [80] Express Root Port (Slot+), MSI 00 Capabilities: [100] Virtual Channel Kernel driver in use: pcieport Kernel modules: shpchp 00:0d.0 PCI bridge: NVIDIA Corporation MCP67 PCI Express Bridge (rev a2) (prog-if 00 [Normal decode]) Flags: bus master, fast devsel, latency 0 Bus: primary=00, secondary=04, subordinate=04, sec-latency=0 I/O behind bridge: 00009000-00009fff Memory behind bridge: fda00000-fdafffff Prefetchable memory behind bridge: 00000000fd900000-00000000fd9fffff Capabilities: [40] Subsystem: NVIDIA Corporation Device 0000 Capabilities: [48] Power Management version 2 Capabilities: [50] MSI: Enable+ Count=1/2 Maskable- 64bit+ Capabilities: [60] HyperTransport: MSI Mapping Enable- Fixed- Capabilities: [80] Express Root Port (Slot+), MSI 00 Capabilities: [100] Virtual Channel Kernel driver in use: pcieport Kernel modules: shpchp 00:0e.0 PCI bridge: NVIDIA Corporation MCP67 PCI Express Bridge (rev a2) (prog-if 00 [Normal decode]) Flags: bus master, fast devsel, latency 0 Bus: primary=00, secondary=05, subordinate=05, sec-latency=0 I/O behind bridge: 00008000-00008fff Memory behind bridge: fd800000-fd8fffff Prefetchable memory behind bridge: 00000000fd700000-00000000fd7fffff Capabilities: [40] Subsystem: NVIDIA Corporation Device 0000 Capabilities: [48] Power Management version 2 Capabilities: [50] MSI: Enable+ Count=1/2 Maskable- 64bit+ Capabilities: [60] HyperTransport: MSI Mapping Enable- Fixed- Capabilities: [80] Express Root Port (Slot+), MSI 00 Capabilities: [100] Virtual Channel Kernel driver in use: pcieport Kernel modules: shpchp 00:0f.0 PCI bridge: NVIDIA Corporation MCP67 PCI Express Bridge (rev a2) (prog-if 00 [Normal decode]) Flags: bus master, fast devsel, latency 0 Bus: primary=00, secondary=06, subordinate=06, sec-latency=0 I/O behind bridge: 00007000-00007fff Memory behind bridge: fd600000-fd6fffff Prefetchable memory behind bridge: 00000000fd500000-00000000fd5fffff Capabilities: [40] Subsystem: NVIDIA Corporation Device 0000 Capabilities: [48] Power Management version 2 Capabilities: [50] MSI: Enable+ Count=1/2 Maskable- 64bit+ Capabilities: [60] HyperTransport: MSI Mapping Enable- Fixed- Capabilities: [80] Express Root Port (Slot+), MSI 00 Capabilities: [100] Virtual Channel Kernel driver in use: pcieport Kernel modules: shpchp 00:10.0 PCI bridge: NVIDIA Corporation MCP67 PCI Express Bridge (rev a2) (prog-if 00 [Normal decode]) Flags: bus master, fast devsel, latency 0 Bus: primary=00, secondary=07, subordinate=07, sec-latency=0 I/O behind bridge: 00006000-00006fff Memory behind bridge: fd400000-fd4fffff Prefetchable memory behind bridge: 00000000fd300000-00000000fd3fffff Capabilities: [40] Subsystem: NVIDIA Corporation Device 0000 Capabilities: [48] Power Management version 2 Capabilities: [50] MSI: Enable+ Count=1/2 Maskable- 64bit+ Capabilities: [60] HyperTransport: MSI Mapping Enable- Fixed- Capabilities: [80] Express Root Port (Slot+), MSI 00 Capabilities: [100] Virtual Channel Kernel driver in use: pcieport Kernel modules: shpchp 00:11.0 PCI 
bridge: NVIDIA Corporation MCP67 PCI Express Bridge (rev a2) (prog-if 00 [Normal decode]) Flags: bus master, fast devsel, latency 0 Bus: primary=00, secondary=08, subordinate=08, sec-latency=0 I/O behind bridge: 00005000-00005fff Memory behind bridge: fd200000-fd2fffff Prefetchable memory behind bridge: 00000000fd100000-00000000fd1fffff Capabilities: [40] Subsystem: NVIDIA Corporation Device 0000 Capabilities: [48] Power Management version 2 Capabilities: [50] MSI: Enable+ Count=1/2 Maskable- 64bit+ Capabilities: [60] HyperTransport: MSI Mapping Enable- Fixed- Capabilities: [80] Express Root Port (Slot+), MSI 00 Capabilities: [100] Virtual Channel Kernel driver in use: pcieport Kernel modules: shpchp 00:12.0 VGA compatible controller: NVIDIA Corporation C68 [GeForce 7050 PV / nForce 630a] (rev a2) (prog-if 00 [VGA controller]) Subsystem: Biostar Microtech Int'l Corp Device 1406 Flags: bus master, 66MHz, fast devsel, latency 0, IRQ 21 Memory at fb000000 (32-bit, non-prefetchable) [size=16M] Memory at e0000000 (64-bit, prefetchable) [size=256M] Memory at fc000000 (64-bit, non-prefetchable) [size=16M] [virtual] Expansion ROM at 80000000 [disabled] [size=128K] Capabilities: [48] Power Management version 2 Capabilities: [50] MSI: Enable- Count=1/1 Maskable- 64bit+ Kernel driver in use: nvidia Kernel modules: nvidia_current, nouveau, nvidiafb 00:18.0 Host bridge: Advanced Micro Devices [AMD] K8 [Athlon64/Opteron] HyperTransport Technology Configuration Flags: fast devsel Capabilities: [80] HyperTransport: Host or Secondary Interface 00:18.1 Host bridge: Advanced Micro Devices [AMD] K8 [Athlon64/Opteron] Address Map Flags: fast devsel 00:18.2 Host bridge: Advanced Micro Devices [AMD] K8 [Athlon64/Opteron] DRAM Controller Flags: fast devsel 00:18.3 Host bridge: Advanced Micro Devices [AMD] K8 [Athlon64/Opteron] Miscellaneous Control Flags: fast devsel Capabilities: [f0] Secure device <?> Kernel driver in use: k8temp Kernel modules: k8temp 03:00.0 Ethernet controller: Realtek Semiconductor Co., Ltd. RTL8111/8168B PCI Express Gigabit Ethernet controller (rev 01) Subsystem: Biostar Microtech Int'l Corp Device 2305 Flags: bus master, fast devsel, latency 0, IRQ 47 I/O ports at ac00 [size=256] Memory at fdcff000 (64-bit, non-prefetchable) [size=4K] [virtual] Expansion ROM at fdb00000 [disabled] [size=128K] Capabilities: [40] Power Management version 2 Capabilities: [48] Vital Product Data Capabilities: [50] MSI: Enable+ Count=1/2 Maskable- 64bit+ Capabilities: [60] Express Endpoint, MSI 00 Capabilities: [84] Vendor Specific Information: Len=4c <?> Capabilities: [100] Advanced Error Reporting Capabilities: [12c] Virtual Channel Capabilities: [148] Device Serial Number 32-00-00-00-10-ec-81-68 Capabilities: [154] Power Budgeting <?> Kernel driver in use: r8169 Kernel modules: r8169 sudo rfkill list all 2: phy2: Wireless LAN Soft blocked: no Hard blocked: no

    Read the article

  • Deploying Data Mining Models using Model Export and Import, Part 2

    - by [email protected]
    In my last post, Deploying Data Mining Models using Model Export and Import, we explored using DBMS_DATA_MINING.EXPORT_MODEL and DBMS_DATA_MINING.IMPORT_MODEL to enable moving a model from one system to another. In this post, we'll look at two distributed scenarios that make use of this capability, plus a tip for easily moving models from one machine to another using only Oracle Database, not an external file transport mechanism such as FTP. In the first scenario, consider a company with geographically distributed business units, each collecting and managing its data locally for the products it sells. Each business unit has in-house data analysts who build models to predict which products to recommend to customers in their space. A central telemarketing business unit also uses these models to score new customers locally using data collected over the phone. Since the models recommend different products, each customer is scored using each model. This is depicted in Figure 1. Figure 1: Target instance importing multiple remote models for local scoring. In the second scenario, consider multiple hospitals that collect data on patients with certain types of cancer. The data collection is standardized, so each hospital collects the same patient demographic and other health / tumor data, along with the clinical diagnosis. Instead of each hospital building its own models, the data is pooled at a central data analysis lab where a predictive model is built. Once completed, the model is distributed to hospitals, clinics, and doctors' offices, which can score patient data locally. Figure 2: Multiple target instances importing the same model from a source instance for local scoring. Since this blog focuses on model export and import, we'll only discuss what is necessary to move a model from one database to another. Here, we use the package DBMS_FILE_TRANSFER, which can move files between Oracle databases. The script is fairly straightforward, but requires setting up a database link and directory objects. We saw how to create directory objects in the previous post. To create a database link to the source database from the target, we can use, for example: create database link SOURCE1_LINK connect to <schema> identified by <password> using 'SOURCE1'; Note that 'SOURCE1' refers to the service name of the remote database entry in your tnsnames.ora file. From SQL*Plus, first connect to the remote database and export the model. Note that the model_file_name does not include the .dmp extension, because export_model appends "01" to this name. Next, connect to the local database, invoke DBMS_FILE_TRANSFER.GET_FILE, and import the model. Note that "01" is eliminated in the target system file name. connect <source_schema>/<password>@SOURCE1_LINK; BEGIN DBMS_DATA_MINING.EXPORT_MODEL ('EXPORT_FILE_NAME' || '.dmp', 'MY_SOURCE_DIR_OBJECT', 'name =''MY_MINING_MODEL'''); END; connect <target_schema>/<password>; BEGIN DBMS_FILE_TRANSFER.GET_FILE ('MY_SOURCE_DIR_OBJECT', 'EXPORT_FILE_NAME' || '01.dmp', 'SOURCE1_LINK', 'MY_TARGET_DIR_OBJECT', 'EXPORT_FILE_NAME' || '.dmp' ); DBMS_DATA_MINING.IMPORT_MODEL ('EXPORT_FILE_NAME' || '.dmp', 'MY_TARGET_DIR_OBJECT'); END; To clean up afterward, you may want to drop the exported .dmp file at the source and the transferred file at the target. 
For example, utl_file.fremove('&directory_name', '&model_file_name' || '.dmp');
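    The steps above assume the directory objects already exist (their creation was covered in the previous post). For readers without that post handy, here is a minimal sketch of what that setup might look like; the directory names match the ones used above, but the file system paths and grantee schemas are placeholders to substitute for your own environment:

    -- On the source instance (run as a user with CREATE ANY DIRECTORY, e.g. a DBA):
    CREATE OR REPLACE DIRECTORY my_source_dir_object AS '/u01/app/oracle/dm_export';  -- path is a placeholder
    GRANT READ, WRITE ON DIRECTORY my_source_dir_object TO <source_schema>;

    -- On the target instance:
    CREATE OR REPLACE DIRECTORY my_target_dir_object AS '/u01/app/oracle/dm_import';  -- path is a placeholder
    GRANT READ, WRITE ON DIRECTORY my_target_dir_object TO <target_schema>;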

    Read the article

  • WPA2 authentication fails using USB network devices (Linksys and Rosewill)

    - by Greg Youtz
    Decided to reduce the clutter in the house and replace a wired connection with a wireless one on my wife's system using USB network device Rosewill RNX-X1. I can see and connect to unprotected network, but WPA2 authentication repeatedly fails. Tried the same with a Linksys USB network adapter. Both failed to authenticate. Worth noting that I recently switched from Comcast to CenturyLink and so switched routers. The system connected successfully to previous router (Linksys EA4500) using WPA2. Would think it is the router (Actiontec C1000A) but all other devices (TV, iPad, Windows, Blackberry, and Squeezebox) connect ok. Would appreciate some diagnostic guidance and insight (phrased for a newbie!) Tests to date: sudo lshw -class network *-network description: Ethernet interface product: RTL8111/8168B PCI Express Gigabit Ethernet controller vendor: Realtek Semiconductor Co., Ltd. physical id: 0 bus info: pci@0000:03:00.0 logical name: eth0 version: 01 serial: 00:e0:4d:30:40:a1 size: 10Mbit/s capacity: 1Gbit/s width: 64 bits clock: 33MHz capabilities: pm vpd msi pciexpress bus_master cap_list rom ethernet physical tp mii 10bt 10bt-fd 100bt 100bt-fd 1000bt 1000bt-fd autonegotiation configuration: autonegotiation=on broadcast=yes driver=r8169 driverversion=2.3LK-NAPI duplex=half firmware=N/A latency=0 link=no multicast=yes port=MII speed=10Mbit/s resources: irq:47 ioport:ac00(size=256) memory:fdcff000-fdcfffff memory:fdb00000-fdb1ffff *-network description: Wireless interface physical id: 1 bus info: usb@1:2 logical name: wlan1 serial: 00:02:6f:bd:30:a0 capabilities: ethernet physical wireless configuration: broadcast=yes driver=rt2800usb driverversion=3.2.0-31-generic firmware=0.29 link=no multicast=yes wireless=IEEE 802.11bgn sudo lspci -v 00:00.0 RAM memory: NVIDIA Corporation MCP67 Memory Controller (rev a2) Subsystem: Biostar Microtech Int'l Corp Device 3409 Flags: bus master, 66MHz, fast devsel, latency 0 Capabilities: [44] HyperTransport: Slave or Primary Interface Capabilities: [dc] HyperTransport: MSI Mapping Enable+ Fixed- 00:01.0 ISA bridge: NVIDIA Corporation MCP67 ISA Bridge (rev a2) Subsystem: Biostar Microtech Int'l Corp Device 3409 Flags: bus master, 66MHz, fast devsel, latency 0 00:01.1 SMBus: NVIDIA Corporation MCP67 SMBus (rev a2) Subsystem: Biostar Microtech Int'l Corp Device 3409 Flags: 66MHz, fast devsel, IRQ 11 I/O ports at fc00 [size=64] I/O ports at 1c00 [size=64] I/O ports at 1c40 [size=64] Capabilities: [44] Power Management version 2 Kernel driver in use: nForce2_smbus Kernel modules: i2c-nforce2 00:01.2 RAM memory: NVIDIA Corporation MCP67 Memory Controller (rev a2) Flags: 66MHz, fast devsel 00:02.0 USB controller: NVIDIA Corporation MCP67 OHCI USB 1.1 Controller (rev a2) (prog-if 10 [OHCI]) Subsystem: Biostar Microtech Int'l Corp Device 3409 Flags: bus master, 66MHz, fast devsel, latency 0, IRQ 23 Memory at fe02f000 (32-bit, non-prefetchable) [size=4K] Capabilities: [44] Power Management version 2 Kernel driver in use: ohci_hcd 00:02.1 USB controller: NVIDIA Corporation MCP67 EHCI USB 2.0 Controller (rev a2) (prog-if 20 [EHCI]) Subsystem: Biostar Microtech Int'l Corp Device 3409 Flags: bus master, 66MHz, fast devsel, latency 0, IRQ 22 Memory at fe02e000 (32-bit, non-prefetchable) [size=256] Capabilities: [44] Debug port: BAR=1 offset=0098 Capabilities: [80] Power Management version 2 Kernel driver in use: ehci_hcd 00:04.0 USB controller: NVIDIA Corporation MCP67 OHCI USB 1.1 Controller (rev a2) (prog-if 10 [OHCI]) Subsystem: Biostar Microtech Int'l Corp Device 
3409 Flags: bus master, 66MHz, fast devsel, latency 0, IRQ 21 Memory at fe02d000 (32-bit, non-prefetchable) [size=4K] Capabilities: [44] Power Management version 2 Kernel driver in use: ohci_hcd 00:04.1 USB controller: NVIDIA Corporation MCP67 EHCI USB 2.0 Controller (rev a2) (prog-if 20 [EHCI]) Subsystem: Biostar Microtech Int'l Corp Device 3409 Flags: bus master, 66MHz, fast devsel, latency 0, IRQ 20 Memory at fe02c000 (32-bit, non-prefetchable) [size=256] Capabilities: [44] Debug port: BAR=1 offset=0098 Capabilities: [80] Power Management version 2 Kernel driver in use: ehci_hcd 00:06.0 IDE interface: NVIDIA Corporation MCP67 IDE Controller (rev a1) (prog-if 8a [Master SecP PriP]) Subsystem: Biostar Microtech Int'l Corp Device 3409 Flags: bus master, 66MHz, fast devsel, latency 0 [virtual] Memory at 000001f0 (32-bit, non-prefetchable) [size=8] [virtual] Memory at 000003f0 (type 3, non-prefetchable) [size=1] [virtual] Memory at 00000170 (32-bit, non-prefetchable) [size=8] [virtual] Memory at 00000370 (type 3, non-prefetchable) [size=1] I/O ports at f000 [size=16] Capabilities: [44] Power Management version 2 Kernel driver in use: pata_amd Kernel modules: pata_amd 00:07.0 Audio device: NVIDIA Corporation MCP67 High Definition Audio (rev a1) Subsystem: Biostar Microtech Int'l Corp Device 820c Flags: bus master, 66MHz, fast devsel, latency 0, IRQ 22 Memory at fe024000 (32-bit, non-prefetchable) [size=16K] Capabilities: [44] Power Management version 2 Capabilities: [50] MSI: Enable- Count=1/1 Maskable+ 64bit+ Capabilities: [6c] HyperTransport: MSI Mapping Enable- Fixed+ Kernel driver in use: snd_hda_intel Kernel modules: snd-hda-intel 00:08.0 PCI bridge: NVIDIA Corporation MCP67 PCI Bridge (rev a2) (prog-if 01 [Subtractive decode]) Flags: bus master, 66MHz, fast devsel, latency 0 Bus: primary=00, secondary=01, subordinate=01, sec-latency=32 I/O behind bridge: 0000c000-0000cfff Memory behind bridge: fdf00000-fdffffff Prefetchable memory behind bridge: fd000000-fd0fffff Capabilities: [b8] Subsystem: NVIDIA Corporation Device cb84 Capabilities: [8c] HyperTransport: MSI Mapping Enable- Fixed- 00:09.0 IDE interface: NVIDIA Corporation MCP67 AHCI Controller (rev a2) (prog-if 85 [Master SecO PriO]) Subsystem: Biostar Microtech Int'l Corp Device 5407 Flags: bus master, 66MHz, fast devsel, latency 0, IRQ 23 I/O ports at 09f0 [size=8] I/O ports at 0bf0 [size=4] I/O ports at 0970 [size=8] I/O ports at 0b70 [size=4] I/O ports at dc00 [size=16] Memory at fe02a000 (32-bit, non-prefetchable) [size=8K] Capabilities: [44] Power Management version 2 Capabilities: [8c] SATA HBA v1.0 Capabilities: [b0] MSI: Enable- Count=1/8 Maskable- 64bit+ Capabilities: [cc] HyperTransport: MSI Mapping Enable- Fixed+ Kernel driver in use: ahci 00:0b.0 PCI bridge: NVIDIA Corporation MCP67 PCI Express Bridge (rev a2) (prog-if 00 [Normal decode]) Flags: bus master, fast devsel, latency 0 Bus: primary=00, secondary=02, subordinate=02, sec-latency=0 I/O behind bridge: 0000b000-0000bfff Memory behind bridge: fde00000-fdefffff Prefetchable memory behind bridge: 00000000fdd00000-00000000fddfffff Capabilities: [40] Subsystem: NVIDIA Corporation Device 0000 Capabilities: [48] Power Management version 2 Capabilities: [50] MSI: Enable+ Count=1/2 Maskable- 64bit+ Capabilities: [60] HyperTransport: MSI Mapping Enable- Fixed- Capabilities: [80] Express Root Port (Slot+), MSI 00 Capabilities: [100] Virtual Channel Kernel driver in use: pcieport Kernel modules: shpchp 00:0c.0 PCI bridge: NVIDIA Corporation MCP67 PCI Express Bridge (rev a2) 
(prog-if 00 [Normal decode]) Flags: bus master, fast devsel, latency 0 Bus: primary=00, secondary=03, subordinate=03, sec-latency=0 I/O behind bridge: 0000a000-0000afff Memory behind bridge: fdc00000-fdcfffff Prefetchable memory behind bridge: 00000000fdb00000-00000000fdbfffff Capabilities: [40] Subsystem: NVIDIA Corporation Device 0000 Capabilities: [48] Power Management version 2 Capabilities: [50] MSI: Enable+ Count=1/2 Maskable- 64bit+ Capabilities: [60] HyperTransport: MSI Mapping Enable- Fixed- Capabilities: [80] Express Root Port (Slot+), MSI 00 Capabilities: [100] Virtual Channel Kernel driver in use: pcieport Kernel modules: shpchp 00:0d.0 PCI bridge: NVIDIA Corporation MCP67 PCI Express Bridge (rev a2) (prog-if 00 [Normal decode]) Flags: bus master, fast devsel, latency 0 Bus: primary=00, secondary=04, subordinate=04, sec-latency=0 I/O behind bridge: 00009000-00009fff Memory behind bridge: fda00000-fdafffff Prefetchable memory behind bridge: 00000000fd900000-00000000fd9fffff Capabilities: [40] Subsystem: NVIDIA Corporation Device 0000 Capabilities: [48] Power Management version 2 Capabilities: [50] MSI: Enable+ Count=1/2 Maskable- 64bit+ Capabilities: [60] HyperTransport: MSI Mapping Enable- Fixed- Capabilities: [80] Express Root Port (Slot+), MSI 00 Capabilities: [100] Virtual Channel Kernel driver in use: pcieport Kernel modules: shpchp 00:0e.0 PCI bridge: NVIDIA Corporation MCP67 PCI Express Bridge (rev a2) (prog-if 00 [Normal decode]) Flags: bus master, fast devsel, latency 0 Bus: primary=00, secondary=05, subordinate=05, sec-latency=0 I/O behind bridge: 00008000-00008fff Memory behind bridge: fd800000-fd8fffff Prefetchable memory behind bridge: 00000000fd700000-00000000fd7fffff Capabilities: [40] Subsystem: NVIDIA Corporation Device 0000 Capabilities: [48] Power Management version 2 Capabilities: [50] MSI: Enable+ Count=1/2 Maskable- 64bit+ Capabilities: [60] HyperTransport: MSI Mapping Enable- Fixed- Capabilities: [80] Express Root Port (Slot+), MSI 00 Capabilities: [100] Virtual Channel Kernel driver in use: pcieport Kernel modules: shpchp 00:0f.0 PCI bridge: NVIDIA Corporation MCP67 PCI Express Bridge (rev a2) (prog-if 00 [Normal decode]) Flags: bus master, fast devsel, latency 0 Bus: primary=00, secondary=06, subordinate=06, sec-latency=0 I/O behind bridge: 00007000-00007fff Memory behind bridge: fd600000-fd6fffff Prefetchable memory behind bridge: 00000000fd500000-00000000fd5fffff Capabilities: [40] Subsystem: NVIDIA Corporation Device 0000 Capabilities: [48] Power Management version 2 Capabilities: [50] MSI: Enable+ Count=1/2 Maskable- 64bit+ Capabilities: [60] HyperTransport: MSI Mapping Enable- Fixed- Capabilities: [80] Express Root Port (Slot+), MSI 00 Capabilities: [100] Virtual Channel Kernel driver in use: pcieport Kernel modules: shpchp 00:10.0 PCI bridge: NVIDIA Corporation MCP67 PCI Express Bridge (rev a2) (prog-if 00 [Normal decode]) Flags: bus master, fast devsel, latency 0 Bus: primary=00, secondary=07, subordinate=07, sec-latency=0 I/O behind bridge: 00006000-00006fff Memory behind bridge: fd400000-fd4fffff Prefetchable memory behind bridge: 00000000fd300000-00000000fd3fffff Capabilities: [40] Subsystem: NVIDIA Corporation Device 0000 Capabilities: [48] Power Management version 2 Capabilities: [50] MSI: Enable+ Count=1/2 Maskable- 64bit+ Capabilities: [60] HyperTransport: MSI Mapping Enable- Fixed- Capabilities: [80] Express Root Port (Slot+), MSI 00 Capabilities: [100] Virtual Channel Kernel driver in use: pcieport Kernel modules: shpchp 00:11.0 PCI 
bridge: NVIDIA Corporation MCP67 PCI Express Bridge (rev a2) (prog-if 00 [Normal decode]) Flags: bus master, fast devsel, latency 0 Bus: primary=00, secondary=08, subordinate=08, sec-latency=0 I/O behind bridge: 00005000-00005fff Memory behind bridge: fd200000-fd2fffff Prefetchable memory behind bridge: 00000000fd100000-00000000fd1fffff Capabilities: [40] Subsystem: NVIDIA Corporation Device 0000 Capabilities: [48] Power Management version 2 Capabilities: [50] MSI: Enable+ Count=1/2 Maskable- 64bit+ Capabilities: [60] HyperTransport: MSI Mapping Enable- Fixed- Capabilities: [80] Express Root Port (Slot+), MSI 00 Capabilities: [100] Virtual Channel Kernel driver in use: pcieport Kernel modules: shpchp 00:12.0 VGA compatible controller: NVIDIA Corporation C68 [GeForce 7050 PV / nForce 630a] (rev a2) (prog-if 00 [VGA controller]) Subsystem: Biostar Microtech Int'l Corp Device 1406 Flags: bus master, 66MHz, fast devsel, latency 0, IRQ 21 Memory at fb000000 (32-bit, non-prefetchable) [size=16M] Memory at e0000000 (64-bit, prefetchable) [size=256M] Memory at fc000000 (64-bit, non-prefetchable) [size=16M] [virtual] Expansion ROM at 80000000 [disabled] [size=128K] Capabilities: [48] Power Management version 2 Capabilities: [50] MSI: Enable- Count=1/1 Maskable- 64bit+ Kernel driver in use: nvidia Kernel modules: nvidia_current, nouveau, nvidiafb 00:18.0 Host bridge: Advanced Micro Devices [AMD] K8 [Athlon64/Opteron] HyperTransport Technology Configuration Flags: fast devsel Capabilities: [80] HyperTransport: Host or Secondary Interface 00:18.1 Host bridge: Advanced Micro Devices [AMD] K8 [Athlon64/Opteron] Address Map Flags: fast devsel 00:18.2 Host bridge: Advanced Micro Devices [AMD] K8 [Athlon64/Opteron] DRAM Controller Flags: fast devsel 00:18.3 Host bridge: Advanced Micro Devices [AMD] K8 [Athlon64/Opteron] Miscellaneous Control Flags: fast devsel Capabilities: [f0] Secure device <?> Kernel driver in use: k8temp Kernel modules: k8temp 03:00.0 Ethernet controller: Realtek Semiconductor Co., Ltd. RTL8111/8168B PCI Express Gigabit Ethernet controller (rev 01) Subsystem: Biostar Microtech Int'l Corp Device 2305 Flags: bus master, fast devsel, latency 0, IRQ 47 I/O ports at ac00 [size=256] Memory at fdcff000 (64-bit, non-prefetchable) [size=4K] [virtual] Expansion ROM at fdb00000 [disabled] [size=128K] Capabilities: [40] Power Management version 2 Capabilities: [48] Vital Product Data Capabilities: [50] MSI: Enable+ Count=1/2 Maskable- 64bit+ Capabilities: [60] Express Endpoint, MSI 00 Capabilities: [84] Vendor Specific Information: Len=4c <?> Capabilities: [100] Advanced Error Reporting Capabilities: [12c] Virtual Channel Capabilities: [148] Device Serial Number 32-00-00-00-10-ec-81-68 Capabilities: [154] Power Budgeting <?> Kernel driver in use: r8169 Kernel modules: r8169 sudo rfkill list all 2: phy2: Wireless LAN Soft blocked: no Hard blocked: no Would appreciate insight on how to chase this down.

    Read the article

  • Why Wifi no longer works 12.04.1

    - by Roger
    Wifi (Realtek RTL8188CE) stopped working this morning on my CLEVO W253HU. It may be due to the update from the day before yesterday (apparently the driver is no longer being loaded), but strangely it still worked yesterday. If someone has an idea of the problem, here is the command-line output: cat /etc/lsb-release DISTRIB_ID=Ubuntu DISTRIB_RELEASE=12.04 DISTRIB_CODENAME=precise DISTRIB_DESCRIPTION="Ubuntu 12.04.1 LTS" lsusb Bus 001 Device 001: ID 1d6b:0002 Linux Foundation 2.0 root hub Bus 002 Device 001: ID 1d6b:0002 Linux Foundation 2.0 root hub Bus 001 Device 002: ID 8087:0024 Intel Corp. Integrated Rate Matching Hub Bus 002 Device 002: ID 8087:0024 Intel Corp. Integrated Rate Matching Hub Bus 002 Device 006: ID 192f:0416 Avago Technologies, Pte. Bus 002 Device 004: ID 5986:0315 Acer, Inc lspci 00:00.0 Host bridge: Intel Corporation 2nd Generation Core Processor Family DRAM Controller (rev 09) 00:02.0 VGA compatible controller: Intel Corporation 2nd Generation Core Processor Family Integrated Graphics Controller (rev 09) 00:16.0 Communication controller: Intel Corporation 6 Series/C200 Series Chipset Family MEI Controller #1 (rev 04) 00:1a.0 USB controller: Intel Corporation 6 Series/C200 Series Chipset Family USB Enhanced Host Controller #2 (rev 05) 00:1b.0 Audio device: Intel Corporation 6 Series/C200 Series Chipset Family High Definition Audio Controller (rev 05) 00:1c.0 PCI bridge: Intel Corporation 6 Series/C200 Series Chipset Family PCI Express Root Port 1 (rev b5) 00:1c.2 PCI bridge: Intel Corporation 6 Series/C200 Series Chipset Family PCI Express Root Port 3 (rev b5) 00:1c.3 PCI bridge: Intel Corporation 6 Series/C200 Series Chipset Family PCI Express Root Port 4 (rev b5) 00:1d.0 USB controller: Intel Corporation 6 Series/C200 Series Chipset Family USB Enhanced Host Controller #1 (rev 05) 00:1f.0 ISA bridge: Intel Corporation HM65 Express Chipset Family LPC Controller (rev 05) 00:1f.2 SATA controller: Intel Corporation 6 Series/C200 Series Chipset Family 6 port SATA AHCI Controller (rev 05) 00:1f.3 SMBus: Intel Corporation 6 Series/C200 Series Chipset Family SMBus Controller (rev 05) 02:00.0 Network controller: Realtek Semiconductor Co., Ltd. RTL8188CE 802.11b/g/n WiFi Adapter (rev 01) 03:00.0 Ethernet controller: JMicron Technology Corp. JMC250 PCI Express Gigabit Ethernet Controller (rev 05) 03:00.1 System peripheral: JMicron Technology Corp. SD/MMC Host Controller (rev 90) 03:00.2 SD Host controller: JMicron Technology Corp. Standard SD Host Controller (rev 90) 03:00.3 System peripheral: JMicron Technology Corp. MS Host Controller (rev 90) lspci -nn | grep -i net 02:00.0 Network controller [0280]: Realtek Semiconductor Co., Ltd. RTL8188CE 802.11b/g/n WiFi Adapter [10ec:8176] (rev 01) 03:00.0 Ethernet controller [0200]: JMicron Technology Corp. 
JMC250 PCI Express Gigabit Ethernet Controller [197b:0250] (rev 05) lspci -k 00:00.0 Host bridge: Intel Corporation 2nd Generation Core Processor Family DRAM Controller (rev 09) Subsystem: CLEVO/KAPOK Computer Device 4140 Kernel driver in use: agpgart-intel 00:02.0 VGA compatible controller: Intel Corporation 2nd Generation Core Processor Family Integrated Graphics Controller (rev 09) Subsystem: CLEVO/KAPOK Computer Device 4140 Kernel driver in use: i915 Kernel modules: i915 00:16.0 Communication controller: Intel Corporation 6 Series/C200 Series Chipset Family MEI Controller #1 (rev 04) Subsystem: CLEVO/KAPOK Computer Device 4140 Kernel driver in use: mei Kernel modules: mei 00:1a.0 USB controller: Intel Corporation 6 Series/C200 Series Chipset Family USB Enhanced Host Controller #2 (rev 05) Subsystem: CLEVO/KAPOK Computer Device 4140 Kernel driver in use: ehci_hcd 00:1b.0 Audio device: Intel Corporation 6 Series/C200 Series Chipset Family High Definition Audio Controller (rev 05) Subsystem: CLEVO/KAPOK Computer Device 4140 Kernel driver in use: snd_hda_intel Kernel modules: snd-hda-intel 00:1c.0 PCI bridge: Intel Corporation 6 Series/C200 Series Chipset Family PCI Express Root Port 1 (rev b5) Kernel driver in use: pcieport Kernel modules: shpchp 00:1c.2 PCI bridge: Intel Corporation 6 Series/C200 Series Chipset Family PCI Express Root Port 3 (rev b5) Kernel driver in use: pcieport Kernel modules: shpchp 00:1c.3 PCI bridge: Intel Corporation 6 Series/C200 Series Chipset Family PCI Express Root Port 4 (rev b5) Kernel driver in use: pcieport Kernel modules: shpchp 00:1d.0 USB controller: Intel Corporation 6 Series/C200 Series Chipset Family USB Enhanced Host Controller #1 (rev 05) Subsystem: CLEVO/KAPOK Computer Device 4140 Kernel driver in use: ehci_hcd 00:1f.0 ISA bridge: Intel Corporation HM65 Express Chipset Family LPC Controller (rev 05) Subsystem: CLEVO/KAPOK Computer Device 4140 Kernel modules: iTCO_wdt 00:1f.2 SATA controller: Intel Corporation 6 Series/C200 Series Chipset Family 6 port SATA AHCI Controller (rev 05) Subsystem: CLEVO/KAPOK Computer Device 4140 Kernel driver in use: ahci 00:1f.3 SMBus: Intel Corporation 6 Series/C200 Series Chipset Family SMBus Controller (rev 05) Subsystem: CLEVO/KAPOK Computer Device 4140 Kernel modules: i2c-i801 02:00.0 Network controller: Realtek Semiconductor Co., Ltd. RTL8188CE 802.11b/g/n WiFi Adapter (rev 01) Subsystem: Realtek Semiconductor Co., Ltd. Device 9196 Kernel modules: rtl8192ce 03:00.0 Ethernet controller: JMicron Technology Corp. JMC250 PCI Express Gigabit Ethernet Controller (rev 05) Subsystem: CLEVO/KAPOK Computer Device 4140 Kernel driver in use: jme Kernel modules: jme 03:00.1 System peripheral: JMicron Technology Corp. SD/MMC Host Controller (rev 90) Subsystem: CLEVO/KAPOK Computer Device 4140 Kernel driver in use: sdhci-pci Kernel modules: sdhci-pci 03:00.2 SD Host controller: JMicron Technology Corp. Standard SD Host Controller (rev 90) Subsystem: CLEVO/KAPOK Computer Device 4140 Kernel modules: sdhci-pci 03:00.3 System peripheral: JMicron Technology Corp. MS Host Controller (rev 90) Subsystem: CLEVO/KAPOK Computer Device 4140 Kernel driver in use: jmb38x_ms Kernel modules: jmb38x_ms sudo lshw -C network *-network NON-RÉCLAMÉ description: Network controller produit: RTL8188CE 802.11b/g/n WiFi Adapter fabriquant: Realtek Semiconductor Co., Ltd. 
identifiant matériel: 0 information bus: pci@0000:02:00.0 version: 01 bits: 64 bits horloge: 33MHz fonctionnalités: pm msi pciexpress cap_list configuration: latency=0 ressources: portE/S:e000(taille=256) mémoire:f7d00000-f7d03fff *-network description: Ethernet interface produit: JMC250 PCI Express Gigabit Ethernet Controller fabriquant: JMicron Technology Corp. identifiant matériel: 0 information bus: pci@0000:03:00.0 nom logique: eth0 version: 05 numéro de série: 00:90:f5:c1:c6:45 taille: 100Mbit/s capacité: 1Gbit/s bits: 32 bits horloge: 33MHz fonctionnalités: pm pciexpress msix msi bus_master cap_list rom ethernet physical tp mii 10bt 10bt-fd 100bt 100bt-fd 1000bt 1000bt-fd autonegotiation configuration: autonegotiation=on broadcast=yes driver=jme driverversion=1.0.8 duplex=full ip=192.168.1.54 latency=0 link=yes multicast=yes port=MII speed=100Mbit/s ressources: irq:44 mémoire:f7c20000-f7c23fff portE/S:d100(taille=128) portE/S:d000(taille=256) mémoire:f7c10000-f7c1ffff mémoire:f7c00000-f7c0ffff lsmod Module Size Used by btusb 18288 0 rfcomm 47604 0 bnep 18281 2 bluetooth 180104 11 btusb,rfcomm,bnep parport_pc 32866 0 ppdev 17113 0 binfmt_misc 17540 1 snd_hda_codec_realtek 224173 0 dm_crypt 23125 0 snd_hda_codec_hdmi 32474 0 uvcvideo 72627 0 videodev 98259 1 uvcvideo v4l2_compat_ioctl32 17128 1 videodev snd_hda_intel 33773 2 snd_hda_codec 127706 3 snd_hda_codec_realtek,snd_hda_codec_hdmi,snd_hda_intel snd_hwdep 13668 1 snd_hda_codec snd_pcm 97188 3 snd_hda_codec_hdmi,snd_hda_intel,snd_hda_codec snd_seq_midi 13324 0 snd_rawmidi 30748 1 snd_seq_midi jmb38x_ms 17646 0 psmouse 87692 0 serio_raw 13211 0 memstick 16569 1 jmb38x_ms snd_seq_midi_event 14899 1 snd_seq_midi rtl8192ce 84826 0 rtl8192c_common 75767 1 rtl8192ce rtlwifi 111202 1 rtl8192ce snd_seq 61896 2 snd_seq_midi,snd_seq_midi_event snd_timer 29990 2 snd_pcm,snd_seq snd_seq_device 14540 3 snd_seq_midi,snd_rawmidi,snd_seq mac80211 506816 3 rtl8192ce,rtl8192c_common,rtlwifi mac_hid 13253 0 snd 78855 14 snd_hda_codec_realtek,snd_hda_codec_hdmi,snd_hda_intel,snd_hda_codec,snd_hwdep,snd_pcm,snd_rawmidi,snd_seq,snd_timer,snd_seq_device cfg80211 205544 2 rtlwifi,mac80211 soundcore 15091 1 snd snd_page_alloc 18529 2 snd_hda_intel,snd_pcm mei 41616 0 lp 17799 0 parport 46562 3 parport_pc,ppdev,lp usbhid 47199 0 hid 99559 1 usbhid i915 473035 3 drm_kms_helper 46978 1 i915 drm 242038 4 i915,drm_kms_helper jme 41259 0 i2c_algo_bit 13423 1 i915 sdhci_pci 18826 0 sdhci 33205 1 sdhci_pci wmi 19256 0 video 19596 1 i915 iwconfig lo no wireless extensions. eth0 no wireless extensions. ifconfig eth0 Link encap:Ethernet HWaddr 00:90:f5:c1:c6:45 inet adr:192.168.1.54 Bcast:192.168.1.255 Masque:255.255.255.0 adr inet6: fe80::290:f5ff:fec1:c645/64 Scope:Lien UP BROADCAST RUNNING MULTICAST MTU:1500 Metric:1 Packets reçus:4513 erreurs:0 :0 overruns:0 frame:0 TX packets:4359 errors:0 dropped:0 overruns:0 carrier:0 collisions:0 lg file transmission:1000 Octets reçus:3471675 (3.4 MB) Octets transmis:712722 (712.7 KB) Interruption:44 lo Link encap:Boucle locale inet adr:127.0.0.1 Masque:255.0.0.0 adr inet6: ::1/128 Scope:Hôte UP LOOPBACK RUNNING MTU:16436 Metric:1 Packets reçus:686 erreurs:0 :0 overruns:0 frame:0 TX packets:686 errors:0 dropped:0 overruns:0 carrier:0 collisions:0 lg file transmission:0 Octets reçus:64556 (64.5 KB) Octets transmis:64556 (64.5 KB) sudo iwlist scan lo Interface doesn't support scanning. eth0 Interface doesn't support scanning. 
uname -r -m 3.2.0-30-generic x86_64 cat /etc/network/interfaces auto lo iface lo inet loopback nm-tool NetworkManager Tool State: connected (global) - Device: eth0 [Connexion filaire 1] ------------------------------------------ Type: Wired Driver: jme State: connected Default: yes HW Address: 00:90:F5:C1:C6:45 Capabilities: Carrier Detect: yes Speed: 100 Mb/s Wired Properties Carrier: on IPv4 Settings: Address: 192.168.1.54 Prefix: 24 (255.255.255.0) Gateway: 192.168.1.1 DNS: 192.168.1.1 sudo rfkill list 1: hci0: Bluetooth Soft blocked: no Hard blocked: no The absence of a "Kernel driver in use:" line in the lspci -k output made me think the driver was not loaded, but it does seem to be: lsmod | grep rtl8192ce rtl8192ce 137478 0 rtlwifi 118749 1 rtl8192ce mac80211 506816 2 rtl8192ce,rtlwifi I found something disturbing in /var/log/syslog: Sep 14 11:40:11 pcroger kernel: [ 64.048783] rtl8192ce-0:rtl92c_init_sw_vars():<0-0> Failed to request firmware! Sep 14 11:40:11 pcroger kernel: [ 64.048795] rtlwifi-0:rtl_pci_probe():<0-0> Can't init_sw_vars. Sep 14 11:40:11 pcroger kernel: [ 64.048835] rtl8192ce 0000:02:00.0: PCI INT A disabled Sep 14 11:40:11 pcroger kernel: [ 64.943345] ata1.00: exception Emask 0x0 SAct 0x7fffffff SErr 0x0 action 0x6 frozen Sep 14 11:40:11 pcroger kernel: [ 64.943358] ata1.00: failed command: READ FPDMA QUEUED Sep 14 11:40:11 pcroger kernel: [ 64.943371] ata1.00: cmd 60/00:00:00:68:6a/04:00:0b:00:00/40 tag 0 ncq 524288 in Sep 14 11:40:11 pcroger kernel: [ 64.943374] res 40/00:00:00:00:00/00:00:00:00:00/00 Emask 0x4 (timeout) Sep 14 11:40:11 pcroger kernel: [ 64.943381] ata1.00: status: { DRDY } Ubuntu also takes forever to start (2 min).

    Read the article

  • Trigger Happy

    - by Tim Dexter
    It's been a while, I know; we'll say no more, OK? I'll just write ... In the latest BIP 11.1.1.6 release, and if I'm really honest, the release before this (we'll call it dot 5 for brevity), the boys and gals in the engine room have been real busy enhancing BIP with some new functionality. Those of you that use the scheduling engine in OBIEE may already know and use the 'conditional scheduling' feature. This allows you to be more intelligent about what reports get run and sent to folks on a scheduled basis. You create a 'trigger' analysis (answer) that is executed at schedule time prior to the main report. When the schedule rolls around, the trigger is run; if it returns rows, then the main report is run and delivered. If there are no rows returned, then the main report is not run. Useful, right? Your users are not bombarded with 20 reports in their inbox every week that they need to wade through. They get a handful that they know they need to look at. If you ensure you use conditional formatting in the report, then they can find the anomalous data in the reports very quickly and move on with the rest of their day. You could even think of OBIEE as a virtual team member, scouring the data on your behalf 24/7 and letting you know when it's found an issue. BI Publisher, wanting the team t-shirt and the khaki pants, has followed suit. You can now set up 'triggers' for it to execute before it runs the main report. Just like its big brother, if the scheduled report trigger returns rows of data, it then executes the main report. Otherwise, the report is skipped until the next schedule time rolls around. Sound familiar? BIP differs a little, in that you only need to construct a query to act as the trigger rather than a complete report. Let's assume we have a monthly wage by department report on a schedule. We only want to send the report to managers if their departmental wages reach and/or exceed a certain amount. The toughest part about this is coming up with the SQL to test the business rule you want to implement. For my example, it's not that tough: select d.department_name, sum(e.salary) as wage_total from employees e, departments d where d.department_id = e.department_id group by d.department_name having sum(e.salary) > 230000 We're looking for departments where the wage cost is greater than 230,000 Dexter Dollars! With a bit of messing around I found out you can parameterize the query. Users can then set a value at schedule time if they need to. Creating the trigger is straightforward enough. You can create multiple triggers for users to select at schedule time. Notice I also used a parameter in the query, :wamount. Note the matching parameter in the tree on the left. You also don't need to return multiple columns; one is fine, the key is whether there are rows returned. You can build the rest of your report as usual. At scheduling time the Schedule tab has a bit more on it. If your users want to set the trigger, they check the Use Trigger box. The page will then pop fields to pick the appropriate trigger they want to use, even a trigger on another data model if needed. Note it will also ask for the parameter value associated with the trigger. At this point you should note that the data model does not make a distinction between trigger and data model (extract) parameters, so users will see the parameters on both the General and Schedule tabs. If, perchance, you need parameters that are only used by the trigger, you can just hide them from the report using the Parameters popup in the report designer; just un-check the 'Show' box. I have tested the opposite case, where you do not want main report parameters seen in the trigger section, and BIP handles that for you! Once the report hits its allotted schedule time, the trigger is executed. Based on the results, the report will either run or be 'skipped.' Now you have a smarter scheduler that will only deliver reports when folks need to see them and take action on the contents. More official info here for developers and here for users.
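    For completeness, here is a sketch of what the parameterized form of that trigger query would look like, with the hard-coded threshold replaced by the :wamount bind parameter mentioned above (the parameter name must match the one defined in the data model):

    -- Trigger query sketch: the main report fires only for departments whose
    -- total wages exceed the value supplied for :wamount at schedule time.
    select d.department_name, sum(e.salary) as wage_total
    from employees e, departments d
    where d.department_id = e.department_id
    group by d.department_name
    having sum(e.salary) > :wamount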

    Read the article

  • Tuning Red Gate: #2 of Many

    - by Grant Fritchey
    In the last installment, I used the SQL Monitor tool to get a snapshot view of the current state of the servers at Red Gate that are giving us trouble. That snapshot suggested some areas where I should focus some time, primarily in which queries were being called most frequently or were running the longest. But you don't want to just run off & start tuning queries. Remember, the foundation for query tuning is the server itself. So, I want to be sure I'm not looking at some major hardware or configuration issues that I need to address first. Rather than look at the current status of the server, I'm going to look at historical data. Clicking on the Analysis tab of SQL Monitor, I get a whole list of counters that I can look at. More importantly, I can look at them over a period of time. Even more importantly, I can compare past periods with current periods to see if we're looking at a progressive issue or not. There are counters here that will give me an indication of load, and there are counters here that will tell me specifics about that load. First, I want to just look at the load to understand where the pain points might be. Trying to drill down before you have detailed information is just bad planning. The first thing I'm going to check is the CPU, just to see what's up there. I have two servers I'm interested in, so I'll show you both: Looking at the last 30 days for both servers, well, let's just say that the first server is about what I would expect. It has an average baseline behavior with occasional, regular peaks. This looks like a system with a fairly steady & predictable load that probably has a nightly batch process that spikes the processor. In short, normal stuff. The points where the CPU drops radically might be worth investigating further, because something changed the processing on this system a lot. But the other server? It's all over the place. There's no steady CPU behavior at all. It spikes high for long periods of time. It's up, it's down. I'm really going to have to spend time looking at CPU issues on this server to try to figure out what's up. It might be other processes being shared on the server, it might be something else. Either way, I'm going to have to spend time evaluating this CPU, especially those peaks about a week ago. Looking at the Pages/sec, again just a measure of load, I see that there are some peaks on the rg-sql02 server, but overall it looks like a fairly standard load. Plus, the peaks are only up to 550 pages/sec. Remember, this isn't a performance measure, just a load measurement, but from this I don't think we're looking at major memory issues, though I may want to correlate these counters with the CPU counters. Again, the other server looks like there's stuff going on. The load is not at all consistent. In fact, there was a point earlier in the year that looks pretty severe. Plus, the spikes here are twice the size of the other system. We've got a lot more load going on here and I will probably need to drill down on memory usage on this server. Taking a look at the disk transfers/sec, the load on both systems seems to roughly correspond to the other load indicators. Notice that drop right in the middle of the graph for rg-sql02. I wonder if the office was closed over that period or a system was down for maintenance. 
If I saw spikes in memory or disk that corresponded to the dip in CPU, you could assume something was using those other resources and causing the drop, but when everything goes down, it just means that the system isn't getting used. The disk on the rg-sql01 system isn't spiking exactly the same way as the memory & CPU, so there's a good chance (a chance, mind you) that any performance issues might not be disk related. However, notice that huge jump at the beginning of the month. Several disks were used more than they were for the rest of the month. That's the load on the server. What about the load on SQL Server itself? Next time.
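    If you want to eyeball comparable numbers straight from SQL Server while waiting for the next installment, a quick sketch like the following against sys.dm_os_performance_counters can serve as a rough cross-check of what the monitoring tool is charting. Note these are SQL Server's own Buffer Manager counters, not the Windows Memory\Pages/sec counter discussed above, and the per-second counters are cumulative since the last restart, so you would sample twice and take the difference:

    -- Sample SQL Server buffer counters as a rough cross-check of memory load.
    SELECT [object_name], counter_name, cntr_value
    FROM sys.dm_os_performance_counters
    WHERE counter_name IN ('Page life expectancy', 'Page reads/sec', 'Page writes/sec');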

    Read the article

  • MDM for Tax Authorities

    - by david.butler(at)oracle.com
    In last week’s MDM blog, we discussed MDM in the Public Sector. I want to continue that thread. After all, no industry faces tougher data quality problems than governmental organizations, and few industries suffer more significant down side consequences to poor operations than local, state and federal governments. One key challenge area is taxation. Tax Authorities face a multitude of IT challenges. Firstly, the data used in tax calculations is increasing in volume and complexity. They must improve service by introducing multi-channel contact centers and self-service capabilities. Security concerns necessitate increasingly sophisticated data protection procedures. And cost constraints are driving Tax Authorities to rely on off-the-shelf software for many of their functional areas. Compounding these issues is the fact that the IT architectures in operation at most revenue and collections agencies are very complex. They typically include multiple, disparate operational and analytical systems across which the sum total of data about individual constituents is fragmented. To make matters more complicated, taxation is not carried out by a single jurisdiction, and often sources of income including employers, investments and other sources of taxable income and deductions must also be tracked and shared among tax authorities. Collectively, these systems are involved in tax assessment and collections, risk analysis, scoring, tracking, auditing and investigation case management. The Problem of Constituent Data Management The infrastructure described above makes it very difficult to create a consolidated representation of a given party. Differing formats and data models mean that a constituent may be represented in one way in one system and in a different way in another. Individual records are frequently inaccurate, incomplete, out of date and/or inconsistent with other records relating to the same constituent. When constituent data must be aggregated and scored, information within each system must be rationalized and normalized so the agency can produce a constituent information file (CIF) that provides a single source of truth about that party. If information about that constituent changes, each system in turn must be updated. There have been many attempts to solve this problem with technology: from consolidating transactional systems to conducting manual systems integration projects and superimposing layers of business intelligence and analytics. All these approaches can be successful in solving a portion of the problem at a specific point in time, but without an enterprise perspective, anything gained is quickly lost again. Oracle Constituent Data Mastering for Tax Authorities: A Single View of the Constituent Oracle has a flexible and long-term solution to the problem of securely integrating and managing constituent data. The Oracle Solution for mastering Constituent Data for Tax Authorities is based on two core product offerings: Oracle Customer Hub and – optionally – Oracle Application Integration Architecture (AIA). Customer Hub is a master data management (MDM) product that centralizes, de-duplicates, and enriches constituent data. It unifies fragmented information without disrupting existing business processes or IT investments. Role based data access and privacy rules guarantee maximum security and privacy. Data is continuously and automatically synchronized with all source systems. 
With the Oracle Customer Hub managing the master constituent identity, every department can capture transaction activity against the same record, improving reporting accuracy, employee productivity, reliability of constituent analytics, and day-to-day constituent relationships. Oracle Application Integration Architecture provides a collection of core pre-built processes to support out of the box Master Data Governance across Oracle Customer Hub, Siebel CRM, and Oracle E-Business Suite. It also provides a framework to enable MDM integrations with other Oracle and non-Oracle applications. Oracle AIA removes some of the key inhibitors to implementing a service-oriented architecture (SOA) by providing a pre-built SOA-based middleware foundation as well as industry-optimized service oriented applications, all built around a SOA governance model that encourages effective design and reuse. I encourage you to read Oracle Solution for Mastering Constituents Data for Public Sector – Tax Authorities by Roberto Negro. It is an outstanding whitepaper that describes how the Oracle MDM solution allows you to create a unified, reconciled source of high-quality constituent data and gain an accurate single view of each constituent. This foundation enables you to lower the costs associated with data quality and integration and create a tax organization that is efficient, secure and constituent-centric. Also, don’t forget the upcoming webcast on Thursday, February 10th: Deliver Improved Services to Citizens at Lower Cost to your Organization Our Guest Speaker is Ruben Spekle, from Capgemini. He will also provide insight into Public Sector Master Data Management and Case Management implementations including one that was executed for a Dutch Government Agency. If you are interested in how governmental organizations from around the world are using MDM to advance their cause, click here to register for the webcast.

    Read the article

  • EPM and Business Analytics Talking-head Videos from Oracle OpenWorld 2013

    - by Mike.Hallett(at)Oracle-BI&EPM
    Here is a selection of 2 to 3 minute video interviews at this year’s Oracle OpenWorld: 1. George Somogyi, Solutions Architect, New Edge Group, talks about the importance of having their integrated Oracle Hyperion Platform consisting of Oracle Hyperion Financial Management, Oracle Hyperion Financial Data Quality Management, Oracle E-Business Suite R12 and Oracle Business Intelligence Extended Edition plus their use of Oracle Managed Cloud Services. Speaker: George Somogyi @ http://youtu.be/kWn0dQxCUy8 2. Gregg Thompson, Director of Financial Systems for ADT, talks about using Oracle Data Relationship Management prior to implementing an Enterprise Performance Management solution. Gregg confirmed that there are big benefits to bringing the full Oracle Hyperion Financial Close suite online with Oracle DRM as the metadata source. Reduced maintenance time and use of external consultants translates into significant time and cost savings and faster implementation times. Speaker: Gregg Thompson @ http://youtu.be/XnFrR9Uk4xk 3. Jeff Spangler, Director Financial Planning and Analysis for Speedy Cash Holdings Corp, talked to us about the benefits achieved through implementing Oracle Hyperion Planning and financial reporting solutions. He also describes how the use of Data Relationship Management will keep the process running smoothly now and in the future. Speaker: Jeff Spangler @ http://youtu.be/kkkuMkgJ22U 4. Marc Seewald, Senior Director of Product Management for Oracle Hyperion Tax Provision at Oracle, talks about Oracle Hyperion Tax Provision, how it is an integral part of the financial close process and that it provides better internal controls and automation of this task. Marc talks about Oracle Partners and customers alike who are seeing great value. Speaker: Marc Seewald @ http://youtu.be/lM_nfvACGuA 5. Matt Bradley, SVP of Product Development for Enterprise Performance Management (EPM) Applications at Oracle, talked to us about different deployment options for Oracle EPM. Cloud services (SaaS), managed services, on-premise, off-premise all have their merits, and organizations need flexibility to easily move between them as their companies evolve. Speaker: Matt Bradley @ http://youtu.be/ATO7Z9dbE-o 6. Neil Sellers, Partner, Qubix International talks about their experience with previewing Oracle’s new Planning and Budgeting Cloud Service. He describes the benefits of the step-by-step task lists, the speed of getting the application up and running, and the huge benefits of not having to manage the software and hardware side of the planning process. Speaker: Neil Sellers @ http://youtu.be/xmosO28e4_I 7. Praveen Pasupuleti, Senior Business Intelligence Development Manager of Citrix Systems Inc., talks about their Oracle Hyperion Planning upgrade and the huge performance improvement now experienced in forecasting. He also talked about the benefits of Oracle Hyperion Workforce Planning achieved by Citrix. Speaker: Praveen Pasupuleti @ http://youtu.be/d1e_4hLqw8c 8. CheckPoint Consulting, talked to us about how Enterprise Performance Management should be viewed as an entire solution, rather than as a bunch of applications in silos, to provide significant benefits; and how Data Relationship Management can tie it all together effectively. Speaker: Ron Dimon @ http://youtu.be/sRwbdbbXvUE 9. 
Sonal Kulkarni, Enterprise Performance Management Leader, Cummins Inc., talks about their use of Oracle Hyperion Financial Close Management (Account Reconciliation Manager), Oracle Hyperion Financial Management and Oracle Hyperion Financial Data Quality Management and how this is providing efficiency, visibility and compliance benefits. Speaker: Sonal Kulkarni @ http://youtu.be/OEgup5dKyVc 10. Todd Renard, Manager Financial Planning and Business Analytics for B/E Aerospace Inc., talks about the huge benefits that B/E Aerospace is experiencing from Oracle Financial Close Suite. He was extremely excited about Oracle Hyperion Financial Data Quality Management and how this helps them integrate a new business in as little as three weeks. Speaker: Todd Renard @ http://youtu.be/nIfqK46uVI8 11. Peter Smolianski, Chief Technology Officer for the District of Columbia Courts, talked to us about how D.C. Courts is using Oracle Scorecard and Strategy Management to push their 5 year plan forward, to report results to their constituents, and take accountability for process changes to become more efficient. Speaker: Peter Smolianski @ http://www.youtube.com/watch?v=T-DtB5pl-uk 12. Rich Wilkie, Senior Director of Product Management for Financial Close Suite at Oracle, talked to us about Oracle Financial Management Analytics. He told us how the prebuilt dashboards on top of Oracle Hyperion Financial Close Suite make it easy for everyone to see the numbers and understand where they are in the close process, and if there is an issue, they can see where it is. Executives are excited to get this information on mobile devices too. Speaker: Rich Wilkie @ http://www.youtube.com/watch?v=4UHuHgx74Yg 13. Dinesh Balebail, Senior Director of Software Development for Oracle Hyperion Profitability and Cost Management, talked to us about the power and speed of Oracle Hyperion Profitability and Cost Management and how it is being used to do deep costing for Telecoms, Hospitals, Banks and other high transaction volume organizations effectively. Speaker: Dinesh Balebail @ http://youtu.be/ivx5AZCXAfs

    Read the article

  • Limiting Audit Exposure and Managing Risk – Q&A and Follow-Up Conversation

    - by Tanu Sood
    Thanks to all who attended the live ISACA webcast on Limiting Audit Exposure and Managing Risk with Metrics-Driven Identity Analytics. We were really fortunate to have Don Sparks from ISACA moderate the webcast featuring Stuart Lincoln, Vice President, IT P&L Client Services, BNP Paribas, North America and Neil Gandhi, Principal Product Manager, Oracle Identity Analytics. Stuart's insights, given the team's role in providing IT for P&L Client Services and his tremendous experience in identity management and establishing sustainable compliance programs, were a true value-add at yesterday's webcast. And if you are a healthcare organization looking to solve your compliance and security challenges, we recommend you join us for a live webcast on Tuesday, November 29 at 10 am PT. The webcast will feature experts from Kaiser Permanente, PricewaterhouseCoopers and Oracle, and the focus of the discussion will be around the compliance challenges a healthcare organization faces and best practices for tackling those. Here are the details: Healthcare IT News Webcast: Managing Risk and Enforcing Compliance in Healthcare with Identity Analytics, Tuesday, November 29, 2011, 10:00 a.m. PT / 1:00 p.m. ET. Register Today. The ISACA webcast replay is now available on-demand and the slides are also available for download. Since we didn't have time to address all the questions we received during the live Q&A portion of the webcast, we have captured responses to the remaining questions here. Please continue to provide us your feedback and insights from your experience in deploying identity compliance solutions. Q. Can you please clarify the mechanism utilized to populate the Identity Warehouse from each individual application's access management function / files? A. Oracle Identity Analytics (OIA) supports direct imports from applications. Data collection is based on Extract, Transform and Load (ETL), which eliminates the need to write connectors to different applications. Oracle Identity Analytics' import engine supports complex entitlement feeds saved as either text files or XML. The imports can be scheduled on a periodic basis or triggered as needed. If the applications are synchronized with a user provisioning solution like Oracle Identity Manager, Oracle Identity Analytics has a seamless integration to pull in data from Oracle Identity Manager. Q. Can you provide a short summary of the new features in your latest release of Oracle Identity Analytics? A. Oracle recently announced availability of enhanced Oracle Identity Analytics. This release focused on easing the certification process by offering risk-analytics-driven certification, advanced certification screens, business-centric views and significant improvement in performance, including 3X faster data imports, 3X faster certification campaign generation and advanced auto-certification features that will allow organizations to improve user productivity by up to 80%. Closed-loop risk feedback and IT policy monitoring with Oracle Identity Manager, a leading user provisioning solution, allows for more accurate certification reviews. And OIA's improved performance enables customers to scale compliance initiatives supporting millions of user entitlements across thousands of applications, whether on premise or in the cloud, without compromising speed or integrity. Q. Will ISACA grant a CPE credit for attending this ISACA-sponsored webinar today? A. From ISACA: Hello and thank you for your interest in the 2011 ISACA Webinar Program!  
Unfortunately, there are no CPEs offered for this program, archived or live. We will be looking into the feasibility of offering them in the future.
Q. Would you be able to use this to help manage licenses for software? That is to say, could it track software that is not used by a user, thus eliminating the software license?
A. OIA’s integration with Oracle Identity Manager, a leading user provisioning solution, allows organizations to detect ghost accounts or unused accounts via account reconciliation. Based on a company’s policies, this could trigger an automated workflow for account deletion or a request for further investigation. Closed-loop feedback between the two solutions would then allow visibility into the complete audit trail of when the account was detected, the action taken, by whom, when, and the current status.
Q. We have quarterly attestations and .xls mechanisms are not working. Once the identity data is correlated in Identity Analytics, do you then automate access certification?
A. OIA’s identity warehouse analyzes and correlates identity data across various resources, which allows OIA to determine a user’s risk profile and who the access review request should go to, along with all the relevant access details of the user. The access certification manager gets a notification on what to review and when, and the relevant data is presented in a business-friendly screen. Based on the result of the access certification process, actions are triggered and results are recorded and archived. Access review managers have visual risk indicators that also allow them to prioritize access certification tasks and efforts.
Q. How does Oracle Identity Analytics work with Cloud Security?
A. For enterprises looking to build their own cloud(s), Oracle offers a set of security services that cloud developers can leverage, including Oracle Identity Analytics. For enterprises looking to manage their compliance requirements without hosting those in-house, and instead having a hosting provider offer managed Identity Management services to the organization, Oracle Identity Analytics can be leveraged much the same way as you would in an on-premise (within the enterprise) environment. In fact, organizations today are leveraging Oracle Identity Analytics to manage identity compliance in both these ways.
Q. Would you recommend this as a cost-effective solution for a smaller organization with about 2,500 users?
A. The key return on investment (ROI) of Oracle Identity Analytics is derived from automating compliance processes, thereby eliminating administrative overhead, minimizing errors, maintaining cost- and time-effective sustainable compliance processes, and minimizing audit exposure and penalties. Of course, there are other tangible benefits that are derived from an Oracle Identity Analytics implementation, as outlined in the webcast. For a quantitative analysis of your requirements and a potential ROI calculation, we recommend you refer to the Forrester study on the Total Economic Impact of Oracle Identity Analytics. For an in-person discussion, please email Richard Caldwell.

    Read the article

  • A little primer on using TFS with a small team

    - by johndoucette
    The scenario: a small team of three developers, mostly in maintenance mode, with traditional ASP.NET, classic ASP, .NET integration services and utilities for the company’s third-party packages, and a bunch of Java-based ColdFusion web applications, all under Visual SourceSafe (VSS). They are about to embark on a huge SharePoint 2010 new-construction project and wanted to use Subversion instead of VSS. TFS was a foreign word and smelled of “high cost” and an “over-complicated process”. Since they had no preconceptions about the old TFS versions (‘05 & ‘08), it was fun explaining how simple it was to install a TFS server and get the ball rolling, with or without all the heavy stuff one sometimes associates with such a huge and powerful application lifecycle management product. So, how does a small team begin using TFS?
1. Start by using source control and migrate current VSS source trees into TFS. You can take the latest version or migrate the entire version history. It’s up to you whether you want a clean start or need quick access to all the version notes and history of the bits.
2. Since most shops are mainly in maintenance mode with existing applications, begin using bug workitems for everything. When you receive an issue/bug from your current tracking system, manually enter the workitem in TFS right through Visual Studio. You can automate the integration with the current tracking system later or replace it entirely. Believe me, this thing is powerful and can handle even the largest of help desks.
3. With new construction, begin work with requirement and task workitems and follow the traditional sprint-based development lifecycle. Obviously, some minor training will be needed, but don’t fear; this is very intuitive and MSDN has a ton of lesson-based labs and videos.
4. For the Java developers, use the new Team Explorer Everywhere 2010 plugin (previously known as Teamprise). There is a seamless interface in Eclipse, but also a good command-line utility for other environments such as Dreamweaver.
5. Wait to fully integrate the whole workitem/project management/testing process until your team is familiar with the integrated workitems for bugs and code. After a while, you will see the team wanting more transparency into the work they are all doing and, naturally, everyone will want workitems to help them organize the chaos!
6. Management will be limited in the value of the reports until you have a fully blown implementation of project planning, construction, build, deployment and testing. However, there are some basic “bug rate” reports and current backlog listings that can provide good information.
Some notable explanations of TFS:
Work Item Tracking and Project Management - A workitem represents the unit of work within the system, which enables tracking of all activities produced by a user, whether a developer, business user, project manager or tester. The properties of a workitem, such as linked changesets (checked-in code), who updated the data and when, and the states and reasons for change, are all transitioned to a data warehouse within TFS for reporting purposes. A workitem can be defined as a "bug", "requirement", "test case", or a "change request". Workitems drive the work effort of the individual assigned to them and also play a key role in defining what needs to be done. Workitems are the things the team needs to do to accomplish a goal.
Test Case Management - Starting with a workitem known as a "test case", a tester (or developer) can now author and manage test cases within a formal test plan subsystem. Although TFS supports the test case workitem type, there is a new product known as VS Test Professional 2010 which allows a tester to facilitate manual tests, including fast-forwarding steps in the process to arrive at the assertion point quickly. This repeatable process provides quick regression tests and can be conducted by the business user to ensure completeness during UAT. In addition, developers can no longer respond to a bug with the line "cannot reproduce". With every test run, attachments including the recorded session, captured environment configurations and settings, screen shots, IntelliTrace (debugging history), and, in some cases if Lab Manager is being used, a snapshot of the tested environment are available.
Version Control - A modern system allowing shared check-in/check-out, excellent merge conflict resolution, shelvesets (personal check-ins), branching/merging visualization, public workspaces, gated check-ins, security hierarchy capabilities, and changeset/workitem tracking. Knowing what any developer has done with the code is now much easier to picture, which makes issues easier to resolve.
Team Build - Automate the compilation process, whether you need it to run whenever a developer checks in code, periodically (such as nightly builds for testers in the morning), or as manual builds to be deployed into production. Each build can run through pre-determined tests, perform code analysis to see whether the developer conforms to the team standards, and reject the build if either fails.
Project Portal & Reporting - Provide management with a dashboard offering insight into the project(s): "where are we" at each step of the way, including past iterations and the current burndown rate. Enabling this feature is easy, as it seamlessly interfaces with existing SharePoint implementations.

    Read the article

  • CodePlex Daily Summary for Thursday, August 21, 2014

    CodePlex Daily Summary for Thursday, August 21, 2014
Popular Releases
Outlook 2013 Backup Add-In: Outlook Backup Add-In 1.3: Changelog for new version: Added button in config window to reset the last backup time (this will trigger the backup after closing Outlook). Minimum interval set to 0 (backup at each closing of Outlook). Catch exception when data store entry is corrupt. Added two parameters (prefix and suffix) to automatically rename the backup file. Updated VSTO Runtime to 10.0.50325. Upgraded project to Visual Studio 2013. Added optional command to run after backup (e.g. pack backup files, ...) Add...
File Explorer for WPF: FileExplorer3_20August2014: Please see Aug14 Update.
ODBC Connect: v1.0: ODBC Connect executables for both 32bit and 64bit ODBC data sources.
MSSQL Deployment Tool: Microsoft SQL Deploy Tool v1.3.1: MicrosoftSqlDeployTool: v1.3.1.38348. What's changed? Updated namespace and assembly name. Bug fixing.
SharePoint 2013 Search Query Tool: SharePoint 2013 Search Query Tool v2.1: Layout improvements. Bug fixes. Stores auth method and user name. Moved experimental settings to Advanced box.
CtrlAltStudio Viewer: CtrlAltStudio Viewer 1.2.2.41183 Alpha: This alpha of the CtrlAltStudio Viewer provides some preliminary Oculus Rift DK2 support. For more details, see the release notes linked to below. Release notes: http://ctrlaltstudio.com/viewer/release-notes/1-2-2-41183-alpha Support info: http://ctrlaltstudio.com/viewer/support Privacy policy: http://ctrlaltstudio.com/viewer/privacy Disclaimer: This software is not provided or supported by Linden Lab, the makers of Second Life.
HDD Guardian: HDD Guardian 0.6.1: New: package now includes smartctl 6.3. Removed: standard notification e-mail; now you have to set your mail server to send e-mail alerts. Bugfix: USB detection error; custom e-mail server settings issue; bottom panel displays a wrong ATA error count.
VG-Ripper & PG-Ripper: VG-Ripper 2.9.62: Changes: NEW: added support for 'MadImage.org', 'ImgSpot.org', 'ImgClick.net', 'Imaaage.com', 'Image-Bugs.com', 'Pictomania.org', 'ImgDap.com' and 'FileSpit.com' links. FIXED: 'ImgSee.me' links.
Exchange Database Recovery With and Without Log Files is Possible: Exchange Recovery Application: This Exchange recovery software comes with a free trial edition which helps users inspect the working capability of the recovery process. Download the free demo version and repair inaccessible mailboxes from an EDB file without any obstructions.
MongoRepository: MongoRepository 1.6.6: Installing using NuGet (recommended): MongoRepository is now a NuGet package for your convenience. Step-by-step instructions can be found in Installing MongoRepository using NuGet. Installing using binaries: you can also choose to download the binaries instead of using NuGet. There are 2 downloads: mongorepository_full.x.x.x contains all binaries required (MongoRepository and the 10gen C# driver); mongorepository.x.x.x contains only the MongoRepository binary. Make sure you reference MongoReposit...
Cryptography Enumerations JavaScript Shell: Cryptography Enumerations JavaScript Shell 1.0.0: First release.
Magick.NET: Magick.NET 7.0.0.0001: Magick.NET linked with ImageMagick 7-Beta.
CMake Tools for Visual Studio: CMake Tools for Visual Studio 1.2: This release adds the following new features and bug fixes over CMake Tools for Visual Studio 1.1: Added support for CMake 3.0. Added support for word completion. Added IntelliSense support for the CMAKE_HOST_SYSTEM_INFORMATION command. Fixed syntax highlighting for tokens beginning with escape sequences. Fixed issue uninstalling CMake Tools for Visual Studio after Visual Studio has been uninstalled.
GW2 Personal Assistant Overlay: GW2 Personal Assistant Overlay 1.1: Overview: 1.1 is the second 'stable' release of the GW2 Personal Assistant Overlay. This version includes just a couple of very minor features and some minor bug fixes. For details regarding installation, setup, and general use, see Documentation. Note: if you were using a previous version, you will probably want to copy over the following user settings files: GW2PAO.DungeonSettings.xml, GW2PAO.EventSettings.xml, GW2PAO.WvWSettings.xml, GW2PAO.ZoneCompletionSettings.xml. New features: Added new "No...
Fluentx: Fluentx v1.5.3: Added a few more extension methods.
fastJSON: v2.1.2: 2.1.2 - bug fix for circular references.
JPush.NET: JPush Server SDK 1.2.1 (For JPush V3): Assembly: 1.2.1.24728. JPush REST API version: v3. JPush documentation reference. .NET Framework: v4.0 or above. Sample class: JPushClientV3. 2014 August 15th.
SEToolbox: SEToolbox 01.043.008 Release 1: Changed ship/station names to use new DisplayName instead of Beacon/Antenna. Fixed issue with updated SE binaries 01.043.018 using new Voxel Material definitions.
Google .Net API: Drive.Sample: Google .NET Client API – Drive.Sample. Instructions for the Google .NET Client API – Drive.Sample: browse the source at http://code.google.com/p/google-api-dotnet-client/source/browse/?repo=samples#hg%2FDrive.Sample or the main file Program.cs at http://code.google.com/p/google-api-dotnet-client/source/browse/Drive.Sample/Program.cs?repo=samples. Checkout instructions - prerequisites: install Visual Studio and Mercurial (http://mercurial.selenic.com/). ...
FineUI - jQuery / ExtJS based ASP.NET Controls: FineUI v4.1.1
New Projects
AesonFramework: Aeson Framework.
Bullet for Windows: Bullet is used to simulate collision detection, soft and rigid body dynamics. This is an early version of Bullet with support for Windows and Windows Phone.
CC-Homework: This is a collection of homework projects completed during coder camps.
Integrating Exchange Server 2010 Mail Attachments with SharePoint 2013 via C#: Integrating Exchange Server 2010 mail attachments with SharePoint 2013 via C#.
Mosquito.ViewModel: A minimal MVVM library aimed at removing the pain of implementing ViewModels.
OWL API for .NET: An open source project that ports the Java OWLAPI to .NET. It uses IKVM and contains scripts to compile the OWLAPI libraries and Java reasoners, with samples.
PCStoreManager: Supplementary project for an object-oriented programming course.
StockEr: SPA application for stock analysis.
SysLog Server: This is a free Syslog server for Windows.
UnixtimeHelpers: This minimalistic assembly contains Unix-time converters: converting DateTime to Unix time (represented as a double) and vice versa.
Winner - Scommessa vincente: Winner displays the odds for upcoming major sporting events and lets you fill in a betting slip with one or more bets.

    Read the article

  • Agile Testing Days 2012 – Day 2 – Learn through disagreement

    - by Chris George
    I think I was in the right place! During Day 1 I kept on reading tweets about the Lean Coffee that had happened earlier that morning. It intrigued me and I figured in for a penny, in for a pound, and set my alarm for 6:45am. Following the award night the night before, it was _really_ hard getting up when it went off, but I did, and after a very early breakfast, set off for the 10 min walk to the Dorint. With Lean Coffee due to start at 07:30, I arrived at the hotel and made my way to one of the hotel bars. I soon realised I was in the right place because, although the bar was empty, there was a table with post-its and pens! This MUST be the place! The premise of Lean Coffee is to have several small timeboxed discussions. Everyone writes down what they would like to discuss on post-its, which are then briefly explained and submitted to the pile. Once everyone is done, the group dot-votes on the topics. The topics are then sorted by the dot-vote counts and the discussions begin. Each discussion had 8 mins to start with, which prevented the discussions getting too far off topic. After the time elapsed, the group voted on whether to extend the discussion by a further 4 mins or move on. Several discussions were had around training, soft skills, etc. The conversations were really interesting and there were quite a few good ideas. Overall it was a very enjoyable experience, certainly worth the early start!
Make Melly Happy
Following Lean Coffee was real coffee, and much needed that was! The first keynote of the day was “Let’s help Melly (Changing Work into Life)” by Jurgen Appelo. This was a very interesting presentation, and set the day up nicely. The theme of the keynote was that projects are about the people, more so than the actual tasks. He started by showing a photo of an employee, ‘Melly’, who looked happy enough. He then stated that she looked happy but actually hated her job. In fact, 50% of Americans hate their jobs, and he went on to say that, the world over, 50% of people hate their jobs. Jurgen talked about many ways to reduce the feedback cycle, not only of the project, but of people management: ideas such as happiness doors, happiness tracking (drawing lines on a wall indicating your happiness for that day) and kudo boxes (to compliment a colleague for good work). All of these ideas (and more) stimulate conversation amongst the team and lead to early detection of issues and investigation of solutions. I’ve massively simplified Jurgen’s keynote and have certainly not done it justice, so I will post a link to the video once it’s available.
Following more coffee, the next talk was “How releasing faster changes testing” by Alexander Schwartz. This is a topic very close to our hearts at the moment, so I was eager to find out any juicy morsels that could help us achieve more frequent releases, and Alex did not disappoint. He started off by confirming something that I have been a firm believer in for a number of years now: adding more people can do more harm than good when trying to release. This is for a number of reasons, but just adding new people to a team at such a critical time can be more of a drain on resources than a help. The alternative is to have the whole team share responsibility for faster delivery, so the whole team is responsible for quality and testing. Obviously you will have the test engineers on the project who have the specialist skills, but there is no reason that the entire team cannot do exploratory testing on the product. This links nicely with the developer exploratory testing presented by Sigge on Day 1, and is certainly something that my team are really striving towards. Focus on cycle time: what can be done to reduce the time between dev cycles and release cycles? What stops a release? What delays a release? All good, solid questions that can be answered. Alex suggested that perhaps the product doesn’t need to be fully tested. Doing less testing will reduce the cycle time and therefore get the release out faster. He suggested a risk-based approach to planning what testing needs to happen. Reducing testing could have an impact on revenue if it causes harm to customers, so test the ‘right stuff’! Determine a set of tests that are ‘face saving’ or ‘smoke’ tests; these tests cover the core functionality of the product and aim to prevent major embarrassment if those areas were to fail! Amongst many other very good points, Alex suggested that a good approach would be to release after every new feature is added: do a bit of work -> release, do some more work -> release. By releasing small increments of work, the impact on the customer of bugs being introduced is reduced.
Red Pill, Blue Pill
The second keynote of the day was “Adaptation and improvisation – but your weakness is not your technique” by Markus Gartner and proved to be another very good presentation. It started off quoting lines from the Matrix which relate to adapting, improvising, realisation and mastery, which had a lot of the nerds in the room smiling! Markus went on to explain how, through deliberate practice (and a lot of it!), you can achieve mastery, but then you never stop learning. Through methods such as code retreats, testing dojos and workshops you can continually improve and learn. The code retreat idea was one that interested me. It involved pairing to write an automated test for, say, 45 mins, then deleting all the code, finding a different partner and writing the same test again! This is another keynote where the video will speak louder than anything I can write here! Markus did elaborate on something that Lisa and Janet had touched on yesterday whilst busting the myth that “Testers Must Code”. Whilst it is true that to be a tester you don’t need to code, it is becoming more common that there is a crossover happening, where more testers are coding and more programmers are testing. Markus made a special distinction between programmers and developers, as testers develop test code, so this helped to make that clear.
“Extending Continuous Integration and TDD with Continuous Testing” by Jason Ayers was my next talk after lunch. We already do CI and a bit of TDD on my project team, so I was interested to see what this continuous testing thing was all about and whether it would actually work for us. At the start of the presentation I was of the opinion that it just would not work for us, because our tests are too slow, and that would be the case for many people. Jason started off by setting the scene and saying that those doing TDD spend between 10-15% of their time waiting for tests to run. This can be reduced by testing less often or reducing the test time, but this then increases the risk of introduced bugs not being spotted quickly. Therefore, in comes Continuous Testing (CT). CT systems run your unit tests whenever you save some code and run them in the background so you can continue working. This is a really nice idea, but to do this your tests must be fast, independent and reliable. The latter two should be the case anyway, and the first is ideal, but hard! Jason made several suggestions for making tests fast. Firstly, keep the scope of the test small; secondly, spin off any expensive tests into a suite which is run, perhaps, overnight, or outside of the CT system at any rate. So this started to change my mind: perhaps we could re-engineer our tests and continuously run the quick ones to give an element of coverage. This talk was very interesting and I’ve already tried a couple of the tools mentioned on our product (Mighty Moose and NCrunch). Sadly, due to the way our solution is built, it currently doesn’t work, but we will look at whether we can make this work, because this has the potential to be a mini-game-changer for us.
Using the wrong data
The final keynote of the day was “Reinventing software quality” by Gojko Adzic. He opened the talk with the statement “We’ve got quality wrong because we are using the wrong data”! Gojko then went on to explain that we should judge a bug by whether the customer cares about it, not by whether we think it’s important. Why spend time fixing issues that the customer just wouldn’t care about, and release months later because of this? Surely it’s better to release now and get customer feedback? This was another reference to the idea that it’s better to build the right thing wrong than the wrong thing right. Get feedback early to make sure you’re making the right thing. Gojko then showed his hierarchy of quality, something very analogous to Maslow’s hierarchy of needs:
Successful – does it contribute to the business?
Useful – does it do what the user wants?
Usable – does it do what it’s supposed to without breaking?
Performant/Secure – is it secure and is the performance acceptable?
Deployable/Functionally OK – can it be deployed without breaking?
He then explained that user stories should focus on change. In other words, they should focus on the user’s needs, not the user’s process. Describe what the change will be and how that change will happen, then measure it!
Networking and Beer
Following the day’s closing keynote, there were drinks and nibbles for the ‘Networking’ evening. This was a great opportunity to talk to people. I find approaching strangers very uncomfortable, but once again, when in Rome! Pete Walen and I had a long conversation about only fixing issues that the customer cares about versus fixing issues that make you proud of your software! Without saying much, and by asking the right questions, Pete made me re-evaluate my thoughts on the matter. Clever, very clever! Oh, and he ‘bought’ me a beer! My Takeaway Triple from Day 2:
Release small and release often to minimize issues creeping in and get faster feedback from ‘the real world’.
Focus on issues that the customers care about, not what we think is important.
It’s okay to disagree with someone, even if they are well-respected agile testing gurus; that’s how discussion and learning happens!

    Read the article

  • The Virtues and Challenges of Implementing Basel III: What Every CFO and CRO Needs To Know

    - by Jenna Danko
    The Basel Committee on Banking Supervision (BCBS) is a group tasked with providing thought leadership to the global banking industry.  Over the years, the BCBS has released volumes of guidance in an effort to promote stability within the financial sector.  By effectively communicating best practices, the Basel Committee has influenced financial regulations worldwide.  Basel regulations are intended to help banks: more easily absorb shocks due to various forms of financial-economic stress; improve risk management and governance; and enhance regulatory reporting and transparency.  In June 2011, the BCBS released Basel III: A global regulatory framework for more resilient banks and banking systems.  This new set of regulations included many enhancements to previous rules and will have both short- and long-term impacts on the banking industry.  Some of the key features of Basel III include:
A stronger capital base - more stringent capital standards and higher capital requirements, plus the introduction of capital buffers.
Additional risk coverage - enhanced quantification of counterparty credit risk, credit valuation adjustments, wrong-way risk, and an asset value correlation multiplier for large financial institutions.
Liquidity management and monitoring - introduction of a leverage ratio.
Even more rigorous data requirements.
To implement these features, banks need to embark on a journey replete with challenges. These can be categorized into three key areas: data, models and compliance.
Data Challenges
Data quality - all standard dimensions of data quality (DQ) have to be demonstrated.  Manual approaches are now considered too cumbersome and automation has become the norm.
Data lineage - data lineage has to be documented and demonstrated.  The PPT/Excel approach to documentation is being replaced by metadata tools.  Data lineage has become dynamic due to a variety of factors, making static documentation outdated quickly.
Data dictionaries - a strong and clean business glossary is needed, with proper identification of business owners for the data.
Data integrity - a strong, scalable architecture with workflow tools helps demonstrate data integrity.  Manual touch points have to be minimized.
Data relevance/coverage - data must be relevant to all portfolios and storage devices must allow for sufficient data retention.  Coverage of both on- and off-balance-sheet exposures is critical.
Model Challenges
Model development - requires highly trained resources with both quantitative and subject matter expertise.
Model validation - all Basel models need to be validated. This requires additional resources with skills that may not be readily available in the marketplace.
Model documentation - all models need to be adequately documented.  Creation of document templates and model development processes/procedures is key.
Risk and finance integration - this integration is necessary for Basel, as the Allowance for Loan and Lease Losses (ALLL) is calculated by Finance, yet Expected Loss (EL) is calculated by Risk Management – and they need to somehow be equal.  This is tricky at best from an implementation perspective.
Compliance Challenges
Rules interpretation - some Basel III requirements leave room for interpretation.  A misinterpretation of regulations can lead to delays in Basel compliance and undesired reprimands from supervisory authorities.
Gap identification and remediation - internal identification and remediation of gaps ensures smoother Basel compliance and audit processes.  However, business lines are challenged by the competing priorities which arise from regulatory compliance and business-as-usual work.
Qualification readiness - providing internal and external auditors with robust evidence of a thorough examination of the readiness to proceed to parallel run and Basel qualification.
In light of new regulations like Basel III and local variations such as the Dodd-Frank Act (DFA) and Comprehensive Capital Analysis and Review (CCAR) in the US, banks are now forced to ask themselves many difficult questions.  For example, executives must consider: How will Basel III play into their risk appetite? How will they create project plans for Basel III when they haven’t yet finished implementing Basel II? How will new regulations impact capital structure, including profitability and capital distributions to shareholders? After all, new regulations often lead to diminished profitability as well as an assortment of implementation problems, as we discussed earlier in this note.  However, by requiring banks to focus on premium growth, regulators increase the potential for long-term profitability and sustainability.  And a more stable banking system: increases consumer confidence, which in turn supports banking activity; ensures that adequate funding is available for individuals and companies; and puts regulators at ease, allowing bankers to focus on banking.  Stability is intended to bring long-term profitability to banks.  Therefore, it is important that every banking institution takes the steps necessary to properly manage, monitor and disclose its risks.  This can be done with the assistance and oversight of an independent regulatory authority.  A spectrum of banks exists today wherein some continue to debate and negotiate with regulators over the implementation of new requirements, while others are simply choosing to embrace them for the benefits I highlighted above. Do share with me how your institution is coping with and embracing these new regulations. Dr. Varun Agarwal is a Principal in the Banking Practice for Capgemini Financial Services.  He has over 19 years of experience in areas that span enterprise risk management; credit, market, and country risk management; financial modeling and valuation; and international financial markets research and analyses.

    Read the article

  • The Arab HEUG is now a reality, and other random thoughts

    - by user9147039
    I just returned from Doha, Qatar, where the first-of-its-kind HEUG (Higher Education User Group) meeting for institutions in the Middle East and North Africa was held at Qatar University and jointly hosted by Dammam University from Saudi Arabia. Over 80 delegates attended, including representation from education institutions in Oman, Saudi Arabia, Lebanon, and Qatar. There are many other regional HEUG organizations in place (in Australia/New Zealand, APAC and EMEA, as well as smaller regional HEUGs in the Netherlands, South Africa, and in regions of the US), but it was truly an accomplishment to see this Middle East/North Africa group organize and launch their chapter with a meeting of this quality. The group will be known as the Arab HEUG going forward, and I am excited about the prospects for sharing between the institutions and for the growth of Oracle solutions in the region. In particular, the hosts for the event (Qatar University) did a masterful job with logistics and organization, and the quality of the event was a testament to their capabilities. Among the more interesting and enlightening presentations I attended were one from Dammam University on the lessons learned from their implementation of Campus Solutions and transition off of Banner, as well as one on Qatar University’s use of E-Business Suite for grants management (both pre- and post-award). The most notable fact coming from this latter presentation was the fit (89%) of E-Business Suite Grants to the university’s requirements. In a few weeks’ time we will be convening the 5th meeting of the Oracle Education & Research Industry Strategy Council in Redwood Shores (the 5th since my advent into my current role). The main topics of discussion will be our Higher Education Applications Strategy for the future (including cloud approaches to ERP - HCM, Finance, and Student Information Systems) and some case studies on the benefits of leveraging delivered functionality and extensibility in the software (versus customization). On the second day of the event we will turn our attention to Oracle in research, and also to budgeting and planning in higher education. Both of these sessions will include significant participation from council members in the form of panel discussions. Our EVPs for Systems (John Fowler) and for Global Cloud Services and North America application sales (Joanne Olson) will join us for the discussion. I recently read a couple of articles that were surprising to me. The first was from Inside Higher Ed on October 15, entitled “As colleges prepare for major software upgrades, Kuali tries to woo them from corporate vendors.” It continues to disappoint me that after all this time we are still debating whether it is better to build enterprise software through open or community source initiatives when fully functional, flexible, supported, and widely adopted options exist in the marketplace. A decade or more ago, when these solutions were relatively immature and there was a great deal of turnover in the market, I could appreciate initiatives like Kuali. But let’s not kid ourselves – the real objective of this movement is to counter a perceived predatory commercial software industry. Again, when commercial solutions are deployed as written without significant customization, and standard business processes are adopted, the cost of these solutions (relative to the value delivered) is quite low, and certainly much lower than the massive investment (and risk) in in-house developers to support a bespoke community source system.
In this era of cost pressures in education and the need to refocus resources on teaching, learning, and research, I believe it’s bordering on irresponsible to continue to pursue open-source ERP. Many adopters’ total costs are staggering, and they have little to show for their efforts and expended resources. The second article was recently in the Chronicle of Higher Education, entitled “’Big Data’ Is Bunk, Obama Campaign’s Tech Guru Tells University Leaders.” This one was so outrageous I almost don’t want to legitimize it by referencing it here. In the article the writer relays statements made by Harper Reed, President Obama’s former CTO for his 2012 re-election campaign, that big data solutions in education have no relevance and are akin to snake oil. He goes on to state that while he’s a fan of data-driven decision making in education, most of the necessary analysis can be accomplished in Excel spreadsheets. Yeah… right. This is exactly what ails education (higher education in particular): dozens of shadow and siloed systems running on spreadsheets, with limited-to-no enterprise-wide initiatives to harness the data-rich environment that is a higher ed institution and transform the data into usable information. I’ll grant Mr. Reed that “Big Data” is overused and hackneyed, but imperatives like improving student success in higher education are classic big data problems that data mining and predictive analytics can address. Further, higher ed needs to produce many more data scientists and analysts than are currently in the pipeline, to further this discipline and the application of these tools to many other problems across multiple industries.

    Read the article

  • ODI 12c - Aggregating Data

    - by David Allan
    This posting will look at the aggregation component that was introduced in ODI 12c. For many ETL tool users this shouldn't be a big surprise; it's a little different from ODI 11g, but for good reason. You can use this component for composing data with relational-like operations such as sum, average and so forth. Also, Oracle SQL supports special functions called analytic SQL functions, and in ODI 12c you can now use a specially configured aggregation component or the expression component for these. In database systems an aggregate transformation is a transformation where the values of multiple rows are grouped together as input on certain criteria to form a single value of more significant meaning - that's exactly the purpose of the aggregate component. In the image below you can see the aggregate component in action within a mapping; for how this and a few other examples are built, look at the ODI 12c Aggregation Viewlet here - the viewlet illustrates a simple aggregation being built and then some Oracle analytic SQL such as AVG(EMP.SAL) OVER (PARTITION BY EMP.DEPTNO) built using both the aggregate component and the expression component. In 11g you used to just write the aggregate expression directly on the target; this made life easy for some cases, but it wasn't a very obvious gesture, plus it had other drawbacks with ordering of transformations (agg before join/lookup, after set and so forth) and supporting analytic SQL, for example - there are a lot of postings from creative folks working around this in 11g, anything from customizing KMs to bypassing aggregation analysis in the ODI code generator. The aggregate component has a few interesting aspects.
1. First and foremost it defines the attributes projected from it - ODI automatically performs the grouping; all you do is define the aggregation expressions for those columns aggregated. In 12c you can control this automatic grouping behavior so that you get the code you desire, so you can indicate that an attribute should not be included in the group by; that's what I did in the analytic SQL example using the aggregate component.
2. The component has a few other properties of interest; it has a HAVING clause and a manual group by clause. The HAVING clause includes a predicate used to filter rows resulting from the GROUP BY clause. Because it acts on the results of the GROUP BY clause, aggregation functions can be used in the HAVING clause predicate. In 11g the filter was overloaded and used for both the having clause and the filter clause; this is no longer the case. If a filter is after an aggregate, it is after the aggregate (not sometimes after, sometimes having).
3. The manual group by clause lets you use special database grouping grammar if you need to. For example, Oracle has a wealth of highly specialized grouping capabilities for data warehousing, such as the CUBE function. If you want to use specialized functions like that you can manually define the code here. The example below shows the use of a manual group by from an example in the Oracle Database data warehousing guide, where the SUM aggregate function is used along with the CUBE function in the group by clause.
The SQL I am trying to generate looks like the following, from the data warehousing guide:
SELECT channel_desc, calendar_month_desc, countries.country_iso_code,
       TO_CHAR(SUM(amount_sold), '9,999,999,999') SALES$
FROM sales, customers, times, channels, countries
WHERE sales.time_id=times.time_id AND sales.cust_id=customers.cust_id
  AND sales.channel_id=channels.channel_id AND customers.country_id = countries.country_id
  AND channels.channel_desc IN ('Direct Sales', 'Internet')
  AND times.calendar_month_desc IN ('2000-09', '2000-10')
  AND countries.country_iso_code IN ('GB', 'US')
GROUP BY CUBE(channel_desc, calendar_month_desc, countries.country_iso_code);
I can capture the source datastores, the filters and joins using ODI's dataset (or as a traditional flow), which enables us to incrementally design the mapping and the aggregate component for the sum and group by. In the above mapping you can see the joins and filters declared in ODI's dataset, allowing you to capture the relationships of the datastores required in an entity-relationship style just like ODI 11g. The mix of ODI's declarative design and the common flow design provides for a familiar design experience. The example below illustrates flow design (basic arbitrary ordering) - a table load where only the employees who have the maximum commission are loaded into a target. The maximum commission is retrieved from the bonus datastore and there is a lookup using employees as the driving table, with only those with maximum commission projected. Hopefully this has given you a taster for some of the new capabilities provided by the aggregate component in ODI 12c. In summary, the actions should be much more consistent in behavior and more easily discoverable for users, and the use of the components in a flow graph also supports arbitrary designs, with the tool (rather than the interface designer) taking care of the realization using ODI's knowledge modules. I'm interested to know whether a deep dive into each component would be interesting for folks. Any thoughts?
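To make the grouping behavior described in points 1 and 2 concrete, here is a minimal sketch of the two shapes of SQL the aggregate and expression components can produce. It uses the classic EMP sample table (EMPNO, DEPTNO, SAL); the table, column names and threshold are assumptions for illustration only, not code taken from the viewlet:
-- Default aggregate component behavior: ODI generates the GROUP BY on the
-- non-aggregated attribute, and the HAVING property filters the grouped rows.
SELECT   emp.deptno,
         SUM(emp.sal) AS total_sal
FROM     emp
GROUP BY emp.deptno
HAVING   SUM(emp.sal) > 10000;
-- Analytic form: the attribute is excluded from the automatic grouping, so no
-- GROUP BY is generated and each employee row keeps its department average.
SELECT   emp.empno,
         emp.deptno,
         AVG(emp.sal) OVER (PARTITION BY emp.deptno) AS dept_avg_sal
FROM     emp;
The first statement corresponds to the component's default behavior; the second is the kind of statement you get when you switch off the automatic grouping for an attribute, as in the AVG(EMP.SAL) OVER (PARTITION BY EMP.DEPTNO) example mentioned above.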

    Read the article

  • Oracle RightNow CX for Good Customer Experiences

    - by Andreea Vaduva
    Oracle RightNow CX is all about the customer experience: it’s about understanding what drives a good interaction and it’s about delivering a solution which works for our customers and, by extension, their customers. One of the early guiding principles of Oracle RightNow was an 8-point strategy for providing good customer experiences:
Establish a knowledge foundation
Empower the customer
Empower employees
Offer multi-channel choice
Listen to the customer
Design seamless experiences
Engage proactively
Measure and improve continuously
The application suite provides all of the tools necessary to deliver a rewarding, repeatable and measurable relationship between business and customer. The Knowledge Authoring tool provides gap analysis, WYSIWYG editing (and includes HTML rich content for non-developers), multi-level categorisation, permission-based publishing and web self-service publishing. Oracle RightNow Customer Portal is a complete web application framework that enables businesses to control their own end-user page branding experience, which in turn allows customers to self-serve. The Contact Centre Experience Designer builds a combination of workspaces, agent scripting and guided assistance into a Desktop Workflow. These present an agent with the tools they need, at the time they need them, providing even the newest and least experienced advisors with consistently accurate and efficient information, whilst guiding them through the complexities of internal business processes. Oracle RightNow provides access points for customers to give feedback about specific knowledge articles or about the support site in general. The system will generate ‘incidents’ based on the scoring of the comments submitted. This makes it easy to view and respond to customer feedback. It is vital, more now than ever, not to underestimate the power of the social web – Facebook, Twitter, YouTube – they have the ability to cause untold amounts of damage to businesses with a single post – witness musician Dave Carroll and his protest song on YouTube, posted in response to poor customer service from an American airline. The first day saw 150,000 views and the count currently stands at 12,011,375. The Times reported that within 4 days of the post, the airline’s stock price fell by 10 percent, which represented a cost to shareholders of $180 million. It is a universally acknowledged fact that when customers are unhappy, they will not come back, and, generally speaking, it only takes one bad experience to lose a customer. The idea that customer loyalty can be regained by using social media channels was the subject of a 2011 survey commissioned by RightNow and conducted by Harris Interactive. The survey discovered that 68% of customers who posted a negative review about a holiday on a social networking site received a response from the business. It further found that 33% subsequently posted a positive review and 34% removed the original negative review. Cloud Monitor provides the perfect mechanism for seeing what is being said about a business on public Facebook pages, Twitter or YouTube posts; it allows agents to respond proactively – either by creating an Oracle RightNow incident or by using the same channel as the original post. This leaves step 8 – measuring and improving: How does a business know whether it’s doing the right thing? How does it know if its customers are happy? How does it know if its staff are being productive? How does it know if its staff are being effective?
Cue Oracle RightNow Analytics – fully integrated across the entire platform – Service, Marketing and Sales – with in excess of 800 standard reports. If this were not enough, a large proportion of the database has been made available via the administration console, allowing users without any prior database experience to write their own reports, format them and schedule them for e-mail delivery to a distribution list. It handles the complexities of table joins and allows for the manipulation of data with ease. Oracle RightNow believes strongly in the customer owning their solution, and to provide the best foundation for success, Oracle University can give you the RightNow knowledge and skills you need. This is a selection of the courses offered:
RightNow Customer Service Administration Rel 12.02 (3 days) - available as In Class and Live Virtual Class (Release 11.11 is available as In Class, Live Virtual Class and Training On Demand). This course familiarises users with the tasks and concepts needed to configure and maintain their system.
RightNow Customer Portal Designer and Contact Center Experience Designer Administration Rel 12.02 (2 days) - available as In Class and Live Virtual Class (Release 11.11 is available as In Class, Live Virtual Class and Training On Demand). This course introduces basic Customer Portal structure and how to make changes to the look, feel and behaviour of self-service pages.
RightNow Analytics Rel 12.02 (2 days) - available as In Class, Live Virtual Class and Training On Demand (Release 11.11 is available as In Class and Live Virtual Class). This course equips users with the skills necessary to understand data supplied by standard reports and to create custom reports.
RightNow Integration and Customization For Developers Rel 12.02 (5 days) - available as In Class and Live Virtual Class (Release 11.11 is available as In Class, Live Virtual Class and Training On Demand). This course is for experienced web developers and offers an introduction to Add-In development using the Desktop Add-In Framework, and introduces the core knowledge that developers need to begin integrating Oracle RightNow CX with other systems.
A full list of courses offered can be found on the Oracle University website. For more information and course dates please get in contact with your local Oracle University team. On top of the Service components, the suite also provides marketing tools, complex survey creation and tracking, and sales functionality. I’m a fan of the application, and I think I’ve made that clear: it’s completely geared up to providing customers with support at the point of need. It can be configured to meet even the most stringent of business requirements. Oracle RightNow is passionate about, and committed to, providing the best customer experience possible. Oracle RightNow CX is the application that makes it possible. About the Author: Sarah Anderson worked for RightNow for 4 years in both a consulting and a training delivery capacity. She is now a Senior Instructor with Oracle University, delivering the following Oracle RightNow courses: RightNow Customer Service Administration; RightNow Analytics; RightNow Customer Portal Designer and Contact Center Experience Designer Administration; and RightNow Marketing and Feedback.

    Read the article

  • Best Method For Evaluating Existing Software or New Software

    How many of us have been faced with having to decide on an off-the-shelf or a custom-built component, application, or solution to integrate into an existing system or to be the core foundation of a new system? What is the best method for evaluating existing software or new software still in the design phase? One of the industry-preferred methodologies is the Active Reviews for Intermediate Designs (ARID) evaluation process.  ARID is a hybrid of the Active Design Review (ADR) methodology and the Architecture Tradeoff Analysis Method (ATAM). So what is ARID? ARID’s main goal is to ensure quality, detailed designs in software. One way in which it does this is by empowering reviewers through generic, open-ended survey questions. This approach attempts to remove the possibility of standard answers such as “Yes” or “No”. The ADR process avoids “Yes”/”No” questions because they can be leading, depending on how the question is asked, and because these questions tend to receive less thought than more open-ended questions.
Common Active Design Review questions:
What possible exceptions can occur in this component, application, or solution?
How should exceptions be handled in this component, application, or solution?
Where should exceptions be handled in this component, application, or solution?
How should the component, application, or solution flow, based on the design?
What is the maximum execution time for every component, application, or solution?
What environments can this component, application, or solution run in?
What data dependencies does this component, application, or solution have?
What kind of data does this component, application, or solution require?
Ok, now I know what ARID is; how can I apply it? Let’s imagine that your organization is going to purchase an off-the-shelf (OTS) solution for its customer-relationship management software. What process would we use to ensure that the correct purchase is made? If we use ARID, then we will have a series of 9 steps broken up into 2 phases to ensure that the correct OTS solution is purchased.
Phase 1: Identify the reviewers; Prepare the design briefing; Prepare the seed scenarios; Prepare the materials.
When identifying reviewers for a design, it is preferred that they be pulled from a candidate pool made up of the developers who are going to implement the design. The belief is that developers actually implementing the design will have more of a vested interest in ensuring that the design is correct prior to the start of coding. The design briefing consists of a summary of the design and examples of the design being put to use to solve real-world problems, and should typically be no longer than two hours. The primary goal of this briefing is to summarize the design well enough that the review members could actually implement it. In the example of purchasing an OTS product, I would review my briefing with the review facilitator prior to its distribution to ensure that nothing was excluded that should not have been. This practice will also allow me to test the length of the briefing to ensure that it can be delivered in an appropriate amount of time. Seed scenarios are designed to illustrate conceptualized scenarios when applied with a set of sample data. These scenarios can then be used by the reviewers in the actual evaluation of the software. All materials needed for the evaluation should be prepared ahead of time so that they can be reviewed prior to and during the meeting.
Materials included: the presentation, the seed scenarios, and the review agenda.
Phase 2: Present ARID; Present the design; Brainstorm and prioritize scenarios; Apply scenarios; Summarize.
Prior to the start of any ARID review meeting, the facilitator should define the remaining steps of ARID so that all the participants know exactly what they will be doing before the review process starts. Once the ARID rules have been laid out, the lead designer presents an overview of the design, which typically takes about two hours. During this time, as a standard, the review panel is not allowed to ask questions about the design or its rationale, but the questions are written down for use later in the process. After the presentation, the list of compiled questions is summarized and sent back to the lead designer as areas that need to be addressed further. In the example of purchasing an OTS product, issues could arise regarding security, the implementation needed, or even whether this is the correct product to provide the needed solution. After the design presentation, a brainstorming and prioritization session reduces the seed scenarios down to just the highest-priority scenarios.  These will then be used to test the design for suitability. Once the selected scenarios have been defined, the reviewers apply the examples provided in the presentation to the scenarios. The intended output of this process is code or pseudo-code that makes use of the examples provided while solving the selected seed scenarios. As a standard rule, the designers of the system are not allowed to help the review board unless they all become stuck. When this occurs, it is documented along with the reason why the designer needed to help the review panel get back on track. Once all of the scenarios have been completed, the review facilitator goes over with the group the issues that arose during the process. Then the reviewers are polled as to the efficacy of the review experience.
References: Clements, Paul; Kazman, Rick; Klein, Mark (2002). Evaluating Software Architectures: Methods and Case Studies. Indianapolis, IN: Addison-Wesley.

    Read the article

< Previous Page | 120 121 122 123 124 125 126 127 128 129 130 131  | Next Page >