Search Results

Search found 5716 results on 229 pages for 'dual channel'.


  • A view from the call center for the Nashville Flood telethon

    - by Rob Foster
    I want to break away from my usual technical topics and talk about what I experienced tonight while working in the call center for the Nashville Flood telethon, which was broadcast on WSMV, CNN, and The Weather Channel.  We started receiving calls about 7pm local time and, to be honest, I had no idea what to expect going into this.  I mean, I'm a pretty good talker, but this is different...We had a good script of what to say and how we were supposed to say it, as well as paper forms and pens that we used to collect information from people who wanted to donate their money to help.  I took my first few calls pretty easily and it went pretty quick and easy.  Everyone was upbeat and happy to be in the call center, as were the people donating money. Pizza, snacks, and soft drinks were flowing well.  Everyone was smiling and happy.  :) About 3 or 4 calls into my night, I got a call from a lady who had lost 2 family members in the West Nashville floods.  She was crying when she called and I of course tried to console her.  She told me how bad her situation was, losing family members and much of her neighborhood.  After all this, she still just wanted to help other people.  She was donating all the money that she could to the telethon and I want to share a direct quote from her: "I want to donate this instead of buying flowers for my family members' funeral because people out there need help." Please let me pause while I get myself together (again).  That caught me so off guard (and still does). I had kids calling wanting to donate their allowance, open their piggy banks, whatever they could do.  These are kids.  Kids not much older than my boys.  Kids who should be focused on buying the next cool video game or toy or whatever, but who wanted to do something.  Everyone just seemed to want to help. I took calls from as far away as British Columbia, and pretty much coast to coast.  How cool is that? Yet another thing that caught me off guard.  This kind lady who called from British Columbia told me how much she loved visiting Nashville and just hated to see this happen.  I believe she said that she will be attending the CMA Fest this year too.  I was sure to tell her not to cancel her plans!  :) It felt like every call I took (and I took A LOT, as did everyone else) was very personal and heartfelt.  I've never had the privilege to do anything like this and feel lucky to have been able to help out with answering phones and logging donations.  Nashville will bounce back very quickly; people are out there day and night helping each other, and spirits are very high here.  I hope that one day my kids read this blog and better understand who they are, where they come from, and what the human spirit is and can be.  I love this city, I love the people here, I love the culture, and even more than ever I am proud to say that this is me.  This is us.  We are Nashville!

    Read the article

  • Top 5 Mobile Apps To Keep Track Of Cricket Scores [ICC World Cup]

    - by Gopinath
    The ICC World Cup 2011 started with a bang today and the first match, India vs Bangladesh, was a cracker. India thrashed Bangladesh by a huge margin, thanks to Sehwag scoring an entertaining 175 runs off 140 balls. At the moment it’s very clear that the whole of India is gripped by cricket fever, as are the rest of the fans across the globe. A couple of days ago we blogged about how to watch live streaming of the ICC cricket world cup online for free, as well as the top 10 websites for tracking live scores on your computer. What about tracking live cricket scores on mobile phones? Here is our guide to the top mobile apps available for Symbian (Nokia), Android, iOS and Windows mobiles. By the way, we are covering free apps alone in this post. Why waste money when free apps are available? SnapTu – Symbian Mobile App SnapTu is a multi-feature application that lets you track live cricket scores, read the latest news and check stats published on CricInfo. SnapTu has a tie-up with CricInfo, and accessing all of the CricInfo website on your mobile is very easy. Along with live scores, SnapTu also lets you access Facebook, Twitter and Picasa on your mobile. This is my favourite application for tracking cricket on Symbian mobiles. Download SnapTu for your mobile here Yahoo! Cricket – Symbian & iOS App Yahoo! Cricket Scores is another dedicated application for catching up with live scores and news on your Nokia mobiles and iPhones. This application is developed by Yahoo!, the web giant as well as the official partner of the ICC. Features of the app at a glance: Cricket: Get a summary page with the latest scores, upcoming matches and details of recent matches News: View sections devoted to the latest news, interviews and photos Statistics: Find the latest team and player stats Download Yahoo! Cricket For Symbian Phones   Download Yahoo! Cricket For iOS ESPN CricInfo – Android and iOS App Is there any site better than CricInfo for catching up with the latest cricket news and live scores? I say no. ESPN CricInfo is the best website available on the web for up-to-the-minute cricket information with in-depth analysis from cricket experts. The live commentary provided by the CricInfo site is as enjoyable as watching live cricket on TV. The CricInfo guys have official applications for Android mobiles and iOS devices, and accessing ball-by-ball updates on these applications is a joy. Download ESPN CricInfo App: Android Version, iPhone Version NDTV Cricket – Android, iOS and Blackberry App The NDTV Cricket App is developed by NDTV, the most popular English TV news channel in India. This application provides live coverage of international and domestic cricket (Test, ODI & T20) along with the latest News, Photos, Videos and Stats. This application is available for iOS devices (iPhones, iPads, iPod Touch), Android mobiles and Blackberry devices. Download NDTV Cricket for iOS here & here    Download NDTV Apps For Rest of OSs ECB Cricket – Symbian, iOS & Android App If you are a UK citizen then this may be the right application to download for live cricket score updates as well as the latest news about the England Cricket Board. ECB Cricket is an official application of the England Cricket Board. Download ECB Cricket: Android Version, iPhone Version, Symbian Version Are there any better apps that we missed featuring in this list? This article, titled Top 5 Mobile Apps To Keep Track Of Cricket Scores [ICC World Cup], was originally published at Tech Dreams. Grab our RSS feed or fan us on Facebook to get updates from us.

    Read the article

  • Live from the Oracle Partner Day 2012 in Frankfurt

    - by A&C Redaktion
    Frankfurt a. M., around 11:30 a.m. A charming idea, starting the Oracle Partner Day 2012 with a welcome lunch. Over a snack you can take in the impressive atmosphere of the Commerzbank Arena and, before you know it, find yourself drawn into a conversation with the managing director next to you, a manager and two sales reps about everyone's most recent stadium visits. Happy reunions everywhere; many last saw each other exactly one year ago, at the Radisson Blu, at the OPN Day Satellite. Now the crowd is on the move, off to the opening: Silvia Kaske is up first! 1:45 p.m. The keynotes were once again a thematic tour d'horizon, and at the same time a small who's who of the Oracle universe: Silvia Kaske, Senior Director Channel A&C, opened the Partner Day; David Callaghan (Senior Vice President UK, Ireland, Israel) then presented the EMEA strategies for FY13, and Jürgen Kunz (SVP Technology Northern Europe & Country Leader Germany) spoke about business opportunities with partners. Christian Werner, in his new role as Senior Director Alliances & Channels Germany, gave an overview of the new structure of the Oracle channel and introduced the German team. The closing slot went to a guest speaker, Prof. Hermann Maurer of the Academia Europaea, a prominently staffed academic society dedicated to a better public understanding of science. He ventured a look into the future of IT: "The best is yet to come." As always with such a packed program, the odd question remains open, but now there is time to follow up over Coffee & Networking. Shortly after 2 p.m. Many attendees have by now explored the first floor as well. The Partner Service Zone offers a broad range: from Oracle Financing through License Management to OPN Specialized, everything here revolves around concrete offerings for partners. After a short detour to the ISV Lounge, it's on to the Expert Zone: Oracle Database, Oracle Options, Fusion Middleware, Applications and Oracle Hardware are the topics here, and lively shop talk is already under way at the info stands. Back on the ground floor, partners, Oracle executives and other attendees can be seen bustling about in search of their breakout sessions, while others leaf through the freshly printed A&C course book. The next two hours focus on business opportunities, split into hardware, technology and sales partners, plus the VADs' offerings, the A&C partner sessions and the 1:1 speed dating. Some partners are using the implementation tests offered in parallel to earn their certification right on site. Running each breakout twice allows attendees to take in as many sessions as possible, one after another. No topic is to be shortchanged! An outlook: what else does the afternoon hold? The Leader Panel, in which attending partners can put their questions to Oracle executives, promises to be very informative. And if some attendees then grow restless, it will have nothing to do with the topics. No, an exciting highlight is still to come: the Partner Award Ceremony (which we will report on in detail later). At the end of what will hopefully be a successful event, only one question remains: what exactly is behind the "Red Stack Arena Sports Challenge"? Will we need sneakers?

    Read the article

  • How to get bearable 2D and 3D performance on AMD Radeon HD 6950?

    - by l0b0
    I have had an AMD Radeon HD 6950 (i.e., Cayman series) for a couple of years now, and I have tried a lot of combinations of drivers and settings with terrible results. I'm completely at a loss as to how to proceed. The open source driver has much better 2D performance, but it offloads all OpenGL rendering to the CPU. What I've tried so far: All the latest stable Ubuntu releases in the period, plus one Linux Mint release. All the latest stable AMD Catalyst Proprietary Display Drivers, currently 13.1. The unofficial wiki installation instructions for every Ubuntu version and the semi-official Ubuntu instructions. All the tips and tweaks I could find for Minecraft (Optifine, reducing settings to minimum), VLC (postprocessing at minimum, rendering at native video size), Catalyst Control Center (flipped every lever in there) and X11 (some binary toggles I can no longer remember). Results: Typically 13-15 FPS in Minecraft, 30 max (100+ in Windows with the same driver version). Around 10 FPS in Team Fortress 2 using the official Steam client. Choppy video playback in Flash and with VLC. CPU use goes through the roof when rendering video (150% for 1080p on YouTube in Chromium, 100% for 1080p H264 in VLC). glxgears shows 12.5 FPS when maximized. fgl_glxgears shows 10 FPS when maximized. Hardware details from lshw: Motherboard ASUS P6X58D-E CPU Intel Core i7 CPU 950 @ 3.07GHz (never overclocked; 64 bit) 6 GB RAM Video card product "Cayman PRO [Radeon HD 6950]", vendor "Hynix Semiconductor (Hyundai Electronics)" 2 x 1920x1200 monitors, both connected with HDMI. I feel I must be missing something absolutely fundamental here. Is there no accelerated support for anything on 64-bit architectures? Does a dual-monitor setup completely mess up the driver? $ fglrxinfo display: :0 screen: 0 OpenGL vendor string: Advanced Micro Devices, Inc. OpenGL renderer string: AMD Radeon HD 6900 Series OpenGL version string: 4.2.11995 Compatibility Profile Context $ glxinfo | grep 'direct rendering' direct rendering: Yes I am currently using the open source driver, with the following results: Full frame rate and low CPU load when playing 1080p video. Black screen (but music in the background) in Team Fortress 2. Similar performance in Minecraft as with the Catalyst driver. In hindsight this is obvious, since both end up offloading the rendering to the CPU. My /var/log/Xorg.0.log after upgrading to AMD Catalyst 13.1. Some possibly important lines: (WW) Falling back to old probe method for fglrx (WW) fglrx: No matching Device section for instance (BusID PCI:0@3:0:1) found The generated xorg.conf. The disabled "monitor" 0-DFP9 is actually an A/V receiver, which sometimes confuses the monitor drivers when turned on/off (but not in Windows). All three "monitor" devices are connected with HDMI. Edit: Chris Carter's suggestion to use the xorg-edgers PPA (Catalyst 13.1) resulted in some improvement, but still pretty bad performance overall: Minecraft stabilizes at 13-17 FPS, but at least the CPU load is "only" at 45-60%. Still 150% CPU use for 1080p video rendering on YouTube in Chromium. Massive improvement for 1080p H264 in VLC: 40-50% CPU use and no visible jitter. glxgears performance about doubled to 25-30 FPS when maximized. fgl_glxgears still at ~10 FPS when maximized.
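    Before more driver shuffling, it may be worth confirming which stack is actually doing the rendering. This is a hedged diagnostic sketch, not from the original question; the package names are assumptions based on Ubuntu of that era:

      glxinfo | grep -i "renderer string"   # "llvmpipe"/"Software Rasterizer" = CPU fallback,
                                            # "Gallium ... on CAYMAN" = GPU via the radeon driver
      lsmod | grep -e radeon -e fglrx       # confirm which kernel module is loaded
      dmesg | egrep -i "radeon.*(firmware|ucode)"   # failed microcode loads force CPU rendering
      sudo apt-get install --reinstall libgl1-mesa-dri linux-firmware   # Mesa DRI drivers + Cayman microcode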

    Read the article

  • 2013 Predictions for Retail

    - by David Dorf
    It's that time of year to roll out the predictions for next year.  I can't say I've really nailed it in the past, but feel free to look back at my 2012, 2011, and 2010 predictions.  I'm not expecting anything earth-shattering this year; just continued maturation of several technologies that are finally taking hold. 1. Next day delivery -- Amazon finally decided it wasn't worth fighting state taxes and instead decided to place distribution centers everywhere so they can potentially offer next-day deliveries.  Not to be outdone, Walmart is looking to leverage its huge physical presence to offer the same.  Clubs like ShopRunner are pushing delivery barriers as well, so the norm is shifting to free shipping in a few days or relatively cheap shipping overnight.  Retailers need to be thinking about how to ship from physical stores. 2. Bring your own device -- Earlier this year Intuit bought AisleBuyer, a mobile self-checkout start-up, at least somewhat validating the BYOD approach.  Grocery stores, especially in Europe, have been supporting in-aisle self-scanning for a while and I'm betting it will find a home in certain verticals in the US too.  There's also the BYOD concept for employees.  Some retailers are considering issuing mobile devices at hiring, alongside the shirt and name-tag.  Employees become responsible for the hardware until they leave. 3. TV shopping -- Will Apple finally release a TV product in 2013?  Who knows?  But the industry isn't standing still. Companies like QVC and HSN are already successfully combining the TV and online experiences for shopping.  Comcast is partnering with TiVo to allow viewers to interact with ads, with PayPal handling payment.  This will be a slow maturation, but expect TVs to get smarter and eventually become a new selling channel (pun intended) for retailers. 4. Privacy backlash -- It only takes one big incident to stir the public, and I'm betting we have one in 2013.  Facebook, Google, or Apple will test the boundaries of what the public is willing to accept.  It could involve a retailer using geo-location technology, or possibly video analytics.  And as is always the case, the offender will apologize, temporarily remove the technology, and wait 2-3 years for it to be generally accepted.  Privacy is a moving target. 5. More NFC -- I've come to the conclusion that adoption of any banking technology is going to be slow.  It was slow for credit cards, ATMs, and online billpay, so why should it be any different for NFC?  Maybe, just maybe, the iPhone 5S will have an NFC chip, but we're not going to see mainstream uptake for years.  Next year we'll continue to see incremental improvements from Isis, Google, and PayPal and a plethora of new startups, but don't toss your magstripe cards just yet. 6. In-store location -- The technologies for tracking people inside stores are really improving.  Retailers can track people using video cameras, infrared, and the WiFi radios in mobile phones.  We're getting closer to the point where accuracy could be down to a shelf-facing, which will help retailers understand how people shop, where they spend time, and what displays attract them.  Expect CPG companies to get involved and partner with retailers, since the data benefits both parties.  Consumers will benefit by being directed right to the products they seek.  (In 2013 ARTS is forming a workteam to develop new standards in this area.) 7. M&A -- Looking back at 2012 there were some really big deals involving IBM, Oracle, JDA, and NCR, and I expect that trend will likely continue as vendors add assets to bolster their portfolios.  Many retailers are due for an IT transformation to support anywhere, anytime shoppers, and one-stop vendors can minimize complexity and costs. Predictions from other sources: Independent Retailer Stores Magazine IDC Insights Mobile Commerce Daily

    Read the article

  • Atheros AR9485 wireless card doesn't work in an ASUS K53E

    - by John
    I just installed Ubuntu 11.10 32-bit version in dual boot mode on an ASUS K53. Everything seems to work fine except for the wireless. The wireless works on Windows 7. Ubuntu finds the wireless card, but it does not appear to be turned on. The only physical means of turning on the card is the Fn-F2 key combo. That works on Windows, but not in Ubuntu. I've looked in the forums for a solution and I'm not quite sure what to do. I've gathered the following information: jdwbmc@Spatha:~$ cat /etc/lsb-release; uname -a DISTRIB_ID=Ubuntu DISTRIB_RELEASE=11.10 DISTRIB_CODENAME=oneiric DISTRIB_DESCRIPTION="Ubuntu 11.10" Linux Spatha 3.0.0-12-generic #20-Ubuntu SMP Fri Oct 7 14:50:42 UTC 2011 i686 i686 i386 GNU/Linux jdwbmc@Spatha:~$ lspci -nnk | grep -iA2 net 02:00.0 Network controller [0280]: Atheros Communications Inc. AR9485 Wireless Network Adapter [168c:0032] (rev 01) Subsystem: AzureWave Device [1a3b:1186] Kernel driver in use: ath9k -- 04:00.0 Ethernet controller [0200]: Atheros Communications AR8151 v2.0 Gigabit Ethernet [1969:1083] (rev c0) Subsystem: ASUSTeK Computer Inc. Device [1043:1851] Kernel driver in use: atl1c jdwbmc@Spatha:~$ lsusb Bus 001 Device 001: ID 1d6b:0002 Linux Foundation 2.0 root hub Bus 002 Device 001: ID 1d6b:0002 Linux Foundation 2.0 root hub Bus 003 Device 001: ID 1d6b:0002 Linux Foundation 2.0 root hub Bus 004 Device 001: ID 1d6b:0003 Linux Foundation 3.0 root hub Bus 001 Device 002: ID 8087:0024 Intel Corp. Integrated Rate Matching Hub Bus 002 Device 002: ID 8087:0024 Intel Corp. Integrated Rate Matching Hub Bus 001 Device 003: ID 058f:a014 Alcor Micro Corp. jdwbmc@Spatha:~$ iwconfig lo no wireless extensions. eth0 no wireless extensions. wlan0 IEEE 802.11bgn ESSID:off/any Mode:Managed Access Point: Not-Associated Tx-Power=0 dBm Retry long limit:7 RTS thr:off Fragment thr:off Power Management:off jdwbmc@Spatha:~$ rfkill list all 0: phy0: Wireless LAN Soft blocked: no Hard blocked: no 1: asus-wlan: Wireless LAN Soft blocked: no Hard blocked: no jdwbmc@Spatha:~$ lsmod Module Size Used by parport_pc 32114 0 ppdev 12849 0 snd_hda_codec_hdmi 31426 1 bnep 17923 2 rfcomm 38408 0 bluetooth 148839 10 bnep,rfcomm snd_hda_codec_realtek 254125 1 binfmt_misc 17292 1 joydev 17393 0 asus_nb_wmi 12469 0 asus_wmi 19333 1 asus_nb_wmi sparse_keymap 13658 1 asus_wmi uvcvideo 67271 0 videodev 85626 1 uvcvideo snd_hda_intel 24262 2 snd_hda_codec 91754 3 snd_hda_codec_hdmi,snd_hda_codec_realtek,snd_hda_intel snd_hwdep 13276 1 snd_hda_codec snd_pcm 80468 3 snd_hda_codec_hdmi,snd_hda_intel,snd_hda_codec snd_seq_midi 13132 0 wmi 18744 1 asus_wmi snd_rawmidi 25241 1 snd_seq_midi snd_seq_midi_event 14475 1 snd_seq_midi snd_seq 51567 2 snd_seq_midi,snd_seq_midi_event arc4 12473 2 i915 505108 3 snd_timer 28932 2 snd_pcm,snd_seq snd_seq_device 14172 3 snd_seq_midi,snd_rawmidi,snd_seq ath9k 112711 0 psmouse 73673 0 serio_raw 12990 0 mac80211 272785 1 ath9k drm_kms_helper 32889 1 i915 ath9k_common 13599 1 ath9k drm 192226 4 i915,drm_kms_helper ath9k_hw 293893 2 ath9k,ath9k_common ath 19387 2 ath9k,ath9k_hw cfg80211 172392 3 ath9k,mac80211,ath snd 55902 14 snd_hda_codec_hdmi,snd_hda_codec_realtek,snd_hda_intel,snd_hda_codec,snd_hwdep,snd_pcm,snd_rawmidi,snd_seq,snd_timer,snd_seq_device soundcore 12600 1 snd snd_page_alloc 14115 2 snd_hda_intel,snd_pcm mei 36466 0 i2c_algo_bit 13199 1 i915 video 18908 1 i915 lp 17455 0 parport 40930 3 parport_pc,ppdev,lp ahci 21634 3 libahci 25727 1 ahci atl1c 36638 0 xhci_hcd 72915 0 jdwbmc@Spatha:/var/lib/NetworkManager$ cat NetworkManager.state [main] NetworkingEnabled=true WirelessEnabled=true WWANEnabled=true WimaxEnabled=true jdwbmc@Spatha:~$ lspci -nn | grep 0280 02:00.0 Network controller [0280]: Atheros Communications Inc. AR9485 Wireless Network Adapter [168c:0032] (rev 01) Any help would be appreciated. Thanks.
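    A frequently reported fix for K53-series laptops is to pass a "wapf" parameter to the asus_nb_wmi module shown in the lsmod output above, which changes how the Fn+F2 radio toggle is handled. This is a hedged sketch rather than a confirmed solution; wapf=4 is the value most often cited for this model and should be treated as an assumption:

      # Let asus_nb_wmi handle the wireless hotkey (wapf=4 assumed for the K53)
      echo "options asus_nb_wmi wapf=4" | sudo tee /etc/modprobe.d/asus_nb_wmi.conf
      sudo modprobe -r asus_nb_wmi && sudo modprobe asus_nb_wmi   # or simply reboot
      rfkill list all                  # both Soft and Hard blocked should still read "no"
      sudo iwlist wlan0 scan | head    # if networks appear, the radio is actually on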

    Read the article

  • 5.1 surround sound on Acer Aspire 5738ZG with Ubuntu 11.10

    - by kbargais_LV
    I got a problem with sound. I tried everything but no results. :( I got 3 sound ports. my daemon: # This file is part of PulseAudio. # # PulseAudio is free software; you can redistribute it and/or modify # it under the terms of the GNU Lesser General Public License as published by # the Free Software Foundation; either version 2 of the License, or # (at your option) any later version. # # PulseAudio is distributed in the hope that it will be useful, but # WITHOUT ANY WARRANTY; without even the implied warranty of # MERCHANTABILITY or FITNESS FOR A PARTICULAR PURPOSE. See the GNU # General Public License for more details. # # You should have received a copy of the GNU Lesser General Public License # along with PulseAudio; if not, write to the Free Software # Foundation, Inc., 59 Temple Place, Suite 330, Boston, MA 02111-1307 # USA. ## Configuration file for the PulseAudio daemon. See pulse-daemon.conf(5) for ## more information. Default values are commented out. Use either ; or # for ## commenting. ; daemonize = no ; fail = yes ; allow-module-loading = yes ; allow-exit = yes ; use-pid-file = yes ; system-instance = no ; local-server-type = user ; enable-shm = yes ; shm-size-bytes = 0 # setting this 0 will use the system-default, usually 64 MiB ; lock-memory = no ; cpu-limit = no ; high-priority = yes ; nice-level = -11 ; realtime-scheduling = yes ; realtime-priority = 5 ; exit-idle-time = 20 ; scache-idle-time = 20 ; dl-search-path = (depends on architecture) ; load-default-script-file = yes ; default-script-file = /etc/pulse/default.pa ; log-target = auto ; log-level = notice ; log-meta = no ; log-time = no ; log-backtrace = 0 resample-method = speex-float-1 ; enable-remixing = yes ; enable-lfe-remixing = no flat-volumes = no ; rlimit-fsize = -1 ; rlimit-data = -1 ; rlimit-stack = -1 ; rlimit-core = -1 ; rlimit-as = -1 ; rlimit-rss = -1 ; rlimit-nproc = -1 ; rlimit-nofile = 256 ; rlimit-memlock = -1 ; rlimit-locks = -1 ; rlimit-sigpending = -1 ; rlimit-msgqueue = -1 ; rlimit-nice = 31 ; rlimit-rtprio = 9 ; rlimit-rttime = 1000000 ; default-sample-format = s16le ; default-sample-rate = 44100 ; default-sample-channels = 6 ; default-channel-map = front-left,front-right default-fragments = 8 default-fragment-size-msec = 10 ; enable-deferred-volume = yes ; deferred-volume-safety-margin-usec = 8000 ; deferred-volume-extra-delay-usec = 0
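    In the daemon.conf above, the line "; default-sample-channels = 6" is still commented out (the leading ';' disables it), so PulseAudio stays in stereo. A minimal, hedged sketch of the usual 5.1 changes follows; the card index and profile name are assumptions, so list yours with "pactl list cards short" first:

      # /etc/pulse/daemon.conf -- uncomment and set:
      #   default-sample-channels = 6
      #   default-channel-map = front-left,front-right,rear-left,rear-right,front-center,lfe
      pactl set-card-profile 0 output:analog-surround-51   # card index 0 assumed
      speaker-test -c 6 -t wav   # should name each of the six speakers in turn
      # Note: with only three analog jacks, the line-in/mic ports usually have to be
      # retasked to carry the rear and center/LFE channels (hda-jack-retask can help).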

    Read the article

  • Do MORE with WebCenter - Webcast Overview & TIES Tour

    - by Michael Snow
    Today's post is from Michelle Huff, Senior Director, Product Management, Oracle WebCenter. In case you missed it, I presented on a webcast yesterday focused on how you can “Do More with Oracle WebCenter – Expand Beyond Content Management.” As you may remember, we rebranded Oracle’s Enterprise Content Management (ECM) Suite, which some people knew by the wonderfully techie three-letter acronyms -- UCM, URM & IPM -- to Oracle WebCenter Content last year. Since it’s a unified ECM platform, I’ve seen many customers over the years continue to expand the number of content-centric solutions and application integrations powered by WebCenter throughout their organizations. But, did you know WebCenter also provides portal, collaboration and web experience management capabilities as well? This enables you to leverage your existing investment in the WebCenter platform as well as the information you’re managing to create engaging sites, collaborative spaces, or self-service portals and composite applications. In the webcast I walked through six different ways that you can do more with WebCenter: Collaborative content contribution and sharing environment Share content across intranets and extranets Combine content in composite applications Create targeted online experiences Manage interactive social experiences Optimize multi-channel customer experiences Joining me on the call was Greg Utecht with TIES. TIES is a joint powers cooperative owned by 46 Minnesota school districts, representing 514 schools, and provides software applications, hardware and software, internet service and professional development designed by educators for education. I was having a lot of fun over the past few days talking with Greg about the TIES implementation and future plans with WebCenter. He joined me on the call for a little Q&A to explain how he’s using WebCenter today for their iContent implementation for document management, records management and archiving. He also covered how they have expanded their implementation to create a collaborative space called their HRPay System with WebCenter to facilitate collaboration and to better engage their users within the school districts. During our conversation a few questions came from the audience about their implementation. They were curious to see how the system looked – so let’s take a peek. This first screenshot shows the screen that a human resources or payroll worker in one of our member districts would see upon logging in, based on their credentials and role in their district. This shows the result of clicking on the SUBSCRIBE link on the main page. It allows the user to subscribe to parts of the portal, which will e-mail him/her when those are updated in any way. This shows the screen that a human resources or payroll worker in one of our member districts would see upon clicking on the Resources link. This shows the screen that a human resources or payroll worker in one of our member districts would see upon clicking on the Finance Advisory link. It shows the discussion threads and document sharing areas. This shows the screen that appears when the forum topic on the preceding screen is clicked. This shows the screen portlet up close with shared documents. This shows the screen that appears when a shared document is clicked on. Note that there is also a download button and an update button, meaning people can work on these collaboratively. If you missed the webcast, check it out! You can watch the replay OnDemand HERE. If you attended the webcast, thanks for joining - I hope you learned a little from the session. I learned that kids are getting digital report cards today! Wow, have times changed with technology. Uh oh, is this when I start saying “You know, back in my days…?”

    Read the article

  • Slow Ubuntu 10.04 after long time unused

    - by Winston Ewert
    I'm at spring break so I'm back at my parents' house. I've turned on my computer, which has been off since January, and it's unusably slow. This was not the case when I last used the computer in January. It is running 10.04, Memory: 875.5 MB CPU: AMD Athlon 64 X2 Dual Core Processor 4400+ Available Disk Space: 330.8 GB I'm not seeing large usage of either memory or disk I/O. If I look at my list of processes there is only a very small amount of CPU usage. However, if I hover over the CPU usage graph that I have on the top bar, I sometimes get really high readings like 100%. It took a long time to boot, to open Firefox, to open a link in Firefox. As far as I can tell, everything the computer tries to do is just massively slow. Right now, I'm running apt-get dist-upgrade to install any updates that I will have missed since the last time this computer was on. Any ideas as to what is going on here? UPDATE: I thought to check dmesg and it has a lot of entries like this: [ 1870.142201] ata3.00: exception Emask 0x0 SAct 0x7 SErr 0x0 action 0x0 [ 1870.142206] ata3.00: irq_stat 0x40000008 [ 1870.142210] ata3.00: failed command: READ FPDMA QUEUED [ 1870.142217] ata3.00: cmd 60/08:10:c0:4a:65/00:00:03:00:00/40 tag 2 ncq 4096 in [ 1870.142218] res 41/40:00:c5:4a:65/00:00:03:00:00/40 Emask 0x409 (media error) <F> [ 1870.142221] ata3.00: status: { DRDY ERR } [ 1870.142223] ata3.00: error: { UNC } [ 1870.143981] ata3.00: SB600 AHCI: limiting to 255 sectors per cmd [ 1870.146758] ata3.00: SB600 AHCI: limiting to 255 sectors per cmd [ 1870.146761] ata3.00: configured for UDMA/133 [ 1870.146777] ata3: EH complete [ 1872.092269] ata3.00: exception Emask 0x0 SAct 0x7 SErr 0x0 action 0x0 [ 1872.092274] ata3.00: irq_stat 0x40000008 [ 1872.092278] ata3.00: failed command: READ FPDMA QUEUED [ 1872.092285] ata3.00: cmd 60/08:00:c0:4a:65/00:00:03:00:00/40 tag 0 ncq 4096 in [ 1872.092287] res 41/40:00:c5:4a:65/00:00:03:00:00/40 Emask 0x409 (media error) <F> [ 1872.092289] ata3.00: status: { DRDY ERR } [ 1872.092292] ata3.00: error: { UNC } [ 1872.094050] ata3.00: SB600 AHCI: limiting to 255 sectors per cmd [ 1872.096795] ata3.00: SB600 AHCI: limiting to 255 sectors per cmd [ 1872.096798] ata3.00: configured for UDMA/133 [ 1872.096814] ata3: EH complete [ 1874.042279] ata3.00: exception Emask 0x0 SAct 0x7 SErr 0x0 action 0x0 [ 1874.042285] ata3.00: irq_stat 0x40000008 [ 1874.042289] ata3.00: failed command: READ FPDMA QUEUED [ 1874.042296] ata3.00: cmd 60/08:10:c0:4a:65/00:00:03:00:00/40 tag 2 ncq 4096 in [ 1874.042297] res 41/40:00:c5:4a:65/00:00:03:00:00/40 Emask 0x409 (media error) <F> [ 1874.042300] ata3.00: status: { DRDY ERR } [ 1874.042302] ata3.00: error: { UNC } [ 1874.044048] ata3.00: SB600 AHCI: limiting to 255 sectors per cmd [ 1874.046837] ata3.00: SB600 AHCI: limiting to 255 sectors per cmd [ 1874.046840] ata3.00: configured for UDMA/133 [ 1874.046861] sd 2:0:0:0: [sda] Unhandled sense code [ 1874.046863] sd 2:0:0:0: [sda] Result: hostbyte=DID_OK driverbyte=DRIVER_SENSE [ 1874.046867] sd 2:0:0:0: [sda] Sense Key : Medium Error [current] [descriptor] [ 1874.046872] Descriptor sense data with sense descriptors (in hex): [ 1874.046874] 72 03 11 04 00 00 00 0c 00 0a 80 00 00 00 00 00 [ 1874.046883] 03 65 4a c5 [ 1874.046886] sd 2:0:0:0: [sda] Add. Sense: Unrecovered read error - auto reallocate failed [ 1874.046892] sd 2:0:0:0: [sda] CDB: Read(10): 28 00 03 65 4a c0 00 00 08 00 [ 1874.046900] end_request: I/O error, dev sda, sector 56969925 [ 1874.046920] ata3: EH complete I'm not certain, but it looks like my problem may be a failing hard drive. But the drive is less than a year old; it really shouldn't be failing now...
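    Those "media error" and "Unrecovered read error - auto reallocate failed" lines do point strongly at failing sectors, regardless of the drive's age. A hedged next step (smartmontools package; /dev/sda taken from the dmesg output above) is to read the drive's SMART data:

      sudo apt-get install smartmontools
      sudo smartctl -H /dev/sda           # overall health self-assessment
      sudo smartctl -A /dev/sda | grep -i -e reallocated -e pending -e uncorrect
      sudo smartctl -t short /dev/sda     # run a short self-test, then read it with:
      sudo smartctl -l selftest /dev/sda
      # Non-zero reallocated/pending sector counts mean: back up now and replace the drive.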

    Read the article

  • AWS .NET SDK v2: setting up queues and topics

    - by Elton Stoneman
    Originally posted on: http://geekswithblogs.net/EltonStoneman/archive/2013/10/13/aws-.net-sdk-v2-setting-up-queues-and-topics.aspx Following on from my last post, reading from SQS queues with the new SDK is easy stuff, but linking a Simple Notification Service topic to an SQS queue is a bit more involved. The AWS model for topics and subscriptions is a bit more advanced than in Azure Service Bus. SNS lets you have subscribers on multiple different channels, so you can send a message which gets relayed to email addresses, mobile apps and SQS queues all in one go. When you request a subscription on any channel, the owner of that endpoint needs to confirm they’re happy for you to send them messages. With email subscriptions, the user gets a confirmation request from Amazon which they need to reply to before they start getting messages. With SQS, you need to grant the topic permission to write to the queue. If you own both the topic and the queue, you can do it all in code with the .NET SDK. Let’s say you want to create a new topic, a new queue as a topic subscriber, and link the two together. Creating the topic is easy with the SNS client (which has an expanded name, AmazonSimpleNotificationServiceClient, compared to the SQS class which is just called QueueClient): var request = new CreateTopicRequest(); request.Name = TopicName; var response = _snsClient.CreateTopic(request); TopicArn = response.TopicArn; In the response from AWS (which I’m assuming is successful), you get an ARN – Amazon Resource Name – which is the unique identifier for the topic. We create the queue using the same code from my last post, AWS .NET SDK v2: the message-pump pattern, and then we need to subscribe the queue to the topic. The topic creates the subscription request: var response = _snsClient.Subscribe(new SubscribeRequest { TopicArn = TopicArn, Protocol = "sqs", Endpoint = _queueClient.QueueArn }); That response will give you an ARN for the subscription, which you’ll need if you want to set attributes like RawMessageDelivery. Then the SQS client needs to confirm the subscription by allowing the topic to send messages to it. The SDK doesn’t give you a nice mechanism for doing that, so I’ve extended my AWS wrapper with a method that encapsulates it: internal void AllowSnsToSendMessages(TopicClient topicClient) { var policy = Policies.AllowSendFormat.Replace("%QueueArn%", QueueArn).Replace("%TopicArn%", topicClient.TopicArn); var request = new SetQueueAttributesRequest(); request.Attributes.Add("Policy", policy); request.QueueUrl = QueueUrl; var response = _sqsClient.SetQueueAttributes(request); } That builds up a policy statement, which gets added to the queue as an attribute, and specifies that the topic is allowed to send messages to the queue. The statement itself is a JSON block which contains the ARN of the queue, the ARN of the topic, and an Allow effect for the sqs:SendMessage action: public const string AllowSendFormat = @"{ ""Statement"": [ { ""Sid"": ""MySQSPolicy001"", ""Effect"": ""Allow"", ""Principal"": { ""AWS"": ""*"" }, ""Action"": ""sqs:SendMessage"", ""Resource"": ""%QueueArn%"", ""Condition"": { ""ArnEquals"": { ""aws:SourceArn"": ""%TopicArn%"" } } } ] }"; There’s a new gist with an updated QueueClient and a new TopicClient here: Wrappers for the SQS and SNS clients in the AWS SDK for .NET v2. Both clients have an Ensure() method which creates the resource, so if you want to create a topic and a subscription you can use: var topicClient = new TopicClient("BigNews", "ImListening"); And the topic client has a Subscribe() method, which calls into the message pump on the queue client: topicClient.Subscribe(x => Log.Debug(x.Body)); var message = {}; //etc. topicClient.Publish(message); So you can isolate all the fiddly bits and use SQS and SNS with a similar interface to the Azure SDK.

    Read the article

  • How to Waste Your Marketing Budget

    - by Mike Stiles
    Philosophers have long said if you find out where a man’s money is, you’ll know where his heart is. Find out where money in a marketing budget is allocated, and you’ll know how adaptive and ready that company is for the near future. Marketing spends are an investment. Not unlike buying stock, the money is placed in areas the marketer feels will yield the highest return. Good stock pickers know the lay of the land, the sectors, the companies, and trends. Likewise, good marketers should know the media available to them, their audience, what they like & want, what they want their marketing to achieve…and trends. So what are they doing? And how are they doing? A recent eTail report shows nearly half of retailers planned on focusing on SEO, SEM, and site research technologies in the coming months. On the surface, that’s smart. You want people to find you. And you’re willing to let the SEO tail wag the dog and dictate the quality (or lack thereof) of your content such as blogs to make that happen. So search is prioritized well ahead of social, multi-channel initiatives, email, even mobile - despite the undisputed explosive growth and adoption of it by the public. 13% of retailers plan to focus on online video in the next 3 months. 29% said they’d look at it in 6 months. Buying SEO trickery is easy. Attracting and holding an audience with wanted, relevant content…that’s the hard part. So marketers continue to kick the content can down the road. Pretty risky since content can draw and bind customers to you. Asked to look a year ahead, retailers started thinking about CRM systems, customer segmentation, and loyalty, (again well ahead of online video, social and site personalization). What these investors are missing is social is spreading across every function of the enterprise and will be a part of CRM, personalization, loyalty programs, etc. They’re using social for engagement but not for PR, customer service, and sales. Mistake. Allocations are being made seemingly blind to the trends. Even more peculiar are the results of an analysis Mary Meeker of Kleiner Perkins made. She looked at how much time people spend with media types and how marketers are investing in those media. 26% of media consumption is online, marketers spend 22% of their ad budgets there. 10% of media time is spent with mobile, but marketers are spending 1% of their ad budgets there. 7% of media time is spent with print, but (get this) marketers spend 25% of their ad budgets there. It’s like being on Superman’s Bizarro World. Mary adds that of the online spending, most goes to search while spends on content, even ad content, stayed flat. Stock pickers know to buy low and sell high. It means peering with info in hand into the likely future of a stock and making the investment in it before it peaks. Either marketers aren’t believing the data and trends they’re seeing, or they can’t convince higher-ups to acknowledge change and adjust their portfolios accordingly. Follow @mikestiles. Image via stock.xchng

    Read the article

  • Oracle EMEA News Digest - May 2014

    - by Steve Walker
    Systems Oracle introduced a technology preview of an OpenStack® distribution that allows Oracle Linux and Oracle VM users to work with the open source cloud software. This provides customers with additional choices and interoperability while taking advantage of the efficiency, performance, scalability, and security of Oracle Linux and Oracle VM. The distribution is delivered as part of the Oracle Linux and Oracle VM Premier Support offerings, at no additional cost. Oracle plans to work further with the OpenStack community to develop and enhance its enterprise-class capabilities to meet customer demands. Also in the Open Source arena, Oracle announced the general availability of MySQL Fabric. MySQL Fabric provides an integrated system that makes it simpler to manage groups of MySQL databases. It delivers both high availability - via failure detection and failover - and scalability through automated data sharding. Oracle Database, Middleware and Technology The company made two announcements for Oracle Tuxedo, the #1 application server for C, C++, COBOL and Java deployments in private cloud or traditional data center environments. With enhanced management and monitoring features and tighter integration with Oracle technologies, the latest release of Oracle Tuxedo 12c enables organizations to dramatically increase application throughput, while reducing total cost of ownership and time to market for new application development and deployment. Oracle also introduced the latest release of its mainframe application rehosting platform, Oracle Tuxedo ART 12c, to help organizations speed up migration projects and accelerate the adoption of the new environment by current IT staff. It enables organizations to accelerate the rehosting of IBM mainframe applications and greatly enhance management and supportability of the rehosted applications while reducing costs and risk. Applications According to new Oracle studies, B2B and B2C commerce professionals find integrated, omni-channel customer experiences increasingly valuable to their organizations, and are continuing to invest in technologies and digital content strategies to facilitate them. The studies—one for B2B and one for B2C—surveyed e-commerce professionals in business and technology departments from around the world. Although the priorities, success metrics, and technology investments differed between the two groups, customer acquisition and retention emerged as common themes across B2B and B2C. Growing market share and enhancing customer experience are cited as top investment areas for all e-commerce professionals. In product news, Oracle announced the latest release of Oracle Business Intelligence (BI) Applications (version 11.1.1.8.1, in case anyone asks). It includes prebuilt connectors between Oracle Procurement and Spend Analytics and Oracle’s JD Edwards. Additionally, a new Oracle Human Resources Analytics module for developing and maintaining a skilled workforce has been introduced. In use at more than 4,000 companies worldwide, Oracle BI Applications support leading enterprise applications, including Oracle E-Business Suite, Oracle’s PeopleSoft, Oracle's Siebel CRM, and Oracle’s JD Edwards EnterpriseOne, offering high-performing analytics at a lower cost. Industries For the Communications Industry, Oracle has launched a new release of the Oracle Communications Core Session Manager. This gives CSPs a new way to design, deploy and manage complex networking services and embrace next-generation technology. It provides them with an immediate entry point for network function virtualization (NFV) efforts, allowing them to realize immediate benefits associated with network virtualization – including increased service agility and improved network resource sharing. And for the Utilities Industry, Oracle is releasing solutions with new business features and enhanced technical architecture that help position utilities for success now and into the future. Oracle has provided new releases for its customer information system, meter data management system, customer self-service solution and mobile workforce management solution.

    Read the article

  • links for 2010-12-23

    - by Bob Rhubart
    Oracle VM Virtualbox 4.0 extension packs (Wim Coekaerts Blog) Wim Coekaerts describes the new extension pack in Oracle VM VirtualBox 4.0 and how it's different from 3.2 and earlier releases. (tags: oracle otn virtualization virtualbox) Oracle Fusion Middleware Security: Creating OES SM instances on 64 bit systems "I've already opened a bug on this against OES 10gR3 CP5, but in case anyone else runs into it before it gets fixed I wanted to blog it too. (NOTE: CP5 is when official support was introduced for running OES on a 64 bit system with a 64 bit JVM)" - Chris Johnson (tags: oracle otn fusionmiddleware security) Oracle Enterprise Manager Grid Control: Shared loader directory, RAC and WebLogic Clustering "RAC is optional. Even the load balancer is optional. The feed from the agents also goes to the load balancer on a different port and it is routed to the available management server. In normal case, this is ok." - Porus Homi Havewala (tags: WebLogic oracle otn grid clustering) Magic Web Doctor: Thought Process on Upgrading WebLogic Server to 11g "Upgrading to new versions can be a challenging task, but it's done for linear scalability, continuous enhanced availability, efficient manageability and automatic/dynamic infrastructure provisioning at a low cost." - Chintan Patel (tags: oracle otn weblogic upgrading) InfoQ: Using a Service Bus to Connect the Supply Chain Peter Paul van de Beek presents a case study of using a service bus in a supply channel connecting a wholesale supplier with hundreds of retailers: the overall context and challenges faced – including the integration of POS software coming from different software providers – the solution chosen and its implementation, how it worked out and the lessons learned along the way. (tags: ping.fm) Oracle VM VirtualBox 4.0 is released! - The Fat Bloke Sings The Fat Bloke spreads the news and shares some screenshots.  (tags: oracle otn virtualization virtualbox) Leaks on Wikis: "Corporations...You're Next!" Oracle Desktop Virtualization Can Help. (Oracle's Virtualization Blog) "So what can you do to guard against these types of breaches where there is no outsider (or even insider) intrusion to detect per se, but rather someone with malicious intent is physically walking out the door with data that they are otherwise allowed to access in their daily work?" - Adam Hawley (tags: oracle otn virtualization security) OTN ArchBeat Podcast Guest Roster As the OTN ArchBeat Podcast enters its third year, it's time to acknowledge the invaluable contributions of the guests who have participated in ArchBeat programs. Check out this who's who of ArchBeat podcast panelists, with links to their respective interviews and more. (tags: oracle otn oracleace podcast archbeat) Show Notes: Architects in the Cloud (ArchBeat) Now available! Part 2 (of 4) of the ArchBeat interview with Stephen G. Bennett and Archie Reed, the authors of "Silver Clouds, Dark Linings: A Concise Guide to Cloud Computing." (tags: oracle otn podcast cloud) A Cautionary Tale About Multi-Source JNDI Configuration (Scott Nelson's Portal Productivity Ponderings) "I ran into this issue after reading that p13nDataSource and cgDataSource-NonXA should not be configured as multi-source. There were some issues changing them to use the basic JDBC connection string and when rolling back to the bad configuration the server went 'Boom.'" - Scott Nelson (tags: weblogic jdbc oracle jndi)

    Read the article

  • Slow wired internet connection on Realtek RTL8168-8111 (Rev 6)

    - by Mark
    I keep seeing issues people are having with their wireless. I am having an issue with being wired into my router. I installed Ubuntu 11.10 the other day on my custom PC. The motherboard I have installed in this PC is an ASUS P8H61-M. The issue I am having is with my speed. I have a dual boot, Windows 7 and the new Ubuntu. On my Windows install I am getting test speeds from Speakeasy at 17Mbps and actual downloads around 2-3MB/s. With Ubuntu, I am getting test speeds from Speakeasy at 1.14Mbps and actual downloads around 60KB/s. I have disabled IPv6 and am now using Google DNS for my DNS, but it hasn't resolved the issue. I have scanned my router (a Linksys WRT54GS) to disable IPv6 connections, and I am not seeing any option for that. I cannot figure out why I am getting such a sluggish internet connection. Any help resolving this would be great! I performed an ifconfig -a with these results: mark@Mark-ASUS:~$ ifconfig -a eth0 Link encap:Ethernet HWaddr f4:6d:04:d1:2c:4e inet addr:192.168.1.103 Bcast:192.168.1.255 Mask:255.255.255.0 inet6 addr: fe80::f66d:4ff:fed1:2c4e/64 Scope:Link UP BROADCAST RUNNING MULTICAST MTU:1500 Metric:1 RX packets:21888 errors:0 dropped:21888 overruns:0 frame:21888 TX packets:21068 errors:0 dropped:90 overruns:0 carrier:0 collisions:0 txqueuelen:1000 RX bytes:26348337 (26.3 MB) TX bytes:2217140 (2.2 MB) Interrupt:46 Base address:0xc000 lo Link encap:Local Loopback inet addr:127.0.0.1 Mask:255.0.0.0 inet6 addr: ::1/128 Scope:Host UP LOOPBACK RUNNING MTU:16436 Metric:1 RX packets:7 errors:0 dropped:0 overruns:0 frame:0 TX packets:7 errors:0 dropped:0 overruns:0 carrier:0 collisions:0 txqueuelen:0 RX bytes:952 (952.0 B) TX bytes:952 (952.0 B) My specs are: mark@Mark-ASUS:~$ sudo lspci -nn 04:00.0 Ethernet controller [0200]: Realtek Semiconductor Co., Ltd. RTL8111/8168B PCI Express Gigabit Ethernet controller [10ec:8168] (rev 06) 06:00.0 PCI bridge [0604]: ASMedia Technology Inc. Device [1b21:1080] (rev 01) udev information: KERNEL[11.351405] add /devices/pci0000:00/0000:00:1c.2/0000:04:00.0/net/eth0 (net) UDEV_LOG=3 ACTION=add DEVPATH=/devices/pci0000:00/0000:00:1c.2/0000:04:00.0/net/eth0 SUBSYSTEM=net INTERFACE=eth0 IFINDEX=2 SEQNUM=1542 UDEV [11.363905] add /devices/pci0000:00/0000:00:1c.2/0000:04:00.0/net/eth0 (net) UDEV_LOG=3 ACTION=add DEVPATH=/devices/pci0000:00/0000:00:1c.2/0000:04:00.0/net/eth0 SUBSYSTEM=net INTERFACE=eth0 IFINDEX=2 SEQNUM=1542 ID_VENDOR_FROM_DATABASE=Realtek Semiconductor Co., Ltd. ID_MODEL_FROM_DATABASE=RTL8111/8168B PCI Express Gigabit Ethernet controller ID_BUS=pci ID_VENDOR_ID=0x10ec ID_MODEL_ID=0x8168 ID_MM_CANDIDATE=1 dmesg information: [ 2.855982] r8169 Gigabit Ethernet driver 2.3LK-NAPI loaded [ 2.856366] r8169 0000:04:00.0: eth0: RTL8168b/8111b at 0xffffc9000064c000, f4:6d:04:d1:2c:4e, XID 0c900800 IRQ 46 [ 12.540956] r8169 0000:04:00.0: eth0: link down [ 12.540961] r8169 0000:04:00.0: eth0: link down [ 12.541173] ADDRCONF(NETDEV_UP): eth0: link is not ready I took out a lot of information that did not pertain to eth0, because the previous edits wouldn't save. If I need any more information, let me know. I would love to get this resolved. The other issue I am noticing is that sometimes my connection disconnects altogether for about a minute, then it reconnects.
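    The ifconfig output above shows every received packet also counted as dropped (RX packets:21888, dropped:21888), which usually points at a driver quirk rather than the network itself. A hedged sketch of things worth trying; the interface name eth0 is taken from the output, and the offload settings are assumptions to test, not a confirmed fix:

      sudo ethtool eth0                      # check the negotiated speed/duplex first
      sudo ethtool -K eth0 tso off gso off   # try disabling segmentation offloads
      sudo ethtool -s eth0 speed 100 duplex full autoneg off   # force a slower link as a test
      # If drops persist, Realtek's out-of-tree r8168 driver is often reported to
      # behave better than the in-kernel r8169 for RTL8111/8168B chips.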

    Read the article

  • Gartner PCC: A Shovel & Some Ah-Ha's

    - by kellsey.ruppel
    When Gartner Vice President and leading analyst Whit Andrews kicked off the Gartner Portals, Content & Collaboration Summit on Monday, March 12 at the Gaylord Palms in Orlando, FL by bringing a shovel to the stage, eyebrows raised and a few thoughts went through my head. Either this guy plans to go help the construction workers outside construct that new pool at the Gaylord, or he took a wrong turn and is at the wrong conference. Oh, and how did he get that shovel through airport security? As Whit explained further, his objective became clear: take everything anyone has ever told you about portals and throw it out the window, because portals have evolved and times are most certainly changing. The future Web is here, available not only on browsers but also via a broad spectrum of access points, including automobiles, consumer electronics and more and more mobile devices. Not merely prevalent, the future Web is also multimedia-driven and operates in real time, driven by mobility, social media, streaming video and other dynamic services. Applications and user experiences are in the midst of an evolution — from the early, simple mobile Web models to today’s Web 2.0 mobile apps and, ultimately, to a world of predominantly Web apps. Additionally, cloud services will forever change how portals and user experience are designed, built, delivered, sourced and managed. So what does this mean for you? Today’s organizations need software that will enable them to not just do their jobs, but to do so in a way that is familiar and easy for them.  What does this mean for IT? Use software and technology as an enabler, not as a roadblock. Overall, we had a great week in Orlando learning about how to improve the user experience, manage content explosion, launch social initiatives, transition to mobile environments and understand cloud and SaaS options.  We had some great conversations throughout the conference and at the Oracle booth. Lots of demonstrations were given of Oracle WebCenter Sites and Oracle Social Network. And as Christie mentioned earlier this week, our Vice President of Product Management and Strategy for WebCenter, Loren Weinberg, presented on the topic of customer engagement and talked about how organizations' relationships with their customers have fundamentally changed today and the resulting impact that has on their priorities.  Loren also talked about the importance of customer engagement, why that matters now more than ever, and what you can do to help your company or organization succeed in this new world. The question asked in every keynote and session was a simple one: What is your “ah-ha” moment? I personally had quite a few, some of which I’ve captured below. 70% of internal social initiatives eventually fail. By 2014, refusing to communicate with consumers via social media will be as harmful as ignoring emails/phone calls is today. Customer engagement = multi-channel + social & interactive + personal & relevant + optimized. If people choose to talk about your product/company/service, it's because it's remarkable. -- Seth Godin's keynote (one of the highlights of the conference!) The Web will become the primary method used for delivering content and applications to mobile devices. By 2015, 20% of smart phone users worldwide will conduct commerce using context-enriched services on a weekly basis.  86% of customers will pay more for a better customer experience. 6 P's of Quality User Experience: Product. Enabled by: People, Patterns, Process, Profit, Priorities. Did you attend the Gartner Summit? What were your ah-ha moments?

    Read the article

  • How can I make sound work without starting X?

    - by Magnus Hoff
    I have a headless machine connected to my sound system, and I am using it to run a music-playing daemon that I control over the network (among other things). However, I can't seem to get sound out of my speakers without running X. I am running PulseAudio as a system-wide instance, and my daemon is not running within X. Nevertheless, when my daemon is playing music without me hearing it, I can fix it by running startx in an unrelated session. After X starts, I can hear the sound. The sound disappears again if I kill the X server. Interestingly/annoyingly, the sound also stops after X has been running for a few minutes. This could possibly be because of a screen saver of some sort, but I haven't been able to verify or falsify this theory. So my current workaround is to ssh into the box whenever I want music, run startx, and restart it every fifteen minutes or so. I'd like to do better.

    I have been able to verify the following:

    - Adjustments in alsamixer have no effect on this problem; the relevant output channel is never muted.
    - In alsamixer, I can see no difference between when the sound is working and when it isn't.
    - Nothing is muted in pactl list.
    - There is no difference in the output from pactl list between before starting X and after it's started (except the identifier of the pactl instance connected to PulseAudio, which is different each time you run pactl).
    - The user running the music daemon is a member of the groups audio, pulse and pulse-access.
    - The music daemon program does not report any error messages and acts as if it is playing the music like it should.
    - Some form of D-Bus daemon is running: ps aux | grep dbus reports dbus-daemon --system --fork --activation=upstart before and after I have started X.

    Some details about my hardware:

    - Motherboard: http://www.asus.com/Motherboards/AT5IONTI_DELUXE/
    - Sound chip: Nvidia GPU 0b HDMI/DP (from alsamixer)
    - Using HDMI for output (the machine also has an Intel Realtek ALC887 that I am not using)

    Output of lsmod:

        Module                  Size  Used by
        deflate                12617  0
        zlib_deflate           27139  1 deflate
        ctr                    13201  0
        twofish_generic        16635  0
        twofish_x86_64_3way    25287  0
        twofish_x86_64         12907  1 twofish_x86_64_3way
        twofish_common         20919  3 twofish_generic,twofish_x86_64_3way,twofish_x86_64
        camellia               29348  0
        serpent                29125  0
        blowfish_generic       12530  0
        blowfish_x86_64        21466  0
        blowfish_common        16739  2 blowfish_generic,blowfish_x86_64
        cast5                  25112  0
        des_generic            21415  0
        xcbc                   12815  0
        rmd160                 16744  0
        bnep                   18281  2
        rfcomm                 47604  12
        sha512_generic         12796  0
        crypto_null            12918  0
        parport_pc             32866  0
        af_key                 36389  0
        ppdev                  17113  0
        binfmt_misc            17540  1
        nfsd                  281980  2
        ext2                   73795  1
        nfs                   436929  1
        lockd                  90326  2 nfsd,nfs
        fscache                61529  1 nfs
        auth_rpcgss            53380  2 nfsd,nfs
        nfs_acl                12883  2 nfsd,nfs
        sunrpc                255224  16 nfsd,nfs,lockd,auth_rpcgss,nfs_acl
        btusb                  18332  2
        vesafb                 13844  2
        pl2303                 17957  1
        ath3k                  12961  0
        bluetooth             180153  24 bnep,rfcomm,btusb,ath3k
        snd_hda_codec_hdmi     32474  4
        nvidia              11308613  0
        ftdi_sio               40679  1
        usbserial              47113  6 pl2303,ftdi_sio
        psmouse                97485  0
        snd_hda_codec_realtek 224173  1
        snd_hda_intel          33719  5
        snd_hda_codec         127706  3 snd_hda_codec_hdmi,snd_hda_codec_realtek,snd_hda_intel
        serio_raw              13211  0
        snd_seq_midi           13324  0
        snd_hwdep              17764  1 snd_hda_codec
        snd_pcm                97275  3 snd_hda_codec_hdmi,snd_hda_intel,snd_hda_codec
        snd_rawmidi            30748  1 snd_seq_midi
        snd_seq_midi_event     14899  1 snd_seq_midi
        snd_seq                61929  2 snd_seq_midi,snd_seq_midi_event
        snd_timer              29990  2 snd_pcm,snd_seq
        snd_seq_device         14540  3 snd_seq_midi,snd_rawmidi,snd_seq
        snd                    79041  20 snd_hda_codec_hdmi,snd_hda_codec_realtek,snd_hda_intel,snd_hda_codec,snd_hwdep,snd_pcm,snd_rawmidi,snd_seq,snd_timer,snd_seq_device
        asus_atk0110           18078  0
        mac_hid                13253  0
        jc42                   13948  0
        soundcore              15091  1 snd
        snd_page_alloc         18529  2 snd_hda_intel,snd_pcm
        coretemp               13554  0
        i2c_i801               17570  0
        lp                     17799  0
        parport                46562  3 parport_pc,ppdev,lp
        r8169                  62154  0

    Any ideas? What does X do that's so important?
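    Given that the sound dies after a period of apparent inactivity, one speculative first check (an assumption based on the symptoms, not a confirmed diagnosis) is PulseAudio's module-suspend-on-idle, which a default configuration loads and which suspends sinks that have been idle for a while. A minimal sketch, assuming a standard system-wide PulseAudio setup with its config in /etc/pulse/system.pa and the init script enabled via /etc/default/pulseaudio:

        # Speculative check: is the system-wide PulseAudio suspending idle sinks?
        # module-suspend-on-idle and /etc/pulse/system.pa are standard PulseAudio;
        # whether this is the actual cause here is an assumption.
        grep suspend-on-idle /etc/pulse/system.pa

        # If the module is loaded, comment it out and restart the daemon:
        sudo sed -i 's/^load-module module-suspend-on-idle/#&/' /etc/pulse/system.pa
        sudo service pulseaudio restart

        # Watch the sink state while the daemon is playing; a sink stuck in
        # SUSPENDED would be consistent with the symptoms above:
        pactl list short sinks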

    Read the article

  • October in Review

    - by Richard Bingham
    With OpenWorld over, October was time to get back to serious work for everyone, including the Fusion Applications Developer Relations team. Don't forget the OpenWorld content is still available, including presentation downloads, for a limited period of time, so be sure to grab anything you found useful or take another scan for anything you might have missed. Of all the announcements, the continued evolution of the Oracle Cloud services for extending and integrating with Fusion Applications is increasing in popularity, and certainly the Cloud Marketplace is something we're becoming involved in. More details to follow.

    Fusion Concepts: Last week Vik from our team started the new "Fusion Concepts" series of articles, providing those new to Fusion Applications an explanation of the architectural basics, with the aim of reducing the learning curve and laying the platform for more efficient and effective development. The series began with an insightful first post on the different schemas that exist in the Fusion Applications database. Look out for upcoming posts on multi-lingual entities, profile options, look-ups and more.

    New Learning Resources: Our YouTube channel continued to expand with more 'how to' videos on using Page Composer, extending the Simplified UI (aka FUSE), and integrating BI reports and analytics. The Oracle Learning Library is also now well established as a central resource for knowledge, with thousands of tutorials, videos, and documents. Of particular note are the great new extensibility-related videos added by the CRM Product Management team, including more on the ever-expanding capabilities of Application Composer. To see some examples of these, search using the keyword 'customization' or the product 'Sales Cloud'. Finally on learning resources, as Oliver mentioned, the Oracle Press book on Fusion Application Customization and Extensibility is now available for pre-order on Amazon (due out 1st Jan).

    Out And About: October also saw us attend the annual Apps Conference held by the UK Oracle User Group in London. Interestingly, there was an Applications Transformation stream of sessions and content that included Fusion Applications, with all the latest in the Oracle Applications evolution, as always focused around the three tenets of social, mobile, and cloud. Read more in Richard's post-event write-up.

    Other teams around Oracle have also been busy. Angelo from the Platform Technical Services group has done quite a bit of work using web services with Fusion SaaS and has published many interesting findings on his blog. It's definitely recommended reading if you are working on any related integration projects. The middleware-for-applications group has built a new tool called "AppAdvantage", offering an online assessment of your use of Fusion Middleware technologies with Oracle Applications. As the popularity of integrating cloud applications with on-premises systems continues to grow, leveraging existing middleware technologies (and licenses) to support the integration solution is likely to be of paramount importance. Similarly, the "Build Enterprise Application Extensions with Ease" section of the related webpage has AppsUX director Killian Evers speaking about customization using the composer tools. Both are useful resources for those just getting started with a move to Fusion Applications.

    The Oracle A-Team, specialists in middleware technical architecture, always publish superb content via their 'chronicles' site, now with a substantial amount specifically related to Fusion Applications. Click on the Fusion Applications menu at the top right of their homepage to see more. Of particular note last month was an article on customizing the timeout pop-up message shown to inactive users, providing design-time insight and easy-to-follow steps. Finally, if you're looking at using Oracle Middleware and Cloud to tailor and extend your applications, you may also be interested in this new blog post on the roadmap for Oracle SOA and the latest on-demand Cloud Development webcast.

    Read the article

  • Booting the liveCD/USB in EFI mode fails on Samsung Tablet XE700T1A

    - by F.L.
    My tablet is a Samsung Series 7 Slate (XE700T1A-A02FR, French version) built on an Intel Sandy Bridge architecture. The main point about this tablet is that it ships with Windows 7 installed in (U)EFI mode (GPT partition table, etc.), so I'd like to set up an EFI dual boot with Ubuntu. But it seems I can't boot the liveCD in EFI mode. It starts loading (up to the initrd), but I then get a blank (black) screen. I've tried the nomodeset kernel option (as well as removing quiet and splash) with no luck.

    [2012-09-27] I have used the Ubuntu 12.04.1 Desktop ISO (I have read somewhere that it is the only one that can boot in EFI mode). I'd say this has something to do with UEFI, since the liveCD boots in BIOS mode but not in EFI mode. Besides, I am not sure my boot info will help, since I can't boot the liveCD in EFI mode; as a result I can't install Ubuntu in EFI mode, so it would be the boot info from the liveCD booted in BIOS mode. This happens with an ubuntu-12.04.1-desktop-amd64 ISO used on a live USB. The live USB was created by dd'ing the ISO onto the full disk device (i.e. /dev/sdx, no number) of the flash drive. I have also tried copying the liveCD files onto a primary GPT partition, but with no luck: I just get the grub shell, no menu, no install option.

    [2012-09-28] Today I tried a flash drive created with Ubuntu's Startup Disk Creator and the alternate 12.04.1 64-bit ISO. I get a grub menu in text mode (which means it did start in EFI mode) with install and test options. But when I start any of these, I simply get a black screen (no cursor, neither mouse nor text-mode cursor). I tried removing the quiet option and adding nomodeset and acpi=off, but it didn't do any good. So this is the same result as for the liveCD.

    [2012-10-01] I have tried a version of the secure remix, written via usb-creator-gtk. Booting from the USB key shows the same symptoms: booting in EFI mode is impossible (I get a menu, but whatever entry I choose, I get the blank screen), while booting in BIOS mode works, so I did the install that way. Then I used boot-repair to try installing grub-efi and get a system that would boot in EFI mode. But I can't boot this system, because the EFI firmware doesn't seem to detect that sda contains a valid EFI partition. Here is the resulting boot info: Boot info 1253554

    [2012-10-01] Today I reinstalled the pre-shipped version of Windows 7, and then installed Ubuntu from a secure-remix ISO dumped on a USB flash drive via usb-creator-gtk, booted in BIOS mode. When the install ended, I chose "continue testing" and then used boot-repair to get the bootloader installed. Now, when I boot the tablet, I get the grub menu and it can chainload Windows 7 flawlessly. But when I try to start one of the Ubuntu options, I get the same old blank screen. Here is the new boot info: Boot info 1253927

    [2012-10-01] I tried installing the 3.3 kernel by chrooting into the installed system from a live USB boot (secure remix again). Same symptoms. I feel the key to this is that the device's EFI firmware (which is EFI v2.0) exposes the graphics hardware in a way that prevents the kernel from initializing it, and thus prevents it from booting (the kernel stops all drive access just after the screen turns a very dark purple). Here is some info on the UEFI firmware as given by rEFInd:

        EFI revision: 2.00
        Platform: x86_64 (64 bit)
        Firmware: American Megatrends 4.635
        Screen Output: Graphics Output (UEFI), 800x600

    [2012-10-08] This weekend I tried loading the kernel with elilo. Even though I didn't have more luck booting the kernel, elilo gives more info when loading it. I think the next step is trying to load a kernel with the EFI stub directly.
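    A sketch of that EFI-stub experiment, assuming a 3.3+ kernel built with CONFIG_EFI_STUB and the EFI system partition mounted at /boot/efi; the device names, paths and kernel version below are illustrative, not taken from the machine above:

        # Copy the kernel and initrd onto the EFI system partition.
        sudo cp /boot/vmlinuz-3.3.0-custom /boot/efi/EFI/ubuntu/vmlinuz.efi
        sudo cp /boot/initrd.img-3.3.0-custom /boot/efi/EFI/ubuntu/initrd.img

        # Register a firmware boot entry that loads the kernel directly,
        # passing the command line (including nomodeset) as loader options.
        sudo efibootmgr --create --disk /dev/sda --part 1 \
             --label "Ubuntu (EFI stub)" \
             --loader '\EFI\ubuntu\vmlinuz.efi' \
             --unicode 'root=/dev/sda5 ro nomodeset initrd=\EFI\ubuntu\initrd.img'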

    Read the article

  • Exadata X3, 11.2.3.2 and Oracle Platinum Services

    - by Rene Kundersma
    Oracle recently announced an Exadata hardware update. The overall architecture remains the same; however, some interesting hardware refreshes have been made, especially for the storage server (X3-2L). Each cell will now have 1600GB of flash, which means an X3-2 full rack will have 20.3 TB of total flash! For all the details, see the Oracle Exadata product page: www.oracle.com/exadata

    Together with the announcement of the X3 generation, a new Exadata release, 11.2.3.2, has been made available. New Exadata systems will be shipped with this release, and existing installations can be updated to it. As always, there is a storage cell patch and a patch for the compute node, which again needs to be applied using YUM. Instructions and requirements for patching existing Exadata compute nodes to 11.2.3.2 using YUM can be found in the patch README. Depending on the release installed on your compute nodes, the README will direct you to a particular section in MOS note 1473002.1. MOS 1473002.1 should only be followed together with the instructions from the 11.2.3.2 patch README. As with 11.2.3.1.0 and 11.2.3.1.1, instructions are included to prepare your systems to use YUM for the first time in case you are still on release 11.2.2.4.2 or earlier. You will also find these one-time setup instructions in MOS note 1473002.1.

    By default, compute nodes that are updated to 11.2.3.2.0 will have the UEK kernel. Before 11.2.3.2.0, the 'compatible kernel' was used for the compute nodes. For 11.2.3.2.0, customers have the choice to replace the UEK kernel with the Exadata standard 'compatible kernel', which is also in the ULN 11.2.3.2 channel. The recommendation is to use the kernel that is installed by default.

    One of the other great new things 11.2.3.2 brings is Write Back Flash Cache (wbfc). By default, wbfc is disabled after the upgrade to 11.2.3.2. Enable wbfc after patching on the storage servers of your test environment and see the improvements it brings for your applications. Write Back Flash Cache can be enabled by dropping the existing flash cache, stopping the cellsrv process and changing the flashCacheMode attribute of the cell. Of course, stopping cellsrv can only be done in a controlled manner. The steps (shown as a CellCLI sketch at the end of this item):

    - drop flashcache
    - alter cell shutdown services cellsrv (again, cellsrv can only be stopped in a controlled manner)
    - alter cell flashCacheMode = WriteBack
    - alter cell startup services cellsrv
    - create flashcache all

    Going back to WriteThrough flash cache is also possible, but only after flushing the flash cache: alter cell flashcache all flush

    The last item I would like to highlight dates from a while ago, but is a great thing to emphasise: Oracle Platinum Services. On top of remote fault monitoring with faster response times, Oracle has included update and patch deployment services. These services are delivered by Oracle Advanced Customer Support at no additional cost for qualified Oracle Premier Support customers.

    References:
    - 11.2.3.2.0 README
    - Exadata YUM Repository Population, One-Time Setup Configuration and YUM upgrades (MOS note 1473002.1)
    - Oracle Platinum Services
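    As promised above, a transcript-style sketch of the Write Back Flash Cache sequence, wrapping the commands listed earlier in CellCLI one-liners; run this on a test cell first, and only in a controlled maintenance window:

        # Enable Write Back Flash Cache on a storage cell; these are the
        # commands from the steps above, executed via cellcli -e.
        cellcli -e "drop flashcache"
        cellcli -e "alter cell shutdown services cellsrv"   # controlled stop only
        cellcli -e "alter cell flashCacheMode = WriteBack"
        cellcli -e "alter cell startup services cellsrv"
        cellcli -e "create flashcache all"

        # Going back to WriteThrough is possible, but only after a flush:
        cellcli -e "alter cell flashcache all flush"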

    Read the article

  • How the SPARC T4 Processor Optimizes Throughput Capacity: A Case Study

    - by Ruud
    This white paper demonstrates the architected latency hiding features of Oracle's UltraSPARC T2+ and SPARC T4 processors

    That is the first sentence from this technical white paper, but what exactly does it mean? Let's consider a very simple example, the computation of a = b + c. This boils down to the following (pseudo-assembler) instructions that need to be executed:

        load  @b, r1
        load  @c, r2
        add   r1, r2, r3
        store r3, @a

    The first two instructions load variables b and c from an address in memory (here symbolized by @b and @c respectively). These values go into registers r1 and r2. The third instruction adds the values in r1 and r2. The result goes into register r3. The fourth instruction stores the contents of r3 into the memory address symbolized by @a.

    If we're lucky, both b and c are in a nearby cache and the load instructions only take a few processor cycles to execute. That is the good case, but what if b or c, or both, have to come from very far away? Perhaps both of them are in the main memory, and then it easily takes hundreds of cycles for the values to arrive in the registers. Meanwhile the processor is doing nothing and simply waits for the data to arrive. Actually, it does something. It burns cycles while waiting. That is a waste of time and energy. Why not use these cycles to execute instructions from another application, or another thread in case of a parallel program?

    That is exactly what latency hiding on the SPARC T-Series processors does. It is a hardware feature totally transparent to the user and application. As soon as there is a delay in the execution, the hardware uses these otherwise idle cycles to execute instructions from another process. As a result, the throughput capacity of the system improves because idle cycles are no longer wasted, and therefore more jobs can be run per unit of time.

    This feature has been in the SPARC T-series from the beginning, so why this paper? The difference with previous publications on this topic is in the amount of detail given. How this all works under the hood is fully explained using two example programs. Starting from the assembly language instructions, it is demonstrated in what way these programs execute. To really see what is happening we go down to the processor pipeline level, where the gaps in the execution are, and show in what way these idle cycles are filled by other copies of the same program running simultaneously. Both the SPARC T4 as well as the older UltraSPARC T2+ processor are covered.

    You may wonder why the UltraSPARC T2+ is included. The focus of this work is on the SPARC T4 processor, but to explain the basic concept of latency hiding at this very low level, we start with the UltraSPARC T2+ processor because it is architecturally a much simpler design. From the single issue, in-order pipelines of this processor we then shift gears and cover how this all works on the much more advanced dual issue, out-of-order architecture of the T4. The analysis and performance experiments have been conducted on both processors. The results depend on the processor, but in all cases the theoretical estimates are confirmed by the experiments.

    If you're interested to read a lot more about this and find out how things really work under the hood, you can download a copy of the paper here. A paper like this could not have been produced without the help of several other people. I want to thank the co-author of this paper, Jared Smolens, for his very valuable contributions and our highly inspiring discussions. I'm also indebted to Thomas Nau (Ulm University, Germany), Shane Sigler and Mark Woodyard (both at Oracle) for their feedback on earlier versions of this paper. Karen Perkins (Perkins Technical Writing and Editing) and Rick Ramsey at Oracle were very helpful in providing editorial and publishing assistance.

    Read the article

  • Ubuntu keyboard shortcuts - two-part question

    - by Don
    Background: I come from a Windows background and just started dual-booting Ubuntu (my first Linux experience) about 4 days ago. So my systems are Windows 7/Ubuntu 12.04, and so far I'm loving Ubuntu. I am a dedicated mouse-abolitionist (trackpads are hell) and do most of my browsing and navigation with keyboard shortcuts. However, on switching to Ubuntu, a lot of my keyboard shortcuts are gone, and my productivity has taken a huge hit whenever I am using Ubuntu.

    Problem 1: My computer was designed to display on-screen notifications for a second when I hit Caps Lock or Num Lock, and there are no constant indicators of the lock status (LEDs, etc.). In Ubuntu, the keys still worked, but the notifications were gone. Googling got me a tutorial on key binding (Compiz) and scripts, so now Caps Lock and Num Lock run this script (a standalone test of the mask logic appears at the end of this item):

        #!/bin/bash
        icon="/usr/share/icons/gnome/scalable/devices/keyboard.svg"
        case $1 in
            'scrl')
                mask=4      # Scroll Lock LED is bit 3 of the X LED mask, so the value is 4
                key="Scroll"
                ;;
            'num')
                mask=2
                key="Num"
                ;;
            'caps')
                mask=1
                key="Caps"
                ;;
        esac
        value=$(xset q | grep "LED mask" | sed -r "s/.*LED mask:\s+[0-9a-fA-F]+([0-9a-fA-F]).*/\1/")
        if [ $(( 0x$value & 0x$mask )) == $mask ]
        then
            output="$key Lock"
            output2="On"
        else
            output="$key Lock"
            output2="Off"
        fi
        notify-send -i $icon "$output" "$output2" -t 1000

    But, whether turning it off or on, the notifications always say that I have turned it on. Is there an easy fix for this, or an easier way to get it to display the correct notifications?

    Problem 2: I'm not sure if this is because of my keyboard or Ubuntu. In Windows, I use Chrome and rely quite a bit on the Ctrl+PgUp/PgDn shortcuts to switch between tabs. On my keyboard, I can enter PgUp and PgDn either by disabling Num Lock and hitting 9 or 3 respectively on the 10-key pad, or by holding the Fn key and hitting the up or down arrow. The first method is the one I very heavily relied upon, and it works in Firefox for Ubuntu, but not in Chrome nor in Chromium. The second method (Ctrl + Fn + Up/Down) works fine in Chrome. However, I'd dearly like to find a way to make the first method work. Any suggestions? Thanks in advance for your help.

    Update: @julien: I've checked the keyboard layout options - I didn't find anything that seemed useful for this goal. @Marty: The script runs once when the button is pressed. In Compiz, I've tied those two keys to the script, so when I press the button, it runs the script with the pressed button as a parameter.

    Update: @elmicha: Thanks. That one works a lot better, and it even puts an icon into the status bar when Caps Lock is on. There's still a very slight problem: if I quickly tap the key twice, the image shows that it has been turned on and then off, and the notification comes and goes from my status area, but the text of both notifications is "Caps Lock on". Same with Num Lock. However, if the time between presses is long enough for the first notification to disappear, everything displays correctly. Given how quickly the notifications disappear, I don't expect this to pose too much of a problem for me.
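    As referenced above, the LED-mask check at the heart of the script can be exercised on its own. A standalone sketch, assuming an X session is available; caps=1, num=2 and scroll=4 are the conventional X keyboard LED bits, and the parsing mirrors the script:

        # Print the current state of all three locks by parsing xset's LED mask.
        value=$(xset q | sed -nr 's/.*LED mask:[[:space:]]+0*([0-9a-fA-F]+).*/\1/p')
        value=${value:-0}   # fall back to 0 if the mask line is not found
        for pair in Caps:1 Num:2 Scroll:4; do
            key=${pair%%:*}
            mask=${pair##*:}
            if (( 0x$value & mask )); then
                echo "$key Lock: on"
            else
                echo "$key Lock: off"
            fi
        done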

    Read the article

  • Give a session on C++ AMP – here is how

    - by Daniel Moth
    Ever since presenting on C++ AMP at the AMD Fusion conference in June, then the Gamefest conference in August, and the BUILD conference in September, I've had numerous requests about my material from folks that want to re-deliver the same session. The C++ AMP session I put together has evolved over the 3 presentations to its final form that I used at BUILD, so that is the one I recommend you base yours on. Please get the slides and the recording from channel9 (I'll refer to slide numbers below). This is how I've been presenting the C++ AMP session:

    Context
    - (slide 3, 04:18-08:18) Start with a demo, on my dual-GPU machine. I've been using the N-Body sample (for VS 11 Developer Preview).
    - (slide 4) Use an nvidia slide that has additional examples of performance improvements that customers enjoy with heterogeneous computing.
    - (slide 5) Talk a bit about the differences today between CPU and GPU hardware, leading to the fact that these will continue to co-exist and that GPUs are great for data parallel algorithms, but not much else today. One is a jack of all trades and the other is a number cruncher.
    - (slide 6) Use the APU example from amd, as one indication that the hardware space is still in motion, emphasizing that the C++ AMP solution is a data parallel API, not a GPU API. It has a future-proof design for hardware we have yet to see.
    - (slide 7) Provide more meta-data, as blogged about when I first introduced C++ AMP.

    Code
    - (slides 9-11) Introduce C++ AMP coding with a simplistic array-addition algorithm – the slides speak for themselves.
    - (slides 12-13) index<N>, extent<N>, and grid<N>.
    - (slides 14-16) array<T,N>, array_view<T,N> and comparison between them.
    - (slide 17) parallel_for_each.
    - (slides 18, 21) restrict.
    - (slides 19-20) actual restrictions of restrict(direct3d) – the slides speak for themselves.
    - (slide 22) bring it all together with a matrix multiplication example.
    - (slides 23-24) accelerator, and accelerator_view.
    - (slides 26-29) Introduce tiling, incl. tiled matrix multiplication [tiling probably deserves a whole session instead of 6 minutes!].

    IDE
    - (slides 34, 37) Briefly touch on the concurrency visualizer. It supports GPU profiling, but enhancements specific to C++ AMP we hope will come in the Beta timeframe, which is when I'll be spending more time talking about it.
    - (slides 35-36, 51:54-59:16) Demonstrate the GPU debugging experience in VS 11.

    Summary
    - (slide 39) Re-iterate some of the points of slide 7, and add the point that the C++ AMP spec will be open for other compiler vendors to implement, even on other platforms (in fact, Microsoft is actively working on that).
    - (slide 40) Links to content – see slide – including where all your questions should go: http://social.msdn.microsoft.com/Forums/en/parallelcppnative/threads.

    "But I don't have time for a full blown session, I only need 2 (or just 1, or 3) C++ AMP slides to use in my session on related topic X." If all you want is a small number of slides, you can take some from the session above and customize them. But because I am so nice, I have created some slides for you, including talking points in the notes section. Download them here. Comments about this post by Daniel Moth welcome at the original blog.

    Read the article

  • SQL Server Optimizer Malfunction?

    - by Tony Davis
    There was a sharp intake of breath from the audience when Adam Machanic declared the SQL Server optimizer to be essentially "stuck in 1997". It was during his fascinating "Query Tuning Mastery: Manhandling Parallelism" session at the recent PASS SQL Summit. Paraphrasing somewhat, Adam (blog | @AdamMachanic) offered a convincing argument that the optimizer often delivers flawed plans based on assumptions that are no longer valid with today's hardware.

    In 1997, when Microsoft engineers re-designed the database engine for SQL Server 7.0, SQL Server got its initial implementation of a cost-based optimizer. Up to SQL Server 2000, the developer often had to deploy a steady stream of hints in SQL statements to combat the occasionally wilful plan choices made by the optimizer. However, with each successive release, the optimizer has evolved and improved in its decision-making. It is still prone to the occasional stumble when we tackle difficult problems, join large numbers of tables, perform complex aggregations, and so on, but for most of us, most of the time, the optimizer purrs along efficiently in the background.

    Adam, however, challenged further any assumption that the current optimizer is competent at providing the most efficient plans for our more complex analytical queries, and in particular at offering up correctly parallelized plans. He painted a picture of a present where complex analytical queries have become ever more prevalent; where disk IO is ever faster, so that reads from disk come into buffer cache faster than ever; where the improving RAM-to-data ratio means that we have a better chance of finding our data in cache. Most importantly, we have more CPUs at our disposal than ever before. To get these queries to perform, we not only need to have the right indexes, but also to be able to split the data up into subsets and spread its processing evenly across all these available CPUs. Improvements such as support for ColumnStore indexes are taking things in the right direction, but, unfortunately, deficiencies in the current optimizer mean that SQL Server is yet to be able to exploit properly all those extra CPUs.

    Adam's contention was that the current optimizer uses essentially the same costing model for many of its core operations as it did back in the days of SQL Server 7, based on assumptions that are no longer valid. One example he gave was a "slow disk" bias that may have been valid back in 1997 but certainly is not on modern disk systems. Essentially, the optimizer assesses the relative cost of serial versus parallel plans based on the assumption that there is no IO cost benefit from parallelization, only CPU. It assumes that a single request will saturate the IO channel, and so a query would not run any faster if we parallelized IO because the disk system simply wouldn't be able to handle the extra pressure. As such, the optimizer often decides that a serial plan is lower cost, often in cases where a parallel plan would improve performance dramatically.

    It was challenging and thought-provoking stuff, as were his techniques for driving parallelism through query logic based on subsets of rows that define the "grain" of the query. I highly recommend you catch the session if you missed it. I'm interested to hear, though, when and how often people feel the force of the optimizer's shortcomings. Barring mistakes, such as stale statistics, how often do you feel the optimizer fails to find the plan you think it should, and what are the most common causes? Is it fighting to induce it toward parallelism? Combating unexpected plans arising from table partitioning? Something altogether more prosaic?

    Cheers, Tony.

    Read the article

  • Personal Financial Management – The need for resuscitation

    - by Salil Ravindran
    Until a year or so ago, PFM (Personal Financial Management) was the blue-eyed boy of every channel banking head. In an age when bank account portability is still fiction, PFM was expected to incentivise customers to switch banks. It still is, in some emerging economies, but if the state of PFM in mature markets is anything to go by, it is in a state of coma and badly requires resuscitation.

    Studies conducted over the past year show an alarming decline and stagnation in PFM usage in mature markets. A September 2012 report by Aite Group, "Strategies for PFM Success", shows that 72% of users hadn't used PFM and, worse, 58% of them were not keen on using it. Of the rest who had used it, only half did so on a bank site. While there are multiple reasons for this lack of adoption, some are glaringly obvious. While pretty graphs and pie charts are important to provide a visual representation of my income and expenses, they are simply not enough to encourage me to return. Static representation of data without any insightful analysis does not help me. Budgeting and cash flow are important, but when I have an operative account, a couple of savings accounts, a mortgage loan and a couple of credit cards, help me understand what I can afford in specific contexts rather than telling me I just busted my budget. Help me with the relative importance of each budget category, so that I know it is fine to go over budget on books for my daughter as against going over budget on eating out. Budget overruns and spend analysis are post facto, and I am informed of my sins only when I return to online banking; that too, only if I decide to visit the PFM area. Fundamentally, PFM should be a part of my banking engagement rather than an analysis tool. It should be contextual, so that I can make insight-based decisions. So what can be done to resuscitate PFM?

    Amalgamation with banking activities - In most cases, PFM tools are integrated into online banking pages, and they are like chapter 37 of a long story. PFM needs to be a way of banking rather than a tool. Available balances should shift to spendable balances. Budget- and goal-related insights should be integrated with transaction sessions to drive pre-event financial decisions.

    Personal financial guidance - Banks need to think at ground level and see if their PFM offering is really helping customers achieve self-actualisation. Banks need to recognise that most customers out there are not proficient at getting the best value from their money. Customers return when they know that they are being guided rather than just informed about their finances. Integrating contextual financial offers and financial planning into PFM is one way ahead. Yet another way is to help customers tag unwanted spending, thereby encouraging sound savings habits.

    Mobile PFM - Most banks have left all those numbers on online banking. With access mostly having moved to devices, and given the success of apps, moving PFM onto devices will give it a much-needed shot in the arm. This is not only about presenting the same wine in a new bottle, but also about leveraging the power of the device to push real-time notifications that shape pre-purchase decisions. The aim should be to analyse spend, budgets and financial goals in real time and push them pre-event onto the device. So next time, I should know that I have overrun my eating-out budget before walking into that burger joint, not after.

    Increase participation and collaboration - Peer-group experiences and comments are valued above those offered by the bank. Integrating social media into PFM engagement will let customers share and solicit financial management experiences with their peer group. Peer comparisons help benchmark one's savings and spending habits against those of the peer group and increase stickiness.

    While mature markets have gone through this learning in some way over the last year, banks in maturing digital banking economies increasingly seem to be falling into this trap. Best practice lies in profiling and segmenting customers, being where they are, and contextually guiding them to identify and achieve their financial goals. Banks could look at the likes of Simple and Movenbank to draw inspiration from.

    Read the article

  • Grub Rescue Unknown Filesystem Error. Grub Corrupted or Filesystem?

    - by nightcrawler
    It has now happened twice, and I have been pulling my hair out ever since... I installed Xubuntu on my external hard disk and have been using it for about 3 months. It has three partitions: one of 500 MB mounted at /boot, a second of 48 GB mounted at /, and the rest (out of 160 GB) is an NTFS partition used as normal external storage. That last partition supposedly acts as a buffer between Linux distributions and the Windows platform, a buffer in the sense that it provides a universal channel for data transfers. I have constantly used this external hard disk for data transfers between a Windows 7 laptop and Xubuntu (on this external disk) without any hassle.

    However, on one of my desktops, where I have Ubuntu, I attached this external drive (for the first time), which let me do data transfers with all three partitions properly mounted... but then the same nasty thing occurred that had occurred before. When I (as usual) tried booting from this external disk (the one holding Xubuntu, formerly used under Ubuntu), I got an error.

    Now I am totally devastated, because a similar thing happened about 6 months ago, when I had Fedora 17 on my external disk (instead of Xubuntu), and after it was used under Ubuntu the same thing happened... I didn't report it because I had already planned to move towards Debian instead of RPM! The mystery is that as long as I don't attach this external disk under Ubuntu, the data never corrupts*, whereas under Windows XP/7 I can use it as normal USB storage; of course, the Linux partitions aren't available under Windows platforms...

    * By "corrupts" I mean the disk fails to boot with the error mentioned; I can't say whether the data within remains untouched.

    It seems that my grub and/or MBR is corrupted. Please guide me in solving this issue, and also explain why I can't attach and use Linux external hard disks under a Linux platform.

        Disk /dev/sdc: 160.0 GB, 160041884672 bytes
        255 heads, 63 sectors/track, 19457 cylinders, total 312581806 sectors
        Units = sectors of 1 * 512 = 512 bytes
        Sector size (logical/physical): 512 bytes / 512 bytes
        I/O size (minimum/optimal): 512 bytes / 512 bytes
        Disk identifier: 0x0004e7d0

           Device Boot      Start         End      Blocks   Id  System
        /dev/sdc1   *        2048      976895      487424   83  Linux
        /dev/sdc2          978942    96874495    47947777    5  Extended
        /dev/sdc3        96874496   312575999   107850752    7  HPFS/NTFS/exFAT
        /dev/sdc5          978944    94726143    46873600   83  Linux
        /dev/sdc6        94728192    96874495     1073152   82  Linux swap / Solaris

    I recall for sure having seen a thread here where a similar problem occurred, and in response someone explained how to mount the (now invisible) partitions and recover the important data in them. I have misplaced that URL, so if anyone can point me to it, please do, because my important documents reside in the / partition.

    What I have already done: without success, I have tried this and related solutions.

    What I plan to do: I believe the filesystem has been corrupted; would you recommend a solution like this? I can't recall whether my /boot (500 MB) partition was ext4 or ext2, though I am sure that my / (48 GB) partition was ext4.

    UPDATE 1: I attached my external hard disk under Ubuntu and ran the following command as root: grub-install /dev/sdc, where /dev/sdc was my external disk containing the corrupted Xubuntu. It reported "all done!" I re-ran fdisk -l, but to my disappointment it reported:

        Disk /dev/sdc: 160.0 GB, 160041884672 bytes
        255 heads, 63 sectors/track, 19457 cylinders, total 312581806 sectors
        Units = sectors of 1 * 512 = 512 bytes
        Sector size (logical/physical): 512 bytes / 512 bytes
        I/O size (minimum/optimal): 512 bytes / 512 bytes
        Disk identifier: 0x1b6b9167

        Disk /dev/sdc doesn't contain a valid partition table

    ... and now I can't even access its NTFS partition (the former /dev/sdc3). Please help!

    UPDATE 2: TestDisk (by cgsecurity) failed to find any partition table :(

        TestDisk 6.13, Data Recovery Utility, November 2011
        Christophe GRENIER <[email protected]>
        http://www.cgsecurity.org

        Disk /dev/sdc - 160 GB / 149 GiB - CHS 19457 255 63
             Partition                  Start        End    Size in sectors
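    Since the original sector layout is preserved in the first fdisk listing above, one last-resort sketch is to rewrite just the MBR partition table with sfdisk, using start/size values computed from that listing (size = End - Start + 1). This assumes the data itself is untouched and that the disk still enumerates as /dev/sdc; double-check the device letter, as pointing this at the wrong disk would destroy its partition table:

        # Recreate the table from the values recorded before it was lost.
        # The Id and bootable flags also come from the original listing.
        {
          echo 'unit: sectors'
          echo ''
          echo '/dev/sdc1 : start=2048,     size=974848,    Id=83, bootable'
          echo '/dev/sdc2 : start=978942,   size=95895554,  Id=5'
          echo '/dev/sdc3 : start=96874496, size=215701504, Id=7'
          echo '/dev/sdc5 : start=978944,   size=93747200,  Id=83'
          echo '/dev/sdc6 : start=94728192, size=2146304,   Id=82'
        } > sdc.layout
        sudo sfdisk --force /dev/sdc < sdc.layout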

    Read the article

< Previous Page | 197 198 199 200 201 202 203 204 205 206 207 208  | Next Page >