Search Results

Search found 5436 results on 218 pages for 'transfer rate'.


  • Lubuntu upgrade to 13.04 killed sound with ALSA. How to troubleshoot?

    - by Sven
    After upgrading to 13.04 from 12.10 Lubuntu lost audio playback after unplugging usb soundcard (Polycom) and plugging it back in. Volume control was gray and leading to pulseaudio mixer (not installed) so I uninstalled the pulseaudio package. I also removed and reinstalled the alsa-base package. After restart I have the alsamixer back everything seemingly as usual(volume 100%, unmute) but every sound program gets me errors no matter what device I select. aplay -L: null Discard all samples (playback) or generate zero samples (capture) pulse PulseAudio Sound Server default:CARD=NVidia HDA NVidia, ALC662 rev1 Analog Default Audio Device sysdefault:CARD=NVidia HDA NVidia, ALC662 rev1 Analog Default Audio Device front:CARD=NVidia,DEV=0 HDA NVidia, ALC662 rev1 Analog Front speakers surround40:CARD=NVidia,DEV=0 HDA NVidia, ALC662 rev1 Analog 4.0 Surround output to Front and Rear speakers surround41:CARD=NVidia,DEV=0 HDA NVidia, ALC662 rev1 Analog 4.1 Surround output to Front, Rear and Subwoofer speakers surround50:CARD=NVidia,DEV=0 HDA NVidia, ALC662 rev1 Analog 5.0 Surround output to Front, Center and Rear speakers surround51:CARD=NVidia,DEV=0 HDA NVidia, ALC662 rev1 Analog 5.1 Surround output to Front, Center, Rear and Subwoofer speakers surround71:CARD=NVidia,DEV=0 HDA NVidia, ALC662 rev1 Analog 7.1 Surround output to Front, Center, Side, Rear and Woofer speakers iec958:CARD=NVidia,DEV=0 HDA NVidia, ALC662 rev1 Digital IEC958 (S/PDIF) Digital Audio Output hdmi:CARD=NVidia,DEV=0 HDA NVidia, HDMI 0 HDMI Audio Output dmix:CARD=NVidia,DEV=0 HDA NVidia, ALC662 rev1 Analog Direct sample mixing device dmix:CARD=NVidia,DEV=1 HDA NVidia, ALC662 rev1 Digital Direct sample mixing device dmix:CARD=NVidia,DEV=3 HDA NVidia, HDMI 0 Direct sample mixing device dsnoop:CARD=NVidia,DEV=0 HDA NVidia, ALC662 rev1 Analog Direct sample snooping device dsnoop:CARD=NVidia,DEV=1 HDA NVidia, ALC662 rev1 Digital Direct sample snooping device dsnoop:CARD=NVidia,DEV=3 HDA NVidia, HDMI 0 Direct sample snooping device hw:CARD=NVidia,DEV=0 HDA NVidia, ALC662 rev1 Analog Direct hardware device without any conversions hw:CARD=NVidia,DEV=1 HDA NVidia, ALC662 rev1 Digital Direct hardware device without any conversions hw:CARD=NVidia,DEV=3 HDA NVidia, HDMI 0 Direct hardware device without any conversions plughw:CARD=NVidia,DEV=0 HDA NVidia, ALC662 rev1 Analog Hardware device with all software conversions plughw:CARD=NVidia,DEV=1 HDA NVidia, ALC662 rev1 Digital Hardware device with all software conversions plughw:CARD=NVidia,DEV=3 HDA NVidia, HDMI 0 Hardware device with all software conversions default:CARD=Communicator Default Audio Device sysdefault:CARD=Communicator Default Audio Device front:CARD=Communicator,DEV=0 Polycom Communicator, USB Audio Front speakers surround40:CARD=Communicator,DEV=0 Polycom Communicator, USB Audio 4.0 Surround output to Front and Rear speakers surround41:CARD=Communicator,DEV=0 Polycom Communicator, USB Audio 4.1 Surround output to Front, Rear and Subwoofer speakers surround50:CARD=Communicator,DEV=0 Polycom Communicator, USB Audio 5.0 Surround output to Front, Center and Rear speakers surround51:CARD=Communicator,DEV=0 Polycom Communicator, USB Audio 5.1 Surround output to Front, Center, Rear and Subwoofer speakers surround71:CARD=Communicator,DEV=0 Polycom Communicator, USB Audio 7.1 Surround output to Front, Center, Side, Rear and Woofer speakers iec958:CARD=Communicator,DEV=0 Polycom Communicator, USB Audio IEC958 (S/PDIF) Digital Audio Output dmix:CARD=Communicator,DEV=0 Polycom 
Communicator, USB Audio Direct sample mixing device dsnoop:CARD=Communicator,DEV=0 Polycom Communicator, USB Audio Direct sample snooping device hw:CARD=Communicator,DEV=0 Polycom Communicator, USB Audio Direct hardware device without any conversions plughw:CARD=Communicator,DEV=0 Polycom Communicator, USB Audio Hardware device with all software conversions etc/asound.conf: defaults.ctl.card 1 defaults.pcm.card 1 defaults.pcm.device 1 Following gets same result with both devices. aplay -vv -D front:CARD=NVidia,DEV=0 "Release the Pressure.wav": Playing WAVE 'Release the Pressure.wav' : Signed 16 bit Little Endian, Rate 44100 Hz, Mono aplay: set_params:1087: Channels count non available Guayadeque mp3 playback: AL lib: alsa_open_playback: Could not open playback device 'default': No such file or directory 21:32:14: Error: Gstreamer error 'Configured audiosink playbackbin is not working.' Audacious: ALSA error: snd_mixer_attach failed: No such file or directory. ALSA error: snd_pcm_open failed: No such device. So How do I fix my audio? UPDATE: I removed the usb soundcard and got rid of all alsa config. Everything is working as before the install but it sure feels fragile.
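    One hedged starting point, assuming the Polycom USB card enumerates as card 1 and exposes its PCM on device 0 (verify both with `aplay -l`): the /etc/asound.conf shown above points the default PCM at device 1, which most USB speakerphone cards do not have, so a sketch like the following may restore default playback.

        # /etc/asound.conf -- minimal sketch; the card/device indices are assumptions,
        # confirm them with `aplay -l` before applying
        defaults.ctl.card 1
        defaults.pcm.card 1
        defaults.pcm.device 0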

    Read the article

  • My Tech Ed North America Preview - Certification Edition

    - by Chris Gardner
    In my previous TechEd North America Preview, I addressed all the content I wanted to see at the show. This time, we shall turn our attention to the certifications I might try to pick up. If you have never been to TechEd North America before, one of the greatest things about the event is an on-site certification center. If you have a couple hours to spare, you can walk up to a test. The first test on my agenda is 70-523 [1]. I took this update test once, but did not do well on the MVC portion [2]. A few practice tests later, and I think I'm ready to fake that section. After that, I need to complete my road to being a master. The good folks here at work have been having a real love / hate relationship with the idea of me becoming an MCM in SQL Server [3]. Of course, before I do that, I need to finally take the SQL Administration tests. Thus, we shall add 70-432 [4] and 70-450 [5] to the list. Speaking of MCM, TechEd North America will have a special on test 88-970 [6]. This test is normally $500, and you have to find a place to take it [7]. However, there is a special 50% off rate for people who take it on location. With those kinds of prices, I may just take it as a form of study guide. As a final push, I may take some Windows Phone exams. I mentioned in my previous post that I may attend the 70-599 [8] Exam Cram session. Unfortunately, I will be staffing the Hands-On-Lab at that time. As we know, this has never stopped me from taking a test. This may lead to fits of 70-506 [9], but after we've come this far... That should complete my list. Do I really think I'll find time to take 6 tests at TechEd North America? Probably not. I have done it at TechEd North America before, but that was before I was TechEd North America staff. I also had a co-worker pass 9 in one year, but he basically did nothing but travel to Orlando in 2007 to take tests. And what's the point of attending a HUGE conference if you don't network? Of course, networking will have to wait for Friday's post...
    [1] Upgrade: Transition Your MCPD .NET Framework 3.5 Web Developer Skills to MCPD .NET Framework 4 Web Developer
    [2] Because I never have used, nor do I really think I ever will use, MVC...
    [3] By that, I mean they love the idea, and they hate the price.
    [4] Microsoft SQL Server 2008, Implementation and Maintenance
    [5] PRO: Designing, Optimizing and Maintaining a Database Administrative Solution Using Microsoft SQL Server 2008
    [6] SQL Server 2008 Microsoft Certified Master: Knowledge Exam
    [7] Which isn't nearly as expensive as the Lab Exam, nor as difficult to find a location. However, it is not offered at every testing facility.
    [8] PRO: Designing and Developing Windows Phone Applications
    [9] TS: Silverlight 4, Development

    Read the article

  • Live Event: OTN Architect Day: Cloud Computing - Two weeks and counting

    - by Bob Rhubart
    In just two weeks architects and others will gather at the Oracle Conference Center in Redwood Shores, CA for the first Oracle Technology Network Architect Day event of 2013. This event focuses on Cloud Computing and features sessions built around real-world examples of cloud computing implementation.
    When: Tuesday, July 9, 2013, 8:30am - 12:30pm
    Where: Oracle Conference Center, 350 Oracle Pkwy, Redwood City, CA 94065
    Register now. It's free! Here's the agenda:
    8:30am - 9:00am | Registration and Continental Breakfast
    9:00am - 9:45am | Keynote: 21st Century IT | Dr. James Baty, VP, Global Enterprise Architecture Program, Oracle
    Imagine a time long, long ago. A time when servers were certified and dedicated to specific applications, when anything posted on an enterprise web site was from restricted, approved channels, and when we tried to limit the growth of 'dirty' data and storage. Today, applications are services running in the multi-tenant hybrid cloud. Companies beg their customers to tweet them, friend them, and publicly rate their products. And constantly analyzing a deluge of Internet, social and sensor data is the key to creating the next super-successful product, or capturing an evil terrorist. The old IT architecture was planned, dedicated, stable, controlled, with separate and well-defined roles. The new architecture is shared, dynamic, continuous, XaaS, DevOps. This keynote session describes the challenges and opportunities that the new business / IT paradigms present to the IT architecture and architects.
    9:45am - 10:30am | Technical Session: Oracle Cloud: A Case Study in Building a Cloud | Anbu Krishnaswami, Enterprise Architect, Oracle
    Building a Cloud can be challenging thanks to the complex requirements unique to Cloud computing and the massive scale typically associated with Cloud. Cloud providers can take an Infrastructure as a Service (IaaS) approach and build a cloud on virtualized commodity hardware, or they can take the Platform as a Service (PaaS) path, a service-oriented approach based on pre-configured, integrated, engineered systems. This presentation uses the Oracle Cloud itself as a case study in the use of engineered systems, demonstrating how the technical design of engineered systems is leveraged for building PaaS and SaaS Cloud services and a Cloud management infrastructure. The presentation will also explore the principles, patterns, best practices, and architecture views provided in Oracle's Cloud reference architecture.
    10:30am - 10:45am | Break
    10:45am - 11:30am | Technical Session: Database as a Service | Michael Timpanaro-Perrotta, Director, Product Management, Oracle Database Cloud
    New applications are now commonly built in a Cloud model, where the database is consumed as a service, and many established business processes are beginning to migrate to database as a service (DBaaS). This adoption of DBaaS is made possible by the availability of new capabilities in the database that enable resource pooling, dynamic resource management, model-based provisioning, metered use, and effective quality-of-service controls. This session will examine the catalog of database services at a large commercial bank to understand how these capabilities are enabling DBaaS for a wide range of needs within the enterprise.
    11:30am - 12:00pm | Panel Q&A
    Dr. James Baty, Anbu Krishnaswami, and Michael Timpanaro-Perrotta respond to audience questions.
    Registration is free, but seating is limited, so register now.

    Read the article

  • Building ATLAS (and later Octave w/ ATLAS)

    - by David Parks
    I'm trying to set up ATLAS (so I can later compile octave with ATLAS support). If I'm correct, I still need to build this manually due to the environment specific optimizations. I do see a package for ATLAS, but it looks like it's using the cross platform generic build options (e.g. "it'll be slow"). So, running the configure script as described in the docs seems to go poorly. As a java developer I never do well at making heads or tails of errors in these build processes. Am I missing dependencies (if so is there any documentation on what I need)? allusers@vbubuntu:~/Downloads/atlas3.10.1/build_vbubuntu$ ../configure -b 64 -D c -DPentiumCPS=3000 --with-netlib-lapack-tarfile=/home/allusers/Downloads/lapack-3.5.0.tgz make: `xconfig' is up to date. ./xconfig -d s /home/allusers/Downloads/atlas3.10.1/build_vbubuntu/../ -d b /home/allusers/Downloads/atlas3.10.1/build_vbubuntu -b 64 -D c -DPentiumCPS=3000 -Si lapackref 1 OS configured as Linux (1) Assembly configured as GAS_x8664 (2) Vector ISA Extension configured as SSE3 (6,448) ERROR: enum fam=3, chip=2, mach=0 make[3]: *** [atlas_run] Error 44 make[2]: *** [IRunArchInfo_x86] Error 2 Architecture configured as Corei1 (25) ERROR: enum fam=3, chip=2, mach=0 make[3]: *** [atlas_run] Error 44 make[2]: *** [IRunArchInfo_x86] Error 2 Clock rate configured as 2350Mhz ERROR: enum fam=3, chip=2, mach=0 make[3]: *** [atlas_run] Error 44 make[2]: *** [IRunArchInfo_x86] Error 2 Maximum number of threads configured as 4 Parallel make command configured as '$(MAKE) -j 4' ERROR: enum fam=3, chip=2, mach=0 make[3]: *** [atlas_run] Error 44 make[2]: *** [IRunArchInfo_x86] Error 2 Cannot detect CPU throttling. rm -f config1.out make atlas_run atldir=/home/allusers/Downloads/atlas3.10.1/build_vbubuntu exe=xprobe_comp redir=config1.out \ args="-v 0 -o atlconf.txt -O 1 -A 25 -Si nof77 0 -V 448 -b 64 -d b /home/allusers/Downloads/atlas3.10.1/build_vbubuntu" make[1]: Entering directory `/home/allusers/Downloads/atlas3.10.1/build_vbubuntu' cd /home/allusers/Downloads/atlas3.10.1/build_vbubuntu ; ./xprobe_comp -v 0 -o atlconf.txt -O 1 -A 25 -Si nof77 0 -V 448 -b 64 -d b /home/allusers/Downloads/atlas3.10.1/build_vbubuntu > config1.out make[2]: gfortran: Command not found make[2]: *** [IRunF77Comp] Error 127 make[2]: g77: Command not found make[2]: *** [IRunF77Comp] Error 127 make[2]: f77: Command not found make[2]: *** [IRunF77Comp] Error 127 Unable to find usable compiler for F77; abortingMake sure compilers are in your path, and specify good compilers to configure (see INSTALL.txt or 'configure --help' for details)make[1]: *** [atlas_run] Error 8 make[1]: Leaving directory `/home/allusers/Downloads/atlas3.10.1/build_vbubuntu' make: *** [IRun_comp] Error 2 ERROR 512 IN SYSCMND: 'make IRun_comp args="-v 0 -o atlconf.txt -O 1 -A 25 -Si nof77 0 -V 448 -b 64"' mkdir src bin tune interfaces mkdir: cannot create directory ‘src’: File exists mkdir: cannot create directory ‘bin’: File exists mkdir: cannot create directory ‘tune’: File exists mkdir: cannot create directory ‘interfaces’: File exists make: *** [make_subdirs] Error 1 make -f Make.top startup make[1]: Entering directory `/home/allusers/Downloads/atlas3.10.1/build_vbubuntu' Make.top:1: Make.inc: No such file or directory Make.top:325: warning: overriding commands for target `/AtlasTest' Make.top:76: warning: ignoring old commands for target `/AtlasTest' make[1]: *** No rule to make target `Make.inc'. Stop. 
make[1]: Leaving directory `/home/allusers/Downloads/atlas3.10.1/build_vbubuntu' make: *** [startup] Error 2 mv: cannot move ‘lapack-3.5.0’ to ‘../reference/lapack-3.5.0’: Directory not empty mv: cannot stat ‘lib/Makefile’: No such file or directory ../configure: 450: ../configure: cannot create lib/Makefile: Directory nonexistent ../configure: 451: ../configure: cannot create lib/Makefile: Directory nonexistent ../configure: 452: ../configure: cannot create lib/Makefile: Directory nonexistent ../configure: 453: ../configure: cannot create lib/Makefile: Directory nonexistent ../configure: 509: ../configure: cannot create lib/Makefile: Directory nonexistent DONE configure
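    The configure log above shows the real blocker: the F77 probe fails with "gfortran: Command not found", so a Fortran compiler is a missing dependency. A possible sketch of the fix, assuming an Ubuntu system and the paths from the question (re-running configure in a clean build directory, since the failed run left stale files behind):

        # Sketch only: package names are Ubuntu's; paths are the ones from the question.
        sudo apt-get install gfortran build-essential
        cd ~/Downloads/atlas3.10.1
        rm -rf build_vbubuntu && mkdir build_vbubuntu && cd build_vbubuntu
        ../configure -b 64 -D c -DPentiumCPS=3000 \
            --with-netlib-lapack-tarfile=/home/allusers/Downloads/lapack-3.5.0.tgz
        make build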

    Read the article

  • 5.1 surround sound on Acer Aspire 5738ZG with Ubuntu 11.10

    - by kbargais_LV
    I got a problem with sound. I tried everything but no results. :( I got 3 sound ports. my daemon: # This file is part of PulseAudio. # # PulseAudio is free software; you can redistribute it and/or modify # it under the terms of the GNU Lesser General Public License as published by # the Free Software Foundation; either version 2 of the License, or # (at your option) any later version. # # PulseAudio is distributed in the hope that it will be useful, but # WITHOUT ANY WARRANTY; without even the implied warranty of # MERCHANTABILITY or FITNESS FOR A PARTICULAR PURPOSE. See the GNU # General Public License for more details. # # You should have received a copy of the GNU Lesser General Public License # along with PulseAudio; if not, write to the Free Software # Foundation, Inc., 59 Temple Place, Suite 330, Boston, MA 02111-1307 # USA. ## Configuration file for the PulseAudio daemon. See pulse-daemon.conf(5) for ## more information. Default values are commented out. Use either ; or # for ## commenting. ; daemonize = no ; fail = yes ; allow-module-loading = yes ; allow-exit = yes ; use-pid-file = yes ; system-instance = no ; local-server-type = user ; enable-shm = yes ; shm-size-bytes = 0 # setting this 0 will use the system-default, usually 64 MiB ; lock-memory = no ; cpu-limit = no ; high-priority = yes ; nice-level = -11 ; realtime-scheduling = yes ; realtime-priority = 5 ; exit-idle-time = 20 ; scache-idle-time = 20 ; dl-search-path = (depends on architecture) ; load-default-script-file = yes ; default-script-file = /etc/pulse/default.pa ; log-target = auto ; log-level = notice ; log-meta = no ; log-time = no ; log-backtrace = 0 resample-method = speex-float-1 ; enable-remixing = yes ; enable-lfe-remixing = no flat-volumes = no ; rlimit-fsize = -1 ; rlimit-data = -1 ; rlimit-stack = -1 ; rlimit-core = -1 ; rlimit-as = -1 ; rlimit-rss = -1 ; rlimit-nproc = -1 ; rlimit-nofile = 256 ; rlimit-memlock = -1 ; rlimit-locks = -1 ; rlimit-sigpending = -1 ; rlimit-msgqueue = -1 ; rlimit-nice = 31 ; rlimit-rtprio = 9 ; rlimit-rttime = 1000000 ; default-sample-format = s16le ; default-sample-rate = 44100 ; default-sample-channels = 6 ; default-channel-map = front-left,front-right default-fragments = 8 default-fragment-size-msec = 10 ; enable-deferred-volume = yes ; deferred-volume-safety-margin-usec = 8000 ; deferred-volume-extra-delay-usec = 0
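    For 5.1 output specifically, the daemon.conf above still has its channel settings commented out. A hedged sketch (assuming the three jacks on this laptop can be retasked to give six analog channels, which should be confirmed afterwards with `pactl list sinks` or pavucontrol) is to uncomment and set:

        ; /etc/pulse/daemon.conf -- sketch only; channel count and map are assumptions
        default-sample-channels = 6
        default-channel-map = front-left,front-right,rear-left,rear-right,front-center,lfe

    Then restart PulseAudio (for example with `pulseaudio -k`, letting it respawn) so the change takes effect.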

    Read the article

  • Oracle Tutor: XPDL conversion (and why you should care)

    - by mary.keane
    You may have noticed that the Oracle Business Process Converter feature in Tutor 14 supports "XPDL" conversion to Oracle Business Process Analysis Suite (BPA), Oracle Business Process Management Suite (BPM), and Oracle Tutor, and you may have briefly wondered "what is XPDL?" before you moved on to the Visio import feature (a very popular feature in Tutor 14). This posting is for those who do not yet understand (or care) about XPDL and process modeling. Many of us (and I'm including myself) have spent years working in the process definition arena: we've written procedures, designed systems and software to help others write procedures, and have been responsible for embedding policies and procedures into training material for employees. We've worked with tools such as Oracle Tutor, Microsoft Visio, Microsoft Word, and UPK. Most of us have never worked with "modeling tools" before, and we certainly never had to understand BPMN. It's a brave new world in this arena, and companies desperately need people with policy and procedural system expertise to be able to work with system analysts so there is a seamless transfer of knowledge from IT to employees. When working with applications, a picture is worth a thousand words, so eventually you're going to need to understand and be able to work with business process models. XPDL is an acronym for XML Process Definition Language, and it is an interchange format for business process models. It allows you to take a BPMN model that was developed in one workflow application such as BizAgi and import it into another workflow application or a true BPMN management system such as Oracle BPM. Specifically, the XPDL format contains the graphical information of a model as well as any executable information. By using a common format, models can be moved from a basic modeling application used by business owners to applications used by system architects. Over 80 applications support the XPDL format, including MetaStorm ProVision, BEA ALBPM, BizAgi, and Tibco. I mention these applications because we have provided XSLT mapping files specifically for these vendors. Oracle Business Process Converter was designed with user extensibility in mind, and thus users can add their own XML files so that additional XPDL models from other vendors can be converted to BPM, BPA, and Oracle Tutor. Instructions on how to add your own files can be found in Appendix 4 of the Oracle Business Converter manual. Let's take a visual look at how this works. Here is an example of a model devloped in BizAgi: This model can be created by the average business user without a large learning curve, and it's a good start for the system analyst who will be adding web services as well as for the business manager who manages the process described in the model. By exporting this model as XPDL, the information can be converted into Oracle BPA and Oracle BPM as well as converted to Oracle Tutor to become the framework for a procedure. Through this conversion feature, one graphic illustration of a business process can be used by a system analyst, business analyst, business manager, and employee, as seen below. Model Converted to Tutor Procedure Below is the task section of the procedure after conversion from an XPDL file. Model converted to BPA Model converted to BPM End users still want step by step instructions on how to perform their jobs, so procedures (Oracle Tutor) and application simulations (UPK) are still a critical piece of the solution. 
But IT professionals need graphic descriptions of how the applications work, regardless of whether there are any tasks involving humans. Now there is a way to convert procedures (Oracle Tutor docx files) and basic models (XPDL files) so that business managers and system analysts can share process information. References Wikipedia XPDL. Workflow Management Coalition, XPDL Support and Resources Oracle Business Process Converter manual, Oracle Tutor 14 Oracle Business Process Management 11g If you have any XPDL conversion stories to share, we'd love to hear from you. Best wishes for the coming new year, Mary Keane, Senior Development Manager, Oracle Tutor and BPM
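    For readers who have never opened an XPDL file, the sketch below shows the rough shape of one. It is heavily simplified and purely illustrative (the XPDL namespace declaration, coordinates, and vendor extensions that a real BizAgi export carries are omitted), but it hints at why the format can round-trip between tools: the flow elements and, in real files, the graphical information travel together.

        <!-- Illustrative sketch only: attributes, namespaces and graphical
             coordinates from a real export are omitted. -->
        <Package Id="QuoteToOrder" Name="Quote to Order">
          <WorkflowProcesses>
            <WorkflowProcess Id="HandleQuote" Name="Handle Quote">
              <Activities>
                <Activity Id="ReviewQuote" Name="Review Quote"/>
                <Activity Id="ApproveQuote" Name="Approve Quote"/>
              </Activities>
              <Transitions>
                <Transition Id="T1" From="ReviewQuote" To="ApproveQuote"/>
              </Transitions>
            </WorkflowProcess>
          </WorkflowProcesses>
        </Package>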

    Read the article

  • Why Ultra-Low Power Computing Will Change Everything

    - by Tori Wieldt
    The ARM TechCon keynote "Why Ultra-Low Power Computing Will Change Everything" was anything but low-powered. The speaker, Dr. Johnathan Koomey, knows his subject: he is a Consulting Professor at Stanford University, worked for more than two decades at Lawrence Berkeley National Laboratory, and has been a visiting professor at Stanford University, Yale University, and UC Berkeley's Energy and Resources Group. His current focus is creating a standard (computations per kilowatt hour) and measuring computer energy consumption over time. The trends are impressive: energy consumption has halved every 1.5 years for the last 60 years. Battery life has made roughly a 10x improvement each decade since 1960. It's these improvements that have made laptops and cell phones possible. What does the future hold? Dr. Koomey said that in the past, the race by chip manufacturers was to create the fastest computer, but the priorities have now changed. New computers are tiny, smart, connected and cheap. "You can't underestimate the importance of a shift in industry focus from raw performance to power efficiency for mobile devices," he said. There is also a confluence of trends in computing, communications, sensors, and controls. The challenge is how to reduce the power requirements for these tiny devices. Alternate sources of power that are being explored are light, heat, motion, and even blood sugar. The University of Michigan has produced a miniature sensor that harnesses solar energy and could last for years without needing to be replaced. Also, the University of Washington has created a sensor that scavenges power from existing radio and TV signals. Specific devices designed for a purpose are much more efficient than general purpose computers. With all these sensors, instead of big data, developers should focus on nano-data: personalized information that will adjust the lights in a room, a machine, a variable sign, etc. Dr. Koomey showed some examples: The Proteus Digital Health Feedback System, an ingestible sensor that transmits when a patient has taken their medicine and is powered by their stomach juices. (Gives "powered by you" a whole new meaning!) Streetline Parking Systems, which provide real-time data about available parking spaces. The information can be sent to your phone or update parking signs around the city to point to areas with available spaces. Less driving around looking for parking spaces! The BigBelly trash system, which uses solar power, compacts trash, and sends a text message when it is full. This dramatically reduces the number of times a truck has to come to pick up trash, freeing up resources and slashing fuel costs. This is a classic example of the efficiency of moving "bits not atoms." But researchers are approaching the physical limits of sensors, Dr. Koomey explained. With the current rate of technology improvement, they'll reach the three-atom transistor by 2041. Once they hit that wall, it will force a revolution in the way we do computing. But wait, researchers at Purdue University and the University of New South Wales are both working on reliable one-atom transistors! Other researchers are working on "approximate computing" that will reduce computing requirements drastically. So it's unclear where the wall actually is. In the meantime, as Dr. Koomey promised, ultra-low power computing will change everything.

    Read the article

  • WebCenter Innovation Award Winners

    - by Michael Snow
    Of course, here on our WebCenter blog – we’d like to highlight and brag about our great WebCenter winners. The 2012 WebCenter Innovation Award Winners University of Louisville Location: Louisville, KY, USA Industry: Higher Education Fusion Middleware Products: WebCenter Portal, WebCenter Content, JDeveloper, WebLogic, Oracle BI, Oracle IdM University of Louisville is a state supported research university Statewide Informatics Network to improve public health The University of Louisville has implemented WebCenter as part of the LOUI (Louisville Informatics Institute) Initiative, a Statewide Informatics Network, which will improve public healthcare and lower cost through the use of novel technology and next generation analytics, decision support and innovative outcomes-based payment systems. ---------- News Limited Country/Region: Australia Industry: News/Media FMW Products: WebCenter Sites Single platform running websites for 50% of Australia's newspapers News Corp is running half of Australia's newspaper websites on this shared platform powered by Oracle WebCenter Sites and have overtaken their nearest competitors and are now leading in terms of monthly page impressions. At peak they have over 250 editors on the system publishing in real-time.Sites include: www.newsspace.com.au, www.news.com.au, www.theaustralian.com.au and many others ------ Life Technologies Corp. Country/Region: Carlsbad, CA, USAIndustry: Life SciencesFMW Products: WebCenter Portal, SOA Suite Life Technologies Corp. is a global biotechnology tools company dedicated to improving the human condition with innovative life science products. They were awarded an innovation award for their solution utilizing WebCenter Portal for remotely monitoring & repairing biotech instruments. They deployed WebCenter as a portal that accesses Life Technologies cloud based service monitoring system where all customer deployed instruments can be remotely monitored and proactively repaired.  The portal provides alerts from these cloud based monitoring services directly to the customer and to Life Technologies Field Engineers.  The Portal provides insight into the instruments and services customers purchased for the purpose of analyzing and anticipating future customer needs and creating targeted sales and service programs. ----- China Mobile Jiangsu China Mobile Jiangsu is one of the biggest subsidiaries of China Mobile. It has over 25,000 employees and 40 million mobile subscribers. Country/Region: Jiangsu, China Industry: Telecommunications FMW Products: WebCenter Portal, WebCenter Content, JDeveloper, SOA Suite, IdM They were awarded an Innovation Award for their new employee platform powered by WebCenter Portal is designed to serve their 25,000+ employees and help them drive collaboration & productivity. JSMCC (Chian Mobile Jiangsu) Employee Enterprise Portal and Collaboration Platform. It is one of the China Mobile’s most important IT innovation projects. The new platform is designed to serve for JSMCC’s 25000+ employees and to help them improve the working efficiency, changing their traditional working mode to social ways, encouraging employees on business collaboration and innovation. The solution is built on top of Oracle WebCenter Portal Framework and WebCenter Spaces while also leveraging Weblogic Server, UCM, OID, OAM, SES, IRM and Oracle Database 11g. 
By providing rich collaboration services, knowledge management services, sensitive document protection services, unified user identity management services, unified information search services and personalized information integration capabilities, the working efficiency of JSMCC employees has been greatly improved. Main Functionality : Information portal, office automation integration, personal space, group space, team collaboration with web2.0 services, unified search engine for multiple data sources, document management and protection. SSO for multiple platforms. -------- LADWP – Los Angeles Department for Water and Power Los Angeles Department of Water and Power (LADWP) is the largest public utility company in United States with over 1.6 Million customers. LADWP provides water and power for millions of residential & commercial customers in Southern California. LADWP also bills most of these customers for sanitation services provided by another city department. Country/Region: US – Los Angeles, CA Industry: Public Utility FMW Products: WebCenter Portal, WebCenter Content, JDeveloper, SOA Suite, IdM The new infrastructure consists of: Oracle WebCenter Portal including mobile portal Oracle WebCenter Content for Content Management and Digital Asset Management (DAM) Oracle OAM (IDM, OVD, OAM) integrated with AD for enterprise identity management Oracle Siebel for CRM Oracle DB Oracle SOA Suite for integration of various subsystems and back end systems  The new portal's features include: Complete Graphical redesign based on best practices in UI Design for high usability Customer Self Service implemented through MyAccount (Bill Pay, Payment History, Bill History, Usage Analysis, Service Request Management) Financial Assistance Programs (CRM, WebCenter) Customer Rebate Programs (CRM, WebCenter) Turn On/Off/Transfer of services (Commercial & Residential) Outage Reporting eNotification (SMS, email) Multilingual (English & Spanish) – using WebCenter multi-language support Section 508 (ADA) Compliant Search – Using WebCenter SES (Secured Enterprise Search) Distributed Authorship in WebCenter Content Mobile Access (any Mobile Browser)

    Read the article

  • Browser Specific Extensions of HttpClient

    - by imran_ku07
    Introduction:
    REpresentational State Transfer (REST) has had a great impact on service/API development because it offers a way to access a service without requiring any specific library, by embracing HTTP and its features. ASP.NET Web API makes it very easy to quickly build RESTful HTTP services. These HTTP services can be consumed by a variety of clients including browsers, devices, machines, etc. With .NET Framework 4.5, we can use the HttpClient class to consume/send/receive RESTful HTTP services (for .NET Framework 4.0, the HttpClient class ships as part of ASP.NET Web API). The HttpClient class provides a bunch of helper methods (for example, DeleteAsync, PostAsync, GetStringAsync, etc.) to consume an HTTP service very easily. ASP.NET Web API added some more extension methods (for example, PutAsJsonAsync, PutAsXmlAsync, etc.) to the HttpClient class to further simplify usage. In addition, HttpClient is also an ideal choice for writing integration tests for a RESTful HTTP service. Since a browser is a main client of any RESTful API, it is also important to test the HTTP service on a variety of browsers. RESTful services embrace HTTP headers, and different browsers send different HTTP headers. So, I have created a package that adds overloads (with an additional Browser parameter) for almost all the helper methods of the HttpClient class. In this article, I will show you how to use this package.
    Description:
    Create/open your test project and install the ImranB.SystemNetHttp.HttpClientExtensions NuGet package. Then, add this using statement to your class:
    using ImranB.SystemNetHttp;
    Then, you can start using any HttpClient helper method which includes the additional Browser parameter. For example:
    var client = new HttpClient(myserver);
    var task = client.GetAsync("http://domain/myapi", Browser.Chrome);
    task.Wait();
    var response = task.Result;
    Here is the definition of Browser:
    public enum Browser { Firefox = 0, Chrome = 1, IE10 = 2, IE9 = 3, IE8 = 4, IE7 = 5, IE6 = 6, Safari = 7, Opera = 8, Maxthon = 9, }
    These extension methods make it very easy to write browser-specific integration tests. They also help an HTTP service consumer mimic the request-sending behavior of a browser. The package source is available on GitHub, so you can grab the source and add additional behavior on top of these extensions.
    Summary:
    Testing a REST API is an important aspect of service development and today, testing with a browser is crucial. In this article, I showed how to write integration tests that mimic the browser's request-sending behavior, along with an example. Hopefully you will enjoy this article too.
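    For readers curious how such overloads could be implemented, here is a minimal sketch. This is not the actual ImranB.SystemNetHttp source (which is on GitHub); the method body and the abbreviated user-agent strings are illustrative assumptions. The idea is simply to attach the User-Agent header the chosen browser would send before delegating to SendAsync, reusing the Browser enum shown above.

        // Minimal illustrative sketch -- not the package's actual implementation.
        // User-agent strings are abbreviated placeholders.
        using System.Net.Http;
        using System.Threading.Tasks;

        public static class HttpClientBrowserExtensions
        {
            public static Task<HttpResponseMessage> GetAsync(
                this HttpClient client, string requestUri, Browser browser)
            {
                var request = new HttpRequestMessage(HttpMethod.Get, requestUri);
                // Mimic the chosen browser by sending its User-Agent header.
                request.Headers.TryAddWithoutValidation("User-Agent", UserAgentFor(browser));
                return client.SendAsync(request);
            }

            private static string UserAgentFor(Browser browser)
            {
                switch (browser)
                {
                    case Browser.Chrome:  return "Mozilla/5.0 (Windows NT 6.1) AppleWebKit/537.36 Chrome/27.0 Safari/537.36";
                    case Browser.Firefox: return "Mozilla/5.0 (Windows NT 6.1; rv:21.0) Gecko/20100101 Firefox/21.0";
                    case Browser.IE9:     return "Mozilla/5.0 (compatible; MSIE 9.0; Windows NT 6.1; Trident/5.0)";
                    default:              return "Mozilla/5.0";
                }
            }
        }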

    Read the article

  • Unable to make properly work the Ralink rt3090 wifi card on my Lenovo B575 with Kubuntu 12.04 64bit

    - by Sebastien
    I look and tried many solution from many thread but I still unable to make this wifi card work properly (very slow, unable to connect to some wifi spot, etc.). I tried to compile the driver from the ralink website but it doesn't work. Tried to blacklist many mod, withou any result. So here are some command results, hope their help you help me: lspci sebastien@sebastien-portable:~$ lspci 00:00.0 Host bridge: Advanced Micro Devices [AMD] Family 14h Processor Root Complex 00:01.0 VGA compatible controller: Advanced Micro Devices [AMD] nee ATI Wrestler [Radeon HD 6310] 00:01.1 Audio device: Advanced Micro Devices [AMD] nee ATI Wrestler HDMI Audio [Radeon HD 6250/6310] 00:11.0 SATA controller: Advanced Micro Devices [AMD] nee ATI SB7x0/SB8x0/SB9x0 SATA Controller [AHCI mode] 00:12.0 USB controller: Advanced Micro Devices [AMD] nee ATI SB7x0/SB8x0/SB9x0 USB OHCI0 Controller 00:12.2 USB controller: Advanced Micro Devices [AMD] nee ATI SB7x0/SB8x0/SB9x0 USB EHCI Controller 00:13.0 USB controller: Advanced Micro Devices [AMD] nee ATI SB7x0/SB8x0/SB9x0 USB OHCI0 Controller 00:13.2 USB controller: Advanced Micro Devices [AMD] nee ATI SB7x0/SB8x0/SB9x0 USB EHCI Controller 00:14.0 SMBus: Advanced Micro Devices [AMD] nee ATI SBx00 SMBus Controller (rev 42) 00:14.2 Audio device: Advanced Micro Devices [AMD] nee ATI SBx00 Azalia (Intel HDA) (rev 40) 00:14.3 ISA bridge: Advanced Micro Devices [AMD] nee ATI SB7x0/SB8x0/SB9x0 LPC host controller (rev 40) 00:14.4 PCI bridge: Advanced Micro Devices [AMD] nee ATI SBx00 PCI to PCI Bridge (rev 40) 00:14.5 USB controller: Advanced Micro Devices [AMD] nee ATI SB7x0/SB8x0/SB9x0 USB OHCI2 Controller 00:15.0 PCI bridge: Advanced Micro Devices [AMD] nee ATI SB700/SB800/SB900 PCI to PCI bridge (PCIE port 0) 00:15.2 PCI bridge: Advanced Micro Devices [AMD] nee ATI SB900 PCI to PCI bridge (PCIE port 2) 00:18.0 Host bridge: Advanced Micro Devices [AMD] Family 12h/14h Processor Function 0 (rev 43) 00:18.1 Host bridge: Advanced Micro Devices [AMD] Family 12h/14h Processor Function 1 00:18.2 Host bridge: Advanced Micro Devices [AMD] Family 12h/14h Processor Function 2 00:18.3 Host bridge: Advanced Micro Devices [AMD] Family 12h/14h Processor Function 3 00:18.4 Host bridge: Advanced Micro Devices [AMD] Family 12h/14h Processor Function 4 00:18.5 Host bridge: Advanced Micro Devices [AMD] Family 12h/14h Processor Function 6 00:18.6 Host bridge: Advanced Micro Devices [AMD] Family 12h/14h Processor Function 5 00:18.7 Host bridge: Advanced Micro Devices [AMD] Family 12h/14h Processor Function 7 02:00.0 Ethernet controller: Realtek Semiconductor Co., Ltd. RTL8111/8168B PCI Express Gigabit Ethernet controller (rev 06) 03:00.0 Network controller: Ralink corp. 
RT3090 Wireless 802.11n 1T/1R PCIe lsmod sebastien@sebastien-portable:~$ lsmod Module Size Used by rt2800pci 18715 0 arc4 12529 2 rt2800lib 58925 1 rt2800pci crc_ccitt 12667 1 rt2800lib rt2x00pci 14577 1 rt2800pci rt2x00lib 55301 3 rt2800pci,rt2800lib,rt2x00pci mac80211 506816 3 rt2800lib,rt2x00pci,rt2x00lib cfg80211 205544 2 rt2x00lib,mac80211 eeprom_93cx6 12725 1 rt2800pci rt2860sta 864748 0 snd_hda_codec_conexant 62128 1 snd_hda_codec_hdmi 32474 1 uvcvideo 72627 0 rts5139 351143 0 snd_hda_intel 33773 4 videodev 98259 1 uvcvideo snd_hda_codec 127706 3 snd_hda_codec_conexant,snd_hda_codec_hdmi,snd_hda_intel snd_hwdep 13668 1 snd_hda_codec psmouse 87692 0 v4l2_compat_ioctl32 17128 1 videodev serio_raw 13211 0 k10temp 13166 0 snd_pcm 97188 3 snd_hda_codec_hdmi,snd_hda_intel,snd_hda_codec sp5100_tco 13791 0 i2c_piix4 13301 0 snd_seq_midi 13324 0 snd_rawmidi 30748 1 snd_seq_midi ideapad_laptop 18234 0 sparse_keymap 13890 1 ideapad_laptop rfcomm 47604 0 joydev 17693 0 snd_seq_midi_event 14899 1 snd_seq_midi bnep 18281 2 bluetooth 180104 10 rfcomm,bnep parport_pc 32866 0 ppdev 17113 0 snd_seq 61896 2 snd_seq_midi,snd_seq_midi_event snd_timer 29990 2 snd_pcm,snd_seq snd_seq_device 14540 3 snd_seq_midi,snd_rawmidi,snd_seq snd 78855 18 snd_hda_codec_conexant,snd_hda_codec_hdmi,snd_hda_intel,snd_hda_codec,snd_hwdep,snd_pcm,snd_rawmidi,snd_seq,snd_timer,snd_seq_device soundcore 15091 1 snd mac_hid 13253 0 snd_page_alloc 18529 2 snd_hda_intel,snd_pcm lp 17799 0 parport 46562 3 parport_pc,ppdev,lp usbhid 47199 0 hid 99559 1 usbhid r8169 62099 0 radeon 804372 4 video 19596 0 wmi 19256 0 ttm 76949 1 radeon drm_kms_helper 46978 1 radeon drm 242038 6 radeon,ttm,drm_kms_helper i2c_algo_bit 13423 1 radeon iwconfig sebastien@sebastien-portable:~$ iwconfig lo no wireless extensions. wlan0 IEEE 802.11bgn ESSID:"4CE6763F0E0A" Mode:Managed Frequency:2.452 GHz Access Point: 4C:E6:76:3F:0E:0A Bit Rate=54 Mb/s Tx-Power=20 dBm Retry long limit:7 RTS thr:off Fragment thr:off Power Management:off Link Quality=70/70 Signal level=-39 dBm Rx invalid nwid:0 Rx invalid crypt:0 Rx invalid frag:0 Tx excessive retries:0 Invalid misc:100 Missed beacon:0 eth0 no wireless extensions.
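    One thing that stands out in the lsmod output: both the in-kernel rt2800pci driver and the legacy rt2860sta staging driver are loaded for the same RT3090 chip, and having the two compete is a common cause of slow, flaky wifi. A hedged first step (sketch only, assuming the in-kernel rt2800pci driver is the one to keep) is to blacklist the staging module and reboot:

        # /etc/modprobe.d/blacklist-rt2860sta.conf -- sketch; keeps rt2800pci only
        blacklist rt2860sta

        # then rebuild the initramfs and reboot
        sudo update-initramfs -u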

    Read the article

  • Identify Codecs & Technical Information About Video Files

    - by DigitalGeekery
    Have you ever wanted to play an audio or video file but didn’t have the proper codec installed? Today we’ll show how to determine codecs, along with a host of other technical details about your media files with MediaInfo. Installation Download and install MediaInfo. You can find the download link at the bottom of the page. Note: When installing MediaInfo there is a recommended software bundle which you can opt out of by selecting Do not install option. Each recommended software choice may be different, like in this example it offers Spyware Terminator. The cool thing though is they use Open Candy which opts you out of the install. Just double check to make sure you’re not installing extra crapware. Using MediaInfo The first time you run MediaInfo it will display the Preferences window. There are various option such as language, output format, and whether or not you want MediaInfo to check for new versions. Click OK. Select a file or folder to analyze by clicking on the File or Folder icons on the left of the application window or by selecting File > Open from the menu. You can also drag and drop a file directly onto the application. MediaInfo will display details of your media file. In Basic view, you’ll see basic information. Notice in the example below the video and audio codecs, along with file size, running time of the media file, and even the application used to create the video file (Writing application).    You can switch to some of the other views by selecting View from the Menu and choosing form the dropdown list.   Sheet View will present the information a bit more clearly. You can see in the example below that the video and audio codec are listing in clearly identified columns. (AVC is often more commonly referred to H.264.)   Tree View is perhaps the most detailed. You can see from the example below the codec used for this AVI file is XviD.   Scrolling down even further you’ll see additional information like video and audio bit rates, frame rate, aspect ratio, and more.   In Basic View (and also in Sheet view) you can click to find a player for your file. In this instance with an MP4 file, it took me to the download page for Quicktime. This is by no means the only media player for this file, but if you are stuck for how to play a media file, this will forward you to a solution that works. You can do the same thing with Video codec. Click Go to the web site of this video codec to find a download.   MediaInfo is a simple but powerful tool that can be used to discover the details of a media file, or just to find a compatible codec. It works with most any video file type and is available for Windows, Mac, and Linux. Some Mac and Linux versions, however, are currently command line only. 
    Download MediaInfo
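    As noted above, some Mac and Linux builds are command line only; on those platforms the equivalent workflow is a single command (sketch below, where "myvideo.avi" is just a placeholder file name and the mediainfo package is assumed to be installed).

        # Prints the container, codec, bit rate and duration details shown in the GUI views
        mediainfo myvideo.avi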

    Read the article

  • Tuning Default WorkManager - Advantages and Disadvantages

    - by Murali Veligeti
    Before discussing tuning the Default WorkManager, let's have a brief introduction to what the Default WorkManager is. Before the WebLogic Server 9.0 release, we had the concept of Execute Queues. In WebLogic Server (before WLS 9.0), processing was performed in multiple execute queues. Different classes of work were executed in different queues, based on priority and ordering requirements, and to avoid deadlocks. In addition to the default execute queue, weblogic.kernel.default, there were pre-configured queues dedicated to internal administrative traffic, such as weblogic.admin.HTTP and weblogic.admin.RMI. Users could control thread usage by altering the number of threads in the default queue, or configure custom execute queues to ensure that particular applications had access to a fixed number of execute threads, regardless of overall system load. From the WLS 9.0 release onwards, WebLogic Server uses a single thread pool (called the Default WorkManager), in which all types of work are executed. WebLogic Server prioritizes work based on rules you define and on run-time metrics, including the actual time it takes to execute a request and the rate at which requests are entering and leaving the pool. The common thread pool changes its size automatically to maximize throughput. The queue monitors throughput over time and, based on history, determines whether to adjust the thread count. For example, if historical throughput statistics indicate that a higher thread count increased throughput, WebLogic increases the thread count. Similarly, if statistics indicate that fewer threads did not reduce throughput, WebLogic decreases the thread count. This new strategy makes it easier for administrators to allocate processing resources and manage performance, avoiding the effort and complexity involved in configuring, monitoring, and tuning custom execute queues. The Default WorkManager is used to handle thread management and perform self-tuning. This Work Manager is used by an application when no other Work Managers are specified in the application's deployment descriptors. In many situations, the default Work Manager may be sufficient for most application requirements. WebLogic Server's thread-handling algorithms assign each application its own fair share by default. Applications are given equal priority for threads and are prevented from monopolizing them.
    The default work-manager, as its name tells, is the work-manager defined by default, so all applications deployed on WLS will use it. Sometimes, when your application is already in production, you obviously can't take your EAR / WAR, update the deployment descriptor(s) and redeploy it. The default work-manager belongs to a thread pool, and as the initial thread pool comes with only five threads, that's not much. If your application has to face a large number of hits, you may want to start with more than that. Well, that's quite easy. You have two options to do so.
    1) Modify config.xml. Just add the following lines to your server definition:
    <server>
      <name>AdminServer</name>
      <self-tuning-thread-pool-size-min>100</self-tuning-thread-pool-size-min>
      <self-tuning-thread-pool-size-max>200</self-tuning-thread-pool-size-max>
      [...]
    </server>
    2) Add some JVM parameters. Add the following system properties in setDomainEnv.sh/setDomainEnv.cmd or startWebLogic.sh/startWebLogic.cmd:
    -Dweblogic.threadpool.MinPoolSize=100
    -Dweblogic.threadpool.MaxPoolSize=100
    Reboot WLS and see that the option has been taken into account.
    Disadvantage: So far, so good, but there is a disadvantage to tuning the Default WorkManager. Internally, WebLogic Server has many work managers configured for different types of work. If we run out of threads in the self-tuning pool (because of the system property -Dweblogic.threadpool.MaxPoolSize) due to it being undersized, then important work that WLS might need to do could be starved. So, limiting the self-tuning pool not only limits the default WorkManager; internally it also limits all the other internal WorkManagers that WLS uses. The best alternative is therefore to override the default WorkManager, which means creating a WorkManager for the application and assigning that WorkManager to the application instead of tuning the Default WorkManager.
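    As a hedged sketch of that alternative (element names follow the standard WebLogic deployment descriptors; the work manager name and thread count here are illustrative, not taken from the article), a web application can declare its own Work Manager and dispatch policy in WEB-INF/weblogic.xml:

        <!-- WEB-INF/weblogic.xml -- illustrative sketch; names and counts are examples -->
        <weblogic-web-app xmlns="http://xmlns.oracle.com/weblogic/weblogic-web-app">
          <work-manager>
            <name>MyAppWorkManager</name>
            <max-threads-constraint>
              <name>MyAppMaxThreads</name>
              <count>100</count>
            </max-threads-constraint>
          </work-manager>
          <wl-dispatch-policy>MyAppWorkManager</wl-dispatch-policy>
        </weblogic-web-app>

    This way the application gets its own thread constraint while the self-tuning pool, and the internal work managers that share it, are left alone.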

    Read the article

  • CacheAdapter 2.4 – Bug fixes and minor functional update

    - by Glav
    Note: If you are unfamiliar with the CacheAdapter library and what it does, you can read all about its awesome ability to utilise memory, Asp.Net Web, Windows Azure AppFabric and memcached caching implementations via a single unified, simple to use API from here and here. The CacheAdapter library is receiving an update to version 2.4 and is currently available on Nuget here.
    Update: The CacheAdapter has actually just had a minor revision to 2.4.1. This significantly increases the performance and reliability of the memcached scenario under more extreme loads. General to moderate usage won't see any noticeable difference though.
    Bugs
    This latest version fixes a bug that is only present in the memcached implementation and only appears rarely and intermittently (making it particularly hard to find). The bug is where a cache node would be removed from the farm when errors in deserialization of cached objects occurred due to serialised data not being read from the stream in its entirety. The code also contains enhancements to better surface serialization exceptions to aid in the debugging process. This is also specifically targeted at the memcached implementation. This is important when moving from something like the memory or Asp.Net Web caching mechanisms to memcached, where the serialization rules are not as lenient. There are a few other minor bug fixes, code cleanup and a little refactoring.
    Minor feature addition
    In addition to this bug fix, many people have asked for a single setting to either enable or disable the cache. In this version, you can disable the cache by setting the IsCacheEnabled flag to false in the application configuration file. Something like the example below:
    <Glav.CacheAdapter.MainConfig>
      <setting name="CacheToUse" serializeAs="String">
        <value>memcached</value>
      </setting>
      <setting name="DistributedCacheServers" serializeAs="String">
        <value>localhost:11211</value>
      </setting>
      <setting name="IsCacheEnabled" serializeAs="String">
        <value>False</value>
      </setting>
    </Glav.CacheAdapter.MainConfig>
    Your reasons to use this feature may vary (perhaps some performance testing or problem diagnosis). At any rate, disabling the cache will cause every attempt to retrieve data from the cache to result in a cache miss, returning null. If you are using the ICacheProvider with the delegate/Func<T> syntax to populate the cache, this delegate method will get executed every single time. For example, when the cache is disabled, the following delegate/Func<T> code will be executed every time:
    var data1 = cacheProvider.Get<SomeData>("cache-key", DateTime.Now.AddHours(1), () =>
    {
        // With the cache disabled, this data access code is executed every attempt to
        // get this data via the CacheProvider.
        var someData = new SomeData() { SomeText = "cache example1", SomeNumber = 1 };
        return someData;
    });
    One final note: If you access the cache directly via the ICache instance, instead of the higher level ICacheProvider API, you bypass this setting and still access the underlying cache implementation. Only the ICacheProvider instance observes the IsCacheEnabled setting.
    Thanks to those individuals who have used this library and provided feedback. If you have any suggestions or ideas, please submit them to the issue register on bitbucket (which is where you can grab all the source code from too).

    Read the article

  • Guest blog: A Closer Look at Oracle Price Analytics by Will Hutchinson

    - by Takin Babaei
    Overview:  Price Analytics helps companies understand how much of each sale goes into discounts, special terms, and allowances. This visibility lets sales management see the panoply of discounts and start seeing whether each discount drives desired behavior. In Price Analytics monitors parts of the quote-to-order process, tracking quotes, including the whole price waterfall and seeing which result in orders. The “price waterfall” shows all discounts between list price and “pocket price”. Pocket price is the final price the vendor puts in its pocket after all discounts are taken. The value proposition: Based on benchmarks from leading consultancies and companies I have talked to, where they have studied the effects of discounting and started enforcing what many of them call “discount discipline”, they find they can increase the pocket price by 0.8-3%. Yes, in today’s zero or negative inflation environment, one can, through better monitoring of discounts, collect what amounts to a price rise of a few percent. We are not talking about selling more product, merely about collecting a higher pocket price without decreasing quantities sold. Higher prices fall straight to the bottom line. The best reference I have ever found for understanding this phenomenon comes from an article from the September-October 1992 issue of Harvard Business Review called “Managing Price, Gaining Profit” by Michael Marn and Robert Rosiello of McKinsey & Co. They describe the outsized impact price management has on bottom line performance compared to selling more product or cutting variable or fixed costs. Price Analytics manages what Marn and Rosiello call “transaction pricing”, namely the prices of a given transaction, as opposed to what is on the price list or pricing according to the value received. They make the point that if the vendor does not manage the price waterfall, customers will, to the vendor’s detriment. It also discusses its findings that in companies it studied, there was no correlation between discount levels and any indication of customer value. I urge you to read this article. What Price Analytics does: Price analytics looks at quotes the company issues and tracks them until either the quote is accepted or rejected or it expires. There are prebuilt adapters for EBS and Siebel as well as a universal adapter. The target audience includes pricing analysts, product managers, sales managers, and VP’s of sales, marketing, finance, and sales operations. It tracks how effective discounts have been, the win rate on quotes, how well pricing policies have been followed, customer and product profitability, and customer performance against commitments. It has the concept of price waterfall, the deal lifecycle, and price segmentation built into the product. These help product and sales managers understand their pricing and its effectiveness on driving revenue and profit. They also help understand how terms are adhered to during negotiations. They also help people understand what segments exist and how well they are adhered to. To help your company increase its profits and revenues, I urge you to look at this product. If you have questions, please contact me. Will HutchinsonMaster Principal Sales Consultant – Analytics, Oracle Corp. Will Hutchinson has worked in the business intelligence and data warehousing for over 25 years. He started building data warehouses in 1986 at Metaphor, advancing to running Metaphor UK’s sales consulting area. He also worked in A.T. 
Kearney’s business intelligence practice for over four years, running projects and providing training to new consultants in the IT practice. He also worked at Informatica and then Siebel, before coming to Oracle with the Siebel acquisition. He became Master Principal Sales Consultant in 2009. He has worked on developing ROI and TCO models for business intelligence for over ten years. Mr. Hutchinson has a BS degree in Chemical Engineering from Princeton University and an MBA in Finance from the University of Chicago.
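    To make the waterfall arithmetic concrete, here is a minimal Python sketch of the idea. It is purely illustrative: the discount names and figures are invented for the example and are not taken from Price Analytics or from the article.

        # Illustrative price waterfall: list price minus stacked discounts = pocket price.
        from dataclasses import dataclass, field

        @dataclass
        class Quote:
            list_price: float
            # discount name -> amount per unit (hypothetical figures)
            discounts: dict = field(default_factory=dict)

            def pocket_price(self) -> float:
                return self.list_price - sum(self.discounts.values())

        q = Quote(100.0, {"dealer discount": 8.0, "volume rebate": 4.0,
                          "promotional allowance": 2.5, "freight allowance": 1.5})
        print(q.pocket_price())         # 84.0 reaches the vendor's pocket
        print(q.pocket_price() * 1.01)  # a 1% gain from discount discipline, straight to the bottom line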

    Read the article

  • Day 5 - Tada! My Game Menu Screen Graphics

    - by dapostolov
    So, tonight I took some time to mash up some graphics for my game menu screen. My artistic talent sucks...but here goes nothing...voila, my menu screen!!
    The Menu Screen
    The screen above is displaying 4 sprites, even though it looks like maybe 7... I guess one of the first things for me to test in the future is ... is it more memory efficient (and better frame rate) to draw one big background image OR to paint the screen black and place each sprite in set locations? To display the 4 sprites above, I borrowed my code from yesterday ... I know, tacky, but...I wanted to see it, feel it. Do you feel it? FEEL IT! (homer voice & shakes fist) Note: the menu items won't scale properly as it stands with this code; well, pretty much they do nothing except look pretty...
    Paint.Net & Google Fun
    So how did I create that image above? Well, to create the background and 3 menu items I used Paint.Net. Basically, I scoured Google images for: a stone doorway, a stone pillar, an old book, a wizard's hat, and...that's pretty much it! I'll let you type in those searches and see if you can locate the images I used. I know, bad developer...but I figured since I modified the images considerably it doesn't count...well, for a personal project it shouldn't count...*shrug* Anyhow, I extracted each key asset I wanted from each image and applied lots of matting, blurring, color changes, glow effects and such. Then, using my vivid imagination, I placed / composed each of the layered assets into the mashed-up "scene" above. Pretty cool, eh? Hey, did you know the cool mist effect is actually a fire rendition in Paint.Net? I set it to black & white with opacity set next to nothing. I'm also very proud of the yellow "light" in the stone doorway. I drew that in and then applied Gaussian blur to it to give it the effect of light creeping out around the door and into the room...heheh. So did I achieve the dark, mysterious ritual as I stated in my design doc? I think I had a great stab at it! Maybe down the road I can get a real artist to crank out some quality graphics for the game... =)
    So, What's Next?
    Well, I don't have that animated brazier yet...however, I thought it would be even cooler if I can get that door pulsing that yellow light, and it would be extremely cool to have the smoke / mist moving across the screen! Make the creative ideas stop!! (clutches head) haha! I'm having great fun working on this project =) I recommend others giving something like this a try, it's really fulfilling. OK. Tomorrow... I think I'm going to start creating some game / menu objects as per the design doc, maybe even get a custom mouse cursor up on the screen and handle a couple of mouse events, and lastly, maybe a feature to toggle a framerate display... D.
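    Since the framerate toggle is on tomorrow's list, here is a minimal sketch of the idea. The project in the post is XNA/C#, so treat this as a language-neutral illustration written in Python with pygame (a stand-in framework assumed for the example, not the author's code): flip a boolean on a key press and only draw the FPS text when it is set.

        import pygame

        pygame.init()
        screen = pygame.display.set_mode((800, 600))
        clock = pygame.time.Clock()
        font = pygame.font.Font(None, 24)
        show_fps = False

        running = True
        while running:
            for event in pygame.event.get():
                if event.type == pygame.QUIT:
                    running = False
                elif event.type == pygame.KEYDOWN and event.key == pygame.K_F3:
                    show_fps = not show_fps          # toggle the framerate display

            screen.fill((0, 0, 0))                   # menu sprites would be drawn here
            if show_fps:
                label = font.render(f"{clock.get_fps():.0f} fps", True, (255, 255, 0))
                screen.blit(label, (10, 10))

            pygame.display.flip()
            clock.tick(60)                            # cap the loop at 60 frames per second

        pygame.quit()

    The same pattern carries over to XNA: keep a boolean, toggle it in the input-handling step, and draw the FPS string in Draw() only when the flag is on.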

    Read the article

  • Insurers Pushed to Transform Their Business

    - by Calvin Glenn
    Everyone in the P&C industry has heard it “We can’t do it.” “Nobody wants to do it.” “We can’t afford to do it.”  Unfortunately, what they’re referencing are the reasons many insurers are still trying to maintain their business processing on legacy policy administration systems, attempting to bide time until there is no other recourse but to give in, bite the bullet, and take on the monumental task of replacing an entire policy administration system (PAS). Just the thought of that project sends IT, Business Users and Management reeling. However, is that fear real?  It is a bit daunting when one realizes that a complete policy administration system replacement will touch most every function an insurer manages, from quoting and rating, to underwriting, distribution, and even customer service. With that, everyone has heard at least one horror story around a transformation initiative that has far exceeded budget and the promised implementation / go-live timeline.    But, does it have to be that hard?  Surely, in the age where a person can voice-activate their DVR to record a TV program from a cell phone, there has to be someone somewhere who’s figured out how to simplify this process. To be able to help insurers, of all sizes, transform and grow their business while also delivering on their overall objectives of providing speed to market, straight-through-processing for applications, quoting, underwriting, and simplified product development. Maybe we’re looking too hard and the answer is simple and straight-forward. Why replace the entire machine when all it really needs is a new part…a single enterprise rating system? This core, modular piece of the policy administration system is the foundation of product development and rate management that enables insurers to provide the right product at the right price to the right customer through the best channels at any given moment in time. The real benefit of a single enterprise rating system is the ability to deliver enhanced business capabilities, such as improved product management, streamlined underwriting, and speed to market. With these benefits, carriers have accomplished a portion of their overall transformation goal. Furthermore, lessons learned from the rating project can be applied to the bigger, down-the-road PAS project to support the successful completion of the overall transformation endeavor. At the recent Oracle OpenWorld Conference in San Francisco, information was shared with attendees about a recent “go-live” project from an Oracle Insurance Tier 1 insurer who did what is proposed above…replaced just the rating portion of their legacy policy administration system with Oracle Insurance Insbridge Rating and Underwriting.  This change provided the insurer greater flexibility to set rates that better reflect risk while enabling the company to support its market segment strategy. Using the Oracle Insurance Insbridge enterprise rating solution, the insurer was able to reduce processing time for agents and underwriters, gained the ability to support proprietary rating models and improved pricing accuracy.      There is mounting pressure on P&C insurers to produce growth and show net profitability in the midst of modest overall industry growth, large weather-related losses and intensifying competition for market share.  Insurers are also being asked to improve customer service, offer a differentiated value proposition and simplify insurance processes.  
    While the demands are many, there is an easy answer: invest in and update the most mission-critical application in your arsenal, the single enterprise rating system. Download the podcast to listen to “Stand-Alone Rating Engine - Leading Force Behind Core Transformation Projects in the P&C Market,” a podcast originally recorded in October 2013.
    Related Resources:
    White Paper: Stand-Alone Rating Engine: Leading Force Behind Core Transformation Projects in the P&C Market
    Webcast On Demand: Stand-Alone Rating Engine and Core Transformation for P&C Insurers
    Don’t forget to keep up with us year-round:
    Facebook: www.facebook.com/oracleinsurance
    Twitter: www.twitter.com/oracleinsurance
    YouTube: www.youtube.com/oracleinsurance

    Read the article

  • Sunshine after the iCloud release?

    - by Laila
    "Why should I believe them? They're the ones that brought us MobileMe? It was not our finest hour, but we learned a lot." Steve Jobs June 6th 2011 Apple's new cloud service has been met with uncritical excitement by industry commentators.  It is wonderful what a rename can do.  Apple has had a 'cloud' offering for three years called MobileMe, successor to .MAC and  iTools, so iCloud is now the fourth internet service Apple have attempted. If this had been Microsoft, there would have been catcalls all around the blogosphere.  I'll admit that there is a lot more functionality announced for iCloud than MobileMe has ever managed to achieve, but then almost anything has more functionality than MobileMe.  It's an expensive service (£120 a year in the UK, $90 in the states), launched as far back as  June 9, 2008, that has delivered very little and suffered a string of technical problems; the documentation was mainly  a community effort, built up gradually by the frustrated and angry users. It was supposed to synchronise PC Outlook calendars but couldn't manage Microsoft Exchange (Google could, of course). It used WebDAV to allow Windows users to attach to the filestore, but didn't document how to do it. The method for downloading and uploading files to the cloud-based filestore was ridiculously clunky. It allowed you to post photos on a public site, but forgot to include a way of deleting photos. I could go on with the list, but you can explore the many sites that have flourished to inhabit the support-vacuum left by Apple. MobileMe should have had all the bright new clever things announced for iCloud. Apple dropped the ball, and allowed services such as Flickr to fill the void. However, their PR skills are such that, a name-change later (the .ME.com email address remains), it has turned a rout into a victory, and hundreds of earnest bloggers have been extolling Apple's expertise in cloud matters. This must be frustrating for the other cloud providers who have quietly got the technology working right. I wish iCloud well, even though I resent the expensive mess they made of MobileMe. Apple promise that iCloud will sync files, apps, app data, and media across all the different iOS5 devices, Macs, and PCs. It also hopes to sync music across devices, but not video content. They've offered existing MobileMe users free use of the MobileMe service for a year as the product is morphed, and they will be able to transfer to iCloud when it is launched in the autumn.  On June 30, 2012, MobileMe will die, and Apple's iWeb is also soon to join iTools and .MAC in the hereafter. So why get excited about iCloud? That all depends on the level of PC integration. Whereas iOS5 machines will be full participants in the new world of data-sharing (Sorry iPod Touch users) what about .NET libraries? There is talk of synchronising 'My Pictures' libraries with iOS5 and iMac machines, but little more detail as yet. Apple has a lot to prove with iCloud and anyone with actual experience of their past attempts to get into cloud services will be wary.

    Read the article

  • Schizophrenic Ubuntu 12.10-12.04: Atheros 922 PCI WIFI is disabled in Unity but enabled in terminal - How to get it to work?

    - by zewone
    I am trying to get my PCI Wireless Atheros 922 card to work. It is disabled in Unity: both the network utility and the desktop (see screenshot http://www.amisdurailhalanzy.be/Screenshot%20from%202012-10-25%2013:19:54.png). I tried many different suggestions from many different forums. Installed 12.10 instead of 12.04, enabled all interfaces... etc. I have read about the ath9k driver... The terminal shows no hw or sw lock for the Atheros card; nevertheless, it is still disabled. Nothing has worked so far, the card is still disabled. Any help is much appreciated. Here are more tech details:
    myuser@adri1:~$ sudo lshw -C network *-network:0 DISABLED description: Wireless interface product: AR922X Wireless Network Adapter vendor: Atheros Communications Inc. physical id: 2 bus info: pci@0000:03:02.0 logical name: wlan1 version: 01 serial: 00:18:e7:cd:68:b1 width: 32 bits clock: 66MHz capabilities: pm bus_master cap_list ethernet physical wireless configuration: broadcast=yes driver=ath9k driverversion=3.5.0-17-generic firmware=N/A latency=168 link=no multicast=yes wireless=IEEE 802.11bgn resources: irq:18 memory:d8000000-d800ffff *-network:1 description: Ethernet interface product: VT6105/VT6106S [Rhine-III] vendor: VIA Technologies, Inc. physical id: 6 bus info: pci@0000:03:06.0 logical name: eth0 version: 8b serial: 00:11:09:a3:76:4a size: 10Mbit/s capacity: 100Mbit/s width: 32 bits clock: 33MHz capabilities: pm bus_master cap_list ethernet physical tp mii 10bt 10bt-fd 100bt 100bt-fd autonegotiation configuration: autonegotiation=on broadcast=yes driver=via-rhine driverversion=1.5.0 duplex=half latency=32 link=no maxlatency=8 mingnt=3 multicast=yes port=MII speed=10Mbit/s resources: irq:18 ioport:d300(size=256) memory:d8013000-d80130ff *-network DISABLED description: Wireless interface physical id: 1 bus info: usb@1:8.1 logical name: wlan0 serial: 00:11:09:51:75:36 capabilities: ethernet physical wireless configuration: broadcast=yes driver=rt2500usb driverversion=3.5.0-17-generic firmware=N/A link=no multicast=yes wireless=IEEE 802.11bg
    myuser@adri1:~$ sudo rfkill list all 0: hci0: Bluetooth Soft blocked: no Hard blocked: no 1: phy1: Wireless LAN Soft blocked: no Hard blocked: yes 2: phy0: Wireless LAN Soft blocked: no Hard blocked: no
    myuser@adri1:~$ dmesg | grep wlan0 [ 15.114235] IPv6: ADDRCONF(NETDEV_UP): wlan0: link is not ready
    myuser@adri1:~$ dmesg | egrep 'ath|firm' [ 14.617562] ath: EEPROM regdomain: 0x30 [ 14.617568] ath: EEPROM indicates we should expect a direct regpair map [ 14.617572] ath: Country alpha2 being used: AM [ 14.617575] ath: Regpair used: 0x30 [ 14.637778] ieee80211 phy0: >Selected rate control algorithm 'ath9k_rate_control' [ 14.639410] Registered led device: ath9k-phy0
    myuser@adri1:~$ dmesg | grep wlan1 [ 15.119922] IPv6: ADDRCONF(NETDEV_UP): wlan1: link is not ready
    myuser@adri1:~$ lspci -nn | grep 'Atheros' 03:02.0 Network controller [0280]: Atheros Communications Inc.
AR922X Wireless Network Adapter [168c:0029] (rev 01) myuser@adri1:~$ sudo ifconfig eth0 Link encap:Ethernet HWaddr 00:11:09:a3:76:4a inet addr:192.168.2.2 Bcast:192.168.2.255 Mask:255.255.255.0 inet6 addr: fe80::211:9ff:fea3:764a/64 Scope:Link UP BROADCAST RUNNING MULTICAST MTU:1500 Metric:1 RX packets:5457 errors:0 dropped:0 overruns:0 frame:0 TX packets:2548 errors:0 dropped:0 overruns:0 carrier:0 collisions:0 txqueuelen:1000 RX bytes:3425684 (3.4 MB) TX bytes:282192 (282.1 KB) lo Link encap:Local Loopback inet addr:127.0.0.1 Mask:255.0.0.0 inet6 addr: ::1/128 Scope:Host UP LOOPBACK RUNNING MTU:16436 Metric:1 RX packets:590 errors:0 dropped:0 overruns:0 frame:0 TX packets:590 errors:0 dropped:0 overruns:0 carrier:0 collisions:0 txqueuelen:0 RX bytes:53729 (53.7 KB) TX bytes:53729 (53.7 KB) myuser@adri1:~$ sudo iwconfig wlan0 IEEE 802.11bg ESSID:off/any Mode:Managed Access Point: Not-Associated Tx-Power=off Retry long limit:7 RTS thr:off Fragment thr:off Encryption key:off Power Management:on lo no wireless extensions. eth0 no wireless extensions. wlan1 IEEE 802.11bgn ESSID:off/any Mode:Managed Access Point: Not-Associated Tx-Power=0 dBm Retry long limit:7 RTS thr:off Fragment thr:off Encryption key:off Power Management:off myuser@adri1:~$ lsmod | grep "ath9k" ath9k 116549 0 mac80211 461161 3 rt2x00usb,rt2x00lib,ath9k ath9k_common 13783 1 ath9k ath9k_hw 376155 2 ath9k,ath9k_common ath 19187 3 ath9k,ath9k_common,ath9k_hw cfg80211 175375 4 rt2x00lib,ath9k,mac80211,ath myuser@adri1:~$ iwlist scan wlan0 Failed to read scan data : Network is down lo Interface doesn't support scanning. eth0 Interface doesn't support scanning. wlan1 Failed to read scan data : Network is down myuser@adri1:~$ lsb_release -d Description: Ubuntu 12.10 myuser@adri1:~$ uname -mr 3.5.0-17-generic i686 ![Schizophrenic Ubuntu](http://www.amisdurailhalanzy.be/Screenshot%20from%202012-10-25%2013:19:54.png) Any help much appreciated... Thanks, Philippe

    Read the article

  • The Evolution of Oracle Direct EMEA by John McGann

    - by user769227
    John is expanding his Dublin based team and is currently recruiting a Director with marketing and sales leadership experience: http://bit.ly/O8PyDF Should you wish to apply, please send your CV to [email protected] Hi, my name is John McGann and I am part of the Oracle Direct management team, based in Dublin.   Today I’m writing from the Oracle London City office, right in the heart of the financial district and up to very recently at the centre of a fantastic Olympic Games. The Olympics saw individuals and teams from across the globe competing to decide who is Citius, Altius, Fortius - “Faster, Higher, Stronger" There are lots of obvious parallels between the competitive world of the Olympics and the Business environments that many of us operate in, but there are also some interesting differences – especially in my area of responsibility within Oracle. We are of course constantly striving to be the best - the best solution on offer for our clients, bringing simplicity to their management, consumption and application of information technology, and the best provider when compared with our many niche competitors.   In Oracle and especially in Oracle Direct, a key aspect of how we achieve this is what sets us apart from the Olympians.  We have long ago eliminated geographic boundaries as a limitation to what we can achieve. We assemble the strongest individuals across multiple countries and bring them together in teams focussed on a single goal. One such team is the Oracle Direct Sales Programs team. In case you don’t know, Oracle Direct EMEA (Europe Middle East and Africa) is the inside sales division in Oracle and it is where I started my Oracle career.  I remember that my first role involved putting direct mail in envelopes.... things have moved on a bit since then – for me, for Oracle Direct and in how we interact with our customers. Today, the team of over 1000 people is located in the different Oracle Direct offices around Europe – the main ones are Malaga, Berlin, Prague and Dubai plus the headquarters in Dublin. We work in over 20 languages and are in constant contact with current and future Oracle customers, using the latest internet and telephone technologies to effectively communicate and collaborate with each other, our customers and prospects. One of my areas of responsibility within Oracle Direct is the Sales Programs team. This team of 25 people manages the planning and execution of demand generation, leading the process of finding new and incremental revenue within Oracle Direct. The Sales Programs Managers or ‘SPMs’ are embedded within each of the Oracle Direct sales teams, focussed on distinct geographies or product groups. The SPMs are virtual members of the regional sales management teams, and work closely with the sales and marketing teams to define and deliver demand generation activities. The customer contact elements of these activities are executed via the Oracle Direct Sales and Business Development/Lead Generation teams, to deliver the pipeline required to meet our revenue goals. Activities can range from pan-EMEA joint sales and marketing campaigns, to very localised niche campaigns. The campaigns might focus on particular segments of our existing customers, introducing elements of our evolving solution portfolio which customers may not be familiar with. The Sales Programs team also manages ‘Nurture’ activities to ensure that we develop potential business opportunities with contacts and organisations that do not have immediate requirements. 
Looking ahead, it is really important that we continue to evolve our ability to add value to our clients and reduce the physical limitations of our distance from them through the innovative application of technology. This enables us to enhance the customer buying experience and to enable the Inside Sales teams to manage ever more complex sales cycles from start to finish.  One of my expectations of my team is to actively drive innovation in how we leverage data to better understand our customers, and exploit emerging technologies to better communicate with them.   With the rate of innovation and acquisition within Oracle, we need to ensure that existing and potential customers are aware of all we have to offer that relates to their business goals.   We need to achieve this via a coherent communication and sales strategy to effectively target the right people using the most effective medium. This is another area where the Sales Programs team plays a key role.

    Read the article

  • Learn Many Languages

    - by Phil Factor
    Around twenty-five years ago, I was trying to solve the problem of recruiting suitable developers for a large business. I visited the local University (it was a Technical College then). My mission was to remind them that we were a large, local employer of technical people and to suggest that, as they were in the business of educating young people for a career in IT, we should work together. I anticipated a harmonious chat where we could suggest to them the idea of mentioning our name to some of their graduates. It didn’t go well. The academic staff displayed a degree of revulsion towards the whole topic of IT in the world of commerce that surprised me; tweed met charcoal-grey, trainers met black shoes. However, their antipathy to commerce was something we could have worked around, since few of their graduates were destined for a career as university lecturers. They asked me what sort of language skills we needed. I tried ducking the invidious task of naming computer languages, since I wanted recruits who were quick to adapt and learn, with a broad understanding of IT, including development methodologies, technologies, and data. However, they pressed the point and I ended up saying that we needed good working knowledge of C and BASIC, though FORTRAN and COBOL were, at the time, still useful. There was a ghastly silence. It was as if I’d recommended the beliefs and practices of the Bogomils of Bulgaria to a gathering of Cardinals. They stared at me severely, like owls, until the head of department broke the silence, informing me in clipped tones that they taught only Modula 2. Now, I wouldn’t blame you if at this point you hurriedly had to look up ‘Modula 2′ on Wikipedia. Based largely on Pascal, it was a specialist language for embedded systems, but I’ve never ever come across it in a commercial business application. Nevertheless, it was an excellent teaching language since it taught modules, scope control, multiprogramming and the advantages of encapsulating a set of related subprograms and data structures. As long as the course also taught how to transfer these skills to other, more useful languages, it was not necessarily a problem. I said as much, but they gleefully retorted that the biggest local employer, a defense contractor specializing in Radar and military technology, used nothing but Modula 2. “Why teach any other programming language when they will be using Modula 2 for all their working lives?” said a complacent lecturer. On hearing this, I made my excuses and left. There could be no meeting of minds. They were providing training in a specific computer language, not an education in IT. Twenty years later, I once more worked nearby and regularly passed the long-deserted ‘brownfield’ site of the erstwhile largest local employer; the end of the cold war had led to lean times for defense contractors. A digger was about to clear the rubble of the long demolished factory along with the accompanying growth of buddleia and thistles, in order to lay the infrastructure for ‘affordable housing’. Modula 2 was a distant memory. Either those employees had short working lives or they’d retrained in other languages. The University, by contrast, was thriving, but I wondered if their erstwhile graduates had ever cursed the narrow specialization of their training in IT, as they struggled with the unexpected variety of their subsequent careers.

    Read the article

  • World Record Siebel PSPP Benchmark on SPARC T4 Servers

    - by Brian
    Oracle's SPARC T4 servers set a new World Record for Oracle's Siebel Platform Sizing and Performance Program (PSPP) benchmark suite. The result used Oracle's Siebel Customer Relationship Management (CRM) Industry Applications Release 8.1.1.4 and Oracle Database 11g Release 2 running Oracle Solaris on three SPARC T4-2 and two SPARC T4-1 servers. The SPARC T4 servers, running the Siebel PSPP 8.1.1.4 workload which includes Siebel Call Center and Order Management, demonstrate the impressive throughput of the SPARC T4 processor by supporting 29,000 users. This is the first Siebel PSPP 8.1.1.4 benchmark to support 29,000 concurrent users, at a rate of 239,748 Business Transactions/hour. The benchmark demonstrates vertical and horizontal scalability of Siebel CRM Release 8.1.1.4 on SPARC T4 servers.
    Performance Landscape
    Systems: 1 x SPARC T4-1 (1 x SPARC T4, 2.85 GHz) – Web; 3 x SPARC T4-2 (2 x SPARC T4, 2.85 GHz) – App/Gateway; 1 x SPARC T4-1 (1 x SPARC T4, 2.85 GHz) – DB
    Throughput: 239,748 Txn/hr with 29,000 users
    Response Times (sec): Call Center 0.165, Order Management 0.925
    Call Center + Order Management breakdown – Transactions: 197,128 + 42,620; Users: 20,300 + 8,700
    Configuration Summary
    Web Server Configuration: 1 x SPARC T4-1 server (1 x SPARC T4 processor, 2.85 GHz, 128 GB memory), Oracle Solaris 10 8/11, iPlanet Web Server 7
    Application Server Configuration: 3 x SPARC T4-2 servers, each with 2 x SPARC T4 processors, 2.85 GHz, 256 GB memory, 3 x 300 GB SAS internal disks, Oracle Solaris 10 8/11, Siebel CRM 8.1.1.5 SIA
    Database Server Configuration: 1 x SPARC T4-1 server (1 x SPARC T4 processor, 2.85 GHz, 128 GB memory), Oracle Solaris 11 11/11, Oracle Database 11g Release 2 (11.2.0.2)
    Storage Configuration: 1 x Sun Storage F5100 Flash Array (80 x 24 GB flash modules)
    Benchmark Description
    The Siebel 8.1 PSPP benchmark includes Call Center and Order Management:
    Siebel Financial Services Call Center – Provides the most complete solution for sales and service, allowing customer service and telesales representatives to provide superior customer support, improve customer loyalty, and increase revenues through cross-selling and up-selling. High-level description of the use cases tested: Incoming Call Creates Opportunity, Quote and Order, and Incoming Call Creates Service Request. Three complex business transactions are executed simultaneously for a specific number of concurrent users. The ratios of these 3 scenarios were 30%, 40% and 30% respectively, together totaling 70% of all transactions simulated in this benchmark. Between each user operation and the next one, the think time averaged approximately 10, 13, and 35 seconds respectively.
    Siebel Order Management – Oracle's Siebel Order Management allows employees such as salespeople and call center agents to create and manage quotes and orders through their entire life cycle. Siebel Order Management can be tightly integrated with back-office applications, allowing users to perform tasks such as checking credit, confirming availability, and monitoring the fulfillment process. High-level description of the use cases tested: Order & Order Items Creation and Order Updates. Two complex Order Management transactions were executed simultaneously for a specific number of concurrent users, concurrently with the three Call Center scenarios above. The ratio of these 2 scenarios was 50% each, together totaling 30% of all transactions simulated in this benchmark.
Between each user operation and the next one, the think time averaged approximately 20 and 67 seconds respectively. Key Points and Best Practices No processor cores or cache were activated or deactivated on the SPARC T-Series systems to achieve special benchmark effects. See Also Siebel White Papers SPARC T4-1 Server oracle.com OTN SPARC T4-2 Server oracle.com OTN Siebel CRM oracle.com OTN Oracle Solaris oracle.com OTN Oracle Database 11g Release 2 Enterprise Edition oracle.com OTN Disclosure Statement Copyright 2012, Oracle and/or its affiliates. All rights reserved. Oracle and Java are registered trademarks of Oracle and/or its affiliates. Other names may be trademarks of their respective owners. Results as of 30 September 2012.

    Read the article

  • Why We Do What We Do. (Part 3 of 5 Part Series on JDE 5G Postponed)

    - by Kem Butller-Oracle
    By Lyle Ekdahl - Oracle JD Edwards Sr. VP General Manager  In the closing of part two of this 5 part blog series, I stated that in the next installment I would explore the expected results of the digital overdrive era and the impact it will have on our economy. While I have full intentions of writing on that topic, I am inspired today to write about something that is top of mind. It’s top of mind because it has come up several times recently conversations with my Oracle’s JD Edwards team members, with customers and our partners, plus I feel passionately about why I do what I do…. It is not what we do but why we do that thing that we do Do you know what you do? For the most part, I bet you could tell me what you do even if your work has changed over the years.  My real question is, “Do you get excited about what you do, and are you fulfilled? Does your work deliver a sense of purpose, a cause to work for, and something to believe in?”  Alright, I guess that was not a single question. So let me just ask, “Why?” Why are you here, right now? Why do you get up in the morning? Why do you go to work? Of course, I can’t answer those questions for you but I can share with you my POV.   For starters, there are several things that drive me. As many of you know by now, I have a somewhat competitive nature but it is not solely the thrill of winning that actually fuels me. Now don’t get me wrong, I do like winning occasionally. However winning is only a potential result of competing and is clearly not guaranteed. So why compete? Why compete in business, and particularly why in this Enterprise Software business?  Here’s why! I am fascinated by creative and building processes. It is about making or producing things, causing something to come into existence. With the right skill, imagination and determination, whether it’s art or invention; the result can deliver value and inspire. In both avocation and vocation I always gravitate towards the create/build processes.  I believe one of the skills necessary for the create/build process is not just the aptitude but also, and especially, the desire and attitude that drives one to gain a deeper understanding. The more I learn about our customers, the more I seek to understand what makes the successful and what difficult issues cause them to struggle. I like to look for the complex, non-commodity process problems where streamlined design and modern technology can provide an easy and simple solution. It is especially gratifying to see our customers use our software to increase their own ability to deliver value to the market. What an incredible network effect! I know many of you share this customer obsession as well as the create/build addiction focused on simple and elegant design. This is what I believe is at the root of our common culture.  Are JD Edwards customers on a whole different than other ERP solutions’ customers? I would argue that for the most part, yes, they are. They selected our software, and our software is different. Why? Because I believe that the create/build process will generally result in solutions that reflect who built it and their culture. And a culture of people focused on why they create/build will attract different customers than one that is based on what is built or how the solution is delivered. In the past I have referred to this idea as character of the customer, and it transcends industry, size and run rate. Now some would argue that JD Edwards has some customers who are characters. But that is for a different post. 
As I have told you before, the JD Edwards culture is unique, and its resulting economy is valuable and deserving of our best efforts. 

    Read the article

  • Efficiently separating Read/Compute/Write steps for concurrent processing of entities in Entity/Component systems

    - by TravisG
    Setup I have an entity-component architecture where Entities can have a set of attributes (which are pure data with no behavior) and there exist systems that run the entity logic which act on that data. Essentially, in somewhat pseudo-code: Entity { id; map<id_type, Attribute> attributes; } System { update(); vector<Entity> entities; } A system that just moves along all entities at a constant rate might be MovementSystem extends System { update() { for each entity in entities position = entity.attributes["position"]; position += vec3(1,1,1); } } Essentially, I'm trying to parallelise update() as efficiently as possible. This can be done by running entire systems in parallel, or by giving each update() of one system a couple of components so different threads can execute the update of the same system, but for a different subset of entities registered with that system. Problem In reality, these systems sometimes require that entities interact(/read/write data from/to) each other, sometimes within the same system (e.g. an AI system that reads state from other entities surrounding the current processed entity), but sometimes between different systems that depend on each other (i.e. a movement system that requires data from a system that processes user input). Now, when trying to parallelize the update phases of entity/component systems, the phases in which data (components/attributes) from Entities are read and used to compute something, and the phase where the modified data is written back to entities need to be separated in order to avoid data races. Otherwise the only way (not taking into account just "critical section"ing everything) to avoid them is to serialize parts of the update process that depend on other parts. This seems ugly. To me it would seem more elegant to be able to (ideally) have all processing running in parallel, where a system may read data from all entities as it wishes, but doesn't write modifications to that data back until some later point. The fact that this is even possible is based on the assumption that modification write-backs are usually very small in complexity, and don't require much performance, whereas computations are very expensive (relatively). So the overhead added by a delayed-write phase might be evened out by more efficient updating of entities (by having threads work more % of the time instead of waiting). A concrete example of this might be a system that updates physics. The system needs to both read and write a lot of data to and from entities. Optimally, there would be a system in place where all available threads update a subset of all entities registered with the physics system. In the case of the physics system this isn't trivially possible because of race conditions. So without a workaround, we would have to find other systems to run in parallel (which don't modify the same data as the physics system), other wise the remaining threads are waiting and wasting time. However, that has disadvantages Practically, the L3 cache is pretty much always better utilized when updating a large system with multiple threads, as opposed to multiple systems at once, which all act on different sets of data. Finding and assembling other systems to run in parallel can be extremely time consuming to design well enough to optimize performance. Sometimes, it might even not be possible at all because a system just depends on data that is touched by all other systems. Solution? 
    In my thinking, a possible solution would be a system where reading/updating and writing of data is separated, so that in one expensive phase, systems only read data and compute what they need to compute, and then in a separate, performance-wise cheap, write phase, attributes of entities that needed to be modified are finally written back to the entities. The Question: How might such a system be implemented to achieve optimal performance, as well as making the programmer's life easier? What are the implementation details of such a system, and what might have to be changed in the existing EC architecture to accommodate this solution?
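    One concrete way to get the delayed-write behavior described above is to split each frame into a parallel, read-only compute phase that produces a list of intended writes, followed by a short serial commit phase that applies them. The sketch below is a minimal Python illustration of that pattern, not a definitive design; the entity layout, the thread pool, and the write-tuple format are all assumptions made for the example.

        from concurrent.futures import ThreadPoolExecutor

        # Toy component store: entity id -> {attribute name: value}.
        entities = {
            1: {"position": (0.0, 0.0, 0.0)},
            2: {"position": (5.0, 1.0, 2.0)},
        }

        def movement_compute(eid, components):
            # Read phase: inspect the snapshot only; return an intended write
            # instead of mutating shared state, so no locks are required.
            x, y, z = components["position"]
            return (eid, "position", (x + 1.0, y + 1.0, z + 1.0))

        def update(systems):
            with ThreadPoolExecutor() as pool:
                # Compute phase: every (system, entity) pair runs in parallel
                # against a shallow copy of the entity's components.
                futures = [pool.submit(system, eid, dict(components))
                           for system in systems
                           for eid, components in entities.items()]
                pending = [f.result() for f in futures]

            # Write phase: serial and cheap; apply the queued modifications.
            for eid, attribute, value in pending:
                entities[eid][attribute] = value

        update([movement_compute])
        print(entities)   # positions advanced by (1, 1, 1)

    Double-buffering the component arrays (read from the front buffer, write into the back buffer, swap once per frame) achieves the same separation with less per-frame copying, at the cost of holding two copies of mutable state.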

    Read the article

  • Type of computer for a developer on the road

    - by nabucosound
    Hi developers: I am planning to be traveling through eurasia and asia (russia, china, korea, japan, south east asia...) for a while and, although there are plenty of marvelous things to see and to do, I must keep on working :(. I am a python developer, dedicated mainly to web projects. I use django, sqlite3, browsers, and ocassionaly (only if I have no choice) I install postgres, mysql, apache or any other servers commonly used in the internets. I do my coding on vim, use ssh to connect, lftp to transfer files, IRC, grep/ack... So I spend most of my time in the terminal shells. But I also use IM or Skype to communicate with my clients and peers, as well as some other software (that after all is not mandatory for my day-to-day work). I currently work with a Macbook Pro (3 years old now) and so far I am very happy with the performance. But I don't want to carry it if I am going to be "on transit" for long time, it is simply huge and heavy for what I am planning to load in my rather small backpack (while traveling, less is more, you know). So here I am reading all kind of opinions about netbooks, because at first sight this is the kind of computer I thought I had to choose. I am going to use Linux for it, Microsoft is not my cup of tea and Mac is not available for them, unless I were to buy a Macbook air, something that I won't do because if I am robbed or rain/dust/truck loaders break it I would burst in tears. I am concerned about wifi performance and connectivity, I am going to use one of those linux distros/tools to hack/test on "open" networks (if you know what I mean) in case I am not in a place with real free wifi access and I find myself in an emergency. CPU speed should be acceptable, but since I don't plan to run Photoshop or expensive IDEs, I guess most of the time I won't be overloading the machine. Apart from this, maybe (surely) I am missing other features to consider. With that said (sorry about the length) here it comes my question, raised from a deep ignorance regarding the wars betweeb betbooks vs notebooks (I assume tablet PCs are not for programming yet): If I buy a netbook will I have to throw it away after 1 month on the road and buy a notebook? Or will I be OK? Thanks! Hector Update I have received great feedback so far! I would like to insist on the fact that I will be traveling through many different countries and scenarios. I am sure that while in Japan I will be more than fine with anything related to technology, connectivity, etc. But consider that I will be, for example, on a train through Russia (transsiberian) and will cross Mongolia as well. I will stay in friends' places sometimes, but most of the time I will have to work from hostel rooms, trains, buses, beaches (hey this last one doesn't sound too bad hehe!). I think some of your answers guys seem to focus on the geek part but loose the point of this "on the road" fact. I am very aware and agree that netbooks suck compared to notebooks, but what I am trying to do here is to find a balance and discover your experiences with netbooks to see first hand if a netbook will be a fail in the mid-long term of the trip for my purposes. 
    So I have summarized the main concepts expressed here in this small list, in no particular order:
    keyboard/touchpad feel: I use vim so no need of moving mouse pointers that much, unless I am browsing the web, but intensive use of keyboard
    screen real estate: again, terminal work for most of the time
    battery life: I think something very important
    weight/size: also very important
    looks not worth stealing it, don't give a shit if it is lost/stolen/broken: this may depend on kind of person, your economy, etc. Also to prevent losing work, I will upload EVERYTHING to the cloud whenever I'll have a chance.
    wifi: don't want to discover my wifi is one of those that cannot deal with half the routers on this planet or has poor connectivity.
    Thanks again for your answers and comments!

    Read the article

< Previous Page | 171 172 173 174 175 176 177 178 179 180 181 182  | Next Page >