Search Results

Search found 10106 results on 405 pages for 'fail fast'.


  • Partner Webcast - Rethink HR! Introducing New Fusion HCM - 05 July 2012

    - by Thanos
    Introducing New Cloud Applications from the Leader in Human Capital Management. Customers are constantly looking for better ways to manage talent, develop leaders, engage with employees, and strategically plan their workforce. To meet these challenges, you need modern applications that can efficiently deliver analytical insights, collaborative tools, and a personalized user experience. That's why Oracle has rethought the business of HR to provide value to the entire organization—from HR professionals to employees to managers. With Oracle Fusion Human Capital Management, you can:
    - Do things your way with a role-centric user experience that can be personalized to the way you work
    - Know your people better with the most complete HR business intelligence capabilities
    - Work as a team by quickly finding and connecting with peers and experts to deliver concrete results
    - Leverage enterprise-grade software as a service (SaaS) to get up and running fast and securely
    Our speaker, Csaba Fehér, HCM Senior Sales Consultant, will share how Oracle Fusion Human Capital Management can deliver immediate business value with new techniques and the latest innovations to address your toughest concerns, including how to:
    - Align compensation and performance
    - Organize and calibrate critical talent
    - Predict key workforce trends and risks
    Delivery Format: This FREE online LIVE eSeminar will be delivered over the Web. Registrations received less than 24 hours prior to start time may not receive confirmation to attend. Duration: 1 hour. Register Now! For any questions please contact us at [email protected]

    Read the article

  • Data Integration 12c Raising the Big Data Roof at Oracle OpenWorld

    - by Tanu Sood
    Author: Dain Hansen, Director, Oracle. It was an exciting OpenWorld 2013 for us in the Data Integration track. Our theme this year was all about 'being future ready' - previewing one of our biggest releases this year: Oracle Data Integration 12c. Just this week we followed up on this preview by announcing the general availability of the 12c release for Oracle's key data integration products: Oracle Data Integrator 12c and Oracle GoldenGate 12c. The new release delivers extreme performance, increases IT productivity, and simplifies deployment, while helping IT organizations keep pace with new data-oriented technology trends including cloud computing, big data analytics, and real-time business intelligence.

    Mark Hurd's keynote on day one set the tone for the Data Integration sessions. Mark focused on big data analytics and changing consumer expectations. Real-time insight in particular is a key theme for Oracle overall and for the data integration products. In Mark Hurd's keynote we heard from key customers, such as Airbus and Thomson Reuters, how real-time analysis of operational data, including machine data, creates value - in some cases, even saves lives. Thomas Kurian gave a deeper look into Oracle's big data and fast data solutions.

    In the lead Data Integration track session, Brad Adelberg, VP of Development, presented Oracle's Data Integration 12c product strategy based on key trends from the opening OpenWorld keynotes. Brad talked about how Oracle's data integration products address the new data integration requirements that have evolved with cloud computing, big data, and changing consumer expectations, and how those requirements set the key themes in our products' road map. Brad explained why and how fast time to value, high performance, and future-ready solutions are the top focus areas for product development. If you were not able to attend OpenWorld or this session, I recommend reading the white paper: Five New Data Integration Requirements and How to Meet them with Oracle Data Integration, which provides an in-depth look into how Oracle addresses the new trends in the DI market.

    Following Brad's session, Nick Wagner provided an in-depth review of Oracle GoldenGate's latest features and roadmap. Nick discussed how Oracle GoldenGate's tight integration with Oracle Database sets the product apart from the competition. We also heard that heterogeneity is still a major focus for GoldenGate's development, and that there will be more news on that front with the next major release.
    After GoldenGate's product strategy session, Denis Gray from the PM team presented Oracle Data Integrator's product strategy session, talking about the latest and greatest on ODI. Another good session was delivered by long-time GoldenGate users, Comcast. Jason Hurd and Amit Patel of Comcast talked about the various use cases in which they deploy Oracle GoldenGate throughout their enterprise, from database upgrades and feeding reporting systems to active-active database synchronization. The Comcast team shared many good tips on how to use GoldenGate for both zero-downtime upgrades and active-active replication with conflict management requirements.

    One of the other important goals we had this year for the Data Integration track at OpenWorld was hearing from our customers, and we ended day 1 on just that, with a wonderful ceremony for the Oracle Excellence Awards for Oracle Fusion Middleware Innovation, held in the Yerba Buena Center for the Arts. Congratulations to Royal Bank of Scotland and The Yalumba Wine Company, the winners in the Data Integration category, selected for their innovative use of Oracle's Data Integration products. You can find more information on the award and the winners in our previous blog post: 2013 Oracle Excellence Awards for Fusion Middleware Innovation.

    Royal Bank of Scotland's Market and International Banking division provides clients across the globe with seamless trading and competitive pricing, underpinned by a deep knowledge of risk management across the full spectrum of financial products. They handle millions of transactions daily to keep the lifeblood of their clients' businesses flowing - whether through payment management solutions or through bespoke trade finance solutions. Royal Bank of Scotland is leveraging Oracle GoldenGate and Oracle Data Integrator, along with Oracle Business Intelligence Enterprise Edition and the Oracle Database, for a variety of solutions. Mainly, Oracle GoldenGate and Oracle Data Integrator are used to feed their data warehouse - providing a real-time data integration solution that delivers transactional data to their analytics system in minutes, enabling improved decision making with timely, accurate data for their business users. Oracle Data Integrator's in-database transformation capabilities and its ability to integrate with Oracle GoldenGate for real-time data capture are the foundation of this implementation. With this solution, changes made on the operational system are available in the analytics system the same day they are deployed, with 100% data quality guaranteed. Additionally, the solution has helped to reduce their operational database size from 150GB to 10GB. Impressive! Now what if I told you this solution was built in 3 months and had a less than 6 month return on investment? That's outstanding! The Yalumba Wine Company is situated in the Barossa Valley of Australia.
    It is the oldest family-owned winery in Australia, with a unique way of aging their wines in specially crafted 100 liter barrels. Did you know that "Yalumba" is Aboriginal for "all the land around"? The Yalumba Wine Company is growing rapidly, and needed to bring a more modern standard to its existing manufacturing processes to meet its globalization, time-to-market, and operational efficiency objectives for product development. The Yalumba Wine Company worked with a partner, Bristlecone, to develop a unique solution whereby Oracle Data Integrator is leveraged to pull data from Salesforce.com and JD Edwards, in addition to their other pre-existing source systems, for consumption into their data warehouse. They have emphasized the overall ease of developing integration workflows with Oracle Data Integrator. The solution has brought better visibility for the business users, faster data loading and transformation into their data warehouse with rapid incorporation of new data sources, and a solid, future-proof foundation for their organization. Moving forward, they plan on leveraging more from Oracle's Data Integration portfolio. Terrific!

    In addition to these two customers, on Tuesday we featured many other important Oracle Data Integrator and Oracle GoldenGate customers. The Tuesday GoldenGate panel included Land O'Lakes, Smuckers, and Veolia Water. Besides giving us yummy nutrition and healthy water, these companies have another aspect in common: they all use GoldenGate to boost their ERP applications. Please read the recap by Irem Radzik. On Wednesday, the ODI panel included Barry Ralston and Ryan Weber of Infinity Insurance, Paul Stracke of Paychex Inc., and Ian Wall of Vertex Pharmaceuticals, for a session filled with interesting projects, use cases and approaches to leveraging Oracle Data Integrator. Please read the recap by Sandrine Riley for more.

    Thanks to everyone who joined us, and we hope to stay connected! To hear more about our Data Integration 12c products, join us in an upcoming webcast. Follow us at www.twitter.com/ORCLGoldenGate or go to our website at www.oracle.com/goto/dataintegration

    Read the article

  • Why not to use StackTrace to find what method called you

    - by Alex.Davies
    Our obfuscator, SmartAssembly, does some pretty crazy reflection. It's an obfuscator; it's sort of its job to do things in the most awkward way possible. But sometimes, you can go too far. One such time is this little gem from the strings encoding feature:

        StackTrace stackTrace = new StackTrace();
        StackFrame frame = stackTrace.GetFrame(1);
        Type ownerType = frame.GetMethod().DeclaringType;

    It's designed to find the type where the calling method is defined. A user found that strings encoding occasionally broke on x64 systems. Very strange. After some debugging (thank god for Reflector Pro, it would be impossible to debug processed assemblies without it) I found that the ownerType I got back was wrong. The reason is that the x64 JIT does tail call optimisation. This saves space on the stack, and speeds things up, by throwing away a method's stack frame when the last thing the method does is call something else and return its result. When this happens, the call to StackTrace faithfully tells you that the calling method is the one that called the one we really wanted. So using StackTrace isn't safe for anything other than debugging, and it will make your code fail in unpredictable ways. Don't use it!
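    One way to sidestep the problem entirely - a minimal sketch, not SmartAssembly's actual fix, with the string table invented for illustration - is to bake the owner type in at the call site instead of fishing it out of the stack at runtime:

        using System;
        using System.Collections.Generic;

        // Sketch: pass the owner type explicitly so the x64 JIT's tail call
        // optimisation cannot change the answer. This dictionary is a
        // stand-in for the real encoded-string storage.
        public static class StringDecoder
        {
            private static readonly Dictionary<(Type, int), string> table =
                new Dictionary<(Type, int), string>();

            public static void Register(Type ownerType, int id, string value)
            {
                table[(ownerType, id)] = value;
            }

            public static string Decode(Type ownerType, int id)
            {
                return table[(ownerType, id)];
            }
        }

        class Program
        {
            static void Main()
            {
                // Call site: typeof(Program) is resolved at compile time,
                // so no stack walking is involved.
                StringDecoder.Register(typeof(Program), 42, "hello");
                Console.WriteLine(StringDecoder.Decode(typeof(Program), 42));
            }
        }

    Note that decorating callers with [MethodImpl(MethodImplOptions.NoInlining)] is not enough here: it prevents inlining, but not the tail call elimination described above.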

    Read the article

  • Am I wrong to disagree with A Gentle Introduction to symfony's template best practices?

    - by AndrewKS
    I am currently learning symfony, going through the book A Gentle Introduction to symfony, and came across this section in "Chapter 4: The Basics of Page Creation" on creating templates (or views):

    "If you need to execute some PHP code in the template, you should avoid using the usual PHP syntax, as shown in Listing 4-4. Instead, write your templates using the PHP alternative syntax, as shown in Listing 4-5, to keep the code understandable for non-PHP programmers."

    Listing 4-4 - The Usual PHP Syntax, Good for Actions, But Bad for Templates

        <p>Hello, world!</p>
        <?php
        if ($test)
        {
            echo "<p>".time()."</p>";
        }
        ?>

    (The ironic thing about this is that the echo statement would look even better if time were a variable declared in the controller, because then you could just embed the variable in the string instead of concatenating.)

    Listing 4-5 - The Alternative PHP Syntax, Good for Templates

        <p>Hello, world!</p>
        <?php if ($test): ?>
        <p><?php echo time(); ?></p>
        <?php endif; ?>

    I fail to see how Listing 4-5 makes the code "understandable for non-PHP programmers", and its readability is shaky at best. 4-4 looks much more readable to me. Are there any programmers using symfony who write their templates like those in 4-4 rather than 4-5? Are there reasons I should use one over the other? There is the very slim chance that somewhere down the road someone less technical could be editing the template, but how does 4-5 actually make it more understandable to them?

    Read the article

  • Cannot FTP without simultaneous SSH connection?

    - by Lucas
    I'm trying to set up an old box as a backup server (running 10.04.4 LTS). I intend to use 3rd party software on my PC to periodically connect to my server via FTP(S) and mirror certain files. For some reason, all FTP connection attempts fail UNLESS I'm simultaneously connected via SSH. For example, if I use putty to test the connection to port 21, the system hangs and times out. I get:

        220 Connected to LeServer
        USER lucas
        331 Please specify the password.
        PASS [password]
        <cursor>

    However, when I'm simultaneously logged in (in another session) everything works:

        220 Connected to LeServer
        USER lucas
        331 Please specify the password.
        PASS [password]
        230 Login successful.

    Basically, this means that my software will never be able to connect on its own, as intended. I know that the correct port is open because it works (sometimes), and nmap gives me:

        Starting Nmap 5.00 ( http://nmap.org ) at 2012-03-20 16:15 CDT
        Interesting ports on xx.xxx.xx.x:
        Not shown: 995 closed ports
        PORT    STATE SERVICE
        21/tcp  open  ftp
        22/tcp  open  ssh
        53/tcp  open  domain
        139/tcp open  netbios-ssn
        445/tcp open  microsoft-ds

        Nmap done: 1 IP address (1 host up) scanned in 0.15 seconds

    My only hypothesis is that this has something to do with iptables. Maybe it's allowing only established connections? I don't think that's how I set it up, but maybe? Here are my iptables rules for INPUT:

        lucas@rearden:~$ sudo iptables -L INPUT
        Chain INPUT (policy DROP)
        target                    prot opt source    destination
        fail2ban-ssh              tcp  --  anywhere  anywhere     multiport dports ssh
        ufw-before-logging-input  all  --  anywhere  anywhere
        ufw-before-input          all  --  anywhere  anywhere
        ufw-after-input           all  --  anywhere  anywhere
        ufw-after-logging-input   all  --  anywhere  anywhere
        ufw-reject-input          all  --  anywhere  anywhere
        ufw-track-input           all  --  anywhere  anywhere
        ACCEPT                    tcp  --  anywhere  anywhere     tcp dpt:ftp

    I'm using vsftpd. Any thoughts/resources on how I could fix this?

    Read the article

  • Ubuntu 12.04 LTS loops the login screen unless you login as Guest

    - by Mário Silva
    I am running VMWare Player with Ubuntu 12.04 LTS (Precise Pangolin) as a guest on my Windows 7 host. Sometimes I get the blue screen shutdown error in Windows; this time it happened while I was running the Player. When I restarted everything, Ubuntu gave me the (not so unfamiliar in this forum) login loop at the administrator login. I log in and there is a black screen where I can only read:

        piix4_smbus 0000:00:07.3: Host SMBus controller not enabled

    When I go to the prompt in root mode, it fails to update and only upgrades, especially some plugins (I think graphics plugins), which also appear in an error message after quitting the prompt even though they are successfully installed; they are not the cause of the error message. After that I have been working with the failsafe mode recovery panel. When I try to update via root I get errors like this:

        W: Failed to fetch http://extras.ubuntu.com/ubuntu/dists/precise/Release.gpg  Could not resolve 'extras.ubuntu.com'

    There are 8 more like this, referring to areas like:
    - archive.canonical.com
    - ppa.launchpad.net
    - security.ubuntu.com
    - us.archive.ubuntu.com
    - precise-updates/Release.gpg
    - precise-backports/Release.gpg

    Final message: some index files failed to download... they have been ignored, or old files are used.

    The black screens mostly pass by too fast for me to pick up any information. But in general I think I have done everything I could in the recovery panel, including updating network and graphics packages and recovering filesystem packages, plus the basic stuff (I am a beginner regarding Linux) at the root prompt. Now I am stuck at this screen with graphics options:

    - Run in low-graphics mode for just one session
    - Reconfigure graphics
    - Troubleshoot the error
    - Exit to console login

    I try to choose "Reconfigure graphics", but the mouse disappears in the virtual machine screen, and sometimes when the options change it is only between the first and last option; this happens out of the blue, without messages. This particular option menu is in the regular GUI style against a black screen in terminal style. Really strange. Thanks in advance; all help is welcome and appreciated.

    Read the article

  • LLVM-3.1 libLLVMSupport.a undefined reference to `dladdr'

    - by user91387
    I'm trying to compile using the llvm-3.1 package. I'm running 12.04 x64 (3.2.0-26 kernel) and 12.10 (3.5.0-4) x64 with llvm-3.1 backported from quantal, then from debian experimental. Next I tried 12.10 with the native Ubuntu llvm-3.1 package; this failed as well.

        user@system:/tmp/llvm-test# make
        compiling cpp yacc file: decaf-llvm.y
        output file: decaf-llvm
        bison -b decaf-llvm -d decaf-llvm.y
        /bin/mv -f decaf-llvm.tab.c decaf-llvm.tab.cc
        flex -odecaf-llvm.lex.cc decaf-llvm.lex
        g++ -o ./decaf-llvm decaf-llvm.tab.cc decaf-llvm.lex.cc decaf-stdlib.c `llvm-config --cppflags --ldflags --libs core jit native` -ly -ll
        /usr/lib/llvm-3.1/lib/libLLVMSupport.a(Signals.o): In function `PrintStackTrace(void*)':
        (.text+0x6c): undefined reference to `dladdr'
        /usr/lib/llvm-3.1/lib/libLLVMSupport.a(Signals.o): In function `PrintStackTrace(void*)':
        (.text+0x18f): undefined reference to `dladdr'
        collect2: error: ld returned 1 exit status
        make: *** [decaf-llvm] Error 1

    I know the code works, as I've run it fine on CentOS using llvm-3.1-6.fc18 (rpm). Google was a bit helpful with this:

    "On some systems, including Ubuntu 11.10, linking may fail with a message that libLLVMSupport.a in function PrintStackTrace(void*) has an undefined reference to dladdr. Workaround is to compile LLVM with cmake specifying the following variable: -DCMAKE_EXE_LINKER_FLAGS=-ldl" (http://svn.dsource.org/projects/bindings/trunk/llvm-3.0/Readme)

    I double-checked my ldflags and everything seems OK:

        user@system:/$ llvm-config --ldflags
        -L/usr/lib/llvm-3.1/lib -lpthread -lffi -ldl -lm

    I'm unclear on what to do next; any suggestions?

    Read the article

  • Cedarview drivers boot to a blank screen

    - by map7
    I'm following this tutorial to get my Intel D2700DC motherboard's graphics working: http://daily.siebler.eu/2012/06/ubuntu-12-04-driver-for-intel-cedarview-atom-n2000-und-d2000-serie/

    When I boot I get a blank screen. I followed the tutorial and read all the comments. I've also tried:

    - Installing gdm and using it instead of lightdm (the Ubuntu default): sudo apt-get install gdm
    - Removing the previous pae kernel: http://www.liberiangeek.net/2011/11/remove-old-kernels-in-ubuntu-11-10-oneiric-ocelot/
    - Rebooting before adding the cedarview packages
    - Booting with and without "video=LVDS-1:d" in the GRUB_CMDLINE_LINUX_DEFAULT variable in /etc/default/grub

    I still get a blank screen. I am plugged into an HD screen through HDMI and have tried the DVI connector also. I can see the grub menu, then a little of the loading, and then 'No signal'. I can still ssh into the box, though, so it is booting and logging in.

        lspci -v -s 00:02.0
        00:02.0 VGA compatible controller: Intel Corporation Atom Processor D2xxx/N2xxx Integrated Graphics Controller (rev 09) (prog-if 00 [VGA controller])
                Subsystem: Intel Corporation Device 2011
                Flags: bus master, fast devsel, latency 0, IRQ 45
                Memory at 80100000 (32-bit, non-prefetchable) [size=1M]
                I/O ports at 20d0 [size=8]
                Expansion ROM at <unassigned> [disabled]
                Capabilities: <access denied>
                Kernel driver in use: pvrsrvkm
                Kernel modules: cedarview_gfx

        uname -a
        Linux test-desktop 3.2.0-35-generic #55-Ubuntu SMP Wed Dec 5 17:45:18 UTC 2012 i686 i686 i386 GNU/Linux

    Read the article

  • Cross-platform desktop programming: C++ vs. Python

    - by John Wells
    Alright, to start off, I have experience as an amateur Obj-C/Cocoa and Ruby w/Rails programmer. These are great, but they aren't really helpful for writing cross-platform applications (hopefully GNUStep will one day be complete enough for the former to be multi-platform, but that day is not today). C++, from what I can gather, is extremely powerful but also a huge, ugly behemoth that can take half a decade or more to master. I've also read that you can very easily not only shoot yourself in the foot, but blow your entire leg off with it, since memory management is all manual. Obviously, this is all quite intimidating. Is it correct? Python seems to provide most of the power of C++ and is much easier to pick up, at the cost of speed. How big is this sacrifice? Is it meaningful or can it be ignored? Which will have me writing fast, stable, highly reliable applications in a reasonable amount of time? Also, is it better to use Qt for your UI, or instead maintain separate, native front ends for each platform? EDIT: For extra clarity, there are two types of applications I want to write: one is an extremely friendly and convenient database frontend; the other, which no doubt will come much later on, is a 3D world editor.

    Read the article

  • Good resources for learning Rails?

    - by Bobby Tables
    I just finished working through Peter Cooper's "Beginning Ruby". So now I've got a reasonable grounding in the Ruby language and would like to move onto learning Rails. This question's answers give some good pointers, but I'd like to hear some specific reviews of books and online materials. I generally learn best by working through books with good practical/technical examples AND some passive reading content that breaks up the study between practical and reading sessions (this is what made "Beginning Ruby" great for me), but I'm worried that RoR is evolving fast and that any printed book I order might be obsolete by the time I get it and work through it. Is this a fair worry? Or can anyone recommend a good Rails 3 book that should be up to date at least for the next year or so? Also, I had a brief look at some of the online resources from the other questions, and Rails for Zombies seems to get a lot of praise. Has anyone here actually used it as their introductory guide to Rails? Basically I'd like to hear first-hand accounts of people who went through this "Ruby-to-Rails" learning phase recently and which materials were useful to you.

    Read the article

  • Efficient solution for multiplayer space partitioning?

    - by DevilWithin
    This question is a little tricky, but I will try to make it clear. Let's say I am building an online game, not on an MMO scale, but gladly supporting as many players as possible, with an authoritative server approach, and I want really big worlds with lots of AI-simulated enemies. I am aware of a few strategies to save the server's CPU by subdividing the space and not processing what doesn't need processing. I've already split the world into regions, which requires loading times and small transitions; I think that's important to maintain the quality of gameplay when playing locally (alone or even with a couple of friends), because the players won't normally be in more than one or two regions. But even a region can become pretty big and have a lot of NPCs simulating at a time. How do I handle this without hurting the player's experience? Approaches like one server per region and the like are not on the table. I am mainly looking for data structures to hold hordes of enemies, and even peaceful NPCs. To finalize the question, please note that vehicles exist, therefore it's considerably fast to travel within a region, influencing when to cull areas. Sorry for the confusing question, thanks.
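    For illustration, one common starting point is a uniform grid over the region, so that each server tick simulates only the cells near players. This is a minimal C# sketch with all names invented, not tied to any particular engine:

        using System;
        using System.Collections.Generic;

        // Minimal uniform-grid spatial partition: entities register in a cell,
        // and per tick only cells within some radius of a player are updated.
        public class Entity { public float X, Y; }

        public class SpatialGrid
        {
            private readonly float cellSize;
            private readonly Dictionary<(int, int), List<Entity>> cells =
                new Dictionary<(int, int), List<Entity>>();

            public SpatialGrid(float cellSize) { this.cellSize = cellSize; }

            // Math.Floor keeps negative coordinates in the correct cell.
            private (int, int) CellOf(float x, float y) =>
                ((int)Math.Floor(x / cellSize), (int)Math.Floor(y / cellSize));

            public void Add(Entity e)
            {
                var key = CellOf(e.X, e.Y);
                if (!cells.TryGetValue(key, out var list))
                    cells[key] = list = new List<Entity>();
                list.Add(e);
            }

            // Entities in the cells around (x, y): used to decide which
            // NPCs are close enough to an active player to simulate.
            public IEnumerable<Entity> Near(float x, float y, int cellRadius = 1)
            {
                var (cx, cy) = CellOf(x, y);
                for (int dx = -cellRadius; dx <= cellRadius; dx++)
                    for (int dy = -cellRadius; dy <= cellRadius; dy++)
                        if (cells.TryGetValue((cx + dx, cy + dy), out var list))
                            foreach (var e in list) yield return e;
            }
        }

    NPCs outside every player's neighbourhood can then tick on a slower schedule or be frozen entirely; because vehicles make travel fast, the cell radius (or a speed-aware margin) is what decides how early distant cells wake up. Moving entities would also need to be re-bucketed when they cross a cell boundary, which this sketch omits.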

    Read the article

  • Web application stopped behaving normally after migration from Dedicated servers to Ec2 servers

    - by sunny
    Our web application stopped behaving normally after migration from dedicated servers to EC2 servers.

    Old dedicated server configuration:
    - System type: 32-bit operating system
    - RAM: 3.99 GB
    - Processor: 1.86 GHz

    New servers in EC2:
    - System type: 64-bit operating system
    - RAM: 3.99 GB
    - Processor: 2.73 GHz - 2.31 GHz

    Everything was working fine on our old production servers. But as we migrated our web application to the new servers and transferred the entire network traffic to them, the site suddenly started behaving abnormally. Sometimes it's super fast, sometimes slow, sometimes normal, sometimes super slow, and sometimes there is no response at all. All of this happens in cycles of around 2-3 minutes, and it went on for 8-10 hours. A few differences between the old and new servers: the old servers are using IIS 6 and the new servers are using IIS 7.5. We are using exactly the same code on the old and new servers. The EC2 servers even have faster CPUs than the older servers, yet performance is lower. Not sure how this happened. Please share your views...

    Read the article

  • Cluster Nodes as RAID Drives

    - by BuckWoody
    I'm unable to sleep tonight so I thought I would push this post out VERY early. When you don't sleep your mind takes interesting turns, which can be a good thing. I was watching a briefing today by a couple of friends as they were talking about various ways to arrange a Windows Server Cluster for SQL Server. I often see an "active" node of a cluster with a "passive" node backing it up. That means one node is working and accepting transactions, and the other is not doing any work but simply "standing by" waiting for the first to fail over. The configuration in the demonstration I saw was a bit different. In this example, there were three nodes that were actively working, and a fourth standing by for all three. I've put configurations like this one into place before, but as I was looking at their architecture diagram, it looked familiar - it looked like a RAID drive setup! And that's not a bad way to think about your cluster arrangements. The same concerns you might think about for a particular RAID configuration provide a good way to think about protecting your systems in general. So even if you're not staying awake all night thinking about SQL Server clusters, take this post as an opportunity for "lateral thinking" - a way of combining in your mind the concepts from one piece of knowledge to another. You might find a new way of making your technical environment a little better.

    Read the article

  • Announcing: Oracle Enterprise Manager 12c Delivers Advanced Self-Service Automation for Oracle Database 12c Multitenant

    - by Scott McNeil
    New Self-Service Driven Provisioning of Pluggable Databases. Today Oracle announced new capabilities that support managing the full lifecycle of pluggable database as a service in Oracle Enterprise Manager 12c Release 3 (12.1.0.3). This latest release builds on the existing capabilities to provide advanced automation for deploying database as a service using the Oracle Database 12c Multitenant option. It takes it one step further by offering pluggable database as a service through the Oracle Enterprise Manager 12c self-service portal, providing customers with fast provisioning of database cloud services with minimal time and effort. This is a significant addition to Oracle Enterprise Manager 12c's existing portfolio of cloud services, which includes infrastructure as a service, database as a service, testing as a service, and Java platform as a service. The solution provides a self-service mechanism to provision pluggable databases, allowing users to request and access database(s) on demand. The self-service operations are also enabled through REST APIs, allowing customers to integrate with third-party automation systems or their custom enterprise portals.

    Benefits:
    - Self-service provisioning allows rapid access to pluggable database as a service for hosting or certifying applications on Oracle Database 12c
    - Self-service driven migration to pluggable database as a service, in order to migrate a pre-Oracle Database 12c database to a pluggable database as a service model and test the consolidation strategy
    - Single service catalog for all approved pluggable database as a service configurations, which helps customers achieve standardization while catering to all applications and users in the enterprise
    - Resource guarantee via database resource manager (and IORM on Oracle Exadata) that enables deployment of mixed workloads in a shared environment
    - Quota, role-based access, and policy-based management that enforces governance and reduces administrative overhead
    - Chargeback or showback, which improves metering and accountability for services consumed by each pluggable database
    - Comprehensive REST APIs that support integration with ticketing or change management systems, or with other self-service portals
    - Minimal administrative and maintenance overhead through self-managing automation that allows for intelligent placement of pluggable databases

    To understand how pluggable database as a service works, watch this quick demo.
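    As a rough sketch of what such an integration might look like - the endpoint path, payload fields, and zone name below are purely hypothetical placeholders, not the documented Enterprise Manager 12c API - a custom portal could submit a pluggable database request over REST like this:

        // Hypothetical sketch only: "ssa/api/requests" and the JSON fields
        // are illustrative, not Enterprise Manager's actual REST surface.
        using System;
        using System.Net.Http;
        using System.Text;
        using System.Threading.Tasks;

        class PdbRequestDemo
        {
            static async Task Main()
            {
                using var client = new HttpClient { BaseAddress = new Uri("https://em.example.com/") };
                var body = new StringContent(
                    "{ \"serviceType\": \"pdb\", \"zone\": \"dev-zone\", \"name\": \"SALESPDB\" }",
                    Encoding.UTF8, "application/json");

                // A ticketing system or portal would issue a request like this,
                // then poll the returned job resource for completion.
                HttpResponseMessage resp = await client.PostAsync("ssa/api/requests", body);
                Console.WriteLine(resp.StatusCode);
            }
        }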

    Read the article

  • Recent resources on Entity Framework 4

    - by Eric Nelson
    I just posted on the bits you need to install to explore all the features of Entity Framework 4 with the Visual Studio 2010 RC. I've also had a quick look (March 12th 2010) to see what new resources are out there on EF4. They appear a little thin on the ground - but there are some gems. The following all caught my attention:

    - Julie Lerman has published two how-to videos on EF4 on pluralsight.com. You need to create a free guest pass to watch them.
    - Getting Started with Entity Framework 4.0 - a session given at Cairo CodeCamp 2010. This includes ppt and demos.
    - Entity Framework 4 providers - read through the comments.
    - What's new with Entity Framework in Visual Studio 2010 RC.
    - Extending the design surface of EF4 using the Extension Starter Kit.
    - Persistence Ignorance and EF4 on geekSpeak on Channel 9 (poor audio IMHO - I gave up).
    - First of a series of posts on EF4.
    - How to stop your DBA having a heart attack with EF4, from Simon Sabin in the UK. This includes ppt and demos.

    And the biggie: you no longer have to depend on SQL Profiler to keep an eye on the generated SQL. There is now a commercial profiler for Entity Framework. I am yet to try it, but I listened to a .NET Rocks podcast which made it sound great. It is "hidden" in a session on DSLs in Boo: Oren Eini on creating DSLs in Boo. This is a much richer experience than you would get from SQL Profiler, matching the SQL to the .NET code. And finally, a momentous #fail to... drum roll... the Visual Studio 2010 and .NET Framework 4 Training Kit Feb release. This just contains one ppt on EF4, and it is not even a good one. Real shame. P.S. I will update the 101 EF4 Resources with the above, but post-DevWeek, in case I find some more goodies. Related Links: 101 EF4 Resources

    Read the article

  • LinkedIn Woopsie with the Outlook 2010 Social Media Connector

    - by Martin Hinshelwood
    I have always used the LinkedIn toolbar for Outlook to sort out, upload and sync my contacts. Because of this I have over 2000 contacts in my contacts list that I sync with my phone, Plaxo, Live, Google and others. I got a surprise the other day when my LinkedIn account was suspended and I was unable to log in.

    Figure: Bad, account suspended

    So I contacted LinkedIn customer services to find out what the problem was, and here is the response:

        Dear Martin,
        We have recently noticed a large number of page searches and profile views through your LinkedIn account. We are aware that you may be using an automated or manual process to systematically view LinkedIn web pages. The information within LinkedIn is provided by our users for usage on the site only. In order to protect user privacy, our User Agreement prohibits using:
        1. Automated or manual means to view an excessively high number of profiles or mini-profiles.
        2. Automated means to run searches to collect or store data obtained from our site.
        We have placed a restriction on your account until you agree to stop using these or similar methods to view pages on LinkedIn. We look forward to your reply to discuss this further.
        Sincerely,
        LinkedIn Privacy Team

    It looks like LinkedIn has suspended my account because of something that their own component is doing! I do not know if this is an isolated case, or if it will happen more as more users get on Outlook 2010 and update to the new software, but watch out. Has anyone else been suspended who has installed the Office 2010 RTM and the LinkedIn add-on? Technorati Tags: Fail, LinkedIn, Outlook 2010

    Read the article

  • New Packt Books: APEX & JRockit

    - by [email protected]
    I have received these 2 ebooks from Packt Publishing and I am currently reviewing them. Both of them look great so far.

    Oracle Application Express 3.2 - The Essentials and More
    First of all, I have to mention that I am new to APEX. I was interested in this product, which is a development tool for web applications on the Oracle Database. As I support JDeveloper and ADF, which are products that work very closely with the Oracle Database and are rapid development tools as well, it is always interesting and useful to know complementary tools. APEX looks very useful and the book includes many working examples. A more complete review of this book is coming soon. Further information about this book can be seen at Packt.

    Oracle JRockit: The Definitive Guide
    Many of our Oracle Coherence customers run their caches and clusters using JRockit. This JVM has helped us to solve lots of service requests. It is a really reliable, fast and stable JVM. It works great in both development and production environments with big amounts of data, concurrency, multi-threading and many other factors that can make a JVM crash. I must also mention JRockit Mission Control (JRMC), which is a great tool for management and monitoring. I really recommend it. As a matter of fact, some months ago I created a document entitled "How to Monitor Coherence-Based Applications using JRockit Mission Control" (Doc Id 961617.1) on My Oracle Support. The JRockit Runtime Analyzer (JRA), and its successor in newer versions, the JRockit Flight Recorder (JFR), are also reviewed in depth. This book contains very clear and complete information about all this and more. I will post an entry with a more complete review soon (and will probably post an entry about Coherence monitoring with JRMC soon too). Further information about this book can be seen at Packt.

    Read the article

  • Issue with multiplayer interpolation

    - by Ben Cracknell
    In a fast-paced multiplayer game I'm working on, there is an issue with the interpolation algorithm. You can see it clearly in the image below.

    - Cyan: local position when a packet is received
    - Red: position received from the packet (the goal)
    - Blue: line from the local position to the goal when the packet is received
    - Black: local position every frame

    As you can see, the local position seems to oscillate around the goals instead of moving between them smoothly. Here is the code:

        // local transform position when the last packet arrived. Will lerp from here to the goal
        private Vector3 positionAtLastPacket;

        // location received from the last packet
        private Vector3 goal;

        // time since the last packet arrived
        private float currentTime;

        // estimated time to reach the goal (also the expected time of the next packet)
        private float timeToReachGoal;

        private void PacketReceived(Vector3 position, float timeBetweenPackets)
        {
            positionAtLastPacket = transform.position;
            goal = position;
            timeToReachGoal = timeBetweenPackets;
            currentTime = 0;

            Debug.DrawRay(transform.position, Vector3.up, Color.cyan, 5); // current local position
            Debug.DrawLine(transform.position, goal, Color.blue, 5);      // path to goal
            Debug.DrawRay(goal, Vector3.up, Color.red, 5);                // received goal position
        }

        private void FrameUpdate()
        {
            currentTime += Time.deltaTime;
            float delta = currentTime / timeToReachGoal;
            transform.position = FreeLerp(positionAtLastPacket, goal, delta);

            // current local position
            Debug.DrawRay(transform.position, Vector3.up * 0.5f, Color.black, 5);
        }

        /// <summary>
        /// Lerp without being locked to 0-1
        /// </summary>
        Vector3 FreeLerp(Vector3 from, Vector3 to, float t)
        {
            return from + (to - from) * t;
        }

    Any idea about what's going on?
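    One likely culprit, offered as a guess since it depends on the actual packet timing: FreeLerp is deliberately not clamped, so whenever a packet arrives later than timeBetweenPackets predicts, t exceeds 1 and the object extrapolates past the goal. The next packet then restarts the lerp from that overshot position, which shows up as oscillation around the goals. A minimal sketch of a fix is to clamp the interpolation once the goal is reached:

        private void FrameUpdate()
        {
            currentTime += Time.deltaTime;

            // Clamp01 stops the object at the goal instead of flying past it
            // when a packet arrives late (t > 1).
            float t = Mathf.Clamp01(currentTime / timeToReachGoal);
            transform.position = Vector3.Lerp(positionAtLastPacket, goal, t);
        }

    Clamping trades overshoot for a brief stall at the goal. A common alternative is to render the entity slightly in the past and interpolate between the last two received states, so that a late packet stretches the timeline instead of displacing the position.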

    Read the article

  • Oracle University New Courses (Week 35)

    - by swalker
    Oracle University released the following new (versions of) courses recently:

    Fusion Middleware
    - Oracle Directory Services 11g: Administration (5 days)
    - Oracle SOA Suite 11g: Essential Concepts (Training on Demand)

    e-Business Suite
    - R12 Oracle HRMS iRecruitment Fundamentals (Self-Study Course)
    - R12 Oracle Payroll Fundamentals: Administration (Self-Study Course)
    - R12 Oracle HRMS System Administration Fundamentals (Self-Study Course)
    - R12 Oracle HRMS Self Service Fundamentals (Self-Study Course)
    - R12 Oracle HRMS Implement and Use Fast Formula (Self-Study Course)
    - R12 HRMS Work Structures Fundamentals (Self-Study Course)
    - R12 HRMS Total Compensation Foundations (Self-Study Course)

    Siebel
    - Siebel 8.1.x Chat and Voice Integration Using CCA (Self-Study Course)
    - Siebel 8.1.x Search using Oracle Secure Enterprise Search (Self-Study Course)
    - Siebel 8.1.x COM Web Services (Self-Study Course)
    - Siebel 8.1.x COM Asset Based Order Management (Self-Study Course)
    - Siebel 8.1.x COM: What is New in Product Configurator (Self-Study Course)
    - Siebel 8.1.x COM Product Configurator Caching & Performance Management (Self-Study Course)
    - Siebel 8.1.x COM PSP Engine Caching and Performance Management (Self-Study Course)
    - Siebel 8.1.x Remote: Administration (Self-Study Course)
    - Siebel 8.1.x Remote: Technical Foundations (Self-Study Course)
    - Siebel Tools: Configuring Chart and Tree Applets (Self-Study Course)

    Sun - Server Administration
    - SPARC SuperCluster Administration and Maintenance Seminar (2 days)
    - OPN Only Sparc T4-Based Servers Installation Boot Camp (1 day)

    Primavera
    - Primavera P6 Application Administration Rel 8.x (2 days)

    Oracle Retail
    - Retail Merchandising System (RMS) Business Overview (Self-Study Course)
    - Retail Invoice Matching (ReIM) Product Overview (Self-Study Course)
    - Retail Invoice Matching (ReIM) Business Introduction (Self-Study Course)
    - Retail Demand Forecasting: RDF Classic Product Overview (Self-Study Course)
    - Retail Demand Forecasting Introduction (Self-Study Course)
    - Retail Data Warehouse (RDW) Overview 13.1 (Self-Study Course)
    - Oracle Retail Point-of-Service (POS) Product Overview (Self-Study Course)
    - Retail Sales Audit (ReSA) Product Overview (Self-Study Course)
    - Retail Price Management (RPM) Product Overview (Self-Study Course)
    - Retail Merchandising System (RMS) Technical Introduction (Self-Study Course)
    - Oracle Retail Integration Bus (RIB) Product Overview (Self-Study Course)

    Oracle Communications
    - Unified Communications Suite Convergence Customization (2 days)
    - OSM Foundations I: Tasks, Processes and Orders

    Get in contact with your local Oracle University team for more details and course dates.

    Read the article

  • null pointers vs. Null Object Pattern

    - by GlenH7
    Attribution: This grew out of a related P.SE question. My background is in C/C++, but I have worked a fair amount in Java and am currently coding C#. Because of my C background, checking passed and returned pointers is second nature, but I acknowledge it biases my point of view. I recently saw mention of the Null Object Pattern, where the idea is that an object is always returned: the normal case returns the expected, populated object, and the error case returns an empty object instead of a null pointer. The premise is that the calling function will always have some sort of object to access and can therefore avoid null-access memory violations. So what are the pros/cons of a null check versus using the Null Object Pattern? I can see cleaner calling code with the NOP, but I can also see where it would create hidden failures that don't otherwise get raised. I would rather have my application fail hard (aka an exception) while I'm developing it than have a silent mistake escape into the wild. Can't the Null Object Pattern have similar problems as not performing a null check? Many of the objects I have worked with hold objects or containers of their own. It seems like I would have to have a special case to guarantee all of the main object's containers had empty objects of their own. Seems like this could get ugly with multiple layers of nesting.
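    For illustration, here is a minimal C# sketch of the two styles side by side (all names invented):

        public interface ILogger { void Write(string message); }

        public class FileLogger : ILogger
        {
            public void Write(string message) { /* append to a file */ }
        }

        // Null Object style: absence is hidden behind a do-nothing implementation.
        public class NullLogger : ILogger
        {
            public static readonly NullLogger Instance = new NullLogger();
            public void Write(string message) { /* intentionally does nothing */ }
        }

        public static class LoggerFactory
        {
            // Null-check style: may return null, so a forgotten check
            // fails fast with a NullReferenceException at the call site.
            public static ILogger CreateOrNull(bool configured) =>
                configured ? new FileLogger() : null;

            // Null Object style: never returns null, so every call site is
            // clean, but a misconfigured logger now fails silently.
            public static ILogger Create(bool configured) =>
                configured ? (ILogger)new FileLogger() : NullLogger.Instance;
        }

    This is exactly the trade-off the question describes: the null check makes absence loud, while the null object makes call sites clean at the risk of hiding mistakes.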

    Read the article

  • Infragistics - Now Available! 2 New Packs and Silverlight CTPs

    Infragistics® glows silver this season as we continue to innovate for the Silverlight 3 platform and deliver stockings stuffed with the high-performance controls needed to quickly and easily create great user experiences in Silverlight, plus two new ICON packs guaranteed to make your applications shine.

    First Silverlight Pivot Grid Now Available
    Perfect for working with multi-dimensional data, the xamWebPivotGrid™ presents decision makers with highly interactive pivoting views of business intelligence. Our new high-performance Silverlight charting control, the xamWebDataChart™, enables blazing-fast updates every few milliseconds to charts with millions of data points. Both of these controls are planned for the 2010 Volume 1 release of NetAdvantage for Silverlight Data Visualization.

    Gift to Silverlight Line of Business Applications
    You'll be able to deliver superior user experiences in LOB applications with Silverlight RIA services support, a ZIP compression library, a new control persistence framework, and new Silverlight data grid features like unbound columns and template layouts, plus an Office 2007-style ribbon UI for Silverlight.

    Tis the Season to Add Some Shine
    The NetAdvantage ICONS Legal Pack adds a touch of legalese to any application user interface with its rich, legal-system-themed graphic icons. The NetAdvantage ICONS Education Pack supplies familiar academic icons that developers can easily add to software reaching students, educators, schools and universities. Sold in themed packs, the ICON packs already available are: Web & Commerce, Healthcare, Office Basics, Business & Finance, and Software & Computing. Buy any two of the seven packs for $299 USD (MSRP), 3 packs for $399 USD (MSRP), or each pack separately for $199 USD (MSRP).

    For more product details, contact Infragistics:
    - +1 (800) 231-8588
    - In Europe (English speakers): +44 (0) 20 8387 1474
    - In France (French speakers): +33 (0) 800 667 307
    - In Germany (German speakers): 0800 368 6381
    - In India: +91 (80) 6785 1111

    Read the article

  • Difference between Detach/Attach and Restore/BackUp a DB

    - by SAMIR BHOGAYTA
    Transact-SQL BACKUP/RESTORE is the normal method for database backup and recovery. Databases can be backed up while online. The backup file size is usually smaller than the database files, since only used pages are backed up. Also, in the FULL or BULK_LOGGED recovery model, you can reduce potential data loss by performing transaction log backups. Detaching a database removes the database from SQL Server while leaving the physical database files intact. This allows you to rename or move the physical files and then re-attach them. Although one could perform cold backups using this technique, detach/attach isn't really intended to be used as a backup/recovery process. It is commonly recommended that you use BACKUP/RESTORE for disaster recovery (DR) scenarios and for copying data from one location to another. But this is not absolute: for a very large database, the backup/restore process may take longer than you would like when moving it from one location to another, in which case detaching/attaching the database is a better way, since you can attach a workable database very quickly. Be aware, though, that detaching a database takes it offline for a short time, and that detach/attach does not provide DR protection. For more information about detaching and attaching databases, you can refer to: Detaching and Attaching Databases http://technet.microsoft.com/en-us/library/ms190794.aspx
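    For illustration, here is a minimal C# sketch of both approaches issued through ADO.NET. The database and file names are invented, and the connection should point at the master database, since you cannot detach a database you are connected to:

        using System.Data.SqlClient;

        class DatabaseMoveDemo
        {
            static void Exec(SqlConnection conn, string sql)
            {
                using (var cmd = new SqlCommand(sql, conn))
                {
                    cmd.ExecuteNonQuery();
                }
            }

            static void Run(string masterConnectionString)
            {
                using (var conn = new SqlConnection(masterConnectionString))
                {
                    conn.Open();

                    // Backup/restore: online, backs up only used pages,
                    // and works with transaction log backups.
                    Exec(conn, @"BACKUP DATABASE [Sales]
                                 TO DISK = N'D:\backups\Sales.bak' WITH INIT;");

                    // Detach/attach: takes the database offline, but skips the
                    // backup step, so moving a very large database can be faster.
                    Exec(conn, "EXEC sp_detach_db N'Sales';");
                    // ...copy or move Sales.mdf / Sales_log.ldf to the new location...
                    Exec(conn, @"CREATE DATABASE [Sales]
                                 ON (FILENAME = N'E:\data\Sales.mdf'),
                                    (FILENAME = N'E:\data\Sales_log.ldf')
                                 FOR ATTACH;");
                }
            }
        }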

    Read the article

  • Lenovo Ideapad S205 and Ubuntu 12.04 LTS

    - by Oranges
    I just wanted to report one thing, because I know there are a lot of Ideapad S205 users out there who had issues running Ubuntu on their Lenovo Ideapad S205 in the past.

    Lenovo Ideapad S205 with Ubuntu 12.04 LTS: everything works out of the box. I just installed it, and everything - the microphone, the webcam, the WLAN, the AMD E-350 CPU/GPU and the sound - worked right away. In other words: everything works. Even the AMD E-350 GPU works very well, with both the open source drivers and the proprietary drivers. The proprietary drivers were automatically offered to me by Ubuntu itself. It works great. A long idea, written short: I love it. It's extremely stable and very fast. I am using the German version of the Ideapad S205, combined with the 32-bit version of Ubuntu.

    There is only one single thing to keep in mind when installing Ubuntu on the S205: plug your USB CD/DVD drive into the left USB port. I don't know why, but the install only works with the USB port on the left side of the netbook. This is only for installing Ubuntu; after it has been installed and updated successfully, all of the USB ports will work.

    Oranges - feedback on the Lenovo S205 with Ubuntu 12.04 LTS.

    Read the article

  • What are the benefits/drawbacks of classifying defects during a peer code review

    - by DXM
    About 3 months ago, our engineering group rolled out Review Board to be used for all peer code reviews. Today, I had a discussion with one of the people involved in that process and found out that we are already looking for a replacement (possibly something commercial) because of several missing features. One of the features apparently asked for by many people is the ability to classify/categorize each code review comment (i.e. is it a style issue, coding convention, resource leak, logic error, crash... whatever). For those teams that regularly practice code review: is this categorization a common practice? Do you do it? Have you done it in the past? Is it good/bad?

    On one hand, it gives the team some more metrics and possibly will indicate more specific areas where developers may need to be trained (at least that seems to be the argument). Are there other benefits? On the other hand, and this is my concern, it will slow down the code review process that much more. As a team lead, I've done a fairly large share of reviews, and I've always liked the ability to highlight a chunk of code, hammer off a comment and move on as fast as possible. Although I haven't tried it personally, I have a feeling that expanding that combo box every time and scrolling/searching for the right category would feel like something tripping you up. Also, if we start keeping metrics on this stuff, my other concern is that valuable code review meeting time will be spent arguing whether something is a logic error or should be classified as a crash.

    Read the article

  • MTN WMS Implementation Story

    - by aditya.agarkar
    MTN is Africa's largest cellular phone company, serving millions of customers across 21 countries. MTN uses Oracle WMS to manage its distribution activities and its sizzling growth. Just for perspective: since 2004, Africa has been the fastest growing mobile phone market in the world. If you want to know more about MTN and the WMS project at MTN, a summarized view of the MTN WMS project is here. The WMS project at MTN was presented at Oracle OpenWorld in 2007. The extensive automation at MTN includes an interface with a conveyor for item transport, a high-speed sorter for item routing, put-to-light for packing accuracy, an ASRS carousel/lift for inventory security and storage optimization, a check-weight scale for shipping accuracy, automated carton erectors for package creation, and automated carton labeling. Subsequent to this presentation and their go-live in 2007, the MTN warehouse has scaled new heights. Volume has grown manifold (as can be expected in a fast-growing cellular market), and Oracle WMS has been able to scale very well with the increase in volume, just as it was designed to do. Here are a couple of videos that highlight the WMS operations at MTN:

    1) Video interview with Margaretha Theart (Warehouse Manager at MTN)
    2) Automation video at MTN (Hat tip: Syed Imran)

    Enjoy!

    Read the article
