Search Results

Search found 19446 results on 778 pages for 'network printer'.


  • SPARC T4-4 Delivers World Record First Result on PeopleSoft Combined Benchmark

    - by Brian
    Oracle's SPARC T4-4 servers running Oracle's PeopleSoft HCM 9.1 combined online and batch benchmark achieved a world record of 18,000 concurrent users while executing a PeopleSoft Payroll batch job of 500,000 employees in 43.32 minutes and maintaining online response times under 2 seconds. This world record is the first to run online and batch workloads concurrently. The result was obtained with a SPARC T4-4 server running Oracle Database 11g Release 2, a SPARC T4-4 server running the PeopleSoft HCM 9.1 application server, and a SPARC T4-2 server running Oracle WebLogic Server in the web tier. The SPARC T4-4 server running the application tier used Oracle Solaris Zones, which provide a flexible, scalable and manageable virtualization environment. Average CPU utilization was 17% on the SPARC T4-2 server in the web tier, 59% on the SPARC T4-4 server in the application tier, and 35% on the SPARC T4-4 server in the database tier (online and batch), leaving significant headroom for additional processing across the three tiers. The SPARC T4-4 server used for the database tier hosted Oracle Database 11g Release 2, using Oracle Automatic Storage Management (ASM) for database file management with I/O performance equivalent to raw devices. This is the first three-tier, mixed-workload (online and batch) PeopleSoft benchmark that also processes the PeopleSoft Payroll batch workload.

Performance Landscape (PeopleSoft HR Self-Service and Payroll Benchmark)
Systems: SPARC T4-2 (web), SPARC T4-4 (app), SPARC T4-4 (db)
Users: 18,000
Average Response, Search (sec): 0.944
Average Response, Save (sec): 0.503
Batch Time (min): 43.32
Streams: 64

Configuration Summary
Application Configuration: 1 x SPARC T4-4 server with 4 x SPARC T4 processors, 3.0 GHz; 512 GB memory; 5 x 300 GB SAS internal disks; 1 x 100 GB and 2 x 300 GB internal SSDs; 2 x 10 GbE HBAs; Oracle Solaris 11 11/11; PeopleTools 8.52; PeopleSoft HCM 9.1; Oracle Tuxedo, Version 10.3.0.0, 64-bit, Patch Level 031; Java Platform, Standard Edition Development Kit 6 Update 32.
Database Configuration: 1 x SPARC T4-4 server with 4 x SPARC T4 processors, 3.0 GHz; 256 GB memory; 3 x 300 GB SAS internal disks; Oracle Solaris 11 11/11; Oracle Database 11g Release 2.
Web Tier Configuration: 1 x SPARC T4-2 server with 2 x SPARC T4 processors, 2.85 GHz; 256 GB memory; 2 x 300 GB SAS internal disks; 1 x 100 GB internal SSD; Oracle Solaris 11 11/11; PeopleTools 8.52; Oracle WebLogic Server 10.3.4; Java Platform, Standard Edition Development Kit 6 Update 32.
Storage Configuration: 1 x Sun Server X2-4 as a COMSTAR head for data (4 x Intel Xeon X7550, 2.0 GHz; 128 GB memory); 1 x Sun Storage F5100 Flash Array (80 flash modules); 1 x Sun Storage F5100 Flash Array (40 flash modules); 1 x Sun Fire X4275 as a COMSTAR head for redo logs (12 x 2 TB SAS disks with a Niwot RAID controller).

Benchmark Description
This benchmark combines the PeopleSoft HCM 9.1 HR Self-Service online workload and the PeopleSoft Payroll batch workload, run against a unified database deployed on Oracle Database 11g Release 2. The PeopleSoft HR Self-Service benchmark kit is an Oracle standard benchmark kit run by all platform vendors to measure performance. It is an OLTP benchmark whose database SQL statements are moderately complex. The results are certified by Oracle and a white paper is published. PeopleSoft HR Self-Service defines a business transaction as a series of HTML pages that guide a user through a particular scenario. Users are defined as corporate employees, managers and HR administrators.
The benchmark consists of 14 scenarios that emulate users performing typical HCM transactions such as viewing a paycheck, promoting and hiring employees, updating an employee profile, and other typical HCM application transactions. All of these transactions are well defined in the PeopleSoft HR Self-Service 9.1 benchmark kit. The benchmark metric is the weighted average response (search/save) time for all the transactions. The PeopleSoft 9.1 Payroll (North America) benchmark demonstrates system performance for a range of processing volumes in a specific configuration. This workload represents the large batch runs typical of an ERP environment during a mass update. The benchmark measures five application business process run times for a database representing a large organization: Paysheet Creation, Payroll Calculation, Payroll Confirmation, Print Advice Forms, and Create Direct Deposit File. The benchmark metric is the cumulative elapsed time taken to complete the Paysheet Creation, Payroll Calculation and Payroll Confirmation business application processes. The metrics for each respective benchmark are taken while both run simultaneously against the same database back end. Specifically, the payroll batch processes are started when the online workload reaches steady state (the maximum number of online users) and overlap with online transactions for the duration of the steady state.

Key Points and Best Practices
Two Oracle PeopleSoft domain sets with 200 application servers each were hosted on a SPARC T4-4 server in two separate Oracle Solaris Zones to demonstrate consolidation of multiple application servers, ease of administration and performance tuning. Each Oracle Solaris Zone was bound to a separate processor set, each containing 15 cores (120 threads in total). The default set (one core from the first and third processor sockets, 16 threads in total) was used for network and disk interrupt handling. This was done to improve performance by reducing memory access latency (using the physical memory closest to the processors) and by offloading I/O interrupt handling to the default set's threads, freeing CPU resources for the application server threads and balancing the application workload across 240 threads. A minimal configuration sketch appears after the disclosure statement below.

See Also
Oracle PeopleSoft Benchmark White Papers (oracle.com)
SPARC T4-2 Server (oracle.com, OTN)
SPARC T4-4 Server (oracle.com, OTN)
PeopleSoft Enterprise Human Capital Management (oracle.com, OTN)
PeopleSoft Enterprise Human Capital Management (Payroll) (oracle.com, OTN)
Oracle Solaris (oracle.com, OTN)
Oracle Database 11g Release 2 Enterprise Edition (oracle.com, OTN)

Disclosure Statement
Oracle's PeopleSoft HR and Payroll combined benchmark, www.oracle.com/us/solutions/benchmark/apps-benchmark/peoplesoft-167486.html, results 09/30/2012.
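A minimal sketch of the zone-per-processor-set layout described under Key Points and Best Practices, using stock Solaris 11 tooling and hypothetical zone names (the benchmark team may equally have built the processor sets by hand with psrset or resource pools; dedicated-cpu is simply the most compact equivalent):

    # Give each application zone its own 120-thread (15-core) processor set.
    # zonecfg's dedicated-cpu resource creates a temporary pool/pset for the zone when it boots.
    zonecfg -z appzone1 "add dedicated-cpu; set ncpus=120; end"
    zonecfg -z appzone2 "add dedicated-cpu; set ncpus=120; end"
    zoneadm -z appzone1 reboot
    zoneadm -z appzone2 reboot

The remaining cores stay in the default set, which continues to field network and disk interrupts as the article describes.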

    Read the article

  • Keep it Professional – Multiple Environments

    - by AjarnMark
    I have certainly been reading blogs a whole lot more than writing them the last several weeks, and it’s about time I got back to writing.  I have been collecting several topics and references for blog posts…some of which will probably just never get written as the timeliness of the topics fade over time.  Nonetheless, I’m back, and I think it is time to revive my Doing Business Right series, this time coming from the slant of managing a development team rather than the previous angle of being self-employed.  First up: separating Dev, Test, and Prod. A few months ago, Colin Stasiuk (@BenchmarkIT) wrote a great post about separating your Dev, Test/UAT, and Prod environments.  This post covers all the important points such as removing Developer access from both PROD and UAT, and the importance of proper deployment (a.k.a. promotion) procedures.  I won’t repeat it all here, go read the original!  But what I do want to address is what I believe to be the #1 excuse people use for not having separate environments:  Money.  I discussed this briefly in my comment on Colin’s post at the time, but let me repeat it here and expand on it a bit. Don’t let the size of your company or the size of its budget dictate whether you do things professionally or not.  I am convinced that most developers and development teams would agree that it is a best practice to have separate environments for development, testing, and production (a.k.a. Live).  So why don’t they?  Because they think that it means separate servers which means more money.  While having separate physical servers for the different environments would be ideal, it is not an absolute requirement in order to make this work.  Here are a few ideas: Use multiple instances of SQL Server and multiple Web Sites with Headers or Ports.  For no additional fees* you can install multiple instances of SQL Server on the same machine.  This gives you a nice separation, allowing you to even use the same database names as will appear in PROD, yet isolating the data and security access.  And in IIS, you can create multiple Web Sites on the same server just by using Host Headers or different port numbers to separate them.  This approach does still pose the risk of non-Prod environments impacting performance on Prod, but when your application is busy enough for that to be a concern, you can probably afford one of the other options. Use desktop PCs instead of servers.  Instead of investing in full server-grade hardware, you can mimic the separate environments on old desktop PCs and at least get functional equivalency, if not performance matching.  The last I checked, Microsoft did not require separate licensing for SQL Server if that installation was used exclusively for dev or test purposes*.  There may be some version or performance differences between this approach and what you have in Prod, but you have isolated test from impacting Prod resources this way. Virtualization.  This is of course one of the hot topics of the day, and I would be remiss if I did not suggest this.  It is quite easy these days to setup virtual machines so that, again, your environments are fairly isolated from one another, and you retain all the security and procedural benefits of having separate environments. So the point is, keep your high professional standards intact.  You don’t need to compromise on using proper procedure just because you work in a small company with a small budget.  Keep doing things the right way! By the way, where I work, our DEV environment is not on a server.  
All development is done on the developer’s individual workstation, where it can be isolated from other developers’ work for the duration of writing the code, but also where the developers have to reconcile (merge) differences in code under concurrent development.  This usually means that each change is executed multiple times (once per developer to update their environment with the latest changes from others), giving us an extra, informal test deployment before even going to the Test/UAT server.  It also means that if the network goes down, the developers can continue to hum along because they are not dependent on networked resources.  In fact, they will likely be even more productive because they aren’t being interrupted by email…but that’s another post I need to write. * I am not a lawyer, nor a licensing specialist, but it appeared to be so the last time I checked.  When in doubt, consult an expert on the topic.
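To make the first suggestion above concrete, separate named SQL Server instances and a host-header-separated IIS site can be reached and created with the stock command-line tools; a hedged sketch with made-up instance names, host names and paths:

    REM Connect to hypothetical DEV and TEST named instances on the same machine.
    sqlcmd -S .\DEV  -Q "SELECT @@SERVERNAME"
    sqlcmd -S .\TEST -Q "SELECT @@SERVERNAME"

    REM Create an IIS 7+ site distinguished only by its host header (names and paths illustrative).
    %windir%\system32\inetsrv\appcmd add site /name:"MyApp-Test" ^
        /bindings:http/*:80:test.myapp.local /physicalPath:"C:\Sites\MyApp-Test"

None of this changes the licensing caveat in the footnote; check your own agreements before standing up extra instances.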

    Read the article

  • In the Mobile and Tablet World, How Much is Too Much?

    - by andrewbrust
    The week of April 26th was a huge one in the world of mobile and tablet devices,  There were so many individual developments, announcements and solidifications of strategy, it’s almost impossible to believe they occurred in the same month, let alone the same week. Things started with Apple and Gizmodo having a Law and Order moment over the latter’s procurement of what appears to be the former’s 4th gen iPhone prototype.  We found out on the 26th that Gizmodo blogger Jason Chen’s apartment was raided by police and, honestly, that was a bit much. But Apple didn’t stop there.  They also published Steve Job’s critique of Adobe Flash and his explanation of Cupertino’s embargo of Flash on iPhones, iPods and iPads.  If you ask me, this too, was a bit much. Apple finished up the week by releasing the 3G version of its iPad product to the US market. I like (iLike?) my WiFi iPad.  The idea of getting a version of it that required a second 3G service monthly subscription, is, well, a bit  much. Microsoft was in the news too.  It killed a project it hadn’t even acknowledged the existence of: the Courier tablet.  That’s a bit much too.  If a tree falls in the woods, and Microsoft says they can’t hear it anyway, could they really have chopped it down? Maybe Microsoft Research should have licensed some of Courier’s technology from other parts of Microsoft.  Then maybe they could have kept the product alive.  Ask HTC: they’re going to be licensing technology from Microsoft because Redmond insists that Google’s Android operating system infringes on certain of their patents.  And since HTC now builds a number of handsets on Android, instead of being beholden, as they once were, to Windows Mobile, that means they can keep making their products.  Why does HTC have to pay the royalties, and not Google?  Maybe Microsoft decided that going after GOOG would have been a bit much, even for them. The agreement came not a moment to soon: HTC released their “Droid Incredible” (that name’s a bit much), an Android 2.1 handset with amazing hardware and HTC’s own Sense UI, on April 30th (this past Friday). This phone is very well-reviewed.  Maybe that’s why Google basically decided to beg off introducing a version of its Nexus One phone (also manufactured by HTC) on the Verizon Wireless network.  Google backing down?  That’s incredible, if not also a bit much. And that brings us to HP.  Which this week announced its acquisition of Palm and its webOS mobile phone touch-oriented operating system.  HP also killed its own Slate initiative.  Apparently HP realized that Windows 7, even with a proprietary HP touch UI added on top, is no match for the iPad.  I’m guessing they think webOS might work a bit better,  And I’m wondering if HP even wants to use webOS for phone handsets, beyond the Pre and Pixi.  Using it just for slate devices would be a bit extreme, but maybe not too much. Honestly, this was not Microsoft’s best week.  It killed a project and a close partner did likewise.  Then that same partner bought a competing OS product, while another partner released their new product that uses yet another competing OS platform. What did Microsoft actually produce this past week? An update to its Windows Phone 7 developer tools that actually works with the version of Visual Studio 2010 released on April 12th, and the version of Silverlight released three days later. That took three weeks to get synced up, and that’s a bit much too. But at least it happened. 
Windows Phone 7 is Microsoft’s best hope for a comeback in the SmartPhone market and to offer a credible touch-based tablet device.  This week, two of Microsoft’s slate initiatives died, and its only mobile phone victory was around its competitor’s operating system.  I hope the new platform gets Redmond out of the PC ghetto and into the classes of device that get people really excited today.  If it can’t, that would be a bit much; probably too much.  And, as the signs at the Lonestar Cafe in NYC used to say, too much ain’t enough.

    Read the article

  • IBM Keynote: (hardware,software)–>{IBM.java.patterns}

    - by Janice J. Heiss
    On Sunday evening, September 30, 2012, Jason McGee, IBM Distinguished Engineer and Chief Architect for Cloud Computing, along with John Duimovich, IBM Distinguished Engineer and Java CTO, gave an information- and idea-rich keynote that left Java developers with much to ponder. Their focus was on the challenges of making Java more efficient and productive given the hardware and software environments of 2012.

“One idea that is very interesting is the idea of multi-tenancy,” said McGee, “and how we can move up the spectrum. In traditional systems, we ran applications on dedicated middleware, operating systems and hardware. A lot of customers still run that way. Now people introduce hardware virtualization and share the hardware. That is good but there is a lot more we can do. We can share middleware and the application itself.” McGee challenged developers to better enable the Java language to function in these higher-density models. He spoke about the need to describe patterns that help us grasp the full environment that an application needs, whether it’s a web or full enterprise application. Developers need to understand the resources that an application interacts with in a way that is simple and straightforward. The task is then to automate that deployment so that the complexity of infrastructure can be bypassed and developers can live in a simpler world where the cloud can automatically configure the needed environment. McGee argued that the key, something IBM has been working on, is to use a simpler pattern that allows a cloud-based architecture to embrace the entire infrastructure required for an application and make it highly available, scalable and able to recover from failure. The cloud-based architecture would automate the complexity of setting up and managing the infrastructure. IBM has been trying to realize this vision for customers so they can describe their Java application environment simply and allow the cloud to automate the deployment and management of applications. “The point,” explained McGee, “is to package the executable used to describe applications, to drop it into a shared system and let that system provide some intelligence about how to deploy and manage those applications.”

John Duimovich on Improvements in Java
McGee then brought onstage IBM’s Distinguished Engineer and CTO for Java, John Duimovich, who showed the audience ways to deploy Java applications more efficiently. Duimovich explained: “When you run lots of copies of Java in the cloud or any hypervisor-virtualized system, there are a lot of duplications of code and jar files. IBM has a facility called ‘shared classes’ where we put shared code, read-only artefacts in a cache that is sharable across hypervisors.” By putting JIT-compiled code in ahead of time, he explained, the application server will use 20% less memory and operate 30% faster. He described another example of how the JVM allows for the maximum amount of sharing while managing the tenants, file sockets and memory use through throttling and control. Duimovich touched on the “thin is in” model and IBM’s Liberty Profile, a lightweight runtime for the cloud, which allows for greater efficiency in interacting with the cloud. Duimovich also discussed the confusion Java developers experience when, for example, the hypervisor tells them that they have 8, then 4, then 16 cores. “Because hypervisors are virtualized, they can change based on resource needs across the hypervisor layer.
You may have 10 instances of an operating system and you may need to reallocate memory,” explained Duimovich. He showed how to resize LPARs, reallocate CPUs and migrate applications as needed. He explained how application servers can resize thread pools and make better use of resources based on information from the hypervisors.

Java Challenges in Hardware and Software
McGee ended the keynote with a summary of upcoming hardware and software challenges for the Java platform. He noted that one reason developers love Java is that it allows them to ignore differences in hardware. He stated that the most important things happening in hardware were in networking and storage – developments such as the speed of SSDs, the exploitation of high-speed, low-latency networking, and recent advances such as storage-class memory and non-volatile main memory. “So we are challenged to maintain the benefits of Java and the abstraction it provides from hardware while still exploiting the new innovations in hardware,” said McGee. McGee discussed transactional messaging applications, where developers transactionally persist messages to storage, something traditionally done by backing messages onto spinning disks – an approach that is now mostly outdated. “Now,” he pointed out, “we would use SSD and store it in Flash and get 70,000 messages a second. If we stored it using a PCI Express-based flash memory device, it is still Flash but put on a PCI Express bus on a card closer to the CPU. This way I get 300,000 messages a second and a 25% improvement in latency.” McGee’s central point was that hardware has a huge impact on the performance and scalability of applications. New technologies are enabling developers to build classes of Java applications previously unheard of. “We need to be able to balance these things in Java – we need to maintain the abstraction but also be able to exploit the evolution of hardware technology,” said McGee. According to McGee, IBM's current focus is on systems wherein hardware and software are shipped together in what are called Expert Integrated Systems – systems that are pre-optimized and pre-integrated. McGee closed IBM’s engaging and thought-provoking keynote by pointing out that the use of Java in complex applications is increasingly being augmented by a host of other languages with strong communities around them – JavaScript, JRuby, Scala, Python and so forth. Java developers now must understand the strengths and weaknesses of such newcomers as applications increasingly involve a complex interconnection of languages.
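The “shared classes” facility Duimovich described is exposed on IBM J9-based JVMs through command-line options; a minimal sketch, with a hypothetical cache name and size (check your JVM's documentation for the exact variants it supports):

    # Two JVMs sharing one read-only class/AOT cache on an IBM J9 runtime.
    # "webcache" and the 50 MB ceiling are illustrative values, not recommendations.
    java -Xshareclasses:name=webcache -Xscmx50M -jar app-instance-1.jar
    java -Xshareclasses:name=webcache -Xscmx50M -jar app-instance-2.jar
    # Inspect or remove the cache afterwards.
    java -Xshareclasses:listAllCaches
    java -Xshareclasses:name=webcache,destroy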

    Read the article

  • Visual Studio Load Testing using Windows Azure

    - by Tarun Arora
    In my opinion the biggest adoption barrier in performance testing on smaller projects is not the tooling but the high infrastructure and administration cost that comes with this phase of testing. Only if a reusable solution was possible and infrastructure management wasn’t as expensive, adoption would certainly spike. It certainly is possible if you bring Visual Studio and Windows Azure into the equation. It is possible to run your test rig in the cloud without getting tangled in SCVMM or Lab Management. All you need is an active Azure subscription, Windows Azure endpoint enabled developer workstation running visual studio ultimate on premise, windows azure endpoint enabled worker roles on azure compute instances set up to run as test controllers and test agents. My test rig is running SQL server 2012 and Visual Studio 2012 RC agents. The beauty is that the solution is reusable, you can open the azure project, change the subscription and certificate, click publish and *BOOM* in less than 15 minutes you could have your own test rig running in the cloud. In this blog post I intend to show you how you can use the power of Windows Azure to effectively abstract the administration cost of infrastructure management and lower the total cost of Load & Performance Testing. As a bonus, I will share a reusable solution that you can use to automate test rig creation for both VS 2010 agents as well as VS 2012 agents. Introduction The slide show below should help you under the high level details of what we are trying to achive... Leveraging Azure for Performance Testing View more PowerPoint from Avanade Scenario 1 – Running a Test Rig in Windows Azure To start off with the basics, in the first scenario I plan to discuss how to, - Automate deployment & configuration of Windows Azure Worker Roles for Test Controller and Test Agent - Automate deployment & configuration of SQL database on Test Controller on the Test Controller Worker Role - Scaling Test Agents on demand - Creating a Web Performance Test and a simple Load Test - Managing Test Controllers right from Visual Studio on Premise Developer Workstation - Viewing results of the Load Test - Cleaning up - Have the above work in the shape of a reusable solution for both VS2010 and VS2012 Test Rig Scenario 2 – The scaled out Test Rig and sharing data using SQL Azure A scaled out version of this implementation would involve running multiple test rigs running in the cloud, in this scenario I will show you how to sync the load test database from these distributed test rigs into one SQL Azure database using Azure sync. The selling point for this scenario is being able to collate the load test efforts from across the organization into one data store. - Deploy multiple test rigs using the reusable solution from scenario 1 - Set up and configure Windows Azure Sync - Test SQL Azure Load Test result database created as a result of Windows Azure Sync - Cleaning up - Have the above work in the shape of a reusable solution for both VS2010 and VS2012 Test Rig The Ingredients Though with an active MSDN ultimate subscription you would already have access to everything and more, you will essentially need the below to try out the scenarios, 1. Windows Azure Subscription 2. Windows Azure Storage – Blob Storage 3. Windows Azure Compute – Worker Role 4. SQL Azure Database 5. SQL Data Sync 6. Windows Azure Connect – End points 7. SQL 2012 Express or SQL 2008 R2 Express 8. Visual Studio All Agents 2012 or Visual Studio All Agents 2010 9. 
A developer workstation set up with Visual Studio 2012 – Ultimate or Visual Studio 2010 – Ultimate 10. Visual Studio Load Test Unlimited Virtual User Pack. Walkthrough To set up the test rig in the cloud, the test controller, test agent and SQL express installers need to be available when the worker role set up starts, the easiest and most efficient way is to pre upload the required software into Windows Azure Blob storage. SQL express, test controller and test agent expose various switches which we can take advantage of including the quiet install switch. Once all the 3 have been installed the test controller needs to be registered with the test agents and the SQL database needs to be associated to the test controller. By enabling Windows Azure connect on the machines in the cloud and the developer workstation on premise we successfully create a virtual network amongst the machines enabling 2 way communication. All of the above can be done programmatically, let’s see step by step how… Scenario 1 Video Walkthrough–Leveraging Windows Azure for performance Testing Scenario 2 Work in progress, watch this space for more… Solution If you are still reading and are interested in the solution, drop me an email with your windows live id. I’ll add you to my TFS preview project which has a re-usable solution for both VS 2010 and VS 2012 test rigs as well as guidance and demo performance tests.   Conclusion Other posts and resources available here. Possibilities…. Endless!
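As one concrete illustration of the quiet-install switches mentioned in the walkthrough, a SQL Server Express command of roughly this shape can be scripted from a worker-role startup task; the installer file name, instance name and accounts below are assumptions rather than values from the post, and the Visual Studio agents installers support a comparable unattended mode documented separately:

    REM Unattended SQL Server Express install launched from a worker-role startup task (illustrative values).
    SQLEXPR_x64_ENU.exe /Q /ACTION=Install /FEATURES=SQLEngine /INSTANCENAME=SQLEXPRESS ^
        /SQLSVCACCOUNT="NT AUTHORITY\NETWORK SERVICE" ^
        /SQLSYSADMINACCOUNTS="%COMPUTERNAME%\Administrator" /IACCEPTSQLSERVERLICENSETERMS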

    Read the article

  • Streaming desktop with avconv - severe sound issues

    - by Tommy Brunn
    I'm trying to do some live streaming in Ubuntu 12.10, but I'm having some problems with audio. More specifically, the quality is complete garbage and it's at least 10 seconds out of sync with the video. I'm using an excellent guide found here to set up my loopback devices so that I can combine the desktop audio with the microphone input. It seems to work, as I'm able to stream both audio and video to Twitch.tv. But, as I said, the audio quality is terrible. The microphone audio is very, very low, but if I increase it, I get a horrible garbled sound that is absolutely unbearable. Nothing like that is present during VoIP calls or when recording sound alone with the sound recorder, so it's not an issue with the microphone itself. The entire audio stream is also delayed about 10-15 seconds compared to the video stream. I put together an imgur album of my settings. Here is some example output from when I'm streaming: avconv version 0.8.4-6:0.8.4-0ubuntu0.12.10.1, Copyright (c) 2000-2012 the Libav developers built on Nov 6 2012 16:51:11 with gcc 4.7.2 [x11grab @ 0x162fd80] device: :0.0+570,262 -> display: :0.0 x: 570 y: 262 width: 1280 height: 720 [x11grab @ 0x162fd80] shared memory extension found [x11grab @ 0x162fd80] Estimating duration from bitrate, this may be inaccurate Input #0, x11grab, from ':0.0+570,262': Duration: N/A, start: 1353181686.735113, bitrate: 884736 kb/s Stream #0.0: Video: rawvideo, bgra, 1280x720, 884736 kb/s, 30 tbr, 1000k tbn, 30 tbc [alsa @ 0x163fce0] capture with some ALSA plugins, especially dsnoop, may hang. [alsa @ 0x163fce0] Estimating duration from bitrate, this may be inaccurate Input #1, alsa, from 'pulse': Duration: N/A, start: 1353181686.773841, bitrate: N/A Stream #1.0: Audio: pcm_s16le, 48000 Hz, 2 channels, s16, 1536 kb/s Incompatible pixel format 'bgra' for codec 'libx264', auto-selecting format 'yuv420p' [buffer @ 0x1641ec0] w:1280 h:720 pixfmt:bgra [scale @ 0x1642480] w:1280 h:720 fmt:bgra -> w:852 h:480 fmt:yuv420p flags:0x4 [libx264 @ 0x165ae80] VBV maxrate unspecified, assuming CBR [libx264 @ 0x165ae80] using cpu capabilities: MMX2 SSE2Fast SSSE3 FastShuffle SSE4.2 [libx264 @ 0x165ae80] profile Main, level 3.1 [libx264 @ 0x165ae80] 264 - core 123 r2189 35cf912 - H.264/MPEG-4 AVC codec - Copyleft 2003-2012 - http://www.videolan.org/x264.html - options: cabac=1 ref=2 deblock=1:0:0 analyse=0x1:0x111 me=hex subme=6 psy=1 psy_rd=1.00:0.00 mixed_ref=0 me_range=16 chroma_me=1 trellis=1 8x8dct=0 cqm=0 deadzone=21,11 fast_pskip=1 chroma_qp_offset=-2 threads=4 sliced_threads=0 nr=0 decimate=1 interlaced=0 bluray_compat=0 constrained_intra=0 bframes=3 b_pyramid=0 b_adapt=1 b_bias=0 direct=1 weightb=0 open_gop=1 weightp=1 keyint=250 keyint_min=25 scenecut=40 intra_refresh=0 rc_lookahead=30 rc=cbr mbtree=1 bitrate=712 ratetol=1.0 qcomp=0.60 qpmin=0 qpmax=69 qpstep=4 vbv_maxrate=712 vbv_bufsize=512 nal_hrd=none ip_ratio=1.25 aq=1:1.00 Output #0, flv, to 'rtmp://live.justin.tv/app/live_23011330_Pt1plSRM0z5WVNJ0QmCHvTPmpUnfC4': Metadata: encoder : Lavf53.21.0 Stream #0.0: Video: libx264, yuv420p, 852x480, q=-1--1, 712 kb/s, 1k tbn, 30 tbc Stream #0.1: Audio: libmp3lame, 44100 Hz, 2 channels, s16, 712 kb/s Stream mapping: Stream #0:0 -> #0:0 (rawvideo -> libx264) Stream #1:0 -> #0:1 (pcm_s16le -> libmp3lame) Press ctrl-c to stop encoding frame= 17 fps= 0 q=0.0 size= 0kB time=10000000000.00 bitrate= 0.0kbitframe= 32 fps= 31 q=0.0 size= 0kB time=10000000000.00 bitrate= 0.0kbitframe= 40 fps= 23 q=29.0 size= 44kB time=0.03 bitrate=13786.2kbits/s dup=frame= 47 fps= 21 
q=31.0 size= 93kB time=2.73 bitrate= 277.7kbits/s dup=0frame= 62 fps= 23 q=29.0 size= 160kB time=3.23 bitrate= 406.2kbits/s dup=0frame= 77 fps= 24 q=23.0 size= 209kB time=3.71 bitrate= 462.5kbits/s dup=0frame= 92 fps= 25 q=20.0 size= 267kB time=4.91 bitrate= 445.2kbits/s dup=0frame= 107 fps= 25 q=20.0 size= 318kB time=5.41 bitrate= 482.1kbits/s dup=0frame= 123 fps= 26 q=18.0 size= 368kB time=5.96 bitrate= 505.7kbits/s dup=0frame= 139 fps= 26 q=16.0 size= 419kB time=6.48 bitrate= 529.7kbits/s dup=0frame= 155 fps= 27 q=15.0 size= 473kB time=7.00 bitrate= 553.6kbits/s dup=0frame= 170 fps= 27 q=14.0 size= 525kB time=7.52 bitrate= 571.7kbits/s dup=0 frame= 180 fps= 25 q=-1.0 Lsize= 652kB time=7.97 bitrate= 670.0kbits/s dup=0 drop=32 //Here I stop the streaming video:531kB audio:112kB global headers:0kB muxing overhead 1.345945% [libx264 @ 0x165ae80] frame I:1 Avg QP:30.43 size: 39748 [libx264 @ 0x165ae80] frame P:45 Avg QP:11.37 size: 11110 [libx264 @ 0x165ae80] frame B:134 Avg QP:15.93 size: 27 [libx264 @ 0x165ae80] consecutive B-frames: 0.6% 0.0% 1.7% 97.8% [libx264 @ 0x165ae80] mb I I16..4: 7.3% 0.0% 92.7% [libx264 @ 0x165ae80] mb P I16..4: 0.1% 0.0% 0.1% P16..4: 49.1% 1.2% 2.1% 0.0% 0.0% skip:47.4% [libx264 @ 0x165ae80] mb B I16..4: 0.0% 0.0% 0.0% B16..8: 0.1% 0.0% 0.0% direct: 0.0% skip:99.9% L0:42.5% L1:56.9% BI: 0.6% [libx264 @ 0x165ae80] coded y,uvDC,uvAC intra: 82.3% 87.4% 71.9% inter: 7.1% 8.4% 7.0% [libx264 @ 0x165ae80] i16 v,h,dc,p: 27% 29% 16% 28% [libx264 @ 0x165ae80] i4 v,h,dc,ddl,ddr,vr,hd,vl,hu: 22% 21% 14% 8% 8% 8% 7% 5% 7% [libx264 @ 0x165ae80] i8c dc,h,v,p: 47% 22% 20% 11% [libx264 @ 0x165ae80] Weighted P-Frames: Y:0.0% UV:0.0% [libx264 @ 0x165ae80] ref P L0: 96.4% 3.6% [libx264 @ 0x165ae80] kb/s:474.19 Received signal 2: terminating. Any ideas on how I can resolve this? The video delay is perfectly acceptable, so I wouldn't think that it's a network issue that's causing the delay in the audio. Any help would be appreciated.

    Read the article

  • Using Hadooop (HDInsight) with Microsoft - Two (OK, Three) Options

    - by BuckWoody
    Microsoft has many tools for “Big Data”. In fact, you need many tools – there’s no product called “Big Data Solution” in a shrink-wrapped box – if you find one, you probably shouldn’t buy it. It’s tempting to want a single tool that handles everything in a problem domain, but with large, complex data, that isn’t a reality. You’ll mix and match several systems, open and closed source, to solve a given problem. But there are tools that help with handling data at large, complex scales. Normally the best way to do this is to break up the data into parts, and then put the calculation engines for that chunk of data right on the node where the data is stored. These systems are in a family called “Distributed File and Compute”. Microsoft has a couple of these, including the High Performance Computing edition of Windows Server. Recently we partnered with Hortonworks to bring the Apache Foundation’s release of Hadoop to Windows. And as it turns out, there are actually two (technically three) ways you can use it. (There’s a more detailed set of information here: http://www.microsoft.com/sqlserver/en/us/solutions-technologies/business-intelligence/big-data.aspx, I’ll cover the options at a general level below)  First Option: Windows Azure HDInsight Service  Your first option is that you can simply log on to a Hadoop control node and begin to run Pig or Hive statements against data that you have stored in Windows Azure. There’s nothing to set up (although you can configure things where needed), and you can send the commands, get the output of the job(s), and stop using the service when you are done – and repeat the process later if you wish. (There are also connectors to run jobs from Microsoft Excel, but that’s another post)   This option is useful when you have a periodic burst of work for a Hadoop workload, or the data collection has been happening into Windows Azure storage anyway. That might be from a web application, the logs from a web application, telemetrics (remote sensor input), and other modes of constant collection.   You can read more about this option here:  http://blogs.msdn.com/b/windowsazure/archive/2012/10/24/getting-started-with-windows-azure-hdinsight-service.aspx Second Option: Microsoft HDInsight Server Your second option is to use the Hadoop Distribution for on-premises Windows called Microsoft HDInsight Server. You set up the Name Node(s), Job Tracker(s), and Data Node(s), among other components, and you have control over the entire ecostructure.   This option is useful if you want to  have complete control over the system, leave it running all the time, or you have a huge quantity of data that you have to bulk-load constantly – something that isn’t going to be practical with a network transfer or disk-mailing scheme. You can read more about this option here: http://www.microsoft.com/sqlserver/en/us/solutions-technologies/business-intelligence/big-data.aspx Third Option (unsupported): Installation on Windows Azure Virtual Machines  Although unsupported, you could simply use a Windows Azure Virtual Machine (we support both Windows and Linux servers) and install Hadoop yourself – it’s open-source, so there’s nothing preventing you from doing that.   Aside from being unsupported, there are other issues you’ll run into with this approach – primarily involving performance and the amount of configuration you’ll need to do to access the data nodes properly. 
But for a single-node installation (where all components run on one system) such as learning, demos, training and the like, this isn’t a bad option. Did I mention that’s unsupported? :) You can learn more about Windows Azure Virtual Machines here: http://www.windowsazure.com/en-us/home/scenarios/virtual-machines/ And more about Hadoop and the installation/configuration (on Linux) here: http://en.wikipedia.org/wiki/Apache_Hadoop And more about the HDInsight installation here: http://www.microsoft.com/web/gallery/install.aspx?appid=HDINSIGHT-PREVIEW Choosing the right option Since you have two or three routes you can go, the best thing to do is evaluate the need you have, and place the workload where it makes the most sense.  My suggestion is to install the HDInsight Server locally on a test system, and play around with it. Read up on the best ways to use Hadoop for a given workload, understand the parts, write a little Pig and Hive, and get your feet wet. Then sign up for a test account on HDInsight Service, and see how that leverages what you know. If you're a true tinkerer, go ahead and try the VM route as well. Oh - there’s another great reference on the Windows Azure HDInsight that just came out, here: http://blogs.msdn.com/b/brunoterkaly/archive/2012/11/16/hadoop-on-azure-introduction.aspx  
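To give a flavor of what “running Pig or Hive statements” against data already in storage looks like, here is a minimal HiveQL sketch; the table name, schema and path are invented for illustration, and on HDInsight the LOCATION would typically point at Azure blob storage:

    -- Define an external table over existing tab-delimited log files, then aggregate.
    CREATE EXTERNAL TABLE weblogs (log_time STRING, url STRING, status INT)
      ROW FORMAT DELIMITED FIELDS TERMINATED BY '\t'
      LOCATION '/user/demo/weblogs';

    SELECT status, COUNT(*) AS hits
    FROM weblogs
    GROUP BY status;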

    Read the article

  • Windows Azure Recipe: Enterprise LOBs

    - by Clint Edmonson
    Enterprises are more and more dependent on their specialized internal Line of Business (LOB) applications than ever before. Naturally, the more software they leverage on-premises, the more infrastructure they need manage. It’s frequently the case that our customers simply can’t scale up their hardware purchases and operational staff as fast as internal demand for software requires. The result is that getting new or enhanced applications in the hands of business users becomes slower and more expensive every day. Being able to quickly deliver applications in a rapidly changing business environment while maintaining high standards of corporate security is a challenge that can be met right now by moving enterprise LOBs out into the cloud and leveraging Azure’s Access Control services. In fact, we’re seeing many of our customers (both large and small) see huge benefits from moving their web based business applications such as corporate help desks, expense tracking, travel portals, timesheets, and more to Windows Azure. Drivers Cost Reduction Time to market Security Solution Here’s a sketch of how many Windows Azure Enterprise LOBs are being architected and deployed: Ingredients Web Role – this will host the core of the application. Each web role is a virtual machine hosting an application written in ASP.NET (or optionally php, or node.js). The number of web roles can be scaled up or down as needed to handle peak and non-peak traffic loads. Many Java based applications are also being deployed to Windows Azure with a little more effort. Database – every modern web application needs to store data. SQL Azure databases look and act exactly like their on-premise siblings but are fault tolerant and have data redundancy built in. Access Control – this service is necessary to establish federated identity between the cloud hosted application and an enterprise’s corporate network. It works in conjunction with a secure token service (STS) that is hosted on-premises to establish the corporate user’s identity and credentials. The source code for an on-premises STS is provided in the Windows Azure training kit and merely needs to be customized for the corporate environment and published on a publicly accessible corporate web site. Once set up, corporate users see a near seamless single sign-on experience. Reporting – businesses live and die by their reports and SQL Azure Reporting, based on SQL Server Reporting 2008 R2, can serve up reports with tables, charts, maps, gauges, and more. These reports can be accessed from the Windows Azure Portal, through a web browser, or directly from applications. Service Bus (optional) – if deep integration with other applications and systems is needed, the service bus is the answer. It enables secure service layer communication between applications hosted behind firewalls in on-premises or partner datacenters and applications hosted inside Windows Azure. The Service Bus provides the ability to securely expose just the information and services that are necessary to create a simpler, more secure architecture than opening up a full blown VPN. Data Sync (optional) – in cases where the data stored in the cloud needs to be shared internally, establishing a secure one-way or two-way data-sync connection between the on-premises and off-premises databases is a perfect option. 
It can be very granular, allowing us to specify exactly which tables and columns to synchronize, set up filters to sync only a subset of rows, set the conflict resolution policy for two-way sync, and specify how frequently data should be synchronized.

Training Labs
These links point to online Windows Azure training labs where you can learn more about the individual ingredients described above. (Note: The entire Windows Azure Training Kit can also be downloaded for offline use.)
Windows Azure (16 labs): Windows Azure is an internet-scale cloud computing and services platform hosted in Microsoft data centers, which provides an operating system and a set of developer services that can be used individually or together. It gives developers the choice to build web applications; applications running on connected devices, PCs, or servers; or hybrid solutions offering the best of both worlds. New or enhanced applications can be built using existing skills with the Visual Studio development environment and the .NET Framework. With its standards-based and interoperable approach, the services platform supports multiple internet protocols, including HTTP, REST, SOAP, and plain XML.
SQL Azure (7 labs): Microsoft SQL Azure delivers on the Microsoft Data Platform vision of extending SQL Server capabilities to the cloud as web-based services, enabling you to store structured, semi-structured, and unstructured data.
Windows Azure Services (9 labs): As applications collaborate across organizational boundaries, ensuring secure transactions across disparate security domains is crucial but difficult to implement. Windows Azure Services provides hosted authentication and access control using powerful, secure, standards-based infrastructure.
See my Windows Azure Resource Guide for more guidance on how to get started, including links to web portals, training kits, samples, and blogs related to Windows Azure.
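To make the Database ingredient concrete, the LOB application usually reaches SQL Azure through an ordinary ADO.NET connection string in web.config; a sketch with placeholder server, database and credentials:

    <!-- Illustrative web.config fragment; yourserver, yourdb and youruser are placeholders. -->
    <connectionStrings>
      <add name="LobDatabase"
           connectionString="Server=tcp:yourserver.database.windows.net,1433;Database=yourdb;User ID=youruser@yourserver;Password={your_password};Encrypt=True;Trusted_Connection=False;"
           providerName="System.Data.SqlClient" />
    </connectionStrings>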

    Read the article

  • How to Use RDA to Generate WLS Thread Dumps At Specified Intervals?

    - by Daniel Mortimer
    Introduction There are many ways to generate a thread dump of a WebLogic Managed Server. For example, take a look at: Taking Thread Dumps - [an excellent blog post on the Middleware Magic site]or  Different ways to take thread dumps in WebLogic Server (Document 1098691.1) There is another method - use Remote Diagnostic Agent! The solution described below is not documented, but it is relatively straightforward to execute. One advantage of using RDA to collect the thread dumps is RDA will also collect configuration, log files, network, system, performance information at the same time. Instructions 1. Not familiar with Remote Diagnostic Agent? Take a look at my previous blog "Resolve SRs Faster Using RDA - Find the Right Profile" 2. Choose a profile, which includes the WebLogic Server data collection modules (for example the profile "WebLogicServer"). At RDA setup time you should see the prompt below: ------------------------------------------------------------------------------- S301WLS: Collects Oracle WebLogic Server Information ------------------------------------------------------------------------------- Enter the location of the directory where the domains to analyze are located (For example in UNIX, <BEA Home>/user_projects/domains or <Middleware Home>/user_projects/domains) Hit 'Return' to accept the default (/oracle/11AS/Middleware/user_projects/domains) > For a successful WLS connection, ensure that the domain Admin Server is up and running. Data Collection Type:   1  Collect for a single server (offline mode)   2  Collect for a single server (using WLS connection)   3  Collect for multiple servers (using WLS connection) Enter the item number Hit 'Return' to accept the default (1) > 2 Choose option 2 or 3. Note: Collect for a single server or multiple servers using WLS connection means that RDA will attempt to connect to execute online WLST commands against the targeted server(s). The thread dumps are collected using the WLST function - "threadDumps()". If WLST cannot connect to the managed server, RDA will proceed to collect other data and ignore the request to collect thread dumps. If in the final output you see no Thread Dump menu item, then it's likely that the managed server is in a state which prevents new connections to it. If faced with this scenario, you would have to employ alternative methods for collecting thread dumps. 3. The RDA setup will create a setup.cfg file in the RDA_HOME directory. Open this file in an editor. You will find the following parameters which govern the number of thread dumps and thread dump interval. #N.Number of thread dumps to capture WREQ_THREAD_DUMP=10 #N.Thread dump interval WREQ_THREAD_DUMP_INTERVAL=5000 The example lines above show the default settings. In other words, RDA will collect 10 thread dumps at 5000 millisecond (5 second) intervals. You may want to change this to something like: #N.Number of thread dumps to capture WREQ_THREAD_DUMP=10 #N.Thread dump interval WREQ_THREAD_DUMP_INTERVAL=30000 However, bear in mind, that such change will increase the total amount of time it takes for RDA to complete its run. 4. Once you are happy with the setup.cfg, run RDA. RDA will collect, render, generate and package all files in the output directory. 5. For ease of viewing, open up the RDA Start html file - "xxxx__start.htm". The thread dumps can be found under the WLST Collections for the target managed server(s). 
See the screenshots below:
Screenshot 1: RDA Start Page - Main Index
Screenshot 2: Managed Server Sub Index
Screenshot 3: WLST Collections
Screenshot 4: Thread Dump Page - List of dump file links
Screenshot 5: Thread Dump Dat File Link
Additional Comments:
A) You can view the thread dump files within the RDA Start Page framework, but most likely you will want to download the dat files for in-depth analysis via thread dump analysis tools such as Thread Dump Analyzer, or Samurai - a GUI-based tail / thread dump analysis tool. If you are new to thread dump analysis, take a look at this recorded Support Advisor Webcast: Oracle WebLogic Server: Diagnosing Performance Issues through Java Thread Dumps [Slidedeck from webcast in PDF format].
B) I have logged a couple of enhancement requests for the RDA Development Team to consider: add a timestamp to the dump file links, the dat filename and the top of the body of the dat file; and package the individual thread dumps in a zip so all dump files can be conveniently downloaded in one go.
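The WLST function that RDA drives can also be run by hand when you want a one-off dump outside of an RDA run; a short sketch with placeholder admin URL, credentials and server name:

    # Illustrative WLST (Jython) session; the admin URL, credentials and server name are placeholders.
    connect('weblogic', 'welcome1', 't3://adminhost:7001')
    # Write a managed server's thread dump to a local file.
    threadDump(writeToFile='true', fileName='ManagedServer1_dump.txt', serverName='ManagedServer1')
    disconnect()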

    Read the article

  • Upcoming User Group Events in 2011

    - by john.orourke(at)oracle.com
    At a recent customer event, someone asked me if Oracle had any plans to re-create the Hyperion Solutions Conference.  Unfortunately the answer is no.  With so many different product lines it would be challenging and costly for Oracle to run separate user conferences for every product line, and it would create too many events for customers with multiple products to attend.  So Oracle Open World is the company's main event for showcasing what's new and what's coming across all product lines.  If customers find Oracle OpenWorld too overwhelming or if the timing is bad, there are a number of other conferences, which are run by Oracle user groups and include a number of sessions focused on Oracle Hyperion EPM and BI products.  Here's a sneak preview of what's coming up for conferences in 2011 where you can network with other Hyperion users and learn what's new and what's coming in our products. Alliance 2011:  This conference is run by the Oracle Higher Education User Group (HEUG).  It's being held March 27 - 30th in lovely Denver, Colorado.  (a great location and time for skiers!)  This event is targeted at customers in Higher Education and Public Sector organizations and is expecting to draw over 3,500 attendees.  There will be a number of sessions focusing on Oracle Hyperion EPM and BI products in the Budgeting track, as well as the Reporting & BI track.  This includes product-focused sessions delivered by Oracle and partners, as well as case studies delivered by customers.  Here's a link to the registration page where you can get more information: http://www.heug.org/p/cm/ld/fid=255 Collaborate 2011:  This conference is run by three different user groups;  OAUG, IOUG and Quest.  It's being held April 10 - 14th in sunny Orlando, Florida.  (yes, sunshine and warmth!)  This event is targeted to customers with Oracle E-Business Suite, PeopleSoft, JD Edwards, Hyperion, Primavera and other products and is expected to draw over 5,000 attendees.  You'll find a number of sessions focused on Oracle Hyperion EPM and BI products in the BI/Data Warehousing/EPM track.  This includes product-focused sessions delivered by Oracle, our partners, and customers as well as a number of customer case studies.  There will also be an exhibit area with a number of demo pods focused on EPM and BI products.  Here's a link to the conference web site where you can get more information: http://collaborate.oaug.org/ Also, please note that the OAUG has a Hyperion SIG that runs focused EPM/Hyperion events throughout the year.  Here's a link to their web site where you can get more information: http://hyperionsig.oaug.org/ Kscope 2011:  Formerly the Kaleidoscope conference, this one is run by the Oracle Developer Tools User Group (ODTUG).  This conference is being held June 26 - 30th in Long Beach, CA. (surf's up!)  Historically, this event has focused on Oracle Development tools, but over the past few years the EPM and BI content has grown with over 100 sessions planned this year.  So this event is becoming a great venue for existing Hyperion customers to learn about the latest developments with Oracle Essbase, Hyperion Planning, Hyperion Financial Management, Oracle BI and other products.   You'll also find hands-on workshops, product demonstrations as well as EPM and BI Symposiums run by Oracle Development staff.  Here's a link to the web site where you can get more details.  
http://www.kscope11.com/biepm UKOUG Conference Series:  EPM and Hyperion 2011:  For Hyperion customers in the UK, the UKOUG has a Hyperion SIG that runs a focused conference for EPM and Hyperion products.  The 2011 event is planned for June in London.  Here's a link to the web site for this event where you can get more information: http://hyperion.ukoug.org/default.asp?p=8461 In addition to these conferences, you can also find Oracle EPM and BI content at regional user group meetings globally as well as Marketing events run by Oracle.  Check the events page at www.oracle.com for the details on upcoming Marketing and regional User Group events.  So while Oracle will not be trying to replicate the Hyperion Solutions conference, the good news is that there are a number of other events available where customers can find out what's new and what's coming with Oracle EPM and BI products.  And these events are running at different times of the year in different locations - so you can pick the event that makes the most sense for your company from a timing and location standpoint. I'll be delivering a number of sessions at the Alliance and Collaborate conferences and hope to see many of our loyal customers and partners at these events.  And there's always Oracle OpenWorld coming up in October, for which the planning has already started.  I look forward to seeing you in 2011.

    Read the article

  • Oracle WebCenter at the Enterprise 2.0 Conference

    - by Brian Dirking
    We had a great week at the E20 Conference, presenting in four sessions – Andy MacMillan gave a session titled Today’s Successful Enterprises are Social Enterprises and was on a panel that Tony Byrne moderated; Christian Finn spoke on a panel on Unified Communications Unified Communications + Social Computing = Best of Both Worlds?, Mark Bennett spoke on a panel on The Evolution of Talent Management. The key areas of focus this year were sentiment analysis, adoption and community building, the benefits of failure, and social’s role in process applications. Sentiment analysis. This was focused not on external audiences but more on employee sentiment. Tim Young showed his internal "NikoNiko" project, where employees use smilies to report their current mood. The result was a dashboard that showed the company mood by department. Since the goal is to improve productivity, people can see which departments are running into issues and try and address them. A company might otherwise wait until the end of the quarter financials to find out that there was a problem and product didn’t ship. This is a way to identify issues immediately. Tim is great – he had the crowd laughing as soon as he hit the stage, with his proposed hastag for his session: by making it 138 characters long, people couldn’t say much behind his back. And as I tweeted during his session, I loved his comment that complexity diffuses energy - it sounds like something Sun Tzu would say. Another example of employee sentiment analysis was CubeVibe. Founder and CEO Aaron Aycock, in his 3 minute pitch or die session talked about how engaged employees perform better. It was too bad he got gonged, he was just picking up speed, but CubeVibe did win the vote – congratulations to them. Internal adoption, community building, and involvement. On this topic I spoke to Terri Griffith, and she said there is some good work going on at University of Indiana regarding this, and hinted that she might be blogging about it in the near future. This area holds lots of interest for me. Amongst our customers, - CPAC stands out as an organization that has successfully built a community. So, I wonder - what are the building blocks? A strong leader? A common or unifying purpose? A certain level of engagement? I imagine someone has created an equation that says “for a community to grow at 30% per month, there must be an engagement level x to the square root of y, where x equals current community size, and y equals the expected growth rate, and the result is how many engagements the average user must contribute to maintain that growth.” Does anyone have a framework like that? The net result of everyone’s experience is that there is nothing to do but start early and fail often. Kevin Jones made this the focus of his keynote. He talked about the types of failure and what they mean. And he showed his famous kids at work video: Kevin’s blog also has this post: Social Business Failure #8: Workflow Integration. This is something that we’ve been working on at Oracle. Since so much of business is based in enterprise applications such as ERP and CRM (and since Oracle offers e-Business Suite, Siebel, PeopleSoft, and JD Edwards, as well as Fusion Applications), it makes sense that the social capabilities of Oracle WebCenter is built right into these applications. There are two types of social collaboration – ad-hoc, and exception handling. 
When you are in a business process and encounter an exception, you immediately look for 1) the document that tells you how to handle it, or 2) the person who can tell you how to handle it. With WebCenter built into these processes, people either search their content management system, or engage in expertise location and conversation. The great thing is, THEY DON’T HAVE TO LEAVE THE APPLICATION TO DO IT. Oracle has built the social capabilities right into the applications and business processes. I don’t think enough folks were able to see that at the event, but I expect that over the next six months folks will become very aware of it. WebCenter also provides the ability to have ad-hoc collaboration, search, and expertise location that folks need when they are innovating or collaborating. We demonstrated Oracle Social Network. It’s built on our Oracle WebCenter product to provide social collaboration inside and outside of your company. When we showed it to people, there were a number of areas that they commented on that were different from the other products being shown at the conference: Screenshots from within the product Many authors working on documents simultaneously Flagging people for follow up Direct ability to call out to people Ability to see presence not just if someone is online, but which conversation they are actively in Great stuff, the conference was full of smart people that that we enjoy spending time with. We’ll keep up in the meantime, but we look forward to seeing you in Boston.

    Read the article

  • Architect Day: Boston - Agenda Update

    - by Bob Rhubart
    Here's the latest information on the session schedule and content for Oracle Technology Network Architect Day in Boston, MA on September 12, 2012. Registration is open, but seating is limited.
    When: September 12, 2012, 8:30am – 5:00pm
    Where: Boston Marriott Burlington, One Burlington Mall Road, Burlington, MA 01803
    Register now
    Agenda (Time | Session Title | Room)
    8:30 am - 9:00 am | Registration and Continental Breakfast | Salon E Foyer
    9:00 am - 9:15 am | Welcome and Opening Comments | Bob Rhubart | Salon E
    9:15 am - 10:00 am | Engineered Systems: Oracle's Vision for the Future | Ralf Dossmann | Salon E
    Oracle's Exadata and Exalogic are impressive products in their own right. But working in combination they deliver unparalleled transaction processing performance, with up to a 30x increase over existing legacy systems and a lower cost of ownership over a 3- or 5-year period than any other hardware. In this session you'll learn how to leverage Oracle's Engineered Systems within your enterprise to deliver record-breaking performance at the lowest TCO.
    10:00 am - 10:30 am | Securing Public and Private Clouds | Anton Nielsen | Salon E
    Long before the term "Cloud Computing" existed, Oracle technologies supported and promoted the concept. Centralized data with remote users has been at the core of these technologies for decades. The public cloud, and extending private clouds to the internet, though, has added security challenges never imagined decades ago. This presentation will examine a real-life security breach and introduce architecture, technologies and policies to secure public and private clouds.
    10:30 am - 10:45 am | Break
    10:45 am - 11:30 am | Breakout Sessions (pick one)
    Cloud Computing - Making IT Simple | Scott Mattoon | Salon E
    The road to Cloud Computing is not without a few bumps. This session will help to smooth out your journey by tackling some of the potential complications. We'll examine whether standardization is a prerequisite for the Cloud. We'll look at why refactoring isn't just for application code. We'll check out deployable entities and their simplification via higher levels of abstraction. And we'll close out the session with a look at engineered systems and modular clouds.
    Innovations in Grid Computing with Oracle Coherence | Rob Misek | Salon C
    Learn how Coherence can increase the availability, scalability and performance of your existing applications with its advanced low-latency data-grid technologies. Also hear some interesting industry-specific use cases that customers have implemented and how Oracle is integrating Coherence into its Enterprise Java stack.
    11:30 am - 12:15 pm | Breakout Sessions (pick one)
    Enterprise Strategy for Cloud Security | Dave Chappelle | Salon E
    Security is high on the list of concerns for many organizations as they evaluate their cloud computing options. This session will examine security in the context of the various forms of cloud computing. We'll consider technical and non-technical aspects of security, and discuss several strategies for cloud computing, from both the consumer and producer perspectives.
    Oracle Enterprise Manager | Avi Huber | Salon C
    Much more than a DB management tool, Oracle Enterprise Manager provides management and monitoring coverage for the entire Oracle stack, and beyond. This session will concentrate on the middleware management functionality in OEM, starting with Real User Experience monitoring, through AppServer management, and into deep-dive Java diagnostics. We'll discuss Business Driven Application Management (BDAM) and the benefits of top-down monitoring. Lastly, we'll demonstrate how to trace a specific user experience problem, through a multitier SOA application, to its root cause, deep in the JVM.
    12:15 pm - 1:15 pm | Lunch | Salon E Foyer
    1:15 pm - 2:00 pm | Panel Discussion - Q&A with session speakers | Salon E
    2:00 pm - 2:45 pm | Breakout Sessions (pick one)
    Oracle Cloud Reference Architecture | Anbu Krishnaswamy | Salon E
    Cloud initiatives are beginning to dominate enterprise IT roadmaps. Successful adoption of Cloud and the subsequent governance challenges warrant a Cloud reference architecture that is applied consistently across the enterprise. This presentation will answer the important questions: what exactly a Cloud is, why you need it, what changes it will bring to the enterprise, and what the key capabilities of a Cloud infrastructure are - using Oracle's Cloud Reference Architecture, which is part of the IT Strategies from Oracle (ITSO) Cloud Enterprise Technology Strategy (ETS).
    21st Century SOA | Peter Belknap | Salon C
    Service Oriented Architecture has evolved from concept to reality in the last decade. The right methodology coupled with mature SOA technologies has helped customers demonstrate success in both innovation and ROI. In this session you will learn how Oracle SOA Suite's orchestration, virtualization, and governance capabilities provide the infrastructure to run mission-critical business and system applications. And we'll take a special look at the convergence of SOA & BPM using Oracle's unified technology stack.
    2:45 pm - 3:00 pm | Break
    3:00 pm - 4:00 pm | Roundtable Discussion | Salon E
    4:00 pm - 4:15 pm | Closing Comments & Readouts from Roundtables | Salon E
    4:15 pm - 5:00 pm | Networking / Reception | Salon E Foyer
    Note: Session schedule and content subject to change.

    Read the article

  • IndyTechFest Recap

    - by Johnm
    The sun had yet to rise above the horizon on Saturday, May 22nd and I was traveling toward the location of the 2010 IndyTechFest. In my freshly awakened, and pre-coffee, state I reflected on the months that preceded this day and how quickly they slipped away. The big day had finally come, and the morning dew glistened with a unique brightness.
    What is this all about?
    For those who are unfamiliar with IndyTechFest, it is a regional conference held in Indianapolis and hosted by the Indianapolis .NET Developers Association (IndyNDA) and the Indianapolis Professional Association for SQL Server (IndyPASS). The event presents multiple tracks and sessions covering subjects such as Business Intelligence, Database Administration, .NET Development, SharePoint Development and Windows Mobile Development, as well as non-Microsoft topics such as Lean and MongoDB. This year's event was the third hosting of IndyTechFest.
    No man is an island
    No event such as IndyTechFest is executed by a single person. My fellow co-founders, with their highly complementary skill sets and philanthropy, make the process very enjoyable. Our amazing volunteers and their aid were indispensable. The generous financial support of our sponsors made the event and its fabulous prizes possible. The spectacular lineup of speakers came from near and far to donate their time and knowledge. Our beloved attendees sacrificed the first sunny Saturday in weeks to expand their skill sets and network with their peers. We are deeply appreciative.
    Challenges in preparation
    With the preparation of any event come challenges. It is these challenges that make the process of planning an event so interesting. This year's largest challenge was the location of the event. In the past two years IndyTechFest was held at the Gene B. Glick Junior Achievement Center in Indianapolis. This facility has been the hub of the Indy technical community for many years. As the big day drew near, the facility's availability came into question due to some recent changes that had occurred with those who operated the facility. We began our search for an alternative option. Thankfully, the Marriott Indianapolis East was available, very spacious and willing to work within the range of our budget. Within days of our event, the decision to move proved to be wise since the prior location had begun renovations to the interior. Whew! Always trust your gut.
    Every day it's getting better
    At the end of each year, we huddle together, review the evaluations and identify an area in which the event could improve. This year's big opportunity for improvement resided in the prize give-away portion at the end of the day. In the 2008 event, admittedly, this portion was rather chaotic, rushed and disorganized. This year, we broke the drawing into two sections, and each attendee received two tickets. The first ticket was a drawing for the mountain of books that were given away. The second ticket was a drawing for the big prizes: the 2 Xboxes, 3 laptops and an iPad. We peppered the ticket drawings with gift card raffles and tossed t-shirts into the audience.
    If at first you don't succeed, try and try again
    Each year of IndyTechFest, we have offered a means for ad-hoc sessions or discussion groups to pop up. To our disappointment it was something that never quite took off. We have always believed that this unique type of session was valuable and wanted to figure out a way to make it work for this year.
A special thanks to Alan Stevens, who took on and facilitated the "open space" track and made it an official success. Share with your tweety When the attendee badges were designed we decided to place an emphasis on the attendee's Twitter account as well as the events hash-tag (#IndyTechFest) to encourage some real-time buzz during the day. At the host table we displayed a Twitter feed for all to enjoy. It was quite successful and encouraging use of social media. My badge was missing my Twitter account since it was recently changed. For those who care to follow my rather sparse tweets, my address is @johnnydata. Man, this is one long blog post! All in all it was a very successful event. It is always great to see new faces and meet old friends. The planning for the 2011 IndyTechFest will kick off very soon. We have more capacity for future growth and a truck full of great ideas. Stay tuned!

    Read the article

  • What's New In 11.1.2.1 (Talleyrand SP1)

    - by russ.bishop
    This release is primarily about bug fixes and that's what we spent the most time on, but we also addressed a number of other things: 1. Performance improvements We've done a lot of work to improve the performance of page load and execution times. For example, the View Compare page is about half the size it was previously! We've also done a lot of work on the server to improve performance of queries, exports, action scripts, etc. We implemented some finer-grained locking so fewer operations will block other users while they are in progress. We made some optimizations to improve performance when you have a lot of network or database latency as well. Just a few examples: An Import that previously took 8 GB of memory and hours to complete now runs in about 30 minutes and never takes more than 1 GB of RAM. Searching by exact Node Name now completes within 2 seconds even for a hierarchy with millions of nodes. Another search that was taking 30 seconds to run now completes in less than 5 seconds. 2. Upgrade support This release supports automatic upgrade from previous releases, built right into the console. 3. Console Improvements The Console has been reorganized and made easier to use. It is also much more multi-threaded so it responds quicker without freezing up when you save changes or when it needs to get status. 4. Property Namespaces Properties now have a concept called a Namespace. This is tied into the Application Templates to prevent conflicts with duplicate property names. Right now, if you have an AccountType and you pull in the HFM template, it also has AccountType so you end up creating properties with decorations on the name like "Account Type (HFM)". This is no longer necessary. In addition, properties within a namespace must have unique labels but they can be duplicated across namespaces. So in the Property Grid when you click on the HFM category, you just see "AccountType". When you click on MyCategory, you see "AccountType", but they are different properties with different values. Within formulas, the names are still unique (eg: Custom.AccountType vs HFM.AccountType). I'll write more about this one later. 5. Single Sign On DRM now supports Single Sign-On via HSS. For example, if you are using Oracle's OAM as your SSO solution then you configure HSS to use OAM just like you would before. You also configure DRM to use HSS, again just like before. Then you configure OAM to protect the DRM web app, like you would any other website. However once you do those things, users are no longer prompted to enter their username/password. They simply get redirected to OAM if they don't already have a login token, otherwise they pick their application and sail right into DRM. You can also avoid having to pick an application (see the next item) 6. URL-based navigation You can now specify the application you want to log into via the URL. Combined with SSO and your Intranet, it becomes easy to provide links on our intranet portal that take users directly into a specific DRM application. We also support specifying the Version, Hierarchy, and Node. Again, this can be used on your internal portal, but the scenarios get even more interesting when you are using workflow like Oracle BPEL you can automatically generate links within emails that will take users directly to a specific node in the UI. 7. Job status and cancellation A lot of the jobs now report their status and support true cancellation. 
Action Scripts also report a progress complete percentage since the amount of work is known ahead of time. 8. Action Script Options Action scripts support Option declarations at the top of the file so a script can self-describe (when specified in the file, the corresponding item in the file is ignored). For example: Option|DetectDelimiter Option|UsePropertyNames|true This will tell DRM to automatically detect the delimiter (a pipe symbol in this case) and that all references to properties are by Name, not by Label. Note that when you load a script in the UI, if you use Labels we automatically try to match them up if they are unique. Any duplicates are indicated and you are presented with a choice to pick which property you actually referred to. This is somewhat similar to Version substitution, but tailored for properties. There are other more minor changes and like I said earlier a lot of bug fixes and performance improvements. Hopefully I will get a chance to dig into some of these things in future blog posts.

    Read the article

  • Whoosh: PASS Board Year 1, Q4

    - by Denise McInerney
    "Whoosh". That's the sound the last quarter of 2012 made as it rushed by. My first year on the PASS Board is complete, and the last three months of it were probably the busiest.
    PASS Summit 2012
    Much of October was devoted to preparing for Summit. Every Board member, HQ staffer and dozens of volunteers were busy in the run-up to our flagship event. It takes a lot of work to put on the Summit. The community meetings, first-timers program, keynotes, sessions and that fabulous Community Appreciation party are the result of many hours of preparation.
    Virtual Chapters at the Summit
    With a lot of help from Karla Landrum, Michelle Nalliah, Lana Montgomery and others at HQ, the VCs had a good presence at Summit. We started the week with a VC leaders meeting. I shared some information about the activities and growth during the first part of the year. From January - September 2012:
    - The number of VCs increased from 14 to 20
    - VC membership grew from 55,200 to 80,100
    - Total attendance at VC meetings increased from 1,480 to 2,198
    - We have been part of PASS' global growth with language-based VCs, including Chinese, Spanish and Portuguese
    We also heard from some VC leaders and volunteers. Ryan Adams (Performance VC) shared his tips for successful marketing of VC events. Amy Lewis (Business Intelligence VC) described how the BI chapter has expanded to support PASS' global growth by finding volunteers to organize events at times that are convenient for people in Europe and Australia. Felipe Ferreira (Portuguese language VC) described the experience of building a user group first in Brazil, then expanding to work with Portuguese-speaking data professionals around the world. Virtual Chapter leaders and volunteers were in evidence throughout Summit, beginning with the Welcome Reception. For the past several years VCs have had an organized presence at this event, signing up new members and advertising their meetings. Many VC leaders also spent time at the Community Zone. This new addition to the Summit proved to be a vibrant spot where new members and volunteers could network with others and find out how to start a chapter or host a SQL Saturday.
    Women In Technology
    2012 was the 10th WIT Luncheon to be held at Summit. I was honored to be asked to be on the panel to discuss the topic "Where Have We Been and Where are We Going?" The PASS community has come a long way in our understanding of issues facing women in tech and our support of women in the organization. It was great to hear from panelists Stefanie Higgins and Kevin Kline, who were there at the beginning, as well as Kendra Little and Jen Stirrup, who are part of the progress being made by women in our community today.
    Bylaw Changes
    The Board spent a good deal of time in 2012 discussing how to move our global growth initiatives forward. An important component of this is a proposed change to how the Board is elected, with some seats representing geographic regions. At the end of December we voted on these proposed bylaw changes, which have been published for review. The member review and feedback is open until February 8. I encourage all members to review these changes and send any feedback to [email protected]. In addition to reading the bylaws, I recommend reading Bill Graziano's blog post on the subject.
    Business Analytics Conference
    At Summit we announced a new event: the PASS Business Analytics Conference. The inaugural event will be April 10-12, 2013 in Chicago. The world of data is changing rapidly.
More and more businesses want to extract value and insight from their data. Data professionals who provide these insights or enable others to do so are in demand. The BA Conference offers expert content on predictive analytics, data exploration and visualization, content delivery strategies and more. By holding this new event PASS is participating in important discussions happening in our industry, offering our members more educational value and reaching out to data professionals who are not currently part of our organization. New Year, New Portfolio In addition to my work with the Virtual Chapters I am also now responsible for the 24 Hours of PASS portfolio. Since the first 24HOP of 2013 is scheduled for January 30 we started the transition of the portfolio work from Rob Farley to me right after Summit. Work immediately started to secure speakers for the January event. We have also been evaluating webinar platforms that can be used for 24HOP as well as the Virtual Chapters. Next Up 24 Hours of PASS: Business Analytics Edition will be held on January 30. I'll be there and will moderate one or two sessions. The 24HOP topics are a sneak peek into the type of content that will be offered at the Business Analytics Conference. I hope to see some of you there. The Virtual Chapters have hit the ground running in 2013; many of them have events scheduled. The Application Development VC is getting restarted  and a new Business Analytics VC will be starting soon. Check out the lineup and join the VCs that interest you. And watch the Events page and Connector for announcements of upcoming meetings. At the end of January I will be attending a Board meeting in Seattle, and February 23 I will be at SQL Saturday #177 in Silicon Valley.

    Read the article

  • High Jinks, Hi Jacks, Exceptional DBA Awards and PASS

    - by Rodney
    The countdown to PASS has counted down. The day after tomorrow I will board a plane, like many others, on my way for the 4th year in a row to SQL PASS Summit. The anticipation has been excruciating, but luckily I have this little thing called a day job as a DBA that has kept me busy and not thinking too much about the event. Well, that is not exactly true, since my beautiful wife works for PASS, so we get to talk about SQL from the time we wake up until late in the evening. I would not have it any other way and I feel very fortunate to be a part of this great event and to have been chosen as the Exceptional DBA Award judge, also for the 4th year in a row. This year, I will again be tasked with presenting the award to the winner, Mr. Jeff Moden, and it will be a true honor to meet him in person as I have read many of his articles on SSC and have attended his session at PASS previously. The speech is all ready but one item remains, which will be a surprise to all who attend the party on Tuesday night in Seattle (see links below). Let's face it, Exceptional DBAs everywhere work very hard protecting our data stores, tuning queries, mentoring, saving money, installing clusters, etc., and once in a while there is time to be exceptionally non-professional and have a bit of fun.
    One incident that happened this year that falls under the High Jinks category was when my network admin asked if I could Telnet into a SQL instance and see if I could make the connection through the firewall that he had just configured. I was able to establish a connection on port 1433 and it occurred to me that it would be very interesting if I could actually run T-SQL queries via a Telnet session, much like you might do with an SMTP server. With that thought, I proceeded to demonstrate this could be possible by convincing my senior DBA Shawn McGehee that I was able to do so. At first he did not believe me. It shook his world view. It was inconceivable. What I had done, behind the scenes, of course, was to copy and rename SQLCMD.exe to Telnet.exe and use it to connect and run a simple "Select * from sys.databases" on the SQL instance. I think if it had been anyone other than Shawn I could have extended this ruse indefinitely, but he caught on within 30 seconds. It was a fun thirty seconds though.
    On the High Jacks side of the house, which is really merged to be SQL HACKS, I finally, after several years of struggling with how to connect to an untrusted domain (like in a DMZ) with a Windows account in SSMS, stumbled upon a solution that does away with the requirement to use SQL Authentication. While "Runas" is a great command to use to run an application with a higher privileged account, I had not previously been able to figure out how to connect to the remote domain with SSMS and "Runas". It never connected and caused a login failure every time for the remote Windows domain account. Then I ran across an option for "Runas", "/netonly". This option postpones the login until a connection is made and only then passes the remote login you supply when you first launch SSMS with the "Runas" command.
    So a typical shortcut would look like: C:\Windows\System32\runas.exe /netonly /user:remotedomain.com\rodlandrum "C:\Program Files\Microsoft SQL Server\100\Tools\Binn\VSShell\Common7\IDE\Ssms.exe" You will want to make sure the passwords are synced between the two domains, your local domain and the remote domain, otherwise you may have account lockout issues, but I have found in weeks of testing this is a stable solution.
    Now it is time to get ready to head for Seattle. Please, if you see me (@SQLBeat) or my wife (@Karlakay22), run up and high five me (wait..High Jinks.High Jacks.High Fives.Need to change the title) or give me a big bear hug if you are strong enough to lift me off the ground. And if you do actually do that, I will think you are awesome and will not embarrass you by crying out for help or complaining of a broken back or sciatic nerve damage.
    And now the links to others who have all of the details. First, for MVP Deep Dives 2, in which, like John, I was lucky enough to participate this year. http://www.simple-talk.com/community/blogs/johnm/archive/2011/09/29/103577.aspx And the details of the SSC party where the Exceptional DBA of 2011, Jeff Moden, will be awarded. http://www.simple-talk.com/community/blogs/rebecca_amos/archive/2011/10/05/103661.aspx Cheers! Rodney

    Read the article

  • MSCC: Career & IT Fair 2014

    Already a couple of weeks ago, I've been addressed by Ibraahim and Yunus to see whether it would be interesting to participate in the 1st Career & IT Fair organised by the UoM Computer Club. Well, luckily we met at the Global Windows Azure Bootcamp and I wasn't too sure whether it would be possible for me to attend after all. The main reason is given because of work demand and furthermore due to the fact that the Mauritius Software Craftsmanship Community currently has no advertising material at all. Here's the brief statement of the event: "The UOM Students' Computer Club in collaboration with the UOM Students' Union and UOM CSE Department is organising a 'Career & IT Fair' on the 23rd and 24th April 2014. This event has for objective to provide a platform to tertiary students, secondary students as well as vocational students, the opportunity to meet job recruiters." Luckily, I was reminded that the 23rd is a Wednesday, and therefore I decided that it might be interesting to move our weekly Code & Coffee session to the university and hence be able to attend the career fair. As it turned out it was a great choice and thankfully Pritvi, Nadim as well as Ishwon volunteered to be around at the "community booth". Thankfully, the computer club gave us - the MSCC and the LUGM - one of their spaces in the lobby area of the Paul Octave Wiéhé Auditorium. My impression about the event Very well and professionally organised. Seriously, the lads over at the UoM Computer Club did a great job in organising their 2 days event, and felt very comfortable at any time. Actually, it was kind of amusing to some of the members constantly running around and checking everything. Even though that the whole process went smooth and easy off the hand. There were a couple of interesting pieces of information and announcements during the opening ceremony. For example, the Computer Science faculty is a very young one and has been initiated back in 1988 only - just by 4 staff members at that time. Now, after 25 years they have achieved quite a lot and there are currently 1.000+ active students attending the numerous lectures and courses. But there is no room to rest on previous achievements, and I was kind of surprised to hear that there are plans to extend the campus, and offer new lectures in the fields of nanotechnology, big data handling, and - crossing fingers - the introduction and establishment of a space control centre. Mauritius is already part of the Square Kilometre Array (SKA) and hopefully there will be more activities into that direction in the near future. Community - Awareness and collaboration As stated earlier, I could only spent one morning but luckily other members of the MSCC and the LUGM stayed during the whole two days and provided answers to any interested person. As for me, I took the opportunity to get in touch with the other companies in the lobby. Mainly, to create some awareness about our IT communities but also to see whether there might be options for future engagement in common activities, too. So far, I was able to speak to representatives of the following companies: ACCA Mauritius Business at Work Infomil LinkByNet Microsoft Indian Ocean Islands & French Pacific Spherinity Training Institute Spoon Consulting Ltd. State Informatics Ltd. Unfortunately, I only had a quick chat with an HR representative of LinkByNet but I fully count on our MSCC members like Nitin or LUGM member Ronny to spread our intentions over there.  
    So far, all of the representatives were really interested in our concepts and activities, and I'm currently catching up on an introduction flyer for the MSCC that I'm going to send out to all those contacts via mail. It would be great to have more craftsmen as well as professional support on board.
    Some pictures from the event
    MSCC: Fantastic outlook for the near future. Announcements were made on big data, nanotechnology, and a space control centre in Mauritius. Interesting!
    MSCC: The lobby area was crammed with students. A great way to exchange and network. Good luck to all candidates!
    Passing the relay baton to...
    I recommend you continue reading about the first Career & IT Fair on Ish's blog. He has a great summary and more details on those two days of IT activities than I have. Thanks and feel free to leave a comment (or two)... 

    Read the article

  • USB external drive is not recognized by any OS, how to troubleshoot in Ubuntu?

    - by Breno
    First of all I would like to inform you that I saw a question similar to mine but the error was different, so here's my problem... I have an external HD samsung s2 model of 500GB and a day to day just stopped working, tried in other systems (windows and mac) however are not recognized. In the windows device manager when I insert the usb it states that the device in question are not working properly. Well, in the logs of my ubuntu 4.12 I see the following message when I insert my usb device in: [ 2967.560216] usb 7-2: new full-speed USB device number 2 using uhci_hcd [ 2967.680182] usb 7-2: device descriptor read/64, error -71 [ 2967.904176] usb 7-2: device descriptor read/64, error -71 [ 2968.120227] usb 7-2: new full-speed USB device number 3 using uhci_hcd [ 2968.240207] usb 7-2: device descriptor read/64, error -71 [ 2968.464063] usb 7-2: device descriptor read/64, error -71 [ 2968.680087] usb 7-2: new full-speed USB device number 4 using uhci_hcd [ 2969.092085] usb 7-2: device not accepting address 4, error -71 [ 2969.208155] usb 7-2: new full-speed USB device number 5 using uhci_hcd [ 2969.624076] usb 7-2: device not accepting address 5, error -71 [ 2969.624118] hub 7-0:1.0: unable to enumerate USB device on port 2 [ 4520.240340] usb 7-1: new full-speed USB device number 6 using uhci_hcd [ 4520.364079] usb 7-1: device descriptor read/64, error -71 [ 4520.588109] usb 7-1: device descriptor read/64, error -71 [ 4520.804140] usb 7-1: new full-speed USB device number 7 using uhci_hcd [ 4520.924136] usb 7-1: device descriptor read/64, error -71 [ 4521.148083] usb 7-1: device descriptor read/64, error -71 [ 4521.364105] usb 7-1: new full-speed USB device number 8 using uhci_hcd [ 4521.776237] usb 7-1: device not accepting address 8, error -71 [ 4521.888206] usb 7-1: new full-speed USB device number 9 using uhci_hcd [ 4522.296102] usb 7-1: device not accepting address 9, error -71 [ 4522.296150] hub 7-0:1.0: unable to enumerate USB device on port 1 [ 4749.036104] usb 7-2: new full-speed USB device number 10 using uhci_hcd [ 4749.156209] usb 7-2: device descriptor read/64, error -71 [ 4749.380215] usb 7-2: device descriptor read/64, error -71 [ 4749.596206] usb 7-2: new full-speed USB device number 11 using uhci_hcd [ 4749.716409] usb 7-2: device descriptor read/64, error -71 [ 4749.940110] usb 7-2: device descriptor read/64, error -71 [ 4750.156257] usb 7-2: new full-speed USB device number 12 using uhci_hcd [ 4750.572150] usb 7-2: device not accepting address 12, error -71 [ 4750.684215] usb 7-2: new full-speed USB device number 13 using uhci_hcd [ 4751.100182] usb 7-2: device not accepting address 13, error -71 [ 4751.100224] hub 7-0:1.0: unable to enumerate USB device on port 2 Here is my system: Bus 001 Device 001: ID 1d6b:0002 Linux Foundation 2.0 root hub Bus 002 Device 001: ID 1d6b:0002 Linux Foundation 2.0 root hub Bus 003 Device 001: ID 1d6b:0001 Linux Foundation 1.1 root hub Bus 004 Device 001: ID 1d6b:0001 Linux Foundation 1.1 root hub Bus 005 Device 001: ID 1d6b:0001 Linux Foundation 1.1 root hub Bus 006 Device 001: ID 1d6b:0001 Linux Foundation 1.1 root hub Bus 007 Device 001: ID 1d6b:0001 Linux Foundation 1.1 root hub Bus 008 Device 001: ID 1d6b:0001 Linux Foundation 1.1 root hub Bus 005 Device 002: ID 08ff:2810 AuthenTec, Inc. 
AES2810 00:00.0 Host bridge: Intel Corporation Mobile 4 Series Chipset Memory Controller Hub (rev 07) 00:02.0 VGA compatible controller: Intel Corporation Mobile 4 Series Chipset Integrated Graphics Controller (rev 07) 00:02.1 Display controller: Intel Corporation Mobile 4 Series Chipset Integrated Graphics Controller (rev 07) 00:1a.0 USB controller: Intel Corporation 82801I (ICH9 Family) USB UHCI Controller #4 (rev 02) 00:1a.1 USB controller: Intel Corporation 82801I (ICH9 Family) USB UHCI Controller #5 (rev 02) 00:1a.2 USB controller: Intel Corporation 82801I (ICH9 Family) USB UHCI Controller #6 (rev 02) 00:1a.7 USB controller: Intel Corporation 82801I (ICH9 Family) USB2 EHCI Controller #2 (rev 02) 00:1b.0 Audio device: Intel Corporation 82801I (ICH9 Family) HD Audio Controller (rev 02) 00:1c.0 PCI bridge: Intel Corporation 82801I (ICH9 Family) PCI Express Port 1 (rev 02) 00:1c.1 PCI bridge: Intel Corporation 82801I (ICH9 Family) PCI Express Port 2 (rev 02) 00:1c.4 PCI bridge: Intel Corporation 82801I (ICH9 Family) PCI Express Port 5 (rev 02) 00:1d.0 USB controller: Intel Corporation 82801I (ICH9 Family) USB UHCI Controller #1 (rev 02) 00:1d.1 USB controller: Intel Corporation 82801I (ICH9 Family) USB UHCI Controller #2 (rev 02) 00:1d.2 USB controller: Intel Corporation 82801I (ICH9 Family) USB UHCI Controller #3 (rev 02) 00:1d.7 USB controller: Intel Corporation 82801I (ICH9 Family) USB2 EHCI Controller #1 (rev 02) 00:1e.0 PCI bridge: Intel Corporation 82801 Mobile PCI Bridge (rev 92) 00:1f.0 ISA bridge: Intel Corporation ICH9M LPC Interface Controller (rev 02) 00:1f.2 IDE interface: Intel Corporation 82801IBM/IEM (ICH9M/ICH9M-E) 2 port SATA Controller [IDE mode] (rev 02) 00:1f.3 SMBus: Intel Corporation 82801I (ICH9 Family) SMBus Controller (rev 02) 00:1f.5 IDE interface: Intel Corporation 82801IBM/IEM (ICH9M/ICH9M-E) 2 port SATA Controller [IDE mode] (rev 02) 02:01.0 CardBus bridge: Ricoh Co Ltd RL5c476 II (rev ba) 02:01.1 FireWire (IEEE 1394): Ricoh Co Ltd R5C832 IEEE 1394 Controller (rev 04) 02:01.2 SD Host controller: Ricoh Co Ltd R5C822 SD/SDIO/MMC/MS/MSPro Host Adapter (rev 21) 09:00.0 Ethernet controller: Broadcom Corporation NetXtreme BCM5756ME Gigabit Ethernet PCI Express 0c:00.0 Network controller: Broadcom Corporation BCM4312 802.11b/g LP-PHY (rev 01) Does anyone have any clue what would be the problem?

    Read the article

  • The first day of JavaOne is already over!

    - by delabassee
    In the past, Sunday used to be a more relaxing day with 'just' some JavaOne activities going on. Sunday used to be a soft day to prepare yourself for an exhausting week. This is now over as JavaOne is expanding; Sunday is now an integral part of the conference. One of the side effects of this extra day is that some activities related to JavaOne and OpenWorld, such as MySQL Connect, are being pushed to start a day earlier, on Saturday (can you spot the pattern here?). On the GlassFish front, Sunday was a very busy day! It started at the Moscone Center with the annual GlassFish Community Event, where the Java EE 7 and GF 4 roadmaps were presented and discussed. During the event, different GlassFish users such as ZeroTurnaround (the JRebel guys), Grupo RBS and IDR Solutions shared their views on GF: why they like GF but also what could be improved. The event was also a forum for the GF community to exchange with some of the key Java EE / GlassFish Oracle Executives and the different GF team members. The Strategy keynote and the Technical keynote were held in the Masonic Auditorium later in the afternoon. Oracle executives presented the plans for Java SE, JavaFX and Java EE. As on-demand replays will be available soon, I will not summarize several hours of content, but here are some personal takeaways from those keynotes.
    Modularity
    Modularity is a big deal. We know by now that Project Jigsaw will not be ready for Java SE 8 but, in any case, it is already possible (and encouraged) to test Jigsaw today. In the future, Java EE plans to rely on the modularity features provided by Java SE, so Project Jigsaw is also relevant for Java EE developers. Shorter term, to cover some of the modular requirements, Java SE will adopt the approach that was used for Java EE 6 and the notion of Profiles. This approach does not define a module system per se; Profiles are a way to clearly define different subsets of Java SE to fulfill different needs (e.g. the full JRE is not required for a headless application). The introduction of different Profiles, from the Base profile (10mb) to the Full Profile (+50mb), has been proposed for Java SE 8.
    Embedded
    Embedded is a strong theme going forward for the Java Platform. There is now a dedicated program: Java Embedded @ JavaOne. Java by nature (e.g. platform independence, built-in security, the ability to easily talk to any back-end system, a large set of skills available on the market, etc.) is probably the most suited platform for the Internet of Things. You can quickly be up to speed and develop services and applications for that space just by using your current Java skills. All you need to start developing on ARM is a $35 Raspberry Pi ARM board ($25 if you are cheap and can live without an ethernet connection) and the recently released JDK for Linux/ARM. Obviously, GlassFish runs on Raspberry Pi. If you want to go further in the embedded space, you should take a look at Java SE Embedded, an optimized, low-footprint Java environment that supports the major embedded architectures (ARM, PPC and x86). Finally, Oracle has recently introduced Java Embedded Suite, a new solution that brings modern middleware capabilities to the embedded space. Java Embedded Suite is an optimized solution that leverages Java SE Embedded but also GlassFish, Jersey and JavaDB to deploy advanced value-added capabilities (e.g. sensor data filtering) deeper in the network, closer to the devices.
    JavaFX
    JavaFX is going strong! Starting from Java SE 7u6, JavaFX is bundled with the JDK. JavaFX is now available for all the major desktop platforms (Windows, Linux and Mac OS X). JavaFX is now also available, in developer preview, for low-end devices running Linux/ARM. During the keynote, JavaFX was shown running on a Raspberry Pi! And as announced during the keynote, JavaFX should be fully open-sourced by the end of the year; contributions are welcome! There is strong momentum around JavaFX; it's the ideal client solution for the Java platform, a client layer that works perfectly with GlassFish on the back-end. If you were not convinced by JavaFX, it's time to reconsider it! As an old Chinese proverb says, "One tweet is worth a thousand words!"
    HTML5, Project Avatar and Java EE 7
    HTML5 got a lot of airtime too; it was covered during the Java EE 7 section of the keynote. Some details about Project Avatar, Oracle's incubator project for a TSA (Thin Server Architecture) solution, were revealed during the keynote. On the tooling side, Project Easel running on NetBeans 7.3 beta was demo'ed, including a cool NetBeans debugging session running in Chrome! HTML5, Project Avatar and Java EE 7 deserve separate posts...
    Feedback
    We need your feedback! There are many projects, JSRs and products cooking: GlassFish 4, Project Jigsaw, Concurrency Utilities for Java EE (JSR 236), OpenJFX and OpenJDK, to name just a few. Those projects and specifications will have a profound impact on the Java platform for the years to come! So if you have the opportunity, download, install, learn, test them and give feedback! Remember, you can "Make the Future Java!"
    Finally, the traditional GlassFish Party at the Thirsty Bear concluded the first JavaOne day. This party is another place where the community can freely exchange with the GlassFish team in a more relaxed, more friendly (but sometimes more noisy) atmosphere. Arun has posted a set of pictures to reflect the atmosphere of the keynotes and the GlassFish party. You can find more details on the other Java EE and GlassFish activities here.

    Read the article

  • Windows Phone 7 Review &ndash; Part 1: LG Quantum

    - by Nikita Polyakov
    As many of my fellow geeks did, I ran out and got a retail Windows Phone 7 on the first day. Just had to have it :) I'd had the developer prototypes in my hands for the previous 3 months on and off, so I finally wanted to have one I could call my own.
    I've rushed the Launch
    I checked out both AT&T and T-Mobile offerings on day 1 and decided on a Samsung Focus. Great screen, super light and thin. If you don't believe me that this phone can compete with the best of the non-Phone 7 offerings - get it in your hand to compare for yourself. I have to say that even though the on-screen keyboard on Windows Phone 7 is one of the best, the amount of text I write on my phone and my expectations of how long a short reply should take are very high. Also, the phone being so slick and sexy, it did not feel solid or confident in my hand or pocket.
    As the dust settled
    Arrives the LG Quantum – now on AT&T and worldwide. First impression: the softer plastic and the solid metal back battery cover make the entire phone feel solid and indestructible! The phone fits just right in my hand; it's almost too good. It does not feel like it will crack in your jeans. I feel safe holding it and don't feel like it'd fly out of my hand if someone were to bump into me while walking. I've dropped and thrown the Focus a few times by accident, as its weight is negligible. I won't even dream of lying: the first day, adjusting to a 3.5" LCD screen from the Samsung's blisteringly bright and poppy 4" AMOLED was hard. But the colors and sharpness are still very good. I find it almost easier on the eyes, actually, for day-to-day use. I had a chance to lay the phone down in a line with the prototypes and final versions of other phones that had LCD screens – LG makes HTC look like a budget LCD compared to a high-end LCD in the home theatre department. I am consistently complimented by friends that have the HD7 or Surround on how much better my screen looks. The screen just looks like the most color-correct phone out of the lineup. Even next to the Samsung it holds up: it makes the AMOLED look oversaturated, though it can't match its true blacks, compensating with true whites.
    Day to Day Usability
    What I also noticed, and it is a huge difference, is how much less I am accidentally hitting the soft keys at the bottom. That was a real pain on the Focus, since holding it in an average-size hand already meant accidentally touching the controls at the bottom. The QWERTY keyboard on this phone is great. It's like the mission for LG is "make it solid!". The keyboard has a very durable feel.
    LG's secret wild card, though, is the DLNA support. If you haven't seen an ad for it, you should. Imagine this – playing a song from your phone straight to your network-connected A/V receiver. Done. Pictures to TV. Done. Video. Done. DLNA works with components that advertise DLNA support, as well as Windows 7, XBOX 360 and other consoles. I will write an extensive review of that experience in the near future.
    LG exclusive apps – from a panorama photo taker to a voice-to-text translator and even a look-n-type app that works like a backup inverse camera, there is quite a bit there that won't be found on the other phones. I'll review those in more detail in another segment.
    Conclusion
    So for a quick comparison: If you want a phone that is super thin, light and is the core reference for a Windows Phone 7 – the Samsung Focus it is. If you want a great phone with a solid, secure feel, a real keyboard and media features - the hands-down winner is the LG Quantum. You can pick up the LG Quantum at AT&T in the US, and worldwide as the LG Optimus 7Q.
    Final thought: I have not had a smartphone that I felt was a reliable, trusty primary communication device since the Samsung BlackJack II; this time the LG gets the crown.
    [ Disclosure: Phone was provided to me free of charge. That has been the case for all of my phones for years, nothing new - I get them all. ]

    Read the article

  • Simple Scripting for your Exalogic Storage

    - by Trond Strømme
    As part of my job in Oracle ACS (Advanced Customer Services) I'm handling lots of different systems and customers. Among the recent systems I have worked with are Oracle's Exalogic engineered systems. One of the things I'd never had much exposure to as a system developer/architect/middleware guy/Java dude has been storage; outside of consuming it for my photography needs.. Well, I'm always ready for a new challenge... I'd downloaded the 7000 series storage simulator when it was released in the good old Sun days, found it fun and instructive to play around with, but as I never touched storage in any way (besides consuming it..) I forgot about it. A couple of years ago, when I started working with Exalogic engineered systems, it again came to light as an invaluable learning and testing tool for the embedded storage in an Exalogic: Oracle's Sun ZFS Storage 7320 Appliance. aaaanyway... I've been "booted" into a part-time role as the interim storage/system admin/middleware/Java guy for a client and found I needed to create the occasional report or summary or whatever.. of what's using the storage in the 7320 (as default configured for an Exalogic, 40T of disk in a mirrored configuration, yielding 18T of actual space). Reading the nice documentation and some articles on the Oracle Technology Network, I saw great possibilities with the embedded ECMAScript3/JavaScript engine in the 7000 series. In my personal opinion, anyone who's dealing with Exalogic administration, or exposed to any of the 7000 series of storage appliances and servers that Oracle offers, should have a VirtualBox instance of it kicking around. For development and testing it's a fantastic tool. (It can save you from having to explain to your management (most of) the embarrassing FAILS you can cause if you test something in a production system...) So download, and install. A small sidestep: if, after firing up the 7000 series simulator in VirtualBox, you've forgotten what its IP address is, logging in directly via the running VirtualBox VM will sort you out. So in my case I can ssh to 192.168.56.101 or point a browser to https://192.168.56.101:215 to log into the storage appliance. One simple way of executing a script on the 7320 is to ssh to the device and redirect a file with the script in it to ssh:
    ssh [email protected] < myscript.js
    One question I got from my client and the people who will take over the systems was: "how can we see the quotas and allocations for all projects/shares in one easy go, so we don't have to go navigating around in the BUI for all the hundreds of shares the 7320 is hosting just to check if anything is running dry?" Easy! JavaScript time, VirtualBox and emacs!
    
    //NOTE! this script is available 'as is'. It has been run on a couple of 7320's (running 2010.08.17.3.0,1-1.25 &
    // 2011.04.24.1.0,1-1.8), a 7420 and the VB image, but I personally
    //offer no guarantee whatsoever that it won't make your server topple, catch fire or in any way go pear shaped..
    //run at your own risk or learn from my code and or mistakes..
    script
    run('cd /');
    run('shares');
    //get all projects:
    proj = list();
    
    function spaceToGig(bytes){
      return bytes/1073741824; //convert bytes to GB
    }
    
    function fullInPercent(quota, space_data){
      tmp = (space_data/quota)*100;
      return tmp;
    }
    
    //print header, slightly good looking
    printf(" %s/%-15s %8s(GB) %7s(GB) %5s(GB) %7s(GB) %3s\n","Project", "Share","Quota","Ref", "Snap", "Total","%full");
    printf("-------------------------------------------------------------------------------\n");
    
    //for each project, get all shares. check for quota and calculate percentage and human readable figures..
    for (i=0;i<proj.length;i++){
      run('select ' + proj[i]);
      //get all shares for a project
      var pshares = list();
      //for each share get quota properties
      for (j=0;j<pshares.length;j++){
        run('select ' + pshares[j]);
        quota = get('quota');
        //properties associated with a share or inherited from a project
        spaceData = get('space_data');
        spaceSnap = get('space_snapshots');
        spaceTotal = get('space_total');
        if(quota>0){ //has quota
          printf(" %s/%-15s \t%4.2fGB\t%.2fGB\t%.2fGB\t%.2fGB\t%5.2f%%\n",proj[i], pshares[j],spaceToGig(quota),spaceToGig(spaceData),spaceToGig(spaceSnap),spaceToGig(spaceTotal),fullInPercent(quota,spaceTotal));
        }else{ //no quota
          printf(" %s/%-15s \t%8s\t%.2fGB\t%.2fGB\t%.2fGB\t%s\n",proj[i],pshares[j], "N/A", spaceToGig(spaceData),spaceToGig(spaceSnap),spaceToGig(spaceTotal),"N/A");
        }
        run('cd ..');
      }
      run('done');
    }
    
    The resulting output should look something like this:
    Project/Share                 Quota(GB)  Ref(GB)   Snap(GB)  Total(GB)  %full
    -------------------------------------------------------------------------------
    ACSExalogicSystem/domains     N/A        0.04GB    0.00GB    0.04GB     N/A
    ACSExalogicSystem/logs        N/A        0.01GB    0.00GB    0.01GB     N/A
    ACSExalogicSystem/nodemgrs    N/A        0.00GB    0.00GB    0.00GB     N/A
    ACSExalogicSystem/stores      N/A        0.04GB    0.00GB    0.04GB     N/A
    ***_dev/FMW_***_1             133GB      4.24GB    0.01GB    4.25GB     3.19%
    ***_dev/FMW_***_2             N/A        4.25GB    0.01GB    4.26GB     N/A
    ***_dev/applications          10GB       0.00GB    0.00GB    0.00GB     0.00%
    ***_dev/domains               50GB       10.75GB   3.55GB    14.30GB    28.61%
    ***_dev/logs                  20GB       0.32GB    0.01GB    0.33GB     1.66%
    ***_dev/softwaredepot         20GB       4.15GB    0.00GB    4.15GB     20.73%
    ***_dev/stores                20GB       0.01GB    0.00GB    0.01GB     0.05%
    ###_dev/FMW_###_1             400GB      17.63GB   0.12GB    17.75GB    4.44%
    ###_dev/applications          N/A        0.00GB    0.00GB    0.00GB     N/A
    ###_dev/domains               120GB      14.21GB   5.53GB    19.74GB    16.45%
    ###_dev/logs                  15GB       0.00GB    0.00GB    0.00GB     0.00%
    ###_dev/softwaredepot         250GB      73.55GB   0.02GB    73.57GB    29.43%
    …snip
    My apologies if the output is a bit mis-aligned here and there, I only bothered making it look good, not perfect :/ I also removed some of the project names (*,#)
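    If all you need is a quick early-warning report of shares that are getting close to their quota, a small variation on the same idea works too. The sketch below is just that, a sketch: it is untested, it reuses only the calls already shown above (run(), list(), get() and printf()), and the 80% threshold and message wording are arbitrary choices of mine, so adjust them to your environment before trusting the output.
    
    script
    //flag shares that have used more than a given percentage of their quota
    var threshold = 80; //percent full before a share gets flagged (example value)
    run('cd /');
    run('shares');
    proj = list(); //all projects
    for (i=0;i<proj.length;i++){
      run('select ' + proj[i]);
      var pshares = list(); //all shares in this project
      for (j=0;j<pshares.length;j++){
        run('select ' + pshares[j]);
        quota = get('quota'); //0 means no quota set on the share or project
        spaceTotal = get('space_total');
        if(quota>0 && (spaceTotal/quota)*100 > threshold){
          printf("WARNING: %s/%s is %.2f%% full (%.2fGB of %.2fGB)\n",
            proj[i], pshares[j], (spaceTotal/quota)*100,
            spaceTotal/1073741824, quota/1073741824);
        }
        run('cd ..');
      }
      run('done');
    }
    
    Run it the same way as the report above, for example by redirecting the file into ssh, and feed the output into whatever monitoring or mail job you already have in place.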

    Read the article

  • How to Save Hundreds or Thousands of Dollars on Cell Phone Service

    - by Chris Hoffman
    Cell phone contracts are bad. You get a seemingly cheap phone up front, but you more than pay for the cost of the phone over two years. Prepaid phone plans are surging in North America for a reason. Prepaid phone plans will be cheaper and more flexible than traditional contracts with big carriers for many people. However much you use your phone, there’s a good chance you can save money with a prepaid service. No More Contracts Here’s how cell phone service typically works in North America: You get a subsidized phone for “free”, $99, or $199. You sign up for a two-year contract and more than pay back the cost of that phone over the length of the contract. This is similar to leasing something or purchasing it on a credit card and paying it back over two years — you spend less up front, but you’re paying more in the long run. But this isn’t the only option. You could opt for a cheaper prepaid service that doesn’t lock you into a contract. If you don’t use your phone much, you could just pay for what you use and avoid the hefty cell phone bills. If you use your phone a lot, you could get a cheaper plan, too. Now, this certainly isn’t for everyone. If you want the latest iPhone or Galaxy smartphone every two years and require a 4G data connection, prepaid services may not be for you. On the other hand, if you don’t need the latest phone, you can save money here. You can also save a huge amount of money if you don’t use your phone much. Phone Options When you choose your prepaid or contract-free service, you’ll often be able to purchase a phone from them. You’ll generally be able to find dirt-cheap dumbphones and the cheapest, slowest Android phones for not very much money. If you are able to buy a top-of-the-line smartphone, you’ll have to pay the full, unsubsidized price. That’s $649 for either an iPhone 5S or Samsung Galaxy S4. Whatever phones the service provider offers, you could always buy a phone elsewhere — for example, you could buy an unsubsidized iPhone direct from Apple and then take it to your cell phone service of choice. Most services will allow you to get a SIM card and pop it into your existing phone rather than purchasing a phone. If you can get a hand-me-down smartphone, you can often save quite a bit of money. For example, you may have a family member upgrading from an iPhone 4S to an iPhone 5S. You could take their phone to a prepaid carrier and have a nicer phone on a cheap cell phone plan. If you brought an old smartphone to a big carrier like AT&T or Verizon, they wouldn’t give you a discount on your monthly plan. You’d have to pay the same amount of money every month as if you had gotten a subsidized phone. Google’s Nexus phones are also great options for people looking to buy smartphones and pay up-front. Google’s Nexus 4 offered a modern, almost top-of-the-line Android smartphone experience at $299 or $349 when it came out last year. Google will soon be releasing the Nexus 5 and it’s expected to be priced at $349. That’s certainly a lot more than a cheap phone, but it’s a fairly high-end smartphone at almost half the price of an iPhone 5S or Galaxy S4. Nexus phones can be purchased online from Google’s Play Store. Service Options When choosing a service, you need to consider what you actually use. If you’re someone who only uses your phone rarely, you can get plans that will allow you to pay as little as a few dollars per month. If you’re someone who’s usually in range of Wi-Fi, you may not need much data at all. 
If you want a plan with unlimited talk, texting, and data usage, you can get it for much cheaper than you’d pay on a major carrier like AT&T. The options here range from pay-as-you-go plans, like the ones offered by T-Mobile, which allow you to put a certain amount of money in and only drain that balance when you actually use minutes, texts, or data. If you only make a few calls and send a few texts per month, you’d only pay a few bucks. On the other end, Walmart’s Straight Talk service is a popular option that offers unlimited talk, texting, and data at $45 per month. Which service is right for you depends on a lot of things, including your usage and what each network’s coverage is like in your area. You’ll want to do some research of your own before choosing a service. Prepaid services also offer you even more flexibility after you choose one. If you’re not happy or a better deal comes along, you can switch — you’re not locked into your service for two years and you won’t pay an early termination fee. Image Credit: Intel Free Press on Flickr, Jon Fingas on Flickr, John Karakatsanis on Flickr, kendalkinggroup on Flickr     

    Read the article

  • Come meet our Interns in Dublin

    - by klaudia.drulis
    Oracle Worldwide Product Translation Group (WPTG) provides solutions for all Oracle product and Content translation requirements. WPTG is a global organisation with its headquarters in Ireland and employees in Oracle offices worldwide. WPTG offer expertise in fields such as process engineering, tools development, linguistic quality, terminology, global product release, financial and vendor management. WPTG provides translation solution for over 40 languages including Asia Pacific, European, American and Middle Eastern languages. WPTG first introduced an intern program over 10 years ago and it has become a key component of our teams structure. The majority of Interns are sourced from a Computer Science related course, these Interns joining the engineering team. Others are sourced from Business courses and work within the Business / Project management area. The intern program allows us to maintain ties with current course curriculum and brings fresh energy and perspective into our Organisation. Four of the full time staff working in Dublin today joined us originally as Interns and subsequently were offered permanent positions. Come Meet some of our 2010 Interns, Come and see what Darragh, Anthony, Caoimhe, James and Artemij thought about working within the WPTG at Oracle: Darragh “Oracle has been a fun, challenging work placement for me. From day one I was treated as a full member of staff, this was both comforting and a little bit scary. The responsibilities stack up but I found I was able to keep on top of everything and even make improvements to how we handle a few things thanks to a great team and a very supportive manager. There’s a very positive atmosphere in work that’s really conducive to getting a lot of work done. Ideas seem to be the central hub in my line of business so all of my ideas and innovations were greeted with enthusiasm. Oracle has given me a fantastic opportunity and I urge you to grab it with both hands, you’ll find that you’re with a set of like minded people from all works of life that make work both interesting and fun. Even when the pressure is on you know that you can always get help and advice from someone nearby. My last word of advice is don’t be afraid to stick your neck out, everyone here is willing to learn, try something new and innovate, your voice will be heard and who knows, you could end up having a large impact on Oracle and your career.” Anthony “I had a great experience working with Oracle, from day one I was treated like a full member of staff with responsibilities of my own. I found that the more I put into the work the more I got out from the experience. Volunteering and being willing to face challenges have made this a more exciting placement. I am given a lot of leeway to do my own projects and so I’ve found that I am really enjoying my time here.” Caoimhe “I am currently spending my year of placement within the Release Management Team in the WPTG. My main role is to handle the finance process of all translation projects under 100k which includes creating workspecs and PO's, sending out kits, dealing with vendor queries and handling the invoicing and payment part. I am really enjoying my time here at Oracle, everyone is very open and friendly and willing to help you out with any questions you may have. I would definitely be interested in returning to Oracle after I graduate!” James “I am currently on a 12 month placement with Oracle, working as part of the Worldwide Product Translation Group in the Business Management. 
The Business Management team provides a global view on WPTG’s vendor and business strategy and is an interface into WPTG for new business. The business management team work together to support the external translation partner network. My role is to support the Business Management team and also to work on various projects when the need arises. This involves working with translation vendors and working with other Oracle employees worldwide. I am really enjoying my time working for Oracle, at times it can be challenging bit also very rewarding. I would recommend any student wanting to undertake a placement year to apply to Oracle, I made some great friends and I will never forget my time in Dublin.” Artemij “From working within Oracle, I have truly understood what "career path" is, and what opportunities a large corporation like Oracle can offer. Without any illusions, the work itself is exciting, sometimes challenging, tests your ability to handle pressure, to make decisions and take responsibility, to learn quickly and cooperate efficiently in order to solve a problem. I have learned a lot about myself. What I am good at, where and what I can do better. My placement at Oracle has allowed me to get a clearer picture of what I want, and which door I am going to open after college. If you have any questions related to this article feel free to contact  [email protected].  You can find our job opportunities via http://campus.oracle.com

    Read the article

  • How to Achieve Real-Time Data Protection and Availability....For Real

    - by JoeMeeks
    There is a class of business and mission critical applications where downtime or data loss have substantial negative impact on revenue, customer service, reputation, cost, etc. Because the Oracle Database is used extensively to provide reliable performance and availability for this class of application, it also provides an integrated set of capabilities for real-time data protection and availability. Active Data Guard, depicted in the figure below, is the cornerstone for accomplishing these objectives because it provides the absolute best real-time data protection and availability for the Oracle Database. This is a bold statement, but it is supported by the facts. It isn’t so much that alternative solutions are bad, it’s just that their architectures prevent them from achieving the same levels of data protection, availability, simplicity, and asset utilization provided by Active Data Guard. Let’s explore further. Backups are the most popular method used to protect data and are an essential best practice for every database. Not surprisingly, Oracle Recovery Manager (RMAN) is one of the most commonly used features of the Oracle Database. But comparing Active Data Guard to backups is like comparing apples to motorcycles. Active Data Guard uses a hot (open read-only), synchronized copy of the production database to provide real-time data protection and HA. In contrast, a restore from backup takes time and often has many moving parts - people, processes, software and systems – that can create a level of uncertainty during an outage that critical applications can’t afford. This is why backups play a secondary role for your most critical databases by complementing real-time solutions that can provide both data protection and availability. Before Data Guard, enterprises used storage remote-mirroring for real-time data protection and availability. Remote-mirroring is a sophisticated storage technology promoted as a generic infrastructure solution that makes a simple promise – whatever is written to a primary volume will also be written to the mirrored volume at a remote site. Keeping this promise is also what causes data loss and downtime when the data written to primary volumes is corrupt – the same corruption is faithfully mirrored to the remote volume making both copies unusable. This happens because remote-mirroring is a generic process. It has no  intrinsic knowledge of Oracle data structures to enable advanced protection, nor can it perform independent Oracle validation BEFORE changes are applied to the remote copy. There is also nothing to prevent human error (e.g. a storage admin accidentally deleting critical files) from also impacting the remote mirrored copy. Remote-mirroring tricks users by creating a false impression that there are two separate copies of the Oracle Database. In truth; while remote-mirroring maintains two copies of the data on different volumes, both are part of a single closely coupled system. Not only will remote-mirroring propagate corruptions and administrative errors, but the changes applied to the mirrored volume are a result of the same Oracle code path that applied the change to the source volume. There is no isolation, either from a storage mirroring perspective or from an Oracle software perspective.  Bottom line, storage remote-mirroring lacks both the smarts and isolation level necessary to provide true data protection. 
Active Data Guard offers much more than storage remote-mirroring when your objective is protecting your enterprise from downtime and data loss. Like remote-mirroring, an Active Data Guard replica is an exact, block-for-block copy of the primary. Unlike remote-mirroring, an Active Data Guard replica is NOT a tightly coupled copy of the source volumes - it is a completely independent Oracle Database. Active Data Guard’s inherent knowledge of Oracle data block and redo structures enables a separate Oracle Database, using a different Oracle code path than the primary, to apply the full complement of Oracle data validation methods before changes are applied to the synchronized copy. These include: physical checksums, logical intra-block checking, lost-write validation, and automatic block repair. The figure below illustrates the stark difference between what remote-mirroring can discern from an Oracle data block and what Active Data Guard can discern.
An Active Data Guard standby also provides a range of additional services enabled by the fact that it is a running Oracle Database - not just a mirrored copy of data files. An Active Data Guard standby database can be open read-only while it is synchronizing with the primary. This enables read-only workloads to be offloaded from the primary system and run on the active standby - boosting performance by utilizing all assets. An Active Data Guard standby can also be used to implement many types of system and database maintenance in rolling fashion. Maintenance and upgrades are first implemented on the standby while production runs unaffected at the primary. After the primary and standby are synchronized and all changes have been validated, the production workload is quickly switched to the standby. The only downtime is the time required for user connections to transfer from one system to the next. These capabilities further expand the expectations of availability offered by a data protection solution beyond what is possible using storage remote-mirroring.
So don’t be fooled by appearances. Storage remote-mirroring and Active Data Guard replication may look similar on the surface - but the devil is in the details. Only Active Data Guard has the smarts, the isolation, and the simplicity to provide the best data protection and availability for the Oracle Database. Stay tuned for future blog posts that dive into the many differences between storage remote-mirroring and Active Data Guard along the dimensions of data protection, data availability, cost, asset utilization and return on investment.
For additional information on Active Data Guard, see the Active Data Guard Technical White Paper, Active Data Guard vs Storage Remote-Mirroring, and the Active Data Guard Home Page on the Oracle Technology Network.
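To make the “running Oracle Database, not just mirrored data files” point concrete, here is a minimal monitoring sketch (not part of the original post). It assumes the python-oracledb driver and a reachable physical standby; the connect string, user name and password are hypothetical placeholders. The script checks that the standby is open read-only while redo apply runs (the defining property of an Active Data Guard standby) and reports the block-validation parameters discussed above.

    # Minimal sketch, assuming the python-oracledb driver; the service name,
    # user and password used below are hypothetical and must be adapted.
    import oracledb

    def check_active_standby(dsn: str, user: str, password: str) -> None:
        with oracledb.connect(user=user, password=password, dsn=dsn) as conn:
            cur = conn.cursor()

            # An Active Data Guard standby reports DATABASE_ROLE = 'PHYSICAL STANDBY'
            # and OPEN_MODE = 'READ ONLY WITH APPLY' (open read-only, redo apply running).
            cur.execute("SELECT database_role, open_mode FROM v$database")
            role, open_mode = cur.fetchone()
            print(f"role={role}, open_mode={open_mode}")

            # Validation-related initialization parameters mentioned in the post:
            # checksums, lost-write protection and logical block checking.
            cur.execute(
                "SELECT name, value FROM v$parameter "
                "WHERE name IN ('db_block_checksum', 'db_lost_write_protect', 'db_block_checking')"
            )
            for name, value in cur.fetchall():
                print(f"{name} = {value}")

    if __name__ == "__main__":
        # Hypothetical connection details for the standby database.
        check_active_standby("standby-host/STBYPDB", "monitor_user", "example_password")

Seeing OPEN_MODE report READ ONLY WITH APPLY is precisely what a storage-mirrored copy cannot offer: the mirrored volumes are not a running database and cannot be queried, validated or exercised while they are being synchronized.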

    Read the article

  • Day 1 - Finding Like Minds

    - by dapostolov
    So, is being a Game Developer any different from being an IT Developer? I picture a poorly lit environment where I get to purchase my own desk lamp; I'm thinking one of those huge lava lamps that pump out so much heat you could fry an egg on it. To my right: a "great wall" of empty coke cans dwarfs me. Eating my last slice of pizza, I look across my desk to see a fellow developer with a smug look on his face; he's just coded his latest module for the game and it looks like he's in nirvana. My duty, of course, is to remind him to keep focused on the job at hand. So, picking up my trusty elastic and an aerodynamically crafted paper bullet, I begin a 10-minute war of welts and laughter which is promptly interrupted by our Project Manager demanding more details from our morning Scrum meeting. After providing about 5 minutes of geek speak and several words of comfort to make his eyes glaze over...it hits me, the idea for the module...beckoning my developer friend over, we quickly shoo the Project Manager away and begin our brainstorming frenzy ... now, where'd I put that full can of coke?
OK. OK. This probably isn't the most ideal game developer environment, but it definitely sounds fun to me...and from what I gather it is nothing like most game development companies. But I'm not doing this blog series to "go pro"; like I stated in my first post, I want to make a 2D game from an idea my best friend and I drummed up long, long ago. I'm in this for the passion AND I want to see how easy it is for us .Net Developers to create a game. So where do I start? Where can I find like-minded individuals? What technologies are there? What do I need to make a video game? The questions are endless....AND...since I already have an idea ... let's start with ... Technology (yes, I'm a geek, live with it...).
Technology
OK. Predominantly, games are still made in C++ or even C. I'm not sure how much assembly code is floating around lately; however, that is not my concern. I do know C / C++ from my past, enough to get me by, but I'm mainly interested in a recent, not-so-new technology called XNA. What is XNA? XNA allows us .Net Developers to make 2D / 3D games for Windows, Xbox*, and Windows Phone 7*. * = for a nominal fee *cough*
The following link is your one-stop shop for XNA game development: http://creators.xna.com/en-US/education/gettingstarted
The above site hosts information such as:
- getting started
- a sample/instructional shooter game in 2D / 3D with code (if I'm taking too long for you in this blog series)
- downloads
- starter kits... http://creators.xna.com/en-US/education/starterkits/
And of course...forums. You can also subscribe and pay for their premium membership, which gets you some pretty awesome tutorials, resources, downloads, and premium community support. Some general wiki information about XNA: http://en.wikipedia.org/wiki/XNA_%28Microsoft%29
Community Support
OK. Let's move on to industry and community support. Apart from XNA, there are some really cool sites out there; I just haven't found all of them yet. However, I found a really cool game development website called Gamasutra. You can click on the following link to get there: http://www.gamasutra.com/ The site is 100% dedicated to "The Art & Business of Making Games". Armed with blogs, Twitter, jobs/resumes and, most importantly, industry news, one could subscribe to the feed and get lost in the wealth of information it provides.
On a side note: I remember Gamasutra being around when my best friend and I wanted to make a video game...meaning they've been around for a while now. I think the most beneficial aspect of the site is helping you understand the industry you want to get into. Otherwise, it's just a cool site to keep up to date with the industry in general.
Another community support option is LinkedIn. Amongst the land of extremely bloated achievements and responsibilities lie three groups (that I have found) that deal with game development:
http://www.linkedin.com/groups?gid=59205 - Game Developers
http://www.linkedin.com/groups?gid=824817 - DirectX Game Developer Network
http://www.linkedin.com/groups?gid=756587 - DirectX Developers
The Game Developers group on LinkedIn is by far the most active of the three and could possibly provide a wealth of support.
What I've done thus far:
- I lightly researched the XNA technology
- I looked around for some community sites to assist me
- I downloaded XNA Game Studio 3.1 to my PC and installed it into my IDE
- I even tried both tutorials! http://creators.xna.com/en-US/education/gettingstarted/bgintro/chapter1
Best Regards, D.

    Read the article
