Search Results

Search found 18581 results on 744 pages for 'oracle openworld 2012'.


  • Screen Casting using ffmpeg (too fast)

    - by rowman
    I can use ffmpeg to make screen casts:

    ffmpeg -f x11grab -s 1280x800 -i :0.0 -c:v libx264 -framerate 30 -r 30 -crf 18 out.mkv

    However, the output comes out too fast-paced. The same thing happens with GTK RecordMyDesktop if I enable encode-on-the-fly. So the question is: how do I get a normal video pace? Also, which option should be used to capture the sound with ffmpeg? FFmpeg output:

    ffmpeg -f x11grab -s 1280x800 -r 30 -i :0.0 -c:v libx264 -framerate 30 -r 30 -crf 18 out.mkv
    ffmpeg version N-35162-g87244c8 Copyright (c) 2000-2012 the FFmpeg developers
      built on Oct 7 2012 15:56:19 with gcc 4.6 (Ubuntu/Linaro 4.6.3-1ubuntu5)
      configuration: --enable-gpl --enable-libfaac --enable-libfdk-aac --enable-libmp3lame --enable-libopencore-amrnb --enable-libopencore-amrwb --enable-librtmp --enable-libtheora --enable-libvorbis --enable-libvpx --enable-x11grab --enable-libx264 --enable-nonfree --enable-version3
      libavutil      51. 73.102 / 51. 73.102
      libavcodec     54. 64.100 / 54. 64.100
      libavformat    54. 29.105 / 54. 29.105
      libavdevice    54.  3.100 / 54.  3.100
      libavfilter     3. 19.102 /  3. 19.102
      libswscale      2.  1.101 /  2.  1.101
      libswresample   0. 16.100 /  0. 16.100
      libpostproc    52.  1.100 / 52.  1.100
    [x11grab @ 0xab896a0] device: :0.0 -> display: :0.0 x: 0 y: 0 width: 1280 height: 800
    [x11grab @ 0xab896a0] shared memory extension found
    [x11grab @ 0xab896a0] Estimating duration from bitrate, this may be inaccurate
    Input #0, x11grab, from ':0.0':
      Duration: N/A, start: 1350136942.608988, bitrate: 983040 kb/s
        Stream #0:0: Video: rawvideo (BGR[0] / 0x524742), bgr0, 1280x800, 983040 kb/s, 30 tbr, 1000k tbn, 30 tbc
    [libx264 @ 0xab87320] using cpu capabilities: MMX2 SSE2Fast SSSE3 Cache64 SlowCTZ SlowAtom
    [libx264 @ 0xab87320] profile High 4:4:4 Predictive, level 3.2, 4:4:4 8-bit
    [libx264 @ 0xab87320] 264 - core 128 r2 198a7ea - H.264/MPEG-4 AVC codec - Copyleft 2003-2012 - http://www.videolan.org/x264.html - options: cabac=1 ref=3 deblock=1:0:0 analyse=0x3:0x113 me=hex subme=7 psy=1 psy_rd=1.00:0.00 mixed_ref=1 me_range=16 chroma_me=1 trellis=1 8x8dct=1 cqm=0 deadzone=21,11 fast_pskip=1 chroma_qp_offset=4 threads=6 lookahead_threads=1 sliced_threads=0 nr=0 decimate=1 interlaced=0 bluray_compat=0 constrained_intra=0 bframes=3 b_pyramid=2 b_adapt=1 b_bias=0 direct=1 weightb=1 open_gop=0 weightp=2 keyint=250 keyint_min=25 scenecut=40 intra_refresh=0 rc_lookahead=40 rc=crf mbtree=1 crf=18.0 qcomp=0.60 qpmin=0 qpmax=69 qpstep=4 ip_ratio=1.40 aq=1:1.00
    Output #0, matroska, to 'out.mkv':
      Metadata:
        encoder         : Lavf54.29.105
        Stream #0:0: Video: h264, yuv444p, 1280x800, q=-1--1, 1k tbn, 30 tbc
    Stream mapping:
      Stream #0:0 -> #0:0 (rawvideo -> libx264)
    Press [q] to stop, [?] for help
    frame=   10 fps=0.0 q=0.0  size=       1kB time=00:00:00.00 bitrate=   0.0kbits/s
    frame=   19 fps= 17 q=0.0  size=       1kB time=00:00:00.00 bitrate=   0.0kbits/s
    frame=   28 fps= 17 q=0.0  size=       1kB time=00:00:00.00 bitrate=   0.0kbits/s
    frame=   37 fps= 17 q=0.0  size=       1kB time=00:00:00.00 bitrate=   0.0kbits/s
    frame=   45 fps= 16 q=0.0  size=       1kB time=00:00:00.00 bitrate=   0.0kbits/s
    frame=   47 fps= 14 q=0.0  size=       1kB time=00:00:00.00 bitrate=   0.0kbits/s
    frame=   52 fps= 13 q=24.0 size=     257kB time=00:00:00.00 bitrate=2101632.0kbits/s
    frame=   55 fps= 12 q=24.0 size=     257kB time=00:00:00.10 bitrate=20808.2kbits/s
    frame=   59 fps= 11 q=24.0 size=     289kB time=00:00:00.23 bitrate=10145.0kbits/s
    frame=   64 fps= 11 q=24.0 size=     289kB time=00:00:00.40 bitrate=5894.7kbits/s
    frame=   70 fps= 11 q=24.0 size=     289kB time=00:00:00.60 bitrate=3933.1kbits/s
    frame=   72 fps= 10 q=24.0 size=     289kB time=00:00:00.66 bitrate=3549.2kbits/s
    frame=   77 fps=9.8 q=24.0 size=     289kB time=00:00:00.83 bitrate=2837.7kbits/s
    frame=   80 fps=9.6 q=24.0 size=     289kB time=00:00:00.93 bitrate=2533.5kbits/s
    frame=   85 fps=9.3 q=24.0 size=     289kB time=00:00:01.10 bitrate=2146.9kbits/s
    frame=   89 fps=9.3 q=24.0 size=     289kB time=00:00:01.23 bitrate=1917.1kbits/s
    frame=   92 fps=9.1 q=24.0 size=     289kB time=00:00:01.33 bitrate=1773.3kbits/s
    frame=   96 fps=9.0 q=24.0 size=     289kB time=00:00:01.46 bitrate=1612.4kbits/s
    frame=   99 fps=8.8 q=24.0 size=     321kB time=00:00:01.56 bitrate=1676.8kbits/s
    frame=  104 fps=8.7 q=24.0 size=     321kB time=00:00:01.73 bitrate=1515.2kbits/s
    frame=  109 fps=5.3 q=24.0 Lsize=    1093kB time=00:00:03.56 bitrate=2511.5kbits/s
    video:1092kB audio:0kB subtitle:0 global headers:0kB muxing overhead 0.120198%
    [libx264 @ 0xab87320] frame I:3   Avg QP:18.93 size:142610
    [libx264 @ 0xab87320] frame P:43  Avg QP:20.79 size: 15751
    [libx264 @ 0xab87320] frame B:63  Avg QP:23.75 size:   195
    [libx264 @ 0xab87320] consecutive B-frames: 21.1% 1.8% 11.0% 66.1%
    [libx264 @ 0xab87320] mb I  I16..4: 50.0% 21.1% 28.9%
    [libx264 @ 0xab87320] mb P  I16..4: 6.1% 0.9% 3.2%  P16..4: 5.5% 1.2% 0.6% 0.0% 0.0%  skip:82.5%
    [libx264 @ 0xab87320] mb B  I16..4: 0.4% 0.1% 0.0%  B16..8: 2.9% 0.1% 0.0%  direct: 0.0%  skip:96.5%  L0:40.7% L1:57.0% BI: 2.3%
    [libx264 @ 0xab87320] 8x8 transform intra:14.5% inter:46.1%
    [libx264 @ 0xab87320] coded y,u,v intra: 33.5% 24.1% 25.4% inter: 0.9% 0.4% 0.4%
    [libx264 @ 0xab87320] i16 v,h,dc,p: 70% 26%  1%  3%
    [libx264 @ 0xab87320] i8 v,h,dc,ddl,ddr,vr,hd,vl,hu: 11% 21% 30%  5%  7%  5%  7%  4% 10%
    [libx264 @ 0xab87320] i4 v,h,dc,ddl,ddr,vr,hd,vl,hu: 32% 35% 12%  2%  4%  3%  4%  3%  5%
    [libx264 @ 0xab87320] Weighted P-Frames: Y:0.0% UV:0.0%
    [libx264 @ 0xab87320] ref P L0: 57.0%  5.6% 26.8% 10.6%
    [libx264 @ 0xab87320] ref B L0: 69.4% 22.6%  8.0%
    [libx264 @ 0xab87320] ref B L1: 93.7%  6.3%
    [libx264 @ 0xab87320] kb/s:2460.40
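
    A likely fix, offered as a sketch rather than a definitive answer: with x11grab the capture rate has to be given as an input option (-r 30 placed before -i :0.0), and the encoder has to keep up with that rate - the log above shows it managing only ~10-17 fps, which is what makes the recording play back too fast. libx264's -preset ultrafast lowers the encoding load, and sound can be captured by adding a second input (here the PulseAudio default device via ALSA; adjust to your setup):

    # Grab at 30 fps (-r before -i makes it an input option) plus PulseAudio sound;
    # the ultrafast preset helps the encoder keep pace with the capture rate.
    ffmpeg -f alsa -i pulse \
           -f x11grab -r 30 -s 1280x800 -i :0.0 \
           -c:v libx264 -preset ultrafast -crf 18 \
           -c:a libmp3lame -q:a 4 out.mkv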

    Read the article

  • How to structure a Visual Studio project for the data access layer

    - by Akk
    I currently have a project that uses various DB access technologies, mainly for showcasing or demos. Currently we have:

    Namespace App.Data (App.Data.dll)
        Folder NHibernate
        Folder EntityFramework
        Folder LinqToSql

    The above structure is OK as we only use SQL Server as the DB. But going forward we will be including Oracle, MySql, etc. So what would be a better structure with this in mind? I thought about:

    Namespace App.Data.SqlServer (App.Data.SqlServer.dll)
        Folder NHibernate
        Folder EntityFramework
        Folder LinqToSql

    Or would it just be better to have separate assemblies for each database and access technology?

    Namespace App.Data.SqlServer.NHibernate (App.Data.SqlServer.NHibernate.dll)
    Namespace App.Data.SqlServer.EntityFramework (App.Data.SqlServer.EntityFramework.dll)
    Namespace App.Data.Oracle.NHibernate (App.Data.Oracle.NHibernate.dll)
    Namespace App.Data.MySql.NHibernate (App.Data.MySql.NHibernate.dll)

    Read the article

  • Couldn't run loadjava on user schema to load dbwsclientws.jar dbwsclientdb102.jar

    - by padmaja
    I am trying to load the Oracle web service client jars into my schema. I set the PATH to include /u01/app/oracle/product/10.2.0/db_1/bin. When I try to run loadjava as:

    loadjava -u myschema/myschemapwd -r -v -f -genmissing dbwsclientws.jar dbwsclientdb102.jar

    I am getting the error: Exception in thread "main" java.lang.NoClassDefFoundError: oracle/aurora/server/tools/loadjava/LoadJavaMain. Does it mean that the JVM is not set up on the box? How can I check whether the JVM is enabled or not? I am running it on Oracle 10g in a UNIX environment. Any help with the issue is greatly appreciated.
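
    To answer the "how can I check if the JVM is enabled" part, one option is to query the component registry as a DBA and confirm the JServer component is VALID. (The NoClassDefFoundError itself more often means the loadjava tool classes are missing from the client-side classpath than that the database JVM is absent - offered here as a hedged pointer, not a definitive diagnosis.)

    -- Check whether the database JVM (JServer) is installed and VALID;
    -- loadjava needs a working database JVM to load classes into a schema.
    SELECT comp_name, version, status
      FROM dba_registry
     WHERE comp_name LIKE '%JAVA%';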

    Read the article

  • Do you know of a C macro to compute Unix time and date?

    - by Alexis Wilke
    I'm wondering if someone knows of, or has, a C macro to compute a static Unix time from a hard-coded date and time, as in:

    time_t t = UNIX_TIMESTAMP(2012, 5, 10, 9, 26, 13);

    I'm looking into this because I want a numeric static timestamp. This will be done hundreds of times throughout the software, each time with a different date, and I want to make sure it is fast because it will run hundreds of times every second. Converting dates that many times would definitely slow things down (i.e. calling mktime() is slower than having a static number compiled in place, right?) [made an update to try to render this paragraph clearer, Nov 23, 2012]

    Update

    I want to clarify the question with more information about the process being used. As my server receives requests, it starts a new process for each one. That process is constantly updated with new plugins, and quite often such updates require a database update. Those must be run only once. To know whether an update is necessary, I want to use a Unix date (which is better than using a counter, because a counter is much more likely to break once in a while). The plugins will thus receive an update signal and have their on_update() function called. There I want to do something like this:

    void some_plugin::on_update(time_t last_update)
    {
        if(last_update < UNIX_TIMESTAMP(2010, 3, 22, 20, 9, 26))
        {
            ...run update...
        }
        if(last_update < UNIX_TIMESTAMP(2012, 5, 10, 9, 26, 13))
        {
            ...run update...
        }
        // as many tests as required...
    }

    As you can see, if I have to compute the Unix timestamp each time, this could represent thousands of calls per process, and if you receive 100 hits a second x 1000 calls, you have wasted 100,000 calls when the compiler could have computed those numbers once at compile time. Putting the value in a static variable is of no interest because this code will run once per process run. Note that the last_update variable changes depending on the website being hit (it comes from the database).

    Code

    Okay, I got the code now:

    // helper (days in February)
    #define _SNAP_UNIX_TIMESTAMP_FDAY(year) \
        (((year) % 400) == 0 ? 29LL : \
            (((year) % 100) == 0 ? 28LL : \
                (((year) % 4) == 0 ? 29LL : \
                    28LL)))

    // helper (day of the year; each term adds the length of the preceding month)
    #define _SNAP_UNIX_TIMESTAMP_YDAY(year, month, day) \
        ( \
            /* January */   static_cast<qint64>(day) \
            /* February */  + ((month) >=  2 ? 31LL : 0LL) \
            /* March */     + ((month) >=  3 ? _SNAP_UNIX_TIMESTAMP_FDAY(year) : 0LL) \
            /* April */     + ((month) >=  4 ? 31LL : 0LL) \
            /* May */       + ((month) >=  5 ? 30LL : 0LL) \
            /* June */      + ((month) >=  6 ? 31LL : 0LL) \
            /* July */      + ((month) >=  7 ? 30LL : 0LL) \
            /* August */    + ((month) >=  8 ? 31LL : 0LL) \
            /* September */ + ((month) >=  9 ? 31LL : 0LL) \
            /* October */   + ((month) >= 10 ? 30LL : 0LL) \
            /* November */  + ((month) >= 11 ? 31LL : 0LL) \
            /* December */  + ((month) >= 12 ? 30LL : 0LL) \
        )

    #define SNAP_UNIX_TIMESTAMP(year, month, day, hour, minute, second) \
        ( /* time */ static_cast<qint64>(second) \
            + static_cast<qint64>(minute) * 60LL \
            + static_cast<qint64>(hour) * 3600LL \
            + /* year day (month + day) */ (_SNAP_UNIX_TIMESTAMP_YDAY(year, month, day) - 1) * 86400LL \
            + /* year */ (static_cast<qint64>(year) - 1970LL) * 31536000LL \
            + ((static_cast<qint64>(year) - 1969LL) / 4LL) * 86400LL \
            - ((static_cast<qint64>(year) - 1901LL) / 100LL) * 86400LL \
            + ((static_cast<qint64>(year) - 1601LL) / 400LL) * 86400LL )

    WARNING: Do not use these macros to dynamically compute a date. It is SLOWER than mktime().
This being said, if you have a hard coded date, then the compiler will compute the time_t value at compile time. Slower to compile, but faster to execute over and over again.
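
    A quick usage sketch, assuming the macros above are in scope and qint64 is available (via Qt, as in the original code):

    // The whole expression folds into a single integer constant at compile time.
    // For 2012-05-10 09:26:13 UTC it should evaluate to 1336641973
    // (cross-check with: date -u -d '2012-05-10 09:26:13' +%s).
    static const qint64 may_10_2012 = SNAP_UNIX_TIMESTAMP(2012, 5, 10, 9, 26, 13);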

    Read the article

  • Social Shopping

    - by David Dorf
    I've written about various breeds of social shopping in the past, so I decided to give some thought to a categorization with examples. Below I've listed the different types of social shopping I've observed and some companies that support them. Comments and Ratings -- Commenting on products has been around almost as long as e-commerce. Two popular players in this space are BazaarVoice and PowerReviews. Most shoppers prefer relying on peer reviews rather than retailer descriptions, so the influence over sales is very strong. f-commerce -- A new term that was sure to rear its ugly head when retailers started allowing shopping on Facebook. And it's all Elastic Path and Alvenda's fault! Co-shopping -- Retailers like Wet Seal are enabling multiple people to shop together online. This is particularly applicable to fashion, where the real-time exchange of opinions is important. I actually tried this with a co-worker and it's pretty cool. Bragging -- Blippy is Twitter for shoppers, allowing purchases to be "tweeted" so you can keep up with your friends. I get alerted when friends download music or apps from iTunes because chances are I'll be interested as well. This covert influence is one-upped by Snatter, a service that gives people discounts for tweeting or posting promotions from retailers. This is the petri dish of viral marketing. Advice -- Combine the bragging of Blippy and the opinions from BazaarVoice and you'd get ShopSocially, a social network dedicated to spreading product knowledge amongst informed shoppers. I'm sure if I gave it more thought, a few more types would come to mind, but I've got to get back to work. Now is not the time to be blogging at Oracle!

    Read the article

  • Whitepaper: The Socially Enabled Enterprise

    - by Richard Lefebvre
    Sharing the results of our new executive study, which explored the opportunities and challenges global organizations are facing in the transition to becoming socially enabled enterprises. Oracle, Leader Networks, and Social Media Today recently conducted an online survey of over 900 Marketing and IT executives to understand how companies are leveraging social technologies and practices throughout their organizations. Read Now!

    Read the article

  • Upgrade 10g Osso to 11g OAM (Part 2)

    - by Pankaj Chandiramani
    This is part 2 of http://blogs.oracle.com/pankaj/2010/11/upgrade_10g_osso_to_11g_oam.html

    In the last post we saw an overview of upgrading OSSO to OAM 11g; now some more details on the same. As we are using the co-existence feature, we have to install the OAM server and upgrade the existing OSSO 10g server to the OAM servers.

    OAM Upgrade Steps Overview:
    Pre-req: you already have OAM 11g installed.
    Upgrade Step 1: Configure the user store and make it primary.
    Upgrade Step 2: Create the policy domain; this is done by the Upgrade Assistant (UA) automatically.
    Upgrade Step 3: Migrate partners; this is done by running the Upgrade Assistant.
    Then verify the upgrade was successful.

    Details on the UA step: to upgrade the existing OSSO 10g servers to the OAM server, run the UA script in OAM, which copies over all the partner app details from OSSO to OAM 11g. The script is named run_ua.sh; it will ask you to input the Policies.properties file from the SSO $OH/sso/config folder of OSSO 10g, plus other variables like the DB password.

    Some pointers: Upgrading OSSO to OAM 11g by default enables the co-existence mode on the OAM server. Front-end the OAM server with the same load balancer that is the front end of the OSSO 10g servers. Now OAM and OSSO 10g servers are working in co-existence mode: OAM 11g is made to understand the 10g OSSO token format and session handling capabilities so as to co-exist with the 10g OSSO servers.

    How to test? Try to access the partner applications and verify that single sign-on works. Also verify that the user does not have to log in again if already authenticated by either the OAM or the OSSO 10g server. Screenshots and troubleshooting tips to follow...

    Read the article

  • SharePoint 2010 MSDN Labs

    - by Kelly Jones
    Eric Ligman, from Microsoft, posted a great blog post this week listing all of the SharePoint 2010 Virtual Labs that are available from Microsoft.  His blog entry is here: http://blogs.msdn.com/b/mssmallbiz/archive/2012/03/13/sharepoint-server-2010-msdn-virtual-labs-available-to-you-online-plus-more-sharepoint-2010-resources.aspx He also posted other resources as well. I’ve copied his Virtual Lab links here:

    SharePoint Server 2010 Virtual Labs:
    - MSDN Virtual Lab: SharePoint Server 2010: Introduction
    - MSDN Virtual Lab: Getting Started with SharePoint 2010
    - MSDN Virtual Lab: SharePoint 2010 User Interface Advancements
    - MSDN Virtual Lab: SharePoint Server 2010 Connectors & Using the Business Data Connectivity (BDC) Service
    - MSDN Virtual Lab: SharePoint Server 2010: Advanced Search Security
    - MSDN Virtual Lab: SharePoint Server 2010: Configuring Search UIs
    - MSDN Virtual Lab: SharePoint Server 2010: Content Processing and Property Extraction
    - MSDN Virtual Lab: SharePoint Server 2010: Developing a Custom Connector
    - MSDN Virtual Lab: SharePoint Server 2010: Fast Search Web Crawler
    - MSDN Virtual Lab: SharePoint Server 2010: Federated Search
    - MSDN Virtual Lab: SharePoint Server 2010: Linguistics
    - MSDN Virtual Lab: SharePoint Server 2010: People Search Administration and Management
    - MSDN Virtual Lab: SharePoint Server 2010: Relevancy and Ranking
    - MSDN Virtual Lab: Customizing MySites
    - MSDN Virtual Lab: Designing Lists and Schemas
    - MSDN Virtual Lab: Developing a BCS External Content Type with Visual Studio 2010
    - MSDN Virtual Lab: Developing a Sandboxed Solution with Web Parts
    - MSDN Virtual Lab: Developing a Visual Web Part in Visual Studio 2010
    - MSDN Virtual Lab: Developing Business Intelligence Applications
    - MSDN Virtual Lab: Enterprise Content Management
    - MSDN Virtual Lab: LINQ to SharePoint 2010
    - MSDN Virtual Lab: Visual Studio SharePoint Tools
    - MSDN Virtual Lab: Workflow

    In addition to the SharePoint Server 2010 Virtual Labs, here are a few other SharePoint 2010 resources that I thought you might also be interested in:
    - Technical reference for Microsoft SharePoint Server 2010
    - SharePoint 2010: IT Pro Evaluation Guide
    - Connecting SharePoint 2010 to Line-of-Business Systems to Deliver Business-Critical Solutions
    - Configure SharePoint Server 2010 as a Single Server with Microsoft SQL Server: Test Lab Guide
    - Microsoft SQL Server 2012 Reporting Services Add-in for Microsoft SharePoint Technologies 2010
    - Deploying FAST Search Server 2010 for SharePoint
    - FAST Search Server 2010 for SharePoint: Add or Remove an Index Column
    - Upgrade worksheet for SharePoint Server 2010
    - Microsoft SharePoint Server 2010 Technical Library in Compiled Help format
    - Microsoft SharePoint Foundation 2010 Technical Library in Compiled Help format
    - Microsoft FAST Search Server 2010 for SharePoint Technical Library in Compiled Help format
    - Microsoft Reseller partner Learning Path
    - Microsoft solutions partners and ISVs Learning Path
    - Microsoft partner Practice Accelerator for SharePoint
    - Microsoft partner SharePoint 2010 Internal Use Licenses
    - SharePoint Case Studies
    - SharePoint MSDN Forums
    - SharePoint TechNet Forums
    - Microsoft SharePoint 2010 page on Microsoft Partner Network portal

    Read the article

  • WebCenter Spaces 11g - UI Customization

    - by john.brunswick
    When developing on top of a portal platform to support an intranet or extranet, a portion of the development time is spent adjusting the out-of-box user templates to adjust the look and feel of the platform for your organization. Generally your deployment will not need to look anything like the sites posted on http://cssremix.com/ or http://www.webcreme.com/, but will meet business needs by adjusting basic elements like navigation, color palette and logo placement. After spending some time doing custom UI development with WebCenter Spaces 11g, I have gathered a few tips that I hope can help to speed anyone's efforts to quickly "skin" a WebCenter Spaces deployment. A detailed white paper was released that outlines a technique to quickly update the UI during runtime: http://www.oracle.com/technology/products/webcenter/pdf/owcs_r11120_cust_skins_runtime_wp.pdf. Customizing at "runtime" means using CSS and images to adjust the page layout and feel, which, when creatively done, can change the pages drastically. WebCenter also allows for detailed templates to manage the placement of major page elements like menus, sidebars, etc., but by adjusting only images and CSS we can end up with something like the custom solution shown in the screenshot accompanying the original post. Let's dive right in and take a look at some tools to make our efforts more efficient.
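
    As a flavor of the kind of runtime CSS tweak the white paper walks through - an illustrative sketch only, with placeholder selector and image names rather than real WebCenter skin selectors:

    /* Illustrative only: re-brand the banner via CSS swapped in at runtime. */
    .MyBrandingBar {
        background-color: #2a4d69;                          /* corporate banner color */
    }
    .MyBrandingLogo {
        background-image: url(/mycompany/images/logo.png);  /* placeholder logo path */
        background-repeat: no-repeat;
        width: 180px;
        height: 48px;
    }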

    Read the article

  • DBA Command line options

    - by Anthony Shorten
    There are a number of database utilities supplied with the installation of Oracle Utilities Application Framework based products. These are typically run in interactive mode, where the utility prompts you for the values and then executes the required functionality. Did you know that the utilities also have command line options that allow you to run them in silent mode as well? You can access the command line options by specifying the -h option on the command line. Here is an example of the oragensec command line options:

    oragensec -d <Owner,OwnerPswd,DbName> -u <Database Users> -r <ReadRole,UserRole> -l <logfile> -h

    where:
        -d <Owner,OwnerPswd,DbName>   Database connect information for the target database, e.g. spladm,spladm,DB200ODB.
        -u <Database Users>           A comma-separated list of database users where synonyms need to be created, e.g. spluser,splread.
        -r <ReadRole,UserRole>        Optional. Names of database roles with read and read-write privileges. Default roles are SPL_READ, SPL_USER, e.g. spl_read,spl_user.
        -l <logfile>                  Optional. Name of the log file.
        -h                            Help.

    The command line options allow the DBA to automate the execution, either via a script or via a scheduling utility. This option applies to the majority of DBA utilities supplied with the product. Take a look at the others.
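
    For example, the documented options above can be combined into a single silent invocation (using the sample values from the help text):

    oragensec -d spladm,spladm,DB200ODB -u spluser,splread -r spl_read,spl_user -l oragensec.log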

    Read the article

  • TOTD #166: Using NoSQL database in your Java EE 6 Applications on GlassFish - MongoDB for now!

    - by arungupta
    The Java EE 6 platform includes the Java Persistence API to work with RDBMS. The JPA specification defines a comprehensive API that includes, but is not restricted to, how a database table can be mapped to a POJO and vice versa, and provides mechanisms for how a PersistenceContext can be injected into a @Stateless bean and then used for performing different operations on the database table and writing typesafe queries. There are several well-known advantages of RDBMS, but the NoSQL movement has gained traction over the past couple of years. The NoSQL databases are not intended to be a replacement for the mainstream RDBMS. As the Philosophy of NoSQL explains, NoSQL databases were designed for casual use where all the features typically provided by an RDBMS are not required. The name "NoSQL" is more of a category of databases that is better known for what it is not rather than what it is. The basic principles of NoSQL databases are:
    - No need to have a pre-defined schema, which makes them schema-less databases. Adding new properties to existing objects is easy and does not require ALTER TABLE. The unstructured data gives flexibility to change the format of the data at any time without downtime or reduced service levels. Also, there are no joins happening on the server, because there is no structure and thus no relations between objects.
    - Scalability and performance are more important than the entire set of functionality typically provided by an RDBMS. These databases provide eventual consistency and/or transactions restricted to single items, with more focus on CRUD.
    - Not restricted to SQL to access the information stored in the backing database.
    - Designed to scale out (horizontally) instead of up (vertically). This is important, knowing that databases, and everything else as well, are moving into the cloud. RDBMS can scale out using sharding, but that requires complex management and is not for the faint of heart.
    - Unlike RDBMS, which require a separate caching tier, most NoSQL databases come with integrated caching.
    - Designed for less management; simpler data models lead to lower administration overhead as well.

    There are primarily three types of NoSQL databases:
    - Key-value stores (e.g. Cassandra and Riak)
    - Document databases (MongoDB or CouchDB)
    - Graph databases (Neo4J)

    You may think NoSQL is a panacea, but as I mentioned above they are not meant to replace the mainstream databases, and here is why:
    - RDBMS have been around for many years, are very stable, and are functionally rich. This is something CIOs and CTOs can bet their money on without much worry. There is a reason 98% of Fortune 100 companies run Oracle :-) NoSQL is cutting edge and brings excitement to developers, but enterprises are cautious about them.
    - Commercial databases like Oracle are well supported by the backing enterprises in terms of providing support resources on a global scale. There is a full ecosystem built around these commercial databases providing training, performance tuning, architecture guidance, and everything else. NoSQL is fairly new and typically backed by a single company not able to meet the scale of these big enterprises.
    - NoSQL databases are good for CRUD operations, but business intelligence is extremely important for enterprises to stay competitive. RDBMS provide extensive tooling to generate this data, but that was not the original intention of NoSQL databases, which are lacking in that area. Generating any meaningful information beyond CRUD requires extensive programming.
    - Not suited for complex transactions such as banking systems or other highly transactional applications requiring two-phase commit.
    - SQL cannot be used with NoSQL databases, and writing even simple queries can be involved.

    Enough talking, let's take a look at some code. This blog has published multiple posts on how to access an RDBMS using JPA in a Java EE 6 application. This Tip Of The Day (TOTD) will show how you can use MongoDB (a document-oriented database) with a typical 3-tier Java EE 6 application. Let's get started!

    The complete source code of this project can be downloaded here. Download MongoDB for your platform from here (1.8.2 as of this writing) and start the server as:

    arun@ArunUbuntu:~/tools/mongodb-linux-x86_64-1.8.2/bin$ ./mongod
    ./mongod --help for help and startup options
    Sun Jun 26 20:41:11 [initandlisten] MongoDB starting : pid=11210 port=27017 dbpath=/data/db/ 64-bit
    Sun Jun 26 20:41:11 [initandlisten] db version v1.8.2, pdfile version 4.5
    Sun Jun 26 20:41:11 [initandlisten] git version: 433bbaa14aaba6860da15bd4de8edf600f56501b
    Sun Jun 26 20:41:11 [initandlisten] build sys info: Linux bs-linux64.10gen.cc 2.6.21.7-2.ec2.v1.2.fc8xen #1 SMP Fri Nov 20 17:48:28 EST 2009 x86_64 BOOST_LIB_VERSION=1_41
    Sun Jun 26 20:41:11 [initandlisten] waiting for connections on port 27017
    Sun Jun 26 20:41:11 [websvr] web admin interface listening on port 28017

    The default directory for the database is /data/db and needs to be created as:

    sudo mkdir -p /data/db/
    sudo chown `id -u` /data/db

    You can specify a different directory using the "--dbpath" option. Refer to the Quickstart for your specific platform.

    Using NetBeans, create a Java EE 6 project and make sure to enable CDI and add the JavaServer Faces framework. Download the MongoDB Java Driver (2.6.3 as of this writing) and add it to the project library by selecting "Properties", "Libraries", "Add Library...", creating a new library by specifying the location of the JAR file, and adding the library to the created project.

    Edit the generated "index.xhtml" such that it looks like:

    <h1>Add a new movie</h1>
    <h:form>
        Name: <h:inputText value="#{movie.name}" size="20"/><br/>
        Year: <h:inputText value="#{movie.year}" size="6"/><br/>
        Language: <h:inputText value="#{movie.language}" size="20"/><br/>
        <h:commandButton actionListener="#{movieSessionBean.createMovie}"
                         action="show" title="Add" value="submit"/>
    </h:form>

    This page has a simple HTML form with three text boxes and a submit button. The text boxes take the name, year, and language of a movie, and the submit button invokes the "createMovie" method of "movieSessionBean" and then renders "show.xhtml".

    Create "show.xhtml" ("New" -> "Other..." -> "Other" -> "XHTML File") such that it looks like:

    <head>
        <title><h1>List of movies</h1></title>
    </head>
    <body>
        <h:form>
            <h:dataTable value="#{movieSessionBean.movies}" var="m">
                <h:column><f:facet name="header">Name</f:facet>#{m.name}</h:column>
                <h:column><f:facet name="header">Year</f:facet>#{m.year}</h:column>
                <h:column><f:facet name="header">Language</f:facet>#{m.language}</h:column>
            </h:dataTable>
        </h:form>
    </body>

    This page shows the name, year, and language of all movies stored in the database so far. The list of movies is returned by the "movieSessionBean.movies" property.

    Now create the "Movie" class such that it looks like:

    import com.mongodb.BasicDBObject;
    import com.mongodb.DBObject;
    import javax.enterprise.inject.Model;
    import javax.validation.constraints.Size;

    /**
     * @author arun
     */
    @Model
    public class Movie {

        @Size(min=1, max=20)
        private String name;

        @Size(min=1, max=20)
        private String language;

        private int year;

        // getters and setters for "name", "year", "language"

        public BasicDBObject toDBObject() {
            BasicDBObject doc = new BasicDBObject();
            doc.put("name", name);
            doc.put("year", year);
            doc.put("language", language);
            return doc;
        }

        public static Movie fromDBObject(DBObject doc) {
            Movie m = new Movie();
            m.name = (String) doc.get("name");
            m.year = (Integer) doc.get("year");
            m.language = (String) doc.get("language");
            return m;
        }

        @Override
        public String toString() {
            return name + ", " + year + ", " + language;
        }
    }

    Other than the usual boilerplate code, the key methods here are "toDBObject" and "fromDBObject". These methods provide a conversion from "Movie" -> "DBObject" and vice versa. "DBObject" is a MongoDB class that comes as part of the mongo-2.6.3.jar file, which we added to our project earlier. The complete javadoc for 2.6.3 can be seen here. Notice that this class also uses Bean Validation constraints, which will be honored by the JSF layer.

    Finally, create the "MovieSessionBean" stateless EJB with all the business logic such that it looks like:

    package org.glassfish.samples;

    import com.mongodb.BasicDBObject;
    import com.mongodb.DB;
    import com.mongodb.DBCollection;
    import com.mongodb.DBCursor;
    import com.mongodb.DBObject;
    import com.mongodb.Mongo;
    import java.net.UnknownHostException;
    import java.util.ArrayList;
    import java.util.List;
    import javax.annotation.PostConstruct;
    import javax.ejb.Stateless;
    import javax.inject.Inject;
    import javax.inject.Named;

    /**
     * @author arun
     */
    @Stateless
    @Named
    public class MovieSessionBean {

        @Inject Movie movie;

        DBCollection movieColl;

        @PostConstruct
        private void initDB() throws UnknownHostException {
            Mongo m = new Mongo();
            DB db = m.getDB("movieDB");
            movieColl = db.getCollection("movies");
            if (movieColl == null) {
                movieColl = db.createCollection("movies", null);
            }
        }

        public void createMovie() {
            BasicDBObject doc = movie.toDBObject();
            movieColl.insert(doc);
        }

        public List<Movie> getMovies() {
            List<Movie> movies = new ArrayList<Movie>();
            DBCursor cur = movieColl.find();
            System.out.println("getMovies: Found " + cur.size() + " movie(s)");
            for (DBObject dbo : cur.toArray()) {
                movies.add(Movie.fromDBObject(dbo));
            }
            return movies;
        }
    }

    The database is initialized in @PostConstruct. Instead of working with a database table, NoSQL databases work with a schema-less document. The "Movie" class is the document in our case, stored in the collection "movies". The collection allows us to perform query functions on all movies. The "getMovies" method invokes the "find" method on the collection, which is equivalent to the SQL query "select * from movies", and then returns a List<Movie>. Also notice that there is no "persistence.xml" in the project.

    Right-click and run the project to see the output. Enter some values in the text boxes and click submit to see the result. If you reached here then you've successfully used MongoDB in your Java EE 6 application - congratulations!

    Some food for thought and further play...
    - The SQL to MongoDB mapping shows the mapping between the traditional SQL and Mongo query languages.
    - The Tutorial shows fun things you can do with MongoDB.
    - Try the interactive online shell.
    - The cookbook provides common ways of using MongoDB.

    In terms of this project, here are some tasks that can be tried:
    - Encapsulate database management in a JPA persistence provider. Is it even worth it, given that the capabilities are going to be very different?
    - MongoDB uses the "BSONObject" class for its JSON representation; add @XmlRootElement on a POJO and see how a compatible JSON representation can be generated. This would make the fromXXX and toXXX methods redundant.
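
    As an aside, the same 2.x driver API can also express filtered queries - the document-database analogue of a WHERE clause. A minimal sketch, assuming the MovieSessionBean above (getMoviesByYear is a hypothetical addition, not part of the original TOTD):

    // Inside MovieSessionBean: equivalent to "select * from movies where year = ?".
    public List<Movie> getMoviesByYear(int year) {
        List<Movie> movies = new ArrayList<Movie>();
        // A BasicDBObject doubles as the query document: match on the "year" key.
        DBCursor cur = movieColl.find(new BasicDBObject("year", year));
        for (DBObject dbo : cur.toArray()) {
            movies.add(Movie.fromDBObject(dbo));
        }
        return movies;
    }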

    Read the article

  • New Walkthrough Capability in AutoVue 20

    - by warren.baird
    New in AutoVue 20 is the capability to view a 3D model of a building from the inside - this is a very powerful tool for anyone who needs to work with models of plants, refineries, or other buildings. All of the standard AutoVue functionality is available, so you can click on any part of the building to get attribute data, manipulate the view, do measurements, etc. For example, in the image below we've made the architectural model (walls, floors, etc.) transparent, but left the electrical and mechanical models opaque, so it's easy to see where the wires and piping run behind the walls. Additionally, you can bring together different files and different types of files using our digital mockup capability - in the image below, the heating and air conditioning system on the left came from one file, the electrical box on the right came from another file, and the model of the room came from yet a third file; with everything brought together into AutoVue you can do things like use our measurement capability to ensure there's enough space to get maintenance equipment down the hallway, before the building is even built. For more information about Walkthrough, you can view a video demo at http://download.oracle.com/autovue/3D_walkthrough_movie.wmv We're very excited about this new capability - do you think this will be useful for you in your work with AutoVue? Let us know!

    Read the article

  • Faster Applications Even Without Changing the Queries - Part 1 - The Impact of Frequent Commits

    - by david.krch
    I remember sitting at an Oracle Develop conference a few years ago, watching my somewhat better-known colleague Mark Townsend give a very simple demo - how one and the same operation can be performed either very slowly or very quickly. It was not about classic SQL tuning, but about calling those statements efficiently. The individual techniques themselves are probably familiar to everyone. What struck me most, though, was how large the performance difference of the same operation, performed by the same simple SQL statement, can be. I keep thinking that if more people saw that difference, they might more often force themselves to write the one or two extra lines of code that can make the code an order of magnitude more efficient and faster. I tried to put together a similar example, and it became the basis of a trio of articles that were gradually published on the Databázový svet server. For some reason, the last part never made it to publication. Since then I have several times needed to point someone to the whole trio of articles, so I decided to re-publish them on this blog. I will demonstrate the problem and the solution in Java, but these are in fact techniques that apply no matter where you call the database from. The task is simple - insert 100,000 records into a simple table. We will gradually show that, starting from the basic, slowest solution, a few simple steps take us to a solution that is 80x faster.
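
    To make the theme concrete before the series begins, here is a minimal JDBC sketch of the two extremes (an illustration, not the article's code; the connection string and table T are placeholders):

    import java.sql.Connection;
    import java.sql.DriverManager;
    import java.sql.PreparedStatement;

    public class CommitDemo {
        public static void main(String[] args) throws Exception {
            // Placeholder connection details - adjust for your environment.
            Connection con = DriverManager.getConnection(
                    "jdbc:oracle:thin:@localhost:1521:XE", "scott", "tiger");

            // The slow extreme is autocommit: every executeUpdate() then pays
            // for its own commit. Turning it off and committing once at the
            // end removes 99,999 of the 100,000 commits.
            con.setAutoCommit(false);

            PreparedStatement ps = con.prepareStatement(
                    "INSERT INTO t (id, val) VALUES (?, ?)");
            for (int i = 0; i < 100000; i++) {
                ps.setInt(1, i);
                ps.setString(2, "row " + i);
                ps.executeUpdate();
            }
            con.commit(); // a single commit at the end
            ps.close();
            con.close();
        }
    }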

    Read the article

  • Exception Handling

    - by raghu.yadav
    Here are a few links where Andre demonstrated this topic: differences-of-handling-jboexception-in and handling-exceptions-in-oracle-ui-shell. In this post, however, we can see how to display an exception in a popup while staying on the same page. I use a use case similar to Andre's, but we will not be using the Exception Handling property of the task flow; instead we use a popup and invoke it programmatically. This is a dynamic region example where the user can select the jobs or locations links to edit the records of the corresponding tables while staying on the same page, and click commit to save the changes. To generate an exception, we deliberately change "Commit" to "CommitAction" in the commit action binding code created in the bean (same as Andre), catch the exception, and add a brief description of the exception into #{pageFlowScope.message}. Drop a Popup component after the Commit button, add a dialog within the popup, bind the popup component to the backing bean, and invoke it in the catch clause as shown below.

    public String Commit() {
        try {
            BindingContainer bindings = getBindings();
            OperationBinding operationBinding = bindings.getOperationBinding("CommitAction");
            Object result = operationBinding.execute();
            if (!operationBinding.getErrors().isEmpty()) {
                return null;
            }
        } catch (NullPointerException e) {
            setELValue("#{pageFlowScope.message}", "NullPointerException...");
            e.printStackTrace();
            String popupId = this.getPopup().getClientId(FacesContext.getCurrentInstance());
            PatternsPublicUtil.invokePopup(popupId);
        }
        return null;
    }

    private void setELValue(String el, String value) {
        FacesContext facesContext = FacesContext.getCurrentInstance();
        ELContext elContext = facesContext.getELContext();
        ExpressionFactory expressionFactory = facesContext.getApplication().getExpressionFactory();
        ValueExpression valueExp = expressionFactory.createValueExpression(elContext, el, Object.class);
        valueExp.setValue(elContext, value);
    }

    Read the article

  • Announcing: Great Improvements to Windows Azure Web Sites

    - by Leniel Macaferi
    I'm excited to announce some great improvements to Windows Azure Web Sites, which we introduced at the beginning of this summer. Today's improvements include: a new low-cost, elastically scalable shared hosting option; custom domain support for websites hosted in shared or reserved mode using CNAME records and A-records (the latter enabling naked domains); support for continuous deployment using both CodePlex and GitHub; and FastCGI extensibility. All of these improvements are now live in production and available to use immediately.

    New Scalable "Shared" Tier

    Windows Azure allows you to deploy and host up to 10 websites in a free, shared, multi-application environment. You can start developing and testing websites at no cost using this shared (free) mode. Shared mode supports running sites that serve up to 165 MB/day of content (5 GB/month). All of the capabilities we introduced in June with this free tier remain unchanged with today's update. Starting with today's release, you can now elastically scale your website beyond this capacity using a new low-cost "shared" option (which we are introducing today), as well as the "reserved instance" option we have supported since June. Scaling to either of these modes is easy: simply click the "scale" tab of your website within the Windows Azure Portal, choose the hosting mode option you want to use, and click "Save". Changes take only seconds to apply, require no code changes, and do not require the application to be redeployed.

    Below are a few more details about the new "shared" option and the existing "reserved" option:

    Shared Mode

    With today's release we are introducing a new low-cost "shared" hosting mode for Windows Azure Web Sites. A website running in shared mode is deployed to a hosting environment shared with several other applications. Unlike the free mode option, a shared-mode website has no quota/cap on the amount of bandwidth it can serve. The first 5 GB/month of bandwidth you serve with a shared website is free, and beyond 5 GB you pay the standard "pay as you go" rate for Windows Azure outbound bandwidth. A website running in shared mode now also supports mapping multiple custom DNS domain names to it, using both CNAMEs and A-records. The new A-record support we are introducing with today's release gives you the ability to support "naked domains" with your websites (for example, http://microsoft.com in addition to http://www.microsoft.com). In the future we will also enable SNI-based SSL as a native feature of websites running in shared mode (this capability is not supported with today's release, but will arrive later this year for both the shared and reserved hosting options).
    You pay for a shared-mode website using the same standard "pay as you go" model we support for other Windows Azure resources (that is, no up-front costs, and you pay only for the hours the resource is active). A website running in shared mode costs only 1.3 cents/hour during this preview period (which averages out to $9.36/month, or about R$19.00/month at the exchange rate of R$2.03 per dollar on September 17, 2012).

    Reserved Mode

    In addition to running sites in shared mode, we also support running them within a reserved instance. When running in reserved instance mode, your sites are guaranteed to run isolated within your own Small, Medium or Large VM (meaning no other Windows Azure customer's applications run inside your VM - only yours). You can run any number of websites within a VM, and there are no quotas limiting CPU or memory. You can run your sites using a single reserved-instance VM, or scale out to multiple instances (for example, 2 medium VMs, etc.). Scaling up or down is easy: simply select the "reserved" instance VM within the "scale" tab of the Windows Azure Portal, choose the VM size you want and the number of instances to run, and click save. The changes take effect in seconds.

    Unlike shared mode, there is no per-site cost when running in reserved mode. Instead you pay only for the reserved VM instances you use, and you can run any number of websites within them at no additional cost (for example, you can run a single site inside a reserved VM instance, or 100 websites inside it, at the same cost). Reserved VM instances start at 8 cents/hour (about R$0.16/hour) for a small reserved VM.

    Elastic Scaling Up/Down

    Windows Azure Web Sites lets you scale your capacity up or down within seconds. This allows you to deploy a site using the shared mode option to start, and then dynamically scale up using the reserved mode option only when you need it - without changing any code or redeploying your application. If your site's traffic drops, you can reduce the number of reserved instances you are using, or move back to the shared mode tier - all in seconds, and without having to change code, redeploy the application, or adjust DNS mappings. You can also use the "Dashboard" within the Windows Azure Portal to easily monitor your site's load in real time (it shows not only requests/second and bandwidth consumed, but also statistics such as CPU and memory usage). Thanks to Windows Azure's "pay as you go" pricing model, you pay only for the compute capacity you use in a given hour.
    So, if your site runs most of the month in shared mode (at 1.3 cents/hour, roughly R$0.0264/hour), but there is a weekend when it becomes very popular and you decide to scale it up into reserved mode so that it runs in its own dedicated VM (at 8 cents/hour), you only pay the additional cents/hour for the hours the site runs in reserved mode. There is no up-front cost to enable this, and once you move your site back to shared mode, you go back to paying 1.3 cents/hour. This makes the option extremely flexible and low cost.

    Improved Custom Domain Support

    Websites running in "shared" or "reserved" mode support having custom host names associated with them (for example, www.mysitename.com). You can associate multiple custom domains with each Windows Azure Web Site. With today's release we are introducing support for A-records (a much requested feature). With A-record support, you can now associate 'naked' domains with your Windows Azure Web Site - that is, instead of having to use www.mysitename.com, you can simply use mysitename.com (without the www prefix). Since you can map multiple domains to a single site, you can optionally enable both domains (the www and the 'naked' version) for a site, and then use a URL rewrite/redirect rule to avoid SEO problems; a sketch of example DNS zone entries appears below. We have also improved the user interface for managing custom domains within the Windows Azure Portal as part of today's release: clicking the "Manage Domains" button in the tray at the bottom of the portal now brings up a dedicated UI that makes it easy to manage/configure the domains. As part of this update we have also made it significantly smoother/easier to validate ownership of custom domains, and made it easier to switch existing sites/domains over to Windows Azure Web Sites without downtime.
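
    Purely as an illustrative sketch of the custom-domain setup described above (placeholder names and IP address; the real values - the site's IP and any verification records - come from the Windows Azure portal):

    ; example DNS zone entries for a site hosted at mysite.azurewebsites.net
    mysitename.com.        IN  A      203.0.113.10                ; naked domain via A-record
    www.mysitename.com.    IN  CNAME  mysite.azurewebsites.net.   ; www via CNAME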
A partir de hoje, quando você escolher o link "Set up Git publishing" (Configurar publicação Git) na página do dashboard de um website, você verá duas opções adicionais quando a publicação baseada em Git estiver habilitada para o web-site: Você pode clicar em qualquer um dos links "Deploy from my CodePlex project" (Implantar a partir do meu projeto no CodePlex) ou "Deploy from my GitHub project"  (Implantar a partir do meu projeto no GitHub) para seguir um simples passo a passo para configurar uma conexão entre o seu website e um repositório de código que você hospeda no CodePlex ou no GitHub. Uma vez que essa conexão é estabelecida, o CodePlex ou o GitHub automaticamente notificará a Windows Azure a cada vez que um checkin ocorrer. Isso fará com que a Windows Azure faça o download do código e compile/implante a nova versão da sua aplicação automaticamente.  Os dois vídeos a seguir (em Inglês) mostram quão fácil é permitir esse fluxo de trabalho ao implantar uma app inicial e logo em seguida fazer uma alteração na mesma: Habilitando Implantação Contínua com os Websites da Windows Azure e CodePlex (2 minutos) Habilitando Implantação Contínua com os Websites da Windows Azure e GitHub (2 minutos) Esta abordagem permite um fluxo de trabalho de implantação contínua realmente limpo, e torna muito mais fácil suportar um ambiente de desenvolvimento em equipe usando Git: Nota: o lançamento de hoje suporta estabelecer conexões com repositórios públicos do GitHub/CodePlex. Suporte para repositórios privados será habitado em poucas semanas. Suporte para Múltiplos Branches (Ramos de Desenvolvimento) Anteriormente, nós somente suportávamos implantar o código que estava localizado no branch 'master' do repositório Git. Muitas vezes, porém, os desenvolvedores querem implantar a partir de branches alternativos (por exemplo, um branch de teste ou um branch com uma versão futura da aplicação). Este é agora um cenário suportado - tanto com projetos locais baseados no git, bem como com projetos ligados ao CodePlex ou GitHub. Isto permite uma variedade de cenários úteis. Por exemplo, agora você pode ter dois web-sites - um em "produção" e um outro para "testes" - ambos ligados ao mesmo repositório no CodePlex ou no GitHub. Você pode configurar um dos websites de forma que ele sempre baixe o que estiver presente no branch master, e que o outro website sempre baixe o que estiver no branch de testes. Isto permite uma maneira muito limpa para habilitar o teste final de seu site antes que ele entre em produção. Este vídeo de 1 minuto (em Inglês) demonstra como configurar qual branch usar com um web-site. Resumo Os recursos mostrados acima estão agora ao vivo em produção e disponíveis para uso imediato. Se você ainda não tem uma conta da Windows Azure, você pode inscrever-se em um teste gratuito para começar a usar estes recursos hoje mesmo. Visite o O Centro de Desenvolvedores da Windows Azure (em Inglês) para saber mais sobre como criar aplicações para serem usadas na nuvem. Nós teremos ainda mais novos recursos e melhorias chegando nas próximas semanas - incluindo suporte para os recentes lançamentos do Windows Server 2012 e .NET 4.5 (habilitaremos novas imagens de web e work roles com o Windows Server 2012 e NET 4.5 no próximo mês). Fique de olho no meu blog para detalhes assim que esses novos recursos ficarem disponíveis. Espero que ajude, - Scott P.S. Além do blog, eu também estou utilizando o Twitter para atualizações rápidas e para compartilhar links. 
Siga-me em: twitter.com/ScottGu Texto traduzido do post original por Leniel Macaferi.

    Read the article

  • New Release Overview Part 1

    - by brian.harrison
    Ladies & Gentlemen, I have been getting a lot of questions over the last month or two about the next release of WCI, codenamed "Neo". Unfortunately I cannot give you an exact release date, which I know you would all be asking me for if we were talking face to face, but I can definitely provide you with information about some of the features that will be made available. So over the next few blog entries, I am going to provide you with details about two features, and even provide you with screenshots for some of them. KD Browser Portlet: this portlet will provide a Windows Explorer look and feel to the Knowledge Directory from within a Community Page or My Page. Not only will the portlet provide access to the folder structure and the documents within, but the user or community manager will also have the ability to modify what is being shown. From within a preferences page, the user or community manager can change which top-level folders are shown within the folder structure, as well as which properties are available for each document that is shown. There are also a number of other portlet-specific customizations available. Embedded Tagging Engine: as some of you might be aware, there was a product made available just prior to the Oracle acquisition, known as Pathways, which gave users the ability to add tags to documents that were either in the Knowledge Directory or in the Collaboration Documents section. Although this product is no longer available separately for customers to purchase, we felt that the functionality was important and interesting enough that other customers should have access to it. The decision was made for this release to embed the original Pathways product as the Tagging Engine for WCI and Collaboration. This tagging engine will allow a user to add tags to documents in the Knowledge Directory as well as in the Collaboration Documents section. Once the tags are added to the Tagging Engine and associated with documents, a user will be able to filter documents when running a search according to the Tags Cloud that will now be available on the Search Results page - and this will be true no matter what kind of search is being processed. In addition to all of that, all of the Pathways portlets will also be available for users to add to their My Page.

    Read the article

  • Java Spotlight Episode 150: James Gosling on Java

    - by Roger Brinkley
    Interview with James Gosling, father of Java and Java Champion, on the history of Java, his work at Liquid Robotics, NetBeans, the future of Java, and what he sees as the next revolutionary trend in the computer industry. Right-click or Control-click to download this MP3 file. You can also subscribe to the Java Spotlight Podcast Feed to get the latest podcast automatically. If you use iTunes, you can open iTunes and subscribe with this link: Java Spotlight Podcast in iTunes. Show Notes - Feature Interview: James Gosling received a BSc in Computer Science from the University of Calgary, Canada in 1977. He received a PhD in Computer Science from Carnegie Mellon University in 1983. The title of his thesis was "The Algebraic Manipulation of Constraints". He spent many years as a VP & Fellow at Sun Microsystems. He has built satellite data acquisition systems, a multiprocessor version of Unix, several compilers, mail systems and window managers. He has also built a WYSIWYG text editor, a constraint-based drawing editor and a text editor called 'Emacs' for Unix systems. At Sun his early activity was as lead engineer of the NeWS window system. He did the original design of the Java programming language and implemented its original compiler and virtual machine. He has been a contributor to the Real-Time Specification for Java, and a researcher at Sun Labs where his primary interest was software development tools. He then served as the Chief Technology Officer of Sun's Developer Products Group and the CTO of Sun's Client Software Group. He briefly worked for Oracle after the acquisition of Sun. After a year off, he spent some time at Google and is now the chief software architect at Liquid Robotics, where he spends his time writing software for the Wave Glider, an autonomous ocean-going robot.

    Read the article

  • New Trusted Status awarded to first Mobile Java Developer

    - by Jacob Lehrbaum
    Java Verified has just announced that Gameloft is the first developer to receive its new Trusted Status! Java Verified is an industry-recognized Java testing and signing program backed and funded by companies such as AT&T, LG, Motorola, Nokia, Oracle, Orange, Samsung and Vodafone, and chartered with making it easier for mobile developers to certify and deploy applications for use across the billions of mobile handsets that run Java ME. Because of its breadth and diversity, Java ME provides an unmatched opportunity to reach more than 3 billion consumers, but at the same time developers are faced with the challenge of working with multiple distribution channels and a range of handsets. To this end, the Java Verified program provides a suite of tests that help to validate identity, functionality, integrity, and quality. Since its rebirth in 2010 as an independent organization, the Java Verified program has been actively working to make it even easier to create and distribute Java ME apps. Example initiatives include updates to the Unified Testing Criteria to make it easier to test "Simple Apps," community outreach to better understand and address developer pain points, and a new Trusted Status. In the words of the Java Verified Program, Trusted Status is: "a privileged status to be granted to developers who will have proven that the quality of their Java ME apps is of a consistently high standard. These are developers who will have earned the trust of Java Verified by demonstrating unfailingly that testing to the UTC standard is a crucial part of their product development activity." The first developer to be awarded this status is Gameloft. By achieving Trusted Status, Gameloft can now test its applications to the Java Verified standard without needing to provide Java Verified with the evidence. The apps then automatically get signed with the Java Verified signature, enabling Gameloft to benefit from reduced costs and time-to-market for its new Java ME applications from here on out. Learn more about the exciting news or apply now for Trusted Status!

    Read the article

  • How Do I Implement parameterMaps for ADF Regions and Dynamic Regions?

    - by david.giammona
    parameterMap objects defined by managed beans can help reduce the number of child <parameter> elements listed under an ADF region or dynamic region page definition task flow binding. More importantly, the parameterMap approach also allows greater flexibility in determining what input parameters are passed to an ADF region or dynamic region. This can be especially helpful when using dynamic regions, where each task flow utilized can provide an entirely different set of input parameters. The parameterMap is specified within an ADF region or dynamic region page definition task flow binding as shown below:

        <taskFlow id="checkoutflow1"
                  taskFlowId="/WEB-INF/checkout-flow.xml#checkout-flow"
                  activation="deferred"
                  xmlns="http://xmlns.oracle.com/adf/controller/binding"
                  parametersMap="#{pageFlowScope.userInfoBean.parameterMap}"/>

    The parameter map object must implement the java.util.Map interface. The keys it specifies match the names of the input parameters defined by the task flows utilized within the task flow binding. An example parameterMap object class is shown below (the original snippet was truncated; the closing braces are restored here, and getSecurity() is a helper assumed to be defined elsewhere in the bean):

        import java.util.HashMap;
        import java.util.Map;

        public class UserInfoBean {
            private Map<String, Object> parameterMap = new HashMap<String, Object>();

            public Map getParameterMap() {
                // Keys must match the input parameter names of the bound task flows.
                parameterMap.put("isLoggedIn", getSecurity().isAuthenticated());
                parameterMap.put("principalName", getSecurity().getPrincipalName());
                return parameterMap;
            }

            // getSecurity() wraps the application's security context; its
            // implementation is not shown in the original post.
        }

    Read the article

  • SQL SERVER – Weekly Series – Memory Lane – #033

    - by Pinal Dave
    Here is a list of selected articles from SQLAuthority.com across the years. Instead of listing all the articles, I have selected a few of my favorites and listed them here with additional notes. Let me know which of the following is your favorite article from memory lane. 2007 - Spatial Database Definition and Research Documents: here is the definition of a spatial database from Wikipedia: a spatial database is a database that is optimized to store and query data related to objects in space, including points, lines and polygons. While typical databases can understand various numeric and character types of data, additional functionality needs to be added for databases to process spatial data types. Select Only Date Part From DateTime – Best Practice: a very common question I receive is how to get only the Date or Time part from a datetime value. In this blog post I explain the same in very simple words. T-SQL Paging Query Technique Comparison (OVER and ROW_NUMBER()) – CTE vs. Derived Table: I have received a few emails and comments about my post SQL SERVER – T-SQL Paging Query Technique Comparison – SQL 2000 vs SQL 2005. The main question was: can this be done using a CTE? Absolutely! What about performance? It is identical! Please refer to the above-mentioned article for the history of paging. SQL SERVER – Cannot resolve collation conflict for equal to operation: one of the very first errors I ever encountered in my career required resolving this conflict. I have blogged about it, and I have realized that many others like me face this error. LEN and DATALENGTH of NULL Simple Example: here is a question for you - what is the LEN of a NULL value? Well, it is very easy; just read the blog. Recovery Models and Selection: a very simple and easy explanation of the database backup recovery models and how to select the best option for you. Explanation SQL SERVER Hash Join: a hash join gives the best performance when two or more tables are joined and at least one of them has no index or is not sorted. It is also expected that the smaller of the tables can be read into memory completely (though this is not strictly necessary). Easy Sequence of SELECT FROM JOIN WHERE GROUP BY HAVING ORDER BY: SELECT yourcolumns FROM tablenames JOIN tablenames WHERE condition GROUP BY yourcolumns HAVING aggregatecolumn condition ORDER BY yourcolumns. NorthWind Database or AdventureWorks Database – Sample Databases: in this blog post we learn how to install the Northwind database. I also shared the source from which one can download this database, as it is used in many examples in the MSDN help files. sp_HelpText for sp_HelpText – Puzzle: a simple, quick puzzle - do you know the answer to it? If not, go ahead and read the blog. 2008 - SQL SERVER – 2008 – Step By Step Installation Guide With Images: when SQL Server 2008 was newly introduced, lots of people had no clue how to install it, and I used to receive a huge number of questions about it. I wrote this blog post in the spirit that it would help all the newbies install SQL Server 2008 with the help of images. To this day, this blog post has been the bible for people who are confused by SQL Server installation. Inline Variable Assignment: I loved this feature. I have always wanted this feature to be present in SQL Server. The last time I met developers from the Microsoft SQL Server team, I talked about this feature. I think this feature not only saves some time but also makes the code more readable.
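    Two of the items above are easy to demonstrate in a few lines of T-SQL. The snippet below is my own minimal sketch, not code from the original articles: it shows that both LEN and DATALENGTH return NULL for a NULL input, and it shows the SQL Server 2008 inline variable assignment syntax (variable names are illustrative).

        -- LEN and DATALENGTH of NULL: both return NULL, not 0
        SELECT LEN(NULL)        AS len_of_null,
               DATALENGTH(NULL) AS datalength_of_null;

        -- Inline variable assignment (new in SQL Server 2008):
        -- declare and assign in a single statement
        DECLARE @total INT = 0;
        DECLARE @label VARCHAR(20) = 'orders';
        SET @total += 10;   -- compound assignment, also new in 2008
        SELECT @label AS label, @total AS total;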
    Introduction to Policy Management – Enforcing Rules on SQL Server: if our company policy is that all stored procedures are created with the prefix 'usp', developers should simply be prevented from creating stored procedures with any other prefix. Here is a small tutorial on how to create the conditions and policy that will prevent any future SP from being created with a different prefix. 2009 - Performance Counters from System Views – By Kevin Mckenna: many of you are not aware that access to performance information is readily available in SQL Server, without querying performance counters through a custom application or via perfmon. Until now this fact has remained largely undisclosed, but through this post I would like to explain how you can easily access SQL Server performance counter information. Without much effort you will come across the system view sys.dm_os_performance_counters. As the name suggests, this provides easy access to the SQL Server performance counter information that is passed on to perfmon, but you can get at it via T-SQL. Customize Toolbar – Remove Debug Button from Toolbar: I was fond of the SQL Server Debugger feature in SQL Server 2000. To my utter disappointment, this feature was withdrawn from SQL Server 2005. The debugger button is similar to a play button and is used to run debugging commands in Visual Studio. For this reason, it can be quite infuriating for developers who work in both Visual Studio and SSMS. Let us now see how we can remove the debugging button from SQL Server Management Studio. Effect of Normalization on Index and Performance: a very interesting conversation that started on Twitter. If you read only one link, this is the one I encourage you to read. SSMS Feature – Multi-server Queries: using SQL Server Management Studio (SSMS), DBAs can now query multiple servers from one window. It is quite common for DBAs with a large number of servers to maintain and gather information from multiple SQL Servers and create a report. This feature is a blessing for DBAs, as they can now assemble all the information instantly without going anywhere. Query Optimizer Hint ROBUST PLAN – Question to You: "ROBUST PLAN" is a query hint that works quite differently from other hints. It does not improve joins or force any indexes to be used; it just makes sure that a query does not fail due to a row size over the limit. Let me elaborate on it in the blog post. 2010 - Do you really know the difference between the various date functions available in SQL Server 2012? Here is a three-part story where we explored the same with examples: Fastest Way to Restore the Database; Difference Between DATETIME and DATETIME2; Difference Between DATETIME and DATETIME2 – WITH GETDATE. Shrinking NDF and MDF Files – Readers' Opinion: shrinking a database always creates performance degradation and increases fragmentation in the database. I suggest that you keep that in mind before you start reading the following comment. If you are going to say shrinking a database is bad and evil, here I am saying it first and loud. Now, Imran's comment is written keeping in mind only the process showing how the shrinking database operation works. Imran has already explained his understanding and requests further explanation. I have removed the Best Practices section from Imran's comments, as there are a few corrections.
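    As a quick illustration of the system view mentioned in the Kevin McKenna item above, here is a minimal hand-written query (the counter name is just an example; the counters available vary by instance):

        -- Performance counter data via plain T-SQL instead of perfmon
        SELECT object_name, counter_name, instance_name, cntr_value
        FROM   sys.dm_os_performance_counters
        WHERE  counter_name LIKE 'Page life expectancy%';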
    2011 - Solution – Puzzle – SELECT * vs SELECT COUNT(*): this is a very interesting question, and I am confident that not everyone knows the answer. Let me ask you again: which will be faster, SELECT * or SELECT COUNT(*), or do you think this is an apples-and-oranges comparison? 2012 - Service Broker and CAP_CPU_PERCENT – Limiting SQL Server Instances to CPU Usage: in SQL Server 2012 there are a few enhancements to the SQL Server Resource Governor. One of the enhancements is how resources are allocated. Let me explain with the help of three different examples. Finding Size of a Columnstore Index Using DMVs: a very common question I see is the need for a list of columnstore indexes along with their sizes and corresponding table names. I quickly wrote a script using the DMVs sys.indexes and sys.dm_db_partition_stats. This script gives the size of the columnstore index on disk only. I am sure there are more advanced scripts that retrieve details about the components associated with a columnstore index, but I believe the following script is sufficient to start getting an idea of columnstore index size. Developer Training Resources and Summary Roundup: Developer Training - Importance and Significance - Part 1: in this part we discussed the importance of training in the real world. The most important and valuable resource any company has is its employees. Employees who have been well trained will be better at their jobs and produce a better product. An employee who is well trained obviously knows more about their job and all its technical aspects. I have a very high opinion of training employees; it is the most important task. Developer Training – Employee Morals and Ethics – Part 2: in this part we discussed the most crucial components of training. Often employees expect the company to pay for their training while the company expresses no interest in training them. Quite often, training expenses are the real issue for both the employee and the employer. Developer Training – Difficult Questions and Alternative Perspective - Part 3: this part was the most difficult to write, as I tried to address a few difficult questions and answers. Training is such a sensitive issue that many developers, when not given a chance for training, think about leaving the organization. Developer Training – Various Options for Developer Training – Part 4: in this part I explored a few methods and options for training. The general feedback I received on this blog post was that it was short and I should have explored each training subject in detail. I believe there are two big buckets of training: 1) instructor-led training and 2) self-led training. Developer Training – A Conclusive Summary - Part 5: there is no better motivation than a personal desire to learn new technology. Honestly, there is nothing more personal than learning. "Change is the only constant" and "adapt and overcome" are essential lessons of life. One cannot stop learning or resist change. In the IT industry, the "ego of knowing it all" and "resistance to change" are the most challenging issues. A Quick Look at Logging and Ideas around Logging: question - what is the first thing that comes to your mind when you hear the word "logging"? Strangely enough, I got a different answer every single time. Let me list the answers I got from my friends, and let us go over them one by one.
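    Pinal's original columnstore-size script is not reproduced here, but a reconstruction along the lines he describes, using sys.indexes and sys.dm_db_partition_stats, might look like the sketch below (type 6 is a nonclustered columnstore index in SQL Server 2012; this is my own assumption-laden sketch, not the original script):

        SELECT OBJECT_NAME(i.object_id)       AS table_name,
               i.name                         AS index_name,
               SUM(p.reserved_page_count) * 8 AS reserved_kb  -- 8 KB pages
        FROM   sys.indexes i
               JOIN sys.dm_db_partition_stats p
                 ON p.object_id = i.object_id
                AND p.index_id  = i.index_id
        WHERE  i.type = 6  -- nonclustered columnstore
        GROUP  BY i.object_id, i.name;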
    Beginning Performance Tuning with SQL Server Execution Plan. Solution of Puzzle – Swap Value of Column Without Case Statement: earlier this week I asked how to swap the values of a column without using a CASE statement. Read here: SQL SERVER – A Puzzle – Swap Value of Column Without Case Statement. I proposed three different solutions in the blog post itself. I had requested the community's help in coming up with alternative solutions, and honestly I am stunned and amazed by the quality of the entries. Reference: Pinal Dave (http://blog.sqlauthority.com) Filed under: Memory Lane, PostADay, SQL, SQL Authority, SQL Query, SQL Server, SQL Tips and Tricks, T SQL, Technology
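    For readers curious about the puzzle, the best-known CASE-free swap for a two-valued column is arithmetic; the sketch below (the table and column names are mine, not from the puzzle) flips the values 1 and 2 in place:

        -- 1 becomes 2 and 2 becomes 1, with no CASE expression
        UPDATE demo_table
        SET    flag = 3 - flag;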

    Read the article

  • Looking ahead at 2011-with Paul Greenberg

    - by divya.malik
    It is almost the end of 2010; it is rather unbelievable how fast this year has gone by. It is always interesting to read what our CRM gurus have to say about the coming year, so here is CRM luminary Paul Greenberg's forecast for 2011. Mobile CRM growth accelerates. CRM and "social" companies continue to integrate their capabilities as a few suites begin to emerge. Social "rankings", as a measure of customer engagement, will become a standard public measure. Analytics exhibits the most significant growth of any area, with customer insight apps leading the way. Marketing apps mature, with social marketing becoming an integral part of the application offering. Customer service begins to redefine itself, with greater emphasis on service communities, web self-service and customer knowledge capture. Knowledge management replaces enterprise content management as a core requirement for large businesses. Customer experience reasserts itself loudly as the core of CRM and SCRM - this one is kind of a no-brainer. Co-creation and customer-driven product innovation become more than just an advanced idea. Microsoft Azure emerges as a true cloud provider at the level of Amazon, as cloud computing continues its rise to becoming a primary technology infrastructure. Application marketplaces will become commonplace as companies look to platform providers to fill ecosystem needs, not just CRM. I encourage you to read the details of his forecasts, which are split into two blog posts. For Part I click here, and for Part II click here. Technorati Tags: oracle, siebel CRM, scrm, paul greenberg

    Read the article

  • OWB 11gR2 - Early Arriving Facts

    - by Dawei Sun
    A common challenge when building ETL components for a data warehouse is how to handle early arriving facts. OWB 11gR2 introduced a new feature for dimensional objects, called Orphan Management, to address this. An orphan record is one that does not have a corresponding existing parent record. Orphan management automates the process of handling source rows that do not meet the requirements necessary to form a valid dimension or cube record. In this article, a simple example shows you how to use Orphan Management in OWB. We first import a sample MDL file that contains all the objects we need. Then we take some time to examine all the objects. After that, we prepare the source data and deploy the target tables and the dimension/cube loading maps. Finally, we run the loading maps and check the data in the target dimension/cube tables. OK, let's start. 1. Import the MDL file and examine the sample project. First, download the zip file from here, which includes an MDL file and three source data files. Then we open the OWB design center and import orphan_management.mdl using the menu File->Import->Warehouse Builder Metadata. Now we have several objects in the BI_DEMO project, as below: Mapping LOAD_CHANNELS_OM: the mapping for dimension loading. Mapping LOAD_SALES_OM: the mapping for cube loading. Dimension CHANNELS_OM: the dimension that contains channels data. Cube SALES_OM: the cube that contains sales data. Table CHANNELS_OM: the star implementation table of dimension CHANNELS_OM. Table SALES_OM: the star implementation table of cube SALES_OM. Table SRC_CHANNELS: the source table of channels data that will be loaded into dimension CHANNELS_OM. Tables SRC_ORDERS and SRC_ORDER_ITEMS: the source tables of sales data that will be loaded into cube SALES_OM. Sequence CLASS_OM_DIM_SEQ: the sequence used for loading dimension CHANNELS_OM. Dimension CHANNELS_OM: this dimension has a hierarchy with three levels: TOTAL, CLASS and CHANNEL. Each level has three attributes: ID (surrogate key), NAME and SOURCE_ID (business key). It has a standard star implementation. The orphan management policy and the default parent setting are shown in the following screenshots. The orphan management policy options that you can set for loading are: Reject Orphan: the record is not inserted. Default Parent: you can specify a default parent record. This default record is used as the parent record for any record that does not have an existing parent record. If the default parent record does not exist, Warehouse Builder creates it. You specify the attribute values of the default parent record when defining the dimensional object. If any ancestor of the default parent does not exist, Warehouse Builder also creates that record. No Maintenance: this is the default behavior. Warehouse Builder does not actively detect, reject, or fix orphan records. While removing data from a dimension, you can select one of the following orphan management policies: Reject Removal: Warehouse Builder does not allow you to delete the record if it has existing child records. No Maintenance: this is the default behavior. Warehouse Builder does not actively detect, reject, or fix orphan records. (More details are at http://download.oracle.com/docs/cd/E11882_01/owb.112/e10935/dim_objects.htm#insertedID1) Cube SALES_OM: this cube references dimension CHANNELS_OM. It has three measures: AMOUNT, QUANTITY and COST.
    The orphan management policy settings are shown in the following screenshot. The orphan management policy options that you can set for loading are: No Maintenance: Warehouse Builder does not actively detect, reject, or fix orphan rows. Default Dimension Record: Warehouse Builder assigns a default dimension record for any row that has an invalid or null dimension key value. Use the Settings button to define the default parent row. Reject Orphan: Warehouse Builder does not insert the row if it does not have an existing dimension record. (More details are at http://download.oracle.com/docs/cd/E11882_01/owb.112/e10935/dim_objects.htm#BABEACDG) Mapping LOAD_CHANNELS_OM: this mapping loads source data from table SRC_CHANNELS into dimension CHANNELS_OM. The operator CHANNELS_IN is bound to table SRC_CHANNELS; CHANNELS_OUT is bound to dimension CHANNELS_OM. The TOTALS operator is used to generate a constant value for the top level of the dimension. The CLASS_FILTER operator is used to filter out the "invalid" class name, so we can see what happens when channel records with an "invalid" parent are loaded into the dimension. Some properties of the dimension operator in this mapping are important to orphan management. See the screenshot below: Create Default Level Records: if YES, default level records will be created. This property must be set to YES for dimensions and cubes if one of their orphan management policies is "Default Parent" or "Default Dimension Record". It is set to NO by default, so the user may need to set it to YES manually. LOAD policy for INVALID keys / LOAD policy for NULL keys: these two properties have the same meaning as in the dimension editor. They are set to the same values as on the dimension when the user drops the dimension into the mapping; the user does not need to modify them. Record Error Rows: if YES, error rows will be inserted into the error table when loading the dimension. REMOVE Orphan Policy: this property is used when removing data from a dimension. Since the dimension loading type is set to LOAD in this example, this property is disabled. Mapping LOAD_SALES_OM: this mapping loads source data from tables SRC_ORDERS and SRC_ORDER_ITEMS into cube SALES_OM. This mapping looks a little complicated, but the operators in the red rectangle are used to filter out and generate the records with "invalid" or null dimension keys. Some properties of the cube operator in a mapping are important to orphan management. See the screenshot below: Enable Source Aggregation: should be checked in this example. If the default dimension record orphan policy is set for the cube operator, it is recommended that source aggregation also be enabled. Otherwise, the orphan management processing may produce multiple fact rows with the same default dimension references, which will cause an "unstable rowset" execution error in the database, since the dimension references are used as update match attributes when updating the fact table. LOAD policy for INVALID keys / LOAD policy for NULL keys: these two properties have the same meaning as in the cube editor. They are set to the same values as in the cube editor when the user drops the cube into the mapping; the user does not need to modify them. Record Error Rows: if YES, error rows will be inserted into the error table when loading the cube. 2. Deploy the objects and mappings. We can now deploy the objects. First, make sure location SALES_WH_LOCAL has been correctly configured.
    Then open the Control Center Manager using the menu Tools->Control Center Manager. Expand BI_DEMO->SALES_WH_LOCAL and click the SALES_WH node in the project tree. We can see the following objects: Deploy all the objects in the following order: Sequence CLASS_OM_DIM_SEQ; Tables CHANNELS_OM, SALES_OM, SRC_CHANNELS, SRC_ORDERS, SRC_ORDER_ITEMS; Dimension CHANNELS_OM; Cube SALES_OM; Mappings LOAD_CHANNELS_OM, LOAD_SALES_OM. Note that we deployed the source tables as well. Normally, we import source tables from the database instead of deploying them to the target schema. However, in this example, we designed the source tables in OWB and deployed them to the database for the purpose of this demonstration. 3. Prepare and examine the source data. Before running the mappings, we need to populate and examine the source data. Run SRC_CHANNELS.sql, SRC_ORDERS.sql and SRC_ORDER_ITEMS.sql as the target user. Then we check the data in these three tables. Table SRC_CHANNELS: SQL> select rownum, id, class, name from src_channels; Records 1~5 are correct; they should be loaded into the dimension without error. Records 6, 7 and 8 have null parents; they should be loaded into the dimension with a default parent value, and inserted into the error table at the same time. Records 9, 10 and 11 have "invalid" parents; they should be rejected by the dimension and inserted into the error table. Tables SRC_ORDERS and SRC_ORDER_ITEMS: SQL> select rownum, a.id, a.channel, b.amount, b.quantity, b.cost from src_orders a, src_order_items b where a.id = b.order_id; Record 178 has a null dimension reference; it should be loaded into the cube with a default dimension reference, and inserted into the error table at the same time. Record 179 has an "invalid" dimension reference; it should be rejected by the cube and inserted into the error table. The other records should be aggregated and loaded into the cube correctly. 4. Run the mappings and examine the target data. In the Control Center Manager, expand BI_DEMO->SALES_WH_LOCAL->SALES_WH->Mappings, right-click the LOAD_CHANNELS_OM node and click Start. Run mapping LOAD_SALES_OM the same way. When they finish successfully, we can check the data in the target tables. Table CHANNELS_OM: SQL> select rownum, total_id, total_name, total_source_id, class_id, class_name, class_source_id, channel_id, channel_name, channel_source_id from channels_om order by abs(dimension_key); Records 1, 2 and 3 are the default dimension records for the three levels. Records 8, 10 and 15 are the loaded records that originally had null parents; we see that their parent name (CLASS_NAME) is set to DEF_CLASS_NAME. The records whose CHANNEL_NAME is Special_4, Special_5 or Special_6 are not loaded into this table because of their invalid parents. Error Table CHANNELS_OM_ERR: SQL> select rownum, class_source_id, channel_id, channel_name, channel_source_id, err$$$_error_reason from channels_om_err order by channel_name; We can see that all the records with null or invalid parents are inserted into this error table. The error reason is "Default parent used for record" for the first three records, and "No parent found for record" for the last three. Table SALES_OM: SQL> select a.*, b.channel_name from sales_om a, channels_om b where a.channels=b.channel_id; We can see that the order record with a null channel_name has been loaded into the target table with a default channel_name. The one with an "invalid" channel_name is not loaded.
    Error Table SALES_OM_ERR: SQL> select a.amount, a.cost, a.quantity, a.channels, b.channel_name, a.err$$$_error_reason from sales_om_err a, channels_om b where a.channels=b.channel_id(+); We can see that the order records with null or invalid channel_name values are inserted into the error table. If the dimension reference column is null, the error reason is "Default dimension record used for fact"; if it is invalid, the error reason is "Dimension record not found for fact". Summary: this article illustrated the Orphan Management feature in OWB 11gR2. Automated orphan management policies improve ETL developer and administrator productivity by addressing an important cause of cube and dimension load failures, without requiring developers to explicitly build logic to handle these orphan rows.
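    To make the orphan cases concrete, the following hand-written query (my own sketch against the sample schema above, not part of the original article) flags the source channel rows that orphan management has to deal with before a load:

        -- Rows with a null CLASS are loaded under the default parent;
        -- rows whose CLASS matches no existing CLASS level record are
        -- rejected and logged to CHANNELS_OM_ERR.
        select s.id, s.name, s.class
        from   src_channels s
        left   join channels_om d
               on d.class_source_id = s.class
        where  s.class is null
           or  d.class_source_id is null;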

    Read the article

  • What Would Google Do?

    - by David Dorf
    Last year I read Jeff Jarvis' book What Would Google Do? and found it very interesting. Jeff is a long-time journalist who has been studying technology, and more specifically the internet. He used his skills to reverse-engineer Google into a list of "Google rules," then went on to describe a futuristic world driven by those rules. It's an interesting look at crowd-sourcing, openness, and collaboration across many industries, including retail (Google Shops). This year Jeff Jarvis will be a keynote speaker at CrossTalk, Oracle's user conference dedicated to the retail industry. This year's theme is... Retail Redefined: Redesign. Reinvigorate. Reimagine. I think that's pretty appropriate given the massive changes the industry has undergone during the last three years. The thing I really like about this conference is that we try to let the retailers do most of the talking. I'm very interested in hearing about the innovative projects they've got brewing, and where they think our industry is heading. I'll be speaking too, but I'm not sure about what yet, so let me know of any interesting topics.

    Read the article

  • Ray Wang: Why engagement matters in an era of customer experience

    - by Michael Snow
    Why engagement matters in an era of customer experience. R "Ray" Wang, Principal Analyst & CEO, Constellation Research. Mobile enterprise, social business, cloud computing, advanced analytics, and unified communications are converging. Armed with the art of the possible, innovators are seeking to apply disruptive consumer technologies to enterprise-class uses — call it the consumerization of IT in the enterprise. The likely results include new methods of furthering relationships, crafting longer-term engagement, and creating transformational business models. It's part of a shift from transactional systems to engagement systems. Transactional systems have been around since the 1950s; you know them as ERP, finance and accounting systems, or even payroll. These systems are designed for massive computational scale, and users find them rigid and techie. Meanwhile, we have moved to new engagement systems such as Facebook and Twitter in the consumer world. The rich usability and intuitive design reflect how users want to work — and now users are coming to expect the same paradigms and designs in their enterprise world. ~~~ Ray is a prolific contributor to his own blog as well as others. For a sneak peek at Ray's thoughts on engagement, take a look at this quick teaser on Avoiding Social Media Fatigue Through Engagement. Or perhaps you might agree with Ray on Dealing With The Real Problem In Social Business Adoption – The People! Check out Ray's post on the Harvard Business Review Blog to get his perspective on "How to Engage Your Customers and Employees." For a daily dose of Ray, follow him on Twitter: @rwang0. But MOST IMPORTANTLY... don't miss the opportunity to join leading industry analyst R "Ray" Wang of Constellation Research in the latest webcast of the Oracle Social Business Thought Leaders Series, as he explains how to apply the 9 C's of Engagement to both your customers and employees.

    Read the article

< Previous Page | 547 548 549 550 551 552 553 554 555 556 557 558  | Next Page >