Search Results

Search found 2042 results on 82 pages for 'average'.


  • Oracle Tutor: XPDL conversion (and why you should care)

    - by mary.keane
    You may have noticed that the Oracle Business Process Converter feature in Tutor 14 supports "XPDL" conversion to Oracle Business Process Analysis Suite (BPA), Oracle Business Process Management Suite (BPM), and Oracle Tutor, and you may have briefly wondered "what is XPDL?" before you moved on to the Visio import feature (a very popular feature in Tutor 14). This posting is for those who do not yet understand (or care about) XPDL and process modeling.

    Many of us (and I'm including myself) have spent years working in the process definition arena: we've written procedures, designed systems and software to help others write procedures, and have been responsible for embedding policies and procedures into training material for employees. We've worked with tools such as Oracle Tutor, Microsoft Visio, Microsoft Word, and UPK. Most of us have never worked with "modeling tools" before, and we certainly never had to understand BPMN. It's a brave new world in this arena, and companies desperately need people with policy and procedural system expertise to be able to work with system analysts so there is a seamless transfer of knowledge from IT to employees. When working with applications, a picture is worth a thousand words, so eventually you're going to need to understand, and be able to work with, business process models.

    XPDL is an acronym for XML Process Definition Language, and it is an interchange format for business process models. It allows you to take a BPMN model that was developed in one workflow application, such as BizAgi, and import it into another workflow application or a true BPMN management system such as Oracle BPM. Specifically, the XPDL format contains the graphical information of a model as well as any executable information. By using a common format, models can be moved from a basic modeling application used by business owners to applications used by system architects. Over 80 applications support the XPDL format, including MetaStorm ProVision, BEA ALBPM, BizAgi, and Tibco. I mention these applications because we have provided XSLT mapping files specifically for these vendors. Oracle Business Process Converter was designed with user extensibility in mind, and thus users can add their own XML files so that additional XPDL models from other vendors can be converted to BPM, BPA, and Oracle Tutor. Instructions on how to add your own files can be found in Appendix 4 of the Oracle Business Process Converter manual.

    Let's take a visual look at how this works. Here is an example of a model developed in BizAgi. This model can be created by the average business user without a large learning curve, and it's a good start for the system analyst who will be adding web services as well as for the business manager who manages the process described in the model. By exporting this model as XPDL, the information can be converted into Oracle BPA and Oracle BPM, as well as converted to Oracle Tutor to become the framework for a procedure. Through this conversion feature, one graphic illustration of a business process can be used by a system analyst, business analyst, business manager, and employee, as seen below.

    Model converted to Tutor procedure: below is the task section of the procedure after conversion from an XPDL file.
    Model converted to BPA.
    Model converted to BPM.

    End users still want step-by-step instructions on how to perform their jobs, so procedures (Oracle Tutor) and application simulations (UPK) are still a critical piece of the solution.
    But IT professionals need graphic descriptions of how the applications work, regardless of whether there are any tasks involving humans. Now there is a way to convert procedures (Oracle Tutor docx files) and basic models (XPDL files) so that business managers and system analysts can share process information.

    References: Wikipedia, XPDL; Workflow Management Coalition, XPDL Support and Resources; Oracle Business Process Converter manual, Oracle Tutor 14; Oracle Business Process Management 11g.

    If you have any XPDL conversion stories to share, we'd love to hear from you. Best wishes for the coming new year, Mary Keane, Senior Development Manager, Oracle Tutor and BPM

    Read the article

  • 13 Things From the Oracle Social Summit You Should Know

    - by Mike Stiles
    Oracle held its first annual Oracle Social Summit, "The School for the Socially Gifted," this past week in Las Vegas. If anyone came to the event uncertain as to why Oracle has such an interest in social, and what its plans for social are, they left with an entirely new vision of where social is headed, and why. For those unable to attend, I was able to keep my MacBook charged just long enough to capture some of the more pertinent takeaways.

    1. The social enterprise is inevitable. Social technology is disrupting the hierarchies of big companies. It's a revolution in corporate structures, just as it has been in various governments. It's not crazy to ask yourself if your CEO is the next Mubarak. (David Kirkpatrick, author of "The Facebook Effect" and founder of the Techonomy Conference)
    2. The social enterprise represents collaboration on steroids. It's tapping into the power of your people, as opposed to keeping them "in their place."
    3. 1 in every 7 humans on earth is an active Facebook user. 75% have posted a negative comment after a poor customer experience. The average user will inform 53 people of a bad experience.
    4. Checking social media is the 2nd biggest use of phones now. Reading posts from brands is 4th.
    5. 70% of marketers have little or no understanding of the social conversations happening around their brand.
    6. Advertising, when done well, is content we care about, preferably informed by those we trust.
    7. Acquiring low-quality fans through gimmicks, or focusing purely on fan acquisition, is a mistake. And relying purely on organic distribution is a mistake. (John Yi, Head of Marketing Partnerships – Facebook)
    8. Using all this newfound data and insight serves to positively affect the customer experience. It allows organizations to leverage the investments they've made in social up to now.
    9. Social is not a marketing utopia where everything is free. It's pay to play. The paid component is about driving attention.
    10. We are only in the infancy of ad-targeting opportunities in social. There's an evolution underway from interest-based targeting to action-based targeting.
    11. There's actually very little overlap between the people following you on different social platforms. Don't assume it's the same audience on each.
    12. People who can create content and who also have an understanding of what drives that content are growing increasingly valuable.
    13. Oracle Social's future is enterprise SRM, integrated across marketing, selling, service, HR, and every other corner of the organization.

    And in case you thought those were the only gems to come out of the summit, you may want to keep an eye out for Tuesday's Social Spotlight, ever so aptly titled "13 More Things from the Oracle Social Summit You Should Know."

    Read the article

  • Hey Retailers, Are You Ready For The Holiday Season?

    - by Jeri Kelley
    With online holiday spending reaching $35.3 billion in 2011 and American shoppers spending just under $750 on average on their holiday purchases this year, how ready is your business for the 2012 holiday season? Today's shoppers do not take their purchases lightly. They are more connected, interact with more resources to make decisions, diligently compare products and services, seek out the best deals, and ask for input from friends and family. This holiday season, as consumers browse for apparel, tablets, toys, and much more, they will be bombarded with retailer communication - from emails and commercials to countless search engine results and social recommendations. With a flurry of activity coming at consumers from every channel and competitor, your success this year will rely on communicating a consistent, personalized message no matter where your customers are shopping. Here are a few ideas to help with your commerce strategy this holiday season:

    CONSISTENCY COUNTS FOR MULTICHANNEL SHOPPERS
    According to a November 2011 study commissioned by Oracle, "Channel Commerce 2011: The Consumer View," 54% of consumers in the U.S. and Canada regularly employ two or more channels before they make a purchase. While each channel has its own unique benefit, user profile, and purpose, it's critical that your shoppers have a consistent core experience wherever they're looking for information or making a purchase. Be sure consumers can consistently search and browse the same product information and receive the same promotions online, on their mobile devices, and in-store.

    USE YOUR CUSTOMER'S CONTEXT TO SURFACE RELEVANT CONTENT
    Your Web site is likely the hub of your holiday activity. According to a Monetate infographic, 39% of shoppers will visit your Web site directly to find out about the best holiday deals. Use everything you know about your customers, from past purchase data to browsing history, to provide a relevant experience at every click, and assemble content in a context that entices shoppers to buy online, or influences an offline purchase.

    TAKE ADVANTAGE OF MOBILE BEHAVIOR
    Having a mobile program is no longer a choice. Armed with smartphones and tablets, consumers now have access to more and more product information and can compare products and prices from anywhere. In fact, approximately 52% of smartphone users will use their device to research products, redeem coupons and use apps to assist in their holiday gift purchase. At a minimum, be sure your mobile environment has store information, consistent pricing and promotions, and simple checkout capabilities.

    ARM IN-STORE ASSOCIATES WITH TABLETS
    According to RISNews.com, 31% of retailers plan to begin testing tablets in stores in 2012, 22% have already begun such testing, and 6% have fully deployed tablets within stores. Take advantage of this compelling sales tool to get shoppers interacting with videos, user reviews, how-to guides, side-by-side product comparisons, and specs. Automatically trigger upsell and cross-sell suggestions for store associates to recommend for each product or category, build in alerts for promotions, and allow associates to place orders and check inventory from their tablet.

    WISDOM OF THE CROWDS IS GOOD, BUT WISDOM FROM FRIENDS IS BETTER
    Shoppers who grapple with options are looking for recommendations; they'd rather get advice from friends, and they're more likely to spend more while doing so. In fact, according to an infographic by Mr. Youth, 66% of social media users made a purchase on Black Friday or Cyber Monday as a direct result of social media interactions with brands or family. This holiday season, be sure you are leveraging your social channels, from Facebook to Pinterest, to drive consistent promotions and help your brand become part of the conversation.

    So, are you ready for the holidays this year?

    Read the article

  • ArchBeat Link-o-Rama Top 10 for September 9-15, 2012

    - by Bob Rhubart
    The Top 10 most-viewed items shared on the OTN ArchBeat Facebook page for the week of September 9-15, 2012.

    15 Lessons from 15 Years as a Software Architect | Ingo Rammer
    In this presentation from the GOTO Conference in Copenhagen, Ingo Rammer shares 15 tips regarding people, complexity and technology that he learned doing software architecture for 15 years.

    Attend OTN Architect Day – by Architects, for Architects – October 25
    You won't need 3D glasses to take in these live presentations (8 sessions, two tracks) on Cloud computing, SOA, and engineered systems. And the ticket price is: Zero. Nothing. Absolutely free. Register now for Oracle Technology Network Architect Day in Los Angeles. Thursday October 25, 2012, 8:00 a.m. – 5:00 p.m. Sofitel Los Angeles, 8555 Beverly Boulevard, Los Angeles, CA 90048.

    Cloud API and service designers, stop thinking small | Cloud Computing - InfoWorld
    "The focus must shift away from fine-grained APIs that provide some type of primitive service, such as pushing data to a block of storage or perhaps making a request to a cloud-rooted database," says InfoWorld's David Linthicum. "To go beyond primitives, you must understand how these services should be used in a much larger architectural context. In other words, you need to understand how businesses will employ these services to form real workplace solutions—inside and outside the enterprise."

    Adding a runtime picker to a taskflow parameter in WebCenter | Yannick Ongena
    Oracle ACE Yannick Ongena shows how to create an Oracle WebCenter popup to allow users to "select items or do more complex things."

    Oracle IAM 11g R2 docs are now available
    "One of the great things about the new doc set is the inclusion of ePub files," says Fusion Middleware A-Team blogger Chris Johnson. "This means that if you have an iPad you can load up the doc library onto that and read the docs on the couch."

    Setting up a local Yum Server using the Exalogic ZFS Storage Appliance | Donald
    A concise technical post from the man named Donald.

    What's New in Oracle VM VirtualBox 4.2? | The Fat Bloke Sings
    "One of the trends we've seen is that as the average host platform becomes more powerful, our users are consistently running more and more vm's," says The Fat Bloke. "Some of our users have large libraries of vm's of various vintages, whilst others have groups of vm's that are run together as an assembly of the various tiers in a multi-tiered software solution, for example, a database tier, middleware tier, and front-ends." The new VirtualBox release, a year in the making, addresses the needs of these users, he explains.

    Configuring Oracle Business Intelligence 11g MDS XML Source Control Management with Git Version Control | Christian Screen
    Oracle ACE Christian Screen developed this tutorial for those interested in learning how to configure the Oracle Business Intelligence 11g (11.1.1.6) metadata repository for development using the new MDS XML source control management functionality.

    Identity and Access Management at Oracle Open World 2012 | Brian Eidelman
    Fusion Middleware A-Team blogger Brian Eidelman highlights three Oracle OpenWorld sessions that will put Identity and Access Management in the spotlight, and shares a link to the "Focus On: Identity Management" document, a comprehensive listing of OpenWorld activities also dealing with IM.

    Starting and stopping WebLogic automatically using Upstart | Chris Johnson
    "In Ubuntu, RedHat and Oracle Linux there's a new flavor of init called Upstart that all the kids are using," says Oracle Fusion Middleware A-Team member Chris Johnson. "It's the new hotness when it comes to making programs into daemons and wiring them to start and stop at appropriate times."

    Thought for the Day
    "The purpose of software engineering is to control complexity, not to create it." — Pamela Zave
    Source: SoftwareQuotes.com

    Read the article

  • How to Convert Videos to 3GP for Mobile Phones

    - by DigitalGeekery
    Would you like to play videos on your phone, but the device only supports 3GP files? We'll show you how to convert popular video files into 3GP mobile phone video format with Pazera Free Video to 3GP Converter. Download the Pazera Free Video to 3GP Converter (download link below). It will allow you to convert popular video files (AVI, MPEG, MP4, FLV, MKV, and MOV) to work on your mobile phone. There is no installation to run. You'll just need to unzip the download folder and double-click the videoto3gp.exe file to run the application. To add video files to the queue, click on the Add files button. Browse for your file, and click Open. Your video will be added to the Queue. You can add multiple files to the queue and convert them all at one time. The converter comes with several pre-configured profiles for conversion settings. To load a profile, select one from the Profile drop down list and then click the Load button. The settings in the panels at the bottom of the application will be automatically updated. If you are a more advanced user, the options on the lower panels allow for adjusting settings to your liking. You can choose between 3GP and 3G2 (for some older phones), H.263, MPEG-4, and XviD video codecs, AAC or AMR-NB audio codecs, as well as a variety of bitrates, resolutions, etc. By default, the converted file will be output to the same location as the input directory. You can change it by clicking the text box input radio button and browsing for a different folder. Click Convert to start the conversion process. A conversion output box will open and display the progress. When finished, click Close. Now you're ready to load the video onto your phone and enjoy.

    Conclusion: Pazera Free Video to 3GP Converter is not exactly the ultimate video conversion tool, but it is quick and simple enough for the average user to convert most video formats to 3GP. Plus, it's portable. You can copy the folder to a USB drive and take it with you. Do you have some 3GP video files you'd like to convert to more common formats? Check out our earlier article on how to convert 3GP to AVI and MPEG for free. Link: Download Pazera Free Video to 3GP Converter

    Read the article

  • How does the number of indexes built on a table impact performance?

    - by Davide Mauri
    We all know that putting too many indexes (I'm talking about non-clustered indexes only, of course) on a table may produce performance problems, due to the overhead that each index brings to all insert/update/delete operations on that table. But how much? I mean, we all agree – I think – that, generally speaking, having many indexes on a table is "bad". But how bad can it be? How much will performance degrade? And on a concurrent system, how much can this situation also hurt SELECT performance? If SQL Server takes more time to update a row in a table due to the number of indexes it also has to update, locks will also be held longer, slowing down the perceived performance of all queries involved. I was quite curious to measure this, also because when teaching it's by far more impressive and effective to show attendees a chart with the measured impact, so that they can really "feel" what it means!

    To do the tests, I've created a script that creates a table (with a clustered index on the primary key, which is an identity column), loads 1000 rows into the table (inserting the 1000 rows with a single insert statement, instead of issuing 1000 single-row inserts, in order to minimize the transaction-handling overhead that would otherwise be added), and measures the time taken to do it. The process is then repeated 16 times, each time adding a new index to the table, using columns from the table in a round-robin fashion. Tests are done against different row sizes, so that it's possible to check whether performance changes depending on row size.

    The results are interesting, although expected. This is the chart showing how much time it takes to insert 1000 rows into a table that has from 0 to 16 non-clustered indexes. Each test has been run 20 times in order to have an average value, and the values have been cleaned of outliers caused by unpredictable performance fluctuations due to machine activity. The test shows that in a table with a row size of 80 bytes, 1000 rows can be inserted in 9.05 msec if no indexes are present on the table, and the value grows to 88 (!!!) msec when you have 16 indexes on it. This means an impact on performance of 975%, roughly a tenfold slowdown. That's *huge*!

    Now, what happens if we have a bigger row size? Say that we have a table with a row size of 1520 bytes. Here's the data, from 0 to 16 indexes on that table: in this case we need nearly 22 msec to insert 1000 rows into a table with no indexes, but we need more than 500 msec if the table has 16 active indexes! Now we're talking about a 2410% impact on performance!

    Now we have a tangible idea of the impact of having (too?) many indexes on a table, and also of how the size of a row affects performance. That's why the golden rule of OLTP databases, "few indexes, but good", is so true! (And in fact last week I saw a database with tables with a 1700-byte row size and 23 (!!!) indexes on them!) This also means that a too-heavy denormalization is really not a good idea (we're always talking about OLTP systems, keep that in mind), since performance gets worse as row size increases. So, be careful out there, and keep in mind that "equilibrium" is the key word for a database professional: equilibrium between read and write performance, between normalization and denormalization, between too few and too many indexes.

    PS: Tests were done on a VMware Workstation 7 VM with 2 CPUs and 4 GB of memory. The host machine is a Dell Precision M6500 with an i7 Extreme X920 quad-core HT 2.0 GHz processor and 16 GB of RAM. The database is stored on an Intel X25-E SSD drive, uses the Simple recovery model, and runs on SQL Server 2008 R2. If you want to run the tests on your own, you can download the test script here: Open TestIndexPerformance.sql

    Read the article

  • Oracle Retail Mobile Point-of-Service

    - by David Dorf
    When most people discuss mobile in retail, they immediately go to shopping applications. While I agree the consumer side of mobile is huge, I believe it's also important to arm store associates with mobile tools. There are around a dozen major roll-outs of mobile POS to chain retailers, and all have been successful. This does not, however, signal the demise of traditional registers. Retailers will adopt mobile POS slowly and reduce the number of fixed registers over time, but there's likely to be a combination of both for the foreseeable future. Even Apple retains at least one fixed register in every store; you just have to know where to look.

    The business benefits for mobile POS are pretty straightforward:

    1. Faster checkout. Walmart's CFO recently reported that for every second they shave off the average transaction time, they can potentially save $12M a year in labor. I think it's more likely that labor will be redeployed to enhance the customer experience.

    2. Smarter associates. The sales associates on the floor need the same access to information that consumers have, if not more. They need ready access to product details, reviews, inventory, etc. to meet consumer expectations. In a recent study, 40% of consumers said a savvy store associate can impact their final product selection more than a website.

    3. Lower costs. Mobile POS hardware (iPod touch + sled) costs about a fifth of what fixed registers cost, not to mention the reclaimed space that can be used for product displays.

    But almost all mobile POS solutions can claim those benefits equally. Where there's differentiation is on the technical side. Oracle recently announced availability of the Oracle Retail Mobile Point-of-Service, and it has three big technology advantages in the market:

    1. Portable. We used a popular open-source component called PhoneGap that abstracts the app from the underlying OS and hardware so that iOS, Android, and other platforms could be supported. Further, we used Web technologies such as HTML5 and JavaScript, which are commonly known by many programmers, as opposed to Objective-C, which is more difficult to find. The screen can adjust to different form factors and sizes, just like you see with browsers. In the future, when a new, zippy device gets released, retailers will have the option to move to that device more easily than if they used a native app.

    2. Flexible. Our Mobile POS is free with the Oracle Retail Point-of-Service product. Retailers can use any combination of fixed and mobile registers, and those ratios can change as required. Perhaps start with 1 mobile and 4 fixed per store, then transition over time to 4 mobile and 1 fixed without any additional software licenses. Our scalable solution supports lots of combinations.

    3. Consistent. Because our Mobile POS is fully integrated with our traditional POS, the same business logic is reused. Third-party mobile POS solutions often handle pricing, promotions, and tax calculations separately, leading to possible inconsistencies within the store. That won't happen with Oracle's solution.

    For many retailers, mobile POS can lower costs, increase customer service, and generally enhance a consumer's in-store experience. Apple led the way, but lots of other retailers are discovering the many benefits of adding mobile capabilities in their stores. Just be sure to examine both the business and technology benefits so you get the most value from your solution for the longest period of time.

    Read the article

  • Expanding on requestaudit - Tracing who is doing what...and for how long

    - by Kyle Hatlestad
    One of the most helpful tracing sections in WebCenter Content (and one that is on by default) is the requestaudit tracing. This tracing section summarizes the top service requests happening in the server along with how they are performing. By default, it has 2 different rotations. One happens every 2 minutes (listing up to 5 services) and another happens every 60 minutes (listing up to 20 services). These traces provide the total time for all the requests against each service, along with the number of requests and the average request time. This information can provide a good start in troubleshooting performance issues or tracking down a particular issue.

    >requestaudit/6 12.10 16:48:00.493 Audit Request Monitor !csMonitorTotalRequests,47,1,0.39009329676628113,0.21034042537212372,1
    >requestaudit/6 12.10 16:48:00.509 Audit Request Monitor Request Audit Report over the last 120 Seconds for server wcc-base_4444****
    requestaudit/6 12.10 16:48:00.509 Audit Request Monitor -Num Requests 47 Errors 1 Reqs/sec. 0.39009329676628113 Avg. Latency (secs) 0.21034042537212372 Max Thread Count 1
    requestaudit/6 12.10 16:48:00.509 Audit Request Monitor 1 Service FLD_BROWSE Total Elapsed Time (secs) 3.5320000648498535 Num requests 10 Num errors 0 Avg. Latency (secs) 0.3531999886035919
    requestaudit/6 12.10 16:48:00.509 Audit Request Monitor 2 Service GET_SEARCH_RESULTS Total Elapsed Time (secs) 2.694999933242798 Num requests 6 Num errors 0 Avg. Latency (secs) 0.4491666555404663
    requestaudit/6 12.10 16:48:00.509 Audit Request Monitor 3 Service GET_DOC_PAGE Total Elapsed Time (secs) 1.8839999437332153 Num requests 5 Num errors 1 Avg. Latency (secs) 0.376800000667572
    requestaudit/6 12.10 16:48:00.509 Audit Request Monitor 4 Service DOC_INFO Total Elapsed Time (secs) 0.4620000123977661 Num requests 3 Num errors 0 Avg. Latency (secs) 0.15399999916553497
    requestaudit/6 12.10 16:48:00.509 Audit Request Monitor 5 Service GET_PERSONALIZED_JAVASCRIPT Total Elapsed Time (secs) 0.4099999964237213 Num requests 8 Num errors 0 Avg. Latency (secs) 0.051249999552965164
    requestaudit/6 12.10 16:48:00.509 Audit Request Monitor ****End Audit Report*****

    To change the default rotation or size of output, these can be set as configuration variables for the server:

    RequestAuditIntervalSeconds1 – Used for the shorter of the two summary intervals (default is 120 seconds)
    RequestAuditIntervalSeconds2 – Used for the longer of the two summary intervals (default is 3600 seconds)
    RequestAuditListDepth1 – Number of services listed for the first request audit summary interval (default is 5)
    RequestAuditListDepth2 – Number of services listed for the second request audit summary interval (default is 20)

    If you want to get more granular, you can enable 'Full Verbose Tracing' from the System Audit Information page, and you will then get an audit entry for each and every service request.

    >requestaudit/6 12.10 16:58:35.431 IdcServer-68 GET_USER_INFO [dUser=bob][StatusMessage=You are logged in as 'bob'.] 0.08765099942684174(secs)

    What's nice is that it reports who executed the service and how long that particular request took. In some cases, depending on the service, additional information relevant to that service will be added to the tracing.

    >requestaudit/6 12.10 17:00:44.727 IdcServer-81 GET_SEARCH_RESULTS [dUser=bob][QueryText=%28+dDocType+%3cmatches%3e+%60Document%60+%29][StatusCode=0][StatusMessage=Success] 0.4696030020713806(secs)

    You can even go into more detail and insert additional data into the tracing. You simply need to add a configuration variable with a comma-separated list of variables from local data to insert:

    RequestAuditAdditionalVerboseFieldsList=TotalRows,path

    In this case, for any search results, the number of items the user found is traced:

    >requestaudit/6 12.10 17:15:28.665 IdcServer-36 GET_SEARCH_RESULTS [TotalRows=224][dUser=bob][QueryText=%28+dDocType+%3cmatches%3e+%60Application%60+%29][Sta...

    I also recently ran into the case where services were being called from a client through RIDC. All of the services were being executed as the same user, but they wanted to correlate the requests coming from the client to the ones being executed on the server. So what we did was add a new field to the request audit list:

    RequestAuditAdditionalVerboseFieldsList=ClientToken

    And then in the RIDC client, ClientToken was added to the binder along with a unique value that could be traced for that request. Now they had a way of tracing on both ends and identifying exactly which client request resulted in which request on the server.

    Read the article

  • Managing Social Relationships for the Enterprise – Part 2

    - by Michael Snow
    Reggie Bradford, Senior Vice President, Oracle

    On September 13, 2012, I sat down with Altimeter Analyst Jeremiah Owyang to talk about how enterprise businesses are approaching the management of both their social media strategies and internal structures. There's no longer any question as to whether companies are adopting social full throttle. That's exactly the way it should be, because it's a top online behavior across all age groups. For your consumers, it's an ingrained, normal form of communication. And beyond connecting with friends, social users are reaching out for information and service from brands. Jeremiah tells us 29% of Twitter followers follow a brand and 58% of Facebook users have "Liked" a brand. Even on the B2B side, people act on reviews and recommendations. Just as in the early 90's we saw companies move from static to dynamic web sites, businesses of all sizes are moving from just establishing a social presence to determining effective and efficient ways to use it. I like to say we're in the 2nd or 3rd inning of a 9-inning game. Corporate social started out as a Facebook page; now it's multiple channels servicing customers wherever they are. Social is also moving from merely moderating to analyzing, so that the signal can be separated from the noise and impactful influencers can be separated from other users. Organizationally, social started with the marketers. Now we're getting into social selling, commerce, service, HR, recruiting, and collaboration. That's Oracle's concept of enterprise social relationship management, a framework to extend social across the entire organization, in real time, in as holistic a way as possible. Social requires more corporate coordination than ever before. One of my favorite statistics is that the average corporation at enterprise level has 178 social accounts, according to Altimeter. Not all of them active, not all of them necessary, but 178 of them. That kind of fragmentation creates risk, so the smarter companies will look for solutions (as opposed to tools) that can organize, scale and defragment, as well as quickly integrate other networks and technologies that will come along. Our conversation goes deep into the various corporate social structures we're seeing, as well as the advantages and disadvantages of each. There are also a couple of great examples of how known brands used an integrated, holistic approach to achieve stated social goals. What's especially exciting to me is that the Oracle SRM framework for the enterprise provides companywide integration into one seamless system. This is not a dream. This is going to have substantial business impact in the next several years.

    Read the article

  • New spreadsheet accompanying SmartAssembly 6.0 provides statistics for prioritizing bug fixes

    - by Jason Crease
    One problem developers face is how to prioritize the many voices providing input into software bugs. If there is something wrong with a function that is the darling of a particular user, he or she tends to want action - now! The developer's dilemma is how to ascertain whether the problem is major or minor, and when it should be addressed. Now there is a new spreadsheet accompanying SmartAssembly that provides exactly that information in an objective manner. This might upset those used to getting their way by being the loudest or pushiest, but ultimately it will ensure that the biggest problems get the priority they deserve.

    Here's how it works: Feature Usage Reporting (FUR) in SmartAssembly 6.0 provides a wealth of data about how your software is used by its end-users, but in the SmartAssembly UI the data isn't mined to its full extent. The new Excel spreadsheet for FUR extracts statistics from that data and presents them in easy-to-understand forms. I developed the spreadsheet feature in Microsoft Excel, using a fair amount of VBA. The spreadsheet connects directly to the database which stores the feature-usage data, and shows a wide variety of statistics and tables extracted from that data. You want to know what percentage of users have used the 'Export as XML' button? No problem. How popular is v5.3 compared to v5.1? There are graphs for that. You need to know whether you have more users in Russia or Brazil? There's a big pie chart for that. I recently witnessed the spreadsheet in use here at Red Gate Software.

    My bug is exposed as minor

    While testing new features in .NET Reflector, I found a usability bug in the Refresh button and filed it in the Red Gate bug-tracking system. The bug was labelled "V.NEXT MINOR," which means it would be fixed in the next point release. Although I'm a professional tester, I'm not much different than most software users when they discover a bug that affects them personally: I wanted it fixed immediately. There was an ulterior motive at play here, of course. I would get to see my colleagues put the spreadsheet to work. The Reflector team loaded up the spreadsheet to view the feature-usage statistics that SmartAssembly collected for the Refresh button. The resulting statistics showed that only 8% of users have ever pressed the Refresh button, and only 2.6% of sessions involve pressing the button. When Refresh is used, it's only pressed on average 1.6 times a session, with a maximum of 8 times during a session. This was in stark contrast to what I was doing as a conscientious tester: pressing it dozens of times per session. The spreadsheet provided evidence that my bug was a minor one.

    On to more serious things

    Based on the solid evidence uncovered by the spreadsheet, the Reflector team concluded that my experience does not represent that of the vast majority of Reflector's recorded users. The Reflector team had ample data to send me back to my desk and keep the bug classified as "V.NEXT MINOR." The team then went back to fixing more serious bugs. If I put myself in the shoes of the user, I might not be thoroughly happy, but I cannot deny that the evidence clearly placed me in a very small minority. Next time I'm hoping the spreadsheet will prove that my bug is more important. Find out more about Feature-Usage Reporting here. The spreadsheet is available for free download here.

    Read the article

  • Games development with a game loop that's abstracted away

    - by Davy8
    Most game development happens with a main game loop. Are there any good articles/blog posts/discussions about games without a game loop? I imagine they'd mostly be web games, but I'd be interested in hearing otherwise. (As a side note, I think it's really interesting that the concept is almost exclusively used in gaming as far as I'm aware; perhaps that may be another question.)

    Edit: I realize there's probably a redraw loop somewhere. I guess what I really mean is a loop that is hidden from you. Frames are something you as the developer are not concerned with, as you're working at a higher level of abstraction. E.g. someLootItem.moveTo(inventory, someAnimationType), and that will move the item from the loot box to your inventory using the specified animation type, without the game developer having to worry about the implementation details of that animation. Maybe that's how "real" games end up working, but from reading most tutorials they seem to imply that a much more granular level of control is used, though that might just be an artifact of being a tutorial.

    Edit2: I think most people are misunderstanding what I'm trying to ask, likely because I'm having trouble describing exactly what I'm trying to ask. After some more thinking, perhaps what I'm referring to is more along the lines of what I believe is referred to as "scripting", where you're working at a very high level and having some game engine take care of the low-level details. For example, take custom maps in Starcraft II or Warcraft III. Many of the "maps" have gameplay that deviates enough from the primary game that they could be considered a separate game written on the same engine. What I'm referring to then is along those lines. I may be wrong because I only dabbled in the Warcraft III editor, but as far as I remember nowhere in the map editor do you control the game loop, and yet you can create many different games out of it. In my mind, these are games in their own right. If you're playing DotA you don't say you're playing Warcraft III, you say you're playing DotA, because that's the actual game you're playing. Such a system may impose limitations that don't exist if you're creating a game from scratch, but it greatly reduces development time because much of the "hard" work has already been done for you. Hopefully that clarifies what I'm asking.

    Another example of what I mean is when you write a web app: of course it communicates through sockets and TCP, but the average web developer doesn't explicitly write code for connecting sockets. They just need to know about receiving a request and sending a response. There are unique scenarios where you do occasionally need to use raw sockets, but it's generally rare in web development. In a similar fashion, it's very possible to write a game without directly using the game loop, even though one is used behind the scenes. Probably not a AAA title, but there must be hundreds of smaller-scale games that can be, and possibly are, written this way. Are there any good resources on writing these "simpler" games?
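    To make the idea concrete, here is a minimal sketch in plain C++ of the kind of high-level API described above. All of the names (Engine, Item, moveTo) are hypothetical and purely illustrative; the point is only that the calling code states intent while a hidden update loop inside the engine advances the animation frame by frame.

        #include <algorithm>
        #include <iostream>
        #include <string>
        #include <vector>

        // Hypothetical high-level game object: game code calls Engine::moveTo()
        // and never touches the frame loop directly.
        struct Item {
            std::string name;
            float x;
            float y;
        };

        // One queued animation: move an item toward a target over a fixed duration.
        struct MoveAnimation {
            Item* item;
            float targetX, targetY;
            float duration;   // seconds
            float elapsed;
            float startX, startY;
        };

        class Engine {
        public:
            // The "scripting" call: the developer states intent, not per-frame math.
            void moveTo(Item& item, float x, float y, float duration) {
                animations_.push_back({&item, x, y, duration, 0.0f, item.x, item.y});
            }

            // The hidden loop. In a real engine this would be driven by the
            // renderer/redraw cycle rather than a plain counted loop.
            void run(float frameTime, int frames) {
                for (int f = 0; f < frames; ++f) update(frameTime);
            }

        private:
            void update(float dt) {
                for (auto& a : animations_) {
                    if (a.elapsed >= a.duration) continue;
                    a.elapsed += dt;
                    float t = std::min(a.elapsed / a.duration, 1.0f);
                    a.item->x = a.startX + (a.targetX - a.startX) * t;
                    a.item->y = a.startY + (a.targetY - a.startY) * t;
                }
            }
            std::vector<MoveAnimation> animations_;
        };

        int main() {
            Engine engine;
            Item loot{"loot", 0.0f, 0.0f};

            // High-level "script": no frame bookkeeping in the calling code.
            engine.moveTo(loot, 100.0f, 50.0f, 1.0f);
            engine.run(1.0f / 60.0f, 60);   // the hidden loop advances 60 frames

            std::cout << loot.name << " ended at (" << loot.x << ", " << loot.y << ")\n";
            return 0;
        }

    In a real engine the run() call would be whatever redraw/update cycle the engine already owns; the "scripted" game code above it never needs to see it.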

    Read the article

  • WebLogic Server Performance and Tuning: Part II - Thread Management

    - by Gokhan Gungor
    WebLogic Server, like any other Java application server, provides resources that your applications use to provide services. Unfortunately none of these resources are unlimited, and they must be managed carefully. One of these resources is threads, which are pooled to provide better throughput and performance along with fast response times, and to avoid deadlocks. Threads are the execution points through which WebLogic Server delivers its power and executes work. Managing threads is very important because it can affect the overall performance of the entire system.

    In releases prior to WebLogic Server 9.0 we had multiple execute queues and user-defined thread pools. There were different queues for different types of work, each with a fixed number of execute threads. Tuning these thread pools and finding the proper number of threads was time consuming and required many trials. WebLogic Server 9.0 and the following releases use a single thread pool and a single priority-based execute queue. All types of work are executed in this single thread pool. Its size (thread count) is automatically decreased or increased (self-tuned). The new "self-tuning" system simplifies getting the proper number of threads and utilizing them.

    A work manager allows your applications to run concurrently in multiple threads. A work manager is a mechanism that allows you to manage and utilize threads and to create rules/guidelines to follow when assigning requests to threads. We can set a scheduling guideline or prioritize a request with a work manager and then associate this work manager with one or more applications. At run time, WebLogic Server uses these guidelines to assign pending work/requests to execution threads. The position of a request in the execute queue is determined by its priority. There is a default work manager, and it should be sufficient for most applications. However, there can be cases where you want to change this default configuration; your application(s) may be providing services that need a mixture of fast response times and long-running processes like batch updates. Be aware, though, that a wrong work manager configuration can lead to a performance penalty where you expected an improvement.

    We can define/configure work managers at:
    •    Domain level: config.xml
    •    Application level: weblogic-application.xml
    •    Component level: weblogic-ejb-jar.xml or weblogic.xml (for a specific web application, use weblogic.xml)

    We can use the following predefined rules/constraints to manage the work:
    •    Fair Share Request Class: Specifies the average thread-use time required to process requests. The default is 50.
    •    Response Time Request Class: Specifies a response time goal in milliseconds.
    •    Context Request Class: Assigns request classes to requests based on context information.
    •    Min Threads Constraint: Guarantees the number of threads the server will allocate to requests.
    •    Max Threads Constraint: Limits the number of concurrent threads executing requests.
    •    Capacity Constraint: Causes the server to reject requests only when it has reached its capacity.

    Let's create a work manager for a long-running piece of work in our application. Go to the WebLogic console and select Environment | Work Managers from the domain structure tree. Click the New button, select Work Manager, and click Next. Enter a name for the work manager and click Next. Then select the managed server instance(s) or clusters from the available targets (the ones your long-running application is deployed to) and click Finish.

    Click on MyWorkManager, open the Configuration tab, check Ignore Stuck Threads, and save. This will prevent WebLogic from treating the long-running process (one taking more than a specified time) as stuck, and will allow it to finish.

    Read the article

  • Crime Scene Investigation: SQL Server

    - by Rodney Landrum
    “The packages are running slower in Prod than they are in Dev.” My week began with this simple declaration from one of our lead BI developers, quickly followed by an emailed spreadsheet demonstrating that, over 5 executions, an extensive ETL process was running on average 630 seconds faster on Dev than on Prod. The situation needed some scientific investigation to determine why the same code, the same data, and the same schema would yield consistently slower results on a more powerful server. Prod had yet to be officially christened with a “Go Live” date, so I had the time, and having recently been binge-watching CSI: New York, I also had the inclination.

    An inspection of the two systems, Prod and Dev, revealed the first surprise: although Prod was indeed a “bigger” system, with double the amount of RAM of Dev, the latter actually had twice as many processor cores. On neither system did I see much sign of resources being heavily taxed while the ETL process was running. Without any real supporting evidence, I jumped to a conclusion that my years of performance tuning should have helped me avoid: that the hardware differences explained the better performance on Dev. We spent time setting up a Test system, similarly scoped to Prod except with 4 times the cores, and ported everything across. The results of our careful benchmarks left us truly bemused; the ETL process on the new server was slower than on both other systems. We burned more time tweaking server configurations, monitoring IO and network latency, several times believing we’d uncovered the smoking gun, until the results of subsequent test runs pitched us back into confusion.

    Finally, I decided, enough was enough. Hadn’t I learned very early in my DBA career that almost all bottlenecks were caused by code and database design, not hardware? It was time to get back to basics. With over 100 SSIS packages and hundreds of queries, each handling specific tasks such as file loads, bulk inserts, transforms, logging, and so on, the task seemed formidable. And yet, after barely an hour spent with Profiler, Extended Events, and wait statistics DMVs, I had a lead in the shape of a query that joined three tables containing millions of rows, returned 3279 results, but performed 239K logical reads. As soon as I looked at the execution plans for the query in Dev and Test I saw the culprit: an implicit conversion warning on a join predicate field that was numeric in one table and a varchar(50) in another! I turned this information over to the BI developers, who quickly resolved the data type mismatches and found and fixed “several” others as well. After the schema changes, the same query with the same databases ran in under 1 second on all systems and reduced the logical reads down to fewer than 300.

    The analysis also revealed that on Dev, the ETL task was pulling data across a LAN, whereas Prod and Test were connected across a slower WAN, in large part explaining why the same process ran slower on the latter two systems. Loading the data locally on Prod delivered a further 20% gain in performance.

    As we progress through our DBA careers we learn valuable lessons. Sometimes, with a project deadline looming and pressure mounting, we choose to forget them. I was close to giving in to the temptation to throw more hardware at the problem. I’m pleased at least that I resisted, though I still kick myself for not looking at the code on day one.
It can seem a daunting prospect to return to the fundamentals of the code so close to roll out, but with the right tools, and surprisingly little time, you can collect the evidence that reveals the true problem. It is a lesson I trust I will remember for my next 20 years as a DBA, if I’m ever again tempted to bypass the evidence.

    Read the article

  • Improving Click and Drag with C++

    - by Josh
    I'm currently using SFML 2.0 to develop a game in C++. I have a game sprite class that has a click and drag method. The method works, but there is a slight problem. If the mouse moves too fast, the object the user selected can't keep up and is left behind in the spot where the mouse left its bounds. I will share the class definition and the given function implementation.

    Definition:

        class codePeg
        {
        protected:
            FloatRect bounds;
            CircleShape circle;
            int xPos, yPos, xDiff, yDiff, once;
            int xBase, yBase;
            Vector2i mousePos;
            Vector2f circlePos;
        public:
            void init(RenderWindow& Window);
            void draw(RenderWindow& Window);
            void drag(RenderWindow& Window);
            void setPegPosition(int x, int y);
            void setPegColor(Color pegColor);
            void mouseOver(RenderWindow& Window);
            friend int isPegSelected(void);
        };

    Implementation of the "drag" function:

        void codePeg::drag(RenderWindow& Window)
        {
            mousePos = Mouse::getPosition(Window);
            circlePos = circle.getPosition();

            if(Mouse::isButtonPressed(Mouse::Left))
            {
                if(mousePos.x > xPos && mousePos.y > yPos
                   && mousePos.x - bounds.width < xPos
                   && mousePos.y - bounds.height < yPos)
                {
                    if(once)
                    {
                        xDiff = mousePos.x - circlePos.x;
                        yDiff = mousePos.y - circlePos.y;
                        once = 0;
                    }
                    xPos = mousePos.x - xDiff;
                    yPos = mousePos.y - yDiff;
                    circle.setPosition(xPos, yPos);
                }
            }
            else
            {
                once = 1;
                xPos = xBase;
                yPos = yBase;
                xDiff = 0;
                yDiff = 0;
                circle.setPosition(xBase, yBase);
            }
            Window.draw(circle);
        }

    Like I said, the function works, but to me the code is very ugly and I think it could be improved and made more efficient. The only thing I can think of as to why the object cannot keep up with the mouse is that there are too many function calls and/or checks. The user does not really have to move the mouse "fast" for it to happen; I would say at an average pace the object is left behind. How can I improve the code so that the object remains with the mouse when it is selected? Any help improving this code or giving advice is greatly appreciated.
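    For what it's worth, one common way to keep the object from being "dropped" when the cursor outruns it is to latch a dragging flag when the button is first pressed inside the shape, remember the grab offset once, and then keep following the mouse until the button is released, without re-testing the bounds every frame. Below is a rough, self-contained sketch of that idea using standard SFML 2.x types; it is an illustration of the approach, not a drop-in replacement for the class above.

        #include <SFML/Graphics.hpp>

        int main()
        {
            sf::RenderWindow window(sf::VideoMode(800, 600), "Drag sketch");
            window.setFramerateLimit(60);

            sf::CircleShape circle(40.f);
            circle.setFillColor(sf::Color::Green);
            circle.setPosition(100.f, 100.f);

            bool dragging = false;      // latched while the button is held
            sf::Vector2f grabOffset;    // cursor position relative to the shape

            while (window.isOpen())
            {
                sf::Event event;
                while (window.pollEvent(event))
                    if (event.type == sf::Event::Closed)
                        window.close();

                sf::Vector2f mouse(sf::Mouse::getPosition(window));

                if (sf::Mouse::isButtonPressed(sf::Mouse::Left))
                {
                    // Start the drag only if the press began inside the shape...
                    if (!dragging && circle.getGlobalBounds().contains(mouse))
                    {
                        dragging = true;
                        grabOffset = mouse - circle.getPosition();
                    }
                    // ...but once started, follow the cursor even if it outruns the shape.
                    if (dragging)
                        circle.setPosition(mouse - grabOffset);
                }
                else
                {
                    dragging = false;   // released: stop dragging, keep the new position
                }

                window.clear();
                window.draw(circle);
                window.display();
            }
            return 0;
        }

    The behavioral difference is that the drag only ends on mouse release, so a fast flick can no longer leave the shape behind mid-drag.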

    Read the article

  • Elo system behaves oddly in program I've created

    - by adc
    Alright, so I'm looking to build a small program (C# and XAML) that, essentially, does this: Generate array of players. Each player has a current rating and a true rating. I set current rating to 1200 as a starting point right now; I've also tried setting it to true rating and the average of the two. True rating is what their skill level actually is. Their true rating is calculated based on percentages from the current League of Legends rating system; generating an array of 970 thousand generates results very similar to the data from here: (removed due to URL limit - but trust me, the results are very similar). This array is of length specified by the user. If need be, sort the array from smallest to largest. Play X number of games, again specified by the user. This is done by taking the array of players (which is sorted by Current Rating after being created) and running through it in groups of 10. The first five are on team one, the second five are on team two. It then takes the True Rating of these players and calculates an expected chance to win using the Elo system. It generates a random double and compares it to the expected chance to win; if the number is lower, team one wins - otherwise team two wins. I then update the rating of the players via, again, the Elo system - giving the winning team a score of 1 and the losing team a score of 0. I use a K value of 36 (but have tried 12, 24, and even higher ones) and an F value of 400. After going through the entire loop of players (which I have conveniently forced to be a multiple of ten), it sorts the array - again via current rating. This, if my understanding of the Elo system is correct, runs properly. However, it doesn't seem to work. I have a running test telling me how many players of the full array are within 100 current rating of their true rating. I would expect some portion of the population to be outside this range (as probability is not always going to go in their favor), but a full 40-45% of the population is outside of this range. I also have it outputting the maximum difference between true and current rating - and I have never seen this drop below 500! It hovers between 550-600, occasionally going over or under. I'm at a loss as to what to change - I've fiddled with the K and F values, where I start all the players, etc. but nothing changes the fact that eventually a good 40% of the population is outside the range. And it isn't that I have it playing too few games - it's now run through over 60 thousand games and the problem never disappears or really fluctuates. The full C# code, including everything except the XAML file and the Player class (pastebin is being very slow and I can only post two links, so I can't link to the XAML file): http://pastebin.com/rFcZRL84 The Player class: http://pastebin.com/4cJTdTRu I guess my question is did I do anything wrong? Is there a problem with the way I implemented the system, or is it just that Riot uses a significantly modified Elo system? I don't think it's the latter, as that still wouldn't explain the massive true and current rating differences to me, however.
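    For reference, the standard Elo calculation the post describes (an expected score derived from the rating difference, followed by a K-factor update) looks roughly like this. The sketch below is written in C++ rather than the poster's C#, and the team ratings and helper names are illustrative only.

        #include <cmath>
        #include <iostream>

        // Expected score for a player (or team) rated rA against one rated rB,
        // using the logistic curve with scale factor F (400 in the post).
        double expectedScore(double rA, double rB, double F = 400.0)
        {
            return 1.0 / (1.0 + std::pow(10.0, (rB - rA) / F));
        }

        // Rating update after a game: score is 1 for a win, 0 for a loss.
        double updatedRating(double rating, double expected, double score, double K = 36.0)
        {
            return rating + K * (score - expected);
        }

        int main()
        {
            // Two teams represented by the average of their players' ratings.
            double team1 = 1250.0, team2 = 1175.0;
            double e1 = expectedScore(team1, team2);

            std::cout << "Team 1 expected win chance: " << e1 << "\n";
            std::cout << "Team 1 rating after a win:  " << updatedRating(team1, e1, 1.0) << "\n";
            std::cout << "Team 1 rating after a loss: " << updatedRating(team1, e1, 0.0) << "\n";
            return 0;
        }

    With F = 400, a 75-point rating gap gives the stronger side an expected score of roughly 0.60.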

    Read the article

  • Where Twitter Stands Heading Into 2013

    - by Mike Stiles
    As Twitter continued throughout 2012 to be a stage on which global politics and culture played itself out, the company itself underwent some adjustments that give us a good indication of what users and brands can expect from the platform in 2013. The power of the network did anything but fade. Celebrities continued to use it to connect one-on-one. Even the Pope signed on this year. It continued to fuel revolutions. It played an exceptionally large role in this US Presidential election. And around the world, the freedom to speak was challenged as users were fired, sued, sometimes even jailed for their tweets. Expect more of the same in 2013, as Twitter has entrenched itself, for individuals, causes and brands, as the fastest, easiest, most efficient way to message the masses so some measure of impact can come from it. It's changed everything, and it's not finished.

    These fun facts reveal the position of strength with which Twitter enters 2013:
    •    It now generates a billion tweets every 2.5 days
    •    It has 500 million+ users
    •    The average Twitter user has tweeted 307 times
    •    32% of everyone using the Internet uses Twitter
    •    It's expected to bring in $540 million in ad revenue by 2014
    •    11 new accounts are created every second

    High-level executive summary: people love it, people use it, and they're going to keep loving and using it. Whether or not outside developers love it is a different matter. 2012 marked a shift from welcoming the third-party support that played at least some role in Twitter being so warmly embraced, to discouraging anything that replicates what Twitter can do itself…or plans to do itself. It's not the open playground it once was. Now Twitter must spend 2013 proving it can innovate in-house and keep us just as entranced. Likewise, Twitter is distancing itself from Facebook. Images from the #1 platform's Instagram don't work on Twitter anymore, and Twitter is rolling out its own photo filter product. Where the two have lived in a "plenty of room for everybody" symbiosis up to now, 2013 could see the giants ramping up a full-on rivalry.

    Twitter is exhibiting a deliberate strategy. Updates have centered on more visually appealing search results, and on making finding and sharing content easier. Deals have been cut with some media entities so their content stands out. CEO Dick Costolo has said tweets aren't the attraction; they're what leads you to content. Twitter aims to be a key distributor of media and info. Add the addition of former News Corp. President Peter Chernin to the board, and their hashtag landing page experience for events, and their media behemoth ambitions get pretty clear. There are challenges ahead, and Costolo has also laid those out: entry into China, figuring out how to have Twitter deliver both comprehensive and relevant, targeted experiences, and the visualization of big data.

    What does this mean for corporations? They can expect a more media-rich evolution and a growing emphasis on imagery. They can expect more opportunities to create great media content and leverage Twitter for its distribution. And they can expect new ways to surface in searches. Are brands diving in? 56% of customer tweets to companies get completely and totally ignored. Ugh. A study Twitter recently conducted with Compete shows people who see tweets from retailers are more likely to buy a product. And, the more retailer tweets they see, the more likely they are to purchase on the retail site.
As more of those tweets point to engaging media content from the brand, the results should get even better. Twitter appears ready for 2013. Enterprise brands have some work to do. @mikestiles Photo: Stuart Miles, freedigitalphotos.net

    Read the article

  • Session Sharing with another User on *NIX and Windows

    - by Giri Mandalika
Oracle Solaris Since Solaris is not widely known for its graphical interface, let's just focus on sharing a terminal session in read-only mode with another user on the same system. Here is an example.

eg.,

% finger
Login       Name               TTY      Idle    When       Where
root        Super-User         pts/1            Sat 16:57  dhcp-amer-vpn-rmdc-a
sunperf     ???                pts/2    4       Sat 16:41  pitcher.sfbay.sun.com

In this example, two users, root and sunperf, are connected to the same system from two different terminals, pts/1 and pts/2 respectively. If the root user wants to show the sunperf user something -- what s/he is doing in her/his terminal, for example -- it can be accomplished with the following command.

script -a /dev/null | tee -a <target_terminal>

eg.,

# script -a /dev/null | tee -a /dev/pts/2
Script started, file is /dev/null
#
# uptime
  5:04pm  up 1 day(s),  2:56,  2 users,  load average: 0.81, 0.81, 0.81
#
# isainfo -v
64-bit sparcv9 applications
        crc32c cbcond pause mont mpmul sha512 sha256 sha1 md5 camellia kasumi des aes ima hpc vis3 fmaf asi_blk_init vis2 vis popc
32-bit sparc applications
        crc32c cbcond pause mont mpmul sha512 sha256 sha1 md5 camellia kasumi des aes ima hpc vis3 fmaf asi_blk_init vis2 vis popc v8plus div32 mul32
#
# exit
Script done, file is /dev/null

After the script .. | tee .. command, the sunperf user should be able to see the root user's stdin and stdout contents in her/his own terminal until the script session exits in the root user's terminal. Since this kind of sharing is based on capturing and redirecting the contents to the target terminal, the users on the receiving end won't be able to see whatever is being edited on the initiator's terminal [using editors such as vi]. Also it is not possible to share the session with any connected user on the system unless the initiator has the necessary permissions and privileges. The script utility records everything printed in a terminal session, while the tee utility replicates the contents of the screen capture on to the standard output of the target terminal. The tee utility does not buffer the output, so the screen capture from the initiator's terminal appears almost right away in the target terminal. Though I have never tested it, this technique may work on all *NIX and Linux flavors with little or no changes. Also there might be other ways to accomplish this. [Thanks to Sujeet for sharing this tip] Microsoft Windows Most Windows users may rely on VNC services to share a desktop session. Another way to share the desktop session is to use the Remote Desktop Connection (RDC) client. Here are the steps. Connect to the target Windows system using the Remote Desktop Connection client. Launch Windows Task Manager. Navigate to the "Users" tab. Find the user session that you want to connect to and have full control over as the other user who is currently holding that session. Select the user name in Windows Task Manager, right-click and choose the option "Remote Control". A window pops up on the other user's session with the message "<USER> is requesting to control your session remotely. Do you accept the request?" Once the other user says "Yes", you will be granted access to that session. From then on, both users should be able to see the same screen and even control the session from their respective workstations.
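As a rough illustration of the *NIX capture-and-redirect idea (not a replacement for the script | tee pipeline above), the following minimal Python sketch mirrors piped output to both the local terminal and a second terminal device. The /dev/pts/2 path is hypothetical, and, as noted above, the initiator needs write permission on the target device.

import sys

TARGET = "/dev/pts/2"   # hypothetical target terminal; adjust to the peer's tty

with open(TARGET, "w") as mirror:
    for line in sys.stdin:        # whatever the initiator pipes into this script
        sys.stdout.write(line)    # local copy
        sys.stdout.flush()
        mirror.write(line)        # replicated copy on the other user's terminal
        mirror.flush()            # flush immediately so the output appears right away

For example, running some_command | python mirror.py would show the command's output on both terminals, which is essentially what the tee side of the pipeline is doing.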

    Read the article

  • Sitting Pretty

    - by Phil Factor
Guest Editorial for the Simple-Talk IT Pro newsletter: 'DBAs and SysAdmins generally prefer an expression of calmness under adversity. It is a subtle trick, and requires practice in front of a mirror to get it just right. Too much adversity and they think you're not coping; too much calmness and they think you're under-employed' I dislike the term 'avatar', when used to describe a portrait photograph. An avatar, in the sense of a picture, is merely the depiction of one's role-play alter-ego, often a ridiculous bronze-age deity. However, professional image is important. The choice and creation of online photos has an effect on the way your message is received and it is important to get that right. It is fine to use that photo of you after ten lagers on holiday in an Ibiza nightclub, but what works on Facebook looks hilarious on LinkedIn. My splendid photograph that I use online was done by a professional photographer at great expense and I've never had the slightest twinge of regret when I remember how much I paid for it. It is me, but a more pensive and dignified edition, oozing trust and wisdom. One gasps at the magical skill that a professional photographer can conjure up, without digital manipulation, to make the best of a derisory noggin (ed: slang for a head). Even if he had offered to depict me as a semi-naked, muscle-bound, sword-wielding hero, I'd have demurred. No, any professional person needs a carefully cultivated image that looks right. I'd never thought of using that profile shot, though I couldn't help noticing the photographer flinch slightly when he first caught sight of my face. There is a problem with using an avatar. The use of a single image doesn't express the appropriate emotion. At the moment, it is weird to see someone with a laughing portrait writing something solemn. A neutral cast to the face, somewhat like a passport photo, is probably the best compromise. Actually, the same is true of a working life in IT. One of the first skills I learned was not to laugh at managers, but, instead, to develop a facial expression that promoted a sense of keenness, energy and respect. Every profession has its own preferred facial cast. A neighbour of mine has the natural gift of a face that displays barely repressed grief. Though he is characteristically cheerful, he earns a remarkable income as a pallbearer. DBAs and SysAdmins generally prefer an expression of calmness under adversity. It is a subtle trick, and requires practice in front of a mirror to get it just right. Too much adversity and they think you're not coping; too much calmness and they think you're under-employed. With an appropriate avatar, you could do away with a lot of the need for 'smilies' to give clues as to the meaning of what you've written on forums and blogs. If you had a set of avatars showing the full gamut of human emotions expressible in writing (rage, fear, reproach, joy, ebullience, apprehension, exasperation, dissembling, irony, pathos, euphoria, remorse and so on), it would be quite a drop-down list on forums, but given the vast prairies of space on the average hard drive, who cares? It would cut down on the number of spats in forums, just as long as one picks the right avatar. As an unreconstructed geek, I find it hard to admit to the value of image in the workplace, but it is true. Just as we use professionals to tidy up and order our CVs and job applications, we should employ experts to enhance our professional image. After all, you don't perform surgery or dentistry on yourself, do you?

    Read the article

  • WebCenter Customer Spotlight: Hyundai Motor Company

    - by me
Author: Peter Reiser - Social Business Evangelist, Oracle WebCenter. Solution Summary: Hyundai Motor Company is one of the world's fastest-growing car manufacturers, ranked as the fifth-largest in 2011. The company also operates the world's largest integrated automobile manufacturing facility in Ulsan, Republic of Korea, which can produce 1.6 million units per year. They undertook a project to improve business efficiency and reinforce data security by centralizing the company's sales, financial, and car manufacturing documents into a single repository. Hyundai Motor Company chose Oracle Exalogic, Oracle Exadata, Oracle WebLogic Server, and Oracle WebCenter Content 11g, as they provided better performance, stability, storage, and scalability than their competitors. Hyundai Motor Company cut the overall time spent each day on document-related work by around 85%, saved more than US$1 million in paper and printing costs, laid the foundation for a smart work environment, and supported their future growth in the competitive car industry. Company Overview: Hyundai Motor Company is one of the world's fastest-growing car manufacturers, ranked as the fifth-largest in 2011. The company also operates the world's largest integrated automobile manufacturing facility in Ulsan, Republic of Korea, which can produce 1.6 million units per year. The company strives to enhance its brand image and market recognition by continuously improving the quality and design of its cars. Business Challenges: To maximize the company's growth potential, Hyundai Motor Company undertook a project to improve business efficiency and reinforce data security by centralizing the company's sales, financial, and car manufacturing documents into a single repository. Specifically, they wanted to: introduce a smart work environment to improve staff productivity and efficiency, and take advantage of rapid company growth due to new, enhanced car designs; replace a legacy document system managed by individual staff to improve collaboration, the visibility of corporate documents, and sharing of work-related files between employees; improve the security and storage of documents containing corporate intellectual property, and prevent intellectual property loss when staff leave the company; eliminate delays when downloading files from the central server to a PC; build a large, single document repository to more efficiently manage and share data between 30,000 staff at the company's headquarters; and establish a scalable system that can be extended to Hyundai offices around the world. Solution Deployed: After conducting a large-scale benchmark test, Hyundai Motor Company chose Oracle Exalogic, Oracle Exadata, Oracle WebLogic Server, and Oracle WebCenter Content 11g, as they provided better performance, stability, storage, and scalability than their competitors. 
Business Results: Lowered the overall time spent each day on all document-related work by approximately 85%, from 4.5 hours to around 42 minutes on an average day; saved more than US$1 million per year in printer, paper, and toner costs, and laid the foundation for a completely paperless environment; reduced staff's time spent requesting and receiving documents about car sales or designs from supervisors by 50%, by storing and managing all documents across the corporation in a single repository; cut the time required to draft new-car manufacturing, sales, and design documents by 20%, by allowing employees to reference high-quality data, such as marketing strategy and product planning documents already in the system; enhanced staff productivity at company headquarters by 9% by reducing the document-related tasks of 30,000 administrative and research and development staff; and ensured the system could scale to hold 3 petabytes of car sales, manufacturing, and design data by 2013 and be deployed at branches worldwide. "We chose Oracle Exalogic, Oracle Exadata, and Oracle WebCenter Content to support our new document-centralization system over their competitors as Oracle offers stable storage for petabytes of data and high processing speeds. We have cut the overall time spent each day on document-related work by around 85%, saved more than US$1 million in paper and printing costs, laid the foundation for a smart work environment, and supported our future growth in the competitive car industry." Kang Tae-jin, Manager, General Affairs Team, Hyundai Motor Company. Additional Information: Hyundai Motor Company Customer Snapshot, Oracle WebCenter Content

    Read the article

  • Controlling soft errors and false alarms in SSIS

    - by Jim Giercyk
If you are like me, you dread the 3AM wake-up call. I would say that the majority of the pages I get are false alarms. The alerts that do require action often mean I have to open an SSIS package, see where the trouble is and try to identify the offending data. That can be very time-consuming and can take quite a chunk out of my beauty sleep. For those reasons, I have developed a simple error handling scenario for SSIS which allows me to rest a little easier. Let me first say, this is a high level discussion; getting into the nuts and bolts of creating each shape is outside the scope of this document, but if you have an average understanding of SSIS, you should have no problem following along. In the Data Flow above you can see that there is a caution triangle. For the purpose of this discussion I am creating a truncation error to demonstrate the process, so this is to be expected. The first thing we need to do is to redirect the error output. Double-clicking on the Query shape presents us with the properties window for the input. Simply set the columns that you want to redirect to Redirect Row in the dropdown box and hit Apply. Without going into a dissertation on error handling, I will just note that you can decide which errors you want to redirect on Error and on Truncation. Therefore, to override this process for a column or condition, simply do not redirect that column or condition. The next thing we want to do is to add some information about the error; specifically, the name of the package which encountered the error and which step in the package wrote the record to the error table. REMEMBER: If you redirect the error output, your package will not fail, so you will not know where the error record was created without some additional information. I added 3 columns to my error record: Severity, Package Name and Step Name. Severity is just a free-form column that you can use to note whether an error is fatal, whether the package is part of a test job and should be ignored, etc. Package Name and Step Name are system variables. In my package I have created a truncation situation, where the firstname column is 50 characters in the input, but only 4 characters in the output. Some records will pass without truncation, others will be sent to the error output. However, the package will not fail. We can see that of the 14 input rows, 8 were redirected to the error table. This information can be used by another step or scheduled process, or can trigger logic that determines whether an alert should be sent. It can also be used as a historical record of the errors that are encountered over time. There are other system variables that might make more sense in your infrastructure, so try different things. Date and time, for example, seem like something you would want in your output. In summary, we have redirected the error output from an input, added derived columns with information about the errors, and inserted the information and the offending data into an error table. The error table information can be used by another step or process to determine, based on the error information, what level alert must be sent. This will eliminate false alarms, and give you a head start when a genuine error occurs.
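The same pattern translates outside of SSIS. Purely as a hedged, language-neutral sketch (the names, rule, and sample rows here are invented for illustration, not taken from the article), this Python fragment shows the shape of the idea: rather than failing the load, rows that violate a rule are redirected to an error collection enriched with severity, package name and step name, so a later process can decide whether anyone needs to be paged.

# Minimal sketch of "redirect instead of fail": offending rows are routed to an
# error collection with enough context to triage later.
PACKAGE_NAME = "LoadCustomers"     # hypothetical, stands in for the package-name system variable
STEP_NAME = "TruncateFirstName"    # hypothetical, stands in for the step/task name

MAX_FIRSTNAME_LEN = 4              # mirrors the 50 -> 4 character truncation example

def load(rows):
    good, errors = [], []
    for row in rows:
        if len(row["firstname"]) > MAX_FIRSTNAME_LEN:
            errors.append({
                **row,
                "severity": "soft",            # free-form: soft / fatal / test-job, as in the article
                "package_name": PACKAGE_NAME,  # where the error record came from
                "step_name": STEP_NAME,
            })
        else:
            good.append(row)
    return good, errors                        # the "package" succeeds; errors are inspected later

good, errors = load([{"firstname": "Ann"}, {"firstname": "Bartholomew"}])
print(len(good), "loaded,", len(errors), "redirected")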

    Read the article

  • The Three-Legged Milk Stool - Why Oracle Fusion Incentive Compensation makes the difference!

    - by Richard Lefebvre
    During the London Olympics, we were exposed to dozens of athletes who worked with sports psychologists to maximize their performance. Executives often hire business psychologists to coach their teams to excellence. In the same vein, Fusion Incentive Compensation can be used to get people to change their sales behavior so we can make our numbers. But what about using incentive compensation solutions in a non-sales scenario to drive change? Recently, I was working an opportunity where a company was having a low user adoption rate for Salesforce.com, which was causing problems for them. I suggested they use Fusion Incentive Comp to change the reps' behavior. We tossed around the idea of tracking user adoption by creating a variable bonus for reps based on how well they forecasted revenues in the new system. Another thought was to reward the reps for how often they logged into the system or for the percentage of leads that became opportunities and turned into revenue. A new twist on a great product. Fusion CRM's Sweet Spot I'm excited about the sales performance management (SPM) tools in Fusion CRM. This trio of Incentive Compensation, Territory Management, and Quota Management sets us apart from the competition because Oracle is the only vendor that provides all three of these capabilities on a single tech stack, in a single application, and with a single look and feel. The niche vendors offer standalone territory or incentive compensation solutions, but then the customer has to custom build the other tools and can end up with a Frankenstein-type environment. On average, companies overpay sales commissions by three to eight percent. You calculate that number for a company the size of Oracle for one quarter and it makes a pretty air-tight financial case for using SPM tools to figure accurate commissions. Plus when sales reps get the right compensation, they can be out selling rather than spending precious time figuring out what they didn't get paid or looking for another job. And one more thing ... Oracle knows incentive comp. We have been a Gartner Market Scope leader in this space for the last five years. Our solution gets high marks because of its scalability and because of its interoperability with other technologies. And now that we're leading with Fusion, our incentive compensation offering includes the innovations that the Fusion team built, plus enhancements from the E-Business Suite Incentive Comp team. It's a case of making a good thing even better. (See product video.) The "Wedge" Apps In a number of accounts that I'm working on, there is a non-Oracle CRM system of record. That gives me the perfect opportunity to introduce the benefits of our SPM tools and to get the customer using Fusion. Then the door is wide open for the company to uptake more of Fusion CRM, especially since all the integrations they need are out of the box. I really believe that implementing this wedge of SPM tools is the ticket to taking market share away from other vendors. It allows us to insert ourselves in an environment where no other CRM solution in the market has the extending capabilities of Fusion. Not Just Your Usual Suspects Usually the stakeholders that I talk to for Territory Management are tightly aligned with the sales management team. When I sell the quota planning tool, I'm talking to finance people on the ERP side of the house who are measuring quotas and forecasting revenue. 
And then Incentive Comp is of most interest to the sales operations people, and generally these people roll up to either HR or the payroll department. I think of our Fusion SPM tools as a three-legged stool straddling an organization's Sales, Finance, and HR departments. So when you're prospecting for opportunities -- yes, people with a CRM perspective will be very interested -- but don't limit yourselves to that constituency. You might find stakeholders in accounting, revenue planning, or HR compensation teams. You just might discover, as I did at United Airlines, that the HR organization is spearheading the CRM project because incentive compensation is what they need ... and they're the ones with the budget. Jason Loh, Global Solutions Manager, Fusion CRM Sales Planning, Oracle Corporation

    Read the article

  • Observing flow control idle time in TCP

    - by user12820842
Previously I described how to observe congestion control strategies during transmission, and here I talked about TCP's sliding window approach for handling flow control on the receive side. A neat trick would now be to put the pieces together and ask the following question - how often is TCP transmission blocked by congestion control (send-side flow control) versus a zero-sized send window (which is the receiver saying it cannot process any more data)? So in effect we are asking whether the size of the receive window of the peer or the congestion control strategy may be sub-optimal. The result of such a problem would be that we have TCP data that we could be transmitting but we are not, potentially affecting throughput. So flow control is in effect either when the congestion window is less than or equal to the number of bytes outstanding on the connection (we can derive this from args[3]->tcps_snxt - args[3]->tcps_suna, i.e. the difference between the next sequence number to send and the lowest unacknowledged sequence number), or when the window in the TCP segment received is advertised as 0. We time from these events until we send new data (i.e. until args[4]->tcp_seq >= the snxt value recorded when the window closed). Here's the script:

#!/usr/sbin/dtrace -s

#pragma D option quiet

tcp:::send
/ (args[3]->tcps_snxt - args[3]->tcps_suna) >= args[3]->tcps_cwnd /
{
        cwndclosed[args[1]->cs_cid] = timestamp;
        cwndsnxt[args[1]->cs_cid] = args[3]->tcps_snxt;
        @numclosed["cwnd", args[2]->ip_daddr, args[4]->tcp_dport] = count();
}

tcp:::send
/ cwndclosed[args[1]->cs_cid] && args[4]->tcp_seq >= cwndsnxt[args[1]->cs_cid] /
{
        @meantimeclosed["cwnd", args[2]->ip_daddr, args[4]->tcp_dport] =
            avg(timestamp - cwndclosed[args[1]->cs_cid]);
        @stddevtimeclosed["cwnd", args[2]->ip_daddr, args[4]->tcp_dport] =
            stddev(timestamp - cwndclosed[args[1]->cs_cid]);
        @numclosed["cwnd", args[2]->ip_daddr, args[4]->tcp_dport] = count();
        cwndclosed[args[1]->cs_cid] = 0;
        cwndsnxt[args[1]->cs_cid] = 0;
}

tcp:::receive
/ args[4]->tcp_window == 0 &&
  (args[4]->tcp_flags & (TH_SYN|TH_RST|TH_FIN)) == 0 /
{
        swndclosed[args[1]->cs_cid] = timestamp;
        swndsnxt[args[1]->cs_cid] = args[3]->tcps_snxt;
        @numclosed["swnd", args[2]->ip_saddr, args[4]->tcp_dport] = count();
}

tcp:::send
/ swndclosed[args[1]->cs_cid] && args[4]->tcp_seq >= swndsnxt[args[1]->cs_cid] /
{
        @meantimeclosed["swnd", args[2]->ip_daddr, args[4]->tcp_sport] =
            avg(timestamp - swndclosed[args[1]->cs_cid]);
        @stddevtimeclosed["swnd", args[2]->ip_daddr, args[4]->tcp_sport] =
            stddev(timestamp - swndclosed[args[1]->cs_cid]);
        swndclosed[args[1]->cs_cid] = 0;
        swndsnxt[args[1]->cs_cid] = 0;
}

END
{
        printf("%-6s %-20s %-8s %-25s %-8s %-8s\n", "Window", "Remote host", "Port", "TCP Avg WndClosed(ns)", "StdDev", "Num");
        printa("%-6s %-20s %-8d %@-25d %@-8d %@-8d\n", @meantimeclosed, @stddevtimeclosed, @numclosed);
}

So this script will show us whether the peer's receive window size is preventing flow ("swnd" events) or whether congestion control is limiting flow ("cwnd" events). As an example I traced on a server with a large file transfer in progress via a webserver and with an active ssh connection running "find / -depth -print". Here is the output:

^C
Window Remote host          Port     TCP Avg WndClosed(ns)     StdDev    Num
cwnd   10.175.96.92         80       86064329                  77311705  125
cwnd   10.175.96.92         22       122068522                 151039669 81

So we see in this case, the congestion window closes 125 times for port 80 connections and 81 times for ssh. The average time the window is closed is 0.086sec for port 80 and 0.12sec for port 22. 
So if you wish to change the congestion control algorithm in Oracle Solaris 11, a useful step may be to see if congestion really is an issue on your network. Scripts like the one posted above can help assess this, but it's worth reiterating that if congestion control is occurring, that's not necessarily a problem that needs fixing. Recall that congestion control is about controlling flow to prevent large-scale drops, so looking at congestion events in isolation doesn't tell us the whole story. For example, are we seeing more congestion events with one control algorithm, but more drops/retransmissions with another? As always, it's best to start with measures of throughput and latency before arriving at a specific hypothesis such as "my congestion control algorithm is sub-optimal".

    Read the article

  • Brazil is Hot for Social Media

    - by Mike Stiles
Today's guest blog is from Oracle SVP Product Development Reggie Bradford, fresh off a visit to Sao Paulo, Brazil, where he spoke at the Dachis Social Business Summit and spent some time getting a personal taste for the astonishing growth of social in Brazil, both in terms of usage and engagement. I knew it was big, but I now have an all-new appreciation for why the Wall Street Journal branded Brazil the "social media capital of the universe." Brazil has the world's 5th largest economy, an expanding middle class, an active younger demo market, a connected & outgoing culture, and an ongoing embrace of the social media platforms. According to comScore's 2012 Brazil Digital Future in Focus report, 97% are using social media, and that's not even taking mobile-only users into account. There were 65 million Facebook users in 2012, spending an average of 535 minutes there, up 208%. It's one of Twitter's fastest growing markets and the 2nd biggest market for YouTube. Instagram usage has grown over 300% since last year. That by itself is exciting, but look at the opportunity for social marketing brands. 74% of Brazilian social users follow brands on Facebook, and 59% have praised a company on either Twitter or Facebook. A 2011 Oh! Panel study found 81% of social networkers there used social to research new products and 75% went there looking for discounts. B2C eCommerce sales in Brazil are projected to hit $26.9 billion by 2015. I bet I'm not the only one who sees great things ahead, and I was fortunate enough to give a keynote at ABRADI, an association of leading digital agencies in Brazil, with 53 execs from 35 agencies attending. I was also afforded the opportunity to give my impressions of what's going on in Brazil to Jornal Propaganda & Marketing, one of the most popular publications in Latin America for marketers. I conveyed that especially in an environment like Brazil, where social users are so willing to connect and engage brands, marketers need to back away from the heavy-handed, one-way messaging of old school advertising and move toward genuine relationships and trust-building. To aid in this, organizational and operational changes must be embraced inside the enterprise. We've talked often about the new, tighter partnership forming between the CIO and CMO. If this partnership is not encouraged, fostered and resourced, the increasing amount of time consumers spend on mobile and digital, and the efficiencies and integrations offered by cloud-based software, cannot be exploited. These are the kinds of changes that can yield social data that, when combined with enterprise data, helps you come to know your social audiences intimately and predict their needs. Consumers are always connected and need your brand to be accessible at any time, be it for information or customer service. And, of course, all of this is happening quite publicly. The holistic, socially-enabled enterprise connects social to customer service systems and all other customer touch points, facilitating the kind of immediate, real-time, gratifying response customers are coming to expect. Social users in Brazil are highly active and clearly willing to meet us as brands more than halfway. Empowering yourself with a social management technology platform will have you set up to maximize this booming social market…from listening & monitoring to engagement to analytics to workflow & automation to globalization & language support. Brands, it's time to be as social as the great people of Brazil are. Obrigado! 
@reggiebradford Photo: Gualberto107, freedigitalphotos.net

    Read the article

  • Looking for a real-world example illustrating that composition can be superior to inheritance

    - by Job
I watched a bunch of lectures on Clojure and functional programming by Rich Hickey as well as some of the SICP lectures, and I am sold on many concepts of functional programming. I incorporated some of them into my C# code at a previous job, and luckily it was easy to write C# code in a more functional style. At my new job we use Python and multiple inheritance is all the rage. My co-workers are very smart but they have to produce code fast given the nature of the company. I am learning both the tools and the codebase, but the architecture itself slows me down as well. I have not written the existing class hierarchy (neither would I be able to remember everything about it), and so, when I started adding a fairly small feature, I realized that I had to read a lot of code in the process. On the surface the code is neatly organized and split into small functions/methods and not copy-paste-repetitive, but the flip side of being not repetitive is that there is some magic functionality hidden somewhere in the hierarchy chain that glues things together and does work on my behalf, but it is very hard to find and follow. I had to fire up a profiler and run it through several examples and plot the execution graph, as well as step through a debugger a few times, search the code for some substring and just read pages at a time. I am pretty sure that once I am done, my resulting code will be short and neatly organized, and yet not very readable. What I write feels declarative, as if I were writing an XML file that drives some other magic engine, except that there is no clear documentation on what the XML should look like and what the engine does, except for the existing examples that I can read as well as the source code for the 'engine'. There has got to be a better way. IMO using composition over inheritance can help quite a bit. That way the computation will be linear rather than jumping all over the hierarchy tree. Whenever the functionality does not quite fit into an inheritance model, it will need to be mangled to fit in, or the entire inheritance hierarchy will need to be refactored/rebalanced, sort of like an unbalanced binary tree needs reshuffling from time to time in order to improve the average seek time. As I mentioned before, my co-workers are very smart; they just have been doing things a certain way and probably have an ability to hold a lot of unrelated crap in their head at once. I want to convince them to give a compositional and functional approach, as opposed to OOP, a try. To do that, I need to find some very good material. I do not think that a SICP lecture or one by Rich Hickey will do - I am afraid it will be flagged down as too academic. Then, simple examples of Dog and Frog and AddressBook classes do not really convince one way or the other - they show how inheritance can be converted to composition but not why it is truly and objectively better. What I am looking for is some real-world example of code that has been written with a lot of inheritance, then hit a wall and been re-written in a different style that uses composition. Perhaps there is a blog or a chapter. I am looking for something that can summarize and illustrate the sort of pain that I am going through. I already have been throwing the phrase "composition over inheritance" around, but it was not received as enthusiastically as I had hoped. I do not want to be perceived as a new guy who likes to complain and bash existing code while looking for a perfect approach while not contributing fast enough. 
At the same time, my gut is convinced that inheritance is often the instrument of evil and I want to show a better way in the near future. Have you stumbled upon any great resources that can help me?
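For readers who want the contrast in miniature, here is a small, invented Python sketch (not from the poster's codebase, and not a substitute for the real-world case study being asked for) of the kind of before/after the question describes: the first version hides behavior up an inheritance chain, the second passes the collaborators in explicitly, so the flow of control can be read top to bottom.

# Inheritance version: the behavior that actually runs lives somewhere up the chain.
class Exporter:
    def export(self, record):
        return self.serialize(self.clean(record))   # clean/serialize defined... where?

class CsvExporter(Exporter):
    def serialize(self, record):
        return ",".join(str(v) for v in record.values())

class TrimmedCsvExporter(CsvExporter):
    def clean(self, record):
        return {k: v.strip() if isinstance(v, str) else v for k, v in record.items()}

# Composition version: the same behavior, but the collaborators are explicit
# arguments, so nothing is glued together by the hierarchy behind your back.
def trim(record):
    return {k: v.strip() if isinstance(v, str) else v for k, v in record.items()}

def to_csv(record):
    return ",".join(str(v) for v in record.values())

class ComposedExporter:
    def __init__(self, clean, serialize):
        self.clean = clean
        self.serialize = serialize

    def export(self, record):
        return self.serialize(self.clean(record))

print(TrimmedCsvExporter().export({"name": " Ada ", "id": 1}))      # Ada,1
print(ComposedExporter(trim, to_csv).export({"name": " Ada ", "id": 1}))  # Ada,1

The point is not that the composed version is shorter; it is that a reader can see exactly which clean and serialize are being run at the call site, without walking the hierarchy.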

    Read the article

  • Fast Data - Big Data's achilles heel

    - by thegreeneman
At OOW 2013, in Mark Hurd and Thomas Kurian's keynote, they discussed Oracle's Fast Data software solution stack and a number of customers deploying Oracle's Big Data / Fast Data solutions, in particular Oracle's NoSQL Database. Since that time, there have been a large number of requests seeking clarification on how the Fast Data software stack works together to deliver on the promise of real-time Big Data solutions. Fast Data is a software solution stack that deals with one aspect of Big Data, high velocity. The software in the Fast Data solution stack involves 3 key pieces and their integration: Oracle Event Processing, Oracle Coherence, Oracle NoSQL Database. All three of these technologies address a high throughput, low latency data management requirement. Oracle Event Processing enables continuous query to filter the Big Data fire hose, enables intelligent chained events with real-time service invocation, and augments the data stream to provide Big Data enrichment. Extended SQL syntax allows the definition of sliding windows of time, so SQL statements can look for triggers on events like a breach of a weighted moving average on a real-time data stream. Oracle Coherence is a distributed, grid caching solution which is used to provide very low latency access to cached data when the data is too big to fit into a single process, so it is spread around in a grid architecture to provide memory-latency-speed access. It also has some special capabilities to deploy remote behavioral execution for "near data" processing. The Oracle NoSQL Database is designed to ingest simple key-value data at a controlled throughput rate while providing data redundancy in a cluster to facilitate highly concurrent low latency reads. For example, when large sensor networks are generating data that need to be captured while analysts are simultaneously extracting the data using range-based queries for upstream analytics. Another example might be storing cookies from user web sessions for ultra low latency user profile management, also leveraging that data using holistic MapReduce operations with your Hadoop cluster to do segmented site analysis. Understand how NoSQL plays a critical role in Big Data capture and enrichment while simultaneously providing a low latency and scalable data management infrastructure through clustered, always-on, parallel processing in a shared-nothing architecture. Learn how easily a NoSQL cluster can be deployed to provide essential services in industry-specific Fast Data solutions. See these technologies work together in a demonstration highlighting the salient features of these Fast Data enabling technologies in a location-based personalization service. The question then becomes how do these things work together to deliver an end-to-end Fast Data solution. The answer is that while different applications will exhibit unique requirements that may drive the need for one or the other of these technologies, often when it comes to Big Data you may need to use them together. You may have the need for the memory latencies of the Coherence cache, but just have too much data to cache, so you use a combination of Coherence and Oracle NoSQL to handle extreme speed cache overflow and retrieval. Here is a great reference to how these two technologies are integrated and work together: Coherence & Oracle NoSQL Database. On the stream processing side, it is similar to the Coherence case. 
As your sliding windows get larger, holding all the data in the stream can become difficult and out of band data may need to be offloaded into persistent storage.  OEP needs an extreme speed database like Oracle NoSQL Database to help it continue to perform for the real time loop while dealing with persistent spill in the data stream.  Here is a great resource to learn more about how OEP and Oracle NoSQL Database are integrated and work together.  OEP & Oracle NoSQL Database.
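As a hedged illustration of the kind of continuous query described above (a sliding window watching for a breach of a weighted moving average), here is a small Python sketch. It is not Oracle Event Processing CQL and the window size and threshold are invented for the example; it simply expresses the same idea imperatively.

from collections import deque

WINDOW = 5          # sliding window size (number of most recent readings), assumed
THRESHOLD = 1.25    # fire when a reading exceeds 125% of the weighted average, assumed

def breaches(stream):
    # Yield (value, weighted_average) whenever a new reading breaches the moving average.
    window = deque(maxlen=WINDOW)
    for value in stream:
        if len(window) == WINDOW:
            # more recent readings get proportionally more weight
            weights = range(1, WINDOW + 1)
            wavg = sum(w * v for w, v in zip(weights, window)) / sum(weights)
            if value > THRESHOLD * wavg:
                yield value, wavg
        window.append(value)

for value, wavg in breaches([10, 10, 11, 10, 10, 10, 20, 10]):
    print(f"breach: {value} vs weighted avg {wavg:.1f}")

In a Fast Data deployment the equivalent continuous query would run inside the event processor against the live stream, with the out-of-window and enrichment data spilling to the key-value store as the article describes.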

    Read the article

< Previous Page | 59 60 61 62 63 64 65 66 67 68 69 70  | Next Page >