Search Results

Search found 5180 results on 208 pages for 'outside'.


  • October 2013 FMW Proactive Patches Released

    - by mustafakaya
    The following Fusion Middleware (FMW) Proactive patches were released on October 15, 2013.

    Bundle Patches: Bundle patches are collections of controlled, well-tested critical bug fixes for a specific product, which may include security content and occasionally minor enhancements. These are cumulative in nature, meaning the latest bundle patch in a particular series includes the contents of the previous bundle patches released. A suite bundle patch is an aggregation of multiple product bundle patches that are part of a product suite.

    - Oracle Identity Management Suite Bundle Patch 11.1.1.5.5, consisting of:
      - Oracle Identity Manager (OIM) 11.1.1.5.9 bundle patch
      - Oracle Access Manager (OAM) 11.1.1.5.6 bundle patch
      - Oracle Adaptive Access Manager (OAAM) 11.1.1.5.2 bundle patch
      - Oracle Entitlement Server (OES) 11.1.1.5.4 bundle patch
    - Oracle Identity Management Suite Bundle Patch 11.1.2.0.4, consisting of:
      - Oracle Access Manager (OAM) 11.1.2.0.4 bundle patch
      - Oracle Adaptive Access Manager (OAAM) 11.1.2.0.2 bundle patch
      - Oracle Entitlement Server (OES) 11.1.2.0.2 bundle patch
    - Oracle Identity Analytics (OIA) 11.1.1.5.6 bundle patch
    - Oracle GlassFish Server (OGFS) 2.1.1.22, 3.0.1.8 and 3.1.2.7 bundle patches
    - Oracle iPlanet Web Server (OiWS) 7.0.18 bundle patch
    - Oracle SOA Suite (SOA) 11.1.1.7.1 bundle patch
    - Oracle WebCenter Portal (WCP) 11.1.1.8.1 bundle patch
    - Sun Role Manager (SRM) 4.1.7 and 5.0.3.2 bundle patches

    Patch Set Updates (PSU): PSUs are collections of well-controlled, well-tested critical bug fixes for a specific product that have been proven in customer environments. PSUs may include security content, but no enhancements are included. These are cumulative in nature, meaning the latest PSU in a particular series includes the contents of the previous PSUs released.

    - Oracle Exalogic 2.0.3.0.4 Physical Linux x86-64 and 2.0.4.0.4 Physical Solaris x86-64 PSUs
    - Oracle WebLogic Server 10.3.6.0.6 and 12.1.1.0.6 PSUs

    Critical Patch Update (CPU): The Critical Patch Update program is Oracle's quarterly release of security fixes. The following additional patches were released as part of Oracle's Critical Patch Update program:

    - Oracle JDeveloper 11.1.2.3.0, 11.1.2.4.0 and 12.1.2.0.0
    - Oracle Outside In Technology 8.4.0 and 8.4.1
    - Oracle Portal 11.1.1.6.0
    - Oracle Security Service 11.1.1.6.0, 11.1.1.7.0 and 12.1.2.0.0
    - Oracle WebCache 11.1.1.6.0 and 11.1.1.7.0
    - Oracle WebCenter Content 10.1.3.5.1, 11.1.1.6.0, 11.1.1.7.0 and 11.1.1.8.0
    - Oracle WebServices 10.1.3.5.0 and 11.1.1.6.0

    For more information, see the Master Notes on Fusion Middleware Proactive Patching, the PSU and CPU October 2013 Availability Document, and the Critical Patch Update Advisory - October 2013.

    Read the article

  • Screen Aspect Ratio

    - by Bill Evjen
    Jeffrey Dean, Pixar

    Aspect ratio is very important to home video. What is aspect ratio? The ratio of an image's width to its height. A 2.35:1 image is 2.35 times as wide as it is high; Pixar uses this for half of its movies. This is called a widescreen image. When a movie is modified to fit your television screen, the image is cut to fit the box of your screen, and when a comparison is made, huge chunks of the picture are missing. It is harder to tell what is going on when those pieces are missing. The whole is greater than the pieces themselves: if you are missing pieces, you are missing the movie. The soul and the mood are in the film shots. By cutting a film to fit a screen, you are losing roughly 30% of the movie (a quick arithmetic check follows at the end of these notes).

    Why different aspect ratios?
    - Film before the 1950s: 1.33:1, the Academy Standard. There were images of all aspect ratios, though; there was no standard. Thomas Edison developed projecting images onto a wall/screen, but he didn't patent it, as he saw no value in it.
    - Then 1.37:1 came about to add a strip of sound. This is the same size as 35mm film.
    - Around 1952, TV comes along; NTSC television followed the Academy Standard (4x3). Once TV came out, movie theater attendance plummeted, so film brought forth color to combat this, along with early 3D and widescreen (CinemaScope). Studios at the time made movies bigger and bigger; there was a Napoleon movie that was actually 4:1 -- really wide.
    - Today, 1.85:1 (Academy Flat) and 2.35:1 (Anamorphic Scope, aka Panavision/CinemaScope): almost all movies are made in these two aspect ratios. Pixar has done half in one and half in the other. Why choose one over the other? Artist choice; it is part of the story the director wants to tell.

    Can we preserve the story outside of the theaters? TVs before 1998 were very square; now TVs are very wide. Historical options:
    - As released: Toy Story was released as it was, and people cut it in a way that the studio didn't like.
    - Pan and scan: cut the image and then scan left or right depending on where the action is.
    - Frame height: Pixar can go back and animate more picture to account for the bottom/top bars. You end up with more sky and more ground, the characters seem to get lost in the picture, and you lose what the director originally intended.
    - Re-staging: for animated movies, you can move characters around and restage the scene. It is a completely new, different version of the film. This is the best option Pixar came up with, but they have largely stopped doing it as the demand has pretty much dropped off.

    Why not 1.33 today? There has been an evolution of taste and demands.
    - VHS is a linear format; the focus was portability, not quality. Most releases were pan and scan, and the quality was so bad that people didn't notice.
    - DVD was introduced in 1996. You could have more content -- two versions of the film, the widescreen version and the 1.33 version. People realized that they were seeing more of the movie with the widescreen version.
    - High-definition televisions (16x9 monitors) were introduced in 2005, and Blu-ray Disc in 2006. This is all widescreen; you cannot find a square TV anymore. TVs are roughly a 1.85:1 aspect ratio.
    - There is a change in demand: users are used to black bars and used to widescreen. Users are educated now.

    What's next for in-flight entertainment? High-def IFE, personal electronic devices, and 3D in flight.
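    A quick check on that "roughly 30%" figure (my own arithmetic, not part of the talk): when a wide frame is cropped to fill a narrower screen, the fraction of the picture that survives is approximately the ratio of the two aspect ratios:

        retained fraction ≈ AR_screen / AR_film
        1.33 / 2.35 ≈ 0.57   (an old 1.33:1 set keeps about 57% of a 2.35:1 frame, losing about 43%)
        1.85 / 2.35 ≈ 0.79   (cropping a 2.35:1 frame to 1.85:1 loses about 21%)

    So the figure quoted above sits between those two cases, depending on the film format and the screen it is cropped for.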

    Read the article

  • ArchBeat Link-o-Rama Top 10 for October 2012

    - by Bob Rhubart
    The Top 10 most popular items shared on the OTN ArchBeat Facebook Page for October 2012.

    1. OAM/OVD JVM Tuning | @FusionSecExpert
       Vinay from the Oracle Fusion Middleware Architecture Group (known as the A-Team) shares a process for analyzing and improving performance in Oracle Virtual Directory and Oracle Access Manager.

    2. SOA Galore: New Books for Technical Eyes Only
       Shake up your technical skills with this trio of new technical books from community members covering SOA and BPM.

    3. Clustering ODI11g for High-Availability Part 1: Introduction and Architecture | Richard Yeardley
       "JEE agents can be deployed alongside, or instead of, standalone agents," says Rittman Mead's Richard Yeardley. "But there is one key advantage in using JEE agents and WebLogic -- when you deploy JEE agents as part of a WebLogic cluster they can be configured together to form a high availability cluster." Learn more in Yeardley's extensive post.

    4. Solving Big Problems in Our 21st Century Information Society | Irving Wladawsky-Berger
       "I believe that the kind of extensive collaboration between the private sector, academia and government represented by the Internet revolution will be the way we will generally tackle big problems in the 21st century. Just as with the Internet, governments have a major role to play as the catalyst for many of the big projects that the private sector will then take forward and exploit. The need for high bandwidth, robust national broadband infrastructures is but one such example." -- Irving Wladawsky-Berger

    5. Eventually, 90% of tech budgets will be outside IT departments | ZDNet
       Another interesting post from ZDNet blogger Joe McKendrick about changing roles in IT.

    6. ADF Mobile - Login Functionality | Andrejus Baranovskis
       "The new ADF Mobile approach with native deployment is cool when you want to access phone functionality (camera, email, sms, etc.), and also when you want to build mobile applications with advanced UI," reports Oracle ACE Director Andrejus Baranovskis.

    7. Podcast: Are You Future Proof? - Part 2
       In Part 2, practicing architects and Oracle ACE Directors Ron Batra (AT&T), Basheer Khan (Innowave Technology), and Ronald van Luttikhuizen (Vennster) discuss re-tooling one's skill set to reflect changes in enterprise IT, including the knowledge to steer stakeholders around the hype to what's truly valuable.

    8. ADF Mobile Custom Javascript - iFrame Injection | John Brunswick
       The ADF Mobile Framework provides a range of out-of-the-box components to add within your AMX pages, according to John Brunswick. But what happens when "an out of the box component does not directly fulfill your development need? What options are available to extend your application interface?" John has an answer.

    9. Oracle Solaris 11.1 update focuses on database integration, cloud | Mark Fontecchio
       TechTarget editor Mark Fontecchio reports on the recent Oracle Solaris 11.1 release, with comments from IDC's Al Gillen.

    10. Architects Matter: Making sense of the people who make sense of enterprise IT
        Why do architects matter? Oracle Enterprise Architect Eric Stephens suggests that you ask yourself this question the next time you take the elevator to the Oracle offices on the 45th floor of the Willis Tower in Chicago, Illinois (or any other skyscraper, for that matter). If you had to take the stairs to get to those offices, who would you blame? "You get the picture," he says. "Architecture is essential for any necessarily complex structure, be it a building or an enterprise."

    Thought for the Day: "I will contend that conceptual integrity is the most important consideration in system design. It is better to have a system omit certain anomalous features and improvements, but to reflect one set of design ideas, than to have one that contains many good but independent and uncoordinated ideas." -- Frederick P. Brooks (Source: SoftwareQuotes.com)

    Read the article

  • PASS Summit 2013 Review

    - by Ajarn Mark Caldwell
    As a long-standing member of PASS who lives in the greater Seattle area and has attended about nine of these Summits, let me start out by saying how GREAT it was to go to Charlotte, North Carolina this year.  Many of the new folks that I met at the Summit this year, upon hearing that I was from Seattle, commented that I must have been disappointed to have to travel to the Summit this year after 5 years in a row in Seattle.  Well, nothing could be further from the truth.  I cheered loudly when I first heard that the 2013 Summit would be outside Seattle.  I have many fond memories of trips to Orlando, Florida and Grapevine, Texas for past Summits (missed out on Denver, unfortunately).

    And there is a funny dynamic that takes place when the conference is local.  If you do as I have done the last several years and save your company money by not getting a hotel, but rather just commute from home, then both family and coworkers tend to act like you're just on a normal schedule.  For example, I have a young family, and my wife and kids really wanted to still see me come home "after work", but there are a whole lot of after-hours activities, social events, and great food to be enjoyed at the Summit each year.  Even more so if you really capitalize on the opportunities to meet face-to-face with people you either met at previous Summits or have spoken to or heard of through Twitter, blogs, and forums.  Then there is also the lovely commuting in Seattle traffic from neighboring cities rather than the convenience of just walking across the street from your hotel.  So I'm just saying, there are really nice aspects of having the conference 2,500 miles away.

    Beyond that, the training was fantastic as usual.  The SQL Server community has many outstanding presenters and experts with deep knowledge of the tools who are extremely willing to share all of that with anyone who wants to listen.  The opening video with PASS President Bill Graziano in a NASCAR race turned dream sequence was very well done, and the keynotes, as usual, were great.  This year I was particularly impressed with how well attended the Professional Development sessions were.  Not too many years ago, those were very sparsely attended, but this year the two that I attended were standing-room only, and these were not tiny rooms.  I would say this is a testament both to the maturity of the attendees realizing how important these topics are to career success, and to the ever-increasing skills of the presenters and the program committee for selecting speakers and topics that resonated with people.  If, as is usually the case, you were not able to get to every session that you wanted to because there were just too darn many good ones, I encourage you to get the recordings.

    Overall, it was a great time, as these events always are.  It was wonderful to see old friends and make new ones, and the people of Charlotte did an awesome job hosting the event and letting their hospitality shine (extra kudos to SQLSentry for all they did with the shuttle, maps, and other event sponsorships).  We're back in Seattle next year (it is a release year, after all), but I would say that with the success of this year's event, I strongly encourage the Board and PASS HQ to firmly reestablish the location rotation schedule.  I'll even go so far as to suggest standardizing on an alternating Seattle - Charlotte schedule, or something like that.

    If you missed the Summit this year, start saving now, and register early, so you can join us!

    Read the article

  • Where Twitter Stands Heading Into 2013

    - by Mike Stiles
    As Twitter continued throughout 2012 to be a stage on which global politics and culture played itself out, the company itself underwent some adjustments that give us a good indication of what users and brands can expect from the platform in 2013.

    The power of the network did anything but fade. Celebrities continued to use it to connect one-on-one. Even the Pope signed on this year. It continued to fuel revolutions. It played an exponentially large factor in this US Presidential election. And around the world, the freedom to speak was challenged as users were fired, sued, sometimes even jailed for their tweets. Expect more of the same in 2013, as Twitter has entrenched itself, for individuals, causes and brands, as the fastest, easiest, most efficient way to message the masses so some measure of impact can come from it. It's changed everything, and it's not finished.

    These fun facts reveal the position of strength with which Twitter enters 2013:
    - It now generates a billion tweets every 2.5 days.
    - It has 500 million+ users.
    - The average Twitter user has tweeted 307 times.
    - 32% of everyone using the Internet uses Twitter.
    - It's expected to bring in $540 million in ad revenue by 2014.
    - 11 new accounts are created every second.

    High-level executive summary: people love it, people use it, and they're going to keep loving and using it.

    Whether or not outside developers love it is a different matter. 2012 marked a shift from welcoming the third-party support that played at least some role in Twitter being so warmly embraced, to discouraging anything that replicates what Twitter can do itself, or plans to do itself. It's not the open playground it once was. Now Twitter must spend 2013 proving it can innovate in-house and keep us just as entranced.

    Likewise, Twitter is distancing itself from Facebook. Images from the #1 platform's Instagram don't work on Twitter anymore, and Twitter is rolling out its own photo filter product. Where the two have lived in a "plenty of room for everybody" symbiosis up to now, 2013 could see the giants ramping up a full-on rivalry.

    Twitter is exhibiting a deliberate strategy. Updates have centered on more visually appealing search results, and on making finding and sharing content easier. Deals have been cut with some media entities so their content stands out. CEO Dick Costolo has said tweets aren't the attraction, they're what leads you to content. Twitter aims to be a key distributor of media and info. Add the addition of former News Corp. President Peter Chernin to the board, and their hashtag landing page experience for events, and their media behemoth ambitions get pretty clear.

    There are challenges ahead, and Costolo has also laid those out: entry into China, figuring out how to have Twitter deliver both comprehensive and relevant, targeted experiences, and the visualization of big data.

    What does this mean for corporations? They can expect a more media-rich evolution and a growing emphasis on imagery. They can expect more opportunities to create great media content and leverage Twitter for its distribution. And they can expect new ways to surface in searches.

    Are brands diving in? 56% of customer tweets to companies get completely and totally ignored. Ugh. A study Twitter recently conducted with Compete shows people who see tweets from retailers are more likely to buy a product. And the more retailer tweets they see, the more likely they are to purchase on the retail site. As more of those tweets point to engaging media content from the brand, the results should get even better.

    Twitter appears ready for 2013. Enterprise brands have some work to do.

    @mikestiles
    Photo: Stuart Miles, freedigitalphotos.net

    Read the article

  • Fun with Python

    - by dotneteer
    I have recently been taking a class on Coursera. My formal education is in physics. Although I have been working as a developer for over 18 years and have learned a lot of programming on the job, I would still like to gain some systematic knowledge in computer science. Coursera courses taught by Stanford professors give me a wonderful chance to do that. The three languages recommended for assignments are Java, C and Python. I am fluent in Java and have done some projects using C++/MFC/ATL in the past, but I wanted to try something different this time.

    I first started with pure C. I soon discovered that I had to write a lot of code outside the problem I was trying to solve because of the very limited C standard library. For example, to read a list of values from a file, I have to read character by character until I hit a delimiter. If I need a list that can grow, I have to create the data structure myself -- something I have taken for granted in .NET or Java. Out of frustration, I switched to Python.

    I was pleasantly surprised to find that Python is very easy to learn. The tutorial on the official Python site has exactly the right pace for me, someone with experience in another programming language. After a couple of hours on the tutorial and a few more minutes of toying with IDLE, I was in business. I like the "batteries included" philosophy that gives me everything I need out of the box. For someone from a C# or Java background, curly braces are replaced by the colon (:) and indentation. Although I tend to miss the colon from time to time, I found that the idea of significant indentation is actually very nice once I got used to it. I also like the features of multiple assignment and multiple return values. When I need to return a by-product, I just add it to the list of returns.

    When would I use Python? I would use Python whenever I need to compute anything quickly. The language is very easy to use, and Python has a good collection of libraries (packages). The REPL of the interpreter allows me to test ideas quickly before committing them to a script. Lots of computer science work has been ported from Lisp to Python; some universities are even teaching SICP in Python.

    When wouldn't I use Python? I mostly would not use it in a managed environment, such as IronPython or Jython. Both .NET and Java already have a rich library, so one has to make a choice of which library to use. If we use the managed runtime library, the code will be tied to the particular runtime and thus not portable. If we use the Python library, then we will face a relatively long start-up time. For this reason, I would not recommend using IronPython for WP7 development. The only situation where I see merit in managed Python is a server application where I can preload Python so that the start-up time is not a concern. Using Python as a managed glue language is overkill most of the time; a managed Scheme could be a better glue language, as it is small enough to start up very fast.

    Read the article

  • Social Targeting: This One's Just for You

    - by Mike Stiles
    Think of social targeting in terms of the archery competition we just saw in the Olympics. If someone loaded up 5 arrows and shot them straight up into the air all at once, hoping some would land near the target, the world would have united in laughter. But sadly for hysterical YouTube video viewing, that's not what happened. The archers sought to maximize every arrow by zeroing in on the spot that would bring them the most points. Marketers have always sought to do the same. But they can only work with the tools that are available. A firm grasp of the desired target does little good if the ad products aren't there to deliver that target.

    On the social side, both Facebook and Twitter have taken steps to enhance targeting for marketers. And why not? As the demand to monetize only goes up, they're quite motivated to leverage and deliver their incredible user bases in ways that make economic sense for advertisers.

    You could target keywords on Twitter with promoted accounts, and get promoted tweets into search. They would surface for your followers and some users that Twitter thought were like them. Now you can go beyond keywords and target Twitter users based on 350 interests in 25 categories. How does a user wind up in one of these categories? Twitter looks at that user's tweets, they look at whom they follow, and they run data through some sort of Twitter secret sauce. The result is, you have a much clearer shot at Twitter users who are most likely to welcome and be responsive to your tweets. And beyond the 350 interests, you can also create custom segments that find users who resemble followers of whatever Twitter handle you give it.

    That means you can now use boring tweets to sell like a madman, right? Not quite. This ad product is still quality-based, meaning if you're not putting out tweets that lead to interest and thus engagement, that tweet will earn a low quality score and wind up costing you more under Twitter's auction system to maintain. That means, as the old knight in "Indiana Jones and the Last Crusade" cautions, "choose wisely" when targeting based on these interests and categories to make sure your interests truly do line up with theirs.

    On the Facebook side, they're rolling out ad targeting that uses email addresses, phone numbers, game and app developers' user IDs, and eventually addresses for you bigger brands. Why? Because you marketers asked for it. Here you were with this amazing customer list but no way to reach those same customers should they be on Facebook. Now you can find and communicate with customers you gathered outside of social, and use Facebook to do it. Fair to say such users are a sensible target and will be responsive to your message since they've already bought something from you. And no, you're not giving your customer info to Facebook. They'll use something called "hashing" to make sure you don't see Facebook user data (beyond email, phone number, address, or user ID), and Facebook can't see your customer data.

    The end result: social becomes far more workable and more valuable to marketers when it delivers on the promise that made it so exciting in the first place. That promise is the ability to move past casting wide nets to the masses and toward concentrating marketing dollars efficiently on the targets most likely to yield results.

    Read the article

  • LUKOIL Overseas Holding Optimizes Oil Field Development Projects with Integrated Project Management

    - by Melissa Centurio Lopes
    LUKOIL Overseas Group is a growing oil and gas company that is an integral part of the vertically integrated oil company OAO LUKOIL. It is engaged in the exploration, acquisition, integration, and efficient development of oil and gas fields outside the Russian Federation to promote transforming LUKOIL into a transnational energy company. In 2010, the company signed a 20-year development project for the giant West Qurna 2 oil field in Iraq. Executing 10,000 to 15,000 project activities simultaneously on 14 major construction and drilling projects in Iraq for the West Qurna-2 project meant the company needed a clear picture, in real time, of dependencies between its capital construction, geologic exploration, and sinking projects -- all required for building infrastructure for its oil field development projects in Iraq. LUKOIL Overseas Holding deployed Oracle's Primavera P6 Enterprise Project Portfolio Management to generate structured project management information and optimize planning, monitoring, and analysis of all engineering and commercial activities -- such as tenders and bulk procurement of materials and equipment -- related to oil field development projects.

    A word from LUKOIL Overseas Holding Ltd.: "Previously, we created project schedules on desktop computers and uploaded them to the project server to be merged into one big file for each project participant to access. This was not scalable, as we've grown and now run up to 15,000 activities in numerous projects and subprojects at any time. With Oracle's Primavera P6 Enterprise Project Portfolio Management, we can now work concurrently on projects with many team members, enjoy absolute security, and issue new baselines for all projects and project participants once a week, with ease." -- Sergey Kotov, Head of IT and the Communication Office, LUKOIL Mid-East Ltd.

    Oracle Primavera solutions:
    - Facilitated managing dependencies between projects by enabling the general scheduler to reschedule all projects and subprojects once a week, realigning the 10,000 to 15,000 project activities that the company runs at any time
    - Replaced Microsoft Project and a paper-based system with a complete solution that provides structured project data
    - Enhanced data security by establishing project management security policies that enable only authorized project members to edit their project tasks, while enabling each project participant to view all project data relevant to that individual's task
    - Enabled the company to monitor project progress against the projected plan, based on physical project assets, to determine if each project is on track to conclude within its time and budget limitations

    To view the full list of solutions, view here.
    "Oracle Gold Partner Parma Telecom was key to our successful Primavera deployment, implementing the software's basic functionalities, such as project content, timeframes management, and cost management, in addition to performing its integration with our enterprise resource planning system and intranet portal within ten months and in accordance with budgets," said Rafik Baynazarov, head of the master planning and control office, LUKOIL Mid-East Ltd.

    To read the full version of the customer success story, please view here.

    Read the article

  • Merging Social Accounts: What We Learned This Weekend

    - by Mike Stiles
    Guest Post by Erika Brookes

    We learned that it's not always as easy as you think it's going to be. While it's widely accepted that merging multiple owned Facebook Pages that are duplicating communities and putting out the same type of content is a best practice, actually pulling it off without rattling fans is a trickier proposition. Facebook is nice and clear about how to merge Facebook Pages. Although content is not carried over, Likes from the pages you're merging are. So you can imagine the surprise when such fans start seeing posts in their News Feed from a page they don't believe they ever Liked. One community member accurately likened it to having your bank come under another bank's brand name. The Facebook Page changes to the new brand, just like your debit card, emails, signs and other communication.

    This weekend we did our merge. The Facebook communities of Vitrue, Involver and Collective Intellect were pulled into one community, Oracle Social. Could we have handled it better? Oh yeah. Our intent was to make sure, to the fullest extent possible, that the fans of the Vitrue, Involver, and Collective Intellect brand pages were well-informed about the pending page merges in ADVANCE of the merge. While many were aware that Oracle acquired the three companies, many were not. We learned from fan feedback that we should have sent notifications MUCH earlier to make the brand Page merge crystal clear and to answer any questions. That was our bad, our responsibility, and we apologize for Oracle Social showing up in your News Feed if you were not aware that it was a result of your fandom of Vitrue, Involver or Collective Intellect. It was our job to make you aware well in advance.

    Some felt they had never Liked the fan Pages of Vitrue, Involver or Collective Intellect, so they were understandably upset (some cultures may call it "fit to be tied") when they found themselves fans of Oracle Social. One thing to consider is that since 2009, brands and developers have used and enjoyed free Involver tab apps like Twitter, RSS and YouTube (1.2 million of which are currently active), which included an opt-in Liking the Involver Page. Often, when Liking happens in a manner outside of the traditional clicking of a Like button on a brand Page, it's easy to forget a Page was indeed Liked.

    Lastly, a few felt that their Like of the Page had been "bought." It was not. No fans or Likes were separately purchased. Yes, the companies and the social properties of Vitrue, Involver and Collective Intellect were acquired by Oracle. Those brands are now being coordinated into the larger Oracle brand. In social media, that means those brands are being integrated into the Oracle Social community.

    So what now? We apologize and apply lessons learned. We learned that you not only have to communicate thoroughly and clearly, but you have to communicate well in advance of any actionable items that will affect fans. We're more than willing to walk straight to the woodshed when we deserve it. Going forward, the social team here is dedicated to facilitating content, discussion and sharing around social for marketers, agencies, IT stakeholders and social staffs, including community managers. We anticipate Oracle Social being the premier gathering place for true social innovators as we move into social's exciting next phase of development. Inevitably, some will still feel they are fans of the Page in error. While we hate to see you go, you may unlike the Page if it's not relevant or useful to you.
Let’s continue to contribute, participate, foster our desire to learn, and move forward together positively and constructively - both for current fans of the community and the many fans to come.

    Read the article

  • Oracle ties social, CRM, analytics products to customer experience

    - by Richard Lefebvre
    Oracle will embark on a new product strategy that centers on customer experience management, an approach driven by the company's many recent acquisitions. The new approach, announced by the company Monday night, will be seen in an expansive suite that features familiar Oracle products -- such as its Fusion CRM platform -- and offerings the company recently gained through acquisitions, including FatWire, RightNow and Vitrue.

    Billed as Oracle Customer Experience (CX), the suite enables businesses to respond to a market centered on the customer experience, said Anthony Lye, the company's senior vice president of CRM. Companies "are very aware their products are commoditizing," Lye said in an interview last week, referring to how the Web and social media channels have empowered customers. Customer experiences start and mature outside of CRM, and applications today need to reflect that shift, Lye said. Businesses thus need to step away from a pure CRM model, he said.

    Oracle claims CX will improve customer experience management by connecting businesses with customers across Web sites and social channels. Companies can create a single, real-time view of the customer and use predictive analytics of interactions to strengthen the customer experience, Oracle said. "Companies have to connect with their customers wherever, whenever and however they want," Lye said. "They have to know and understand their customer."

    Lye promoted Oracle CX as a suite that will work across channels to complement the company's applications. A new strategy has been "cooking" for years now, but the acquisitions Oracle has made over the past two years made the time right for a "unique collaboration," Lye said.

    CX includes basic Oracle CRM solutions such as Siebel and the new Fusion Apps. It also includes the company's MDM products: Enterprise Data Quality, Customer Hub and Product Hub. And the suite is rounded out by the services that Oracle recently bought, transactions that created or enhanced the company's presence in social, marketing, e-commerce and customer service. For instance, FatWire provides tools for marketing. ATG focuses on e-commerce. And RightNow specializes in customer service. Two recent acquisitions -- Collective Intellect and Vitrue -- gave Oracle a seat at the social table. Collective Intellect is a social intelligence program, and Vitrue is a social marketing and engagement platform. Those acquisitions have yet to be finalized.

    Oracle hopes to eventually integrate the two social offerings, as well as most of the other services, into the CX suite. CX can integrate on Oracle's standard middleware, and can give users a lower TCO by leveraging it as a single stack on premise or as a cloud solution. Lye deferred questions about the pricing of CX, and instead pitched Oracle's ability to offer multiple customer experience solutions in one suite. Businesses have struggled with the complexity of infrastructure and modern services that communicate with customers, Lye said. "They've struggled to pull all these things together. We've done that," he said.

    Stephen Powers, a research director at Forrester Research Inc. in Cambridge, Mass., said it's not surprising for Oracle to offer the CX suite and a related customer experience strategy. "They've got CRM, ATG, FatWire. Clearly, it's been the strategy for them," he said. But the challenge for Oracle, and for any other vendor that has gone on an "acquisition spree," is to connect its many products, Powers said. "The portfolio has to be more than the parts. They've got to realize the efficiencies and value of having these pieces to tie them together," he said. "The proof is in the pudding. Adobe has done a nice job in its space with the products they've got. Now, Oracle has got to show it has something."

    Albert McKeon (SearchCRM)
    Published: 25 Jun 2012
    http://searchcrm.techtarget.com/news/2240158644/Oracle-ties-social-CRM-analytics-products-to-customer-experience

    Read the article

  • Controlling soft errors and false alarms in SSIS

    - by Jim Giercyk
    If you are like me, you dread the 3AM wake-up call.  I would say that the majority of the pages I get are false alarms.  The alerts that require action often require me to open an SSIS package, see where the trouble is and try to identify the offending data.  That can be very time-consuming and can take quite a chunk out of my beauty sleep.  For those reasons, I have developed a simple error handling scenario for SSIS which allows me to rest a little easier.  Let me first say, this is a high-level discussion; getting into the nuts and bolts of creating each shape is outside the scope of this document, but if you have an average understanding of SSIS, you should have no problem following along.

    In the Data Flow above you can see that there is a caution triangle.  For the purpose of this discussion I am creating a truncation error to demonstrate the process, so this is to be expected.  The first thing we need to do is to redirect the error output.  Double-clicking on the Query shape presents us with the properties window for the input.  Simply set the columns that you want to redirect to Redirect Row in the dropdown box and hit Apply.  Without going into a dissertation on error handling, I will just note that you can decide which errors you want to redirect on Error and on Truncation.  Therefore, to override this process for a column or condition, simply do not redirect that column or condition.

    The next thing we want to do is to add some information about the error; specifically, the name of the package which encountered the error and which step in the package wrote the record to the error table.  REMEMBER: If you redirect the error output, your package will not fail, so you will not know where the error record was created without some additional information.  I added 3 columns to my error record: Severity, Package Name and Step Name.  Severity is just a free-form column that you can use to note whether an error is fatal, whether the package is part of a test job and should be ignored, etc.  Package Name and Step Name are system variables.

    In my package I have created a truncation situation, where the firstname column is 50 characters in the input, but only 4 characters in the output.  Some records will pass without truncation; others will be sent to the error output.  However, the package will not fail.  We can see that of the 14 input rows, 8 were redirected to the error table.

    This information can be used by another step or another scheduled process, or triggered, to determine whether an alert should be sent.  It can also be used as a historical record of the errors that are encountered over time.  There are other system variables that might make more sense in your infrastructure, so try different things.  Date and time seem like something you would want in your output, for example.

    In summary, we have redirected the error output from an input, added derived columns with information about the errors, and inserted the information and the offending data into an error table.  The error table information can be used by another step or process to determine, based on the error information, what level of alert must be sent.  This will eliminate false alarms, and give you a head start when a genuine error occurs.

    Read the article

  • The Future of Air Travel: Intelligence and Automation

    - by BobEvans
    Remember those white-knuckle flights through stormy weather where unexpected plunges in altitude result in near-permanent relocations of major internal organs? Perhaps there's a better way, according to a recent Wall Street Journal article: "Pilots of a Honeywell International Inc. test plane stayed on their initial flight path, relying on the company's latest onboard radar technology to steer through the worst of the weather. The specially outfitted Boeing 757 barely shuddered as it gingerly skirted some of the most ferocious storm cells over Fort Walton Beach and then climbed above the rest in zero visibility."

    Or how about the multifaceted check-in process, which might not wreak havoc on liver location but nevertheless makes you wonder if you've been trapped in some sort of covert psychological-stress test? Another WSJ article, called "The Self-Service Airport," says there's reason for hope there as well: "Airlines are laying the groundwork for the next big step in the airport experience: a trip from the curb to the plane without interacting with a single airline employee. At the airport of the near future, 'your first interaction could be with a flight attendant,' said Ben Minicucci, chief operating officer of Alaska Airlines, a unit of Alaska Air Group Inc."

    And in the topsy-turvy world of air travel, it's not just the passengers who've been experiencing bumpy rides: the airlines themselves are grappling with a range of challenges -- some beyond their control, some not -- that make profitability increasingly elusive in spite of heavy demand for their services. A recent piece in The Economist illustrates one of the mega-challenges confronting the airline industry via a striking set of contrasting and very large numbers: while the airlines pay $7 billion per year to third-party computerized reservation services, the airlines themselves earn a collective profit of only $3 billion per year.

    In that context, the anecdotes above point unmistakably to the future that airlines must pursue if they hope to be able to manage some of the factors outside of their control (e.g., weather) as well as all of those within their control (operating expenses, end-to-end visibility, safety, load optimization, etc.): more intelligence, more automation, more interconnectedness, and more real-time awareness of every facet of their operations. Those moves will benefit both passengers and the air carriers, says the WSJ piece on The Self-Service Airport: "Airlines say the advanced technology will quicken the airport experience for seasoned travelers -- shaving a minute or two from the checked-baggage process alone -- while freeing airline employees to focus on fliers with questions. 'It's more about throughput with the resources you have than getting rid of humans,' said Andrew O'Connor, director of airport solutions at Geneva-based airline IT provider SITA."

    Oracle's attempting to help airlines gain control over these challenges by blending together a range of its technologies into a solution called the Oracle Airline Data Model, which suggests the following steps:
    • To retain and grow their customer base, airlines need to focus on the customer experience.
    • To personalize and differentiate the customer experience, airlines need to effectively manage their passenger data.
    • The Oracle Airline Data Model can help airlines jump-start their customer-experience initiatives by consolidating passenger data into a customer data hub that drives real-time business intelligence and strategic customer insight.
    • Oracle's Airline Data Model brings together multiple types of data that can jump-start your data-warehousing project with rich out-of-the-box functionality.
    • Oracle's Intelligent Warehouse for Airlines brings together the powerful capabilities of Oracle Exadata and the Oracle Airline Data Model to give you real-time strategic insights into passenger demand, revenues, sales channels and your flight network.

    The airline industry aside, the bullet points above offer a broad strategic outline for just about any industry, because the customer experience is becoming pre-eminent in each, and there is simply no way to deliver world-class customer experiences unless a company can capture, manage, and analyze all of the relevant data in real time.

    I'll leave you with two thoughts from the WSJ article about the new in-flight radar system from Honeywell: first, studies show that a single episode of serious turbulence can rack up $150,000 in additional costs for an airline -- so it certainly behooves the carriers to gain the intelligence to avoid turbulence as much as possible. And second, it's back to that top-priority customer-experience thing and the value that ever-increasing levels of intelligence can deliver. As the article says: "In the cabin, reporters watched screens showing the most intense parts of the nearly 10-mile wide storm, which churned some 7,000 feet below, in vibrant red and other colors. The screens also were filled with tiny symbols depicting likely locations of lightning and hail, which can damage planes and wreak havoc on the nerves of white-knuckle flyers."

    (Bob Evans is senior vice-president, communications, for Oracle.)

    Read the article

  • Move on and look elsewhere, or confront the boss?

    - by Meister
    Background: I have my Associates in Applied Science (Comp/Info Tech) with a strong focus in programming, and I'm taking university classes to get my Bachelors. I was recently hired at a local company to be a Software Engineer I on a team of about 8, and I've been told they're looking to hire more. This is my first job, and I was offered what I feel to be an extremely generous starting salary ($30/hr essentially + benefits and yearly bonus). What got me hired was my passion for programming and a strong set of personal projects.

    Problem: I had no prior experience when I interviewed, so I didn't know exactly what to ask them about the company when I was hired. I've spotted a number of warning signs and annoyances since then, such as:

    - Four developers when I started, with everyone talking about "Ben" or "Ryan" leaving. One engineer was hired thirty days before me, one hired two weeks after me. Most of the department has been hiring a large number of people since I started.
    - Extremely limited internet access. I understand the idea from an IT point of view, but not only is Facebook blocked, so are YouTube, Twitter, and Pandora. I've also figured out that they block all access to non-DNS websites (http://xxx.xxx.xxx.xxx/) and, strangely enough, Miranda-IM.
    - Low cubicles. Which is fine because I like my immediate coworkers, but they put the developers in a huge open room with customer service, customer training, and the QA department. Noise, noise, noise, and people stop to chitchat all day long. Headphones only go so far.
    - Several emails have been sent out by my boss since I started telling us programmers not to talk about non-work-related things like video games at our cubicles, despite us only spending maybe five minutes every few hours doing so. Further digging tells me that this is because someone keeps complaining that the programmers are "slacking off".
    - People are looking over my shoulder all day. I was in the Freenode webchat to get help with a programming issue, and within minutes I had an email from my boss (to all the developers) telling us that we should NOT be connected to any outside chat servers at work.
    - A version control system from 2005 that we must access with IE, which requires keeping the Java 1.4 JRE installed to be able to use it. I accidentally updated to Java 6 one day and spent the next two days fighting with my PC to undo this "problem".
    - No source control, no comments on anything, no standards, no code review, no unit testing, no common sense. I literally found a problem in how they handle string resource translations that stems from the simple fact that they don't trim excess white space, leading to developers writing getResource("Date: ") instead of getResource("Date") + ": ", and I was told to just add the excess white space back to the database instead of dealing with the issue directly.

    Some of these things I'd like to try to understand, but I like having IRC open to talk in a few different rooms during the day and keep in touch with friends/family over IM. They don't break my concentration (not NEARLY as much as the lady from QA stopping by to talk about her son), but because people are looking over my shoulder all day as they walk by, they complain when they see something that's not "programmer-looking work". I've been told by my boss and QA that I do good, fast work. I should be judged on my work output and quality, not on what I have up on my screen for the five seconds you're walking by.

    So, my question is: even though I'm just barely at my 90 days, how do you decide to move on from a job and look elsewhere, versus when you should start working with your boss to resolve these issues? Is it even possible to get the boss to work with me on many of these things? This is the only place I heard back from even though I sent out several resumes a day for several months, and this place does pay well for putting up with its many flaws, but I'm just starting to get so miserable working here already. Should I just put up with it?

    Read the article

  • Arcball Problems with UDK

    - by opdude
    I'm trying to re-create an arcball example from Nehe, where an object can be rotated in a more realistic way while floating in the air (in my game the object is attached to the player at a distance, like, for example, the Physics Gun); however, I'm having trouble getting this to work with UDK. I have created an LGArcBall which follows the example from Nehe, and I've compared outputs from it with the example code. I think my problem lies in what I do with the quaternion that is returned from the LGArcBall. Currently I am taking the returned quaternion, converting it to a rotation matrix, taking the product of that and the last rotation (set when the object is first clicked), then converting that back into a Rotator and setting it as the object's rotation. If you could point me in the right direction that would be great; my code can be found below.

        class LGArcBall extends Object;

        var Quat StartRotation;
        var Vector StartVector;
        var float AdjustWidth, AdjustHeight, Epsilon;

        function SetBounds(float NewWidth, float NewHeight)
        {
            AdjustWidth = 1.0f / ((NewWidth - 1.0f) * 0.5f);
            AdjustHeight = 1.0f / ((NewHeight - 1.0f) * 0.5f);
        }

        function StartDrag(Vector2D startPoint, Quat rotation)
        {
            StartVector = MapToSphere(startPoint);
        }

        function Quat Update(Vector2D currentPoint)
        {
            local Vector currentVector, perp;
            local Quat newRot;

            //Map the new point to the sphere
            currentVector = MapToSphere(currentPoint);

            //Compute the vector perpendicular to the start and current
            perp = startVector cross currentVector;

            //Make sure our length is larger than Epsilon
            if (VSize(perp) > Epsilon)
            {
                //Return the perpendicular vector as the transform
                newRot.X = perp.X;
                newRot.Y = perp.Y;
                newRot.Z = perp.Z;
                //In the quaternion values, w is cosine (theta / 2), where
                //theta is the rotation angle
                newRot.W = startVector dot currentVector;
            }
            else
            {
                //The two vectors coincide, so return an identity transform
                newRot.X = 0.0f;
                newRot.Y = 0.0f;
                newRot.Z = 0.0f;
                newRot.W = 0.0f;
            }

            return newRot;
        }

        function Vector MapToSphere(Vector2D point)
        {
            local float x, y, length, norm;
            local Vector result;

            //Transform the mouse coords to [-1..1]
            //and inverse the Y coord
            x = (point.X * AdjustWidth) - 1.0f;
            y = 1.0f - (point.Y * AdjustHeight);

            length = (x * x) + (y * y);

            //If the point is mapped outside of the sphere
            //( length > radius squared)
            if (length > 1.0f)
            {
                norm = 1.0f / Sqrt(length);

                //Return the "normalized" vector, a point on the sphere
                result.X = x * norm;
                result.Y = y * norm;
                result.Z = 0.0f;
            }
            else //It's inside of the sphere
            {
                //Return a vector to the point mapped inside the sphere
                //sqrt(radius squared - length)
                result.X = x;
                result.Y = y;
                result.Z = Sqrt(1.0f - length);
            }

            return result;
        }

        DefaultProperties
        {
            Epsilon = 0.000001f
        }

    I'm then attempting to rotate the object when the mouse is dragged, with the following update code in my PlayerController:

        //Get Mouse Position
        MousePosition.X = LGMouseInterfacePlayerInput(PlayerInput).MousePosition.X;
        MousePosition.Y = LGMouseInterfacePlayerInput(PlayerInput).MousePosition.Y;

        newQuat = ArcBall.Update(MousePosition);
        rotMatrix = MakeRotationMatrix(QuatToRotator(newQuat));
        rotMatrix = rotMatrix * LastRot;

        LGMoveableActor(movingPawn.CurrentUseableObject).SetPhysics(EPhysics.PHYS_Rotating);
        LGMoveableActor(movingPawn.CurrentUseableObject).SetRotation(MatrixGetRotator(rotMatrix));

    Read the article

  • C# 5 Async, Part 3: Preparing Existing code For Await

    - by Reed
    While the Visual Studio Async CTP provides a fantastic model for asynchronous programming, it requires code to be implemented in terms of Task and Task<T>.  The CTP adds support for Task-based asynchrony to the .NET Framework methods, and promises to have these implemented directly in the framework in the future.  However, existing code outside the framework will need to be converted to using the Task class prior to being usable via the CTP.

    Wrapping existing asynchronous code into a Task or Task<T> is, thankfully, fairly straightforward.  There are two main approaches to this.

    Code written using the Asynchronous Programming Model (APM) is very easy to convert to using Task<T>.  The TaskFactory class provides the tools to directly convert APM code into a method returning a Task<T>.  This is done via the FromAsync method.  This method takes the BeginOperation and EndOperation methods, as well as any parameters and state objects, as arguments, and returns a Task<T> directly.

    For example, we could easily convert the WebRequest BeginGetResponse and EndGetResponse methods into a method which returns a Task<WebResponse> via:

        Task<WebResponse> task = Task.Factory.FromAsync<WebResponse>(
            request.BeginGetResponse,
            request.EndGetResponse,
            null);

    Event-based Asynchronous Pattern (EAP) code can also be wrapped into a Task<T>, though this requires a bit more effort than the one line of code above.  This is handled via the TaskCompletionSource<T> class.  MSDN provides a detailed example of using this to wrap an EAP operation into a method returning Task<T>.  It demonstrates handling cancellation and exception handling as well as the basic operation of the asynchronous method itself.

    The basic form of this operation is typically:

        Task<YourResult> GetResultAsync()
        {
            var tcs = new TaskCompletionSource<YourResult>();

            // Handle the event, and setup the task results...
            this.GetResultCompleted += (o, e) =>
            {
                if (e.Error != null)
                    tcs.TrySetException(e.Error);
                else if (e.Cancelled)
                    tcs.TrySetCanceled();
                else
                    tcs.TrySetResult(e.Result);
            };

            // Call the asynchronous method
            this.GetResult();

            // Return the task from the TaskCompletionSource
            return tcs.Task;
        }

    We can easily use these methods to wrap our own code into a method that returns a Task<T>.  Existing libraries which cannot be edited can be extended via extension methods.  The CTP uses this technique to add appropriate methods throughout the framework.

    The suggested naming for these methods is to define them as "Task<YourResult> YourClass.YourOperationAsync(…)".  However, this naming often conflicts with the default naming of the EAP.  If this is the case, the CTP has standardized on using "Task<YourResult> YourClass.YourOperationTaskAsync(…)".

    Once we've wrapped all of our existing code into operations that return Task<T>, we can begin investigating how the Async CTP can be used with our own code.
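    For instance, here is a rough sketch of how the wrapped WebRequest call above might be consumed once the CTP's await keyword is in play. The extension method and the DownloadPageAsync helper below are my own illustration under those assumptions, not code from the original post:

        using System.IO;
        using System.Net;
        using System.Threading.Tasks;

        static class WebRequestTaskExtensions
        {
            // Wraps the APM pair shown above into a Task<WebResponse>.
            public static Task<WebResponse> GetResponseTaskAsync(this WebRequest request)
            {
                return Task<WebResponse>.Factory.FromAsync(
                    request.BeginGetResponse, request.EndGetResponse, null);
            }
        }

        static class Example
        {
            // With the Async CTP installed, the wrapped task can be awaited directly.
            public static async Task<string> DownloadPageAsync(string url)
            {
                WebRequest request = WebRequest.Create(url);
                using (WebResponse response = await request.GetResponseTaskAsync())
                using (var reader = new StreamReader(response.GetResponseStream()))
                {
                    return reader.ReadToEnd();
                }
            }
        }

    The point of the sketch is simply that once the APM pair is exposed as a Task<T>, the calling code no longer needs callbacks or IAsyncResult plumbing; it reads top to bottom like synchronous code.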

    Read the article

  • Subterranean IL: Exception handling 2

    - by Simon Cooper
    Control flow in and around exception handlers is tightly controlled, due to the various ways the handler blocks can be executed. To start off with, I'll describe what SEH does when an exception is thrown.
    Handling exceptions
    When an exception is thrown, the CLR stops program execution at the throw statement and searches up the call stack looking for an appropriate handler; catch clauses are analyzed, and filter blocks are executed (I'll be looking at filter blocks in a later post). Then, when an appropriate catch or filter handler is found, the stack is unwound to that handler, executing successive finally and fault handlers in their own stack contexts along the way, and program execution continues at the start of the catch handler. Because catch, fault, finally and filter blocks can be executed essentially out of the blue by the SEH mechanism, without any reference to preceding instructions, you can't use arbitrary branches in and out of exception handler blocks. Instead, you need to use specific instructions for control flow out of handler blocks: leave, endfinally/endfault, and endfilter.
    Exception handler control flow
    try blocks
    You cannot branch into or out of a try block or its handler using normal control flow instructions. The only way of entering a try block is by either falling through from preceding instructions, or by branching to the first instruction in the block. Once you are inside a try block, you can only leave it by throwing an exception or using the leave <label> instruction to jump to somewhere outside the block and its handler. The leave instruction signals the CLR to execute any finally handlers around the block. Most importantly, you cannot fall out of the block, and you cannot use a ret to return from the containing method (unlike in C#); you have to use leave to branch to a ret elsewhere in the method. As a side effect, leave empties the stack.
    catch blocks
    The only way of entering a catch block is if it is run by the SEH. At the start of the block execution, the thrown exception will be the only thing on the stack. The only way of leaving a catch block is to use throw, rethrow, or leave, in a similar way to try blocks. However, one thing you can do is use a leave to branch back to an arbitrary place in the handler's try block! In other words, you can do this:
        .try {
            // ...
            newobj instance void [mscorlib]System.Exception::.ctor()
            throw
        MidTry:
            // ...
            leave.s RestOfMethod
        }
        catch [mscorlib]System.Exception {
            // ...
            leave.s MidTry
        }
        RestOfMethod:
            // ...
    As far as I know, this mechanism is not exposed in C# or VB.
    finally/fault blocks
    The only way of entering a finally or fault block is via the SEH, either as the result of a leave instruction in the corresponding try block, or as part of handling an exception. The only way to leave a finally or fault block is to use endfinally or endfault (both compile to the same binary representation), which continues execution after the finally/fault block or, if the block was executed as part of handling an exception, signals that the SEH can continue walking the stack.
    filter blocks
    I'll be covering filters in a separate blog post. They're quite different to the others, and have their own special semantics.
    Phew! Complicated stuff, but it's important to know if you're writing or outputting exception handlers in IL. Dealing with the C# compiler is probably best saved for the next post.
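    A small C# sketch (not from the original post) showing why leave matters in practice: a return inside a protected region cannot compile to a bare ret, so the compiler stores the result in a local, emits leave out of the .try region (which runs the finally handler), and returns from a ret placed after the handler.
        static int ReadFirstByte(string path)
        {
            var stream = System.IO.File.OpenRead(path);
            try
            {
                // Compiles to: call ReadByte, store the result in a local,
                // then 'leave' the .try region (triggering the finally below).
                return stream.ReadByte();
            }
            finally
            {
                // Entered via the SEH mechanism; ends with 'endfinally'.
                stream.Dispose();
            }
            // In the generated IL, the 'ret' that returns the stored local
            // sits here, outside the .try/finally regions.
        }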

    Read the article

  • Ubuntu 12.04 LXC nat prerouting not working

    - by petermolnar
    I have a running Debian Wheezy setup I copied exactly to an Ubuntu 12.04 (elementary OS, used as desktop as well). While the Debian setup runs flawlessly, the Ubuntu version dies on the prerouting to containers (or so it seems). In short:
    lxc works
    containers work and run
    connecting to container from host OK (including mixed ports & services)
    connecting to outside world from container is fine
    What does not work is connecting from another box to the host on a port that should be NATed to a container. The setups:
    /etc/rc.local
        CMD_BRCTL=/sbin/brctl
        CMD_IFCONFIG=/sbin/ifconfig
        CMD_IPTABLES=/sbin/iptables
        CMD_ROUTE=/sbin/route
        NETWORK_BRIDGE_DEVICE_NAT=lxc-bridge
        HOST_NETDEVICE=eth0
        PRIVATE_GW_NAT=192.168.42.1
        PRIVATE_NETMASK=255.255.255.0
        PUBLIC_IP=192.168.13.100
        ${CMD_BRCTL} addbr ${NETWORK_BRIDGE_DEVICE_NAT}
        ${CMD_BRCTL} setfd ${NETWORK_BRIDGE_DEVICE_NAT} 0
        ${CMD_IFCONFIG} ${NETWORK_BRIDGE_DEVICE_NAT} ${PRIVATE_GW_NAT} netmask ${PRIVATE_NETMASK} promisc up
    Therefore lxc network is 192.168.42.0/24 and the host eth0 ip is 192.168.13.100; setup via network manager as static address.
    iptables:
        *mangle
        :PREROUTING ACCEPT [0:0]
        :INPUT ACCEPT [0:0]
        :FORWARD ACCEPT [0:0]
        :OUTPUT ACCEPT [0:0]
        :POSTROUTING ACCEPT [0:0]
        COMMIT
        *filter
        :FORWARD ACCEPT [0:0]
        :INPUT DROP [0:0]
        :OUTPUT ACCEPT [0:0]
        # Accept traffic from internal interfaces
        -A INPUT -i lo -j ACCEPT
        # accept traffic from lxc network
        -A INPUT -d 192.168.42.1 -s 192.168.42.0/24 -j ACCEPT
        # Accept internal traffic
        # Make sure NEW incoming tcp connections are SYN packets; otherwise we need to drop them:
        -A INPUT -p tcp ! --syn -m state --state NEW -j DROP
        # Packets with incoming fragments drop them. This attack result into Linux server panic such data loss.
        -A INPUT -f -j DROP
        # Incoming malformed XMAS packets drop them:
        -A INPUT -p tcp --tcp-flags ALL ALL -j DROP
        # Incoming malformed NULL packets:
        -A INPUT -p tcp --tcp-flags ALL NONE -j DROP
        # Accept traffic with the ACK flag set
        -A INPUT -p tcp -m tcp --tcp-flags ACK ACK -j ACCEPT
        # Allow incoming data that is part of a connection we established
        -A INPUT -m state --state ESTABLISHED -j ACCEPT
        # Allow data that is related to existing connections
        -A INPUT -m state --state RELATED -j ACCEPT
        # Accept responses to DNS queries
        -A INPUT -p udp -m udp --dport 1024:65535 --sport 53 -j ACCEPT
        # Accept responses to our pings
        -A INPUT -p icmp -m icmp --icmp-type echo-reply -j ACCEPT
        # Accept notifications of unreachable hosts
        -A INPUT -p icmp -m icmp --icmp-type destination-unreachable -j ACCEPT
        # Accept notifications to reduce sending speed
        -A INPUT -p icmp -m icmp --icmp-type source-quench -j ACCEPT
        # Accept notifications of lost packets
        -A INPUT -p icmp -m icmp --icmp-type time-exceeded -j ACCEPT
        # Accept notifications of protocol problems
        -A INPUT -p icmp -m icmp --icmp-type parameter-problem -j ACCEPT
        # Respond to pings, but limit
        -A INPUT -m icmp -p icmp --icmp-type echo-request -m state --state NEW -m limit --limit 6/s -j ACCEPT
        # Allow connections to SSH server
        -A INPUT -p tcp -m tcp --dport 22 -m state --state NEW -m limit --limit 12/s -j ACCEPT
        COMMIT
        *nat
        :OUTPUT ACCEPT [0:0]
        :PREROUTING ACCEPT [0:0]
        :POSTROUTING ACCEPT [0:0]
        -A PREROUTING -d 192.168.13.100 -p tcp -m tcp --dport 2221 -m state --state NEW -m limit --limit 12/s -j DNAT --to-destination 192.168.42.11:22
        -A PREROUTING -d 192.168.13.100 -p tcp -m tcp --dport 80 -m state --state NEW -m limit --limit 512/s -j DNAT --to-destination 192.168.42.11:80
        -A PREROUTING -d 192.168.13.100 -p tcp -m tcp --dport 443 -m state --state NEW -m limit --limit 512/s -j DNAT --to-destination 192.168.42.11:443
        -A POSTROUTING -d 192.168.42.0/24 -o eth0 -j SNAT --to-source 192.168.13.100
        -A POSTROUTING -o eth0 -j MASQUERADE
        COMMIT
    sysctl:
        net.ipv4.conf.all.forwarding = 1
        net.ipv4.conf.all.mc_forwarding = 0
        net.ipv4.conf.default.forwarding = 1
        net.ipv4.conf.default.mc_forwarding = 0
        net.ipv4.ip_forward = 1
    I've set up full iptables log on the container; none of the packets addressed to 192.168.13.100, port 80 is reaching the container. I've even tried different kernels (server kernel, raring lts kernel, etc), modprobe everything iptables & nat related, nothing. Any ideas?

    Read the article

  • What common interface would be appropriate for these game object classes?

    - by Jefffrey
    Question
    A component-based system's goal is to solve the problems that derive from inheritance: for example, the fact that some parts of the code (the so-called components) are reused by very different classes that, hypothetically, would lie in very different branches of the inheritance tree. That's a very nice concept, but I've found out that CBS is often hard to accomplish without using ugly hacks. Implementations of this system are often far from clean. But I don't want to discuss this any further. My question is: how can I solve the same problems a CBS tries to solve with a very clean interface? (Possibly with examples; there are a lot of abstract talks about the "perfect" design already.)
    Context
    Here's an example I was going for before realizing I was just reinventing inheritance again:
        class Human {
        public:
            Position position;
            Movement movement;
            Sprite sprite;
            // other human specific components
        };
        class Zombie {
        public:
            Position position;
            Movement movement;
            Sprite sprite;
            // other zombie specific components
        };
    After writing that I realized I needed an interface, otherwise I would have needed N containers for N different types of objects (or to use boost::variant to gather them all together). So I thought of polymorphism (moving what systems do in a CBS design into class-specific functions):
        class Entity {
        public:
            virtual void on_event(Event) {} // not pure virtual on purpose
            virtual void on_update(World) {}
            virtual void on_draw(Window) {}
        };
        class Human : public Entity {
        private:
            Position position;
            Movement movement;
            Sprite sprite;
        public:
            virtual void on_event(Event) { ... }
            virtual void on_update(World) { ... }
            virtual void on_draw(Window) { ... }
        };
        class Zombie : public Entity {
        private:
            Position position;
            Movement movement;
            Sprite sprite;
        public:
            virtual void on_event(Event) { ... }
            virtual void on_update(World) { ... }
            virtual void on_draw(Window) { ... }
        };
    Which was nice, except for the fact that now the outside world would not even be able to know where a Human is positioned (it does not have access to its position member). That would be useful for tracking the player position for collision detection, or if on_update the Zombie wanted to track down its nearest human to move towards him. So I added const Position& get_position() const; to both the Zombie and Human classes. And then I realized that this functionality was shared, so it should have gone to the common base class: Entity. Do you notice anything? Yes, with that methodology I would have a god Entity class full of common functionality (which is the thing I was trying to avoid in the first place).
    Meaning of "hacks" in the implementation I'm referring to
    I'm talking about the implementations that define entities as simple IDs to which components are dynamically attached. Their implementation can vary from C-stylish:
        int last_id;
        Position* positions[MAX_ENTITIES];
        Movement* movements[MAX_ENTITIES];
    where positions[i], movements[i], component[i], ... make up the entity, to more C++-style:
        int last_id;
        std::map<int, Position> positions;
        std::map<int, Movement> movements;
    From these, systems can detect whether an entity/id has particular components attached.

    Read the article

  • Security Access Control With Solaris Virtualization

    - by Thierry Manfe-Oracle
    Numerous Solaris customers consolidate multiple applications or servers on a single platform. The resulting configuration consists of many environments hosted on a single infrastructure, and security constraints sometimes exist between these environments. Recently, a customer consolidated many virtual machines belonging to both their Intranet and Extranet on a pair of SPARC Solaris servers interconnected through InfiniBand. Virtual machines were mapped to Solaris Zones, and one security constraint was to prevent SSH connections between the Intranet and the Extranet. This case study gives us the opportunity to understand how the Oracle Solaris Network Virtualization Technology (a.k.a. Project Crossbow) can be used to control outbound traffic from Solaris Zones.
    Solaris Zones from both the Intranet and Extranet use an InfiniBand network to access a ZFS Storage Appliance that exports NFS shares. Solaris global zones on both SPARC servers mount iSCSI LUs exported by the Storage Appliance. Non-global zones are installed on these iSCSI LUs. With no security hardening, if an Extranet zone gets compromised, the attacker could try to use the Storage Appliance as a gateway to the Intranet zones, or even worse, to the global zones, as all the zones are reachable from this node.
    One solution consists in using Solaris Network Virtualization Technology to stop outbound SSH traffic from the Solaris Zones. The virtualized network stack provides per-network-link flows. A flow classifies network traffic on a specific link. As an example, on the network link used by a Solaris Zone to connect to the InfiniBand, a flow can be created for TCP traffic on port 22, thereby creating a flow for the SSH traffic. A bandwidth can be specified for that flow and, if set to zero, the traffic is blocked. Last but not least, flows are created from the global zone, which means that even with root privileges in a Solaris Zone an attacker cannot disable or delete a flow. With the flow approach, the outbound traffic of a Solaris Zone is controlled from outside the zone. Schema 1 describes the new network setting once the security has been put in place.
    Here are the instructions to create a Crossbow flow as used in Schema 1:
        (GZ)# zoneadm -z zonename halt
    ...halts the Solaris Zone.
        (GZ)# flowadm add-flow -l iblink -a transport=TCP,remote_port=22 -p maxbw=0 sshFilter
    ...creates a flow on the IB partition "iblink" used by the zone to connect to the InfiniBand. This IB partition can be identified by intersecting the output of the commands 'zonecfg -z zonename info net' and 'dladm show-part'. The flow is created on port 22, for the TCP traffic, with a zero maximum bandwidth. The name given to the flow is "sshFilter".
        (GZ)# zoneadm -z zonename boot
    ...restarts the Solaris Zone now that the flow is in place.
    Solaris Zones and Solaris Network Virtualization enable SSH access control on InfiniBand (and on Ethernet) without the extra cost of a firewall. With this approach, no change is required on the InfiniBand switch. All the security enforcements are put in place at the Solaris level, minimizing the impact on the overall infrastructure. The Crossbow flows come in addition to many other security controls available with Oracle Solaris, such as IPFilter and Role-Based Access Control, that can be used to tackle security challenges.

    Read the article

  • Is this Hybrid of Interface / Composition kosher?

    - by paul
    I'm working on a project in which I'm considering using a hybrid of interfaces and composition as a single thing. What I mean by this is having a containee class be used as a front for functionality implemented in a container class, where the container exposes the containee as a public property.
    Example (pseudocode):
        class Visibility(lambda doShow, lambda doHide, lambda isVisible)
            public method Show() {...}
            public method Hide() {...}
            public property IsVisible
            public event Shown
            public event Hidden
        class SomeClassWithVisibility
            private member visibility = new Visibility(doShow, doHide, isVisible)
            public property Visibility with get() = visibility
            private method doShow() {...}
            private method doHide() {...}
            private method isVisible() {...}
    There are three reasons I'm considering this:
    1. The language in which I'm working (F#) has some annoyances w.r.t. implementing interfaces the way I need to (unless I'm missing something) and this will help avoid a lot of boilerplate code.
    2. The containee classes could really be considered properties of the container class(es); i.e. there seems to be a fairly strong has-a relationship.
    3. The containee classes will likely implement code which would have been pretty much the same when implemented in all the container classes, so why not do it once in one place? In the above example, this would include managing and emitting the Shown/Hidden events.
    Does anyone see any issues with this Composiface/Intersition method, or know of a better way?
    EDIT 2012.07.26 - It seems a little background information is warranted: Where I work, we have a bunch of application front-ends that have limited access to system resources -- they need access to these resources to fully function. To remedy this we have a back-end application that can access the needed resources, with which the front-ends can communicate. (There is an API written for the front-ends for accessing back-end functionality as though it were part of the front-end.) The back-end program is out of date and its functionality is incomplete. It has made the transition from company to company a couple of times and we can't even compile it anymore. So I'm trying to rewrite it in my spare time. I'm trying to update things to make a nice(r) interface/API for the front-ends (while allowing for backwards compatibility with older front-ends), hopefully something full of OOPy goodness. The thing is, I don't want to write the front-end API after I've written pretty much the same code in F# for implementing the back-end; so, what I'm planning on doing is applying attributes to classes/methods/properties that I would like to have code for in the API, then generating this code from the F# assembly using reflection.
    The method outlined in this question is a possible alternative I'm considering instead of implementing straight interfaces on the classes in F#, because they're kind of a bear: in order to access something of an interface that has been implemented in a class, you have to explicitly cast an instance of that class to the interface type. This would make things painful when getting calls from the front-ends. If you don't want to have to do this, you have to call out all of the interface's methods/properties again in the class, outside of the interface implementation (which is separate from regular class members), and call the implementation's members. This is basically repeating the same code, which is what I'm trying to avoid!

    Read the article

  • Monitoring Windows Azure Service Bus Endpoint with BizTalk 360?

    - by Michael Stephenson
    I'm currently working with a customer who is undergoing an initiative to expose some of their line-of-business applications to external partners and SaaS applications, and as part of this we have been looking at using the Windows Azure Service Bus. For the first part of the project we were focused on some synchronous request-response scenarios where an external application would use the Service Bus relay functionality to get data from some internal applications.
    When we were looking at the operational monitoring side of the solution it was obvious that, although most of the normal server monitoring capabilities would be required for the on-premise components, we would have to look at new approaches to validate that the operation of the service from outside of the organization was working as expected. A number of months ago one of my colleagues, Elton Stoneman, wrote about an approach I have introduced with a number of clients in the past where we implement a diagnostics service in each service component we build. This service would allow us to make a call which would flex some of the working parts of the system to prove it was working within any SLA. This approach is discussed in the following article: http://geekswithblogs.net/EltonStoneman/archive/2011/12/12/the-value-of-a-diagnostics-service.aspx
    In our solution we wanted to take the same approach, but we had to consider that the service clients were external to the service. We also had to consider that, when you go through Windows Azure Service Bus, most standard monitoring solutions do not give you an easy way to check the endpoint. In a previous article I have described how you can use BizTalk 360 to monitor things using a custom extension to the Web Endpoint Manager, and I felt that we could use this approach to provide an excellent way to monitor our Service Bus endpoint. The previous article is available on the following link: http://geekswithblogs.net/michaelstephenson/archive/2012/09/12/150696.aspx
    The Monitoring Solution
    BizTalk 360 currently has an easy way to hook up the endpoint manager to a URL, which it will then call; if a successful response is returned, it considers the endpoint to be in a healthy state. We would take advantage of this by creating an ASP.NET web page which would be called by BizTalk 360, and behind this page we would implement the functionality to call the diagnostics service on our Service Bus endpoint. The ASP.NET page could include logic to work out how to handle the response from the diagnostics service. For example, if the overall result of the diagnostics service was successful but the call took longer than a certain amount of time, then we could return an error and indicate the service is taking too long. The following diagram illustrates the monitoring pattern.
    The diagnostics service which is hosted in the line-of-business application allows us to ping a simple message through the Azure Service Bus relay to the WCF services in the LOB application, and we get a response back indicating that the service is working fine. To implement this I used the exact same approach I described in my previous post to create a custom web page which calls the diagnostics service and then returns an HTTP response code which depends on the error condition returned, or a 200 if it was successful.
    One of the limitations of this approach is that the competing consumer pattern for listening to messages from Service Bus means that you cannot guarantee which server would process your diagnostics check message, but with BizTalk 360 you could simply add multiple endpoint checks so that it could access the individual on-premise web servers directly to ensure that each server is working fine, and then check that messages can also be processed through the cloud.
    Conclusion
    It took me about 15 minutes to get a proof of concept of this up and running which was able to monitor our web services which had been exposed via Windows Azure Service Bus. I was then able to inherit all of the monitoring benefits of BizTalk 360 to provide an enterprise-class monitoring solution for our cloud-enabled API.
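    As a rough illustration (not from the original posts; the handler name, the CheckDiagnostics delegate and the five-second threshold are all assumptions), the check page can boil down to a small handler that calls the diagnostics service, times it, and maps the outcome to an HTTP status code for BizTalk 360:
        // Illustrative sketch only; the CheckDiagnostics delegate stands in for whatever
        // proxy invokes the diagnostics operation over the Service Bus relay.
        public class HealthCheckHandler : System.Web.IHttpHandler
        {
            // Wire this up to the real diagnostics call at application start-up.
            public static System.Func<bool> CheckDiagnostics = () => true;

            public bool IsReusable { get { return true; } }

            public void ProcessRequest(System.Web.HttpContext context)
            {
                var timer = System.Diagnostics.Stopwatch.StartNew();
                bool healthy;
                try
                {
                    healthy = CheckDiagnostics();
                }
                catch
                {
                    healthy = false; // any exception means the endpoint is unhealthy
                }
                timer.Stop();

                // 200 tells BizTalk 360 the endpoint is healthy; anything else marks it as down.
                bool withinSla = timer.ElapsedMilliseconds < 5000; // example threshold only
                context.Response.StatusCode = (healthy && withinSla) ? 200 : 500;
            }
        }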

    Read the article

  • Two Weeks To Go, Still Time to Register

    - by speakjava
    Yes, it's now only two weeks to the start of the 17th JavaOne conference! This will be my ninth JavaOne, I came fairly late to this event, attending for the first time in 2002.  Since then I've missed two conferences, 2006 for the birth of my son (a reasonable excuse I think) and 2010 for reasons we'll not go into here.  I have quite the collection of show devices, I've still got the WoWee robot, the HTC phone for JavaFX, the programmable pen and the Sharp Zaurus.  The only one I didn't keep was the homePod music player (I wonder why?) JavaOne is a special conference for many reasons, some of which I list here: A great opportunity to catch up on the latest changes in the Java world.  This is not just in terms of the platform, but as much about what people are doing with Java to build new and cool applications. A chance to meet people.  We have these things called BoFs, which stands for "Birds of a Feather", as in "Birds of a feather, flock together".  The idea being to have sessions where people who are interested in the same topic don't just get to listen to a presentation, but get to talk about it.  These sessions are great, but I find that JavaOne is as much about the people I meet in the corridors and the discussions I have there as it is about the sessions I get to attend. Think outside the box.  There are a lot of sessions at JavaOne covering the full gamut of Java technologies and applications.  Clearly going to sessions that relate to your area of interest is great, but attending some of the more esoteric sessions can often spark thoughts and stimulate the imagination to go off and do new and exciting things once you get back. Get the lowdown from the Java community.  Java is as much about community as anything else and there are plenty of events where you can get involved.  The GlassFish party is always popular and for Java Champions and JUG leaders there's a couple of special events too. Not just all hard work.  Oracle knows how to throw a party and the appreciation event will be a great opportunity to mingle with peers in a more relaxed environment.  This year Pearl Jam and Kings of Leon will be playing live.  Add free beer and what more could you want? So there you have it.  Just a few reasons for why you want to attend JavaOne this year.  Oh, and of course I'll be presenting three sessions which is even more reason to go.  As usual I've gone for some mainstream ("Custom Charts" for JavaFX) and some more 'out there' ("Java and the Raspberry Pi" and "Gestural Interfaces for JavaFX").  Once again I'll be providing plenty of demos so more than half my luggage this year will consist of a Kinect, robot arm, Raspberry Pis, gamepad and even an EEG sensor. If you're a student there's one even more attractive reason for going to JavaOne: It's Free! Registration is here.  Hope to see you there!

    Read the article

  • In the Firing Line: The impact of project and portfolio performance on the CEO

    - by Melissa Centurio Lopes
    What are the primary measurements for rating CEO performance? For corporate boards, business analysts, investors, and the trade press, the metrics they deploy are relatively binary in nature: what is being done to generate earnings, and what is being done to build and sustain high performance? As for the market, interest is primarily aroused when operational and financial performance falls outside planned commitments for the year. When organizations announce better than predicted results, they usually experience an immediate increase in share price. Likewise, poor results have an obviously negative impact on the share price and impact the role and tenure of the incumbent CEO. The danger for the CEO is that the risk of failure is ever present, ranging from manufacturing delays and supply chain issues to labor shortages and scope creep. This risk is enhanced by the involvement of secondary suppliers providing services critical to overall work schedules, and magnified further across a portfolio of programs and projects underway at any one time, all set within a global context. All can impact planned return on investment and have an inevitable impact on the share price, the primary empirical measure of day-to-day performance.
    Read this complete complimentary report, In the Firing Line, and explore the direct link between the health of the portfolio and CEO performance. This report will provide an overview of the responsibility the CEO has for implementing and maintaining a culture of accountability, offer examples of some of the higher profile project failings in recent years, and detail the capabilities available to the CEO to mitigate the risks residing in their own portfolios.

    Read the article

  • MySQL Server 5.6 defaults changes

    - by user12626240
    We're improving the MySQL Server defaults, as announced by Tomas Ulin at MySQL Connect. Here's what we're changing (setting: old default → new default, with notes):
        back_log: 50 → 50 + (max_connections / 5), capped at 900
        binlog_checksum: off → CRC32 (new variable in 5.6)
        binlog_row_event_max_size: 1k → 8k
        flush_time: 1800 → 0 on Windows (was already 0 on other platforms)
        host_cache_size: 128 → 128 + 1 for each of the first 500 max_connections + 1 for every 20 max_connections over 500, capped at 2000 (new variable in 5.6)
        innodb_autoextend_increment: 8 → 64 (now affects *.ibd files; 64 means 64 megabytes)
        innodb_buffer_pool_instances: 0 → 8 (on 32-bit Windows only, if innodb_buffer_pool_size is greater than 1300M, the default is innodb_buffer_pool_size / 128M)
        innodb_concurrency_tickets: 500 → 5000
        innodb_file_per_table: off → on
        innodb_log_file_size: 5M → 48M (InnoDB will always change the file size to match the my.cnf value; also see innodb_log_compressed_pages and binlog_row_image)
        innodb_old_blocks_time: 0 → 1000 (1 second)
        innodb_open_files: 300 → 300, or if innodb_file_per_table is ON, the higher of table_open_cache or 300
        innodb_purge_batch_size: 20 → 300
        innodb_purge_threads: 0 → 1
        innodb_stats_on_metadata: on → off
        join_buffer_size: 128k → 256k
        max_allowed_packet: 1M → 4M
        max_connect_errors: 10 → 100
        open_files_limit: 0 → 5000 (see note 1)
        query_cache_size: 0 → 1M
        query_cache_type: on/1 → off/0
        sort_buffer_size: 2M → 256k
        sql_mode: none → NO_ENGINE_SUBSTITUTION (see a later post about the default my.cnf for STRICT_TRANS_TABLES)
        sync_master_info: 0 → 10000 (recommend: master_info_repository=table)
        sync_relay_log: 0 → 10000
        sync_relay_log_info: 0 → 10000 (recommend: relay_log_info_repository=table; also see Replication Relay and Status Logs)
        table_definition_cache: 400 → 400 + table_open_cache / 2, capped at 2000
        table_open_cache: 400 → 2000 (also see table_open_cache_instances)
        thread_cache_size: 0 → 8 + max_connections / 100, capped at 100
    Note 1: In 5.5 there was already a rule to make open_files_limit 10 + max_connections + table_cache_size * 2 if that was higher than the user-specified value. It now uses the higher of that and (5000 or what you specify).
    We are also adding a new default my.cnf file and guided instructions on the key settings to adjust. More on this in a later post. We're also providing a page with suggestions for settings to improve backwards compatibility. The old example files like my-huge.cnf are obsolete. Some of the improvements are present from 5.6.6 and the rest are coming. These are ideas, and until they are in an official GA release, they are subject to change.
    As part of this work I reviewed every old server setting plus many hundreds of emails of feedback and testing results from inside and outside Oracle's MySQL Support team, and the many excellent blog entries and comments from others over the years, including from many MySQL gurus out there, like Baron, Sheeri, Ronald, Schlomi, Giuseppe and Mark Callaghan. With these changes we're trying to make it easier to set up the server by adjusting only a few settings that will cause others to be set. This happens only at server startup and only applies to variables where you haven't set a value. You'll see a similar approach used for the Performance Schema. The gurus don't need this, but for many newcomers the defaults will be very useful.
    Possibly the most unusual change is the way we vary the setting for innodb_buffer_pool_instances for 32-bit Windows. This is because we've found that DLLs with specified load addresses often fragment the limited four-gigabyte 32-bit address space and make it impossible to allocate more than about 1300 megabytes of contiguous address space for the InnoDB buffer pool. The smaller requests for many pools are more likely to succeed.
    If you change the value of innodb_log_file_size in my.cnf you will see a message like this in the error log file at the next restart, instead of the old error message:
        [Warning] InnoDB: Resizing redo log from 2*64 to 5*128 pages, LSN=5735153
    One of the biggest challenges for the defaults is the millions of installations on a huge range of systems, from point-of-sale terminals and routers through shared hosting or end-user systems and on to major servers with lots of CPU cores, hundreds of gigabytes of RAM and terabytes of fast disk space. Our past defaults were for the smaller systems, and these changes move the target to larger shared hosting or shared end-user systems, still with a bias towards the smaller end. There is a bias in favour of OLTP workloads, so reporting systems may need more changes. Where there is a conflict between the best settings for benchmarks and normal use, we've favoured production, not benchmarks. We're very interested in your feedback, comments and suggestions.
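    As a minimal, illustrative my.cnf fragment (not from the original post; the option names are real 5.6 options discussed above, but the values are only examples), this is the kind of explicit configuration the new defaults are intended to make unnecessary for most users:
        [mysqld]
        # Per-table tablespaces, matching the new 5.6 default
        innodb_file_per_table = 1
        # Larger redo logs; InnoDB resizes the files at the next restart to match
        innodb_log_file_size = 48M
        # Crash-safe replication metadata, as recommended alongside the sync_* defaults
        master_info_repository = TABLE
        relay_log_info_repository = TABLE
        # Stricter SQL behaviour than the bare NO_ENGINE_SUBSTITUTION default
        sql_mode = NO_ENGINE_SUBSTITUTION,STRICT_TRANS_TABLES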

    Read the article
