Search Results

Search found 2170 results on 87 pages for 'earning potential'.


  • Centered Content using panelGridLayout

    - by Duncan Mills
    A classic layout conundrum, which I think pretty much every ADF developer has faced at some time or other, is that of truly centered (centred) layout. Typically this requirement comes up in relation to, say, displaying a login-type screen or similar. Superficially the problem seems easy, but as my buddy Eduardo explained when discussing this subject a couple of years ago, it's actually a little more complex than you might have thought. In fact, even the "solution" provided in that posting is not perfect and suffers from several issues (not Eduardo's fault, just limitations of panelStretchLayout!):

      • The top, bottom, end and start facets all need something in them.
      • The percentages you apply to topHeight, startWidth etc. are calculated as part of the whole width. This means that you have to guesstimate the correct percentage based on your typical screen size and the sizing of the centered content. So, at best, you will in fact only get approximate centering, and the more you tune that centering for a particular browser size, the more it will fail if the user resizes.
      • You can't attach styles to the panelStretchLayout facets, so to provide things like background color or fixed sizing you need to embed another container that you can apply styles to, typically a panelGroupLayout.

    For reference, here's the code to print a simple 100px x 100px red centered square using the panelStretchLayout solution, approximately tuned to a 1920 x 1080 maximized browser (IDs omitted for brevity):

        <af:panelStretchLayout startWidth="45%" endWidth="45%"
                               topHeight="45%" bottomHeight="45%">
          <f:facet name="center">
            <af:panelGroupLayout inlineStyle="height:100px;width:100px;background-color:red;"
                                 layout="vertical"/>
          </f:facet>
          <f:facet name="top">
            <af:spacer height="1" width="1"/>
          </f:facet>
          <f:facet name="bottom">
            <af:spacer height="1" width="1"/>
          </f:facet>
          <f:facet name="start">
            <af:spacer height="1" width="1"/>
          </f:facet>
          <f:facet name="end">
            <af:spacer height="1" width="1"/>
          </f:facet>
        </af:panelStretchLayout>

    And so to panelGridLayout. Here's the good news: panelGridLayout makes this really easy, and it works without the caveats above. The key point is that percentages used in the grid definition are evaluated after the fixed sizes are taken into account, so rather than having to guesstimate what percentage will more or less center the content, you can just say "allocate half of what's left" to the flexible content and you're done. Here's the same example using panelGridLayout:

        <af:panelGridLayout>
          <af:gridRow height="50%"/>
          <af:gridRow height="100px">
            <af:gridCell width="50%"/>
            <af:gridCell width="100px" halign="stretch" valign="stretch"
                         inlineStyle="background-color:red;">
              <af:spacer width="1" height="1"/>
            </af:gridCell>
            <af:gridCell width="50%"/>
          </af:gridRow>
          <af:gridRow height="50%"/>
        </af:panelGridLayout>

    So you can see that the amount of markup is somewhat smaller (as is, I should mention, the generated DOM structure in the browser), mainly because we don't need to introduce artificial components to ensure that facets are actually observed in the final result. But the key thing here is that the centering is no longer approximate, and it will work as expected as the user resizes the browser window. This is by far the more satisfactory solution and, although it's only a simple example, it will hopefully open your eyes to the potential of panelGridLayout as your number one, go-to layout container.
Just a reminder though, right now, panelGridLayout is only available in 11.1.2.2 and above.

    Read the article

  • NDepend Evaluation: Part 3

    - by Anthony Trudeau
    NDepend is a Visual Studio add-in designed for intense code analysis with the goal of high code quality. NDepend uses a number of metrics and aggregates the data in pleasing static and active visual reports. My evaluation of NDepend will be broken up into several different parts. In the first part of the evaluation I looked at installing the add-in, and in the second part I went over my first impressions, including an overview of the features. In this installment I provide a little more detail on a few of the features that I really like.

    Dependency Matrix

    The dependency matrix is one of the rich visual components provided with NDepend. At a glance it lets you know where you have coupling problems, including cycles. It does this with a number indicating the weight of the dependency and a color-coding that indicates the nature of the dependency. Green and blue cells are direct dependencies (with the difference being whether the relationship is from row-to-column or column-to-row). Black cells are the ones that you really want to know about. These indicate that you have a cycle. That is, type A refers to type B and type B also refers to type A. But that's not the end of the story. A handy pop-up appears when you hover over the cell in question. It explains the color, the dependency, and provides several interesting links that will teach you more than you want to know about the dependency. You can double-click the problem cells to explode the dependency. That will show the dependencies on a method-by-method basis, allowing you to more easily target and fix the problem. When you're done you can click the back button on the toolbar.

    Dependency Graph

    The dependency graph is another component provided. It's complementary to the dependency matrix, but it isn't as easy to identify dependency issues using this window. On a positive note, it does provide more information than the matrix. My biggest issue with the dependency graph is determining what is shown. This was not readily obvious. I ended up using the navigation buttons to get an acceptable view. I would have liked to choose what I see. Once you see the types you want, you can get a decent idea of coupling strength based on the width of the dependency lines. Double-arrowed lines are problematic and are shown in red. The size of the boxes is related to the metric being displayed. This is controlled using the Box Size drop-down in the toolbar. Personally, I don't find the size of the box to be helpful, so I change it to Constant Font. One nice thing about the display is that you can see the entire path of dependencies when you hover over a type. This is done by color-coding the dependencies and dependants. It would be nice if selecting the box for the type would lock the highlighting in place. I did find a perhaps unintended work-around to the color-coding: you can lock the color-coding in by hovering over the type, right-clicking, and then clicking on the canvas area to clear the pop-up menu. You can then do whatever you like with it, including saving it to an image file with the color-coding.

    CQL

    NDepend uses a code query language (CQL) to work with your code just as if it were a database. CQL doesn't approach the robustness of T-SQL or even LINQ, but it represents an impressive attempt at providing an expressive way to enumerate and interrogate your code. There are two main windows you'll use when working with CQL.
    The CQL Query Explorer allows you to define which queries (rules) are run as part of a report – I immediately unselected rules that I don't want in my results. The CQL Query Edit window is where you can view or author your own rules. The explorer window is pretty self-explanatory, so I won't mention it further other than to say that any queries you author will appear in the custom group.

    Authoring your own queries is really hard to screw up. The Intellisense-like pop-ups tell you what you can do while making composition easy. I was able to create a query within two minutes of playing with the editor. My query warns if any types that are interfaces don't start with an "I":

        WARN IF Count > 0 IN SELECT TYPES WHERE IsInterface AND !NameLike "I"

    The results from the CQL Query Edit window are immediate, which makes it useful for ad hoc querying. It's worth mentioning two things that could make the experience smoother. First, out of habit from using Visual Studio I expect to be able to scroll and press Tab to select an item in the list (like Intellisense); instead you have to press Enter when you scroll to the item you want. Second, the commands are case-sensitive. I don't see a really good reason to enforce that. CQL has a lot of potential, not just in enforcing code quality, but also in enforcing architectural constraints that your enterprise has defined.

    Up Next

    My next update will be the final part of the evaluation. I will summarize my experience and provide my conclusions on the NDepend add-in.

    ** View Part 1 of the Evaluation **
    ** View Part 2 of the Evaluation **

    Disclaimer: Patrick Smacchia contacted me about reviewing NDepend. I received a free license in return for sharing my experiences and talking about the capabilities of the add-in on this site. There is no expectation of a positive review elicited from the author of NDepend.

    Read the article

  • Provocative Tweets From the Dachis Social Business Summit

    - by Mike Stiles
    On June 20, all who follow social business and how social is changing how we do business and internal business structures gathered in London for the Dachis Social Business Summit. In addition to Oracle SVP Product Development, Reggie Bradford, brands and thought leaders posed some thought-provoking ideas and figures. Here are some of the most oft-tweeted points, and our thoughts that they provoked.

    Tweet: The winners will be those who use data to improve performance.
    Thought: Everyone is dwelling on ROI. Why isn't everyone dwelling on the opportunity to make their product or service better (as if that doesn't have an effect on ROI)? Big data can improve you…let it.

    Tweet: High performance hinges on integrated teams that interact with each other.
    Thought: Team members may work well with each other, but does the team as a whole "get" what other teams are doing? That's the key to an integrated, companywide workforce. (Internal social platforms can facilitate that, by the way.)

    Tweet: Performance improvements come from making the invisible visible.
    Thought: Many of the factors that drive customer behavior and decisions are invisible. Through social, customers are now showing us what we couldn't see before…if we're paying attention.

    Tweet: Games have continuous feedback, which is why they're so engaging. Apply that to business operations.
    Thought: You think your employees have an obligation to be 100% passionate and engaged at all times about making you richer. Think again. Like customers, they must be motivated. Visible insight that they're advancing on their goals helps.

    Tweet: Who can add value to the data? Data will tend to migrate to where it will be most effective.
    Thought: Not everybody needs all the data. One team will be able to make sense of, use, and add value to data that may be irrelevant to another team. Like a strategized football play, the data has to get sent to the spot on the field where it's needed most.

    Tweet: The sale isn't the light at the end of the tunnel, it's the start of a new marketing cycle.
    Thought: Another reason the ROI question is fundamentally flawed. The sale is not the end of the potential return on investment. After-the-sale service and nurturing begins where the sales "victory" ends.

    Tweet: A dead sale is one that's not shared. People must be incentivized to share.
    Thought: Guess what, customers now know their value to you as marketers on your behalf. They'll tell people about your product, but you've got to answer, "Why should I?" And you've got to answer it with something substantial, not lame trinkets.

    Tweet: Social user motivations are competition, affection, excellence and curiosity.
    Thought: Your followers will engage IF: they can get something for doing it, love your culture so much they want you to win, are consistently stunned at the perfection and coolness of your products, or have been stimulated enough to want to know more.

    Tweet: In Europe, 92% surveyed said they couldn't care less about brands.
    Thought: Oh well, so much for loving you or being impressed enough with your products & service that they want you to win. We've got a long way to go.

    Tweet: A complaint is a gift.
    Thought: Our instinct where complaints are concerned is to a) not listen, b) dismiss the one who complains as a kook, c) make excuses, and d) reassure ourselves with internal group-think that they're wrong and we're right. It's the perfect recipe for how to never, ever grow or get better. In a way, this customer cares more than you do.
    Tweet: 78% of consumers think peer recommendation is the best form of advertising. Eventually, engagement is going to eat advertising.
    Thought: Why is peer recommendation best? Trust. If a friend tells me how great a movie was, I believe him. He has credibility with me. He's seen it, and he couldn't care less if I buy a ticket. He's telling me it was awesome because he sincerely believes that it was. That's gold.

    Tweet: 86% of customers are willing to pay more for a better customer experience.
    Thought: This "how mad can we make our customers without losing them" strategy has to end. The customer experience has actual monetary value, money you're probably leaving on the table.

    @mikestiles
    Photo: stock.xchng

    Read the article

  • Hiring New IT Employees versus Promoting Internally for IT Positions

    Recently I was asked my opinion on the hiring of IT professionals, specifically the option of hiring new IT employees versus promoting internally for IT positions. After thinking a little more about this staffing question, I think my answer is that it truly depends on the situation. However, in most cases I would side with promoting internally. The key factors in this decision should be based on a company's or department's current values, culture, attitude, and existing priorities. For example, if a company values retaining all of its hard-earned business knowledge, then it would tend to promote existing employees internally over hiring a new employee. Moreover, the company will have to pay to train an existing employee to learn a new technology, and the learning curve for some technologies can be very steep. Conversely, if a company values new technologies and technical proficiency over business knowledge, then it would tend to hire new employees because they may already have experience with a technology that the company is planning on using. In this scenario, the company has to take on the additional overhead of allowing a new employee to learn how the business operates before they are fully effective.

    To illustrate my points above, let us look at a contractor that builds in-ground pools. He has the option to hire employees that are very strong but use small shovels to dig, or employees weak in physical strength but who use large shovels to dig. Which employee should the contractor use to dig a hole for a new in-ground pool? If we compare the possible candidates for this job, we will find that they are very similar to hiring someone internally versus a new hire. The first example represents the existing workers: very strong in understanding how the business operates and why it operates in a specific manner. However, this employee could be weaker than an outsider on specific technologies and would need some time to build their technical prowess for a new position, much like the strong worker upgrading their shovel in order to remove more dirt at once when digging. The other employee is very similar to hiring a new person who may already have the large shovel but will need to increase their strength in order to use the shovel properly and efficiently so that they can move a maximum amount of dirt in a minimal amount of time. This can be compared to a new employee learning how a business operates before they can be fully functional and integrated into the company or department.

    Another key factor in this dilemma pertains to existing employees and their passion for their work, their ability to accept new responsibility when given, and their willingness to take on responsibilities when they see a need in the business. As much as possible should be considered in this decision, down to the mood of the team, the quality of the existing staff, the learning curve for both the technology and the business, and the potential side effects on the existing staff. In addition, there are many more considerations based on the current team's, department's, or company's culture and mood.

    There are several factors that need to be considered when promoting an individual or hiring new blood for a team. Both can provide great benefits as well as create controversy within a group.
    Personally, staffing, especially in the IT world, is like building a large-scale system in that all of the components and modules must fit together and perform as one cohesive system, in the same way a team must come together using their individually acquired skills so that they can work as one team. If a module is out of place or is nonexistent, then the rest of the team will suffer until all of its issues are addressed and resolved.

    Benefits of Promoting Internally

      • Internal promotions give employees a reason to constantly upgrade their technology, business, and communication skills if they want to further their career.
      • Employees can control their own destiny based on personal desires.
      • The employee already knows how the business operates.
      • Companies can save money by promoting internally because the initial overhead of allowing new hires to learn how a company operates is very expensive.
      • Newly promoted employees can assist in training their replacements while transitioning to their new role within a company.
      • Existing employees already have a proven track record in regards to fitting in with the business culture; this is always an unknown with all new hires.

    Benefits of a New Hire

      • New employees can energize and excite existing employees.
      • New employees can bring new ideas and advancements in technology.
      • New employees can offer a different perspective on existing issues based on their past experience.

    As you can see, the decision to promote an existing employee from within a company versus hiring a new person should be based on several factors that ultimately place the business in the best possible situation for the immediate and long-term future. How would you handle this situation? Would you hire a new employee or promote from within?

    Read the article

  • Why everybody should do Sales!

    - by FelixWehmeyer
    I speak with many business students and ask them what job they want to get into. Most of them tell me they want a job in Marketing, Management Consulting or Finance. I hardly ever hear "Sales, that is what I want to do", and I often wonder why. I would like to start with a quote from Zig Ziglar, a successful salesman: "Nothing happens until someone sells something."

    But to get back to the main point, why wouldn't you want to get into sales? When people think of sales, they picture a typical salesman in their head and think that selling is scary and all about manipulating, pressuring and pushing someone into buying something they don't need. Are these stereotypes accurate? I don't believe so. So why should you want to be in sales? If you think about selling as providing the solution to a problem and talking about the benefits of making a decision, then every job in this world comes out of selling. In every job you deal with coworkers whom you want to convince of your ideas, or you need to convince your boss that the project you want to work on is good for the company.

    These days, consumers and businesses are very well informed about services and products. When we are talking about highly complex products, such as IT solutions, businesses don't accept your run-of-the-mill salesman who is pushing a sale. These are often long projects where salespeople have a consulting and leading role. Salespeople need to be able to consult companies and customers on their problem and convince a client that their solution is the best fit.

    Next to the fact that sales is, by far, not as scary and shady as you thought, there are a few points that will make you want to consider a sales career:

      • Negotiating skills – When you are in sales you will learn how to negotiate. Salespeople learn to listen to their customers and try to make them happy, overcoming objections and coming to a final agreement that both parties are happy with.
      • Persistence/Challenge – As a salesperson you will often hear a negative answer; in a sales role you will start to embrace this and see a 'no' as a challenge, not as a rejection. This attitude change can help you a lot in your career, but also in your personal life. You will become more optimistic and gain a go-getter attitude.
      • Salary – As salespeople are seen as the moneymakers for the company, companies often reward their sales teams generously. Most likely in a sales role you will receive a good basic salary, and often you get nice bonuses on top of that based on your performance. Oracle is, for instance, the company that offers the highest average commission in the world. Further, you can expect many other benefits, as companies know that there is a high demand for good salespeople.
      • Teamwork – Sales is a lot like having your own business: you are responsible for your own territory or set of clients, and you are the one who is responsible for the revenue coming from that territory. So in order to generate revenue you will have to work together with many departments and people to make that happen. Every (potential) client could be seen as a different project, and you are the project leader.
      • Understanding customers and the business – Of any job that you could choose, sales will give you the most insight into the market. Salespeople are usually well-connected, talk with different customers, learn about the market and are up to date on all the latest changes. Even if you want to change to a different role in the long run, you have a great head start as you understand the market and customers like no one else.
      • Job security – Look at all the job postings out there: many of them are sales-related. So if you want a steady job, plenty of choice and companies willing to invest in you, sales could be something for you.

    Are you interested in exploring a sales career? At Oracle we are always looking for good sales professionals and fresh graduates who want to get into sales! For many languages, such as Flemish, Dutch, German, French, Swedish and Norwegian (and more), we are currently looking for graduates who want to develop their career at Oracle. Please have a look at this article for the experience of a Business Development Consultant at Oracle in Dublin. Want to learn more about this job? Check out this link or send an email to jessica.ebbelaar-at-oracle.com! Have a look at our website http://campus.oracle.com for all of our other latest sales and non-sales vacancies!

    Read the article

  • Take Two: Comparing JVMs on ARM/Linux

    - by user12608080
    Although the intent of the previous article, entitled Comparing JVMs on ARM/Linux, was to introduce and highlight the availability of the HotSpot server compiler (referred to as c2) for Java SE-Embedded ARM v7, it seems, based on feedback, that everyone was more interested in the OpenJDK comparisons to Java SE-E. In fact there were two main concerns:

      • The fact that the previous article compared Java SE-E 7 against OpenJDK 6 might be construed as an unlevel playing field because version 7 is newer and therefore potentially more optimized.
      • That the generic compiler settings chosen to build the OpenJDK implementations did not put those versions in a particularly favorable light.

    With those considerations in mind, we'll institute the following changes for this round of benchmarking:

      • In order to help alleviate an additional concern that there is some sort of benchmark bias, we'll use a different suite, called DaCapo. Funded and supported by many prestigious organizations, DaCapo's aim is to benchmark real world applications. Further information about DaCapo can be found at http://dacapobench.org.
      • At the suggestion of Xerxes Ranby, who has been a great help through this entire exercise, a newer Linux distribution will be used to assure that the OpenJDK implementations were built with more optimal compiler settings. The Linux distribution in this instance is Ubuntu 11.10 Oneiric Ocelot.
      • Having experienced difficulties getting Ubuntu 11.10 to run on the original D2Plug ARMv7 platform, for these benchmarks we'll switch to an embedded system that has a supported Ubuntu 11.10 release. That platform is the Freescale i.MX53 Quick Start Board. It has an ARMv7 Cortex-A8 processor running at 1GHz with 1GB RAM.
      • We'll limit comparisons to 4 JVM implementations:
          - Java SE-E 7 Update 2 c1 compiler (default)
          - Java SE-E 6 Update 30 (c1 compiler is the only option)
          - OpenJDK 6 IcedTea6 1.11pre 6b23~pre11-0ubuntu1.11.10.2 CACAO build 1.1.0pre2
          - OpenJDK 6 IcedTea6 1.11pre 6b23~pre11-0ubuntu1.11.10.2 JamVM build-1.6.0-devel

    Certain OpenJDK implementations were eliminated from this round of testing for the simple reason that their performance was not competitive. The Java SE 7u2 c2 compiler was also removed because, although quite respectable, it did not perform as well as the c1 compilers. Recall that c2 works optimally in long-lived situations; many of these benchmarks completed in a relatively short period of time. To get a feel for where c2 shines, take a look at the first chart in this blog.

    The first chart that follows includes performance of all benchmark runs on all platforms. Later on we'll look more at individual tests. In all runs, smaller means faster. The DaCapo aficionado may notice that only 10 of the 14 DaCapo tests for this version were executed. The reason for this is that these 10 tests represent the only ones successfully completed by all 4 JVMs. Only Java SE-E 6u30 could successfully run all of the tests. Both OpenJDK instances not only failed to complete certain tests, but also experienced VM aborts.

    One of the first observations that can be made between Java SE-E 6 and 7 is that, for all intents and purposes, they are on par with regards to performance. While it is a fact that successive Java SE releases add additional optimizations, it is also true that Java SE 7 introduces additional complexity to the Java platform, thus balancing out any potential performance gains at this point. We are still early into Java SE 7.
    We would expect further performance enhancements for Java SE-E 7 in future updates. In comparing Java SE-E to OpenJDK performance: of the two OpenJDK VMs, Cacao's results are respectable in 4 of the 10 tests. The charts that follow show the individual results of those four tests. Both Java SE-E versions do win every test and outperform Cacao in the range of 9% to 55%. For the remaining 6 tests, Java SE-E significantly outperforms Cacao, in the range of 114% to 311%. So it looks like OpenJDK results are mixed for this round of benchmarks. In some cases, performance looks to have improved. But in a majority of instances, OpenJDK still lags behind Java SE-Embedded considerably. Time to put on my asbestos suit. Let the flames begin...

    Read the article

  • SQL SERVER – Backing Up and Recovering the Tail End of a Transaction Log – Notes from the Field #042

    - by Pinal Dave
    [Notes from Pinal]: The biggest challenge people face is not taking a backup; the biggest challenge is restoring a backup successfully. I have seen so many different examples where users have failed to restore their database because they made some mistake while taking the backup and were not aware of it. The tail log backup was such an issue in earlier versions of SQL Server, but in the latest version of SQL Server the Microsoft team has addressed the confusion with additional information on the backup and restore screens themselves. Now that there is additional information, a few more people are confused because they have no clue what it means: previously they did not see this as an issue, and now they are finding the tail log to be a new learning. Linchpin People are database coaches and wellness experts for a data driven world. In this 42nd episode of the Notes from the Field series, database expert Tim Radney (partner at Linchpin People) explains, in very simple words, backing up and recovering the tail end of a transaction log.

    Many times when restoring a database over an existing database, SQL Server will warn you about needing to make a tail end of the log backup. This might be your reminder that you have to choose to overwrite the database, or it could be your reminder that you are about to write over and lose any transactions since the last transaction log backup. You might be asking yourself, "What is the tail end of the transaction log?" The tail end of the transaction log is simply any committed transactions that have occurred since the last transaction log backup. This is a very crucial part of a recovery strategy if you are lucky enough to be able to capture this part of the log.

    Most organizations have chosen to accept some amount of data loss. You might be shaking your head at this statement; however, if your organization is taking transaction log backups every 15 minutes, then your potential risk of data loss is up to 15 minutes. Depending on the extent of the issue causing you to have to perform a restore, you may or may not have access to the transaction log (LDF) to be able to back up those vital transactions. For example, if the storage array or disk that holds your transaction log file becomes corrupt or damaged, then you wouldn't be able to recover the tail end of the log. If you do have access to the physical log file, then you can still back up the tail end of the log.

    In 2013 I presented a session at the PASS Summit called "The Ultimate Tail Log Backup and Restore" and have been invited back this year to present it again. During this session I demonstrate how you can back up the tail end of the log even after the data file becomes corrupt. In my demonstration I set my database offline and then delete the data file (MDF). The database can't become more corrupt than that. I attempt to bring the database back online to change the state to RECOVERY PENDING and then back up the tail end of the log. I can do this by specifying WITH NO_TRUNCATE. Using NO_TRUNCATE is equivalent to specifying both COPY_ONLY and CONTINUE_AFTER_ERROR. As its name says, it does not try to truncate the log. This is a great demo; however, how could I achieve backing up the tail end of the log if the failure destroys my entire instance of SQL and all I had was the LDF file? During my demonstration I also demonstrate that I can attach the log file to a database on another instance and then back up the tail end of the log.
    If I am performing proper backups, then my most recent full, differential and log files should be on a server other than the one that crashed. I am able to achieve this task by creating a new database with the same name as the failed database. I then set the database offline, delete my data file and overwrite the log with my good log file. I attempt to bring the database back online and then back up the log with NO_TRUNCATE, just like in the first example. I encourage each of you to view my blog post and watch the video demonstration on how to perform these tasks. I really hope that none of you ever have to perform this in production; however, it is a really good idea to know how to do this just in case. It really isn't a matter of "IF" you will have to perform a restore of a production system but more of a "WHEN". Being able to recover the tail end of the log in these severe cases could be the difference between having to notify all your business customers of data loss or not. If you want me to take a look at your server and its settings, or if your server is facing any issue, we can Fix Your SQL Server.

    Note: Tim has also written an excellent book on SQL Backup and Recovery, a must have for everyone.

    Reference: Pinal Dave (http://blog.sqlauthority.com)
    Filed under: Notes from the Field, PostADay, SQL, SQL Authority, SQL Performance, SQL Query, SQL Server, SQL Tips and Tricks, T SQL

    Read the article

  • Summit reflections

    - by Rob Farley
    So far, my three PASS Summit experiences have been notably different to each other. My first, I wasn’t on the board and I gave two regular sessions and a Lightning Talk in which I told jokes. My second, I was a board advisor, and I delivered a precon, a spotlight and a Lightning Talk in which I sang. My third (last week), I was a full board director, and I didn’t present at all. Let’s not talk about next year. I’m not sure there are many options left. This year, I noticed that a lot more people recognised me and said hello. I guess that’s potentially because of the singing last year, but could also be because board elections can bring a fair bit of attention, and because of the effort I’ve put in through things like 24HOP... Yeah, ok. It’d be the singing. My approach was very different though. I was watching things through different eyes. I looked for the things that seemed to be working and the things that didn’t. I had staff there again, and was curious to know how their things were working out. I knew a lot more about what was going on behind the scenes to make various things happen, and although very little about the Summit was actually my responsibility (based on not having that portfolio), my perspective had moved considerably. Before the Summit started, Board Members had been given notebooks – an idea Tom (who heads up PASS’ marketing) had come up with after being inspired by seeing Bill walk around with a notebook. The plan was to take notes about feedback we got from people. It was a good thing, and the notebook forms a nice pair with the SQLBits one I got a couple of years ago when I last spoke there. I think one of the biggest impacts of this was that during the first keynote, Bill told everyone present about the notebooks. This set a tone of “we’re listening”, and a number of people were definitely keen to tell us things that would cause us to pull out our notebooks. PASSTV was a new thing this year. Justin, the host, featured on the couch and talked a lot of people about a lot of things, including me (he talked to me about a lot of things, I don’t think he talked to a lot people about me). Reaching people through online methods is something which interests me a lot – it has huge potential, and I love the idea of being able to broadcast to people who are unable to attend in person. I’m keen to see how this medium can be developed over time. People who know me will know that I’m a keen advocate of certification – I've been SQL certified since version 6.5, and have even been involved in creating exams. However, I don’t believe in studying for exams. I think training is worthwhile for learning new skills, but the goal should be on learning those skills, not on passing an exam. Exams should be for proving that the skills are there, not a goal in themselves. The PASS Summit is an excellent place to take exams though, and with an attitude of professional development throughout the event, why not? So I did. I wasn’t expecting to take one, but I was persuaded and took the MCM Knowledge Exam. I hadn’t even looked at the syllabus, but tried it anyway. I was very tired, and even fell asleep at one point during it. I’ll find out my result at some point in the future – the Prometric site just says “Tested” at the moment. As I said, it wasn’t something I was expecting to do, but it was good to have something unexpected during the week. Of course it was good to catch up with old friends and make new ones. 
I feel like every time I’m in the US I see things develop a bit more, with more and more people knowing who I am, who my staff are, and recognising the LobsterPot brand. I missed being a presenter, but I definitely enjoyed seeing many friends on the list of presenters. I won’t try to list them, because there are so many these days that people might feel sad if I don’t mention them. For those that I managed to see, I was pleased to see that the majority of them have lifted their presentation skills since I last saw them, and I happily told them as much. One person who I will mention was Paul White, who travelled from New Zealand to his first PASS Summit. He gave two sessions (a regular session and a half-day), packed large rooms of people, and had everyone buzzing with enthusiasm. I spoke to him after the event, and he told me that his expectations were blown away. Paul isn’t normally a fan of crowds, and the thought of 4000 people would have been scary. But he told me he had no idea that people would welcome him so well, be so friendly and so down to earth. He’s seen the significance of the SQL Server community, and says he’ll be back. It’ll be good to see him there. Will you be there too?

    Read the article

  • Identity Globe Trotters (Sep Edition): The Social Customer

    - by Tanu Sood
    Welcome to the inaugural edition of our monthly series - Identity Globe Trotters. Starting today, the last Friday of every month, we will explore regional commentary on Identity Management. We will invite guest contributors from around the world to share their opinions and experiences around Identity Management and highlight regional nuances, specific drivers, solutions and more. Today's feature is contributed by Michael Krebs, Head of Business Development at esentri consulting GmbH, a (SOA) specialized Oracle Gold Partner based in Ettlingen, Germany. In his current role, Krebs is dealing with the latest developments in Enterprise Social Networking and the Integration of Social Media within business processes.

    By Michael Krebs

    The relevance of "easy sign-on" in the age of the "Social Customer"

    With the growth of social networks, the time people spend within those closed "eco-systems" is growing year by year. With social networks looking to integrate search engines, as Facebook announced some weeks ago, their relevance will continue to grow in contrast to the more conventional search engines. This is one of the reasons why the social network accounts of users are becoming more and more like a virtual fingerprint. With the growing relevance of social networks, a simple way for customers to get in touch with, say, customer care or contract departments will be crucial for sales processes in critical markets. Customers want to have one single point of contact and also an easy "login method" with no dedicated usernames, passwords or proprietary accounts. The golden rule in the future social-media-driven markets will be: the lower the complexity of the initial contact, the better a company can profit from social networks. If you, for example, can generate a smart way for an existing customer to use self-service portals, the cost of providing phone support can be lowered significantly.

    Recruiting and Hiring of "Digital Natives"

    Another particular example is "social" recruiting processes. The so-called "digital natives" don't want to type their profile facts and CVs into proprietary systems. Why not use the actual LinkedIn profile? In the German-speaking region, the market in the area of professional social networks is dominated by XING, the equivalent of LinkedIn. A few weeks back, this network also opened up its interfaces for integrating social sign-ons or the usage of profile data for recruiting purposes. In the European (and especially the German) employment market, where the number of young candidates is shrinking because of the low birth rate in the region, it will become essential to use social-media-supported hiring processes to find and on-board the rare talents. In fact, you will see traditional recruiting websites integrated with social hiring to attract the best talents in the market, where the pool of potential candidates has decreased dramatically over the years.

    Identity Management as a key factor in the Customer Experience process

    To create the biggest value for customers and also future employees, companies need to connect their HCM or CRM systems with powerful Identity Management solutions. With the highly efficient Oracle (social & mobile enabling) Identity Management solution, enterprises can combine easy sign-on with secure connections to the backend infrastructure. This combination enables a "one-stop" service with personalized content for customers and talents. In addition, companies can collect valuable data for the enrichment of their CRM data.
    The goal is to enrich the so called "Customer Experience" via all available customer channels and contact points. Those systems have already gained importance in the B2C-markets and will gradually spread out to B2B-channels in the near future.

    Conclusion: Central and "Social" Identity management is key to Customer Experience Management and Talent Management

    For a seamless delivery of "Customer Experience Management" and a modern way of recruiting the best talent, companies need to integrate Social Sign-on capabilities with modern CX- and Talent management infrastructure. This lowers the barrier for existing and future customers or employees to get in touch with sales, support or human resources. Identity management is the technology enabler and backbone for a modern Customer Experience Infrastructure. Oracle Identity management solutions provide the opportunity to secure Social Applications and connect them with modern CX-solutions. At the end, companies benefit from "best of breed" processes and solutions for enriching customer experience without compromising security.

    About esentri: esentri is a provider of enterprise social networking and brings the benefits of social network communication into business environments. As one key strength, esentri uses Oracle Identity Management solutions for delivering Social and Mobile access for Oracle's CRM- and HCM-solutions.

    …..End Guest Post….

    With new and enhanced features optimized to secure the new digital experience, the recently announced Oracle Identity Management 11g Release 2 enables organizations to securely embrace cloud, mobile and social infrastructures and reach new user communities to help further expand and develop their businesses.

    Additional Resources:
      • Oracle Identity Management 11gR2 release
      • Oracle Identity Management website
      • Datasheet: Mobile and Social Access (pdf)
      • IDM at OOW: Focus on Identity Management
      • Facebook: OracleIDM
      • Twitter: OracleIDM

    We look forward to your feedback on this post and welcome your suggestions for topics to cover in Identity Globe Trotters. Last Friday, every month!

    Read the article

  • How You Helped Shape Java EE 7...

    - by reza_rahman
    I have been working with the JCP in various roles since EJB 3/Java EE 5 (much of it on my own time), eventually culminating in my decision to accept my current role at Oracle (despite its inevitable set of unique challenges, a role I find by and large positive and fulfilling). During these years, it has always been clear to me that pretty much everyone in the JCP genuinely cares about openness, feedback and developer participation. Perhaps the most visible sign to date of this high regard for grassroots-level input is a survey on Java EE 7 gathered a few months ago. The survey was designed to get open feedback on a number of critical issues central to the Java EE 7 umbrella specification, including which APIs to include in the standard. When we started the survey, I don't think anyone was certain what the level of participation from developers would really be. I also think everyone was pleasantly surprised that a large number of developers (around 1100) took the time out to vote on these very important issues that could impact their own professional life. And it wasn't just a matter of the quantity of responses. I was particularly impressed with the quality of the comments made through the survey (some of which I'll try to do justice to below). With Java EE 7 under our belt and the horizons for Java EE 8 emerging, this is a good time to thank everyone that took the survey once again for their thoughts and let you know what the impact of your voice actually was. As an aside, you may be happy to know that we are working hard behind the scenes to try to put together a similar survey to help kick off the agenda for Java EE 8 (although this is by no means certain). I'll break things down by the questions asked in the survey, the responses and the resulting change in the specification.

    APIs to Add to Java EE 7 Full/Web Profile

    The first question in the survey asked which of four new candidate APIs (WebSocket, JSON-P, JBatch and JCache) should be added to the Java EE 7 Full and Web Profile respectively. Developers by and large wanted all the new APIs added to the full platform. The comments expressed particularly strong support for WebSocket and JCache. Others expressed dissatisfaction over the lack of a JSON binding (as opposed to JSON processing) API. WebSocket, JSON-P and JBatch are now part of Java EE 7. In addition, the long-awaited Java EE Concurrency Utilities API was also included in the Full Profile. Unfortunately, JCache was not finalized in time for Java EE 7 and the decision was made not to hold up the Java EE release any longer. JCache continues to move forward strongly and will very likely be included in Java EE 8 (it will be available much sooner than Java EE 8 to boot). An emergent standard for JSON-B is also a strong possibility for Java EE 8. When it came to the Web Profile, developers were supportive of adding WebSocket and JSON-P, but not JBatch and JCache. Both WebSocket and JSON-P are now part of the Web Profile, which also includes the already popular JAX-RS API.

    Enabling CDI by Default

    The second question asked whether CDI should be enabled in Java EE by default. The overwhelming majority of developers supported the default enablement of CDI. In addition, developers expressed a desire for better CDI/Java EE alignment (with regard to EJB and JSF in particular). Some developers expressed legitimate concerns over the performance implications of enabling CDI globally as well as the potential conflict with other JSR 330 implementations like Spring and Guice.
    CDI is enabled by default in Java EE 7. Respecting the legitimate concerns, CDI 1.1 was very careful to add additional controls around component scanning. While a lot of work was done in Java EE 6 and Java EE 7 around CDI alignment, further alignment is under serious consideration for Java EE 8.

    Consistent Usage of @Inject

    The third question was around using CDI/JSR 330 @Inject consistently vs. allowing JSRs to create their own injection annotations (e.g. @BatchContext). A majority of developers wanted consistent usage of @Inject. The comments again reflected a strong desire for CDI/Java EE alignment. A lot of emphasis in Java EE 7 was put into using @Inject consistently. For example, the JBatch specification is focused on using @Inject wherever possible. JAX-RS remains an exception with its existing custom injection annotations. However, the JAX-RS specification leads understand the importance of eventual convergence, hopefully in Java EE 8.

    Expanding the Use of @Stereotype

    The fourth question was about expanding CDI @Stereotype to cover annotations across Java EE beyond just CDI. A solid majority of developers supported the idea of making @Stereotype more universal in Java EE. The comments maintained the general theme of strong support for CDI/Java EE alignment. Unfortunately, there was not enough time and resources in Java EE 7 to implement this fairly pervasive feature. However, it remains a serious consideration for Java EE 8.

    Expanding Interceptor Use

    The final set of questions was about expanding interceptors further across Java EE. Developers strongly supported the concept. Along with injection, interceptors are now supported across all Java EE 7 components including Servlets, Filters, Listeners, JAX-WS endpoints, JAX-RS resources, WebSocket endpoints and so on.

    I hope you are encouraged by how your input to the survey helped shape Java EE 7 and continues to shape Java EE 8. Participating in these sorts of surveys is of course just one way of contributing to Java EE. Another great way to stay involved is the Adopt-A-JSR Program. A large number of developers are already participating through their local JUGs. You could of course become a Java EE JSR expert group member or observer. You should stay tuned to The Aquarium for the progress of Java EE 8 JSRs if that's something you want to look into...
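    To make a couple of those outcomes concrete, here is a minimal, hypothetical sketch (my own illustration, not taken from the survey or the specifications) of a Java EE 7 WebSocket endpoint that leans on CDI being enabled by default and on the standard @Inject annotation; the endpoint path and the helper bean are assumptions for the example only:

        import javax.enterprise.context.ApplicationScoped;
        import javax.inject.Inject;
        import javax.websocket.OnMessage;
        import javax.websocket.server.ServerEndpoint;

        // Hypothetical CDI bean; because CDI is enabled by default in Java EE 7,
        // the bean-defining annotation alone makes it discoverable and injectable.
        @ApplicationScoped
        class MessageFormatter {
            String format(String message) {
                return "echo: " + message;
            }
        }

        // WebSocket endpoint (JSR 356, new in Java EE 7) using the standard @Inject
        // annotation rather than a technology-specific injection mechanism.
        @ServerEndpoint("/echo")
        public class EchoEndpoint {

            @Inject
            MessageFormatter formatter;

            @OnMessage
            public String onMessage(String message) {
                // Called for each text frame from the client; the return value is
                // sent back to the client as the response frame.
                return formatter.format(message);
            }
        }

    Deployed to a Java EE 7 container, a client connecting to ws://host/context/echo should get each message echoed back through the injected bean, with no beans.xml or other configuration required for the injection to work.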

    Read the article

  • Reading the tea leaves from Windows Azure support

    - by jamiet
    A few idle thoughts… Three months ago I had an issue regarding Windows Azure where I was unable to log in to the management portal. At the time I contacted Azure support, the issue was soon resolved and I thought no more about it. Until today, that is, when I received an email from Azure support providing a detailed analysis of the root cause, the fix and, moreover, precise details about when and where things occurred. The email itself is interesting and I have included the entirety of it below. A few things were interesting to me:

      • The level of detail and the diligence in investigating and reporting the issue I found really rather impressive. They even outline the number of users that were affected (127, in case you can't be bothered reading). Compare this to the quite pathetic support that another division within Microsoft, Skype, provided to Greg Low recently: Skype support and dead parrot sketches.
      • This line: "Windows Azure performed a planned change from using the Microsoft account service (formerly Windows Live ID) to the Azure Active Directory (AAD) as its primary authentication mechanism on August 24th. This change was made to enable future innovation in the area of authentication – particularly for organizationally owned identities, identity federation, stronger authentication methods and compliance certification." I also found to be particularly interesting. I have long thought that one of the reasons Microsoft has proved to be such a money-making machine in the enterprise is because they provide the infrastructure and then upsell on top of that – and nothing is more infrastructural than Active Directory. It has struck me of late that they are trying to make the same play in the cloud by tying all their services into Azure Active Directory, and here we see a clear indication of that by making AAD the authentication mechanism for anyone using Windows Azure. I get the feeling that we're going to hear much, much more about AAD in the future; isn't it about time we could log on to Windows Azure SQL Database (formerly SQL Azure) without resorting to SQL authentication, for example? And why do Microsoft have two identity providers – Microsoft Account (aka Windows Live ID) and AAD – isn't it about time those things were combined?

    As I said, just some idle thoughts. Below is the transcript of the email if you are interested.

    @Jamiet

    This is regarding the support request <redacted> wherein you were not able to log in to the Windows Azure management portal with Live ID. We are providing you with the summary, root cause analysis and information about the permanent fix:

    Incident Title: You were unable to access Windows Azure Portal after Microsoft Account to Azure Active Directory account Migration.
    Service Impacted: Management Portal
    Incident Start Date and Time: 8/24/2012 4:30:00 PM
    Date and Time Service was Restored: 10/17/2012 12:00:00 AM

    Summary: Windows Azure performed a planned change from using the Microsoft account service (formerly Windows Live ID) to the Azure Active Directory (AAD) as its primary authentication mechanism on August 24th. This change was made to enable future innovation in the area of authentication – particularly for organizationally owned identities, identity federation, stronger authentication methods and compliance certification. While this migration was largely transparent to Windows Azure users, a small number of users whose sign-in names were part of a Windows Live Custom Domain were unable to log in.
    This incompatibility was not discovered during the Quality Assurance testing phase prior to the migration.

    Customer Impact: Customers whose sign-in names were part of a Windows Live Custom Domain were unable to sign in to the Management Portal after ~4:00 p.m. PST on August 24th, 2012. We determined that the issue did impact at least 127 users in 98 of these Windows Live Custom Domains and had a maximum potential impact of 1,110 users in total.

    Root Cause: The root cause of the issue was an incompatibility in the AAD authentication service to handle logins from Microsoft accounts whose sign-in names were part of a Windows Live Custom Domain. This issue was not discovered during the Quality Assurance testing phase prior to the migration from Microsoft Account (MSA) to AAD.

    Mitigations: The issue was mitigated for the majority of affected users by 8:20 a.m. PST on August 25th, 2012 by running some internal scripts to correct many known Windows Live Custom Domains. The remaining affected domains fell into two categories:

      • Windows Live Custom Domains that were not corrected by 8/25/2012. An additional 48 Windows Live Custom Domains were fixed in the weeks following the incident, within 2 business days after the AAD team received an escalation from product support regarding those accounts.
      • Windows Live Custom Domains that were also provisioned in Office365. Some of the affected Windows Live Custom Domains had already been provisioned in AAD because their owners signed up for Office365, which is a service that also uses AAD. In these cases the Azure customers had to work around the issue by renaming their Microsoft Account or using a different Microsoft Account to administer their Azure subscription.

    Permanent Fix: The Azure Active Directory team permanently fixed the issue for all customers on 10/17/2012 in an upgraded release of the AAD service.

    Read the article

  • What Counts for A DBA: Observant

    - by drsql
    When walking up to the building where I work, I can see CCTV cameras placed here and there for monitoring access to the building. We are required to wear authorization badges which could be checked at any time. Do we have enemies? Of course! No one is 100% safe; even if your life is a fairy tale, there is always a witch with an apple waiting to snack you into a thousand years of slumber (or at least so I recollect from elementary school). Even Little Bo Peep had to keep a wary lookout. We nerdy types (or maybe it was just me?) generally learned on the school playground to keep an eye open for unprovoked attack from simpler, but more muscular souls, and take steps to avoid messy confrontations well in advance. After we'd apprehensively negotiated adulthood with varying degrees of success, these skills of watching for danger, and avoiding it, translated quite well to the technical careers so many of us were destined for. And nowhere else is this talent for watching out for irrational malevolence so appropriate as in a career as a production DBA.

    It isn't always active malevolence that the DBA needs to watch out for, but the even scarier quirks of common humanity. A large number of the issues that occur in the enterprise happen just randomly, or even just one time ever in a spurious manner, like the case where a person decided to download the entire MSDN library of software, cross join every non-indexed billion-row table together, and simultaneously stream the HD feed of 5 different sporting events, making network access slow just as the corporate online sale started. The decent DBA team, like the going, gets tough under such circumstances. They spring into action, check all of the sources of active information, observe that the issue is no longer happening, and figure that either it wasn't the database's fault or that the reboot of whatever device on the network fixed the problem. This sort of reactive support is good, and will be the initial reaction of even excellent DBAs, but it is not the end of the story if you really want to know what happened and avoid getting called again when it isn't even your fault.

    When fires start raging within the corporate software forest, the DBA's instinct is to actively find a way to douse the flames and get back to having no one in the company have any idea who they are. Even better for them is to find a way of killing a potential problem while the fires are small, long before they can be classified as raging. The observant DBA will have already been monitoring the server environment for months in advance. Most troubles, such as disk space and security intrusions, can be predicted and dealt with by alerting systems, whereas other trouble can come out of the blue and requires a skill of observing ongoing conditions and noticing inexplicable changes that could signal an emerging problem. You can't automate the DBA, because the bankable skill of a DBA is in detecting the early signs of unexpected problems, and working out how to deal with them before anyone else notices them.

    To achieve this, the DBA will check the situation as it is currently happening, and in many cases is likely to have been the person who submitted the problem to the level 1 support person in the first place, just to let the support team know of impending issues (always well received, I tell you what!). Database and host computer settings, configurations, and even critical data might be profiled and captured for later comparisons.
He’ll use Monitoring tools, built-in, commercial (Not to be too crassly commercial or anything, but there is one such tool is SQL Monitor) and lots of homebrew monitoring tools to monitor for problems and changes in the server environment.   You will know that you have it right when a support call comes in and you can look at your monitoring tools and quickly respond that “response time is well within the normal range, the query that supports the failing interface works perfectly and has actually only been called 67% as often as normal, so I am more than willing to help diagnose the problem, but it isn’t the database server’s fault and is probably a client or networking slowdown causing the interface to be used less frequently than normal.” And that is the best thing for any DBA to observe…

    Read the article

  • Using XA Transactions in Coherence-based Applications

    - by jpurdy
    While the costs of XA transactions are well known (e.g. increased data contention, higher latency, significant disk I/O for logging, availability challenges, etc.), in many cases they are the most attractive option for coordinating logical transactions across multiple resources. There are a few common approaches when integrating Coherence into applications via an application server's transaction manager:
    - Use of Coherence as a read-only cache, applying transactions to the underlying database (or any other system of record) instead of the cache.
    - Use of the TransactionMap interface via the included resource adapter.
    - Use of the new ACID transaction framework, introduced in Coherence 3.6.
    Each of these may have significant drawbacks for certain workloads.
    Using Coherence as a read-only cache is the simplest option. In this approach, the application is responsible for managing both the database and the cache (either within the business logic or via application server hooks). This approach also tends to provide limited benefit for many workloads, particularly those that either have queries (given the complexity of maintaining a fully cached data set in Coherence) or are not read-heavy (where the cost of managing the cache may outweigh the benefits of reading from it). All updates are made synchronously to the database, leaving it as both a source of latency and a potential bottleneck. This approach also prevents addressing "hot data" problems (when certain objects are updated by many concurrent transactions), since most database servers offer no facilities for explicitly controlling concurrent updates. Finally, this option tends to be a better fit for key-based access (rather than filter-based access such as queries), since that makes it easier to aggressively invalidate cache entries without worrying about when they will be reloaded. The advantage of this approach is that it allows strong data consistency as long as optimistic concurrency control is used to ensure that database updates are applied correctly regardless of whether the cache contains stale (or even dirty) data. Another benefit is that it avoids the limitations of Coherence's write-through caching implementation.
    TransactionMap is generally used when Coherence acts as the system of record. TransactionMap is not generally compatible with write-through caching, so it will usually be used either to manage a standalone cache or when the cache is backed by a database via write-behind caching. TransactionMap has some restrictions that may limit its utility, the most significant being:
    - The lock-based concurrency model is relatively inefficient and may introduce significant latency and contention. As an example, in a typical configuration, a transaction that updates 20 cache entries will require roughly 40ms just for lock management (assuming all locks are granted immediately, and excluding validation and writing, which will require a similar amount of time). This may be partially mitigated by denormalizing (e.g. combining a parent object and its set of child objects into a single cache entry), at the cost of increasing false contention (e.g. transactions will conflict even when updating different child objects).
    - If the client (application server JVM) fails during the commit phase, locks will be released immediately and the transaction may be partially committed. In practice, this is usually not as bad as it may sound, since the commit phase is usually very short (all locks having been previously acquired). Note that this vulnerability does not exist when a single NamedCache is used and all updates are confined to a single partition (generally implying the use of partition affinity).
    - The unconventional TransactionMap API is cumbersome but manageable. Only a few methods are transactional, primarily get(), put() and remove().
    The ACID transactions framework (accessed via the Connection class) provides atomicity guarantees by implementing the NamedCache interface, maintaining its own cache data and transaction logs inside a set of private partitioned caches. This feature may be used as either a local transactional resource or as a logging XA resource. However, a lack of database integration precludes the use of this functionality for most applications. A side effect of this is that the feature has not seen significant adoption, meaning that any use of it is subject to the usual headaches associated with being an early adopter (a greater chance of bugs and a greater risk of hitting an unoptimized code path). As a result, for the moment, we generally recommend against using this feature.
    In summary, it is possible to use Coherence in XA-oriented applications, and several customers are doing this successfully, but it is not a core usage model for the product, so care should be taken before committing to this path. For most applications, the most robust solution is normally to use Coherence as a read-only cache of the underlying data resources, even if this prevents taking advantage of certain product features.
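    To make the recommended read-only cache option concrete, here is a minimal sketch of the pattern described above (written in C# purely for illustration; it is not Coherence or its API, and the IRecordStore interface is a hypothetical stand-in for the system of record): reads may serve stale entries from the cache, every update is applied to the database under an optimistic version check, and the corresponding cache entry is invalidated rather than modified in place.
    using System.Collections.Concurrent;

    public interface IRecordStore
    {
        // Load the current value and version from the system of record.
        (string Value, long Version) Load(string key);

        // Apply the update only if the stored version still equals expectedVersion
        // (e.g. UPDATE ... SET ... WHERE Key = @key AND Version = @expectedVersion).
        bool TryUpdate(string key, string newValue, long expectedVersion);
    }

    public class ReadOnlyCache
    {
        private readonly IRecordStore _store;
        private readonly ConcurrentDictionary<string, (string Value, long Version)> _cache = new();

        public ReadOnlyCache(IRecordStore store) { _store = store; }

        // Reads are cheap and may be stale; the version check on write keeps the data correct.
        public string Get(string key) => _cache.GetOrAdd(key, k => _store.Load(k)).Value;

        public bool Update(string key, string newValue, long expectedVersion)
        {
            // The transaction is applied to the database, never to the cache.
            if (!_store.TryUpdate(key, newValue, expectedVersion))
                return false;                  // lost the race; the caller re-reads and retries

            _cache.TryRemove(key, out _);      // invalidate aggressively rather than update in place
            return true;
        }
    }
    A caller that gets false simply re-reads the row (refreshing its view of the version) and retries, which is how stale or dirty cache contents stay harmless in this model.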

    Read the article

  • R12.0 Cash Management Consolidated Patch Collection (CPC) And R12.1 Cash Management Recommended Patch Collection (RPC)

    - by user793553
    If you have Oracle E-Business Suite's Cash Management (CE) application installed, you'll want to be sure to install the latest CPC (Consolidated Patch Collection) if you are using an R12.0 version of the apps, or the latest RPC (Recommended Patch Collection) for the R12.1 version of the apps. These collections give you all the fixes currently available for known issues in the specified versions of the application, including all of the latest Root Cause Analysis Fixes (RCAs)!
    What is an "RPC" (for R12.1 users)? Since the release of 12.1, a number of recommended patches for Oracle Cash Management have been made available as standalone patches to help address important business process issues. Adoption of these patches was highly recommended at the time, but not always implemented, so to further facilitate adoption Oracle consolidated them into product-specific Recommended Patch Collections (RPCs). They were created by Oracle Development with the following goals in mind:
    - Stability: to address data integrity issues that have been identified by Oracle Development and Oracle Software Support as having the potential to interfere with the normal completion of important business processes (such as period close).
    - Root Cause Fixes (RCAs): to make available root cause fixes for known data integrity issues.
    - Compact: to keep the file footprint as small as possible to help facilitate the install process and minimize testing.
    - Granular: to compile the collection of patches based on functional areas, allowing a customer to apply multiple RPCs at once, or in phases (based on individual needs and goals).
    Where to start: ALL R12 Cash Management users (R12.0 and R12.1) should start with the following note on My Oracle Support (MOS): Doc ID 1367845.1, R12: Cash Management Recommended Patch Collections. It's a great place for important implementation information about both sets of critical patch collections!
    For R12.1.x users: R12.1 users should also take a look at the documents below for even more information about the RPC for the R12.1.x versions of the Cash Management application, and other related available RPCs:
    - Doc ID 1489997.1: Master Troubleshooting Guide for CE: Reconciliation & Clearing [VIDEO]
    - Doc ID 954704.1: EBS: R12.1 Oracle Financials Recommended Patch Collections (RPCs)
    - Doc ID 1316506.1: R12: Oracle CE: Upgrading from R11i to R12.1: Latest Recommended Patches
    Patch Wizard Utility: while a patch may contain several hundred files, the impact on your system may actually be minimal. Patches contain hard prerequisites that are intended to make a patch work on a very low code baseline. The Patch Wizard Utility will give you a detailed analysis of the patch's impact on your instance BEFORE it's applied, so you'll know exactly what to expect from the application. Please refer to Doc ID 976188.1 for more information on this important utility.

    Read the article

  • "Mega Menus" for SEO [duplicate]

    - by Thought Space Designs
    This question already has an answer here: "How do I handle having too many links on a webpage because of my menu" (4 answers).
    I'm using the term "Mega Menus" loosely here. I'm redesigning my WordPress site (it's going to be responsive), and as part of the redesign, I was debating incorporating some sort of descriptive menu setup. For example, normal navigation drop down menus come in the form of unordered lists of links like so:
    <nav>
      <ul>
        <li><a href="#">Link1</a></li>
        <li><a href="#">Link2</a></li>
        <li>
          <a href="#">Link3</a>
          <ul>
            <li><a href="#">Sub Link1</a></li>
            <li><a href="#">Sub Link2</a></li>
            <li><a href="#">Sub Link3</a></li>
          </ul>
        </li>
        <li><a href="#">Link4</a></li>
      </ul>
    </nav>
    What I'm looking to do is build my drop down menus with more information than your standard menu. For example, I have a top level link named "Team", and under that link, I want to make a large drop down that contains head shots, headers (in the form of styled p tags) and brief (<100 words) descriptions of each team member (only 2 currently). I want to accompany this with a "Read More" link that takes you to their actual team page. This is just one example, of course, and the other top level links would also have descriptive drop downs in the same fashion. On mobile, I was planning on hiding the "mega menu" and delivering a standard unordered list of links. Here's what I was thinking for overall structure and syntax:
    <nav>
      <ul>
        <li><a href="#">Home</a></li>
        <li><a href="#">About</a></li>
        <li>
          <a href="#">Team</a>
          <ul>
            <!-- DESKTOP -->
            <li class="mega-menu row">
              <a class="col-sm-6" href="#">
                <div class="row">
                  <div class="col-sm-4">
                    <img src="#" alt="Team Member 1" />
                  </div>
                  <div class="col-sm-8">
                    <p class="header">Team Member 1</p>
                    <p>Short description goes here.</p>
                  </div>
                </div>
              </a>
              <a class="col-sm-6" href="#">
                <!-- OTHER TEAM MEMBER INFO -->
              </a>
            </li>
            <!-- END DESKTOP -->
            <!-- MOBILE -->
            <li><a href="#">Team Member 1</a></li>
            <li><a href="#">Team Member 2</a></li>
            <!-- END MOBILE -->
          </ul>
        </li>
        <li><a href="#">Contact</a></li>
      </ul>
    </nav>
    Can anybody think of any potential SEO ramifications of doing this? I'm not going to be loading these menus full of links, so it shouldn't hurt page rank, but what are the effects of having a good bit of text and maybe even forms within nav elements? Is there such a thing as overloading nav with HTML?
    EDIT: Here's an example of what the menu would look like rendered on desktop. I'm currently hovering the "Team" menu, but you can't see because my mouse went away when I took the screenshot.
    EDIT 2: This question is not a duplicate. I'm not going to have "too many" links in my menus. I'm wondering how having images and text inside of header navigation will affect my menus. Also, I don't just want "yes, this is bad" answers. Please cite your sources and be specific with reasoning.

    Read the article

  • Blocking access to websites with objective-C / root privileges in objective-C

    - by kvaruni
    I am writing a program in Objective-C (Xcode 3.2, on Snow Leopard) that is capable of either selectively blocking certain sites for a duration or only allowing certain sites (and thus blocking all others) for a duration. The reasoning behind this program is rather simple. I tend to get distracted when I have full internet access, but I do need internet access during my working hours to get to a number of work-related websites. Clearly, this is not a permanent block; it only helps me to focus whenever I find myself wandering a bit too much. At the moment, I am using a Unix script that is called via AppleScript to obtain Administrator permissions. It then activates a number of ipfw rules and clears those after a specific duration to restore full internet access. Simple and effective, but since I am running as a standard user, it gets cumbersome to enter my administrator password each and every time I want to go "offline". Furthermore, this is a great opportunity to learn to work with Xcode and Objective-C. At the moment, everything works as expected, minus the actual blocking. I can add a number of sites to a list, specify whether or not I want to block or allow these websites, and I can "start" the blocking by specifying a time until which I want to stay "offline". However, I find it hard to obtain clear information on how I can run a privileged Unix command from Objective-C. Ideally, I would like to be able to store information about the Administrator account in the Keychain and use it later on, so that I can simply move into "offline" mode with the convenience of clicking a button. Even more ideally, there might be some class in Objective-C with which I can block access to some/all websites for this particular user without needing to rely on privileged Unix commands. A third possibility is starting this program with root permissions and then reducing the permissions until I need them, but since this is a GUI application that is nested in the menu bar of OS X, the results are rather awkward, and getting it to run each and every time with root permission is no easy task. Anyone who can offer me some pointers or advice? Please, no security warnings; I am fully aware that what I want to do is a potential security threat.

    Read the article

  • NSFetchedResultsController crashing on performFetch: when using a cache

    - by Oliver
    I make use of NSFetchedResultsController to display a bunch of objects, which are sectioned using dates. On a fresh install, it all works perfectly and the objects are displayed in the table view. However, it seems that when the app is relaunched I get a crash. I specify a cache when initialising the NSFetchedResultsController, and when I don't it works perfectly. Here is how I create my NSFetchedResultsController:
    - (NSFetchedResultsController *)results {
        // If we are not nil, stop here
        if (results != nil)
            return results;

        // Create the fetch request, entity and sort descriptors
        NSFetchRequest *fetch = [[NSFetchRequest alloc] init];
        NSEntityDescription *entity = [NSEntityDescription entityForName:@"Event" inManagedObjectContext:self.managedObjectContext];
        NSSortDescriptor *descriptor = [[NSSortDescriptor alloc] initWithKey:@"utc_start" ascending:YES];
        NSArray *descriptors = [[NSArray alloc] initWithObjects:descriptor, nil];

        // Set properties on the fetch
        [fetch setEntity:entity];
        [fetch setSortDescriptors:descriptors];

        // Create a fresh fetched results controller
        NSFetchedResultsController *fetched = [[NSFetchedResultsController alloc] initWithFetchRequest:fetch managedObjectContext:self.managedObjectContext sectionNameKeyPath:@"day" cacheName:@"Events"];
        fetched.delegate = self;
        self.results = fetched;

        // Release objects and return our controller
        [fetched release];
        [fetch release];
        [descriptor release];
        [descriptors release];
        return results;
    }
    These are the messages I get when the app crashes:
    FATAL ERROR: The persistent cache of section information does not match the current configuration. You have illegally mutated the NSFetchedResultsController's fetch request, its predicate, or its sort descriptor without either disabling caching or using +deleteCacheWithName:
    *** Terminating app due to uncaught exception 'NSInternalInconsistencyException', reason: 'FATAL ERROR: The persistent cache of section information does not match the current configuration. You have illegally mutated the NSFetchedResultsController's fetch request, its predicate, or its sort descriptor without either disabling caching or using +deleteCacheWithName:'
    I really have no clue as to why it's saying that, as I don't believe I'm doing anything special that would cause this. The only potential issue is the section header (day), which I construct like this when creating a new object:
    // Set the new format
    [formatter setDateFormat:@"dd MMMM"];
    // Set the day of the event
    [event setValue:[formatter stringFromDate:[event valueForKey:@"utc_start"]] forKey:@"day"];
    Like I mentioned, all of this works fine if there is no cache involved. Any help appreciated!

    Read the article

  • UML assignment question

    - by waitinforatrain
    Hi guys, Sorry, I know this is a very lame question to ask and not of any use to anyone else. I have an assignment in UML due tomorrow and I don't even know the basics (all-nighter ahead!). I'm not looking for a walkthrough, I simply want your opinion on something. The assignment is as follows (you only need to skim over it!):
    =============
    Gourmet Surprise (GS) is a small catering firm with five employees. During a typical weekend, GS caters fifteen events with twenty to fifty people each. The business has grown rapidly over the past year and the owner wants to install a new computer system for managing the ordering and buying process. GS has a set of ten standard menus. When potential customers call, the receptionist describes the menus to them. If the customer decides to book an event (dinner, lunch, picnic, finger food etc.), the receptionist records the customer information (e.g., name, address, phone number, etc.) and the information about the event (e.g., place, date, time, which one of the standard menus, total price) on a contract. The customer is then faxed a copy of the contract and must sign and return it along with a deposit (often by credit card or check) before the event is officially booked. The remaining money is collected when the catering is delivered. Sometimes, the customer wants something special (e.g., birthday cake). In this case, the receptionist takes the information and gives it to the owner who determines the cost; the receptionist then calls the customer back with the price information. Sometimes the customer accepts the price; other times, the customer requests some changes that have to go back to the owner for a new cost estimate. Each week, the owner looks through the events scheduled for that weekend and orders the supplies (e.g., plates) and food (e.g., bread, chicken) needed to make them. The owner would like to use the system for marketing as well. It should be able to track how customers learned about GS, and identify repeat customers, so that GS can mail special offers to them. The owner also wants to track the events for which GS sent a contract, but the customer never signed the contract and never actually booked with GS. Exercise: Create an activity diagram and a use case model (complete with a set of detail use case descriptions) for the above system. Produce an initial domain model (class diagram) based on these descriptions. Elaborate the use cases into sequence diagrams, and include any state diagrams necessary. Finally use the information from these dynamic models to expand the domain model into a full application model.
    =============
    In your opinion, do you think this question is asking me to come up with a package for an online ordering system to replace the system described above, or to create UML diagrams that facilitate the existing telephone-based system?

    Read the article

  • Application Architecture using WCF and System.AddIn

    - by Silverhalide
    A little background -- we're designing an application that uses a client/server architecture consisting of: a server which loads server-side modules, potentially developed by other teams; and a client which loads corresponding client-side modules (also potentially developed by those other teams; each client module corresponds with a server module). The client side communicates with the server side for general coordination, as well as for module-specific tasks. (At this point, I think that means the client talks to the server, and client modules talk to server modules.) The environment is .NET 3.5, and the client side is WPF. The deployment scenario introduces the potential to upgrade the server, any server-side module, the client, and any client-side module independently. However, being able to "work" using mismatched versions is required. I'm therefore concerned about versioning issues. My thinking so far: a Windows Service for the server. Using System.AddIn for the server to load and communicate with the server modules will give us the greatest flexibility in terms of version compatibility between server and server modules. The server and each server module vend WCF services for communication to the client side; communication between the server and a server module, or between two server modules, uses the AddIn contracts. (One advantage of this is that a module can expose a different interface within the server and outside it.) Similarly, the client uses System.AddIn to find, load, and communicate with the client modules. Client communication with client modules is via the AddIn interface; communications from the client and from client modules to the server side are via WCF. For maximum resilience, each module will run in a separate app-domain. In general, the system has modest performance requirements, so marshalling and crossing process boundaries is not expected to be a performance concern. (The performance requirement is basically summed up by: don't get in the way of the other parts of the system not described here.) My questions are around the idea of having two different communication and versioning models to work with, which will be an added burden on our developers. System.AddIn seems quite powerful, but also a little unwieldy. (I'm also unsure of Microsoft's commitment to it in the future.) On the other hand, I'm not thrilled with WCF's versioning capabilities. I have a feeling that it would be possible to implement the System.AddIn view/adapter/contract system within WCF, but being fairly new to both technologies, I would have no idea of where to start. So... Am I on the right track here? Am I doing this the hard way? Are there gotchas I need to be aware of on this road? Thanks.
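    As an aside, the host-side discovery and activation piece of System.AddIn that this design relies on is fairly compact. Below is a rough sketch only: the ClientModuleHostView type and the pipeline path are hypothetical placeholders, and a real MAF deployment still needs the generated view/adapter/contract pipeline assemblies, so treat this as an outline rather than a working pipeline.
    using System;
    using System.AddIn.Hosting;

    // Hypothetical host view of a client module; in a real MAF pipeline this abstract
    // class lives in its own host-view assembly and is bridged to the contract by adapters.
    public abstract class ClientModuleHostView
    {
        public abstract string ModuleName { get; }
        public abstract void Initialize();
    }

    public static class ModuleLoader
    {
        public static void LoadAll(string pipelineRoot)
        {
            // Rebuild the pipeline and add-in caches, then find add-ins matching the host view.
            AddInStore.Update(pipelineRoot);

            foreach (AddInToken token in AddInStore.FindAddIns(typeof(ClientModuleHostView), pipelineRoot))
            {
                // Activating with an AddInSecurityLevel creates a new AppDomain per module,
                // which is the per-module isolation described above.
                ClientModuleHostView module = token.Activate<ClientModuleHostView>(AddInSecurityLevel.FullTrust);
                module.Initialize();
                Console.WriteLine("Loaded add-in: " + module.ModuleName);
            }
        }
    }
    The point of the extra ceremony is that version changes on either side can be absorbed by shipping new adapter assemblies instead of recompiling host or module, which is the main thing System.AddIn offers here beyond plain WCF contracts.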

    Read the article

  • UITableView with background UIImageView hides table controls

    - by Khanzor
    I am having a problem setting the background of a UITableView to a UIImageView (see below for why I am doing this). Once the view is set, it works fine and scrolls with the UITableView, but it hides the elements of the table view. I need to have a UIImageView as the background for a UITableView. I know this has been answered before, but the answers are to use:
    [UIColor colorWithPatternImage:[UIImage imageNamed:@"myImage.png"]];
    Or something like this (which I need to use):
    UIImageView *background = [MainWindow generateBackgroundWithFrame:tableView.bounds];
    [tableView addSubview:background];
    [tableView sendSubviewToBack:background];
    The reason I need to use the latter is because of my generateBackgroundWithFrame method, which takes a large image, draws a border around that image to the dimensions specified, and clips the remainder of the image:
    + (UIImageView *) generateBackgroundWithFrame: (CGRect)frame {
        UIImageView *background = [[[UIImageView alloc] initWithFrame:frame] autorelease];
        background.image = [UIImage imageNamed:@"globalBackground.png"];
        [background.layer setMasksToBounds:YES];
        [background.layer setCornerRadius:10.0];
        [background.layer setBorderColor:[[UIColor grayColor] CGColor]];
        [background.layer setBorderWidth:3.0];
        return background;
    }
    Please note: I understand that this might affect performance poorly, but I don't have the resources to go through and make those images for each potential screen in the app. Please do not answer this question with a mindless "you shouldn't do this" response. I am aware that it is possibly the wrong thing to do. How do I show my UITableView control elements? Is there something that I am doing wrong in my delegate? Here is a simplified version:
    - (UITableViewCell *)tableView:(UITableView *)tableView cellForRowAtIndexPath:(NSIndexPath *)indexPath {
        static NSString *CellIdentifier = @"Cell";
        UITableViewCell *cell = [tableView dequeueReusableCellWithIdentifier:CellIdentifier];
        if (cell == nil) {
            cell = [[[UITableViewCell alloc] initWithFrame:CGRectMake(20, 20, 261, 45) reuseIdentifier:CellIdentifier] autorelease];
            cell.accessoryType = UITableViewCellAccessoryDetailDisclosureButton;

            UIImage *rowBackground;
            NSString *imageName = @"standAloneTVButton.png";
            rowBackground = [UIImage imageNamed:imageName];

            UITextView *textView = [[UITextView alloc] initWithFrame:CGRectMake(0, 5, 300, 200)];
            textView.backgroundColor = [UIColor clearColor];
            textView.textColor = [UIColor blackColor];
            textView.font = [UIFont fontWithName:@"Helvetica" size:18.0f];

            Purchase *purchase = [[PurchaseModel productsPurchased] objectAtIndex:indexPath.row];
            textView.text = [purchase Title];
            selectedTextView.text = textView.text;

            UIImageView *normalBackground = [[[UIImageView alloc] init] autorelease];
            normalBackground.image = rowBackground;
            [normalBackground insertSubview:textView atIndex:0];
            cell.backgroundView = normalBackground;

            [textView release];
        }
        return cell;
    }

    Read the article

  • Revisiting .NET, but what should I focus on?

    - by Wayne M
    After about a two-year hiatus, I'm brushing up on my .NET skills to find a .NET job (my previous two positions have involved very little development, or development using legacy technologies, so apart from a few very minor apps I have not touched .NET in close to two years). I'm aware of things like ASP.NET MVC, and I have previously read up on things like NHibernate and DI/IoC, albeit I have yet to use them apart from very trivial "Hello World" type applications. I have a subscription to Rob Conery's Tekpub website and occasionally watch these videos when I have free time. My concern is this: I don't live in a very technical area. I would be surprised if any but the most tech-savvy companies have heard of, let alone use, ASP.NET MVC, NHibernate (or even LINQ/EF), or know about IoC. I would be willing to bet a large sum of money that 95% of the possible jobs I could obtain will use the following:
    - Visual SourceSafe, if any VCS at all
    - ASP.NET 2.0 Webforms (3.5 if lucky)
    - Raw ADO.NET on top of a very thin implementation of the Gateway pattern
    - Stored procedures in the database for most CRUD operations
    - Gratuitous use of code-behind, with a Service layer if I'm lucky
    If I were extremely lucky, I might find a shop that has heard of ORMs and either uses one or has written their own data abstraction. Also if I were lucky, the company would be using Model-View-Presenter. In light of this I'm not sure what I should focus on learning. Personally, I would prefer to be using the latest stuff - ASP.NET MVC, NHibernate, jQuery, WCF etc. Reality says I should go back to the basics, since it looks like most potential opportunities aren't going to be anywhere near the cutting edge, or anywhere close to it. And, as much as I would like to find a position and start to show the other developers the benefits, in my past experience this has usually resulted in my being fired for "not being a team player" and doing things the bad old way. So, I am curious how you would approach a situation like this? What should I focus on, in order to A) reacquaint myself with .NET, and B) prepare myself to obtain a .NET job again that is more than likely going to use techniques that I and most other knowledgeable developers will scoff at?

    Read the article

  • C# Silverlight - Delay Child Window Load?!

    - by Goober
    The Scenario: Currently I have a C# Silverlight application that uses the domain service class and the ADO.NET Entity Framework to communicate with my database. I want to load a child window upon clicking a button, with some data that I retrieve from a server-side query to the database.
    The Process: The first part of this process involves two load operations to load separate data from 2 tables. The next part of the process involves combining those lists of data to display in a listbox.
    The Problem: The problem with this is that the first two asynchronous load operations haven't returned the data by the time the section of code to combine these lists of data is reached, thus resulting in a null value exception.
    Initial load operations to get the data:
    public void LoadAudits(Guid jobID)
    {
        var context = new InmZenDomainContext();
        var imageLoadOperation = context.Load(context.GetImageByIDQuery(jobID));
        imageLoadOperation.Completed += (sender3, e3) =>
        {
            imageList = ((LoadOperation<InmZen.Web.Image>)sender3).Entities.ToList();
        };
        var auditLoadOperation = context.Load(context.GetAuditByJobIDQuery(jobID));
        auditLoadOperation.Completed += (sender2, e2) =>
        {
            auditList = ((LoadOperation<Audit>)sender2).Entities.ToList();
        };
    }
    I then want to execute this immediately:
    IEnumerable<JobImageAudit> jobImageAuditList =
        from a in auditList
        join ai in imageList on a.ImageID equals ai.ImageID
        select new JobImageAudit
        {
            JobID = a.JobID,
            ImageID = a.ImageID.Value,
            CreatedBy = a.CreatedBy,
            CreatedDate = a.CreatedDate,
            Comment = a.Comment,
            LowResUrl = ai.LowResUrl,
        };
    auditTrailList.ItemsSource = jobImageAuditList;
    However I can't, because the async calls haven't returned with the data yet. Thus I have to do this (perform the load operations, then press a button on the child window to execute the list concatenation and binding):
    private void LoadAuditsButton_Click(object sender, RoutedEventArgs e)
    {
        IEnumerable<JobImageAudit> jobImageAuditList =
            from a in auditList
            join ai in imageList on a.ImageID equals ai.ImageID
            select new JobImageAudit
            {
                JobID = a.JobID,
                ImageID = a.ImageID.Value,
                CreatedBy = a.CreatedBy,
                CreatedDate = a.CreatedDate,
                Comment = a.Comment,
                LowResUrl = ai.LowResUrl,
            };
        auditTrailList.ItemsSource = jobImageAuditList;
    }
    Potential ideas for solutions: delay the child window displaying somehow? Potentially use DomainDataSource and the Activity Load control?! Any thoughts, help, solutions, samples, comments etc. greatly appreciated.
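    One possible shape for a fix, shown only as a sketch that reuses the names from the question's own code (so every type here is an assumption taken from it), is to count the outstanding load operations and run the join exactly once, when the second Completed callback fires; at that point the child window can be shown or its list bound without any button click:
    private int pendingLoads;

    public void LoadAudits(Guid jobID)
    {
        var context = new InmZenDomainContext();
        pendingLoads = 2;

        var imageLoadOperation = context.Load(context.GetImageByIDQuery(jobID));
        imageLoadOperation.Completed += (s, e) =>
        {
            imageList = ((LoadOperation<InmZen.Web.Image>)s).Entities.ToList();
            LoadCompleted();
        };

        var auditLoadOperation = context.Load(context.GetAuditByJobIDQuery(jobID));
        auditLoadOperation.Completed += (s, e) =>
        {
            auditList = ((LoadOperation<Audit>)s).Entities.ToList();
            LoadCompleted();
        };
    }

    private void LoadCompleted()
    {
        // Completed is normally raised back on the UI thread, so a plain counter is enough for this sketch.
        if (--pendingLoads > 0)
            return;

        var jobImageAuditList =
            from a in auditList
            join ai in imageList on a.ImageID equals ai.ImageID
            select new JobImageAudit
            {
                JobID = a.JobID,
                ImageID = a.ImageID.Value,
                CreatedBy = a.CreatedBy,
                CreatedDate = a.CreatedDate,
                Comment = a.Comment,
                LowResUrl = ai.LowResUrl,
            };

        auditTrailList.ItemsSource = jobImageAuditList;
    }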

    Read the article

  • Why is code quality not popular?

    - by Peter Kofler
    I like my code being in order, i.e. properly formatted, readable, designed, tested, checked for bugs, etc. In fact I am fanatic about it. (Maybe even more than fanatic...) But in my experience, actions helping code quality are hardly implemented. (By code quality I mean the quality of the code you produce day to day. The whole topic of software quality, with development processes and such, is much broader and not the scope of this question.) Code quality does not seem popular. Some examples from my experience include: Probably every Java developer knows JUnit, and almost all languages implement xUnit frameworks, but in all companies I know, only very few proper unit tests existed (if at all). I know that it's not always possible to write unit tests due to technical limitations or pressing deadlines, but in the cases I saw, unit testing would have been an option. If a developer wanted to write some tests for his/her new code, he/she could do so. My conclusion is that developers do not want to write tests. Static code analysis is often played around with in small projects, but not really used to enforce coding conventions or find possible errors in enterprise projects. Usually even compiler warnings like potential null pointer access are ignored. Conference speakers and magazines would talk a lot about EJB 3.1, OSGi, Cloud and other new technologies, but hardly about new testing technologies or tools, new static code analysis approaches (e.g. SAT solving), development processes helping to maintain higher quality, how some nasty beast of legacy code was brought under test, ... (I did not attend many conferences and it probably looks different for conferences on agile topics, as unit testing and CI and such have a higher value there.) So why is code quality so unpopular/considered boring? EDIT: Thank you for your answers. Most of them concern unit testing (and this has been discussed in a related question). But there are lots of other things that can be used to keep code quality high (see related question). Even if you are not able to use unit tests, you could use a daily build, add some static code analysis to your IDE or development process, try pair programming or enforce reviews of critical code.

    Read the article

  • Problem with waveOutWrite and waveOutGetPosition deadlock

    - by MusiGenesis
    I'm working on an app that plays audio continuously using the waveOut... API from winmm.dll. The app uses "leapfrog" buffers, which are basically a bunch of arrays of samples that you dump into the audio queue. Windows plays them seamlessly in sequence, and as each buffer completes Windows calls a callback function. Inside this function, I load the next set of samples into the buffer, process them however, and then dump the buffer back into the audio queue. In this way, the audio plays indefinitely. For animation purposes, I'm trying to incorporate waveOutGetPosition into the application (since the "buffer done" callbacks are irregular enough to cause jerky animation). waveOutGetPosition returns the current position of playback, so it's hyper-precise. The problem is that in my application, making calls to waveOutGetPosition eventually causes the application to lock up - the sound stops and the call never returns. I've boiled things down to a simple app that demonstrates the problem. You can run the app here: http://www.musigenesis.com/SO/waveOut%20demo.exe If you just hear a tiny bit of piano over and over, it's working. It's just meant to demonstrate the problem. The source code for this project is here: http://www.musigenesis.com/SO/WaveOutDemo.zip The first button runs the app in leapfrog mode without making the calls to waveOutGetPosition. If you click this, the app will play forever without breaking (the X button will close it and shut it off). The second button starts the leapfrogger and also starts a forms timer that calls the waveOutGetPosition and displays the current position. Click this and the app will run for a short while and then lock up. On my laptop, it usually locks up in 15-30 seconds; at most it's taken a minute. I have no idea how to fix this, so any help or suggestions would be most welcome. I've found very few posts on this issue, but it seems that there is a potential deadlock, either from multiple calls to waveOutGetPosition or from calls to that and waveOutWrite that occur at the same time. It's possible that I'm calling this too frequently for the system to handle.
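    Without seeing the full source it is hard to be definitive, but one common mitigation, assuming the buffer refill is moved out of the driver callback onto a worker thread or queue (MSDN warns against calling other waveform functions from inside the callback), is to funnel every winmm call through a single lock so waveOutWrite and waveOutGetPosition can never run concurrently. A rough sketch with simplified P/Invoke declarations follows; the struct layout and constants are standard winmm definitions, but the surrounding class is purely illustrative:
    using System;
    using System.Runtime.InteropServices;

    static class WaveOutGuard
    {
        // MMTIME: wType plus enough space to cover the native union (12 bytes in total).
        [StructLayout(LayoutKind.Sequential)]
        public struct MMTIME
        {
            public uint wType;      // set to TIME_BYTES before the call
            public uint cb;         // byte position when wType == TIME_BYTES
            public uint reserved;   // padding covering the rest of the native union
        }

        const uint TIME_BYTES = 0x0004;

        [DllImport("winmm.dll")]
        static extern int waveOutGetPosition(IntPtr hWaveOut, ref MMTIME pmmt, int cbmmt);

        // Take this same lock around waveOutWrite / waveOutUnprepareHeader in the playback code.
        public static readonly object WinmmLock = new object();

        public static uint GetPositionBytes(IntPtr hWaveOut)
        {
            lock (WinmmLock)
            {
                var mmt = new MMTIME { wType = TIME_BYTES };
                waveOutGetPosition(hWaveOut, ref mmt, Marshal.SizeOf(typeof(MMTIME)));
                return mmt.cb;
            }
        }
    }
    If the hang persists even with every call serialized on the application side, the next thing to try is throttling the animation timer; the position only needs to be sampled a few dozen times per second for smooth animation, which also reduces the window for contention inside the driver.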

    Read the article

  • Featureful commercial text editors?

    - by wrp
    I'm willing to buy tools if they add genuine value over a FOSS equivalent. One thing I wouldn't mind having is an editor with the power of Emacs, but made more user-friendly. There seem to be several commercial editors out there, but I can't find much discussion of them online. Maybe it's because the kind of people who use commercial software don't have time to do much blogging. ;-) If you have used any, what was your evaluation? I'd especially like to hear how you would compare them to Emacs. I'm thinking of editors like VEDIT, Boxer, Crisp, UltraEdit, SlickEdit, etc. To get things started, I tried EditPad Pro because I needed something on a Win98SE box. I was attracted by its powerful support for regexps, but I didn't use it for long. One annoyance was that find-in-files was only available in a separate product you had to buy. The main problem, though, was stability. It sometimes hung and I lost a few files because it corrupted them while editing. After a couple weeks, I found that I was avoiding using it, so I just uninstalled. Edit: Ah...I need to remove some ambiguity. With reference to Emacs, "power" often means its potential for customization. This malleability comes from having an architecture in which most of the functionality is written in a scripting language that runs on a compiled core. Emacs (with elisp) is by far the most widely known such system among home users, but there have been other heavily used editors such as Freemacs (MINT), JED (S-Lang), XEDIT (Rexx), ADAM (TPU), and SlickEdit (Slick-C). In this case, by "power" I'm not referring to extensibility but to realized features. There are three main areas in which I think a commercial text editor might be an improvement over Emacs. Stability: the only apps I regularly use on Linux that give me flaky behavior are Emacs, Gedit, and Geany. On Windows, I like the look and features of Notepad++, but I find it extremely unstable, especially if I try to use the plugins. Whatever I happen to be doing, I'm using some text editor practically all day long. If I could switch to an editor that never gave me problems, it would definitely lower my stress level. Tools: when I started using Emacs, I searched the manual cover to cover to glean ideas for clever, useful things I could do with it. I'd like to see lots of useful features for editing code, based on detailed knowledge of what the system can do and the accumulated feedback of users. Polish: the rule of threes goes that if you develop something for yourself, it's three times harder to make it usable in-house, and three times harder again to make it a viable product for sale. It's understandable, but free software development doesn't seem to benefit from much usability testing. BTW, texteditors.org is a fantastic resource for researching text editors.

    Read the article
