Search Results


  • Where’s my MD.050?

    - by Dave Burke
    A question that I’m sometimes asked is “where’s my MD.050 in OUM?” For those not familiar with an MD.050, it serves the purpose of being a Functional Design Document (FDD) in one of Oracle’s legacy Methods. Functional Design Documents have existed for many years, with their primary purpose being to describe the functional aspects of one or more components of an IT system, typically a Custom Extension of some sort.

    So why don’t we have a direct replacement for the MD.050/FDD in OUM? In simple terms, the disadvantage of the MD.050/FDD approach is that it tends to lead practitioners into “Design mode” too early in the process, whereas OUM encourages more emphasis on gathering and describing the functional requirements of a system ahead of the formal Analysis and Design process.

    So that just means more work up front for the Business Analyst or Functional Consultant, right? Well, no. The design of a solution, particularly when it involves a complex custom extension, does not necessarily take longer just because you put more thought into the functional requirements. In fact, one could argue the complete opposite: by putting more emphasis on clearly understanding the nuances of the functional requirements early in the process, the overall time and cost incurred during the Analysis to Design process should be less. In short, as your understanding of requirements matures over time, it is far easier (and more cost effective) to update a document or a diagram than to change lines of code.

    So how does that translate into Tasks and Work Products in OUM? Let us assume you have reached a point on a project where a Custom Extension is needed. One of the first things you should consider doing is creating a Use Case, and remember, a Use Case could be as simple as a few lines of text reflecting a “User Story”, or it could be what Cockburn (1) describes as a “fully dressed Use Case”. It is worth mentioning at this point the highly scalable nature of OUM, in the sense that “documents” should not be produced just because that is the way we have always done things. Some projects may well be predicated upon a base of electronic documents, whilst other projects may take a much more Agile approach to describing functional requirements; through “User Stories” perhaps.

    In any event, it is quite common for a Custom Extension to involve the creation of several “components”, i.e. some new screens, an interface, a report, etc. Therefore several Use Cases might be required, which in turn can be assembled into a Use Case Package. Once you have the Use Cases attributed to an appropriate (fit-for-purpose) level of detail, and assembled into a Package, you can create an Analysis Model for the Package. An Analysis Model is conceptual in nature and, depending on the solution being developed, would involve the creation of one or more diagrams (i.e. Sequence Diagrams, Collaboration Diagrams, etc.) which collectively describe the Data, Behavior and User Interface requirements of the solution. If required, the various elements of the Analysis Model may be indexed via an Analysis Specification. For Custom Extension projects that follow a pure Object Oriented approach, the Analysis Model will naturally support the development of the Design Model without any further artifacts. However, for projects that are transitioning to this approach, the various elements of the Analysis Model may be represented within the Analysis Specification.

    If we now return to the original question of “Where’s my MD.050?”, the full answer would be:
    - Capture the functional requirements within a Use Case
    - Group related Use Cases into a Package
    - Create an Analysis Model for each Package
    - Consider creating an Analysis Specification (AN.100) as an index to each Analysis Model artifact

    An alternative answer for a relatively simple Custom Extension would be:
    - Capture the functional requirements within a Use Case
    - Optionally, group related Use Cases into a Package
    - Create an Analysis Specification (AN.100) for each Package

    (1) Cockburn, A, 2000, Writing Effective Use Cases, Addison-Wesley Professional; Edition 1

    Read the article

  • SQL Server – SafePeak “Logon Trigger” Feature for Managing Data Access

    - by pinaldave
    Lately I received an interesting question about the abilities of SafePeak for SQL Server acceleration software:

    Q: “I would like to use SafePeak to make my CRM application faster. It is an application we bought from a vendor; after a while it became slow and we can’t reprogram it. SafePeak automated caching sounds like an easy and good solution for us. But in my application there are many servers and various other application services that address its main database, and some even change data, and I feel there is a chance we may miss some of those connections. Is there a way to ensure that SafePeak will be aware of all connections to the SQL Server, so its cache will remain intact?”

    Interesting question, as I remember that SafePeak (http://www.safepeak.com/Product/SafePeak-Overview) likes all traffic to the database to go through it. I decided to check out the features of SafePeak’s latest version (2.1) and seek an answer there.

    A: Indeed, I found SafePeak has a feature called “Logon Trigger” that is designed for exactly that purpose. It is located in the user interface under: Settings -> SQL instances management -> [your instance] -> [Logon Trigger] tab. From here you activate/deactivate it and control a white-list of server IPs and login names that SafePeak will ignore.

    After activation of the “logon trigger”, the SafePeak server is notified by the SQL Server itself on each newly opened connection. SafePeak monitors those connections and decides whether there is something to do with them or not. On a typical installation SafePeak likes all application and user connections to go via SafePeak – this way it knows about data and schema updates immediately (in real time). With activation of the SafePeak “logon trigger”, a special CLR trigger is deployed on the SQL Server that notifies SafePeak of any connection that has not arrived via SafePeak. In such cases SafePeak can act to clear and lock the cache, or to ignore the connection. This feature ensures SafePeak is aware of all connections, so the SafePeak cache remains exactly correct at all times. So even if a user, such as a DBA, connects to the SQL Server not via SafePeak, SafePeak will know about it and take action. The notification does not impact the work of that connection; the user or application can still continue to do whatever they planned to do.

    Note: I found that activation of the logon trigger in SafePeak requires that the SafePeak SQL login have the following permissions: 1) CONTROL SERVER; 2) VIEW SERVER STATE; and 3) the SQL Server instance must be CLR enabled. (A small scripted illustration of these prerequisites appears at the end of this post.)

    Seeing SafePeak in action, I can say SafePeak is a fantastic resource for those who seek better performance for SQL Server critical apps. SafePeak promises to accelerate SQL Server applications in just several hours of installation, automatic learning and some optimization configuration (no code changes!). If better application and database performance means better business to you, I suggest you download and try SafePeak. The SafePeak solution is indeed unique, and the questions I receive are very interesting. Have any more questions on SafePeak? Please leave your question as a comment and I will try to get an answer for you.

    Reference: Pinal Dave (http://blog.sqlauthority.com)

    Filed under: PostADay, SQL, SQL Authority, SQL Performance, SQL Query, SQL Server, SQL Tips and Tricks, T SQL, Technology
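    Addendum: here is a minimal sketch, of my own and not from SafePeak’s documentation, of how the three prerequisites in the note above could be applied from Java over JDBC. The connection string and the login name safepeak_login are hypothetical placeholders; the T-SQL statements themselves are standard, but verify the exact requirements against SafePeak’s docs before use.

        import java.sql.Connection;
        import java.sql.DriverManager;
        import java.sql.Statement;

        public class SafePeakPrereqs {
            public static void main(String[] args) throws Exception {
                // Hypothetical connection to the master database as a sysadmin.
                String url = "jdbc:sqlserver://localhost:1433;databaseName=master;user=sa;password=...";
                try (Connection con = DriverManager.getConnection(url);
                     Statement st = con.createStatement()) {
                    // 1) and 2) Grant the two server-level permissions the note lists.
                    st.execute("GRANT CONTROL SERVER TO safepeak_login");
                    st.execute("GRANT VIEW SERVER STATE TO safepeak_login");
                    // 3) Enable CLR integration on the instance.
                    st.execute("EXEC sp_configure 'clr enabled', 1");
                    st.execute("RECONFIGURE");
                }
            }
        }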

    Read the article

  • SQL SERVER – Installing SQL Server Data Tools and SSRS

    - by Pinal Dave
    This example is from Beginning SSRS by Kathi Kellenberger. Supporting files are available with a free download from the www.Joes2Pros.com web site.

    If you have installed SQL Server, but are missing the Data Tools or Reporting Services:

    1. Double-click the SQL Server 2012 installation media. Click the Installation link on the left to view the Installation options.
    2. Click the top link, New SQL Server stand-alone installation or add features to an existing installation.
    3. Follow the SQL Server Setup wizard until you get to the Installation Type screen. At that screen, select Add features to an existing instance of SQL Server 2012.
    4. Click Next to move to the Feature Selection page. Select Reporting Services – Native and SQL Server Data Tools. If the Management Tools have not been installed, go ahead and choose them as well.
    5. Continue through the wizard and reboot the computer at the end of the installation if instructed to do so.

    Configure Reporting Services

    If you installed Reporting Services during the installation of the SQL Server instance, SSRS will be configured automatically for you. If you install SSRS later, then you will have to configure it as a subsequent step.

    1. Click Start > All Programs > Microsoft SQL Server 2012 > Configuration Tools > Reporting Services Configuration Manager, then click Connect on the Reporting Services Configuration Connection dialog box.
    2. On the left-hand side of the Reporting Services Configuration Manager, click Database. Click the Change Database button on the right side of the screen.
    3. Select Create a new report server database and click Next. Click through the rest of the wizard accepting the defaults. This wizard creates two databases: ReportServer, used to store report definitions and security, and ReportServerTempDB, which is used as scratch space when preparing reports for user requests.
    4. Now click Web Service URL on the left-hand side of the Reporting Services Configuration Manager. Click the Apply button to accept the defaults. If the Apply button is grayed out, move on to the next step. This step sets up the SSRS web service: the program that runs in the background and communicates between the web page, which you will set up next, and the databases.
    5. The final configuration step is to select the Report Manager URL link on the left. Accept the default settings and click Apply. If the Apply button was already grayed out, this means SSRS was already configured. This step sets up the Report Manager web site where you will publish reports.

    You may be wondering if you also must install a web server on your computer. SQL Server does not require that Internet Information Services (IIS), the Microsoft web server, be installed to run Report Manager. Click Exit to dismiss the Reporting Services Configuration Manager dialog box.

    Tomorrow’s Post

    Tomorrow’s blog post will show how to create your first report using the Report Wizard. If you want to learn SSRS in simple words, I strongly recommend you get the Beginning SSRS book from Joes 2 Pros.

    Reference: Pinal Dave (http://blog.sqlauthority.com)

    Filed under: PostADay, SQL, SQL Authority, SQL Query, SQL Server, SQL Tips and Tricks, T SQL Tagged: Reporting Services, SSRS

    Read the article

  • Patching and PCI Compliance

    - by Joel Weise
    One of my friends and a master of the security universe, Darren Moffat, pointed me to Dan Anderson's blog the other day. Dan went to Toorcon, a security conference, where he attended a talk on security patching titled "Stop Patching, for Stronger PCI Compliance". I realize that speakers often use a headline-grabbing title to create interest in their talk, and this one certainly got my attention. I did not go to the conference and did not see the presentation, so I can only go by what is in the Toorcon agenda summary and on Dan's blog, but the general statement to stop patching for stronger PCI compliance seems a bit misleading to me. Clearly patching is important to all systems management and should be a part of any organization's security hygiene. Further, PCI does require the patching of systems to maintain compliance. So it's important to mention that organizations should not simply stop patching their systems; and I want to believe that was not the speaker's intent.

    So let's look at PCI requirement 6: "Unscrupulous individuals use security vulnerabilities to gain privileged access to systems. Many of these vulnerabilities are fixed by vendor-provided security patches, which must be installed by the entities that manage the systems. All critical systems must have the most recently released, appropriate software patches to protect against exploitation and compromise of cardholder data by malicious individuals and malicious software."

    Notice the word "appropriate" in the requirement. This is stated to give organizations some latitude to apply patches that make sense in their environment and that target the vulnerabilities in question. Haven't we all seen a vulnerability scanner throw a false positive, flag some module and point to a recommended patch, only to realize that the module doesn't exist on our system? Applying such a patch would obviously not be appropriate. This does not mean an organization can ignore the fact they need to apply security patches. It's pretty clear they must. Of course, organizations have other options in terms of compliance when it comes to patching. For example, they could remove a system from scope and make sure that system does not process or contain cardholder data. [This may or may not be a significant undertaking. I just wanted to point out that there are always options available.]

    PCI DSS requirement 6.1 also includes the following note: "Note: An organization may consider applying a risk-based approach to prioritize their patch installations. For example, by prioritizing critical infrastructure (for example, public-facing devices and systems, databases) higher than less-critical internal devices, to ensure high-priority systems and devices are addressed within one month, and addressing less critical devices and systems within three months."

    Notice there is no mention of stopping patching one's systems. And the note also states an organization may apply a risk-based approach [a smart approach, but also not mandated]. Such a risk-based approach is not intended to remove the requirement to patch one's systems. It is meant, as stated, to allow one to prioritize patch installations.

    So what does this mean to an organization that must comply with PCI DSS and maintain some sanity around their patch management and overall operational readiness? I for one like to think that most organizations take a common-sense and balanced approach to their business and security posture. If patching is becoming an unbearable task, review why that is the case and possibly look for means to improve operational efficiencies; but also recognize that security is important to maintaining the availability and integrity of one's systems. Likewise, whether we like it or not, the cyber-world we live in is getting more complex and threatening, and I don't think it's going to get better any time soon.

    Read the article

  • Stretch in multiple components using af:popup, af:region, af:panelTabbed

    - by Arvinder Singh
    Case study: I have a pop-up (dialog) that contains a region (separate taskflow) showing a tab. The contents of this tab are in a region with a separate taskflow. The jsff page of this taskflow contains a panelSplitter, which in turn contains a table. In short, the components are:

    pop-up (dialog) --> region (separate taskflow) --> tab --> region (separate taskflow) --> panelSplitter --> table

    At times the tab is not displayed with 100% width, or the table in the panelSplitter is not 100% visible, or the splitter is not visible. Maintaining the stretch for all the components is difficult... not any more! Below is the solution that you can make use of in many similar scenarios. I am mentioning the major code snippets affecting the stretch and alignment.

    pop-up:

        <af:popup>
          <af:dialog id="d2" type="none" title="" inlineStyle="width:1200px">
            <af:region value="#{bindings.PriceChangePopupFlow1.regionModel}" id="r1"/>
          </af:dialog>

    The above region is a jsff containing multiple tabs. I am showing code for a single tab. I kept the tab in a panelStretchLayout:

        <af:panelStretchLayout id="psl1" topHeight="300px" styleClass="AFStretchWidth">
          <af:panelTabbed id="pt1">
            <af:showDetailItem text="PO Details" id="sdi1" stretchChildren="first">
              <af:region value="#{bindings.PriceChangePurchaseOrderFlow1.regionModel}" id="r1"
                         binding="#{pageFlowScope.priceChangePopupBean.poDetailsRegion}"/>

    This region displays a .jsff containing a table in a panelSplitter:

        <af:panelSplitter id="ps1" orientation="horizontal" splitterPosition="700">
          <f:facet name="first">
            <af:panelHeader text="PurchaseOrder" id="ph1">
              <af:table id="md1" rows="#{bindings.PurchaseOrderVO.rangeSize}"

    That's it! We're done. Note the stretchChildren="first" attribute in the af:showDetailItem. That does the trick for us. The Oracle docs say the following about stretchChildren:

    Valid Values: none, first
    The stretching behavior for children. Acceptable values include:
    - "none": does not attempt to stretch any children (the default value and the value you need to use if you have more than a single child; also the value you need to use if the child does not support being stretched)
    - "first": stretches the first child (not to be used if you have multiple children, as such usage will produce unreliable results; also not to be used if the child does not support being stretched)

    Read the article

  • MySQL Cluster 7.3: On-Demand Webinar and Q&A Available

    - by Mat Keep
    The on-demand webinar for the MySQL Cluster 7.3 Development Release is now available. You can learn more about the design and implementation of, and getting started with, all of the new MySQL Cluster 7.3 features from the comfort and convenience of your own device, including:

    - Foreign Key constraints in MySQL Cluster
    - Node.js NoSQL API
    - Auto-installation of higher performance, distributed clusters

    We received some great questions over the course of the webinar, and I wanted to share those for the benefit of a broader audience.

    Q. What Foreign Key actions are supported?
    A. The core referential actions defined in the SQL:2003 standard are implemented: CASCADE, RESTRICT, NO ACTION, SET NULL. (See the sketch at the end of this post.)

    Q. Where are Foreign Keys implemented, i.e. data nodes or SQL nodes?
    A. They are implemented in the data nodes, and therefore can be enforced for both the SQL and NoSQL APIs.

    Q. Are they compatible with the InnoDB Foreign Key implementation?
    A. Yes, with the following exceptions:
    - InnoDB doesn't support "No Action" constraints; MySQL Cluster does.
    - You can choose to suspend FK constraint enforcement with InnoDB using the FOREIGN_KEY_CHECKS parameter; at the moment, MySQL Cluster ignores that parameter.
    - You cannot set up FKs between 2 tables where one is stored using MySQL Cluster and the other InnoDB.
    - You cannot change primary keys through the NDB API, which means that the MySQL Server actually has to simulate such operations by deleting and re-adding the row. If the PK in the parent table has a FK constraint on it, then this causes non-ideal behaviour. With Restrict or No Action constraints, the change will result in an error. With Cascaded constraints, you'd want the rows in the child table to be updated with the new FK value but, the implicit delete of the row from the parent table would remove the associated rows from the child table, and the subsequent implicit insert into the parent wouldn't reinstate the child rows. For this reason, an attempt to add an ON UPDATE CASCADE where the parent column is a primary key will be rejected.

    Q. Does adding or dropping Foreign Keys cause downtime due to a schema change?
    A. No, this is an online operation. MySQL Cluster supports a number of online schema changes, e.g. adding and dropping indexes, adding columns, etc.

    Q. Where can I see an example of node.js with MySQL Cluster?
    A. Check out the tutorial and download the code from GitHub.

    Q. Can I use the auto-installer to support remote deployments? How about setting up MySQL Cluster 7.2?
    A. Yes to both!

    Q. Can I get a demo?
    A. Check out the tutorial. You can download the code from http://labs.mysql.com/ – go to the Select Build drop-down box.

    Q. What is the minimum internet speed required for a geo-distributed cluster with synchronous replication?
    A. If you're splitting your cluster between sites, then we recommend a network latency of 20ms or less. Alternatively, use MySQL asynchronous replication, where the latency of your WAN doesn't impact the latency of your reads/writes.

    Q. Where can one learn more about the PayPal project with MySQL Cluster?
    A. Take a look at the following - you'll find press coverage, a video and slides from their keynote presentation.

    So, if you want to learn more, listen to the new MySQL Cluster 7.3 on-demand webinar. MySQL Cluster 7.3 is still in the development phase, so it would be great to get your feedback on these new features, and things you want to see!
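    To make the foreign-key answers above concrete, here is a minimal sketch, not taken from the webinar itself, that creates a hypothetical parent/child pair of NDB tables with an ON DELETE CASCADE constraint and exercises it over JDBC (connection details and table names are placeholders):

        import java.sql.Connection;
        import java.sql.DriverManager;
        import java.sql.Statement;

        public class ClusterFkDemo {
            public static void main(String[] args) throws Exception {
                // Hypothetical connection to one of the cluster's SQL nodes.
                String url = "jdbc:mysql://sqlnode1:3306/test?user=demo&password=demo";
                try (Connection con = DriverManager.getConnection(url);
                     Statement st = con.createStatement()) {
                    st.execute("CREATE TABLE parent (id INT PRIMARY KEY) ENGINE=NDB");
                    // Note ON UPDATE RESTRICT: ON UPDATE CASCADE against a parent
                    // primary key would be rejected, as described above.
                    st.execute("CREATE TABLE child (" +
                               "  id INT PRIMARY KEY," +
                               "  parent_id INT," +
                               "  FOREIGN KEY (parent_id) REFERENCES parent(id)" +
                               "    ON DELETE CASCADE ON UPDATE RESTRICT" +
                               ") ENGINE=NDB");
                    st.execute("INSERT INTO parent VALUES (1)");
                    st.execute("INSERT INTO child VALUES (10, 1)");
                    // Because the constraint is enforced in the data nodes,
                    // this delete also removes the child row (ON DELETE CASCADE),
                    // regardless of whether the write came via SQL or NoSQL APIs.
                    st.execute("DELETE FROM parent WHERE id = 1");
                }
            }
        }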

    Read the article

  • Praise for Europe's Smart Metering & Conservation Efforts

    - by caroline.yu
    Recently, a writer at the Home Energy Team praised the UK for its efforts towards smart metering and energy conservation, with an article entitled "UK Blazing A Trail With Smart Metering At Home?" The article highlighted that the Department of Energy and Climate Change has announced that smart metering will be introduced in the next decade and that all UK households will have smart meters by the year 2020.

    In fact, the UK is not the only country striving to achieve carbon reduction targets; many of its European counterparts have begun to take positive steps towards tackling the issue of energy conservation by implementing innovative new metering and billing technologies as well as promoting alternative energy solutions, such as wind and solar power. Since 1997, the states of the European Union, including France, Germany and Spain, have been working towards achieving a target of 12 percent renewable energy electricity by 2010. Germany in particular has made a significant achievement so far, having surpassed the target early in 2007. This success is largely due to the German Renewable Energy Act (EEG), which promoted the use of renewable energy.

    Recently, analysis from the European Wind Energy Association (EWEA) found that 21 of the EU Member States are meeting or exceeding their national target to achieve 20 percent renewable energy by 2020. However, six states - Belgium, Italy, Luxembourg, Malta, Bulgaria and Denmark - say they will not manage to reach their target through domestic action alone. Bulgaria and Denmark believe that with fresh national initiatives they could meet or exceed their targets, but others, including Italy, may need to import renewable energy from neighboring non-EU countries. Top achievers, according to the EWEA report, are Spain, which believes its renewable energy will reach 22.7 percent by 2020, as well as Germany, Estonia, Greece, Ireland, Poland, Slovakia and Sweden, who will all exceed their targets.

    "Importantly, the way that this renewable energy is controlled and distributed must be addressed in order to ensure its success," said Bastian Fischer, vice president and general manager EMEA, Oracle Utilities. "A smart grid infrastructure can enable utilities to deal with load distribution in times of increased need and ensure power is always available from these means. A smart grid also underpins the success of metering and billing technologies, such as smart metering, and allows utilities to deal with increased usage data and provide accurate billing."

    Outside of Europe, Australia has made significant steps towards improving water conservation. The Australian Department of Sustainability and Environment took some of the recent advancements made in the energy sector, including new metering and billing solutions, and applied them to the water industry, enhancing customer service and reducing consumption as a result.

    The adoption of smart metering in Europe is mainly driven by regulation, but significant technological improvements are being made the world over to change the way we use all kinds of energy. However, the developing markets are lagging behind. One of the primary reasons for this is the lack of infrastructure in place to use as a foundation for setting up energy-saving solutions, which is slowing the adoption of technologies such as smart meters. However, these countries do benefit from having less outdated infrastructure and fewer legacy systems, which are often cited by others as a difficult barrier to deploying new solutions. As a result, some countries should find new technologies easier to implement and adapt to in the immediate future, without this roadblock.

    Read the article

  • Jersey non blocking client

    - by Pavel Bucek
    Although Jersey already has support for making asynchronous requests, it is implemented in the standard blocking way: every asynchronous request is handled by one thread, and that thread is released only after the request is completely processed. That is OK for lots of cases, but imagine how that will work when you need to make lots of parallel requests. Of course you can limit the number of threads used for asynchronous requests (and it's really a wise thing to do; you do want to control your resources), but you'll get another, perhaps unpleasant, consequence: processing time will obviously increase.

    There are a few projects trying to deal with that problem, commonly known as async http clients. I didn't want to "re-implement the wheel", and I decided I'd use AHC - the Async Http Client made by Jeanfrancois Arcand. There is also an interesting implementation from Apache - HttpAsyncClient - but it is still in "very early stages of development", and others haven't been in similar or better shape than AHC.

    How does this work? Non-blocking clients allow users to make the same asynchronous requests as we can with the standard approach, but the implementation is different: threads are better utilized and don't spend most of their time idle. Simply described: when you make a request (send it over the network), you are waiting for the reply from the other side. And there comes the main advantage of the non-blocking approach - it uses these threads for further work, like making other requests or processing responses. Idle time is minimized and your resources (threads) will be far better used.

    Who should consider using this? Everyone who is making lots of asynchronous requests. I haven't done a proper benchmark yet, but some simple dumb tests are showing huge improvements in cases where lots of concurrent asynchronous requests are made in a short period.

    Last but not least - this module is still experimental, so if you don't like something or if you have ideas for improvements or any feedback, feel free to comment on this blog post, send mail to [email protected] or contact me personally. All feedback is greatly appreciated!

    Maven dependency (will be present in the java.net maven 2 repo by the end of the day): http://download.java.net/maven/2/com/sun/jersey/experimental/jersey-non-blocking-client

        <dependency>
            <groupId>com.sun.jersey.experimental</groupId>
            <artifactId>jersey-non-blocking-client</artifactId>
            <version>1.9-SNAPSHOT</version>
        </dependency>

    Code snippet:

        ClientConfig cc = new DefaultNonBlockingClientConfig();
        cc.getProperties().put(NonBlockingClientConfig.PROPERTY_THREADPOOL_SIZE, 10); // default value, feel free to change
        Client c = NonBlockingClient.create(cc);
        AsyncWebResource awr = c.asyncResource("http://oracle.com");
        Future<ClientResponse> responseFuture = awr.get(ClientResponse.class);
        // or
        awr.get(new TypeListener<ClientResponse>(ClientResponse.class) {
            @Override
            public void onComplete(Future<ClientResponse> f) throws InterruptedException {
                ...
            }
        });

    Javadoc (temporary location, won't be updated): http://anise.cz/~paja/jersey-non-blocking-client/

    Read the article

  • Bad Spot to Be In: Playing Catch-up with Mobile Advertising

    - by Mike Stiles
    You probably noticed, there’s a mass migration going on from online desktop/laptop usage to smartphone/tablet usage. It’s an indicator of how we live our lives in the modern world: always on the go, with no intention of being disconnected while out there. Consequently, paid as it relates to mobile advertising is taking the social spotlight.

    eMarketer estimated that in 2013, US adults would spend about 2 hours, 21 minutes a day on mobile, not counting talking time. More people in the world own smartphones than own toothbrushes (bad news I suppose if you’re marketing toothpaste). They’re using those mobile devices to access social networks, consuming at least 17% of their mobile time on them. Frankly, you don’t need a deep dive into mobile usage stats to know what’s going on. Just look around you in any store, venue or coffee shop. It’s really obvious… our mobile devices are now where we “are,” so that’s where marketers can increasingly reach us.

    And it’s a smart place for them to do just that. Mobile devices can be viewed more and more as shopping facilitators. Usually when someone is on mobile, they are not in passive research mode. They are likely standing near a store or in front of a product, using their mobile to seek reassurance that buying that product is the right move. They are the hottest of hot prospects. Consider that 4 out of 5 consumers use smartphones to shop, 52% of Americans use mobile devices for in-store research, 70% of mobile searches lead to online action inside of an hour, and people that find you on mobile convert at almost 3x the rate of those that find you on desktop or laptop.

    But what are marketers doing? Enter statistics from Mary Meeker’s latest State of the Internet report. Common sense says you buy advertising where people are spending their eyeball time, right? But while mobile is 20% of media use and rising, the ad spend there is 4%. Conversely, while print usage is at 5% and falling, ad spend there is 19%. We all love nostalgia, but come on.

    There are reasons marketing dollar migration to mobile has not matched user migration, including the availability of mobile ad products and the ability to measure user response to mobile ads. But interesting things are happening now. First came Facebook’s mobile ad, which let app developers pay to get potential downloads. Then their mobile ad network was announced at F8, allowing marketers to target users across non-Facebook apps while leveraging the wealth of diverse data Facebook has on those users - a big deal, since Nielsen has pointed out that mobile apps make up 89% of the media time spent on mobile. Twitter has a similar play in motion with their MoPub acquisition. And now mobile deeplinks have arrived, which can take users straight to sub-pages of mobile apps for a faster, more direct shopper/researcher user experience. The sooner the gratification, the smoother and faster the conversion.

    To be clear, growth in mobile ad spending is well underway. After posting $13.1 billion in 2013, Gartner expects global mobile ad spending to reach $18 billion this year, then $41.9 billion by 2017. Cheap smartphones and data plans are spreading worldwide, further fueling the shift to mobile. Mobile usage in India alone should grow 400% by 2018. And, of course, there’s the famous statistic that mobile should overtake desktop Internet usage this year.

    How can we as marketers mess up this opportunity? Two ways. We could position ourselves in perpetual “catch-up” mode and keep spending ad dollars where the public used to be. And we could annoy mobile users with horrid old-school marketing practices. Two-thirds of users told Forrester they think interruptive in-app ads are more annoying than TV ads. Make sure your brand’s social marketing technology platform is delivering a crystal clear picture of your social connections so the mobile touch point is highly relevant, mobile optimized, and delivering real value and satisfying experiences. Otherwise, all we’ve done is find a new way to be unwanted.

    @mikestiles @oraclesocial
    Photo: Kate Mallatratt, freeimages.com

    Read the article

  • Use Case Actors - Primary versus Secondary

    - by Dave Burke
    The Unified Modeling Language (UML) (1) defines an Actor (from UseCases) as: "An actor specifies a role played by a user or any other system that interacts with the subject."

    In Alistair Cockburn’s book “Writing Effective Use Cases” (2), Actors are further defined as follows:

    Primary Actor: The primary actor of a use case is the stakeholder that calls on the system to deliver one of its services. It has a goal with respect to the system – one that can be satisfied by its operation. The primary actor is often, but not always, the actor who triggers the use case.

    Supporting Actors: A supporting actor in a use case is an external actor that provides a service to the system under design. It might be a high-speed printer, a web service, or humans that have to do some research and get back to us.

    In a 2006 article (3) Cockburn refined the definitions slightly to read:

    Primary Actors: The Actor(s) using the system to achieve a goal. The Use Case documents the interactions between the system and the actors to achieve the goal of the primary actor.

    Secondary Actors: Actors that the system needs assistance from to achieve the primary actor’s goal.

    Finally, the Oracle Unified Method (OUM) concurs with the UML definition of Actors, along with Cockburn’s refinement, but OUM also includes the following: secondary actors may or may not have goals that they expect to be satisfied by the use case; the primary actor always has a goal, and the use case exists to satisfy the primary actor.

    Now that we are on the same “page”, let’s consider two examples:

    1. A bank loan officer wants to review a loan application from a customer, and part of the process involves a real-time credit rating check.
       Use Case Name: Review Loan Application
       Primary Actor: Loan Officer
       Secondary Actors: Credit Rating System

    2. A Human Resources manager wants to change the job code of an employee, and as part of the process, automatically notify several other departments within the company of the change.
       Use Case Name: Maintain Job Code
       Primary Actor: Human Resources Manager
       Secondary Actors: None

    The first example is quite straightforward; we need to define the Secondary Actor because without the “Credit Rating System” we cannot successfully complete the Use Case. In other words, the goal of the Primary Actor is to successfully complete the Loan Application, but they need the explicit “help” of the Secondary Actor (Credit Rating System) to achieve this goal.

    The second example is where people sometimes get confused. Within OUM we would not include the “other departments” as Secondary Actors, and therefore not include them on the Use Case diagram, for the following reasons:
    - The other departments are not required for the successful completion of the Use Case
    - We are not expecting any response from the other departments (at least within the bounds of the Use Case under discussion)

    Having said that, within the detail of the Use Case Specification Main Success Scenario, we would include something like: “The system sends a notification to the related department heads (ref. Business Rule BR101)”.

    Now let’s consider one final example. A Procurement Manager wants to place a “bid” for some goods using an On-Line Trading Community (a B2B version of eBay).
       Use Case Name: Create Bid
       Primary Actor: Procurement Manager
       Secondary Actors: On-Line Trading Community

    You might wonder why the Trading Community is listed as a Secondary Actor; i.e. if all we are going to do is place a bid for a specific quantity of goods at a given price and send that off to the Trading Community, then why would the Trading Community need to “assist” in that Use Case? Well, once again, it comes back to the “User Experience” and how we want to optimize that when we think about our Use Case, and ultimately, when the developer comes to assembling some code. In this final example, the Procurement Manager cannot successfully complete the “Create Bid” Use Case until they receive an affirmative confirmation back from the Trading Community that the Bid has been accepted. Therefore, the Trading Community must become a Secondary Actor and be referenced both on the Use Case diagram and in the Use Case Specification.

    Any astute readers who are wondering about the “single sitting” rule will have to wait for a follow-up Blog entry to find out how that consideration can be factored in! Happy Use Case writing!

    (1) OMG Unified Modeling Language (OMG UML), Superstructure Version 2.4.1
    (2) Cockburn, A, 2000, Writing Effective Use Cases, Addison-Wesley Professional; Edition 1
    (3) Cockburn, A, 2006, “Use Case fundamentals”, viewed 20th March 2012, http://alistair.cockburn.us/Use+case+fundamentals

    Read the article

  • Upgrades in 5 Easy Pieces

    - by Anne R.
    Even though there are a few select tasks that I have to do once or twice a year, I can’t remember how to do them! Or where to find the bits and pieces to complete the task. So I love it when someone consolidates everything under one spot. That’s what the CRM On Demand team has done with the upgrade information. Specifically, they have:

    - Provided a “one-stop” area for managing upgrades at your company.
    - Broken down the upgrade process into 5 (yes, 5) steps.
    - Explained when and how to perform each step, with dates specific to your pod.
    - Included details about each step, visible by expanding the step.
    - Translated the steps into 11 languages.
    - Added a list of release-specific resources with links from the page.

    Now, just head for the Training and Support portal, click the Release Info tab, and walk through the “5 Essential Steps to a Successful Upgrade.” Before you continue, though, select your language from the drop-down list on the Release Info page. CRM On Demand now has the upgrade steps translated into 11 languages. On the Step page, you can expand each section in sequence and follow the more detailed instructions that appear. This will ensure that you’ve covered all your bases for each upgrade. Here’s a shortened version of the information that you’ll find:

    1. Verify your Primary Contact Information. Have you checked your primary contact information to make sure you’re being notified of all upgrade information? Or do you want more users to receive upgrade announcements? This section provides you with the navigation path to do that in CRM On Demand.

    2. Review your Key Upgrade Dates. If you expand this step, a nice table appears with your critical dates for the various milestones. IMPORTANT: When your CRM On Demand pod has been officially added to the upgrade schedule, closer to the release date itself, this table will display your specific timetable.

    3. Migrate your Customizations from the Staging Environment before the Snapshot Date. Oracle refreshes the Staging data with a copy of your Production data made on the Production Snapshot Date. So this section lists considerations relevant to this step. It also reminds you of the 2-week period when you should not be making any changes in your Staging environment.

    4. Conduct your Upgrade Validation on the Staging Environment. When the Customer Validation Testing period begins, you need to log in to your Staging Environment to validate that your key business processes and customizations continue to behave as expected. If your company utilizes Web Services, Web Links, Web Applets or Workflow, focus on testing these first. You generally have about two weeks for testing. If you run into problems during this time, follow the instructions shown in this section for logging a service request. It describes exactly how to fill out the fields in the SR for the fastest resolution.

    5. Conduct “White Glove” Testing in your Upgraded Production Environment. Before users start using the upgrade, you should access a few tabs and reports. Doing this actually warms up the cache so that frequently used pages and reports will come up at normal speed on Monday morning, when users log in to the upgraded system. Resources listed under this step help you in further preparing for the upgrade.

    Now there’s also a new Documentation section on the right with links to these release-specific resources.

    Very nice, I commented, when discussing these improvements with the “responsible party.” She confirmed that, yes, they tried to consolidate the upgrade information, translate it for better communication, simplify it into 5 easy pieces, and drive admins responsible for handling upgrades to this one site instead of sending out elaborate emails. Yes, I just love it when someone practically reaches out and holds my hand through a process. Next best thing to a wizard!

    Read the article

  • 2013 Predictions for Retail

    - by David Dorf
    It's that time of year to roll out the predictions for next year. I can't say I've really nailed it in the past, but feel free to look back at my 2012, 2011, and 2010 predictions. I'm not expecting anything earth-shattering this year; just continued maturation of several technologies that are finally taking hold.

    1. Next day delivery -- Amazon finally decided it wasn't worth fighting state taxes and instead decided to place distribution centers everywhere so they can potentially offer next-day deliveries. Not to be outdone, Walmart is looking to leverage its huge physical presence to offer the same. Clubs like ShopRunner are pushing delivery barriers as well, so the norm is shifting to free shipping in a few days or relatively cheap shipping overnight. Retailers need to be thinking about how to ship from physical stores.

    2. Bring your own device -- Earlier this year Intuit bought AisleBuyer, a mobile self-checkout start-up, at least somewhat validating the BYOD approach. Grocery stores, especially in Europe, have been supporting in-aisle self-scanning for a while and I'm betting it will find a home in certain verticals in the US too. There's also the BYOD concept for employees. Some retailers are considering issuing mobile devices at hiring alongside the shirt and name-tag. Employees become responsible for the hardware until they leave.

    3. TV shopping -- Will Apple finally release a TV product in 2013? Who knows? But the industry isn't standing still. Companies like QVC and HSN are already successfully combining the TV and online experiences for shopping. Comcast is partnering with Tivo to allow viewers to interact with ads, with Paypal handling payment. This will be a slow maturation, but expect TVs to get smarter and eventually become a new selling channel (pun intended) for retailers.

    4. Privacy backlash -- It only takes one big incident to stir the public, and I'm betting we have one in 2013. Facebook, Google, or Apple will test the boundaries of what the public is willing to accept. It could involve a retailer using geo-location technology, or possibly video analytics. And as is always the case, the offender will apologize, temporarily remove the technology, and wait 2-3 years for it to be generally accepted. Privacy is a moving target.

    5. More NFC -- I've come to the conclusion that adoption of any banking technology is going to be slow. It was slow for credit cards, ATMs, and online billpay, so why should it be any different for NFC? Maybe, just maybe, the iPhone 5S will have an NFC chip, but we're not going to see mainstream uptake for years. Next year we'll continue to see incremental improvements from Isis, Google, and Paypal and a plethora of new startups, but don't toss your magstripe cards just yet.

    6. In-store location -- The technologies for tracking people inside stores are really improving. Retailers can track people using video cameras, infrared, and the WiFi radios in mobile phones. We're getting closer to the point where accuracy could be a shelf-facing, which will help retailers understand how people shop, where they spend time, and what displays attract them. Expect CPG companies to get involved and partner with retailers, since the data benefits both parties. Consumers will benefit by being directed right to the products they seek. (In 2013 ARTS is forming a workteam to develop new standards in this area.)

    7. M&A -- Looking back at 2012, there were some really big deals involving IBM, Oracle, JDA, and NCR, and I expect that trend will likely continue as vendors add assets to bolster their portfolios. Many retailers are due for an IT transformation to support anywhere, anytime shoppers, and one-stop vendors can minimize complexity and costs.

    Predictions from other sources: Independent Retailer, Stores Magazine, IDC Insights, Mobile Commerce Daily

    Read the article

  • Creating a new naming context in OUD

    - by Sylvain Duloutre
    A naming context (also known as a directory suffix) is a DN that identifies the top entry in a locally held directory hierarchy.

    A new naming context can be created using ODSM, the OUD GUI admin console, as described in http://docs.oracle.com/cd/E29407_01/admin.111200/e22648/server_config.htm#CBDGCJGF

    It can also be created using the dsconfig command line, as described below. Creation of a new naming context consists of 3 steps.

    First, create a Local Backend Workflow element (myNewDb in this example), responsible for the naming context base DN, e.g. o=example:

        dsconfig create-workflow-element \
            --set base-dn:o=example \
            --set enabled:true \
            --type db-local-backend \
            --element-name myNewDb \
            --hostname <your host> \
            --port <admin port> \
            --bindDN cn=Directory\ Manager \
            --bindPasswordFile ****** \
            --no-prompt

    Second, create a Workflow element (workFlowForMyNewDb in this example) associated with the Local Backend Workflow element. Workflow elements are used to route LDAP requests to the appropriate database, based on the target base DN:

        dsconfig create-workflow \
            --set base-dn:o=example \
            --set enabled:true \
            --set workflow-element:myNewDb \
            --type generic \
            --workflow-name workFlowForMyNewDb \
            --hostname <your host name> \
            --port <admin port> \
            --bindDN cn=Directory\ Manager \
            --bindPasswordFile ****** \
            --no-prompt

    Then, the workflow element must be made visible outside of the directory, i.e. added to the internal "routing table." This is done by adding the Workflow to the appropriate Network Group. A Network Group is used to classify incoming client connections and route requests to workflows:

        dsconfig set-network-group-prop \
            --group-name network-group \
            --add workflow:workFlowForMyNewDb \
            --hostname <your hostname> \
            --port <admin port> \
            --bindDN cn=Directory\ Manager \
            --bindPasswordFile ****** \
            --no-prompt

    At that stage, it is possible to import entries to the new naming context o=example.
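    As a quick follow-on, here is a minimal sketch, not part of the original post, that uses plain JNDI from Java to create the top entry of the new o=example naming context once the steps above are done. The host, LDAP port and credentials are hypothetical placeholders:

        import java.util.Hashtable;
        import javax.naming.Context;
        import javax.naming.directory.BasicAttribute;
        import javax.naming.directory.BasicAttributes;
        import javax.naming.directory.InitialDirContext;

        public class CreateBaseEntry {
            public static void main(String[] args) throws Exception {
                Hashtable<String, String> env = new Hashtable<>();
                env.put(Context.INITIAL_CONTEXT_FACTORY, "com.sun.jndi.ldap.LdapCtxFactory");
                env.put(Context.PROVIDER_URL, "ldap://localhost:1389"); // hypothetical LDAP port
                env.put(Context.SECURITY_AUTHENTICATION, "simple");
                env.put(Context.SECURITY_PRINCIPAL, "cn=Directory Manager");
                env.put(Context.SECURITY_CREDENTIALS, "password");

                InitialDirContext ctx = new InitialDirContext(env);
                // The new suffix needs a top entry before child entries can be added.
                BasicAttributes attrs = new BasicAttributes();
                BasicAttribute oc = new BasicAttribute("objectClass");
                oc.add("top");
                oc.add("organization");
                attrs.put(oc);
                attrs.put(new BasicAttribute("o", "example"));
                ctx.createSubcontext("o=example", attrs).close();
                ctx.close();
            }
        }

    For bulk loads, the import-ldif utility shipped with OUD is the more usual route; the JNDI approach above is just the smallest self-contained illustration.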

    Read the article

  • Hey Retailers, Are You Ready For The Holiday Season?

    - by Jeri Kelley
    With online holiday spending reaching $35.3 billion in 2011 and American shoppers spending just under $750 on average on their holiday purchases this year, how ready is your business for the 2012 holiday season?

    Today’s shoppers do not take their purchases lightly. They are more connected, interact with more resources to make decisions, diligently compare products and services, seek out the best deals, and ask for input from friends and family. This holiday season, as consumers browse for apparel, tablets, toys, and much more, they will be bombarded with retailer communication - from emails and commercials to countless search engine results and social recommendations. With a flurry of activity coming at consumers from every channel and competitor, your success this year will rely on communicating a consistent, personalized message no matter where your customers are shopping. Here are a few ideas to help with your commerce strategy this holiday season:

    CONSISTENCY COUNTS FOR MULTICHANNEL SHOPPERS
    According to a November 2011 study commissioned by Oracle, “Channel Commerce 2011: The Consumer View,” 54% of consumers in the U.S. and Canada regularly employ two or more channels before they make a purchase. While each channel has its own unique benefit, user profile, and purpose, it’s critical that your shoppers have a consistent core experience wherever they’re looking for information or making a purchase. Be sure consumers can consistently search and browse the same product information and receive the same promotions online, on their mobile devices, and in-store.

    USE YOUR CUSTOMER’S CONTEXT TO SURFACE RELEVANT CONTENT
    Your Web site is likely the hub of your holiday activity. According to a Monetate infographic, 39% of shoppers will visit your Web site directly to find out about the best holiday deals. Use everything you know about your customers, from past purchase data to browsing history, to provide a relevant experience at every click, and assemble content in a context that entices shoppers to buy online, or influences an offline purchase.

    TAKE ADVANTAGE OF MOBILE BEHAVIOR
    Having a mobile program is no longer a choice. Armed with smartphones and tablets, consumers now have access to more and more product information and can compare products and prices from anywhere. In fact, approximately 52% of smartphone users will use their device to research products, redeem coupons and use apps to assist in their holiday gift purchase. At a minimum, be sure your mobile environment has store information, consistent pricing and promotions, and simple checkout capabilities.

    ARM IN-STORE ASSOCIATES WITH TABLETS
    According to RISNews.com, 31% of retailers plan to begin testing tablets in stores in 2012, 22% have already begun such testing and 6% have fully deployed tablets within stores. Take advantage of this compelling sales tool to get shoppers interacting with videos, user reviews, how-to guides, side-by-side product comparisons, and specs. Automatically trigger upsell and cross-sell suggestions for store associates to recommend for each product or category, build in alerts for promotions, and allow associates to place orders and check inventory from their tablet.

    WISDOM OF THE CROWDS IS GOOD, BUT WISDOM FROM FRIENDS IS BETTER
    Shoppers who grapple with options are looking for recommendations; they’d rather get advice from friends, and they’re more likely to spend more while doing so. In fact, according to an infographic by Mr. Youth, 66% of social media users made a purchase on Black Friday or Cyber Monday as a direct result of social media interactions with brands or family. This holiday season, be sure you are leveraging your social channels, from Facebook to Pinterest, to drive consistent promotions and help your brand become part of the conversation.

    So, are you ready for the holidays this year?

    Read the article

  • Standards Corner: Preventing Pervasive Monitoring

    - by independentid
    Phil Hunt is an active member of multiple industry standards groups and committees and has spearheaded discussions, creation and ratification of industry standards, including the Kantara Identity Governance Framework, among others. Being an active voice in the industry standards development world, we have invited him to share his discussions, thoughts, news and updates, and discuss use cases, implementation success stories (and even failures) around industry standards in this monthly column.

    Author: Phil Hunt

    On Wednesday night, I watched NBC’s interview of Edward Snowden. The past year has been a tumultuous one in the IT security industry. There have been some amazing revelations about the activities of governments around the world, and we have had several instances of major security bugs in key security libraries: Apple’s ‘gotofail’ bug, the OpenSSL Heartbleed bug, not to mention Java’s zero-day bug, and others. Snowden’s information showed the IT industry has been underestimating the need for security, and highlighted a general trend of lax use of TLS and poorly implemented security on the Internet. This did not go unnoticed in the standards community, and in particular the IETF.

    Last November, the IETF (Internet Engineering Task Force) met in Vancouver, Canada, where the issue of “Internet hardening” was discussed in a plenary session. Presentations were given by Bruce Schneier, Brian Carpenter, and Stephen Farrell describing the problem, the work done so far, and potential IETF activities to address the problem of pervasive monitoring. At the end of the presentation, the IETF called for consensus on the issue. If you know engineers, you know that it takes a while for a large group to arrive at a consensus, and this group numbered approximately 3000. When asked if the IETF should respond to pervasive surveillance attacks, there was an overwhelming response for ‘Yes’. When it came to ‘No’, the room echoed in silence. This was just the first of several consensus questions that were each overwhelmingly in favour of a response. This is the equivalent of a unanimous opinion for the IETF.

    Since the meeting, the IETF has followed through with the recent publication of a new “best practices” document on Pervasive Monitoring (RFC 7258). This document is extremely sensitive in its approach and separates the politics of monitoring from the technical ones:

    “Pervasive Monitoring (PM) is widespread (and often covert) surveillance through intrusive gathering of protocol artefacts, including application content, or protocol metadata such as headers. Active or passive wiretaps and traffic analysis, (e.g., correlation, timing or measuring packet sizes), or subverting the cryptographic keys used to secure protocols can also be used as part of pervasive monitoring. PM is distinguished by being indiscriminate and very large scale, rather than by introducing new types of technical compromise.

    The IETF community's technical assessment is that PM is an attack on the privacy of Internet users and organisations. The IETF community has expressed strong agreement that PM is an attack that needs to be mitigated where possible, via the design of protocols that make PM significantly more expensive or infeasible. Pervasive monitoring was discussed at the technical plenary of the November 2013 IETF meeting [IETF88Plenary] and then through extensive exchanges on IETF mailing lists. This document records the IETF community's consensus and establishes the technical nature of PM.”

    The draft goes on to further qualify what it means by “attack”, clarifying that: “The term is used here to refer to behavior that subverts the intent of communicating parties without the agreement of those parties. An attack may change the content of the communication, record the content or external characteristics of the communication, or through correlation with other communication events, reveal information the parties did not intend to be revealed. It may also have other effects that similarly subvert the intent of a communicator.”

    The past year has shown that Internet specification authors need to put more emphasis on information security and integrity. The year also showed that specifications alone are not good enough. The implementations of security and protocol specifications have to be of high quality and superior testing. I’m proud to say Oracle has been a strong proponent of this, having already established its own secure coding practices.
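    As one small, concrete illustration of the “lax use of TLS” point above (my example, not from the RFC or the column), a Java client can explicitly restrict a connection to modern TLS versions rather than accepting whatever the platform default negotiates. The host name is a placeholder:

        import javax.net.ssl.SSLSocket;
        import javax.net.ssl.SSLSocketFactory;

        public class StrictTlsClient {
            public static void main(String[] args) throws Exception {
                SSLSocketFactory factory = (SSLSocketFactory) SSLSocketFactory.getDefault();
                try (SSLSocket socket = (SSLSocket) factory.createSocket("example.com", 443)) {
                    // Refuse legacy protocol versions instead of relying on defaults.
                    socket.setEnabledProtocols(new String[] { "TLSv1.2" });
                    socket.startHandshake(); // fails if the server cannot meet the policy
                    System.out.println("Negotiated: " + socket.getSession().getProtocol());
                }
            }
        }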

    Read the article

  • Get the Picture: Pinterest for Marketers

    - by Mike Stiles
    When trying to determine on which networks to conduct social marketing, the usual suspects immediately rise to the top: Facebook & Twitter, then LinkedIn (especially if you’re B2B), then maybe some Google Plus to hedge SEO bets. So at what juncture do brands get excited about Pinterest?

    Pinterest has been easy for marketers to de-prioritize thanks to the perception its usage is so dominated by women. Um, what’s wrong with that? Women make an estimated 85% of all consumer purchases. So if there are indeed over 30 million US women active on it monthly, and they do 92% of the pinning, and 84% are still active on it after 4 years, when did an audience of highly engaged, very likely sales conversions become low priority?

    Okay, if you’re a tech B2B SaaS product like the Oracle Social Cloud, Pinterest may not be where you focus. But if you operate in the top Pinterest categories, which are truly far-reaching, it’s time to take note of Pinterest’s performance to date:

    - 40.1 million monthly users in the US (eMarketer).
    - Over 30 billion pins, half of which were pinned in the last 6 months. (Big momentum)
    - 75% of usage is on their mobile app. (In solid shape for the mobile migration)
    - Pinterest sharing grew 58% in 2013, beating Facebook, Twitter, or LinkedIn. (ShareThis)
    - Pinterest is the 3rd most popular sharing platform overall (over email), with 48% of all sharing on tablets.
    - Users referred by Pinterest are 10% more likely to buy on e-commerce sites and tend to spend twice that of users coming from Facebook. (Shopify)

    To be fair, brands haven’t had any paid marketing opportunities on that platform... until recently. Users are seeing Promoted Pins in both category and search feeds from rollout brands like Gap, ABC Family, Ziploc, and Nestle. Are the paid pins annoying users? It seems, more so than on other social networks, they’re fitting right into the intended user experience and being accepted, getting almost as many click-throughs as user pins.

    New York Magazine’s Kevin Roose laid it out succinctly: Pinterest offers a place that’s image-centric, search-friendly, makes things easy to purchase, makes things easy to share, and puts users in an aspirational mood to buy. Pinterest is very confident in the value of that combo and that audience, with CPM rates 5x that of the most expensive Facebook ad, plus (at least for now) required spending commitments and required pin review by Pinterest for quality.

    The latest developments: a continued move toward search and discovery with enhancements like Guided Search to help you hone in on what interests you, Custom Categories, and the rumored Visual Search that stands to be a liberation from text. And most recently, Pinterest has opened up its API so brands can get access to deeper insights into the best search terms and categories in which to play ball, as well as what kinds of pins stand to perform best in those areas.

    As we learned in our rundown this week of Social Media Examiner’s Social Media Marketing Industry Report, around 50% of marketers specifically intend on upping their use of Pinterest. If you’re a big believer in fishing where the fish are, that’s probably an efficient position to take.

    @mikestiles @oraclesocial
    Photo: Adam Lambert_Gorwyn, freeimages.com

    Read the article

  • Come see us at JavaU at JavaOne!

    - by tmcginn
In just a little under a month, JavaOne will be in full swing (no pun intended) and thousands of Java developers will gather to hear the latest Java news, immerse themselves in Java technology and learn some new things. This year, I am fortunate enough to be able to attend, along with my Java curriculum development colleagues Matt Heimer and Mike Williams. We start our week at JavaOne teaching a one-day session at JavaU on Sunday morning. If you have never attended a training session through JavaU, you should check it out. There are some terrific sessions this year, and it might help to justify your trip to JavaOne if you can say it was for training! This year I am teaching a one-day session on Java SE 7 New Features - a great session for anyone interested in the specific details of what is new in Java SE 7. Matt is teaching a one-day session on developing portable Java EE applications with the Enterprise JavaBeans 3.1 API and the Java Persistence 2.0 API, and Mike is doing a one-day session on developing rich client applications with Java SE 7 using JavaFX 2. I asked Matt and Mike to tell me what developers can expect from their sessions. Matt: "My session will get you up to speed on everything you need to know to create portable Java EE 6 applications using EJB 3.1 and JPA 2. I am going to cover why everyone can benefit from using EJBs (and why developers should relearn them if they haven't looked at them for years). Students who attend my session will see JPA examples showcasing how to use relational databases in enterprise applications without programming to JDBC and without writing SQL statements. EJB and JPA benefit from being paired together, so I will also show how transaction management is easier in a container. I encourage students to bring a laptop and code as they learn!" Mike: "My session covers how to develop a rich client application using JavaFX 2. Starting with the basic concepts of JavaFX, students will see how a JavaFX application is built from its layout, to its controls, to its data structures. In addition, more advanced controls like charts, smart tables, and transitions will be added to the application. Finally, a quick review of JavaFX concurrency and data binding is included. Blended with the core concepts, the session will include some of the latest JavaFX technology. This includes using Scene Builder to create a JavaFX UI and connecting your XML UI definition to Java code. In addition, packaging of the JavaFX application will be covered, with some examples of the new native packaging features." As I mentioned, my session covers the changes in Java SE 7, including the language changes that were voted into Java SE 7 from Project Coin. I will also look at how you can take advantage of the new I/O library (NIO.2) for writing applications that work with files, directories and file systems, as well as the changes in asynchronous I/O that are part of NIO.2. We will spend some time looking at the changes to the Java Virtual Machine, including support for dynamically typed languages (JSR-292), and at the Java concurrency enhancements (JSR-166), including the new Fork/Join framework. And we'll round out the day with a look at changes in Swing, XML and a number of smaller changes in the APIs. And, if these topics aren't grabbing your interest, take a look at the other 10 sessions, which range from topics on architecture to how to pass the Oracle Certified Programmer I and II exams.
See you soon!
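To give a taste of the Project Coin and NIO.2 material, here is a minimal sketch that puts a few Java SE 7 features in one place - the diamond operator, try-with-resources, strings in switch, and the java.nio.file API. The directory scan and grouping logic are purely illustrative, not taken from the session materials:

    import java.io.IOException;
    import java.nio.file.DirectoryStream;
    import java.nio.file.Files;
    import java.nio.file.Path;
    import java.nio.file.Paths;
    import java.util.ArrayList;
    import java.util.HashMap;
    import java.util.List;
    import java.util.Map;

    public class Se7Tour {
        public static void main(String[] args) throws IOException {
            // Project Coin: the diamond operator infers the type arguments.
            Map<String, List<Path>> byExtension = new HashMap<>();

            // NIO.2: Path and Paths replace much of the old java.io.File API.
            Path dir = Paths.get(args.length > 0 ? args[0] : ".");

            // Project Coin: try-with-resources closes the stream automatically.
            try (DirectoryStream<Path> stream = Files.newDirectoryStream(dir)) {
                for (Path p : stream) {
                    String name = p.getFileName().toString();
                    int dot = name.lastIndexOf('.');
                    String ext = (dot < 0) ? "(none)" : name.substring(dot + 1);

                    // Project Coin: strings in switch statements.
                    switch (ext) {
                        case "java":
                        case "class":
                            List<Path> group = byExtension.get(ext);
                            if (group == null) {
                                group = new ArrayList<>();
                                byExtension.put(ext, group);
                            }
                            group.add(p);
                            break;
                        default:
                            break; // ignore other file types
                    }
                }
            }
            System.out.println(byExtension);
        }
    }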

    Read the article

  • database design help for game / user levels / progress

    - by sprugman
Sorry this got long and all prose-y. I'm creating my first truly gamified web app and could use some help thinking about how to structure the data.
The Set-up
Users need to accomplish tasks in each of several categories before they can move up a level. I've got my Users, Tasks, and Categories tables, and a UserTasks table which joins the three. ("User 3 has added Task 42 in Category 8. Now they've completed it.") That's all fine and working wonderfully.
The Challenge
I'm not sure of the best way to track the progress in the individual categories toward each level. The "business" rules are:
- You have to achieve a certain number of points in each category to move up.
- If you get the number of points needed in Cat 8, but still have other work to do to complete the level, any new Cat 8 points count toward your overall score, but don't "roll over" into the next level.
- The number of Categories is small (five currently) and unlikely to change often, but by no means absolutely fixed.
- The number of points needed to level-up will vary per level, probably by a formula, or perhaps a lookup table.
So the challenge is to track each user's progress toward the next level in each category. I've thought of a few potential approaches:
Possible Solutions
1. Add a column to the Users table for each category and reset them all to zero each time a user levels up.
2. Have a separate UserProgress table with a row for each category for each user and the number of points they have. (Basically a many-to-many version of #1.)
3. Add a userLevel column to the UserTasks table and use that to derive their progress with some kind of SUM statement. Their current level will be a simple int in the Users table.
Pros & Cons
(1) seems like by far the most straightforward, but it's also the least flexible. Perhaps I could use a naming convention based on the category ids to help overcome some of that. (With code like "select cats; for each cat, get the value from Users.progress_{cat.id}.") It's also the one where I lose the most data -- I won't know which points counted toward leveling up. I don't have a need in mind for that, so maybe I don't care.
(2) seems complicated: every time I add or subtract a user or a category, I have to maintain the other table. I foresee synchronization challenges.
(3) is somewhere in between -- cleaner than #2, but less intuitive than #1. In order to find out where a user is, I'd have mildly complex SQL like:
    SELECT categoryId, SUM(points)
    FROM UserTasks
    WHERE userId = {user.id}
      AND countsTowardLevel = {user.level}
    GROUP BY categoryId
Hmm... that doesn't seem so bad. I think I'm talking myself into #3 here, but would love any input, advice or other ideas.
P.S. Sorry for the cross-post. I wrote this up on SO and then remembered that there was a game dev-focused one. Curious to see if I get different answers in one place than the other....
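For what it's worth, option #3 also stays simple on the application side. Below is a minimal JDBC sketch of the level-up check under that schema; the class and method names are hypothetical, the pointsNeeded lookup is assumed to come from the formula or lookup table mentioned above, and the table and column names are the ones from the question:

    import java.sql.Connection;
    import java.sql.PreparedStatement;
    import java.sql.ResultSet;
    import java.sql.SQLException;
    import java.util.HashMap;
    import java.util.Map;

    public class ProgressCheck {
        // Points the user has earned toward their current level, per category.
        static Map<Integer, Integer> progressByCategory(Connection conn, int userId, int userLevel)
                throws SQLException {
            String sql = "SELECT categoryId, SUM(points) AS pts FROM UserTasks "
                       + "WHERE userId = ? AND countsTowardLevel = ? GROUP BY categoryId";
            Map<Integer, Integer> progress = new HashMap<>();
            try (PreparedStatement ps = conn.prepareStatement(sql)) {
                ps.setInt(1, userId);
                ps.setInt(2, userLevel);
                try (ResultSet rs = ps.executeQuery()) {
                    while (rs.next()) {
                        progress.put(rs.getInt("categoryId"), rs.getInt("pts"));
                    }
                }
            }
            return progress;
        }

        // The user levels up when every category meets its threshold for this
        // level. pointsNeeded (categoryId -> required points) is assumed to come
        // from the per-level formula or lookup table.
        static boolean canLevelUp(Map<Integer, Integer> progress,
                                  Map<Integer, Integer> pointsNeeded) {
            for (Map.Entry<Integer, Integer> needed : pointsNeeded.entrySet()) {
                Integer earned = progress.get(needed.getKey());
                if (earned == null || earned < needed.getValue()) {
                    return false;
                }
            }
            return true;
        }
    }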

    Read the article

  • How to ace Skype Interviews

    - by FelixWehmeyer
Many companies these days opt to include a Skype interview in the recruitment process, as it comes close to a face-to-face interview without the time and costs involved for both the company and the candidate. In some cases during the recruitment process at Oracle you also might be asked to conduct a Skype interview. To help you get started with this, we researched some websites to give you several tips and tricks. What most of the bloggers say about this topic is collected in this article to help you prepare.
It is all about Technology
The bit that can make a Skype interview more complicated than a face-to-face or phone interview is the fact that you are using additional technology. Always check the video and audio capabilities of your computer to make sure they work properly. Be prepared for connections to be limited during the interview. Using a webcam can also be confusing if you do not have a lot of experience with it. Make sure you look at the camera and not the monitor to avoid the impression that you are looking away.
Practice
If you do not feel comfortable using the camera, do a mock interview with a friend or family member before you have the actual interview. Be aware that facial impressions or reactions come across differently on a monitor, so make sure to practice how you come across during the interview. Good lighting in the room also helps you look your best for the interviewer.
You and your room
Dress code, as in any face-to-face interview, is important to think about. Dress the same way as you would for face-to-face interviews and avoid patterns or informal clothing. Another tip is to be aware of your surroundings. Make sure the room you use looks good on camera: keep it neat and tidy, and think about how the walls behind you look. Also make sure you do not get distracted during the interview by anyone or anything, as this will directly have an impact on your interview and your ability to focus and concentrate.
What is in a name
For any account you share during the recruitment process, whether your email address or your Skype name, make sure it comes across as professional. Try to avoid using nicknames or strange words in your accounts; stick to a first name - last name combination or an abbreviation of the same. If you would like to read more about this topic, have a look at the links below, which we used as inspiration for this blog article. 7 Deadly Skype Interview Sins is fun to read and gives you some good advice to keep in mind.
http://www.inc.com/guides/201103/4-tips-for-conducting-a-job-interview-using-skype.html
http://blog.simplyhired.com/2012/05/5-tips-to-a-great-skype-interview.html
http://www.cnn.com/2011/LIVING/07/11/skype.interview.tips.cb/index.html
http://www.ehow.com/how_5648281_prepare-skype-interview.html

    Read the article

  • Programmatically disclosing a node in af:tree and af:treeTable

    - by Frank Nimphius
A common developer requirement when working with af:tree or af:treeTable components is to programmatically disclose (expand) a specific node in the tree. If the node to disclose is not a top-level node (such as a location in a LocationsView -> DepartmentsView -> EmployeesView hierarchy), you also need to disclose the node's parent node hierarchy for application users to see the fully expanded tree node structure. Working on ADF Code Corner sample #101, I wrote the following code lines, which show a generic option for disclosing a tree node starting from a handle to the node to disclose. The use case in ADF Code Corner sample #101 is a drag and drop operation from a table component to a tree to relocate employees to a new department. The tree node that receives the drop is a department node contained in a location. In theory the location could be part of a country, and so on, to indicate the depth the tree may have. Based on this structure, the code below provides a generic solution to parse the current node's parent nodes and its child nodes. The drop event provides a rowKey for the tree node that received the drop. As in af:table, the tree row key is not of type oracle.jbo.domain.Key but an implementation of java.util.List that contains the row keys. The JUCtrlHierBinding class in the ADF binding layer, which represents the ADF tree binding at runtime, provides a method named findNodeByKeyPath that allows you to get a handle to the JUCtrlHierNodeBinding instance that represents a tree node in the binding layer.

    CollectionModel model = (CollectionModel) your_af_tree_reference.getValue();
    JUCtrlHierBinding treeBinding = (JUCtrlHierBinding) model.getWrappedData();
    JUCtrlHierNodeBinding treeDropNode = treeBinding.findNodeByKeyPath(dropRowKey);

To disclose the tree node, you need to create a RowKeySet, which you do using the RowKeySetImpl class. Because the RowKeySet replaces any existing row key set in the tree, all other nodes are automatically closed.

    RowKeySetImpl rksImpl = new RowKeySetImpl();
    // the first key to add is the node that received the drop operation (departments)
    rksImpl.add(dropRowKey);

Similarly, the root node can be obtained from the tree binding. The root node is the end of all parent node iteration and therefore important.

    JUCtrlHierNodeBinding rootNode = treeBinding.getRootNodeBinding();

The following code obtains a reference to the hierarchy of parent nodes until the root node is found.

    JUCtrlHierNodeBinding dropNodeParent = treeDropNode.getParent();
    // walk up the tree to expand all parent nodes
    while (dropNodeParent != null && dropNodeParent != rootNode) {
        // add the node's keyPath (remember, it's a List) to the row key set
        rksImpl.add(dropNodeParent.getKeyPath());
        dropNodeParent = dropNodeParent.getParent();
    }

Next, you disclose the drop node's immediate child nodes, as otherwise all you see is the department node. It's not quite "dinner for one", but the procedure is very similar to the one that handles the parent node keys:

    ArrayList<JUCtrlHierNodeBinding> childList =
        (ArrayList<JUCtrlHierNodeBinding>) treeDropNode.getChildren();
    for (JUCtrlHierNodeBinding nb : childList) {
        rksImpl.add(nb.getKeyPath());
    }

Finally, the row key set is defined as the disclosed row keys on the tree, so that when you refresh (PPR) the tree, the new disclosed state shows:

    tree.setDisclosedRowKeys(rksImpl);
    AdfFacesContext.getCurrentInstance().addPartialTarget(tree.getParent());

The refresh in my use case is on the tree parent component (a layout container), which usually shows the best effect for refreshing the tree component.
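Putting the pieces together, the whole thing fits in one helper method. The sketch below is a minimal consolidation of the steps above; the class and method names, the RichTree parameter, and the import list are my assumptions about the surrounding drop-handler code, not part of the sample itself:

    import java.util.List;

    import oracle.adf.view.rich.component.rich.data.RichTree;
    import oracle.adf.view.rich.context.AdfFacesContext;
    import oracle.jbo.uicli.binding.JUCtrlHierBinding;
    import oracle.jbo.uicli.binding.JUCtrlHierNodeBinding;

    import org.apache.myfaces.trinidad.model.CollectionModel;
    import org.apache.myfaces.trinidad.model.RowKeySetImpl;

    public class TreeDiscloseHelper {
        // Discloses the dropped-on node, its parent hierarchy, and its
        // immediate children. "tree" is the tree that received the drop;
        // "dropRowKey" is the List-based row key taken from the drop event.
        public static void discloseDropNode(RichTree tree, List dropRowKey) {
            CollectionModel model = (CollectionModel) tree.getValue();
            JUCtrlHierBinding treeBinding = (JUCtrlHierBinding) model.getWrappedData();
            JUCtrlHierNodeBinding dropNode = treeBinding.findNodeByKeyPath(dropRowKey);

            RowKeySetImpl rksImpl = new RowKeySetImpl();
            rksImpl.add(dropRowKey);

            // walk up to the root so the full parent hierarchy is disclosed
            JUCtrlHierNodeBinding rootNode = treeBinding.getRootNodeBinding();
            JUCtrlHierNodeBinding parent = dropNode.getParent();
            while (parent != null && parent != rootNode) {
                rksImpl.add(parent.getKeyPath());
                parent = parent.getParent();
            }

            // disclose the immediate children of the drop node
            List<JUCtrlHierNodeBinding> children = dropNode.getChildren();
            if (children != null) {
                for (JUCtrlHierNodeBinding nb : children) {
                    rksImpl.add(nb.getKeyPath());
                }
            }

            // replace the disclosed row keys and refresh (PPR) the tree's parent
            tree.setDisclosedRowKeys(rksImpl);
            AdfFacesContext.getCurrentInstance().addPartialTarget(tree.getParent());
        }
    }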

    Read the article

  • You Probably Already Have a “Private Cloud”

    - by BuckWoody
I’ve mentioned before that I’m not a fan of the word “Cloud”. It’s too marketing-oriented, gimmicky and non-specific. A better definition (in many cases) is “Distributed Computing”. That means that some or all of the computing functions are handled somewhere other than under your specific control. But there is a current use of the word “Cloud” that does not necessarily mean that the computing is done somewhere else. In fact, it’s a vector of Cloud Computing that can better be termed “Utility Computing”. This has to do with the provisioning of a computing resource. That means the setup, configuration, management, balancing and so on that is needed so that a user – which might actually be a developer – can do some computing work. To that person, the resource is just “there” and works like they expect, like the phone system or any other utility. The interesting thing is, you can do this yourself. In fact, you probably already have been, or are now. It’s got a cool new trendy term – “Private Cloud” – but the fact is, if you have your setup automated, the HA and DR handled, balancing and performance tuning done, and a process wrapped around it all, you can call yourself a “Cloud Provider”. A good example here is your e-mail system. Your users – pretty much your whole company – just log into e-mail and expect it to work. To them, you are the “Cloud” provider. On your side, the more you automate and provision the system, the more you act like a Cloud Provider. Another example is a database server. In this case, the “end user” is usually the development team, or perhaps your SharePoint group and so on. The data professionals configure, monitor, tune and balance the system all the time. The more this is automated, the more you’re acting like a Cloud Provider. Lots of companies help you do this in your own data centers, from VMware to IBM and many others. Microsoft's offering here is based around System Center – they have a “cloud in a box” provisioning system that’s actually pretty slick. The most difficult part of operating a Private Cloud is probably the scale factor. In the case of Windows and SQL Azure, we handle this in multiple ways – and we're happy to share how we do it. It’s not magic, and the algorithms for balancing (like the one we started with, called Paxos) are well known. The key is the knowledge, infrastructure and people. Sure, you can do this yourself, and in many cases, such as top-secret or private systems, you probably should. But there are times when you should evaluate using Azure or other vendors, or even multiple vendors, to spread your risk. All of this should be based on client need, not on what you already know how to do. So congrats on your new role as a “Cloud Provider”. If you have an e-mail system or a database platform, you can put that right on your resume.

    Read the article

  • What You Can Learn from the NFL Referee Lockout

    - by Christina McKeon
American football is a lot like religion. The fans are devoted followers who take brand loyalty to a whole new level. These fans, who worship their teams each week, have shown that they are powerful customers whose voice has an impact. Yesterday, these fans proved that their opinion could force the hand of a large and powerful institution. With a three-month NFL referee lockout that seemed nowhere close to resolution, the Green Bay Packers and the Seattle Seahawks competed last Monday night. For those of you who might have been out of the news cycle the past few days, Green Bay lost the game due to a controversial call that many experts and analysts agree should have resulted in Green Bay winning the game. Outrage ensued. The NFL had pulled replacement referees from the high school ranks, and these replacements did not have the knowledge and experience to handle high-intensity NFL games. Fans protested about their customer experience. Their anger-filled rants were heard in social media, in the headlines of newspapers, on radio, and on national TV. Suddenly, the NFL was moved to reach an agreement with the referees. That agreement was reached late Wednesday night, with many believing that the referees had the upper hand, forcing the owners into submission. Some might argue that the referees benefited, not the fans. Since the fans wanted qualified and competent referees, I would say the fans did benefit. The referees are scheduled to return to the field this Sunday, so the fans got what they wanted. What can you learn from this negative customer experience? Customers are in control. NFL owners thought they were controlling this situation, with the upper hand over the referees. The owners figured out they weren’t in control when their fans reacted negatively. Customers can make or break you more now than ever before, which is why it is more important to connect with them, engage them in a personal manner, and create rewarding relationships. Protect your brand. Whether knowingly or unknowingly, the NFL put its brand and each team’s brand at risk with replacement referees. Think about each business decision you make, and how it may impact your brand at different points in time. A decision that results in a gain today could result in a larger loss down the road. Customer experience matters. The NFL likely foresaw declining revenues in ticket sales, merchandising, advertising, and other areas if the lockout continued. While fans primarily spoke with their minds in the days following the Green Bay debacle, their wallets would be the next things to speak. Customer experience directly affects your success and is one of the few areas where you can differentiate your business. What would you do if your brand got such negative attention? Would you be prepared to navigate such stormy waters? Would you be able to prevent such a fiasco? If you don’t have a good answer to these questions, consider joining us October 3-5, 2012 at the Oracle Customer Experience Summit in San Francisco. You’ll have the opportunity to learn even more about customer experience from industry experts such as best-selling author Seth Godin, Paul Hagen and Kerry Bodine from Forrester Research, Inc., George Kembel from the Stanford d.school, Bruce Temkin of The Temkin Group, and Gene Alvarez from Gartner, Inc. There will also be plenty of your peers and customer experience experts available for networking and discussions.

    Read the article

  • What is the difference between Workcenters, Dashboards, and the Interaction Hub?

    - by Matthew Haavisto
Oracle Open World has just concluded. Over the course of the conference, we presented several sessions covering different aspects of the PeopleSoft user experience, including Workcenters, Dashboards, and the PeopleSoft Interaction Hub (formerly known as the PeopleSoft Applications Portal). Although we've produced collateral on these features and covered them in sessions, it became apparent at the conference that customers still have many questions about these products, including how they are licensed, how they are installed, what their various purposes are, and how they can be used together synergistically.
Let's Start with Licensing and Installation
As you may know, we've extended the restricted use license (RUL) for the Interaction Hub. This grants customers with PeopleTools 8.52 licenses the right to install the Interaction Hub for free, for use as specified in the Tools license notes. Note that this means customers receive a restricted use license for the Interaction Hub that doesn't cost them an additional license fee, but it is a separate product, not part of PeopleTools or PeopleSoft applications, and is a separate installation. This means customers must provide the infrastructure to install and run the Hub, just like any other application. The benefits of using the Hub to unify your PeopleSoft user experience can be great. PeopleSoft applications have not yet delivered instances of the Hub with their products, though they may in the future.
Workcenters and Dashboards, on the other hand, are frameworks provided by PeopleTools. No other license is required, and no additional installation of a separate product is needed (apart from PeopleTools and PeopleSoft applications). PeopleSoft applications are delivering instances of workcenters and dashboards with their products. Some are available now, and more are coming in future releases. These delivered workcenter and dashboard instances require no additional licenses and no additional installations beyond Tools and the applications that provide them. In addition, the workcenter and dashboard frameworks provided by PeopleTools can be used by customers to build their own workcenters and dashboards, and it's quite easy and simple to do so.
What are Their Differences? What Purposes do they Serve?
Workcenters, Dashboards and the Interaction Hub appear somewhat similar. They all contain pagelets, and have some visual characteristics in common. However, their strengths and purposes are very different, and they were designed to provide different benefits to your PeopleSoft ecosystem.
Workcenters and Dashboards have the following characteristics:
- Designed for specific roles
- Focus on the daily tasks of those roles
- Help to streamline the work performed most often
- Personal view of my work world
- Make navigation and search easier and quicker, particularly for transactions and decision support
- Reports and data needed for day-to-day work
- Personalizable, but minimal
- Delivered by PeopleSoft applications, but can be altered by customers for their requirements
- Customers can create their own
- Workcenters can be used for guided processes
The Interaction Hub is designed to aggregate content from multiple applications and is used to unify the user experience of those applications. It offers a rich, web site-based user experience, and is often used to provide access to infrequently performed activities like benefits enrollment, payroll inquiries, life event changes, onboarding, and so on. Its characteristics:
- Full-featured and robust
- Centrally administered
- Pushed to a large audience
- Broad information like company news
- Infrequent activities like benefits, not day-to-day tasks
- Self-service, access to employer info
- Central launch point for other activities; can navigate to workcenters and dashboards
- Deployed by customers or consultants; instances not delivered by PeopleSoft (at this time)
- Content management
- Unified PeopleSoft application navigation
Although these products are quite different and serve different purposes in your PeopleSoft environment, they can be used together to provide a richer, more efficient and engaging user experience for all your user communities.

    Read the article

  • How to display Sharepoint Data in a Windows Forms Application

    - by Michael M. Bangoy
In this post I'm going to demonstrate how to retrieve SharePoint data and display it in a Windows Forms application.
1. Open Visual Studio 2010 and create a new project.
2. In the project template, select Windows Forms Application.
3. In order to communicate with SharePoint from a Windows Forms application, we need to add references to the two SharePoint client DLLs located in c:\Program Files\Common Files\Microsoft Shared\Web Server Extensions\14\ISAPI.
4. Select Microsoft.SharePoint.Client.dll and Microsoft.SharePoint.Client.Runtime.dll. (Your solution should look like the one below.)
5. Open Form1 in design view and, from the Toolbox, add a Button, a TextBox, a Label and a DataGridView to the form.
6. Next, double-click the Load button; this opens the code view of the form. Add a using statement to reference the SharePoint client library, then create two methods, one to load the site title and one to load the lists. See below:

    using System;
    using System.Collections.Generic;
    using System.ComponentModel;
    using System.Data;
    using System.Drawing;
    using System.Linq;
    using System.Text;
    using System.Security;
    using System.Windows.Forms;
    using SP = Microsoft.SharePoint.Client;

    namespace ClientObjectModel
    {
        public partial class Form1 : Form
        {
            // url of the SharePoint site
            const string _context = "theurlofthesharepointsite";

            public Form1()
            {
                InitializeComponent();
            }

            private void Form1_Load(object sender, EventArgs e)
            {
            }

            private void getsitetitle()
            {
                // load the site and display its title in the TextBox
                SP.ClientContext context = new SP.ClientContext(_context);
                SP.Web _site = context.Web;
                context.Load(_site);
                context.ExecuteQuery();
                txttitle.Text = _site.Title;
                context.Dispose();
            }

            private void loadlist()
            {
                // load the site's list collection and bind the titles to the grid
                using (SP.ClientContext _clientcontext = new SP.ClientContext(_context))
                {
                    SP.ListCollection _lists = _clientcontext.Web.Lists;
                    _clientcontext.Load(_lists);
                    _clientcontext.ExecuteQuery();

                    DataTable dt = new DataTable();
                    DataColumn column = new DataColumn();
                    column.DataType = Type.GetType("System.String");
                    column.ColumnName = "List Title";
                    dt.Columns.Add(column);

                    DataRow row;
                    foreach (SP.List listitem in _lists)
                    {
                        row = dt.NewRow();
                        row["List Title"] = listitem.Title;
                        dt.Rows.Add(row);
                    }
                    dataGridView1.DataSource = dt;
                }
            }

            private void cmdload_Click(object sender, EventArgs e)
            {
                getsitetitle();
                loadlist();
            }
        }
    }

7. That's it. Hit F5 to run the application, then click the Load button. Your screen should look like the one below. Hope this helps.

    Read the article

  • The Healthy Tension That Mobility Creates

    - by Kathryn Perry
A guest post by Hernan Capdevila, Vice President, Oracle Fusion Apps
In my previous post, I talked about the value of the mobile revolution for businesses and workers. Now let me put on a different hat and view the world from the IT department and the IT leader’s viewpoint. The IT leader has different concerns – around privacy, the potential liability of information leakage, and intellectual property protection. These concerns and the leader’s goals create a healthy tension with the users. For example, effective device management becomes a must-have for the IT leader, especially if you look at the Android ecosystem as an example. There are benefits to the Android strategy, but there are also drawbacks, such as a lack of uniformity – in device management, in operating systems, and in the application taxonomy and capabilities. If you compare Android to iOS, Apple's operating system, iOS is more unified, more streamlined, and easier to manage. In either case, this is where mobile device management in the cloud makes good sense. I don't think IT departments should be hosting device management and managing that complexity. It should be a cloud service, and I predict it's going to be key for our customers.
A New Focus for IT Departments
So where does that leave the IT departments? I think their futures are in governance, which is a more strategic play than a tactical one. Device management is tactical, and it's the “now” topic. But the mobile phenomenon, if you will, is going to drive significant change in terms of how IT plans, hosts, and deploys enterprise applications. For example, opening up enterprise applications for mobile users presents some challenges unless you deploy more complicated network topologies, such as virtual private networks and threat protection technology. If you really want employees to be mobile, you need to remove those kinds of barriers. But I don’t think IT departments want to wrestle with exposing their private enterprise data centers and being responsible for hosted business applications – applications that, in a sense, they’re making vulnerable to the public world. This opens up a significant need and a significant driver for cloud applications. However, it's not just about taking away the complexity – it's also about taking away the responsibility. Why should every business have to carry the responsibility and figure out all the nuts and bolts of how to protect itself in this public, mobile world? When you use apps in the cloud, either your vendor or your hosting partner should have figured all that out. They need to assure the business that they are adhering to all sorts of security and compliance regulations, so users can be connected and have access to information anywhere, anytime.
More Ideas and Better Service
What’s more interesting is the world of possibilities that the connected, cloud-based world enables. I believe that the one-size-fits-all, uber-best-practices, lowest-common-denominator-like capabilities will go away. IT will now be able to solve very specific business challenges for the different corporate functions it serves. In this new world, IT will play a key role in enabling different organizations within a company to be best in class and in delivering greater value to the line-of-business managers. IT will actually help to differentiate. The net result is a more agile workforce and business, because each department is getting work done its own way.

    Read the article
