Search Results


  • Android - passing data between Activities

    - by Bill Osuch
    (To follow along with this, you should understand the basics of starting new activities: Link ) The easiest way to pass data from one activity to another is to create your own custom bundle and pass it to your new class. First, create two new activities called Search and SearchResults (make sure you add the second one you create to the AndroidManifest.xml file!), and create xml layout files for each. Search's file should look like this:

        <?xml version="1.0" encoding="utf-8"?>
        <LinearLayout
            xmlns:android="http://schemas.android.com/apk/res/android"
            android:layout_width="fill_parent"
            android:layout_height="fill_parent"
            android:orientation="vertical">
            <TextView
                android:layout_width="fill_parent"
                android:layout_height="wrap_content"
                android:text="Name:"/>
            <EditText
                android:id="@+id/edittext"
                android:layout_width="fill_parent"
                android:layout_height="wrap_content"/>
            <TextView
                android:layout_width="fill_parent"
                android:layout_height="wrap_content"
                android:text="ID Number:"/>
            <EditText
                android:id="@+id/edittext2"
                android:layout_width="fill_parent"
                android:layout_height="wrap_content"/>
            <Button
                android:id="@+id/btnSearch"
                android:layout_width="fill_parent"
                android:layout_height="wrap_content"
                android:text="Search" />
        </LinearLayout>

    and SearchResults' file should look like this:

        <?xml version="1.0" encoding="utf-8"?>
        <LinearLayout
            xmlns:android="http://schemas.android.com/apk/res/android"
            android:layout_width="fill_parent"
            android:layout_height="fill_parent"
            android:orientation="vertical">
            <TextView
                android:id="@+id/txtName"
                android:layout_width="fill_parent"
                android:layout_height="wrap_content"/>
            <TextView
                android:id="@+id/txtState"
                android:layout_width="fill_parent"
                android:layout_height="wrap_content"
                android:text="No data"/>
        </LinearLayout>

    Next, we'll override the onCreate method of Search:

        @Override
        public void onCreate(Bundle savedInstanceState) {
            super.onCreate(savedInstanceState);
            setContentView(R.layout.search);
            Button search = (Button) findViewById(R.id.btnSearch);
            search.setOnClickListener(new View.OnClickListener() {
                public void onClick(View view) {
                    Intent intent = new Intent(Search.this, SearchResults.class);
                    Bundle b = new Bundle();
                    EditText txt1 = (EditText) findViewById(R.id.edittext);
                    EditText txt2 = (EditText) findViewById(R.id.edittext2);
                    b.putString("name", txt1.getText().toString());
                    b.putInt("state", Integer.parseInt(txt2.getText().toString()));
                    //Add the set of extended data to the intent and start it
                    intent.putExtras(b);
                    startActivity(intent);
                }
            });
        }

    This is very similar to the previous example, except here we're creating our own bundle, adding some key/value pairs to it, and adding it to the intent.
    Now, to retrieve the data, we just need to grab the Bundle that was passed to the new Activity and extract our values from it:

        @Override
        public void onCreate(Bundle savedInstanceState) {
            super.onCreate(savedInstanceState);
            setContentView(R.layout.search_results);
            Bundle b = getIntent().getExtras();
            int value = b.getInt("state", 0);
            String name = b.getString("name");
            TextView vw1 = (TextView) findViewById(R.id.txtName);
            TextView vw2 = (TextView) findViewById(R.id.txtState);
            vw1.setText("Name: " + name);
            vw2.setText("State: " + String.valueOf(value));
        }
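
    A couple of variations worth noting (editor's additions, not from the original post): for a handful of values you can skip the explicit Bundle and put the extras straight onto the Intent, and on the receiving side it is wise to guard against a null extras Bundle if the Activity can be launched from elsewhere:

        // Sender: putExtra() stores the values in the Intent's own extras Bundle,
        // so there is no need to build one yourself for a couple of fields.
        Intent intent = new Intent(Search.this, SearchResults.class);
        intent.putExtra("name", txt1.getText().toString());
        intent.putExtra("state", Integer.parseInt(txt2.getText().toString()));
        startActivity(intent);

        // Receiver: getExtras() returns null when no extras were attached,
        // so check before reading.
        Bundle b = getIntent().getExtras();
        if (b != null) {
            String name = b.getString("name");
            int state = b.getInt("state", 0);
        }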

  • How to store a shmup level?

    - by pek
    I am developing a 2D shmup (i.e. Aero Fighters) and I was wondering what the various ways are to store a level. Assuming that enemies are defined in their own xml file, how would you define when an enemy spawns in the level? Would it be based on time? Updates? Distance? Currently I do this based on "level time" (the amount of time the level has been running - pausing doesn't update the time). Here is an example (the serialization was done by XNA):

        <?xml version="1.0" encoding="utf-8"?>
        <XnaContent xmlns:level="pekalicious.xanor.XanorContentShared.content.level">
          <Asset Type="level:Level">
            <Enemies>
              <Enemy>
                <EnemyType>data/enemies/smallenemy</EnemyType>
                <SpawnTime>PT0S</SpawnTime>
                <NumberOfSpawns>60</NumberOfSpawns>
                <SpawnOffset>PT0.2S</SpawnOffset>
              </Enemy>
              <Enemy>
                <EnemyType>data/enemies/secondenemy</EnemyType>
                <SpawnTime>PT0S</SpawnTime>
                <NumberOfSpawns>10</NumberOfSpawns>
                <SpawnOffset>PT0.5S</SpawnOffset>
              </Enemy>
              <Enemy>
                <EnemyType>data/enemies/secondenemy</EnemyType>
                <SpawnTime>PT20S</SpawnTime>
                <NumberOfSpawns>10</NumberOfSpawns>
                <SpawnOffset>PT0.5S</SpawnOffset>
              </Enemy>
              <Enemy>
                <EnemyType>data/enemies/boss1</EnemyType>
                <SpawnTime>PT30S</SpawnTime>
                <NumberOfSpawns>1</NumberOfSpawns>
                <SpawnOffset>PT0S</SpawnOffset>
              </Enemy>
            </Enemies>
          </Asset>
        </XnaContent>

    Each Enemy element is basically a wave of a specific enemy type. The type is defined in EnemyType, while SpawnTime is the "level time" this wave should appear. NumberOfSpawns and SpawnOffset are the number of enemies that will show up and the time between each spawn, respectively. This could be a good idea or there could be better ones out there. I'm not sure. I would like to see some opinions and ideas. I have two problems with this: spawning an enemy correctly and creating a level editor. The level editor thing is an entirely different problem (which I will probably post in the future :P). As for spawning correctly, the problem lies in the fact that I have a variable update time, so I need to make sure I don't miss an enemy spawn because the spawn offset is too small, or because the update took a little more time. I kinda fixed it for the most part, but it seems to me that the problem is with how I store the level. So, any ideas? Comments? Thank you in advance.
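
    One way to make the spawn check robust to a variable timestep (an editor's sketch, written in Java rather than the poster's C#/XNA, with field names assumed from the XML above) is to loop each update until the wave has caught up with the current level time, so a long frame emits every overdue spawn instead of skipping them:

        class EnemyWave {
            double spawnTime;      // level time of the first spawn, in seconds
            double spawnOffset;    // delay between consecutive spawns
            int numberOfSpawns;    // total enemies in the wave
            int spawned;           // how many have been emitted so far

            // Call once per update with the current level time; the while loop
            // catches up on several overdue spawns in a single long frame.
            void update(double levelTime) {
                while (spawned < numberOfSpawns
                        && levelTime >= spawnTime + spawned * spawnOffset) {
                    spawnEnemy();
                    spawned++;
                }
            }

            void spawnEnemy() { /* create the enemy at its entry position */ }
        }

    Because the condition is recomputed from the wave's start time rather than from the previous frame, a paused or slow update can never drop a spawn; it just emits them in a burst on the next update.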

  • Developer Profile: Marcelo Quinta

    - by Tori Wieldt
    As the Java developer community lead for Oracle, the best part of my job is going to conferences and meeting Java developers. I've had the pleasure to meet men and women who are smart, fun and passionate about Java—they make the Java community happen. The current issue of Java Magazine provides profiles of other young Java developers around the world. Subscribe to read them!

    Marcelo Quinta
    Age: 24
    Occupation: Professor, Federal University of Goias
    Location: Goias, Brazil
    Twitter: @mrquinta
    Marcelo (white polo shirt, center) and class

    OTN: When did you realize that you were good at programming? When I was in graduate school, I developed a Java system that worked out the logic of getting the maximum coverage using the fewest resources (for example, the minimum number of soldiers, and their positions, needed for a battlefield). It may not seem difficult, but it's a hard problem to solve mathematically. Here I was, a freshman, who came up with an app "solving" it. Some Master's students use my software today. It was then I began to believe in what I could do.

    OTN: What most inspires you about programming? I'm really inspired by the challenge and tension that come from solving a complicated problem. Lately, I've been building a new system focused on education and digital inclusion, and it was very gratifying to see it working and delivering results. I felt useful to the community.

    OTN: What are some things you would like to accomplish using Java? Java is a very strong platform and it gives us the power to develop applications for different devices and purposes, from home automation with little microcontrollers to systems on big servers. I would like to build more systems that integrate into people's lives and different business contexts, from PCs to cell phones and tablets, ubiquitously. I think IT has reached a level where the current challenge is to make systems that leverage existing technologies that are present in daily life. Java gives us a very interesting set of options to put that into practice, especially in systems that require more strength.

    OTN: What technical insights into Java technology have been most important to you? I have really enjoyed the way that Java has evolved with Oracle, with new features added, many of them suggested by the community. Java 7 came with substantial improvements in the language syntax, and it seems that Java 8 takes it even further. I also made some applications in JavaFX and liked the new version. The Java GUI is on a higher level than what's offered out there. I saw some JavaFX prototypes running on modern tablets and I got excited.

    OTN: What would you like to be doing 10 years from now? I want my work to make a difference for individuals or an institution. It would be interesting to be improving one of the systems that I am making today. Recently I've been mixing my hobbies and work, playing with Arduino and home automation. The JHome project, winner of the Duke's Choice Award in 2011, is very interesting to me.

    OTN: Do you listen to music when you write code? If so, what kind? Absolutely! I usually listen to electronic music (Prodigy, Fatboy Slim and Paul Oakenfold), rock (Metallica, Strokes, The Black Keys) and a bit of local alternative music. I live in Goiânia, "The Brazilian Seattle," and I profit from it very well.

    OTN: What do you do when you're not programming? I like to play guitar and to fish. Last year I sold my economy car and bought an old jeep. Some people called me crazy, but since then I've been having a great time and having adventures on the backroads of Brazil. Once I broke my glasses in a funny game involving my car's suspension and the airbags.

    OTN: Does your girlfriend think you are crazy? Crazy is someone who doesn't have the courage to do strange things! My girlfriend likes my style. =D

    Subscribe to the free Java Magazine to read profiles of other young Java developers. Visit the Java channel on YouTube to see a video of Marcelo in action.

  • Have you ever wondered...?

    - by diana.gray
    I've often wondered why folks do the same thing over and over. For some of us, it's because we "don't get it" and there's an abundance of TV talk shows that will help us analyze the why of it. Dr. Phil is all too eager to ask "...and how's that working for you?". But I'm not referring to being stuck in a destructive pattern or denial. I'm really talking about doing something over and over because you have found a joy, a comfort, a boost of energy from an activity or event. For example, how many times have I planted bulbs in November or December only to be amazed by their reach, colors, and fragrance in early spring? Or baked fresh cookies and allowed the aroma to fill the house? Or kissed a sleeping baby held gently in my arms and been reminded of how tiny and fragile we all are?

    I've often wondered why it is that I get so much out of something I've done so many times. I think it's because I've changed. The activity may be the same, but in the preceding days, months and years I've had new experiences, challenges, joys and sorrows that have shaped me. I'm different.

    The same is true about attending the Professional Businesswomen of California (PBWC) conference. Although the conference is an annual event held at San Francisco's Moscone Center, I still enjoy being with 3,000 other women like me. Yes, we work at different companies and in different industries, have different lifestyles and are at different stages in our professional careers and personal lives; but we are all alike in that we bring the NEW me each year that we attend. This year I can cheer when Safra Catz, President of Oracle, encourages us to trust our intuition; that "if something doesn't make sense, it doesn't make sense". And I can warmly introduce myself to Lisa Askins, Cheryl Melching's business partner at Center Stage Group, when I would have been too intimidated to do so last year. This year I can commit to new challenges such as "no whining, no excuses and no gossip" as suggested by Roxanne Emmerich, a goal that I would have wavered on last year. I can also embrace the suggestion given by Dr. Ian Smith to "spend one hour each day" on me - giving myself time to rejuvenate.

    A friend, when asked if she was attending PBWC this year, said "I've attended the conference several times and there's nothing new!" My perspective is that WE are what makes PBWC's annual conference new. We are far different in 2010 than we were in 2009. We are learning, growing, developing and shedding, and that's what makes the conference fresh, vibrant, rewarding, and lasting. It is the diversity of women coming together that makes it new. By sharing our experiences, we discover. By meeting with one another professionally and personally, we connect. And by applying the wisdom learned, we shine. We are reNEW-ed. It shows in our fresh ideas, confident interactions, strategic decisions and successful businesses. This refreshed approach is what our companies want and need, what our families depend on, and what our communities and nation look to for creative solutions to pressing concerns. Thanks, Oracle, for your continued support, and thanks, PBWC, for providing an annual day to be reNEW-ed.

  • Perl: comparing two data files as 2D arrays to find one-to-one matches [migrated]

    - by roman serpa
    I'm writing a program that uses combinations of variables (combiData.txt, 63 rows x a varying number of columns) to analyse a data table (j1j2_1.csv, 1,000 rows x 19 columns), to count how many times each combination is repeated in the data table and which rows it comes from (for instance, tableData[row][4]). When I try to run it, however, I get the following messages:

        Use of uninitialized value $val in numeric eq (==) at rowInData.pl line 34.
        Use of reference "ARRAY(0x1a2eae4)" as array index at rowInData.pl line 56.
        Use of reference "ARRAY(0x1a1334c)" as array index at rowInData.pl line 56.
        Use of uninitialized value in subtraction (-) at rowInData.pl line 56.
        Modification of non-creatable array value attempted, subscript -1 at rowInData.pl line 56.
        nothing

    This is my code:

        #!/usr/bin/perl
        use strict;
        use warnings;

        my $line_match;
        my $countTrue;

        open (FILE1, "<combiData.txt") or die "can't open file text1.txt\n";
        my @tableCombi;
        while(<FILE1>) {
            my @row = split(' ', $_);
            push(@tableCombi, \@row);
        }
        close FILE1 || die $!;

        open (FILE2, "<j1j2_1.csv") or die "can't open file text1.txt\n";
        my @tableData;
        while(<FILE2>) {
            my @row2 = split(/\s*,\s*/, $_);
            push(@tableData, \@row2);
        }
        close FILE2 || die $!;

        #function transform combiData.txt variable (position) to the real value
        #that I have to find in the data table.
        sub trueVal($){
            my ($val) = $_[0];
            if($val == 7){ return ('nonsynonymous_SNV'); }
            elsif( $val == 14) { return '1'; }
            elsif( $val == 15) { return '1'; }
            elsif( $val == 16) { return '1'; }
            elsif( $val == 17) { return '1'; }
            elsif( $val == 18) { return '1'; }
            elsif( $val == 19) { return '1'; }
            else { print 'nothing'; }
        }

        #function IntToStr (I'm not sure if it is necessary) that transforms numbers
        #to strings, to use <eq> in the third loop for the array of combinations
        #compared with the data array.
        sub IntToStr {
            return "$_[0]";
        }

        for my $combi (@tableCombi) {
            $line_match = 0;
            for my $sheetData (@tableData) {
                $countTrue = 0;
                for my $cell ( @$combi ) {
                    #my $temp = \$tableCombi[$combi][$cell];
                    #if ( trueVal($tableCombi[$combi][$cell]) eq $tableData[$sheetData][ $tableCombi[$combi][$cell] - 1 ] ){
                    #if ( IntToStr(trueVal($$temp)) eq IntToStr( $tableData[$sheetData][ $$temp-1] ) ){
                    if ( IntToStr(trueVal($tableCombi[$combi][$cell])) eq IntToStr($tableData[$sheetData][ $tableCombi[$combi][$cell] - 1 ]) ){
                        $countTrue++;
                    }
                    if ($countTrue == @$combi){
                        $line_match++;
                        #if ($line_match < 50){
                        print $tableData[$sheetData][4]." ";
                        #}
                    }
                }
            }
            print $line_match." \n";
        }

  • Get the Picture: Pinterest for Marketers

    - by Mike Stiles
    When trying to determine on which networks to conduct social marketing, the usual suspects immediately rise to the top: Facebook & Twitter, then LinkedIn (especially if you're B2B), then maybe some Google Plus to hedge SEO bets. So at what juncture do brands get excited about Pinterest? Pinterest has been easy for marketers to de-prioritize thanks to the perception its usage is so dominated by women. Um, what's wrong with that? Women make an estimated 85% of all consumer purchases. So if there are indeed over 30 million US women active on it monthly, and they do 92% of the pinning, and 84% are still active on it after 4 years, when did an audience of highly engaged, very likely sales conversions become low priority? Okay, if you're a tech B2B SaaS product like the Oracle Social Cloud, Pinterest may not be where you focus. But if you operate in the top Pinterest categories, which are truly far-reaching, it's time to take note of Pinterest's performance to date:

    40.1 million monthly users in the US. (eMarketer)
    Over 30 billion pins, half of which were pinned in the last 6 months. (Big momentum)
    75% of usage is on their mobile app. (In solid shape for the mobile migration)
    Pinterest sharing grew 58% in 2013, beating Facebook, Twitter, and LinkedIn. (ShareThis)
    Pinterest is the 3rd most popular sharing platform overall (over email), with 48% of all sharing on tablets.
    Users referred by Pinterest are 10% more likely to buy on e-commerce sites and tend to spend twice that of users coming from Facebook. (Shopify)

    To be fair, brands haven't had any paid marketing opportunities on that platform…until recently. Users are seeing Promoted Pins in both category and search feeds from rollout brands like Gap, ABC Family, Ziploc, and Nestle. Are the paid pins annoying users? It seems that, more so than on other social networks, they're fitting right into the intended user experience and being accepted, getting almost as many click-throughs as user pins. New York Magazine's Kevin Roose laid it out succinctly: Pinterest offers a place that's image-centric, search-friendly, makes things easy to purchase, makes things easy to share, and puts users in an aspirational mood to buy. Pinterest is very confident in the value of that combo and that audience, with CPM rates 5x that of the most expensive Facebook ad, plus (at least for now) required spending commitments and required pin review by Pinterest for quality. The latest developments: a continued move toward search and discovery with enhancements like Guided Search to help you hone in on what interests you, Custom Categories, and the rumored Visual Search that stands to be a liberation from text. And most recently, Pinterest has opened up its API so brands can get access to deeper insights into the best search terms and categories in which to play ball, as well as what kinds of pins stand to perform best in those areas. As we learned in our rundown this week of Social Media Examiner's Social Media Marketing Industry Report, around 50% of marketers specifically intend on upping their use of Pinterest. If you're a big believer in fishing where the fish are, that's probably an efficient position to take. @mikestiles @oraclesocial Photo: Adam Lambert_Gorwyn, freeimages.com

  • JDeveloper 11.1.2 : Command Link in Table Column Work Around

    - by Frank Nimphius
    Just figured out that in Oracle JDeveloper 11.1.2, clicking on a command link in a table does not mark the table row as selected, as was the behavior in previous releases of Oracle JDeveloper. For the time being, the following work around can be used to achieve the "old" behavior: to mark the table row as selected, you need to build and queue the table selection event in the code executed by the command link action listener. To queue a selection event, you need to know the rowKey of the row that the command link you clicked on is located in. To get this information, you add an f:attribute tag to the command link as shown below:

        <af:column sortProperty="#{bindings.DepartmentsView1.hints.DepartmentId.name}" sortable="false"
                   headerText="#{bindings.DepartmentsView1.hints.DepartmentId.label}" id="c1">
          <af:commandLink text="#{row.DepartmentId}" id="cl1" partialSubmit="true"
                          actionListener="#{BrowseBean.onCommandItemSelected}">
            <f:attribute name="rowKey" value="#{row.rowKey}"/>
          </af:commandLink>
          ...
        </af:column>

    The f:attribute tag references #{row.rowKey}, which in ADF translates to JUCtrlHierNodeBinding.getRowKey(). This information can be used in the command link action listener to compose the RowKeySet you need to queue the selected row. For simplicity, I created a table "binding" reference to the managed bean that executes the command link action. The managed bean code that is referenced from the af:commandLink actionListener property is shown next:

        public void onCommandItemSelected(ActionEvent actionEvent) {
          //get access to the clicked command link
          RichCommandLink comp = (RichCommandLink) actionEvent.getComponent();
          //read the added f:attribute value
          Key rowKey = (Key) comp.getAttributes().get("rowKey");
          //get the current selected RowKeySet from the table
          RowKeySet oldSelection = table.getSelectedRowKeys();
          //build an empty RowKeySet for the new selection
          RowKeySetImpl newSelection = new RowKeySetImpl();
          //RowKeySets contain List objects with key objects in them
          ArrayList list = new ArrayList();
          list.add(rowKey);
          newSelection.add(list);
          //create the selectionEvent and queue it
          SelectionEvent selectionEvent = new SelectionEvent(oldSelection, newSelection, table);
          selectionEvent.queue();
          //refresh the table
          AdfFacesContext.getCurrentInstance().addPartialTarget(table);
        }
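
    For reference, the table variable used in the listener is the component binding to the af:table. A minimal sketch of the accessor pair, assuming the table tag carries binding="#{BrowseBean.table}" (the attribute wiring and bean name are illustrative, not shown in the original post):

        import oracle.adf.view.rich.component.rich.data.RichTable;

        //component binding for the af:table
        //(set binding="#{BrowseBean.table}" on the table tag)
        private RichTable table;

        public void setTable(RichTable table) {
            this.table = table;
        }

        public RichTable getTable() {
            return table;
        }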

  • Apprentice Boot Camp in South Africa (Part 2)

    - by Tim Koekkoek
    By Maximilian Michel (DE), Jorge Garnacho (ES), Daniel Maull (UK), Adam Griffiths (UK), Guillermo De Las Nieves (ES), Catriona McGill (UK), Ed Dunlop (UK)

    Today we have the second part of the adventures of seven apprentices from all over Europe in South Africa!

    Kruger National Park & other experiences
    Going to the Kruger National Park was definitely an experience we will all remember for the rest of our lives. This trip, organised by Patrick Fitzgerald, owner of the Travellers Nest (where we all stayed), took us from the hustle and bustle of Joburg to experience what Africa is all about: the wild! Although the first week's training prior to this trip was going very well, we all knew this was to be a very nice break before we started the second week of training. And we were right - the animals, scenery and sights we saw were simply incredible and, like I said, something we will remember for the rest of our lives. To see lions, elephants, cheetahs, rhinos and many more in a zoo is one thing, but to see them in the wild, in their natural habitat, is very special, and I personally only realised this from the early 5 am start on the first morning in the Kruger, which was definitely worth it. It was not only about the safari: we also ate some wonderful food, in particular on the Saturday night, when Patrick made us a traditional South African braai, which was one of my favourite meals of the whole two weeks. After the Kruger National Park we had a whole day of traveling back to Johannesburg, but even this was made into a good day by our hosts. Despite the early start on the road, it was all worth it by the time we reached God's Window. The walk to the top was made a lot harder by all the steaks we had eaten in the first week, but the hard walk was worth it at the top, with views that stretched for miles.

    The Food
    The food in South Africa is typically meat, and in big amounts; while there we ate a lot of big beef steaks, ribs and kudu sausage. All of the meat we ate was usually cooked with a sauce such as a barbecue glaze. The restaurants we visited were:
    Upperdeck Restaurant, with live music and a great terrace to eat on; the atmosphere was good for enjoying the music and our food. Most of us ate spare ribs that weighed 600 g, with barbecue sauce that was delicious.
    Die Bosvelder Pub & Restaurant, a restaurant with very surprising decor: the walls are covered with many of South Africa's famous animals. The food was maybe the best we ate in South Africa. Our orders were: Springbokvlakte lambs' neck stew, beef in gravy, and steaks topped with cheese and then more meat on top! All meals were accompanied by a selection of cauliflower in white sauce, spinach and carrots.
    Pepper Chair Restaurant, where the specialty is T-bone steaks of 1.4 kg, though most of us were happy to attempt the 1 kg version. Cooked with barbecue sauce over the meat, it was very good! The only problem was their size, causing the meat to get cold if you did not eat it very fast! (Pictured: all of us waiting for our 1.0 kg T-bone steaks, including our Senior Director EMEA Systems Support Germany & Switzerland, Werner Hoellrigl.)
    The Godfather Restaurant - the food here was more meat, in abundance. We ate great ribs, hamburgers and steaks, all accompanied with small plates of carrots and sauteed spinach - very good.

    We had two great weeks in South Africa! If you want to join Oracle, then check http://campus.oracle.com

  • Setting up a Carousel Component in ADF Mobile

    - by Shay Shmeltzer
    The Carousel component is one of the slicker ways of showing collections of data, and on a mobile device it works really great with the finger swipe gesture. Using the Carousel component in ADF Mobile is similar to using it in regular web ADF applications, with one major change - right now you can't drag a collection from the data control palette and drop it as a carousel. So here is a quick work around for that, and details about setting up carousels in your application. The first thing you'll need is a data control that returns an array of records. In my demo I'm using the Emps collection that you can get from following this tutorial. Then you drag the emps collection and drop it in your amx page as an ADF Mobile iterator. We are doing this as a shortcut to getting the right binding needed for a carousel in our page. If you look now in your page's bindings you'll see something like this: You can now comment out the whole iterator code in your page's source.

    Next, let's add the carousel. From the component palette, drag the carousel (from the data view category) to the page. Next, drag a carousel item and drop it in the nodeStamp facet of the carousel. Now we'll hook up the carousel to the binding we got from the iterator - this is quite simple: just copy the var and value attributes from the iterator tag to the carousel tag:

        var="row" value="#{bindings.emps.collectionModel}"

    Next, drop a panelForm, or another layout panel, into the carousel item. Into that panelForm you can now drop items and bind their value property to the row variable's attributes - basically copying the way the fields appear in the iterator, for example: value="#{row.hireDate}". By the way, you can also copy other attributes, like the label. And that's it. Your code should end up looking something like this:

        <amx:carousel id="c1" var="row" value="#{bindings.emps.collectionModel}">
          <amx:facet name="nodeStamp">
            <amx:carouselItem id="ci1">
              <amx:panelFormLayout id="pfl1">
                <amx:inputText label="#{bindings.emps.hints.salary.label}" value="#{row.salary}" id="it1"/>
                <amx:inputText label="#{bindings.emps.hints.name.label}" value="#{row.name}" id="it2"/>
              </amx:panelFormLayout>
            </amx:carouselItem>
          </amx:facet>
        </amx:carousel>

    And when you run your application it will look like this:

  • Where Twitter Stands Heading Into 2013

    - by Mike Stiles
    As Twitter continued throughout 2012 to be a stage on which global politics and culture played itself out, the company itself underwent some adjustments that give us a good indication of what users and brands can expect from the platform in 2013. The power of the network did anything but fade. Celebrities continued to use it to connect one-on-one. Even the Pope signed on this year. It continued to fuel revolutions. It played an exponentially large factor in this US Presidential election. And around the world, the freedom to speak was challenged as users were fired, sued, sometimes even jailed for their tweets. Expect more of the same in 2013, as Twitter has entrenched itself, for individuals, causes and brands, as the fastest, easiest, most efficient way to message the masses so some measure of impact can come from it. It's changed everything, and it's not finished.

    These fun facts reveal the position of strength with which Twitter enters 2013:
    It now generates a billion tweets every 2.5 days.
    It has 500 million+ users.
    The average Twitter user has tweeted 307 times.
    32% of everyone using the Internet uses Twitter.
    It's expected to bring in $540 million in ad revenue by 2014.
    11 new accounts are created every second.

    High-level Executive Summary: people love it, people use it, and they're going to keep loving and using it. Whether or not outside developers love it is a different matter. 2012 marked a shift from welcoming the third-party support that played at least some role in Twitter being so warmly embraced, to discouraging anything that replicates what Twitter can do itself…or plans to do itself. It's not the open playground it once was. Now Twitter must spend 2013 proving it can innovate in-house and keep us just as entranced.

    Likewise, Twitter is distancing itself from Facebook. Images from the #1 platform's Instagram don't work on Twitter anymore, and Twitter is rolling out its own photo filter product. Where the two have lived in a "plenty of room for everybody" symbiosis up to now, 2013 could see the giants ramping up a full-on rivalry.

    Twitter is exhibiting a deliberate strategy. Updates have centered on more visually appealing search results, and on making finding and sharing content easier. Deals have been cut with some media entities so their content stands out. CEO Dick Costolo has said tweets aren't the attraction, they're what leads you to content. Twitter aims to be a key distributor of media and info. Add the addition of former News Corp. President Peter Chernin to the board, and their hashtag landing page experience for events, and their media behemoth ambitions get pretty clear.

    There are challenges ahead, and Costolo has also laid those out: entry into China, figuring out how to have Twitter deliver both comprehensive and relevant, targeted experiences, and the visualization of big data.

    What does this mean for corporations? They can expect a more media-rich evolution and growing emphases on imagery. They can expect more opportunities to create great media content and leverage Twitter for its distribution. And they can expect new ways to surface in searches.

    Are brands diving in? 56% of customer tweets to companies get completely and totally ignored. Ugh. A study Twitter recently conducted with Compete shows people who see tweets from retailers are more likely to buy a product. And, the more retailer tweets they see, the more likely they are to purchase on the retail site.
    As more of those tweets point to engaging media content from the brand, the results should get even better. Twitter appears ready for 2013. Enterprise brands have some work to do. @mikestiles Photo: Stuart Miles, freedigitalphotos.net

  • An Actionable Common Approach to Federal Enterprise Architecture

    - by TedMcLaughlan
    The recent "Common Approach to Federal Enterprise Architecture" (US Executive Office of the President, May 2, 2012) is extremely timely and well-organized guidance for the Federal IT investment and deployment community, as useful for Federal Departments and Agencies as it is for their stakeholders and integration partners. The guidance not only helps IT Program Planners and Managers, but also informs and prepares constituents who may be the beneficiaries of, or otherwise impacted by, the investment. The FEA Common Approach extends from and builds on the rapidly-maturing Federal Enterprise Architecture Framework (FEAF) and its associated artifacts and standards, already included to a large degree in the annual Federal Portfolio and Investment Management processes – for example the OMB's Exhibit 300 (i.e. Business Case justification for IT investments).

    A very interesting element of this Approach is the very necessary guidance for actually using an Enterprise Architecture (EA) and/or its collateral – good guidance for any organization charged with maintaining a broad portfolio of IT investments. The associated FEA Reference Models (i.e. the BRM, DRM, TRM, etc.) are very helpful frameworks for organizing, understanding, communicating and standardizing across agencies with respect to vocabularies, architecture patterns and technology standards. Determining when, how and to what level of detail to include these reference models in the typically long-running Federal IT acquisition cycles wasn't always clear, however, particularly during the first interactions of a Program's technical and functional leadership with the Mission owners and investment planners. This typically occurs as an agency begins the process of describing its strategy and business case for allocation of new Federal funding, reacting to things like new legislation or policy, real or anticipated mission challenges, or straightforward ROI opportunities (for example the introduction of new technologies that deliver significant cost savings).

    The early artifacts (i.e. Resource Allocation Plans, Acquisition Plans, Exhibit 300's or other Business Case materials, etc.) of the intersection between Mission owners, IT and Program Managers are far easier to understand and discuss when the overlay of an evolved, actionable Enterprise Architecture (such as the FEA) is applied. "Actionable" is the key word – too many Public Service entity EAs (including the FEA) have for too long been used simply as a very highly-abstracted standards reference, duly maintained and nominally enforced by an Enterprise or System Architect's office.

    Refreshing elements of this recent FEA Common Approach include one of the first Federally-documented acknowledgements of the "Solution Architect" (the "problem-solving" role). This role collaborates with the Enterprise, System and Business Architecture communities, primarily on completing actual "EA Roadmap" documents. These are roadmaps grounded in real cost, technical and functional details that are fully aligned both with contextual expectations (for example the new "Digital Government Strategy" and its required roadmap deliverables) and with the rapidly increasing complexities of today's more portable and transparent IT solutions.

    We also expect some very critical synergies to develop in early IT investment cycles between this new breed of "Federal Enterprise Solution Architect" and the first waves of the newly-formal "Federal IT Program Manager" roles operating under more standardized "critical competency" expectations (including EA), which are likely already to be seriously influencing the quality of annual CPIC (Capital Planning and Investment Control) processes. Our Oracle Enterprise Strategy Team (EST) and associated Oracle Enterprise Architecture (OEA) practices are already engaged in promoting and leveraging the visibility of Enterprise Architecture as a key contributor to early IT investment validation, and we look forward in particular to seeing the real, citizen-centric benefits of this FEA Common Approach surface across the entire Public Service CPIC domain - Federal, State, Local, Tribal and otherwise. Read more Enterprise Architecture blog posts for additional EA insight!

  • The Latest Major Release of AutoVue is Now Available!

    - by Pam Petropoulos
    Click here to read the full press release. To learn more about AutoVue 20.2, check out the What's New in AutoVue 20.2 Datasheet.

    AutoVue 20.2 continues to set the standard for enterprise-level visualization with Augmented Business Visualization, a new paradigm which reconciles information and business data from multiple sources into a single view, providing rich and actionable visual decision-making environments. The release also includes: capabilities that enhance end-to-end approval workflow; solutions to visually enable the mobile workforce; and support for the latest manufacturing and high-tech formats.

    New capabilities in release 20.2 include:
    · Enhancements to the Augmented Business Visualization framework
      o Creation of 2D hotspots has been extended in 2D drawings, PDF and image files, and hotspots can now be defined as regional boxes rather than just text strings
      o New 3D hotspot links in models and drawings; parts or components of 3D models can be selected to create hotspot links
    · Enhanced end-to-end approval workflows with digital stamping and batch stamping improvements
    · Solutions that visually enable the mobile workforce and extend enterprise visualization to mobile devices, including iPads, through OVDI (Oracle Virtual Desktop Infrastructure)
    · Enhancements to AutoVue enterprise readiness: reliability and performance improvements, as well as security enhancements which adhere to Oracle's Software Security Assurance standards
    · Timely support for new MCAD, ECAD, and Office formats
    · New 20.2 versions of AutoVue Document Print Services and Integration SDK (iSDK)
    · New Dutch language availability

    The press release also contains terrific supporting quotes from AutoVue customers and partners.

    "AutoVue's stamping enhancements will greatly benefit our building permit management processes," said Ties Kremer, Information Manager, Noordenveld Municipality, Netherlands. "The ability to batch stamp documents will speed up our approval processes, enable us to save time and money, and help us meet our regulatory compliance obligations."

    "AutoVue provides our non-technical teams in marketing and sales with access to customer order requirements and supporting CAD documents and drawings," said James Lim, Regional Technical Systems Manager at Molex Incorporated. "AutoVue 20.2 has enabled us to refine our quotation process, and reduce order errors."

    "We are excited about our use of AutoVue's Augmented Business Visualization framework, which will offer Meridian users enhanced access to related technical documentation," said Edwin van Dijk, Director of Product Management, BlueCielo. "By including AutoVue's new regional hotspot capabilities within BlueCielo Meridian Enterprise, the context of engineering information is carried over into the visual representation of complex assets, thereby helping us to improve productivity and operational excellence."

  • Non use of persisted data

    - by Dave Ballantyne
    Working at a client site - that in itself is good to say - I ran into a set of circumstances that made me ponder, and appreciate, the optimizer engine a bit more. Working on optimizing a stored procedure, I found a piece of code similar to:

        select BillToAddressID, Rowguid, dbo.udfCleanGuid(rowguid)
        from sales.salesorderheader
        where BillToAddressID = 985

    A lovely scalar UDF was being used - in actuality it was used as part of the WHERE clause, but it's simplified here. Normally I would use an inline table valued function here, but in this case it wasn't a good option. So this seemed like a pretty good case to use a persisted column to improve performance. The supporting index was already defined as:

        create index idxBill on sales.salesorderheader(BillToAddressID) include (rowguid)

    and the function code is:

        Create Function udfCleanGuid(@GUID uniqueidentifier)
        returns varchar(255)
        with schemabinding
        as
        begin
            Declare @RetStr varchar(255)
            Select @RetStr = CAST(@Guid as varchar(255))
            Select @RetStr = REPLACE(@RetStr, '-', '')
            return @RetStr
        end

    Executing the Select statement produced a plan with nothing surprising: a seek to find the data and a compute scalar to execute the UDF. Let's get optimizing and remove the UDF with a persisted column:

        Alter table sales.salesorderheader
        add CleanedGuid as dbo.udfCleanGuid(rowguid) PERSISTED

    A subtle change to the SELECT statement…

        select BillToAddressID, CleanedGuid
        from sales.salesorderheader
        where BillToAddressID = 985

    and our new optimized plan looks… not a lot different from before! We are using persisted data on our table, so where is the lookup to fetch it? It didn't happen; it was recalculated. Looking at the properties of the relevant compute scalar would confirm this, but a more graphic example is shown in the profiler SP:StatementCompleted event. Why did the recalculation happen? Remember the index definition: it has included the original guid to avoid the lookup. The optimizer knows this column will be passed into the UDF, run through its logic, and has decided that to recalculate is cheaper than the lookup. That may or may not be the case in actuality; the optimizer has no idea of the real cost of a scalar UDF. IMO the default cost of a scalar UDF should be seen as a lot higher than it is, since in practice it invariably is higher. Knowing this, how do we avoid the function call? Dropping the guid from the index is not an option; there may be other code reliant on it. We are left with only one real option: add the persisted column into the index.

        drop index Sales.SalesOrderHeader.idxBill
        go
        create index idxBill on sales.salesorderheader(BillToAddressID) include (rowguid, cleanedguid)

    Now if we repeat the statement:

        select BillToAddressID, CleanedGuid
        from sales.salesorderheader
        where BillToAddressID = 985

    We still have a compute scalar operator, but this time it wasn't used to recalculate the persisted data. This can be confirmed with profiler again. The takeaway here is: just because you have persisted data, don't automatically assume that it is being used.

  • The Rise of Project Intelligence and Why It Matters

    - by Melissa Centurio Lopes
    By Amy DeWolf

    Are you doing any of these in your organization? How are you leveraging historical data to forecast projects?

    There's a lot going on in government today. The economic pressures agencies feel from the uncertainty of budget cuts and sequestration affect every part of an organization, including the Project Management Office (PMO). The PMO is responsible for monitoring and administering government IT projects. As time goes on, priorities shift, technology advances, and new regulations are imposed, all of which make planning and executing projects more difficult. For example, think about your own projects. How many boxes do you need to check and hoops do you need to jump through to ensure you comply with new regulations? While new regulations and technology advancements can be a good thing, they add an additional layer of complexity to already complex projects.

    To overcome some of these pressures, particularly new regulations, many in the PMO world are adopting a new approach - Project Intelligence (PI). According to a new Oracle Primavera white paper, The Rise of Project Intelligence: When Project Management is Just Not Enough, "PI uses Business Intelligence methods to leverage historical project data to make more informed decisions and greatly enhance project execution." Currently, project managers plan and forecast the possible phases in an execution cycle. However, most project managers don't have the proper tools to do this as effectively as they would like. As the white paper noted, "The underlying deficiencies in most forecasting approaches are that 1) the PM fails in most instances to leverage historical data and 2) the PM doesn't employ current Business Intelligence tools." PI seeks to overturn this by combining the modeling tools used in Business Intelligence for projects with the understanding of Emotional Intelligence for managing people.

    Simply put, Project Intelligence is built on four main pillars:
    1. Actively use historical data to forecast project cycles
    2. Understand the intricacies of complex projects
    3. Enhance social and emotional intelligence in projects
    4. Actively use Business Intelligence tools

    Read our complimentary whitepaper and discover the importance of emotional intelligence and best practices for improving projects, specifically in terms of communication.

  • Ubiquitous BIP

    - by Tim Dexter
    The last number I heard from Mike and the PM team was that BIP is now embedded in more than 40 Oracle products. That's a lot of products to keep track of and to help out with new releases, etc. It's interesting to see how internal Oracle product groups have integrated BIP into their products. Just as you might when integrating BIP, they have had to make a choice about how to integrate.

    1. Library level - BIP is a pure Java app, and at the bottom of the architecture is a group of Java libraries that expose APIs you can use. They fall into three main areas: data extraction, template processing and formatting, and delivery. There are post-processing capabilities, but those APIs are embedded within the template processing libraries. Taking this integration route, you are going to need to manage templates, data extraction and processing yourself. You'll have your own UI to allow users to control all of this for themselves. Ultimate control, but some effort to build and maintain. I have been trawling some of the products during a coffee break. I found a great post on the reporting capabilities provided by BIP in the records management product within WebCenter Content 11g. This integration falls into the first category; content manager looks after the report artifacts itself and provides you the UI to manage and run the reports.

    2. Web Service level - further up the stack is the web service layer. This sits on the BI Publisher server as a set of services; runReport and scheduleReport are the main protagonists. However, you can also manage the reports and users (locally managed) on the server and the catalog itself via the services layer. Taking this route, you still need to provide the user interface to choose reports and run them, but the creation and management of the reports is all handled by the Publisher server. I have worked with a few customers on this approach. The web services provide the ability to retrieve a list of reports the user can access, then the parameters and LOVs for the selected report, and finally a service to submit the report on the server.

    3. Embedded BIP server UI - the final level is not so well supported yet. You can currently embed a report and its various levels of surrounding 'chrome' inside another html-based application using a URL. Check the docs here. The look and feel can be customized, but again, not easily, nor is it documented. I have messed with running the server pages inside an IFRAME - not bad, but not great.

    So, a reasonable number of choices, with varying amounts of effort involved. There is another option coming soon for all you ADF developers out there: the ability to drop a BIP report into your application pages. But that's for another post.
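
    To make the second option concrete, here is a minimal JAX-WS sketch of calling runReport on the Publisher server (an editor's illustration, not from the original post; the endpoint URL, report path, credentials and payload element names are assumptions based on the PublicReportService WSDL - verify them against your own server's WSDL before relying on this):

        import javax.xml.namespace.QName;
        import javax.xml.soap.MessageFactory;
        import javax.xml.soap.SOAPBody;
        import javax.xml.soap.SOAPElement;
        import javax.xml.soap.SOAPMessage;
        import javax.xml.ws.Dispatch;
        import javax.xml.ws.Service;
        import javax.xml.ws.soap.SOAPBinding;

        public class RunReportSketch {
            public static void main(String[] args) throws Exception {
                // Hypothetical namespace and endpoint; substitute your server's values.
                String ns = "http://xmlns.oracle.com/oxp/service/PublicReportService";
                String endpoint = "http://bipserver:9704/xmlpserver/services/PublicReportService";
                QName serviceName = new QName(ns, "PublicReportService");
                QName portName = new QName(ns, "PublicReportServicePort");

                Service service = Service.create(serviceName);
                service.addPort(portName, SOAPBinding.SOAP11HTTP_BINDING, endpoint);
                Dispatch<SOAPMessage> dispatch =
                    service.createDispatch(portName, SOAPMessage.class, Service.Mode.MESSAGE);

                // Build the runReport payload by hand; no generated proxy classes needed.
                SOAPMessage request = MessageFactory.newInstance().createMessage();
                SOAPBody body = request.getSOAPBody();
                SOAPElement run = body.addChildElement("runReport", "pub", ns);
                SOAPElement rep = run.addChildElement("reportRequest", "pub");
                rep.addChildElement("reportAbsolutePath", "pub")
                   .addTextNode("/HR/SalaryReport/SalaryReport.xdo"); // illustrative report path
                rep.addChildElement("attributeFormat", "pub").addTextNode("pdf");
                run.addChildElement("userID", "pub").addTextNode("weblogic"); // illustrative credentials
                run.addChildElement("password", "pub").addTextNode("secret");

                // The response carries the rendered report as base64 in its reportBytes element.
                SOAPMessage response = dispatch.invoke(request);
                response.writeTo(System.out);
            }
        }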

  • Managing Operational Risk of Financial Services Processes – part 2/2

    - by Sanjeev Sharma
    In my earlier blog post, I described the factors that lead to the compliance complexity of financial services processes. In this post, I will outline the business implications of the increasing process compliance complexity and the specific role of BPM in addressing the operational risk reduction objectives of regulatory compliance.

    First, let's look at the business implications of the increasing complexity of process compliance for financial institutions:
    · Increased time and cost of compliance, due to duplication of effort in conforming to regulatory requirements as processes change in response to evolving regulatory mandates, shifting business priorities, or internal/external audit requirements
    · Delays in audit reporting, due to quality issues in reconciling non-standard process KPIs and integrity concerns arising from the need to rely on multiple data sources for a given process

    Next, let's consider some approaches to managing the operational risk of business processes. Financial institutions considering reducing the operational risk of their processes, generally speaking, have two choices:
    · Rip and replace existing applications with new off-the-shelf applications.
    · Extend the capabilities of existing applications by modeling their data and process interactions, with other applications or user channels, outside of the application boundary using BPM.

    The benefit of the first approach is that compliance with new regulatory requirements would be embedded within the boundaries of these applications. However, the pre-built compliance of any packaged or custom-built application should not be mistaken as a one-shot fix for future compliance needs. The reason is that business needs and regulatory requirements inevitably outgrow the end-to-end capabilities of even the most comprehensive packaged or custom-built business application. Thus, processes that originally resided within the application will eventually spill outside the application boundary. It is precisely at such hand-offs between applications, or between overlaying processes, where vulnerabilities arise to unknown and accidental faults that potentially result in errors and lead to partial or total failure. The gist of the above argument is that processes which reside outside application boundaries - in other words, span multiple applications - constitute a latent operational risk that spans the end-to-end value chain. For instance, distortion of data flowing from an account-opening application to a credit-rating system, if left unchecked, renders compliance with "KYC" policies void even when the "KYC" checklist was enforced at the time of data capture by the account-opening application. Oracle Business Process Management is enabling financial institutions to lower the operational risk of such process "gaps" for Financial Services processes including "Customer On-boarding", "Quote-to-Contract", "Deposit/Loan Origination", "Trade Exceptions", "Interest Claim Tracking", etc. If you are faced with a similar challenge and need any guidance, feel free to drop me a note.

  • Working Towards Specialization? Your VAD Can Help You Score!

    - by Kristin Rose
    TOUCH DOWN! That's right folks, football is in full swing, and what better way to kick off football season than with a great Oracle play? Partners can now score big by side-passing the ball to their VADs, enabling them to help in the process of becoming a Specialized partner. With the new functionality now available on the OPN Competency Center, Partner PRM Administrators can grant access to their VADs and have them assist in achieving their Specialization requirements.

    Here are the rules of the game:
    · Partner Administrator must provide authorization
    · Details do not include individual users' data
    · Access can be removed anytime

    Follow the steps below to grant your VAD access to your company's Specialization progress reports. It's as simple as 1, 2, 3… Go team go!
    1. Login to the OPN Competency Center and go to "My Preferences" on the top right corner of the screen.
    2. Under "My VAD", select your Region, Country and Value Added Distributor name, then simply click "ADD VAD".
    3. Your VAD can now access your Specialization Tracker report!

    For those MVPs who want to learn more, be sure to watch this 3-minute play-by-play video on the new OPN Competency Center VAD/VAR Specialization Tracker below, and click here before game day!

    Are You Ready For Some Oracle Football?
    The OPN Communications Team

  • Web Services Example - Part 1: Declarative

    - by Denis T
    In this edition of the ADF Mobile blog we'll tackle part 1 of our Web Service examples. In this posting we'll take a look at using a declarative SOAP Web Service.

    Getting the sample code: Just click here to download a zip of the entire project. You can unzip it, load it into JDeveloper, and deploy it either to iOS or Android. Please follow the previous blog posts if you need help getting JDeveloper or ADF Mobile installed.

    Defining our Web Service: First off, we should mention that this sample code is using a public web service provided free by CDYNE Corporation that provides weather forecasts by zipcode. Sometimes this service goes down, so please ensure you know it's up before reporting that this example isn't working. Let's take a look at the web service. We created this by using the "Web Service Data Control" from the New Gallery and using this link to the wsdl: "http://wsf.cdyne.com/WeatherWS/Weather.asmx?WSDL". This web service has several methods, but we're interested in GetCityForecastByZIP, which takes a single string parameter for the zipcode, and a second method, GetWeatherInformation, that enumerates all possible forecast descriptions and associated image URLs. The latter we'll use in the next edition, but we included it here for completeness.

    Defining the Application: After adding a feature to the adfmf-feature.xml file, we added a taskflow to host the application flow. This comprises a home screen with a list containing an item for each method in the web service: "Forecast by Zip" and "Weather Info". In this application we've also decided to hide the navigation bar, since there is only one feature in the application.

    Forecast by Zip: The "Forecast By ZIP" option first presents the user with a screen where they can enter a zipcode, and when the "Search" button is tapped, it executes the GetCityForecastByZIP method. This is done by binding an Action binding to that method. The easiest way to accomplish this is to just drag & drop the method from the Data Control palette to the AMX page, drop it as a button, and let the framework hook it up for you. There is an inputText component on the page that is bound to a pageFlowScope variable called "zip". This is used as the parameter to the Action binding when it is executed. Because the actionListener attribute of the commandButton executes the Web Service each time, we ensure that the method is invoked every time the button is clicked.

    Weather Info: Unlike the previous method, this time instead of explicitly executing the web service method we are using deferred invocation. What this means is that we bind to the results of the method, and the framework executes the method when the data is required for rendering. We do this by simply dragging & dropping the results of GetWeatherInformation to the AMX page. When the page is rendered and the bindings are resolved, the framework invokes the method. This executes the method only when it is needed and fills the Data Control provider. Because we never re-execute the method, you can click from Home to Weather Info and back many times and the web service is only ever invoked once.

    Issues and Possible Improvements: One thing you will quickly realize with this example is that the error handling is done by the framework for you. For simple examples this is fine, but for real applications you'll want to customize these error messages. With the declarative invocation of web services, this is difficult.
This is one aspect we'll address in the second installment of the web service examples where we will show you how to do programmatic invocation which allows you better error handling. Another issue you will notice with this example is that we can enumerate the weather information but there isn't an easy way to use that information to show the corresponding description and image as part of the forecast results.  We'll show you how to do this in the next example.
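    As a preview of that programmatic approach, here is a rough sketch of invoking the same method from Java via AdfmfJavaUtilities.invokeDataControlMethod, which lets you catch and translate errors yourself. The data control name "Weather" and the surrounding managed bean are assumptions for illustration; the actual sample project may use different names.

        import java.util.ArrayList;
        import java.util.List;
        import oracle.adfmf.framework.api.AdfmfJavaUtilities;

        public class ForecastBean {
            public void searchByZip(String zip) {
                // Parameter names, values, and types for GetCityForecastByZIP
                List names = new ArrayList();
                names.add("ZIP");
                List params = new ArrayList();
                params.add(zip);
                List types = new ArrayList();
                types.add(String.class);
                try {
                    // Invoke the method on the (assumed) "Weather" data control
                    AdfmfJavaUtilities.invokeDataControlMethod(
                            "Weather", null, "GetCityForecastByZIP",
                            names, params, types);
                } catch (Exception e) {
                    // Here we could raise a user-friendly, application-specific
                    // message instead of the framework's default error dialog
                }
            }
        }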

    Read the article

  • Consolidation in a Database Cloud

    - by B R Clouse
    Consolidation of multiple databases onto a shared infrastructure is the next step after Standardization. The potential consolidation density is a function of the extent to which the infrastructure is shared. The three models provide increasing degrees of sharing:

    • Server: each database is deployed in a dedicated VM. Hardware is shared, but most of the software infrastructure is not. Standardization is often applied incompletely since operating environments can be moved as-is onto the shared platform. The potential for VM sprawl is an additional downside.
    • Database: multiple database instances are deployed on a shared software / hardware infrastructure. This model is very efficient and easily implemented with the features in the Oracle Database and supporting products. Many customers have moved to this model and achieved significant, measurable benefits.
    • Schema: multiple schemas are deployed within a single database instance. The most efficient model, it places constraints on the environment. Usually this model will be implemented only by customers deploying their own applications. (Note that a single deployment can combine Database and Schema consolidations.)

    Customer value: lower costs, better system utilization. In this phase of the maturity model, under-utilized hardware can be used to host more workloads, or retired and those workloads migrated to consolidation platforms. Customers benefit from higher utilization of the hardware resources, resulting in reduced data center floor space and lower power and cooling costs. And the OpEx savings from Standardization are multiplied, since there are fewer physical components (both hardware and software) to manage.

    Customer value: higher productivity. The OpEx benefits from Standardization are compounded since not only are there fewer types of things to manage, now there are fewer entities to manage. In this phase, customers discover that their IT staff has time to move away from "day-to-day" tasks and start investing in higher-value activities. Database users benefit from consolidating onto shared infrastructures by relieving themselves of the requirement to maintain their own dedicated servers. Also, if the shared infrastructure offers capabilities such as High Availability / Disaster Recovery, which are often beyond the budget and skillset of a standalone database environment, then moving to the consolidation platform can provide access to those capabilities, resulting in less downtime.

    Capabilities / Characteristics: In this phase, customers will typically deploy fixed-size clusters and consolidate on a cluster until that cluster is deemed "full," at which point a new cluster is built. Customers will define one or a few cluster architectures that are used wherever possible; occasionally there may be deployments which must be handled as exceptions. The "full" policy may be based on the number of databases deployed on the cluster, or observed peak workload, etc. IT will own the provisioning of new databases on a cluster, making the decision of when and where to place new workloads. Resources may be managed dynamically; e.g., as a priority workload increases, it may be given more CPU and memory to handle the spike. Users will be charged at a fixed, relatively coarse level, or in some cases no charging will be applied.

    Activities / Tasks: Oracle offers several tools to plan a successful consolidation. Real Application Testing (RAT) has a feature to help plan and validate database consolidations. Enterprise Manager 12c's Cloud Management Pack for Database includes a planning module. Looking ahead, customers should start planning for the Services phase by defining the Service Catalog that will be made available for database services.

    Read the article

  • The Future of M2M in a Connected World

    - by Kristin Rose
    There is no denying that the technological landscape as we know it is drastically changing, all thanks to three little words – Machine to Machine. The M2M platform has taken over as one of the industry’s main buzz words, and there is no question as to why! Just 5 months ago we had a guest post on “Machine to Machine – The Internet of Things – It’s about the Data.” Now companies are extending the use of M2M data to increase opportunity and intelligence across the Enterprise. Just this week, Oracle announced the results of its “Designing an M2M Platform for the Connected World” research, examining the evolving drivers behind ‘Machine to Machine’ (M2M) projects and how those changes are impacting solution requirements. Be sure to read this exciting report here! To Infinity and Beyond, The OPN Communications Team

    Read the article

  • WebLogic How-to Videos: Install, Upgrade, & Patch

    - by Ruma Sanyal
    Here is another great YouTube video by our product manager Monica Riccelli. She talks about installers now being standardized at Oracle for greater consistency -- no more WebLogic native installers. Also, the JDK is no longer part of the WebLogic install. The various installers she discusses include OUI, ZIP, OEPE, Coherence and more. Monica then takes us through a step-by-step install process, and after the install is complete the video walks through the configuration wizard. The ZIP installer is then discussed: its strengths (it is the smallest downloadable option, easy to use, and very popular with customers) and its limitations (development use only, not to be used in production) are highlighted. Monica then covers the configuration wizard, its usage, and when to use WLST scripts instead. The video then discusses NodeManager and its usage, and shows how to reconfigure a WebLogic domain on upgrade -- through the GUI tools or through the command-line interface. Lastly, it highlights OPatch -- a patch application tool used by our customers and standardized across all Oracle products. Really detailed video. Check it out!
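    For reference, a typical developer-ZIP flow on Linux might look like the sketch below. The archive name, paths, and JDK location are placeholders, and the video above walks through the authoritative steps:

        # JDK is no longer bundled, so point at your own installation first
        export JAVA_HOME=/usr/java/latest        # placeholder path
        export MW_HOME=/opt/wls12120             # placeholder install dir
        unzip wls1212_dev.zip -d $MW_HOME        # placeholder archive name
        cd $MW_HOME
        . ./configure.sh                         # one-time setup of the dev environment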

    Read the article

  • MySQL Enterprise Backup 3.8.2 has been released!

    - by Hema Sridharan
    MySQL Enterprise Backup v3.8.2, a maintenance release of the online MySQL backup tool, is now available for download from the My Oracle Support (MOS) website as our latest GA release. It will also be available via the Oracle Software Delivery Cloud in approximately 1-2 weeks. A brief summary of the changes in MySQL Enterprise Backup version 3.8.2 is given below.

    A. Functionality Added or Changed:

    • MySQL Enterprise Backup has a new --on-disk-full command line option. mysqlbackup could hang when the disk became full, rather than detecting the low-space condition. mysqlbackup now monitors disk space when running backup commands, and users can specify the action to take at a disk-full condition with the --on-disk-full option. For more details, refer to this page.
    • MySQL Enterprise Backup has a new progress report feature, which periodically outputs short progress indicators on its operations to user-selected destinations (for example, stdout, stderr, a file, or other choices). For more details on the progress report options, see here.

    B. Bugs Fixed:

    • When --innodb-file-per-table=ON, if a table was renamed while backup-to-image was in progress, apply-log would fail when run on the backup. (Bug #16903973)
    • MySQL Server failed to start after a backup was restored if there had been online DDL transactions on partitioned tables during the time of the backup. (Bug #16924499)
    • apply-log failed if ALTER TABLE ... REORGANIZE PARTITION was applied to partitioned InnoDB tables during backup. (Bug #16721824, Bug #16903951)
    • apply-incremental-backup might fail with an assertion error if the InnoDB tables being backed up were created in Barracuda format and with KEY_BLOCK_SIZE values different from the innodb_page_size. This fix ensures that different KEY_BLOCK_SIZE values are handled properly during incremental backup and apply-incremental-backup operations.
    • If a table was renamed following a full backup, a subsequent incremental backup could copy the .frm file with the new name, but not the associated .ibd file with the new name. After a restore, the InnoDB data dictionary could be in an inconsistent state. This issue primarily occurred if the table was not changed between the full backup and the subsequent incremental backup. (Bug #16262690)
    • After a full backup, if a table was renamed and modified, apply-incremental-backup would crash when run on the backup directory. (Bug #16262609)
    • The value of the binary log position in backup_variables.txt could be different from the output displayed during the backup-and-apply-log operation. (This issue did not occur if the backup and apply-log steps were done separately.) (Bug #16195529)
    • When using the --only-innodb-with-frm option, MySQL Enterprise Backup tried to create temporary files at unintended locations in the file system, which might cause a failure when, for example, the user had no write privilege for those locations. This fix makes sure the paths for the temporary files are correct. (Bug #14787324)
    • A backup process might hang when it ran into an LSN mismatch between a data file and the redo log. This fix makes sure the process does not hang, and it displays an error message showing the name of the problematic data file. (Bug #14791645)

    Please post your questions / comments about Backup in the forums. Thanks, MEB Team
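    To make the two new features concrete, a backup invocation using them might look like the sketch below. The option name --show-progress and the values shown (warn, stderr) are assumptions based on the descriptions above, so check the linked manual pages for the authoritative syntax:

        mysqlbackup --user=backup_admin --password \
          --backup-dir=/backups/full \
          --on-disk-full=warn \
          --show-progress=stderr \
          backup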

    Read the article

  • Challenges in Corporate Reporting - New Independent Research

    - by ndwyouell
    Earlier this year, Oracle and Accenture sponsored a global study on trends in financial close and reporting. We surveyed 1,123 finance professionals in large organizations in 12 countries around the world during February and March. Financial Consolidation and Reporting is the most mature aspect of Enterprise Performance Management, with mainstream solutions having been around for over 30 years. But of course over this time there have been many changes and very significant increases in regulation. So just what is the current state of Financial Consolidation and Reporting in our major corporations across the world? We commissioned this independent research to find out. Highlights of the results are:

    • Seeking change: Businesses recognize they need to invest in financial reporting to address the challenges they currently face. 47 percent of companies have made substantial investments over the last year in the financial close, filing, and reporting processes.
    • Ineffective investments: Despite these investments, spreadsheets (72 percent) and e-mails (68 percent) are still being used daily to track and manage reporting, suggesting that new investments are falling short of expectations.
    • Increased costs and uncertainty: The situation is so opaque that managers across the finance function are unable to fully understand the financial impact or cost implications of reporting, with 60 percent of respondents admitting they did not know the total cost of managing and publicizing their financial results.
    • Persistent challenges: 68 percent of respondents admitted that they have inadequate visibility into reporting processes, while 84 percent of finance managers surveyed said they find it difficult to control the quality of financial data across the entire reporting process.
    • Decreased effectiveness: 71 percent of finance managers feel their effectiveness is limited in some way by data-analysis-related issues, while 39 percent of C-level or VP-level respondents say their effectiveness is impaired by limited visibility.
    • Missed deadlines: Due to late changes to the chart of accounts, 15 percent of global businesses have missed statutory filings, putting their companies at risk of financial penalties and potentially impacting share value.

    The report makes it clear that investments made to date by these large organizations around the world have been uneven across the close, reporting, and filing processes, which has led to the challenges these organizations currently face in the overall process. Regardless of whether companies are using a variety of solutions or a single solution, the report shows they continue to witness increased costs, ineffectual data management, and missed reporting, which, in extreme circumstances, can impact a company's corporate image and share value. The good news is that businesses realize these problems persist, and 86 percent of companies are likely to make a significant investment during the next five years to address them. While they should invest, it is critical that they direct investments correctly to address the key issues this research identified:

    • Improving data integrity
    • Optimizing processes
    • Integrating the extended financial close process

    By addressing these issues, and with clear guidance on how to implement the correct business processes, infrastructure, and software solutions, finance teams will find that their reporting processes are much more effective, cost-efficient, and aligned with their performance expectations.

    To get a copy of the full report: http://www.oracle.com/webapps/dialogue/ns/dlgwelcome.jsp?p_ext=Y&p_dlg_id=11747758&src=7300117&Act=92

    To replay a webcast discussing the findings: http://www.cfo.com/webcast.cfm?webcast=14639438&pcode=ORA061912_ORA

    Read the article

  • The build tools for WindowsKernelModeDriver8.1 (Platform Toolset = 'WindowsKernelModeDriver8.1') cannot be found

    - by Rajeshwar
    I am attempting to build this example. However, I am getting error MSB8020: "The build tools for WindowsApplicationForDrivers8.1 (Platform Toolset = 'WindowsApplicationForDrivers8.1') cannot be found. To build using the WindowsApplicationForDrivers8.1 build tools, either click the Project menu or right-click the solution, and then select "Update VC++ Projects...". Install WindowsApplicationForDrivers8.1 to build using the WindowsApplicationForDrivers8.1 build tools." I have already installed the WDK 8.1 .exe, and when I right-click on the solution I can't find anything that says "Update VC++ Projects". Any suggestions on how I may resolve this issue and build it? I am running Windows 7 and VS2012 Pro.
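    For context, the Platform Toolset named in the error is declared per configuration in the driver project's .vcxproj file, as in the illustrative excerpt below. Note that WDK 8.1 integrates with Visual Studio 2013, while VS2012 pairs with WDK 8.0, so retargeting to a WDK 8.0 toolset is one avenue to try; the condition and toolset value shown are assumptions about what fits a VS2012 setup:

        <!-- Excerpt from a driver .vcxproj; condition and values are illustrative -->
        <PropertyGroup Condition="'$(Configuration)|$(Platform)'=='Debug|Win32'" Label="Configuration">
          <TargetVersion>Windows7</TargetVersion>
          <UseDebugLibraries>true</UseDebugLibraries>
          <PlatformToolset>WindowsApplicationForDrivers8.0</PlatformToolset>
        </PropertyGroup>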

    Read the article

  • How to play youtube videos which using Youtube.apk for android 2.1 platform

    - by danny
    I have a problem: I have a Youtube.apk, version 1.5.20, and I would like to play YouTube videos on the Android 2.1 platform, but I am running into some problems:

    I/ActivityManager( 52): Starting activity: Intent { act=android.intent.action.MAIN cat=[android.intent.category.LAUNCHER] flg=0x10200000 cmp=com.google.android.youtube/.HomeA}
    I/ActivityManager( 52): Start proc com.google.android.youtube for activity com.google.android.youtube/.HomeActivity: pid=373 uid=10021 gids={3003}
    D/installd( 36): DexInv: --- BEGIN '/system/app/YouTube.apk' ---
    D/dalvikvm( 379): DexOpt: load 106ms, verify 597ms, opt 24ms
    D/installd( 36): DexInv: --- END '/system/app/YouTube.apk' (success) ---
    I/ActivityThread( 373): Publishing provider com.google.android.youtube.SuggestionProvider: com.google.android.youtube.suggest.SuggestionProvider
    I/YouTube ( 373): Distribution channel:mvapp-android-mid120
    D/ ( 373): unable to unlink '/data/data/com.google.android.youtube/shared_prefs/youtube.xml.bak': No such file or directory (errno=2)
    I/AndroidRuntime( 373): AndroidRuntime onExit calling exit(-42)
    I/ActivityManager( 52): Process com.google.android.youtube (pid 373) has died.
    D/Zygote ( 33): Process 373 exited cleanly (214)
    I/UsageStats( 52): Unexpected resume of com.android.launcher while already resumed in com.google.android.youtube
    W/WindowManager( 52): Rebuild removed 2 windows but added 1
    W/InputManagerService( 52): Window already focused, ignoring focus gain of: com.android.internal.view.IInputMethodClient$Stub$Proxy@4406d498

    How can I fix this problem? Thanks!
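    One fallback worth noting (an assumption, not from the original post): rather than depending on a specific YouTube.apk version, the video URL can be handed to whatever activity on the device can play it via an ACTION_VIEW intent, as in this sketch:

        import android.content.ActivityNotFoundException;
        import android.content.Context;
        import android.content.Intent;
        import android.net.Uri;

        public class VideoLauncher {
            // "videoId" is a placeholder for a real YouTube video ID;
            // call this with an Activity context
            public static void playVideo(Context context, String videoId) {
                try {
                    // Try the native player first via the vnd.youtube scheme
                    context.startActivity(new Intent(Intent.ACTION_VIEW,
                            Uri.parse("vnd.youtube:" + videoId)));
                } catch (ActivityNotFoundException e) {
                    // Fall back to the browser / web player
                    context.startActivity(new Intent(Intent.ACTION_VIEW,
                            Uri.parse("http://www.youtube.com/watch?v=" + videoId)));
                }
            }
        }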

    Read the article
