Search Results

Search found 62853 results on 2515 pages for 'data success'.


  • Bad Data is Really the Monster

    - by Dain C. Hansen
    “Bad Data Is Really the Monster” is an article written by Bikram Sinha, from whom I borrowed the title and the inspiration for this blog. Sinha writes: “Bad or missing data makes application systems fail when they process order-level data. One of the key items in the supply-chain industry is the product (aka SKU). Therefore, it becomes the most important data element to tie up multiple merchandising processes including purchase order allocation, stock movement, shipping notifications, and inventory details… Bad data can cause huge operational failures and cost millions of dollars in terms of time, resources, and money to clean up and validate data across multiple participating systems.” Yes, bad data really is the monster, so what do we do about it? Close our eyes and hope it stays in the closet? We’ve tackled this problem for some years now at Oracle, and our latest introduction of Oracle Enterprise Data Quality, together with our integrated Oracle Master Data Management products, provides a complete, best-in-class answer to the bad data monster. What’s unique about it? Oracle Enterprise Data Quality combines powerful data profiling, cleansing, matching, and monitoring capabilities while offering unparalleled ease of use. What makes it unique is that it has dedicated capabilities to address the distinct challenges of both customer and product data quality – different monsters have different needs, of course! The ability to profile data is just as important, to identify and measure poor-quality data and to uncover new rules and requirements. Included are semantic and pattern-based recognition to accurately parse and standardize data that is poorly structured. Finally, all of the data quality components are integrated with Oracle Master Data Management, including Oracle Customer Hub and Oracle Product Hub, as well as Oracle Data Integrator Enterprise Edition and Oracle CRM. Want to learn more? On Tuesday, Nov 15th, I invite you to listen to our webcast, Reduce ERP Consolidation Risks with Oracle Master Data Management. I’ll be joined by our partner iGate Patni, and I’ll be talking about one specific way to deal with the bad data monster, specifically around ERP consolidation. Look forward to seeing you there!

    Read the article

  • Where can I locate business data to use in my application?

    - by Aaron McIver
    This question talks about any and all free public raw data, which appeared to have valuable pieces but nothing that really provides what I am looking for. Instead of using a socially defined listing of businesses (Foursquare), I would like a business listing data set of registered businesses and associated addresses that could then be searched based on location (coordinates). The critical need is that the data set should be filterable based on varying criteria (give me all restaurants, coffee shops, etc.). If the data is free, that is great, but anywhere that sells this type of data would also suffice. Infochimps looked like a possibility, but perhaps something a bit more extensive exists. Where can I find a free or for-fee data set of registered businesses that is filterable based on type of business and location?

    Read the article

  • Sybase PowerDesigner Change Many (Find/Replace/Convert) Data Item's Data Types

    - by Andy
    Hello, I have a relatively large Conceptual Data Model in PowerDesigner. After generating a Physical Data Model and seeing the DBMS data types, I need to update all of the data types (NUMBER/TEXT) for each data item. I'd like to either do a find/replace within the Conceptual Data Model or somehow map to different data types when creating the Physical Data Model. For example, change the automatic conversion of Text - CLOB to Text - NVARCHAR(20). Thanks!

    Read the article

  • Are there any useful datasets available on the web for data mining?

    - by niko
    Hi, does anyone know a good resource from which example (real) data can be downloaded for experimenting with statistics and machine learning techniques such as decision trees, etc.? Currently I am studying machine learning techniques and it would be very helpful to have real data for evaluating the accuracy of various tools. If anyone knows a good resource (perhaps CSV or XLS files, or any other format) I would be very thankful for a suggestion.

    Read the article

  • How Social Is Your Contact Center?

    - by Charles Knapp
    More than 75% of consumers have complained on a social site after a poor customer experience. Yet, 70% of companies have little understanding of the social media conversations about their brand. To deliver upon your brand promise, retain customers, and increase their lifetime value, you must deliver great customer experiences across social, mobile, phone, and chat channels. Siloed channels produce poor customer experiences. Social channels must integrate with the people, processes, technology, and traditional channels used to satisfy customers. The more effective a company’s social marketing, the greater the demand for effective social service. However, service is not a job for social marketers. It is a job for service specialists, focused on KPIs such as response time, first contact resolution, satisfaction, churn, retention, and customer lifetime value. Most social-enabled contact centers are at the early adopter stage, attempting to “bolt on” social media as a side process. Many are experiencing inconsistent customer experiences, higher costs, and negligible return on investments. Service leaders should consider carefully how to integrate social channels with their current customer service and support people, processes, technology, and channels. Here is one company realizing success: the pre-integrated Oracle RightNow Social Experience “empowers our contact center operations by enabling our agents to join customer conversations that are happening on social sites like Twitter and Facebook and integrate those conversations into our overall multichannel customer engagement processes.” — Lisa Larson, Drugstore.com

    Read the article

  • Subaru CIO wins SIM Leadership Award

    - by tony.berk
    Congratulations to Brian Simmermon, CIO at Subaru of America, Inc., for winning the Society for Information Management's (SIM) fifth annual SIM Leadership Award. Simmermon joined Subaru of America in 2005 as Chief Information Officer. He then performed a company-wide technology assessment and determined that the business ran a large collection of applications, many of which duplicated functionality. Establishing the mantra "Simplicity, Flexibility, and Cost Effectiveness", he reduced the total number of applications and moved to a small core set of systems, including Oracle and Siebel. Tom Doll, COO for Subaru of America, said, "We are very pleased Brian has been recognized. He has consistently shown vision and leadership, and under his leadership our technology group's innovations have helped our sales grow to record levels, regardless of the economic circumstances." Simmermon's technology group's aggressive business deliverables have helped Subaru become one of the most successful brands in the US, with the brand reaching record sales in both 2009 and 2010. Click here to read the full press release. Click here to learn about Subaru's success with Oracle products. Congratulations Brian!

    Read the article

  • Why The Athene Group Chose Fusion CRM

    - by Tony Berk
    A guest post by Vikas Bhambri, Managing Partner, The Athene Group. This year, The Athene Group (www.theathenegroup.com) celebrated our tenth anniversary. The company has accomplished a lot in ten years, overcoming a number of hurdles and challenges to grow organically into a 150+ person global company with offices in the US, UK, and India and customers in the US, Canada, and Europe. Now more than ever, given the current global economic and competitive landscape, it was vital that we make some changes to remain successful for the next ten years. There were two key initiatives that we discussed internally that would enable us to successfully accomplish this – collaboration and the concept of “insight to action”. With our existing Oracle CRM On Demand platform we had components of this, but not the full depth and breadth that we were looking for. When we started to discuss Fusion CRM we immediately saw several next generation tools that would embrace these two objectives. For a consulting and development organization, the collaboration required between business development and consulting delivery is as important as the collaboration required during projects between the project delivery and account management teams. The Activity Streams functionality in Fusion CRM immediately addressed the communication of key discussion topics and exchanges around our clients. Of course, when we saw the Oracle Social Network (which is part of our Fusion CRM roadmap) we were blown away. The combination of OSN and our CRM is going to make us more effective as we discuss and work cohesively on client engagements, ensuring mutual success for both Athene and our clients. When we looked at “insight to action” we saw that we had a great platform when folks were at their desks; unfortunately, a lot of our business development and consulting folks are on the road. Fusion Mobile Sales and Fusion Outlook Desktop provide information to our teams when they are on the go, so that they can provide real-time information and react to real-time information provided by their peers. We are in the early stages of our transformative experience with Fusion CRM, but we believe the platform, along with our people and processes, is going to help us achieve our goals in the future.

    Read the article

  • The Boston Globe Delivers Higher Satisfaction and Efficiency with Omni-Channel Support

    - by Tony Berk
    Unify customer interactions. Improve customer satisfaction. Increase agent efficiency. Make better-informed business decisions. These sound like a good set of goals for any business. Actually implementing processes to achieve all of these is not necessarily easy for every business. On top of the normal challenges, throw in a rapidly changing industry and the challenge sounds daunting. But that's exactly what The Boston Globe took on, and customers are benefiting from a much improved experience. “We feel like we hit the bull’s eye with finding the right solution to support the growing digital environment,” said Robert Saurer, The Boston Globe's director of customer care and marketing. Oracle's RightNow CX solutions helped The Boston Globe to manage approximately 60,000 calls each month and respond to 5,000 monthly e-mails. More importantly, Web self-service rates are exploding, and the online subscriber's most preferred support channel is chat. And what about social? The Boston Globe customer support team offers the same great level of support on their Facebook page and is monitoring Twitter and YouTube too! Read the full Customer Experience success story on The Boston Globe here.

    Read the article

  • Big Data: Size isn’t everything

    - by Simon Elliston Ball
    Big Data has a big problem; it’s the word “Big”. These days, a quick Google search will uncover terabytes of negative opinion about the futility of relying on huge volumes of data to produce magical, meaningful insight. There are also many clichéd but correct assertions about the difficulties of correlation versus causation in massive data sets. In reading some of these pieces, I begin to understand how climatologists must feel when people complain ironically about “global warming” during snowfall. Big Data has a name problem. There is a lot more to it than size. Shape, Speed, and…err…Veracity are also key elements (now I understand why Gartner and the gang went with V’s instead of S’s). The need to handle data of different shapes (Variety) is not new. Data developers have always had to mold strange-shaped data into our reporting systems, integrating with semi-structured sources, and even straying into full-text searching. However, what we lacked was an easy way to add semi-structured and unstructured data to our arsenal. New “Big Data” tools such as MongoDB, and other NoSQL (Not Only SQL) databases, or a graph database like Neo4J, fill this gap. Still, to many, they simply introduce noise to the clean signal that is their sensibly normalized data structures. What about speed (Velocity)? It’s not just high frequency trading that generates data faster than a single system can handle. Many other applications need to make trade-offs that traditional databases won’t, in order to cope with high data insert speeds, or to extract quickly the required information from data streams. Unfortunately, many people equate Big Data with the Hadoop platform, whose batch driven queries and job processing queues have little to do with “velocity”. StreamInsight, Esper and Tibco BusinessEvents are examples of Big Data tools designed to handle high-velocity data streams. Again, the name doesn’t do the discipline of Big Data any favors. Ultimately, though, does analyzing fast moving data produce insights as useful as the ones we get through a more considered approach, enabled by traditional BI? Finally, we have Veracity and Value. In many ways, these additions to the classic Volume, Velocity and Variety trio acknowledge the criticism that without high-quality data and genuinely valuable outputs, data, big or otherwise, is worthless. As a discipline, Big Data has recognized this, and data quality and cleaning tools are starting to appear to support it. Rather than simply decrying the irrelevance of Volume, we need as a profession to focus on how to improve Veracity and Value. Perhaps we should just declare the ‘Big’ silent, embrace these new data tools and help develop better practices for their use, just as we did with the good old RDBMS? What does Big Data mean to you? Which V gives your business the most pain, or the most value? Do you see these new tools as a useful addition to the BI toolbox, or are they just enabling a dangerous trend to find ghosts in the noise?

    Read the article

  • Know your Data Lineage

    - by Simon Elliston Ball
    An academic paper without the footnotes isn’t an academic paper. Journalists wouldn’t base a news article on facts that they can’t verify. So why would anyone publish reports without being able to say where the data has come from and be confident of its quality, in other words, without knowing its lineage (sometimes referred to as ‘provenance’ or ‘pedigree’)? The number and variety of data sources, both traditional and new, increases inexorably. Data comes clean or dirty, processed or raw, unimpeachable or entirely fabricated. On its journey to our report, from its source, the data can travel through a network of interconnected pipes, passing through numerous distinct systems, each managed by different people. At each point along the pipeline, it can be changed, filtered, aggregated and combined. When the data finally emerges, how can we be sure that it is right? How can we be certain that no part of the data collection was based on incorrect assumptions, that key data points haven’t been left out, or that the sources are good? Even when we’re using data science to give us an approximate or probable answer, we cannot have any confidence in the results without confidence in the data from which they came. You need to know what has been done to your data, where it came from, and who is responsible for each stage of the analysis. This information represents your data lineage; it is your stack-trace. If you’re an analyst, suspicious of a number, it tells you why the number is there and how it got there. If you’re a developer, working on a pipeline, it provides the context you need to track down the bug. If you’re a manager, or an auditor, it lets you know the right things are being done. Lineage tracking is part of good data governance. Most audit and lineage systems require you to buy into their whole structure. If you are using Hadoop for your data storage and processing, then tools like Falcon allow you to track lineage, as long as you are using Falcon to write and run the pipeline. It can mean learning a new way of running your jobs (or using some sort of proxy), and even a distinct way of writing your queries. Other Hadoop tools provide a lot of operational and audit information, spread throughout the many logs produced by Hive, Sqoop, MapReduce and all the various moving parts that make up the eco-system. To get a full picture of what’s going on in your Hadoop system you need to capture both Falcon lineage and the data-exhaust of other tools that Falcon can’t orchestrate. However, the problem is bigger even than that. Often, Hadoop is just one piece in a larger processing workflow. The next step of the challenge is how you bind together the lineage metadata describing what happened before and after Hadoop, where ‘after’ could be a data analysis environment like R, an application, or even directly an end-user tool such as Tableau or Excel. One possibility is to push as much as you can of your key analytics into Hadoop, but would you give up the power and familiarity of your existing tools in return for a reliable way of tracking lineage? Lineage and auditing should work consistently, automatically and quietly, allowing users to access their data with whatever tool they need to use.
    The real solution, therefore, is to create a consistent method by which to bring lineage data from these various disparate sources into the data analysis platform that you use, rather than being forced to use the tool that manages the pipeline for the lineage and a different tool for the data analysis. The key is to keep your logs and your audit data from every source, bring them together, and use your data analysis tools to trace the paths from raw data to the answers that data analysis provides.
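
    To make that concrete, here is a rough sketch (in TypeScript, purely for illustration; the field names are assumptions, not any particular tool's schema) of the kind of record such a consistent method would need to collect from every stage:

        // Illustrative shape of a lineage record gathered from each stage of a pipeline.
        interface LineageRecord {
            datasetId: string;        // what this stage produced
            sourceIds: string[];      // which upstream datasets it was derived from
            transformation: string;   // what was done: filter, aggregate, join, ...
            system: string;           // where it ran: Hive, Sqoop, R, Excel, ...
            owner: string;            // who is responsible for this stage
            executedAt: Date;         // when it happened
        }

        // Tracing an answer back to its raw inputs is then just a walk over sourceIds.
        function trace(records: Map<string, LineageRecord>, datasetId: string): string[] {
            const record = records.get(datasetId);
            if (!record) return [datasetId];   // no record: treat it as a raw source
            return record.sourceIds.flatMap((id) => trace(records, id));
        }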

    Read the article

  • Ajax -- re-load div on success.

    - by RPM
    I am trying to accomplish a "re-load." More specifically, I need to be able to refresh a portion of my page as a result of another successful ajax call. Moreover, I load my portion of the page via ajax, which obtains its content from an ajax post. The result is my content being displayed inside my portion precisely. I need this portion of the page refreshed after a successful ajax post. Here is some of the code: /* Ajax-- this part is loaded automatically, and I need it reloaded upon success of another ajax post. This data comes from the outcome of my other ajax function. */ $('#newCo').load('click', function() { $.ajax({ url: 'index.php?dkd432k=uBus/310/Indeed', dataType: 'json', success: function(json) { if (json['newCompare']) { $('#newCo .newResults').html(json['newCompare']); } } }); }); The next portion of code is responsible for posting the data which I obtain in the above ajax function. function ZgHiapud (ofWhich) { $.ajax({ url: 'index.php?dkd432k=uBus/310/update', type: 'post', data: 'product_id=' + product_id, dataType: 'json', success: function(json) { $('.success, .warning, .attention, .information').remove(); if (json['success']) { $('.attention').fadeIn('slow'); $('#compare_total').html(json['total']); $('html, body').animate({ scrollTop: 0 }, 'slow'); } } }); } In the end, I need to obtain the data that I send to the server immediately upon success of the second ajax call. This data that is sent via the second ajax call needs to fire the first ajax call upon success.
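
    One way to get this behaviour (a minimal sketch in TypeScript, assuming jQuery is loaded on the page and reusing the endpoint names from the question) is to pull the first call into a named function and invoke it from the second call's success handler:

        // Minimal sketch; assumes jQuery is available on the page.
        declare const $: any;

        // Wrap the refresh request in a reusable function so it can be called
        // both when the div first loads and after the POST succeeds.
        function refreshComparison(): void {
            $.ajax({
                url: 'index.php?dkd432k=uBus/310/Indeed',
                dataType: 'json',
                success: (json: any) => {
                    if (json['newCompare']) {
                        // Re-render only the portion of the page that changed.
                        $('#newCo .newResults').html(json['newCompare']);
                    }
                }
            });
        }

        function updateCompare(productId: string): void {
            $.ajax({
                url: 'index.php?dkd432k=uBus/310/update',
                type: 'post',
                data: 'product_id=' + productId,
                dataType: 'json',
                success: (json: any) => {
                    if (json['success']) {
                        $('#compare_total').html(json['total']);
                        // The POST succeeded, so refresh the comparison div now.
                        refreshComparison();
                    }
                }
            });
        }

    Keeping a single named function for the refresh means the same code path runs on initial load and after every successful post.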

    Read the article

  • NSURLConnection receives data even if no data was thrown back

    - by Anna Fortuna
    Let me explain my situation. Currently, I am experimenting with long-polling using NSURLConnection. I found this and I decided to try it. What I do is send a request to the server with a timeout interval of 300 secs. (or 5 mins.) Here is a code snippet: NSURL *url = [NSURL URLWithString:urlString]; NSURLRequest *request = [NSURLRequest requestWithURL:url cachePolicy:NSURLCacheStorageAllowedInMemoryOnly timeoutInterval:300]; NSData *data = [NSURLConnection sendSynchronousRequest:request returningResponse:&resp error:&err]; Now I want to test if the connection will "hold" the request if no data was thrown back from the server, so what I did was this: if (data != nil) [self performSelectorOnMainThread:@selector(dataReceived:) withObject:data waitUntilDone:YES]; And the function dataReceived: looks like this: - (void)dataReceived:(NSData *)data { NSLog(@"DATA RECEIVED!"); NSString *string = [NSString stringWithUTF8String:[data bytes]]; NSLog(@"THE DATA: %@", string); } Server-side, I created a function that will return data once it fits the arguments and return nothing if nothing fits. Here is a snippet of the PHP function: function retrieveMessages($vardata) { if (!empty($vardata)) { $result = check_data($vardata); //check_data is the function which returns 1 if $vardata //fits the arguments, and 0 if it fails to fit if ($result == 1) { $jsonArray = array('Data' => $vardata); echo json_encode($jsonArray); } } } As you can see, the function will only return data if $result is equal to 1. However, even if the function returns nothing, NSURLConnection will still perform the function dataReceived:, meaning NSURLConnection still receives data, albeit an empty one. So can anyone help me here? How will I perform long-polling using NSURLConnection? Basically, I want to maintain the connection as long as no data is returned. So how will I do it? NOTE: I am new to PHP, so if my code is wrong, please point it out so I can correct it.
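
    The long-polling pattern itself is language-agnostic: the server must hold the request open until data exists or a timeout elapses, and the client re-issues the request as soon as a response (or timeout) comes back. Here is a minimal server-side sketch of that idea, written in TypeScript/Node rather than PHP purely for illustration (checkData is a hypothetical stand-in for the check_data() call in the question):

        // Long-polling sketch: do not respond immediately with an empty body;
        // keep the connection open until data exists or a timeout passes.
        import * as http from 'http';

        // Hypothetical data check; stands in for the PHP check_data() call.
        function checkData(): string | null {
            return null; // return a JSON string once data becomes available
        }

        http.createServer((_req, res) => {
            const started = Date.now();
            const poll = () => {
                const data = checkData();
                if (data !== null) {
                    res.writeHead(200, { 'Content-Type': 'application/json' });
                    res.end(data);                 // respond as soon as there is data
                } else if (Date.now() - started > 290_000) {
                    res.writeHead(204);            // give up just before the client's 300 s timeout
                    res.end();
                } else {
                    setTimeout(poll, 1000);        // otherwise hold the connection and re-check
                }
            };
            poll();
        }).listen(8080);

    The PHP version in the question returns an empty 200 immediately when check_data() fails, which is why dataReceived: fires with an empty body; holding the request open on the server is what makes it a long poll.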

    Read the article

  • How to maintain an ordered table with Core Data (or SQL) with insertions/deletions?

    - by Jean-Denis Muys
    This question is in the context of Core Data, but if I am not mistaken, it applies equally well to a more general SQL case. I want to maintain an ordered table using Core Data, with the possibility for the user to: reorder rows, insert new lines anywhere, and delete any existing line. What's the best data model to do that? I can see two ways: 1) Model it as an array: I add an int position property to my entity. 2) Model it as a linked list: I add two one-to-one relations, next and previous, from my entity to itself. 1) makes it easy to sort, but painful to insert or delete, as you then have to update the position of all objects that come after. 2) makes it easy to insert or delete, but very difficult to sort. In fact, I don't think I know how to express a Sort Descriptor (SQL ORDER BY clause) for that case. Now I can imagine a variation on 1): 3) add an int ordering property to the entity, but instead of having it count one-by-one, have it count 100 by 100 (for example). Then inserting is as simple as finding any number between the ordering of the previous and next existing objects. The expensive renumbering only has to occur when the 100 holes have been filled. Making that property a float rather than an int makes it even better: it's almost always possible to find a new float midway between two floats. Am I on the right track with solution 3), or is there something smarter?
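
    Variation 3 is essentially "gap based" ordering, and the bookkeeping is small. A sketch of the two operations it needs (in TypeScript, since the idea is independent of Core Data; the names are illustrative):

        // Gap-based ordering: each row stores a numeric sort key, and an insert
        // picks a key between its neighbours; renumber only when no gap is left.
        const GAP = 100;

        function keyBetween(prev: number | null, next: number | null): number | null {
            if (prev === null && next === null) return GAP;   // first row in the table
            if (prev === null) return next! - GAP;            // insert at the front (may go negative, which is fine)
            if (next === null) return prev + GAP;             // append at the end
            const mid = Math.floor((prev + next) / 2);
            return mid > prev && mid < next ? mid : null;     // null means: no gap left, renumber
        }

        function renumber(keysInOrder: number[]): number[] {
            // Spread the rows back out 100 apart, preserving their current order.
            return keysInOrder.map((_, i) => (i + 1) * GAP);
        }

    Sorting stays a single sort descriptor on the key, and a delete never requires touching other rows.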

    Read the article

  • How can I scrape specific data from a website

    - by Stoney
    I'm trying to scrape data from a website for research. The URLs are nicely organized in an example.com/x format, with x as an ascending number, and all of the pages are structured in the same way. I just need to grab certain headings and a few numbers which are always in the same locations. I'll then need to get this data into structured form for analysis in Excel. I have used wget before to download pages, but I can't figure out how to grab specific lines of text. Excel has a feature to grab data from the web (Data > From Web), but from what I can see it only allows me to download tables. Unfortunately, the data I need is not in tables.
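
    One possible approach (a sketch in TypeScript for Node 18+, where fetch is built in; the URL pattern and the two regular expressions are placeholders that would need to match the real pages) is to loop over the numbered pages, pull out the values you need, and write a CSV that Excel can open directly:

        // Sketch: fetch example.com/1..N, extract a heading and a number from each
        // page, and write the results as CSV for analysis in Excel.
        import { writeFileSync } from 'fs';

        async function scrape(lastPage: number): Promise<void> {
            const rows: string[] = ['page,heading,value'];
            for (let i = 1; i <= lastPage; i++) {
                const html = await (await fetch(`https://example.com/${i}`)).text();
                // Placeholder patterns: adapt to the real page structure.
                const heading = html.match(/<h1[^>]*>([^<]+)<\/h1>/)?.[1]?.trim() ?? '';
                const value = html.match(/Total:\s*([\d.]+)/)?.[1] ?? '';
                rows.push(`${i},"${heading}",${value}`);
            }
            writeFileSync('scraped.csv', rows.join('\n'));
        }

        scrape(10).catch(console.error);

    wget or curl could download the pages just as well; the point is that a small script, rather than Excel's Data > From Web feature, does the extraction of the non-tabular values.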

    Read the article

  • How should I architect my Model and Data Access layer objects in my website?

    - by Robin Winslow
    I've been tasked with designing the Data layer for a website at work, and I am very interested in architecture of code for the best flexibility, maintainability and readability. I am generally acutely aware of the value in completely separating out my actual Models from the Data Access layer, so that the Models are completely naive when it comes to Data Access. In this case it's particularly useful to do this, as the Models may be built from the Database or may be built from a SOAP web service. So it seems to me to make sense to have Factories in my data access layer which create Model objects. So here's what I have so far (in my made-up pseudocode): class DataAccess.ProductsFromXml extends DataAccess.ProductFactory {} class DataAccess.ProductsFromDatabase extends DataAccess.ProductFactory {} These then get used in the controller in a fashion similar to the following: var xmlProductCreator = DataAccess.ProductsFromXml(xmlDataProvider); var databaseProductCreator = DataAccess.ProductsFromDatabase(databaseDataProvider); // Returns array of Product model objects var XmlProducts = xmlProductCreator.Products(); // Returns array of Product model objects var DbProducts = databaseProductCreator.Products(); So my question is, is this a good structure for my Data Access layer? Is it a good idea to use a Factory for building my Model objects from the data? Do you think I've misunderstood something? And are there any general patterns I should read up on for how to write my data access objects to create my Model objects?
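
    As a sketch of that shape (written in TypeScript purely for illustration; the names mirror the pseudocode above), the models stay naive and the controller depends only on the factory abstraction:

        // The model is naive: it knows nothing about where its data comes from.
        interface Product {
            id: string;
            name: string;
            price: number;
        }

        // The data access layer exposes factories that build Product models.
        interface ProductFactory {
            products(): Product[];
        }

        class ProductsFromXml implements ProductFactory {
            constructor(private xml: string) {}
            products(): Product[] {
                // Parse the XML (or SOAP response) into naive Product objects here.
                return [];
            }
        }

        class ProductsFromDatabase implements ProductFactory {
            constructor(private connectionString: string) {}
            products(): Product[] {
                // Query the database and map rows into naive Product objects here.
                return [];
            }
        }

        // The controller depends only on the ProductFactory abstraction.
        function listProducts(factory: ProductFactory): Product[] {
            return factory.products();
        }

    Swapping the data source then means passing a different factory into the controller, with no change to the Product model or the code that consumes it.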

    Read the article

  • Spotlight: How Scandinavia's Largest Nuclear Power Plant Increased Productivity and Reduced Costs with Oracle's AutoVue

    - by [email protected]
    Ringhals nuclear power plant, which is part of the Vattenfall Group, is located about 60 km south-west of the beautiful coastal city of Gothenburg in Sweden. A deep concern to reduce environmental impact, coupled with an effort to increase plant safety and operational efficiency, has led to a recent surge in investments and initiatives around plant modification and plant optimization at Ringhals. The users in the various groups involved in these projects faced a multitude of challenges. First, it was very difficult for users to easily access complex and layered asset and engineering information, which was critical to increased productivity and completing projects on time. Moreover, the 20 or so different solutions that were being used to view various document formats not only resulted in collaboration complexity but also escalated IT administration costs and woes. Finally, there was a considerable non-engineering community comprising non-CAD specialists that needed easy access to plant data in an effort to minimize engineering disruption. Oracle's AutoVue significantly simplified the ability to efficiently view and use digital asset information by providing a standardized visualization solution for the enterprise. The key benefits achieved by Ringhals include: increased productivity of plant optimization and plant modification by 3%; savings of around $500K annually; IT maintenance costs cut by 50% by using a single solution; and reduced engineering disruption by allowing non-CAD users easy access to digital plant data. The complete case study can be found here

    Read the article

  • Calculating percentiles in Excel with "buckets" data instead of the data list itself

    - by G B
    I have a bunch of data in Excel that I need to get certain percentile information from. The problem is that instead of having the data set made up of each individual value, I instead have the number of occurrences of each value ("bucket" data). For example, imagine that my actual data set looks like this: 1,1,2,2,2,2,3,3,4,4,4 The data set that I have is this:

        Value    No. of occurrences
        1        2
        2        4
        3        2
        4        3

    Is there an easy way for me to calculate percentile information (as well as the median) without having to explode the summary data out to the full data set? (Once I did that, I know that I could just use the Percentile(A1:A5, p) function.) This is important because my data set is very large. If I exploded the data out, I would have hundreds of thousands of rows, and I would have to do it for a couple of hundred data sets. Help!
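
    For reference, the arithmetic for reading a percentile straight off the buckets is just a walk over the cumulative counts. A sketch in TypeScript using the simple nearest-rank definition (note that Excel's PERCENTILE interpolates between ranks, so its results can differ slightly):

        // Nearest-rank percentile from (value, count) buckets, without expanding the data.
        function bucketPercentile(buckets: [number, number][], p: number): number {
            const sorted = [...buckets].sort((a, b) => a[0] - b[0]);
            const total = sorted.reduce((sum, [, count]) => sum + count, 0);
            const rank = Math.ceil(p * total);            // 1-based rank of the percentile
            let cumulative = 0;
            for (const [value, count] of sorted) {
                cumulative += count;
                if (cumulative >= rank) return value;     // first value whose cumulative count reaches the rank
            }
            return sorted[sorted.length - 1][0];
        }

        // Example from the question: the median of 1,1,2,2,2,2,3,3,4,4,4 (11 values) is 2.
        console.log(bucketPercentile([[1, 2], [2, 4], [3, 2], [4, 3]], 0.5)); // 2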

    Read the article

  • How do I set up a WCF Data Service with an ADO.NET Entity Data Model in another assembly?

    - by lsb
    Hi! I have an ASP.NET 4.0 website that has an Entity Data Model hooked up to a WCF Data Service. When the Service and Model are in the same assembly, everything works. Unfortunately, when I move the Model to another "shared" assembly (and change the namespace), the service compiles but throws a 500 error when launched in a browser. The reason I want to have the Model in a common assembly (let's call it RiaTest.Shared) is that I want to share common validation code between the client and service (by checking "Reuse types in referenced assemblies" in the Advanced tab of the Add Service Reference dialog). Anyway, I've spent a couple of hours on this to no avail, so any help in this regard would be appreciated...

    Read the article

  • Open Data, Government and Transparency

    - by Tori Wieldt
    A new track at TDC (The Developer's Conference in Sao Paulo, Brazil) is titled Open Data. It deals with open data, government and transparency. Saturday will be a "transparency hacker day" where developers are invited to create applications using open data from the Brazilian government. Alexandre Gomes, co-lead of the track, says "I want to inspire developers to become 'civic hackers': developers who create apps to make society better." It is a chance for developers to do well and do good. There are many opportunities for developers, including monitoring government expenditures and getting citizens involved via social networks. The open data movement is growing worldwide. One initiative, the Open Government Partnership, is working to make government data easier to find and access. Making this data easily available means that, with the right applications, it will be easier for people to make decisions and suggestions about government policies based on detailed information. Last April, the Open Government Partnership held its annual meeting in Brasilia, the capital of Brazil. It was a great success, showcasing the innovative work being done in open data by governments, civil societies and individuals around the world. For example, Bulgaria now publishes daily data on budget spending for all public institutions. Alexandre Gomes explains Open Data at TDC. The Open Data track will include a presentation of examples of successful open data projects, an introduction to the semantic web, how to handle big data sets, techniques of data visualization, and how to design APIs. The other track lead is Christian Moryah Miranda, a systems analyst for the Brazilian Government's Ministry of Planning. "The Brazilian government wholeheartedly supports this effort. In order to make our data available to the public, it forces us to be more consistent with our data across ministries, and that's a good step forward for us," he said. He explained the government knows they cannot achieve everything they would like without help from the public. "It is not the government versus the people; rather, citizens are partners with the government, and together we can achieve great things!" Miranda exclaimed. Saturday at TDC will be a "transparency hacker day" where developers will be invited to create applications using open data from the Brazilian government. Attendees are invited to pitch their ideas, work in small groups, and present their projects at the end of the conference. "For example," Gomes said, "the Brazilian government just released the salaries of all government employees, and I can't wait to see what developers can do with that." Resources: Open Government Partnership, U.S. Government Open Data Project, Brazilian Government Open Data Project, U.K. Government Open Data Project, 2012 International Open Government Data Conference

    Read the article

  • UPK Customer Success Story: The City and County of San Francisco

    - by karen.rihs(at)oracle.com
    The value of UPK during an upgrade is a hot topic and was a primary focus during our latest customer roundtable featuring The City and County of San Francisco: Leveraging UPK to Accelerate Your PeopleSoft Upgrade. As the Change Management Analyst for their PeopleSoft 9.0 HCM project (Project eMerge), Jan Crosbie-Taylor provided a unique perspective on how they're utilizing UPK and UPK pre-built content early on to successfully manage change for thousands of city and county employees and retirees as they move to this new release. With the first phase of the project going live next September, it's important to the City and County of San Francisco to 1) ensure that the various constituents are brought along with the project team, and 2) focus on the end user aspects of the implementation, including training. Here are some highlights on how UPK and UPK pre-built content are helping them accomplish this: As a former documentation manager, Jan really appreciates the power of UPK as a single source content creation tool. It saves them time by streamlining the documentation creation process, enabling them to record content once, then repurpose it multiple times. With regard to change management, UPK has enabled them to educate the project team and gain critical buy-in and support by familiarizing users with the application early on through User Experience Workshops and by promoting UPK at meetings whenever possible. UPK has helped create awareness for the project, making the project real to users. They are taking advantage of UPK pre-built content to: educate the project team and subject matter experts on how PeopleSoft 9.0 works as delivered; create a guide/storyboard for their own recording; save time/effort and create consistency by enhancing their recorded content with text and conceptual information from the pre-built content; create PeopleSoft Help for their development databases by publishing and integrating the UPK pre-built content into the application help menu; and look ahead to the next release of PeopleTools, comparing the differences to help the team evaluate which version to use with their implementation. When it comes time for training, they will be utilizing UPK in the classroom, eliminating the time and cost of maintaining training databases. Instructors will be able to carry all training content on a thumb drive, allowing them to easily provide consistent training at their many locations, regardless of the environment. Post go-live, they will deploy the same UPK content to provide just-in-time, in-application support for the entire system via the PeopleSoft Help menu and their PeopleSoft Enterprise Portal. Users will already be comfortable with UPK as a source of help, having been exposed to it during classroom training. They are also using UPK for a non-Oracle application called JobAps, an online job application solution used by many government organizations. Jan found UPK's object recognition to be excellent, yet it's been incredibly easy for her to change text or a field name if needed. Please take time to listen to this recording. The City and County of San Francisco's UPK story is very exciting, and Jan shared so many great examples of how they're taking advantage of UPK and UPK pre-built content early on in their project. We hope others will be able to incorporate these into their projects. Many thanks to Jan for taking the time to share her experiences and creative uses of UPK with us! - Karen Rihs, Oracle UPK Outbound Product Management

    Read the article

  • Online Accounts auth over and over again without success

    - by Mike Pretzlaw
    I just added my Google account to the "Online Accounts" in Gnome. Before my last restart the account couldn't be added, for an unknown reason: I authorized Gnome access to my Google account, the window closed, and nothing happened. Now I have authorized Ubuntu access to my Google account, which worked well. But I cannot open Gnome Online Accounts, even when I delete every online account: its icon shows up as loading in the dash but then suddenly disappears without any message. How can I debug this? What can I do?

    Read the article

  • UPK Professional Customer Success Story: Medtronic

    - by [email protected]
    In case you missed the live event, be sure to listen to last week's UPK Customer iSeminar featuring Medtronic. This was the first iSeminar in our quarterly series to showcase UPK Professional (UPK and Knowledge Pathways). Donna Miller and Staci Gilbert gave viewers an inside look at samples of Medtronic's content as they shared their experiences, methodology and best practices for use of the solution. Here are some highlights of the call: • Medtronic initially purchased UPK Professional to support a multi-year, global SAP rollout for 9,000 end users located in 24 countries. • As time went on, they expanded their use of UPK Professional to include several of their other enterprise applications: PeopleSoft, Siebel CRM, Hyperion Financial Management, a number of SAP bolt-ons, Documentum, TrackWise, and many others. • In combination with their Saba LMS, UPK Professional has allowed Medtronic to create, deploy, track and certify consistent end user training for critical transactions and processes across their organization worldwide - essential for a company in a heavily regulated industry. • For key pieces of content or certain end user populations, some Medtronic business units localize/translate the global UPK content. Staci demonstrated examples of their SAP content which has been translated into Japanese. • In the live SAP environment, end users rely on UPK's context sensitive in-application performance support. Medtronic has found this to be very helpful post go-live, giving just-in-time support so end users are confident in a new system or when performing tasks they don't often touch (at quarter or year end). UPK also serves as Medtronic's internal Google. • Medtronic has realized savings on many fronts: reduction in support calls due to in-application performance support, elimination of their training clients, and speedier training (1.5 days rather than 5-7 days) of temporary workers by moving from ILT to a blended solution that includes UPK simulations for eLearning. Thanks again to Donna and Staci for an exceptional presentation. They offered so many great examples for anyone who's looking for ways to get more out of UPK or interested in learning about UPK Professional: Knowledge Pathways. - Karen Rihs, Oracle UPK Outbound Product Management

    Read the article

  • Microsoft Claims Success Versus Autorun Malware

    Microsoft recently used a post on the Threat Research and Response Blog section of its Malware Protection Center to describe how it is winning the battle against autorun malware. Although there is certainly no shortage of malware in the virtual world, Microsoft has plenty of statistics to back its claims, and it has displayed them with pride.

    Read the article
