Search Results

Search found 71553 results on 2863 pages for 'form data'.


  • How to find out whether a form has been submitted in JSP?

    - by user261002
    I am trying to create an online exam with JSP. I want to show the questions one at a time, with a "Next" button so the user can move on to the next question. The problem is that I don't know how to find out that the user has clicked the "Next" button. I know how to do it in PHP:

        if ($_SERVER['REQUEST_METHOD'] == 'GET')
            if ($_GET['action'] == 'Next')

    but I don't know how to do it in JSP. Please help; this is a piece of my code:

        String result = "";
        if (database.DatabaseManager.getInstance().connectionOK()) {
            database.SQLSelectStatement sqlselect = new database.SQLSelectStatement("question", "question", "0");
            ResultSet _userExist = sqlselect.executeWithNoCondition();
            ResultSetMetaData rsm = _userExist.getMetaData();
            result += "<form method='post'>";
            result += "<table border=2>";
            for (int i = 0; i < rsm.getColumnCount(); i++) {
                result += "<th>" + rsm.getColumnName(i + 1) + "</th>";
            }
            if (_userExist.next()) {
                result += "<tr>";
                result += "<td>" + _userExist.getInt(1) + "</td>";
                result += "<td>" + _userExist.getString(2) + "</td>";
                result += "</tr>";
                result += "<tr>";
                result += "<tr> <td colspan='2'>asdas</td></tr>";
                result += "</tr>";
            }
            result += "</table>";
            result += "<input type='submit' value='next' name='next'/></form>";
        }
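
    A rough JSP equivalent of the PHP check above might look like the sketch below. It is only a minimal illustration, assuming the submit button keeps the name 'next' used in the form markup; the implicit request object of a JSP page exposes the HTTP method and the submitted parameters:

        <%
            // The implicit 'request' object is an HttpServletRequest.
            // getMethod() returns the HTTP verb; getParameter() returns the value of a
            // submitted form field, or null if that field was not part of the request.
            if ("POST".equalsIgnoreCase(request.getMethod())
                    && request.getParameter("next") != null) {
                // The user clicked the "Next" button: load and render the next question here.
            } else {
                // First visit (or a different submit): render the first/current question.
            }
        %>

    Since each submit is a new request, tracking which question to show next would typically be done with a hidden form field or an attribute in the HttpSession.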

    Read the article

  • An error occurred creating the form. See Exception.InnerException for details. The error is: Object reference not set to an instance of an object

    - by Ben
    I get this error when attempting to debug my form. I cannot see where the error could be at all (the debugger also does not highlight anything), so does anyone have any suggestions? "An error occurred creating the form. See Exception.InnerException for details. The error is: Object reference not set to an instance of an object."

        Public Class Form1
            Dim dateCrap As String = "Date:"
            Dim IPcrap As String = "Ip:"
            Dim pcCrap As String = "Computer:"
            Dim programCrap As String = "Program:"
            Dim textz As String = TextBox1.Text
            Dim sep() As String = {vbNewLine & vbNewLine}
            Dim sections() As String = Text.Split(sep, StringSplitOptions.None)
            Dim NewArray() As String = TextBox1.Text.Split(vbNewLine)

            Private Sub Button2_Click(ByVal sender As System.Object, ByVal e As System.EventArgs) Handles Button2.Click
                For i = 0 To sections.Count - 1
                    TextBox2.Text = sections(i)
                    Dim text2 As String = TextBox2.Text
                    Dim sep2() As String = {vbNewLine & vbNewLine}
                    Dim sections2() As String = Text.Split(sep, StringSplitOptions.None)
                    Dim FTPinfo() As String = TextBox2.Text.Split(vbNewLine)
                    Dim clsRequest As System.Net.FtpWebRequest = _
                        DirectCast(System.Net.WebRequest.Create(sections2(0).Replace("Url/Host:", "")), System.Net.FtpWebRequest)
                    clsRequest.Credentials = New System.Net.NetworkCredential(sections2(1).Replace("Login:", ""), (sections2(2).Replace("Password:", "")))
                    clsRequest.Method = System.Net.WebRequestMethods.Ftp.UploadFile
                    ' read in file...
                    Dim bFile() As Byte = System.IO.File.ReadAllBytes(txtShellDir.Text)
                    ' upload file...
                    Dim clsStream As System.IO.Stream = clsRequest.GetRequestStream()
                    clsStream.Write(bFile, 0, bFile.Length)
                    clsStream.Close()
                    clsStream.Dispose()
                Next
            End Sub

            Private Sub Button3_Click(ByVal sender As System.Object, ByVal e As System.EventArgs) Handles Button3.Click
                ListBox1.Items.Clear()
                Dim fdlg As OpenFileDialog = New OpenFileDialog()
                fdlg.Title = "Browse for the FTP List you wish to use."
                fdlg.InitialDirectory = Application.ExecutablePath
                fdlg.Filter = "All files (*.txt)|*.txt|txt files (*.txt)|*.txt"
                fdlg.FilterIndex = 2
                fdlg.RestoreDirectory = True
                If fdlg.ShowDialog() = DialogResult.OK Then
                    'ListBox1.Items.AddRange(Split(My.Computer.FileSystem.ReadAllText(fdlg.FileName), vbNewLine))
                    TextBox1.Text = My.Computer.FileSystem.ReadAllText(fdlg.FileName)
                End If
                Dim tmp() As String = TextBox1.Text.Split(CChar(vbNewLine))
                For Each line As String In tmp
                    If line.Length > 1 Then
                        TextBox1.AppendText(line & vbNewLine)
                    End If
                Next
                TextBox1.Text = TextBox1.Text.Replace(" ", "")
                TextBox1.Text = TextBox1.Text.Replace("----------------------------------------------------------", vbNewLine)
                For Each item As String In NewArray
                    ListBox1.Items.Add(item)
                Next
                Try
                    For i = 0 To ListBox1.Items.Count - 1
                        If ListBox1.Items(i).Contains(dateCrap) Then
                            ListBox1.Items.RemoveAt(i)
                        End If
                    Next
                Catch ex As Exception
                End Try
                Try
                    For i = 0 To ListBox1.Items.Count - 1
                        If ListBox1.Items(i).Contains(IPcrap) Then
                            ListBox1.Items.RemoveAt(i)
                        End If
                    Next
                Catch ex As Exception
                End Try
                Try
                    For i = 0 To ListBox1.Items.Count - 1
                        If ListBox1.Items(i).Contains(pcCrap) Then
                            ListBox1.Items.RemoveAt(i)
                        End If
                    Next
                Catch ex As Exception
                End Try
                Try
                    For i = 0 To ListBox1.Items.Count - 1
                        If ListBox1.Items(i).Contains(programCrap) Then
                            ListBox1.Items.RemoveAt(i)
                        End If
                    Next
                Catch ex As Exception
                End Try
                TextBox1.Text = ""
                For Each thing As Object In ListBox1.Items
                    TextBox1.AppendText(thing.ToString & vbNewLine)
                Next
            End Sub

            Private Sub Button4_Click(ByVal sender As System.Object, ByVal e As System.EventArgs)
                For i = 0 To ListBox1.SelectedItems.Count - 1
                    TextBox1.Text = TextBox1.Text & ListBox1.Items.Item(i).ToString() & vbNewLine
                Next
            End Sub
        End Class

    Read the article

  • SQLAuthority News – Download Whitepaper Using SharePoint List Data in PowerPivot

    - by pinaldave
    One of the many features of Microsoft SQL Server PowerPivot is the range of data sources that can be used to import data. Anything, from Microsoft SQL Server relational databases, Oracle databases, and Microsoft Access databases, to text documents, can be used as a data source in PowerPivot. In this paper, I explain one of the new and upcoming data sources that people are excited about – SharePoint list data in the form of Atom feeds. This white paper goes on to explain the different ways you can import SharePoint list data into PowerPivot, what types of lists are supported, the various components that need to be installed to use this feature, and where to get those components. Download and read this whitepaper. Note: The abstract is taken from MSDN. Reference: Pinal Dave (http://blog.SQLAuthority.com). Filed under: PostADay, SQL, SQL Authority, SQL Documentation, SQL Download, SQL Query, SQL Server, SQL Tips and Tricks, SQL White Papers, SQLAuthority News, T SQL, Technology

    Read the article

  • Visualising data a different way with Pivot collections

    - by Rob Farley
    Roger’s been doing a great job extending PivotViewer recently, and you can find the list of LobsterPot pivots at http://pivot.lobsterpot.com.au.

    Many months back, the TED Talk that Gary Flake did about Pivot caught my imagination, and I did some research into it. At the time, most of what we did with Pivot was geared towards what we could do for clients, including making Pivot collections based on students at a school, and using it to browse PDF invoices by their various properties. We had actual commercial work based on Pivot collections back then, and it was all kinds of fun. Later, we made some collections for events that were happening, and even got featured in the TechEd Australia keynote. But I’m getting ahead of myself... let me explain the concept.

    A Pivot collection is an XML file (with .cxml extension) which lists Items, each linking to an image that’s stored in a Deep Zoom format (this means that it contains tiles like Bing Maps, so that the browser can request only the ones of interest according to the zoom level). This collection can be shown in a Silverlight application that uses the PivotViewer control, or in the Pivot Browser that’s available from getpivot.com. Filtering and sorting the items according to their facets (attributes, such as size, age, category, etc), the PivotViewer rearranges the way that these are shown in a very dynamic way. To quote Gary Flake, this lets us “see patterns which are otherwise hidden”. This browsing mechanism is very suited to a number of different methods, because it’s just that – browsing. It’s not searching, it’s more akin to window-shopping than doing an internet search.

    When we decided to put something together for conferences such as TechEd Australia 2010 and the PASS Summit 2010, we did some screen-scraping to provide a different view of data that was already available online. Nick Hodge and Michael Kordahi from Microsoft liked the idea a lot, and after a bit of tweaking, we produced one that Michael used in the TechEd Australia keynote to show the variety of talks on offer. It’s interesting to see a pattern in this data: the Office track has the most sessions, but if the Interactive Sessions and Instructor-Led Labs are removed, it drops down to only the sixth most popular track, with Cloud Computing taking over. This is something which just isn’t obvious when you look at an ordinary search tool. You get a much better feel for the data when moving around it like this.

    The more observant amongst you will have noticed some differences between the collection that Michael is demonstrating in the picture above and the screenshots I’ve shown. That’s because it’s been extended some more. At the SQLBits conference in the UK this year, I had some interesting discussions with the guys from Xpert360, particularly Phil Carter, who I’d met in 2009 at an earlier SQLBits conference. They had got around to producing a Pivot collection based on the SQLBits data, which we had been planning to do but ran out of time. We discussed some of the ways that Pivot could be used, including the ways that my old friend Howard Dierking had extended it for the MSDN Magazine. I’m not suggesting I influenced Xpert360 at all, but they certainly inspired us with some of their posts on the matter. So with LobsterPot guys David Gardiner and Roger Noble both having dabbled in Pivot collections (and Dave doing some for clients), I set Roger to work on extending it some more. He’s used various events and so on to be able to make an environment that allows us to do quick deployment of new collections, as well as showing the data in a grid view which behaves as if it were simply a third view of the data (the other two being the array of images and the ‘histogram’ view).

    I see PivotViewer as being a significant step in data visualisation – so much so that I feature it when I deliver talks on Spatial Data Visualisation methods. Any time there is information that can be conveyed through an image, you have to ask yourself how best to show that image, and whether that image is the focal point. For Spatial data, the image is most often a map, and the map becomes the central mode for navigation. I show Pivot with postcode areas, since I can browse the postcodes based on their data, and many of the images are recognisable (to locals of South Australia). Naturally, the images could link through to the map itself, and so on, but generally people think of Spatial data in terms of navigating a map, which doesn’t always gel with the information you’re trying to extract. Roger’s even looking into ways to hook PivotViewer into the Bing Maps API, in a similar way to the Deep Earth project, displaying different levels of map detail according to how ‘zoomed in’ the images are. Some of the work that Dave did with one of the schools was generating the Deep Zoom tiles “on the fly”, based on images stored in a database, and Roger has produced a collection which uses images from flickr and lets you move from one search term to another. Pulling the images down from flickr.com isn’t particularly ideal from a performance aspect, and flickr doesn’t store images in a small-enough format to really lend itself to this use, but you might agree that it’s an interesting concept which compares nicely to using Maps.

    I’m looking forward to future versions of the PivotViewer control, and hope they provide many more events that can be used, and even more hooks into it. Naturally, LobsterPot could help provide your business with a PivotViewer experience, but you can probably do a lot of it yourself too. There’s a thorough guide at getpivot.com, which is how we got into it. For some examples of what we’ve done, have a look at http://pivot.lobsterpot.com.au. I’d like to see PivotViewer really catch on as a data visualisation tool.

    Read the article

  • Storing large data in HTTP Session (Java Application)

    - by Umesh Awasthi
    I am asking this question in continuation with http-session-or-database-approach. I am planning to follow this approach:

    1. When a user adds a product to the cart, create a Cart model, add the items to the cart and save it to the DB.
    2. Convert the Cart model to cart data and save that to the HTTP session.
    3. On any update/edit, update the underlying cart in the DB and refresh the data snapshot in the session.
    4. When the user clicks on the view-cart page, just pick the cart data from the session and display it to the customer.

    I have the following queries regarding the HTTP session:

    - How good is it to store large data (a shopping cart) in the session?
    - How scalable can this approach be (with respect to the session)?
    - Won't my application end up eating and demanding a lot of memory?
    - Is my approach fine, or do I need to consider other points while designing this?

    Although we can control how much of the cart data is stored in the session, we still need to keep certain information from the cart data in the session.
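
    As a rough illustration of the session side of this approach (a minimal sketch built around a hypothetical CartData class, not a definitive design), the snapshot stored in the session would be a small, serializable summary rather than the full DB entity graph:

        import java.io.Serializable;
        import java.util.ArrayList;
        import java.util.List;
        import javax.servlet.http.HttpServletRequest;
        import javax.servlet.http.HttpSession;

        /**
         * Hypothetical session-side cart snapshot: only product ids and totals are
         * kept in the session so the per-user memory footprint stays small; full
         * cart details live in the database and are re-read on the view-cart page.
         */
        public class CartData implements Serializable {

            private final List<Long> productIds = new ArrayList<Long>();
            private double total;

            public void addItem(long productId, double price) {
                productIds.add(productId);
                total += price;
            }

            public int getItemCount() { return productIds.size(); }
            public double getTotal()  { return total; }

            /** Update (or create) the snapshot after the DB cart has been saved. */
            public static void updateSnapshot(HttpServletRequest request, long productId, double price) {
                HttpSession session = request.getSession(true);
                CartData snapshot = (CartData) session.getAttribute("cartData");
                if (snapshot == null) {
                    snapshot = new CartData();
                }
                snapshot.addItem(productId, price);
                // Re-setting the attribute makes the change visible to containers
                // that replicate sessions by listening for setAttribute calls.
                session.setAttribute("cartData", snapshot);
            }
        }

    On memory: the footprint is roughly the size of one snapshot times the number of concurrent sessions, so keeping only ids and totals in the session and loading full product details from the DB on the view-cart page keeps it manageable; sticky sessions or session replication then determine how it scales across nodes.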

    Read the article

  • Oracle Unbreakable Enterprise Kernel and Emulex HBA Eliminate Silent Data Corruption

    - by sergio.leunissen
    Yesterday, Emulex announced that it has added support for T10 Protection Information (T10-PI), formerly called T10-DIF, to a number of its HBAs. When used with Oracle's Unbreakable Enterprise Kernel, this will prevent silent data corruption and help ensure the integrity and regulatory compliance of user data as it is transferred from the application to the SAN. From the press release: "Traditionally, protecting the integrity of customers' data has been done with multiple discrete solutions, including Error Correcting Code (ECC) and Cyclic Redundancy Check (CRC), but there have been coverage gaps across the I/O path from the operating system to the storage. The implementation of the T10-PI standard via Emulex's BlockGuard feature, in conjunction with other industry players' implementations, ensures that data is validated as it moves through the data path, from the application, to the HBA, to storage, enabling seamless end-to-end integrity." Read the white paper and don't miss the live webcast on eliminating silent data corruption on December 16th!

    Read the article

  • Google I/O 2012 - OAuth 2.0 for Identity and Data Access

    Presented by Ryan Boyd. Users like to keep their data in one place on the web where it's easily accessible. Whether it's YouTube videos, Google Drive files, Google contacts or one of many other types of data, users need a way to securely grant applications access to their data. OAuth is the key web standard for delegated data access, and OAuth 2.0 is the next-generation version with additional security features. This session will cover the latest advances in how OAuth can be used for data access, but will also dive into how you can lower the barrier to entry for your application by allowing users to log in using their Google accounts. You will learn, through an example written in Python, how to use OAuth 2.0 to incorporate user identity into your web application. Best practices for desktop applications, mobile applications and server-to-server use cases will also be discussed. From: GoogleDevelopers | Time: 58:56
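
    As a very small sketch of the kind of flow the session describes (shown here in Java rather than the talk's Python; the Google endpoint URL and scope string are era-specific assumptions for illustration, not checked references), the first leg of the authorization-code flow is simply redirecting the user to an authorization URL built from your client ID, redirect URI and requested scopes:

        import java.io.UnsupportedEncodingException;
        import java.net.URLEncoder;

        public class OAuthAuthorizeUrl {
            // Assumed endpoint for illustration only; consult Google's current
            // OAuth 2.0 documentation for the authoritative URLs and scopes.
            private static final String AUTH_ENDPOINT = "https://accounts.google.com/o/oauth2/auth";

            public static String build(String clientId, String redirectUri, String scope, String state)
                    throws UnsupportedEncodingException {
                String enc = "UTF-8";
                return AUTH_ENDPOINT
                        + "?response_type=code"                       // ask for an authorization code
                        + "&client_id=" + URLEncoder.encode(clientId, enc)
                        + "&redirect_uri=" + URLEncoder.encode(redirectUri, enc)
                        + "&scope=" + URLEncoder.encode(scope, enc)   // e.g. an email/profile or Drive scope
                        + "&state=" + URLEncoder.encode(state, enc);  // anti-CSRF token you verify on return
            }

            public static void main(String[] args) throws Exception {
                System.out.println(build("my-client-id.apps.googleusercontent.com",
                        "https://example.com/oauth2callback",
                        "https://www.googleapis.com/auth/userinfo.email",
                        "random-state-123"));
            }
        }

    After the user consents, the provider redirects back with a code parameter that the server exchanges at the token endpoint for an access token (and, for identity, an ID token).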

    Read the article

  • Aggregating cache data from OCEP in CQL

    - by Manju James
    There are several use cases where OCEP applications need to join stream data with external data, such as data available in a Coherence cache. OCEP’s streaming language, CQL, supports simple cache-key based joins of stream data with data in Coherence (more complex queries will be supported in a future release). However, there are instances where you may need to aggregate the data in Coherence based on input data from a stream. This blog describes a sample that does just that.

    For our sample, we will use a simplified credit card fraud detection use case. The input to this sample application is a stream of credit card transaction data. The input stream contains information like the credit card ID, transaction time and transaction amount. The purpose of this application is to detect suspicious transactions and send out a warning event. For the sake of simplicity, we will assume that all transactions with amounts greater than $1000 are suspicious. The transaction history is available in a Coherence distributed cache. For every suspicious transaction detected, a warning event must be sent with the maximum amount, total amount and total number of transactions over the past 30 days, as shown in the diagram below.

    Application Input
    The stream input to the EPN contains events of type CCTransactionEvent. This input has to be joined with the cache of all credit card transactions. The cache is configured in the EPN as shown below:

        <wlevs:caching-system id="CohCacheSystem" provider="coherence"/>
        <wlevs:cache id="CCTransactionsCache" value-type="CCTransactionEvent"
                     key-properties="cardID, transactionTime"
                     caching-system="CohCacheSystem">
        </wlevs:cache>

    Application Output
    The output that must be produced by the application is a fraud warning event. This event is configured in the Spring file as shown below. The source for the cardHistory property can be seen here.

        <wlevs:event-type type-name="FraudWarningEvent">
            <wlevs:properties type="tuple">
                <wlevs:property name="cardID" type="CHAR"/>
                <wlevs:property name="transactionTime" type="BIGINT"/>
                <wlevs:property name="transactionAmount" type="DOUBLE"/>
                <wlevs:property name="cardHistory" type="OBJECT"/>
            </wlevs:properties>
        </wlevs:event-type>

    Cache Data Aggregation using a Java Cartridge
    In the output warning event, the cardHistory property contains data from the cache aggregated over the past 30 days. To get this information, we use a Java cartridge method. This method uses Coherence’s query API on the credit card transactions cache to get the required information. Therefore, the Java cartridge method requires a reference to the cache. This may be set up by configuring it in the Spring context file as shown below:

        <bean class="com.oracle.cep.ccfraud.CCTransactionsAggregator">
            <property name="cache" ref="CCTransactionsCache"/>
        </bean>

    This is used by the Java class to set a static property:

        public void setCache(Map cache) {
            s_cache = (NamedCache) cache;
        }

    The code snippet below shows how the total of all the transaction amounts in the past 30 days is computed. The rest of the information required by the CardHistory object is calculated in a similar manner. The complete source of this class can be found here. To find out more information about using Coherence's API to query a cache, please refer to the Coherence Developer’s Guide.
        public static CreditHistoryData execute(String cardID) {
            …
            Filter filter = QueryHelper.createFilter(
                "cardID = :cardID and transactionTime > :transactionTime", map);
            CardHistoryData history = new CardHistoryData();
            Double sum = (Double) s_cache.aggregate(filter, new DoubleSum("getTransactionAmount"));
            history.setTotalAmount(sum);
            …
            return history;
        }

    The Java cartridge method is used from CQL as seen below:

        select cardID, transactionTime, transactionAmount,
               CCTransactionsAggregator.execute(cardID) as cardHistory
        from inputChannel
        where transactionAmount > 1000

    This produces a warning event, with history data, for every credit card transaction over $1000. That is all there is to it. The complete source for the sample application, along with the configuration files, is available here. In the sample, I use a simple Java bean to load the cache with initial transaction history data. An input adapter is used to create and send transaction events for the input stream.
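
    For readers wondering what the cache-loading bean mentioned above might look like, here is a minimal hypothetical sketch (not the sample's actual code) that seeds the Coherence cache with a couple of simplified transaction entries; the real sample may wire the cache through Spring instead, and this standalone version uses CacheFactory only for brevity:

        import com.tangosol.net.CacheFactory;
        import com.tangosol.net.NamedCache;
        import java.io.Serializable;

        // Hypothetical loader used only to seed test data into the transactions cache.
        public class TransactionHistoryLoader {

            // Simplified stand-in for the sample's CCTransactionEvent.
            public static class CCTransaction implements Serializable {
                public final String cardID;
                public final long transactionTime;
                public final double transactionAmount;

                public CCTransaction(String cardID, long transactionTime, double transactionAmount) {
                    this.cardID = cardID;
                    this.transactionTime = transactionTime;
                    this.transactionAmount = transactionAmount;
                }

                public double getTransactionAmount() { return transactionAmount; }
            }

            public static void main(String[] args) {
                NamedCache cache = CacheFactory.getCache("CCTransactionsCache");
                long now = System.currentTimeMillis();
                long oneDay = 86400000L;
                // Key by cardID plus transactionTime to mirror the key-properties
                // configured for the cache in the EPN.
                cache.put("4111-1111#" + (now - oneDay), new CCTransaction("4111-1111", now - oneDay, 250.0));
                cache.put("4111-1111#" + now, new CCTransaction("4111-1111", now, 1200.0));
                CacheFactory.shutdown();
            }
        }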

    Read the article

  • Logical and Physical Modeling for Analytical Applications

    - by Dejan Sarka
    I am proud to announce that my first course for Pluralsight has been released. The course title is Logical and Physical Modeling for Analytical Applications. Here is the description of the course. A bad data model leads to an application that does not perform well. Therefore, when developing an application, you should create a good data model from the start. However, even the best logical model can’t help when the physical implementation is bad. It is also important to know how SQL Server stores and accesses data, and how to optimize the data access. Database optimization starts by splitting transactional and analytical applications. In this course, you learn how to support analytical applications with logical design, gain an understanding of the problems with data access for queries that deal with large amounts of data, and learn about SQL Server optimizations that help solve these problems. Enjoy the course!

    Read the article

  • Telerik Releases the Data Service Wizard

    After a great beta cycle, Telerik is proud to announce today the commercial availability of the OpenAccess Data Service Wizard. You can download it and install it with Telerik OpenAccess Q1 2010 for both Visual Studio 2008 and 2010 RTM. If you are new to the Data Service Wizard, it is a great tool that will allow you to point a wizard at your OpenAccess-generated data access classes and automatically build a WCF, Astoria (WCF Data Services), REST or ATOMPub collection endpoint, complete with the CRUD methods if applicable. If you are familiar with the Data Service Wizard already, there will be two new surprises in the release version. If you generated a domain model with the new OpenAccess Visual Entity Designer, you have only one file added to your project, mydomainmodel.rlinq for example. The first surprise of the new Data Service Wizard is that if you right-click on the domain model in Visual Studio, ...

    Read the article

  • MSSQL: Copying data from one database to another

    - by DigiMortal
    I have a database with data imported from another server using the import and export wizard of SQL Server Management Studio. There is also an empty database with the same tables, but that one also has primary keys, foreign keys and indexes. How do I get the data from the first database to the other? Here is the description of my crusade. And believe me – it is not a nice one.

    Bugs in import and export wizard
    There are some awful bugs in the import and export wizard that make data imports and exports possible only in a very limited manner: the wizard is not able to analyze foreign keys, and the wizard always wants to create tables, whatever you say in the settings. The result is a faulty and useless package. Now let’s go step by step and make things work in our scenario.

    Database
    There are two databases. Let’s name them like this:
    - PLAIN – contains data imported from the remote server (no indexes, no keys, no nothing, just plain dumb data)
    - CORRECT – empty database with the same structure as the remote database (indexes, keys and everything else, but no data)
    Our goal is to get data from PLAIN to CORRECT.

    1. Create the import and export package
    At this point we will create the faulty SSIS package using SQL Server Management Studio.
    - Run the import and export wizard and let it create an SSIS package that reads data from CORRECT and writes it to, let’s say, CORRECT-2.
    - Make sure you enable identity insert.
    - Make sure there are no views selected.
    - Make sure you don’t let the package create tables (you can miss this step because it wants to create tables anyway).
    - Save the package to SSIS.

    2. Modify the import and export package
    Now let’s clean up the package and remove all the faulty crap.
    - Connect SQL Server Management Studio to the SSIS instance.
    - Select the package you just saved and export it to your hard disc.
    - Run Business Intelligence Studio.
    - Create a new SSIS project (DON’T MISS THIS STEP).
    - Add the package from disc as an existing item to the project and open it.
    - Move to the Control Flow page and do one of the following: remove all preparation SQL tasks and connect the Data Flow tasks, or modify all preparation SQL tasks so the existence of each table is checked before the table is created (yes, you have to do it manually).
    - Add a new Execute SQL task as the first task in the control flow: open the task properties, assign the destination connection as the connection to use, and insert the following SQL as the command, then save the task:

        EXEC sp_MSForEachTable 'ALTER TABLE ? NOCHECK CONSTRAINT ALL'
        GO

        EXEC sp_MSForEachTable 'DELETE FROM ?'
        GO

    - Add a new Execute SQL task as the last task in the control flow: open the task properties, assign the destination connection as the connection to use, and insert the following SQL as the command, then save the task:

        EXEC sp_MSForEachTable 'ALTER TABLE ? CHECK CONSTRAINT ALL'
        GO

    - Now connect the first Execute SQL task with the first Data Flow task, and the last Data Flow task with the second Execute SQL task.
    - Move to the Package Explorer tab and change the connections under the Connection Managers folder: make the source connection use database PLAIN and the destination connection use database CORRECT.
    - Save the package and rebuild the project.
    - Update the package using SQL Server Management Studio. Some hints: make sure you take the package from the solution folder because it is saved there now; don’t overwrite the existing package – use a numeric suffix and let Management Studio create a new version of the package.

    Now you are done with your package. Run it to test it and clean out all the errors you find.

    TRUNCATE vs DELETE
    You can see that I used DELETE FROM instead of TRUNCATE. Why? Because TRUNCATE has some nasty limits (taken from MSDN): “You cannot use TRUNCATE TABLE on a table referenced by a FOREIGN KEY constraint; instead, use DELETE statement without a WHERE clause. Because TRUNCATE TABLE is not logged, it cannot activate a trigger. TRUNCATE TABLE may not be used on tables participating in an indexed view.” As I am not sure what tables you have and how they are used, I provided here the solution that should work for all scenarios. If you need better performance, then in some cases you can use TRUNCATE TABLE instead of DELETE.

    Conclusion
    My conclusion is bitter this time, although I am a very positive guy. It is A.D. 2010 and we still have to write stupid hacks for simple things. Simple tools that existed before are long gone and we have to live with mysterious bloatware that is our only choice when using the default tools. If you take a look at the length of this posting and the count of steps I had to do for one easy thing, you should treat it as a signal that something has gone wrong in recent years. Although I got my job done, I would still be happier if the out-of-the-box tools become more intelligent one day.

    References
    - T-SQL Trick for Deleting All Data in Your Database (Mauro Cardarelli)
    - TRUNCATE TABLE (MSDN Library)
    - Error Handling in SQL 2000 – a Background (Erland Sommarskog)
    - Disable/Enable Foreign Key and Check constraints in SQL Server (Decipher)

    Read the article

  • Enterprise MDM: Rationalizing Reference Data in a Fast Changing Environment

    - by Mala Narasimharajan
    By Rahul Kamath

    Enterprises must move at a rapid pace to establish and retain global market leadership by continuously focusing on operational efficiency, customer intimacy and relentless execution.

    Reference Data Management
    As multi-national companies with a presence in multiple industry categories, market segments, and geographies, their ability to proactively manage changes and harness them to align their front office with back-office operations and performance management initiatives is critical to make the proverbial elephant dance. Managing reference data, including types and codes, business taxonomies, complex relationships as well as mappings, represents a key component of the broader agenda for enabling flexibility and agility, without sacrificing enterprise-level consistency, regulatory compliance and control.

    Financial Transformation
    Periodically, companies find that processes implemented a decade or more ago no longer mirror the way of doing business and seek to proactively transform how they operate their business and underlying processes. Financial transformation often begins with the redesign of one’s chart of accounts. The ability to model and redesign one’s chart of accounts collaboratively, quickly validate it against historical transaction bases and secure business buy-in across multiple line-of-business stakeholders, while continuing to manage changes within the legacy general ledger systems and downstream analytical applications as the in-flight transformation is piloted, can mean the difference between controlled success and project failure.

    Attend the session titled CON8275 - Oracle Hyperion Data Relationship Management: Enabling Enterprise Transformation at Oracle OpenWorld on Monday, October 1, 2012 at 4:45pm in Ballroom A of the InterContinental Hotel to learn how Oracle’s Data Relationship Management solution can help you stay ahead of the competition and proactively harness master (and reference) data changes to transform your enterprise. Hear in-depth customer testimonials from GE Healthcare and Old Mutual South Africa to learn how others have harnessed this technology effectively to build enduring competitive advantage through business process innovation and investments in master data governance. Hear GE Healthcare discuss how DRM has enabled financial transformation, ERP consolidation, mergers and acquisitions, and the alignment of reference data across financial and management reporting applications. Also, learn how Old Mutual SA has upgraded to EBS R12 Financials and is transforming the management of its chart of accounts for corporate reporting. Separately, an esteemed panel of DRM customers including Cisco Systems, Nationwide Insurance, Ralcorp Holdings and Mentor Graphics will discuss their perspectives on how DRM has helped them address business challenges associated with enterprise MDM, including major change management initiatives such as financial transformations, corporate restructuring, mergers & acquisitions, and the rationalization of financial and analytical master reference data to support alternate business perspectives for the alignment of EPM/BI initiatives. Attend the session titled CON9377 - Customer Showcase: Success with Oracle Hyperion Data Relationship Management at OpenWorld on Thursday, October 4, 2012 at 12:45pm in Ballroom of the InterContinental Hotel to interact with our esteemed speakers first hand.

    Read the article

  • Extending Database-as-a-Service to Provision Databases with Application Data

    - by Nilesh A
    Oracle Enterprise Manager 12c Database as a Service (DBaaS) empowers Self Service/SSA users to rapidly spawn databases on demand in the cloud. The configuration and structure of provisioned databases depend on the service template selected by the Self Service user while requesting the database. In EM12c, the DBaaS Self Service/SSA Administrator has the option of hosting various service templates in the service catalog, based on underlying DBCA templates. Many times provisioned databases require production-scale data, either for UAT, testing or development purposes, and managing DBCA templates with data can be unwieldy. So, we need to populate the database using the post-deployment script option, without any additional work for the SSA users. The SSA Administrator can automate this task in a few easy steps. For details on how to set up the DBaaS Self Service Portal, refer to the DBaaS Cookbook.

    In this article, I will list the steps required to enable EM 12c DBaaS to provision databases with application data in two distinct ways, using: 1) Data Pump, 2) Transportable tablespaces (TTS). The steps listed below are just examples of how to extend EM 12c DBaaS, and you can even have your own method plugged in as part of the post-deployment script option.

    Using Data Pump to populate databases
    These are the steps to be followed to extend DBaaS using the Data Pump methodology:

    1. The production DBA should run a Data Pump export on the production database and make the dump file available to all the servers participating in the database zone [sample shown in Fig. 1]:

        -- Full export
        expdp FULL=y DUMPFILE=data_pump_dir:dpfull1%U.dmp, data_pump_dir:dpfull2%U.dmp PARALLEL=4 LOGFILE=data_pump_dir:dpexpfull.log JOB_NAME=dpexpfull

        Figure 1: Full export of the database using Data Pump

    2. Create a post-deployment SQL script [sample shown in Fig. 2]. This script can either be uploaded into the software library by the SSA Administrator or made available on a shared location accessible from the servers where databases are likely to be provisioned.

        -- Full import
        declare
            h1 NUMBER;
        begin
            -- Creating the directory object where the source database dump is backed up.
            execute immediate 'create directory DEST_LOC as''/scratch/nagrawal/OracleHomes/oradata/INITCHNG/datafile''';
            -- Running import
            h1 := dbms_datapump.open (operation => 'IMPORT', job_mode => 'FULL', job_name => 'DB_IMPORT10');
            dbms_datapump.set_parallel(handle => h1, degree => 1);
            dbms_datapump.add_file(handle => h1, filename => 'IMP_GRIDDB_FULL.LOG', directory => 'DATA_PUMP_DIR', filetype => 3);
            dbms_datapump.add_file(handle => h1, filename => 'EXP_GRIDDB_FULL_%U.DMP', directory => 'DEST_LOC', filetype => 1);
            dbms_datapump.start_job(handle => h1);
            dbms_datapump.detach(handle => h1);
        end;
        /

        Figure 2: Importing using Data Pump PL/SQL procedures

    3. Using DBCA, create a template for the production database – include all the init.ora parameters, tablespaces, datafiles and their sizes.

    4. The SSA Administrator should customize the "Create Database Deployment Procedure" and provide the DBCA template created in the previous step. In the "Additional Configuration Options" step of the Customize "Create Database Deployment Procedure" flow, provide the name of the SQL script in the Custom Script section and lock the input (shown in Fig. 3). Continue saving the deployment procedure.

        Figure 3: Using the Custom Script option for calling the import SQL

    5. Now, an SSA user can log in to the Self Service Portal and use the flow to provision a database that will also populate the data using the post-deployment step.
    Using Transportable tablespaces to populate databases
    A copy of all user/application tablespaces enables this method of populating databases. These are the steps required to extend DBaaS using transportable tablespaces:

    1. The production DBA needs to create a backup of the tablespaces. Datafiles may need conversion [such as from big endian to little endian or vice versa] based on the platforms of production and of the destination where DBaaS created the test database. Here is a sample backup script that shows how to find out whether any conversion is required, describes the steps required to convert the datafiles, and backs up the tablespaces.

    2. The SSA Administrator should copy the database (tablespace) backup datafiles and export dumps to a backup location accessible from the hosts participating in the database zone(s).

    3. Create a post-deployment SQL script. This script can either be uploaded into the software library by the SSA Administrator or made available on a shared location accessible from the servers where databases are likely to be provisioned. Here is a sample post-deployment SQL script using transportable tablespaces.

    4. Using DBCA, create a template for the production database – all the init.ora parameters should be included. NOTE: DO NOT choose to bring tablespace data into this template, as the tablespaces will be created by the post-deployment step.

    5. The SSA Administrator should customize the "Create Database Deployment Procedure" and provide the DBCA template created in the previous step. In the "Additional Configuration Options" step of the flow, provide the name of the SQL script in the Custom Script section and lock the input. Continue saving the deployment procedure.

    6. Now, an SSA user can log in to the Self Service Portal and use the flow to provision a database that will also populate the data using the post-deployment step.

    More Information:
    - Database-as-a-Service on Exadata Cloud
    - Podcast on Database as a Service using Oracle Enterprise Manager 12c
    - Oracle Enterprise Manager 12c Installation and Administration guide, Cloud Administration guide
    - DBaaS Cookbook
    - Screenwatch: Private Database Cloud: Set Up the Cloud Self-Service Portal
    - Screenwatch: Private Database Cloud: Use the Cloud Self-Service Portal

    Read the article

  • How To: Spell Check InfoPath web form in SharePoint

    - by JeremyRamos
    This post is a compiled version of Steve Cavanagh's blog post on How To: Spell Check an InfoPath form displayed via XmlFormView. Many are not able to follow Steve's instructions due to the lack of detail, so below is a downloadable zip of all the changes that need to be installed for your InfoPath spell checker.

    File contents:
    - CustomSpellCheckEntirePage.js – A customized SpellCheckEntirePage.js which includes the changes outlined in Steve's post above.
    - FormServer.aspx – Note that this will replace the existing FormServer.aspx. This file acts like a master page for all InfoPath forms, so this change will add the spell checker to every InfoPath form in the SharePoint farm. The only thing I changed here is adding the 'Spell Check' link before and after the form.
    - ReadMe.rtf – Contains instructions on where to copy the files on your MOSS WFE server.

    Read the article
