Search Results

Search found 58817 results on 2353 pages for 'data paging'.

Page 5/2353 | < Previous Page | 1 2 3 4 5 6 7 8 9 10 11 12  | Next Page >

  • Welcome Oracle Data Integration 12c: Simplified, Future-Ready Solutions with Extreme Performance

    - by Irem Radzik
    The big day for the Oracle Data Integration team has finally arrived! It is my honor to introduce you to Oracle Data Integration 12c. Today we announced the general availability of the 12c release for Oracle’s key data integration products: Oracle Data Integrator 12c and Oracle GoldenGate 12c. The new release delivers extreme performance, increases IT productivity, and simplifies deployment, while helping IT organizations keep pace with new data-oriented technology trends including cloud computing, big data analytics, and real-time business intelligence. With the 12c release Oracle becomes the new leader in data integration and replication technologies, as no other vendor offers such a complete set of data integration capabilities for pervasive, continuous access to trusted data across Oracle platforms as well as third-party systems and applications. The Oracle Data Integration 12c release addresses data-driven organizations’ critical and evolving data integration requirements under 3 key themes: Future-Ready Solutions, Extreme Performance, and Fast Time-to-Value. There are many new features that support these key differentiators for Oracle Data Integrator 12c and for Oracle GoldenGate 12c. In this first 12c blog post, I will highlight only a few. Future-Ready Solutions to Support Current and Emerging Initiatives: Oracle Data Integration offers robust and reliable solutions for key technology trends including cloud computing, big data analytics, real-time business intelligence, and continuous data availability. Via the tight integration with Oracle’s database, middleware, and application offerings, Oracle Data Integration will continue to support new features and capabilities right away as these products evolve and provide advanced features. Extreme Performance: Both GoldenGate and Data Integrator are known for their high performance. The new release widens the gap even further against the competition. Oracle GoldenGate 12c’s Integrated Delivery feature enables higher throughput via a special application programming interface into Oracle Database. As mentioned in the press release, customers already report up to 5X higher performance compared to earlier versions of GoldenGate. Oracle Data Integrator 12c introduces parallelism that significantly increases its performance as well. Fast Time-to-Value via Higher IT Productivity and Simplified Solutions: Oracle Data Integrator 12c’s new flow-based declarative UI brings superior developer productivity, ease of use, and ultimately fast time to market for end users. The ability to seamlessly reuse mapping logic also speeds development. Oracle GoldenGate 12c’s Integrated Delivery feature automatically and optimally tunes the process, saving time while improving performance. This is just a quick glimpse into Oracle Data Integrator 12c and Oracle GoldenGate 12c. On November 12th we will reveal much more about the new release in our video webcast "Introducing 12c for Oracle Data Integration". Our customer and partner speakers, including SolarWorld, BT, and Rittman Mead, will join us in launching the new release. Please join us at this free event to learn more from our executives about the 12c release, hear our customers’ perspectives on the new features, and ask your questions to our experts in the live Q&A. Also, please continue to follow our blogs, tweets, and Facebook updates as we unveil more about the new features of the latest release.

    Read the article

  • ASP.Net GridView UpdatePanel Paging Gives Error On Second Click

    - by joe
    I'm trying to implement a GridView with paging inside a UpdatePanel. Everything works great when I do my first click. The paging kicks in and the next set of data is loaded quickly. However, when I then try to click a link for another page of data, I get the following error: Sys.WebForms.PageRequestManagerServerErrorException: An unknown error occurred while processing the request on the server. The status code returned from the server was: 12030 aspx code <asp:UpdatePanel ID="UpdatePanel1" runat="server"> <contenttemplate> <asp:GridView ID="GridView1" runat="server" CellPadding="2" AllowPaging="true" AllowSorting="true" PageSize="20" OnPageIndexChanging="GridView1_PageIndexChanging" OnSorting="GridView1_PageSorting" AutoGenerateColumns="False"> <Columns> <asp:BoundField DataField="ActivityLogID" HeaderText="Activity Log ID" SortExpression="ActivityLogID" /> <asp:BoundField DataField="ActivityDate" HeaderText="Activity Date" SortExpression="ActivityDate" /> <asp:BoundField DataField="ntUserID" HeaderText="NTUserID" SortExpression="ntUserID" /> <asp:BoundField DataField="ActivityStatus" HeaderText="Activity Status" SortExpression="ActivityStatus" /> </Columns> </asp:GridView> </contenttemplate> </asp:UpdatePanel> code behind private void bindGridView(string sortExp, string sortDir) { SqlCommand mySqlCommand = new SqlCommand(sSQL, mySQLconnection); SqlDataAdapter mySqlAdapter = new SqlDataAdapter(mySqlCommand); mySqlAdapter.Fill(dtDataTable); DataView myDataView = new DataView(); myDataView = dt.DefaultView; if (sortExp != string.Empty) { myDataView.Sort = string.Format("{0} {1}", sortExp, sortDir); } GridView1.DataSource = myDataView; GridView1.DataBind(); if (mySQLconnection.State == ConnectionState.Open) { mySQLconnection.Close(); } } protected void GridView1_PageIndexChanging(object sender, GridViewPageEventArgs e) { GridView1.PageIndex = e.NewPageIndex; bindGridView(); } protected void GridView1_PageSorting(object sender, GridViewSortEventArgs e) { bindGridView(e.SortExpression, sortOrder); } any clues on what is causing the error on the second click?
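
    As posted, the helper will not compile on its own: bindGridView() is called with no arguments from GridView1_PageIndexChanging even though it is declared with two parameters, and the filled table is referenced as both dtDataTable and dt. A minimal, self-consistent version of the rebind logic is sketched below; the connection-string name and SQL text are placeholders rather than the original code, and SqlDataAdapter.Fill opens and closes the connection itself, so no explicit Open/Close calls are needed.

        using System.Configuration;
        using System.Data;
        using System.Data.SqlClient;

        // Hedged sketch: a self-consistent rebind helper (placeholder query and connection name).
        private void bindGridView(string sortExp, string sortDir)
        {
            string sql = "SELECT ActivityLogID, ActivityDate, ntUserID, ActivityStatus FROM ActivityLog";
            string connStr = ConfigurationManager.ConnectionStrings["MyDb"].ConnectionString;

            using (SqlConnection conn = new SqlConnection(connStr))
            using (SqlDataAdapter adapter = new SqlDataAdapter(sql, conn))
            {
                DataTable table = new DataTable();
                adapter.Fill(table);                                 // Fill opens/closes the connection

                DataView view = table.DefaultView;
                if (!string.IsNullOrEmpty(sortExp))
                    view.Sort = string.Format("{0} {1}", sortExp, sortDir);

                GridView1.DataSource = view;
                GridView1.DataBind();
            }
        }

        // Parameterless overload so the paging handler compiles as written above.
        private void bindGridView()
        {
            bindGridView(string.Empty, string.Empty);
        }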

    Read the article

  • MVC Paging and Sorting Patterns: How to Page or Sort Re-Using Form Criteria

    - by CRice
    What is the best ASP.NET MVC pattern for paging data when the data is filtered by form criteria? This question is similar to: http://stackoverflow.com/questions/1425000/preserve-data-in-net-mvc but surely there is a better answer? Currently, when I click the search button this action is called: [AcceptVerbs(HttpVerbs.Post)] public ActionResult Search(MemberSearchForm formSp, int? pageIndex, string sortExpression) {} That is perfect for the initial display of the results in the table. But I want to have page number links or sort expression links re-post the current form data (the user entered it the first time - persisted because it is returned as viewdata), along with extra route params 'pageIndex' or 'sortExpression', Can an ActionLink or RouteLink (which I would use for page numbers) post the form to the url they specify? <%= Html.RouteLink("page 2", "MemberSearch", new { pageIndex = 1 })%> At the moment they just do a basic redirect and do not post the form values so the search page loads fresh. In regular old web forms I used to persist the search params (MemberSearchForm) in the ViewState and have a GridView paging or sorting event reuse it.
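
    Html.ActionLink and Html.RouteLink render plain anchor tags, so they always issue GET requests and cannot re-post the form. The usual way around this is to accept the search criteria on GET so they travel in the query string: model binding then rebuilds MemberSearchForm on every pager or sort click, and each link only needs to append pageIndex and sortExpression plus the current criteria as route values (alternatively, keep the POST action and have it redirect to the GET action with the criteria appended). A rough sketch, in which the query helper and any criteria property names are illustrative:

        // Hedged sketch: a GET-able search action so pager/sort links can repeat the criteria.
        [AcceptVerbs(HttpVerbs.Get)]
        public ActionResult Search(MemberSearchForm formSp, int? pageIndex, string sortExpression)
        {
            // formSp is rebuilt from the query string on every request,
            // so page 2 is filtered exactly like page 1.
            var results = SearchMembers(formSp, pageIndex ?? 0, sortExpression); // hypothetical query helper
            ViewData["Criteria"] = formSp;                                       // echoed into pager/sort links
            return View(results);
        }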

    Read the article

  • List<T> paging asp.net

    - by user1397978
    Using a three-tier architecture, I have a list of objects List<object> careerList = new List<object>(); ModuleDTO module = new ModuleDTO(); careerList = module.getDegreeCodeByQualification(qualificationCode); which I then add to a gridview like so: gridViewMaster.DataSource = careerList; gridViewMaster.DataBind(); What I'd like to do is then enable paging on the gridview. My gridview so far is: <asp:GridView ID="gridViewMaster" runat="server" AutoGenerateColumns="False" GridLines="None" BorderWidth="1px" CellPadding="2" DataKeyNames="Grouping" ForeColor="Black" onrowdatabound="gridViewMaster_RowDataBound" CssClass="mGrid" PagerStyle-CssClass="pgr" AlternatingRowStyle-CssClass="alt" OnPageIndexChanging="gridView_PageIndexChanging" AllowPaging="True" > Is it possible to enable paging on a List<T> without having to change that list to a DataTable or DataView? If there is a way, this would help a lot. So far my events are as follows: protected void gridView_PageIndexChanging(object sender, GridViewPageEventArgs e) { gridViewMaster.PageIndex = e.NewPageIndex; List<object> careerList = new List<object>(); ModuleDTO module = new ModuleDTO(); careerList = module.getDegreeCodeByQualification(qualificationCode); ModalProgress.Show(); System.Threading.Thread.Sleep(1000); JobPanel.Visible = true; gridViewMaster.DataSource = careerList.Distinct(); gridViewMaster.DataBind(); } Someone PLEASE HELP ME!!! Thank you
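
    For what it is worth, a DataTable is not required here: the GridView's built-in paging works against any bound source that implements ICollection, which List<T> does, so the control can compute the page count and render only the current page as long as the grid is re-bound in PageIndexChanging. A trimmed sketch along those lines, reusing the ModuleDTO and qualificationCode members from the question (Distinct needs using System.Linq):

        // Hedged sketch: page a List<object> directly; no DataTable/DataView conversion needed.
        private List<object> LoadCareerList()
        {
            ModuleDTO module = new ModuleDTO();                      // the question's own DTO layer
            return module.getDegreeCodeByQualification(qualificationCode)
                         .Distinct()
                         .ToList();
        }

        protected void gridView_PageIndexChanging(object sender, GridViewPageEventArgs e)
        {
            gridViewMaster.PageIndex = e.NewPageIndex;               // GridView slices the bound list itself
            gridViewMaster.DataSource = LoadCareerList();
            gridViewMaster.DataBind();
        }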

    Read the article

  • Big Data – Final Wrap and What Next – Day 21 of 21

    - by Pinal Dave
    In yesterday’s blog post we explored various resources related to learning Big Data, and in this blog post we will wrap up this 21-day series on Big Data. I have been exploring various terms and technologies related to Big Data this entire month. It was indeed fun to write about Big Data in 21 days, but the subject of Big Data is much bigger than anyone can cover in 21 days. My first goal was to write about the basics, and I think we have got that covered pretty well. During these 21 days I have received many questions and answers related to Big Data. I have covered a few of the questions in this series, and a few more I will be covering in the coming months. Now, after understanding the Big Data basics, here is what I am personally going to do next. I thought I would share the same with you, as it will give you a good idea of how to continue your Big Data journey: build a schedule to read various Apache documentations; watch all Pluralsight courses; explore the HortonWorks Sandbox; start building a presentation about Big Data – this is a great way to learn something new; present at user group meetings on Big Data topics; write more blog posts about Big Data. I am going to continue learning about Big Data – and I want you to continue learning Big Data as well. Please leave a comment on how you are going to continue learning about Big Data. I will publish all the informative comments on this blog with due credit. I want to end this series with the infographic by UMUC. Reference: Pinal Dave (http://blog.sqlauthority.com) Filed under: Big Data, PostADay, SQL, SQL Authority, SQL Query, SQL Server, SQL Tips and Tricks, T SQL

    Read the article

  • Partner Webcast - Focus on Oracle Data Profiling and Data Quality 11g

    - by lukasz.romaszewski(at)oracle.com
    Partner Webcast: Focus on Oracle Data Profiling and Data Quality 11g – February 24th, 12am CET. Oracle offers an integrated suite of Data Quality software architected to discover and correct today's data quality problems and establish a platform prepared for tomorrow's yet unknown data challenges. Oracle Data Profiling provides data investigation, discovery, and profiling in support of quality, migration, integration, stewardship, and governance initiatives. It includes a broad range of features that expand upon basic profiling, including automated monitoring, business-rule validation, and trend analysis. Oracle Data Quality for Data Integrator provides cleansing, standardization, matching, address validation, location enrichment, and linking functions for global customer data and operational business data. It ensures that data adheres to established standards that are adaptable to fit each organization's specific needs. Both single- and double-byte data are processed in local languages to provide a unique and centralized view of customers, products and services. During this briefing, Data Integration Solution Specialists will be providing a technical overview and a walkthrough. Agenda: Oracle Data Integration Strategy overview; a focus on Oracle Data Profiling and Oracle Data Quality for Data Integrator: Oracle Data Profiling, Oracle Data Quality for Data Integrator, live demo, Q&A. Delivery Format: This FREE online LIVE eSeminar will be delivered over the Web and Conference Call. Registrations received less than 24 hours prior to start time may not receive confirmation to attend. To register, click here. For any questions please contact [email protected]

    Read the article

  • CellTable + AsyncListViewAdapter<T>, stuck at 'loading' when paging

    - by Jaroslav Záruba
    Hi I'm trying to make CellTable, AsyncListViewAdapter<T> and SimplePager<T> working together. I managed to display my data, but whenever I click either "go to start" or "go to end" button I get that 'loading' indicator which stays there till the end of days. ( AsyncListViewAdapter<T>.onRangeChanged doesn't even get called this time.) As I have only single row in my data those two buttons should be (and appear to be) disabled. I went through the code-snippets in this thread, but I can't see nothing suspicions in what I've been doing. Is there some obvious answer to a rookie mistake? I shrinked my variable names, hopefully it won't wrap too much. protected class MyAsyncAdapter extends AsyncListViewAdapter<DTO> { @Override protected void onRangeChanged(ListView<DTO> v) { // doesn't get called on go2start/go2end :( Range r = v.getRange(); fetchData(r.getStart(), r.getLength()); } } private void addTable() { // table: CellTable<DTO> table = new CellTable<DTO>(10); table.addColumn(new Column<DTO, String>(new TextCell()) { @Override public String getValue(DTO namespace) { return namespace.getName(); } }, "Name"); // pager: SimplePager<DTO> pager = new SimplePager<DTO>(table); table.setPager(pager); adapter = new MyAsyncAdapter(); adapter.addView(table); // does not make any difference: // adapter.updateDataSize(0, false); // adapter.updateDataSize(10, true); VerticalPanel vPanel = new VerticalPanel(); vPanel.add(table); vPanel.add(pager); RootLayoutPanel.get().add(vPanel); } // success-handler of my fetching AsyncCallback @Override public void onSuccess(List<DTO> data) { // AsyncCallback<List<DTO>> has start field adapter.updateViewData(start, data.size(), data); if(data.size() < length) adapter.updateDataSize(start + data.size(), true); } Regards J. Záruba

    Read the article

  • Android horizontal scollview behave like iPhone (paging)

    - by Davide Vosti
    I have a LinearLayout inside a HorizontalScrollView. The content is just an image. While scrolling, I need to achieve the same behavior you get when setting the paging option on the iPhone equivalent of the HorizontalScrollView (scrolling should stop at every page of the list, not continue moving). How is this done in Android? Should I implement this feature myself, or is there a particular property to set or a subclass of HorizontalScrollView to implement? Thanks

    Read the article

  • Big Data – Buzz Words: What is HDFS – Day 8 of 21

    - by Pinal Dave
    In yesterday’s blog post we learned what MapReduce is. In this article we will take a quick look at one of the four most important buzz words that go around Big Data – HDFS. What is HDFS? HDFS stands for Hadoop Distributed File System and it is the primary storage system used by Hadoop. It provides high-performance access to data across Hadoop clusters. It is usually deployed on low-cost commodity hardware. In commodity hardware deployments, server failures are very common, and for that reason HDFS is built to have high fault tolerance. The data transfer rate between compute nodes in HDFS is very high, which leads to a reduced risk of failure. HDFS creates smaller pieces of the big data and distributes them on different nodes. It also copies each smaller piece multiple times onto different nodes. Hence, when any node with the data crashes, the system is automatically able to use the data from a different node and continue the process. This is the key feature of the HDFS system. Architecture of HDFS The architecture of HDFS is a master/slave architecture. An HDFS cluster always consists of a single NameNode. This single NameNode is a master server and it manages the file system as well as regulates access to various files. In addition to the NameNode there are multiple DataNodes. There is always one DataNode for each data server. In HDFS a big file is split into one or more blocks and those blocks are stored in a set of DataNodes. The primary task of the NameNode is to open, close or rename files and directories and regulate access to the file system, whereas the primary task of the DataNode is to read from and write to the file system. The DataNode is also responsible for the creation, deletion or replication of the data based on instructions from the NameNode. In reality, NameNode and DataNode are software designed to run on commodity machines, built in the Java language. Visual Representation of HDFS Architecture Let us understand how HDFS works with the help of the diagram. The Client App or HDFS Client connects to the NameNode as well as the DataNodes. Client App access to the DataNodes is regulated by the NameNode, which allows the Client App to connect to a DataNode directly. A big data file is divided into multiple data blocks (let us assume that those data chunks are A, B, C and D). The Client App will later on write data blocks directly to the DataNode. The Client App does not have to write directly to all the nodes. It just has to write to any one of the nodes, and the NameNode will decide on which other DataNodes it will have to replicate the data. In our example the Client App writes directly to DataNode 1 and DataNode 3. However, data chunks are automatically replicated to other nodes. All the information, like which data block is placed on which DataNode, is written back to the NameNode. High Availability During Disaster Now, as multiple DataNodes have the same data blocks, in the case of any DataNode facing a disaster the entire process will continue, as another DataNode will assume the role of serving the specific data block that was on the failed node. This system provides very high tolerance to disaster and provides high availability. If you notice, there is only a single NameNode in our architecture. If that node fails, our entire Hadoop application will stop performing, as it is the single node where we store all the metadata. As this node is very critical, it is usually replicated on another cluster as well as on another data rack.
Though that replicated node is not operational in the architecture, it has all the necessary data to perform the task of the NameNode in case the NameNode fails. The entire Hadoop architecture is built to function smoothly even when there are node failures or hardware malfunctions. It is built on the simple concept that the data is so big that it is impossible to come up with a single piece of hardware which can manage it properly. We need lots of commodity (cheap) hardware to manage our big data, and hardware failure is part of using commodity servers. To reduce the impact of hardware failure, the Hadoop architecture is built to overcome the limitation of non-functioning hardware. Tomorrow In tomorrow’s blog post we will discuss the importance of the relational database in Big Data. Reference: Pinal Dave (http://blog.sqlauthority.com) Filed under: Big Data, PostADay, SQL, SQL Authority, SQL Query, SQL Server, SQL Tips and Tricks, T SQL

    Read the article

  • SQL SERVER – Server Side Paging in SQL Server 2011 Performance Comparison

    - by pinaldave
    Earlier, I have written about SQL SERVER – Server Side Paging in SQL Server 2011 – A Better Alternative. I got many emails asking for performance analysis of paging. Here is the quick analysis of it. The real challenge of paging is all the unnecessary IO reads from the database. Network traffic was one of the reasons why paging has become a very expensive operation. I have seen many legacy applications where a complete resultset is brought back to the application and paging has been done. As you have read earlier, SQL Server 2011 offers a better alternative to an age-old solution. This article has been divided into two parts: Test 1: Performance Comparison of the Two Different Pages on SQL Server 2011 Method In this test, we will analyze the performance of the two different pages where one is at the beginning of the table and the other one is at its end. Test 2: Performance Comparison of the Two Different Pages Using CTE (Earlier Solution from SQL Server 2005/2008) and the New Method of SQL Server 2011 We will explore this in the next article. This article will tackle test 1 first. Test 1: Retrieving Page from two different locations of the table. Run the following T-SQL Script and compare the performance. SET STATISTICS IO ON; USE AdventureWorks2008R2 GO DECLARE @RowsPerPage INT = 10, @PageNumber INT = 5 SELECT * FROM Sales.SalesOrderDetail ORDER BY SalesOrderDetailID OFFSET @PageNumber*@RowsPerPage ROWS FETCH NEXT 10 ROWS ONLY GO USE AdventureWorks2008R2 GO DECLARE @RowsPerPage INT = 10, @PageNumber INT = 12100 SELECT * FROM Sales.SalesOrderDetail ORDER BY SalesOrderDetailID OFFSET @PageNumber*@RowsPerPage ROWS FETCH NEXT 10 ROWS ONLY GO You will notice that when we are reading the page from the beginning of the table, the database pages read are much lower than when the page is read from the end of the table. This is very interesting as when the OFFSET changes, PAGE IO is increased or decreased. In the normal case of the search engine, people usually read it from the first few pages, which means that IO will be increased as we go further in the higher parts of navigation. I am really impressed because using the new method of SQL Server 2011, PAGE IO will be much lower when the first few pages are searched in the navigation. Test 2: Retrieving Page from two different locations of the table and comparing to earlier versions. In this test, we will compare the queries of the Test 1 with the earlier solution via Common Table Expression (CTE) which we utilized in SQL Server 2005 and SQL Server 2008.
Test 2 A : Page early in the table -- Test with pages early in table USE AdventureWorks2008R2 GO DECLARE @RowsPerPage INT = 10, @PageNumber INT = 5 ;WITH CTE_SalesOrderDetail AS ( SELECT *, ROW_NUMBER() OVER( ORDER BY SalesOrderDetailID) AS RowNumber FROM Sales.SalesOrderDetail PC) SELECT * FROM CTE_SalesOrderDetail WHERE RowNumber >= @PageNumber*@RowsPerPage+1 AND RowNumber <= (@PageNumber+1)*@RowsPerPage ORDER BY SalesOrderDetailID GO SET STATISTICS IO ON; USE AdventureWorks2008R2 GO DECLARE @RowsPerPage INT = 10, @PageNumber INT = 5 SELECT * FROM Sales.SalesOrderDetail ORDER BY SalesOrderDetailID OFFSET @PageNumber*@RowsPerPage ROWS FETCH NEXT 10 ROWS ONLY GO Test 2 B : Page later in the table -- Test with pages later in table USE AdventureWorks2008R2 GO DECLARE @RowsPerPage INT = 10, @PageNumber INT = 12100 ;WITH CTE_SalesOrderDetail AS ( SELECT *, ROW_NUMBER() OVER( ORDER BY SalesOrderDetailID) AS RowNumber FROM Sales.SalesOrderDetail PC) SELECT * FROM CTE_SalesOrderDetail WHERE RowNumber >= @PageNumber*@RowsPerPage+1 AND RowNumber <= (@PageNumber+1)*@RowsPerPage ORDER BY SalesOrderDetailID GO SET STATISTICS IO ON; USE AdventureWorks2008R2 GO DECLARE @RowsPerPage INT = 10, @PageNumber INT = 12100 SELECT * FROM Sales.SalesOrderDetail ORDER BY SalesOrderDetailID OFFSET @PageNumber*@RowsPerPage ROWS FETCH NEXT 10 ROWS ONLY GO From the resultset, it is very clear that the pages read with the earlier solution are always much higher than with the new technique introduced in SQL Server 2011, even if we don’t retrieve all the data to the screen. If you carefully look at both the comparisons, the PAGE IO is much lower in the case of the new technique introduced in SQL Server 2011, both when we read the page from the beginning of the table and when we read it from the end. I consider this a big improvement, as paging is one of the most used features in most applications. The solution introduced in SQL Server 2011 is very elegant because it also improves the performance of the query and, at large, the database. Reference: Pinal Dave (http://blog.SQLAuthority.com) Filed under: SQL, SQL Authority, SQL Optimization, SQL Performance, SQL Query, SQL Server, SQL Tips and Tricks, T SQL, Technology

    Read the article

  • Oracle Data Integration 12c: Perspectives of Industry Experts, Customers and Partners

    - by Irem Radzik
    As you may have seen from our recent blog posts on Oracle Data Integrator 12c and Oracle GoldenGate 12c, we are very excited to share with you the great new features the 12c release brings to Oracle’s data integration solutions. And, fortunately, we are not alone in this sentiment. Since the press announcement on October 17th, which incorporates our customers' and experts' testimonials, we have seen positive comments in leading technology publications and social media as well. Here are some examples: In CIO and PCWorld you can find Joab Jackson’s article, Oracle Data Integrator 12c ready for real-time analysis, where he wrote about the tight integration between Oracle Data Integrator and Oracle GoldenGate. He noted “Heeding the call from enterprise customers who clamor for more immediacy in their data-driven reports, Oracle has updated its data-integration software portfolio so that it can more rapidly deliver data to data warehouses and analysis applications.” Integration Developer News’ Vance McCarthy wrote the article Oracle Ships ‘Future Proofs’ Integration Tools for Traditional, Cloud, Big Data, Real-Time Projects and mentioned that “Oracle Data Integrator 12c and Oracle GoldenGate 12c sport a wide range of improvements to let devs more easily deliver data integration for cloud, analytics, big data and other new projects that leverage multiple datasets for business.” InformationWeek’s Doug Henschen gave a great overview of several key features including the new flow-based UI in Oracle Data Integrator. Doug said “Oracle Data Integrator 12c introduces a complete makeover of the job-building experience, while real-time oriented GoldenGate 12c introduces performance gains”. Database Trends and Applications’ article Oracle Strengthens Data Integration with Release of Oracle Data Integrator 12c and Oracle GoldenGate 12c highlighted the productivity aspect of the new solution with these remarks: “tight integration between Oracle Data Integrator 12c and Oracle GoldenGate 12c enables developers to leverage Oracle GoldenGate’s low overhead, real-time change data capture completely within the Oracle Data Integrator Studio without additional training”. We are also thrilled about what our customers and partners have to say about our products and the new release. And we are equally excited to share those perspectives with you in our upcoming launch video webcast on November 12th. SolarWorld Industries America’s Senior Database Manager, Russ Toyama, will join our executives in our studio in Redwood Shores to discuss GoldenGate’s core benefits and the new release, while Surren Partharb, CTO of Strategic Technology Services for BT, and Mark Rittman, CTO of Rittman Mead, will provide their comments via the interviews conducted in the UK. This interactive panel discussion in the video webcast will unveil the new release with the expertise of our development executives and the great insight from our customers and partners. In addition, our product experts will be available online to answer chat questions. This is really a great opportunity to learn how Oracle's data integration offering has changed the integration and replication technology space with the new release, and established itself as the new leader. If you have not registered for this free event yet, you can do so via this link. We will run the live event at 8am PT/4pm GMT, followed by a replay of the event with live chat for Q&A at 10am PT/6pm GMT.
The replay will be available on-demand for those who register but cannot attend either session on November 12th.

    Read the article

  • New Big Data Appliance Security Features

    - by mgubar
    The Oracle Big Data Appliance (BDA) is an engineered system for big data processing.  It greatly simplifies the deployment of an optimized Hadoop Cluster – whether that cluster is used for batch or real-time processing.  The vast majority of BDA customers are integrating the appliance with their Oracle Databases and they have certain expectations – especially around security.  Oracle Database customers have benefited from a rich set of security features:  encryption, redaction, data masking, database firewall, label based access control – and much, much more.  They want similar capabilities with their Hadoop cluster.    Unfortunately, Hadoop wasn’t developed with security in mind.  By default, a Hadoop cluster is insecure – the antithesis of an Oracle Database.  Some critical security features have been implemented – but even those capabilities are arduous to setup and configure.  Oracle believes that a key element of an optimized appliance is that its data should be secure.  Therefore, by default the BDA delivers the “AAA of security”: authentication, authorization and auditing. Security Starts at Authentication A successful security strategy is predicated on strong authentication – for both users and software services.  Consider the default configuration for a newly installed Oracle Database; it’s been a long time since you had a legitimate chance at accessing the database using the credentials “system/manager” or “scott/tiger”.  The default Oracle Database policy is to lock accounts thereby restricting access; administrators must consciously grant access to users. Default Authentication in Hadoop By default, a Hadoop cluster fails the authentication test. For example, it is easy for a malicious user to masquerade as any other user on the system.  Consider the following scenario that illustrates how a user can access any data on a Hadoop cluster by masquerading as a more privileged user.  In our scenario, the Hadoop cluster contains sensitive salary information in the file /user/hrdata/salaries.txt.  When logged in as the hr user, you can see the following files.  Notice, we’re using the Hadoop command line utilities for accessing the data: $ hadoop fs -ls /user/hrdataFound 1 items-rw-r--r--   1 oracle supergroup         70 2013-10-31 10:38 /user/hrdata/salaries.txt$ hadoop fs -cat /user/hrdata/salaries.txtTom Brady,11000000Tom Hanks,5000000Bob Smith,250000Oprah,300000000 User DrEvil has access to the cluster – and can see that there is an interesting folder called “hrdata”.  $ hadoop fs -ls /user Found 1 items drwx------   - hr supergroup          0 2013-10-31 10:38 /user/hrdata However, DrEvil cannot view the contents of the folder due to lack of access privileges: $ hadoop fs -ls /user/hrdata ls: Permission denied: user=drevil, access=READ_EXECUTE, inode="/user/hrdata":oracle:supergroup:drwx------ Accessing this data will not be a problem for DrEvil. He knows that the hr user owns the data by looking at the folder’s ACLs. To overcome this challenge, he will simply masquerade as the hr user. On his local machine, he adds the hr user, assigns that user a password, and then accesses the data on the Hadoop cluster: $ sudo useradd hr $ sudo passwd $ su hr $ hadoop fs -cat /user/hrdata/salaries.txt Tom Brady,11000000 Tom Hanks,5000000 Bob Smith,250000 Oprah,300000000 Hadoop has not authenticated the user; it trusts that the identity that has been presented is indeed the hr user. Therefore, sensitive data has been easily compromised. 
Clearly, the default security policy is inappropriate and dangerous to many organizations storing critical data in HDFS. Big Data Appliance Provides Secure Authentication The BDA provides secure authentication to the Hadoop cluster by default – preventing the type of masquerading described above. It accomplishes this thru Kerberos integration. Figure 1: Kerberos Integration The Key Distribution Center (KDC) is a server that has two components: an authentication server and a ticket granting service. The authentication server validates the identity of the user and service. Once authenticated, a client must request a ticket from the ticket granting service – allowing it to access the BDA’s NameNode, JobTracker, etc. At installation, you simply point the BDA to an external KDC or automatically install a highly available KDC on the BDA itself. Kerberos will then provide strong authentication for not just the end user – but also for important Hadoop services running on the appliance. You can now guarantee that users are who they claim to be – and rogue services (like fake data nodes) are not added to the system. It is common for organizations to want to leverage existing LDAP servers for common user and group management. Kerberos integrates with LDAP servers – allowing the principals and encryption keys to be stored in the common repository. This simplifies the deployment and administration of the secure environment. Authorize Access to Sensitive Data Kerberos-based authentication ensures secure access to the system and the establishment of a trusted identity – a prerequisite for any authorization scheme. Once this identity is established, you need to authorize access to the data. HDFS will authorize access to files using ACLs with the authorization specification applied using classic Linux-style commands like chmod and chown (e.g. hadoop fs -chown oracle:oracle /user/hrdata changes the ownership of the /user/hrdata folder to oracle). Authorization is applied at the user or group level – utilizing group membership found in the Linux environment (i.e. /etc/group) or in the LDAP server. For SQL-based data stores – like Hive and Impala – finer grained access control is required. Access to databases, tables, columns, etc. must be controlled. And, you want to leverage roles to facilitate administration. Apache Sentry is a new project that delivers fine grained access control; both Cloudera and Oracle are the project’s founding members. Sentry satisfies the following three authorization requirements: Secure Authorization:  the ability to control access to data and/or privileges on data for authenticated users. Fine-Grained Authorization:  the ability to give users access to a subset of the data (e.g. column) in a database Role-Based Authorization:  the ability to create/apply template-based privileges based on functional roles. With Sentry, “all”, “select” or “insert” privileges are granted to an object. The descendants of that object automatically inherit that privilege. A collection of privileges across many objects may be aggregated into a role – and users/groups are then assigned that role. This leads to simplified administration of security across the system. Figure 2: Object Hierarchy – granting a privilege on the database object will be inherited by its tables and views. Sentry is currently used by both Hive and Impala – but it is a framework that other data sources can leverage when offering fine-grained authorization. 
For example, one can expect Sentry to deliver authorization capabilities to Cloudera Search in the near future. Audit Hadoop Cluster Activity Auditing is a critical component to a secure system and is oftentimes required for SOX, PCI and other regulations. The BDA integrates with Oracle Audit Vault and Database Firewall – tracking different types of activity taking place on the cluster: Figure 3: Monitored Hadoop services. At the lowest level, every operation that accesses data in HDFS is captured. The HDFS audit log identifies the user who accessed the file, the time that file was accessed, the type of access (read, write, delete, list, etc.) and whether or not that file access was successful. The other auditing features include: MapReduce:  correlate the MapReduce job that accessed the file Oozie:  describes who ran what as part of a workflow Hive:  captures changes were made to the Hive metadata The audit data is captured in the Audit Vault Server – which integrates audit activity from a variety of sources, adding databases (Oracle, DB2, SQL Server) and operating systems to activity from the BDA. Figure 4: Consolidated audit data across the enterprise.  Once the data is in the Audit Vault server, you can leverage a rich set of prebuilt and custom reports to monitor all the activity in the enterprise. In addition, alerts may be defined to trigger violations of audit policies. Conclusion Security cannot be considered an afterthought in big data deployments. Across most organizations, Hadoop is managing sensitive data that must be protected; it is not simply crunching publicly available information used for search applications. The BDA provides a strong security foundation – ensuring users are only allowed to view authorized data and that data access is audited in a consolidated framework.

    Read the article

  • How to present a stable data model in a public API that allows internal data structures to be changed without breaking the public view of the data?

    - by Max Palmer
    I am in the process of developing an application that allows users to write C# scripts. These scripts allow users to call selected methods and to access and manipulate data in a document. This works well; however, in the development version scripts access the document's (internal) data structures directly. This means that if we were to change the internal data model/structure, there is a good chance that someone's script will no longer compile. We obviously want to prevent this breaking change from happening, but still want to allow the user to write sensible C# code (whilst not restricting how we develop our internal data model as a result). We therefore need to decouple our scripting API and its data structures from our internal methods and data structures. We have a few ideas as to how we might allow the user to access what is effectively a stable public version of the document's internal data*, but I wanted to throw the question out there to someone who might have some real experience of this problem. NB our internal document's data structure is quite complex and it could be quite difficult to wrap. We know we want to expose as little as possible in our public API, especially as once it's out there, it's out there for good. Can anyone help? How do scripting languages / APIs decouple their public API and data structures from their internal data structures? Is there no real alternative to having to write a complex interaction layer? If we need to do this, what's a good approach or pattern for wrapping complex data structures that include nested objects, including collections? I've looked at the API facade pattern, which looks like it's trying to address these kinds of issues, but are there alternatives? *One idea is to build a data facade that is kept stable across versions of our application. The facade exposes a set of facade data objects that are used in the script code. These maintain backwards compatibility and wrap access to our internal document's data model.
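
    To make the "data facade" idea a little more concrete, here is a minimal sketch in which every type and member name is hypothetical: the script-facing interfaces own no state and simply translate to whatever the internal model currently looks like, so internal renames and restructurings only require updating the thin wrapper, never the users' scripts.

        using System.Collections.Generic;
        using System.Linq;

        // Internal model: free to change between releases.
        internal sealed class InternalNode
        {
            public string Title;
            public List<InternalNode> Kids = new List<InternalNode>();
        }

        // Stable, script-facing surface: all that user scripts compile against.
        public interface IScriptNode
        {
            string Name { get; set; }
            IReadOnlyList<IScriptNode> Children { get; }
        }

        internal sealed class ScriptNode : IScriptNode
        {
            private readonly InternalNode _inner;
            internal ScriptNode(InternalNode inner) { _inner = inner; }

            // The wrapper translates between the public contract and internal naming,
            // so renaming Title or restructuring Kids does not break user scripts.
            public string Name
            {
                get { return _inner.Title; }
                set { _inner.Title = value; }
            }

            public IReadOnlyList<IScriptNode> Children
            {
                get { return _inner.Kids.Select(k => (IScriptNode)new ScriptNode(k)).ToList(); }
            }
        }

    Because the wrappers are created on demand, internal collections can be exposed as read-only snapshots, which also stops scripts from mutating internal structures except through members that are deliberately exposed.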

    Read the article

  • How to avoid the refetch of records when paging button is clicked on the radgrid.

    - by Pravin
    I am using the RadGrid in my web application. I want to avoid re-fetching records when a paging button is clicked on the RadGrid. I have a method SetTodaysAlerts which gets about 100 records and binds them to my RadGrid. The page size of the RadGrid is 10, hence First, Next, Previous and Last buttons are available. When I click the Next button, how can I avoid re-fetching the records again? FYI: I am using the radgrid_NeedDataSource event, which does the data fetch again when any navigation button is clicked on the RadGrid.
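
    One common way to do this is to fetch once, cache the result (Session below; Cache or ViewState work similarly), and let NeedDataSource re-bind from the cached copy, since that event fires on every paging command. The sketch assumes SetTodaysAlerts is refactored to return its DataTable instead of binding directly, and uses the usual Telerik NeedDataSource signature:

        // Hedged sketch: query once, serve paging postbacks from the cached table.
        private DataTable GetTodaysAlerts()
        {
            DataTable alerts = Session["TodaysAlerts"] as DataTable;
            if (alerts == null)
            {
                alerts = SetTodaysAlerts();                          // the expensive ~100-row fetch, done once
                Session["TodaysAlerts"] = alerts;
            }
            return alerts;
        }

        protected void radgrid_NeedDataSource(object sender, Telerik.Web.UI.GridNeedDataSourceEventArgs e)
        {
            // Fires for every paging/sorting command; no database round trip here.
            radgrid.DataSource = GetTodaysAlerts();
        }

    Remember to clear or refresh the cached entry (for example on a Refresh button or after a timeout) so the grid does not keep serving stale alerts.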

    Read the article

  • Recordset Paging works in IIS6 but not in IIS7

    - by Spudhead
    Hi there, I've got a recordset/paging set up - works fine in IIS6 but when I run the site on an IIS7 server I get the following error: Microsoft OLE DB Provider for SQL Server error '80004005' [DBNETLIB][ConnectionOpen (Connect()).]SQL Server does not exist or access denied. /orders.asp, line 197 The code looks like this: Set objPagingConn = Server.CreateObject("ADODB.Connection") objPagingConn.Open CONN_STRING Set objPagingRS = Server.CreateObject("ADODB.Recordset") objPagingRS.PageSize = iPageSize objPagingRS.CacheSize = iPageSize objPagingRS.Open strSQL, objPagingConn, adOpenStatic, adLockReadOnly, adCmdText iPageCount = objPagingRS.PageCount iRecordCount = objPagingRS.RecordCount Line 197 is the objPagingConn.Open ... line. I've got about 10 sites like this to migrate - is there a simple fix in IIS7??? Help is greatly appreciated! Many thanks, Martin

    Read the article

  • Paging enormous tables on DB2

    - by grenade
    We have a view that, without constraints, will return 90 million rows and a reporting application that needs to display paged datasets of that view. We're using nhibernate and recently noticed that its paging mechanism looks like this: select * from (select rownumber() over() as rownum, this_.COL1 as COL1_20_0_, this_.COL2 as COL2_20_0_ FROM SomeSchema.SomeView this_ WHERE this_.COL1 = 'SomeValue') as tempresult where rownum between 10 and 20 The query brings the db server to its knees. I think what's happening is that the nested query is assigning a row number to every row satisfied by the where clause before selecting the subset (rows 10 - 20). Since the nested query will return a lot of rows, the mechanism is not very efficient. I've seen lots of tips and tricks for doing this efficiently on other SQL platforms but I'm struggling to find a DB2 solution. In fact an article on IBM's own site recommends the approach that nhibernate has taken. Is there a better way?
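
    For reference, that wrapped rownumber() statement is what the DB2 dialect emits when the page is requested through NHibernate's paging API, so the query shape is controlled by NHibernate rather than by the mapping. A rough sketch of how the page is normally asked for, with a hypothetical entity mapped over the view:

        // Hedged sketch: expressing the paged query through ICriteria.
        using (var session = sessionFactory.OpenSession())
        {
            IList<ReportRow> page = session.CreateCriteria<ReportRow>()   // hypothetical entity over SomeSchema.SomeView
                .Add(Restrictions.Eq("Col1", "SomeValue"))
                .AddOrder(Order.Asc("Col1"))                              // a stable ORDER BY makes page boundaries deterministic
                .SetFirstResult(10)                                       // rows to skip
                .SetMaxResults(10)                                        // page size
                .List<ReportRow>();
        }

    If the generated SQL itself has to change (for example to pre-filter before row numbering, or to switch to a keyset-style predicate on an indexed column), a named SQL query or ISession.CreateSQLQuery gives full control over the statement while leaving the rest of the mapping intact.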

    Read the article

  • Virtual Memory and Paging

    - by Kenshin
    Hello, I am doing some exercises to understand how virtual memory and paging work; my questions are as follows: Suppose we use a paged memory with pages of 1024 bytes; the virtual address space is 8 pages, but the physical memory can only contain 4 page frames. The replacement policy is LRU. 1. What is the physical address in main memory that corresponds to virtual address 4096, and how do you get that result? 2. Same thing as question 1, but with virtual address 1024. 3. A page fault occurs when accessing a word in page 0; which page frame will be used to receive virtual page 0? Page Image
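
    A hint on the arithmetic only, since the final answers still depend on the page table shown in the image: with 1024-byte pages, the page number is the virtual address divided by 1024 and the offset is the remainder, and the physical address is frame number × 1024 + offset. So virtual address 4096 is page 4 with offset 0, and virtual address 1024 is page 1 with offset 0; each maps to frame number × 1024, where the frame number is whatever frame the page table currently assigns to that virtual page. For question 3, LRU chooses the frame holding the least recently referenced of the four resident pages to receive virtual page 0.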

    Read the article

  • Objectdatasource and Gridview : Sorting, paging, filtering

    - by Simon
    Hi there, I'm using Entity Framework 1.0 and trying to feed a GridView with an ObjectDataSource that has access to my facade. The problem is that it seems to be particularly difficult, and I haven't seen anything on the internet that really does what I want. For those who know: when a GridView is fed by an ObjectDataSource it can't sort automatically, so you must do it manually. That's not that bad. Where it becomes a nightmare is when we add paging and filter settings to the GridView's data source. After many hours searching the internet, I'm asking you guys if anyone knows a link that can explain how to mix paging, sorting and filtering for a GridView and an ObjectDataSource! Thanks in advance, and sorry for my English.
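
    When the ObjectDataSource is allowed to drive, paging and sorting stop being manual: set EnablePaging="true", SortParameterName and SelectCountMethod on the data source, and give the facade a select method whose signature includes the sort expression and the startRowIndex/maximumRows parameters (those are the default parameter names), plus a count method taking the same filter arguments; filtering then becomes just another SelectParameter passed to both methods. A rough sketch of the facade side, with a hypothetical EF context and entity:

        // Hedged sketch: facade methods shaped for ObjectDataSource paging, sorting and filtering.
        public class CustomerFacade
        {
            // SelectMethod: parameter names match the ObjectDataSource defaults
            // (startRowIndex / maximumRows) and SortParameterName="sortExpression".
            public List<Customer> GetCustomers(string nameFilter, string sortExpression,
                                               int startRowIndex, int maximumRows)
            {
                using (var ctx = new ShopEntities())                 // hypothetical EF context
                {
                    IQueryable<Customer> q = ctx.Customers;
                    if (!string.IsNullOrEmpty(nameFilter))
                        q = q.Where(c => c.Name.Contains(nameFilter));

                    // LINQ to Entities needs an ordering before Skip/Take.
                    q = sortExpression == "Name desc" ? q.OrderByDescending(c => c.Name)
                      : sortExpression == "Name"      ? q.OrderBy(c => c.Name)
                      :                                 q.OrderBy(c => c.Id);

                    return q.Skip(startRowIndex).Take(maximumRows).ToList();
                }
            }

            // SelectCountMethod: same filter parameter, no paging or sort.
            public int GetCustomersCount(string nameFilter)
            {
                using (var ctx = new ShopEntities())
                {
                    IQueryable<Customer> q = ctx.Customers;
                    if (!string.IsNullOrEmpty(nameFilter))
                        q = q.Where(c => c.Name.Contains(nameFilter));
                    return q.Count();
                }
            }
        }

    On the markup side the ObjectDataSource points at this type via TypeName, with SelectMethod="GetCustomers", SelectCountMethod="GetCustomersCount", EnablePaging="true", SortParameterName="sortExpression" and a SelectParameter (or a ControlParameter bound to a filter TextBox) named nameFilter; the GridView then only needs DataSourceID, AllowPaging and AllowSorting.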

    Read the article

  • textbox disappears in paging - php

    - by fusion
    i'm calling the search.php page via ajax to search.html. the problem is, since i've implemented paging, the textbox with the search keyword from search.html 'disappears' when the user clicks the 'Next' button [because the page goes to search.php which has no textbox element] i'd like the textbox with the search keyword to be there, when the user goes through the records via paging. how'd i achieve this? search.html: <body> <form name="myform" class="wrapper"> <input type="text" name="q" onkeyup="showPage();" class="txt_search"/> <input type="button" name="button" onclick="showPage();" class="button"/> <p> <div id="txtHint"></div> <div id="page"></div> </form> </body> search.php [the relevant part]: $self = $_SERVER['PHP_SELF']; $limit = 5; //Number of results per page $numpages=ceil($totalrows/$limit); $query = $query." ORDER BY idQuotes LIMIT " . ($page-1)*$limit . ",$limit"; $result = mysql_query($query, $conn) or die('Error:' .mysql_error()); ?> <div class="caption">Search Results</div> <div class="center_div"> <table> <?php while ($row= mysql_fetch_array($result, MYSQL_ASSOC)) { $cQuote = highlightWords(htmlspecialchars($row['cQuotes']), $search_result); ?> <tr> <td style="text-align:right; font-size:15px;"><?php h($row['cArabic']); ?></td> <td style="font-size:16px;"><?php echo $cQuote; ?></td> <td style="font-size:12px;"><?php h($row['vAuthor']); ?></td> <td style="font-size:12px; font-style:italic; text-align:right;"><?php h($row['vReference']); ?></td> </tr> <?php } ?> </table> </div> <?php //Create and print the Navigation bar $nav=""; if($page > 1) { $nav .= "<a href=\"$self?page=" . ($page-1) . "&q=" .urlencode($search_result) . "\">< Prev</a>"; $first = "<a href=\"$self?page=1&q=" .urlencode($search_result) . "\"><< First</a>" ; } else { $nav .= "&nbsp;"; $first = "&nbsp;"; } for($i = 1 ; $i <= $numpages ; $i++) { if($i == $page) { $nav .= "<B>$i</B>"; }else{ $nav .= "<a href=\"$self?page=" . $i . "&q=" .urlencode($search_result) . "\">$i</a>"; } } if($page < $numpages) { $nav .= "<a href=\"$self?page=" . ($page+1) . "&q=" .urlencode($search_result) . "\">Next ></a>"; $last = "<a href=\"$self?page=$numpages&q=" .urlencode($search_result) . "\">Last >></a> "; } else { $nav .= "&nbsp;"; $last = "&nbsp;"; } echo "<br /><br />" . $first . $nav . $last; }

    Read the article

  • ASP.NET MVC and Paging - Search & Result Scenario

    - by devforall
    I have two forms in my page, a GET and a POST, and I want to add a pager to my GET form so I can page through the results. The problem I am having is that when I move to the second page it does not display anything. I am using this library for paging: http://stephenwalther.com/Blog/archive/2008/09/18/asp-net-mvc-tip-44-create-a-pager-html-helper.aspx This is my actions code: [AcceptVerbs("GET")] public ActionResult SearchByAttraction() { return View(); } [AcceptVerbs("POST")] public ActionResult SearchByAttraction(int? id, FormCollection form) {.... } and this is what I am using on my GET form to page through: <%= Html.Pager(ViewData.Model) %> But when I do this it goes to this method: [AcceptVerbs("GET")] public ActionResult SearchByAttraction() instead of going to this: [AcceptVerbs("POST")] public ActionResult SearchByAttraction(int? id, FormCollection form) which sort of makes sense, but I can't really think of any other way of doing this. Any help would be very appreciated. Thanks
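
    That behaviour is expected: Html.Pager, like ActionLink and RouteLink, emits plain anchors, and anchors always issue GETs, so they can only ever reach the GET action. A common fix is post-redirect-get: let the POST action validate the form and then redirect to the GET action with the criteria in the query string, so the same GET action serves both the first page and every pager click. A sketch with illustrative names (the form field, query helper and page parameter are assumptions, and the pager helper may need the criteria passed along as extra route values):

        // Hedged sketch: post-redirect-get so the pager's GET links can rebuild the results.
        [AcceptVerbs(HttpVerbs.Post)]
        public ActionResult SearchByAttraction(int? id, FormCollection form)
        {
            // Bounce the posted criteria into the query string of the GET action.
            return RedirectToAction("SearchByAttraction",
                new { id, keyword = form["keyword"], page = 1 });
        }

        [AcceptVerbs(HttpVerbs.Get)]
        public ActionResult SearchByAttraction(int? id, string keyword, int? page)
        {
            var results = FindAttractions(id, keyword, page ?? 1);   // hypothetical query/paging helper
            ViewData["keyword"] = keyword;                           // echoed so the pager links can carry it forward
            return View(results);
        }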

    Read the article

  • Why Cornell University Chose Oracle Data Masking

    - by Troy Kitch
    One of the eight Ivy League schools, Cornell University found itself in the unfortunate position of having to inform over 45,000 University community members that their personal information had been breached when a laptop was stolen. To ensure this wouldn’t happen again, Cornell took steps to ensure that data used for non-production purposes is de-identified with Oracle Data Masking. A recent podcast highlights why organizations like Cornell are choosing Oracle Data Masking to irreversibly de-identify production data for use in non-production environments. Organizations often copy production data, that contains sensitive information, into non-production environments so they can test applications and systems using “real world” information. Data in non-production has increasingly become a target of cyber criminals and can be lost or stolen due to weak security controls and unmonitored access. Similar to production environments, data breaches in non-production environments can cost millions of dollars to remediate and cause irreparable harm to reputation and brand. Cornell’s applications and databases help carry out the administrative and academic mission of the university. They are running Oracle PeopleSoft Campus Solutions that include highly sensitive faculty, student, alumni, and prospective student data. This data is supported and accessed by a diverse set of developers and functional staff distributed across the university. Several years ago, Cornell experienced a data breach when an employee’s laptop was stolen.  Centrally stored backup information indicated there was sensitive data on the laptop. With no way of knowing what the criminal intended, the university had to spend significant resources reviewing data, setting up service centers to handle constituent concerns, and provide free credit checks and identity theft protection services—all of which cost money and took time away from other projects. To avoid this issue in the future Cornell came up with several options; one of which was to sanitize the testing and training environments. “The project management team was brought in and they developed a project plan and implementation schedule; part of which was to evaluate competing products in the market-space and figure out which one would work best for us.  In the end we chose Oracle’s solution based on its architecture and its functionality.” – Tony Damiani, Database Administration and Business Intelligence, Cornell University The key goals of the project were to mask the elements that were identifiable as sensitive in a consistent and efficient manner, but still support all the previous activities in the non-production environments. Tony concludes,  “What we saw was a very minimal impact on performance. The masking process added an additional three hours to our refresh window, but it was well worth that time to secure the environment and remove the sensitive data. I think some other key points you can keep in mind here is that there was zero impact on the production environment. Oracle Data Masking works in non-production environments only. 
Additionally, the risk of exposure has been significantly reduced and the impact to business was minimal.” With Oracle Data Masking organizations like Cornell can: Make application data securely available in non-production environments Prevent application developers and testers from seeing production data Use an extensible template library and policies for data masking automation Gain the benefits of referential integrity so that applications continue to work Listen to the podcast to hear the complete interview.  Learn more about Oracle Data Masking by registering to watch this SANS Institute Webcast and view this short demo.

    Read the article

  • Removing Barriers to Create Effective Data Models

    After years of creating and maintaining data models, I have started to notice common barriers that decrease the accuracy and usefulness of models. In my opinion, the main causes of these barriers are a lack of knowledge and a lack of communication within a company. The lack of knowledge in regard to data models or data modeling can take many forms. Company Culture Knowledge Whether documented or undocumented, the existing business rules of a company can affect how data is modeled. For example, if a company only allows one assigned person per customer to manipulate that customer's record, then a data model that includes an associative table joining customers and employees would be unneeded, because such a table allows a many-to-many relationship between Customers and Employees and therefore the possibility of multiple employees handling one customer. Technical Knowledge Depending on the data modeler's proficiency in modeling data, they can inadvertently cause issues and/or complications with a design without even noticing. It is important that companies share data modeling responsibilities so that models are developed from multiple perspectives of the system, the company, and the original problem. In addition, the tools that a company selects to create data models can also affect the accuracy of a model if designers are not familiar with the tools or the tools are too complex for the designer to use. Existing System Knowledge In order for a data modeler to model data for an existing system so that new changes can be applied to it, they need to know at least the basic concepts of the system so that they can work within it. This will promote reusability of data and reduce the chance of duplicating data. Project Knowledge This should be pretty obvious, but it is very hard to create an accurate data model without knowing what data needs to be modeled. I have always found it strange that I have been asked to start modeling data prior to a client formalizing any requirements. Usually when this happens I have to make several iterations to a model, and the client still does not know exactly what they want. Additional issues can arise when certain stakeholders of a project are not consulted prior to the design or after the project is over, because that can cause misunderstandings and confusion for the end user, as well as possibly not solving the original problem the project was intended to solve. One common thread between each type of knowledge is that these barriers can all be avoided through good communication. For example, if a modeler is new to a company, they should ask longer-tenured employees about any business-specific rules, documented or undocumented, that must be applied to projects in general. Furthermore, if a modeler is not really familiar with a specific data modeling tool, they need to speak up and ask for help from other employees or their manager. This will not only help the modeler in the project, but also help them in future projects that they do for the company. Additionally, if a project is not clearly defined prior to a data modeler being assigned the modeling work, then it is their responsibility to communicate with the other stakeholders to clarify any part of the project that is unclear, so that the data model that is created is accurately aligned with the project.

    Read the article

< Previous Page | 1 2 3 4 5 6 7 8 9 10 11 12  | Next Page >