Search Results

Search found 64086 results on 2564 pages for 'algebraic data types'.


  • Using the Data Form Web Part (SharePoint 2010) Site Agnostically!

    - by David Jacobus
    Originally posted on: http://geekswithblogs.net/djacobus/archive/2013/10/24/154465.aspxAs a Developer whom has worked closely with web designers (Power users) in a SharePoint environment, I have come across the issue of making the Data Form Web Part reusable across the site collection! In SharePoint 2007 it was very easy and this blog pointed the way to make it happen: Josh Gaffey's Blog. In SharePoint 2010 something changed! This method failed except for using a Data Form Web Part that pointed to a list in the Site Collection Root! I am making this discussion relative to a developer whom creates a solution (WSP) with all the artifacts embedded and the user shouldn’t have any involvement in the process except to activate features. The Scenario: 1. A Power User creates a Data Form Web Part using SharePoint Designer 2010! It is a great web part the uses all the power of SharePoint Designer and XSLT (Conditional formatting, etc.). 2. Other Users in the site collection want to use that specific web part in sub sites in the site collection. Pointing to a list with the same name, not at the site collection root! The Issues: 1. The Data Form Web Part Data Source uses a List ID (GUID) to point to the specific list. Which means a list in a sub site will have a list with a new GUID different than the one which was created with SharePoint Designer! Obviously, the List needs to be the same List (Fields, Content Types, etc.) with different data. 2. How can we make this web part site agnostic, and dependent only on the lists Name? I had this problem come up over and over and decided to put my solution forward! The Solution: 1. Use the XSL of the Data Form Web Part Created By the Power User in SharePoint Designer! 2. Extend the OOTB Data Form Web Part to use this XSL and Point to a List by name. The solution points to a hybrid solution that requires some coding (Developer) and the XSL (Power User) artifacts put together in a Visual Studio SharePoint Solution. Here are the solution steps in summary: 1. Create an empty SharePoint project in Visual Studio 2. Create a Module and Feature and put the XSL file created by the Power User into it a. Scope the feature to web 3. Create a Feature Receiver to Create the List. The same list from which the Data Form Web Part was created with by the Power User. a. Scope the feature to web 4. Create a Web Part extending the Data Form Web a. Point the Data Form Web Part to point to the List by Name b. Point the Data Form Web Part XSL link to the XSL added using the Module feature c. Scope The feature to Site i. This is because all web parts are in the site collection web part gallery. So in a Narrative Summary: We are creating a list in code which has the same name and (site Columns) as the list from which the Power User created the Data Form Web Part Using SharePoint Designer. We are creating a Web Part in code which extends the OOTB Data Form Web Part to point to a list by name and use the XSL created by the Power User. Okay! Here are the steps with images and code! At the end of this post I will provide a link to the code for a solution which works in any site! I want to TOOT the HORN for the power of this solution! It is the mantra a use with all my clients! What is a basic skill a SharePoint Developer: Create an application that uses the data from a SharePoint list and make that data visible to the user in a manner which meets requirements! 
Create an Empty SharePoint 2010 Project Here I am naming my Project DJ.DataFormWebPart Create a Code Folder Copy and paste the Extension and Utilities classes (Found in the solution provided at the end of this post) Change the Namespace to match this project The List to which the Data Form Web Part which was used to make the XSL by the Power User in SharePoint Designer is now going to be created in code! If already in code, then all the better! Here I am going to create a list in the site collection root and add some data to it! For the purpose of this discussion I will actually create this list in code before using SharePoint Designer for simplicity! So here I create the List and deploy it within this solution before I do anything else. I will use a List I created before for demo purposes. Footer List is used within the footer of my master page. Add a new Feature: Here I name the Feature FooterList and add a Feature Event Receiver: Here is the code for the Event Receiver: I have a previous blog post about adding lists in code so I will not take time to narrate this code: using System; using System.Runtime.InteropServices; using System.Security.Permissions; using Microsoft.SharePoint; using DJ.DataFormWebPart.Code; namespace DJ.DataFormWebPart.Features.FooterList { /// <summary> /// This class handles events raised during feature activation, deactivation, installation, uninstallation, and upgrade. /// </summary> /// <remarks> /// The GUID attached to this class may be used during packaging and should not be modified. /// </remarks> [Guid("a58644fd-9209-41f4-aa16-67a53af7a9bf")] public class FooterListEventReceiver : SPFeatureReceiver { SPWeb currentWeb = null; SPSite currentSite = null; const string columnGroup = "DJ"; const string ctName = "FooterContentType"; // Uncomment the method below to handle the event raised after a feature has been activated. 
public override void FeatureActivated(SPFeatureReceiverProperties properties) { using (SPWeb spWeb = properties.GetWeb() as SPWeb) { using (SPSite site = new SPSite(spWeb.Site.ID)) { using (SPWeb rootWeb = site.OpenWeb(site.RootWeb.ID)) { //add the fields addFields(rootWeb); //add content type SPContentType testCT = rootWeb.ContentTypes[ctName]; // we will not create the content type if it exists if (testCT == null) { //the content type does not exist add it addContentType(rootWeb, ctName); } if ((spWeb.Lists.TryGetList("FooterList") == null)) { //create the list if it dosen't to exist CreateFooterList(spWeb, site); } } } } } #region ContentType public void addFields(SPWeb spWeb) { Utilities.addField(spWeb, "Link", SPFieldType.URL, false, columnGroup); Utilities.addField(spWeb, "Information", SPFieldType.Text, false, columnGroup); } private static void addContentType(SPWeb spWeb, string name) { SPContentType myContentType = new SPContentType(spWeb.ContentTypes["Item"], spWeb.ContentTypes, name) { Group = columnGroup }; spWeb.ContentTypes.Add(myContentType); addContentTypeLinkages(spWeb, myContentType); myContentType.Update(); } public static void addContentTypeLinkages(SPWeb spWeb, SPContentType ct) { Utilities.addContentTypeLink(spWeb, "Link", ct); Utilities.addContentTypeLink(spWeb, "Information", ct); } private void CreateFooterList(SPWeb web, SPSite site) { Guid newListGuid = web.Lists.Add("FooterList", "Footer List", SPListTemplateType.GenericList); SPList newList = web.Lists[newListGuid]; newList.ContentTypesEnabled = true; var footer = site.RootWeb.ContentTypes[ctName]; newList.ContentTypes.Add(footer); newList.ContentTypes.Delete(newList.ContentTypes["Item"].Id); newList.Update(); var view = newList.DefaultView; //add all view fields here //view.ViewFields.Add("NewsTitle"); view.ViewFields.Add("Link"); view.ViewFields.Add("Information"); view.Update(); } } } Basically created a content type with two site columns Link and Information. I had to change some code as we are working at the SPWeb level and need Content Types at the SPSite level! I’ll use a new Site Collection for this demo (Best Practice) keep old artifacts from impinging on development: Next we will add this list to the root of the site collection by deploying this solution, add some data and then use SharePoint Designer to create a Data Form Web Part. The list has been added, now let’s add some data: Okay let’s add a Data Form Web Part in SharePoint Designer. Create a new web part page in the site pages library: I will name it TestWP.aspx and edit it in advanced mode: Let’s add an empty Data Form Web Part to the web part zone: Click on the web part to add a data source: Choose FooterList in the Data Source menu: Choose appropriate fields and select insert as multiple item view: Here is what it look like after insertion: Let’s add some conditional formatting if the information filed is not blank: Choose Create (right side) apply formatting: Choose the Information Field and set the condition not null: Click Set Style: Here is the result: Okay! Not flashy but simple enough for this demo. Remember this is the job of the Power user! All we want from this web part is the XLS-Style Sheet out of SharePoint Designer. We are going to use it as the XSL for our web part which we will be creating next. Let’s add a web part to our project extending the OOTB Data Form Web Part. 
Add new item from the Visual Studio add menu: Choose Web Part: Change WebPart to DataFormWebPart (Oh well my namespace needs some improvement, but it will sure make it readily identifiable as an extended web part!) Below is the code for this web part: using System; using System.ComponentModel; using System.Web; using System.Web.UI; using System.Web.UI.WebControls; using System.Web.UI.WebControls.WebParts; using Microsoft.SharePoint; using Microsoft.SharePoint.WebControls; using System.Text; namespace DJ.DataFormWebPart.DataFormWebPart { [ToolboxItemAttribute(false)] public class DataFormWebPart : Microsoft.SharePoint.WebPartPages.DataFormWebPart { protected override void OnInit(EventArgs e) { base.OnInit(e); this.ChromeType = PartChromeType.None; this.Title = "FooterListDF"; try { //SPSite site = SPContext.Current.Site; SPWeb web = SPContext.Current.Web; SPList list = web.Lists.TryGetList("FooterList"); if (list != null) { string queryList1 = "<Query><Where><IsNotNull><FieldRef Name='Title' /></IsNotNull></Where><OrderBy><FieldRef Name='Title' Ascending='True' /></OrderBy></Query>"; uint maximumRowList1 = 10; SPDataSource dataSourceList1 = GetDataSource(list.Title, web.Url, list, queryList1, maximumRowList1); this.DataSources.Add(dataSourceList1); this.XslLink = web.Url + "/Assests/Footer.xsl"; this.ParameterBindings = BuildDataFormParameters(); this.DataBind(); } } catch (Exception ex) { this.Controls.Add(new LiteralControl("ERROR: " + ex.Message)); } } private SPDataSource GetDataSource(string dataSourceId, string webUrl, SPList list, string query, uint maximumRow) { SPDataSource dataSource = new SPDataSource(); dataSource.UseInternalName = true; dataSource.ID = dataSourceId; dataSource.DataSourceMode = SPDataSourceMode.List; dataSource.List = list; dataSource.SelectCommand = "" + query + ""; Parameter listIdParam = new Parameter("ListID"); listIdParam.DefaultValue = list.ID.ToString( "B").ToUpper(); Parameter maximumRowsParam = new Parameter("MaximumRows"); maximumRowsParam.DefaultValue = maximumRow.ToString(); QueryStringParameter rootFolderParam = new QueryStringParameter("RootFolder", "RootFolder"); dataSource.SelectParameters.Add(listIdParam); dataSource.SelectParameters.Add(maximumRowsParam); dataSource.SelectParameters.Add(rootFolderParam); dataSource.UpdateParameters.Add(listIdParam); dataSource.DeleteParameters.Add(listIdParam); dataSource.InsertParameters.Add(listIdParam); return dataSource; } private string BuildDataFormParameters() { StringBuilder parameters = new StringBuilder("<ParameterBindings><ParameterBinding Name=\"dvt_apos\" Location=\"Postback;Connection\"/><ParameterBinding Name=\"UserID\" Location=\"CAMLVariable\" DefaultValue=\"CurrentUserName\"/><ParameterBinding Name=\"Today\" Location=\"CAMLVariable\" DefaultValue=\"CurrentDate\"/>"); parameters.Append("<ParameterBinding Name=\"dvt_firstrow\" Location=\"Postback;Connection\"/>"); parameters.Append("<ParameterBinding Name=\"dvt_nextpagedata\" Location=\"Postback;Connection\"/>"); parameters.Append("<ParameterBinding Name=\"dvt_adhocmode\" Location=\"Postback;Connection\"/>"); parameters.Append("<ParameterBinding Name=\"dvt_adhocfiltermode\" Location=\"Postback;Connection\"/>"); parameters.Append("</ParameterBindings>"); return parameters.ToString(); } } } The OnInit method we use to set the list name and the XSL Link property of the Data Form Web Part. 
We do not have the link to XSL in our Solution so we will add the XSL now: Add a Module in the Visual Studio add menu: Rename Sample.txt in the module to footer.xsl and then copy the XSL from SharePoint Designer Look at elements.xml to where the footer.xsl is being provisioned to which is Assets/footer.xsl, make sure the Web parts xsl link is pointing to this url: Okay we are good to go! Let’s check our features and package: DataFormWebPart should be scoped to site and have the web part: The Footer List feature should be scoped to web and have the Assets module (Okay, I see, a spelling issue but it won’t affect this demo) If everything is correct we should be able to click a couple of sub site feature activations and have our list and web part in a sub site. (In fact this solution can be activated anywhere) Here is the list created at SubSite1 with new data It. Next let’s add the web part on a test page and see if it works as expected: It does! So we now have a repeatable way to use a WSP to move a Data Form Web Part around our sites! Here is a link to the code: DataFormWebPart Solution

    Read the article

  • Implementing Linked Lists in C#

    - by nijhawan.saurabh
    Why? The question is why you need Linked Lists and why it is the foundation of any Abstract Data Structure. Take any of the Data Structures - Stacks, Queues, Heaps, Trees; there are two ways to go about implementing them - Using Arrays Using Linked Lists Now you use Arrays when you know about the size of the Nodes in the list at Compile time and Linked Lists are helpful where you are free to add as many Nodes to the List as required at Runtime.   How? Now, let's see how we go about implementing a Simple Linked List in C#. Note: We'd be dealing with singly linked list for time being, there's also another version of linked lists - the Doubly Linked List which maintains two pointers (NEXT and BEFORE).   Class Diagram Let's see the Class Diagram first:     Code     1 // -----------------------------------------------------------------------     2 // <copyright file="Node.cs" company="">     3 // TODO: Update copyright text.     4 // </copyright>     5 // -----------------------------------------------------------------------     6      7 namespace CSharpAlgorithmsAndDS     8 {     9     using System;    10     using System.Collections.Generic;    11     using System.Linq;    12     using System.Text;    13     14     /// <summary>    15     /// TODO: Update summary.    16     /// </summary>    17     public class Node    18     {    19         public Object data { get; set; }    20     21         public Node Next { get; set; }    22     }    23 }    24         1 // -----------------------------------------------------------------------     2 // <copyright file="LinkedList.cs" company="">     3 // TODO: Update copyright text.     4 // </copyright>     5 // -----------------------------------------------------------------------     6      7 namespace CSharpAlgorithmsAndDS     8 {     9     using System;    10     using System.Collections.Generic;    11     using System.Linq;    12     using System.Text;    13     14     /// <summary>    15     /// TODO: Update summary.    16     /// </summary>    17     public class LinkedList    18     {    19         private Node Head;    20     21         public void AddNode(Node n)    22         {    23             n.Next = this.Head;    24             this.Head = n;    25     26         }    27     28         public void printNodes()    29         {    30     31             while (Head!=null)    32             {    33                 Console.WriteLine(Head.data);    34                 Head = Head.Next;    35     36             }    37     38         }    39     }    40 }    41          1 using System;     2 using System.Collections.Generic;     3 using System.Linq;     4 using System.Text;     5      6 namespace CSharpAlgorithmsAndDS     7 {     8     class Program     9     {    10         static void Main(string[] args)    11         {    12             LinkedList ll = new LinkedList();    13             Node A = new Node();    14             A.data = "A";    15     16             Node B = new Node();    17             B.data = "B";    18     19             Node C = new Node();    20             C.data = "C";    21             ll.AddNode(A);    22             ll.AddNode(B);    23             ll.AddNode(C);    24     25             ll.printNodes();    26         }    27     }    28 }    29        Final Words This is just a start, I will add more posts on Linked List covering more operations like Delete etc. and will also explore Doubly Linked List / Implementing Stacks/ Heaps/ Trees / Queues and what not using Linked Lists.   
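
    One thing worth noting about the listing above: printNodes walks the list by advancing Head itself, so the list is empty after it has been printed once. Below is a small follow-up sketch, reusing the Node class and using directives from the listing above, that traverses with a local cursor instead and adds the Delete operation promised in the final words; treat it as one possible shape for those methods rather than the post's own continuation.

        public class LinkedList
        {
            private Node Head;

            public void AddNode(Node n)
            {
                n.Next = this.Head;
                this.Head = n;
            }

            // Walks the list with a local cursor so the list survives printing.
            public void printNodes()
            {
                Node current = this.Head;
                while (current != null)
                {
                    Console.WriteLine(current.data);
                    current = current.Next;
                }
            }

            // Removes the first node whose data equals the given value.
            public bool Delete(object value)
            {
                Node current = this.Head;
                Node previous = null;
                while (current != null)
                {
                    if (Equals(current.data, value))
                    {
                        if (previous == null)
                            this.Head = current.Next;   // deleting the head node
                        else
                            previous.Next = current.Next;
                        return true;
                    }
                    previous = current;
                    current = current.Next;
                }
                return false;                           // value not found
            }
        }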

    Read the article

  • Trace File Source Adapter

    The Trace File Source adapter is a useful addition to your SSIS toolbox.  It allows you to read 2005 and 2008 profiler traces stored as .trc files and read them into the Data Flow.  From there you can perform filtering and analysis using the power of SSIS. There is no need for a SQL Server connection this just uses the trace file. Example Usages Cache warming for SQL Server Analysis Services Reading the flight recorder Find out the longest running queries on a server Analyze statements for CPU, memory by user or some other criteria you choose Properties The Trace File Source adapter has two properties, both of which combine to control the source trace file that is read at runtime. SQL Server 2005 and SQL Server 2008 trace files are supported for both the Database Engine (SQL Server) and Analysis Services. The properties are managed by the Editor form or can be set directly from the Properties Grid in Visual Studio. Property Type Description AccessMode Enumeration This property determines how the Filename property is interpreted. The values available are: DirectInput Variable Filename String This property holds the path for trace file to load (*.trc). The value is either a full path, or the name of a variable which contains the full path to the trace file, depending on the AccessMode property. Trace Column Definition Hopefully the majority of you can skip this section entirely, but if you encounter some problems processing a trace file this may explain it and allow you to fix the problem. The component is built upon the trace management API provided by Microsoft. Unfortunately API methods that expose the schema of a trace file have known issues and are unreliable, put simply the data often differs from what was specified. To overcome these limitations the component uses  some simple XML files. These files enable the trace column data types and sizing attributes to be overridden. For example SQL Server Profiler or TMO generated structures define EventClass as an integer, but the real value is a string. TraceDataColumnsSQL.xml  - SQL Server Database Engine Trace Columns TraceDataColumnsAS.xml    - SQL Server Analysis Services Trace Columns The files can be found in the %ProgramFiles%\Microsoft SQL Server\100\DTS\PipelineComponents folder, e.g. "C:\Program Files\Microsoft SQL Server\100\DTS\PipelineComponents\TraceDataColumnsSQL.xml" "C:\Program Files\Microsoft SQL Server\100\DTS\PipelineComponents\TraceDataColumnsAS.xml" If at runtime the component encounters a type conversion or sizing error it is most likely due to a discrepancy between the column definition as reported by the API and the actual value encountered. Whilst most common issues have already been fixed through these files we have implemented specific exception traps to direct you to the files to enable you to fix any further issues due to different usage or data scenarios that we have not tested. An example error that you can fix through these files is shown below. Buffer exception writing value to column 'Column Name'. The string value is 999 characters in length, the column is only 111. Columns can be overridden by the TraceDataColumns XML files in "C:\Program Files\Microsoft SQL Server\100\DTS\PipelineComponents\TraceDataColumnsAS.xml". Installation The component is provided as an MSI file which you can download and run to install it. This simply places the files on disk in the correct locations and also installs the assemblies in the Global Assembly Cache as per Microsoft’s recommendations. 
You may need to restart the SQL Server Integration Services service, as this caches information about what components are installed, as well as restarting any open instances of Business Intelligence Development Studio (BIDS) / Visual Studio that you may be using to build your SSIS packages. Finally you will have to add the transformation to the Visual Studio toolbox manually. Right-click the toolbox, and select Choose Items.... Select the SSIS Data Flow Items tab, and then check the Trace File Source transformation in the Choose Toolbox Items window. This process has been described in detail in the related FAQ entry for How do I install a task or transform component? We recommend you follow best practice and apply the current Microsoft SQL Server Service pack to your SQL Server servers and workstations. Please note that the Microsoft Trace classes used in the component are not supported on 64-bit platforms. To use the Trace File Source on a 64-bit host you need to ensure you have the 32-bit (x86) tools available, and the way you execute your package is setup to use them, please see the help topic 64-bit Considerations for Integration Services for more details. Downloads Trace Sources for SQL Server 2005 -- Trace Sources for SQL Server 2008 Version History SQL Server 2008 Version 2.0.0.382 - SQL Sever 2008 public release. (9 Apr 2009) SQL Server 2005 Version 1.0.0.321 - SQL Server 2005 public release. (18 Nov 2008) -- Screenshots

    Read the article

  • Oracle 12c New Feature - Active Data Guard Far Sync

    - by Jian Zhang
    Overview
    ================
    Active Data Guard Far Sync is a new feature in Oracle 12c (also referred to as a Far Sync standby). A Far Sync instance sits between the primary database and a remote standby database: the primary ships redo synchronously to the Far Sync instance, and the Far Sync instance forwards that redo asynchronously to the remote standby. A Far Sync instance is lightweight: it needs only a control file and an init parameter file, receives redo into standby redo logs and archives it, but has no data files.

    When redo transport runs in Maximum Availability mode, the primary ships redo synchronously to a nearby Far Sync instance, so zero data loss protection is possible without the primary paying the network cost of shipping redo synchronously to a distant standby; the Far Sync instance then forwards the redo asynchronously to the remote standby database.

    When redo transport runs in Maximum Performance mode, the primary ships redo to the Far Sync instance, which forwards it to one or more remote standby databases, offloading the redo-forwarding work from the primary.

    A Far Sync instance does not participate in Data Guard role transitions (switchover/failover); those still take place between the primary and the standby, as in releases before 12c. To keep the same level of protection after a role transition, a second Far Sync instance is normally configured near the standby so that it can serve the new primary, and a second Far Sync instance can likewise be configured for high availability of the Far Sync instance itself.

    Far Sync architecture
    ================
    (architecture diagram)

    Creating a Far Sync instance
    ================
    1. Set up Data Guard as in 11.2; see «Active Database Duplication for A standby database».
    2. Create the control file for the Far Sync instance (only a control file and an init parameter file are needed, no data files):
    SQL> ALTER DATABASE CREATE FAR SYNC INSTANCE CONTROLFILE AS '/tmp/controlfs01.ctl';
    3. Configure the primary to ship redo synchronously to the Far Sync instance, for example via LOG_ARCHIVE_DEST_2:
    LOG_ARCHIVE_DEST_2='SERVICE=dg12cfs SYNC AFFIRM MAX_FAILURE=1 ALTERNATE=LOG_ARCHIVE_DEST_3 VALID_FOR=(ONLINE_LOGFILES,PRIMARY_ROLE) DB_UNIQUE_NAME=dg12cfs'
    4. Configure the Far Sync instance to forward redo asynchronously to the standby, via its own LOG_ARCHIVE_DEST_2:
    LOG_ARCHIVE_DEST_2='SERVICE=dg12cs ASYNC VALID_FOR=(STANDBY_LOGFILES,STANDBY_ROLE) DB_UNIQUE_NAME=dg12cs'
    5. For high availability of the Far Sync instance itself, configure a second (alternate) Far Sync instance.
    6. Verify the configuration:
    SQL> select * from V$DATAGUARD_CONFIG;

    DB_UNIQUE_NAME   PARENT_DBUN   DEST_ROLE           CURRENT_SCN   CON_ID
    ---------------  ------------  ------------------  -----------   ------
    dg12cfs          dg12cp        FAR SYNC INSTANCE        682995        0
    dg12cs           dg12cfs       PHYSICAL STANDBY         682995        0
    dg12cp           NONE          PRIMARY DATABASE         683138        0

    A more detailed walkthrough is available in Oracle_12c_Active_Data_Guard_Far_Sync_v1.pdf.

    Read the article

  • Oracle GoldenGate 12c - Leading Enterprise Replication

    - by Doug Reid
    Oracle GoldenGate 12c released  on October 17th and includes several new cutting edge features that firmly establishes GoldenGate's leader position in the data replication space.   In fact, this release more than doubles the performance of data delivery, supports Oracle's new multitenant database feature,  it's more secure, has more options for high availability, and has made great strides to simplify the configuration and deployment of the product.     Read through the press release if you haven't already and do not miss the quote from Cern's Eva Dafonte Perez, regarding Oracle GoldenGate 12c "….performs five times faster compared to previous GoldenGate versions and simplifies the management of a multi-tier environment" There are a variety of new and improved features in the Oracle GoldenGate 12c.  Here are the highlights: Optimized for Oracle Database 12c -  GoldenGate 12c is custom tailored to the unique capabilities of Oracle database 12c and out of the box GoldenGate 12c supports multitenant (pluggable database (PDB)) and non-consolidated deployments of Oracle Database 12c.   The naming convention used by database 12c is now in three parts (PDB-name, schema-name, and object name).  We have made changes to the GoldenGate capture process to support the new naming convention and streamlined the whole process so a single GoldenGate capture process is being used at the container level rather than at each individual PDB.  By having the capture process at the container level resource usage and the number of processes are reduced. To view a conceptual architecture diagram click here. Integrated Delivery for the Oracle Database - Leveraging a lightweight streaming API built exclusively for Oracle GoldenGate 12c, this process distributes load, auto tunes the degree of parallelism, scales better, and delivers blinding rates of changed data delivery to the Oracle database.  One of the goals for Oracle GoldenGate 12c was to reduce IT costs by simplifying the configuration and reduce the time to manage complex infrastructures.  In previous versions of Oracle GoldenGate, customers would split transaction loads by grouping tables into multiple different delivery processes (click here to view the previous method). Each delivery process executed independently and without any interaction or knowledge of other delivery processes.  This setup was complicated to configure and time consuming as the developer needed in-depth knowledge of the source and target schemas and the transaction profile. With GoldenGate 12c and Integrated Delivery we have made it easier to configure and faster to deploy.  To view a conceptual architecture diagram of integrated delivery click here Coordinated Delivery for Non-Oracle Databases - Coordinated Delivery orchestrates high-speed apply processes and simplifies the configuration of GoldenGate for non-Oracle targets. In Oracle GoldenGate 12c a single delivery process is used with multiple threads (click here) and key events, such as primary key updates, event markers, DDL, etc, are coordinated between the various threads to insure that the transactions are applied in the same sequence as they were captured, all while delivery improved performance.  Replication Between On-Premises and Cloud-Based systems. - The trend for business to utilize both on-premises and cloud-based systems is rising and businesses need to replicate data back and forth.   
GoldenGate 12c can be configured in a variety of ways to provide real-time replication between on-premises and cloud-based systems over both unrestricted and restricted (limited ports or HTTP tunneling) networks. Expanded Heterogeneity - It wouldn't be a GoldenGate release without new and improved platform support. Release 1 includes support for MySQL 5.6 and Sybase 15.7. In the next GoldenGate release, support will be expanded to MS SQL Server, DB2, and Teradata. Tighter Security - Oracle GoldenGate 12c is integrated with the Oracle Wallet to shield usernames and passwords using strong encryption and aliases. Customers accustomed to using the Oracle Wallet with other Oracle products will instantly be familiar with how to use this great new feature. Expanded Oracle Application and Technology Support - GoldenGate can be used along with Oracle Coherence to enable real-time changed data feeds to the Coherence cache using TopLink and the Oracle GoldenGate JMS adapter. Plus, Oracle Advanced Customer Services (ACS) now offers low-downtime E-Business Suite platform and database migrations using GoldenGate as the enabling technology. Stay tuned for more blogs on the new features and the upcoming launch webcast where we will go into them in more detail. In the meantime, make sure to read through our white paper "Oracle GoldenGate 12c Release 1 New Features Overview".

    Read the article

  • How do you clean Core Data generated models and code from a project?

    - by Hazmit
    I'm having an extremely annoying problem with Core Data in the iPhone SDK. I would say in general Core Data for the most part appears easy to use and nice to implement. I have a sqlite database that is being used as a read only reference to pull data elements out for an iPhone app. It would seem there are really mysterious issues relating to what seems to be migration of the database to the most recent versions of my schema. Why can't you just clean out your stored objects and models and let a project redo all of it when you compile next? You would think if you setup a stored object model there would be a way to just reset it and recompile. I've tried what feels like a thousand 'tips' that have been the results of hours of google searches and documentation prowling to figure out how to do this. My most recent error during compile time is below. 2010-04-07 18:23:51.891 PE[1962:207] * Terminating app due to uncaught exception 'NSInvalidArgumentException', reason: 'Can't merge models with two different entities named 'PElement'' All of this code has been working in the simulator and is only causing me troubles now because I made a change to the schema. I also have the database options for automatically migrating set as below. NSMutableDictionary *optionsDictionary = [NSMutableDictionary dictionary]; [optionsDictionary setObject:[NSNumber numberWithBool:YES] forKey:NSMigratePersistentStoresAutomaticallyOption]; [optionsDictionary setObject:[NSNumber numberWithBool:YES] forKey:NSInferMappingModelAutomaticallyOption];

    Read the article

  • Distinguishing between .NET exception types

    - by Swingline Rage
    For the love of all things holy, how do you distinguish between different "exception flavors" within the predefined .NET exception classes? For example, a piece of code might throw an XmlException under the following conditions:
    - The root element of the document is NULL
    - Invalid chars are in the document
    - The document is too long
    All of these are thrown as XmlException objects, and all of the internal "tell me more about this exception" fields (such as Exception.HResult, Exception.Data, etc.) are usually empty or null. That leaves Exception.Message as the only thing that allows you to distinguish among these exception types, and you can't really depend on it because, you guessed it, the Exception.Message string is localized and can change when the culture changes. At least that's my read on the documentation. Exception.HResult and Exception.Data are widely ignored across the .NET libraries. They are the red-headed stepchildren of the world's .NET error-handling code. And even assuming they weren't, the HRESULT type is still the worst, downright nastiest error code in the history of error codes. Why we are still looking at HRESULTs in 2010 is beyond me. If you're doing Interop or P/Invoke that's one thing, but HRESULTs have no place in System.Exception. HRESULTs are a wart on the proboscis of System.Exception. But seriously, it means I have to set up a lot of detailed, specific error-handling code in order to figure out the same information that should have been passed as part of the exception data. Exceptions are useless if they force you to work like this. What am I doing wrong?
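
    One partial workaround, sketched below under the assumption that the exceptions in question come from System.Xml: rather than parsing Exception.Message, catch the most specific exception type available and read the strongly typed members that some exception classes do expose (XmlException carries LineNumber, LinePosition and SourceUri, for instance). This does not solve the general "which flavor is it" problem, but it avoids depending on localized text.

        using System;
        using System.Xml;

        static class XmlLoader
        {
            // Loads an XML document and reports parse failures using the
            // strongly typed members of XmlException instead of its localized Message.
            public static XmlDocument TryLoad(string path)
            {
                try
                {
                    var doc = new XmlDocument();
                    doc.Load(path);
                    return doc;
                }
                catch (XmlException ex)
                {
                    // LineNumber/LinePosition are 0 when the position is unknown,
                    // e.g. for a completely empty document.
                    Console.WriteLine("XML parse error in '{0}' at line {1}, column {2}.",
                                      ex.SourceUri ?? path, ex.LineNumber, ex.LinePosition);
                    return null;
                }
                catch (System.IO.FileNotFoundException ex)
                {
                    // A more specific type than IOException, so no message sniffing is needed.
                    Console.WriteLine("File not found: {0}", ex.FileName);
                    return null;
                }
            }
        }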

    Read the article

  • How would you implement API key in WCF Data Service?

    - by rushonerok
    Is there a way to require an API key in the URL / or some other way of passing the service a private key in order to grant access to the data? I have this right now... using System; using System.Data.Services; using System.Data.Services.Common; using System.Collections.Generic; using System.Linq; using System.ServiceModel.Web; using Numina.Framework; using System.Web; using System.Configuration; [System.ServiceModel.ServiceBehavior(IncludeExceptionDetailInFaults = true)] public class odata : DataService { public static void InitializeService(DataServiceConfiguration config) { config.SetEntitySetAccessRule("*", EntitySetRights.AllRead); //config.SetServiceOperationAccessRule("*", ServiceOperationRights.All); config.DataServiceBehavior.MaxProtocolVersion = DataServiceProtocolVersion.V2; } protected override void OnStartProcessingRequest(ProcessRequestArgs args) { HttpRequest Request = HttpContext.Current.Request; if(Request["apikey"] != ConfigurationManager.AppSettings["ApiKey"]) throw new DataServiceException("ApiKey needed"); base.OnStartProcessingRequest(args); } } ...This works but it's not perfect because you cannot get at the metadata and discover the service through the Add Service Reference explorer. I could check if $metadata is in the url but it seems like a hack. Is there a better way?
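
    A hedged sketch of one way around the metadata problem, assuming the service class above: treat $metadata requests as public and require the key for everything else. The path check is still essentially the "$metadata in the URL" idea, just confined to one place; whether that is acceptable depends on whether exposing the model itself is a concern.

        protected override void OnStartProcessingRequest(ProcessRequestArgs args)
        {
            HttpRequest request = HttpContext.Current.Request;

            // Allow $metadata requests through so that "Add Service Reference"
            // can still discover the service.
            bool isMetadataRequest =
                request.RawUrl.EndsWith("$metadata", StringComparison.OrdinalIgnoreCase);

            if (!isMetadataRequest &&
                request["apikey"] != ConfigurationManager.AppSettings["ApiKey"])
            {
                throw new DataServiceException(401, "ApiKey needed");
            }

            base.OnStartProcessingRequest(args);
        }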

    Read the article

  • Unsupported operand types when value is integer

    - by Adam Tester
    I'm getting this error when trying to add 2 to an integer. I am using the Codeigniter framework. Fatal error: Unsupported operand types in D:\wamp\www\application\libraries\Gen_images.php on line 180 Here is where its called: // Now process the image var_dump($this->upload->data('image_width')); $this->gen_login->resize($file_name, $this->upload->data('image_width'), $this->upload->data('image_height')); I get the error on line 180 which is: $config['width'] = $width + 2; So I thought $width must be an array or string so I wouldn't be able to add to it, but the var dump shows this: array 'file_name' => string 'genyx_1341414096.jpg' (length=20) 'file_type' => string 'image/jpeg' (length=10) 'file_path' => string 'D:/wamp/www/beer/uploads/' (length=25) 'full_path' => string 'D:/wamp/www/beer/uploads/genyx_1341414096.jpg' (length=45) 'raw_name' => string 'genyx_1341414096' (length=16) 'orig_name' => string 'genyx_1341414096.jpg' (length=20) 'client_name' => string '294207_177080222375077_100002193022560_361510_991268937_s.jpg' (length=61) 'file_ext' => string '.jpg' (length=4) 'file_size' => float 55.85 'is_image' => boolean true 'image_width' => int 720 'image_height' => int 479 'image_type' => string 'jpeg' (length=4) 'image_size_str' => string 'width="720" height="479"' (length=24) Can anyone help me?

    Read the article

  • wsdl return an array of complex types

    - by Anand
    Hi, I have defined a web service that returns data from my MySQL database. I have written the web service in PHP. I have defined a complex type as follows:

        $server->wsdl->addComplexType(
            'Category',
            'complexType',
            'struct',
            'all',
            '',
            array(
                'category_parent_id' => array('name' => 'category_parent_id', 'type' => 'xsd:int'),
                'category_child_id'  => array('name' => 'category_child_id',  'type' => 'xsd:int'),
                'category_list'      => array('name' => 'category_list',      'type' => 'xsd:int')
            )
        );

    The above complex type is a row in a table in my database. Now my function must send an array of these rows, so how do I achieve that? My code is as follows:

        require_once('./nusoap/nusoap.php');

        $server = new soap_server;
        $server->configureWSDL('productwsdl', 'urn:productwsdl');

        // Register the data structures used by the service
        $server->wsdl->addComplexType(
            'Category',
            'complexType',
            'struct',
            'all',
            '',
            array(
                'category_parent_id' => array('name' => 'category_parent_id', 'type' => 'xsd:int'),
                'category_child_id'  => array('name' => 'category_child_id',  'type' => 'xsd:int'),
                'category_list'      => array('name' => 'category_list',      'type' => 'xsd:int')
            )
        );

        $server->register('getaproduct',                            // method name
            array(),                                                // input parameters
            //array('return' => array('result' => 'tns:Category')), // output parameters
            array('return' => 'tns:Category'),                      // output parameters
            'urn:productwsdl',                                      // namespace
            'urn:productwsdl#getaproduct',                          // soapaction
            'rpc',                                                  // style
            'encoded',                                              // use
            'Get the product categories'                            // documentation
        );

        function getaproduct() {
            $conn = mysql_connect('localhost', 'root', '');
            mysql_select_db('sssl', $conn);

            $sql = "SELECT * FROM jos_vm_category_xref";
            $q = mysql_query($sql);
            while ($r = mysql_fetch_array($q)) {
                $items[] = array(
                    'category_parent_id' => $r['category_parent_id'],
                    'category_child_id'  => $r['category_child_id'],
                    'category_list'      => $r['category_list']
                );
            }
            return $items;
        }

        // Use the request to (try to) invoke the service
        $HTTP_RAW_POST_DATA = isset($HTTP_RAW_POST_DATA) ? $HTTP_RAW_POST_DATA : '';
        $server->service($HTTP_RAW_POST_DATA);

    Read the article

  • Best fit curve for trend line

    - by Dave Jarvis
    Problem Constraints Size of the data set, but not the data itself, is known. Data set grows by one data point at a time. Trend line is graphed one data point at a time (using a spline/Bezier curve). Graphs The collage below shows data sets with reasonably accurate trend lines: The graphs are: Upper-left. By hour, with ~24 data points. Upper-right. By day for one year, with ~365 data points. Lower-left. By week for one year, with ~52 data points. Lower-right. By month for one year, with ~12 data points. User Inputs The user can select: the type of time series (hourly, daily, monthly, quarterly, annual); and the start and end dates for the time series. For example, the user could select a daily report for 30 days in June. Trend Weight To calculate the window size (i.e., the number of data points to average when calculating the trend line), the following expression is used: data points / trend weight Where data points is derived from user inputs and trend weight is 6.4. Even though a trend weight of 6.4 produces good fits, it is rather arbitrary, and might not be appropriate for different user inputs. Question How should trend weight be calculated given the constraints of this problem?

    Read the article

  • How to test a class that makes an HTTP request and parses the response data in Obj-C?

    - by GuidoMB
    I have a class that needs to make an HTTP request to a server in order to get some information. For example:

        - (NSUInteger)newsCount {
            NSHTTPURLResponse *response;
            NSError *error;
            NSURLRequest *request = ISKBuildRequestWithURL(ISKDesktopURL, ISKGet, cookie, nil, nil);
            NSData *data = [NSURLConnection sendSynchronousRequest:request returningResponse:&response error:&error];
            if (!data) {
                NSLog(@"The user's(%@) news count could not be obtained:%@", username, [error description]);
                return 0;
            }
            NSString *regExp = @"Usted tiene ([0-9]*) noticias? no leídas?";
            NSString *stringData = [[NSString alloc] initWithData:data encoding:NSASCIIStringEncoding];
            NSArray *match = [stringData captureComponentsMatchedByRegex:regExp];
            [stringData release];
            if ([match count] < 2)
                return 0;
            return [[match objectAtIndex:1] intValue];
        }

    The thing is that I'm unit testing the whole framework (using OCUnit), but I need to simulate/fake what NSURLConnection responds in order to test different scenarios, and I can't rely on the server to test my framework. So the question is: what is the best way to do this?

    Read the article

  • What makes these two R data frames not identical?

    - by Matt Parker
    UPDATE: I remembered dput() about the time Sharpie mentioned it. It's probably the row names. Back in a moment with an answer. I have two small data frames, this_tx and last_tx. They are, in every way that I can tell, completely identical. this_tx == last_tx results in a frame of identical dimensions, all TRUE. this_tx %in% last_tx, two TRUEs. Inspected visually, clearly identical. But when I call identical(this_tx, last_tx) I get a FALSE. Hilariously, even identical(str(this_tx), str(last_tx)) will return a TRUE. If I set this_tx <- last_tx, I'll get a TRUE. What is going on? I don't have the deepest understanding of R's internal mechanics, but I can't find a single difference between the two data frames. If it's relevant, the two variables in the frames are both factors - same levels, same numeric coding for the levels, both just subsets of the same original data frame. Converting them to character vectors doesn't help. Background (because I wouldn't mind help on this, either): I have records of drug treatments given to patients. Each treatment record essentially specifies a person and a date. A second table has a record for each drug and dose given during a particular treatment (usually, a few drugs are given each treatment). I'm trying to identify contiguous periods during which the person was taking the same combinations of drugs at the same doses. The best plan I've come up with is to check the treatments chronologically. If the combination of drugs and doses for treatment[i] is identical to the combination at treatment[i-1], then treatment[i] is a part of the same phase as treatment[i-1]. Of course, if I can't compare drug/dose combinations, that's right out.

    Read the article

  • Checking data of all same class elements

    - by Tiffani
    I need the code to check the data-name value of all instances of .account-select. Right now it just checks the first .account-select element and not any subsequent ones. The function right now is on click of an element such as John Smith, it checks the data-name of the .account-select lis. If the data-names are the same, it does not create a new li with the John Smith data. If no data-names are equal to John Smith, then it adds an li with John Smith. This is the JS-Fiddle I made for it so you can see what I am referring to: http://jsfiddle.net/rsxavior/vDCNy/22/ Any help would be greatly appreciated. This is the Jquery Code I am using right now. $('.account').click(function () { var acc = $(this).data("name"); var sel = $('.account-select').data("name"); if (acc === sel) { } else { $('.account-hidden-li').append('<li class="account-select" data-name="'+ $(this).data("name") +'">' + $(this).data("name") + '<a class="close bcn-close" data-dismiss="alert" href="#">&times;</a></li>'); } }); And the HTML: <ul> <li><a class="account" data-name="All" href="#">All</a></li> <li><a class="account" data-name="John Smith" href="#">John Smith</a></li> </ul> <ul class="account-hidden-li"> <ul>

    Read the article

  • How to organize live data integrity tests and code unit tests?

    - by karlthorwald
    I have several files with code testing code (which uses a "unittest" class). Later I found it would be nice to test database integrity also. I put this into a separate directory tree. (Things like the keys have correct format, parent and child nodes are pointing correctly and such.) I use the same unittest class for the integrity tests. Now I wonder if it makes really sense to keep this separate. To test the integrity of data I often duplicate parts of code that I use to test the code that handles the data. But it is not the same. The code tests use test databases (that get deleted after each test) and the integrity tests connect to the live data and analyze it. The integrity tests I want to call from cron and send an alarm if something happens in the live database. How would you handle that? Are there standards for such a setup? What is your experience? My tendency is to put everything in the same file, which would result in the code tests also being executed by the cron on the production environment.

    Read the article

  • Populate a tree from Hierarchical data using 1 LINQ statement

    - by Midhat
    Hi. I have set up this programming exercise. using System; using System.Collections.Generic; using System.Linq; using System.Text; namespace ConsoleApplication2 { class DataObject { public int ID { get; set; } public int ParentID { get; set; } public string Data { get; set; } public DataObject(int id, int pid, string data) { this.ID = id; this.ParentID = pid; this.Data = data; } } class TreeNode { public DataObject Data {get;set;} public List<DataObject> Children { get; set; } } class Program { static void Main(string[] args) { List<DataObject> data = new List<DataObject>(); data.Add(new DataObject(1, 0, "Item 1")); data.Add(new DataObject(2, 0, "Item 2")); data.Add(new DataObject(21, 2, "Item 2.1")); data.Add(new DataObject(22, 2, "Item 2.2")); data.Add(new DataObject(221, 22, "Item 2.2.1")); data.Add(new DataObject(3, 0, "Item 3")); } } } The desired output is a List of 3 treenodes, having items 1, 2 and 3. Item 2 will have a list of 2 dataobjects as its children member and so on. I have been trying to populate this tree (or rather a forest) using just 1 SLOC using LINQ. A simple group by gives me the desired data but the challenge is to organize it in TreeNode objects. Can someone give a hint or an impossibility result for this?
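
    For what it's worth, here is one shape the single LINQ statement could take against the classes above; treat it as a sketch against the sample data rather than a canonical answer. Because TreeNode.Children is typed as List<DataObject>, only one level of children is materialized per node (Item 2.2.1 cannot hang off Item 2.2 as a nested TreeNode); deeper nesting would need Children to be a List<TreeNode>.

        // One statement: pick the roots (ParentID == 0) and attach each root's
        // direct children by scanning the flat list. Paste into Main after the
        // data.Add(...) calls.
        List<TreeNode> forest = data
            .Where(d => d.ParentID == 0)
            .Select(d => new TreeNode
            {
                Data = d,
                Children = data.Where(c => c.ParentID == d.ID).ToList()
            })
            .ToList();

        // forest has 3 nodes; forest[1].Data.Data == "Item 2" and its Children
        // list holds "Item 2.1" and "Item 2.2".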

    Read the article

  • Observer Design Pattern - multiple event types

    - by David
    I'm currently implementing the Observer design pattern and using it to handle adding items to the session, creating error logs, and writing messages out to the user giving feedback on their actions (e.g. "You've just logged out!"). I began with a single method on the subject called addEvent(), but as I added more Observers I found that the parameters required to carry all the information each listener needed began to grow. I now have 3 methods called addMessage(), addStorage() and addLog(). These add data into an events array keyed by event type (e.g. log, message, storage), but I'm starting to feel that the subject now needs to know too much about the listeners that are attached. My alternative thought is to go back to addEvent() and pass an event type (e.g. USER_LOGOUT) along with the associated data, with each Observer maintaining its own list of the event types it is interested in (possibly in a switch statement), but this feels cumbersome. Also, I'd need to check that sufficient data had been passed along with the event type. What is the correct way of doing this? Please let me know if I can explain any part of this further. I hope you can help and see the problem I'm battling with.
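
    A minimal sketch of the addEvent()-style approach, written in C# since the question doesn't name a language: the subject publishes a single typed event object, and each observer declares which event types it cares about, so the subject needs no listener-specific methods. The EventType names and the Payload dictionary are illustrative assumptions, not part of the original design.

        using System;
        using System.Collections.Generic;

        enum EventType { Message, Storage, Log, UserLogout }

        sealed class AppEvent
        {
            public EventType Type { get; private set; }
            public IReadOnlyDictionary<string, object> Payload { get; private set; }

            public AppEvent(EventType type, IDictionary<string, object> payload)
            {
                // Validation of required payload keys per event type could live here.
                Type = type;
                Payload = new Dictionary<string, object>(payload);
            }
        }

        interface IAppObserver
        {
            // The subject asks rather than assumes, so it needs no knowledge of listeners.
            bool Handles(EventType type);
            void Notify(AppEvent evt);
        }

        sealed class Subject
        {
            private readonly List<IAppObserver> observers = new List<IAppObserver>();

            public void Attach(IAppObserver observer) { observers.Add(observer); }

            public void AddEvent(AppEvent evt)
            {
                foreach (var observer in observers)
                {
                    if (observer.Handles(evt.Type))
                        observer.Notify(evt);
                }
            }
        }

        sealed class MessageObserver : IAppObserver
        {
            public bool Handles(EventType type)
            {
                return type == EventType.Message || type == EventType.UserLogout;
            }

            public void Notify(AppEvent evt)
            {
                // e.g. queue "You've just logged out!" for display to the user.
                object text;
                if (evt.Payload.TryGetValue("text", out text))
                    Console.WriteLine("message: {0}", text);
            }
        }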

    Read the article

  • Is it Possible to Use Constraints on Hierarchical Data in a Self-Referential Table?

    - by pbarney
    Suppose you have the following table, intended to represent hierarchical data: +--------+-------------+ | Field | Type | +--------+-------------+ | id | int(10) | | parent | int(10) | | name | varchar(45) | +--------+-------------+ The table is self-referential in that the parent_id refers to id. So you might have the following data: +----+--------+---------------+ | id | parent | name | +----+--------+---------------+ | 1 | 0 | fruit | | 2 | 0 | vegetable | | 3 | 1 | apple | | 4 | 1 | orange | | 5 | 3 | red delicious | | 6 | 3 | granny smith | | 7 | 3 | gala | +----+--------+---------------+ Using MySQL, I am trying to impose a (self-referential) foreign key constraint upon the data to cascade on update and prevent deletion of a record if it has any "children." So I used the following: CREATE TABLE `test`.`fruit` ( `id` INT(10) UNSIGNED NOT NULL AUTO_INCREMENT, `parent` INT(10) UNSIGNED, `name` VARCHAR(45) NOT NULL, PRIMARY KEY (`id`), CONSTRAINT `fk_parent` FOREIGN KEY (`parent`) REFERENCES `fruit` (`id`) ON UPDATE CASCADE ON DELETE RESTRICT ) ENGINE = InnoDB; From what I understand, this should fit my requirements. (And parent must default to null to allow insertions, correct?) The problem is, if I change the id of a record, it will not cascade: Cannot delete or update a parent row: a foreign key constraint fails (`test`.`fruit`, CONSTRAINT `fk_parent` FOREIGN KEY (`parent`) REFERENCES `fruit` (`id`) ON UPDATE CASCADE) What am I missing? Feel free to correct me if my terminology is screwed up... I'm new to constraints.

    Read the article

  • Where can I find my iPhone app's Core Data persistent store?

    - by Dr Dork
    I'm diving into iPhone development, so I apologize in advance if this is a ridiculous question, but in a new iPad app project using the Core Data framework, here's the generated code for creating the persistentStoreCoordinator... - (NSPersistentStoreCoordinator *)persistentStoreCoordinator { if (persistentStoreCoordinator != nil) { return persistentStoreCoordinator; } NSURL *storeUrl = [NSURL fileURLWithPath: [[self applicationDocumentsDirectory] stringByAppendingPathComponent: @"ApplicationName.sqlite"]]; NSError *error = nil; persistentStoreCoordinator = [[NSPersistentStoreCoordinator alloc] initWithManagedObjectModel:[self managedObjectModel]]; if (![persistentStoreCoordinator addPersistentStoreWithType:NSSQLiteStoreType configuration:nil URL:storeUrl options:nil error:&error]) { /* Replace this implementation with code to handle the error appropriately. abort() causes the application to generate a crash log and terminate. You should not use this function in a shipping application, although it may be useful during development. If it is not possible to recover from the error, display an alert panel that instructs the user to quit the application by pressing the Home button. Typical reasons for an error here include: * The persistent store is not accessible * The schema for the persistent store is incompatible with current managed object model Check the error message to determine what the actual problem was. */ NSLog(@"Unresolved error %@, %@", error, [error userInfo]); abort(); } return persistentStoreCoordinator; } My questions are... The first time I run the app, is the ApplicationName.sqllite database created automatically if it doesn't exist? If not, when is it created? When data is added to it programmatically? Once the DB does exist, where can I locate the file? I'd like to open it with a different program so I can manually manipulate the data. Thanks so much in advance for your help! I'm going to continue researching these questions right now.

    Read the article

  • SQL Query to separate data into two fields

    - by Phillip
    I have data in one column that I want to separate into two columns. The data is separated by a comma if present. This field can have no data, one set of data, or two sets of data separated by the comma. Currently I pull the data, save it as a comma-delimited file, use FoxPro to load the data into a table and process it as needed, then re-insert the data into a different SQL table for my use. I would like to drop the FoxPro portion and have the SQL query separate the data for me. Below is a sample of what the data looks like.

    Store  Amount  Discount
    1      5.95
    1      5.95    PO^-479^2
    1      5.95    PO^-479^2
    2      5.95
    2      5.95    PO^-479^2
    2      5.95    +CA8A09^-240^4,CORDRC^-239^7
    3      5.95
    3      5.95    +CA8A09^-240^4,CORDRC^-239^7
    3      5.95    +CA8A09^-240^4,CORDRC^-239^7

    In the data above I want to sum the data in the Amount field to get a gross amount. Then pull out the specific discount amount, which is located between the caret characters, and sum it to get the total discount amount. Then add the two together to get the total net amount. The query I want to write will separate the discount field as needed (see store 2, line 3 for two discounts being applied), then pull out the value between the caret characters.

    Read the article

  • Is there a Java data structure that is effectively an ArrayList with double indices and built-in interpolation?

    - by Bob Cross
    I am looking for a pre-built Java data structure with the following characteristics: It should look something like an ArrayList but should allow indexing via double-precision rather than integers. Note that this means that it's likely that you'll see indicies that don't line up with the original data points (i.e., asking for the value that corresponds to key "1.5"). As a consequence, the value returned will likely be interpolated. For example, if the key is 1.5, the value returned could be the average of the value at key 1.0 and the value at key 2.0. The keys will be sorted but the values are not ensured to be monotonically increasing. In fact, there's no assurance that the first derivative of the values will be continuous (making it a poor fit for certain types of splines). Freely available code only, please. For clarity, I know how to write such a thing. In fact, we already have an implementation of this and some related data structures in legacy code that I want to replace due to some performance and coding issues. What I'm trying to avoid is spending a lot of time rolling my own solution when there might already be such a thing in the JDK, Apache Commons or another standard library. Frankly, that's exactly the approach that got this legacy code into the situation that it's in right now.... Is there such a thing out there in a freely available library?

    Read the article

  • MATLAB: What is an appropriate Data Structure for a Matrix with Random Variable Entries?

    - by user568249
    I'm currently working in an area that is related to simulation and trying to design a data structure that can include random variables within matrices. To motivate this let me say I have the following matrix: [a b; c d] I want to find a data structure that will allow for a, b, c, d to either be real numbers or random variables. As an example, let's say that a = 1, b = -1, c = 2 but let d be a normally distributed random variable with mean 0 and SD 1. The data structure that I have in mind will give no value to d. However, I also want to be able to design a function that can take in the structure, simulate an uniform(0,1), obtain a value for d using an inverse CDF and then spit out an actual matrix. I have several ideas to do this (all related to the MATLAB icdf function) but would like to know how more experienced programmers would do this. In this application, it's important that the structure is as "lean" as possible since I will be working with very very large matrices and memory will be an issue.

    Read the article

  • Can't return data to parent activity

    - by user23
    I'm trying to return data (the position of the item picked from the grid) to parent activity, but my code fails. The debbuger shows how 'data' gets right the key and data in the "data.putExtra("POS_ICON", position)" at child activity, but after in onActivityResult() at parent activity the debbuger shows 'data' with no key nor data returned...it's like if data loses its content. I've followed other posts and tutorials but no way. Please help. Parent activity: public void selIcono(View v){ Intent intent = new Intent (this, SelIconoActivity.class); startActivityForResult(intent,PICK_ICON_REQUEST); } protected void onActivityResult(int requestCode, int resultCode, Intent data) { //here's the problem: no data is returned!! if (requestCode == PICK_ICON_REQUEST) { if (resultCode == RESULT_OK) { // An icon was picked. putIcon(data.getIntExtra("POS_ICON", -1)); } } } Child activity: public class SelIconoActivity extends Activity { @Override public void onCreate(Bundle savedInstanceState) { super.onCreate(savedInstanceState); setContentView(R.layout.activity_sel_icono); GridView gridview = (GridView)findViewById(R.id.gr_iconos); gridview.setAdapter(new ImageAdapter (this)); gridview.setOnItemClickListener(new OnItemClickListener() { public void onItemClick(AdapterView<?> parent, View v, int position, long id) { Intent data = new Intent(); data.putExtra("POS_ICON", position); setResult(Activity.RESULT_OK, data); finish(); } }); } }

    Read the article

  • How to recover deleted files?

    - by vijay.shad
    Hi, my laptop has two OSes: one is Windows Vista and the other is Ubuntu. I am currently on the Ubuntu system; this is my primary OS. My hard disk has four partitions, including: Windows OS, Linux (Ubuntu OS), and Data. Now the problem part: the Data partition is NTFS, and I have mounted it at /media/windrive-a under Ubuntu. A little while back I decided to remove the mount of the data partition, and I fired the command rm -r /media/windrive-a/. To my shock, all my data on the Data drive is gone. I know now that this is not the command to remove a mounted partition, but the wrong has been committed. Is there any way I can get my data back? This data is very important to me. Please suggest.

    Read the article

  • Correct XML serialization and deserialization of "mixed" types in .NET

    - by Stefan
    My current task involves writing a class library for processing HL7 CDA files. These HL7 CDA files are XML files with a defined XML schema, so I used xsd.exe to generate .NET classes for XML serialization and deserialization. The XML Schema contains various types which contain the mixed="true" attribute, specifying that an XML node of this type may contain normal text mixed with other XML nodes. The relevant part of the XML schema for one of these types looks like this: <xs:complexType name="StrucDoc.Paragraph" mixed="true"> <xs:sequence> <xs:element name="caption" type="StrucDoc.Caption" minOccurs="0"/> <xs:choice minOccurs="0" maxOccurs="unbounded"> <xs:element name="br" type="StrucDoc.Br"/> <xs:element name="sub" type="StrucDoc.Sub"/> <xs:element name="sup" type="StrucDoc.Sup"/> <!-- ...other possible nodes... --> </xs:choice> </xs:sequence> <xs:attribute name="ID" type="xs:ID"/> <!-- ...other attributes... --> </xs:complexType> The generated code for this type looks like this: /// <remarks/> [System.CodeDom.Compiler.GeneratedCodeAttribute("xsd", "2.0.50727.3038")] [System.SerializableAttribute()] [System.Diagnostics.DebuggerStepThroughAttribute()] [System.ComponentModel.DesignerCategoryAttribute("code")] [System.Xml.Serialization.XmlTypeAttribute(TypeName="StrucDoc.Paragraph", Namespace="urn:hl7-org:v3")] public partial class StrucDocParagraph { private StrucDocCaption captionField; private object[] itemsField; private string[] textField; private string idField; // ...fields for other attributes... /// <remarks/> public StrucDocCaption caption { get { return this.captionField; } set { this.captionField = value; } } /// <remarks/> [System.Xml.Serialization.XmlElementAttribute("br", typeof(StrucDocBr))] [System.Xml.Serialization.XmlElementAttribute("sub", typeof(StrucDocSub))] [System.Xml.Serialization.XmlElementAttribute("sup", typeof(StrucDocSup))] // ...other possible nodes... public object[] Items { get { return this.itemsField; } set { this.itemsField = value; } } /// <remarks/> [System.Xml.Serialization.XmlTextAttribute()] public string[] Text { get { return this.textField; } set { this.textField = value; } } /// <remarks/> [System.Xml.Serialization.XmlAttributeAttribute(DataType="ID")] public string ID { get { return this.idField; } set { this.idField = value; } } // ...properties for other attributes... } If I deserialize an XML element where the paragraph node looks like this: <paragraph>first line<br /><br />third line</paragraph> The result is that the item and text arrays are read like this: itemsField = new object[] { new StrucDocBr(), new StrucDocBr(), }; textField = new string[] { "first line", "third line", }; From this there is no possible way to determine the exact order of the text and the other nodes. If I serialize this again, the result looks exactly like this: <paragraph> <br /> <br />first linethird line </paragraph> The default serializer just serializes the items first and then the text. I tried implementing IXmlSerializable on the StrucDocParagraph class so that I could control the deserialization and serialization of the content, but it's rather complex since there are so many classes involved and I didn't come to a solution yet because I don't know if the effort pays off. Is there some kind of easy workaround to this problem, or is it even possible by doing custom serialization via IXmlSerializable? Or should I just use XmlDocument or XmlReader/XmlWriter to process these documents?
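
    One workaround that avoids IXmlSerializable entirely, sketched below as an assumption-laden hand edit of the generated class (it will be lost if the class is regenerated by xsd.exe) rather than a verified fix for this exact schema: drop the separate Text property and let the Items array absorb the text nodes by adding XmlTextAttribute with a type of string. XmlSerializer is then expected to keep text and element children interleaved in document order, both when reading and when writing.

        /// <remarks/>
        [System.Xml.Serialization.XmlTypeAttribute(TypeName="StrucDoc.Paragraph", Namespace="urn:hl7-org:v3")]
        public partial class StrucDocParagraph
        {
            private object[] itemsField;

            /// <remarks/>
            [System.Xml.Serialization.XmlElementAttribute("br", typeof(StrucDocBr))]
            [System.Xml.Serialization.XmlElementAttribute("sub", typeof(StrucDocSub))]
            [System.Xml.Serialization.XmlElementAttribute("sup", typeof(StrucDocSup))]
            // ...other possible nodes...
            [System.Xml.Serialization.XmlTextAttribute(typeof(string))]
            public object[] Items
            {
                get { return this.itemsField; }
                set { this.itemsField = value; }
            }

            // caption, ID and the other attribute properties stay as generated;
            // the separate Text property is removed.
        }

        // After deserializing <paragraph>first line<br /><br />third line</paragraph>,
        // Items should hold: "first line", StrucDocBr, StrucDocBr, "third line",
        // and serializing again reproduces the original order.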

    Read the article
