Search Results


  • Currency Conversion in Oracle BI applications

    - by Saurabh Verma
    Authored by Vijay Aggarwal and Hichem Sellami A typical data warehouse contains Star and/or Snowflake schema, made up of Dimensions and Facts. The facts store various numerical information including amounts. Example; Order Amount, Invoice Amount etc. With the true global nature of business now-a-days, the end-users want to view the reports in their own currency or in global/common currency as defined by their business. This presents a unique opportunity in BI to provide the amounts in converted rates either by pre-storing or by doing on-the-fly conversions while displaying the reports to the users. Source Systems OBIA caters to various source systems like EBS, PSFT, Sebl, JDE, Fusion etc. Each source has its own unique and intricate ways of defining and storing currency data, doing currency conversions and presenting to the OLTP users. For example; EBS stores conversion rates between currencies which can be classified by conversion rates, like Corporate rate, Spot rate, Period rate etc. Siebel stores exchange rates by conversion rates like Daily. EBS/Fusion stores the conversion rates for each day, where as PSFT/Siebel store for a range of days. PSFT has Rate Multiplication Factor and Rate Division Factor and we need to calculate the Rate based on them, where as other Source systems store the Currency Exchange Rate directly. OBIA Design The data consolidation from various disparate source systems, poses the challenge to conform various currencies, rate types, exchange rates etc., and designing the best way to present the amounts to the users without affecting the performance. When consolidating the data for reporting in OBIA, we have designed the mechanisms in the Common Dimension, to allow users to report based on their required currencies. OBIA Facts store amounts in various currencies: Document Currency: This is the currency of the actual transaction. For a multinational company, this can be in various currencies. Local Currency: This is the base currency in which the accounting entries are recorded by the business. This is generally defined in the Ledger of the company. Global Currencies: OBIA provides five Global Currencies. Three are used across all modules. The last two are for CRM only. A Global currency is very useful when creating reports where the data is viewed enterprise-wide. Example; a US based multinational would want to see the reports in USD. The company will choose USD as one of the global currencies. OBIA allows users to define up-to five global currencies during the initial implementation. The term Currency Preference is used to designate the set of values: Document Currency, Local Currency, Global Currency 1, Global Currency 2, Global Currency 3; which are shared among all modules. There are four more currency preferences, specific to certain modules: Global Currency 4 (aka CRM Currency) and Global Currency 5 which are used in CRM; and Project Currency and Contract Currency, used in Project Analytics. When choosing Local Currency for Currency preference, the data will show in the currency of the Ledger (or Business Unit) in the prompt. So it is important to select one Ledger or Business Unit when viewing data in Local Currency. More on this can be found in the section: Toggling Currency Preferences in the Dashboard. Design Logic When extracting the fact data, the OOTB mappings extract and load the document amount, and the local amount in target tables. It also loads the exchange rates required to convert the document amount into the corresponding global amounts. 
If the source system only provides the document amount in the transaction, the extract mapping does a lookup to get the Local currency code, and the Local exchange rate. The Load mapping then uses the local currency code and rate to derive the local amount. The load mapping also fetches the Global Currencies and looks up the corresponding exchange rates. The lookup of exchange rates is done via the Exchange Rate Dimension provided as a Common/Conforming Dimension in OBIA. The Exchange Rate Dimension stores the exchange rates between various currencies for a date range and Rate Type. Two physical tables W_EXCH_RATE_G and W_GLOBAL_EXCH_RATE_G are used to provide the lookups and conversions between currencies. The data is loaded from the source system’s Ledger tables. W_EXCH_RATE_G stores the exchange rates between currencies with a date range. On the other hand, W_GLOBAL_EXCH_RATE_G stores the currency conversions between the document currency and the pre-defined five Global Currencies for each day. Based on the requirements, the fact mappings can decide and use one or both tables to do the conversion. Currency design in OBIA also taps into the MLS and Domain architecture, thus allowing the users to map the currencies to a universal Domain during the implementation time. This is especially important for companies deploying and using OBIA with multiple source adapters. Some Gotchas to Look for It is necessary to think through the currencies during the initial implementation. 1) Identify various types of currencies that are used by your business. Understand what will be your Local (or Base) and Documentation currency. Identify various global currencies that your users will want to look at the reports. This will be based on the global nature of your business. Changes to these currencies later in the project, while permitted, but may cause Full data loads and hence lost time. 2) If the user has a multi source system make sure that the Global Currencies and Global Rate Types chosen in Configuration Manager do have the corresponding source specific counterparts. In other words, make sure for every DW specific value chosen for Currency Code or Rate Type, there is a source Domain mapping already done. Technical Section This section will briefly mention the technical scenarios employed in the OBIA adaptors to extract data from each source system. In OBIA, we have two main tables which store the Currency Rate information as explained in previous sections. W_EXCH_RATE_G and W_GLOBAL_EXCH_RATE_G are the two tables. W_EXCH_RATE_G stores all the Currency Conversions present in the source system. It captures data for a Date Range. W_GLOBAL_EXCH_RATE_G has Global Currency Conversions stored at a Daily level. However the challenge here is to store all the 5 Global Currency Exchange Rates in a single record for each From Currency. Let’s voyage further into the Source System Extraction logic for each of these tables and understand the flow briefly. EBS: In EBS, we have Currency Data stored in GL_DAILY_RATES table. As the name indicates GL_DAILY_RATES EBS table has data at a daily level. However in our warehouse we store the data with a Date Range and insert a new range record only when the Exchange Rate changes for a particular From Currency, To Currency and Rate Type. Below are the main logical steps that we employ in this process. (Incremental Flow only) – Cleanup the data in W_EXCH_RATE_G. Delete the records which have Start Date > minimum conversion date Update the End Date of the existing records. 
Compress the daily data from GL_DAILY_RATES table into Range Records. Incremental map uses $$XRATE_UPD_NUM_DAY as an extra parameter. Generate Previous Rate, Previous Date and Next Date for each of the Daily record from the OLTP. Filter out the records which have Conversion Rate same as Previous Rates or if the Conversion Date lies within a single day range. Mark the records as ‘Keep’ and ‘Filter’ and also get the final End Date for the single Range record (Unique Combination of From Date, To Date, Rate and Conversion Date). Filter the records marked as ‘Filter’ in the INFA map. The above steps will load W_EXCH_RATE_GS. Step 0 updates/deletes W_EXCH_RATE_G directly. SIL map will then insert/update the GS data into W_EXCH_RATE_G. These steps convert the daily records in GL_DAILY_RATES to Range records in W_EXCH_RATE_G. We do not need such special logic for loading W_GLOBAL_EXCH_RATE_G. This is a table where we store data at a Daily Granular Level. However we need to pivot the data because the data present in multiple rows in source tables needs to be stored in different columns of the same row in DW. We use GROUP BY and CASE logic to achieve this. Fusion: Fusion has extraction logic very similar to EBS. The only difference is that the Cleanup logic that was mentioned in step 0 above does not use $$XRATE_UPD_NUM_DAY parameter. In Fusion we bring all the Exchange Rates in Incremental as well and do the cleanup. The SIL then takes care of Insert/Updates accordingly. PeopleSoft:PeopleSoft does not have From Date and To Date explicitly in the Source tables. Let’s look at an example. Please note that this is achieved from PS1 onwards only. 1 Jan 2010 – USD to INR – 45 31 Jan 2010 – USD to INR – 46 PSFT stores records in above fashion. This means that Exchange Rate of 45 for USD to INR is applicable for 1 Jan 2010 to 30 Jan 2010. We need to store data in this fashion in DW. Also PSFT has Exchange Rate stored as RATE_MULT and RATE_DIV. We need to do a RATE_MULT/RATE_DIV to get the correct Exchange Rate. We generate From Date and To Date while extracting data from source and this has certain assumptions: If a record gets updated/inserted in the source, it will be extracted in incremental. Also if this updated/inserted record is between other dates, then we also extract the preceding and succeeding records (based on dates) of this record. This is required because we need to generate a range record and we have 3 records whose ranges have changed. Taking the same example as above, if there is a new record which gets inserted on 15 Jan 2010; the new ranges are 1 Jan to 14 Jan, 15 Jan to 30 Jan and 31 Jan to Next available date. Even though 1 Jan record and 31 Jan have not changed, we will still extract them because the range is affected. Similar logic is used for Global Exchange Rate Extraction. We create the Range records and get it into a Temporary table. Then we join to Day Dimension, create individual records and pivot the data to get the 5 Global Exchange Rates for each From Currency, Date and Rate Type. Siebel: Siebel Facts are dependent on Global Exchange Rates heavily and almost none of them really use individual Exchange Rates. In other words, W_GLOBAL_EXCH_RATE_G is the main table used in Siebel from PS1 release onwards. As of January 2002, the Euro Triangulation method for converting between currencies belonging to EMU members is not needed for present and future currency exchanges. 
However, the method is still available in Siebel applications, as are the old currencies, so that historical data can be maintained accurately. The following description applies only to historical data needing conversion prior to the 2002 switch to the Euro for the EMU member countries. If a country is a member of the European Monetary Union (EMU), you should convert its currency to other currencies through the Euro. This is called triangulation, and it is used whenever either currency being converted has EMU Triangulation checked. Due to this, there are multiple extraction flows in SEBL ie. EUR to EMU, EUR to NonEMU, EUR to DMC and so on. We load W_EXCH_RATE_G through multiple flows with these data. This has been kept same as previous versions of OBIA. W_GLOBAL_EXCH_RATE_G being a new table does not have such needs. However SEBL does not have From Date and To Date columns in the Source tables similar to PSFT. We use similar extraction logic as explained in PSFT section for SEBL as well. What if all 5 Global Currencies configured are same? As mentioned in previous sections, from PS1 onwards we store Global Exchange Rates in W_GLOBAL_EXCH_RATE_G table. The extraction logic for this table involves Pivoting data from multiple rows into a single row with 5 Global Exchange Rates in 5 columns. As mentioned in previous sections, we use CASE and GROUP BY functions to achieve this. This approach poses a unique problem when all the 5 Global Currencies Chosen are same. For example – If the user configures all 5 Global Currencies as ‘USD’ then the extract logic will not be able to generate a record for From Currency=USD. This is because, not all Source Systems will have a USD->USD conversion record. We have _Generated mappings to take care of this case. We generate a record with Conversion Rate=1 for such cases. Reusable Lookups Before PS1, we had a Mapplet for Currency Conversions. In PS1, we only have reusable Lookups- LKP_W_EXCH_RATE_G and LKP_W_GLOBAL_EXCH_RATE_G. These lookups have another layer of logic so that all the lookup conditions are met when they are used in various Fact Mappings. Any user who would want to do a LKP on W_EXCH_RATE_G or W_GLOBAL_EXCH_RATE_G should and must use these Lookups. A direct join or Lookup on the tables might lead to wrong data being returned. Changing Currency preferences in the Dashboard: In the 796x series, all amount metrics in OBIA were showing the Global1 amount. The customer needed to change the metric definitions to show them in another Currency preference. Project Analytics started supporting currency preferences since 7.9.6 release though, and it published a Tech note for other module customers to add toggling between currency preferences to the solution. List of Currency Preferences Starting from 11.1.1.x release, the BI Platform added a new feature to support multiple currencies. The new session variable (PREFERRED_CURRENCY) is populated through a newly introduced currency prompt. This prompt can take its values from the xml file: userpref_currencies_OBIA.xml, which is hosted in the BI Server installation folder, under :< home>\instances\instance1\config\OracleBIPresentationServicesComponent\coreapplication_obips1\userpref_currencies.xml This file contains the list of currency preferences, like“Local Currency”, “Global Currency 1”,…which customers can also rename to give them more meaningful business names. There are two options for showing the list of currency preferences to the user in the dashboard: Static and Dynamic. 
In Static mode, all users will see the full list as in the user preference currencies file. In the Dynamic mode, the list shown in the currency prompt drop down is a result of a dynamic query specified in the same file. Customers can build some security into the rpd, so the list of currency preferences will be based on the user roles…BI Applications built a subject area: “Dynamic Currency Preference” to run this query, and give every user only the list of currency preferences required by his application roles. Adding Currency to an Amount Field When the user selects one of the items from the currency prompt, all the amounts in that page will show in the Currency corresponding to that preference. For example, if the user selects “Global Currency1” from the prompt, all data will be showing in Global Currency 1 as specified in the Configuration Manager. If the user select “Local Currency”, all amount fields will show in the Currency of the Business Unit selected in the BU filter of the same page. If there is no particular Business Unit selected in that filter, and the data selected by the query contains amounts in more than one currency (for example one BU has USD as a functional currency, the other has EUR as functional currency), then subtotals will not be available (cannot add USD and EUR amounts in one field), and depending on the set up (see next paragraph), the user may receive an error. There are two ways to add the Currency field to an amount metric: In the form of currency code, like USD, EUR…For this the user needs to add the field “Apps Common Currency Code” to the report. This field is in every subject area, usually under the table “Currency Tag” or “Currency Code”… In the form of currency symbol ($ for USD, € for EUR,…) For this, the user needs to format the amount metrics in the report as a currency column, by specifying the currency tag column in the Column Properties option in Column Actions drop down list. Typically this column should be the “BI Common Currency Code” available in every subject area. Select Column Properties option in the Edit list of a metric. In the Data Format tab, select Custom as Treat Number As. Enter the following syntax under Custom Number Format: [$:currencyTagColumn=Subjectarea.table.column] Where Column is the “BI Common Currency Code” defined to take the currency code value based on the currency preference chosen by the user in the Currency preference prompt.
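    To make the exchange-rate lookup described in the Design Logic section concrete, here is a minimal C# sketch (not OBIA code; class, parameter and key names are invented for illustration) of deriving the global amounts for a fact row from a daily rate lookup keyed by from-currency, to-currency, rate type and date, defaulting the rate to 1 when the two currencies are the same, which is the case the _Generated mappings cover:

        using System;
        using System.Collections.Generic;

        class GlobalAmountCalculator
        {
            // Conceptually mirrors the grain of W_GLOBAL_EXCH_RATE_G: one rate per
            // (from currency, to currency, rate type, calendar day). Illustrative only.
            private readonly Dictionary<(string From, string To, string RateType, DateTime Day), decimal> rates;

            public GlobalAmountCalculator(Dictionary<(string From, string To, string RateType, DateTime Day), decimal> rates)
            {
                this.rates = rates;
            }

            // Convert a document amount into each configured global currency for the transaction date.
            public Dictionary<string, decimal> ToGlobalAmounts(
                decimal documentAmount, string documentCurrency, DateTime transactionDate,
                string rateType, IEnumerable<string> globalCurrencies)
            {
                var result = new Dictionary<string, decimal>();
                foreach (var globalCurrency in globalCurrencies)
                {
                    // Same-currency conversions may not exist in the source, so default the rate to 1.
                    decimal rate = documentCurrency == globalCurrency
                        ? 1m
                        : rates[(documentCurrency, globalCurrency, rateType, transactionDate.Date)];
                    result[globalCurrency] = documentAmount * rate;
                }
                return result;
            }
        }

    In the warehouse the multiplication itself happens in the fact load mappings rather than in application code, but the key structure and the rate-of-1 fallback are the same ideas.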


  • PerformanceCounters on .NET 4.0 & Windows 7

    - by scott
    I have a program that works fine on VS2008 and Vista, but when I try it on Windows 7 with VS2010 / .NET Framework 4.0 it fails. Ultimately the problem is that System.Diagnostics.PerformanceCounterCategory.GetCategories() (and other PerformanceCounterCategory methods) is not working. I'm getting a System.InvalidOperationException with the message "Cannot load Counter Name data because an invalid index '' was read from the registry." I can reproduce this with the very simple program shown below:

        class Program
        {
            static void Main(string[] args)
            {
                foreach (var pc in System.Diagnostics.PerformanceCounterCategory.GetCategories())
                {
                    Console.WriteLine(pc.CategoryName);
                }
            }
        }

    I did make sure I'm running the program as an admin. It doesn't matter whether I run it with the VS debugger attached or not. I don't have another machine with Windows 7 or VS2010 to test on, so I'm not sure which of the two is complicating things here (or both?). It is Windows 7 x64, and I've tried forcing the app to run as both x86 and x64 but get the same results.


  • Avoiding unsafe cast for generic situation involving runtime passing of class

    - by Bart van Heukelom
    public class AutoKeyMap<K,V> {

        public interface KeyGenerator<K> {
            public K generate();
        }

        private KeyGenerator<K> generator;

        public AutoKeyMap(Class<K> keyType) {
            // WARNING: Unchecked cast from AutoKeyMap.IntKeyGen to AutoKeyMap.KeyGenerator<K>
            if (keyType == Integer.class)
                generator = (KeyGenerator<K>) new IntKeyGen();
            else
                throw new RuntimeException("Cannot generate keys for " + keyType);
        }

        public void put(V value) {
            K key = generator.generate();
            ...
        }

        private static class IntKeyGen implements KeyGenerator<Integer> {
            private final AtomicInteger ai = new AtomicInteger(1);

            @Override
            public Integer generate() {
                return ai.getAndIncrement();
            }
        }
    }

    In the code sample above, what is the correct way to prevent the given warning, without adding a @SuppressWarnings, if any?


  • Using the Data Form Web Part (SharePoint 2010) Site Agnostically!

    - by David Jacobus
    Originally posted on: http://geekswithblogs.net/djacobus/archive/2013/10/24/154465.aspxAs a Developer whom has worked closely with web designers (Power users) in a SharePoint environment, I have come across the issue of making the Data Form Web Part reusable across the site collection! In SharePoint 2007 it was very easy and this blog pointed the way to make it happen: Josh Gaffey's Blog. In SharePoint 2010 something changed! This method failed except for using a Data Form Web Part that pointed to a list in the Site Collection Root! I am making this discussion relative to a developer whom creates a solution (WSP) with all the artifacts embedded and the user shouldn’t have any involvement in the process except to activate features. The Scenario: 1. A Power User creates a Data Form Web Part using SharePoint Designer 2010! It is a great web part the uses all the power of SharePoint Designer and XSLT (Conditional formatting, etc.). 2. Other Users in the site collection want to use that specific web part in sub sites in the site collection. Pointing to a list with the same name, not at the site collection root! The Issues: 1. The Data Form Web Part Data Source uses a List ID (GUID) to point to the specific list. Which means a list in a sub site will have a list with a new GUID different than the one which was created with SharePoint Designer! Obviously, the List needs to be the same List (Fields, Content Types, etc.) with different data. 2. How can we make this web part site agnostic, and dependent only on the lists Name? I had this problem come up over and over and decided to put my solution forward! The Solution: 1. Use the XSL of the Data Form Web Part Created By the Power User in SharePoint Designer! 2. Extend the OOTB Data Form Web Part to use this XSL and Point to a List by name. The solution points to a hybrid solution that requires some coding (Developer) and the XSL (Power User) artifacts put together in a Visual Studio SharePoint Solution. Here are the solution steps in summary: 1. Create an empty SharePoint project in Visual Studio 2. Create a Module and Feature and put the XSL file created by the Power User into it a. Scope the feature to web 3. Create a Feature Receiver to Create the List. The same list from which the Data Form Web Part was created with by the Power User. a. Scope the feature to web 4. Create a Web Part extending the Data Form Web a. Point the Data Form Web Part to point to the List by Name b. Point the Data Form Web Part XSL link to the XSL added using the Module feature c. Scope The feature to Site i. This is because all web parts are in the site collection web part gallery. So in a Narrative Summary: We are creating a list in code which has the same name and (site Columns) as the list from which the Power User created the Data Form Web Part Using SharePoint Designer. We are creating a Web Part in code which extends the OOTB Data Form Web Part to point to a list by name and use the XSL created by the Power User. Okay! Here are the steps with images and code! At the end of this post I will provide a link to the code for a solution which works in any site! I want to TOOT the HORN for the power of this solution! It is the mantra a use with all my clients! What is a basic skill a SharePoint Developer: Create an application that uses the data from a SharePoint list and make that data visible to the user in a manner which meets requirements! 
Create an Empty SharePoint 2010 Project Here I am naming my Project DJ.DataFormWebPart Create a Code Folder Copy and paste the Extension and Utilities classes (Found in the solution provided at the end of this post) Change the Namespace to match this project The List to which the Data Form Web Part which was used to make the XSL by the Power User in SharePoint Designer is now going to be created in code! If already in code, then all the better! Here I am going to create a list in the site collection root and add some data to it! For the purpose of this discussion I will actually create this list in code before using SharePoint Designer for simplicity! So here I create the List and deploy it within this solution before I do anything else. I will use a List I created before for demo purposes. Footer List is used within the footer of my master page. Add a new Feature: Here I name the Feature FooterList and add a Feature Event Receiver: Here is the code for the Event Receiver: I have a previous blog post about adding lists in code so I will not take time to narrate this code: using System; using System.Runtime.InteropServices; using System.Security.Permissions; using Microsoft.SharePoint; using DJ.DataFormWebPart.Code; namespace DJ.DataFormWebPart.Features.FooterList { /// <summary> /// This class handles events raised during feature activation, deactivation, installation, uninstallation, and upgrade. /// </summary> /// <remarks> /// The GUID attached to this class may be used during packaging and should not be modified. /// </remarks> [Guid("a58644fd-9209-41f4-aa16-67a53af7a9bf")] public class FooterListEventReceiver : SPFeatureReceiver { SPWeb currentWeb = null; SPSite currentSite = null; const string columnGroup = "DJ"; const string ctName = "FooterContentType"; // Uncomment the method below to handle the event raised after a feature has been activated. 
public override void FeatureActivated(SPFeatureReceiverProperties properties) { using (SPWeb spWeb = properties.GetWeb() as SPWeb) { using (SPSite site = new SPSite(spWeb.Site.ID)) { using (SPWeb rootWeb = site.OpenWeb(site.RootWeb.ID)) { //add the fields addFields(rootWeb); //add content type SPContentType testCT = rootWeb.ContentTypes[ctName]; // we will not create the content type if it exists if (testCT == null) { //the content type does not exist add it addContentType(rootWeb, ctName); } if ((spWeb.Lists.TryGetList("FooterList") == null)) { //create the list if it dosen't to exist CreateFooterList(spWeb, site); } } } } } #region ContentType public void addFields(SPWeb spWeb) { Utilities.addField(spWeb, "Link", SPFieldType.URL, false, columnGroup); Utilities.addField(spWeb, "Information", SPFieldType.Text, false, columnGroup); } private static void addContentType(SPWeb spWeb, string name) { SPContentType myContentType = new SPContentType(spWeb.ContentTypes["Item"], spWeb.ContentTypes, name) { Group = columnGroup }; spWeb.ContentTypes.Add(myContentType); addContentTypeLinkages(spWeb, myContentType); myContentType.Update(); } public static void addContentTypeLinkages(SPWeb spWeb, SPContentType ct) { Utilities.addContentTypeLink(spWeb, "Link", ct); Utilities.addContentTypeLink(spWeb, "Information", ct); } private void CreateFooterList(SPWeb web, SPSite site) { Guid newListGuid = web.Lists.Add("FooterList", "Footer List", SPListTemplateType.GenericList); SPList newList = web.Lists[newListGuid]; newList.ContentTypesEnabled = true; var footer = site.RootWeb.ContentTypes[ctName]; newList.ContentTypes.Add(footer); newList.ContentTypes.Delete(newList.ContentTypes["Item"].Id); newList.Update(); var view = newList.DefaultView; //add all view fields here //view.ViewFields.Add("NewsTitle"); view.ViewFields.Add("Link"); view.ViewFields.Add("Information"); view.Update(); } } } Basically created a content type with two site columns Link and Information. I had to change some code as we are working at the SPWeb level and need Content Types at the SPSite level! I’ll use a new Site Collection for this demo (Best Practice) keep old artifacts from impinging on development: Next we will add this list to the root of the site collection by deploying this solution, add some data and then use SharePoint Designer to create a Data Form Web Part. The list has been added, now let’s add some data: Okay let’s add a Data Form Web Part in SharePoint Designer. Create a new web part page in the site pages library: I will name it TestWP.aspx and edit it in advanced mode: Let’s add an empty Data Form Web Part to the web part zone: Click on the web part to add a data source: Choose FooterList in the Data Source menu: Choose appropriate fields and select insert as multiple item view: Here is what it look like after insertion: Let’s add some conditional formatting if the information filed is not blank: Choose Create (right side) apply formatting: Choose the Information Field and set the condition not null: Click Set Style: Here is the result: Okay! Not flashy but simple enough for this demo. Remember this is the job of the Power user! All we want from this web part is the XLS-Style Sheet out of SharePoint Designer. We are going to use it as the XSL for our web part which we will be creating next. Let’s add a web part to our project extending the OOTB Data Form Web Part. 
Add new item from the Visual Studio add menu: Choose Web Part: Change WebPart to DataFormWebPart (Oh well my namespace needs some improvement, but it will sure make it readily identifiable as an extended web part!) Below is the code for this web part: using System; using System.ComponentModel; using System.Web; using System.Web.UI; using System.Web.UI.WebControls; using System.Web.UI.WebControls.WebParts; using Microsoft.SharePoint; using Microsoft.SharePoint.WebControls; using System.Text; namespace DJ.DataFormWebPart.DataFormWebPart { [ToolboxItemAttribute(false)] public class DataFormWebPart : Microsoft.SharePoint.WebPartPages.DataFormWebPart { protected override void OnInit(EventArgs e) { base.OnInit(e); this.ChromeType = PartChromeType.None; this.Title = "FooterListDF"; try { //SPSite site = SPContext.Current.Site; SPWeb web = SPContext.Current.Web; SPList list = web.Lists.TryGetList("FooterList"); if (list != null) { string queryList1 = "<Query><Where><IsNotNull><FieldRef Name='Title' /></IsNotNull></Where><OrderBy><FieldRef Name='Title' Ascending='True' /></OrderBy></Query>"; uint maximumRowList1 = 10; SPDataSource dataSourceList1 = GetDataSource(list.Title, web.Url, list, queryList1, maximumRowList1); this.DataSources.Add(dataSourceList1); this.XslLink = web.Url + "/Assests/Footer.xsl"; this.ParameterBindings = BuildDataFormParameters(); this.DataBind(); } } catch (Exception ex) { this.Controls.Add(new LiteralControl("ERROR: " + ex.Message)); } } private SPDataSource GetDataSource(string dataSourceId, string webUrl, SPList list, string query, uint maximumRow) { SPDataSource dataSource = new SPDataSource(); dataSource.UseInternalName = true; dataSource.ID = dataSourceId; dataSource.DataSourceMode = SPDataSourceMode.List; dataSource.List = list; dataSource.SelectCommand = "" + query + ""; Parameter listIdParam = new Parameter("ListID"); listIdParam.DefaultValue = list.ID.ToString( "B").ToUpper(); Parameter maximumRowsParam = new Parameter("MaximumRows"); maximumRowsParam.DefaultValue = maximumRow.ToString(); QueryStringParameter rootFolderParam = new QueryStringParameter("RootFolder", "RootFolder"); dataSource.SelectParameters.Add(listIdParam); dataSource.SelectParameters.Add(maximumRowsParam); dataSource.SelectParameters.Add(rootFolderParam); dataSource.UpdateParameters.Add(listIdParam); dataSource.DeleteParameters.Add(listIdParam); dataSource.InsertParameters.Add(listIdParam); return dataSource; } private string BuildDataFormParameters() { StringBuilder parameters = new StringBuilder("<ParameterBindings><ParameterBinding Name=\"dvt_apos\" Location=\"Postback;Connection\"/><ParameterBinding Name=\"UserID\" Location=\"CAMLVariable\" DefaultValue=\"CurrentUserName\"/><ParameterBinding Name=\"Today\" Location=\"CAMLVariable\" DefaultValue=\"CurrentDate\"/>"); parameters.Append("<ParameterBinding Name=\"dvt_firstrow\" Location=\"Postback;Connection\"/>"); parameters.Append("<ParameterBinding Name=\"dvt_nextpagedata\" Location=\"Postback;Connection\"/>"); parameters.Append("<ParameterBinding Name=\"dvt_adhocmode\" Location=\"Postback;Connection\"/>"); parameters.Append("<ParameterBinding Name=\"dvt_adhocfiltermode\" Location=\"Postback;Connection\"/>"); parameters.Append("</ParameterBindings>"); return parameters.ToString(); } } } The OnInit method we use to set the list name and the XSL Link property of the Data Form Web Part. 
We do not have the link to XSL in our Solution so we will add the XSL now: Add a Module in the Visual Studio add menu: Rename Sample.txt in the module to footer.xsl and then copy the XSL from SharePoint Designer Look at elements.xml to where the footer.xsl is being provisioned to which is Assets/footer.xsl, make sure the Web parts xsl link is pointing to this url: Okay we are good to go! Let’s check our features and package: DataFormWebPart should be scoped to site and have the web part: The Footer List feature should be scoped to web and have the Assets module (Okay, I see, a spelling issue but it won’t affect this demo) If everything is correct we should be able to click a couple of sub site feature activations and have our list and web part in a sub site. (In fact this solution can be activated anywhere) Here is the list created at SubSite1 with new data It. Next let’s add the web part on a test page and see if it works as expected: It does! So we now have a repeatable way to use a WSP to move a Data Form Web Part around our sites! Here is a link to the code: DataFormWebPart Solution


  • Error using traits class: "expected constructor destructor or type conversion before '&' token"

    - by Mark
    I have a traits class that's used for printing out different character types:

        template <typename T>
        class traits {
        public:
            static std::basic_ostream<T>& tout;
        };

        template<> std::ostream& traits<char>::tout = std::cout;
        template<> std::wostream& traits<unsigned short>::tout = std::wcout;

    gcc (g++) version 3.4.5 (yes, somewhat old) is throwing an error: "expected constructor destructor or type conversion before '&' token" and I'm wondering if there's a good way to resolve this. (It's also angry about _O_WTEXT, so if anyone's got some insight into that, I'd appreciate it too.)


  • The Inkremental Architect's Napkin - #4 - Make increments tangible

    - by Ralf Westphal
    Originally posted on: http://geekswithblogs.net/theArchitectsNapkin/archive/2014/06/12/the-inkremental-architectacutes-napkin---4---make-increments-tangible.aspxThe driver of software development are increments, small increments, tiny increments. With an increment being a slice of the overall requirement scope thin enough to implement and get feedback from a product owner within 2 days max. Such an increment might concern Functionality or Quality.[1] To make such high frequency delivery of increments possible, the transition from talking to coding needs to be as easy as possible. A user story or some other documentation of what´s supposed to get implemented until tomorrow evening at latest is one side of the medal. The other is where to put the logic in all of the code base. To implement an increment, only logic statements are needed. Functionality like Quality are just about expressions and control flow statements. Think of Assembler code without the CALL/RET instructions. That´s all is needed. Forget about functions, forget about classes. To make a user happy none of that is really needed. It´s just about the right expressions and conditional executions paths plus some memory allocation. Automatic function inlining of compilers which makes it clear how unimportant functions are for delivering value to users at runtime. But why then are there functions? Because they were invented for optimization purposes. We need them for better Evolvability and Production Efficiency. Nothing more, nothing less. No software has become faster, more secure, more scalable, more functional because we gathered logic under the roof of a function or two or a thousand. Functions make logic easier to understand. Functions make us faster in producing logic. Functions make it easier to keep logic consistent. Functions help to conserve memory. That said, functions are important. They are even the pivotal element of software development. We can´t code without them - whether you write a function yourself or not. Because there´s always at least one function in play: the Entry Point of a program. In Ruby the simplest program looks like this:puts "Hello, world!" In C# more is necessary:class Program { public static void Main () { System.Console.Write("Hello, world!"); } } C# makes the Entry Point function explicit, not so Ruby. But still it´s there. So you can think of logic always running in some function. Which brings me back to increments: In order to make the transition from talking to code as easy as possible, it has to be crystal clear into which function you should put the logic. Product owners might be content once there is a sticky note a user story on the Scrum or Kanban board. But developers need an idea of what that sticky note means in term of functions. Because with a function in hand, with a signature to run tests against, they have something to focus on. All´s well once there is a function behind whose signature logic can be piled up. Then testing frameworks can be used to check if the logic is correct. Then practices like TDD can help to drive the implementation. That´s why most code katas define exactly how the API of a solution should look like. It´s a function, maybe two or three, not more. A requirement like “Write a function f which takes this as parameters and produces such and such output by doing x” makes a developer comfortable. Yes, there are all kinds of details to think about, like which algorithm or technology to use, or what kind of state and side effects to consider. 
Even a single function not only must deliver on Functionality, but also on Quality and Evolvability. Nevertheless, once it´s clear which function to put logic in, you have a tangible starting point. So, yes, what I´m suggesting is to find a single function to put all the logic in that´s necessary to deliver on a the requirements of an increment. Or to put it the other way around: Slice requirements in a way that each increment´s logic can be located under the roof of a single function. Entry points Of course, the logic of a software will always be spread across many, many functions. But there´s always an Entry Point. That´s the most important function for each increment, because that´s the root to put integration or even acceptance tests on. A batch program like the above hello-world application only has a single Entry Point. All logic is reached from there, regardless how deep it´s nested in classes. But a program with a user interface like this has at least two Entry Points: One is the main function called upon startup. The other is the button click event handler for “Show my score”. But maybe there are even more, like another Entry Point being a handler for the event fired when one of the choices gets selected; because then some logic could check if the button should be enabled because all questions got answered. Or another Entry Point for the logic to be executed when the program is close; because then the choices made should be persisted. You see, an Entry Point to me is a function which gets triggered by the user of a software. With batch programs that´s the main function. With GUI programs on the desktop that´s event handlers. With web programs that´s handlers for URL routes. And my basic suggestion to help you with slicing requirements for Spinning is: Slice them in a way so that each increment is related to only one Entry Point function.[2] Entry Points are the “outer functions” of a program. That´s where the environment triggers behavior. That´s where hardware meets software. Entry points always get called because something happened to hardware state, e.g. a key was pressed, a mouse button clicked, the system timer ticked, data arrived over a wire.[3] Viewed from the outside, software is just a collection of Entry Point functions made accessible via buttons to press, menu items to click, gestures, URLs to open, keys to enter. Collections of batch processors I´d thus say, we haven´t moved forward since the early days of software development. We´re still writing batch programs. Forget about “event-driven programming” with its fancy GUI applications. Software is just a collection of batch processors. Earlier it was just one per program, today it´s hundreds we bundle up into applications. Each batch processor is represented by an Entry Point as its root that works on a number of resources from which it reads data to process and to which it writes results. These resources can be the keyboard or main memory or a hard disk or a communication line or a display. Together many batch processors - large and small - form applications the user perceives as a single whole: Software development that way becomes quite simple: just implement one batch processor after another. Well, at least in principle ;-) Features Each batch processor entered through an Entry Point delivers value to the user. It´s an increment. Sometimes its logic is trivial, sometimes it´s very complex. Regardless, each Entry Point represents an increment. An Entry Point implemented thus is a step forward in terms of Agility. 
At the same time it´s a tangible unit for developers. Therefore, identifying the more or less numerous batch processors in a software system is a rewarding task for product owners and developers alike. That´s where user stories meet code. In this example the user story translates to the Entry Point triggered by clicking the login button on a dialog like this: The batch then retrieves what has been entered via keyboard, loads data from a user store, and finally outputs some kind of response on the screen, e.g. by displaying an error message or showing the next dialog. This is all very simple, but you see, there is not just one thing happening, but several. Get input (email address, password) Load user for email address If user not found report error Check password Hash password Compare hash to hash stored in user Show next dialog Viewed from 10,000 feet it´s all done by the Entry Point function. And of course that´s technically possible. It´s just a bunch of logic and calling a couple of API functions. However, I suggest to take these steps as distinct aspects of the overall requirement described by the user story. Such aspects of requirements I call Features. Features too are increments. Each provides some (small) value of its own to the user. Each can be checked individually by a product owner. Instead of implementing all the logic behind the Login() entry point at once you can move forward increment by increment, e.g. First implement the dialog, let the user enter any credentials, and log him/her in without any checks. Features 1 and 4. Then hard code a single user and check the email address. Features 2 and 2.1. Then check password without hashing it (or use a very simple hash like the length of the password). Features 3. and 3.2 Replace hard coded user with a persistent user directoy, but a very simple one, e.g. a CSV file. Refinement of feature 2. Calculate the real hash for the password. Feature 3.1. Switch to the final user directory technology. Each feature provides an opportunity to deliver results in a short amount of time and get feedback. If you´re in doubt whether you can implement the whole entry point function until tomorrow night, then just go for a couple of features or even just one. That´s also why I think, you should strive for wrapping feature logic into a function of its own. It´s a matter of Evolvability and Production Efficiency. A function per feature makes the code more readable, since the language of requirements analysis and design is carried over into implementation. It makes it easier to apply changes to features because it´s clear where their logic is located. And finally, of course, it lets you re-use features in different context (read: increments). Feature functions make it easier for you to think of features as Spinning increments, to implement them independently, to let the product owner check them for acceptance individually. Increments consist of features, entry point functions consist of feature functions. So you can view software as a hierarchy of requirements from broad to thin which map to a hierarchy of functions - with entry points at the top.   I like this image of software as a self-similar structure on many levels of abstraction where requirements and code match each other. That to me is true agile design: the core tenet of Agility to move forward in increments is carried over into implementation. Increments on paper are retained in code. This way developers can easily relate to product owners. Elusive and fuzzy requirements are not tangible. 
Software production is moving forward through requirements one increment at a time, and one function at a time. In closing Product owners and developers are different - but they need to work together towards a shared goal: working software. So their notions of software need to be made compatible, they need to be connected. The increments of the product owner - user stories and features - need to be mapped straightforwardly to something which is relevant to developers. To me that´s functions. Yes, functions, not classes nor components nor micro services. We´re talking about behavior, actions, activities, processes. Their natural representation is a function. Something has to be done. Logic has to be executed. That´s the purpose of functions. Later, classes and other containers are needed to stay on top of a growing amount of logic. But to connect developers and product owners functions are the appropriate glue. Functions which represent increments. Can there always be such a small increment be found to deliver until tomorrow evening? I boldly say yes. Yes, it´s always possible. But maybe you´ve to start thinking differently. Maybe the product owner needs to start thinking differently. Completion is not the goal anymore. Neither is checking the delivery of an increment through the user interface of a software. Product owners need to become comfortable using test beds for certain features. If it´s hard to slice requirements thin enough for Spinning the reason is too little knowledge of something. Maybe you don´t yet understand the problem domain well enough? Maybe you don´t yet feel comfortable with some tool or technology? Then it´s time to acknowledge this fact. Be honest about your not knowing. And instead of trying to deliver as a craftsman officially become a researcher. Research an check back with the product owner every day - until your understanding has grown to a level where you are able to define the next Spinning increment. ? Sometimes even thin requirement slices will cover several Entry Points, like “Add validation of email addresses to all relevant dialogs.” Validation then will it put into a dozen functons. Still, though, it´s important to determine which Entry Points exactly get affected. That´s much easier, if strive for keeping the number of Entry Points per increment to 1. ? If you like call Entry Point functions event handlers, because that´s what they are. They all handle events of some kind, whether that´s palpable in your code or note. A public void btnSave_Click(object sender, EventArgs e) {…} might look like an event handler to you, but public static void Main() {…} is one also - for then event “program started”. ?
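    To give the article's login example a code shape, here is a small C# sketch (mine, not the author's) of an Entry Point function composed of feature functions; names such as IUserDirectory and HashPassword are assumptions made purely for illustration:

        // Entry Point: triggered by the login button click; each step is a feature function.
        public class LoginDialog
        {
            private readonly IUserDirectory users;

            public LoginDialog(IUserDirectory users) { this.users = users; }

            public void Login(string email, string password)
            {
                var user = users.FindByEmail(email);        // feature: load user for email address
                if (user == null) { ShowError("Unknown email address."); return; }

                var hash = HashPassword(password);          // feature: hash password
                if (hash != user.PasswordHash)              // feature: compare hash to stored hash
                {
                    ShowError("Wrong password.");
                    return;
                }

                ShowNextDialog();                           // feature: show next dialog
            }

            private string HashPassword(string password) { return password; /* placeholder for a real hash */ }
            private void ShowError(string message) { /* render the message */ }
            private void ShowNextDialog() { /* navigate onwards */ }
        }

        public interface IUserDirectory { User FindByEmail(string email); }
        public class User { public string PasswordHash { get; set; } }

    Each private method here is a candidate feature function: it can be implemented, tested and shown to the product owner on its own, which is exactly the slicing the article argues for.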


  • Using GoogleMaps with JXMapViewer

    - by npinti
    I have been searching the web to see if I can use Google Maps with the JXMapViewer. According to this, it is illegal, but the article is more than three years old. Could anyone be kind enough to tell me whether I can use Google Maps with JXMapViewer? I know that Google has recently allowed desktop applications to use their static maps provided that the application is freely accessible to people on some website. If this can be done, I would appreciate some pointers on where to start looking; I tried messing around with this but to no avail. Thanks in advance.


  • How do you make a Factory that can return derived types?

    - by Seth Spearman
    I have created a factory class called AlarmFactory as such...

        class AlarmFactory
        {
            // factory ensures that correct alarm is returned and right func pointer for trigger creator.
            public static Alarm GetAlarm(AlarmTypes alarmType)
            {
                switch (alarmType)
                {
                    case AlarmTypes.Heartbeat:
                        HeartbeatAlarm alarm = HeartbeatAlarm.GetAlarm();
                        alarm.CreateTriggerFunction = QuartzAlarmScheduler.CreateMinutelyTrigger;
                        return alarm;
                        break;
                    default:
                        break;
                }
            }
        }

    HeartbeatAlarm is derived from Alarm. I am getting a compile error "cannot implicitly convert type...An explicit conversion exists (are you missing a cast?)". How do I set this up to return a derived type? Seth


  • WPF - Bind Menu Item

    - by Miguel
    Hello, I am creating a menu and binding the menu items at runtime as follows, but I am not able to make it work. I am creating the menu like this:

        Menu menu = new Menu();
        menu.Items.Add(new MenuItem { Command = new PackCommand(), Header = "Pack" });
        DockPanel.SetDock(menu, Dock.Top);
        content.Children.Add(menu);

    And I am implementing ICommand:

        public static class PackCommand : ICommand
        {
            Boolean CanExecute(object parameter)
            {
                return true;
            }

            void Execute(object parameter)
            {
                Packer packer = new Packer();
                packer.Run();
            }
        }

    I am not sure how to bind the menu item. Why CanExecute? Shouldn't it always execute? I only want to run packer.Run when the button is clicked. I think I should implement ICommand, but I am not even sure I should do this. Could someone please help me out? Thanks, Miguel
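    For reference, a minimal (non-static) ICommand implementation has three members: the CanExecuteChanged event, CanExecute and Execute. A sketch along those lines (illustrating the interface shape, not necessarily the fix for the binding itself; Packer comes from the question) might look like this:

        public class PackCommand : System.Windows.Input.ICommand
        {
            // WPF queries CanExecute to decide whether the MenuItem is enabled;
            // raising CanExecuteChanged tells it to re-query. Not needed for an always-on command.
            public event System.EventHandler CanExecuteChanged { add { } remove { } }

            public bool CanExecute(object parameter)
            {
                return true;
            }

            public void Execute(object parameter)
            {
                var packer = new Packer();   // Packer is the class from the question
                packer.Run();
            }
        }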


  • Assembly Microsoft.Xna.Framework.dll does not load

    - by jbsnorro
    When trying to load Microsoft.Xna.Framework.dll from any project, it throws a FileNotFoundException: "The specified module could not be found. (Exception from HRESULT: 0x8007007E)", with no innerException. Even simple code like the following throws that exception:

        static void Main(string[] args)
        {
            Assembly.LoadFile(@"C:\Microsoft.Xna.Framework.dll");
        }

    I run XP x64, but I've set the platform in the configuration manager to x86, because I know it shouldn't (doesn't) work on x64 or Any CPU. I've manually added the dll file to the GAC, but that didn't solve the problem. I have also tried the MS Assembly Binding Log Viewer to see if those logs had any useful information, but they didn't. Everything, the loading etc., was a success according to them. Any suggestions, please?


  • Can I use a class as an object in C# to create instances of that class?

    - by Troy
    I'm writing a fairly uncomplicated program which can "connect" to several different types of data sources, including text files and various databases. I've decided to implement each of these connection types as a class implementing an interface I called iConnection. So, for example, I have TextConnection, MySQLConnection, &c... as classes. In another static class I've got a dictionary with human-readable names for these connections as keys. For the value of each dictionary entry, I want the class itself. That way, I can do things like:

        newConnection = new dict[connectionTypeString]();

    Is there a way to do something like this? I'm fairly new to C# so I'd appreciate any help.
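    One minimal sketch of the idea described above (an assumption about the eventual design, not the only option) is to store factory delegates rather than the classes themselves, keyed by the human-readable name; iConnection, TextConnection and MySQLConnection are the names from the question:

        using System;
        using System.Collections.Generic;

        static class ConnectionFactory
        {
            // Maps the human-readable name to a delegate that creates the matching connection.
            static readonly Dictionary<string, Func<iConnection>> factories =
                new Dictionary<string, Func<iConnection>>
                {
                    { "Text file", () => new TextConnection() },
                    { "MySQL",     () => new MySQLConnection() }
                };

            public static iConnection Create(string connectionTypeString)
            {
                return factories[connectionTypeString]();
            }
        }

        // Usage:
        // iConnection newConnection = ConnectionFactory.Create("MySQL");

    Storing Type values and calling Activator.CreateInstance would also work, but delegates keep everything strongly typed.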


  • Compilation error when using boost serialization library

    - by Shakir
    I have been struggling with this error for a long time. The following is my code snippet. //This is the header file template<typename TElem> class ArrayList { public: /** An accessible typedef for the elements in the array. */ typedef TElem Elem; friend class boost::serialization::access; template<class Archive> void serialize(Archive & ar, const unsigned int version) { ar & ptr_; ar & size_; ar & cap_; } Elem *ptr_; // the stored or aliased array index_t size_; // number of active objects index_t cap_; // allocated size of the array; -1 if alias }; template <typename TElem> class gps_position { public: typedef TElem Elem; friend class boost::serialization::access; template<class Archive> void serialize(Archive & ar, const unsigned int version) { ar & degrees; ar & minutes; ar & seconds; } private: Elem degrees; index_t minutes; index_t seconds; }; // This is the .cc file #include #include #include #include #include #include #include #include #include "arraylist.h" int main() { // create and open a character archive for output std::ofstream ofs("filename"); // create class instance // gps_position<int> g(35.65, 59, 24.567f); gps_position<float> g; // save data to archive { boost::archive::text_oarchive oa(ofs); // write class instance to archive //oa << g; // archive and stream closed when destructors are called } // ... some time later restore the class instance to its orginal state /* gps_position<int> newg; { // create and open an archive for input std::ifstream ifs("filename"); boost::archive::text_iarchive ia(ifs); // read class state from archive ia >> newg; // archive and stream closed when destructors are called }*/ ArrayList<float> a1; ArrayList<int> a2; a1.Init(22); a2.Init(21); // a1.Resize(30); // a1.Resize(12); // a1.Resize(22); // a2.Resize(22); a1[21] = 99.0; a1[20] = 88.0; for (index_t i = 0; i < a1.size(); i++) { a1[i] = i; a1[i]++; } std::ofstream s("test.txt"); { boost::archive::text_oarchive oa(s); oa << a1; } return 0; } The following is the compilation error i get. 
In file included from /usr/include/boost/serialization/split_member.hpp:23, from /usr/include/boost/serialization/nvp.hpp:33, from /usr/include/boost/serialization/serialization.hpp:17, from /usr/include/boost/archive/detail/oserializer.hpp:61, from /usr/include/boost/archive/detail/interface_oarchive.hpp:24, from /usr/include/boost/archive/detail/common_oarchive.hpp:20, from /usr/include/boost/archive/basic_text_oarchive.hpp:32, from /usr/include/boost/archive/text_oarchive.hpp:31, from demo.cc:4: /usr/include/boost/serialization/access.hpp: In static member function ‘static void boost::serialization::access::serialize(Archive&, T&, unsigned int) [with Archive = boost::archive::text_oarchive, T = float]’: /usr/include/boost/serialization/serialization.hpp:74: instantiated from ‘void boost::serialization::serialize(Archive&, T&, unsigned int) [with Archive = boost::archive::text_oarchive, T = float]’ /usr/include/boost/serialization/serialization.hpp:133: instantiated from ‘void boost::serialization::serialize_adl(Archive&, T&, unsigned int) [with Archive = boost::archive::text_oarchive, T = float]’ /usr/include/boost/archive/detail/oserializer.hpp:140: instantiated from ‘void boost::archive::detail::oserializer<Archive, T>::save_object_data(boost::archive::detail::basic_oarchive&, const void*) const [with Archive = boost::archive::text_oarchive, T = float]’ demo.cc:105: instantiated from here /usr/include/boost/serialization/access.hpp:109: error: request for member ‘serialize’ in ‘t’, which is of non-class type ‘float’ Please help me out.


  • How to log subsonic3 sql

    - by bastos.sergio
    Hi, I'm starting to develop a new ASP.NET application based on SubSonic 3 (for queries) and log4net (for logs), and I would like to know how to interface SubSonic 3 with log4net so that log4net logs the underlying SQL used by SubSonic. This is what I have so far:

        public static IEnumerable<arma_ocorrencium> ListArmasOcorrencia()
        {
            if (logger.IsInfoEnabled)
            {
                logger.Info("ListarArmasOcorrencia: start");
            }

            var db = new BdvdDB();
            var select = from p in db.arma_ocorrencia
                         select p;
            var results = select.ToList<arma_ocorrencium>(); // Execute the query here

            if (logger.IsInfoEnabled)
            {
                // log sql here
            }

            if (logger.IsInfoEnabled)
            {
                logger.Info("ListarArmasOcorrencia: end");
            }

            return results;
        }


  • Configuring asp.net web applications. Best practices

    - by Andrew Florko
    Hello everybody,

    There is a lot of configurable information for a web site:

    - UI messages
    - Number of records used in pagination & other UI parameters
    - Cache duration for web pages & timeouts
    - Route maps & site structure
    - ...

    There are also many approaches to store all this information:

    - AppSettings (web.config)
    - Custom sections (web.config)
    - External xml/text files referred from web.config
    - Internal static class(es) of constants
    - Database table(s)
    - ...

    What approaches do you usually choose for your tasks, and what approaches do you find unsuitable? Thank you in advance!
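    For what it's worth, the appSettings option in the list above is the simplest to wire up; a minimal sketch of reading one such value (the key name is made up for illustration) looks like this:

        // web.config: <appSettings><add key="CacheDurationSeconds" value="300" /></appSettings>

        // Reads an integer setting from <appSettings>, falling back to a default
        // when the key is missing or malformed.
        static int GetCacheDurationSeconds()
        {
            string raw = System.Configuration.ConfigurationManager.AppSettings["CacheDurationSeconds"];
            int seconds;
            return int.TryParse(raw, out seconds) ? seconds : 300;
        }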


  • Inheritance in C# question - is overriding internal methods possible?

    - by Jeff Dahmer
    Is it possible to override an internal method's behavior?

        using System;

        class TestClass
        {
            public string Name
            {
                get { return this.ProtectedMethod(); }
            }

            protected string ProtectedMethod()
            {
                return InternalMethod();
            }

            string InternalMethod()
            {
                return "TestClass::InternalMethod()";
            }
        }

        class OverrideClassProgram : TestClass
        {
            // try to override the internal method using new? (compiler warning)
            new string InternalMethod()
            {
                return "OverrideClassProgram::InternalMethod()";
            }

            static int Main(string[] args)
            {
                // TestClass::InternalMethod()
                Console.WriteLine(new TestClass().Name);

                // TestClass::InternalMethod() ?? are we just screwed?
                Console.WriteLine(new OverrideClassProgram().Name);

                return (int)Console.ReadKey().Key;
            }
        }


  • Fill WinForms DataGridView From Anonymous Linq Query

    - by mdvaldosta
        // From my form
        BindingSource bs = new BindingSource();

        private void fillStudentGrid()
        {
            bs.DataSource = Admin.GetStudents();
            dgViewStudents.DataSource = bs;
        }

        // From the Admin class
        public static List<Student> GetStudents()
        {
            DojoDBDataContext conn = new DojoDBDataContext();
            var query = (from s in conn.Students
                         select new Student
                         {
                             ID = s.ID,
                             FirstName = s.FirstName,
                             LastName = s.LastName,
                             Belt = s.Belt
                         }).ToList();
            return query;
        }

    I'm trying to fill a DataGridView control in WinForms, and I only want a few of the values. The code compiles, but throws a runtime error: "Explicit construction of entity type 'DojoManagement.Student' in query is not allowed." Is there a way to get it working in this manner?
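    For context, that runtime error is LINQ to SQL refusing to construct one of its own mapped entities inside a projection. A commonly used workaround (sketched here with an invented StudentRow class and guessed property types, not necessarily what the asker wants) is to project into a plain class or an anonymous type instead:

        // A plain, unmapped class used only for display in the grid.
        public class StudentRow
        {
            public int ID { get; set; }
            public string FirstName { get; set; }
            public string LastName { get; set; }
            public string Belt { get; set; }
        }

        public static List<StudentRow> GetStudents()
        {
            using (var conn = new DojoDBDataContext())   // context name taken from the question
            {
                return (from s in conn.Students
                        select new StudentRow
                        {
                            ID = s.ID,
                            FirstName = s.FirstName,
                            LastName = s.LastName,
                            Belt = s.Belt
                        }).ToList();
            }
        }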

    Read the article

  • How to configure maximum number of channels in WCF?

    - by Hemant
    Consider the following code, which calls a calculator service:

        static void Main(string[] args)
        {
            for (int i = 0; i < 32; i++)
            {
                ThreadPool.QueueUserWorkItem(o =>
                {
                    var client = new CalcServiceClient();
                    client.Open();
                    while (true)
                    {
                        var sum = client.Add(2, 3);
                    }
                });
            }
            Console.ReadLine();
        }

    If I use the TCP binding, up to 32 connections are opened, but if I use the HTTP binding, only 2 TCP connections are opened. How can I configure the maximum number of connections that can be opened with the HTTP binding?
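
    The 2-connection ceiling looks like the standard System.Net per-host connection limit, so one thing to try (a sketch under that assumption, not a verified fix for this exact service) is raising the limit, either in code or in the <system.net>/<connectionManagement> section of the config file:

        using System.Net;

        static class ConnectionLimit
        {
            public static void Raise()
            {
                // Applies to ServicePoints created after this line runs.
                ServicePointManager.DefaultConnectionLimit = 32;
            }
        }

        // Rough config-file equivalent (shown as a comment to keep this a single C# snippet):
        // <system.net>
        //   <connectionManagement>
        //     <add address="*" maxconnection="32" />
        //   </connectionManagement>
        // </system.net>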

    Read the article

  • How to check if an object is a LINQ to SQL type

    - by remdao
    Hi, does anyone know an easy way to check whether an object is a LINQ to SQL entity type? I have a method like this, but I would like it to accept only valid types.

        public static void Update(params object[] items)
        {
            using (TheDataContext dc = new TheDataContext())
            {
                System.Data.Linq.ITable table;
                if (items.Length > 0)
                    table = dc.GetTable(items[0].GetType());

                for (int i = 0; i < items.Length; i++)
                {
                    table.Attach(items[i]);
                    dc.Refresh(System.Data.Linq.RefreshMode.KeepCurrentValues, items[i]);
                }
                dc.SubmitChanges();
            }
        }
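
    A minimal sketch of one possible guard: designer-generated LINQ to SQL entities carry System.Data.Linq.Mapping.TableAttribute, so a reflection check can filter the inputs (this assumes attribute-based mapping rather than an external XML mapping file):

        using System;
        using System.Data.Linq.Mapping;

        static class LinqToSqlTypeCheck
        {
            public static bool IsLinqToSqlEntity(object item)
            {
                if (item == null) return false;
                // Attribute-mapped entity classes are decorated with [Table].
                return Attribute.IsDefined(item.GetType(), typeof(TableAttribute));
            }
        }

        // Example guard at the top of Update(...):
        // if (!items.All(LinqToSqlTypeCheck.IsLinqToSqlEntity))
        //     throw new ArgumentException("Only LINQ to SQL entity types are supported.", "items");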

    Read the article

  • TimeSpan to Localized String in C#

    - by MadBoy
    Is there an easy way (maybe a built-in solution) to convert a TimeSpan to a localized string? For example, new TimeSpan(3, 5, 0) would be converted to "3 hours, 5 minutes" (but in Polish). I can of course create my own extension:

        public static string ConvertToReadable(this TimeSpan timeSpan)
        {
            int hours = timeSpan.Hours;
            int minutes = timeSpan.Minutes;
            int days = timeSpan.Days;

            if (days > 0)
            {
                return days + " dni " + hours + " godzin " + minutes + " minut";
            }
            else
            {
                return hours + " godzin " + minutes + " minut";
            }
        }

    But this gets complicated if I want to have proper grammar involved.
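
    For the grammar part, a small sketch (not a framework feature): Polish plural forms follow a 1 / 2-4 / 5+ pattern, with 12-14 always taking the "many" form, so a picker like this could back the extension method:

        public static class PolishPlural
        {
            // one = "godzina", few = "godziny", many = "godzin"
            public static string Pick(int n, string one, string few, string many)
            {
                if (n == 1) return one;
                int mod10 = n % 10, mod100 = n % 100;
                if (mod10 >= 2 && mod10 <= 4 && (mod100 < 12 || mod100 > 14)) return few;
                return many;
            }
        }

        // Usage inside the extension method:
        // return hours + " " + PolishPlural.Pick(hours, "godzina", "godziny", "godzin")
        //      + " " + minutes + " " + PolishPlural.Pick(minutes, "minuta", "minuty", "minut");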

    Read the article

  • Problem calling linux C code from FIQ handler

    - by fastmonkeywheels
    I'm working on an ARMv6 core and have an FIQ handler that works great when I do all of my work inside it. However, I need to branch to some additional code that is too large for the FIQ memory area. The FIQ handler gets copied from fiq_start to fiq_end to 0xFFFF001C when it is registered:

        static void test_fiq_handler(void)
        {
            asm volatile("\
            .global fiq_start\n\
            fiq_start:");

            // clear gpio irq
            asm("ldr r10, GPIO_BASE_ISR");
            asm("ldr r9, [r10]");
            asm("orr r9, #0x04");
            asm("str r9, [r10]");

            // clear force register
            asm("ldr r10, AVIC_BASE_INTFRCH");
            asm("ldr r9, [r10]");
            asm("mov r9, #0");
            asm("str r9, [r10]");

            // prepare branch register
            asm(" ldr r11, fiq_handler");

            // save all registers, build sp and branch to C
            asm(" adr r9, regpool");
            asm(" stmia r9, {r0 - r8, r14}");
            asm(" adr sp, fiq_sp");
            asm(" ldr sp, [sp]");
            asm(" add lr, pc,#4");
            asm(" mov pc, r11");

        #if 0
            asm("ldr r10, IOMUX_ADDR12");
            asm("ldr r9, [r10]");
            asm("orr r9, #0x08 @ top/vertex LED");
            asm("str r9,[r10] @turn on LED");
            asm("bic r9, #0x08 @ top/vertex LED");
            asm("str r9,[r10] @turn on LED");
        #endif

            asm(" adr r9, regpool");
            asm(" ldmia r9, {r0 - r8, r14}");

            // return
            asm("subs pc, r14, #4");

            asm("IOMUX_ADDR12: .word 0xFC2A4000");
            asm("AVIC_BASE_INTCNTL: .word 0xFC400000");
            asm("AVIC_BASE_INTENNUM: .word 0xFC400008");
            asm("AVIC_BASE_INTDISNUM: .word 0xFC40000C");
            asm("AVIC_BASE_FIVECSR: .word 0xFC400044");
            asm("AVIC_BASE_INTFRCH: .word 0xFC400050");
            asm("GPIO_BASE_ISR: .word 0xFC2CC018");
            asm(".globl fiq_handler");
            asm("fiq_sp: .long fiq_stack+120");
            asm("fiq_handler: .long 0");
            asm("regpool: .space 40");
            asm(".pool");
            asm(".align 5");
            asm("fiq_stack: .space 124");
            asm(".global fiq_end");
            asm("fiq_end:");
        }

    fiq_handler gets set to the following function:

        static void fiq_flip_pins(void)
        {
            asm("ldr r10, IOMUX_ADDR12_k");
            asm("ldr r9, [r10]");
            asm("orr r9, #0x08 @ top/vertex LED");
            asm("str r9,[r10] @turn on LED");
            asm("bic r9, #0x08 @ top/vertex LED");
            asm("str r9,[r10] @turn on LED");
            asm("IOMUX_ADDR12_k: .word 0xFC2A4000");
        }
        EXPORT_SYMBOL(fiq_flip_pins);

    I know that, since the FIQ handler operates outside of the normal kernel APIs and is a rather high-priority interrupt, I must ensure that whatever I call is already swapped into memory. I do this by having the fiq_flip_pins function defined in the monolithic kernel rather than in a module (whose memory comes from vmalloc). If I don't branch to fiq_flip_pins and instead do the work in test_fiq_handler, everything works as expected. It's the branching that's causing me problems at the moment: right after branching I get a kernel panic about a paging request, and I don't understand why.
    fiq_flip_pins is in the kernel at: c00307ec t fiq_flip_pins

        Unable to handle kernel paging request at virtual address 736e6f63
        pgd = c3dd0000
        [736e6f63] *pgd=00000000
        Internal error: Oops: 5 [#1] PREEMPT
        Modules linked in: hello_1
        CPU: 0    Not tainted  (2.6.31-207-g7286c01-svn4 #122)
        PC is at strnlen+0x10/0x28
        LR is at string+0x38/0xcc
        pc : [<c016b004>]    lr : [<c016c754>]    psr: a00001d3
        sp : c3817ea0  ip : 736e6f63  fp : 00000400
        r10: c03cab5c  r9 : c0339ae0  r8 : 736e6f63
        r7 : c03caf5c  r6 : c03cab6b  r5 : ffffffff  r4 : 00000000
        r3 : 00000004  r2 : 00000000  r1 : ffffffff  r0 : 736e6f63
        Flags: NzCv  IRQs off  FIQs off  Mode SVC_32  ISA ARM  Segment user
        Control: 00c5387d  Table: 83dd0008  DAC: 00000015
        Process sh (pid: 1663, stack limit = 0xc3816268)
        Stack: (0xc3817ea0 to 0xc3818000)

    Since there are no API calls in my code, I have to assume that something is going wrong in the call into C and the return back. Any help solving this is appreciated.

    Read the article

  • Implementing RSA-SHA1 signature algorithm in Java (creating a private key for use with OAuth RSA-SHA

    - by The Elite Gentleman
    Hi everyone,
    As you know, OAuth supports the RSA-SHA1 signature method. I have an OAuthSignature interface with the following method:

        public String sign(String data, String consumerSecret, String tokenSecret) throws GeneralSecurityException;

    I have successfully implemented and tested the HMAC-SHA1 signature (which OAuth supports) as well as the PLAINTEXT "signature". From searching Google, it appears I have to create a private key if I want to use the SHA1withRSA signature. Sample code:

        /**
         * Signs the data with the given key and the provided algorithm.
         */
        private static byte[] sign(PrivateKey key, String data) throws GeneralSecurityException
        {
            Signature signature = Signature.getInstance("SHA1withRSA");
            signature.initSign(key);
            signature.update(data.getBytes());
            return signature.sign();
        }

    Now, how can I take the OAuth key (which is key = consumerSecret&tokenSecret) and create a PrivateKey to use with the SHA1withRSA signature? Thanks

    Read the article

  • Creating a Compilation Unit with type bindings

    - by hizki
    I am working with the AST API in Java, and I am trying to create a CompilationUnit with type bindings. I wrote the following code:

        private static CompilationUnit parse(ICompilationUnit unit)
        {
            ASTParser parser = ASTParser.newParser(AST.JLS3);
            parser.setKind(ASTParser.K_COMPILATION_UNIT);
            parser.setSource(unit);
            parser.setResolveBindings(true);
            CompilationUnit compiUnit = (CompilationUnit) parser.createAST(null);
            return compiUnit;
        }

    Unfortunately, when I run this code in debug mode and inspect compiUnit, I find that compiUnit.resolver.isRecoveringBindings is false. Can anyone think of a reason why it wouldn't be true, as I specified it to be? Thank you

    Read the article

  • IOC Container Handling State Params in Non-Default Constructor

    - by Mystagogue
    For the purpose of this discussion, there are two kinds of parameters an object constructor might take: state dependencies and service dependencies. Supplying a service dependency with an IOC container is easy: DI takes over. In contrast, state dependencies are usually known only to the client, that is, the object requestor. It turns out that having a client supply the state params through an IOC container is quite painful. I will show several different ways to do this, all of which have big problems, and ask the community if there is another option I'm missing.

    Let's begin. Before I added an IOC container to my project code, I started with a class like this:

        class Foobar
        {
            // parameters are state dependencies, not service dependencies
            public Foobar(string alpha, int omega) {...}
            // ...other stuff
        }

    I decide to add a Logger service dependency to the Foobar class, which perhaps I'll provide through DI:

        class Foobar
        {
            public Foobar(string alpha, int omega, ILogger log) {...}
            // ...other stuff
        }

    But then I'm also told I need to make class Foobar itself "swappable." That is, I'm required to service-locate a Foobar instance. I add a new interface into the mix:

        class Foobar : IFoobar
        {
            public Foobar(string alpha, int omega, ILogger log) {...}
            // ...other stuff
        }

    When I make the service locator call, it will DI the ILogger service dependency for me. Unfortunately, the same is not true of the state dependencies Alpha and Omega. Some containers offer a syntax to address this:

        // Unity 2.0 pseudo-ish code:
        myContainer.Resolve<IFoobar>(new parameterOverride[] { {"alpha", "one"}, {"omega", 2} });

    I like the feature, but I don't like that it is untyped and that it is not evident to the developer what parameters must be passed (via IntelliSense, etc.). So I look at another solution:

        // This is a "boiler plate" heavy approach!
        class Foobar : IFoobar
        {
            public Foobar(string alpha, int omega) {...}
            // ...stuff
        }

        class FoobarFactory : IFoobarFactory
        {
            public IFoobar IFoobarFactory.Create(string alpha, int omega)
            {
                return new Foobar(alpha, omega);
            }
        }

        // fetch it...
        myContainer.Resolve<IFoobarFactory>().Create("one", 2);

    The above solves the type-safety and IntelliSense problem, but (1) it forces class Foobar to fetch an ILogger through a service locator rather than DI, and (2) it requires me to make a bunch of boiler-plate (XXXFactory, IXXXFactory) for all varieties of Foobar implementations I might use. Should I decide to go with a pure service locator approach, it may not be a problem, but I still can't stand all the boiler-plate needed to make this work. So then I try this:

        // code named "concrete creator"
        class Foobar : IFoobar
        {
            public Foobar(string alpha, int omega, ILogger log) {...}

            static IFoobar Create(string alpha, int omega)
            {
                // Unity 2.0 pseudo-ish code. Assume a common
                // service locator, or a singleton holds the container...
                return Container.Resolve<IFoobar>(new parameterOverride[] { {"alpha", alpha}, {"omega", omega} });
            }
        }

        // Get my instance:
        Foobar.Create("alpha", 2);

    I actually don't mind that I'm using the concrete "Foobar" class to create an IFoobar. It represents a base concept that I don't expect to change in my code. I also don't mind the lack of type-safety in the static "Create", because it is now encapsulated. My IntelliSense is working too! Any concrete instance made this way will ignore the supplied state params if they don't apply (a Unity 2.0 behavior). Perhaps a different concrete implementation "FooFoobar" might have a formal arg name mismatch, but I'm still pretty happy with it.
    But the big problem with this approach is that it only works effectively with Unity 2.0 (a mismatched parameter in StructureMap will throw an exception), so it is good only if I stay with Unity. The problem is, I'm beginning to like StructureMap a lot more. So now I move on to yet another option:

        class Foobar : IFoobar, IFoobarInit
        {
            public Foobar(ILogger log) {...}

            public IFoobar IFoobarInit.Initialize(string alpha, int omega)
            {
                this.alpha = alpha;
                this.omega = omega;
                return this;
            }
        }

        // now create it...
        IFoobar foo = myContainer.Resolve<IFoobarInit>().Initialize("one", 2);

    With this I've got a somewhat nice compromise between the other approaches: (1) my arguments are type-safe / IntelliSense aware, (2) I have a choice of fetching the ILogger via DI (shown above) or a service locator, (3) there is no need to make one or more separate concrete FoobarFactory classes (contrast with the verbose "boiler-plate" example code earlier), and (4) it reasonably upholds the principle "make interfaces easy to use correctly, and hard to use incorrectly." At least it is arguably no worse than the alternatives previously discussed.

    One acceptance barrier yet remains: I also want to apply "design by contract." Every sample I presented intentionally favored constructor injection (for state dependencies) because I want to preserve "invariant" support as most commonly practiced; namely, the invariant is established when the constructor completes. In the sample above, the invariant is not established when object construction completes. As long as I'm doing home-grown "design by contract" I could just tell developers not to test the invariant until the Initialize(...) method is called. But more to the point, when .NET 4.0 comes out I want to use its "code contract" support for design by contract, and from what I read, it will not be compatible with this last approach. Curses!

    Of course, it also occurs to me that my entire philosophy is off. Perhaps I'd be told that conjuring a Foobar : IFoobar via a service locator implies that it is a service, and services only have other service dependencies; they don't have state dependencies (such as the Alpha and Omega of these examples). I'm open to listening to such philosophical matters as well, but I'd also like to know what semi-authoritative reference would steer me down that thought path.

    So now I turn it to the community. What approach should I consider that I haven't yet? Must I really believe I've exhausted my options?
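
    One further candidate, sketched minimally and deliberately container-agnostic, building on the post's own IFoobar/Foobar/ILogger types: let the container hand out a typed delegate factory (a Func that closes over the container-resolved ILogger). The state parameters stay type-safe and IntelliSense-visible, and no per-implementation IXXXFactory interface is needed. The registration call itself is container-specific and is only hinted at in a comment:

        // Register something like this with the container (registration syntax varies by container):
        // Func<string, int, IFoobar> factory = (alpha, omega) => new Foobar(alpha, omega, resolvedLogger);

        class FoobarClient
        {
            private readonly Func<string, int, IFoobar> _createFoobar;

            // The container injects the delegate; the client supplies state at call time.
            public FoobarClient(Func<string, int, IFoobar> createFoobar)
            {
                _createFoobar = createFoobar;
            }

            public void DoWork()
            {
                IFoobar foo = _createFoobar("one", 2);   // type-safe, IntelliSense-friendly
                // ...use foo...
            }
        }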

    Read the article

  • SQL Server – Undelete a Table and Restore a Single Table from Backup

    - by Mladen Prajdic
    This post is part of the monthly community event called T-SQL Tuesday started by Adam Machanic (blog|twitter) and hosted by someone else each month. This month the host is Sankar Reddy (blog|twitter) and the topic is Misconceptions in SQL Server. You can follow posts for this theme on Twitter by looking at #TSQL2sDay hashtag. Let me start by saying: This code is a crazy hack that is to never be used unless you really, really have to. Really! And I don’t think there’s a time when you would really have to use it for real. Because it’s a hack there are number of things that can go wrong so play with it knowing that. I’ve managed to totally corrupt one database. :) Oh… and for those saying: yeah yeah.. you have a single table in a file group and you’re restoring that, I say “nay nay” to you. As we all know SQL Server can’t do single table restores from backup. This is kind of a obvious thing due to different relational integrity (RI) concerns. Since we have to maintain that we have to restore all tables represented in a RI graph. For this exercise i say BAH! to those concerns. Note that this method “works” only for simple tables that don’t have LOB and off rows data. The code can be expanded to include those but I’ve tried to leave things “simple”. Note that for this to work our table needs to be relatively static data-wise. This doesn’t work for OLTP table. Products are a perfect example of static data. They don’t change much between backups, pretty much everything depends on them and their table is one of those tables that are relatively easy to accidentally delete everything from. This only works if the database is in Full or Bulk-Logged recovery mode for tables where the contents have been deleted or truncated but NOT when a table was dropped. Everything we’ll talk about has to be done before the data pages are reused for other purposes. After deletion or truncation the pages are marked as reusable so you have to act fast. The best thing probably is to put the database into single user mode ASAP while you’re performing this procedure and return it to multi user after you’re done. How do we do it? We will be using an undocumented but known DBCC commands: DBCC PAGE, an undocumented function sys.fn_dblog and a little known DATABASE RESTORE PAGE option. All tests will be on a copy of Production.Product table in AdventureWorks database called Production.Product1 because the original table has FK constraints that prevent us from truncating it for testing. -- create a duplicate table. This doesn't preserve indexes!SELECT *INTO AdventureWorks.Production.Product1FROM AdventureWorks.Production.Product   After we run this code take a full back to perform further testing.   First let’s see what the difference between DELETE and TRUNCATE is when it comes to logging. With DELETE every row deletion is logged in the transaction log. With TRUNCATE only whole data page deallocations are logged in the transaction log. Getting deleted data pages is simple. All we have to look for is row delete entry in the sys.fn_dblog output. But getting data pages that were truncated from the transaction log presents a bit of an interesting problem. I will not go into depths of IAM(Index Allocation Map) and PFS (Page Free Space) pages but suffice to say that every IAM page has intervals that tell us which data pages are allocated for a table and which aren’t. 
If we deep dive into the sys.fn_dblog output we can see that once you truncate a table all the pages in all the intervals are deallocated and this is shown in the PFS page transaction log entry as deallocation of pages. For every 8 pages in the same extent there is one PFS page row in the transaction log. This row holds information about all 8 pages in CSV format which means we can get to this data with some parsing. A great help for parsing this stuff is Peter Debetta’s handy function dbo.HexStrToVarBin that converts hexadecimal string into a varbinary value that can be easily converted to integer tus giving us a readable page number. The shortened (columns removed) sys.fn_dblog output for a PFS page with CSV data for 1 extent (8 data pages) looks like this: -- [Page ID] is displayed in hex format. -- To convert it to readable int we'll use dbo.HexStrToVarBin function found at -- http://sqlblog.com/blogs/peter_debetta/archive/2007/03/09/t-sql-convert-hex-string-to-varbinary.aspx -- This function must be installed in the master databaseSELECT Context, AllocUnitName, [Page ID], DescriptionFROM sys.fn_dblog(NULL, NULL)WHERE [Current LSN] = '00000031:00000a46:007d' The pages at the end marked with 0x00—> are pages that are allocated in the extent but are not part of a table. We can inspect the raw content of each data page with a DBCC PAGE command: -- we need this trace flag to redirect output to the query window.DBCC TRACEON (3604); -- WITH TABLERESULTS gives us data in table format instead of message format-- we use format option 3 because it's the easiest to read and manipulate further onDBCC PAGE (AdventureWorks, 1, 613, 3) WITH TABLERESULTS   Since the DBACC PAGE output can be quite extensive I won’t put it here. You can see an example of it in the link at the beginning of this section. Getting deleted data back When we run a delete statement every row to be deleted is marked as a ghost record. A background process periodically cleans up those rows. A huge misconception is that the data is actually removed. It’s not. Only the pointers to the rows are removed while the data itself is still on the data page. We just can’t access it with normal means. To get those pointers back we need to restore every deleted page using the RESTORE PAGE option mentioned above. This restore must be done from a full backup, followed by any differential and log backups that you may have. This is necessary to bring the pages up to the same point in time as the rest of the data.  However the restore doesn’t magically connect the restored page back to the original table. It simply replaces the current page with the one from the backup. After the restore we use the DBCC PAGE to read data directly from all data pages and insert that data into a temporary table. To finish the RESTORE PAGE  procedure we finally have to take a tail log backup (simple backup of the transaction log) and restore it back. We can now insert data from the temporary table to our original table by hand. Getting truncated data back When we run a truncate the truncated data pages aren’t touched at all. Even the pointers to rows stay unchanged. Because of this getting data back from truncated table is simple. we just have to find out which pages belonged to our table and use DBCC PAGE to read data off of them. No restore is necessary. Turns out that the problems we had with finding the data pages is alleviated by not having to do a RESTORE PAGE procedure. Stop stalling… show me The Code! 
This is the code for getting back deleted and truncated data back. It’s commented in all the right places so don’t be afraid to take a closer look. Make sure you have a full backup before trying this out. Also I suggest that the last step of backing and restoring the tail log is performed by hand. USE masterGOIF OBJECT_ID('dbo.HexStrToVarBin') IS NULL RAISERROR ('No dbo.HexStrToVarBin installed. Go to http://sqlblog.com/blogs/peter_debetta/archive/2007/03/09/t-sql-convert-hex-string-to-varbinary.aspx and install it in master database' , 18, 1) SET NOCOUNT ONBEGIN TRY DECLARE @dbName VARCHAR(1000), @schemaName VARCHAR(1000), @tableName VARCHAR(1000), @fullBackupName VARCHAR(1000), @undeletedTableName VARCHAR(1000), @sql VARCHAR(MAX), @tableWasTruncated bit; /* THE FIRST LINE ARE OUR INPUT PARAMETERS In this case we're trying to recover Production.Product1 table in AdventureWorks database. My full backup of AdventureWorks database is at e:\AW.bak */ SELECT @dbName = 'AdventureWorks', @schemaName = 'Production', @tableName = 'Product1', @fullBackupName = 'e:\AW.bak', @undeletedTableName = '##' + @tableName + '_Undeleted', @tableWasTruncated = 0, -- copy the structure from original table to a temp table that we'll fill with restored data @sql = 'IF OBJECT_ID(''tempdb..' + @undeletedTableName + ''') IS NOT NULL DROP TABLE ' + @undeletedTableName + ' SELECT *' + ' INTO ' + @undeletedTableName + ' FROM [' + @dbName + '].[' + @schemaName + '].[' + @tableName + ']' + ' WHERE 1 = 0' EXEC (@sql) IF OBJECT_ID('tempdb..#PagesToRestore') IS NOT NULL DROP TABLE #PagesToRestore /* FIND DATA PAGES WE NEED TO RESTORE*/ CREATE TABLE #PagesToRestore ([ID] INT IDENTITY(1,1), [FileID] INT, [PageID] INT, [SQLtoExec] VARCHAR(1000)) -- DBCC PACE statement to run later RAISERROR ('Looking for deleted pages...', 10, 1) -- use T-LOG direct read to get deleted data pages INSERT INTO #PagesToRestore([FileID], [PageID], [SQLtoExec]) EXEC('USE [' + @dbName + '];SELECT FileID, PageID, ''DBCC TRACEON (3604); DBCC PAGE ([' + @dbName + '], '' + FileID + '', '' + PageID + '', 3) WITH TABLERESULTS'' as SQLToExecFROM (SELECT DISTINCT LEFT([Page ID], 4) AS FileID, CONVERT(VARCHAR(100), ' + 'CONVERT(INT, master.dbo.HexStrToVarBin(SUBSTRING([Page ID], 6, 20)))) AS PageIDFROM sys.fn_dblog(NULL, NULL)WHERE AllocUnitName LIKE ''%' + @schemaName + '.' + @tableName + '%'' ' + 'AND Context IN (''LCX_MARK_AS_GHOST'', ''LCX_HEAP'') AND Operation in (''LOP_DELETE_ROWS''))t');SELECT *FROM #PagesToRestore -- if upper EXEC returns 0 rows it means the table was truncated so find truncated pages IF (SELECT COUNT(*) FROM #PagesToRestore) = 0 BEGIN RAISERROR ('No deleted pages found. Looking for truncated pages...', 10, 1) -- use T-LOG read to get truncated data pages INSERT INTO #PagesToRestore([FileID], [PageID], [SQLtoExec]) -- dark magic happens here -- because truncation simply deallocates pages we have to find out which pages were deallocated. -- we can find this out by looking at the PFS page row's Description column. -- for every deallocated extent the Description has a CSV of 8 pages in that extent. -- then it's just a matter of parsing it. 
-- we also remove the pages in the extent that weren't allocated to the table itself -- marked with '0x00-->00' EXEC ('USE [' + @dbName + '];DECLARE @truncatedPages TABLE(DeallocatedPages VARCHAR(8000), IsMultipleDeallocs BIT);INSERT INTO @truncatedPagesSELECT REPLACE(REPLACE(Description, ''Deallocated '', ''Y''), ''0x00-->00 '', ''N'') + '';'' AS DeallocatedPages, CHARINDEX('';'', Description) AS IsMultipleDeallocsFROM (SELECT DISTINCT LEFT([Page ID], 4) AS FileID, CONVERT(VARCHAR(100), CONVERT(INT, master.dbo.HexStrToVarBin(SUBSTRING([Page ID], 6, 20)))) AS PageID, DescriptionFROM sys.fn_dblog(NULL, NULL)WHERE Context IN (''LCX_PFS'') AND Description LIKE ''Deallocated%'' AND AllocUnitName LIKE ''%' + @schemaName + '.' + @tableName + '%'') t;SELECT FileID, PageID , ''DBCC TRACEON (3604); DBCC PAGE ([' + @dbName + '], '' + FileID + '', '' + PageID + '', 3) WITH TABLERESULTS'' as SQLToExecFROM (SELECT LEFT(PageAndFile, 1) as WasPageAllocatedToTable , SUBSTRING(PageAndFile, 2, CHARINDEX('':'', PageAndFile) - 2 ) as FileID , CONVERT(VARCHAR(100), CONVERT(INT, master.dbo.HexStrToVarBin(SUBSTRING(PageAndFile, CHARINDEX('':'', PageAndFile) + 1, LEN(PageAndFile))))) as PageIDFROM ( SELECT SUBSTRING(DeallocatedPages, delimPosStart, delimPosEnd - delimPosStart) as PageAndFile, IsMultipleDeallocs FROM ( SELECT *, CHARINDEX('';'', DeallocatedPages)*(N-1) + 1 AS delimPosStart, CHARINDEX('';'', DeallocatedPages)*N AS delimPosEnd FROM @truncatedPages t1 CROSS APPLY (SELECT TOP (case when t1.IsMultipleDeallocs = 1 then 8 else 1 end) ROW_NUMBER() OVER(ORDER BY number) as N FROM master..spt_values) t2 )t)t)tWHERE WasPageAllocatedToTable = ''Y''') SELECT @tableWasTruncated = 1 END DECLARE @lastID INT, @pagesCount INT SELECT @lastID = 1, @pagesCount = COUNT(*) FROM #PagesToRestore SELECT @sql = 'Number of pages to restore: ' + CONVERT(VARCHAR(10), @pagesCount) IF @pagesCount = 0 RAISERROR ('No data pages to restore.', 18, 1) ELSE RAISERROR (@sql, 10, 1) -- If the table was truncated we'll read the data directly from data pages without restoring from backup IF @tableWasTruncated = 0 BEGIN -- RESTORE DATA PAGES FROM FULL BACKUP IN BATCHES OF 200 WHILE @lastID <= @pagesCount BEGIN -- create CSV string of pages to restore SELECT @sql = STUFF((SELECT ',' + CONVERT(VARCHAR(100), FileID) + ':' + CONVERT(VARCHAR(100), PageID) FROM #PagesToRestore WHERE ID BETWEEN @lastID AND @lastID + 200 ORDER BY ID FOR XML PATH('')), 1, 1, '') SELECT @sql = 'RESTORE DATABASE [' + @dbName + '] PAGE = ''' + @sql + ''' FROM DISK = ''' + @fullBackupName + '''' RAISERROR ('Starting RESTORE command:' , 10, 1) WITH NOWAIT; RAISERROR (@sql , 10, 1) WITH NOWAIT; EXEC(@sql); RAISERROR ('Restore DONE' , 10, 1) WITH NOWAIT; SELECT @lastID = @lastID + 200 END /* If you have any differential or transaction log backups you should restore them here to bring the previously restored data pages up to date */ END DECLARE @dbccSinglePage TABLE ( [ParentObject] NVARCHAR(500), [Object] NVARCHAR(500), [Field] NVARCHAR(500), [VALUE] NVARCHAR(MAX) ) DECLARE @cols NVARCHAR(MAX), @paramDefinition NVARCHAR(500), @SQLtoExec VARCHAR(1000), @FileID VARCHAR(100), @PageID VARCHAR(100), @i INT = 1 -- Get deleted table columns from information_schema view -- Need sp_executeSQL because database name can't be passed in as variable SELECT @cols = 'select @cols = STUFF((SELECT '', ['' + COLUMN_NAME + '']''FROM ' + @dbName + '.INFORMATION_SCHEMA.COLUMNSWHERE TABLE_NAME = ''' + @tableName + ''' AND TABLE_SCHEMA = ''' + @schemaName + '''ORDER BY ORDINAL_POSITIONFOR XML 
PATH('''')), 1, 2, '''')', @paramDefinition = N'@cols nvarchar(max) OUTPUT' EXECUTE sp_executesql @cols, @paramDefinition, @cols = @cols OUTPUT -- Loop through all the restored data pages, -- read data from them and insert them into temp table -- which you can then insert into the orignial deleted table DECLARE dbccPageCursor CURSOR GLOBAL FORWARD_ONLY FOR SELECT [FileID], [PageID], [SQLtoExec] FROM #PagesToRestore ORDER BY [FileID], [PageID] OPEN dbccPageCursor; FETCH NEXT FROM dbccPageCursor INTO @FileID, @PageID, @SQLtoExec; WHILE @@FETCH_STATUS = 0 BEGIN RAISERROR ('---------------------------------------------', 10, 1) WITH NOWAIT; SELECT @sql = 'Loop iteration: ' + CONVERT(VARCHAR(10), @i); RAISERROR (@sql, 10, 1) WITH NOWAIT; SELECT @sql = 'Running: ' + @SQLtoExec RAISERROR (@sql, 10, 1) WITH NOWAIT; -- if something goes wrong with DBCC execution or data gathering, skip it but print error BEGIN TRY INSERT INTO @dbccSinglePage EXEC (@SQLtoExec) -- make the data insert magic happen here IF (SELECT CONVERT(BIGINT, [VALUE]) FROM @dbccSinglePage WHERE [Field] LIKE '%Metadata: ObjectId%') = OBJECT_ID('['+@dbName+'].['+@schemaName +'].['+@tableName+']') BEGIN DELETE @dbccSinglePage WHERE NOT ([ParentObject] LIKE 'Slot % Offset %' AND [Object] LIKE 'Slot % Column %') SELECT @sql = 'USE tempdb; ' + 'IF (OBJECTPROPERTY(object_id(''' + @undeletedTableName + '''), ''TableHasIdentity'') = 1) ' + 'SET IDENTITY_INSERT ' + @undeletedTableName + ' ON; ' + 'INSERT INTO ' + @undeletedTableName + '(' + @cols + ') ' + STUFF((SELECT ' UNION ALL SELECT ' + STUFF((SELECT ', ' + CASE WHEN VALUE = '[NULL]' THEN 'NULL' ELSE '''' + [VALUE] + '''' END FROM ( -- the unicorn help here to correctly set ordinal numbers of columns in a data page -- it's turning STRING order into INT order (1,10,11,2,21 into 1,2,..10,11...21) SELECT [ParentObject], [Object], Field, VALUE, RIGHT('00000' + O1, 6) AS ParentObjectOrder, RIGHT('00000' + REVERSE(LEFT(O2, CHARINDEX(' ', O2)-1)), 6) AS ObjectOrder FROM ( SELECT [ParentObject], [Object], Field, VALUE, REPLACE(LEFT([ParentObject], CHARINDEX('Offset', [ParentObject])-1), 'Slot ', '') AS O1, REVERSE(LEFT([Object], CHARINDEX('Offset ', [Object])-2)) AS O2 FROM @dbccSinglePage WHERE t.ParentObject = ParentObject )t)t ORDER BY ParentObjectOrder, ObjectOrder FOR XML PATH('')), 1, 2, '') FROM @dbccSinglePage t GROUP BY ParentObject FOR XML PATH('') ), 1, 11, '') + ';' RAISERROR (@sql, 10, 1) WITH NOWAIT; EXEC (@sql) END END TRY BEGIN CATCH SELECT @sql = 'ERROR!!!' 
+ CHAR(10) + CHAR(13) + 'ErrorNumber: ' + ERROR_NUMBER() + '; ErrorMessage' + ERROR_MESSAGE() + CHAR(10) + CHAR(13) + 'FileID: ' + @FileID + '; PageID: ' + @PageID RAISERROR (@sql, 10, 1) WITH NOWAIT; END CATCH DELETE @dbccSinglePage SELECT @sql = 'Pages left to process: ' + CONVERT(VARCHAR(10), @pagesCount - @i) + CHAR(10) + CHAR(13) + CHAR(10) + CHAR(13) + CHAR(10) + CHAR(13), @i = @i+1 RAISERROR (@sql, 10, 1) WITH NOWAIT; FETCH NEXT FROM dbccPageCursor INTO @FileID, @PageID, @SQLtoExec; END CLOSE dbccPageCursor; DEALLOCATE dbccPageCursor; EXEC ('SELECT ''' + @undeletedTableName + ''' as TableName; SELECT * FROM ' + @undeletedTableName)END TRYBEGIN CATCH SELECT ERROR_NUMBER() AS ErrorNumber, ERROR_MESSAGE() AS ErrorMessage IF CURSOR_STATUS ('global', 'dbccPageCursor') >= 0 BEGIN CLOSE dbccPageCursor; DEALLOCATE dbccPageCursor; ENDEND CATCH-- if the table was deleted we need to finish the restore page sequenceIF @tableWasTruncated = 0BEGIN -- take a log tail backup and then restore it to complete page restore process DECLARE @currentDate VARCHAR(30) SELECT @currentDate = CONVERT(VARCHAR(30), GETDATE(), 112) RAISERROR ('Starting Log Tail backup to c:\Temp ...', 10, 1) WITH NOWAIT; PRINT ('BACKUP LOG [' + @dbName + '] TO DISK = ''c:\Temp\' + @dbName + '_TailLogBackup_' + @currentDate + '.trn''') EXEC ('BACKUP LOG [' + @dbName + '] TO DISK = ''c:\Temp\' + @dbName + '_TailLogBackup_' + @currentDate + '.trn''') RAISERROR ('Log Tail backup done.', 10, 1) WITH NOWAIT; RAISERROR ('Starting Log Tail restore from c:\Temp ...', 10, 1) WITH NOWAIT; PRINT ('RESTORE LOG [' + @dbName + '] FROM DISK = ''c:\Temp\' + @dbName + '_TailLogBackup_' + @currentDate + '.trn''') EXEC ('RESTORE LOG [' + @dbName + '] FROM DISK = ''c:\Temp\' + @dbName + '_TailLogBackup_' + @currentDate + '.trn''') RAISERROR ('Log Tail restore done.', 10, 1) WITH NOWAIT;END-- The last step is manual. Insert data from our temporary table to the original deleted table The misconception here is that you can do a single table restore properly in SQL Server. You can't. But with little experimentation you can get pretty close to it. One way to possible remove a dependency on a backup to retrieve deleted pages is to quickly run a similar script to the upper one that gets data directly from data pages while the rows are still marked as ghost records. It could be done if we could beat the ghost record cleanup task.

    Read the article

  • Silverlight 4.0: DataTemplate Error

    - by xscape
    I'm trying to get a specific template from my resource dictionary. This is my resource dictionary:

        <ResourceDictionary xmlns="http://schemas.microsoft.com/winfx/2006/xaml/presentation"
                            xmlns:view="clr-namespace:Test.Layout.View"
                            xmlns:toolkit="clr-namespace:System.Windows.Controls;assembly=System.Windows.Controls.Toolkit"
                            xmlns:x="http://schemas.microsoft.com/winfx/2006/xaml">
            <DataTemplate x:Key="LeftRightLayout">
                <toolkit:DockPanel>
                    <view:SharedContainerView toolkit:DockPanel.Dock="Left"/>
                    <view:SingleContainerView toolkit:DockPanel.Dock="Right"/>
                </toolkit:DockPanel>
            </DataTemplate>
        </ResourceDictionary>

    However, when it gets to XamlReader.Load in

        private static ResourceDictionary GetResource(string resourceName)
        {
            ResourceDictionary resource = null;
            XDocument xDoc = XDocument.Load(resourceName);
            resource = (ResourceDictionary)XamlReader.Load(xDoc.ToString(SaveOptions.None));
            return resource;
        }

    it fails with: The type 'SharedContainerView' was not found because 'clr-namespace:Test.Layout.View' is an unknown namespace. [Line: 4 Position: 56]

    Read the article

< Previous Page | 336 337 338 339 340 341 342 343 344 345 346 347  | Next Page >