Search Results

Search found 97840 results on 3914 pages for 'custom data type'.


  • How to dynamically add controls in ASP.NET Dynamic Data

    - by loviji
    Hello, I'm trying to work with ASP.NET Dynamic Data. ASP.NET Dynamic Data doesn't seem to be as well covered by people as other technologies. Now, to my question. Let's work with the Details.aspx page located in ~\DynamicData\PageTemplates. I need to add an <asp:DynamicControl runat="server" ...> to the page, into the details table (Form1.detailsTable). I've tried this:

        protected DynamicControl myC = new DynamicControl();

        protected void Page_Load(object sender, EventArgs e)
        {
            foreach (var c in table.Columns)
            {
                myC.DataField = c.DisplayName;
                FormView1.Controls.Add(myC);
            }
        }

    but I cannot see the desired result. Where is the problem? Thanks.
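    One likely issue in the snippet above is that a single DynamicControl instance is reused for every column and is added outside the FormView's bound row, so it never gets a data item to bind to. Below is a rough, untested sketch of one way to create a fresh control per column after the FormView has bound; the DataBound event wiring, the Scaffold check, and the Row.Cells[0] insertion point are assumptions for illustration, not part of the original question (needs System.Web.DynamicData and System.Web.UI.WebControls):

        protected void FormView1_DataBound(object sender, EventArgs e)
        {
            // Hypothetical: resolve the MetaTable for the current Dynamic Data request.
            MetaTable table = DynamicDataRouteHandler.GetRequestMetaTable(Context);

            foreach (MetaColumn column in table.Columns)
            {
                if (!column.Scaffold) continue;   // skip columns Dynamic Data would not render

                // One new control per column; DataField expects the column name, not its DisplayName.
                var dc = new DynamicControl
                {
                    DataField = column.Name,
                    Mode = DataBoundControlMode.ReadOnly
                };
                FormView1.Row.Cells[0].Controls.Add(dc);
            }
        }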

    Read the article

  • Deleted Partition Recovery

    - by ankur.trapasiya
    Recently I was installing Ubuntu 12.04 on my system. There were 4 partitions on my system; I selected one of the four for the installation and chose the option to resize the partition. Initially my partition was 100+ GB in size and I created another partition out of it of 15 GB (EXT4). The moment I changed this partition structure, my original partition got lost along with its data, and I am left with a 50 GB partition and 50 GB of unallocated free space. The data that I have lost means a lot to me and I want to recover it. So is there any way I can recover it? And I haven't checked the "format" option while resizing the partition. Thanks in advance.

    Read the article

  • Fast set indexing data structure for superset retrieval

    - by Asterios
    I am given a set of sets: {{a,b}, {a,b,c}, {a,c}, {a,c,f}} I would like to have a data structure to index those sets such that the following "lookup" is executed fast: find all supersets of a given set. For example, given the set {a,c} the structure would return {{a,b,c}, {a,c,f}, {a,c}} but not {a,b}. Any suggestions? Could this be done with a smart trie-like data structure storing sets after a proper sorting? This data structure is going to be queried a lot, so I'm searching for a structure that might be expensive to build but fast to query.
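    One simple option that often works well in practice is an inverted index: map every element to the ids of the stored sets that contain it, and answer a superset query by intersecting the posting lists of the query's elements, smallest list first. The sketch below is a rough, untested illustration of that idea; the class and member names are made up.

        using System;
        using System.Collections.Generic;
        using System.Linq;

        class SupersetIndex<T>
        {
            // For each element, the ids of all stored sets that contain it.
            private readonly Dictionary<T, HashSet<int>> postings = new Dictionary<T, HashSet<int>>();
            private readonly List<HashSet<T>> sets = new List<HashSet<T>>();

            public void Add(IEnumerable<T> items)
            {
                var set = new HashSet<T>(items);
                int id = sets.Count;
                sets.Add(set);
                foreach (T item in set)
                {
                    if (!postings.TryGetValue(item, out var ids))
                        postings[item] = ids = new HashSet<int>();
                    ids.Add(id);
                }
            }

            // All stored sets that are supersets of 'query': intersect the posting lists,
            // starting with the rarest element to keep the intermediate result small.
            public IEnumerable<HashSet<T>> Supersets(IEnumerable<T> query)
            {
                var lists = new List<HashSet<int>>();
                foreach (T item in query)
                {
                    if (!postings.TryGetValue(item, out var ids))
                        return Enumerable.Empty<HashSet<T>>();   // unknown element: no supersets
                    lists.Add(ids);
                }
                if (lists.Count == 0) return sets;               // empty query: everything matches

                lists.Sort((a, b) => a.Count.CompareTo(b.Count));
                IEnumerable<int> result = lists[0];
                for (int i = 1; i < lists.Count; i++)
                    result = result.Where(lists[i].Contains);
                return result.Select(id => sets[id]);
            }
        }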

    Read the article

  • Recovering data from hard disk after an accidental Ubuntu reinstallation

    - by Saurabh Agarwal
    My computer got wiped accidentally due to a fresh Ubuntu installation. Since the drive contains very important data and code, it would be really great if it could be recovered. It is a 2 TB hard drive which had Ubuntu 10.10 earlier. It now has Ubuntu 12.04 installed on it (which I understand occupies ~4 GB). The machine has been powered off since. The installation was done using a USB stick with the option where the previous Ubuntu installation is removed. Since installation doesn't take a lot of time, I'm inclined to think that the disk wasn't completely formatted and that most of the data is still there. I have no experience with recovery, so a detailed explanation would be very helpful. NOTE: I can arrange an additional 2 TB hard disk for copying data. My computer has a fast internet connection and I have other computers connected to the network which I may use to access the previous one as well.

    Read the article

  • Out-Of-Memory while doing Core Data migration

    - by Kamchatka
    Hello, I'm migrating a Core Data model between two versions of an application. I was storing binary data as blobs in the previous version and I want to take it out of the blobs for performance. My issue is that during the migration Core Data seems to load everything into memory, which leads to low-memory warnings and then to my app being killed. Apple's documentation suggests the following: http://developer.apple.com/library/mac/documentation/Cocoa/Conceptual/CoreDataVersioning/Articles/vmCustomizingTheProcess.html#//apple_ref/doc/uid/TP40005510-SW9 However, that technique seems to rely on the large objects being handled by a different mapping model. In my case, all the objects are basically the same and the same mapping has to be applied to each of them, so I don't see how I could apply their technique. How should I handle a migration with very large objects?

    Read the article

  • Serializing Data Structures in C

    - by src
    I've recently read three separate books on algorithms and data structures, TCP/IP socket programming, and programming with memory. The book about memory briefly discussed the topic of serializing data structures for the purposes of storing them to disk or sending them across a network. I can't help but wonder why the other two books didn't discuss serialization at all. After an unsuccessful web/book search I'm left wondering where I can find a good book/paper/tutorial on serializing data structures in C. Where or how did you learn it?

    Read the article

  • How to expose an entity via alternate keys with spring data rest

    - by dan carter
    Spring Data REST does a great job exposing entities via their primary key for GET, PUT, DELETE etc. operations: /myentities/123 It also exposes search operations: /myentities/search/byMyOtherKey?myOtherKey=123 In my case the entities have a number of alternate keys. The systems calling us will know the objects by these IDs, rather than by our internal primary key. Is it possible to expose the objects via another URL and have the GET, PUT and DELETE handled by the built-in Spring Data REST controllers? /myentities/myotherkey/456 We'd like to avoid forcing the calling systems to make two requests for each update. I've tried playing with the @RestResource path value, but there doesn't seem to be a way to add additional paths.

    Read the article

  • .NET (C#) passing messages from a custom control to main application

    - by zer0c00l
    A custom Windows Forms control named 'tweet' is in a DLL. The custom control has a couple of basic controls to display a tweet. I add this custom control to my main application. This custom control has a button named "retweet"; when a user clicks this "retweet" button, I need to send some message to the main application. Unfortunately, this tweet control has no idea about the main application (both are in their own namespaces). How can I send messages from this custom control to the main application?
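    The usual way to do this without the control knowing anything about its host is to expose an event on the control and let the main application subscribe to it. A rough sketch assuming WinForms follows; the event name, argument class and field names are invented for illustration (needs System and System.Windows.Forms):

        // In the control library: no reference to the host application is needed.
        public class RetweetEventArgs : EventArgs
        {
            public string TweetText { get; set; }
        }

        public partial class Tweet : UserControl
        {
            public event EventHandler<RetweetEventArgs> RetweetClicked;

            private void retweetButton_Click(object sender, EventArgs e)
            {
                // Raise the event; whoever hosts the control decides what to do with it.
                var handler = RetweetClicked;
                if (handler != null)
                    handler(this, new RetweetEventArgs { TweetText = tweetLabel.Text });
            }
        }

        // In the main application, after creating the control:
        // tweetControl.RetweetClicked += (s, e) => HandleRetweet(e.TweetText);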

    Read the article

  • Data structure supporting the following operations

    - by 500865
    I'm looking for a data structure for working with a set of data which makes the following operations most efficient: 1. Check whether an item has been categorized or not (the categorized and uncategorized sets are disjoint). 2. Get the category of an item. 3. Get all the items in a particular category. 4. Get all the uncategorized items. 5. Remove a particular item from the data set. I was thinking of having a Dictionary<String, Set<String>> to hold all the items in a given category, but that doesn't solve requirement 2.
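    A pair of dictionaries kept in sync (item to category, and category to items), plus a separate set for uncategorized items, covers all five operations in roughly O(1) per call. A quick, untested C# sketch of that idea; the class and member names are made up:

        using System.Collections.Generic;

        class CategoryIndex
        {
            private readonly Dictionary<string, string> categoryOf = new Dictionary<string, string>();                // item -> category (2)
            private readonly Dictionary<string, HashSet<string>> itemsIn = new Dictionary<string, HashSet<string>>(); // category -> items (3)
            private readonly HashSet<string> uncategorized = new HashSet<string>();                                   // (4)

            public void AddUncategorized(string item) { uncategorized.Add(item); }

            public void Categorize(string item, string category)
            {
                uncategorized.Remove(item);
                categoryOf[item] = category;
                HashSet<string> items;
                if (!itemsIn.TryGetValue(category, out items))
                    itemsIn[category] = items = new HashSet<string>();
                items.Add(item);
            }

            public bool IsCategorized(string item) { return categoryOf.ContainsKey(item); }                            // (1)

            public void Remove(string item)                                                                            // (5)
            {
                uncategorized.Remove(item);
                string category;
                if (categoryOf.TryGetValue(item, out category))
                {
                    itemsIn[category].Remove(item);
                    categoryOf.Remove(item);
                }
            }
        }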

    Read the article

  • serializing type definitions?

    - by Dave
    I'm not positive I'm going about this the right way. I've got a suite of applications that have varying types of output (custom defined types). For example, I might have a type called Widget:

        Class Widget
            Public name As String
        End Class

    Throughout the course of operation, when a user experiences a certain condition, the application will take the output instance of Widget that the user received, serialize it, and log it to the database, noting the name of the type. Now, I have other applications that do something similar, but instead of dealing with Widget it could be some totally different type with different attributes; again I serialize the instance, log it to the DB, and note the name of the type. I have maybe a half dozen different types and don't anticipate too many additional ones in the future.

    After all this is said and done, I have an admin interface that looks through these logs and lets the user view the contents of the data that has been logged. The admin app has a reference to all the types involved and, with some basic switch-case logic hinged upon the name of the type, casts the data back into the original types and passes it on to some handlers that have basic display logic to spit the data back out in a readable format (one display handler for each type).

    NOW... all this is well and good... until one day my model changes. The Widget class now has deprecated the name attribute and added a bunch of other attributes. I will of course get type mismatches on the admin side when I try to reconstitute this data. I was wondering if there was some way, at runtime, I could perhaps reflect through my code and get a snapshot of the type definition at that precise moment, serialize it, and store it along with the data so that I could somehow use this to reconstitute it in the future?
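    Two common ways to make old log entries readable after the model changes are storing an explicit schema version (and, optionally, a reflected description of the type) next to the payload, and keeping one converter per known version on the reading side. A loose, untested C# sketch of the storage side; the envelope class and helper are hypothetical, not an existing API:

        using System;
        using System.Linq;
        using System.Reflection;

        [Serializable]
        public class LogEnvelope
        {
            public string TypeName;        // e.g. typeof(Widget).AssemblyQualifiedName
            public int SchemaVersion;      // bump this constant whenever the type's shape changes
            public string TypeSnapshot;    // optional reflected description, see helper below
            public byte[] Payload;         // the serialized instance itself
        }

        public static class TypeSnapshot
        {
            // Capture a crude "shape" of the type at log time: public property names and types.
            public static string Describe(Type t)
            {
                var members = t.GetProperties(BindingFlags.Public | BindingFlags.Instance)
                               .Select(p => p.Name + ":" + p.PropertyType.FullName);
                return string.Join(";", members);
            }
        }

    On the admin side, the reader would switch on (TypeName, SchemaVersion) and map old shapes into the current type, rather than assuming the stored payload always matches today's class definition.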

    Read the article

  • UPOS RFIDScanner data format

    - by Robert Snyder
    A lot of the work that I do currently is based in the OPOS/UPOS world. My company has a device that can read 13.56 MHz tags (RFID), smart cards, and mag stripe cards. Up until somewhat recently I have only been working with RFID for a very specific scenario: reading UltraLight C and Desfire cards. These cards were all set up very specifically so that I could take the data read from those cards and force it into an MSR track 2 format. The past couple of weeks, however, I have been working on reading RFID credit cards (since I have a Visa card, I've been using mine) and smart card credit cards. (The Visa card I have has both.) In learning how to communicate with smart cards and reading ISO 7816 and EMVCo documents I became a little more familiar with how the info is stored. But now I have a question regarding UPOS. The RFID data on my Visa is stored (and read) very similarly to how the data is stored and read from the smart card on my Visa. Cool. Well, in the UPOS spec for SmartCardRW, the ReadData method returns a byte array. That's cool, I can just return all that data and then parse it as my heart desires. The RFID scanner, though, has a LinkedList of Tags. Well, this makes sense in terms of my Visa card (it reminds me of a question I have in regards to SmartCard, but that is for another question), but what about ULC and Desfire, or for that matter any Mifare card? Pages, files, purses don't exactly fit the Tag profile. For instance, let's just say I read pages 4-12 on my ULC card. Each page I read is 4 bytes long. Does this mean I have 9 tags in my LinkedList? Is my tag id the page number? And how does that translate to Desfire? I open application 123456 and read file 1 and file 2; do I have 2 tags, and if so what is my tag id? At least with my Visa I think that I have to use the tag id (e.g. 5F24 for my expiration date) and a value of {0x15, 0x10, 0x31}. Part of me says yes... that makes sense. Another part of me says, "Well, if that is the case then why doesn't SmartCardRW have Tags?" So that is my question. How do I format my data from those different types of media? Or is that the job of my Control Object (the application)? If so, how does it know? The only protocols I have are:

        // Summary:
        //     Enumerates the available predefined RFID tag protocols the device supports.
        [Flags]
        public enum RFIDProtocols
        {
            EpcClass0 = 1,
            RFIDSdt0Plus = 2,
            EpcClass1 = 4,
            EpcClass1Gen2 = 8,
            EpcClass2 = 16,
            Iso14443A = 4096,
            Iso14443B = 8192,
            Iso15693 = 12288,
            Iso180006B = 16384,
            Other = 16777216,
            All = 1073741824,
        }

    If I use that, well, all of the cards I have are Iso14443A. I use the ATQA and the SAK to know what type of card I really have. There is no RFID property that lets me specify that. So I'm lost.

    Read the article

  • Integrating WIF with WCF Data Services

    - by cibrax
    A while ago I discussed how a custom REST Starter Kit interceptor could be used to parse a SAML token in the HTTP Authorization header and wrap it into a ClaimsPrincipal that the WCF services could use. The thing is that the code was initially created for the Geneva framework, so it got deprecated quickly. I recently needed that piece of code for one of the projects I am currently working on, so I decided to update it for WIF. As this interceptor can be injected in any host for WCF REST services, it also represents an excellent solution for integrating claims-based security into WCF Data Services (previously known as ADO.NET Data Services). The interceptor basically expects a SAML token in the Authorization header. If a token is found, it is parsed and a new ClaimsPrincipal is initialized and injected into the WCF authorization context.

        public class SamlAuthenticationInterceptor : RequestInterceptor
        {
          SecurityTokenHandlerCollection handlers;

          public SamlAuthenticationInterceptor()
            : base(false)
          {
            this.handlers = FederatedAuthentication.ServiceConfiguration.SecurityTokenHandlers;
          }

          public override void ProcessRequest(ref RequestContext requestContext)
          {
            SecurityToken token = ExtractCredentials(requestContext.RequestMessage);

            if (token != null)
            {
              ClaimsIdentityCollection claims = handlers.ValidateToken(token);
              var principal = new ClaimsPrincipal(claims);
              InitializeSecurityContext(requestContext.RequestMessage, principal);
            }
            else
            {
              DenyAccess(ref requestContext);
            }
          }

          private void DenyAccess(ref RequestContext requestContext)
          {
            Message reply = Message.CreateMessage(MessageVersion.None, null);
            HttpResponseMessageProperty responseProperty = new HttpResponseMessageProperty() { StatusCode = HttpStatusCode.Unauthorized };
            responseProperty.Headers.Add("WWW-Authenticate",
                  String.Format("Basic realm=\"{0}\"", ""));
            reply.Properties[HttpResponseMessageProperty.Name] = responseProperty;
            requestContext.Reply(reply);
            requestContext = null;
          }

          private SecurityToken ExtractCredentials(Message requestMessage)
          {
            HttpRequestMessageProperty request = (HttpRequestMessageProperty)requestMessage.Properties[HttpRequestMessageProperty.Name];
            string authHeader = request.Headers["Authorization"];

            if (authHeader != null && authHeader.Contains("<saml"))
            {
              XmlTextReader xmlReader = new XmlTextReader(new StringReader(authHeader));
              var col = SecurityTokenHandlerCollection.CreateDefaultSecurityTokenHandlerCollection();
              SecurityToken token = col.ReadToken(xmlReader);

              return token;
            }

            return null;
          }

          private void InitializeSecurityContext(Message request, IPrincipal principal)
          {
            List<IAuthorizationPolicy> policies = new List<IAuthorizationPolicy>();
            policies.Add(new PrincipalAuthorizationPolicy(principal));
            ServiceSecurityContext securityContext = new ServiceSecurityContext(policies.AsReadOnly());

            if (request.Properties.Security != null)
            {
              request.Properties.Security.ServiceSecurityContext = securityContext;
            }
            else
            {
              request.Properties.Security = new SecurityMessageProperty() { ServiceSecurityContext = securityContext };
            }
          }

          class PrincipalAuthorizationPolicy : IAuthorizationPolicy
          {
            string id = Guid.NewGuid().ToString();
            IPrincipal user;

            public PrincipalAuthorizationPolicy(IPrincipal user)
            {
              this.user = user;
            }

            public ClaimSet Issuer
            {
              get { return ClaimSet.System; }
            }

            public string Id
            {
              get { return this.id; }
            }

            public bool Evaluate(EvaluationContext evaluationContext, ref object state)
            {
              evaluationContext.AddClaimSet(this, new DefaultClaimSet(System.IdentityModel.Claims.Claim.CreateNameClaim(user.Identity.Name)));
              evaluationContext.Properties["Identities"] = new List<IIdentity>(new IIdentity[] { user.Identity });
              evaluationContext.Properties["Principal"] = user;
              return true;
            }
          }
        }

    A WCF Data Service, as any other WCF service, contains a service host where this interceptor can be injected. The following code illustrates how that can be done in the "svc" file.

        <%@ ServiceHost Language="C#" Debug="true" Service="ContactsDataService"
                        Factory="AppServiceHostFactory" %>

        using System;
        using System.ServiceModel;
        using System.ServiceModel.Activation;
        using Microsoft.ServiceModel.Web;

        class AppServiceHostFactory : ServiceHostFactory
        {
          protected override ServiceHost CreateServiceHost(Type serviceType, Uri[] baseAddresses)
          {
            WebServiceHost2 result = new WebServiceHost2(serviceType, true, baseAddresses);
            result.Interceptors.Add(new SamlAuthenticationInterceptor());
            return result;
          }
        }

    WCF Data Services includes a specific WCF host out of the box (DataServiceHost). However, the service is not affected at all if you replace it with a custom one as I am doing in the code above (WebServiceHost2 is part of the REST Starter Kit). Finally, the client application needs to pass the SAML token somehow to the data service. In case you are using any HTTP client library for consuming the data service, that's easy to do; you only need to include the SAML token as part of the "Authorization" header. If you are using the auto-generated data service proxy, a little piece of code is needed to inject a SAML token into the DataServiceContext instance. That class provides an event, "SendingRequest", that any client application can leverage to include custom code that modifies the HTTP request before it is sent to the service. So, you can easily create an extension method for the DataServiceContext that negotiates the SAML token with an existing STS and adds that token as part of the "Authorization" header.

        public static class DataServiceContextExtensions
        {
          public static void ConfigureFederatedCredentials(this DataServiceContext context, string baseStsAddress, string realm)
          {
            string address = string.Format(STSAddressFormat, baseStsAddress, realm);

            string token = NegotiateSecurityToken(address);
            context.SendingRequest += (source, args) =>
            {
              args.RequestHeaders.Add("Authorization", token);
            };
          }

          private string NegotiateSecurityToken(string address)
          {
          }
        }

    I left the NegotiateSecurityToken method empty for this extension as it depends pretty much on how you are negotiating tokens from an existing STS. In case you want an end-to-end REST solution that involves an HTTP endpoint for the STS, you should definitely take a look at the Thinktecture starter STS project on CodePlex.

    Read the article

  • How to disable authentication schemes for WCF Data Services

    - by Schneider
    When I deployed my WCF Data Services to production hosting I started to get the following error (or similar, depending on which auth schemes are active): IIS specified authentication schemes 'Basic, Anonymous', but the binding only supports specification of exactly one authentication scheme. Valid authentication schemes are Digest, Negotiate, NTLM, Basic, or Anonymous. Change the IIS settings so that only a single authentication scheme is used. Apparently WCF Data Services (WCF in general?) cannot handle having more than one authentication scheme active. OK, so I am aware that I can disable all-but-one authentication scheme on the web application via the IIS control panel... via a support request!! Is there a way to specify a single authentication scheme at the per-service level in the web.config? I thought this might be as straightforward as making a change to <system.serviceModel>, but... it turns out that WCF Data Services do not configure themselves in the web.config. If you look at the DataService<> class, it does not implement a [ServiceContract], hence you cannot refer to it in the <service><endpoint>... which I presume would be needed for changing its configuration via XML. P.S. Our host is using IIS 6, but solutions for both IIS 6 & IIS 7 are appreciated.

    Read the article

  • Correctly assigning value to a Core Data attribute with an integer data-type

    - by Gordon Fontenot
    I'm missing something here, and feeling like an idiot about it. I'm using a UIPickerView in my app, and I need to assign the row number to a 32-bit integer attribute for a Core Data object. To do this, I am using this method:

        -(void)pickerView:(UIPickerView *)pickerView didSelectRow:(NSInteger)row inComponent:(NSInteger)component {
            object.integerValue = row;
        }

    This is giving me a warning: warning: passing argument 1 of 'setIntegerValue:' makes pointer from integer without a cast What am I mixing up here?

    --Edit 1-- Ok, so I can get rid of the warning by changing the method to do the following:

        NSNumber *number = [NSNumber numberWithInteger:row];
        object.integerValue = rating;

    However, I still get a value of 0 for object.integerValue if I use NSLog to print it out. object.integerValue has a max value of 5, so I print out number instead, and then I'm getting a number above 62,000,000. Which doesn't seem right to me, since there are 5 rows. If I NSLog the row variable, I get a number between 0 and 5. So why do I end up with a completely different number after casting the number to NSNumber?

    --Edit 2-- Ok, so I'm realizing that there is some fundamental idea that I don't understand. I now understand that the 60 million+ number can be cast back to the correct 0-5 number by using integerValue. So, it seems my question is how can I save an integer between 0-5 to the attribute if the NSNumber that is returned is over 60 million? Do I need to be using a different data type?

    Read the article

  • Doubts About Core Data NSManagedObject Deep Copy

    - by Jigzat
    Hello everyone, I have a situation where I must copy one NSManagedObject from the main context into an editing context. It sounds unnecessary to most people, judging by similar situations described on Stack Overflow, but it looks like I need it. In my app there are many views in a tab bar and every view handles information that is related to the other views. I think I need multiple MOCs since the user may jump from tab to tab and leave unsaved changes in some tab; if the app then saves data in some other tab/view, the changes in the rest of the views are saved without user consent and, in the worst-case scenario, the app crashes. For adding new information I got away with using an adding MOC and then merging changes in both MOCs, but for editing it is not that easy. I saw a similar situation here on Stack Overflow, but the app crashes since my data model doesn't seem to use NSMutableSet for the relationships (I don't think I have a many-to-many relationship, just one-to-many). I think it can be modified so I can retrieve the relationships as if they were attributes:

        for (NSString *attr in relationships) {
            [cloned setValue:[source valueForKey:attr] forKey:attr];
        }

    but I don't know how to merge the changes of the cloned and original objects. I think I could just delete the object from the main context, then merge both contexts and save changes in the main context, but I don't know if that is the right way to do it. I'm also concerned about database integrity since I'm not sure that the inverse relationships will keep the same reference to the cloned object as if it were the original one. Can someone please enlighten me about this?

    Read the article

  • Binding Data to a TextBlock using Domain Data Source

    - by TFisher
    I have a map with hotspots for each state (done in Expression Blend). I capture each MouseEnter for a state (1 through 50) and pass the state ID into my domain data source:

        Dim activebox As Path = TryCast(sender, Path)
        activebox.Fill = mouseOverColor
        Dim StateID As Integer = CInt(Right(activebox.Name, 2))
        Dim _StateContext As New StateContext
        myDataGrid.ItemsSource = _StateContext.States
        _StateContext.Load(_StateContext.GetStateByStateIDQuery(StateID.Text))

    The above works fine for a DataGrid, ListBox and even a DataForm. But I created a popup with a StackPanel that has TextBlocks:

        popupStatesBox.DataContext = ??????????????
        popupStatesBox.IsOpen = True 'popup does open but has no data

        -- popupStatesBox.xaml
        <Popup x:Name="popupStatsBox" Margin="8,35,0,8" DataContext="{Binding}" IsOpen="false" Width="268" HorizontalAlignment="Left">
            <StackPanel x:Name="Layout" Background="Black">
                <TextBlock x:Name="tbState" Text="{Binding StateName}" />
                <TextBlock x:Name="tbAbbrev" Text="{Binding Abbreviation}" />
            </StackPanel>
        </Popup>

    How do I get the TextBlocks to display the values from the _StateContext? StackPanel has DataContext but no ItemsSource. What am I missing?

    Read the article

  • Design suggestion for expression tree evaluation with time-series data

    - by Lirik
    I have a (C#) genetic program that uses financial time-series data and it's currently working but I want to re-design the architecture to be more robust. My main goals are: sequentially present the time-series data to the expression trees. allow expression trees to access previous data rows when needed. to optimize performance of the data access while evaluating the expression trees. keep a common interface so various types of data can be used. Here are the possible approaches I've thought about: I can evaluate the expression tree by passing in a data row into the root node and let each child node use the same data row. I can evaluate the expression tree by passing in the data row index and letting each node get the data row from a shared DataSet (currently I'm passing the row index and going to multiple synchronized arrays to get the data). Hybrid: an immutable data set is accessible by all of the expression trees and each expression tree is evaluated by passing in a data row. The benefit of the first approach is that the data row is being passed into the expression tree and there is no further query done on the data set (which should increase performance in a multithreaded environment). The drawback is that the expression tree does not have access to the rest of the data (in case some of the functions need to do calculations using previous data rows). The benefit of the second approach is that the expression trees can access any data up to the latest data row, but unless I specify what that row is, I'll have to iterate through the rows and figure out which one is the last one. The benefit of the hybrid is that it should generally perform better and still provide access to the earlier data. It supports two basic "views" of data: the latest row and the previous rows. Do you guys know of any design patterns or do you have any tips that can help me build this type of system? Should I use a DataSet to hold and present the data, or are there more efficient ways to present rows of data while maintaining a simple interface? FYI: All of my code is written in C#.
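    As one possible sketch of the hybrid option: keep the historical rows in an immutable store shared by all trees, and hand each evaluation a small cursor that exposes the current row plus look-back access. That keeps the common case (reading the current row) cheap and thread-friendly while still allowing functions that need earlier rows. The interfaces and the moving-average node below are made-up names, just to illustrate the shape of such a design, not a definitive implementation:

        using System;

        // A single row of the time series, read-only.
        public interface IDataRow
        {
            double this[string column] { get; }
        }

        // A cursor positioned on one row of an immutable data set.
        public interface ISeriesCursor
        {
            IDataRow Current { get; }          // the row being presented to the tree
            IDataRow Lookback(int barsAgo);    // Lookback(0) == Current, Lookback(1) is the previous row
            int Index { get; }                 // position of Current within the data set
        }

        public interface IExpressionNode
        {
            double Evaluate(ISeriesCursor cursor);
        }

        // Example terminal node: a moving average that needs access to previous rows.
        public class MovingAverageNode : IExpressionNode
        {
            private readonly string column;
            private readonly int period;

            public MovingAverageNode(string column, int period)
            {
                this.column = column;
                this.period = period;
            }

            public double Evaluate(ISeriesCursor cursor)
            {
                double sum = 0;
                int n = Math.Min(period, cursor.Index + 1);   // clamp near the start of the series
                for (int i = 0; i < n; i++)
                    sum += cursor.Lookback(i)[column];
                return sum / n;
            }
        }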

    Read the article

  • Data Warehouse: Modelling a future schedule

    - by Pat
    I'm creating a DW that will contain data on financial securities such as bonds and loans. These securities are associated with payment schedules. For example, a bond could pay quarterly, while a mortgage would usually pay monthly (sometimes biweekly). The payment schedule is created when the security is traded and, in the majority of cases, will remain unchanged. However, the design needs to accommodate those cases where it does change. I'm currently attempting to model this data and I'm having difficulty coming up with a workable design. One of the most commonly queried fields is "next payment date": users often want to know when a security will pay next. Therefore, I want to make it as easy as possible for them to get the next payment date and amount for each security. Also, users often run historical queries, in which case they'd want the next payment date and amount as of a specific point in time. For example, they may want to look back at 1/31/09 and query the next payment dates (which would usually be in February 2009 for mortgages). It's also common that they want to query a security's entire payment schedule, which might consist of 360 records (30-year mortgage x 12 payments/year). Since the next payment date and amount would be changing each month or even biweekly, these fields wouldn't seem to fit into a slowly changing dimension very well. It would probably make more sense to use a fact table, but I'm unsure of how to model it. Any ideas would be greatly appreciated.

    Read the article

  • How to represent a Rubik's Cube in a data structure

    - by Mel
    I am just curious: how would you guys create a data structure for a Rubik's cube with X number of sides? Things to consider: the cube can be of any size, and it is a Rubik's cube, so layers can be rotated (in all three axes). And a bonus question: using the data structure, how can we know whether a cube in a certain state is solvable? I have been struggling with this question myself and haven't quite found the answer yet.
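    One straightforward representation, sketched below, is six N x N grids of sticker colours; a layer turn is then a permutation that cycles four strips between adjacent faces and, for an outer layer, rotates that face's own grid by 90 degrees. For solvability, a sticker model lets you derive the cubie permutation and orientations, and the standard invariants (even overall permutation parity on the 3x3x3, corner twists summing to 0 mod 3, edge flips summing to 0 mod 2) decide whether a state is reachable. The names and face ordering below are arbitrary choices for illustration, not a standard:

        // Six faces, each an N x N grid of sticker colours.
        public enum Colour { White, Yellow, Red, Orange, Blue, Green }

        public class Cube
        {
            public readonly int N;
            // Stickers[face, row, col]; the face order (Up, Down, Front, Back, Left, Right) is a convention you pick.
            public readonly Colour[,,] Stickers;

            public Cube(int n)
            {
                N = n;
                Stickers = new Colour[6, n, n];
                for (int f = 0; f < 6; f++)
                    for (int r = 0; r < n; r++)
                        for (int c = 0; c < n; c++)
                            Stickers[f, r, c] = (Colour)f;   // solved state: each face a single colour
            }

            // A layer turn would copy 4 strips of stickers between adjacent faces and
            // rotate the touched face's own grid by 90 degrees (omitted in this sketch).
        }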

    Read the article

  • SQL SERVER – What is MDS? – Master Data Services in Microsoft SQL Server 2008 R2

    - by pinaldave
    What is MDS? Master Data Services helps enterprises standardize the data people rely on to make critical business decisions. With Master Data Services, IT organizations can centrally manage critical data assets company-wide and across diverse systems, enable more people to securely manage master data directly, and ensure the integrity of information over time. (Source: Microsoft)

    Today I will be talking about this subject at Microsoft TechEd India. If you want to learn how to standardize your data and apply business rules to validate data, you must attend my session. MDS is a very interesting concept; I will cover it in 10 super short but very interesting quick slides. In the first 20 minutes you will understand the following topics:

    - Introduction to Master Data Management
    - What is Master Data and Challenges
    - MDM Challenges and Advantage
    - Microsoft Master Data Services
    - Benefits and Key Features
    - Uses of MDS Capabilities
    - Key Features of MDS

    This slide deck will be followed by an approximately 30 minute demo which tells the story of entities, hierarchies, versions, security, consolidation and collection. I will tell this story keeping business rules at the center. We take one business rule, a simple validation rule, and make it much more complex and yet very useful to the product. I will also demonstrate a few real-life scenarios involving MDS and its usage. Do not miss this session. At the end of the session a book will be awarded to the best participant.

    My session details:
    Session: Master Data Services in Microsoft SQL Server 2008 R2
    Date: April 12, 2010
    Time: 2:30pm-3:30pm

    SQL Server Master Data Services will ship with SQL Server 2008 R2 and will improve Microsoft's platform appeal. This session provides an in-depth demonstration of MDS features and highlights important usage scenarios. Master Data Services enables consistent decision making by allowing you to create, manage and propagate changes from a single master view of your business entities. Also, the MDS Master Data hub, a vital component, helps ensure reporting consistency across systems and delivers faster, more accurate results across the enterprise. We will talk about establishing the basis for a centralized approach to defining, deploying, and managing master data in the enterprise.

    Reference: Pinal Dave (http://blog.SQLAuthority.com)

    Read the article

  • SQLAuthority News – SQL Server Technical Article – The Data Loading Performance Guide

    - by pinaldave
    The white paper describes load strategies for achieving high-speed data modifications of a Microsoft SQL Server database. "Bulk Load Methods" and "Other Minimally Logged and Metadata Operations" provide an overview of two key and interrelated concepts for high-speed data loading: bulk loading and metadata operations. After this background knowledge, the white paper describes how these methods can be [...]

    Read the article

  • Data caching in ASP.Net applications

    - by nikolaosk
    In this post I will continue my series of posts on caching. You can read my other post on output caching here; there you can read how to cache a page depending on the user's browser language. Output caching has its place as a caching mechanism, but right now I will focus on data caching. The advantages of data caching are well known, but I will highlight the main points: we get improvements in response times, we reduce database round trips, and we have different levels of caching and it is up to us... (read more)
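    As a small illustration of the data caching the post refers to, the sketch below uses System.Web.Caching.Cache to keep a query result in memory for ten minutes instead of hitting the database on every request. GetProductsFromDatabase() is a hypothetical helper standing in for the real data access code:

        using System;
        using System.Data;
        using System.Web;
        using System.Web.Caching;

        public static class ProductCache
        {
            public static DataTable GetProducts()
            {
                Cache cache = HttpRuntime.Cache;
                DataTable products = cache["Products"] as DataTable;
                if (products == null)
                {
                    products = GetProductsFromDatabase();           // the expensive database round trip
                    cache.Insert("Products", products, null,
                                 DateTime.UtcNow.AddMinutes(10),    // absolute expiration
                                 Cache.NoSlidingExpiration);
                }
                return products;
            }

            private static DataTable GetProductsFromDatabase()
            {
                return new DataTable("Products");                   // placeholder for real data access
            }
        }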

    Read the article

  • SQL Developer Data Modeler: On Notes, Comments, and Comments in RDBMS

    - by thatjeffsmith
    Ah, the beautiful data model. They say a picture is worth a thousand words. And then we have our diagrams; how many words are they worth?

    [Diagram: our friends from the Human Relations sample schema]

    So our models describe how the data 'works', whether that be at a logical-business level or a technical-physical level. Developers like to say that their code is self-documenting. These would be very lazy or very bad (or both) developers. Models are the same way: you should document your models with comments and notes! I have 3 basic options: Comments, Comments in RDBMS, and Notes. So what's the difference?

    Comments
    You're describing the entity/table or attribute/column. This information will NOT be published in the database. It will only be available to the model, and hence to folks with access to the model.

    [Screenshot: table comments (in the design only!)]

    Comments in RDBMS
    You're doing the same thing as above, but your words will be stored IN the data dictionary of the database. Oracle allows you to store comments on the table and column definitions, so your awesome documentation is going to be viewable to anyone with access to the database. RDBMS is an acronym for Relational Database Management System, of which Oracle is one of the first commercial examples. If the DDL is produced and run against a database, these comments WILL be stored in the data dictionary.

    Notes
    A place for you to add notes, maybe from a design meeting. Or maybe you're using this as a to-do or requirements list. Basically it's for anything that doesn't literally describe the object at hand; that's what the comments are for.

    [Screenshot: example notes; I totally made these up]

    Now these are free-text fields and you can put whatever you want here. Just make sure you put stuff here that's worth reading. And it will live on... forever.

    Read the article

  • Truly understand the threshold for document set in document library in SharePoint

    - by ybbest
    Recently, I was working on an issue with the list view threshold. The problem was that when the user navigated to a view of the document library, it displayed the error message "list view threshold is exceeded", yet the view contained no data. The list view threshold limit is 5000 by default for non-admin users. This limit is not the number of items returned by your query; it is the total number of items the database needs to read to calculate the returned result set. So although the view returns no results, in order to calculate that (empty) result it needs to access more than 5000 items in the database. To fix the issue, you need to create an index for the column that you use in the filter for the view. Let's look at the problem in detail. You can download a solution to replicate this issue here.

    1. Go to Central Admin ==> Web Application Management ==> General Settings ==> click on Resource Throttling.
    2. Change the list view threshold in the web application from 5000 to 2000 so that the problem can be shown without loading more than 5000 items into the list. [Screenshots: before and after]
    3. Go to the page that displays the Approved view of the Loan Application document set. It displays the error message shown above although the view does not return any data.
    4. To get around this, you need to create an indexed column. Go to list settings and click on Indexed Columns.
    5. Click on the "Create a new index" link.
    6. Select the LoanStatus field, as this field is used as the filter for the view.
    7. After the index is created, the Approved view can be accessed; as you can see, it does not return any data.

    Notes: List View Threshold: Specify the maximum number of items that a database operation can involve at one time. Operations that exceed this limit are prohibited.

    References:
    SharePoint lists V: Techniques for managing large lists
    Manage large SharePoint lists for better performance
    http://blogs.technet.com/b/speschka/archive/2009/10/27/working-with-large-lists-in-sharepoint-2010-list-throttling.aspx

    Read the article

  • Automatic Standby Recreation for Data Guard

    - by pablo.boixeda(at)oracle.com
    Hi, unfortunately a standby instance sometimes needs to be recreated. This can happen for many reasons, such as lost archive logs, lost standby data files, or a failover, among others. This is why we wanted to have one script to recreate standby instances in an easy way. The script recreates the standby considering some prerequisites:

    - Database version should be at least 11gR1
    - Dummy instance started on the standby node (seeking to improve this so it won't be needed)
    - Broker configuration hasn't been removed
    - In our case we have two TNSNAMES files, one for the standby creation (using the SID) and the other one for production using service names (including the broker service name)
    - Some environment variables set up by the environment db script (like ORACLE_HOME, PATH...)
    - The directory tree should not have been modified on the standby host

    We are currently using it on our 11gR2 Data Guard tests. Any improvements will be welcome!

    #!/bin/ksh
    ###    NOMBRE / VERSION
    ###       recrea_dg.sh   v.1.00
    ###
    ###    DESCRIPCION
    ###       reacreacion de la Standby
    ###
    ###    DEVUELVE
    ###       0 Creacion de STANDBY correcta
    ###       1 Fallo
    ###
    ###    NOTAS
    ###       Este shell script NO DEBE MODIFICARSE.
    ###       Todas las variables y constantes necesarias se toman del entorno.
    ###
    ###    MODIFICADO POR:    FECHA:        COMENTARIOS:
    ###    ---------------    ----------    -------------------------------------
    ###      Oracle           15/02/2011    Creacion.
    ###
    ###
    ### Cargar entorno
    ###
    V_ADMIN_DIR=`dirname $0`
    . ${V_ADMIN_DIR}/entorno_bd.sh 1>>/dev/null
    if [ $? -ne 0 ]
    then
      echo "Error Loading the environment."
      exit 1
    fi
    V_RET=0
    V_DATE=`/bin/date`
    V_DATE_F=`/bin/date +%Y%m%d_%H%M%S`
    V_LOGFILE=${V_TRAZAS}/recrea_dg_${V_DATE_F}.log
    exec 4>&1
    tee ${V_FICH_LOG} >&4 |&
    exec 1>&p 2>&1
    ###
    ### Variables para Recrear el Data Guard
    ###
    V_DB_BR=`echo ${V_DB_NAME}|tr '[:lower:]' '[:upper:]'`
    if [ "${ORACLE_SID}" = "${V_DB_NAME}01" ]
    then
            V_LOCAL_BR=${V_DB_BR}'01'
            V_REMOTE_BR=${V_DB_BR}'02'
    else
            V_LOCAL_BR=${V_DB_BR}'02'
            V_REMOTE_BR=${V_DB_BR}'01'
    fi
    echo " Getting local instance ROLE ${ORACLE_SID} ..."
    sqlplus -s /nolog 1>>/dev/null 2>&1 <<-!
    whenever sqlerror exit 1
    connect / as sysdba
    variable salida number
    declare
      v_database_role v\$database.database_role%type;
    begin
      select database_role into v_database_role from v\$database;
      :salida := case v_database_role
           when 'PRIMARY' then 2
           when 'PHYSICAL STANDBY' then 3
           else 4
         end;
    end;
    /
    exit :salida
    !
    case $? in
    1) echo " ERROR: Cannot get instance ROLE ." | tee -a ${V_LOGFILE}   2>&1
       V_RET=1 ;;
    2) echo " Local Instance with PRIMARY role."
| tee -a ${V_LOGFILE}   2>&1    V_DB_ROLE_LCL=PRIMARY ;; 3) echo " Local Instance with PHYSICAL STANDBY role." | tee -a ${V_LOGFILE}   2>&1    V_DB_ROLE_LCL=STANDBY ;; *) echo " ERROR: UNKNOWN ROLE." | tee -a ${V_LOGFILE}   2>&1    V_RET=1 ;; esac if [ "${V_DB_ROLE_LCL}" = "PRIMARY" ] then         echo "####################################################################" | tee -a ${V_LOGFILE}   2>&1         echo "${V_DATE} - Reacreating  STANDBY Instance." | tee -a ${V_LOGFILE}   2>&1         echo "" | tee -a ${V_LOGFILE}   2>&1         echo "DATAFILES, CONTROL FILES, REDO LOGS and ARCHIVE LOGS in standby instance ${V_REMOTE_BR} will be removed" | tee -a ${V_LOGFILE}   2>&1         echo "" | tee -a ${V_LOGFILE}   2>&1         V_PRIMARY=${V_LOCAL_BR}         V_STANDBY=${V_REMOTE_BR} fi if [ "${V_DB_ROLE_LCL}" = "STANDBY" ] then         echo "####################################################################" | tee -a ${V_LOGFILE}   2>&1         echo "${V_DATE} - Reacreating  STANDBY Instance." | tee -a ${V_LOGFILE}   2>&1         echo "" | tee -a ${V_LOGFILE}   2>&1         echo "DATAFILES, CONTROL FILES, REDO LOGS and ARCHIVE LOGS in standby instance ${V_LOCAL_BR} will be removed" | tee -a ${V_LOGFILE}   2>&1         echo "" | tee -a ${V_LOGFILE}   2>&1         V_PRIMARY=${V_REMOTE_BR}         V_STANDBY=${V_LOCAL_BR} fi # Cargamos las variables de los hosts # Cargamos las variables de los hosts PRY_HOST=`sqlplus  /nolog << EOF | grep KEEP | sed 's/KEEP//;s/[   ]//g' connect sys/${V_DB_PWD}@${V_PRIMARY} as sysdba select 'KEEP',host_name from v\\$instance; EOF` SBY_HOST=`sqlplus  /nolog << EOF | grep KEEP | sed 's/KEEP//;s/[   ]//g' connect sys/${V_DB_PWD}@${V_STANDBY} as sysdba select 'KEEP',host_name from v\\$instance; EOF` echo "el HOST primary es: ${PRY_HOST}" | tee -a ${V_LOGFILE}   2>&1 echo "el HOST standby es: ${SBY_HOST}" | tee -a ${V_LOGFILE}   2>&1 echo "" | tee -a ${V_LOGFILE}   2>&1 ## ## Paramos la instancia STANDBY ## V_DATE=`/bin/date` echo "${V_DATE} - Shutting down Standby instance" | tee -a ${V_LOGFILE}   2>&1 echo "" | tee -a ${V_LOGFILE}   2>&1 echo "********************************************************************************" | tee -a ${V_LOGFILE}   2>&1 ## ## Paramos la instancia STANDBY ## SBY_STATUS=`sqlplus  /nolog << EOF | grep KEEP | sed 's/KEEP//;s/[   ]//g' connect sys/${V_DB_PWD}@${V_STANDBY} as sysdba select 'KEEP',status from v\\$instance; EOF` if [ ${SBY_STATUS} = 'STARTED' ] || [ ${SBY_STATUS} = 'MOUNTED' ] || [ ${SBY_STATUS} = 'OPEN' ] then         echo "${V_DATE} - Standby instance shutdown in progress..." | tee -a ${V_LOGFILE}   2>&1         echo "" | tee -a ${V_LOGFILE}   2>&1         echo "********************************************************************************" | tee -a ${V_LOGFILE}   2>&1         sqlplus -s /nolog 1>>/dev/null 2>&1 <<-!         whenever sqlerror exit 1         connect sys/${V_DB_PWD}@${V_STANDBY} as sysdba         shutdown abort         ! 
fi V_DATE=`/bin/date` echo "" echo "${V_DATE} - Standby instance stopped" | tee -a ${V_LOGFILE}   2>&1 echo "" | tee -a ${V_LOGFILE}   2>&1 echo "********************************************************************************" | tee -a ${V_LOGFILE}   2>&1 ## ## Eliminamos los ficheros de la base de datos ## V_SBY_SID=`echo ${V_STANDBY}|tr '[:upper:]' '[:lower:]'` V_PRY_SID=`echo ${V_PRIMARY}|tr '[:upper:]' '[:lower:]'` ssh ${SBY_HOST} rm /opt/oracle/db/db${V_DB_NAME}/${V_SBY_SID}/data/*.dbf ssh ${SBY_HOST} rm /opt/oracle/db/db${V_DB_NAME}/${V_SBY_SID}/arch/*.arc ssh ${SBY_HOST} rm /opt/oracle/db/db${V_DB_NAME}/${V_SBY_SID}/ctl/*.ctl ssh ${SBY_HOST} rm /opt/oracle/db/db${V_DB_NAME}/${V_SBY_SID}/redo/*.ctl ssh ${SBY_HOST} rm /opt/oracle/db/db${V_DB_NAME}/${V_SBY_SID}/redo/*.rdo ## ## Startup nomount stby instance ## V_DATE=`/bin/date` echo "" | tee -a ${V_LOGFILE}   2>&1 echo "${V_DATE} - Starting  DUMMY Standby Instance " | tee -a ${V_LOGFILE}   2>&1 echo "" | tee -a ${V_LOGFILE}   2>&1 echo "********************************************************************************" | tee -a ${V_LOGFILE}   2>&1 ssh ${SBY_HOST} touch /home/oracle/init_dg.ora ssh ${SBY_HOST} 'echo "DB_NAME='${V_DB_NAME}'">>/home/oracle/init_dg.ora' ssh ${SBY_HOST} touch /home/oracle/start_dummy.sh ssh ${SBY_HOST} 'echo "ORACLE_HOME=/opt/oracle/db/db'${V_DB_NAME}'/soft/db11.2.0.2 ">>/home/oracle/start_dummy.sh' ssh ${SBY_HOST} 'echo "export ORACLE_HOME">>/home/oracle/start_dummy.sh' ssh ${SBY_HOST} 'echo "PATH=\$ORACLE_HOME/bin:\$PATH">>/home/oracle/start_dummy.sh' ssh ${SBY_HOST} 'echo "export PATH">>/home/oracle/start_dummy.sh' ssh ${SBY_HOST} 'echo "ORACLE_SID='${V_SBY_SID}'">>/home/oracle/start_dummy.sh' ssh ${SBY_HOST} 'echo "export ORACLE_SID">>/home/oracle/start_dummy.sh' ssh ${SBY_HOST} 'echo "sqlplus -s /nolog <<-!" >>/home/oracle/start_dummy.sh' ssh ${SBY_HOST} 'echo "      whenever sqlerror exit 1 ">>/home/oracle/start_dummy.sh' ssh ${SBY_HOST} 'echo "      connect / as sysdba ">>/home/oracle/start_dummy.sh' ssh ${SBY_HOST} 'echo "      startup nomount pfile='\''/home/oracle/init_dg.ora'\''">>/home/oracle/start_dummy.sh' ssh ${SBY_HOST} 'echo "! ">>/home/oracle/start_dummy.sh' ssh ${SBY_HOST} 'chmod 744 /home/oracle/start_dummy.sh' ssh ${SBY_HOST} 'sh /home/oracle/start_dummy.sh' ssh ${SBY_HOST} 'rm /home/oracle/start_dummy.sh' ssh ${SBY_HOST} 'rm /home/oracle/init_dg.ora' ## ## TNSNAMES change, specific for RMAN duplicate ## V_DATE=`/bin/date` echo "" | tee -a ${V_LOGFILE}   2>&1 echo "${V_DATE} - Setting up TNSNAMES in PRIMARY host " | tee -a ${V_LOGFILE}   2>&1 echo "" | tee -a ${V_LOGFILE}   2>&1 echo "********************************************************************************" | tee -a ${V_LOGFILE}   2>&1 ssh ${PRY_HOST} 'cp /opt/oracle/db/db'${V_DB_NAME}'/soft/db11.2.0.2/network/admin/tnsnames.ora.inst  /opt/oracle/db/db'${V_DB_NAME}'/soft/db11.2.0.2/network/admin/tnsnames.ora' V_DATE=`/bin/date` echo "" | tee -a ${V_LOGFILE}   2>&1 echo "${V_DATE} - Starting STANDBY creation with RMAN.. " | tee -a ${V_LOGFILE}   2>&1 echo "" | tee -a ${V_LOGFILE}   2>&1 echo "********************************************************************************" | tee -a ${V_LOGFILE}   2>&1 rman<<-! 
>>${V_LOGFILE} connect target sys/${V_DB_PWD}@${V_PRIMARY} connect auxiliary sys/${V_DB_PWD}@${V_STANDBY} run { allocate channel prmy1 type disk; allocate channel prmy2 type disk; allocate channel prmy3 type disk; allocate channel prmy4 type disk; allocate auxiliary channel stby type disk; duplicate target database for standby from active database dorecover spfile parameter_value_convert '${V_PRY_SID}','${V_SBY_SID}' set control_files='/opt/oracle/db/db${V_DB_NAME}/${V_SBY_SID}/ctl/control01.ctl','/opt/oracle/db/db${V_DB_NAME}/${V_SBY_SID}/redo/control02.ctl' set db_file_name_convert='/opt/oracle/db/db${V_DB_NAME}/${V_PRY_SID}/','/opt/oracle/db/db${V_DB_NAME}/${V_SBY_SID}/' set log_file_name_convert='/opt/oracle/db/db${V_DB_NAME}/${V_PRY_SID}/','/opt/oracle/db/db${V_DB_NAME}/${V_SBY_SID}/' set 'db_unique_name'='${V_SBY_SID}' set log_archive_config='DG_CONFIG=(${V_PRIMARY},${V_STANDBY})' set fal_client='${V_STANDBY}' set fal_server='${V_PRIMARY}' set log_archive_dest_1='LOCATION=/opt/oracle/db/db${V_DB_NAME}/${V_SBY_SID}/arch DB_UNIQUE_NAME=${V_SBY_SID} MANDATORY VALID_FOR=(ALL_LOGFILES,ALL_ROLES)' set log_archive_dest_2='SERVICE="${V_PRIMARY}"','SYNC AFFIRM DB_UNIQUE_NAME=${V_PRY_SID} DELAY=0 MAX_FAILURE=0 REOPEN=300 REGISTER VALID_FOR=(ONLINE_LOGFILES,PRIMARY_ROLE)' nofilenamecheck ; } ! V_DATE=`/bin/date` if [ $? -ne 0 ] then         echo ""         echo "${V_DATE} - Error creating STANDBY instance"         echo ""         echo "********************************************************************************" else         echo ""         echo "${V_DATE} - STANDBY instance created SUCCESSFULLY "         echo ""         echo "********************************************************************************" fi sqlplus -s /nolog 1>>/dev/null 2>&1 <<-!         whenever sqlerror exit 1         connect sys/${V_DB_PWD}@${V_STANDBY} as sysdba         alter system set local_listener='(ADDRESS=(PROTOCOL=TCP)(HOST=${SBY_HOST})(PORT=1544))' scope=both;         alter system set service_names='${V_DB_NAME}.eu.roca.net,${V_SBY_SID}.eu.roca.net,${V_SBY_SID}_DGMGRL.eu.roca.net' scope=both;         alter database recover managed standby database using current logfile disconnect from session;         alter system set dg_broker_start=true scope=both; ! ## ## TNSNAMES change, back to Production Mode ## V_DATE=`/bin/date` echo " " | tee -a ${V_LOGFILE}   2>&1 echo "${V_DATE} - Restoring TNSNAMES in PRIMARY "  | tee -a ${V_LOGFILE}   2>&1 echo ""  | tee -a ${V_LOGFILE}   2>&1 echo "********************************************************************************"  | tee -a ${V_LOGFILE}   2>&1 ssh ${PRY_HOST} 'cp /opt/oracle/db/db'${V_DB_NAME}'/soft/db11.2.0.2/network/admin/tnsnames.ora.prod  /opt/oracle/db/db'${V_DB_NAME}'/soft/db11.2.0.2/network/admin/tnsnames.ora' echo ""  | tee -a ${V_LOGFILE}   2>&1 echo "${V_DATE} -  Waiting for media recovery before check the DATA GUARD Broker"  | tee -a ${V_LOGFILE}   2>&1 echo ""  | tee -a ${V_LOGFILE}   2>&1 echo "********************************************************************************"  | tee -a ${V_LOGFILE}   2>&1 sleep 200 dgmgrl <<-! | grep SUCCESS 1>/dev/null 2>&1     connect ${V_DB_USR}/${V_DB_PWD}@${V_STANDBY}     show configuration verbose; ! if [ $? 
-ne 0 ] ; then         echo "       ERROR: El status del Broker no es SUCCESS" | tee -a ${V_LOGFILE}   2>&1 ;         V_RET=1 else          echo "      DATA GUARD OK " | tee -a ${V_LOGFILE}   2>&1 ;         V_RET=0 fi Hope it helps.

    Read the article
