Search Results

Search found 69877 results on 2796 pages for 'ibm data studio'.


  • Serializing Data Structures in C

    - by src
    I've recently read three separate books on algorithms and data structures, TCP/IP socket programming, and programming with memory. The book about memory briefly discussed the topic of serializing data structures for the purposes of storing them to disk or sending them across a network. I can't help but wonder why the other two books didn't discuss serialization at all. After an unsuccessful web/book search, I'm left wondering where I can find a good book/paper/tutorial on serializing data structures in C. Where or how did you learn it?
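
    The question is about C, but the core concern is language-independent: write each field explicitly, in a fixed order and byte order, rather than dumping a struct's raw memory (padding and endianness make raw dumps non-portable). A minimal sketch of that idea, shown here in C# with a hypothetical Point type:

        using System;
        using System.IO;

        // Hypothetical example type, not from the question.
        struct Point
        {
            public int X;
            public int Y;
        }

        static class PointSerializer
        {
            // Serialize field by field in a documented order instead of copying
            // raw struct memory; BinaryWriter always emits little-endian
            // integers, so the on-disk format is stable across hosts.
            public static void Write(Stream stream, Point p)
            {
                using (var writer = new BinaryWriter(stream, System.Text.Encoding.UTF8, leaveOpen: true))
                {
                    writer.Write(p.X);
                    writer.Write(p.Y);
                }
            }

            public static Point Read(Stream stream)
            {
                using (var reader = new BinaryReader(stream, System.Text.Encoding.UTF8, leaveOpen: true))
                {
                    return new Point { X = reader.ReadInt32(), Y = reader.ReadInt32() };
                }
            }
        }

    In C the equivalent discipline is a pair of functions that write each member with an explicit width and a byte-order conversion (e.g. htonl) instead of fwrite(&s, sizeof s, 1, f).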

    Read the article

  • How to expose an entity via alternate keys with spring data rest

    - by dan carter
    Spring Data REST does a great job exposing entities via their primary key for GET, PUT, DELETE, etc. operations: /myentities/123. It also exposes search operations: /myentities/search/byMyOtherKey?myOtherKey=123. In my case the entities have a number of alternate keys. The systems calling us will know the objects by these IDs, rather than by our internal primary key. Is it possible to expose the objects via another URL and have the GET, PUT and DELETE handled by the built-in Spring Data REST controllers? /myentities/myotherkey/456. We'd like to avoid forcing the calling systems to make two requests for each update. I've tried playing with the @RestResource path value, but there doesn't seem to be a way to add additional paths.

    Read the article

  • Data structure supporting the following operations

    - by 500865
    I'm looking for a data structure that can do the following operations on a set of data as efficiently as possible:
    1. Check whether an item has been categorized or not (the categorized and uncategorized sets are disjoint).
    2. Get the category of an item.
    3. Get all the items in a particular category.
    4. Get all the uncategorized items.
    5. Remove a particular item from the data set.
    I was thinking of having a Dictionary<String, Set<String>> to hold all the items in a given category, but that doesn't solve (2).
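
    One common approach (a sketch, not from the original post; names are illustrative): keep a forward map from item to category alongside the category-to-items map, plus a set for the uncategorized items. Together they answer all five operations in roughly constant time:

        using System.Collections.Generic;

        class CategoryIndex
        {
            private readonly Dictionary<string, string> categoryOf = new Dictionary<string, string>();                 // answers (1) and (2)
            private readonly Dictionary<string, HashSet<string>> itemsIn = new Dictionary<string, HashSet<string>>();  // answers (3)
            private readonly HashSet<string> uncategorized = new HashSet<string>();                                    // answers (4)

            public void AddUncategorized(string item) => uncategorized.Add(item);

            public void Categorize(string item, string category)
            {
                uncategorized.Remove(item);
                categoryOf[item] = category;
                if (!itemsIn.TryGetValue(category, out HashSet<string> set))
                    itemsIn[category] = set = new HashSet<string>();
                set.Add(item);
            }

            public bool IsCategorized(string item) => categoryOf.ContainsKey(item);      // (1)
            public string GetCategory(string item) => categoryOf[item];                  // (2)
            public IEnumerable<string> GetItems(string category) => itemsIn[category];   // (3)
            public IEnumerable<string> GetUncategorized() => uncategorized;              // (4)

            public void Remove(string item)                                              // (5)
            {
                if (categoryOf.TryGetValue(item, out string category))
                {
                    itemsIn[category].Remove(item);
                    categoryOf.Remove(item);
                }
                uncategorized.Remove(item);
            }
        }

    The cost is double bookkeeping on writes: the two maps must always be updated together, which is why wrapping them in a single class is worthwhile.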

    Read the article

  • UPOS RFIDScanner data format

    - by Robert Snyder
    A lot of the work I do currently is based in the OPOS/UPOS world. My company has a device that can read 13.56 MHz tags (RFID), smart cards, and mag stripe cards. Until somewhat recently I had only been working with RFID for one very specific scenario: reading Ultralight C and DESFire cards. These cards were all set up very specifically so that I could take the data read from them and force it into an MSR track 2 format. The past couple of weeks, however, I have been working on reading RFID credit cards (since I have a Visa card, I've been using mine) and smart card credit cards (my Visa has both). In learning how to communicate with smart cards and reading the ISO 7816 and EMVCo documents, I became a little more familiar with how the information is stored. But now I have a question regarding UPOS. The RFID data on my Visa is stored (and read) very similarly to how the data is stored and read from the smart card on the same Visa. Cool. Well, in the UPOS spec for SmartCardRW, the ReadData method returns a byte array. That's fine: I can return all that data and then parse it as my heart desires. The RFID scanner, though, returns a LinkedList of tags. That makes sense in terms of my Visa card (and reminds me of a question I have about SmartCardRW, but that is for another question), but what about Ultralight C and DESFire, or for that matter any Mifare card? Pages, files and purses don't exactly fit the tag profile. For instance, let's say I read pages 4-12 on my Ultralight C card, each page being 4 bytes long. Does this mean I have 9 tags in my LinkedList? Is my tag ID the page number? And how does that translate to DESFire? If I open application 123456 and read file 1 and file 2, do I have 2 tags? If so, what is my tag ID? At least with my Visa I think I have to use the tag ID (e.g. 5F24 for my expiration date) and a value of {0x15, 0x10, 0x31}. Part of me says yes, that makes sense. Another part of me says, "Well, if that is the case, then why doesn't SmartCardRW have tags?" So that is my question: how do I format the data from those different types of media? Or is that the job of my control object (the application)? If so, how does it know? The only protocols I have are:

        // Enumerates the available predefined RFID tag protocols the device supports.
        [Flags]
        public enum RFIDProtocols
        {
            EpcClass0 = 1,
            RFIDSdt0Plus = 2,
            EpcClass1 = 4,
            EpcClass1Gen2 = 8,
            EpcClass2 = 16,
            Iso14443A = 4096,
            Iso14443B = 8192,
            Iso15693 = 12288,
            Iso180006B = 16384,
            Other = 16777216,
            All = 1073741824,
        }

    If I use that, then all of the cards I have are Iso14443A. I use the ATQA and the SAK to know what type of card I really have. There is no RFID property that lets me specify that, so I'm lost.
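
    On the payment-card side of the question, the records on the Visa are BER-TLV encoded; that is where tag IDs like 5F24 come from. A minimal, hedged sketch of walking a flat run of such records (real EMV data also nests constructed tags, which this ignores):

        using System;
        using System.Collections.Generic;

        static class TlvParser
        {
            // Parse a flat run of BER-TLV records: tag bytes, length bytes, value bytes.
            public static IEnumerable<(uint Tag, byte[] Value)> Parse(byte[] data)
            {
                int i = 0;
                while (i < data.Length)
                {
                    // Tag: if the low 5 bits of the first byte are all set,
                    // more tag bytes follow while their high bit is set.
                    uint tag = data[i++];
                    if ((tag & 0x1F) == 0x1F)
                    {
                        do { tag = (tag << 8) | data[i]; } while ((data[i++] & 0x80) != 0);
                    }

                    // Length: short form (< 0x80) is the length itself; long form's
                    // low 7 bits give the number of length bytes that follow.
                    int length = data[i++];
                    if (length >= 0x80)
                    {
                        int lengthBytes = length & 0x7F;
                        length = 0;
                        for (int j = 0; j < lengthBytes; j++) length = (length << 8) | data[i++];
                    }

                    byte[] value = new byte[length];
                    Array.Copy(data, i, value, 0, length);
                    i += length;
                    yield return (tag, value);
                }
            }
        }

    Run over a Visa read, a parser like this would surface tag 0x5F24 with the value {0x15, 0x10, 0x31}, i.e. an expiration date of 2015-10-31 in BCD YYMMDD form.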

    Read the article

  • Building an ASP.Net 4.5 Web forms application - part 5

    - by nikolaosk
    This is the fifth post in a series on how to design and implement an ASP.Net 4.5 Web Forms store that sells posters online. There are four earlier posts in this series; please make sure you read them first. You can find the first post here. You can find the second post here. You can find the third post here. You can find the fourth here. In this new post we will build on the previous posts and demonstrate how to display the details of a poster when the user clicks on an individual poster photo/link. We will add a FormView control to a web form and bind data from the database. FormView is a great web server control for displaying the details of a single record.

    1) Launch Visual Studio and open the solution where your project lives.

    2) Add a new web form item to the project. Make sure you include the Master Page. Name it PosterDetails.aspx.

    3) Open the PosterDetails.aspx page. We will add some markup to this page. Have a look at the code below:

        <asp:Content ID="Content2" ContentPlaceHolderID="FeaturedContent" runat="server">
            <asp:FormView ID="posterDetails" runat="server" ItemType="PostersOnLine.DAL.Poster" SelectMethod="GetPosterDetails">
                <ItemTemplate>
                    <div>
                        <h1><%#:Item.PosterName %></h1>
                    </div>
                    <br />
                    <table>
                        <tr>
                            <td>
                                <img src="<%#:Item.PosterImgpath %>" border="1" alt="<%#:Item.PosterName %>" height="300" />
                            </td>
                            <td style="vertical-align: top">
                                <b>Description:</b><br /><%#:Item.PosterDescription %>
                                <br />
                                <span><b>Price:</b>&nbsp;<%#: String.Format("{0:c}", Item.PosterPrice) %></span>
                                <br />
                                <span><b>Poster Number:</b>&nbsp;<%#:Item.PosterID %></span>
                                <br />
                            </td>
                        </tr>
                    </table>
                </ItemTemplate>
            </asp:FormView>
        </asp:Content>

    I set the ItemType property to the Poster entity class and the SelectMethod to the GetPosterDetails method. The Item binding expression is then available, and we can retrieve properties of the Poster object. I retrieve the name, the image, the description and the price of each poster.

    4) Now we need to write the GetPosterDetails method. In the code-behind of the PosterDetails.aspx page we type:

        public IQueryable<Poster> GetPosterDetails([QueryString("PosterID")] int? posterid)
        {
            PosterContext ctx = new PosterContext();
            IQueryable<Poster> query = ctx.Posters;
            if (posterid.HasValue && posterid > 0)
            {
                query = query.Where(p => p.PosterID == posterid);
            }
            else
            {
                query = null;
            }
            return query;
        }

    I bind the value from the query string to the posterid parameter at run time. This is all possible thanks to the QueryStringAttribute class that lives inside System.Web.ModelBinding and gets the value of the query string variable PosterID. If there is a matching poster, it is fetched from the database; if not, no data comes back from the database at all.

    5) Run the application and click on the "Midfielders" link. Then click on the first poster that appears from the left (Kenny Dalglish) to see its details. Have a look at the picture below to see the results.

    You can see that now I have all the details of the poster in a new page. Make sure you place breakpoints in the code so you can see what is really going on. Hope it helps!!!

    Read the article

  • Integrating WIF with WCF Data Services

    - by cibrax
    Some time ago I discussed how a custom REST Starter Kit interceptor could be used to parse a SAML token in the HTTP Authorization header and wrap it in a ClaimsPrincipal that the WCF services could use. The thing is, that code was initially created for the Geneva framework, so it got deprecated quickly. I recently needed that piece of code for one of the projects I am currently working on, so I decided to update it for WIF. As this interceptor can be injected in any host for WCF REST services, it also represents an excellent solution for integrating claims-based security into WCF Data Services (previously known as ADO.NET Data Services). The interceptor basically expects a SAML token in the Authorization header. If a token is found, it is parsed and a new ClaimsPrincipal is initialized and injected into the WCF authorization context.

        // Requires WIF (Microsoft.IdentityModel) and the WCF REST Starter Kit (Microsoft.ServiceModel.Web).
        public class SamlAuthenticationInterceptor : RequestInterceptor
        {
            SecurityTokenHandlerCollection handlers;

            public SamlAuthenticationInterceptor()
                : base(false)
            {
                this.handlers = FederatedAuthentication.ServiceConfiguration.SecurityTokenHandlers;
            }

            public override void ProcessRequest(ref RequestContext requestContext)
            {
                SecurityToken token = ExtractCredentials(requestContext.RequestMessage);
                if (token != null)
                {
                    ClaimsIdentityCollection claims = handlers.ValidateToken(token);
                    var principal = new ClaimsPrincipal(claims);
                    InitializeSecurityContext(requestContext.RequestMessage, principal);
                }
                else
                {
                    DenyAccess(ref requestContext);
                }
            }

            private void DenyAccess(ref RequestContext requestContext)
            {
                Message reply = Message.CreateMessage(MessageVersion.None, null);
                HttpResponseMessageProperty responseProperty = new HttpResponseMessageProperty() { StatusCode = HttpStatusCode.Unauthorized };
                responseProperty.Headers.Add("WWW-Authenticate",
                    String.Format("Basic realm=\"{0}\"", ""));
                reply.Properties[HttpResponseMessageProperty.Name] = responseProperty;
                requestContext.Reply(reply);
                requestContext = null;
            }

            private SecurityToken ExtractCredentials(Message requestMessage)
            {
                HttpRequestMessageProperty request = (HttpRequestMessageProperty)requestMessage.Properties[HttpRequestMessageProperty.Name];
                string authHeader = request.Headers["Authorization"];
                if (authHeader != null && authHeader.Contains("<saml"))
                {
                    XmlTextReader xmlReader = new XmlTextReader(new StringReader(authHeader));
                    var col = SecurityTokenHandlerCollection.CreateDefaultSecurityTokenHandlerCollection();
                    SecurityToken token = col.ReadToken(xmlReader);
                    return token;
                }
                return null;
            }

            private void InitializeSecurityContext(Message request, IPrincipal principal)
            {
                List<IAuthorizationPolicy> policies = new List<IAuthorizationPolicy>();
                policies.Add(new PrincipalAuthorizationPolicy(principal));
                ServiceSecurityContext securityContext = new ServiceSecurityContext(policies.AsReadOnly());
                if (request.Properties.Security != null)
                {
                    request.Properties.Security.ServiceSecurityContext = securityContext;
                }
                else
                {
                    request.Properties.Security = new SecurityMessageProperty() { ServiceSecurityContext = securityContext };
                }
            }

            class PrincipalAuthorizationPolicy : IAuthorizationPolicy
            {
                string id = Guid.NewGuid().ToString();
                IPrincipal user;

                public PrincipalAuthorizationPolicy(IPrincipal user)
                {
                    this.user = user;
                }

                public ClaimSet Issuer
                {
                    get { return ClaimSet.System; }
                }

                public string Id
                {
                    get { return this.id; }
                }

                public bool Evaluate(EvaluationContext evaluationContext, ref object state)
                {
                    evaluationContext.AddClaimSet(this, new DefaultClaimSet(System.IdentityModel.Claims.Claim.CreateNameClaim(user.Identity.Name)));
                    evaluationContext.Properties["Identities"] = new List<IIdentity>(new IIdentity[] { user.Identity });
                    evaluationContext.Properties["Principal"] = user;
                    return true;
                }
            }
        }

    A WCF Data Service, like any other WCF service, contains a service host where this interceptor can be injected. The following code illustrates how that can be done in the "svc" file.

        <%@ ServiceHost Language="C#" Debug="true" Service="ContactsDataService"
                        Factory="AppServiceHostFactory" %>

        using System;
        using System.ServiceModel;
        using System.ServiceModel.Activation;
        using Microsoft.ServiceModel.Web;

        class AppServiceHostFactory : ServiceHostFactory
        {
            protected override ServiceHost CreateServiceHost(Type serviceType, Uri[] baseAddresses)
            {
                WebServiceHost2 result = new WebServiceHost2(serviceType, true, baseAddresses);
                result.Interceptors.Add(new SamlAuthenticationInterceptor());
                return result;
            }
        }

    WCF Data Services includes a specific WCF host out of the box (DataServiceHost). However, the service is not affected at all if you replace it with a custom one, as I am doing in the code above (WebServiceHost2 is part of the REST Starter Kit). Finally, the client application needs to pass the SAML token somehow to the data service. In case you are using an HTTP client library for consuming the data service, that's easy to do: you only need to include the SAML token as part of the "Authorization" header. If you are using the auto-generated data service proxy, a little piece of code is needed to inject a SAML token into the DataServiceContext instance. That class provides an event, "SendingRequest", that any client application can leverage to include custom code that modifies the HTTP request before it is sent to the service. So you can easily create an extension method for the DataServiceContext that negotiates the SAML token with an existing STS and adds that token as part of the "Authorization" header.

        public static class DataServiceContextExtensions
        {
            public static void ConfigureFederatedCredentials(this DataServiceContext context, string baseStsAddress, string realm)
            {
                // STSAddressFormat is a format-string constant defined elsewhere in the project.
                string address = string.Format(STSAddressFormat, baseStsAddress, realm);
                string token = NegotiateSecurityToken(address);
                context.SendingRequest += (source, args) =>
                {
                    args.RequestHeaders.Add("Authorization", token);
                };
            }

            private static string NegotiateSecurityToken(string address)
            {
                // Intentionally left empty; see the note below.
                throw new NotImplementedException();
            }
        }

    I left the NegotiateSecurityToken method empty for this extension, as it depends pretty much on how you are negotiating tokens from an existing STS. In case you want an end-to-end REST solution that involves an HTTP endpoint for the STS, you should definitely take a look at the Thinktecture starter STS project on CodePlex.

    Read the article

  • How to disable authentication schemes for WCF Data Services

    - by Schneider
    When I deployed my WCF Data Services to production hosting, I started to get the following error (or similar, depending on which auth schemes are active): "IIS specified authentication schemes 'Basic, Anonymous', but the binding only supports specification of exactly one authentication scheme. Valid authentication schemes are Digest, Negotiate, NTLM, Basic, or Anonymous. Change the IIS settings so that only a single authentication scheme is used." Apparently WCF Data Services (WCF in general?) cannot handle having more than one authentication scheme active. OK, so I am aware that I can disable all-but-one authentication scheme on the web application via the IIS control panel... via a support request!! Is there a way to specify a single authentication scheme at a per-service level in the web.config? I thought this might be as straightforward as making a change to <system.serviceModel>, but... it turns out that WCF Data Services do not configure themselves in the web.config. If you look at the DataService<> class, it does not implement a [ServiceContract], hence you cannot refer to it in the <service><endpoint>... which I presume would be needed for changing its configuration via XML. P.S. Our host is using IIS6, but solutions for both IIS6 and IIS7 are appreciated.

    Read the article

  • Correctly assigning value to a Core Data attribute with an integer data-type

    - by Gordon Fontenot
    I'm missing something here, and feeling like an idiot about it. I'm using a UIPickerView in my app, and I need to assign the row number to a 32-bit integer attribute of a Core Data object. To do this, I am using this method:

        - (void)pickerView:(UIPickerView *)pickerView didSelectRow:(NSInteger)row inComponent:(NSInteger)component {
            object.integerValue = row;
        }

    This is giving me a warning: "warning: passing argument 1 of 'setIntegerValue:' makes pointer from integer without a cast". What am I mixing up here?

    --Edit 1--

    OK, so I can get rid of the warning by changing the method to do the following:

        NSNumber *number = [NSNumber numberWithInteger:row];
        object.integerValue = number;

    However, I still get a value of 0 for object.integerValue if I use NSLog to print it out. object.integerValue has a max value of 5, so I print out number instead, and then I'm getting a number above 62,000,000, which doesn't seem right to me since there are 5 rows. If I NSLog the row variable, I get a number between 0 and 5. So why do I end up with a completely different number after casting the number to NSNumber?

    --Edit 2--

    OK, so I'm realizing that there is some fundamental idea that I don't understand. I now understand that the 60-million-plus number can be cast back to the correct 0-5 number by using integerValue. So it seems my question is: how can I save an integer between 0-5 to the attribute if the NSNumber that is returned is over 60 million? Do I need to be using a different data type?

    Read the article

  • Doubts About Core Data NSManagedObject Deep Copy

    - by Jigzat
    Hello everyone, I have a situation where I must copy one NSManagedObject from the main context into an editing context. It sounds unnecessary to most people, judging from similar situations described on Stack Overflow, but it looks like I need it. In my app there are many views in a tab bar, and every view handles information that is related to the other views. I think I need multiple MOCs, since the user may jump from tab to tab and leave unsaved changes in some tab; if the app then saves data in some other tab/view, the changes in the rest of the views are saved without user consent and, in the worst-case scenario, the app crashes. For adding new information I got away with using a separate adding MOC and then merging changes between both MOCs, but for editing it is not that easy. I saw a similar situation here on Stack Overflow, but the app crashes since my data model doesn't seem to use NSMutableSet for the relationships (I don't think I have a many-to-many relationship, just one-to-many). I think it can be modified so I can retrieve the relationships as if they were attributes:

        for (NSString *attr in relationships) {
            [cloned setValue:[source valueForKey:attr] forKey:attr];
        }

    but I don't know how to merge the changes of the cloned and original objects. I think I could just delete the object from the main context, then merge both contexts and save changes in the main context, but I don't know if that is the right way to do it. I'm also concerned about database integrity, since I'm not sure that the inverse relationships will keep the same reference to the cloned object as if it were the original one. Can someone please enlighten me about this?

    Read the article

  • Binding Data to a TextBlock using Domain Data Source

    - by TFisher
    I have a map with hotspots for each state (done in Expression Blend). I capture the MouseEnter of each state (1 through 50) and pass it into my domain data source:

        Dim activebox As Path = TryCast(sender, Path)
        activebox.Fill = mouseOverColor
        Dim StateID As Integer = CInt(Right(activebox.Name, 2))
        Dim _StateContext As New StateContext
        myDataGrid.ItemsSource = _StateContext.States
        _StateContext.Load(_StateContext.GetStateByStateIDQuery(StateID))

    The above works fine for a DataGrid, ListBox and even a DataForm. But I created a popup with a StackPanel that has TextBlocks:

        popupStatsBox.DataContext = ??????????????
        popupStatsBox.IsOpen = True 'popup does open but has no data

    popupStatsBox.xaml:

        <Popup x:Name="popupStatsBox" Margin="8,35,0,8" DataContext="{Binding}" IsOpen="false" Width="268" HorizontalAlignment="Left">
            <StackPanel x:Name="Layout" Background="Black">
                <TextBlock x:Name="tbState" Text="{Binding StateName}" />
                <TextBlock x:Name="tbAbbrev" Text="{Binding Abbreviation}" />
            </StackPanel>
        </Popup>

    How do I get the TextBlocks to display the values from the _StateContext? StackPanel has a DataContext but no ItemsSource. What am I missing?

    Read the article

  • Design suggestion for expression tree evaluation with time-series data

    - by Lirik
    I have a (C#) genetic program that uses financial time-series data. It's currently working, but I want to re-design the architecture to be more robust. My main goals are:

    - sequentially present the time-series data to the expression trees;
    - allow expression trees to access previous data rows when needed;
    - optimize performance of the data access while evaluating the expression trees;
    - keep a common interface so various types of data can be used.

    Here are the possible approaches I've thought about:

    1. Evaluate the expression tree by passing a data row into the root node and letting each child node use that same row.
    2. Evaluate the expression tree by passing in the data row index and letting each node get the data row from a shared DataSet (currently I'm passing the row index and going to multiple synchronized arrays to get the data).
    3. Hybrid: an immutable data set is accessible by all of the expression trees, and each expression tree is evaluated by passing in a data row.

    The benefit of the first approach is that the data row is passed into the expression tree and no further query is done on the data set (which should increase performance in a multithreaded environment). The drawback is that the expression tree does not have access to the rest of the data (in case some of the functions need to do calculations using previous data rows). The benefit of the second approach is that the expression trees can access any data up to the latest data row, but unless I specify what that row is, I'll have to iterate through the rows and figure out which one is the last one. The benefit of the hybrid is that it should generally perform better and still provide access to the earlier data. It supports two basic "views" of data: the latest row and the previous rows. Do you guys know of any design patterns or do you have any tips that can help me build this type of system? Should I use a DataSet to hold and present the data, or are there more efficient ways to present rows of data while maintaining a simple interface? FYI: All of my code is written in C#.
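
    To make the hybrid concrete, here is a minimal sketch (type and member names are illustrative, not from the original post): the series is immutable and shared, and each evaluation receives a cursor that exposes the current row directly while still allowing indexed lookback.

        using System;
        using System.Collections.Generic;

        // One row of time-series data; the fields are illustrative.
        sealed class DataRow
        {
            public DateTime Time { get; set; }
            public double Close { get; set; }
        }

        // Immutable shared series plus a cursor for one evaluation pass.
        sealed class EvaluationContext
        {
            private readonly IReadOnlyList<DataRow> rows;
            public int CurrentIndex { get; }

            public EvaluationContext(IReadOnlyList<DataRow> rows, int currentIndex)
            {
                this.rows = rows;
                CurrentIndex = currentIndex;
            }

            // The "passed-in row" view: the common case needs no lookup logic.
            public DataRow Current => rows[CurrentIndex];

            // The lookback view: history is addressed relative to the cursor.
            public DataRow Lookback(int barsAgo) => rows[CurrentIndex - barsAgo];
        }

        // Nodes receive the context, so both access styles are available.
        abstract class Node
        {
            public abstract double Evaluate(EvaluationContext ctx);
        }

        sealed class MomentumNode : Node
        {
            private readonly int period;
            public MomentumNode(int period) { this.period = period; }

            public override double Evaluate(EvaluationContext ctx) =>
                ctx.Current.Close - ctx.Lookback(period).Close;
        }

    Because the underlying list is never mutated during a run, many trees can evaluate the same context concurrently without locks, and presenting the data sequentially is just a loop that advances the current index from the first row to the last.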

    Read the article

  • Data Warehouse: Modelling a future schedule

    - by Pat
    I'm creating a DW that will contain data on financial securities such as bonds and loans. These securities are associated with payment schedules. For example, a bond could pay quarterly, while a mortgage would usually pay monthly (sometimes biweekly). The payment schedule is created when the security is traded and, in the majority of cases, will remain unchanged. However, the design would need to accommodate those cases where it does change. I'm currently attempting to model this data and I'm having difficulty coming up with a workable design. One of the most commonly queried fields is "next payment date". Users often want to know when a security will pay next. Therefore, I want to make it as easy as possible for them to get the next payment date and amount for each security. Also, users often run historical queries, in which case they'd want the next payment date and amount as of a specific point in time. For example, they may want to look back at 1/31/09 and query the next payment dates (which would usually be in February 2009 for mortgages). It's also common that they want to query a security's entire payment schedule, which might consist of 360 records (30-year mortgage x 12 payments/year). Since the next payment date and amount would change each month or even biweekly, these fields wouldn't seem to fit into a slowly changing dimension very well. It would probably make more sense to use a fact table, but I'm unsure of how to model it. Any ideas would be greatly appreciated.

    Read the article

  • MS Visual Studio 2008 Certified with Oracle EBS 12 on MS Windows Server (32-bit)

    - by Steven Chan
    Microsoft Visual Studio 2008 is now certified with Oracle E-Business Suite Release 12 (12.0.4 or higher, 12.1.1 or higher) as a release maintenance tool. Previously, Microsoft Visual Studio 2005 was required for E-Business Suite Release 12. The editions of Visual Studio 2008 covered by this announcement are:

    - Microsoft Visual Studio 2008 Standard
    - Microsoft Visual Studio 2008 Professional
    - Microsoft Visual Studio 2008 Team
    - Microsoft Visual C++ 2008 Express (part of Visual Studio 2008 Express Edition)

    The operating systems supported by Visual Studio 2008 on this platform are:

    - Microsoft Windows Server 2003 (for EBS 12.0.4, 12.1.1)
    - Microsoft Windows Server 2008 (for EBS 12.1.1 only)

    Read the article

  • How to represent a Rubik's Cube in a data structure

    - by Mel
    I am just curious: how would you create a data structure for a Rubik's Cube with X number of squares per side? Things to consider:

    - the cube can be of any size
    - it is a Rubik's Cube! So layers can be rotated (about all three axes)

    And a bonus question: using the data structure, how can we know whether a cube in a certain state is solvable? I have been struggling with this question myself and haven't quite found the answer yet.
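
    One simple representation (a sketch, not a complete solution; names are illustrative): store each face as an N x N grid of sticker colors. A layer turn is then a 90-degree rotation of one face's grid plus a cycling of the four adjacent edge strips; the grid rotation is shown below, while the strip cycling (the fiddly part) is left out.

        using System;

        class RubiksCube
        {
            // 6 faces, each an N x N grid of sticker colors (0-5).
            private readonly int[][,] faces;
            public int N { get; }

            public RubiksCube(int n)
            {
                N = n;
                faces = new int[6][,];
                for (int f = 0; f < 6; f++)
                {
                    faces[f] = new int[n, n];
                    for (int r = 0; r < n; r++)
                        for (int c = 0; c < n; c++)
                            faces[f][r, c] = f; // solved: each face a single color
                }
            }

            // Rotate one face's own sticker grid 90 degrees clockwise.
            // A full layer turn would also cycle the touching rows/columns
            // of the four neighboring faces, which is omitted here.
            public void RotateFaceClockwise(int face)
            {
                var old = (int[,])faces[face].Clone();
                for (int r = 0; r < N; r++)
                    for (int c = 0; c < N; c++)
                        faces[face][r, c] = old[N - 1 - c, r];
            }
        }

    On the solvability question: for the standard 3x3x3, a state is solvable exactly when the well-known permutation and orientation invariants hold (corner twists sum to 0 mod 3, edge flips sum to 0 mod 2, and the corner and edge permutation parities match), all of which can be checked directly from a sticker representation like the one above.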

    Read the article

  • SQL SERVER – What is MDS? – Master Data Services in Microsoft SQL Server 2008 R2

    - by pinaldave
    What is MDS? Master Data Services helps enterprises standardize the data people rely on to make critical business decisions. With Master Data Services, IT organizations can centrally manage critical data assets company-wide and across diverse systems, enable more people to securely manage master data directly, and ensure the integrity of information over time. (Source: Microsoft) Today I will be talking about this same subject at Microsoft TechEd India. If you want to learn how to standardize your data and apply business rules to validate it, you must attend my session. MDS is a very interesting concept; I will cover it in a super-short but very interesting set of 10 quick slides. I will make sure that in the very first 20 minutes you understand the following topics:

    - Introduction to Master Data Management
    - What is Master Data and its Challenges
    - MDM Challenges and Advantages
    - Microsoft Master Data Services
    - Benefits and Key Features
    - Uses of MDS Capabilities
    - Key Features of MDS

    This slide deck will be followed by around 30 minutes of demo telling a story of entities, hierarchies, versions, security, consolidation and collections. I will tell this story keeping business rules at the center: we take one business rule, a simple validation rule, and make it much more complex yet very useful to the product. I will also demonstrate a few real-life scenarios where MDS and its usage come into play. Do not miss this session. At the end of the session a book will be awarded to the best participant. My session details:

    Session: Master Data Services in Microsoft SQL Server 2008 R2
    Date: April 12, 2010
    Time: 2:30pm-3:30pm

    SQL Server Master Data Services will ship with SQL Server 2008 R2 and will improve Microsoft's platform appeal. This session provides an in-depth demonstration of MDS features and highlights important usage scenarios. Master Data Services enables consistent decision making by allowing you to create, manage and propagate changes from a single master view of your business entities. Also with MDS, the Master Data hub, which is the vital component, helps ensure reporting consistency across systems and delivers faster, more accurate results across the enterprise. We will talk about establishing the basis for a centralized approach to defining, deploying, and managing master data in the enterprise.

    Reference: Pinal Dave (http://blog.SQLAuthority.com) Filed under: Business Intelligence, Data Warehousing, MVP, Pinal Dave, SQL, SQL Authority, SQL Query, SQL Server, SQL Tips and Tricks, SQLAuthority Author Visit, T SQL, Technology Tagged: TechEd, TechEdIn

    Read the article

  • SQLAuthority News – SQL Server Technical Article – The Data Loading Performance Guide

    - by pinaldave
    This white paper describes load strategies for achieving high-speed data modifications of a Microsoft SQL Server database. "Bulk Load Methods" and "Other Minimally Logged and Metadata Operations" provide an overview of two key and interrelated concepts for high-speed data loading: bulk loading and metadata operations. After this background, the white paper describes how these methods can be [...]

    Read the article

  • Installing ASP.NET MVC 2 RTM on Visual Studio 2010 RC

    - by shiju
    Visual Studio 2010 RC is built against the ASP.NET MVC 2 RC version, but you can easily install ASP.NET MVC 2 RTM on Visual Studio 2010 RC. To install ASP.NET MVC 2 RTM, do the following:

    1) Uninstall "ASP.NET MVC 2".
    2) Uninstall "Microsoft ASP.NET MVC 2 – Visual Studio 2008 Tools".
    3) Install the new ASP.NET MVC 2 RTM version for Visual Studio 2008 SP1.

    The above steps will enable you to use the ASP.NET MVC 2 RTM version on Visual Studio 2010 RC. Note: don't uninstall "Microsoft ASP.NET MVC 2 – Visual Studio 2010 Tools".

    Read the article

  • Visual Studio 11 not 2011

    - by Daniel Moth
    A little pet peeve of mine is when people incorrectly refer to the Developer Preview (or the upcoming Beta) as Visual Studio 2011 instead of the correct Visual Studio 11. The "11" refers to the version number (internally we call it Dev11). What the product will be called when it is released is anyone's guess (it could keep the name or it could have a year appended to it, or it could be something else, who knows). Even if it does have a year appended to the name, I think it is a safe bet it won't be last year! For reference, version 10 was the previous version of Visual Studio which happened to be released in 2010, hence it got the name Visual Studio 2010. That is what confuses new people to this product I guess... they think that the two-digit number matches the year, just because it coincided like that last year. (btw, internally we called it Dev10). For further reference, older releases were: Visual Studio 2008 (v9) aka "Orcas", Visual Studio 2005 (v8) aka "Whidbey", Visual Studio .NET 2003 (v7.1) aka "Everett", and Visual Studio .NET 2002 (v7) aka "Rainier". Before that, we were in the pre-.NET era with Visual Studio 6 (where the version and the product name matched, without the year appended to the name). So next time you hear someone saying "Visual Studio 2011", point them to this post for some mini-education... thanks. Comments about this post by Daniel Moth welcome at the original blog.

    Read the article

  • SQL Developer Data Modeler: On Notes, Comments, and Comments in RDBMS

    - by thatjeffsmith
    Ah, the beautiful data model. They say a picture is worth a thousand words; so how many words are our diagrams worth? (Our friends from the Human Relations sample schema.) Our models describe how the data 'works', whether that be at a logical-business level or at a technical-physical level. Developers like to say that their code is self-documenting. These would be very lazy or very bad (or both) developers. Models are the same way: you should document your models with comments and notes! I have 3 basic options:

    - Comments
    - Comments in RDBMS
    - Notes

    So what's the difference?

    Comments: You're describing the entity/table or attribute/column. This information will NOT be published in the database. It will only be available in the model, and hence to folks with access to the model. (Table comments live in the design only!)

    Comments in RDBMS: You're doing the same thing as above, but your words will be stored IN the data dictionary of the database. Oracle allows you to store comments on the table and column definitions, so your awesome documentation is going to be viewable to anyone with access to the database. (RDBMS is an acronym for Relational Database Management System, of which Oracle is one of the first commercial examples.) If the DDL is produced and run against a database, these comments WILL be stored in the data dictionary.

    Notes: A place for you to add notes, maybe from a design meeting. Or maybe you're using this as a to-do or requirements list. Basically it's for anything that doesn't literally describe the object at hand; that's what the comments are for. (I totally made these up.) These are free-text fields and you can put whatever you want here; just make sure you put stuff here that's worth reading. And it will live on... forever.

    Read the article

  • Truly understand the threshold for document set in document library in SharePoint

    - by ybbest
    Recently I worked on an issue with the list view threshold. The problem: when the user navigates to a view of a document library, it displays the error message "list view threshold is exceeded", yet the view shows no data. The list view threshold is 5000 by default for non-admin users. This limit is not the number of items returned by your query; it is the total number of items the database needs to read to calculate the returned result set. So although the view returns no results, calculating that (empty) result still requires the database to read more than 5000 items. To fix the issue, you need to create an index on the column that the view's filter uses. Let's look at the problem in detail. You can download a solution to replicate this issue here.

    1. Go to Central Admin ==> Web Application Management ==> General Settings ==> click on Resource Throttling.

    2. Change the list view threshold for the web application from 5000 to 2000, so that the problem can be shown without loading more than 5000 items into the list.

    3. Go to the page that displays the approved view of the Loan Application document set. It displays the error message although no data is returned for this view.

    4. To get around this, you need to create an indexed column. Go to list settings and click on Indexed columns.

    5. Click on the "Create a new index" link.

    6. Select the LoanStatus field, as I use this field as the filter for the view.

    7. After the index is created, the approved view can now be accessed; as you can see, it does not return any data.

    Notes: List View Threshold: specifies the maximum number of items that a database operation can involve at one time. Operations that exceed this limit are prohibited.

    References:
    SharePoint lists V: Techniques for managing large lists
    Manage large SharePoint lists for better performance
    http://blogs.technet.com/b/speschka/archive/2009/10/27/working-with-large-lists-in-sharepoint-2010-list-throttling.aspx
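
    If you prefer to create the index and query in code instead of through list settings, here is a hedged sketch using the SharePoint server object model (the site URL and list name are illustrative; the LoanStatus field is from the walkthrough above):

        using Microsoft.SharePoint;

        class IndexedQueryDemo
        {
            static void Main()
            {
                using (SPSite site = new SPSite("http://server/site"))   // illustrative URL
                using (SPWeb web = site.OpenWeb())
                {
                    SPList list = web.Lists["Loan Applications"];        // illustrative list name

                    // Index the column the view filters on, so queries against it
                    // don't need to scan more rows than the threshold allows.
                    SPField status = list.Fields["LoanStatus"];
                    list.FieldIndexes.Add(status);
                    list.Update();

                    // Query on the indexed column with an explicit row limit.
                    SPQuery query = new SPQuery
                    {
                        Query = "<Where><Eq><FieldRef Name='LoanStatus'/>" +
                                "<Value Type='Text'>Approved</Value></Eq></Where>",
                        RowLimit = 100
                    };
                    SPListItemCollection items = list.GetItems(query);
                }
            }
        }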

    Read the article

  • HTML5 and Visual Studio 2010

    - by Harish Ranganathan
    All of us work with Visual Studio (or the free Visual Web Developer Express edition) for developing web applications targeting ASP.NET / ASP.NET MVC or Silverlight, etc. Over the years, Visual Studio has grown to a great extent: from being a simple, limited-functionality tool in VS.NET 2002 to the multi-faceted, MEF-driven Visual Studio 2010, it has come a long way. And as much as Visual Studio supports rapid web development by generating HTML markup, it also added IntelliSense for some of the HTML boilerplate that one would otherwise have to monotonously type every time. For example, in Visual Studio 2010 one can just type the angle bracket "<" and then the first keyword "h" or "x" (for html or xhtml respectively), press Tab twice, and it renders the entire markup required for the XHTML or HTML strict/transitional doctype with the fully qualified W3C URL. The same holds good for specifying the HTML type declaration. Now, the difference between HTML and XHTML has been discussed in detail already; if you are interested, you can read about it at http://www.w3schools.com/xhtml/xhtml_html.asp

    But the industry trend, and the buzz around, is HTML5. With browsers like IE9 Beta, Google Chrome and Firefox 4 supporting HTML5 standards big time, everyone wants to start developing HTML5-based websites. VS developers (like me) often get asked when VS will start supporting HTML5. VS 2010 was released last year, and HTML5 is still a specification under development. Clearly, within the timelines in which we started developing Visual Studio (way back in 2008), the HTML5 specs were almost non-existent. Even today, the HTML5 standards body recommends not depending fully on the entire markup set, as the specs are still under development and might change in the future. However, with the Visual Studio 2010 SP1 beta, there is quite a bit of support for HTML5-based web development. In fact, one of my colleagues pointed out that SP1 beta's major enhancement is its ability to support HTML5 tags and even add server mode to them.

    Let's look at the existing validation schemas available in Visual Studio (Tools – Options – Validation) before installing Visual Studio 2010 SP1 beta. Clearly, the validation options are restricted to HTML 4.01 and XHTML 1.1 Transitional and below.

    Also, let's consider using some of the new HTML5 input type elements. (I found this out just today from my friend, also an ASP.NET team member.) <input type="email"> is one of the new input type elements according to the HTML5 specification. Now, this works well if you type it as is in Visual Studio, and the page renders without any issue (since the default behaviour is that an "undefined" type on an input tag falls back to the default mode, which is text). The moment you add <input type="email" runat="server">, however, you get an error. Naturally, you don't get IntelliSense support for these new tags either.

    Once you install Visual Studio 2010 Service Pack 1 beta from here (it takes a while, so you need to be patient for the installation to complete), you will start getting additional validation templates for HTML5, and once you set this, you can start using HTML5 elements in your web pages without getting errors/warnings.

    Look at the screenshot below for the new "video" tag showing up in IntelliSense (video is part of the new HTML5 specifications). Note that you still need to hook up the <!DOCTYPE html /> at the top manually, as it doesn't change automatically (from the default XHTML 1.0 Strict) when you create a new page. The new input type tags in HTML5 are also supported.

    One can also use <asp:TextBox type="email" ...>, which would in turn generate the <input type="email"> markup when the page is rendered. In fact, as of SP1 beta, this is the only way to use the new input type tags with the runat="server" attribute (otherwise you will get the parser error mentioned above; this issue should be fixed by the final release of SP1). Going further, there may be more support for server tags for some of the common HTML5 elements, but this is currently work in progress.

    So, other than not having runat="server" support for the new HTML5 input tags, you can pretty much build and target HTML5 websites with Visual Studio 2010 SP1 beta today. For those running Visual Studio 2008, there is also the "HTML5 intellisense for Visual Studio 2010 and 2008" extension available for download from http://visualstudiogallery.msdn.microsoft.com/d771cbc8-d60a-40b0-a1d8-f19fc393127d/

    Note that if you are running Visual Studio 2010, the recommended approach is to install the SP1 beta, which will be the way forward for HTML5 support in Visual Studio. Of course, you need to test these on a browser supporting HTML5, such as IE9 Beta, Chrome or Firefox 4. You can download IE9 Beta from here. You can also follow the Visual Web Developer Team Blog for more updates on the stuff they are building. Cheers!!!

    Read the article

  • Cooking with Wessty: HTML 5 and Visual Studio

    - by David Wesst
    The hardest part of using a new technology, such as HTML5, is discovering which features are available and what the syntax is. One way to learn a new technology is to adapt your current tooling so you can use it in the comfort of your own development environment. For .NET web developers, that environment is usually Visual Studio 2010. This technique shows you how to get HTML5 IntelliSense working in your current version of Visual Studio 2008 or 2010, making it easier to start using HTML5 features in your current .NET web development projects.

    Quick note: according to the Visual Web Developer team at Microsoft, the Visual Studio 2010 SP1 beta has support for both HTML5 and CSS3. If you are willing to try out the bleeding-edge update from Microsoft, then you won't need this technique.

    Ingredients:
    - Visual Studio 2008 or 2010
    - Your favourite HTML5-compliant browser (e.g. Internet Explorer 9)
    - Administrator privileges, or the ability to install Visual Studio extensions in your development environment

    Directions:
    1. Download the HTML5 IntelliSense for Visual Studio 2008 and 2010 extension from the Visual Studio Extension Gallery, and install it.
    2. Open Visual Studio.
    3. Open a web file, such as an HTML or ASPX file. The HTML Source Editing toolbar should appear.
    4. (Optional) If it did not appear, you can activate it through the main menu by selecting View, then Toolbars, and then selecting HTML Source Editing if it does not have a checkbox beside it. (NOTE: if there is a checkbox, the toolbar is already enabled.)
    5. In the HTML Source Editing toolbar, open the validation schema drop-down and select HTML 5.

    Et voila! You now have HTML5 IntelliSense enabled to help you get started adding HTML5 awesomeness to your web sites and web applications.

    Optional – Setting HTML5 validation options: at this point, you may want to select how Visual Studio shows validation errors. You can do that in the Options menu. To get there:
    1. In the main menu, select Tools, then Options.
    2. In the Options window, select and expand Text Editor, then HTML, followed by Validation.

    Resources:
    - HTML5 IntelliSense for Visual Studio 2008 and 2010 extension
    - Visual Studio Extension Gallery
    - Visual Studio 2010 SP1 Beta

    This post also appears at http://david.wes.st

    Read the article

  • Big DataData Mining with Hive – What is Hive? – What is HiveQL (HQL)? – Day 15 of 21

    - by Pinal Dave
    In yesterday's blog post we learned the importance of the operational database in the Big Data story. In this article we will understand what Hive and HQL are in the Big Data story. Yahoo started working on Pig (we will cover that in the next blog post) for deploying their applications on Hadoop; Yahoo's goal was to manage its unstructured data. Similarly, Facebook started deploying their warehouse solutions on Hadoop, which resulted in Hive. The reason for going with Hive is that traditional warehousing solutions were getting very expensive.

    What is HIVE? Hive is a data warehousing infrastructure for Hadoop. Its primary responsibility is to provide data summarization, query and analysis. It supports analysis of large datasets stored in Hadoop's HDFS as well as on the Amazon S3 filesystem. The best part of Hive is that it supports SQL-like access to structured data, known as HiveQL (or HQL), as well as big data analysis with the help of MapReduce. Hive is not built to get quick responses to queries; it is built for data mining applications, which can take from several minutes to several hours to analyze the data, and that is where Hive is primarily used.

    HIVE Organization: the data is organized in three different formats in Hive.

    Tables: They are very similar to RDBMS tables and contain rows and columns. Hive is just layered over the Hadoop File System (HDFS), hence tables are directly mapped to directories of the filesystem. It also supports tables stored in other native file systems.

    Partitions: Hive tables can have more than one partition. They are mapped to subdirectories in the file system as well.

    Buckets: In Hive, data may be divided into buckets. Buckets are stored as files in partitions in the underlying file system.

    Hive also has a metastore which stores all the metadata. It is a relational database containing various information related to Hive schemas (column types, owners, key-value data, statistics, etc.). A MySQL database can be used here.

    What is HiveQL (HQL)? The Hive query language provides the basic SQL-like operations. Here are a few of the tasks which HQL can do easily:

    - Create and manage tables and partitions
    - Support various relational, arithmetic and logical operators
    - Evaluate functions
    - Download the contents of a table to a local directory, or the result of queries to an HDFS directory

    Here are examples of HQL queries:

        SELECT upper(name), salesprice FROM sales;
        SELECT category, count(1) FROM products GROUP BY category;

    As you can see, they are very similar to SQL queries. Tomorrow: in tomorrow's blog post we will discuss a very important component of the Big Data ecosystem – Pig.

    Reference: Pinal Dave (http://blog.sqlauthority.com) Filed under: Big Data, PostADay, SQL, SQL Authority, SQL Query, SQL Server, SQL Tips and Tricks, T SQL

    Read the article

  • Data Masking for Oracle E-Business Suite

    - by Troy Kitch
    E-Business Suite customers can now use Oracle Data Masking to obscure sensitive information in non-production environments. Many organizations are inadvertently exposed when copying sensitive or regulated production data into non-production database environments for development, quality assurance or outsourcing purposes. Due to weak security controls and unmonitored access, these non-production environments have increasingly become the target of cyber criminals. Learn more about the announcement here.

    Read the article

  • Extracting Data from a Source System to History Tables

    - by Derek D.
    This is a topic I find very little written about; however, it is very important that the extraction of data be done in a way that does not hinder performance of the source system. In this example, the goal is to extract data from a source system into another database (or server) all [...]

    Read the article
