Search Results

Search found 5849 results on 234 pages for 'partition scheme'.

Page 167/234 | < Previous Page | 163 164 165 166 167 168 169 170 171 172 173 174  | Next Page >

  • Designing server communications like DNS

    - by fryme
    I need some help with server design: specifically, the structure of a server application. I need to develop a network of servers that interact with each other. If a server receives a request it cannot satisfy itself, it forwards the request to another server and the result is returned to the original sender. The data held on each server is different, and the sender does not know in advance which server holds the result. The first thing that comes to mind is to use the model (scheme) of DNS, but I have not found any detailed articles on this subject. Any ideas?
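
    For what it's worth, a minimal sketch of the recursive-resolution idea being described (C#; the peer URLs, names, and query format are purely illustrative, not a real protocol):

        using System;
        using System.Collections.Generic;
        using System.Net.Http;
        using System.Threading.Tasks;

        class ForwardingServer
        {
            readonly Dictionary<string, string> localData = new Dictionary<string, string>();
            readonly List<string> peers = new List<string> { "http://peer1/lookup", "http://peer2/lookup" };
            static readonly HttpClient http = new HttpClient();

            public async Task<string> Resolve(string key)
            {
                string value;
                if (localData.TryGetValue(key, out value))   // we hold the data ourselves
                    return value;

                foreach (string peer in peers)               // otherwise ask each peer and
                {                                            // relay the first answer back
                    HttpResponseMessage response =
                        await http.GetAsync(peer + "?key=" + Uri.EscapeDataString(key));
                    if (response.IsSuccessStatusCode)
                        return await response.Content.ReadAsStringAsync();
                }
                return null;                                 // no server could satisfy it
            }
        }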

    Read the article

  • Validation against 10K XSD - performance problem

    - by stck777
    I have an XSD schema of about 10,000 lines. Validating my 500-line XML document against it takes 5+ seconds. I receive the XML dynamically via POST from an external server on every click a user makes on my homepage, so that 5-second validation cost is paid on every click. PHP example:

        $doc = new DOMDocument();
        $doc->load('file.xml');             // 100 to 500 lines
        $doc->schemaValidate('schema.xsd'); // schema.xsd: 10,000 lines

    Do you have any idea how I can validate the XML against the XSD faster?

    Read the article

  • Kernel dealing with the section headers in an ELF

    - by uki
    I recently read that the kernel and the dynamic loader mostly deal with the program header table in an ELF file, while assemblers, compilers, and linkers deal with the section header table. The number of program headers and section headers are recorded in the ELF header, in fields named e_phnum and e_shnum respectively. e_phnum is two bytes in size, so if the number of program headers reaches 65535 (PN_XNUM, 0xffff), a scheme known as extended numbering is used: e_phnum is set to 0xffff and the sh_info field of the zeroth entry in the section header table holds the actual count. My question is: if the count of program headers exceeds that limit, does that mean the kernel and/or the dynamic loader end up having to read the section header table?
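
    As a concrete illustration, here is a hedged sketch of that lookup order (C#; it assumes a 64-bit little-endian ELF and hard-codes the ELF64 header offsets, so treat it as an outline rather than a hardened parser):

        using System;
        using System.IO;

        class ElfPhnum
        {
            // Returns the program header count of a 64-bit little-endian ELF file.
            static ulong ProgramHeaderCount(string path)
            {
                using (BinaryReader r = new BinaryReader(File.OpenRead(path)))
                {
                    r.BaseStream.Seek(40, SeekOrigin.Begin);   // e_shoff: section header table offset
                    long shoff = r.ReadInt64();
                    r.BaseStream.Seek(56, SeekOrigin.Begin);   // e_phnum
                    ushort phnum = r.ReadUInt16();
                    if (phnum != 0xffff)                       // PN_XNUM: extended numbering marker
                        return phnum;
                    // Extended numbering: the real count lives in sh_info
                    // (offset 44) of section header 0.
                    r.BaseStream.Seek(shoff + 44, SeekOrigin.Begin);
                    return r.ReadUInt32();
                }
            }

            static void Main(string[] args)
            {
                Console.WriteLine(ProgramHeaderCount(args[0]));
            }
        }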

    Read the article

  • Which web-safe fonts are most readable as body text? Which web-safe fonts should not be used?

    - by metal-gear-solid
    Which web-safe fonts are easiest on the eyes as body text, and which web-safe fonts should not be used? What should the minimum font size of <p> body text be for good readability? What font sizes should we use for <h1> through <h6>, <p>, <ul>, and <ol>? Should we use the same font-size for <p>, <ul>, <ol>, and <th>/<td>? What would a balanced typographic font-sizing scheme look like?

    Read the article

  • C++ converting hexadecimal md5 hash to decimal integer

    - by Zackery
    I'm implementing the ElGamal signature scheme, and I need the hash of the message as a decimal integer in order to compute S during signature generation.

        string hash = md5(message);
        cout << hash << endl;
        NTL::ZZ msgHash = strtol(hash.c_str(), NULL, 16);
        cout << msgHash << endl;

    No built-in integer type is large enough to hold a 32-digit hexadecimal (128-bit) hash, so I tried the big-integer type from the NTL library, but it didn't work out because you cannot assign a long integer to the NTL::ZZ type this way. Is there a good solution to this? I'm using Visual C++ in Visual Studio 2013.
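
    The general fix is to accumulate the hex digits into an arbitrary-precision integer one digit at a time, instead of strtol (which overflows at 32/64 bits). A sketch in C# with System.Numerics.BigInteger; the same loop carries over to NTL's ZZ in C++ (ZZ z(0); z = z * 16 + digit;). The digest value below is just a sample:

        using System;
        using System.Numerics;

        class HexToBig
        {
            static BigInteger FromHex(string hex)
            {
                BigInteger result = 0;
                foreach (char c in hex)
                    result = result * 16 + Convert.ToInt32(c.ToString(), 16);
                return result;
            }

            static void Main()
            {
                // A 128-bit MD5 digest is 32 hex characters (sample value).
                string hash = "9e107d9d372bb6826bd81d3542a419d6";
                Console.WriteLine(FromHex(hash));
            }
        }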

    Read the article

  • Oauth2 External Browser Request - App Launches, Credentials received, Browser Not Updated

    - by Michael Drozdowski
    I'm using an external browser request to authenticate with the SoundCloud API on OS X. My app launches and has a button that opens an external browser window to authenticate against SoundCloud. When I click "connect" in the new window, I get an "External Protocol Request" that is consistent with my custom launch URI scheme. Clicking this loads the app, and it gets the correct credentials. The trouble is, the browser window never changes - it simply says that it's connecting, forever. There is no "connection confirmed" alert coming from SoundCloud. I know I'm logged in because I can make the correct calls to the API to get things such as my username. Why isn't the browser confirming the connection or dismissing the window? The same thing happens if I load the authentication in an internal WebView in the app.

    Read the article

  • Are your redirects HTTP compliant (absolute URI)?

    - by webbiedave
    In the Hypertext Transfer Protocol 1.1 spec, it states for the Location header: "The field value consists of a single absolute URI."

        Location = "Location" ":" absoluteURI

    An example is:

        Location: http://www.w3.org/pub/WWW/People.html

    I have coded many an app that doesn't include the scheme and domain (e.g., /thankyou/), and every browser I've ever tested with redirects correctly. I'm wondering if it's imperative to go back and change all my redirect code, or whether I can just ignore this part of the spec. Have many of you produced code that doesn't comply, and will you go back and change it? Or will you just trust that clients will continue to resolve such redirects well into the future? (Going forward I will certainly adhere to this.)
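
    If you do decide to go back and fix the redirects, the change is mechanical. A hedged, ASP.NET-flavoured sketch (the helper name is made up) that turns a site-relative path into the absolute URI the spec calls for, using the scheme and host of the current request:

        using System;
        using System.Web;

        static class RedirectHelper
        {
            // Resolves a relative path like "/thankyou/" against the current
            // request's URI, yielding e.g. http://www.example.com/thankyou/
            public static string ToAbsolute(HttpRequest request, string relativePath)
            {
                return new Uri(request.Url, relativePath).AbsoluteUri;
            }
        }

        // Usage: Response.Redirect(RedirectHelper.ToAbsolute(Request, "/thankyou/"));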

    Read the article

  • Need URLs to be non-secure when moving away from a secured link (without hardcoded URLs in HTML)?

    - by Tony_Henrich
    I have an ASP.NET site. It has an order form which is accessible at https://secure.example.com/order.aspx. The links on the site do not include the domain name, so for example the home page link is just 'default.aspx'. The issue is that if I click a link like the home page from the secure page, the URL becomes https://secure.example.com/default.aspx instead of http://www.example.com/default.aspx. What's a good way to handle this? The scheme should work automatically for any domain name, based on where the site is launched from: if the site is running on 'localhost', then moving away from the secured page should yield http://localhost/... URLs. The navigation links are in a master page.
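
    One possible approach, sketched below in C# (the 'secure.'-to-'www.' host mapping comes from the question's example, and non-default ports are ignored): compute a non-secure base URL from the current request and build the master-page links from it:

        using System;
        using System.Web;

        static class LinkHelper
        {
            // Builds an absolute http:// URL for a navigation link from the
            // current request's host, so the same code works on localhost
            // and in production.
            public static string NonSecureUrl(HttpRequest request, string relativePath)
            {
                string host = request.Url.Host.Replace("secure.", "www.");
                return "http://" + host + "/" + relativePath.TrimStart('/');
            }
        }

        // In the master page:
        // homeLink.NavigateUrl = LinkHelper.NonSecureUrl(Request, "default.aspx");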

    Read the article

  • Open Source .NET embedded web/http server

    - by Daniel Mošmondor
    I am working on a project where I need to embed a web server into my C# application so the application can display its status via HTTP. I suppose I'll want to configure it through HTTP as well. I am looking for an open-source library written in C#, with a licensing scheme that allows me to link it into my existing closed-source code (e.g. LGPL). Any suggestions of specific products, or where to look first? It would be great if the product had some kind of scripting, or at least templates. All HTML output would come from the application; only resources (images, icons, ...) would be stored on disk. EDIT: It needs to run under .NET 2.0, however.
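
    Failing a library, the underlying idea is available in the framework itself. A minimal sketch using System.Net.HttpListener, which ships with .NET 2.0 (the port, path, and status text are placeholders):

        using System;
        using System.Net;
        using System.Text;

        class StatusServer
        {
            static void Main()
            {
                HttpListener listener = new HttpListener();
                listener.Prefixes.Add("http://localhost:8080/status/"); // assumed port/path
                listener.Start();
                while (true)
                {
                    // Blocks until a request arrives, then serves the status page.
                    HttpListenerContext ctx = listener.GetContext();
                    byte[] body = Encoding.UTF8.GetBytes("<html><body>App status: OK</body></html>");
                    ctx.Response.ContentType = "text/html";
                    ctx.Response.OutputStream.Write(body, 0, body.Length);
                    ctx.Response.Close();
                }
            }
        }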

    Read the article

  • Custom DDL Templates for Visual Studio 2010

    - by Stacey
    I was wondering if anyone knows of some good community-distributed custom DDL templates for Entity Framework 4.0. The default DDL-to-SQL10 template works well enough, but we're looking to customize the naming convention in ways it just doesn't offer. I'm not finding many samples of people doing this, so I was hoping someone might know of a resource I'm overlooking (perhaps I am searching for it wrong, or misunderstanding how the whole process works). Specifically, we want to change how it writes out fields from relationships. For instance, the default template produces tablename_propertyendpoint_propertyname; we want to fine-tune this to match our naming scheme a little more closely. And none of us can quite figure out where in the .tt files this exact behavior happens.

    Read the article

  • How to send data securely over a public channel?

    - by Daniel
    Hi! I have a smart client application deployed via a ClickOnce web page. Here's the current scenario:

        1. The user runs the application, and the application shows a login form.
        2. The user enters an ID/password in the login form, and the application sends that information to the server.
        3. The server authenticates the user and sends configuration and data to the application. Different users get different configuration and data.

    I was concerned that anyone who knows the URL can download the application from the web page, so I'm trying to change the authentication scheme so that users must log in at the web page to download the application. I then want to send the authentication info from the web page (a program running on the server) to the smart client app, so that the application can download its configuration from the server without prompting the user to log in again. How can the web page send the ID/password to the application securely?
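
    One common pattern - offered here as an assumption, not the only answer - is to never send the password a second time: after the web login, the server issues a short-lived one-time token, passes it to the application (for example on the deployment URL's query string), and the application redeems it over HTTPS for its configuration. A sketch of the issuing side:

        using System;
        using System.Security.Cryptography;

        static class OneTimeToken
        {
            // Issues an unguessable token; the server stores it alongside the
            // user id and an expiry time, and invalidates it on first use.
            public static string Issue()
            {
                byte[] bytes = new byte[32];
                using (RandomNumberGenerator rng = RandomNumberGenerator.Create())
                    rng.GetBytes(bytes);
                return Convert.ToBase64String(bytes);
            }
        }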

    Read the article

  • Question about string sort

    - by davit-datuashvili
    I have a question from Programming Pearls. The problem is: show how to use Lomuto's partitioning scheme to sort varying-length bit strings in time proportional to the sum of their lengths. Each record in x[0..n-1] has an integer length and a pointer to an array bit[0..length-1]. The code:

        void bsort(int l, int u, int depth)
        {   // sorts x[l..u] (inclusive bounds), examining bit 'depth'
            if (l >= u) return;
            for (int i = l; i <= u; i++)       // strings shorter than depth are
                if (x[i].length < depth)       // fully sorted; move them to the front
                    swap(i, l++);
            int m = l;
            for (int i = l; i <= u; i++)       // Lomuto-style partition on bit[depth]:
                if (x[i].bit[depth] == 0)      // zeros before ones
                    swap(i, m++);
            bsort(l, m - 1, depth + 1);        // recurse on the zeros...
            bsort(m, u, depth + 1);            // ...and on the ones
        }

    Please help me with the following: 1. How does this algorithm work? 2. How would I implement it in Java?

    Read the article

  • One application instance for two domain names

    - by dervlap
    Hello, I have two web applications in ASP.NET which are almost identical (same business logic, same DAL, same DB schema but a different instance). The only things that need to change are the design (logo, colors, ...) and the text (global and local resources), to address two separate business sectors. We cannot put the applications on subdomains of one site because the two apps need to appear independent. Is it a good idea to run only one instance for the two web applications? For example: I will have two hostnames, mycompagny.com and mycompagny2.com, and I will add an HTTP module that sets a string, 'company' or 'company2', which is propagated through the application. I will instantiate the DAL only once, but the connection string will change depending on that string. Any pros and cons? Any other alternatives?
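
    A sketch of the HTTP-module idea from the question (C#; the host test and item key are illustrative): derive the brand key from the host once per request and stash it where the rest of the pipeline - resource lookup, connection-string selection - can read it:

        using System;
        using System.Web;

        public class BrandModule : IHttpModule
        {
            public void Init(HttpApplication app)
            {
                app.BeginRequest += delegate
                {
                    string host = app.Context.Request.Url.Host;
                    string brand = host.Contains("mycompagny2") ? "company2" : "company";
                    app.Context.Items["Brand"] = brand;  // read later to pick resources
                };                                       // and the connection string
            }

            public void Dispose() { }
        }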

    Read the article

  • XmlResolver Class' GetEntity function

    - by Pok
    I wrote a custom resolver class. It works fine for resolving SYSTEM DTDs, but not for resolving PUBLIC DTDs: when the class has to resolve a PUBLIC DTD, the GetEntity function receives the public identifier through its absoluteUri parameter instead of the URI of the resource. Is there a solution to this? Examples: if I have a DTD declaration like <!DOCTYPE document SYSTEM "document.dtd">, then the custom resolver correctly receives the string "document.dtd" through the absoluteUri parameter of GetEntity. If I have a DTD declaration like <!DOCTYPE document PUBLIC "-//Organization//DTD Document 1.0//EN" "http://localhost/document.dtd">, then the custom resolver incorrectly receives the string "-//Organization//DTD Document 1.0//EN" instead of "scheme://host/document.dtd".
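
    A hedged workaround sketch: if, as described, the public identifier is what reaches the resolver, it can be intercepted in ResolveUri and mapped to a local copy of the DTD before it is ever treated as a URI (the mapping test and file path are assumptions):

        using System;
        using System.Xml;

        class PublicIdResolver : XmlUrlResolver
        {
            public override Uri ResolveUri(Uri baseUri, string relativeUri)
            {
                // Public identifiers ("-//Organization//...") are not valid URIs;
                // map the ones we know to a local DTD copy (assumed path).
                if (relativeUri != null && relativeUri.StartsWith("-//"))
                    return new Uri("file:///C:/dtds/document.dtd");
                return base.ResolveUri(baseUri, relativeUri);
            }
        }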

    Read the article

  • Spring security with database and multiple roles?

    - by Joe
    I'm building an application using Spring 3.0, and I've decided to try my hand at Spring Security and Hibernate. I've seen that it's possible to back Spring Security with a database, and I've seen references to defining your own queries. The problem I have is that the tutorials I've been finding aren't too clear, and they assume that a user can only have one role; I want to give some users multiple roles. So I was thinking about a database schema along the lines of:

        User:      user_id, username, password, registrationDate
        User_Role: user_id, role_id
        Role:      role_id, rolename

    Now I was wondering if anyone has pointers to some useful tutorials/advice/comments.

    Read the article

  • How to reduce latency of data sent through a REST API

    - by Sid
    I have an application which obtains data in JSON format from one of our other servers. The problem I am facing is that there is a significant delay when requesting this information. Since a lot of data is passed (approx. 1,000 records per request, where each record is pretty large), would compression help reduce the delay? If so, which compression scheme would you recommend? I read in another thread that the pattern of the data also matters a lot for the type of compression to use. The pattern of the data is consistent and resembles the following:

        :desc    => some_description
        :url     => some_url
        :content => some_content
        :score   => some_score
        :more_attributes => more_data

    Can someone recommend a solution to reduce this delay? The delay is approximately 6-8 seconds. I'm using Ruby on Rails to develop this application, and the server providing the data uses Python for the most part.
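
    The transport-level answer is largely language-independent: gzip the JSON on the wire and decompress transparently on the client; records as repetitive as the ones above tend to compress very well because the keys repeat in every record. A sketch of the client side, in C# for illustration since the exact Rails/Python incantations vary (the URL is a placeholder):

        using System;
        using System.IO;
        using System.Net;

        class CompressedFetch
        {
            static void Main()
            {
                HttpWebRequest req = (HttpWebRequest)WebRequest.Create("http://example.com/records.json");
                // Advertise gzip/deflate support and decompress automatically.
                req.AutomaticDecompression = DecompressionMethods.GZip | DecompressionMethods.Deflate;
                using (WebResponse resp = req.GetResponse())
                using (StreamReader reader = new StreamReader(resp.GetResponseStream()))
                    Console.WriteLine(reader.ReadToEnd().Length + " chars received");
            }
        }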

    Read the article

  • Understanding a lighttpd.conf file?

    - by AP257
    Hi all, I've been given a lighttpd.conf that someone else wrote and need help working out how to serve it. I'm 90% of the way there but stuck: the index.html page appears, but links and CSS files don't point to the right place. To clarify, the links and CSS files are all pointing to 'file:///' URLs. So styles.css in the HTML header points to file:///html/styles.css, whereas it should be going to http://url.com/styles.css. Maybe url.rewrite or url.redirect isn't working properly?

        server.document-root = "~/html"
        server.port = 28001
        mimetype.assign = (
          ".html" => "text/html",
          ".txt"  => "text/plain",
          ".jpg"  => "image/jpeg",
          ".png"  => "image/png"
        )
        url.rewrite = (
          "^(.*)/($|\?.*)"          => "$1/index.html",
          "^(.*)/([^.?]+)($|\?.*)$" => "$1/$2.html"
        )
        $HTTP["scheme"] == "http" {
          url.redirect = (
            "^/platform/index.html$" => "/platform",
            "^/about/company.html$"  => "/about/company"
          )
        }

    Read the article

  • How to return a data structure from a stored procedure

    - by rodnower
    Hello, I have a C# application that retrieves data from AQ via an Oracle stored procedure stored in a package. The scheme is: C# code -> stored procedure in package -> AQ. Inside the stored procedure I use DBMS_AQ to dequeue the data into an object of some object type. Now I have this object; my question is, how do I return it? Previously I:

        1. created a temporary table,
        2. called EXTEND() on the table,
        3. inserted the data from the object into the table,
        4. performed a SELECT on the table,
        5. and returned a SYS_REFCURSOR.

    On the C# side I filled a DataSet with the help of OracleDataAdapter.Fill(). After that I changed it to return the data fields through OUT parameters. But now there are many fields, and I cannot create that many OUT parameters. What is the best way to do this? Thank you in advance.
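
    For reference, a sketch of the ref-cursor route on the C# side using ODP.NET (the package, procedure, and parameter names are made up; the procedure is assumed to return a SYS_REFCURSOR OUT parameter selecting every field of the dequeued object):

        using System.Data;
        using Oracle.DataAccess.Client;   // ODP.NET

        class RefCursorExample
        {
            static DataSet Fetch(string connString)
            {
                using (OracleConnection conn = new OracleConnection(connString))
                using (OracleCommand cmd = new OracleCommand("my_pkg.get_dequeued_items", conn))
                {
                    cmd.CommandType = CommandType.StoredProcedure;
                    // Single OUT parameter: a SYS_REFCURSOR over all fields.
                    cmd.Parameters.Add("p_result", OracleDbType.RefCursor,
                                       ParameterDirection.Output);
                    DataSet ds = new DataSet();
                    new OracleDataAdapter(cmd).Fill(ds);   // opens the connection itself
                    return ds;
                }
            }
        }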

    Read the article

  • Tomcat SSL Configuration

    - by bdares
    I received an SSL cert to use for a Tomcat 6.0 server, ready to use. I configured Tomcat to use it with the following in server.xml:

        <Connector port="8443" maxThreads="200"
                   scheme="https" secure="true" SSLEnabled="true"
                   keystoreFile="C:\Tomcat 6.0\ssl\cert" keystorePass="*****"
                   clientAuth="false" sslProtocol="TLS"/>

    I started Tomcat from the command prompt so I could see any error messages as they happened. There were none. The results for accessing different URLs:

        http://localhost        - normal page loads fine
        https://localhost       - browser claims the page cannot be found
        https://localhost:8443  - page cannot be found
        http://localhost:8443   - offers a certificate, then after it is accepted redirects to https://localhost

    (I suspect the https:// URLs initially offer the certificate, which is automatically accepted by the browser, as it was issued by Verisign.) How do I fix this? Edit: I've also tried port="443". Same result.

    Read the article

  • get function address from name [.debug_info ??]

    - by user361190
    Hi, I was trying to write a small debug utility, and for this I need to get the address of a function/global variable given its name. This is a built-in debug utility, which means the utility runs from within the code being debugged; in plain words, I cannot parse the executable file. Now, is there a well-known way to do that? The plan I have is to make the .debug_* sections load into memory, which I plan to do with a cheap trick like this in the ld script:

        .data {
            *(.data)
            __sym_start = .;
            *(.debug_*);
            __sym_end = .;
        }

    Now I have to parse the section to get the information I need, but I am not sure this is doable or whether there are issues with it - this is all just theory. It also seems like too much work :-) Is there a simpler way? Or if someone can tell me upfront why my scheme will not work, that would also be helpful. Thanks in advance, Alex.

    Read the article

  • Should a student be diversifying or mastering programming languages?

    - by Max Link
    As the question states: when learning programming languages, is it better for a student to diversify and explore, or to focus on only 2-3 languages and really get to know them well? An example of what I mean by diversifying:

        Functional           -> Scheme
        Procedural           -> C
        Object-oriented      -> Java
        Dynamic or scripting -> Python
        Other                -> C++

    I sometimes have breaks between semesters (up to 3 months), and I'm thinking of either learning a new language or "mastering" those I already know. Which would benefit me more in the future? I already know some Java, C, and C++ (about 3 months of self-study each). If I'm not mistaken, the industry where I live is heavy on Java, C++, and C#.

    Read the article

  • SQL Developer Database Diff – Compare Objects From Multiple Schemas

    - by thatjeffsmith
    Ever wonder why Database Diff isn't called Schema Diff? One reason is that SQL Developer allows you to select objects from more than one schema in the 'Source' connection for the compare. Simply use the 'More' dialog view and select as many tables from as many different schemas as you require.

    Now, before you get around to testing this - as you should never believe what I say, trust but verify - two things you need to know:

      - I'm using SQL Developer version 3.2
      - On the initial screen you need to use the 'Maintain' option

    'Maintain' tells SQL Developer to use the schema designation in the source connection to find the same corresponding object in the destination schema. Choose 'Maintain' if you want to compare objects in the same schema in the destination but don't have the user login for that schema. So after you've selected your databases, your diff preferences, and your objects - you're ready to perform the compare and review your results.

    The DIFF Report

    Notice the highlighted text: SQL Developer is 'maintaining' the schema context from the two databases. Short and sweet - that's pretty much all there is to doing a compare with SQL Developer with multiple schemas involved.

    You may have noticed in some posts lately that my editor screenshots had a 'green screen' look and feel to them. What's with the black background in your editors? In the SQL Developer preferences, you can set your editor color schemes. I started with the 'Twilight' scheme (team Jacob, in case you're wondering) and then customized it further by going with a default green font color. You could go pretty crazy in here, and I'm assuming 90% of you could care less and will just stick with the original. But for those of you who are particular about your IDE styling - go crazy!

    SQL Developer Editor Display Preferences

    Read the article

  • Entity Framework Code-First, OData & Windows Phone Client

    - by Jon Galloway
    Entity Framework Code-First is the coolest thing since sliced bread, Windows Phone is the hottest thing since Tickle-Me-Elmo, and OData is just too great to ignore. As part of the Full Stack project, we wanted to put them together, which turns out to be pretty easy... once you know how.

    EF Code-First CTP5 is available now and there should be very few breaking changes in the release edition, which is due early in 2011. Note: EF Code-First evolved rapidly and many of the existing documents and blog posts which were written with earlier versions may now be obsolete or at least misleading.

    Code-First?

    With traditional Entity Framework you start with a database and from that you generate "entities" - classes that bridge between the relational database and your object-oriented program. With Code-First (Magic-Unicorn) (see Hanselman's write-up and this later write-up by Scott Guthrie) the Entity Framework looks at classes you created and says "if I had created these classes, the database would have to have looked like this..." and creates the database for you! By deriving your entity collections from DbSet and exposing them via a class that derives from DbContext, you "turn on" database backing for your POCO with a minimum of code and no hidden designer or configuration files.

    POCO == Plain Old CLR Objects

    Your entity objects can be used throughout your applications - in web applications, console applications, Silverlight and Windows Phone applications, etc. In our case, we'll want to read and update data from a Windows Phone client application, so we'll expose the entities through a DataService and hook the Windows Phone client application to that data via proxies. Piece of pie. Easy as cake.

    The Demo Architecture

    To see this at work, we'll create an ASP.NET/MVC application which will act as the host for our Data Service. We'll create an incredibly simple data layer using EF Code-First on top of SQLCE4, and we'll expose the data in a WCF Data Service using the OData protocol. Our Windows Phone 7 client will instantiate the data context via a URI and load the data asynchronously.

    Setting up the Server project with MVC 3, EF Code First, and SQL CE 4

    Create a new application of type ASP.NET MVC 3 and name it DeadSimpleServer. We need to add the latest SQLCE4 and Entity Framework Code First CTPs to our project. Fortunately, NuGet makes that really easy. Open the Package Manager Console (View / Other Windows / Package Manager Console) and type in "Install-Package EFCodeFirst.SqlServerCompact" at the PM> command prompt. Since NuGet handles dependencies for you, you'll see that it installs everything you need to use Entity Framework Code First in your project.

        PM> install-package EFCodeFirst.SqlServerCompact
        'SQLCE (= 4.0.8435.1)' not installed. Attempting to retrieve dependency from source... Done
        'EFCodeFirst (= 0.8)' not installed. Attempting to retrieve dependency from source... Done
        'WebActivator (= 1.0.0.0)' not installed. Attempting to retrieve dependency from source... Done
        You are downloading SQLCE from Microsoft, the license agreement to which is available at http://173.203.67.148/licenses/SQLCE/EULA_ENU.rtf. Check the package for additional dependencies, which may come with their own license agreement(s). Your use of the package and dependencies constitutes your acceptance of their license agreements. If you do not accept the license agreement(s), then delete the relevant components from your device.
        Successfully installed 'SQLCE 4.0.8435.1'
        You are downloading EFCodeFirst from Microsoft, the license agreement to which is available at http://go.microsoft.com/fwlink/?LinkID=206497. Check the package for additional dependencies, which may come with their own license agreement(s). Your use of the package and dependencies constitutes your acceptance of their license agreements. If you do not accept the license agreement(s), then delete the relevant components from your device.
        Successfully installed 'EFCodeFirst 0.8'
        Successfully installed 'WebActivator 1.0.0.0'
        You are downloading EFCodeFirst.SqlServerCompact from Microsoft, the license agreement to which is available at http://173.203.67.148/licenses/SQLCE/EULA_ENU.rtf. Check the package for additional dependencies, which may come with their own license agreement(s). Your use of the package and dependencies constitutes your acceptance of their license agreements. If you do not accept the license agreement(s), then delete the relevant components from your device.
        Successfully installed 'EFCodeFirst.SqlServerCompact 0.8'
        Successfully added 'SQLCE 4.0.8435.1' to EfCodeFirst-CTP5
        Successfully added 'EFCodeFirst 0.8' to EfCodeFirst-CTP5
        Successfully added 'WebActivator 1.0.0.0' to EfCodeFirst-CTP5
        Successfully added 'EFCodeFirst.SqlServerCompact 0.8' to EfCodeFirst-CTP5

    Note: We're using SQLCE 4 with Entity Framework here because they work really well together from a development scenario, but you can of course use Entity Framework Code First with other databases supported by Entity Framework.

    Creating the Model using EF Code First

    Now we can create our model class. Right-click the Models folder and select Add / Class. Name the class Person.cs and add the following code:

        using System.Data.Entity;

        namespace DeadSimpleServer.Models
        {
            public class Person
            {
                public int ID { get; set; }
                public string Name { get; set; }
            }

            public class PersonContext : DbContext
            {
                public DbSet<Person> People { get; set; }
            }
        }

    Notice that the entity class Person has no special interfaces or base class. There's nothing special needed to make it work - it's just a POCO. The context we'll use to access the entities in the application is called PersonContext, but you could name it anything you wanted. The important thing is that it inherits DbContext and contains one or more DbSet which holds our entity collections.

    Adding Seed Data

    We need some testing data to expose from our service. The simplest way to get that into our database is to modify the CreateCeDatabaseIfNotExists class in AppStart_SQLCEEntityFramework.cs by adding some seed data to the Seed method:

        protected virtual void Seed( TContext context )
        {
            var personContext = context as PersonContext;
            personContext.People.Add( new Person { ID = 1, Name = "George Washington" } );
            personContext.People.Add( new Person { ID = 2, Name = "John Adams" } );
            personContext.People.Add( new Person { ID = 3, Name = "Thomas Jefferson" } );
            personContext.SaveChanges();
        }

    The CreateCeDatabaseIfNotExists class name is pretty self-explanatory - when our DbContext is accessed and the database isn't found, a new one will be created and populated with the data in the Seed method.

    There's one more step to make that work - we need to uncomment a line in the Start method at the top of the AppStart_SQLCEEntityFramework class and set the context name, as shown here:

        public static class AppStart_SQLCEEntityFramework
        {
            public static void Start()
            {
                DbDatabase.DefaultConnectionFactory = new SqlCeConnectionFactory("System.Data.SqlServerCe.4.0");

                // Sets the default database initialization code for working with Sql Server Compact databases
                // Uncomment this line and replace CONTEXT_NAME with the name of your DbContext if you are
                // using your DbContext to create and manage your database
                DbDatabase.SetInitializer(new CreateCeDatabaseIfNotExists<PersonContext>());
            }
        }

    Now our database and Entity Framework are set up, so we can expose data via WCF Data Services. Note: This is a bare-bones implementation with no administration screens. If you'd like to see how those are added, check out The Full Stack screencast series.

    Creating the OData Service using WCF Data Services

    Add a new WCF Data Service to the project (right-click the project / Add New Item / Web / WCF Data Service). We'll be exposing all the data as read/write. Remember to reconfigure to control and minimize access as appropriate for your own application.

    Open the code-behind for your service. In our case, the service was called PersonTestDataService.svc, so the code-behind class file is PersonTestDataService.svc.cs.

        using System.Data.Services;
        using System.Data.Services.Common;
        using System.ServiceModel;
        using DeadSimpleServer.Models;

        namespace DeadSimpleServer
        {
            [ServiceBehavior( IncludeExceptionDetailInFaults = true )]
            public class PersonTestDataService : DataService<PersonContext>
            {
                // This method is called only once to initialize service-wide policies.
                public static void InitializeService( DataServiceConfiguration config )
                {
                    config.SetEntitySetAccessRule( "*", EntitySetRights.All );
                    config.DataServiceBehavior.MaxProtocolVersion = DataServiceProtocolVersion.V2;
                    config.UseVerboseErrors = true;
                }
            }
        }

    We're enabling a few additional settings to make it easier to debug if you run into trouble. The ServiceBehavior attribute is set to include exception details in faults, and we're using verbose errors. You can remove both of these when your service is working, as your public production service shouldn't be revealing exception information.

    You can view the output of the service by running the application and browsing to http://localhost:[portnumber]/PersonTestDataService.svc/:

        <service xml:base="http://localhost:49786/PersonTestDataService.svc/"
                 xmlns:atom="http://www.w3.org/2005/Atom"
                 xmlns:app="http://www.w3.org/2007/app"
                 xmlns="http://www.w3.org/2007/app">
          <workspace>
            <atom:title>Default</atom:title>
            <collection href="People">
              <atom:title>People</atom:title>
            </collection>
          </workspace>
        </service>

    This indicates that the service exposes one collection, which is accessible by browsing to http://localhost:[portnumber]/PersonTestDataService.svc/People

        <?xml version="1.0" encoding="iso-8859-1" standalone="yes"?>
        <feed xml:base="http://localhost:49786/PersonTestDataService.svc/"
              xmlns:d="http://schemas.microsoft.com/ado/2007/08/dataservices"
              xmlns:m="http://schemas.microsoft.com/ado/2007/08/dataservices/metadata"
              xmlns="http://www.w3.org/2005/Atom">
          <title type="text">People</title>
          <id>http://localhost:49786/PersonTestDataService.svc/People</id>
          <updated>2010-12-29T01:01:50Z</updated>
          <link rel="self" title="People" href="People" />
          <entry>
            <id>http://localhost:49786/PersonTestDataService.svc/People(1)</id>
            <title type="text"></title>
            <updated>2010-12-29T01:01:50Z</updated>
            <author>
              <name />
            </author>
            <link rel="edit" title="Person" href="People(1)" />
            <category term="DeadSimpleServer.Models.Person" scheme="http://schemas.microsoft.com/ado/2007/08/dataservices/scheme" />
            <content type="application/xml">
              <m:properties>
                <d:ID m:type="Edm.Int32">1</d:ID>
                <d:Name>George Washington</d:Name>
              </m:properties>
            </content>
          </entry>
          <entry>
            ...
          </entry>
        </feed>

    Let's recap what we've done so far. But enough with services and XML - let's get this into our Windows Phone client application.

    Creating the DataServiceContext for the Client

    Use the latest DataSvcUtil.exe from http://odata.codeplex.com. As of today, that's in this download: http://odata.codeplex.com/releases/view/54698

    You need to run it with a few options:

      - /uri - This will point to the service URI. In this case, it's http://localhost:59342/PersonTestDataService.svc. Pick up the port number from your running server (e.g., the server formerly known as Cassini).
      - /out - This is the DataServiceContext class that will be generated. You can name it whatever you'd like.
      - /Version - should be set to 2.0
      - /DataServiceCollection - Include this flag to generate collections derived from the DataServiceCollection base, which brings in all the ObservableCollection goodness that handles your INotifyPropertyChanged events for you.

    Here's the console session from when we ran it:

        <ListBox x:Name="MainListBox" Margin="0,0,-12,0" ItemsSource="{Binding}" SelectionChanged="MainListBox_SelectionChanged">

    Next, to keep things simple, change the Binding on the two TextBlocks within the DataTemplate to Name and ID:

        <ListBox x:Name="MainListBox" Margin="0,0,-12,0" ItemsSource="{Binding}" SelectionChanged="MainListBox_SelectionChanged">
            <ListBox.ItemTemplate>
                <DataTemplate>
                    <StackPanel Margin="0,0,0,17" Width="432">
                        <TextBlock Text="{Binding Name}" TextWrapping="Wrap" Style="{StaticResource PhoneTextExtraLargeStyle}" />
                        <TextBlock Text="{Binding ID}" TextWrapping="Wrap" Margin="12,-6,12,0" Style="{StaticResource PhoneTextSubtleStyle}" />
                    </StackPanel>
                </DataTemplate>
            </ListBox.ItemTemplate>
        </ListBox>

    Getting the Context

    In the code-behind you'll first declare a member variable to hold the context from the Entity Framework. This is named using convention over configuration: the db type is Person and the context is of type PersonContext. You initialize it by providing the URI, in this case using the URL obtained from the Cassini web server:

        PersonContext context = new PersonContext( new Uri( "http://localhost:49786/PersonTestDataService.svc/" ) );

    Create a second member variable of type DataServiceCollection<Person>, but do not initialize it:

        DataServiceCollection<Person> people;

    In the constructor you'll initialize the DataServiceCollection using the PersonContext:

        public MainPage()
        {
            InitializeComponent();
            people = new DataServiceCollection<Person>( context );

    Finally, you'll load the people collection using the LoadAsync method, passing in the fully specified URI for the People collection in the web service:

        people.LoadAsync( new Uri( "http://localhost:49786/PersonTestDataService.svc/People" ) );

    Note that this method runs asynchronously, and when it is finished the people collection is already populated. Thus, since we didn't need or want to override any of the behavior, we don't implement LoadCompleted. You can use the LoadCompleted event if you need to do any other UI updates, but you don't need to. The final code is as shown below:

        using System;
        using System.Data.Services.Client;
        using System.Windows;
        using System.Windows.Controls;
        using DeadSimpleServer.Models;
        using Microsoft.Phone.Controls;

        namespace WindowsPhoneODataTest
        {
            public partial class MainPage : PhoneApplicationPage
            {
                PersonContext context = new PersonContext( new Uri( "http://localhost:49786/PersonTestDataService.svc/" ) );
                DataServiceCollection<Person> people;

                // Constructor
                public MainPage()
                {
                    InitializeComponent();

                    // Set the data context of the listbox control to the sample data
                    // DataContext = App.ViewModel;
                    people = new DataServiceCollection<Person>( context );
                    people.LoadAsync( new Uri( "http://localhost:49786/PersonTestDataService.svc/People" ) );
                    DataContext = people;
                    this.Loaded += new RoutedEventHandler( MainPage_Loaded );
                }

                // Handle selection changed on ListBox
                private void MainListBox_SelectionChanged( object sender, SelectionChangedEventArgs e )
                {
                    // If selected index is -1 (no selection) do nothing
                    if ( MainListBox.SelectedIndex == -1 )
                        return;

                    // Navigate to the new page
                    NavigationService.Navigate( new Uri( "/DetailsPage.xaml?selectedItem=" + MainListBox.SelectedIndex, UriKind.Relative ) );

                    // Reset selected index to -1 (no selection)
                    MainListBox.SelectedIndex = -1;
                }

                // Load data for the ViewModel Items
                private void MainPage_Loaded( object sender, RoutedEventArgs e )
                {
                    if ( !App.ViewModel.IsDataLoaded )
                    {
                        App.ViewModel.LoadData();
                    }
                }
            }
        }

    With people populated, we can set it as the DataContext and run the application; you'll find that the Name and ID are displayed in the list on the MainPage. Here's how the pieces in the client fit together:

    Complete source code available here

    Read the article

  • Oracle Data Mining a Star Schema: Telco Churn Case Study

    - by charlie.berger
    There is a complete and detailed Telco Churn case study "How to" blog series just posted by Ari Mozes, ODM Dev. Manager. In it, Ari provides detailed guidance on how to leverage various strengths of Oracle Data Mining, including the ability to:

      - mine star schemas, joining tables and views together to obtain a complete 360-degree view of a customer
      - combine transactional data, e.g. call detail record (CDR) data, etc.
      - define complex data transformation, model build, and model deploy analytical methodologies inside the Database

    His blog is posted in a multi-part series. Below are some opening excerpts from the first 3 blog entries. This is an excellent resource for any novice to skilled data miner who wants to gain competitive advantage by mining their data inside the Oracle Database. Many thanks, Ari!

    Mining a Star Schema: Telco Churn Case Study (1 of 3)

    One of the strengths of Oracle Data Mining is the ability to mine star schemas with minimal effort. Star schemas are commonly used in relational databases, and they often contain rich data with interesting patterns. While dimension tables may contain interesting demographics, fact tables will often contain user behavior, such as phone usage or purchase patterns. Both of these aspects - demographics and usage patterns - can provide insight into behavior. Churn is a critical problem in the telecommunications industry, and companies go to great lengths to reduce the churn of their customer base. One case study [1] describes a telecommunications scenario involving understanding, and identification of, churn, where the underlying data is present in a star schema. That case study is a good example for demonstrating just how natural it is for Oracle Data Mining to analyze a star schema, so it will be used as the basis for this series of posts...

    Mining a Star Schema: Telco Churn Case Study (2 of 3)

    This post will follow the transformation steps as described in the case study, but will use Oracle SQL as the means for preparing data. Please see the previous post for background material, including links to the case study and to scripts that can be used to replicate the stages in these posts.

    1) Handling missing values for call data records

    The CDR_T table records the number of phone minutes used by a customer per month and per call type (tariff). For example, the table may contain one record corresponding to the number of peak (call type) minutes in January for a specific customer, and another record associated with international calls in March for the same customer. This table is likely to be fairly dense (most type-month combinations for a given customer will be present) due to the coarse level of aggregation, but there may be some missing values. Missing entries may occur for a number of reasons: the customer made no calls of a particular type in a particular month, the customer switched providers during the timeframe, or perhaps there is a data entry problem. In the first situation, the correct interpretation of a missing entry would be to assume that the number of minutes for the type-month combination is zero. In the other situations, it is not appropriate to assume zero, but rather derive some representative value to replace the missing entries. The referenced case study takes the latter approach. The data is segmented by customer and call type, and within a given customer-call type combination, an average number of minutes is computed and used as a replacement value.

    In SQL, we need to generate additional rows for the missing entries and populate those rows with appropriate values. To generate the missing rows, Oracle's partition outer join feature is a perfect fit.

        select cust_id, cdre.tariff, cdre.month, mins
        from cdr_t cdr partition by (cust_id)
             right outer join
             (select distinct tariff, month from cdr_t) cdre
             on (cdr.month = cdre.month and cdr.tariff = cdre.tariff);

    ...

    Mining a Star Schema: Telco Churn Case Study (3 of 3)

    Now that the "difficult" work is complete - preparing the data - we can move to building a predictive model to help identify and understand churn. The case study suggests that separate models be built for different customer segments (high, medium, low, and very low value customer groups). To reduce the data to a single segment, a filter can be applied:

        create or replace view churn_data_high as
        select * from churn_prep where value_band = 'HIGH';

    It is simple to take a quick look at the predictive aspects of the data on a univariate basis. While this does not capture the more complex multi-variate effects as would occur with the full-blown data mining algorithms, it can give a quick feel as to the predictive aspects of the data as well as validate the data preparation steps. Oracle Data Mining includes a predictive analytics package which enables quick analysis.

        begin
          dbms_predictive_analytics.explain(
            'churn_data_high', 'churn_m6', 'expl_churn_tab');
        end;
        /
        select * from expl_churn_tab where rank <= 5 order by rank;

        ATTRIBUTE_NAME       ATTRIBUTE_SUBNAME EXPLANATORY_VALUE RANK
        -------------------- ----------------- ----------------- ----
        LOS_BAND                                      .069167052    1
        MINS_PER_TARIFF_MON  PEAK-5                   .034881648    2
        REV_PER_MON          REV-5                    .034527798    3
        DROPPED_CALLS                                 .028110322    4
        MINS_PER_TARIFF_MON  PEAK-4                   .024698149    5

    From the above results, it is clear that some predictors do contain information to help identify churn (explanatory value > 0). The strongest univariate predictor of churn appears to be the customer's (binned) length of service. The second strongest churn indicator appears to be the number of peak minutes used in the most recent month. The subname column contains the interior piece of the DM_NESTED_NUMERICALS column described in the previous post. By using the object-relational approach, many related predictors are included within a single top-level column.

    NOTE: These are just EXCERPTS. Click here to start reading the Oracle Data Mining a Star Schema: Telco Churn Case Study from the beginning.

    Read the article

  • SSIS Lookup component tuning tips

    - by jamiet
    Yesterday evening I attended a London meeting of the UK SQL Server User Group at Microsoft's offices in London Victoria. As usual it was both a fun and informative evening, and in particular there seemed to be a few questions arising about tuning the SSIS Lookup component; I rattled off some comments and figured it would be prudent to drop some of them into a dedicated blog post, hence the one you are reading right now.

    Scene setting

    A popular pattern in SSIS is to use a Lookup component to determine whether a record in the pipeline already exists in the intended destination table or not, and I cover this pattern in my 2006 blog post Checking if a row exists and if it does, has it changed? (note to self: must rewrite that blog post for SSIS2008). Fundamentally the SSIS Lookup component (when using the FullCache option) sucks some data out of a database and holds it in memory so that it can be compared to data in the pipeline.

    One of the big benefits of using SSIS dataflows is that they process data one buffer at a time; that means that not all of the data from your source exists in the dataflow at the same time, and is why a SSIS dataflow can process data volumes that far exceed the available memory. However, that only applies to data in the pipeline; for reasons that are hopefully obvious, ALL of the data in the lookup set must exist in the memory cache for the duration of the dataflow's execution, which means that any memory used by the lookup cache will not be available to be used as a pipeline buffer. Moreover, there's an obvious correlation between the amount of data in the lookup cache and the time it takes to charge that cache; the more data you have, the longer it will take to charge and the longer you have to wait until the dataflow actually starts to do anything. For these reasons your goal is simple: ensure that the lookup cache contains as little data as possible.

    General tips

    Here is a simple tick list you can follow in order to tune your lookups:

      - Use a SQL statement to charge your cache, don't just pick a table from the dropdown list made available to you. (Read why in SELECT *... or select from a dropdown in an OLE DB Source component?)
      - Only pick the columns that you need, ignore everything else.
      - Make the database columns that your cache is populated from as narrow as possible. If a column is defined as VARCHAR(20) then SSIS will allocate 20 bytes for every value in that column - that is a big waste if the actual values are significantly less than 20 characters in length.
      - Do you need DT_WSTR typed columns or will DT_STR suffice? DT_WSTR uses twice the amount of space to hold values that can be stored using a DT_STR, so if you can use DT_STR, consider doing so. The same principle goes for the numerical datatypes DT_I2/DT_I4/DT_I8.
      - Only populate the cache with data that you KNOW you will need. In other words, think about your WHERE clause!

    Thinking outside the box

    It is tempting to build a large monolithic dataflow that does many things, one of which is a Lookup. Often, though, you can make better use of your available resources by, well, mixing things up a little, and here are a few ideas to get your creative juices flowing:

      - There is no rule that says everything has to happen in a single dataflow. If you have some particularly resource-intensive lookups then consider putting that lookup into a dataflow all of its own and using raw files to pass the pipeline data in and out of that dataflow.
      - Know your data. If you think, for example, that the majority of your incoming rows will match with only a small subset of your lookup data, then consider chaining multiple lookup components together; the first would use a FullCache containing that data subset, and the remaining data that doesn't find a match could be passed to a second lookup that perhaps uses a NoCache lookup, thus negating the need to pull all of that least-used lookup data into memory.
      - Do you need to process all of your incoming data all at once? If you can process different partitions of your data separately then you can partition your lookup cache as well. For example, if you are using a lookup to convert a location into a [LocationId], then why not process your data one region at a time? This will mean your lookup cache only has to contain data for the location that you are currently processing, and with the ability of the Lookup in SSIS2008 and beyond to charge the cache using a dynamically built SQL statement, you'll be able to achieve it using the same dataflow and simply loop over it using a ForEach loop.
      - Taking the previous data partitioning idea further... a dataflow can contain more than one data path, so why not split your data using a conditional split component and, again, charge your lookup caches with only the data that they need for that partition.
      - Lookups have two uses: to (1) find a matching row from the lookup set and (2) put attributes from that matching row into the pipeline. Ask yourself, do you need to do these two things at the same time? After all, once you have the key column(s) from your lookup set, you can use that key to get the rest of the attributes further downstream, perhaps even in another dataflow.
      - Are you using the same lookup data set multiple times? If so, consider the file caching option in SSIS 2008 and beyond.

    Above all, experiment and be creative with different combinations. You may be surprised at what works.

    Final thoughts

    If you want to know more about how the Lookup component differs in SSIS2008 from SSIS2005, then I have a dedicated blog post about that at Lookup component gets a makeover.

    I am on a mini-crusade at the moment to get a BULK MERGE feature into the database engine, the thinking being that if the database engine can quickly merge massive amounts of data in a similar manner to how it can insert massive amounts using BULK INSERT, then that's a lot of work that wouldn't have to be done in the SSIS pipeline. If you think that is a good idea, then go and vote for BULK MERGE on Connect.

    If you have any other tips to share then please stick them in the comments. Hope this helps!

    @Jamiet

    Read the article
