Search Results

Search found 74473 results on 2979 pages for 'data table'.


  • Is it possible to have a wireless in-house NAS with wireless data transfer rates equivalent to SATA speeds?

    - by techaddict
    Basically I would like to know if it is possible to set up a NAS in my house, accessed wirelessly, that can reach real-life data transfer speeds equivalent to USB 3.0 or an internal SATA hard drive. I have been wanting to do this for some time (a couple of years now). Basically, this is what I want to do: plug in a number of hard drives in an array, somewhere in my house, to be left plugged in and never monitored; ideally several terabytes. Whenever I am home, my computer and laptop should be configured to automatically find the NAS, as easily as plugging in an external hard drive, except completely wirelessly. Data transfer needs to be as seamless and quick as having added another internal hard drive in my laptop. Moreover, data should be accessible without having to copy it over: I should be able to wirelessly browse the NAS and open files directly from it. For example, say I wanted to open a video; I should be able to play the video that is located on the NAS, directly from the NAS, completely wirelessly. If I wanted to open a .pdf file, I should be able to open it and read it directly from the NAS, as if it were located on my physical internal hard drive. Cost is important as well. Please tell me what equipment I need for this to be possible. I know there are geniuses out there who can tell me whether this is possible.

    Read the article

  • SQLAuthority News – List of Master Data Services White Paper

    - by pinaldave
    Since my TechEd India 2010 presentation, I have been very excited about SQL Server 2008 R2 Master Data Services (MDS). I just came across some very interesting white papers on the Microsoft site related to this subject. Here is the list, along with where you can download them. They are all written by top experts at Microsoft.
    - Master Data Management from a Business Perspective - Download a PDF version or an XPS version
    - Master Data Management from a Technical Perspective - Download a PDF version or an XPS version
    - Bringing Master Data Management to the Stakeholders - Download a PDF version or an XPS version
    - Implementing a Phased Approach to Master Data Management - Download a PDF version or an XPS version
    - SharePoint Workflow Integration with Master Data Services - Read it here.
    Reference: Pinal Dave (http://blog.SQLAuthority.com) Filed under: SQL, SQL Authority, SQL Documentation, SQL Download, SQL Query, SQL Server, SQL Tips and Tricks, SQL White Papers, T SQL

    Read the article

  • Offloading (Some) EBS 12 Reporting to Active Data Guard Instances

    - by Steven Chan
    For most Oracle Database users, Oracle Active Data Guard allows users to:
    - Create a physical standby database for business continuity and disaster recovery
    - Offload reporting from the production database to the read-only physical standby database
    E-Business Suite customers have been able to use Active Data Guard to create physical standby databases for their EBS environments since the feature was introduced with the 11g Database.  EBS sysadmins can use the generic Active Data Guard documentation to take advantage of the Active Data Guard standby database capabilities.  I am pleased to announce that it is now possible to offload a subset of ReportWriter-based reports -- but not all -- from a production EBS environment to an Active Data Guard physical standby database.  But before I go into the details of this newly-certified configuration, it's necessary to understand some details about what happens whenever someone attempts to access the E-Business Suite.

    Read the article

  • Pre-filtering and shaping OData feeds using WCF Data Services and the Entity Framework - Part 2

    - by rajbk
    In the previous post, you saw how to create an OData feed and pre-filter the data. In this post, we will see how to shape the data. A sample project is attached at the bottom of this post. (Part 1: Pre-filtering and shaping OData feeds using WCF Data Services and the Entity Framework - Part 1)

    Shaping the feed: The Product feed we created earlier returns too much information about our products. Let's change this so that only the following properties are returned: ProductID, ProductName, QuantityPerUnit, UnitPrice, UnitsInStock. We also want to return only Products that are not discontinued.

    Splitting the Entity: To shape our data according to the requirements above, we are going to split our Product entity into two and expose one through the feed. The exposed entity will contain only the properties listed above. We will use the other entity in our Query Interceptor to pre-filter the data so that discontinued products are not returned.

    Go to the design surface for the Entity Model and make a copy of the Product entity; a "Product1" entity gets created. Rename Product1 to ProductDetail. Right click on the Product entity, select "Add Association", and make a one to one association between Product and ProductDetail. Keep only the properties we wish to expose on the Product entity and delete all the others (you delete a property on an entity by right clicking on the property and selecting "Delete"). Keep the ProductID on ProductDetail, and delete any other property on the ProductDetail entity that is already present in the Product entity.

    Mapping Entity to Database Tables: Right click on "ProductDetail", go to "Table Mapping", and add a mapping to the "Products" table in the Mapping Details. Then add a referential constraint, which is similar to a referential integrity constraint in SQL: double click on the association between the entities and add the constraint with "Principal" set to "Product".

    Let us review what we did so far:
    - We made a copy of the Product entity and called it ProductDetail.
    - We created a one to one association between these entities.
    - Excluding the ProductID, we made sure properties were not duplicated between these entities.
    - We added a ProductDetail entity to Products table mapping (Entity to Database).
    - We added a referential constraint between the entities.

    Let's build our project. We get the following error: "'NortwindODataFeed.Product' does not contain a definition for 'Discontinued' and no extension method 'Discontinued' accepting a first argument of type 'NortwindODataFeed.Product' could be found ..." The reason for this error is that our Product entity no longer has a "Discontinued" property; we "moved" it to the ProductDetail entity, since we want our Product entity to contain only properties that will be exposed by our feed. Since we have a one to one association between the entities, we can easily rewrite our Query Interceptor like so:

        [QueryInterceptor("Products")]
        public Expression<Func<Product, bool>> OnReadProducts()
        {
            return o => o.ProductDetail.Discontinued == false;
        }

    Similarly, all "hidden" properties of the Product table are available to us internally (through the ProductDetail entity) for any additional logic we wish to implement. Compile the project and view the feed: it returns only the properties that were part of the requirement.

    To see the data in JSON format, you have to create a request with the request header Accept: application/json, text/javascript, */* (easy to do in jQuery). The result should look like this:

        { "d" : { "results": [
            { "__metadata": { "uri": "http://localhost:2576/DataService.svc/Products(1)", "type": "NorthwindModel.Product" },
              "ProductID": 1, "ProductName": "Chai", "QuantityPerUnit": "10 boxes x 20 bags", "UnitPrice": "18.0000", "UnitsInStock": 39 },
            { "__metadata": { "uri": "http://localhost:2576/DataService.svc/Products(2)", "type": "NorthwindModel.Product" },
              "ProductID": 2, "ProductName": "Chang", "QuantityPerUnit": "24 - 12 oz bottles", "UnitPrice": "19.0000", "UnitsInStock": 17 },
            ...

    If anyone has the $format query option working, please post a comment; it was not working for me at the time of writing. We have successfully pre-filtered our data to expose only products that have not been discontinued, and shaped our data so that only certain properties of the entity are exposed. Note that there are several other ways you could implement this, such as creating a QueryView, a Stored Procedure, or a DefiningQuery. You have seen how easy it is to create an OData feed, shape the data, and pre-filter it while hardly writing any code of your own. For more details on OData, Google it with your favorite search engine :-) Also check out one of the most passionate persons I have ever met, Pablo Castro, the architect of Astoria (WCF Data Services); watch his MIX 2010 presentation titled "OData: There's a Feed for That" here. Download the sample project for VS 2010 RTM: NortwindODataFeed.zip
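
    For anyone who prefers to test the JSON output outside the browser, here is a minimal C# sketch (not part of the sample project; the port number is the one used above and may differ on your machine) that requests the feed with the same Accept header:

        using System;
        using System.Net.Http;
        using System.Threading.Tasks;

        class FeedClient
        {
            static async Task Main()
            {
                using (var client = new HttpClient())
                {
                    // Ask the data service for JSON instead of the default Atom feed.
                    client.DefaultRequestHeaders.Add("Accept", "application/json, text/javascript, */*");
                    string json = await client.GetStringAsync("http://localhost:2576/DataService.svc/Products");
                    Console.WriteLine(json);
                }
            }
        }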

    Read the article

  • SQL Server Data Tools CTP4 Released!

    - by hassanfadili
    The SQL Server team has released the new SQL Server Data Tools CTP4. Congratulations and thanks to Gert Drapers and his team on this great milestone. To learn more about this SSDT CTP4 release, check:
    - What's new in SQL Server Data Tools CTP4? http://blogs.msdn.com/b/ssdt/archive/2011/11/21/what-s-new-in-sql-server-data-tools-ctp4.aspx
    - SQL Server Data Tools CTP4 vs. VS2010 Database Projects http://blogs.msdn.com/b/ssdt/archive/2011/11/21/sql-server-data-tools-ctp4-vs-vs2010-database-projects.aspx
    - Top VSDB->SSDT Project Conversion Issues http://blogs.msdn.com/b/ssdt/archive/2011/11/21/top-vsdb-gt-ssdt-project-conversion-issues.aspx
    - Uninstalling SQL Server Developer Tools CTP3 (Code-named "Juneau") http://blogs.msdn.com/b/ssdt/archive/2011/11/21/uninstalling-ssdt-ctp3-code-named-juneau.aspx (this one points to a nifty PowerShell script to help you uninstall)
    Have fun.

    Read the article

  • ODI 11g - Oracle Data Integrator 11g – A Hands-On Tutorial

    - by David Allan
    I have been asked by Packt Publishing to review a brand new book on Oracle Data Integrator: Getting Started with Oracle Data Integrator 11g – A Hands-On Tutorial. While waiting for the book to arrive and to see what goodies are inside, I'll blog a review later. The book can be found at Oracle Data Integrator 11g – A Hands-On Tutorial. Looking at the table of contents, it looks like it gives a good, broad introduction (including various data formats) to the product:
    Chapter 1: Product Overview
    Chapter 2: Product Installation
    Chapter 3: Using Variables
    Chapter 4: ODI Sources, Targets, and Knowledge Modules
    Chapter 5: Working with Databases
    Chapter 6: Working with MySQL
    Chapter 7: Working with Microsoft SQL Server
    Chapter 8: Integrating File Data
    Chapter 9: Working with XML Files
    Chapter 10: Creating Workflows—Packages and Load Plans
    Chapter 11: Error Management
    Chapter 12: Managing and Monitoring ODI Components
    Chapter 13: Concluding Remarks
    Looking forward to it.

    Read the article

  • SSIS Snack: Data Flow Source Adapters

    - by andyleonard
    Introduction: Configuring a Source Adapter in a Data Flow Task couples (binds) the Data Flow to an external schema. This has implications for dynamic data loads. "Why Can't I...?": I'm often asked a question similar to the following: "I have 17 flat files with different schemas that I want to load to the same destination database - how many Data Flow Tasks do I need?" I reply "17 different schemas? That's easy, you need 17 Data Flow Tasks." In his book Microsoft SQL Server 2005 Integration Services...(read more)

    Read the article

  • Oracle Database 11g Helps Control Exponential Data Growth

    - by [email protected]
    The 2010 ESG annual customer survey is now available. As part of it, ESG interviewed 300 customers about their IT priorities and, unsurprisingly, "Manage Data Growth" is top of the list. Perhaps less self-evident is the proposed solution to target this prime concern: "Often overlooked because it is a database platform, Oracle Database 11g offers additional capabilities such as automatic storage management (ASM), advanced data compression, and data protection that make managing data growth much easier for organizations of any size." The paper goes on to discuss these capabilities and highlights their potential benefits. Oracle Database 11g Helps Control Exponential Database Growth - a worthwhile read for anyone having to deal with rapidly increasing amounts of data. Download your free copy here.

    Read the article

  • Does software testing methodology rely on flawed data?

    - by Konrad Rudolph
    It’s a well-known fact in software engineering that the cost of fixing a bug increases exponentially the later in development that bug is discovered. This is supported by data published in Code Complete and adapted in numerous other publications. However, it turns out that this data never existed. The data cited by Code Complete apparently does not show such a cost / development time correlation, and similar published tables only showed the correlation in some special cases and a flat curve in others (i.e. no increase in cost). Is there any independent data to corroborate or refute this? And if true (i.e. if there simply is no data to support this exponentially higher cost for late discovered bugs), how does this impact software development methodology?

    Read the article

  • Is data integrity possible without normalization?

    - by shuniar
    I am working on an application that requires the storage of location information such as city, state, zip code, latitude, and longitude. I would like to ensure:
    - Location data is accurate: "Detroit, CA" (Detroit IS NOT in California); "Detroit, MI" (Detroit IS in Michigan)
    - Cities and states are spelled correctly: California, not Calefornia; Detroit, not Detriot
    - Cities and states are named consistently: Valid: CA, Detroit. Invalid: Cali, california, DET, d-town, The D
    Also, since city/zip data is not guaranteed to be static, updating this data in a normalized fashion could be difficult, whereas it could be implemented as a de facto location if it is denormalized. A couple of thoughts that come to mind:
    - A collection of reference tables that store a list of all states and the most common cities and zip codes, and that can grow over time. It would search the database for an exact or similar match and recommend corrections.
    - Use some sort of service to validate the location data before it is stored in the database.
    Is it possible to fulfill these requirements without normalization, and if so, should I denormalize this data?
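
    To make the reference-table idea concrete, here is a rough C# sketch (the seed data, names, and edit-distance threshold are all hypothetical): validate against a known city/state list and recommend the closest known spelling when an exact match fails.

        using System;
        using System.Collections.Generic;
        using System.Linq;

        static class LocationValidator
        {
            // Hypothetical reference data; in practice this would be a database table
            // seeded from a postal data set and grown over time.
            static readonly HashSet<(string City, string State)> Known =
                new HashSet<(string, string)> { ("Detroit", "MI"), ("San Francisco", "CA") };

            // Exact-match check: IsValid("Detroit", "CA") -> false, IsValid("Detroit", "MI") -> true.
            public static bool IsValid(string city, string state) =>
                Known.Contains((city, state));

            // Recommend the closest known city name: Suggest("Detriot") -> "Detroit".
            public static string Suggest(string city) =>
                Known.Select(k => k.City)
                     .OrderBy(c => Distance(c.ToLower(), city.ToLower()))
                     .First();

            // Plain Levenshtein edit distance between two strings.
            static int Distance(string a, string b)
            {
                var d = new int[a.Length + 1, b.Length + 1];
                for (int i = 0; i <= a.Length; i++) d[i, 0] = i;
                for (int j = 0; j <= b.Length; j++) d[0, j] = j;
                for (int i = 1; i <= a.Length; i++)
                    for (int j = 1; j <= b.Length; j++)
                        d[i, j] = Math.Min(Math.Min(d[i - 1, j] + 1, d[i, j - 1] + 1),
                                           d[i - 1, j - 1] + (a[i - 1] == b[j - 1] ? 0 : 1));
                return d[a.Length, b.Length];
            }
        }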

    Read the article

  • How to define cell width for 2 HTML tables with different column counts?

    - by DaveDev
    If I have 2 tables:

        <table id="Table1">
            <tr>
                <td></td><td></td><td></td>
            </tr>
        </table>

        <table id="Table2">
            <tr>
                <td></td><td></td><td></td><td></td>
            </tr>
        </table>

    The first has 3 columns, the second has 4 columns. How can I define a style to represent both tables when I want Table1's cells to be 1/3 the width of the full table, and Table2's cells to be 1/4 the width of the table?

    Read the article

  • How can an SQL relational database be used to model a thesaurus? [closed]

    - by Miles O'Keefe
    I would like to design a web app that functions as a simple thesaurus: a long list of words with attributes, all of which are linked to each other. This thesaurus data model can be defined as: a controlled vocabulary arranged in a known order in which equivalence, hierarchical, and associative relationships among terms are clearly displayed and identified by standardized relationship indicators. My idea so far is to have one database in which every word is a table, and every table contains all words related to that word, e.g.:

        Thesaurus (database)
            happy (table)
                excited (row) | cheerful (row) | lively (row)

    Is there a more efficient way to store words and their relationships to other words in a relational SQL database?
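
    For contrast, the usual normalized shape stores each word once and each relationship as a row, instead of one table per word. A minimal C# sketch of that design (the connection string and names are placeholders, and the relationship vocabulary is just an assumption based on the equivalence/hierarchical/associative categories above):

        using System.Data.SqlClient;

        class ThesaurusSchema
        {
            // One row per word, one row per (word, word, relationship) triple.
            const string Ddl = @"
                CREATE TABLE Words (
                    Id   INT IDENTITY PRIMARY KEY,
                    Term NVARCHAR(100) NOT NULL UNIQUE
                );
                CREATE TABLE WordRelations (
                    WordId        INT NOT NULL REFERENCES Words(Id),
                    RelatedWordId INT NOT NULL REFERENCES Words(Id),
                    Relationship  NVARCHAR(20) NOT NULL, -- 'equivalent', 'broader', 'narrower', 'associative'
                    PRIMARY KEY (WordId, RelatedWordId, Relationship)
                );";

            static void Main()
            {
                // Placeholder connection string; point it at your own database.
                using (var conn = new SqlConnection("Server=.;Database=Thesaurus;Integrated Security=true"))
                {
                    conn.Open();
                    using (var cmd = new SqlCommand(Ddl, conn))
                        cmd.ExecuteNonQuery();
                }
            }
        }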

    Read the article

  • Colspan in IE7/8 not respected

    - by Stefan Kendall
    The DOM looks like this:

        <table>
            <tr>
                <td>a</td>...<td>g</td>
            </tr>
            <tr>
                <td colspan="3">
                    <table>
                    ...
                    </table>
                </td>
            </tr>
            <tr>
                <td></td>...<td></td>
            </tr>
        </table>

    Any idea why this wouldn't work in IE? I tried setting width:auto on the TD holding the inner table, and table-layout:fixed isn't viable because the tabular data is generated dynamically. What could be going wrong?

    Read the article

  • How to enable SSIS as a data source type on SQL Server Reporting Services SSRS 2008 R2

    When you create a data source in SSRS 2008 R2 (Nov CTP), you won't be able to get SSIS listed as a data source type. Therefore applications that are already using it as a data source, or applications that require it as a data source, get stuck. Let's learn how to enable SSIS and get it listed back as a data source in SSRS 2008 R2.

    Read the article

  • Transparent Data Encryption

    Transparent Data Encryption is designed to protect data by encrypting the physical files of the database, rather than the data itself. Its main purpose is to prevent unauthorized access to the data by restoring the files to another server. With Transparent Data Encryption in place, this requires the original encryption certificate and master key. It was introduced in the Enterprise edition of SQL Server 2008. John Magnabosco explains fully, and guides you through the process of setting it up.
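
    As a taste of the setup process the article walks through, this is roughly the standard T-SQL bootstrap sequence for TDE on SQL Server 2008+, wrapped in a small C# runner; the database name, certificate name, password, and connection string are all placeholders, so treat it as a sketch rather than a recipe:

        using System.Data.SqlClient;

        class TdeSetup
        {
            // The usual TDE bootstrap: master key, certificate, database encryption key, then enable.
            static readonly string[] Steps =
            {
                @"USE master;
                  CREATE MASTER KEY ENCRYPTION BY PASSWORD = '<StrongPassword>';",
                @"USE master;
                  CREATE CERTIFICATE TdeCert WITH SUBJECT = 'TDE certificate';",
                @"USE MyDatabase;
                  CREATE DATABASE ENCRYPTION KEY
                  WITH ALGORITHM = AES_256
                  ENCRYPTION BY SERVER CERTIFICATE TdeCert;",
                @"ALTER DATABASE MyDatabase SET ENCRYPTION ON;"
            };

            static void Main()
            {
                using (var conn = new SqlConnection("Server=.;Integrated Security=true"))
                {
                    conn.Open();
                    foreach (var sql in Steps)
                        using (var cmd = new SqlCommand(sql, conn))
                            cmd.ExecuteNonQuery();
                }
                // Back up the certificate and its private key immediately;
                // without them the encrypted database cannot be restored elsewhere.
            }
        }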

    Read the article

  • Google I/O 2010 - Data migration in App Engine

    Google I/O 2010 - Data migration in App Engine (App Engine 201, presented by Matthew Blain). Learn about the App Engine bulk loader and see an example of migrating data from an external data source into the App Engine datastore--and back out. Do you have data stored in a traditional, relational DB which you'd like to upload to App Engine? This session will teach you how. For all I/O 2010 sessions, please go to code.google.com. From: GoogleDevelopers. Time: 44:26.

    Read the article

  • Forbes Article on Big Data and Java Embedded Technology

    - by hinkmond
    Whoa, cool! Forbes magazine has an online article about what I've been blogging about all this time: Big Data and Java Embedded Technology, tying it all together with a big bow, connecting small devices to the data center. See: Billions of Java Embedded Devices. Here's a quote: "By the end of the decade we could see tens of billions of new Internet-connected devices... with billions of Internet-connected devices generating Big Data, [these] are the next big thing. ... That's why Oracle has put together an ecosystem of solutions for this new, Big Data-oriented device-to-data center world: secure, powerful, and adaptable embedded Java for intelligent devices, integrated middleware..." This is the next big thing. Java SE Embedded Technology is something to watch for in the new year. Start developing for it now to get a head start... Hinkmond

    Read the article

  • Lua class instance with nested tables

    - by Anonnobody
    Hello, I have a simple Lua game with a simple class like so:

        creature = class({
            name = "MONSTER BADDY!",
            stats = {
                power = 10,
                agility = 10,
                endurance = 10,
                filters = {}
            },
            other_things = ...
        })

        creatureA = creature.new()
        creatureB = creature.new()

        creatureA.name = "Frank"
        creatureB.name = "Zappa"

        creatureA.stats.agility = 20
        creatureB.stats.power = 12
        -- blah blah blah

    Non-table values are individual to each instance, but table values are shared among all instances, and if I modify a stats.X value in one instance, all other instances see the same stats table.
    Q1: Is my OO implementation flawed? I tried LOOP and the same result occurred; is there a fundamental flaw in my logic?
    Q2: How would you have each instance of creature get its own stats table (and sub-tables)?
    PS. I cannot flatten my class table as it's a bit more complicated than the example, and other parts of the code are simplified by this nested-table implementation. Thanks a bunch.

    Read the article

  • iPhone: Grouped tables and navigation controller issues

    - by Jack Griffiths
    Hi there, I've set up a grouped table on my app, and pushing to new views works fine, except for one thing: the titles and stack layout are all weird. Here's the breakdown: I have two sections in my table. When I tap on the first row in the first section, it takes me to the correct view, but the title of the new view is the name of the first row in the second section of the table. In turn, the second row in the first section gets the second row in the second section's title. If I tap on the second row in the second section of the root view table, the navigation button goes to the second row in the first section of the table. So here's a diagram of my table:

        Table Section 1
            Row 1
            Row 2
            Row 3
        Table Section 2
            Row A
            Row B
            Row C

    So if I tap on Row 3, the title of the pushed view is Row C. The navigation button tells me to go back to Row 3, eventually ending up at the root view. Here's my implementation file pushing the views:

        - (void)tableView:(UITableView *)tableView didSelectRowAtIndexPath:(NSIndexPath *)indexPath {
            // CSS
            if ([[arryClientSide objectAtIndex:indexPath.row] isEqual:@"CSS"]) {
                CSSViewController *css = [[CSSViewController alloc] initWithNibName:@"CSSViewController" bundle:nil];
                [css setTitle:@"CSS"];
                [self.navigationController pushViewController:css animated:YES];
            }
            // HTML
            if ([[arryClientSide objectAtIndex:indexPath.row] isEqual:@"HTML"]) {
                HTMLViewController *html = [[HTMLViewController alloc] initWithNibName:@"HTMLViewController" bundle:nil];
                [html setTitle:@"HTML"];
                [self.navigationController pushViewController:html animated:YES];
            }
            // JS
            if ([[arryClientSide objectAtIndex:indexPath.row] isEqual:@"JavaScript"]) {
                JSViewController *js = [[JSViewController alloc] initWithNibName:@"JSViewController" bundle:nil];
                [js setTitle:@"JavaScript"];
                [self.navigationController pushViewController:js animated:YES];
            }
            // PHP
            if ([[arryServerSide objectAtIndex:indexPath.row] isEqual:@"PHP"]) {
                PHPViewController *php = [[PHPViewController alloc] initWithNibName:@"PHPViewController" bundle:nil];
                [php setTitle:@"PHP"];
                [self.navigationController pushViewController:php animated:YES];
            }
            // SQL
            if ([[arryServerSide objectAtIndex:indexPath.row] isEqual:@"SQL"]) {
                SQLViewController *sql = [[SQLViewController alloc] initWithNibName:@"SQLViewController" bundle:nil];
                [sql setTitle:@"SQL"];
                [self.navigationController pushViewController:sql animated:YES];
            }
        }

    and the array feeding the table's data:

        - (void)viewDidLoad {
            [super viewDidLoad];
            arryClientSide = [[NSArray alloc] initWithObjects:@"CSS", @"HTML", @"JavaScript", nil];
            arryServerSide = [[NSArray alloc] initWithObjects:@"Objective-C", @"PHP", @"SQL", nil];
            // arryResources = [[NSArray alloc] initWithObjects:@"HTML Colour Codes", @"Useful Resources", @"About", nil];
            self.title = @"Select a Language";
            // Uncomment the following line to display an Edit button in the navigation bar for this view controller.
            // self.navigationItem.rightBarButtonItem = self.editButtonItem;
        }

    Any help would be greatly appreciated. Jack

    Read the article

  • R: How can I use apply on rows of a data.frame and get out $column_name?

    - by John
    I'm trying to access $a using the following example:

        df <- data.frame(a=c("x","x","y","y"), b=c(1,2,3,4))
        > df
          a b
        1 x 1
        2 x 2
        3 y 3
        4 y 4

        test_fun <- function (data.frame_in) {
            print(data.frame_in[1])
        }

    I can now access $a if I use an index for the first column:

        > apply(df, 1, test_fun)
          a
        "x"
          a
        "x"
          a
        "y"
          a
        "y"
        [1] "x" "x" "y" "y"

    But I cannot access column $a with the $ notation (error: "$ operator is invalid for atomic vectors"):

        test_fun_2 <- function (data.frame_in) {
            print(data.frame_in$a)
        }
        > apply(df, 1, test_fun_2)
        Error in data.frame_in$a : $ operator is invalid for atomic vectors

    Is this not possible?

    Read the article

  • Using one data source across multiple views in Kendo UI SPA

    - by user3731783
    I am trying to build a Kendo UI SPA. I have two views. View 1 (appListView) shows application details in a grid, and view 2 (activityView) will have a dropdown for application names and a grid that shows the activity for the selected application.

    As I am loading all the application details on the loading of view 1, I would like to re-use those details to populate the dropdown on view 2. Please see my code below. Everything works fine, but when I go to view 2 it makes a call to the service again to get the application details. I would like to use the existing data if it is already loaded; if the user comes to view 2 directly, then it should also get the application data. I am not sure what I am missing in the code.

    View markup:

        <script id="appListView" type="text/x-kendo-template">
            <h3 data-bind="html: displayName"></h3>
            <div data-role="grid"
                 data-editable="{'mode':'popup'}"
                 data-bind="source: items"
                 data-columns="[
                     {'field': 'Name'},
                     {'field': 'ContactEmail','title':'Contact Email'}
                 ]">
            </div>
        </script>

        <script id="" type="text\x-kendo-template">
            <div>
                Activity for Application&nbsp;&nbsp;
                <input name="AppName" data-role="dropdownlist"
                       data-source="appsModel.items"
                       data-text-field="Name"
                       data-value-field="Id"
                       data-option-label="Choose an application name"
                       style="width:250px;" />
            </div>
            <div id="Activities" data-role="grid"
                 data-bind="source: items"
                 data-auto-bind="false"
                 data-columns="[
                     {'field': 'Domain','title':'Domain'},
                     {'field': 'ActivityType','title':'Activity Type'}
                 ]">
            </div>
        </script>

    js with DataSource and View Model:

        //data sources
        var applications = new kendo.data.DataSource({
            schema: { model: { id: "Id" } },
            serverFiltering: true,
            transport: {
                read: { url: '/api/App', dataType: 'json', type: 'GET' }
            }
        });

        var activities = new kendo.data.DataSource({
            schema: { model: { id: "Id" } },
            transport: {
                read: { url: '/api/Activity', dataType: 'json', type: 'GET' },
                parameterMap: function (data, type) {
                    if (type == "read") {
                        return 'appId=' + $("#AppName").val();
                    }
                }
            }
        });

        //Models
        var appsModel = kendo.observable({
            items: applications,
            displayName: 'My Applications'
        });

        var activityModel = kendo.observable({
            items: activities,
            onAppChange: function (t) {
                $("#Activities").data("kendoGrid").dataSource.read();
            },
            dispayName: 'Application Activities'
        });

        //views
        var layout = new kendo.Layout("layout-template");
        var appListView = new kendo.View("appListView", { model: appsModel });
        var activityView = new kendo.View("activityView", { model: activityModel });

    Thank you for taking time to read this long question.

    Read the article

  • Importing Data from Google Analytics

    - by Adam Tannon
    I am planning on building a web app with many different public-facing HTTP servers; each of which will have Google Analytics (GA) installed on them. I'd like to create a "dashboard" app that consolidates the GA data into one screen. I've been perusing the documentation for this so-called GA API, but I can't tell what the end result of the GA API is: Does the GA API allow me to do exactly what I am looking for it to do? Or... Does the GA API do something entirely different (like allow me to share my data with Google+ or something else weird) Since an API can be used to CRUD any kind of data, I guess I'm asking which way the GA API goes: is it for querying (reading) data from 1+ server instances, or is it for modifying data on those servers or somewhere else? Thanks in advance!

    Read the article

  • Storing non-content data in Orchard

    - by Bertrand Le Roy
    A CMS like Orchard is, by definition, designed to store content. What differentiates content from other kinds of data is rather subtle. The way I would describe it is by saying that if you would put each instance of a kind of data on its own web page, if it would make sense to add comments to it, or tags, or ratings, then it is content and you can store it in Orchard using all the convenient composition options that it offers. Otherwise, it probably isn't, and you can store it using the somewhat simpler means that I will now describe.

    In one of the modules I wrote, Vandelay.ThemePicker, there is some configuration data for the module. That data is not content by the definition I gave above. Let's look at how this data is stored and queried. The configuration data in question is a set of records, each of which has a number of properties:

        public class SettingsRecord {
            public virtual int Id { get; set;}
            public virtual string RuleType { get; set; }
            public virtual string Name { get; set; }
            public virtual string Criterion { get; set; }
            public virtual string Theme { get; set; }
            public virtual int Priority { get; set; }
            public virtual string Zone { get; set; }
            public virtual string Position { get; set; }
        }

    Each property has to be virtual for nHibernate to handle it (it creates derived classes that are instrumented in all kinds of ways). We also have an Id property. The way these records will be stored in the database is described from a migration:

        public int Create() {
            SchemaBuilder.CreateTable("SettingsRecord", table => table
                .Column<int>("Id", column => column.PrimaryKey().Identity())
                .Column<string>("RuleType", column => column.NotNull().WithDefault(""))
                .Column<string>("Name", column => column.NotNull().WithDefault(""))
                .Column<string>("Criterion", column => column.NotNull().WithDefault(""))
                .Column<string>("Theme", column => column.NotNull().WithDefault(""))
                .Column<int>("Priority", column => column.NotNull().WithDefault(10))
                .Column<string>("Zone", column => column.NotNull().WithDefault(""))
                .Column<string>("Position", column => column.NotNull().WithDefault(""))
            );
            return 1;
        }

    When we enable the feature, the migration will run, which will create the table in the database. Once we've done that, all we have to do in order to use the data is inject an IRepository<SettingsRecord>, which is what I'm doing from the set of helpers I put under the SettingsService class:

        private readonly IRepository<SettingsRecord> _repository;
        private readonly ISignals _signals;
        private readonly ICacheManager _cacheManager;

        public SettingsService(
            IRepository<SettingsRecord> repository,
            ISignals signals,
            ICacheManager cacheManager) {
            _repository = repository;
            _signals = signals;
            _cacheManager = cacheManager;
        }

    The repository has a Table property, which implements IQueryable<SettingsRecord> (enabling all kinds of Linq queries) as well as methods such as Delete and Create.
    Here's for example how I'm getting all the records in the table:

        _repository.Table.ToList()

    And here's how I'm deleting a record:

        _repository.Delete(_repository.Get(r => r.Id == id));

    And here's how I'm creating one:

        _repository.Create(new SettingsRecord {
            Name = name,
            RuleType = ruleType,
            Criterion = criterion,
            Theme = theme,
            Priority = priority,
            Zone = zone,
            Position = position
        });

    In summary, you create a record class and a migration, and you're in business: you can just manipulate the data through the repository that the framework is exposing. You even get ambient transactions from the work context.
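
    As a small illustration of what that IQueryable Table property buys you, here is a hypothetical helper in the same style (not from the module itself) that fetches the rules for one zone, highest priority first:

        // Assumes System.Linq and System.Collections.Generic are imported;
        // the filter and ordering compose into a single query before ToList() runs it.
        public IEnumerable<SettingsRecord> GetRulesForZone(string zone) {
            return _repository.Table
                .Where(r => r.Zone == zone)
                .OrderByDescending(r => r.Priority)
                .ToList();
        }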

    Read the article

  • Displaying a Sorted, Paged, and Filtered Grid of Data in ASP.NET MVC

    Over the past couple of months I've authored five articles on displaying a grid of data in an ASP.NET MVC application. The first article in the series focused on simply displaying data. This was followed by articles showing how to sort, page, and filter a grid of data. We then examined how to both sort and page a single grid of data. This article looks at how to add the final piece to the puzzle: we'll see how to combine sorting, paging and filtering when displaying data in a single grid. Like with its predecessors, this article offers step-by-step instructions and includes a complete, working demo available for download at the end of the article. Read on to learn more!

    Read the article

  • How to implement self-hosted WCF Data Services (http://localhost:1234/myDataService.svc/...)

    - by warmcold
    I have a project that needs to implement WCF Data Services (OData) to retrieve data from a control system (a .NET Framework application). The WCF Data Service needs to be hosted by the .NET application (no ASP.NET and no IIS). I have seen many WCF Data Service examples recently; they are all hosted by an ASP.NET application. I have also seen self-host (console application) examples, but they are for a WCF Service (not a WCF Data Service). Here is my question: is it possible to have a standalone .NET application host WCF Data Services (http://localhost:1234/mydataservice.svc/...)? If yes, can someone provide an example? Thanks.
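
    Self-hosting is possible with DataServiceHost (from System.Data.Services), which derives from WebServiceHost. A minimal sketch, assuming an existing Entity Framework model called MyEntities (a placeholder name) and .NET 4.0-era assemblies:

        using System;
        using System.Data.Services;
        using System.Data.Services.Common;

        // Hypothetical data service over an existing Entity Framework model.
        public class MyDataService : DataService<MyEntities>
        {
            public static void InitializeService(DataServiceConfiguration config)
            {
                // Expose every entity set read-only; tighten this for real use.
                config.SetEntitySetAccessRule("*", EntitySetRights.AllRead);
                config.DataServiceBehavior.MaxProtocolVersion = DataServiceProtocolVersion.V2;
            }
        }

        class Program
        {
            static void Main()
            {
                var baseAddress = new Uri("http://localhost:1234/mydataservice.svc");
                using (var host = new DataServiceHost(typeof(MyDataService), new[] { baseAddress }))
                {
                    host.Open();
                    Console.WriteLine("Listening on " + baseAddress + " - press Enter to quit.");
                    Console.ReadLine();
                }
            }
        }

    Note that in self-hosting the ".svc" in the URI is just a path segment kept for familiarity; any base address works, and no IIS or ASP.NET is involved.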

    Read the article
