Search Results

Search found 109760 results on 4391 pages for 'ado net entity data model'.

Page 252/4391 | < Previous Page | 248 249 250 251 252 253 254 255 256 257 258 259  | Next Page >

  • Breeze Expand not working on WebAPI with EF

    - by Rodney
    I have published a WebAPI service which returns a list of items. I am implementing Breeze and have managed to get it basically working with filtering/sorting. However, the Expand is not working. http://www.ftter.com/desktopmodules/framework/api/dare/dares/?$filter=DareId%20eq%2010&expand%20eq%20ToUser You can see the ToUserId ForeignKey in the response above, but the ToUser properties are NULL (the user definitely exists) You can see the ToUser EF navigation property in the metadata. When I use .Include on the server side I can populate it with EF, but I don't want to do this. [HttpGet] [Queryable(AllowedQueryOptions = AllowedQueryOptions.All)] public HttpResponseMessage Dares() { var response = Request.CreateResponse(HttpStatusCode.OK, (IQueryable<Dare>)contextProvider.Context.Dares); return ControllerUtilities.GetResponseWithCorsHeader(response); } and here is the generated class from my EF model (using Database First) public partial class Dare { public int DareId { get; set; } public int ToUserId { get; set; } public virtual User ToUser { get; set; } }
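
    For reference, a minimal C# sketch of the server-side eager load the question mentions (the .Include approach that works but that the author would rather avoid). It reuses the names from the question (Dare, contextProvider, ControllerUtilities); the Include path string is an assumption based on the navigation property shown. Note also that in OData-style URLs $expand is normally its own query option (e.g. ...?$filter=DareId eq 10&$expand=ToUser) rather than part of the filter.

[HttpGet]
[Queryable(AllowedQueryOptions = AllowedQueryOptions.All)]
public HttpResponseMessage Dares()
{
    // Eager-load the ToUser navigation property on the server instead of relying on client $expand.
    IQueryable<Dare> dares = contextProvider.Context.Dares.Include("ToUser");
    var response = Request.CreateResponse(HttpStatusCode.OK, dares);
    return ControllerUtilities.GetResponseWithCorsHeader(response);
}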

    Read the article

  • Best practices for encrypting continuous/small UDP data

    - by temp
    Hello everyone, I have an application where I have to send several small pieces of data per second through the network using UDP. The application needs to send the data in real-time (no waiting). I want to encrypt these data and ensure that what I am doing is as secure as possible. Since I am using UDP, there is no way to use SSL/TLS, so I have to encrypt each packet on its own since the protocol is connectionless/unreliable/unregulated. Right now, I am using a 128-bit key derived from a passphrase supplied by the user, and AES in CBC mode (PBE using AES-CBC). I decided to use a random salt with the passphrase to derive the 128-bit key (to prevent a dictionary attack on the passphrase), and of course use IVs (to prevent statistical analysis of packets). However, I am concerned about a few things: Each packet contains a small amount of data (like a couple of integer values per packet), which will make the encrypted packets vulnerable to known-plaintext attacks (and so make it easier to crack the key). Also, since the encryption key is derived from a passphrase, the key space is much smaller (I know the salt will help, but I have to send the salt through the network once and anyone can get it). Given these two things, anyone can sniff and store the sent data and try to crack the key. Although this process might take some time, once the key is cracked all the stored data will be decrypted, which will be a real problem for my application. So my question is, what are the best practices for sending/encrypting continuous small data using a connectionless protocol (UDP)? Is my way the best way to do it? ...Flawed? ...Overkill? ... Please note that I am not asking for a 100% secure solution, as there is no such thing. Cheers
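
    A minimal C# sketch of the per-packet scheme described above (PBKDF2-derived 128-bit key, AES-CBC, a fresh IV prepended to each datagram). It is illustrative only, not a security recommendation; the iteration count and helper names are assumptions.

using System.IO;
using System.Security.Cryptography;

static class PacketCrypto
{
    // Derive a 128-bit key from the user's passphrase and a random salt (PBKDF2).
    public static byte[] DeriveKey(string passphrase, byte[] salt)
    {
        var kdf = new Rfc2898DeriveBytes(passphrase, salt, 10000); // iteration count is illustrative
        return kdf.GetBytes(16);
    }

    // Encrypt one UDP payload; the random IV is sent in the clear at the front of the packet.
    public static byte[] Encrypt(byte[] payload, byte[] key)
    {
        using (var aes = Aes.Create())
        {
            aes.Mode = CipherMode.CBC;
            aes.Key = key;
            aes.GenerateIV(); // fresh IV for every packet

            using (var ms = new MemoryStream())
            {
                ms.Write(aes.IV, 0, aes.IV.Length); // prepend IV so the receiver can decrypt
                using (var cs = new CryptoStream(ms, aes.CreateEncryptor(), CryptoStreamMode.Write))
                {
                    cs.Write(payload, 0, payload.Length);
                }
                return ms.ToArray();
            }
        }
    }
}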

    Read the article

  • Export large amount of data from Oracle 10G to SQL Server 2005

    - by uniball
    Dear all, I need to export 100 million data rows (avg row length ~ 100 bytes) from an Oracle 10G database table into SQL Server (over a WAN/VLAN with 6 MBit/sec capacity) on a regular basis. So far, these are the options that I have tried, with a quick summary. Has anyone tried this before? Are there other, better options? Which option would be the best in terms of performance and reliability? The time taken has been calculated using tests on smaller amounts of data and then extrapolated to estimate the time required. Using the data import wizard on the SQL Server or SSIS packages to import the data: it will take around 150 hours to complete the task. Using an Oracle batch job to spool data into a comma-delimited flat file, then using an SSIS package to FTP this file to the SQL Server and load directly from the flat file: the issue here is the size of the flat file, which is expected to run into GBs. Although this option is drastically different, I am even considering using a Linked Server to query the Oracle data directly at run-time to avoid bringing the data over at all. Performance is a big problem and I have limited control over the Oracle database in terms of creating table indexes. Regards, Uniball

    Read the article

  • Why doesn't the default model binder update my partial view model on postback?

    - by bdnewbe
    I have a class that contains another class as one of its properties. public class SiteProperties { public SiteProperties() { DropFontFamily = "Arial, Helvetica, Sans-serif"; } public string DropFontFamily { get; set; } private ResultPageProperties m_ResultPagePropertyList; public ResultPageProperties ResultPagePropertyList { get { if (m_ResultPagePropertyList == null) m_ResultPagePropertyList = new ResultPageProperties(); return m_ResultPagePropertyList; } set { m_ResultPagePropertyList = value; } } } The second class has just one property public class ResultPageProperties { public ResultPageProperties() { ResultFontFamily = "Arial, Helvetica, Sans-serif"; } public string ResultFontFamily { get; set; } } My controller just grabs the SiteProperties and returns the view. On submit, it accepts SiteProperties and returns the same view. public class CompanyController : Controller { public ActionResult SiteOptions(int id) { SiteProperties site = new SiteProperties(); PopulateProperyDropDownLists(); return View("SiteOptions", site); } [AcceptVerbs(HttpVerbs.Post)] public ActionResult SiteOptions(SiteProperties properties) { PopulateProperyDropDownLists(); return View("SiteOptions", properties); } private void PopulateProperyDropDownLists() { var fontFamilyList = new List<SelectListItem>(); fontFamilyList.Add(new SelectListItem() { Text = "Arial, Helvetica, Sans-serif", Value = "Arial, Helvetica, Sans-serif" }); fontFamilyList.Add(new SelectListItem() { Text = "Times New Roman, Times, serif", Value = "Times New Roman, Times, serif" }); fontFamilyList.Add(new SelectListItem() { Text = "Courier New, Courier, Monospace", Value = "Courier New, Courier, Monospace" }); ViewData["FontFamilyList"] = fontFamilyList; } } The view contains a partial view that renders the ResultPageProperties Model. <% using (Html.BeginForm("SiteOptions", "Company", FormMethod.Post)) {%> <p><input type="submit" value="Submit" /></p> <div>View level input</div> <div> <label>Font family</label><br /> <%= Html.DropDownListFor(m => m.DropFontFamily, ViewData["FontFamilyList"] as List<SelectListItem>, new { Class = "UpdatesDropDownExample" })%> </div> <% Html.RenderPartial("ResultPagePropertyInput", Model.ResultPagePropertyList); %> <% } %> The partial is just <div style='margin-top: 1em;'>View level input</div> <div> <label>Font family</label><br /> <%= Html.DropDownListFor(m => m.ResultFontFamily, ViewData["FontFamilyList"] as List<SelectListItem>, new { Class = "UpdatesResultPageExample" })%> </div> OK, so when the page renders, you get "Arial, ..." in both selects. If you choose another option for both and click submit, the binder populates the SiteProperties object and passes it to the controller. However, the ResultFontFamily always contains the original value. I was expecting it to have the value the user selected. What am I missing?
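
    One hedged sketch of a possible fix, assuming ASP.NET MVC 2's template infrastructure: the partial currently renders its inputs without any prefix, so the posted field is just "ResultFontFamily" and the binder has nowhere to put it on SiteProperties. Passing a ViewDataDictionary with an HtmlFieldPrefix makes the input post back as "ResultPagePropertyList.ResultFontFamily", which the default binder can map onto the nested property.

<%
    // Render the partial with an explicit prefix so its inputs bind to the nested model.
    Html.RenderPartial(
        "ResultPagePropertyInput",
        Model.ResultPagePropertyList,
        new ViewDataDictionary(Html.ViewData)
        {
            TemplateInfo = new TemplateInfo { HtmlFieldPrefix = "ResultPagePropertyList" }
        });
%>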

    Read the article

  • Retrieving a unique result set with Core Data

    - by randombits
    I have a Core Data based app that manages a bunch of entities. I'm looking to be able to do the following. I have an entity "SomeEntity" with the attributes: name, type, rank, foo1, foo2. Now, SomeEntity has several rows, if we're speaking strictly in SQL terms. What I'm trying to accomplish is to retrieve only the available types, even though each instance can have duplicate types. I also need them returned in order according to rank. So in SQL, what I'm looking for is the following: SELECT DISTINCT(type) ORDER BY rank ASC Here is the code I have so far that's breaking: NSError *error = NULL; NSFetchRequest *fetchRequest = [[NSFetchRequest alloc] init]; [fetchRequest setReturnsDistinctResults:YES]; [fetchRequest setPropertiesToFetch:[NSArray arrayWithObjects:@"type", @"rank", nil]]; NSEntityDescription *entity = [NSEntityDescription entityForName:@"SomeEntity" inManagedObjectContext:managedObjectContext]; [fetchRequest setEntity:entity]; // sort by rank NSSortDescriptor *rankDescriptor = [[NSSortDescriptor alloc] initWithKey:@"rank" ascending:YES]; NSArray *sortDescriptors = [[NSArray alloc] initWithObjects:rankDescriptor,nil]; [fetchRequest setSortDescriptors:sortDescriptors]; [sortDescriptors release]; [rankDescriptor release]; NSArray *fetchResults = [managedObjectContext executeFetchRequest:fetchRequest error:&error]; [fetchRequest release]; return fetchResults; Right now it is crashing with the following: Invalid keypath section passed to setPropertiesToFetch:

    Read the article

  • EF Code First - Relationships

    - by CaffGeek
    I have these classes public class EntityBase : IEntity { public int Id { get; set; } public DateTime Created { get; set; } public string CreatedBy { get; set; } public DateTime Updated { get; set; } public string UpdatedBy { get; set; } } public class EftInterface : EntityBase { public string Name { get; set; } public Provider Provider { get; set; } public List<BusinessUnit> BusinessUnits { get; set; } } public class Provider : EntityBase, IEntity { public string Name { get; set; } public decimal DefaultDebitLimit { get; set; } public decimal DefaultCreditLimit { get; set; } public decimal TreasuryDebitLimit { get; set; } public decimal TreasuryCreditLimit { get; set; } } public class BusinessUnit : EntityBase { public string Name { get; set; } } An interface is really a Provider with a collection of Business Units. The issue is that while my DB model ends up with a correct EftInterfaces table, with a FK to Provider_Id, the BusinessUnits table has a FK to EftInterface_Id. But a BusinessUnit can be included in more than one EftInterface. I need a many-to-many relationship: a BusinessUnit can be part of many EftInterfaces, and an EftInterface can contain many BusinessUnits. How can I get Code First to generate the many-to-many table?
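
    A hedged sketch of one way to get the many-to-many mapping in Code First: put collection navigation properties on both sides and, optionally, configure the join table explicitly. The EftInterfaces inverse property, the context class and the join-table name below are assumptions.

using System.Collections.Generic;
using System.Data.Entity;

public class EftInterface : EntityBase
{
    public string Name { get; set; }
    public Provider Provider { get; set; }
    public virtual ICollection<BusinessUnit> BusinessUnits { get; set; }
}

public class BusinessUnit : EntityBase
{
    public string Name { get; set; }
    public virtual ICollection<EftInterface> EftInterfaces { get; set; } // inverse navigation
}

public class EftContext : DbContext
{
    public DbSet<EftInterface> EftInterfaces { get; set; }
    public DbSet<BusinessUnit> BusinessUnits { get; set; }

    protected override void OnModelCreating(DbModelBuilder modelBuilder)
    {
        // Explicit many-to-many mapping with a named join table.
        modelBuilder.Entity<EftInterface>()
            .HasMany(i => i.BusinessUnits)
            .WithMany(b => b.EftInterfaces)
            .Map(m => m.ToTable("EftInterfaceBusinessUnits"));
    }
}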

    Read the article

  • Iphone Core Data Internal Inconsistency

    - by kiyoshi
    This question has something to do with the question I posted here: http://stackoverflow.com/questions/1230858/iphone-core-data-crashing-on-save however the error is different so I am making a new question. Now I get this error when trying to insert new objects into my managedObjectContext: *** Terminating app due to uncaught exception 'NSInternalInconsistencyException', reason: '"MailMessage" is not a subclass of NSManagedObject.' But clearly it is: @interface MailMessage : NSManagedObject { .... And when I run this code: NSManagedObjectModel *managedObjectModel = [[self.managedObjectContext persistentStoreCoordinator] managedObjectModel]; NSEntityDescription *entity =[[managedObjectModel entitiesByName] objectForKey:@"MailMessage"]; NSManagedObject *newObject = [[NSManagedObject alloc] initWithEntity:entity insertIntoManagedObjectContext:self.managedObjectContext]; It runs fine when I do not present an MFMailComposeViewController, but if I run this code in the - (void)mailComposeController:(MFMailComposeViewController*)controller didFinishWithResult:(MFMailComposeResult)result error:(NSError*)error { method, it throws the above error when creating the newObject variable. The entity object when I use print object produces the following: (<NSEntityDescription: 0x1202e0>) name MailMessage, managedObjectClassName MailMessage, renamingIdentifier MailMessage, isAbstract 0, superentity name (null), properties { in both cases, so I don't think the managedObjectContext is completely invalid. I have no idea why it would say MailMessage is not a subclass of NSManagedObject at that point, and not at the other. Any help would be appreciated, thanks in advance.

    Read the article

  • model binding of non-sequential arrays

    - by user281180
    I am having a table in which i`m dynamically creating and deleting rows. How can I change the code such that the rows be added and deleted and the model info property filled accordingly. Bearing in mind that the rows can be dynamically created and deleted, I may have Info[0], Inf0[3], info[4]... My objective is to be able to bind the array even if it`s not in sequence. Model public class Person { public int[] Size { get; set; } public string[] Name { get; set; } public Info[]info { get; set; } } public class Info { public string Address { get; set; } public string Tel { get; set; } View <script type="text/javascript" language="javascript"> $(function () { var count = 1; $('#AddSize').live('click', function () { $("#divSize").append('</br><input type="text" id="Size" name="Size" value=""/><input type = "button" id="AddSize" value="Add"/>'); }); $('#AddName').live('click', function () { $("#divName").append('</br><input type="text" id="Name" name="Name" value=""/><input type = "button" id="AddName" value="Add"/>'); }); $('#AddRow').live('click', function () { $('#details').append('<tr><td>Address</td><td> <input type="text" name="Info[' + count + '].Address"/></td><td>Tel</td><td><input type="text" name="Info[' + count++ + '].Tel"/></td> <td><input type="button" id="AddRow" value="Add"/> </td></tr>'); }); }); </script> </head> <body> <form id="closeForm" action="<%=Url.Action("Create",new{Action="Create"}) %>" method="post" enctype="multipart/form-data"> <div id="divSize"> <input type="text" name="Size" value=""/> <input type="button" value="Add" id="AddSize" /> </div> <div id="divName"> <input type="text" name="Name" value=""/> <input type="button" value="Add" id="AddName" /> </div> <div id="Tab"> <table id="details"> <tr><td>Address</td><td> <input type="text" name="Info[0].Address"/></td><td>Tel</td><td><input type="text" name="Info[0].Tel"/></td> <td><input type="button" id="AddRow" value="Add"/> </td></tr> </table> </div> <input type="submit" value="Submit" /> </form> </body> } Controller public ActionResult Create(Person person) { return new EmptyResult(); }

    Read the article

  • Using data.table to aggregate

    - by dayne
    After multiple suggestions from SO users, I am finally trying to convert my code over to using data.tables. library(data.table) DT <- data.table(plate = paste0("plate",rep(1:2,each=5)), id = rep(c("CTRL","CTRL","ID1","ID2","ID3"),2), val = 1:10) > DT plate id val 1: plate1 CTRL 1 2: plate1 CTRL 2 3: plate1 ID1 3 4: plate1 ID2 4 5: plate1 ID3 5 6: plate2 CTRL 6 7: plate2 CTRL 7 8: plate2 ID1 8 9: plate2 ID2 9 10: plate2 ID3 10 What I would like to do is take the average of DT[,val] by plate when the id is "CTRL". I would normally aggregate the data frame, then use match to map the values back to a new column, 'ctrl'. Using the data.table package I can get: DT[id=="CTRL",ctrl:=mean(val),by=plate] > DT plate id val ctrl 1: plate1 CTRL 1 1.5 2: plate1 CTRL 2 1.5 3: plate1 ID1 3 NA 4: plate1 ID2 4 NA 5: plate1 ID3 5 NA 6: plate2 CTRL 6 6.5 7: plate2 CTRL 7 6.5 8: plate2 ID1 8 NA 9: plate2 ID2 9 NA 10: plate2 ID3 10 NA What I need is really: DT <- data.table(plate = paste0("plate",rep(1:2,each=5)), id = rep(c("CTRL","CTRL","ID1","ID2","ID3"),2), val = 1:10, ctrl = rep(c(1.5,6.5),each=5)) > DT plate id val ctrl 1: plate1 CTRL 1 1.5 2: plate1 CTRL 2 1.5 3: plate1 ID1 3 1.5 4: plate1 ID2 4 1.5 5: plate1 ID3 5 1.5 6: plate2 CTRL 6 6.5 7: plate2 CTRL 7 6.5 8: plate2 ID1 8 6.5 9: plate2 ID2 9 6.5 10: plate2 ID3 10 6.5 Eventually I would like to use much more complicated selections of the values, but I do not know how to select specific values, run some function, then map those values back to the appropriate row using data frames.

    Read the article

  • What is Linq?

    - by Aamir Hasan
    LINQ changes the way data can be retrieved in .NET. LINQ provides a uniform way to retrieve data from any object that implements the IEnumerable<T> interface. With LINQ, arrays, collections, relational data, and XML are all potential data sources. Why LINQ? With LINQ, you can use the same syntax to retrieve data from any data source: var query = from e in employees where e.id == 1 select e.name The three main parts of the LINQ project are: LINQ to Objects is an API that provides methods that represent a set of standard query operators (SQOs) to retrieve data from any object whose class implements the IEnumerable<T> interface. These queries are performed against in-memory data. LINQ to ADO.NET augments SQOs to work against relational data. It is composed of three parts. LINQ to SQL (formerly DLinq) is used to query relational databases such as Microsoft SQL Server. LINQ to DataSet supports queries by using ADO.NET data sets and data tables. LINQ to Entities is a Microsoft ORM solution, allowing developers to use Entities (an ADO.NET 3.0 feature) to declaratively specify the structure of business objects and use LINQ to query them. LINQ to XML (formerly XLinq) not only augments SQOs but also includes a host of XML-specific features for XML document creation and queries. What You Need to Use LINQ: LINQ is a combination of extensions to .NET languages and the class libraries that support them. To use it, you'll need the following: Obviously LINQ itself, which is available in the Microsoft .NET Framework 3.5 that you can download at http://go.microsoft.com/?linkid=7755937. You can speed up your application development time with LINQ using Visual Studio 2008, which offers visual tools such as the LINQ to SQL designer and Intellisense support for LINQ's syntax. Optionally, you can download the Visual C# 2008 Express Edition tool at www.microsoft.com/vstudio/express/download. It is the free edition of Visual Studio 2008 and offers a lot of LINQ support such as Intellisense and the LINQ to SQL designer. To use LINQ to ADO.NET, you need SQL
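
    A small, self-contained LINQ to Objects sketch of the query shown above; the Employee class and the sample data are made up for illustration.

using System;
using System.Collections.Generic;
using System.Linq;

class Employee
{
    public int Id { get; set; }
    public string Name { get; set; }
}

class Program
{
    static void Main()
    {
        var employees = new List<Employee>
        {
            new Employee { Id = 1, Name = "Ada" },
            new Employee { Id = 2, Name = "Grace" }
        };

        // The same query syntax works over arrays, collections, XML and relational data sources.
        var query = from e in employees
                    where e.Id == 1
                    select e.Name;

        foreach (var name in query)
            Console.WriteLine(name); // prints "Ada"
    }
}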

    Read the article

  • How to Create Views for All Tables with Oracle SQL Developer

    - by thatjeffsmith
    Got this question over the weekend via a friend and Oracle ACE Director, so I thought I would share the answer here. If you want to quickly generate DDL to create VIEWs for all the tables in your system, the easiest way to do that with SQL Developer is to create a data model. Wait, why would I want to do this? StackOverflow has a few things to say on this subject… So, start with importing a data dictionary. Step One: Open or Create a Model. In SQL Developer, go to View – Data Modeler – Browser. Then in the browser panel, expand your design and create a new Relational Model. Step Two: Import your Data Dictionary. This is a fancy way of saying, ‘suck objects out of the database into my model’. This will open a wizard to connect, select your schema(s), objects, etc. Once they’re in your model, you’re ready to cook with gas. I’m using HR (Human Resources) for this example. You should end up with something that looks like this: our favorite HR model. Now we’re ready to generate the views! Step Three: Auto-generate the Views. Go to Tools – Data Modeler – Table to View Wizard. I don’t want all my tables included, and I want to change the naming standard. Decide if you want to change the default generated view names: by default the views will be created as ‘V_TABLE_NAME.’ If you don’t like the ‘V_’ prefix you can enter your own. You can also reference the object and model name with variables as shown in the screenshot above. I’m going to go with something a little more personal. The views are the little green boxes in the diagram. Can’t find your views? They should be grouped together in your diagram. Don’t forget to use the Navigator to easily find and navigate to those model diagram objects! Step Four: Generate the DDL. OK, let’s use the Generate DDL button on the toolbar. Un-check everything but your views. If you used a prefix, take advantage of that to create a filter; you might have existing views in your model that you don’t want to include, right? Once you click ‘OK’ the DDL will be generated.
-- Generated by Oracle SQL Developer Data Modeler 4.0.0.825
-- at: 2013-11-04 10:26:39 EST
-- site: Oracle Database 11g
-- type: Oracle Database 11g

CREATE OR REPLACE VIEW HR.TJS_BLOG_COUNTRIES ( COUNTRY_ID, COUNTRY_NAME, REGION_ID ) AS
SELECT COUNTRY_ID, COUNTRY_NAME, REGION_ID
FROM HR.COUNTRIES;

CREATE OR REPLACE VIEW HR.TJS_BLOG_EMPLOYEES ( EMPLOYEE_ID, FIRST_NAME, LAST_NAME, EMAIL, PHONE_NUMBER, HIRE_DATE, JOB_ID, SALARY, COMMISSION_PCT, MANAGER_ID, DEPARTMENT_ID ) AS
SELECT EMPLOYEE_ID, FIRST_NAME, LAST_NAME, EMAIL, PHONE_NUMBER, HIRE_DATE, JOB_ID, SALARY, COMMISSION_PCT, MANAGER_ID, DEPARTMENT_ID
FROM HR.EMPLOYEES;

CREATE OR REPLACE VIEW HR.TJS_BLOG_JOBS ( JOB_ID, JOB_TITLE, MIN_SALARY, MAX_SALARY ) AS
SELECT JOB_ID, JOB_TITLE, MIN_SALARY, MAX_SALARY
FROM HR.JOBS;

CREATE OR REPLACE VIEW HR.TJS_BLOG_JOB_HISTORY ( EMPLOYEE_ID, START_DATE, END_DATE, JOB_ID, DEPARTMENT_ID ) AS
SELECT EMPLOYEE_ID, START_DATE, END_DATE, JOB_ID, DEPARTMENT_ID
FROM HR.JOB_HISTORY;

CREATE OR REPLACE VIEW HR.TJS_BLOG_LOCATIONS ( LOCATION_ID, STREET_ADDRESS, POSTAL_CODE, CITY, STATE_PROVINCE, COUNTRY_ID ) AS
SELECT LOCATION_ID, STREET_ADDRESS, POSTAL_CODE, CITY, STATE_PROVINCE, COUNTRY_ID
FROM HR.LOCATIONS;

CREATE OR REPLACE VIEW HR.TJS_BLOG_REGIONS ( REGION_ID, REGION_NAME ) AS
SELECT REGION_ID, REGION_NAME
FROM HR.REGIONS;

-- Oracle SQL Developer Data Modeler Summary Report:
--
-- CREATE TABLE 0
-- CREATE INDEX 0
-- ALTER TABLE 0
-- CREATE VIEW 6
-- CREATE PACKAGE 0
-- CREATE PACKAGE BODY 0
-- CREATE PROCEDURE 0
-- CREATE FUNCTION 0
-- CREATE TRIGGER 0
-- ALTER TRIGGER 0
-- CREATE COLLECTION TYPE 0
-- CREATE STRUCTURED TYPE 0
-- CREATE STRUCTURED TYPE BODY 0
-- CREATE CLUSTER 0
-- CREATE CONTEXT 0
-- CREATE DATABASE 0
-- CREATE DIMENSION 0
-- CREATE DIRECTORY 0
-- CREATE DISK GROUP 0
-- CREATE ROLE 0
-- CREATE ROLLBACK SEGMENT 0
-- CREATE SEQUENCE 0
-- CREATE MATERIALIZED VIEW 0
-- CREATE SYNONYM 0
-- CREATE TABLESPACE 0
-- CREATE USER 0
--
-- DROP TABLESPACE 0
-- DROP DATABASE 0
--
-- REDACTION POLICY 0
--
-- ERRORS 0
-- WARNINGS 0

    You can then choose to save this to a file or not. This has a few steps, but as the number of tables in your system increases, so does the amount of time this feature can save you!

    Read the article

  • Healthcare and Distributed Data Don't Mix

    - by [email protected]
    How many times have you heard the story?  Hard disk goes missing, USB thumb drive goes missing, laptop goes missing... Not a week goes by that we don't hear about our data going missing...  Healthcare data is a big one, but we hear about credit card data, pricing info, corporate intellectual property...  When I have spoken at Security and IT conferences, part of my message is "Why do you give your users data to lose in the first place?"  I don't suggest they can't have access to it... in fact I work for the company that provides the premier data security and desktop solutions that DO provide access.  Access isn't the issue.  'Keeping the data' is the issue. We are all human - we all make mistakes... I fault no one for having their car stolen or for dropping a USB thumb drive. (Well, except the thieves - I can certainly find some fault there.)  Where I find fault is in policy (or lack thereof sometimes) that allows users to carry around private, and important, data with them.  Mr. Director of IT - it is your fault, not theirs.  Ms. CSO - look in the mirror. It isn't as if one can't find a network to access the data from.  You are on a network right now.  How many wireless ones (wifi, mifi, cellular...) are there around you, right now?  Allowing employees to remove data from the confines of (wait for it...) THE DATA CENTER is just plain indefensible when it isn't required.  The argument that the laptop had a password and the hard disk was encrypted is ridiculous.  An encrypted drive tells thieves that before they sell the stolen unit for $75, they should crack the encryption and ascertain what the REAL value of the laptop is... credit card info, identity info, pricing lists, banking transactions... a veritable treasure trove of info people give away on an 'encrypted disk'. What started this latest rant on lack of data control was an article in Government Health IT that was forwarded to me by Denny Olson, an Oracle Principal Sales Consultant in Minnesota.  The full article is here, but the point was that a couple of laptops went missing in a couple of different cases, and.. well... no one knows where the data is, and yes - they were loaded with patient info.  What were you thinking? Obviously you can't steal data from a Sun Ray appliance... since it has no data, nor any storage to keep the data on, and Secure Global Desktop allows access from Macs, Linux and Windows client devices... but in all cases, there is no keeping the data unless you explicitly allow for it in your policy.  Since you can get at the data securely from any network, why would you want to take personal responsibility for it?  Both Sun Rays and Secure Global Desktop are widely used in Healthcare... but clearly not widely enough. We need to do a better job of getting the message out: Healthcare (or insert your business type here) and distributed data don't mix. Then add Hot Desking and 'follow me printing' and you have something that Clinicians (and CSOs) love. Thanks for putting up my blood pressure, Denny.

    Read the article

  • ATG Live Webcast June 28: Scrambling Sensitive Data in EBS 12 Cloned Environments

    - by BillSawyer
    Securing the Oracle E-Business Suite includes protecting the underlying E-Business data in production and non-production databases.  While steps can be taken to provide a secure configuration to limit EBS access, a better approach to protecting non-production data is simply to scramble (mask) the data in the non-production copy.  The Oracle E-Business Suite Template for Data Masking Pack can be used in situations where confidential or regulated data needs to be shared with other non-production users who need access to some of the original data, but not necessarily every table.  Examples of non-production users include internal application developers or external business partners such as offshore testing companies, suppliers or customers. The Oracle E-Business Suite Template for Data Masking Pack is applied to a non-production environment with the Enterprise Manager Grid Control Data Masking Pack.  When applied, the Oracle E-Business Suite Template for Data Masking Pack will create an irreversibly scrambled version of your production database for development and testing. This ATG Live Webcast is your chance to come learn about the Oracle E-Business Suite Release 12.1.3 Template for Data Masking Pack from the experts. Oracle E-Business Suite Release 12.1.3 Template for Data Masking: The agenda for the Oracle E-Business Suite Template for Data Masking Pack webcast includes the following topics: What does data masking do in E-Business Suite environments? De-identify the data. Mask sensitive data. Maintain data validity. How can EBS customers use data masking? References. Join Eric Bing, Senior Director, and Elke Phelps, Senior Principal Product Manager, as they discuss the Oracle E-Business Suite Template for Data Masking Pack.
    Date: Thursday, June 28, 2012
    Time: 8:00AM Pacific Standard Time
    Presenters: Eric Bing, Senior Director; Elke Phelps, Senior Principal Product Manager
    Webcast Registration Link (Preregistration is optional but encouraged)
    To hear the audio feed:
        Domestic Participant Dial-In Number: 877-697-8128
        International Participant Dial-In Number: 706-634-9568
        Additional International Dial-In Numbers Link
        Dial-In Passcode: 100865
    To see the presentation, the Direct Access Web Conference details are:
        Website URL: https://ouweb.webex.com
        Meeting Number: 599097152
    If you miss the webcast, or you have missed any webcast, don't worry -- we'll post links to the recording as soon as it's available from Oracle University.  You can monitor this blog for pointers to the replay. And you can find our archive of our past webcasts and training here. If you have any questions or comments, feel free to email Bill Sawyer (Senior Manager, Applications Technology Curriculum) at BilldotSawyer-AT-Oracle-DOT-com.

    Read the article

  • Evaluating Oracle Data Mining Has Never Been Easier - Evaluation "Kit" Available

    - by chberger
    Now you can quickly and easily get set up to start using Oracle Data Mining for evaluation purposes. Just go to the Oracle Technology Network (OTN) and follow these simple steps. Oracle Data Mining Evaluation "Kit" Instructions. Step 1: Download and Install the Oracle Database 11g Release 2. Anyone can download and install the Oracle Database for free for evaluation purposes. Read the OTN web site for details. 11.2.0.1.0 DB is the minimum, 11.2.0.2 is better and naturally 11.2.0.3 is best if you are a current customer and on active support. Either 32-bit or 64-bit is fine. 4GB of RAM or more works fine for SQL Developer and the Oracle Data Miner GUI extension. Downloading the database and installing it should take just about an hour or so, depending on your network and computer. For more instructions on setting up Oracle Data Mining see: http://www.oracle.com/technetwork/database/options/odm/dataminerworkflow-168677.html When you install the Oracle Database, the Sample Examples data should also be installed, e.g.: Release 2 Examples win32_11gR2_examples.zip (565,154,740 bytes). Contains examples of how to use the Oracle Database. Download if you are new to Oracle and want to try some of the examples presented in the Documentation. Step 2: Install SQL Developer 3.1 (the Oracle Data Mining Extension installs automatically). Step 3: Follow the four free step-by-step Oracle-by-Examples e-training lessons: Setting Up Oracle Data Miner 11g Release 2: This tutorial covers the process of setting up Oracle Data Miner 11g Release 2 for use within Oracle SQL Developer 3.0. Using Oracle Data Miner 11g Release 2: This tutorial covers the use of Oracle Data Miner to perform data mining against Oracle Database 11g Release 2. In this lesson, you examine and solve a data mining business problem by using the Oracle Data Miner graphical user interface (GUI). Star Schema Mining Using Oracle Data Miner: This tutorial covers the use of Oracle Data Miner to perform star schema mining against Oracle Database 11g Release 2. Text Mining Using Oracle Data Miner: This tutorial covers the use of Oracle Data Miner to perform text mining against Oracle Database 11g Release 2. That’s it! Easy, fun and the fastest way to get started evaluating Oracle Data Mining. Enjoy! Charlie

    Read the article

  • How to programmatically bind to a Core Data model?

    - by Dave Gallagher
    Hello. I have a Core Data model, and was wondering if you know how to create a binding to an Entity, programmatically? Normally you use bind:toObject:withKeyPath:options: to create a binding. But I'm having a little difficulty getting this to work with Core Data, and couldn't find anything in Apple's docs regarding doing this programmatically. The Core Data model is simple: An Entity called Book An Attribute of Book called author (NSString) I have an object called BookController. It looks like so: @interface BookController : NSObject { NSString *anAuthor; } @property (nonatomic, retain) NSString *anAuthor; // @synthesize anAuthor; inside @implementation I'd like to bind anAuthor inside BookController, to author inside a Book entity. This is how I'm attempting to wrongly do it (it partially works): // A custom class I made, providing an interface to the Core Data database CoreData *db = [[CoreData alloc] init]; // Creating a Book entity, saving it [db addMocObject:@"Book"]; [db saveMoc]; // Fetching the Book entity we just created NSArray *books = [db fetchObjectsForEntity:@"Book" withPredicate:nil withSortDescriptors:nil]; NSManagedObject *book = [books objectAtIndex:0]; // Creating the binding BookController *bookController = [[BookController alloc] init]; [bookController bind:@"anAuthor" toObject:book withKeyPath:@"author" options:nil]; // Manipulating the binding [bookController setAnAuthor:@"Bill Gates"]; Now, when updating from the perspective of bookController, things don't work quite right: // Testing the binding from the bookController's perspective [bookController setAnAuthor:@"Bill Gates"]; // Prints: "bookController's anAuthor: Bill Gates" NSLog(@"bookController's anAuthor: %@", [bookController anAuthor]); // OK! // ERROR HERE - Prints: "bookController's anAuthor: (null)" NSLog(@"Book's author: %@", [book valueForKey:@"author"]); // DOES NOT WORK! :( When updating from the perspective of the Book entity, things work fine: // ------------------------------ // Testing the binding from the Book's (Entity) perspective (this works perfect) [book setValue:@"Steve Jobs" forKey:@"author"]; // Prints: "bookController's anAuthor: Steve Jobs" NSLog(@"bookController's anAuthor: %@", [bookController anAuthor]); // OK! // Prints: "bookController's anAuthor: Steve Jobs" NSLog(@"Book's author: %@", [book valueForKey:@"author"]); // OK! It appears that the binding is partially working. I can update it on the side of the Model and it propagates up to the Controller via KVO, but if I update it on the side of the Controller, it doesn't trickle down to the Model via KVC. Any idea on what I'm doing wrong? Thanks so much for looking! :)

    Read the article

  • How to build MVC Views that work with polymorphic domain model design?

    - by Johann de Swardt
    This is more of a "how would you do it" type of question. The application I'm working on is an ASP.NET MVC4 app using Razor syntax. I've got a nice domain model which has a few polymorphic classes, awesome to work with in the code, but I have a few questions regarding the MVC front-end. Views are easy to build for normal classes, but when it comes to the polymorphic ones I'm stuck on deciding how to implement them. The one (ugly) option is to build a page which handles the base type (eg. IContract) and has a bunch of if statements to check if we passed in a IServiceContract or ISupplyContract instance. Not pretty and very nasty to maintain. The other option is to build a view for each of these IContract child classes, breaking DRY principles completely. Don't like doing this for obvious reasons. Another option (also not great) is to split the view into chunks with partials and build partial views for each of the child types that are loaded into the main view for the base type, then deciding to show or hide the partial in a single if statement in the partial. Also messy. I've also been thinking about building a master page with sections for the fields that only occur in subclasses and to build views for each subclass referencing the master page. This looks like the least problematic solution? It will allow for fairly simple maintenance and it doesn't involve code duplication. What are your thoughts? Am I missing something obvious that will make our lives easier? Suggestions?
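
    One hedged sketch of the dispatch idea (the helper and template names are illustrative, not part of MVC): render a partial named after the model's runtime type, so each IContract subclass keeps its own small view while the shared fields stay in a common partial.

using System.Web.Mvc;
using System.Web.Mvc.Html;

public static class ContractViewExtensions
{
    // A ServiceContract instance renders the "ServiceContract" partial, a SupplyContract the "SupplyContract" partial, and so on.
    public static MvcHtmlString ContractEditor(this HtmlHelper html, IContract contract)
    {
        return html.Partial(contract.GetType().Name, contract);
    }
}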

    Read the article

  • SQL SERVER – Configuring Interactive Cleansing Suggestion Min Score for Suggestions in Data Quality Services (DQS) – Sensitivity of Suggestion

    - by pinaldave
    Earlier I talked about what kind of questions, I do not like when I get asked. Today we will go over the question which I like when I get asked the same. One of the reader practices various steps in my earlier blog post Step by Step Guide to Beginning Data Quality Services in SQL Server 2012 – Introduction to DQS. While reading the blog post he noticed that Data Quality Services is not providing very helpful suggestions. He wrote an email to me about it. Let us go over his email. “Pinal, I noticed in one of your images that DQS is not providing very helpful suggestions. First of all DQS should be able to make intelligent guesses and make the necessary correction by itself. If it cannot do the same, in that case, it should give us intelligent suggestions but in the image included here, I see the suggestions are not there as well. Why is it so? Would you please tell me how to increase the numbers of suggestion? I do understand this may not be preferable solution in many case but all the business cases go on it depends. There are cases when the high sensitivity required and there are cases when higher sensitivities are not required. I would like to seek your help here. –Sriram MD” This is indeed a great question. I see that Sriram understands that every system is different and every application has a different need. I will not have to tell him this most important concept. The question is about how to change the sensitivity of suggestions for correction in DQS. Well, this option is available under the configuration tab in the DQS client. Once you click on Configuration you will see the following screen. Click the Tab of General Settings. You will see the section of Interactive Cleansing. Under this second there is the first option of “Min score for suggestions”. As this is set to 0.7 every suggestion which matches 0.7 probabilities or higher probability are displayed under the suggestion tab. You can see in the following image that there is no suggestion as the min score for suggestions is set to 0.7 and there is no record which qualifies to that much confidence. Now let us change the value of Min Score for suggestion to 0.5. The lower value increased the confidence of DQS to give further suggestion to values which are over 0.5. However, in our case the suggestions which it provides are also accurate. This may not be true for your sample. Every sample is different so you should manually review it before approving them. I guess, this is a simple blog post to demonstrate how to change the confidence value for the suggestions which Data Quality Services provides. Use this feature with care and always tune it according to your datasets and record diversity. Reference: Pinal Dave (http://blog.SQLAuthority.com)       Filed under: PostADay, SQL, SQL Authority, SQL Query, SQL Server, SQL Tips and Tricks, T SQL, Technology Tagged: Data Quality Services, DQS

    Read the article

  • Essbase BSO Data Fragmentation

    - by Ann Donahue
    Essbase BSO Data Fragmentation Data fragmentation naturally occurs in Essbase Block Storage (BSO) databases where there are a lot of end user data updates, incremental data loads, many lock and send, and/or many calculations executed.  If an Essbase database starts to experience performance slow-downs, this is an indication that there may be too much fragmentation.  See Chapter 54 Improving Essbase Performance in the Essbase DBA Guide for more details on measuring and eliminating fragmentation: http://docs.oracle.com/cd/E17236_01/epm.1112/esb_dbag/daprcset.html Fragmentation is likely to occur in the following situations: Read/write databases that users are constantly updating data Databases that execute calculations around the clock Databases that frequently update and recalculate dense members Data loads that are poorly designed Databases that contain a significant number of Dynamic Calc and Store members Databases that use an isolation level of uncommitted access with commit block set to zero There are two types of data block fragmentation Free space tracking, which is measured using the Average Fragmentation Quotient statistic. Block order on disk, which is measured using the Average Cluster Ratio statistic. Average Fragmentation Quotient The Average Fragmentation Quotient ratio measures free space in a given database.  As you update and calculate data, empty spaces occur when a block can no longer fit in its original space and will either append at the end of the file or fit in another empty space that is large enough.  These empty spaces take up space in the .PAG files.  The higher the number the more empty spaces you have, therefore, the bigger the .PAG file and the longer it takes to traverse through the .PAG file to get to a particular record.  An Average Fragmentation Quotient value of 3.174765 means the database is 3% fragmented with free space. Average Cluster Ratio Average Cluster Ratio describes the order the blocks actually exist in the database. An Average Cluster Ratio number of 1 means all the blocks are ordered in the correct sequence in the order of the Outline.  As you load data and calculate data blocks, the sequence can start to be out of order.  This is because when you write to a block it may not be able to place back in the exact same spot in the database that it existed before.  The lower this number the more out of order it becomes and the more it affects performance.  An Average Cluster Ratio value of 1 means no fragmentation.  Any value lower than 1 i.e. 0.01032828 means the data blocks are getting further out of order from the outline order. Eliminating Data Block Fragmentation Both types of data block fragmentation can be removed by doing a dense restructure or export/clear/import of the data.  There are two types of dense restructure: 1. Implicit Restructures Implicit dense restructure happens when outline changes are done using EAS Outline Editor or Dimension Build. Essbase restructures create new .PAG files restructuring the data blocks in the .PAG files. When Essbase restructures the data blocks, it regenerates the index automatically so that index entries point to the new data blocks. Empty blocks are NOT removed with implicit restructures. 2. Explicit Restructures Explicit dense restructure happens when a manual initiation of the database restructure is executed. An explicit dense restructure is a full restructure which comprises of a dense restructure as outlined above plus the removal of empty blocks Empty Blocks vs. 
Fragmentation The existence of empty blocks is not considered fragmentation.  Empty blocks can be created through calc scripts or formulas.  An empty block will add to an existing database block count and will be included in the block counts of the database properties.  There are no statistics for empty blocks.  The only way to determine if empty blocks exist in an Essbase database is to record your current block count, export the entire database, clear the database then import the exported data.  If the block count decreased, the difference is the number of empty blocks that had existed in the database.

    Read the article

  • unable to recover data from failed hdd

    - by Eslam Elyamany
    my hdd failing (or maybe totally dead) i've connected the hdd via USB but it doesn't appear in fdisk Disk /dev/sda: 500.1 GB, 500107862016 bytes 255 heads, 63 sectors/track, 60801 cylinders, total 976773168 sectors Units = sectors of 1 * 512 = 512 bytes Sector size (logical/physical): 512 bytes / 4096 bytes I/O size (minimum/optimal): 4096 bytes / 4096 bytes Disk identifier: 0xe9fb38fb Device Boot Start End Blocks Id System /dev/sda1 * 2048 206847 102400 7 HPFS/NTFS/exFAT /dev/sda2 206848 40959999 20376576 7 HPFS/NTFS/exFAT /dev/sda4 40962046 976771071 467904513 5 Extended Partition 4 does not start on physical sector boundary. /dev/sda5 82913280 86910975 1998848 82 Linux swap / Solaris /dev/sda6 86913024 394113023 153600000 7 HPFS/NTFS/exFAT /dev/sda7 40962048 82913279 20975616 83 Linux /dev/sda8 394122708 976768064 291322678+ 7 HPFS/NTFS/exFAT Partition 8 does not start on physical sector boundary. no sdc appears here , BUT it's appears on /dev/ rootghost-lap:/home/ghost# ls /dev/sd* /dev/sda /dev/sda2 /dev/sda5 /dev/sda8 /dev/sdb /dev/sdc1 /dev/sdc2 /dev/sdc6 /dev/sdc8 /dev/sda1 /dev/sda4 /dev/sda6 /dev/sda9 /dev/sdc /dev/sdc10 /dev/sdc5 /dev/sdc7 /dev/sdc9 also it appears in proc Code: rootghost-lap:/home/ghost# cat /proc/partitions major minor #blocks name 8 0 488386584 sda 8 1 102400 sda1 8 2 20376576 sda2 8 4 1 sda4 8 5 1998848 sda5 8 6 153600000 sda6 8 8 291322678 sda8 8 9 20975616 sda9 11 0 1048575 sr0 11 1 99136 sr1 8 32 244198583 sdc 8 33 14651248 sdc1 8 34 1 sdc2 8 37 15380480 sdc5 8 38 4153344 sdc6 8 39 48829536 sdc7 8 40 48829536 sdc8 8 41 110374551 sdc9 8 42 1975963 sdc10 and dmesg : [10604.777168] end_request: I/O error, dev sdc, sector 1 [10604.817238] sd 26:0:0:0: [sdc] Result: hostbyte=DID_OK driverbyte=DRIVER_SENSE [10604.817243] sd 26:0:0:0: [sdc] Sense Key : Aborted Command [current] [10604.817248] sd 26:0:0:0: [sdc] Add. Sense: No additional sense information [10604.817253] sd 26:0:0:0: [sdc] CDB: Read(10): 28 00 00 00 00 02 00 00 06 00 ok now , let's see what i've tried testdisk to check for partitions -- failed dd to copy data from /dev/sdcX -- provide strange output size for example /dev/sdc1 is about 15G , the output for dd is 62G+ so i had to cancle it safecopy successfully made an image for partitons , but can't fix images, can't mount it, can't do any thing with it and some other tools i've tried and all failed , so any idea ?

    Read the article

  • Restoring MBR, partition table, and boot sector of memory card without data loss ("USBC")

    - by Synetech
    Abstract I have a FAT32 memory card that when inserted into a computer causes Windows to prompt to format it. The card is definitely not supposed to be blank and has a bunch of files on it. Symptoms Using a hex-editor/disk-viewer, I examined the card and found that several sectors/clusters have been overwritten with something that has a signature of USBC at the start of the sector. Specifically, the master boot record (and partition table) is gone (hence Windows thinking the card is blank and needing to be formatted), as are the boot sectors (they have the USBC signature and a volume label of NO NAME and partition type of FAT32). Fortunately, it looks like both copies of the FAT are almost entirely intact (a few FAT entries at the start of a cluster here and there seem to be overwritten by USBC). The root directory is also nearly intact—I can see the volume label entry and subdirectory listings, but one sector is overwritten. (There are no more instances of USBC after the last one in the FAT2.) Hypothesis These observations seem to indicate some sort of virus that erases a few key filesystem structures, and then overwrites a few extra sectors here and there. Googling it seems to corroborate the idea of a virus, except that others report a file called USBC which does not apply here, and in fact, could not be possible since there is no filesystem to even see files. I cannot find any information about a virus with these symptoms, nor a removal tool. (I can't help but wonder if it is actually due to an autorun virus prevention tool.) Question I can likely fix the FAT corruption since they are mostly contiguous chains and maybe even the lost sector of the root directory, but does anyone know of a convenient way to restore or (re)create the MBR/partition table and boot sectors (without formatting or overwriting the data)?

    Read the article

  • Set up layer 2 vlan between 2 data centres

    - by user41679
    Hello, Our data centre provider operates 2 sites, and we currently have equipment in one and would like to have equipment in the second. They've told me that they operate a layer 2 vlan between the 2 sites over a 20gbit connection, and that they'd just give me an ethernet cable at each end to connect the locations. At the current site, we have Cisco 2960 48TC-L switches, all the machines are on a 192.168.x.x subnet, and we have Cisco firewalls with which we connect to our internet provider. My question is: what would I need to do to connect the 2 sites? Could I just plug the ethernet cables they provide into the Cisco switches, and have the same switches at the other end? Would I need to set up a separate internal network on the other side and connect both through the firewalls? Would the Cisco switches need special configuration? We expect to maintain a number of connections between the 2 sites, and each site would have its own internal DNS name like dc1.xx.com. Sorry if I'm being vague or haven't included enough information; I have a fairly good knowledge of hardware but we're down a netops guy at the moment and I'd like to get both sites on-line ASAP! Thanks in advance!

    Read the article

  • In which order is model binding and validation done in ASP.NET MVC 2?

    - by Simon Bartlett
    I am using ASP.NET MVC 2, and am using a view-model-per-view approach. I am also using Automapper to map properties from my domain model to the view-model. Take this example view-model (with Required data annotation attributes for validation purposes): public class BlogPost_ViewModel { public int Id { get; set; } [Required] public string Title { get; set; } [Required] public string Text { get; set; } } In the post editor view I am using a rich text editor (CKeditor). Because CKeditor is an HTML editor, I ideally need CKeditor to HTML-encode the user's input when the form is submitted, so that ASP.NET's input validation does not complain. This is not a problem as CKeditor has this functionality built in; however, I need CKeditor's output decoded before mapping back to the domain object (via Automapper). I want to add a new property (to the view-model above) to solve this, as follows: public string HTMLEncodedText { get { return HTMLEncode(Text); } set { Text = HTMLDecode(value); } } I can then bind this property to CKeditor in the view, but still use Automapper to map the 'Text' property in the controller - all without having to turn input validation off. My question is: do you know how the model binding and validation process in ASP.NET MVC 2 works? Are all model properties bound before validation is carried out? Or does each individual property get validated as it is set? I think that, ideally, for my idea to work all properties need to be set before the model is validated.
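
    A hedged sketch of the wrapper property described above, with System.Web's HttpUtility standing in for the HTMLEncode/HTMLDecode placeholders (an assumption about the intended encoding):

using System.ComponentModel.DataAnnotations;
using System.Web;

public class BlogPost_ViewModel
{
    public int Id { get; set; }

    [Required]
    public string Title { get; set; }

    [Required]
    public string Text { get; set; }

    // Bound to CKeditor in the view; Text stays decoded so Automapper can map it to the domain object.
    public string HTMLEncodedText
    {
        get { return HttpUtility.HtmlEncode(Text); }
        set { Text = HttpUtility.HtmlDecode(value); }
    }
}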

    Read the article

  • Moving to .net 4 results in System.Transactions Critical: 0

    - by john
    Hi I have fully working project in .net 3.5SP1, with EF 1 ORM. I tried to upgrade to .net 4. No issue while upgrading... Then i ran the project and got a NullExecptionError, with no stack trace... and no way to debug. Looking at the output windows i can read this: System.Transactions Critical: 0 : <TraceRecord xmlns="http://schemas.microsoft.com/2004/10/E2ETraceEvent/TraceRecord" Severity="Critical"><TraceIdentifier>http://msdn.microsoft.com/TraceCodes/System/ActivityTracing/2004/07/Reliability/Exception/Unhandled</TraceIdentifier><Description>Unhandled exception</Description><AppDomain>OTCSouscriptions.vshost.exe</AppDomain><Exception><ExceptionType>System.NullReferenceException, mscorlib, Version=4.0.0.0, Culture=neutral, PublicKeyToken=b77a5c561934e089</ExceptionType><Message>Object reference not set to an instance of an object.</Message><StackTrace> at System.Windows.StyleHelper.GetInstanceValue(UncommonField`1 dataField, DependencyObject container, FrameworkElement feChild, FrameworkContentElement fceChild, Int32 childIndex, DependencyProperty dp, Int32 i, EffectiveValueEntry&amp;amp; entry) at System.Windows.FrameworkTemplate.ReceivePropertySet(Object targetObject, XamlMember member, Object value, DependencyObject templatedParent) at System.Windows.FrameworkTemplate.&amp;lt;&amp;gt;c__DisplayClass6.&amp;lt;LoadOptimizedTemplateContent&amp;gt;b__4(Object sender, XamlSetValueEventArgs setArgs) at System.Xaml.XamlObjectWriter.OnSetValue(Object eventSender, XamlMember member, Object value) at System.Xaml.XamlObjectWriter.Logic_ApplyPropertyValue(ObjectWriterContext ctx, XamlMember prop, Object value, Boolean onParent) at System.Xaml.XamlObjectWriter.Logic_DoAssignmentToParentProperty(ObjectWriterContext ctx) at System.Xaml.XamlObjectWriter.Logic_AssignProvidedValue(ObjectWriterContext ctx) at System.Xaml.XamlObjectWriter.WriteEndObject() at System.Xaml.XamlWriter.WriteNode(XamlReader reader) at System.Windows.FrameworkTemplate.LoadTemplateXaml(XamlReader templateReader, XamlObjectWriter currentWriter) Any help appreciated... Thanks John

    Read the article

  • Is ASP.NET MVC really MVC? Or how to separate model from controller?

    - by Andrey
    Hi all, This question is a bit rhetorical. At some point I got the feeling that ASP.NET MVC is not an entirely authentic implementation of the MVC pattern. Or maybe I didn't understand it. Consider the following domain: an electric bulb, a switch and a motion detector. They are connected together, and when you enter the room the motion detector switches on the bulb. If I want to represent them as MVC: the switch is the model, because it holds the state and contains the logic; the bulb is the view, because it presents the state of the model to a human; the motion detector is the controller, because it converts user actions into generic model commands. The switch has one private field (On/Off) as its state and two methods (PressOn, PressOff). If you call PressOn when it is Off it goes to On; if you call it again the state doesn't change. The bulb can be replaced with a buzzer, and the motion detector with a timer or a button, but the model still represents the same logic, and eventually the system will have the same behavior. This is how I understand classical MVC decomposition; please correct me if I am wrong. Now let's decompose it the ASP.NET MVC way. The bulb is still a view; the controller will be the switch + motion detector; the model is some object that will just pass state to the bulb. So the logic that defines behavior moves to the controller. Question 1: Is my understanding of MVC and ASP.NET MVC correct? Question 2: If yes, do you agree that ASP.NET MVC is not a 100% accurate implementation? And back to real life. The final question is how to separate the model from the controller in ASP.NET MVC. There can be two extremes: the controller does basic stuff and calls the model to do all the logic, or the controller does all the logic and the model is just a class with properties that is mapped to the DB. Question 3: Where should I draw the line between these extremes? How do I balance? Thanks, Andrey
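
    A tiny C# sketch of the switch-as-model idea from the question (illustrative only): the model holds the state and the behavioral rule, independent of whichever view or controller is wired to it.

public class LightSwitch
{
    public bool IsOn { get; private set; }

    // Calling PressOn when the switch is already On changes nothing; likewise for PressOff.
    public void PressOn()  { IsOn = true; }
    public void PressOff() { IsOn = false; }
}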

    Read the article

< Previous Page | 248 249 250 251 252 253 254 255 256 257 258 259  | Next Page >