Search Results

Search found 5543 results on 222 pages for 'legacy terms'.

Page 116/222 | < Previous Page | 112 113 114 115 116 117 118 119 120 121 122 123  | Next Page >

  • VC++ to C# migration guidelines/approaches/Issues

    - by KSH
    Hi all, we are planning to move a few of our legacy VC++ products to C# on the .NET platform. I am collecting the relevant information before making a proposal, so that I can present a realistic and effective approach to clients. I am looking for the following: any general guidelines for migrating VC++ to C#/.NET; the issues a team can face when taking up this activity; any existing approaches (many teams have probably tried this, and consolidating that experience here would help anyone who looks for this information later); any good resources available on the internet; and any suggestions on architecture and component design, including from any Microsoft people in this group. Every bit of information would help me gain a good understanding. Thanks in advance to those who share their wisdom through this question.
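
    One approach that often comes up for this kind of migration is to go incrementally: port the UI and orchestration layers to C# first, and keep stable native code behind an interop boundary until it can be rewritten. The sketch below shows the P/Invoke side of that arrangement in C#; the DLL name, entry point and signature are illustrative assumptions, not details from the question.

        using System;
        using System.Runtime.InteropServices;

        // Hypothetical interop layer: the managed code calls into the legacy
        // VC++ DLL (assumed here to be "LegacyEngine.dll") while the rest of
        // the product is ported to C#.
        internal static class LegacyEngine
        {
            // Assumed native export: int CalculatePrice(int productId, double* price)
            [DllImport("LegacyEngine.dll", CallingConvention = CallingConvention.Cdecl)]
            private static extern int CalculatePrice(int productId, out double price);

            public static double GetPrice(int productId)
            {
                double price;
                int rc = CalculatePrice(productId, out price);
                if (rc != 0)
                    throw new InvalidOperationException("Legacy engine returned error " + rc);
                return price;
            }
        }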

    Read the article

  • Efficient mapping for a particular finite integer set

    - by R..
    I'm looking for a small, fast (in both directions) bijective mapping between the following list of integers and a subset of the range 0-127: 0x200C, 0x200D, 0x200E, 0x200F, 0x2013, 0x2014, 0x2015, 0x2017, 0x2018, 0x2019, 0x201A, 0x201C, 0x201D, 0x201E, 0x2020, 0x2021, 0x2022, 0x2026, 0x2030, 0x2039, 0x203A, 0x20AA, 0x20AB, 0x20AC, 0x20AF, 0x2116, 0x2122 One obvious solution is: y = x>>2 & 0x40 | x & 0x3f; x = 0x2000 | y<<2 & 0x100 | y & 0x3f; Edit: I was missing some of the values, particularly 0x20Ax, which don't work with the above. Another obvious solution is a lookup table, but without making it unnecessarily large, a lookup table would require some bit rearrangement anyway and I suspect the whole task can be better accomplished with simple bit rearrangement. For the curious, those magic numbers are the only "large" Unicode codepoints that appear in legacy ISO-8859 and Windows codepages.
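
    For reference, here is a minimal sketch of the lookup-table fallback mentioned above, written in C# purely for illustration (the question itself is language-neutral). It trades the bit rearrangement for a 27-entry array plus a dictionary, so the forward index is simply the position in the list and both directions stay small and fast.

        using System.Collections.Generic;

        static class LegacyCodepointMap
        {
            // The codepoints listed in the question, in a fixed order.
            static readonly int[] Codepoints =
            {
                0x200C, 0x200D, 0x200E, 0x200F, 0x2013, 0x2014, 0x2015, 0x2017,
                0x2018, 0x2019, 0x201A, 0x201C, 0x201D, 0x201E, 0x2020, 0x2021,
                0x2022, 0x2026, 0x2030, 0x2039, 0x203A, 0x20AA, 0x20AB, 0x20AC,
                0x20AF, 0x2116, 0x2122
            };

            static readonly Dictionary<int, int> Forward = new Dictionary<int, int>();

            static LegacyCodepointMap()
            {
                for (int i = 0; i < Codepoints.Length; i++)
                    Forward[Codepoints[i]] = i;          // codepoint -> 0..26 (a subset of 0-127)
            }

            public static int ToSmall(int codepoint) { return Forward[codepoint]; }
            public static int ToCodepoint(int small) { return Codepoints[small]; }
        }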

    Read the article

  • How can I see how many MySQL connections are opened?

    - by Sahal
    How can I see how many connections are opened during one request (via mysql_connect)? I know that if I call mysql_connect 100 times, the function returns the already-open connection link as long as the parameters are the same; it will not request a new connection if one already exists. But I just want to confirm that mysql_connect is not requesting new connections. I am working with a legacy system that contains many mysql_connect calls. Is there any setting in Apache, or any way to log the number of connections in the Apache or MySQL log files? Please help.

    Read the article

  • Writing a C++ wrapper for a C library

    - by stripes
    I have a legacy C library, written in an OO style. Typical functions look like:

        LIB *lib_new();
        void lib_free(LIB *lib);
        int lib_add_option(LIB *lib, int flags);
        void lib_change_name(LIB *lib, char *name);

    I'd like to use this library in my C++ program, so I'm thinking a C++ wrapper is required. The above would all seem to map to something like:

        class LIB
        {
        public:
            LIB();
            ~LIB();
            int add_option(int flags);
            void change_name(char *name);
            ...
        };

    I've never written a C++ wrapper around C before, and can't find much advice about it. Is this a good/typical/sensible approach to creating a C++ wrapper for a C library?

    Read the article

  • How can I access a shared Exchange mailbox with IMAP (over telnet)?

    - by gauteh
    I have a mailbox which multiple users have access to. It works fine for me: I can add it in Outlook as an additional mailbox on my account and list all of its contents. I can also access my personal mailbox over IMAP, which I test by simply telnetting in and issuing LIST. The problem is that another user is having trouble accessing the shared mailbox through IMAP, and I want to test whether I can reach it with my own account. How can I do that in terms of IMAP commands? What I am doing now is:

        telnet mail.server
        01 LOGIN user pass
        02 LIST "" *
        03 LOGOUT

    Edit: If there is another way to test this, that is an equally good answer.

    Read the article

  • Is it better to combine Apache for file manipulation and uploads with Nginx for static file serving, or to use one of the two alone?

    - by user1032393
    Based on my research, nginx is considered ideal for serving static files and images. My application depends heavily on uploading images, rewriting them, and then serving them. Given that I currently have only one VPS, it has been suggested that I use nginx to serve the images and the website, and reverse-proxy to Apache (on the same VPS) to rewrite files with ImageMagick and handle the uploads. Which would be the best solution: Apache, Nginx, or Apache + Nginx? By "best" I mean minimal average RAM consumption while maintaining a decent load time, ideally under 2 seconds.

    Read the article

  • Is object clearing/array deallocation really necessary in VB6/VBA (Pros/Cons?)

    - by Oorang
    Hello, a lot of what I have learned about VB came from using static code analysis (particularly Aivosto's Project Analyzer), and one of the things it checks for is whether you cleared all objects and arrays. I used to do this blindly because PA said so, but now that I know a little more about the way VB releases resources, it seems these things should happen automatically. Is this a legacy habit from pre-VB6 days, or is there a reason why you should explicitly set objects back to Nothing and use Erase on arrays?

    Read the article

  • All of a Sudden, SQL Server Timeout

    - by Adinochestva
    Hey guys, we have a legacy VB.NET application that worked for years, but it suddenly stopped working yesterday and now gives SQL Server timeouts. Most parts of the application produce the timeout error; one example is the code below:

        command2 = New SqlCommand("select * from Acc order by AccDate,AccNo,AccSeq", SBSConnection2)
        reader2 = command2.ExecuteReader()
        If reader2.HasRows() Then
            While reader2.Read()
                If IndiAccNo <> reader2("AccNo") Then
                    CAccNo = CAccNo + 1
                    CAccSeq = 10001
                    IndiAccNo = reader2("AccNo")
                Else
                    CAccSeq = CAccSeq + 1
                End If
                command3 = New SqlCommand("update Acc Set AccNo=@NewAccNo,AccSeq=@NewAccSeq where AccNo=@AccNo and AccSeq=@AccSeq", SBSConnection3)
                command3.Parameters.Add("@AccNo", SqlDbType.Int).Value = reader2("AccNo")
                command3.Parameters.Add("@AccSeq", SqlDbType.Int).Value = reader2("AccSeq")
                command3.Parameters.Add("@NewAccNo", SqlDbType.Int).Value = CAccNo
                command3.Parameters.Add("@NewAccSeq", SqlDbType.Int).Value = CAccSeq
                command3.ExecuteNonQuery()
            End While
        End If

    It used to work, and now it times out in command3.ExecuteNonQuery(). Any ideas?

    Read the article

  • How do I return an object that is able to execute on the server?

    - by mafutrct
    Coming from a Java background, this is the way I'm thinking: the server provides an object to the client, and that object should execute on the server. Server:

        private string _S = "A";

        public interface IFoo
        {
            void Bar();
        }

        private class Foo : IFoo
        {
            public void Bar() { _S = "B"; }
        }

        public IFoo GetFoo() { return new Foo(); }

    Client:

        IFoo foo = serverChannel.GetFoo();
        foo.Bar();

    Remoting is legacy (everyone keeps pointing to WCF instead), and WCF basically does not support this at all ( http://stackoverflow.com/questions/2431510 ), so how should I implement this kind of behavior? Using third-party components is possible if required. I searched on SO but found no similar question; if this has indeed been answered before, let me know and I'll delete.
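
    For comparison, the shape usually suggested with WCF is to keep the state and the execution on the server and expose the operation through a session-bound service contract, rather than handing the client an object reference. A minimal sketch under that assumption (service and member names are illustrative, and a session-capable binding such as netTcpBinding or wsHttpBinding is required):

        using System.ServiceModel;

        [ServiceContract(SessionMode = SessionMode.Required)]
        public interface IFooService
        {
            [OperationContract]
            void Bar();                        // executes on the server
        }

        [ServiceBehavior(InstanceContextMode = InstanceContextMode.PerSession)]
        public class FooService : IFooService
        {
            private string _s = "A";           // per-session state held on the server

            public void Bar()
            {
                _s = "B";                      // mutated server-side, as in the question
            }
        }

        // Client side: the proxy forwards the call; no executable object crosses the wire.
        // IFooService foo = channelFactory.CreateChannel();
        // foo.Bar();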

    Read the article

  • Please Help Me Optimize This

    - by Zero
    I'm trying to optimize my .htaccess file to avoid performance issues. In my .htaccess file I have something that looks like this:

        RewriteEngine on
        RewriteCond %{HTTP_USER_AGENT} bigbadbot [NC,OR]
        RewriteCond %{HTTP_USER_AGENT} otherbot1 [NC,OR]
        RewriteCond %{HTTP_USER_AGENT} otherbot2 [NC]
        RewriteRule ^.* - [F,L]

    The first condition (bigbadbot) matches about 100 requests per second, whereas the other two conditions below it only match a few requests per hour. Since the first condition accounts for about 99% of the traffic, would it be better to split these into two separate rule sets? For example:

        RewriteEngine on
        RewriteCond %{HTTP_USER_AGENT} bigbadbot [NC]
        RewriteRule ^.* - [F,L]

        RewriteCond %{HTTP_USER_AGENT} otherbot1 [NC,OR]
        RewriteCond %{HTTP_USER_AGENT} otherbot2 [NC]
        RewriteRule ^.* - [F,L]

    Can someone tell me which would be better in terms of performance? Has anyone ever benchmarked this? Thanks!

    Read the article

  • DVI vs HDMI graphics card output

    - by Shack
    Asking on behalf of someone: I want to buy a new graphics card but do not know which output would be best, DVI or HDMI. The audio part of HDMI is not really required; I just need something to connect to my new 32-inch HD TV, which accepts both DVI and HDMI. I only need it for basic gaming, but mostly as a media center to watch movies and TV shows. Also, for Windows Media Center's TV application I need a TV tuner: should I get a graphics card with a built-in tuner, or a USB dongle?

    Read the article

  • SQL and IIS HDD configuration on a server

    - by john_1234
    Hi, I've just added a new production server and I was wondering if you could help me decide which configuration suits it best. Current configuration:

        40GB  ~ C (System)
        250GB ~ D (SQL - MDF & LDF)
        250GB ~ F (IIS)
        1TB   ~ E (storage of users' files)

    (Note: C and D are partitions on the same physical HDD.) I've heard that splitting the LDF and MDF onto separate disks can work wonders for performance, so the core of my question is how you would recommend doing that. For example, putting the MDF on the IIS drive is an option, yet I'm not so sure about it.

    Read the article

  • C++ - Implementing my own stream

    - by HardCoder1986
    Hello! My problem can be described as follows: I have some data which is actually an array and can be represented as a char* with a size. I also have a legacy function that takes an abstract std::istream object as a parameter and uses that stream to retrieve the data it operates on. So my question is: what would be an easy way to map my data to a std::istream object so that I can pass it to the function? I thought about creating a std::stringstream from my data, but that means copying, which (I assume) isn't the best solution. Any ideas how this could be done so that the std::istream operates on the data directly? Thank you.

    Read the article

  • Pointer aliasing in C++0x

    - by DeadMG
    I'm thinking about (just as an idea) disjointed pointer aliasing in C++0x. I was thinking about seeing if it could be implemented similarly to const correctness- that is, enforced by the compiler. What would be the requirements for such a thing? As this is more of a thought experiment, I'm perfectly happy to look at solutions that destroy legacy code or redefine half the language and that kind of thing. What I'd really rather not do is have, say, restrict from C99 where the programmer just promises it. It should be enforced.

    Read the article

  • How do I make two different Google Chrome profiles?

    - by ldigas
    I have a laptop which I use for the major part of my work and private life, and I would like to keep the two apart, starting with Google Chrome. So far, my Google Chrome contains everything mixed together: bookmarks, logins/passwords and so on that I use for work, that I use privately, and that I use in my spare time (funny YouTube videos, what else :). Is it possible to define multiple Google Chrome profiles, say "work" and "free time", so one can quickly switch between them, with bookmarks, logins and so on from one being invisible in the other, and vice versa? Also, is it possible to put them in a directory different from the default, so they can easily be backed up when needed? If so, could anyone describe how in simple terms, or share their experience if they have a better way of going about this?

    Read the article

  • Why do two Excel (2003) files that are exactly the same have different file sizes?

    - by meme
    I have two Excel files that are exactly the same in terms of content but differ in file size by quite a margin: one file is 37.5 KB while the other is 56 KB. The only difference I can see is the filenames, so I don't know why there is such a big difference. Is there some sort of history or other data stored with the file that is not visible to the user? If so, how would you delete it? Thanks for your help.

    Read the article

  • How is Javascript parsed/executed in a web browser exactly?

    - by ededed
    For example, when I access a web server, JavaScript will likely execute. From there on, how does the browser parse the JavaScript, "execute" the functions, manage the memory used, and so on? How does the browser "handle" all of that? Does it act like a compiler's lexer, passing over the code line by line and generating object code, or does it use the DOM and other specifications to handle memory? Also, how does this relate to updating the page and to other things executing concurrently, such as Flash, HTML and Java? Put simply: how does the browser handle the scripts, the memory, and the on-page logic from a JavaScript file?

    Read the article

  • Periods in Javascript function definition (function window.onload(){}) [closed]

    - by nemec
    Possible Duplicate: JavaScript Function Syntax Explanation: function object.myFunction(){..}

    I've seen some (legacy) JavaScript code recently that looks like:

        function window.onload() {
            // some code
        }

    This doesn't look like valid JavaScript to me, since you can't have a period in an identifier, but it seems to work in IE8. I assume it's the equivalent of:

        window.onload = function(){}

    I've tried the same code in Chrome and IE9, and both raise syntax errors, so am I correct in thinking that this "feature" of IE8 is a non-standard form of function definition that should be replaced? The code in question is only served to IE browsers, which is probably why I haven't run into this before.

    Read the article

  • Folder access per user

    - by user137670
    I have SBS 2003 R2 with a shared folder (the S: drive) holding shared information for everyone. When most users are in the shared folder, they see the folder size as 230 GB, but I have one user who only sees 1 GB. The PCs run XP Pro. I have checked quotas and no quota limit is enabled. I had the user try a different PC, with the same result. I then looked at the server, compared that user's profile with one that does not have the problem, and could not see anything different. What option am I missing, or do I have to rebuild the user? I have tried Google with different search terms but have not found any good clues.

    Read the article

  • warning: '0' flag ignored with precision and ‘%i’ gnu_printf format

    - by morpheous
    I am getting the following warning when compiling some legacy C code on Ubuntu Karmic, using gcc 4.4.1:

        src/filename.c:385: warning: '0' flag ignored with precision and ‘%i’ gnu_printf format

    The snippet which causes the warning is:

        char buffer[256];
        long fnum;
        /* some initialization code here ... */
        sprintf(buffer, "F%03.3i.DTA", (int)fnum); /* <- warning emitted here */

    I think I understand the warning, but I would like to check here whether I am right, and also find the definitive, correct way of resolving this.

    Read the article

  • ASP.Net Web API in Visual Studio 2010

    - by sreejukg
    Recently, for one of my projects, I needed to create a couple of services. In the past I would have used WCF, but since these services are going to be consumed over HTTP I decided on ASP.NET Web API and created a Web API project. The real issue is that ASP.NET Web API launched after Visual Studio 2010, and I had to use it in VS 2010 itself; by default there is no Web API template available in Visual Studio 2010. Microsoft has made available an update that installs ASP.NET MVC 4 with Web API into Visual Studio 2010. You can find the update at the URL below: http://www.microsoft.com/en-us/download/details.aspx?id=30683 Though the update is labelled ASP.NET MVC 4, it also includes ASP.NET Web API. Download the installation media and start the installer. As with any update, you need to agree to the terms and conditions, and the installation starts as soon as you click the Install button. If everything goes well, you will see a success message. Now open Visual Studio 2010 and you will see that the ASP.NET MVC 4 project template is available, so you can create an ASP.NET Web API project in Visual Studio 2010: when you create a new ASP.NET MVC 4 project, choose the Web API template. Further reading: http://www.asp.net/web-api/overview/getting-started-with-aspnet-web-api/tutorial-your-first-web-api http://www.asp.net/mvc/mvc4
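
    Once the update is installed, a controller in the new project looks roughly like the following minimal sketch, in the style of the class the Web API template generates (the controller name and data are made up for illustration):

        using System.Collections.Generic;
        using System.Web.Http;

        // With the default route (api/{controller}/{id}), GET api/products
        // returns the whole list and GET api/products/1 returns one item.
        public class ProductsController : ApiController
        {
            private static readonly List<string> Products =
                new List<string> { "Keyboard", "Mouse", "Monitor" };

            public IEnumerable<string> Get()
            {
                return Products;
            }

            public string Get(int id)
            {
                return Products[id];
            }
        }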

    Read the article

  • Parallelism in .NET – Part 3, Imperative Data Parallelism: Early Termination

    - by Reed
    Although simple data parallelism allows us to easily parallelize many of our iteration statements, there are cases that it does not handle well. In my previous discussion, I focused on data parallelism with no shared state, and where every element is being processed exactly the same. Unfortunately, there are many common cases where this does not happen. If we are dealing with a loop that requires early termination, extra care is required when parallelizing. Often, while processing in a loop, once a certain condition is met, it is no longer necessary to continue processing. This may be a matter of finding a specific element within the collection, or reaching some error case. The important distinction here is that it is often impossible to know, until runtime, what set of elements needs to be processed. In my initial discussion of data parallelism, I mentioned that this technique is a candidate when you can decompose the problem based on the data involved, and you wish to apply a single operation concurrently on all of the elements of a collection. This covers many of the potential cases, but sometimes, after processing some of the elements, we need to stop processing. As an example, let's go back to our previous Parallel.ForEach example with contacting a customer. However, this time, we'll change the requirements slightly: we'll add an extra condition – if the store is unable to email the customer, we will exit gracefully. The thinking here, of course, is that if the store is currently unable to email, the next time this operation runs it will handle the same situation, so we can just skip our processing entirely. The original, serial case, with this extra condition, might look something like the following:

        foreach(var customer in customers)
        {
            // Run some process that takes some time...
            DateTime lastContact = theStore.GetLastContact(customer);
            TimeSpan timeSinceContact = DateTime.Now - lastContact;

            // If it's been more than two weeks, send an email, and update...
            if (timeSinceContact.Days > 14)
            {
                // Exit gracefully if we fail to email, since this
                // entire process can be repeated later without issue.
                if (theStore.EmailCustomer(customer) == false)
                    break;
                customer.LastEmailContact = DateTime.Now;
            }
        }

    Here, we're processing our loop, but at any point, if we fail to send our email successfully, we just abandon this process and assume that it will get handled correctly the next time our routine is run. If we try to parallelize this using Parallel.ForEach, as we did previously, we'll run into an error almost immediately: the break statement we're using is only valid when enclosed within an iteration statement, such as foreach. When we switch to Parallel.ForEach, we're no longer within an iteration statement – we're a delegate running in a method. This needs to be handled slightly differently when parallelized.
    Instead of using the break statement, we need to utilize a new class in the Task Parallel Library: ParallelLoopState. The ParallelLoopState class is intended to give concurrently running loop bodies a way to interact with each other, and it provides us with a way to break out of a loop. In order to use this, we will use a different overload of Parallel.ForEach which takes an IEnumerable<T> and an Action<T, ParallelLoopState> instead of an Action<T>. Using this, we can parallelize the above operation by doing:

        Parallel.ForEach(customers, (customer, parallelLoopState) =>
        {
            // Run some process that takes some time...
            DateTime lastContact = theStore.GetLastContact(customer);
            TimeSpan timeSinceContact = DateTime.Now - lastContact;

            // If it's been more than two weeks, send an email, and update...
            if (timeSinceContact.Days > 14)
            {
                // Exit gracefully if we fail to email, since this
                // entire process can be repeated later without issue.
                if (theStore.EmailCustomer(customer) == false)
                    parallelLoopState.Break();
                else
                    customer.LastEmailContact = DateTime.Now;
            }
        });

    There are a couple of important points here. First, we didn't actually instantiate the ParallelLoopState instance; it was provided directly to us by the Parallel class. All we needed to do was change our lambda expression to reflect that we want to use the loop state, and the Parallel class creates an instance for our use. We also needed to change our logic slightly when we call Break(): since Break() doesn't stop the program flow within our block, we needed to add an else case so that the property on customer is only set when we succeeded. This same technique can be used to break out of a Parallel.For loop. That being said, there is a huge difference between using ParallelLoopState to cause early termination and using break in a standard iteration statement. When dealing with a loop serially, break will immediately terminate the processing within the closest enclosing loop statement. Calling ParallelLoopState.Break(), however, behaves very differently. The issue is that we're no longer processing one element at a time; if we break in one of our threads, other threads will likely still be executing. This leads to an important observation about termination of parallel code: early termination in parallel routines is not immediate – code will continue to run after you request termination. This may seem problematic at first, but it is something you just need to keep in mind while designing your routine. ParallelLoopState.Break() should be thought of as a request. We are telling the runtime that no elements that were in the collection past the element we're currently processing need to be processed, and leaving it up to the runtime to decide how to handle this as gracefully as possible. Although this may seem problematic at first, it is a good thing: if the runtime tried to immediately stop processing, many of our elements would be partially processed. It would be like putting a return statement at a random location in our loop body – which could have horrific consequences for our code's maintainability. In order to understand and effectively write parallel routines, we, as developers, need a subtle but profound shift in our thinking. We can no longer think in terms of sequential processes, but rather need to think in terms of requests to the system that may be handled differently than we'd first expect.
    This is more natural to developers who have dealt with asynchronous models previously, but it is an important distinction when moving to concurrent programming models. As an example, I'll discuss the Break() method. ParallelLoopState.Break() functions in a way that may be unexpected at first. When you call Break() from a loop body, the runtime will continue to process all elements of the collection that were found prior to the element that was being processed when the Break() method was called. This is done to keep the behavior of the Break() method as close to the behavior of the break statement as possible. We can see the behavior in this simple code:

        var collection = Enumerable.Range(0, 20);
        var pResult = Parallel.ForEach(collection, (element, state) =>
        {
            if (element > 10)
            {
                Console.WriteLine("Breaking on {0}", element);
                state.Break();
            }
            Console.WriteLine(element);
        });

    If we run this, we get a result that may seem unexpected at first:

        0
        2
        1
        5
        6
        3
        4
        10
        Breaking on 11
        11
        Breaking on 12
        12
        9
        Breaking on 13
        13
        7
        8
        Breaking on 15
        15

    What is occurring here is that we loop until we find the first element greater than 10. In this case, that happened when one of our threads reached element 11, and it requested that the loop stop by calling Break() at that point. However, the loop continued processing until all of the elements less than 11 were completed, then terminated. This means it guarantees that elements 9, 7, and 8 are completed before it stops processing. You can see that our other running threads each tried to break as well, but since Break() was called on the element with a value of 11, that call decides which elements (0-10) must be processed. If this behavior is not desirable, there is another option: instead of calling ParallelLoopState.Break(), you can call ParallelLoopState.Stop(). The Stop() method requests that the runtime terminate as soon as possible, without guaranteeing that any other elements are processed. Stop() will not stop the processing within an element, so elements already being processed will continue to be processed. It will prevent new elements, even ones found earlier in the collection, from being processed. Also, when Stop() is called, the ParallelLoopState's IsStopped property will return true. This lets longer-running processes poll for this value and return after performing any necessary cleanup. The basic rule of thumb for choosing between Break() and Stop() is the following:

        Use ParallelLoopState.Stop() when possible, since it terminates more quickly. This is particularly useful in situations where you are searching for an element or a condition in the collection. Once you've found it, you do not need to do any other processing, so Stop() is more appropriate.

        Use ParallelLoopState.Break() if you need to more closely match the behavior of the C# break statement.

    Both methods behave differently than our C# break statement. Unfortunately, when parallelizing a routine, more thought and care needs to be put into every aspect of your routine than you may otherwise expect. This is due to my second observation: parallelizing a routine will almost always change its behavior. This sounds crazy at first, but it's a concept so simple it's easy to forget. We're purposely telling the system to process more than one thing at the same time, which means that the sequence in which things get processed is no longer deterministic.
It is easy to change the behavior of your routine in very subtle ways by introducing parallelism.  Often, the changes are not avoidable, even if they don’t have any adverse side effects.  This leads to my final observation for this post: Parallelization is something that should be handled with care and forethought, added by design, and not just introduced casually.
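
    To round out the Break()/Stop() guidance above, here is a small sketch (not from the original post) of the search scenario where Stop() fits best: once any match is found, no other elements need to be processed, and the result may legitimately be any matching element rather than the first one in collection order.

        using System.Threading;
        using System.Threading.Tasks;

        static int FindAnyMatch(int[] collection, int target)
        {
            int found = -1;
            Parallel.ForEach(collection, (element, state) =>
            {
                if (element == target)
                {
                    Interlocked.Exchange(ref found, element);
                    state.Stop();          // request termination as soon as possible
                }
                // Longer-running bodies could also poll state.IsStopped and return early.
            });
            return found;                  // -1 if no element matched
        }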

    Read the article

  • Direct Links To Download Microsoft Office 2007 Products From Microsoft Download Servers

    - by Damodhar
    Downloading installers of Microsoft Office 2007 products from the Microsoft Office website is not an easy task. To download any of the Microsoft Office 2007 products, you need to sign in with a Windows Live/Passport/Hotmail account, fill in a big profile form and accept the terms and conditions. However, even after going through all the steps, all you get is a link to download 60-day trial version installers. This is not cool! How about links that allow you to download the required installers directly from Microsoft servers? Here are the links to download Microsoft Office 2007 applications and suites directly from Microsoft download servers:

    Microsoft Office 2007 Product Installers
        Microsoft Office Home and Student 2007
        Microsoft Office Standard 2007
        Microsoft Office Professional 2007
        Microsoft Office Small Business 2007
        Microsoft Office Enterprise 2007
        Microsoft Office OneNote 2007
        Microsoft Office Publisher 2007
        Microsoft Office Visio Professional 2007

    Microsoft Office 2007 Service Packs
        Microsoft Office 2007 Service Pack 2
        Microsoft Office 2007 Service Pack 1

    Microsoft Office 2007 Viewers & Compatibility Packs
        Microsoft Word Viewer 2007
        Microsoft PowerPoint Viewer 2007
        Microsoft Excel Viewer 2007
        Microsoft Visio Viewer 2007
        Microsoft Office 2007 Compatibility Pack for Word, Excel, and PowerPoint File Formats

    Read the article

  • Entity Association Mapping with Code First Part 1 : Mapping Complex Types

    - by mortezam
    Last week the CTP5 build of the new Entity Framework Code First has been released by data team at Microsoft. Entity Framework Code-First provides a pretty powerful code-centric way to work with the databases. When it comes to associations, it brings ultimate flexibility. I’m a big fan of the EF Code First approach and am planning to explain association mapping with code first in a series of blog posts and this one is dedicated to Complex Types. If you are new to Code First approach, you can find a great walkthrough here. In order to build a solid foundation for our discussion, we will start by learning about some of the core concepts around the relationship mapping.   What is Mapping?Mapping is the act of determining how objects and their relationships are persisted in permanent data storage, in our case, relational databases. What is Relationship mapping?A mapping that describes how to persist a relationship (association, aggregation, or composition) between two or more objects. Types of RelationshipsThere are two categories of object relationships that we need to be concerned with when mapping associations. The first category is based on multiplicity and it includes three types: One-to-one relationships: This is a relationship where the maximums of each of its multiplicities is one. One-to-many relationships: Also known as a many-to-one relationship, this occurs when the maximum of one multiplicity is one and the other is greater than one. Many-to-many relationships: This is a relationship where the maximum of both multiplicities is greater than one. The second category is based on directionality and it contains two types: Uni-directional relationships: when an object knows about the object(s) it is related to but the other object(s) do not know of the original object. To put this in EF terminology, when a navigation property exists only on one of the association ends and not on the both. Bi-directional relationships: When the objects on both end of the relationship know of each other (i.e. a navigation property defined on both ends). How Object Relationships Are Implemented in POCO domain models?When the multiplicity is one (e.g. 0..1 or 1) the relationship is implemented by defining a navigation property that reference the other object (e.g. an Address property on User class). When the multiplicity is many (e.g. 0..*, 1..*) the relationship is implemented via an ICollection of the type of other object. How Relational Database Relationships Are Implemented? Relationships in relational databases are maintained through the use of Foreign Keys. A foreign key is a data attribute(s) that appears in one table and must be the primary key or other candidate key in another table. With a one-to-one relationship the foreign key needs to be implemented by one of the tables. To implement a one-to-many relationship we implement a foreign key from the “one table” to the “many table”. We could also choose to implement a one-to-many relationship via an associative table (aka Join table), effectively making it a many-to-many relationship. Introducing the ModelNow, let's review the model that we are going to use in order to implement Complex Type with Code First. It's a simple object model which consist of two classes: User and Address. Each user could have one billing address. The Address information of a User is modeled as a separate class as you can see in the UML model below: In object-modeling terms, this association is a kind of aggregation—a part-of relationship. 
Aggregation is a strong form of association; it has some additional semantics with regard to the lifecycle of objects. In this case, we have an even stronger form, composition, where the lifecycle of the part is fully dependent upon the lifecycle of the whole. Fine-grained domain models The motivation behind this design was to achieve Fine-grained domain models. In crude terms, fine-grained means “more classes than tables”. For example, a user may have both a billing address and a home address. In the database, you may have a single User table with the columns BillingStreet, BillingCity, and BillingPostalCode along with HomeStreet, HomeCity, and HomePostalCode. There are good reasons to use this somewhat denormalized relational model (performance, for one). In our object model, we can use the same approach, representing the two addresses as six string-valued properties of the User class. But it’s much better to model this using an Address class, where User has the BillingAddress and HomeAddress properties. This object model achieves improved cohesion and greater code reuse and is more understandable. Complex Types: Splitting a Table Across Multiple Types Back to our model, there is no difference between this composition and other weaker styles of association when it comes to the actual C# implementation. But in the context of ORM, there is a big difference: A composed class is often a candidate Complex Type. But C# has no concept of composition—a class or property can’t be marked as a composition. The only difference is the object identifier: a complex type has no individual identity (i.e. no AddressId defined on Address class) which make sense because when it comes to the database everything is going to be saved into one single table. How to implement a Complex Types with Code First Code First has a concept of Complex Type Discovery that works based on a set of Conventions. The convention is that if Code First discovers a class where a primary key cannot be inferred, and no primary key is registered through Data Annotations or the fluent API, then the type will be automatically registered as a complex type. Complex type detection also requires that the type does not have properties that reference entity types (i.e. all the properties must be scalar types) and is not referenced from a collection property on another type. Here is the implementation: public class User{    public int UserId { get; set; }    public string FirstName { get; set; }    public string LastName { get; set; }    public string Username { get; set; }    public Address Address { get; set; }} public class Address {     public string Street { get; set; }     public string City { get; set; }            public string PostalCode { get; set; }        }public class EntityMappingContext : DbContext {     public DbSet<User> Users { get; set; }        } With code first, this is all of the code we need to write to create a complex type, we do not need to configure any additional database schema mapping information through Data Annotations or the fluent API. Database SchemaThe mapping result for this object model is as follows: Limitations of this mappingThere are two important limitations to classes mapped as Complex Types: Shared references is not possible: The Address Complex Type doesn’t have its own database identity (primary key) and so can’t be referred to by any object other than the containing instance of User (e.g. a Shipping class that also needs to reference the same User Address). 
No elegant way to represent a null reference There is no elegant way to represent a null reference to an Address. When reading from database, EF Code First always initialize Address object even if values in all mapped columns of the complex type are null. This means that if you store a complex type object with all null property values, EF Code First returns a initialized complex type when the owning entity object is retrieved from the database. SummaryIn this post we learned about fine-grained domain models which complex type is just one example of it. Fine-grained is fully supported by EF Code First and is known as the most important requirement for a rich domain model. Complex type is usually the simplest way to represent one-to-one relationships and because the lifecycle is almost always dependent in such a case, it’s either an aggregation or a composition in UML. In the next posts we will revisit the same domain model and will learn about other ways to map a one-to-one association that does not have the limitations of the complex types. References ADO.NET team blog Mapping Objects to Relational Databases Java Persistence with Hibernate
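
    The post relies on convention-based complex type discovery; when the convention does not kick in (for example, if a key-like property is later added), the registration can also be made explicit. The sketch below shows both the data-annotation and fluent-API forms roughly as they appeared around EF 4.1; the exact namespaces shifted between the CTP builds and later releases, so treat the details as illustrative.

        using System.ComponentModel.DataAnnotations;
        using System.Data.Entity;

        [ComplexType]                              // attribute form of the registration
        public class Address
        {
            public string Street { get; set; }
            public string City { get; set; }
            public string PostalCode { get; set; }
        }

        public class EntityMappingContext : DbContext
        {
            public DbSet<User> Users { get; set; }

            protected override void OnModelCreating(DbModelBuilder modelBuilder)
            {
                // Fluent-API form of the same registration.
                modelBuilder.ComplexType<Address>();
            }
        }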

    Read the article

  • SQL Server and Hyper-V Dynamic Memory - Part 1

    - by SQLOS Team
    SQL and Dynamic Memory Blog Post Series   Hyper-V Dynamic Memory is a new feature in Windows Server 2008 R2 SP1 that allows the memory assigned to guest virtual machines to vary according to demand. Using this feature with SQL Server is supported, but how well does it work in an environment where available memory can vary dynamically, especially since SQL Server likes memory, and is not very eager to let go of it? The next three posts will look at this question in detail. In Part 1 Serdar Sutay, a program manager in the Windows Hyper-V team, introduces Dynamic Memory with an overview of the basic architecture, configuration and monitoring concepts. In subsequent parts we will look at SQL Server memory handling, and develop some guidelines on using SQL Server with Dynamic Memory.   Part 1: Dynamic Memory Introduction   In virtualized environments memory is often the bottleneck for reaching higher VM densities. In Windows Server 2008 R2 SP1 Hyper-V introduced a new feature “Dynamic Memory” to improve VM densities on Hyper-V hosts. Dynamic Memory increases the memory utilization in virtualized environments by enabling VM memory to be changed dynamically when the VM is running.   This brings up the question of how to utilize this feature with SQL Server VMs as SQL Server performance is very sensitive to the memory being used. In the next three posts we’ll discuss the internals of Dynamic Memory, SQL Server Memory Management and how to use Dynamic Memory with SQL Server VMs.   Memory Utilization Efficiency in Virtualized Environments   The primary reason memory is usually the bottleneck for higher VM densities is that users tend to be generous when assigning memory to their VMs. Here are some memory sizing practices we’ve heard from customers:   ·         I assign 4 GB of memory to my VMs. I don’t know if all of it is being used by the applications but no one complains. ·         I take the minimum system requirements and add 50% more. ·         I go with the recommendations provided by my software vendor.   In reality correctly sizing a virtual machine requires significant effort to monitor the memory usage of the applications. Since this is not done in most environments, VMs are usually over-provisioned in terms of memory. In other words, a SQL Server VM that is assigned 4 GB of memory may not need to use 4 GB.   How does Dynamic Memory help?   Dynamic Memory improves the memory utilization by removing the requirement to determine the memory need for an application. Hyper-V determines the memory needed by applications in the VM by evaluating the memory usage information in the guest with Dynamic Memory. VMs can start with a small amount of memory and they can be assigned more memory dynamically based on the workload of applications running inside.   Overview of Dynamic Memory Concepts   ·         Startup Memory: Startup Memory is the starting amount of memory when Dynamic Memory is enabled for a VM. Dynamic Memory will make sure that this amount of memory is always assigned to the VMs by default.   ·         Maximum Memory: Maximum Memory specifies the maximum amount of memory that a VM can grow to with Dynamic Memory. ·         Memory Demand: Memory Demand is the amount determined by Dynamic Memory as the memory needed by the applications in the VM. In Windows Server 2008 R2 SP1, this is equal to the total amount of committed memory of the VM. 
·         Memory Buffer: Memory Buffer is the amount of memory assigned to the VMs in addition to their memory demand to satisfy immediate memory requirements and file cache needs.   Once Dynamic Memory is enabled for a VM, it will start with the “Startup Memory”. After the boot process Dynamic Memory will determine the “Memory Demand” of the VM. Based on this memory demand it will determine the amount of “Memory Buffer” that needs to be assigned to the VM. Dynamic Memory will assign the total of “Memory Demand” and “Memory Buffer” to the VM as long as this value is less than “Maximum Memory” and as long as physical memory is available on the host.   What happens when there is not enough physical memory available on the host?   Once there is not enough physical memory on the host to satisfy VM needs, Dynamic Memory will assign less than needed amount of memory to the VMs based on their importance. A concept known as “Memory Weight” is used to determine how much VMs should be penalized based on their needed amount of memory. “Memory Weight” is a configuration setting on the VM. It can be configured to be higher for the VMs with high performance requirements. Under high memory pressure on the host, the “Memory Weight” of the VMs are evaluated in a relative manner and the VMs with lower relative “Memory Weight” will be penalized more than the ones with higher “Memory Weight”.   Dynamic Memory Configuration   Based on these concepts “Startup Memory”, “Maximum Memory”, “Memory Buffer” and “Memory Weight” can be configured as shown below in Windows Server 2008 R2 SP1 Hyper-V Manager. Memory Demand is automatically calculated by Dynamic Memory once VMs start running.     Dynamic Memory Monitoring    In Windows Server 2008 R2 SP1, Hyper-V Manager displays the memory status of VMs in the following three columns:         ·         Assigned Memory represents the current physical memory assigned to the VM. In regular conditions this will be equal to the sum of “Memory Demand” and “Memory Buffer” assigned to the VM. When there is not enough memory on the host, this value can go below the Memory Demand determined for the VM. ·         Memory Demand displays the current “Memory Demand” determined for the VM. ·         Memory Status displays the current memory status of the VM. This column can represent three values for a VM: o   OK: In this condition the VM is assigned the total of Memory Demand and Memory Buffer it needs. o   Low: In this condition the VM is assigned all the Memory Demand and a certain percentage of the Memory Buffer it needs. o   Warning: In this condition the VM is assigned a lower memory than its Memory Demand. When VMs are running in this condition, it’s likely that they will exhibit performance problems due to internal paging happening in the VM.    So far so good! But how does it work with SQL Server?   SQL Server is aggressive in terms of memory usage for good reasons. This raises the question: How do SQL Server and Dynamic Memory work together? To understand the full story, we’ll first need to understand how SQL Server Memory Management works. This will be covered in our second post in “SQL and Dynamic Memory” series. 
Meanwhile if you want to dive deeper into Dynamic Memory you can check the below posts from the Windows Virtualization Team Blog:   http://blogs.technet.com/virtualization/archive/2010/03/18/dynamic-memory-coming-to-hyper-v.aspx   http://blogs.technet.com/virtualization/archive/2010/03/25/dynamic-memory-coming-to-hyper-v-part-2.aspx   http://blogs.technet.com/virtualization/archive/2010/04/07/dynamic-memory-coming-to-hyper-v-part-3.aspx   http://blogs.technet.com/b/virtualization/archive/2010/04/21/dynamic-memory-coming-to-hyper-v-part-4.aspx   http://blogs.technet.com/b/virtualization/archive/2010/05/20/dynamic-memory-coming-to-hyper-v-part-5.aspx   http://blogs.technet.com/b/virtualization/archive/2010/07/12/dynamic-memory-coming-to-hyper-v-part-6.aspx   - Serdar Sutay   Originally posted at http://blogs.msdn.com/b/sqlosteam/

    Read the article
