Search Results

Search found 9662 results on 387 pages for 'sales and operations plan'.


  • A little confused about MVC and where to put a database query

    - by jax
    OK, so my Joomla app is in MVC format. I am still a little confused about where to put certain operations: in the Controller or in the Model? This function below is in the controller; it gets called when &task=remove. Should the database stuff be in the Model? It does not seem to fit there because I have two models, editapp (display a single application) and allapps (display all the applications) - now which one would I put the delete operation in?

        /**
         * Delete an application
         */
        function remove() {
            global $mainframe;
            $cid = JRequest::getVar( 'cid', array(), '', 'array' );
            $db =& JFactory::getDBO();
            // if there are items to delete
            if (count($cid)) {
                $cids = implode( ',', $cid );
                $query = "DELETE FROM #__myapp_apps WHERE id IN ( $cids )";
                $db->setQuery( $query );
                if (!$db->query()) {
                    echo "<script> alert('".$db->getErrorMsg()."');window.history.go(-1); </script>\n";
                }
            }
            $mainframe->redirect( 'index.php?option=' . $option . '&c=apps' );
        }

    I am also confused about how the flow works. For example, there is a display() function in the controller that gets called by default. If I pass a task, does the display() function still run, or does it go directly to the function named by $task?

    Read the article

  • Threads are blocked in malloc and free, virtual size

    - by Albert Wang
    Hi, I'm running a 64-bit multi-threaded program on Windows Server 2003 (x64). It has run into a state where some of the threads seem to be blocked in malloc or free forever. The stack traces look like this:

        ntdll.dll!NtWaitForSingleObject() + 0xa bytes
        ntdll.dll!RtlpWaitOnCriticalSection() - 0x1aa bytes
        ntdll.dll!RtlEnterCriticalSection() + 0xb040 bytes
        ntdll.dll!RtlpDebugPageHeapAllocate() + 0x2f6 bytes
        ntdll.dll!RtlDebugAllocateHeap() + 0x40 bytes
        ntdll.dll!RtlAllocateHeapSlowly() + 0x5e898 bytes
        ntdll.dll!RtlAllocateHeap() - 0x1711a bytes
        MyProg.exe!malloc(unsigned __int64 size=0) Line 168 C
        MyProg.exe!operator new(unsigned __int64 size=1) Line 59 + 0x5 bytes C++

        ntdll.dll!NtWaitForSingleObject()
        ntdll.dll!RtlpWaitOnCriticalSection()
        ntdll.dll!RtlEnterCriticalSection()
        ntdll.dll!RtlpDebugPageHeapFree()
        ntdll.dll!RtlDebugFreeHeap()
        ntdll.dll!RtlFreeHeapSlowly()
        ntdll.dll!RtlFreeHeap()
        MyProg.exe!free(void * pBlock=0x000000007e8e4fe0) C

    BTW, the parameter values passed to operator new are probably not correct here, maybe due to optimization. Also, at the same time, I found in Process Explorer that the virtual size of this program is 10GB, but the private bytes and working set are very small (<2GB). We do have some threads using VirtualAlloc, but in a way that commits the memory in the call, and these threads are not blocked:

        m_pBuf = VirtualAlloc(NULL, m_size, MEM_COMMIT, PAGE_READWRITE);
        ......
        VirtualFree(m_pBuf, 0, MEM_RELEASE);

    This looks strange to me: it seems a lot of virtual space is reserved but not committed, and malloc/free is blocked by a lock. I'm guessing there may be some corruption in the memory/objects, so I plan to turn on gflags with page heap to troubleshoot this. Does anyone have similar experience with this? Could you share it with me so I may get more hints? Thanks a lot!

    Read the article

  • Python vs all the major professional languages [closed]

    - by Matt
    I've been reading up a lot lately on comparisons between Python and a bunch of the more traditional professional languages - C, C++, Java, etc. - mainly trying to find out if it's as good as those would be for my own purposes. I can't get this thought out of my head that it isn't good for 'real' programming tasks beyond automation and macros. Anyway, the general idea I got from about two hundred forum threads and blog posts is that for general, non-professional-level programs, scripts, and apps, and as long as it's a single programmer (you) writing it, a given program can be written quicker and more efficiently with Python than it could be with pretty much any other language. But once it's big enough to require multiple programmers, or more complex than a regular person (read: non-professional) would have any business making, it pretty much becomes instantly inferior to a million other languages. Is this idea more or less accurate? (I'm learning Python as my first language and want to be able to make any small app that I want, but I plan on learning C eventually too, because I want to get into driver writing eventually. So I've been trying to research each one's strengths and weaknesses as much as I can.) Anyway, thanks for any input.

    Read the article

  • PHP Access property of a class from within a class instantiated in the original class.

    - by Iain
    I'm not certain how to explain this with the correct terms, so maybe an example is the best method...

        $master = new MasterClass();
        $master->doStuff();

        class MasterClass {
            var $a;
            var $b;
            var $c;
            var $eventProccer;

            function MasterClass() {
                $this->a = 1;
                $this->eventProccer = new EventProcess();
            }

            function printCurrent() {
                echo '<br>'.$this->a.'<br>';
            }

            function doStuff() {
                $this->printCurrent();
                $this->eventProccer->DoSomething();
                $this->printCurrent();
            }
        }

        class EventProcess {
            function EventProcess() {}

            function DoSomething() {
                // trying to access and change the parent class' a,b,c properties
            }
        }

    My problem is that I'm not certain how to access the properties of the MasterClass from within the EventProcess::DoSomething() method. I would need to access, perform operations on, and update those properties. The a, b, c properties will be quite large arrays, and the DoSomething() method will be called many times during the execution of the script. Any help or pointers would be much appreciated :)

    Read the article

  • Function Returning Negative Value

    - by Geowil
    I still have not run it through enough tests; however, for some reason, using certain non-negative values, this function will sometimes pass back a negative value. I have done a lot of manual testing in a calculator with different values, but I have yet to see it display this same behavior. I was wondering if someone would take a look and see if I am missing something.

        float calcPop(int popRand1, int popRand2, int popRand3, float pERand, float pSRand)
        {
            return ((((((23000 * popRand1) * popRand2) * pERand) * pSRand) * popRand3) / 8);
        }

    The variables all contain randomly generated values:

        popRand1: between 1 and 30
        popRand2: between 10 and 30
        popRand3: between 50 and 100
        pSRand:   between 1 and 1000
        pERand:   between 1.0f and 5500.0f, which is then multiplied by 0.001f before being passed to the function above

    Edit: Alright, so after following the execution a bit more closely, it is not the fault of this function directly. It produces an infinitely positive float, which then flips negative when I use this code later on:

        pPMax = (int)pPStore;

    pPStore is a float that holds calcPop's return value. So the question now is: how do I stop the formula from doing this? Testing even with very high values in a calculator has never displayed this behavior. Is there something in how the compiler processes the order of operations that is causing this, or are my values simply going too high? If the latter, I could just increase the division to 16, I think.
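
    For reference, a quick range check (sketched in C# purely as a calculator; the bounds are the ones stated above) suggests the stated inputs cannot push a float anywhere near infinity, so the negative result most likely comes from the (int) cast overflowing Int32 rather than from the formula itself:

        using System;

        class RangeCheck
        {
            static void Main()
            {
                // Worst case with the stated ranges: popRand1 = 30, popRand2 = 30,
                // popRand3 = 100, pSRand = 1000, pERand = 5500 * 0.001f = 5.5
                double max = 23000.0 * 30 * 30 * 5.5 * 1000 * 100 / 8;

                Console.WriteLine(max);            // ~1.42e12, far below float.MaxValue (~3.4e38)
                Console.WriteLine(int.MaxValue);   // 2147483647, so (int)pPStore overflows long before that
            }
        }

    If an input really can exceed those bounds, clamping the float result before the integer cast is the usual guard.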

    Read the article

  • Recursive Iterators

    - by soandos
    I am having some trouble making an iterator that can traverse the following type of data structure. I have a class called Expression, which has one data member, a List<object>. This list can have any number of children, and some of those children might be other Expression objects. I want to traverse this structure and print out every non-list object (but I do want to print out the elements of the list, of course); before entering a list I want to return "begin nest", and after I have just exited a list I want to return "end nest". I was able to do this if I ignored the class wherever possible and just had List<object> objects with List<object> items whenever I wanted a sub-expression, but I would rather do away with that and instead have Expressions as the sublists (it would make it easier to do operations on the object). I am aware that I could use extension methods on List<object>, but it would not be appropriate (who wants an Evaluate method on their list that takes no arguments?). The code that I used to generate the original iterator (which works) is:

        public IEnumerator GetEnumerator() {
            return theIterator(expr).GetEnumerator();
        }

        private IEnumerable theIterator(object root) {
            if ((root is List<object>)) {
                yield return " begin nest ";
                foreach (var item in (List<object>)root) {
                    foreach (var item2 in theIterator(item)) {
                        yield return item2;
                    }
                }
                yield return " end nest ";
            }
            else
                yield return root;
        }

    A type swap of List<object> for Expression did not work and led to a StackOverflowException. How should the iterator be implemented? Update: Here is the swapped code:

        public IEnumerator GetEnumerator() {
            return this.GetEnumerator();
        }

        private IEnumerable theIterator(object root) {
            if ((root is Expression)) {
                yield return " begin nest ";
                foreach (var item in (Expression)root) {
                    foreach (var item2 in theIterator(item))
                        yield return item2;
                }
                yield return " end nest ";
            }
            else
                yield return root;
        }
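
    A hedged sketch of one way to fix this (names beyond Expression are assumptions). The immediate problem in the swapped code is that GetEnumerator() returns this.GetEnumerator(), so it calls itself until the stack overflows; it should delegate to the private iterator over this instead. Walking the child list directly also avoids emitting the begin/end markers twice per nested Expression:

        using System.Collections;
        using System.Collections.Generic;

        public class Expression : IEnumerable
        {
            private readonly List<object> items = new List<object>();

            public void Add(object item) { items.Add(item); }

            public IEnumerator GetEnumerator()
            {
                // Delegate to the recursive iterator; do NOT call this.GetEnumerator() here.
                return Iterate(this).GetEnumerator();
            }

            private static IEnumerable Iterate(object node)
            {
                Expression expr = node as Expression;
                if (expr != null)
                {
                    yield return " begin nest ";
                    foreach (object child in expr.items)        // walk the child list directly
                        foreach (object inner in Iterate(child))
                            yield return inner;
                    yield return " end nest ";
                }
                else
                {
                    yield return node;
                }
            }
        }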

    Read the article

  • Dynamically disable custom (VBA) Excel context menu buttons?

    - by Lopsided
    The Scenario

    Hi guys, I am about to add a few custom controls to the cell context menu in my Excel workbook using the instructions found on this MSDN page. The only problem I am having is that I need the items to only be enabled for a specific column/range of cells. I've looked around, and I've been unable to find any steps for this - there are some for VSTO development (written in C#), but that is not what I need. I plan to write this using the VBA IDE built into Office, and perhaps a bit of XML using the Custom UI Editor.

    The Question

    So basically, I'm looking for a way to run a function at the time the context menu is called (i.e., upon right-click) that validates the selection to make sure it is in the appropriate column. If it isn't, I would like my custom buttons to be greyed out.

    P.S. Please don't think I am asking you to write my code. Creating these buttons should be very simple, as I have created many before (albeit they were all Ribbon items), and I hope it is okay to ask for some quick assistance on this very specific issue. Thank you in advance!

    Read the article

  • Msysgit bash is horrendously slow in Windows 7

    - by Kevin L.
    I love git and use it on OS X pretty much constantly at home. At work, we use svn on Windows, but want to migrate to git as soon as the tools have fully matured (not just TortoiseGit, but also something akin to the really nice Visual Studio integration provided by VisualSVN). But I digress... I recently installed msysgit on my Windows 7 machine, and when using the included version of bash, it is horrendously slow. And not just the git operations; clear takes about five seconds. AAAAH! Has anyone experienced a similar issue? Edit: It appears that msysgit is not playing nicely with UAC; this might just be a tiny design oversight resulting from developing on XP, or from running Vista or 7 with UAC disabled. Starting Git Bash using Run as administrator results in the lightning speed I see with OS X (or on 7 after starting Git Bash without a network connection - see @Gauthier's answer). Edit 2: AH HA! See my answer.

    Read the article

  • Security considerations processing emails

    - by Timmy O' Tool
    I have a process that will be reading emails from an account. The objective of the process is to save to a database those emails that have image(s) as attachments. I will be saving the sender, subject, body, and image path (the image will be saved by the process). I will be showing this information on a page, so I would like to know all (or most of :) ) the security aspects to cover. I plan to sanitize the subject and body of the email. I can remove most of the tags; it would probably be enough to keep the <p> tag. I'm not sure I can trust just a sanitizer, so I would like to HTML-encode everything except the <p> tag after sanitizing, just in case. Any suggestions? Since I'm only accepting images as attachments, as I said above, is there any security risk I have to take into account in relation to the attachment? Thanks!
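
    The platform is not stated in the question, so purely as an illustration, here is how the "encode everything, then allow <p> back in" idea might look in C# (HttpUtility.HtmlEncode is in System.Web; the helper name is made up):

        using System.Web;

        public static class EmailBodySanitizer
        {
            // Encode the entire body, then selectively restore the bare <p> tags.
            // A <p> carrying attributes stays encoded, which is usually what you want.
            public static string EncodeAllButParagraphs(string body)
            {
                string encoded = HttpUtility.HtmlEncode(body);
                return encoded
                    .Replace("&lt;p&gt;", "<p>")
                    .Replace("&lt;/p&gt;", "</p>");
            }
        }

    Image attachments bring their own checks: verify the content type and extension, re-encode or resize the image server-side, and never serve it from a path built directly from the email's own data.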

    Read the article

  • MVC design for archived data view

    - by Hemant Tank
    Implementation of a standard archive process in ASP.NET MVC, with SQL Server 2005 as the backend. We have an existing web app built in MVC. We have an entity "Claim" and it has some child entities like ClaimDetails, Files, etc. - a pretty standard setup in the DB. Each entity has its own table and they are linked via FKs. Now, we need an "Archive" feature in the web app which will allow an admin to archive a Claim and its child entities. An archived Claim should become read-only when visited again. Here are some points on which I need your valued opinion:

    1. To keep it simple and scalable (for a few million records), for now we plan to simply add a bit field "Archived" to the Claim table in the DB and change the behavior accordingly in the web app.

    2. We have a 'Manage claim' page which renders a bunch of different views for the Claim and its child entities. Now, for a read-only view we can either reuse the same views or have a separate set of views. What do you suggest? At the controller level, we can identify an archived claim and select which view to render.

    3. At the model level, it would be great to be able to reuse the same model used for 'Manage claim', but it might not get us the "text" of some lookup fields. For example, Claim.BrandId is rendered as a dropdown in 'Manage claim' (requires only BrandId), but for the read-only view we need 'BrandText'.

    Any existing reference or architecture-level example would be great. Here's my previous SO post, but it is more about DB-level changes: Design a process to archive data (SQL Server 2005). Thank you.
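
    A hedged sketch of the controller-level switch mentioned in point 2, with the lookup text resolved for point 3; the repository, lookup service, view model, and view names are all hypothetical:

        using System;
        using System.Web.Mvc;

        public class ClaimController : Controller
        {
            private readonly IClaimRepository _claims;    // hypothetical abstractions
            private readonly ILookupService _lookups;

            public ClaimController(IClaimRepository claims, ILookupService lookups)
            {
                _claims = claims;
                _lookups = lookups;
            }

            public ActionResult Manage(Guid id)
            {
                Claim claim = _claims.GetById(id);

                var model = new ClaimViewModel
                {
                    Claim = claim,
                    // Resolve lookup text here so the read-only view does not need the dropdown's data source.
                    BrandText = _lookups.GetBrandText(claim.BrandId)
                };

                // Same model, different view: the read-only view renders plain text
                // where the editable view renders inputs and dropdowns.
                return View(claim.Archived ? "ManageReadOnly" : "Manage", model);
            }
        }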

    Read the article

  • Which OpenGL functions are not GPU-accelerated?

    - by Xavier Ho
    I was shocked when I read this (from the OpenGL wiki):

        glTranslate, glRotate, glScale
        Are these hardware accelerated?
        No, there are no known GPUs that execute this. The driver computes the matrix on the CPU and uploads it to the GPU.
        All the other matrix operations are done on the CPU as well: glPushMatrix, glPopMatrix, glLoadIdentity, glFrustum, glOrtho.
        This is the reason why these functions are considered deprecated in GL 3.0. You should have your own math library, build your own matrix, upload your matrix to the shader.

    For a very, very long time I thought most of the OpenGL functions use the GPU to do computation. I'm not sure if this is a common misconception, but after a while of thinking, this makes sense. Old OpenGL functions (2.x and older) are really not suitable for real-world applications, due to too many state switches. This makes me realise that, possibly, many OpenGL functions do not use the GPU at all. So, the question is: which OpenGL function calls don't use the GPU? I believe knowing the answer to the above question would help me become a better programmer with OpenGL. Please do share some of your insights.

    Read the article

  • NHibernate: What are child sessions and why and when should I use them?

    - by stefando
    In the comments on Ayende's blog post about auditing in NHibernate there is a mention of the need to use a child session: session.GetSession(EntityMode.Poco). As far as I understand it, it has something to do with the order of the SQL operations that session.Flush will emit. (For example: if I wanted to perform some delete operation in the pre-insert event but the session was already done with deleting operations, I would need some way to inject them in.) However, I did not find documentation about this feature and behavior. Questions:

    1. Is my understanding of child sessions correct?
    2. How and in which scenarios should I use them?
    3. Are they documented somewhere?
    4. Could they be used for session "scoping"? (For example: I open the master session, which will hold some data, and then I create 2 child sessions from the master one. I'd expect that the two child scopes will be separated but that they will share objects from the master session's cache. Is this the case?)
    5. Are they first-class citizens in NHibernate, or are they just a hack to support some edge-case scenarios?

    Thanks in advance for any info.
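
    For reference, a minimal sketch of how a child session is typically obtained and used for this kind of auditing write, assuming an NHibernate 2.x/3.x-era API (check GetSession(EntityMode) against your version; the helper and audit entry are made up). Inside an event listener you would take the parent from the event's session:

        using NHibernate;

        public static class AuditHelper
        {
            public static void SaveInChildSession(ISession parent, object auditEntry)
            {
                // The child shares the parent's connection and transaction but keeps
                // its own queue of pending actions, so flushing it does not disturb
                // the ordering of work already queued on the parent.
                using (ISession child = parent.GetSession(EntityMode.Poco))
                {
                    child.Save(auditEntry);
                    child.Flush();
                }
            }
        }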

    Read the article

  • Speed comparison - Template specialization vs. Virtual Function vs. If-Statement

    - by Person
    Just to get it out of the way... "Premature optimization is the root of all evil", "Make use of OOP", etc. I understand. I'm just looking for some advice regarding the speed of certain operations that I can store in my grey matter for future reference. Say you have an Animation class. An animation can be looped (plays over and over) or not looped (plays once), it may have unique frame times or not, etc. Let's say there are 3 of these "either or" attributes. Note that any method of the Animation class will at most check for one of these (i.e. this isn't a case of a giant branch of if-elseif). Here are some options.

    1) Give it boolean members for the attributes given above, and use an if statement to check against them when playing the animation to perform the appropriate action. Problem: a conditional is checked every single time the animation is played.

    2) Make a base Animation class, and derive other animation classes such as LoopedAnimation and AnimationUniqueFrames, etc. Problem: a vtable lookup upon every call to play the animation, given that you have something like a vector<Animation>. Also, making a separate class for every possible combination seems code-bloaty.

    3) Use template specialization, and specialize those functions that depend on those attributes, like template<bool looped, bool uniqueFrameTimes> class Animation. Problem: you couldn't just have a vector<Animation> for something's animations. Could also be bloaty.

    I'm wondering what kind of speed each of these options offers. I'm particularly interested in the 1st and 2nd options, because the 3rd doesn't allow one to iterate through a general container of Animations. In short, what is faster - a vtable fetch or a conditional?

    Read the article

  • Immutability and shared references - how to reconcile?

    - by davetron5000
    Consider this simplified application domain, a criminal investigative database:

    - Person: anyone involved in an investigation
    - Report: a bit of info that is part of an investigation
    - A Report references a primary Person (the subject of an investigation)
    - A Report has accomplices who are secondarily related (and could certainly be primary in other investigations or reports)

    These classes have ids that are used to store them in a database, since their info can change over time (e.g. we might find new aliases for a person, or add persons of interest to a report). If these are stored in some sort of database and I wish to use immutable objects, there seems to be an issue regarding state and referencing. Suppose I change some metadata about a Person. Since my Person objects are immutable, I might have some code like:

        class Person(
            val id: UUID,
            val aliases: List[String],
            val reports: List[Report]) {

          def addAlias(name: String) = new Person(id, name :: aliases, reports)
        }

    So my Person with a new alias becomes a new object, also immutable. If a Report refers to that person, but the alias was changed elsewhere in the system, my Report now refers to the "old" person, i.e. the person without the new alias. Similarly, I might have:

        class Report(val id: UUID, val content: String) {
          /** Adding more info to our report */
          def updateContent(newContent: String) = new Report(id, newContent)
        }

    Since these objects don't know who refers to them, it's not clear to me how to let all the "referrers" know that there is a new object available representing the most recent state. This could be done by having all objects "refresh" from a central data store, and having all operations that create new, updated objects store to the central data store, but this feels like a cheesy reimplementation of the underlying language's referencing; i.e. it would be clearer to just make these "secondary storable objects" mutable, so that if I add an alias to a Person, all referrers see the new value without doing anything. How is this dealt with when we want to avoid mutability, or is this a case where immutability is not helpful?

    Read the article

  • How can I speed up a 1800-line PHP include? It's slowing my pageload down to 10sec/view

    - by somerandomguy
    I designed my code to put all important functions in a single PHP file that's now 1800 lines long. I call it in other PHP files--AJAX processors, for example--with a simple "require_once("codeBank.php")". I'm discovering that it takes about 10 seconds to load up all those functions, even though I have nothing more than a few global arrays and a bunch of other functions involved. The main AJAX processor code, for example, is taking 8 seconds just to do a simple syntax verification (whose operational function is stored in codeBank.php). When I comment out the require_once, my AJAX response time speeds up from 10sec to 40ms, so it's pretty clear that PHP's trying to do something with those 1800 lines of functions. That's even with APC installed, which is surprising. What should I do to get my code speed back to the sub-100ms level? Am I failing to get the cache's benefit somehow? Do I need to cut that single function bank file into different pieces? Are there other subtle things that I could be doing to screw up my response time? Or barring all that, what are some tools to dig further into which PHP operations are hitting speed bumps?

    Read the article

  • Iterating over member typed collection fails when using untyped reference to generic object

    - by Alexander Pavlov
    Could someone clarify why iterate1() is not accepted by the compiler (Java 1.6)? I do not see why iterate2() and iterate3() are much better. This paragraph is added to avoid silly "Your post does not have much context to explain the code sections; please explain your scenario more clearly." protection.

        import java.util.Collection;
        import java.util.HashSet;

        public class Test<T> {
            public Collection<String> getCollection() {
                return new HashSet<String>();
            }

            public void iterate1(Test test) {
                for (String s : test.getCollection()) {
                    // ...
                }
            }

            public void iterate2(Test test) {
                Collection<String> c = test.getCollection();
                for (String s : c) {
                    // ...
                }
            }

            public void iterate3(Test<?> test) {
                for (String s : test.getCollection()) {
                    // ...
                }
            }
        }

    Compiler output:

        $ javac Test.java
        Test.java:11: incompatible types
        found   : java.lang.Object
        required: java.lang.String
                for (String s : test.getCollection()) {
                                                   ^
        Note: Test.java uses unchecked or unsafe operations.
        Note: Recompile with -Xlint:unchecked for details.
        1 error

    Read the article

  • Recreating a workflow instance with the same instance id

    - by Miron Brezuleanu
    We have some objects that have an associated workflow instance. The objects are identified by a GUID, which is also the GUID of the workflow instance associated with the object. We need to restart (see note 3 for the meaning of 'restart') the workflow instance if the workflow definition changed (there is no state in the workflow itself and it is written to support restarting in this manner). The restarting is performed by calling Terminate on the WorkflowInstance, then recreating the instance with the same GUID. The weird part is that this only works every other attempt: on odd attempts the workflow is stopped but for some reason doesn't restart; on even attempts the already-terminated workflow is recreated and started successfully. While I admit that using 'second hand' GUIDs is a sign of extraordinary cheapness (and something we plan to change), I'm wondering why this isn't working. Any ideas?

    Notes:
    1. The terminated workflow instance is passivated (waiting for a notification) at the time of the termination.
    2. The Terminate call successfully deletes the data persisted in the database for that instance.
    3. We're using 'restarting' with a meaning that's less common in the context of WF - not resuming a passivated instance, but forcing the workflow to start again from the beginning of its definition.

    Thanks!
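
    For reference, a hedged sketch (WF 3.x, System.Workflow.Runtime) of the terminate-and-recreate sequence described above; the workflow type name is hypothetical and runtime setup is omitted:

        using System;
        using System.Collections.Generic;
        using System.Workflow.Runtime;

        public static class WorkflowRestarter
        {
            public static void Restart(WorkflowRuntime runtime, Guid objectId)
            {
                WorkflowInstance old = runtime.GetWorkflow(objectId);
                old.Terminate("Workflow definition changed");

                // CreateWorkflow has an overload that takes an explicit instance id,
                // which is what lets the new instance reuse the object's GUID.
                WorkflowInstance fresh = runtime.CreateWorkflow(
                    typeof(MyObjectWorkflow),          // hypothetical workflow type
                    new Dictionary<string, object>(),  // named arguments, if any
                    objectId);
                fresh.Start();
            }
        }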

    Read the article

  • Problem consuming a dataset via a .NET web service from Flex-ActionScript

    - by DEH
    Hi, I am returning a .NET DataSet to Flex ActionScript via a web service. ActionScript snippet as follows:

        var websvc:WebService = new WebService();
        websvc.useProxy = false;
        websvc.wsdl = "http://localhost:13229/test/mysvc.asmx?WSDL";
        websvc.loadWSDL();

        var operation:Operation = new Operation(null, "GetData");
        operation.arguments.command = "xx_gethierdata_" + mode + "_" + identifier;
        operation.addEventListener(ResultEvent.RESULT, onResultHandler, false, 0, true);
        operation.addEventListener(FaultEvent.FAULT, onFaultHandler, false, 0, true);
        operation.resultFormat = "object";
        websvc.operations = [operation];
        operation.send();

    Once in the onResultHandler function I have access to the datatable - I then want to grab the column names. The following code outputs my column names:

        for each (var tcolumn:Object in datatable.Columns) {
            trace('Column:' + tcolumn);
        }

    This works OK, but the column names are encoded, so a column name that is actually "1-9" is output as "_x005B_1-9_x005D_". Does anyone know the best way to decode the column name? I could replace all the encoding strings, but surely there is a better way? Thanks
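
    For what it's worth, those escape sequences are the standard XML name encoding that DataSet serialization applies to column names that are not valid XML element names (note that "_x005B_1-9_x005D_" decodes to "[1-9]", brackets included), and System.Xml.XmlConvert can round-trip them on the .NET side. A hedged sketch; the cleanest fix is usually to give the DataTable columns XML-safe names before returning them, so the Flex client never sees the escapes:

        using System;
        using System.Data;
        using System.Xml;

        class ColumnNameDemo
        {
            static void Main()
            {
                // "[1-9]" is not a valid XML element name, so serialization escapes it.
                string escaped = XmlConvert.EncodeLocalName("[1-9]");
                Console.WriteLine(escaped);                         // _x005B_1-9_x005D_
                Console.WriteLine(XmlConvert.DecodeName(escaped));  // [1-9]

                // Server-side workaround: rename the column to something that is
                // already a valid XML name before returning the DataSet.
                DataTable table = new DataTable("Data");
                table.Columns.Add("[1-9]");
                table.Columns["[1-9]"].ColumnName = "Range1To9";    // illustrative safe name
            }
        }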

    Read the article

  • How to cast C struct just another struct type if their memory size are equal?

    - by Eonil
    I have 2 matrix structs that hold the same data but have different forms, like these:

        // Matrix type 1.
        typedef float Scalar;
        typedef struct { Scalar e[4]; } Vector;
        typedef struct { Vector e[4]; } Matrix;

        // Matrix type 2 (you may know this if you're an iPhone developer)
        struct CATransform3D {
            CGFloat m11, m12, m13, m14;
            CGFloat m21, m22, m23, m24;
            CGFloat m31, m32, m33, m34;
            CGFloat m41, m42, m43, m44;
        };
        typedef struct CATransform3D CATransform3D;

    Their memory sizes are equal, so I believe there is a way to convert between these types without any pointer operations or copying, like this:

        // Implemented in an external lib.
        CATransform3D CATransform3DMakeScale (CGFloat sx, CGFloat sy, CGFloat sz);

        Matrix m = (Matrix)CATransform3DMakeScale( 1, 2, 3 );

    Is this possible? Currently the compiler prints an "error: conversion to non-scalar type requested" message.

    Read the article

  • Locking database edit by key name

    - by Will Glass
    I need to prevent simultaneous edits to a database field. Users are executing a push operation on a structured data field, so I want to sequence the operations, not simply ignore one edit and take the second. Essentially I want to do

        synchronized(key name) {
            push value onto the database field
        }

    and set up the synchronized item so that only one operation on "key name" will occur at a time. (Note: I'm simplifying; it's not always a simple push.) A crude way to do this would be global synchronization, but that bottlenecks the entire app. All I need to do is sequence two simultaneous writes with the same key, which is a rare but annoying occurrence. This is a web-based Java app, written with Spring (and using JPA/MySQL). The operation is triggered by a user web service call (the root cause is when a user sends two simultaneous HTTP requests with the same key). I've glanced through the Doug Lea/Josh Bloch/et al. Concurrency in Action, but I don't see an obvious solution. Still, this seems simple enough that I feel there must be an elegant way to do it.

    Read the article

  • Problem with Google Calendar API "getTimes()" method

    - by Raffo
    I'm trying to get several pieces of information from Google Calendar's API in Java. While trying to access information about the time of an event, I hit a problem. I do the following:

        CalendarService myService = new CalendarService("project_name");
        myService.setUserCredentials("user", "pwd");

        URL feedUrl = new URL("http://www.google.com/calendar/feeds/default/private/full");
        CalendarQuery q = new CalendarQuery(feedUrl);
        CalendarEventFeed resultFeed = myService.query(q, CalendarEventFeed.class);

        for (int i = 0; i < resultFeed.getEntries().size(); i++) {
            CalendarEventEntry g = resultFeed.getEntries().get(i);
            List<When> t = g.getTimes();
            // other operations
        }

    The list t obtained with getTimes() on a CalendarEventEntry is always empty, and I can't figure out why. This seems strange to me, since for a calendar event there should always be a description of when it happens... what am I missing??

    Read the article

  • Will TFS 2010 support non-contiguous merging?

    - by steve_d
    I know that merging non-contiguous changesets at once may not be a good idea. However, there is at least one situation in which merging non-contiguous changesets is (probably) not going to break anything: when there are no intervening changes on the affected individual files. (At least, it wouldn't break any worse than a series of cherry-picked merges checked in each time, and at least this way you would discover breakage before checking in.) For instance, let's say you have a Main and a Development branch. They start out identical (e.g. after a release). They have two files, foo.cs and bar.cs. Alice makes a change in Development\foo.cs and checks it in as changeset #1001. Bob makes a change in Development\bar.cs and checks it in as #1002. Alice makes another change to Development\foo.cs and checks it in as #1003. Now we could in theory merge both changes #1001 and #1003 from dev to main in a single operation. If we try to merge at the branch level, dev-to-main, we will have to do it as two operations. In this simple, contrived example it's simple enough to merge the one file, but in the real world, where there would be many files involved, it's not so simple. Non-contiguous merging is one of the reasons given for why "merge by workitem" is not implemented in TFS.

    Read the article

  • Entity Framework 4 Entity with EntityState of Unchanged firing update

    - by Andy
    I am using EF 4, mapping all CUD operations for my entities to sprocs. I have two tables, ADDRESS and PERSON. A PERSON can have multiple ADDRESSes associated with them. Here is the code I am running:

        Person person = (from p in context.People
                         where p.PersonUID == 1
                         select p).FirstOrDefault();

        Address address = (from a in context.Addresses
                           where a.AddressUID == 51
                           select a).FirstOrDefault();

        address.AddressLn2 = "Test";
        context.SaveChanges();

    The Address being updated is associated with the Person I am retrieving, although they are not explicitly linked in any way in the code. When context.SaveChanges() executes, not only does the Update sproc for my Address entity get fired (as you would expect), but so does the Update sproc for the Person entity - even though you can see there was no change made to the Person entity. When I check the EntityState of both objects before the context.SaveChanges() call, I see that my Address entity has an EntityState of "Modified" and my Person entity has an EntityState of "Unchanged". Why is the Update sproc being called for the Person entity? Is there a setting of some sort that I can use to prevent this from happening?
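
    A hedged diagnostic sketch that can help narrow this down: dump everything the ObjectStateManager considers pending just before SaveChanges, including relationship entries, since a change to an independent association involving Person could be what triggers its sproc even while the entity itself reports "Unchanged":

        using System;
        using System.Data;
        using System.Data.Objects;

        static class ChangeDump
        {
            public static void DumpPendingChanges(ObjectContext context)
            {
                var entries = context.ObjectStateManager.GetObjectStateEntries(
                    EntityState.Added | EntityState.Modified | EntityState.Deleted);

                foreach (ObjectStateEntry entry in entries)
                {
                    if (entry.IsRelationship)
                    {
                        Console.WriteLine("Relationship entry: {0}", entry.State);
                        continue;
                    }

                    Console.WriteLine("{0}: {1}", entry.Entity.GetType().Name, entry.State);
                    if (entry.State == EntityState.Modified)
                        foreach (string property in entry.GetModifiedProperties())
                            Console.WriteLine("  modified: {0}", property);
                }
            }
        }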

    Read the article

  • Deterministic floating point and .NET

    - by code2code
    How can I guarantee that floating point calculations in a .NET application (say in C#) always produce the same bit-exact result, especially when using different versions of .NET and running on different platforms (x86 vs x86_64)? Inaccuracies of floating point operations do not matter. In Java I'd use strictfp. In C/C++ and other low-level languages this problem is essentially solved by accessing the FPU / SSE control registers, but that's probably not possible in .NET. Even with control of the FPU control register, the JIT of .NET will generate different code on different platforms. Something like HotSpot would be even worse in this case...

    Why do I need it? I'm thinking about writing a real-time strategy (RTS) game which heavily depends on fast floating point math together with a lock-stepped simulation. Essentially I will only transmit user input across the network. This also applies to other games which implement replays by storing the user input.

    Not options:

    - decimals (too slow)
    - fixed point values (too slow and cumbersome when using sqrt, sin, cos, tan, atan...)
    - updating state across the network like an FPS: sending position information for hundreds or a few thousand units is not an option

    Any ideas?

    Read the article

  • Repository Pattern Standardization of methods

    - by Nix
    All, I am trying to find out the correct definition of the repository pattern. My original understanding was this (extremely dumbed down):

    - Separate your business objects from your data objects
    - Standardize access methods in the data access layer

    I have really seen 2 different implementations.

    Implementation 1:

        public interface IRepository<T> {
            List<T> GetAll();
            void Create(T p);
            void Update(T p);
        }

        public interface IProductRepository : IRepository<Product> {
            // Extension methods if needed
            List<Product> GetProductsByCustomerID();
        }

    Implementation 2:

        public interface IProductRepository {
            List<Product> GetAllProducts();
            void CreateProduct(Product p);
            void UpdateProduct(Product p);
            List<Product> GetProductsByCustomerID();
        }

    Notice the first is a generic Get/Update/GetAll, etc., while the second is more of what I would call "DAO-like". Both share an abstraction over your data entities, which I like, but I can do the same with a simple DAO. However, the second piece - standardized access operations - is something I see value in: if you implemented this enterprise-wide, people would easily know the set of access methods for your repository. Am I wrong to assume that the standardization of access to data is an integral piece of this pattern? Rhino has a good article on implementation 1, and of course MS has a vague definition, and an example of implementation 2 is here.
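
    A hedged sketch of how implementation 1 is often rounded out, with a generic base class supplying the standardized operations so each concrete repository only adds its entity-specific queries; the data-access bodies are stubbed and all type names are illustrative:

        using System;
        using System.Collections.Generic;

        public class Product
        {
            public int Id;
            public int CustomerId;
        }

        public interface IRepository<T>
        {
            List<T> GetAll();
            void Create(T entity);
            void Update(T entity);
        }

        public interface IProductRepository : IRepository<Product>
        {
            List<Product> GetProductsByCustomerId(int customerId);
        }

        public abstract class RepositoryBase<T> : IRepository<T>
        {
            // Shared ORM/ADO.NET plumbing would live here, written once.
            public virtual List<T> GetAll() { throw new NotImplementedException(); }
            public virtual void Create(T entity) { throw new NotImplementedException(); }
            public virtual void Update(T entity) { throw new NotImplementedException(); }
        }

        public class ProductRepository : RepositoryBase<Product>, IProductRepository
        {
            // Only the entity-specific query needs to be written per repository.
            public List<Product> GetProductsByCustomerId(int customerId)
            {
                throw new NotImplementedException();
            }
        }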

    Read the article
