Search Results

Search found 13403 results on 537 pages for 'epm performance tuning'.

Page 487/537 | < Previous Page | 483 484 485 486 487 488 489 490 491 492 493 494  | Next Page >

  • JMS message. Model to include data or pointers to data?

    - by John
    I am trying to resolve a design difference of opinion where neither of us has experience with JMS. We want to use JMS to communicate between a J2EE application and a stand-alone application when a new event occurs. We would be using a single point-to-point queue. Both sides are Java-based. The question is whether to send the event data itself in the JMS message body or to send a pointer to the data so that the stand-alone program can retrieve it. Details below.

    I have a J2EE application that supports data entry of new and updated persons and related events. The person records and associated events are written to an Oracle database. There are also stand-alone, separate programs that contribute new person and event records to the database. When a new event occurs through any of 5-10 different application functions, I need to notify remote systems through an outbound interface using an industry-specific standard messaging protocol. The outbound interface has been designed as a stand-alone application to support scalability through asynchronous operation and by moving it to a separate server.

    The J2EE application currently has most of the data in memory at the time the event is entered. The data consists of approximately six different objects: a person object plus related objects, some with multiple instances, for an average total size in the range of 3,000 to 20,000 bytes. Some special cases could be many times this amount.

    From a performance and reliability perspective, should I model the JMS message to pass all the data needed to create the interface message, or model the JMS message to contain record keys for the data and have the stand-alone Java application retrieve the data to create the interface message?
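
    To make the two options concrete, here is a minimal sketch of both message shapes using the standard javax.jms API. The queue name, property names and the EventData payload class are illustrative assumptions, not part of the original design.

        import javax.jms.*;

        public class EventPublisher {
            private final Session session;
            private final MessageProducer producer;

            public EventPublisher(Connection connection) throws JMSException {
                session = connection.createSession(false, Session.AUTO_ACKNOWLEDGE);
                Queue queue = session.createQueue("outbound.events"); // hypothetical queue name
                producer = session.createProducer(queue);
            }

            // Option 1: send only record keys; the consumer re-reads Oracle to build the message
            public void publishKeys(long personId, long eventId) throws JMSException {
                MapMessage msg = session.createMapMessage();
                msg.setLong("personId", personId);
                msg.setLong("eventId", eventId);
                producer.send(msg);
            }

            // Option 2: send the event data itself; no second read, but 3-20 KB (or more) per message
            public void publishPayload(java.io.Serializable eventData) throws JMSException {
                ObjectMessage msg = session.createObjectMessage(eventData);
                producer.send(msg);
            }
        }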

    Read the article

  • How to give new life into a five years old, simple but reliable PHP form?

    - by Sam
    Hi all. I have a script in PHP 5.2. I want to use a simple form. I found something a programmer made for me about 5 years ago. When I use it, PHP now outputs an error unless I set register_long_arrays = On; then it works fine. On the PHP website, however, it says:

        Warning: This feature has been DEPRECATED as of PHP 5.3.0. Relying on this feature is highly discouraged. It's recommended to turn them off, for performance reasons. Instead, use the superglobal arrays, like $_GET.

    Should I listen to PHP's warning, or just enable the option and keep using my old form happily? If the former, then how/where do I change this simple form so it does not rely on the deprecated setting? Your answer is much appreciated.

    form.htm

        <html><body>
        <form method="POST" action="form_sent.php">
        ...
        </form>
        </body></html>

    form_sent.php

        <html><body>
        <?php
        $email = $HTTP_POST_VARS[email];
        $mailto = "[email protected]";
        $mailsubj = "A Form was Sent from Website!";
        $mailhead = "From: $email\n";
        reset($HTTP_POST_VARS);
        $mailbody = "Values submitted from web site form:\n";
        while (list($key, $val) = each($HTTP_POST_VARS)) {
            $mailbody .= "$key : $val\n";
        }
        if (!eregi("\n", $HTTP_POST_VARS[email])) {
            mail($mailto, $mailsubj, $mailbody, $mailhead);
        }
        ?>
        <b>Form Sent. Thank you.</b>
        </body></html>
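
    For reference, a minimal sketch of the same mail handler rewritten against the $_POST superglobal, without the deprecated eregi(). Variable names follow the original script; the recipient address is a placeholder.

        <?php
        // form_sent.php rewritten to use $_POST instead of $HTTP_POST_VARS (sketch)
        $email    = isset($_POST['email']) ? $_POST['email'] : '';
        $mailto   = 'you@example.com';              // placeholder recipient
        $mailsubj = 'A Form was Sent from Website!';
        $mailhead = "From: $email\n";

        $mailbody = "Values submitted from web site form:\n";
        foreach ($_POST as $key => $val) {
            $mailbody .= "$key : $val\n";
        }

        // reject header-injection attempts, as the old eregi("\n", ...) check did
        if ($email !== '' && strpos($email, "\n") === false) {
            mail($mailto, $mailsubj, $mailbody, $mailhead);
        }
        ?>
        <b>Form Sent. Thank you.</b>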

    Read the article

  • Using array of Action() in a lambda expression

    - by Sean87
    I want to do some performance measurement for a method that does some work with int arrays, so I wrote the following class:

        public class TimeKeeper
        {
            public TimeSpan Measure(Action[] actions)
            {
                var watch = new Stopwatch();
                watch.Start();
                foreach (var action in actions)
                {
                    action();
                }
                return watch.Elapsed;
            }
        }

    But I cannot call the Measure method as in the example below:

        var elapsed = new TimeKeeper();
        elapsed.Measure(
            () => new Action[]
            {
                FillArray(ref a, "a", 10000),
                FillArray(ref a, "a", 10000),
                FillArray(ref a, "a", 10000)
            });

    I get the following errors:

        Cannot convert lambda expression to type 'System.Action[]' because it is not a delegate type
        Cannot implicitly convert type 'void' to 'System.Action'
        Cannot implicitly convert type 'void' to 'System.Action'
        Cannot implicitly convert type 'void' to 'System.Action'

    Here is the method that works with the arrays:

        private void FillArray(ref int[] array, string name, int count)
        {
            array = new int[count];
            for (int i = 0; i < array.Length; i++)
            {
                array[i] = i;
            }
            Console.WriteLine("Array {0} is now filled up with {1} values", name, count);
        }

    What am I doing wrong?
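
    A sketch of one way to make the call compile: pass an Action[] directly and wrap each FillArray call in its own parameterless lambda, so nothing runs until Measure invokes it. This assumes the snippet sits inside the class that defines FillArray and that a local int[] a exists.

        int[] a = null;
        var keeper = new TimeKeeper();

        TimeSpan elapsed = keeper.Measure(new Action[]
        {
            // each lambda defers the call; FillArray itself returns void
            () => FillArray(ref a, "a", 10000),
            () => FillArray(ref a, "a", 10000),
            () => FillArray(ref a, "a", 10000)
        });

        Console.WriteLine("Took {0} ms", elapsed.TotalMilliseconds);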

    Read the article

  • C++ AMP, for loops to parallel_for_each loop

    - by user1430335
    I'm converting an algorithm to make use of the massive acceleration that C++ AMP provides. The stage I'm at is putting the for loops into the well-known parallel_for_each loop. Normally this should be a straightforward task, but it appears more complex than I first thought. It's a nested loop which I increment using steps of 4 per iteration:

        for(int j = 0; j < height; j += 4, data += width * 4 * 4)
        {
            for(int i = 0; i < width; i += 4)
            {

    The trouble I'm having is the use of the index. I can't seem to find a way to properly fit this into the parallel_for_each loop. Using an index of rank 2 is the way to go, but manipulating it via branching will harm the performance gain. I found a similar post: Controlling the index variables in C++ AMP. It also deals with index manipulation, but the increment aspect doesn't cover my issue.

    With kind regards,
    Forcecast
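
    One way to avoid branching on the index, shown as a sketch rather than the actual kernel: launch one thread per 4x4 block, so the compute domain is (height/4, width/4), and recover j, i and the buffer offset arithmetically from the rank-2 index. The array_view name, element type and offset arithmetic are illustrative assumptions.

        #include <amp.h>
        using namespace concurrency;

        void process_blocks(int* data, int width, int height)
        {
            array_view<int, 1> view(width * height, data);   // flat view of the buffer

            // one thread per 4x4 block: extent is (height/4, width/4)
            parallel_for_each(extent<2>(height / 4, width / 4),
                [=](index<2> idx) restrict(amp)
            {
                int j = idx[0] * 4;                 // row of the block
                int i = idx[1] * 4;                 // column of the block
                int base = j * width + i;           // replaces the "data +=" pointer math

                for (int y = 0; y < 4; ++y)
                    for (int x = 0; x < 4; ++x)
                    {
                        int v = view[base + y * width + x];
                        view[base + y * width + x] = v;   // actual block body goes here
                    }
            });
            view.synchronize();
        }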

    Read the article

  • VB6 Game Development : Don't ask me why :-)

    - by CVS-2600Hertz-wordpress-com
    Hi All, I am developing a game in VB6 (please don't ask me why :) ). The storyboard is ready and a rough implementation is underway. I am following a "pure software rendering" approach (i.e. no DirectX, no OpenGL, etc.). Amongst many others, the following "serious" problems exist:

    - 2D alpha transparency required to implement overlays.
    - Parallax implementation to give a depth-of-field illusion.
    - Capturing mouse-scroll events globally (as in FPSes; mapping them to changing weapon).
    - Async sound playback with absolutely "zero lag".

    Any ideas, anyone? Please suggest any well-documented library/OCX or sample code, and please do provide solutions with maximum performance and as little overhead as possible. Also, anyone who has developed any games and would be open to sharing her/his code would be highly appreciated (any well-acknowledged VB games whose source code I can study?).

    Thank You

    Read the article

  • What is better for a student programming in C++ to learn for writing GUIs: C# or Qt?

    - by flashnik
    I'm a teacher (instructor) of CS at a university. The course is based on Cormen and Knuth, and students program algorithms in C++. But sometimes it is good to show how an algorithm works, or just the result of a task, through a GUI. Also, in my opinion it's very important to be able to write full programs. They will have courses covering GUI development, but three years later, in fact just before graduation. I think that they should be able to write simple GUI applications earlier, so I want to teach them that. What do you think is more useful for them to learn: programming a GUI with Qt, or writing the GUI in C# and calling an unmanaged C++ library?

    Update. For developing C++ applications students use MS Visual Studio, so C# is already installed. But Qt, AFAIK, can also be integrated into VS. I see the following pros for C# (some were suggested in the answers here):

    - The need to make an additional layer. It's more work, but it forces you to explicitly specify the contract between the GUI and the data processing. The border between GUI and algorithms becomes very clear.
    - It's more popular among employers. At least in Russia, where we live. It's rather common to write performance-critical algorithms in C++ and P/Invoke them from a well-looking C# application/ASP.NET website. Maybe it is not so widespread in the rest of the world, but in Russia Windows is very popular, especially in companies and corporations, so most B2B applications are Windows applications.
    - Rapid development. It's much quicker to code in .NET than in C++ for many reasons.

    And the cons are that it's a new language, with its own specifics, for the students, and the mess of invoking calls to the library.
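
    For context, the C# route usually means exporting a C-style function from the C++ algorithm library and calling it via P/Invoke. A minimal sketch is below; the DLL name and exported function are made up for illustration.

        using System;
        using System.Runtime.InteropServices;

        static class NativeAlgorithms
        {
            // hypothetical export from the students' C++ library:
            //   extern "C" __declspec(dllexport) int SortArray(int* data, int length);
            [DllImport("algorithms.dll", CallingConvention = CallingConvention.Cdecl)]
            public static extern int SortArray(int[] data, int length);
        }

        class Program
        {
            static void Main()
            {
                var data = new[] { 5, 3, 1, 4, 2 };
                NativeAlgorithms.SortArray(data, data.Length);   // C++ does the work
                Console.WriteLine(string.Join(", ", data));      // C# shows the result
            }
        }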

    Read the article

  • Best way to keep a .net client app updated with status of another application

    - by rwmnau
    I have a Windows service that's running all the time and takes some action every 15 minutes. I also have a client WinForms app that displays some information about what the service is doing. I'd like the forms application to keep itself updated with a recent status, but I'm not sure if polling every second is a good move performance-wise.

    When it starts, my Windows service opens a WCF named pipe to receive queries (from my client form). Every second, a timer on the WinForm sends a query to the pipe and then displays the results. If the pipe isn't there, the form displays that the service isn't running.

    Is that the best way to do this? If my service opens the pipe when it starts, will it always stay open (until I close it or my service stops)? In addition to polling the service, maybe there's some way for the service to notify any watching applications of certain events, like starting and stopping processing? That way, I could poll less, since I'd presumably know about big events already, and would only be polling for progress. Anything that I'm missing?
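
    One commonly used option for the "service notifies watchers" part is a WCF duplex contract over the same named pipe: the form registers a callback channel and the service pushes events to it. A minimal sketch of the contracts and the push call follows; interface and method names are illustrative.

        using System.ServiceModel;

        // callback implemented by the WinForms client
        public interface IStatusCallback
        {
            [OperationContract(IsOneWay = true)]
            void ProcessingStarted();

            [OperationContract(IsOneWay = true)]
            void ProcessingStopped();
        }

        [ServiceContract(CallbackContract = typeof(IStatusCallback))]
        public interface IStatusService
        {
            [OperationContract]
            void Subscribe();          // client calls this once to register

            [OperationContract]
            string GetCurrentStatus(); // still available for occasional polling
        }

        // inside the service implementation (sketch):
        //   var callback = OperationContext.Current.GetCallbackChannel<IStatusCallback>();
        //   store it, then call callback.ProcessingStarted() when the 15-minute job begins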

    Read the article

  • Using an unencoded key vs a real Key, benefits?

    - by user246114
    Hi, I am reading the docs for key generation in App Engine. I'm not sure what effect using a simple String key has compared to a real Key. For example, when my users sign up, they must supply a unique username:

        class User {
            /** Key type = unencoded string. */
            @PrimaryKey
            private String name;
        }

    Now, if I understand the docs correctly, I should still be able to generate named keys and entity groups using this, right?

        // Find an instance of this entity:
        User user = pm.findObjectById(User.class, "myusername");

        // Create a new obj and put it in the same entity group:
        Key key = new KeyFactory.Builder(
            User.class.getSimpleName(), "myusername")
            .addChild(Goat.class.getSimpleName(), "baa").getKey();

        Goat goat = new Goat();
        goat.setKey(key);
        pm.makePersistent(goat);

    The Goat instance should now be in the same entity group as that User, right? I mean, there's no problem with leaving the User's primary key as just the raw String? Is there a performance benefit to using a Key, though? Should I update to:

        class User {
            /** Key type = Key. */
            @PrimaryKey
            private Key key;
        }

        // Generate like:
        Key key = KeyFactory.createKey(
            User.class.getSimpleName(), "myusername");
        user.setKey(key);

    It's almost the same thing; I'd still just be generating the Key using the unique username anyway.

    Thanks

    Read the article

  • What collection object is appropriate for fixed ordering of values?

    - by makerofthings7
    Scenario: I am tracking several performance counters and have a CounterDescription[] that correlates to a DataSnapshot[], where CounterDescription[n] describes the data loaded within DataSnapshot[n]. I want to expose an easy-to-use API within C# that will allow for the easy and efficient expansion of the arrays. For example:

        CounterDescription[0] = Humidity;
        DataSnapshot[0] = .9;

        CounterDescription[1] = Temp;
        DataSnapshot[1] = 63;

    My upload object is defined like this. Note how my intent is to correlate many data snapshots with a datetime reference, using the offset of the data to refer to its meaning. This was determined to be the most efficient way to store the data on the back end, and it has now reflected itself in the following structure:

        public class myDataObject
        {
            [DataMember]
            public SortedDictionary<DateTime, float[]> Pages { get; set; }

            /// <summary>
            /// An array that identifies what each position in the array is supposed to be
            /// </summary>
            [DataMember]
            public CounterDescription[] Counters { get; set; }
        }

    I will need to expand each of these arrays (float[] and CounterDescription[]), but whatever data already exists must stay at that relative offset. Which .NET objects support this? I think arrays, LinkedList<T>, and List<T> are able to keep the data fixed in the right locations. What do you think?
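
    A small illustration of the trade-off, as a sketch separate from the question: List<T>.Add only appends, so existing offsets never move and positions stay addressable by index, which LinkedList<T> does not offer.

        var counters = new List<string> { "Humidity", "Temp" };
        var snapshot = new List<float>  { 0.9f, 63f };

        // appending a new counter leaves every existing offset untouched
        counters.Add("Pressure");
        snapshot.Add(1013f);

        // offsets still line up: counters[1] describes snapshot[1]
        Console.WriteLine("{0} = {1}", counters[1], snapshot[1]);   // Temp = 63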

    Read the article

  • Custom ADO.NET provider to intercept and modify SQL queries.

    - by Faisal
    Our client has an application that stores blobs in a database, which has now grown enough to impact the performance of SQL Server. To overcome this issue, we are planning to offload all blobs to the file system and leave the path of each file in a new column in the user's table. For example, if the user has a table docs with columns id, name and content (blob), we would ask him to add a new column 'filepath' to this table. Our client is willing to make this change in the database. But when it comes to changing the SQL queries that read and write this table, they are not ready to accept it. Actually, they don't want any change that results in recompilation and deployment.

    Now we are planning to write a custom ADO.NET provider that will:

    - intercept the select queries
    - add a column 'filepath' at the end of the select statement
    - retrieve the result set and modify the 'content' column value based on the 'filepath' value

    Is there any use case that you think will certainly fail with this approach? I know this sounds dirty, but do we have a better way?
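
    To make the idea concrete, here is a rough sketch of the rewrite-and-patch step only, not a full DbProviderFactory implementation. The table and column names come from the question; everything else, including the deliberately naive SQL rewriting, is illustrative.

        using System;
        using System.Data;
        using System.Data.SqlClient;
        using System.IO;

        static class BlobOffloadShim
        {
            // naive rewrite: assumes a simple "SELECT <cols> FROM docs ..." statement
            static string AppendFilePathColumn(string sql)
            {
                int fromIdx = sql.IndexOf(" FROM ", StringComparison.OrdinalIgnoreCase);
                return sql.Substring(0, fromIdx) + ", filepath" + sql.Substring(fromIdx);
            }

            public static DataTable Execute(SqlConnection con, string originalSql)
            {
                using (var cmd = new SqlCommand(AppendFilePathColumn(originalSql), con))
                using (var adapter = new SqlDataAdapter(cmd))
                {
                    var table = new DataTable();
                    adapter.Fill(table);
                    foreach (DataRow row in table.Rows)
                    {
                        var path = row["filepath"] as string;
                        if (!string.IsNullOrEmpty(path))
                            row["content"] = File.ReadAllBytes(path); // blob now lives on disk
                    }
                    table.Columns.Remove("filepath"); // hide the helper column from callers
                    return table;
                }
            }
        }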

    Read the article

  • Domain model for an online WYSIWYG webpage generator / runtime

    - by CharlieBrown
    Hi all, I'm using C#, MVC, NHibernate and StructureMap as my IoC container, and I need some ideas regarding my domain model. The application I'm working on has two parts: an Authoring part and a Runtime part. The idea is to allow the user to create a webpage in Authoring (mostly a form, actually) by choosing from a set of predefined controls. That webpage will later be used as a form in a call center environment (the Runtime part), or may be used in an intranet portal, etc. Basically something similar to what a CMS would do. The difference is, of course, that the webpage/form the author generates will be used and filled in at runtime, and that authors should be able to freely create the webpage they want without limitations.

    I have a draft working model that allows a RunController to iterate over the ScriptPage (my class for the "generated webpage") Controls collection and uses partial views to render each of them. It works kind of fine. Basically I have a common ScriptControl class, and then I can create, for example, a TextInputControl or a DropDownControl by inheriting from that base class. I can also figure out the Authoring part of the app, although that will surely be fun in itself. :)

    The biggest problem I have now is persistence. In order to be flexible, I want to be able to add more controls, and template controls (think of an Address composite control), in separate DLLs, so I think having a relational model that handles every possible control is not the way to go. My current thinking is to use a kind of object store: binary-serializing the ScriptPage object that contains the List collection and deserializing it at Runtime, but I'm not sure how well that will work with NHibernate and how good the performance will be. Serializing a small "page" with 10 controls results in 7964 bytes, for example.

    Any ideas out there? Thanks in advance; excuse the length. ;)

    Read the article

  • How to get a category listing from Magento?

    - by alex
    I want to create a page in Magento that shows a visual representation of the categories. For example:

        CATEGORY
            product 1
            product 2
        ANOTHER CATEGORY
            product 3

    My problem is that their database is organised very differently from what I've seen in the past. They have tables dedicated to data types like varchar, int, etc. I assume this is for performance or similar. I haven't found a way to use MySQL to query the database and get a list of categories. I'd then like to match these categories to products, to get a listing of products for each category. Unfortunately Magento seems to make this very difficult.

    Also, I have not found a method that will work from within a page block. I have created showcase.phtml and put it in the XML layout, and it displays and runs its PHP code. I was hoping for something easy like looping through $this->getAllCategories(); and then a nested loop inside with something like $category->getChildProducts();

    Can anyone help me? Thank you!

    Read the article

  • Is there a set based solution for this problem?

    - by NYSystemsAnalyst
    We have a table set up as follows:

        |ID|EmployeeID|Date     |Category       |Hours|
        |1 |1         |1/1/2010 |Vacation Earned|2.0  |
        |2 |2         |2/12/2010|Vacation Earned|3.0  |
        |3 |1         |2/4/2010 |Vacation Used  |1.0  |
        |4 |2         |5/18/2010|Vacation Earned|2.0  |
        |5 |2         |7/23/2010|Vacation Used  |4.0  |

    The business rules are:

    - Vacation balance is calculated as vacation earned minus vacation used.
    - Vacation used is always applied against the oldest vacation earned amount first.

    We need to return the rows for vacation earned that have not been offset by vacation used. If vacation used has only offset part of a vacation earned record, we need to return that record showing the difference. For example, using the above table, the result set would look like:

        |ID|EmployeeID|Date     |Category       |Hours|
        |1 |1         |1/1/2010 |Vacation Earned|1.0  |
        |4 |2         |5/18/2010|Vacation Earned|1.0  |

    Note that record 2 was eliminated because it was completely offset by used time, but records 1 and 4 were only partially used, so they were calculated and returned as such.

    The only way we have thought of to do this is to get all of the vacation earned records into a temporary table. Then, get the total vacation used and loop through the temporary table, deleting the oldest record and subtracting that value from the total vacation used until the total vacation used is zero. We could clean it up for the case where the remaining vacation used covers only part of the oldest vacation earned record. This would leave us with just the outstanding vacation earned records.

    This works, but it is very inefficient and performs poorly. Also, the performance will just degrade over time as more and more records are added. Are there any suggestions for a better solution, preferably set based? If not, we'll just have to go with this.
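
    A sketch of one set-based approach: compute a running total of earned hours per employee in date order and compare it with that employee's total used hours. The table name VacationLedger is an assumption; a correlated subquery is used instead of a window function for compatibility with older SQL Server versions.

        WITH earned AS (
            SELECT e.ID, e.EmployeeID, e.[Date], e.Hours,
                   (SELECT COALESCE(SUM(e2.Hours), 0)
                      FROM VacationLedger e2
                     WHERE e2.EmployeeID = e.EmployeeID
                       AND e2.Category   = 'Vacation Earned'
                       AND (e2.[Date] < e.[Date]
                            OR (e2.[Date] = e.[Date] AND e2.ID <= e.ID))) AS RunningEarned
              FROM VacationLedger e
             WHERE e.Category = 'Vacation Earned'
        ),
        used AS (
            SELECT EmployeeID, COALESCE(SUM(Hours), 0) AS TotalUsed
              FROM VacationLedger
             WHERE Category = 'Vacation Used'
             GROUP BY EmployeeID
        )
        SELECT earned.ID, earned.EmployeeID, earned.[Date],
               'Vacation Earned' AS Category,
               CASE WHEN earned.RunningEarned - COALESCE(used.TotalUsed, 0) < earned.Hours
                    THEN earned.RunningEarned - COALESCE(used.TotalUsed, 0)
                    ELSE earned.Hours
               END AS Hours
          FROM earned
          LEFT JOIN used ON used.EmployeeID = earned.EmployeeID
         WHERE earned.RunningEarned > COALESCE(used.TotalUsed, 0);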

    Read the article

  • CI + Joomla 1.5

    - by DMin
    Hi, this is something that I just cooked up with Joomla and CodeIgniter (CI). I wrote my database-intensive application in CodeIgniter and the frontend is Joomla. I'm using Jumi (a Joomla extension) so I can include the CI files inside Joomla, to basically insert the content generated by CI into Joomla articles.

    Problem is, you can't include CI files directly from Joomla using Jumi, because CI tends to route the pages, so instead of seeing your Joomla page with the CI content you'd be redirected to the CI page itself. I did a little workaround for this: I made an additional page that basically does a cURL request to the CI page, gets the data and echoes it out. From Jumi, I include this cURL page instead.

    A couple of questions. I've seen at least a few posts saying that CI + Joomla is difficult to do (link).

    1) Do you see any glaring security issues or possible performance issues?
    2) Do you know of a better way to implement this?
    3) What do you think of this? Do you think this is a good way to do it?

    There is one component out there that plugs CI into Joomla, but it requires you to have a fresh CI install, it allows only one controller, and the download link is down as well.
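
    For reference, a minimal sketch of the kind of cURL bridge page described above; the CI URL and the fallback text are illustrative.

        <?php
        // bridge page included by Jumi: fetch the CI-rendered fragment and echo it
        $ch = curl_init('http://example.com/ci/index.php/reports/summary'); // hypothetical CI route
        curl_setopt($ch, CURLOPT_RETURNTRANSFER, true);
        curl_setopt($ch, CURLOPT_TIMEOUT, 10);
        $html = curl_exec($ch);
        if ($html === false) {
            $html = '<p>Report temporarily unavailable.</p>'; // fail soft inside the article
        }
        curl_close($ch);
        echo $html; // Jumi includes this file, so the output lands inside the Joomla article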

    Read the article

  • Using same log4j logger in standalone java application

    - by ktaylorjohn
    I have some code which is a standalone Java application comprising 30+ classes. Most of these inherit from some other base classes. Each and every class has this method to get and use a log4j logger:

        public static Logger getLogger() {
            if (logger != null)
                return logger;
            try {
                PropertyUtil propUtil = PropertyUtil.getInstance("app-log.properties");
                if (propUtil != null && propUtil.getProperties() != null)
                    PropertyConfigurator.configure(propUtil.getProperties());
                logger = Logger.getLogger(ExtractData.class);
                return logger;
            } catch (Exception exception) {
                exception.printStackTrace();
            }
            return null;
        }

    A) My question is whether this should be refactored into some common logger which is initialized once and used across all classes. Is that a better practice?
    B) If yes, how can this be done? How can I pass the logger around?
    C) This is actually being used in the code not as Logger.debug() but as getLogger().debug(). What is the impact of this in terms of performance?
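
    For comparison, the conventional log4j pattern, shown as a sketch: run PropertyConfigurator once at startup and give each class its own cheap static logger, so there is nothing to pass around. Class names follow the question; the Main class is illustrative.

        import org.apache.log4j.Logger;
        import org.apache.log4j.PropertyConfigurator;

        public class Main {
            public static void main(String[] args) {
                // configure log4j exactly once, at application startup
                PropertyConfigurator.configure("app-log.properties");
                new ExtractData().run();
            }
        }

        class ExtractData {
            // one static logger per class, named after the class
            private static final Logger logger = Logger.getLogger(ExtractData.class);

            void run() {
                logger.debug("extract started"); // no getLogger() indirection per call
            }
        }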

    Read the article

  • Please tell me what is wrong with my threading!!!

    - by kiddo
    I have a function where I compress a bunch of files into a single compressed file. It is taking a long time (to compress), so I tried implementing threading in my application. Say I have 20 files for compression: I separated that as 5*4=20. In order to do that, I have separate variables (which are used for compression) for all 4 threads, to avoid locks, and I wait until the 4 threads finish. Now the threads are working, but I see no improvement in their performance. Normally it will take 1 minute for 20 files (for example); after implementing threading there is only a 3 or 5 second difference, sometimes none at all. Here I will show the code for one thread (it is the same for the other 3 threads):

        //main thread
        myClassObject->thread1 = AfxBeginThread((AFX_THREADPROC)MyThreadFunction1, myClassObject);
        ....
        HANDLE threadHandles[4];
        threadHandles[0] = myClassObject->thread1->m_hThread;
        ....
        WaitForSingleObject(myClassObject->thread1->m_hThread, INFINITE);

        UINT MyThreadFunction(LPARAM lparam)
        {
            CMerger* myClassObject = (CMerger*)lparam;
            CString outputPath = myClassObject->compressedFilePath.GetAt(0); // contains the o/p path
            wchar_t* compressInputData[] = {myClassObject->thread1outPath,
                                            COMPRESS, (wchar_t*)(LPCTSTR)(outputPath)};
            HINSTANCE loadmyDll;
            loadmyDll = LoadLibrary(myClassObject->thread1outPath);
            fp_Decompress callCompressAction = NULL;
            int getCompressResult = 0;
            myClassObject->MyCompressFunction(compressInputData, loadClient7zdll, callCompressAction,
                                              myClassObject->thread1outPath, getCompressResult,
                                              minIndex, myClassObject->firstThread, myClassObject);
            return 0;
        }

    Read the article

  • Merging two arrays in PHP

    - by Industrial
    Hi everyone, I am trying to create a new array from two current arrays. I tried array_merge, but it will not give me what I want. $array1 is a list of keys that I pass to a function. $array2 holds the results from that function, but it doesn't contain entries for keys whose results are not available. So I want to make sure that all requested keys come out, with empty values where needed, as in the $result array shown below. It goes a little something like this:

        $array1 = array('item1', 'item2', 'item3', 'item4');
        $array2 = array(
            'item1' => 'value1',
            'item2' => 'value2',
            'item3' => 'value3'
        );

    Here's the result I want:

        $result = array(
            'item1' => 'value1',
            'item2' => 'value2',
            'item3' => 'value3',
            'item4' => ''
        );

    It can be done this way, but I don't think it's a good solution: I really don't like to take the easy way out and suppress PHP errors by adding @:s in the code. This sample would obviously throw notices, since 'item4' is not in $array2:

        foreach ($keys as $k => $v) {
            @$array[$v] = $items[$v];
        }

    So, what's the fastest (performance-wise) way to accomplish the same result?
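
    One short way to get that result without suppressing notices, as a sketch (array_fill_keys requires PHP 5.2 or later):

        // build a default entry for every requested key, then let the real results overwrite them
        $result = array_merge(array_fill_keys($array1, ''), $array2);

        // or, if the defaults should really be null instead of an empty string:
        // $result = array_merge(array_fill_keys($array1, null), $array2);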

    Read the article

  • Is there a faster way to access a property member of a class using reflection?

    - by Ross Goddard
    I am currently using the following code to access a property of an object using reflection:

        Dim propInfo As Reflection.PropertyInfo = myType.GetProperty(propName)
        Dim objValue As Object = propInfo.GetValue(myObject, Nothing)

    I am having some issues with speed, since this type of code is being called many times and is causing some slowdown. I have been looking into using Reflection.Emit or dynamic methods, but I am not sure exactly how to make use of them.

    Background information: I am creating a list of a subset of the properties of the object, associating them with some meta information (such as whether they can be loaded from the database or XML, whether they are editable, whether the user can see them). This is for later consumption, so we can write code such as:

        For Each prop As BaseWrapper In graphNode.NodeProperties
            prop.LoadFromDataRow(dr)
        Next

    The application makes heavy use of having access to this list. The problem is that on the initial load of a project, a large number of objects are being created that make use of this, so for each object created it is looping through this code a number of times. I initially tried adding each property to the list manually, but this ran into problems with not everything being initialized at the correct time, and some other issues. If there is no other good way, then I may have to rethink some of the design and see what else can be done to improve the performance.
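
    A sketch of one alternative that avoids Reflection.Emit: build the getter once with an expression tree (System.Linq.Expressions, .NET 3.5+), compile it, and cache the delegate. The class and variable names are illustrative, and the cache below is not thread-safe.

        Imports System.Collections.Generic
        Imports System.Linq.Expressions

        Public Class PropertyGetterCache
            Private Shared ReadOnly Getters As New Dictionary(Of String, Func(Of Object, Object))

            Public Shared Function GetValue(ByVal myObject As Object, ByVal propName As String) As Object
                Dim cacheKey As String = myObject.GetType().FullName & "." & propName
                Dim getter As Func(Of Object, Object) = Nothing

                If Not Getters.TryGetValue(cacheKey, getter) Then
                    ' (obj) => CType(CType(obj, ConcreteType).Property, Object)
                    Dim param As ParameterExpression = Expression.Parameter(GetType(Object), "obj")
                    Dim typed As Expression = Expression.Convert(param, myObject.GetType())
                    Dim body As Expression = Expression.Convert(Expression.Property(typed, propName), GetType(Object))
                    getter = Expression.Lambda(Of Func(Of Object, Object))(body, param).Compile()
                    Getters(cacheKey) = getter
                End If

                Return getter(myObject)
            End Function
        End Class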

    Read the article

  • Faster way to convert from a String to generic type T when T is a valuetype?

    - by Kumba
    Does anyone know of a fast way in VB to go from a String to a generic type T constrained to a value type (Of T As Structure), when I know that T will always be some numeric type? This is too slow for my taste:

        Return DirectCast(Convert.ChangeType(myStr, GetType(T)), T)

    But it seems to be the only sane method of getting from a String to T. I've tried using Reflector to see how Convert.ChangeType works, and while I can convert from the String to a given numeric type via a hacked-up version of that code, I have no idea how to jam that type back into T so it can be returned.

    I'll add that part of the speed penalty I'm seeing (in a timing loop) is because the return value is getting assigned to a Nullable(Of T) value. If I strongly type my class for a specific numeric type (e.g. UInt16), then I can vastly increase the performance, but then the class would need to be duplicated for each numeric type that I use. It'd almost be nice if there was a converter to/from T while working on it in a generic method/class. Maybe there is and I'm oblivious to its existence?
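
    A sketch of one workaround: cache a parsing delegate per numeric type so the type dispatch happens once per type rather than inside every call, and fall back to Convert.ChangeType for anything unregistered. The class name and the set of registered types are illustrative.

        Imports System.Collections.Generic

        Public Class FastParse
            Private Shared ReadOnly Parsers As New Dictionary(Of Type, Func(Of String, Object))

            Shared Sub New()
                Parsers(GetType(Integer)) = Function(s) CObj(Integer.Parse(s))
                Parsers(GetType(UInt16)) = Function(s) CObj(UInt16.Parse(s))
                Parsers(GetType(Double)) = Function(s) CObj(Double.Parse(s))
                ' add entries only for the numeric types the class actually uses
            End Sub

            Public Shared Function Parse(Of T As Structure)(ByVal myStr As String) As T
                Dim parser As Func(Of String, Object) = Nothing
                If Parsers.TryGetValue(GetType(T), parser) Then
                    Return DirectCast(parser(myStr), T)
                End If
                ' fall back to the original approach for anything unregistered
                Return DirectCast(Convert.ChangeType(myStr, GetType(T)), T)
            End Function
        End Class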

    Read the article

  • Do I need a spatial index in my database?

    - by Sanoj
    I am designing an application that needs to save geometric shapes in a database. I haven't chosen the database management system yet. In my application, all database queries will have a bounding box as input, and as output I want all shapes within that bounding box. I know that databases with a spatial index are used for this kind of application. But in my application there will not be any queries of the type "give me objects near x/y", or other more complex queries that are useful in a GIS application.

    I am planning to have a database without a spatial index and use queries looking like:

        SELECT * FROM shapes
        WHERE x < max_x AND x > min_x
          AND y < max_y AND y > min_y

    with an index on the columns x (double) and y (double). As far as I can see, I don't really need a database with a spatial index, even though my application is close to that kind of application. And even if I did want nearby queries, I could create a big enough bounding box around that point. Or will this lead to poor performance? Do I really need a spatial database? And when is a spatial index needed?

    Read the article

  • Tracking down origin of I/O in multi-process server

    - by Craig Ringer
    I'm currently trying to track down some phantom I/O in a PostgreSQL build I'm testing. It's a multi-process server, and it isn't simple to associate disk I/O back to a particular backend and query. I thought Linux's perf tool would be ideal for this, but I'm struggling to capture block I/O performance counter metrics and associate them with user-space activity.

    It's easy to record block I/O requests and completions with, e.g.:

        sudo perf record -g -T -u postgres -e 'block:block_rq_*'

    and the user-space pid is recorded, but there's no kernel or user-space stack captured, or ability to snapshot bits of the user-space process's heap (say, query text), etc. So while you have the pid, you don't know what the process was doing at that point. You just get perf script output like:

        postgres 7462 [002] 301125.113632: block:block_rq_issue: 8,0 W 0 () 208078848 + 1024 [postgres]

    If I add the -g flag to perf record, it'll take snapshots of the kernel stack, but it doesn't capture user-space state for perf events captured in the kernel. The user-space stack only goes up to the entry point from userspace, like LWLockRelease, LWLockAcquire, memcpy (mmap'd I/O), __GI___libc_write, etc.

    So. Any tips? Being able to capture a snapshot of the user-space stack in response to kernel events would be ideal.

    Read the article

  • Aliasing `T*` with `char*` is allowed. Is it also allowed the other way around?

    - by StackedCrooked
    Note: This question has been renamed and reduced to make it more focused and readable. Most of the comments refer to the old text.

    According to the standard, objects of different types may not share the same memory location. So this would not be legal:

        int i = 0;
        short * s = reinterpret_cast<short*>(&i); // BAD!

    The standard however allows an exception to this rule: any object may be accessed through a pointer to char or unsigned char:

        int i = 0;
        char * c = reinterpret_cast<char*>(&i); // OK

    However, it is not clear to me if this is also allowed the other way around. For example:

        char * c = read_socket(...);
        unsigned * u = reinterpret_cast<unsigned*>(c); // huh?

    Summary of the answers

    The answer is NO, for two reasons:

    - You can only access an existing object as char*. There is no object in my sample code, only a byte buffer.
    - The pointer address may not have the right alignment for the target object. In that case dereferencing it would result in undefined behavior. On the Intel and AMD platforms it will result in a performance overhead. On ARM it will trigger a CPU trap and your program will be terminated!

    This is a simplified explanation. For more detailed information see the answers by @Luc Danton, @Cheers and hth. - Alf and @David Rodríguez.
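
    Given the summary above, a sketch of the usual well-defined way to read an unsigned out of a received byte buffer: copy the bytes into a real object with memcpy instead of reinterpreting the pointer. The function name and signature are illustrative.

        #include <cstddef>
        #include <cstring>

        // 'buf' points at bytes received from the socket; 'n' is the byte count
        unsigned read_unsigned(const char* buf, std::size_t n)
        {
            unsigned u = 0;                       // a real 'unsigned' object exists here
            if (n >= sizeof u)
                std::memcpy(&u, buf, sizeof u);   // defined behaviour, no alignment issue
            return u;                             // note: byte order is still up to you
        }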

    Read the article

  • NHibernate lazy properties behavior?

    - by GeReV
    I've been trying to get NHibernate into development for a project I'm working on at my workplace. Since I have to put a strong emphasis on performance, I've been running a proof-of-concept stress test on an existing project's table with thousands of records, all of which contain a large text column. However, when selecting a collection of these records, the select statement takes a relatively long time to execute, apparently due to the aforementioned column.

    The first solution that comes to mind is setting this property as lazy:

        <property name="Content" lazy="true"/>

    But there seems to be no difference in the SQL generated by NHibernate. My question is: how do lazy properties behave in NHibernate? Is there some kind of type limitation I could be missing? Should I take a different approach altogether?

    Using HQL's new Class(column1, column2) approach works, but lazy properties sound like a simpler solution.

    It's perhaps worth mentioning I'm using NHibernate 2.1.2GA with the Castle DynamicProxy. Thanks!

    Read the article

  • JavaScript array random index insertion and deletion

    - by Tomi
    I'm inserting some items into an array with randomly created indexes, for example like this:

        var myArray = new Array();
        myArray[123] = "foo";
        myArray[456] = "bar";
        myArray[789] = "baz";
        ...

    In other words, the array indexes do not start at zero and there will be "numeric gaps" between them. My questions are:

    - Will these numeric gaps be somehow allocated (and therefore take some memory) even when they do not have assigned values?
    - When I delete myArray[456] from the example above, will the items after it be relocated?

    EDIT: Regarding my question/concern about relocation of items after insertion/deletion: I want to know what happens with the memory, not the indexes. More information from the Wikipedia article:

        Linked lists have several advantages over dynamic arrays. Insertion of an element at a specific point of a list is a constant-time operation, whereas insertion in a dynamic array at random locations will require moving half of the elements on average, and all the elements in the worst case. While one can "delete" an element from an array in constant time by somehow marking its slot as "vacant", this causes fragmentation that impedes the performance of iteration.

    Read the article

  • Tracking a fragment of a file in two places with git

    - by mabraham
    Hi, I have code such as:

        void myfunc()
        {
            introduction();
            while(condition())
            {
                complex();
                loop();
                interior();
                code();
            }
            cleanup();
        }

    which I wish to duplicate into two versions, viz:

        void myfuncA()
        {
            introduction();
            minorchangeA();
            while(condition())
            {
                complex();
                loop();
                interior();
                code();
            }
            cleanup();
        }

        void myfuncB()
        {
            introduction();
            minorchangeB();
            while(condition())
            {
                complex();
                modifiedB();
                loop();
                interior();
                code();
            }
            cleanup();
            extracleanupB();
        }

    git claims to track content rather than files, so do I need to tell it that there are chunks here that are common to both myfuncA and myfuncB, so that when merging with upstream changes to myfunc those changes propagate to both myfuncA and myfuncB? If so, how?

    The code could be written so that a single myfuncAB did the correct thing at each point by testing for condition A or B, but that could seriously hinder readability or performance.

    Read the article
