Search Results

Search found 21089 results on 844 pages for 'virtual memory'.

Page 426/844

  • Visual Studio crashes consistently on web-related projects

    - by Traveling Tech Guy
    Hi, I have a brand new VS2010 installation on a Win2008R2 machine. I started getting this error when debugging a WCF service project: "Attempted to read or write protected memory. This is often an indication that other memory is corrupt." When I started developing a web site a week later, this became consistent - I can't debug it. The stack dump reads:

        at Microsoft.VisualStudio.WebHost.Host.ProcessRequest(Connection conn)
        at Microsoft.VisualStudio.WebHost.Server.OnSocketAccept(Object acceptedSocket)
        at System.Threading.QueueUserWorkItemCallback.WaitCallback_Context(Object state)
        at System.Threading.ExecutionContext.Run(ExecutionContext executionContext, ContextCallback callback, Object state, Boolean ignoreSyncCtx)
        at System.Threading.QueueUserWorkItemCallback.System.Threading.IThreadPoolWorkItem.ExecuteWorkItem()
        at System.Threading.ThreadPoolWorkQueue.Dispatch()
        at System.Threading._ThreadPoolWaitCallback.PerformWaitCallback()

    I tried searching online, and some recommend turning off "Suppress JIT Optimizations" in the Debugging options - this does not seem to make a difference. Clearly the problem is with the built-in web server. But am I doing something wrong? Is there something I can do? Or is this a known bug? Thanks for your time, Guy

    Update 12/31: Today I tried using CassiniDev as a replacement for the original VS2010 web server - exact same result. My suspicion is that there's some internal conflict between VS2010, Windows Server 2008R2 and maybe the fact that it's a 64-bit OS. I switched to using IIS as my debug server - and that seems to work, with some annoying side effects. My conclusion: do not use a 64-bit server system as your dev machine. Develop on 32-bit - deploy to 64-bit. Side conclusion: there are some scenarios Microsoft's QA doesn't test.

    Read the article

  • Swing GUI using JNI crashes

    - by Div
    Hi, a Java Swing application (GUI) uses JNI code to communicate with native C code. The Swing application launches properly and works fine. The GUI is used to start some customized system-level tests (io, memory, cpu) and show their progress. The tests have to be left running at least overnight to get the results. But the next morning, the GUI crashes and throws the following message. Any pointers on the source of the issue will be greatly appreciated. Java version: Java 1.5 / Java 1.6. OS: Solaris 10. Thanks, Div

    Messages:

        # uname -a
        SunOS Generic_127127-11 sun4v sparc SUNW,
        #
        # An unexpected error has been detected by HotSpot Virtual Machine:
        #
        # SIGSEGV (0xb) at pc=0xff268924, pid=9473, tid=272
        #
        # Java VM: Java HotSpot(TM) Server VM (1.5.0_14-b03 mixed mode)
        # Problematic frame:
        # C [libc.so.1+0x68924] strstr+0x20
        #
        # An error report file with more information is saved as hs_err_pid9473.log
        #
        # If you would like to submit a bug report, please visit:
        # HotSpot Virtual Machine Error Reporting Page
        #

    Another machine:

        # A fatal error has been detected by the Java Runtime Environment:
        #
        # SIGSEGV (0xb) at pc=0xff231fd0, pid=1406, tid=180
        #
        # JRE version: 6.0_18-b07
        # Java VM: Java HotSpot(TM) Server VM (16.0-b13 mixed mode solaris-sparc)
        # Problematic frame:
        # C [libc.so.1+0x31fd0] strcpy+0x70
        #
        # An error report file with more information is saved as:
        # /usr/sunvts/bin/hs_err_pid1406.log
        #

    Read the article

  • What is a NULL value

    - by Adi
    I am wondering what exactly is stored in memory when we say a particular pointer variable is NULL. Suppose I have a structure, say:

        typedef struct MEM_LIST MEM_INSTANCE;
        struct MEM_LIST {
            char *start_addr;
            int size;
            MEM_INSTANCE *next;
        };

        MEM_INSTANCE *front;
        front = (MEM_INSTANCE*)malloc(sizeof(MEM_INSTANCE*));

    1) If I set front = NULL, what value actually gets stored in the different fields of front, say front->size and front->start_addr? Is it 0 or something else? I have limited knowledge of this NULL thing.
    2) If I do free(front), it frees the memory pointed to by front. So what exactly does free mean here - does it make it NULL, or set it all to 0?
    3) What can be a good strategy for dealing with initialization of pointers and freeing them? Thanks in advance
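
    A minimal C sketch of what happens in each case. Note that malloc(sizeof(MEM_INSTANCE*)) above only reserves space for a pointer, so sizeof(MEM_INSTANCE) is used below; the behaviour in the comments is what the C standard guarantees, while nulling after free is just a common convention, not something the language does for you:

        #include <stdlib.h>

        typedef struct MEM_LIST MEM_INSTANCE;
        struct MEM_LIST {
            char *start_addr;
            int size;
            MEM_INSTANCE *next;
        };

        int main(void) {
            /* calloc (unlike malloc) zero-fills the block, so all fields start as 0/NULL */
            MEM_INSTANCE *front = calloc(1, sizeof(MEM_INSTANCE));
            if (front == NULL) return 1;

            /* 1) Assigning NULL to 'front' changes only the pointer variable itself;
               the struct it used to point to is untouched (and would be leaked). */

            /* 2) free() releases the block but does NOT zero it and does NOT change
               'front'; the pointer still holds the old address (it is dangling). */
            free(front);

            /* 3) Common strategy: null the pointer right after freeing, so later
               checks like 'if (front)' are safe and double-free is harmless. */
            front = NULL;
            free(front);   /* free(NULL) is defined to be a no-op */
            return 0;
        }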

    Read the article

  • Alignment in assembly

    - by jena
    Hi, I'm spending some time on assembly programming (GAS, in particular) and recently I learned about the align directive. I think I've understood the very basics, but I would like to gain a deeper understanding of its nature and when to use alignment.

    For instance, I wondered about the assembly code of a simple C++ switch statement. I know that under certain circumstances switch statements are based on jump tables, as in the following few lines of code:

        .section .rodata
        .align 4
        .align 4
        .L8:
        .long .L2
        .long .L3
        .long .L4
        .long .L5
        ...

    .align 4 aligns the following data on the next 4-byte boundary, which ensures that fetching these memory locations is efficient, right? I think this is done because there might be things happening before the switch statement which caused misalignment. But why are there actually two calls to .align? Are there any rules of thumb for when to call .align, or should it simply be done whenever a new block of data is stored in memory and something prior to it could have caused misalignment?

    In the case of arrays, it seems that alignment is done on 32-byte boundaries as soon as the array occupies at least 32 bytes. Is it more efficient to do it this way, or is there another reason for the 32-byte boundary? I'd appreciate any explanation or hint on literature.

    Read the article

  • Using memcache together with conventional cache

    - by Industrial
    Hi! Here's the deal. We would have taken the complete static HTML road to solve performance issues, but since the site will be partially dynamic, this won't work out for us. What we have thought of instead is using memcache + eAccelerator to speed up PHP and take care of caching for the most used data.

    Here are the two approaches we are considering right now:

    1. Use memcache for all major queries and leave it alone to do what it does best.
    2. Use memcache for the most commonly retrieved data, and combine it with a standard hard-drive-stored cache for further usage.

    The major advantage of only using memcache is of course the performance, but as the number of users increases, the memory usage gets heavy. Combining the two sounds like a more natural approach to us, even though there is a theoretical compromise in performance. Memcached appears to have some replication features available as well, which may come in handy when it's time to add nodes.

    What approach should we use? Is it stupid to compromise and combine the two methods? Should we instead focus on utilizing memcache, and on upgrading the memory as the load increases with the number of users? Thanks a lot!
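
    A small PHP sketch of the combined approach described above - a read-through cache that tries Memcached first and falls back to a file cache on disk. The key name, TTL and cache location are made-up illustration values, not anything prescribed by either cache:

        <?php
        // Two-tier read-through cache: Memcached first, then a file cache on disk.
        function cache_get(Memcached $mc, $key, $ttl, $compute) {
            $value = $mc->get($key);
            if ($mc->getResultCode() === Memcached::RES_SUCCESS) {
                return $value;                       // hot data served from memory
            }

            $file = sys_get_temp_dir() . '/cache_' . md5($key);
            if (is_file($file) && (time() - filemtime($file)) < $ttl) {
                $value = unserialize(file_get_contents($file));
                $mc->set($key, $value, $ttl);        // promote back into memcache
                return $value;
            }

            $value = $compute();                     // full miss: hit the database
            $mc->set($key, $value, $ttl);
            file_put_contents($file, serialize($value), LOCK_EX);
            return $value;
        }

        $mc = new Memcached();
        $mc->addServer('127.0.0.1', 11211);
        $rows = cache_get($mc, 'front_page_posts', 300, function () {
            return array('...query the database here...');
        });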

    Read the article

  • Lookup table size reduction

    - by Ryan
    Hello: I have an application in which I have to store a couple of million integers in a lookup table. Obviously I cannot store that amount of data in memory, and my requirements are very tight: the data has to live in an embedded system, so I am very limited in space. I would like to ask about recommended methods for reducing the size of the lookup table. I cannot use function approximation such as neural networks; the values need to be in a table. The range of the integers is not known at the moment. When I say integers I mean 32-bit values.

    Basically the idea is to use some compression method to reduce the amount of memory, but without losing much precision. This needs to run in hardware, so the computation overhead cannot be very high. In my algorithm I have to access one value of the table, do some operations with it, and afterwards update the value. In the end what I should have is a function to which I pass an index and get a value back, plus another function to write a value into the table. I found one method called tile coding (http://www.cs.ualberta.ca/~sutton/book/8/node6.html), which is based on several lookup tables. Does anyone know any other method? Thanks.
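
    A minimal C sketch of the tile-coding idea linked above, assuming approximate values are acceptable: the large index space is hashed into a few small tables, a read sums the cells the index maps to, and a write spreads the correction across those cells. The table count, sizes and hash constants are made-up illustration values:

        #include <stdint.h>

        #define NUM_TILINGS 4
        #define TILE_SIZE   4096        /* 4 x 4096 cells instead of millions */

        static int32_t tiles[NUM_TILINGS][TILE_SIZE];

        /* cheap per-tiling hash of the (large) index */
        static uint32_t tile_hash(uint32_t index, uint32_t t) {
            uint32_t h = index * 2654435761u + t * 40503u;
            return h % TILE_SIZE;
        }

        /* approximate read: sum of the cells this index maps to */
        int32_t table_read(uint32_t index) {
            int32_t v = 0;
            for (uint32_t t = 0; t < NUM_TILINGS; ++t)
                v += tiles[t][tile_hash(index, t)];
            return v;
        }

        /* approximate write: spread the correction evenly over the tilings;
           the integer division is where precision is traded for space */
        void table_write(uint32_t index, int32_t value) {
            int32_t err = value - table_read(index);
            for (uint32_t t = 0; t < NUM_TILINGS; ++t)
                tiles[t][tile_hash(index, t)] += err / NUM_TILINGS;
        }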

    Read the article

  • C# multiple inheritance

    - by user326839
    So I've got a base class which requires a Socket:

        class Sock
        {
            public Socket s;
            public Sock(Socket s) { this.s = s; }
            public virtual void Process(byte[] data) { }
            ...
        }

    Then I've got another class. If a new socket gets accepted, a new instance of this class will be created:

        class Game : Sock
        {
            public Random Random = new Random();
            public Timerr Timers;
            public Test Test;

            public Game(Socket s) : base(s) { }

            public static void ReceiveNewSocket(object s)
            {
                Game Client = new Game((Socket)s);
                Client.Start();
            }

            public override void Process(byte[] buf)
            {
                Timers = new Timerr(s);
                Test = new Test(s);
                Test.T();
            }
        }

    In the Sock class I've got a virtual function (Process) that gets overridden by the Game class. In this function I'm calling a function from the Test class (the Test and Timerr classes):

        class Test : Game
        {
            public Test(Socket s) : base(s) { }
            public void T()
            {
                Console.WriteLine(Random.Next(0, 10));
                Timers.Start();
            }
        }

        class Timerr : Game
        {
            public Timerr(Socket s) : base(s) { }
            public void Start()
            {
                Console.WriteLine("test");
            }
        }

    So in the Process function I'm calling a function in Test, and in that function (T) I need to call a function from the Timerr class. But the problem is it's always null, although the constructor is called in Process. The Random class, for example, can be called - I guess because it's defined with a field initializer:

        public Random Random = new Random();

    and that's why the other fields (without initializers):

        public Timerr Timers;
        public Test Test;

    are always null in the inherited class Test. But it's essential that I call methods of other classes in this function. How could I solve that?
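
    The null comes from the inheritance: the Test created in Process is a brand-new Game object with its own Timers field, which nobody ever assigns (only Random is non-null because it has a field initializer). A hedged sketch of the usual fix - composition instead of inheritance, handing the already-created Timerr to Test; class layout here is just one possible arrangement, not the only one:

        using System;
        using System.Net.Sockets;

        class Sock
        {
            public Socket s;
            public Sock(Socket s) { this.s = s; }
            public virtual void Process(byte[] data) { }
        }

        class Timerr
        {
            private readonly Socket s;
            public Timerr(Socket s) { this.s = s; }
            public void Start() { Console.WriteLine("test"); }
        }

        class Test
        {
            private readonly Socket s;               // kept in case Test needs the socket
            private readonly Timerr timers;          // shared instance, not re-created
            private readonly Random random = new Random();

            public Test(Socket s, Timerr timers) { this.s = s; this.timers = timers; }

            public void T()
            {
                Console.WriteLine(random.Next(0, 10));
                timers.Start();                      // never null: set in the constructor
            }
        }

        class Game : Sock
        {
            private Timerr timers;
            private Test test;

            public Game(Socket s) : base(s) { }

            public override void Process(byte[] buf)
            {
                timers = new Timerr(s);
                test = new Test(s, timers);          // pass the helpers what they need
                test.T();
            }
        }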

    Read the article

  • Visibility of reintroduced constructor

    - by avenmore
    I have reintroduced the form constructor in a base form, but if I override the original constructor in a descendant form, the reintroduced constructor is no longer visible.

        type
          TfrmA = class(TForm)
          private
            FWndParent: HWnd;
          public
            constructor Create(AOwner: TComponent; const AWndParent: Hwnd); reintroduce; overload; virtual;
          end;

        constructor TfrmA.Create(AOwner: TComponent; const AWndParent: Hwnd);
        begin
          FWndParent := AWndParent;
          inherited Create(AOwner);
        end;

        type
          TfrmB = class(TfrmA)
          private
          public
          end;

        type
          TfrmC = class(TfrmB)
          private
          public
            constructor Create(AOwner: TComponent); override;
          end;

        constructor TfrmC.Create(AOwner: TComponent);
        begin
          inherited Create(AOwner);
        end;

    When creating:

        frmA := TfrmA.Create(nil, 0);
        frmB := TfrmB.Create(nil, 0);
        frmC := TfrmC.Create(nil, 0); // Compiler error

    My work-around is to override the reintroduced constructor or to declare the original constructor overloaded, but I'd like to understand the reason for this behavior.

        type
          TfrmA = class(TForm)
          private
            FWndParent: HWnd;
          public
            constructor Create(AOwner: TComponent); overload; override;
            constructor Create(AOwner: TComponent; const AWndParent: Hwnd); reintroduce; overload; virtual;
          end;

        type
          TfrmC = class(TfrmB)
          private
          public
            constructor Create(AOwner: TComponent; const AWndParent: Hwnd); override;
          end;

    Read the article

  • Which plugin framework to use for native C++/Win32

    - by Kerido
    Hi everybody. I have an extensible product that allows 3rd-party developers to extend it. The aspects that can be extended are documented, and interfaces are provided in the SDK. Currently I'm using COM and I'm getting pretty comfortable with it. I especially like the ability to provide interface versioning in a unified manner. I consider it to be a requirement, because you never know what you're gonna need in the future.

    Just to be precise, here's an example. Let's suppose I have an interface representing a particular feature:

        class IFeature {
        public:
            virtual void DoFeatureTask() = 0;
        };

    Then, after the interface is already documented (and someone may have used it in plugin code), I realize I need more from this feature. Maybe there is an option I need to provide. I just define the second version:

        class IFeature2 {
        public:
            virtual void DoFeatureTask(int theOption) = 0;
        };

    I don't mean I intend to have lots of versions, but it just may happen. In COM, because every interface is associated with a GUID, I can query a preferred implementation, determine its presence, and finally fall back to a legacy one. But after glancing through C++/COM-related questions, I noticed many recommendations against COM. So maybe it's not the best choice and I'm just too old-school. Can you advise on an alternative?
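
    For comparison, a minimal hedged C++ sketch of doing the same versioned lookup without COM - a single QueryInterface-style hook keyed by a plain interface-id string. All names here are invented for illustration and are not part of any particular plugin SDK:

        #include <cstring>

        // Version 1, frozen once published.
        class IFeature {
        public:
            virtual ~IFeature() = default;
            virtual void DoFeatureTask() = 0;
        };

        // Version 2 extends version 1, so old hosts keep working.
        class IFeature2 : public IFeature {
        public:
            virtual void DoFeatureTask(int theOption) = 0;
        };

        // Every plugin object exposes one generic lookup entry point.
        class IPluginObject {
        public:
            virtual ~IPluginObject() = default;
            virtual void* QueryInterface(const char* iid) = 0;  // nullptr if unsupported
        };

        class Feature : public IPluginObject, public IFeature2 {
        public:
            void* QueryInterface(const char* iid) override {
                if (std::strcmp(iid, "IFeature")  == 0) return static_cast<IFeature*>(this);
                if (std::strcmp(iid, "IFeature2") == 0) return static_cast<IFeature2*>(this);
                return nullptr;
            }
            void DoFeatureTask() override { DoFeatureTask(0); }
            void DoFeatureTask(int theOption) override { (void)theOption; /* ... */ }
        };

        // Host side: prefer the newest interface, fall back to the legacy one.
        void UseFeature(IPluginObject& obj) {
            if (auto* f2 = static_cast<IFeature2*>(obj.QueryInterface("IFeature2"))) {
                f2->DoFeatureTask(42);
            } else if (auto* f1 = static_cast<IFeature*>(obj.QueryInterface("IFeature"))) {
                f1->DoFeatureTask();
            }
        }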

    Read the article

  • MFC CDialog not showing

    - by Jesus_21
    Here is my problem: in my solution I have 2 projects. One is a lib in which I created a resource file (mylib.rc) with a dialog template in it. Then I made a class which inherits CDialog and uses this template. But when I instantiate it and call DoModal(), nothing happens... Here is the code of my class - is something wrong with it?

    MyDialog.h:

        /* MyDialog.h */
        #pragma once
        #include "../../../resource.h"

        class MyDialog : public CDialog
        {
            enum { IDD = IDD_DLGTEMPLATE };
        public:
            MyDialog(CWnd* pParent = NULL);
            virtual ~MyDialog();
        protected:
            virtual BOOL OnInitDialog();
            DECLARE_MESSAGE_MAP()
        private:
            afx_msg void OnBnClickedOk();
            afx_msg void OnBnClickedCancel();
        };

    MyDialog.cpp:

        /* MyDialog.cpp */
        #include "stdafx.h"
        #include "MyDialog.h"

        MyDialog::MyDialog(CWnd* pParent /*=NULL*/) : CDialog(IDD_DLGTEMPLATE, pParent) {}

        MyDialog::~MyDialog() {}

        BOOL MyDialog::OnInitDialog()
        {
            return TRUE;
        }

        BEGIN_MESSAGE_MAP(MyDialog, CDialog)
            ON_BN_CLICKED(IDOK, &MyDialog::OnBnClickedOk)
            ON_BN_CLICKED(IDCANCEL, &MyDialog::OnBnClickedCancel)
        END_MESSAGE_MAP()

        void MyDialog::OnBnClickedOk() { OnOK(); }
        void MyDialog::OnBnClickedCancel() { OnCancel(); }

    Read the article

  • How is a referencing environment generally implemented for closures?

    - by Alexandr Kurilin
    Let's say I have a statically/lexically scoped language with deep binding and I create a closure. The closure will consist of the statements I want executed plus the so-called referencing environment, or, to quote this post, the collection of variables which can be used. What does this referencing environment actually look like implementation-wise?

    I was recently reading about Objective-C's implementation of blocks, and the author suggests that behind the scenes you get a copy of all of the variables on the stack and also of all the references to heap objects. The explanation claims that you get a "snapshot" of the referencing environment at the point in time of the closure's creation. Is that more or less what happens, or did I misread that? Is anything done to "freeze" a separate copy of the heap objects, or is it safe to assume that if they get modified between closure creation and the closure executing, the closure will no longer be operating on the original version of the object? If there is indeed copying being done, are there memory usage considerations in situations where one might want to create plenty of closures and store them somewhere?

    I think that misunderstanding some of these concepts might lead to tricky issues like the ones Eric Lippert mentions in this blog post. It's interesting because you'd think that it wouldn't make sense to keep a reference to a value type that might be gone by the time the closure is called, but I'm guessing that in C# the compiler will figure out that the variable is needed later and put it into the heap instead. It seems that in most memory-managed languages everything is a reference, and thus Objective-C is a somewhat unique situation in having to deal with copying what's on the stack.
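
    The question is about Objective-C blocks and C#, but the same "snapshot vs. live reference" distinction is easy to see in C++ lambdas, where the capture mode is explicit. A small illustrative sketch, not tied to any of the runtimes discussed above:

        #include <functional>
        #include <iostream>
        #include <memory>

        int main() {
            int counter = 0;
            auto shared = std::make_shared<int>(10);   // heap object, reference-counted

            // Capture by value: the closure stores a snapshot of 'counter' and a
            // copy of the shared_ptr (the pointee itself is NOT copied or frozen).
            std::function<void()> byValue = [counter, shared]() {
                std::cout << "snapshot counter=" << counter
                          << " heap value=" << *shared << "\n";
            };

            // Capture by reference: the closure stores the address of 'counter';
            // it sees later changes, and dangles if 'counter' goes out of scope first.
            std::function<void()> byRef = [&counter]() {
                std::cout << "live counter=" << counter << "\n";
            };

            counter = 42;
            *shared = 99;

            byValue();  // snapshot counter=0, heap value=99 (heap object shared, not frozen)
            byRef();    // live counter=42
            return 0;
        }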

    Read the article

  • How to map a Dictionary<string, string> spanning several tables

    - by Kim Johansson
    I have four tables:

        CREATE TABLE [Languages] (
            [Id] INTEGER IDENTITY(1,1) NOT NULL,
            [Code] NVARCHAR(10) NOT NULL,
            PRIMARY KEY ([Id]),
            UNIQUE INDEX ([Code])
        );

        CREATE TABLE [Words] (
            [Id] INTEGER IDENTITY(1,1) NOT NULL,
            PRIMARY KEY ([Id])
        );

        CREATE TABLE [WordTranslations] (
            [Id] INTEGER IDENTITY(1,1) NOT NULL,
            [Value] NVARCHAR(100) NOT NULL,
            [Word] INTEGER NOT NULL,
            [Language] INTEGER NOT NULL,
            PRIMARY KEY ([Id]),
            FOREIGN KEY ([Word]) REFERENCES [Words] ([Id]),
            FOREIGN KEY ([Language]) REFERENCES [Languages] ([Id])
        );

        CREATE TABLE [Categories] (
            [Id] INTEGER IDENTITY(1,1) NOT NULL,
            [Word] INTEGER NOT NULL,
            PRIMARY KEY ([Id]),
            FOREIGN KEY ([Word]) REFERENCES [Words] ([Id])
        );

    So you get the name of a Category via the Word - WordTranslation - Language relations, like this:

        SELECT TOP 1 wt.Value
        FROM [Categories] AS c
        LEFT JOIN [WordTranslations] AS wt ON c.Word = wt.Word
        WHERE wt.Language = (
            SELECT TOP 1 l.Id
            FROM [Languages] AS l
            WHERE l.[Code] = N'en-US'
        )
        AND c.Id = 1;

    That would return the en-US translation of the Category with Id = 1. My question is how to map this using the following class:

        public class Category
        {
            public virtual int Id { get; set; }
            public virtual IDictionary<string, string> Translations { get; set; }
        }

    Getting the same result as the SQL query above would be:

        Category category = session.Get<Category>(1);
        string name = category.Translations["en-US"];

    And "name" would now contain the Category's name in en-US. Category is mapped against the Categories table. How would you do this, and is it even possible?

    Read the article

  • Spring Integration 1.0 RC2: Streaming file content?

    - by gdm
    I've been trying to find information on this, but due to the immaturity of the Spring Integration framework I haven't had much luck. Here is my desired workflow:

    1. New files are placed in an 'Incoming' directory.
    2. Files are picked up using a file:inbound-channel-adapter.
    3. The file content is streamed, N lines at a time, to a 'Stage 1' channel, which parses each line into an intermediary (shared) representation.
    4. Each parsed line is routed to multiple 'Stage 2' channels.
    5. Each 'Stage 2' channel does its own processing on the N available lines to convert them to a final representation. This channel must have a queue which ensures no Stage 2 channel is overwhelmed in the event that one channel processes significantly slower than the others.
    6. The final representation of the N lines is written to a file. There will be as many output files as there were routing destinations in step 4.

    'N' above stands for any reasonable number of lines to read at a time, from 1 up to whatever I can reasonably fit into memory, but it is guaranteed to always be less than the number of lines in the full file.

    How can I accomplish streaming (steps 3, 4, 5) in Spring Integration? It's fairly easy to do without streaming the files, but my files are large enough that I cannot read the entire file into memory. As a side note, I have a working implementation of this workflow without Spring Integration, but since we're using Spring Integration in other places in our project, I'd like to try it here to see how it performs and how the resulting code compares for length and clarity.
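
    Not Spring Integration API - just a plain Java sketch of the streaming part (steps 3 to 5): read N lines at a time with a BufferedReader and hand each chunk through a bounded queue, so a slow consumer applies back-pressure instead of exhausting memory. Class, file and queue names are invented for illustration:

        import java.io.BufferedReader;
        import java.io.IOException;
        import java.nio.charset.StandardCharsets;
        import java.nio.file.Files;
        import java.nio.file.Paths;
        import java.util.ArrayList;
        import java.util.List;
        import java.util.concurrent.ArrayBlockingQueue;
        import java.util.concurrent.BlockingQueue;

        public class ChunkedFileReader {
            public static void main(String[] args) throws IOException, InterruptedException {
                final int n = 1000;                         // lines per chunk ("N")
                // Bounded queue: put() blocks when Stage 2 falls behind (back-pressure).
                BlockingQueue<List<String>> stage2Queue = new ArrayBlockingQueue<>(10);

                // Dummy Stage 2 consumer so the example actually drains the queue.
                Thread consumer = new Thread(() -> {
                    try {
                        while (true) {
                            List<String> lines = stage2Queue.take();
                            System.out.println("processing " + lines.size() + " lines");
                        }
                    } catch (InterruptedException e) {
                        Thread.currentThread().interrupt();
                    }
                });
                consumer.setDaemon(true);
                consumer.start();

                try (BufferedReader reader = Files.newBufferedReader(
                        Paths.get("incoming/data.txt"), StandardCharsets.UTF_8)) {
                    List<String> chunk = new ArrayList<>(n);
                    String line;
                    while ((line = reader.readLine()) != null) {
                        chunk.add(line);                    // "Stage 1" parsing would go here
                        if (chunk.size() == n) {
                            stage2Queue.put(chunk);         // blocks if the queue is full
                            chunk = new ArrayList<>(n);
                        }
                    }
                    if (!chunk.isEmpty()) {
                        stage2Queue.put(chunk);             // flush the last partial chunk
                    }
                }
            }
        }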

    Read the article

  • Question about cloning in Java

    - by devoured elysium
    In Effective Java, the author states that:

        If a class implements Cloneable, Object's clone method returns a field-by-field copy of the object; otherwise it throws CloneNotSupportedException.

    What I'd like to know is what he means by field-by-field copy. Does it mean that if the class occupies X bytes in memory, it will just copy that piece of memory? If yes, can I assume all value types of the original class will be copied to the new object?

        class Point {
            private int x;
            private int y;

            @Override
            public Point clone() {
                return (Point) super.clone();
            }
        }

    If what Object.clone() does is a field-by-field copy of the Point class, I'd say that I wouldn't need to explicitly copy the fields x and y; the code shown above would be more than enough to make a clone of the Point class. That is, the following bit of code is redundant:

        @Override
        public Point clone() {
            Point newObj = (Point) super.clone();
            newObj.x = this.x; // redundant
            newObj.y = this.y; // redundant
            return newObj;
        }

    Am I right? I know references in the cloned object will automatically point to wherever the original object's references pointed; I'm just not sure what happens specifically with value types. If anyone could state clearly what Object.clone()'s algorithm specification is (in easy language) that'd be great. Thanks
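
    A small runnable sketch of what the "field-by-field" (shallow) copy means in practice: primitives are copied, so the manual x/y assignments above are indeed redundant, while reference fields end up pointing at the same object. The array field here is invented just to show the sharing:

        public class ShallowCloneDemo implements Cloneable {
            private int x = 1;                 // primitive: the value is copied
            private int[] data = {1, 2, 3};    // reference: only the reference is copied

            @Override
            public ShallowCloneDemo clone() {
                try {
                    return (ShallowCloneDemo) super.clone();   // no manual field copying needed
                } catch (CloneNotSupportedException e) {
                    throw new AssertionError(e);               // cannot happen: we are Cloneable
                }
            }

            public static void main(String[] args) {
                ShallowCloneDemo original = new ShallowCloneDemo();
                ShallowCloneDemo copy = original.clone();

                original.x = 42;
                original.data[0] = 42;

                System.out.println(copy.x);        // 1  -> primitive was snapshotted
                System.out.println(copy.data[0]);  // 42 -> array object is shared
            }
        }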

    Read the article

  • How to know why an animation stutters?

    - by Patrick Klug
    I have a few fairly simple animations (moving text around, moving ellipses etc.), and running in full screen (1920x1080 minus the task bar) the WPF Performance Suite reports a good framerate of around 50 FPS throughout the animation. Dirty rect addition is somewhere around 300 rect/s, the SW frames are between 0 and 4, and the HW frames are between 3 and 5. Video memory usage is around 80 MB. The problem is that the animation stutters every other half second.

    My machine is a new Dell laptop XPS 15 with a GeForce GT 435 with 2 GB of memory, and the drivers are up to date. (The same behavior occurs on my netbook in full screen as well, so I don't think it is hardware related.) If I make the window smaller, the stutter goes away. The stutter occurs with the simplest of animations - even with just a couple of elements - but adding more elements certainly makes it more noticeable.

    How can I find out what causes this stutter? Come to think of it, I have not actually seen any WPF animations which run smoothly in full screen. Is this even possible?

    Read the article

  • Problem with writing mobilesubstrate plugins for iOS

    - by overboming
    I am trying to hook message sending for iOS 3.2. I implemented my own hook based on a working ExampleHook program I found on the web, but my hook apparently causes a segmentation fault every time it fires and I don't know why. I want to hook:

        - [NSURL initWithString:(NSString *)URLString relativeToURL:(NSURL *)baseURL];

    and here is my related implementation:

        static id __$GFWInterceptor_NSURL_initWithString2(NSURL<GFWInterceptor> *_NSURL, NSString *URLString, NSURL *baseURL) {
            NSLog(@"We have intercepted this url: %@", URLString);
            [_NSURL __HelloNSURL_initWithString:URLString relativeToURL:baseURL];
        }

    Establishing the hook:

        Class _$NSURL = objc_getClass("NSURL");
        MSHookMessage(_$NSURL, @selector(initWithString:relativeToURL:), (IMP) &__$GFWInterceptor_NSURL_initWithString2, "__HelloNSURL_");

    Original method declaration:

        - (void)__HelloNSURL_initWithString:(NSString *)URLString relativeToURL:(NSURL *)baseURL;

    and here is my gdb backtrace:

        Program received signal EXC_BAD_ACCESS, Could not access memory.
        Reason: KERN_INVALID_ADDRESS at address: 0x74696e71
        0x335625f8 in objc_msgSend ()
        (gdb) bt
        #0 0x335625f8 in objc_msgSend ()
        #1 0x32c05b1a in CFStringGetLength ()
        #2 0x32c108a8 in _CFStringIsLegalURLString ()
        #3 0x32b1c32a in -[NSURL initWithString:relativeToURL:] ()
        #4 0x000877c0 in __$GFWInterceptor_NSURL_initWithString2 ()
        #5 0x32b1c220 in +[NSURL URLWithString:relativeToURL:] ()
        #6 0x32b1c1f4 in +[NSURL URLWithString:] ()
        #7 0x3061c614 in createUniqueWebDataURL ()
        #8 0x3061c212 in +[WebFrame(WebInternal) _createMainFrameWithSimpleHTMLDocumentWithPage:frameView:withStyle:editable:] ()

    It apparently hooks, but there is some memory issue there and I can't find anything to blame yet.

    Read the article

  • pure/const functions in C++

    - by Albert
    Hi, I'm thinking of using pure/const functions more heavily in my C++ code (the pure/const attributes in GCC). However, I am curious how strict I should be about it and what could possibly break.

    The most obvious case is debug output (in whatever form - it could be on cout, in some file or in some custom debug class). I will probably have a lot of functions which don't have any side effects apart from this sort of debug output. Whether or not the debug output is made has absolutely no effect on the rest of my application.

    Another case I'm thinking of is the use of my own SmartPointer class. In debug mode, my SmartPointer class has a global register where it does some extra checks. If I use such an object in a pure/const function, it does have some slight side effects (in the sense that some memory will probably be different), which should not be real side effects though (in the sense that the behaviour is not different in any way). The same goes for mutexes and other stuff.

    I can think of many complex cases where there are side effects in the sense that some memory will be different, maybe even some threads are created, some filesystem manipulation is made, etc., but no computational difference (all those side effects could very well be left out, and I would even prefer that). How does it work out in practice? If I mark such functions as pure/const, could it break anything (assuming that the code is otherwise correct)?
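
    For reference, a minimal sketch of the two GCC attributes being discussed. The danger the question is asking about is exactly the kind of call in debug_square below: the optimizer is allowed to merge, reorder or drop calls to functions it believes have no observable side effects:

        #include <cstddef>
        #include <cstdio>

        // "const": the result depends only on the arguments and reads no global memory.
        __attribute__((const)) int square(int x) {
            return x * x;
        }

        // "pure": the result may also read (but never write) global or pointed-to memory.
        __attribute__((pure)) std::size_t my_strlen(const char* s) {
            std::size_t n = 0;
            while (s[n] != '\0') ++n;
            return n;
        }

        // If this were marked pure/const, GCC could legally elide or fold repeated
        // calls, and the fprintf side effect might silently disappear.
        int debug_square(int x) {
            std::fprintf(stderr, "debug: squaring %d\n", x);   // side effect!
            return square(x);
        }

        int main() {
            // With -O2, repeated calls with the same argument may be folded into one.
            int a = square(7) + square(7);
            std::printf("%d %zu %d\n", a, my_strlen("hello"), debug_square(3));
            return 0;
        }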

    Read the article

  • Read whole ASCII file into C++ std::string

    - by Arrieta
    Hello, I need to read a whole file into memory and place it in a C++ std::string. If I were to read it into a char buffer, the answer would be very simple:

        std::ifstream t("file.txt");            // open input file
        t.seekg(0, std::ios::end);              // go to the end
        std::streamoff length = t.tellg();      // report location (this is the length)
        t.seekg(0, std::ios::beg);              // go back to the beginning
        char* buffer = new char[length];        // allocate a buffer of the appropriate size
        t.read(buffer, length);                 // read the whole file into the buffer
        t.close();                              // close the file handle
        // ... do stuff with buffer here ...

    Now, I want to do the exact same thing, but using a std::string instead of a char buffer. I want to avoid loops, i.e., I don't want to do:

        std::ifstream t("file.txt");
        std::string buffer;
        std::string line;
        while (t) {
            std::getline(t, line);
            // ... append line to buffer and go on
        }
        t.close();

    Any ideas?
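
    Two common loop-free ways to do this, shown as a small sketch (the file name is just an example): the first builds the string straight from stream iterators, the second dumps the stream buffer into a stringstream:

        #include <fstream>
        #include <iterator>
        #include <sstream>
        #include <string>

        int main() {
            // Option 1: construct the string from istreambuf iterators.
            std::ifstream in1("file.txt");
            std::string contents1((std::istreambuf_iterator<char>(in1)),
                                  std::istreambuf_iterator<char>());  // extra parens avoid most-vexing-parse

            // Option 2: stream the underlying buffer into a stringstream.
            std::ifstream in2("file.txt");
            std::ostringstream ss;
            ss << in2.rdbuf();
            std::string contents2 = ss.str();

            return contents1 == contents2 ? 0 : 1;
        }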

    Read the article

  • Executing logic before save or validation with EF Code-First Models

    - by Ryan Norbauer
    I'm still getting accustomed to EF Code First, having spent years working with the Ruby ORM, ActiveRecord. ActiveRecord has all sorts of callbacks like before_validation and before_save, where it is possible to modify the object before it gets sent off to the data layer. I am wondering if there is an equivalent technique in EF Code First object modeling. I know how to set object members at the time of instantiation, of course (to set default values and so forth), but sometimes you need to intervene at other moments in the object lifecycle.

    To use a slightly contrived example, say I have a join table linking Authors and Plays, represented with a corresponding Authoring object:

        public class Authoring
        {
            public int ID { get; set; }

            [Required]
            public int Position { get; set; }

            [Required]
            public virtual Play Play { get; set; }

            [Required]
            public virtual Author Author { get; set; }
        }

    where Position represents a zero-indexed ordering of the Authors associated with a given Play. (You might have a single "South Pacific" Play with two authors: a "Rodgers" author with Position 0 and a "Hammerstein" author with Position 1.)

    Let's say I wanted some logic that, before saving an Authoring record, checks whether there are any existing authors for the Play it is associated with. If not, it sets the Position to 0. If so, it finds the highest Position value associated with that Play and increments it by one. Where would I implement such logic within an EF Code First model layer? And in other cases, what if I wanted to massage data in code before it is checked for validation errors? Basically, I'm looking for an equivalent to the Rails lifecycle hooks mentioned above, or some way to fake it at least. :)
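
    One commonly used place for this kind of hook is an override of DbContext.SaveChanges(), inspecting the change tracker before handing off to the base implementation. A hedged sketch along those lines - the context name and the minimal Play/Author classes are invented, Authoring is the class shown above, and the exact EntityState namespace varies between EF versions:

        using System.Data;
        using System.Data.Entity;
        using System.Linq;

        public class Play   { public int ID { get; set; } }
        public class Author { public int ID { get; set; } }

        public class TheaterContext : DbContext
        {
            public DbSet<Authoring> Authorings { get; set; }
            public DbSet<Play> Plays { get; set; }
            public DbSet<Author> Authors { get; set; }

            public override int SaveChanges()
            {
                // "Before save" logic: runs on every newly added Authoring.
                var added = ChangeTracker.Entries<Authoring>()
                                         .Where(e => e.State == EntityState.Added)
                                         .Select(e => e.Entity);

                foreach (var authoring in added)
                {
                    var highest = Authorings.Where(a => a.Play.ID == authoring.Play.ID)
                                            .Select(a => (int?)a.Position)
                                            .Max();                 // null if no authors yet

                    authoring.Position = highest.HasValue ? highest.Value + 1 : 0;
                }

                return base.SaveChanges();
            }
        }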

    Read the article

  • Efficient algorithm for Next button on a MySQL result set

    - by David Grayson
    I have a website that lets people view rows in a table (each row is a picture). There are more than 100,000 rows. You can view different subsets of the rows, and you can view them with different sort orders. While you are viewing one of the rows, you can click the "Next" or "Previous" buttons to go to the next/previous row in the list.

    How would you implement the "Next" and "Previous" features of the website? More specifically, if you have an arbitrary query that returns a list of 100,000+ rows, and you know some information about the current row someone is viewing, how do you determine the NEXT row efficiently?

    Here is the pseudo-code of the solution I came up with when the website was young. It worked well when there were only 1000 rows, but now that there are 100,000 rows I think it is eating up too much memory:

        int nextRowId(string query, int currentRowId) {
            array allRowIds = mysql_query(query);                    // Takes up a lot of memory!
            int currentIndex = (index of currentRowId in allRowIds); // Takes time!
            return allRowIds[currentIndex + 1];
        }

    While you are thinking about this problem, remember that the website can store more information about the current row than just its ID (for example, the position of the current row in the result set), and this information can be used as a hint to help determine the ID of the next row.

    Edit: Sorry for not mentioning this earlier, but this isn't just a static website: rows can often be added to the list, and rows can be re-ordered in the list. (Much more rarely, rows can be removed from the list.) I think that I should worry about that kind of thing, but maybe you can convince me otherwise.
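
    A hedged sketch of the usual "keyset" (seek) approach in MySQL: instead of loading every ID, fetch only the row that sorts immediately after the current one under the active sort order. The table and column names here are invented, and the tuple comparison assumes the sort column plus the primary key form a unique ordering:

        -- "Next" row when sorting by taken_at (with id as a tie-breaker):
        SELECT id
        FROM pictures
        WHERE category = 'landscape'                              -- same filter as the current view
          AND (taken_at, id) > ('2010-05-01 12:00:00', 4711)      -- values taken from the current row
        ORDER BY taken_at, id
        LIMIT 1;

        -- "Previous" row: flip the comparison and the sort direction.
        SELECT id
        FROM pictures
        WHERE category = 'landscape'
          AND (taken_at, id) < ('2010-05-01 12:00:00', 4711)
        ORDER BY taken_at DESC, id DESC
        LIMIT 1;

        -- An index matching the sort order keeps both lookups cheap:
        CREATE INDEX idx_pictures_category_taken_id ON pictures (category, taken_at, id);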

    Read the article

  • Dynamic binding or switch/case?

    - by kingkai
    A scene like this: I have different objects that do similar operations in their respective func() implementations. There are two kinds of solution for func_manager() to call func() according to the object.

    Solution 1: Use virtual functions, as C++ provides. func_manager works differently according to the object pointer passed in:

        class Object {
            virtual void func() = 0;
        };

        class Object_A : public Object {
            void func() {}
        };

        class Object_B : public Object {
            void func() {}
        };

        void func_manager(Object* a) {
            a->func();
        }

    Solution 2: Use a plain switch/case. func_manager works differently according to the type passed in:

        typedef enum _type_t {
            TYPE_A,
            TYPE_B
        } type_t;

        void func_by_a() {
            // do as func() in Object_A
        }

        void func_by_b() {
            // do as func() in Object_B
        }

        void func_manager(type_t type) {
            switch (type) {
            case TYPE_A:
                func_by_a();
                break;
            case TYPE_B:
                func_by_b();
                break;
            default:
                break;
            }
        }

    My questions are two:
    1. From the viewpoint of design patterns, which one is better?
    2. From the viewpoint of runtime efficiency, which one is better? Especially as the number of Object kinds increases, maybe up to 10-15 in total - at what point does one's overhead overtake the other's? I don't know how switch/case is implemented internally - is it just a bunch of if/else?

    Thanks very much!

    Read the article

  • Visual Studio project remains "stuck" when stopped

    - by Traveling Tech Guy
    Hi, I'm currently developing a connector DLL to HP's Quality Center. I'm using their (insert expletive) COM API to connect to the server; an interop wrapper gets created automatically by Visual Studio. My solution has 2 projects: the DLL and a tester application - essentially a form with buttons that call functions in the DLL.

    Everything works well - I can create defects, update them and delete them. When I close the main form, the application stops nicely. But when I call a function that returns a list of all available projects (to fill a combo box) and then close the main form, Visual Studio still shows the solution as running and I have to stop it.

    I've managed to pinpoint a single function in my code that, when called, leaves the solution "hung"; if I don't call it, everything closes fine. It's a call to a property on the TDC object, get_VisibleProjects, that returns a List (not the .NET one, but a type in the COM library). I just iterate over it and return a proper list (which I later use to fill the combo box):

        public List<string> GetAvailableProjects()
        {
            List<string> projects = new List<string>();
            foreach (string project in this.tdc.get_VisibleProjects(qcDomain))
            {
                projects.Add(project);
            }
            return projects;
        }

    My assumption is that something gets retained in memory. If I run the EXE outside of Visual Studio it closes - but who knows what gets left behind in memory? My question is: how do I get rid of whatever calling this property returns? Shouldn't the GC handle this? Do I need to delve into pointers?

    Things I've tried:
    - getting the list into a variable and setting it to null at the end of the function
    - adding a destructor to the class and nulling the tdc object
    - stepping through the tester application all the way out; when the form closes and the Main function ends, it closes, but Visual Studio still shows it running

    Thanks for your assistance!
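
    A hedged sketch of the usual way to release the COM objects the interop layer hands back, instead of waiting for the GC: keep the returned COM list in a variable and call Marshal.ReleaseComObject on it. Whether this actually cures the hang depends on what the Quality Center API keeps alive internally; the class name and the dynamic tdc field are invented for illustration:

        using System.Collections.Generic;
        using System.Runtime.InteropServices;

        public class QcConnector
        {
            private readonly dynamic tdc;        // the Quality Center connection interop object
            private readonly string qcDomain;

            public QcConnector(dynamic tdc, string qcDomain)
            {
                this.tdc = tdc;
                this.qcDomain = qcDomain;
            }

            public List<string> GetAvailableProjects()
            {
                var projects = new List<string>();
                dynamic comList = this.tdc.get_VisibleProjects(qcDomain);  // keep the RCW in a variable
                try
                {
                    foreach (string project in comList)
                    {
                        projects.Add(project);
                    }
                }
                finally
                {
                    // Drop the runtime-callable wrapper's reference to the COM object now,
                    // instead of leaving it to the finalizer thread.
                    if (comList != null && Marshal.IsComObject((object)comList))
                    {
                        Marshal.ReleaseComObject((object)comList);
                    }
                }
                return projects;
            }
        }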

    Read the article

  • handle large Parcelable ArrayList in Android

    - by Gal Ben-Haim
    I'm developing an Android app that is a client to a JSON web service API. I have classes of resource objects (some are nested), and I pass results from an IntentService that accesses the web service, using the Parcelable interface for all the resource classes. The web service returns arrays of results that can be potentially large (because of the nesting - for example, a post object also contains a comments array, and each comment also contains a user object).

    Currently I either insert the results into a SQLite database or display them in a ListView (my relevant methods accept ArrayList<resourceClass> as arguments). Some data needs to be stored persistently and some should not be.

    Since I don't know what size of lists I can handle this way without hitting the memory limits, is this a good practice? Is it a better idea to save the parsed JSON to a local file immediately and pass the file path to the ResultReceiver, then either insert into the database or display the data from that file? Is there a better way to handle this?

    By the way, I'm parsing the JSON as a stream with Gson's reader, so there shouldn't be memory issues at that stage.
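
    A hedged sketch of the "write to a file, pass the path" variant described above, from inside the IntentService. The result code, extra key and the openHttpStream helper are invented names standing in for whatever the app already uses:

        import android.app.IntentService;
        import android.content.Intent;
        import android.os.Bundle;
        import android.os.ResultReceiver;

        import java.io.File;
        import java.io.FileOutputStream;
        import java.io.IOException;
        import java.io.InputStream;
        import java.io.OutputStream;

        public class ApiService extends IntentService {
            public static final int RESULT_OK_FILE = 1;                  // invented result code
            public static final String EXTRA_RESULT_PATH = "result_path";

            public ApiService() {
                super("ApiService");
            }

            @Override
            protected void onHandleIntent(Intent intent) {
                ResultReceiver receiver = intent.getParcelableExtra("receiver");
                try {
                    // Stream the response body straight to a cache file instead of
                    // building a huge ArrayList of Parcelables in memory.
                    File cacheFile = File.createTempFile("api_result", ".json", getCacheDir());
                    try (InputStream in = openHttpStream(intent);        // assumed helper
                         OutputStream out = new FileOutputStream(cacheFile)) {
                        byte[] buffer = new byte[8192];
                        int read;
                        while ((read = in.read(buffer)) != -1) {
                            out.write(buffer, 0, read);
                        }
                    }

                    Bundle bundle = new Bundle();
                    bundle.putString(EXTRA_RESULT_PATH, cacheFile.getAbsolutePath());
                    receiver.send(RESULT_OK_FILE, bundle);               // caller parses/inserts lazily
                } catch (IOException e) {
                    receiver.send(0, Bundle.EMPTY);
                }
            }

            // Hypothetical: plug in however the app already obtains the response stream.
            private InputStream openHttpStream(Intent intent) throws IOException {
                throw new UnsupportedOperationException("use the existing HTTP code here");
            }
        }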

    Read the article

  • Installer for asp.net web application

    - by Thurein
    Hi, I am trying to implement an installer which I am going to hand to the end user as a product. It needs to perform the following tasks:

    1. Check for and install .NET 3.5
    2. Check for and install SQL Server 2008 (Standard Edition)
    3. Create the databases
    4. Create a virtual directory and deploy the published resources
    5. Deploy the SSIS package for the data warehousing and run the SSAS package

    Right now I am using WiX to deal with some of these tasks, and it works for me, but I am curious about other options and better ways to do this (if there are any).

    My main intention is to distribute my product (an ASP.NET web application) to end users for a trial. An end user with limited IT knowledge should be able to install it and use the web application within a group of users. At the end of the trial period the user could ask for an activation key for further usage.

    Thanks and regards, Thurein

    Read the article

  • Handling user interface in a multi-threaded application (or being forced to have a UI-only main thread)

    - by Patrick
    In my application, I have a 'logging' window which shows all the logging, warnings and errors of the application. Last year my application was still single-threaded, so this worked [quite] well. Now I am introducing multithreading. I quickly noticed that it's not a good idea to update the logging window from different threads. After reading some articles on keeping the UI in the main thread, I made a communication buffer into which the other threads add their logging messages, and from which the main thread takes the messages and shows them in the logging window (this is done in the message loop).

    Now, in one part of my application, the memory usage increases dramatically, because the separate threads generate lots of logging messages and the main thread cannot empty the communication buffer quickly enough. After a while the memory decreases again (once the other threads have finished their work and the main thread gradually empties the communication buffer). I solved this problem by putting a maximum size on the communication buffer, but then I run into a problem in the following situation:

    - the main thread has to perform a complex action
    - the main thread hands some parts of the action to separate threads for execution
    - while the separate threads are executing their logic, the main thread processes their results and continues its work once the other threads are finished

    The problem is that in this situation there is no UI message loop running, so if the other threads log anything, the communication buffer is filled but never emptied.

    I see two solutions to this problem:

    - require the main thread to poll the communication buffer regularly
    - perform only user interface logic in the main thread (no other logic)

    I think the second solution seems the best, but it may not be easy to introduce in a big application (in my case it performs mathematical simulations). Are there any other solutions or tips? Or is one of the two proposed options the best, easiest, most pragmatic solution? Thanks, Patrick
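
    A minimal C++ sketch of the kind of bounded communication buffer described above (names invented): producers block once the buffer is full, which automatically throttles worker threads when the UI thread cannot keep up, and the UI thread drains whatever is available on each pass through its message loop or on a timer:

        #include <condition_variable>
        #include <deque>
        #include <mutex>
        #include <string>
        #include <vector>

        class LogBuffer {
        public:
            explicit LogBuffer(std::size_t maxSize) : maxSize_(maxSize) {}

            // Called from worker threads; blocks while the buffer is full.
            void push(std::string message) {
                std::unique_lock<std::mutex> lock(mutex_);
                notFull_.wait(lock, [this] { return messages_.size() < maxSize_; });
                messages_.push_back(std::move(message));
            }

            // Called from the UI thread: grab everything that is available at once.
            std::vector<std::string> drain() {
                std::vector<std::string> out;
                {
                    std::lock_guard<std::mutex> lock(mutex_);
                    out.assign(messages_.begin(), messages_.end());
                    messages_.clear();
                }
                notFull_.notify_all();   // wake any workers waiting for space
                return out;
            }

        private:
            std::size_t maxSize_;
            std::deque<std::string> messages_;
            std::mutex mutex_;
            std::condition_variable notFull_;
        };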

    Read the article
