Search Results

Search found 17366 results on 695 pages for 'memory card'.


  • Dynamic memory allocation with default values

    - by viswanathan
    class A {
    private:
        int m_nValue;
    public:
        A() { m_nValue = 0; }
        A(int nValue) { m_nValue = nValue; }
        ~A() {}
    };
    Now in main, if I call A a(2); the value 2 will be assigned to m_nValue of the object a. How do I do this if I want to define an array of objects? And how do I do this if I dynamically create objects using operator new, like A *pA; pA = new A[5]; while creating the objects I want the parameterised constructor to be called. I hope the question is clear. Do let me know if more explanation is needed.
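
    For illustration, here is a minimal C++ sketch of the usual ways to get non-default values into an array of such objects (C++11 assumed; apart from A itself, the names are made up for the example):

        #include <vector>

        class A {
        public:
            A() : m_nValue(0) {}
            explicit A(int nValue) : m_nValue(nValue) {}
        private:
            int m_nValue;
        };

        int main() {
            // Automatic array: list-initialization picks the constructor per element.
            A arr[5] = { A(2), A(2), A(2), A(2), A(2) };

            // new A[5] always runs the default constructor, so assign afterwards ...
            A* pA = new A[5];
            for (int i = 0; i < 5; ++i)
                pA[i] = A(2);
            delete[] pA;

            // ... or skip new[] entirely: std::vector's fill constructor copies one prototype.
            std::vector<A> v(5, A(2));

            (void)arr; (void)v;
            return 0;
        }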

    Read the article

  • Optimal storage of data structure for fast lookup and persistence

    - by Mikael Svenson
    Scenario: I have the following methods:
    public void AddItemSecurity(int itemId, int[] userIds)
    public int[] GetValidItemIds(int userId)
    Initially I'm thinking of storage of the form itemId -> userId, userId, userId and userId -> itemId, itemId, itemId. AddItemSecurity is based on how I get data from a third party API, GetValidItemIds is how I want to use it at runtime. There are potentially 2000 users and 10 million items. Item ids are of the form 2007123456, 2010001234 (10 digits where the first four represent the year). AddItemSecurity does not have to perform super fast, but GetValidIds needs to be subsecond. Also, if there is an update on an existing itemId I need to remove that itemId for users no longer in the list. I'm trying to think about how I should store this in an optimal fashion. Preferably on disk (with caching), but I want the code maintainable and clean. If the item ids had started at 0, I thought about creating a byte array of length MaxItemId / 8 for each user, and setting a true/false bit depending on whether the item is present or not. That would limit the array length to a little over 1 MB per user and give fast lookups as well as an easy way to update the list per user. By persisting this as Memory Mapped Files with the .Net 4 framework I think I would get decent caching as well (if the machine has enough RAM) without implementing caching logic myself. Parsing the id, stripping out the year, and storing an array per year could be a solution. The ItemId -> UserId[] list can be serialized directly to disk and read/written with a normal FileStream in order to persist the list and diff it when there are changes. Each time a new user is added all the lists have to be updated as well, but this can be done nightly.
    Question: Should I continue to try out this approach, or are there other paths which should be explored as well? I'm thinking SQL Server will not perform fast enough, and it would add overhead (at least if it's hosted on a different server), but my assumptions might be wrong. Any thoughts or insights on the matter are appreciated. And I want to try to solve it without adding too much hardware :)
    [Update 2010-03-31] I have now tested with SQL Server 2008 under the following conditions:
    - Table with two columns (userid, itemid), both Int
    - Clustered index on the two columns
    - Added ~800,000 items for 180 users, a total of 144 million rows
    - Allocated 4 GB RAM for SQL Server
    - Dual Core 2.66 GHz laptop
    - SSD disk
    - Use a SqlDataReader to read all itemids into a List
    - Loop over all users
    If I run one thread it averages 0.2 seconds. When I add a second thread it goes up to 0.4 seconds, which is still OK. From there on the results degrade. Adding a third thread brings a lot of the queries up to 2 seconds. A fourth thread, up to 4 seconds; a fifth spikes some of the queries up to 50 seconds. The CPU is maxed out while this is going on, even on one thread. My test app takes some of it due to the speedy loop, and SQL the rest. Which leads me to the conclusion that it won't scale very well, at least not on my tested hardware. Are there ways to optimize the database, say by storing an array of ints per user instead of one record per item? But that makes it harder to remove items.
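
    To make the byte-array idea concrete, here is a minimal C++ sketch of the per-user bitmap (the question targets .NET, but the bit arithmetic is the same; the class and member names are invented for the example):

        #include <cstdint>
        #include <vector>

        // One bit per item id: roughly MaxItemId / 8 bytes per user.
        class ItemBitmap {
        public:
            explicit ItemBitmap(uint32_t maxItemId) : bits_((maxItemId + 7) / 8, 0) {}

            void set(uint32_t itemId, bool present) {
                uint8_t mask = static_cast<uint8_t>(1u << (itemId & 7));
                if (present) bits_[itemId >> 3] |= mask;
                else         bits_[itemId >> 3] &= static_cast<uint8_t>(~mask);
            }
            bool test(uint32_t itemId) const {
                return (bits_[itemId >> 3] >> (itemId & 7)) & 1u;
            }
        private:
            std::vector<uint8_t> bits_;  // could be backed by a memory-mapped file instead
        };

        int main() {
            ItemBitmap user(10000000);   // ~1.2 MB for 10 million items
            user.set(123456, true);      // in real use the year prefix would be stripped first
            return user.test(123456) ? 0 : 1;
        }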

    Read the article

  • Linked List exercise, what am I doing wrong?

    - by Sean Ochoa
    Hey all. I'm doing a linked list exercise that involves dynamic memory allocation, pointers, classes, and exceptions. Would someone be willing to critique it and tell me what I did wrong and what I should have done better both with regards to style and to those subjects I listed above? /* Linked List exercise */ #include <iostream> #include <exception> #include <string> using namespace std; class node{ public: node * next; int * data; node(const int i){ data = new int; *data = i; } node& operator=(node n){ *data = *(n.data); } ~node(){ delete data; } }; class linkedList{ public: node * head; node * tail; int nodeCount; linkedList(){ head = NULL; tail = NULL; } ~linkedList(){ while (head){ node* t = head->next; delete head; if (t) head = t; } } void add(node * n){ if (!head) { head = n; head->next = NULL; tail = head; nodeCount = 0; }else { node * t = head; while (t->next) t = t->next; t->next = n; n->next = NULL; nodeCount++; } } node * operator[](const int &i){ if ((i >= 0) && (i < nodeCount)) throw new exception("ERROR: Invalid index on linked list.", -1); node *t = head; for (int x = i; x < nodeCount; x++) t = t->next; return t; } void print(){ if (!head) return; node * t = head; string collection; cout << "["; int c = 0; if (!t->next) cout << *(t->data); else while (t->next){ cout << *(t->data); c++; if (t->next) t = t->next; if (c < nodeCount) cout << ", "; } cout << "]" << endl; } }; int main (const int & argc, const char * argv[]){ try{ linkedList * myList = new linkedList; for (int x = 0; x < 10; x++) myList->add(new node(x)); myList->print(); }catch(exception &ex){ cout << ex.what() << endl; return -1; } return 0; }
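
    As a self-contained point of reference for the indexing part of such an exercise, here is a small C++ sketch of a bounds-checked lookup over a singly linked list (illustrative only, not the code from the question):

        #include <iostream>
        #include <stdexcept>

        struct node {
            int data;
            node* next;
        };

        // Bounds-checked indexing: reject indices outside [0, count)
        // and advance exactly i links from the head.
        node* at(node* head, int count, int i) {
            if (i < 0 || i >= count)
                throw std::out_of_range("ERROR: Invalid index on linked list.");
            node* t = head;
            for (int x = 0; x < i; ++x)
                t = t->next;
            return t;
        }

        int main() {
            node c{3, nullptr}, b{2, &c}, a{1, &b};
            std::cout << at(&a, 3, 2)->data << "\n";   // prints 3
            try { at(&a, 3, 5); }
            catch (const std::exception& e) { std::cout << e.what() << "\n"; }
            return 0;
        }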

    Read the article

  • [NSCustomView isOpaque]: message sent to deallocated instance 0x123456

    - by JxXx
    Hi all. I receive that message in the debugger console since I added the following arguments for debugging my application with Xcode: NSZombieEnabled: YES, NSZombieLevel: 16. I was looking for zombie objects... Before doing so, the application failed before I could tell where, what and why it was happening. Now I'm pretty sure that 'something' outside my code is trying to access an object that was previously released, and I can't tell where or why it happens, nor where it was released. My application is based on this proof of concept (very interesting and colorful) for the QuartzCore framework: http://www.cimgf.com/2008/03/03/core-animation-tutorial-wizard-dialog-with-transitions/ Based on it, I added a few more NSViews to my project, with a title and an index for each one; I also added some buttons, text and images depending on which 'dialog' (ACLinkedView object) it was. The transition from one ACLinkedView object to another goes through a validation that depends on the view you are on. As you see, I used this proof of concept as the foundation of my application and it grew and grew into an application that makes use of configuration files and web services (using gSOAP and C...). I hope you can give me some clues as to where my error is. I've been debugging unsuccessfully the whole week; as I said before, I think that message comes from a point outside my code. I'd say the problem is related to bad memory allocation or to automatic behavior (nearly completely unknown to me) while loading the nib components. I will try to explain all this with parts of my code. This is my ACLinkedView definition: #import <Cocoa/Cocoa.h> @interface ACLinkedView : NSView { // The Window (to close it if needed) IBOutlet NSWindow *mainWindow; // Linked Views IBOutlet ACLinkedView *previousView; IBOutlet ACLinkedView *nextView; // Buttons IBOutlet NSButton *previousButton; IBOutlet NSButton *nextButton; IBOutlet NSButton *helpButton; //It has to be a Button!! IBOutlet NSImageView *bannerImg; NSString *sName; int iPosition; } - (void) SetName: (NSString*) Name; - (void) SetPosition: (int) Position; - (NSString*) GetName; - (int) GetPosition; - (void) windowWillClose:(NSNotification*)aNotification; @property (retain) NSWindow *mainWindow; @property (retain) ACLinkedView *previousView, *nextView; @property (retain) NSButton *previousButton, *nextButton, *helpButton; @property (retain) NSImageView *bannerImg; @property (retain) NSString *sName; @end (As you can see, the initialization of each ACLinkedView object depends on its position, which is set up in Interface Builder by linking actions, buttons and CustomViews.) Did I explain enough?
    Do you think I should put more of my code here, e.g. the AppDelegate definition or its awakeFromNib method? Can you help me in any way? Thanks in advance. Juan

    Read the article

  • My server app works strangely. What could be the reason(s)?

    - by Poni
    Hi! I've written a server app (two parts actually: a proxy server and a game server) in C++ (a board game). It uses IOCP as the sockets interface. For that app I've also written a "client simulator" (hereafter "client") app that spawns many client connections, where each of them plays at very high speed, getting the CPU to 100% utilization. So, that's how it goes in terms of topology:
    Game server - holds the game state. Real players do not connect to it directly but through the proxy server. When a player joins a game, the proxy actually asks for it on behalf of that player, the game server spawns a "player instance" for that player, and from then on every notification between the game server and the player passes through the proxy.
    Proxy server - holds the TCP connections with the real players. Players communicate with the game server through it only.
    Client simulator - connects to the proxy only.
    When running the server (again, it's actually two server apps) and client locally it all works just fine. I'm talking about 40k+ player instances, all of them active in a game. On the other hand, when running the server remotely with, say, 1000 clients playing, things get strange. For example, I run it as described above. Then with Task Manager I kill the client simulator app ("End Process Tree"). Then it seems like a buffer on the remote server got modified by another thread, or in other words, a memory corruption has occurred. The server crashes because it got an unknown message id (it's a custom protocol where each message has its own unique number). To make things clear, here is how I run the apps: PC1 - game server and client simulator (because the clients will connect to the proxy). PC2 - proxy server. The strangest thing is this: only the remote side gets "corrupted". Remote in the sense that it's not the PC I use to code the app (VC++ 2008). Let's call the PC I use to code the apps "PC1". Now for example, if this time I run the game server on PC1 (meaning the proxy server is on PC2 and the client simulator on PC1), then the proxy server crashes with an "unknown message id" error. Another variation is when I run the proxy server on PC1 (again, the dev machine) and the game server and the client simulator on PC2; then the game server on PC2 crashes. As for the IOCP config: the servers' internal connections use the default receive/send buffer sizes. I even tried setting them to 1 MB, but no luck. I have three PCs in total: 2 x Vista 64-bit <<-- one of those is the dev machine, the other is connected through WiFi; 1 x WinXP 32-bit. They're all connected in a "full duplex" manner. What could be the reason? I've tried about everything: stack tracing, recording some actions (like read/write logging). I want to stress that only the PC I'm not using to code the apps crashes (actually the server app "role" that is running on it - sometimes the game server and sometimes the proxy server). At first I thought that maybe the wireless PC has problems (it's wireless..) but: TCP has its own mechanisms to make sure packets are delivered properly. Also, a crash also happens when trying it with the two PCs that are physically connected (Vista vs. XP). Another option is that the Windows DLL versions might have problems, but then again, one of the tests is Vista vs. Vista, and the other is Vista vs. XP. Any idea?

    Read the article

  • [C] Texture management / pointer question

    - by ndg
    I'm working on a texture management and animation solution for a small side project of mine. Although the project uses Allegro for rendering and input, my question mostly revolves around C and memory management. I wanted to post it here to get thoughts and insight into the approach, as I'm terrible when it comes to pointers. Essentially what I'm trying to do is load all of my texture resources into a central manager (textureManager) - which is essentially an array of structs containing ALLEGRO_BITMAP objects. The textures stored within the textureManager are mostly full sprite sheets. From there, I have an anim(ation) struct, which contains animation-specific information (along with a pointer to the corresponding texture within the textureManager). To give you an idea, here's how I setup and play the players 'walk' animation: createAnimation(&player.animations[0], "media/characters/player/walk.png", player.w, player.h); playAnimation(&player.animations[0], 10); Rendering the animations current frame is just a case of blitting a specific region of the sprite sheet stored in textureManager. For reference, here's the code for anim.h and anim.c. I'm sure what I'm doing here is probably a terrible approach for a number of reasons. I'd like to hear about them! Am I opening myself to any pitfalls? Will this work as I'm hoping? anim.h #ifndef ANIM_H #define ANIM_H #define ANIM_MAX_FRAMES 10 #define MAX_TEXTURES 50 struct texture { bool active; ALLEGRO_BITMAP *bmp; }; struct texture textureManager[MAX_TEXTURES]; typedef struct tAnim { ALLEGRO_BITMAP **sprite; int w, h; int curFrame, numFrames, frameCount; float delay; } anim; void setupTextureManager(void); int addTexture(char *filename); int createAnimation(anim *a, char *filename, int w, int h); void playAnimation(anim *a, float delay); void updateAnimation(anim *a); #endif anim.c void setupTextureManager() { int i = 0; for(i = 0; i < MAX_TEXTURES; i++) { textureManager[i].active = false; } } int addTextureToManager(char *filename) { int i = 0; for(i = 0; i < MAX_TEXTURES; i++) { if(!textureManager[i].active) { textureManager[i].bmp = al_load_bitmap(filename); textureManager[i].active = true; if(!textureManager[i].bmp) { printf("Error loading texture: %s", filename); return -1; } return i; } } return -1; } int createAnimation(anim *a, char *filename, int w, int h) { int textureId = addTextureToManager(filename); if(textureId > -1) { a->sprite = textureManager[textureId].bmp; a->w = w; a->h = h; a->numFrames = al_get_bitmap_width(a->sprite) / w; printf("Animation loaded with %i frames, given resource id: %i\n", a->numFrames, textureId); } else { printf("Texture manager full\n"); return 1; } return 0; } void playAnimation(anim *a, float delay) { a->curFrame = 0; a->frameCount = 0; a->delay = delay; } void updateAnimation(anim *a) { a->frameCount ++; if(a->frameCount >= a->delay) { a->frameCount = 0; a->curFrame ++; if(a->curFrame >= a->numFrames) { a->curFrame = 0; } } }
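
    Purely as an illustration of one way to avoid holding raw pointers into the manager array, here is a self-contained C++ sketch where the animation stores the manager index (a handle) instead; a placeholder Bitmap struct stands in for ALLEGRO_BITMAP, and everything else is invented for the example:

        #include <cstdio>

        struct Bitmap { int width; };               // stand-in for ALLEGRO_BITMAP

        #define MAX_TEXTURES 50

        struct Texture { bool active; Bitmap bmp; };
        static Texture textureManager[MAX_TEXTURES];

        struct Anim {
            int textureId;                          // handle into textureManager, not a raw pointer
            int w, h, numFrames;
        };

        // Returns a handle, or -1 if the manager is full.
        int addTexture(int width) {
            for (int i = 0; i < MAX_TEXTURES; ++i) {
                if (!textureManager[i].active) {
                    textureManager[i].active = true;
                    textureManager[i].bmp.width = width;   // real code would load the bitmap here
                    return i;
                }
            }
            return -1;
        }

        bool createAnimation(Anim* a, int sheetWidth, int w, int h) {
            int id = addTexture(sheetWidth);
            if (id < 0) { std::puts("Texture manager full"); return false; }
            a->textureId = id;
            a->w = w; a->h = h;
            a->numFrames = textureManager[id].bmp.width / w;
            return true;
        }

        int main() {
            Anim walk;
            if (createAnimation(&walk, 640, 64, 64))
                std::printf("Animation loaded with %d frames, given resource id: %d\n",
                            walk.numFrames, walk.textureId);
            return 0;
        }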

    Read the article

  • Is there a potential for resource leak/double free here?

    - by nhed
    The following sample (not compiled, so I won't vouch for the syntax) pulls two resources from resource pools (not allocated with new), then "binds" them together with MyClass for the duration of a certain transaction. The transaction, implemented here by myFunc, attempts to protect against leakage of these resources by tracking their "ownership". The local resource pointers are cleared when it's obvious that instantiation of MyClass was successful. The local catch, as well as the destructor ~MyClass, returns the resources to their pool (double-frees are prevented by the above-mentioned clearing of the local pointers). Instantiation of MyClass can fail and result in an exception at two steps: (1) the actual memory allocation, or (2) the constructor body itself. I do not have a problem with #1, but in the case of #2, if the exception is thrown AFTER m_resA & m_resB were set, both ~MyClass and the cleanup code of myFunc assume responsibility for returning these resources to their pools. Is this a reasonable concern?
    Options I have considered, but didn't like:
    - Smart pointers (like boost's shared_ptr); I didn't see how to apply them to a resource pool (aside from wrapping it in yet another instance).
    - Allowing the double-free to occur at this level but protecting against it in the resource pools.
    - Trying to use the exception type: deducing that if bad_alloc was caught, MyClass did not take ownership. This would require a try-catch in the constructor to make sure that any allocation failures in ABC() ...more code here... won't be confused with failures to allocate MyClass.
    Is there a clean, simple solution that I have overlooked?
    class SomeExtResourceA; class SomeExtResourceB; class MyClass { private: // These resources come out of a resource pool not allocated with "new" for each use by MyClass SomeResourceA* m_resA; SomeResourceB* m_resB; public: MyClass(SomeResourceA* resA, SomeResourceB* resB): m_resA(resA), m_resB(resB) { ABC(); // ... more code here, could throw exceptions } ~MyClass(){ if(m_resA){ m_resA->Release(); } if(m_resB){ m_resB->Release(); } } }; void myFunc(void) { SomeResourceA* resA = NULL; SomeResourceB* resB = NULL; MyClass* pMyInst = NULL; try { resA = g_pPoolA->Allocate(); resB = g_pPoolB->Allocate(); pMyInst = new MyClass(resA,resB); resA=NULL; // ownership successfully transferred to pMyInst resB=NULL; // ownership successfully transferred to pMyInst // Do some work with pMyInst; ...; delete pMyInst; } catch (...) { // cleanup // need to check if resA, or resB were allocated prior // to construction of pMyInst. if(resA) resA->Release(); if(resB) resB->Release(); delete pMyInst; throw; // rethrow caught exception } }
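
    As an illustration of a related technique (not taken from the question), here is a minimal C++ sketch of a small RAII guard per pooled resource, where whoever holds the guard owns the resource and the hand-off is an explicit release(); the Pool and Resource types below are stand-ins invented for the example:

        #include <cstdio>

        // Stand-ins for the question's pool and resource types.
        struct Resource {};
        struct Pool {
            Resource* Allocate() { return new Resource(); }
            void Return(Resource* r) { delete r; }
        };

        // Owns one pooled resource; returns it to the pool unless ownership is handed off.
        class PoolGuard {
        public:
            explicit PoolGuard(Pool& pool) : pool_(pool), res_(pool.Allocate()) {}
            ~PoolGuard() { if (res_) pool_.Return(res_); }
            PoolGuard(const PoolGuard&) = delete;
            PoolGuard& operator=(const PoolGuard&) = delete;

            Resource* get() const { return res_; }
            Resource* release() { Resource* r = res_; res_ = nullptr; return r; }  // explicit ownership hand-off

        private:
            Pool& pool_;
            Resource* res_;
        };

        void myFunc(Pool& poolA, Pool& poolB) {
            PoolGuard a(poolA);
            PoolGuard b(poolB);
            // Construct the consumer from a.get() / b.get(). If that throws, the guards
            // still own the resources and return them to their pools: no leak, no double free.
            // On success, call a.release() and b.release() to hand ownership to the consumer.
            std::puts("transaction body would run here");
        }

        int main() { Pool pa, pb; myFunc(pa, pb); return 0; }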

    Read the article

  • error Caused by: java.lang.OutOfMemoryError Load Image

    - by user2493770
    This is my method to load images in background, the first and second load normally. But after these loading, a memory error appears. How can I fix this? public class MainArrayAdapterViewHolder extends ArrayAdapter<EmpresaListaPrincipal> { private final Context context; private ArrayList<EmpresaListaPrincipal> data_array; public DisplayImageOptions options; public ImageLoader imageLoader = ImageLoader.getInstance(); public MainArrayAdapterViewHolder(Context context, ArrayList<EmpresaListaPrincipal> list_of_ids) { super(context, R.layout.main_list_rowlayout, list_of_ids); this.context = context; this.data_array = list_of_ids; //------------- read more here https://github.com/nostra13/Android-Universal-Image-Loader options = new DisplayImageOptions.Builder().showImageForEmptyUri(R.drawable.ic_launcher).showImageOnFail(R.drawable.ic_launcher).resetViewBeforeLoading() .cacheOnDisc().imageScaleType(ImageScaleType.IN_SAMPLE_INT).bitmapConfig(Bitmap.Config.RGB_565).delayBeforeLoading(0).build(); File cacheDir = StorageUtils.getCacheDirectory(context); ImageLoaderConfiguration config = new ImageLoaderConfiguration.Builder(context).memoryCacheExtraOptions(720, 1280) // default = device screen // dimensions .discCacheExtraOptions(720, 1280, CompressFormat.JPEG, 100).threadPoolSize(3) // default .threadPriority(Thread.NORM_PRIORITY - 1) // default .memoryCacheSize(2 * 1024 * 1024).discCache(new UnlimitedDiscCache(cacheDir)) // default .discCacheSize(50 * 1024 * 1024).discCacheFileCount(100).discCacheFileNameGenerator(new HashCodeFileNameGenerator()) // default .imageDownloader(new BaseImageDownloader(context)) // default .tasksProcessingOrder(QueueProcessingType.FIFO) // default .defaultDisplayImageOptions(options) // default .build(); imageLoader.init(config); } @Override public View getView(int position, View convertView, ViewGroup parent) { ViewHolder viewholder; View v = convertView; //Asociamos el layout de la lista que hemos creado e incrustamos el ViewHolder if(convertView == null){ LayoutInflater inflater = (LayoutInflater) context.getSystemService(Context.LAYOUT_INFLATER_SERVICE); //View rowView = inflater.inflate(R.layout.main_list_rowlayout, parent, false); v = inflater.inflate(R.layout.main_list_rowlayout, parent, false); viewholder = new ViewHolder(); viewholder.textView_main_row_title = (TextView) v.findViewById(R.id.textView_main_row_title); viewholder.imageView_restaurant_icon = (ImageView) v.findViewById(R.id.imageView_restaurant_icon); viewholder.textView_main_row_direccion = (TextView) v.findViewById(R.id.textView_main_row_direccion); v.setTag(viewholder); } ImageLoadingListener mImageLoadingListenr = new ImageLoadingListener() { @Override public void onLoadingStarted(String arg0, View arg1) { // Log.e("* started *", String.valueOf("complete")); } @Override public void onLoadingComplete(String arg0, View arg1, Bitmap arg2) { // Log.e("* complete *", String.valueOf("complete")); } @Override public void onLoadingCancelled(String arg0, View arg1) { } @Override public void onLoadingFailed(String arg0, View arg1, FailReason arg2) { // TODO Auto-generated method stub } }; try { viewholder = (ViewHolder) v.getTag(); viewholder.textView_main_row_title.setText(data_array.get(position).getNOMBRE()); viewholder.textView_main_row_direccion.setText(data_array.get(position).getDIRECCION()); String image = data_array.get(position).getURL(); // ------- image --------- try { if (image.length() > 4) imageLoader.displayImage(image, viewholder.imageView_restaurant_icon, options, 
mImageLoadingListenr); } catch (Exception ex) { } //textView_main_row_title.setText(name); //textView_main_row_address.setText(address); } catch (Exception e) { // TODO: handle exception } return v; } public class ViewHolder { public TextView textView_main_row_title; public TextView textView_main_row_direccion; //public TextView cargo; public ImageView imageView_restaurant_icon; } }

    Read the article

  • Losing NSManaged Objects in my Application

    - by Wayfarer
    I've been doing quite a bit of work on a fun little iPhone app. At one point, I get a bunch of player objects from my Persistant store, and then display them on the screen. I also have the options of adding new player objects (their just custom UIButtons) and removing selected players. However, I believe I'm running into some memory management issues, in that somehow the app is not saving which "players" are being displayed. Example: I have 4 players shown, I select them all and then delete them all. They all disappear. But if I exit and then reopen the application, they all are there again. As though they had never left. So somewhere in my code, they are not "really" getting removed. MagicApp201AppDelegate *appDelegate = [[UIApplication sharedApplication] delegate]; context = [appDelegate managedObjectContext]; NSFetchRequest *request = [[NSFetchRequest alloc] init]; NSEntityDescription *desc = [NSEntityDescription entityForName:@"Player" inManagedObjectContext:context]; [request setEntity:desc]; NSError *error; NSMutableArray *objects = [[[context executeFetchRequest:request error:&error] mutableCopy] autorelease]; if (objects == nil) { NSLog(@"Shit man, there was an error taking out the single player object when the view did load. ", error); } int j = 0; while (j < [objects count]) { if ([[[objects objectAtIndex:j] valueForKey:@"currentMultiPlayer"] boolValue] == NO) { [objects removeObjectAtIndex:j]; j--; } else { j++; } } [self setPlayers:objects]; //This is a must, it NEEDS to work Objects are all the players playing So in this snippit (in the viewdidLoad method), I grab the players out of the persistant store, and then remove the objects I don't want (those whose boolValue is NO), and the rest are kept. This works, I'm pretty sure. I think the issue is where I remove the players. Here is that code: NSLog(@"Remove players"); /** For each selected player: Unselect them (remove them from SelectedPlayers) Remove the button from the view Remove the button object from the array Remove the player from Players */ NSLog(@"Debugging Removal: %d", [selectedPlayers count]); for (int i=0; i < [selectedPlayers count]; i++) { NSManagedObject *rPlayer = [selectedPlayers objectAtIndex:i]; [rPlayer setValue:[NSNumber numberWithBool:NO] forKey:@"currentMultiPlayer"]; int index = [players indexOfObjectIdenticalTo:rPlayer]; //this is the index we need for (int j = (index + 1); j < [players count]; j++) { UIButton *tempButton = [playerButtons objectAtIndex:j]; tempButton.tag--; } NSError *error; if ([context hasChanges] && ![context save:&error]) { NSLog(@"Unresolved error %@, %@", error, [error userInfo]); abort(); } UIButton *aButton = [playerButtons objectAtIndex:index]; [players removeObjectAtIndex:index]; [aButton removeFromSuperview]; [playerButtons removeObjectAtIndex:index]; } [selectedPlayers removeAllObjects]; NSError *error; if ([context hasChanges] && ![context save:&error]) { NSLog(@"Unresolved error %@, %@", error, [error userInfo]); abort(); } NSLog(@"About to refresh YES"); [self refreshAllPlayers:YES]; The big part in the second code snippet is I set them to NO for currentMultiPlayer. NO NO NO NO NO, they should NOT come back when the view does load, NEVER ever ever. Not until I say so. No other relevant part of the code sets that to YES. Which makes me think... perhaps they aren't being saved. Perhaps that doesn't save, perhaps those objects aren't being managed anymore, and so they don't get saved in. Is there a lifetime (metaphorically) of NSManaged object? 
The Players array is the same I set in the "viewDidLoad" method, and SelectedPlayers holds players that are selected, references to NSManagedObjects. Does it have something to do with Removing them from the array? I'm so confused, some insight would be greatly appreciated!!

    Read the article

  • How to copy depth buffer to CPU memory in DirectX?

    - by Ashwin
    I have code in OpenGL that uses glReadPixels to copy the depth buffer to a CPU memory buffer: glReadPixels(0, 0, w, h, GL_DEPTH_COMPONENT, GL_FLOAT, dbuf); How do I achieve the same in DirectX? I have looked at a similar question which gives the solution to copy the RGB buffer. I've tried to write similar code to copy the depth buffer: IDirect3DSurface9* d3dSurface; d3dDevice->GetDepthStencilSurface(&d3dSurface); D3DSURFACE_DESC d3dSurfaceDesc; d3dSurface->GetDesc(&d3dSurfaceDesc); IDirect3DSurface9* d3dOffSurface; d3dDevice->CreateOffscreenPlainSurface( d3dSurfaceDesc.Width, d3dSurfaceDesc.Height, D3DFMT_D32F_LOCKABLE, D3DPOOL_SCRATCH, &d3dOffSurface, NULL); // FAILS: D3DERR_INVALIDCALL D3DXLoadSurfaceFromSurface( d3dOffSurface, NULL, NULL, d3dSurface, NULL, NULL, D3DX_FILTER_NONE, 0); // Copy from offscreen surface to CPU memory ... The code fails on the call to D3DXLoadSurfaceFromSurface. It returns the error value D3DERR_INVALIDCALL. What is wrong with my code?

    Read the article

  • What to call objects that may delete cached data to meet memory constraints?

    - by Brent
    I'm developing some cross-platform software which is intended to run on mobile devices. Both iOS and Android provide low-memory warnings. I plan to make a wrapper class that will free cached resources (like textures) when low-memory warnings are issued (assuming the resource is not in use). If the resource comes back into use, the wrapper will re-cache it, etc... I'm trying to think of what this is called. In .Net it's similar to a "weak reference", but that only really makes sense when dealing with garbage collection, and since I'm using C++ and shared_ptr, a weak reference already has a meaning which is distinct from the one I'm thinking of. There's also the difference that this class will be able to rebuild the cache when needed. What is this pattern (or whatever it is) called? Edit: Feel free to recommend tags for this question.
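
    As an illustration, here is a minimal C++ sketch of the kind of wrapper the question describes: a handle that drops its cached payload on a memory warning and rebuilds it on the next access (all names are invented; the loader callback stands in for whatever recreates the resource):

        #include <functional>
        #include <iostream>
        #include <memory>
        #include <string>

        // Caches a value built by a loader; the cache can be purged under memory
        // pressure and is transparently rebuilt on the next access.
        template <typename T>
        class Discardable {
        public:
            explicit Discardable(std::function<T()> loader) : loader_(std::move(loader)) {}

            const T& get() {
                if (!cached_) cached_.reset(new T(loader_()));   // rebuild on demand
                return *cached_;
            }
            void onLowMemory() { cached_.reset(); }              // drop the cached copy

        private:
            std::function<T()> loader_;
            std::unique_ptr<T> cached_;
        };

        int main() {
            Discardable<std::string> texture([] { return std::string("decoded texture bytes"); });
            std::cout << texture.get() << "\n";   // loads and caches
            texture.onLowMemory();                // purged
            std::cout << texture.get() << "\n";   // rebuilt
            return 0;
        }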

    Read the article

  • How important is it for a programmer to know how to implement a QuickSort/MergeSort algorithm from memory?

    - by John Smith
    I was reviewing my notes and stumbled across the implementation of different sorting algorithms. As I attempted to make sense of the implementations of QuickSort and MergeSort, it occurred to me that although I do programming for a living and consider myself decent at what I do, I have neither the photographic memory nor the sheer brainpower to implement those algorithms without relying on my notes. All I remembered was that some of those algorithms are stable and some are not, some take O(n log n) or O(n^2) time to complete, and some use more memory than others... I'd feel like I don't deserve this kind of job, were it not that my position doesn't require me to use any sorting algorithm other than those found in standard APIs. I mean, how many of you have a programming position where it actually is essential that you can remember or come up with this kind of stuff on your own?
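
    For reference, a short C++ sketch of the kind of implementation the question has in mind, an in-place QuickSort (Lomuto partition); average O(n log n), worst case O(n^2), not stable:

        #include <algorithm>
        #include <iostream>
        #include <vector>

        // Plain in-place quicksort using the Lomuto partition scheme.
        void quicksort(std::vector<int>& a, int lo, int hi) {
            if (lo >= hi) return;
            int pivot = a[hi];
            int i = lo;
            for (int j = lo; j < hi; ++j)
                if (a[j] < pivot) std::swap(a[i++], a[j]);
            std::swap(a[i], a[hi]);
            quicksort(a, lo, i - 1);
            quicksort(a, i + 1, hi);
        }

        int main() {
            std::vector<int> v{5, 2, 9, 1, 5, 6};
            quicksort(v, 0, static_cast<int>(v.size()) - 1);
            for (int x : v) std::cout << x << ' ';   // 1 2 5 5 6 9
            std::cout << '\n';
            return 0;
        }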

    Read the article

  • SAP puts the focus on its Cloud, In-Memory and mobility offerings and promotes real-time at SAPPHIRE NOW

    SAP: Cloud, In-Memory and mobility. At SAPPHIRE NOW the vendor reminds us that it has diversified and promotes real-time. As of today, SAP's flagship event, SAPPHIRE NOW, is being held in Madrid. The event brings together more than 10,000 people and opened in a science-fiction atmosphere strongly inspired by Ridley Scott. But behind this very solemn, American-style opening show, depicting a future transformed (for the better) by technological progress, SAP's message is firmly anchored in the present: SAP has changed and wants it to be known. It is no longer the vendor of a single product. Cloud, In-Memory and mobility have become its three pillars.

    Read the article

  • 13.10 upgrade dropping wifi [on hold]

    - by Daryl
    Almost a complete newb here. After my last upgrade from 12.04 to 13.10 my wifi now randomly drops. The only way I can get a signal back is a shutdown and restart otherwise it shows no network is even available to connect to. Had no problems until the upgrade. Any help would be appreciated. H/W path Device Class Description ==================================================== system h8-1534 (H2N64AA#ABA) /0 bus 2AC8 /0/0 memory 64KiB BIOS /0/4 processor AMD FX(tm)-6200 Six-Core Processor /0/4/5 memory 288KiB L1 cache /0/4/6 memory 6MiB L2 cache /0/4/7 memory 8MiB L3 cache /0/d memory 10GiB System Memory /0/d/0 memory DIMM Synchronous [empty] /0/d/1 memory 4GiB DIMM DDR3 Synchronous 1600 MHz (0.6 ns) /0/d/2 memory 2GiB DIMM DDR3 Synchronous 1600 MHz (0.6 ns) /0/d/3 memory 4GiB DIMM DDR3 Synchronous 1600 MHz (0.6 ns) /0/100 bridge RD890 PCI to PCI bridge (external gfx0 port B) /0/100/0.2 generic RD990 I/O Memory Management Unit (IOMMU) /0/100/2 bridge RD890 PCI to PCI bridge (PCI express gpp port B) /0/100/2/0 display Turks PRO [Radeon HD 7570] /0/100/2/0.1 multimedia Turks/Whistler HDMI Audio [Radeon HD 6000 Series] /0/100/5 bridge RD890 PCI to PCI bridge (PCI express gpp port E) /0/100/5/0 bus TUSB73x0 SuperSpeed USB 3.0 xHCI Host Controller /0/100/11 storage SB7x0/SB8x0/SB9x0 SATA Controller [RAID5 mode] /0/100/12 bus SB7x0/SB8x0/SB9x0 USB OHCI0 Controller /0/100/12.2 bus SB7x0/SB8x0/SB9x0 USB EHCI Controller /0/100/13 bus SB7x0/SB8x0/SB9x0 USB OHCI0 Controller /0/100/13.2 bus SB7x0/SB8x0/SB9x0 USB EHCI Controller /0/100/14 bus SBx00 SMBus Controller /0/100/14.2 multimedia SBx00 Azalia (Intel HDA) /0/100/14.3 bridge SB7x0/SB8x0/SB9x0 LPC host controller /0/100/14.4 bridge SBx00 PCI to PCI Bridge /0/100/14.5 bus SB7x0/SB8x0/SB9x0 USB OHCI2 Controller /0/100/15 bridge SB700/SB800/SB900 PCI to PCI bridge (PCIE port 0) /0/100/15.1 bridge SB700/SB800/SB900 PCI to PCI bridge (PCIE port 1) /0/100/15.2 bridge SB900 PCI to PCI bridge (PCIE port 2) /0/100/15.2/0 wlan0 network RT3290 Wireless 802.11n 1T/1R PCIe /0/100/15.2/0.1 generic RT3290 Bluetooth /0/100/15.3 bridge SB900 PCI to PCI bridge (PCIE port 3) /0/100/15.3/0 eth0 network RTL8111/8168/8411 PCI Express Gigabit Ethernet Controller /0/100/16 bus SB7x0/SB8x0/SB9x0 USB OHCI0 Controller /0/100/16.2 bus SB7x0/SB8x0/SB9x0 USB EHCI Controller /0/101 bridge Family 15h Processor Function 0 /0/102 bridge Family 15h Processor Function 1 /0/103 bridge Family 15h Processor Function 2 /0/104 bridge Family 15h Processor Function 3 /0/105 bridge Family 15h Processor Function 4 /0/106 bridge Family 15h Processor Function 5 /0/1 scsi0 storage /0/1/0.0.0 /dev/sda disk 1TB WDC WD1002FAEX-0 /0/1/0.0.0/1 volume 189MiB Windows FAT volume /0/1/0.0.0/2 /dev/sda2 volume 244MiB data partition /0/1/0.0.0/3 /dev/sda3 volume 931GiB LVM Physical Volume /0/2 scsi2 storage /0/2/0.0.0 /dev/cdrom disk DVD A DH16ACSHR /0/3 scsi6 storage /0/3/0.0.0 /dev/sdb disk SCSI Disk /0/3/0.0.1 /dev/sdc disk SCSI Disk /0/3/0.0.2 /dev/sdd disk SCSI Disk /0/3/0.0.3 /dev/sde disk MS/MS-Pro /0/3/0.0.3/0 /dev/sde disk /1 power Standard Efficiency I apologize for my newbness. I hope this is enough info for the hardware. Thanks Bruno for pointing out I needed to add more info. If I am lacking anything else please let me know and I'll post it.

    Read the article

  • SAP opens its In-Memory platform to startups and organises a series of events to build an ecosystem around HANA

    SAP opens its In-Memory platform to startups and organises a series of events to build a reliable ecosystem around HANA. SAP is organising a series of events to help developers and startups that use the HANA In-Memory platform get the most out of it. SAP HANA (High-Performance Analytic Appliance) makes it possible to build turbo-charged data-warehouse environments that deliver customer data in real time. It also powers an online network and offers an open platform to developers. The company wants a reliable ecosystem to be built around its platform through its programme of support for startups around the...

    Read the article

  • Recommend a basic 3D graphics card for HP ProLiant DL360 G5?

    - by arathorn
    We've got an HP ProLiant DL360 G5 running Windows Server 2008 R2 that needs a graphics card with 3D acceleration (the integrated ATI ES1000 graphics don't have this). This is an unfortunate requirement that we can't avoid, as it is required by S/W installed on the server that is beyond our control. Looks like the PCI-Express slots in the box are x8 (1 full and 1 low profile). We just need a basic card, as long as it has the 3D acceleration. Any recommendations?

    Read the article

  • Nginx: redirect requests to sub-domains that do not exist to a custom 404 page when a wildcard A record is set?

    - by Anagio
    Is there a way to capture all requests to arbitrary sub-domains which do not have a virtual host set up, and redirect them to a custom 404 page in nginx? I will have a wildcard A record set up (*.example.com) and all our users will have a sub-domain, username.example.com. If someone enters a sub-domain which does not exist, how can I redirect to a custom 404 page rather than have it resolve, since the wildcard is set up?

    Read the article

  • Is the integrated Radeon HD 4200 graphics card capable of handling full HD?

    - by develroot
    I enjoy my integrated Radeon HD 4200 graphics card at a resolution of 1280x1024 pixels on a 19-inch LG Flatron (5:4 aspect ratio), playing FIFA 10 at max resolution and max quality. But recently I decided to upgrade my monitor and get a 24-inch BenQ, 1920x1080, full HD. Would I experience any problems with that graphics card on such a big monitor? Usually I don't play games, just movies, music and a bit of programming, but it would be nice to be able to play some Counter-Strike without artifacts.

    Read the article

  • Can you boot an Acer Aspire One from an SD card when no BIOS is available?

    - by henrijs
    Is it possible to boot the Acer Aspire One PC from an SD card? I have bricked an Aspire One, and it does not even start the BIOS. Aspire Ones have this issue, and a BIOS update usually works; it helped me once in the past, but this time it's all over and the BIOS update fails. It still reads the SD card with the magic Ctrl + Esc shortcut used to launch the BIOS update. Can I trick the computer into booting somehow using this shortcut?

    Read the article

  • Can I boot my notebook via an eSATA PCI Express card?

    - by OliverS
    I would like to boot directly from an external hard disk to improve performance over my internal notebook hard disk. My notebook has no native eSATA jack, only a PCI Express card. As my BIOS doesn't support the card at boot time, there is no way to boot from it directly. My question is: is it possible to work around this issue by using a USB stick or similar with a boot loader like GRUB, and if so, will this only work for Linux, or for Windows as well?

    Read the article

  • Is it possible to connect an internal USB 3 card reader to a computer with only USB 2?

    - by Grzenio
    I would like to buy an internal flash card reader. There are now loads of USB 3 readers, however I still have a USB 2 system. Would it be possible to connect the new reader to the old USB port on the motherboard? I understand that I will not be able to take advantage of the faster reader with the old motherboard, but I am planning an upgrade next year and I would like to avoid having to upgrade the card reader as well...

    Read the article

  • How to analyze 'dbcc memorystatus' result in SQL Server 2008

    - by envykok
    Currently I am facing a SQL memory pressure issue. I have run 'dbcc memorystatus'; here is part of the result:
    Memory Manager (KB): VM Reserved 23617160, VM Committed 14818444, Locked Pages Allocated 0, Reserved Memory 1024, Reserved Memory In Use 0
    Memory node Id = 0 (KB): VM Reserved 23613512, VM Committed 14814908, Locked Pages Allocated 0, MultiPage Allocator 387400, SinglePage Allocator 3265000
    MEMORYCLERK_SQLBUFFERPOOL (node 0) (KB): VM Reserved 16809984, VM Committed 14184208, Locked Pages Allocated 0, SM Reserved 0, SM Committed 0, SinglePage Allocator 0, MultiPage Allocator 408
    MEMORYCLERK_SQLCLR (node 0) (KB): VM Reserved 6311612, VM Committed 141616, Locked Pages Allocated 0, SM Reserved 0, SM Committed 0, SinglePage Allocator 1456, MultiPage Allocator 20144
    CACHESTORE_SQLCP (node 0) (KB): VM Reserved 0, VM Committed 0, Locked Pages Allocated 0, SM Reserved 0, SM Committed 0, SinglePage Allocator 3101784, MultiPage Allocator 300328
    Buffer Pool (Value): Committed 1742946, Target 1742946, Database 1333883, Dirty 940, In IO 1, Latched 18, Free 89, Stolen 408974, Reserved 2080, Visible 1742946, Stolen Potential 1579938, Limiting Factor 13, Last OOM Factor 0, Page Life Expectancy 5463
    Process/System Counts (Value): Available Physical Memory 258572288, Available Virtual Memory 8771398631424, Available Paging File 16030617600, Working Set 15225597952, Percent of Committed Memory in WS 100, Page Faults 305556823, System physical memory high 1, System physical memory low 0, Process physical memory low 0, Process virtual memory low 0
    Procedure Cache (Value): TotalProcs 11382, TotalPages 430160, InUsePages 28
    Can you help me analyze this result? Is the memory issue caused by a large number of cached execution plans, or by something else?

    Read the article

  • How to shrink-to-fit an std::vector in a memory-efficient way?

    - by dehmann
    I would like to 'shrink-to-fit' an std::vector, to reduce its capacity to its exact size, so that additional memory is freed. The standard trick seems to be the one described here: template< typename T, class Allocator > void shrink_capacity(std::vector<T,Allocator>& v) { std::vector<T,Allocator>(v.begin(),v.end()).swap(v); } The whole point of shrink-to-fit is to save memory, but doesn't this method first create a deep copy and then swaps the instances? So at some point -- when the copy is constructed -- the memory usage is doubled? If that is the case, is there a more memory-friendly method of shrink-to-fit? (In my case the vector is really big and I cannot afford to have both the original plus a copy of it in memory at any time.)
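
    A small C++ sketch of the options, for reference: since C++11, vector::shrink_to_fit() is a non-binding request to drop the excess capacity, and because elements can be moved rather than copied, the element payloads need not be duplicated during the reallocation (the vector's own buffer is still briefly held twice):

        #include <iostream>
        #include <string>
        #include <vector>

        int main() {
            std::vector<std::string> v;
            v.reserve(1000);
            for (int i = 0; i < 10; ++i) v.push_back("element " + std::to_string(i));

            std::cout << "capacity before: " << v.capacity() << "\n";

            // C++11: non-binding request to release unused capacity. A typical
            // implementation allocates a size()-element buffer and moves the
            // contents over, so the string payloads are not deep-copied.
            v.shrink_to_fit();
            std::cout << "capacity after:  " << v.capacity() << "\n";

            // Pre-C++11 equivalent (the swap trick the question quotes), which
            // copies the elements instead:
            std::vector<std::string>(v.begin(), v.end()).swap(v);
            return 0;
        }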

    Read the article

  • Why doesn't my graphics card software support 1280*1024?

    - by Allwar
    Hi, I have an external monitor which is a 20" 1280*1024. In Windows 7 it works fine at that resolution, but in Ubuntu it can't. Example: in Windows I connect it and activate it, done. In Ubuntu I connect it and the only resolutions available are the ones my laptop screen supports, 12" 1366*768. My laptop is an Asus 1201N. If I force it to use 1280*1024, both screens crash and I have to force a reboot. When I force it, I only force the external monitor; the laptop is already at its maximum of 1366*768. I connect it through VGA. ((The graphics card supports 1280*1024 in Windows 7, #Fail))
    alvar@alvars-laptop:~$ disper -l
    display DFP-0: HSD121PHW1 resolutions: 320x175, 320x200, 360x200, 320x240, 400x300, 416x312, 512x384, 640x350, 576x432, 640x400, 680x384, 720x400, 640x480, 720x450, 640x512, 700x525, 800x512, 840x525, 800x600, 960x540, 832x624, 1024x768, 1366x768
    display CRT-0: CRT-0 resolutions: 320x240, 400x300, 512x384, 680x384, 640x480, 800x600, 1024x768, 1152x864, 1360x768

    Read the article
