Search Results

Search found 12774 results on 511 pages for 'memory corruption'.


  • (Phaser) Preload Future States in Create?

    - by Brian
    I'm a first-time user of Phaser, trying to make a simple point-and-click game. I want to keep things very modular, so I define a list of levels (states) in a JSON file, and every level has its own JSON file describing the objects within it. However, when changing states I get a black flash while the assets for the next state load (this happens whether I iterate through the JSON list or define everything manually). From what I've read, all sprites should be loaded in the preload stage, but doing so is exactly what causes that tiny but noticeable black pause. I know one option is to simply load every asset at the start of the game, but that seems incredibly inefficient (wouldn't that fill up memory immensely?). I would rather load a state's assets from the "parent" state. However, in my quick test (which maybe I did wrong), it seems that game.load doesn't work properly when called in the create stage. What is the best approach to doing this?

    Read the article

  • What is the primary use of Vertex Buffer Objects?

    - by sensae
    From what I've read, it seems VBOs are purely a performance feature. I'm working on a very rudimentary learning project in lwjgl, and I'm trying to figure out which of the library's more advanced features I should be delving into and what they are for. My understanding is that VBOs let you keep vertices in VRAM even while they aren't currently being drawn in the scene. In my case I'm just drawing quads and performance probably isn't a concern at all, but I'm trying to piece together what's happening under the hood. If I'm drawing quads directly (immediate mode), am I sending them from CPU memory every frame? Also, if I'm not doing any visibility checks, does that mean I'm rendering absolutely everything in the "scene", regardless of whether it's in view? Are VBOs a way to store objects and only render what's needed?
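
    A minimal sketch of the mechanism being asked about, against LWJGL's static GL bindings (the library named in the question). It assumes a GL context is already current, and the class, vertex data, and handle names are illustrative. The point is that glBufferData uploads the vertices to GPU-side storage once; each frame then draws out of that buffer instead of resending vertices from CPU memory.

        import org.lwjgl.BufferUtils;
        import java.nio.FloatBuffer;
        import static org.lwjgl.opengl.GL11.*;
        import static org.lwjgl.opengl.GL15.*;

        public class QuadVbo {
            private int vbo;

            // One-time setup: copy the quad's corners into a buffer object in VRAM.
            public void upload() {
                float[] quad = { -0.5f, -0.5f,  0.5f, -0.5f,  0.5f, 0.5f,  -0.5f, 0.5f };
                FloatBuffer data = BufferUtils.createFloatBuffer(quad.length);
                data.put(quad).flip();

                vbo = glGenBuffers();                                // handle to GPU-side storage
                glBindBuffer(GL_ARRAY_BUFFER, vbo);
                glBufferData(GL_ARRAY_BUFFER, data, GL_STATIC_DRAW); // the upload happens here, once
            }

            // Every frame: draw from the VBO; no per-vertex CPU-to-GPU traffic.
            public void draw() {
                glBindBuffer(GL_ARRAY_BUFFER, vbo);
                glEnableClientState(GL_VERTEX_ARRAY);
                glVertexPointer(2, GL_FLOAT, 0, 0L);                 // read positions from the bound VBO
                glDrawArrays(GL_QUADS, 0, 4);
                glDisableClientState(GL_VERTEX_ARRAY);
            }
        }

    Note that a VBO only changes where the vertex data lives; it performs no visibility culling. Everything submitted with a draw call is still processed, so skipping off-screen objects is a separate concern (e.g. frustum culling).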

    Read the article

  • Limitations of high-level languages? [closed]

    - by user1705796
    My question may look a bit philosophical and nonsensical! But I need to know what kinds of instructions are not well suited to high-level languages, even C, or are rarely used in software development. For example, reading and writing the contents of CPU registers can be useful when debugging programs, and access to cache memory is required when developing an OS (maybe I am wrong on this point). Are these kinds of instructions available in languages like Java, Python, or C? I also have a second question: why don't all high-level languages have the same uniform syntax, or at least the same standard library interface names? In Python there are the and and or operators, which are almost the same as && and ||. I think Python was developed after C, yet indentation is compulsory in Python. Why does Python not use braces {}? I already know this question is going to be heavily down-voted.

    Read the article

  • How does a linked library combined with the main executable program file interact with a kernel?

    - by I ask Questions For a Reason
    I attempted to find an answer to this, and I did to some degree, but definitely not anywhere near well enough to form a respectable, sensible, and clear picture. On Windows, Mac, Linux, or nearly any modern OS for desktop PCs, laptops, even tablets and smartphones, there is virtual memory. On Windows, at least, I know that compiling an executable, even a simple C "Hello World" that prints to a terminal, links it against the standard library, various Windows system DLLs, and the like. But how does the linked combination of the executable and all these libraries actually interact with, say, a device driver or anything else at the lower levels?

    Read the article

  • Rendering trillions of "atoms" instead of polygons?

    - by Baring
    I just saw a video about what its publishers call the "next major step after the invention of 3D". According to the person speaking in it, they use a huge number of atoms grouped into clouds, instead of polygons, to reach a level of unlimited detail. They tried their best to make the video understandable to people with no knowledge of rendering techniques, and so (or perhaps for other reasons) left out all details of how their engine works. The level of detail in the video does look quite impressive to me. How is it possible to render scenes using custom atoms instead of polygons on current hardware, in terms of speed and memory? If this is real, why has nobody else even attempted it so far? As an OpenGL developer, I'm really baffled by this and would like to hear what experts have to say. So that this doesn't look like a cheap advert, I will include the link to the video only if requested, in the comments section.

    Read the article

  • Implementing cache system in Java Web Application

    - by TGM
    I have worked with JPA (the EclipseLink implementation) and Hibernate. As I understand it, both have great caching systems. I am interested in caching in a web application, and in order to understand the process better I'm trying to implement something of my own. Sadly, I cannot find any in-depth documentation on the subject. I'm interested in things like high scalability, sharing memory across different machines, and other important theoretical matters. Is there any tutorial or open project I could check out? Thank you! LE: I want to cache DB information in POJOs, just like JPA or EclipseLink does.
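
    Since the question is about implementing the idea by hand: a minimal sketch of a read-through cache for POJOs, keyed by entity id. All names here are illustrative, not from JPA or any framework; the loader stands in for whatever DAO or JDBC lookup fetches the object on a miss. Production caches such as EclipseLink's add eviction, invalidation, and cross-node coherence on top of this core.

        import java.util.Map;
        import java.util.concurrent.ConcurrentHashMap;
        import java.util.function.Function;

        // Minimal read-through cache: on a miss, load the POJO and remember it.
        class EntityCache<K, V> {
            private final Map<K, V> store = new ConcurrentHashMap<>();
            private final Function<K, V> loader;  // e.g. a DAO or JDBC lookup

            EntityCache(Function<K, V> loader) {
                this.loader = loader;
            }

            V get(K id) {
                // computeIfAbsent makes the load-and-store step atomic per key
                return store.computeIfAbsent(id, loader);
            }

            void evict(K id) {
                // call on update/delete so stale POJOs don't linger
                store.remove(id);
            }
        }

    The parts the question highlights, high scalability and sharing memory across machines, are exactly what this sketch does not cover; distributed caches solve them with replication or partitioning protocols, which is why open projects like Ehcache or Infinispan are worth reading for the theory.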

    Read the article

  • What makes Java so suitable for writing NoSQL databases?

    - by good_computer
    Looking at this page that aggregates the current NoSQL landscape, one can see that the majority of these projects are written in Java. Databases are complex systems software dealing with the file system, so C/C++ would seem a better choice than Java (that's my thinking, which might be flawed). Secondly, databases deal with transferring large amounts of data from disk to RAM, which they call a working set. The JVM takes a non-trivial amount of RAM for its own purposes, so it would seem more efficient to use a platform that leaves lots of memory for data instead of hogging it for its own operations. The major relational databases are ALL written in C/C++:

        MySQL: C, C++
        Oracle: Assembler, C, C++
        SQL Server: C++
        PostgreSQL: C
        SQLite: C

    So what makes Java so popular in the NoSQL world?

    Read the article

  • Multiple copies of JBoss acting as one? [migrated]

    - by scphantm
    I have a few ideas about how to solve the problem, but first a question about JBoss clustering. Please keep in mind that these applications were written very poorly (that is why they require so much memory), and there is nothing I can do about that right now. I have run clustered applications on JBoss where each application was small enough to run on one box, meaning one machine could handle the load. But the current problem is that I have been asked to run several systems in the same environment. Our machines are virtual and, due to limited hardware, restricted to 8 GB of RAM, which gives JBoss about 7 GB to itself. Unfortunately, that isn't enough to run the group of applications: I'm constantly getting heap errors and crashes. If I cluster two or three JBoss instances together, can I run applications that consume more resources than a single box can handle?

    Read the article

  • Is there a single book that covers the breadth of computer science fundamentals? [closed]

    - by superFoo
    When I did my undergraduate studies in electrical engineering, there was a book called "Basic Electricity" by Van Valkenburgh. If you read that book cover to cover, your fundamentals in electrical engineering would be bulletproof. I would recommend it to all my juniors, and I absolutely loved it. Is there such a book in the field of computer science? I am not so concerned about algorithms; I am looking for something that tells me how everything works beneath the covers: TCP/IP, memory management, DNS, routing, SSL, buffers, queuing, etc.

    Read the article

  • Database Insider - November 2012 issue

    - by Javier Puerta
    The November issue of the Database Insider newsletter is now available. (Full newsletter here)

    Mark Hurd: Oracle Database Wrap-up from Oracle OpenWorld 2012. Oracle executives kicked off Oracle OpenWorld 2012, discussing the needs of customers, the brand-new Oracle Exadata Database Machine X3, and the latest Oracle Database innovations. (Read More)

    Webcast: Introduction to Oracle Exadata Database Machine X3. Oracle's next-generation database machine, Oracle Exadata X3, combines massive memory and low-cost disks to deliver the highest performance at the lowest cost. Available in an eighth-rack configuration, it allows you to start small and grow.

    Webcast: SAP Applications Run Better on Oracle Exadata. Find out why a growing number of SAP application customers are turning to Oracle Exadata Database Machine for better performance, better productivity, and big savings.

    Read the article

  • How do you manage all of the information you have learned and found? [closed]

    - by B Seven
    Possible Duplicate: How do you manage your knowledge base? What do you use for personal note-taking to keep track of everything you learn? Are you always Googling or searching Stack Overflow to answer the same questions, or searching for and copying and pasting existing code? I feel like I have a poor memory, especially for things like syntax. Are there any knowledge management systems that would work well for a programming language or operating system? It would be great if there were a way to save everything I learn in an easy-to-search system. Does such a thing exist? Ideally you could search by question (How do I sort an array? How do I set a static IP?) or by tag (sort, array, enumeration, iterator, IP). I know it would be easy to develop my own system, but I thought it would be great to learn what works for other people.

    Read the article

  • Force Gnome 3 to work

    - by nkg5
    I have Ubuntu 12.10 dual-booting with Windows XP. My PC is an AMD Sempron 2800+ at 1.6 GHz with 512 MB of RAM and an ATI Radeon 9250 graphics card with 128 MB of memory. Since Unity runs slowly and I don't like its look, I installed gnome-shell; but as you'd expect, Gnome 3 won't normally run on this hardware, while Gnome Classic without effects works great. The thing is, when I shut Windows down uncleanly (by holding the power button or pressing the restart button), my resolution in Linux changes to 1024x768, and I can only restore it by turning off the PC and cutting its power source. But that is not the problem. The odd part is that after one such restart, Gnome 3 does run, and it runs better than Unity. My question is: can I somehow force Gnome 3 to work always, and disable some of its effects so it runs better?

    Read the article

  • Application window as polygon texture?

    - by nekome
    Is there a way, or method, to have an application rendered as a texture on some polygon in a 3D scene, while keeping full interactivity with it? I'm talking about the Windows platform, and probably OpenGL, though I guess it doesn't matter whether it's OGL or DX. For example: I run Calculator using WINAPI functions (preferably hidden, not showing on the desktop) and I want to render it on a polygon inside a 3D scene, but still be able to type or click buttons and have it respond. My idea is to have WINAPI take a screenshot (or render to memory, if possible) of the Calculator window and pass it to OpenGL as a texture each frame (I'm experimenting with SDL through pygame); for mouse interactivity, I'd use coordinate translation to calculate where on the application window the action would land, then use WINAPI functions such as SetCursorPos to set the cursor, and others to simulate a click or something else. I haven't found any tutorials on a topic similar to this one. Am I on the right track? Is there a better way to do this, if it's possible at all?
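
    A sketch of the capture half of this pattern, using Java's AWT Robot rather than the WINAPI route described above (a different toolkit, but the same grab-then-upload idea). The window bounds here are hypothetical; the resulting pixel array is what a renderer would copy into a texture each frame.

        import java.awt.AWTException;
        import java.awt.Rectangle;
        import java.awt.Robot;
        import java.awt.image.BufferedImage;

        public class WindowCapture {
            public static void main(String[] args) throws AWTException {
                Robot robot = new Robot();
                // Hypothetical bounds of the application window to mirror.
                Rectangle windowArea = new Rectangle(0, 0, 320, 240);

                // Grab the screen region; do this once per rendered frame.
                BufferedImage frame = robot.createScreenCapture(windowArea);
                int[] pixels = frame.getRGB(0, 0, frame.getWidth(), frame.getHeight(),
                                            null, 0, frame.getWidth());

                // 'pixels' (packed ARGB) is the data you would upload to a
                // texture, e.g. via glTexSubImage2D in the renderer of choice.
                System.out.println("captured " + pixels.length + " pixels");
            }
        }

    Capturing from the screen does mean the source window must be visible somewhere, which is one reason the hidden-window variant usually needs an API that renders a window into memory instead.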

    Read the article

  • IPC (inter-process communication) server to connect different runtimes

    - by wvxvw
    A couple of days ago I spoke to someone who mentioned a server product whose main goal is to provide an inter-runtime layer that allows sharing in-memory objects between different languages. The person also described the process of creating such a layer as defining a number of structs in a special way, so that the different languages accessing the objects would have the same idea of what the data they access looks like. Unfortunately, I forgot what the product is called, and Google hasn't helped me find it so far. Would you happen to know what he was talking about?

    Read the article

  • What is the easiest and fastest way to display an SDL_Surface in a window with SDL2?

    - by Semmu
    I would like to have an SDL_Surface representing the contents of the window, just like in the old days with SDL 1.2. What is the best and fastest way to do this in SDL2? What I've found is that I need an SDL_Window, an SDL_Renderer for that window, an SDL_Texture to render, and an SDL_Surface to create the texture from. That seems a bit much to me, since I just want to display a single image on the screen; not to mention the performance impact. On my machine (a Lenovo Y510p laptop) this whole procedure takes 9 ms, without any memory allocation, using only pre-allocated variables and a totally black SDL_Surface. Is there a way I could speed things up?

    Read the article

  • How to remove items from an ArrayList without shrinking the list [migrated]

    - by user73710
    I have a case where I am using an ArrayList to keep a list of items that are keyed by their position in the list. Other objects reference items by their position. If I delete one of the items, I don't want the list to shrink, because that would invalidate all other references to items in the list (e.g. the item at position 2 would now be at position 1). My solution to this shrinking problem is to set the deleted position to null so the list keeps its size. I am curious whether this actually frees the memory formerly held by the item at that position. If there is a better way to accomplish this requirement, I would like to know about it.
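
    A small sketch of the trade-off, following the question's own approach: set(index, null) keeps every later index stable, and the removed object becomes eligible for garbage collection once nothing else references it, whereas remove(index) shifts the tail of the list down by one.

        import java.util.ArrayList;
        import java.util.Arrays;
        import java.util.List;

        public class StableIndexList {
            public static void main(String[] args) {
                List<String> items = new ArrayList<>(Arrays.asList("a", "b", "c", "d"));

                // Null out instead of removing: later indices stay valid, and the
                // "b" object can be garbage-collected once nothing references it.
                items.set(1, null);
                System.out.println(items);         // [a, null, c, d]  (size unchanged)
                System.out.println(items.get(2));  // still "c"

                // By contrast, items.remove(1) would shift "c" into position 1,
                // invalidating every stored position after the deleted slot.
            }
        }

    If the positions are really acting as permanent keys, a Map<Integer, Item> keyed by a stable id expresses that intent more directly, since it never renumbers and makes the holes explicit.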

    Read the article

  • Organize a game set

    - by jncunha
    I'm developing an endless runner and I'm not really sure how to build the set. My first approach was to make a BIG set, around 10240x3072 pixels, so that we have a nice portion of scenery, and then, after making 3 or 4 sets that flow into each other, to work on making their elements sequential and repeatable. However, this is getting really heavy for the iPad 1 (it runs well on the iPad 2 and the new iPad), even though I'm splitting the whole set into slices in Photoshop. For the implementation I'm using Cocos2D. Is there a better approach to something like this that is truly efficient with the iPad's memory? Thank you.

    Read the article

  • Large Sprite Performance

    - by Iansen
    I've got a large Sprite generated from a set of vertices (x,y coordinates) and a bitmap pattern (using moveTo, lineTo, beginBitmapFill, endFill, etc.). It's about 15000 pixels wide and between 1500 and 2000 pixels high depending on the level; it's the terrain for a 2D game. My question is: what is the best way to display and move it on the stage, performance-wise? Currently I'm just adding it to the stage as is. I get decent frame rate, memory, and CPU usage, but I want to optimize it for slower PCs. Any ideas? I've been reading a little about blitting, but I'm not sure how to implement it in my case. Thanks.

    Read the article

  • I'm new to C++. Can anyone help me with the following function? [migrated]

    - by Laian Alsabbagh
    In this code I'm using a function Distance (to calculate the distance between two nodes). I declare the function like this:

        #include <cmath>  // for sqrt

        int Distance(int x1, int y1, int x2, int y2) {  // the last parameter needs its type, too
            int distance_x = x1 - x2;
            int distance_y = y1 - y2;
            // note: assigning sqrt's result to an int truncates the fractional part
            int distance = sqrt((distance_x * distance_x) + (distance_y * distance_y));
            return distance;
        }

    and in main I have two for loops. What I am asking is: can I pass the values like this, Distance(i, j, i+1, j+1)?

        for (int i = 0; i < No_Max; i++) {
            for (int j = 0; j < No_Max; j++) {
                if (Distance(i, j, i + 1, j + 1) <= Radio_Range)  // the function
                    node_degree[i] = node_degree[i] + 1;
                cout << node_degree[i] << endl;
            }
        }

    Read the article

  • WPF TreeView Virtualization

    - by Carlo
    I'm trying to figure out the virtualization feature. I'm not sure whether I'm misunderstanding it or something else is going on, but I'm using the ANTS memory profiler to check the number of items in a virtualized TreeView, and it just keeps increasing. I have a TreeView with 1,001 items (1 root, 1,000 sub-items), and I always end up with 1,001 TreeViewItems, 1,001 ToggleButtons, and 1,001 TextBlocks. Isn't virtualization supposed to re-use the items? If so, why would I have 1,001 of each? Also, CleanUpVirtualizedItem never fires. Let me know if I'm understanding this wrong, and whether you have resources on how to use this; I've searched the internet but haven't found anything useful. EDIT: Even the memory used by the tree grows from approx. 4 MB to 12 MB when I expand and scroll through all the items. This is my code. XAML:

        <Window x:Class="RadTreeViewExpandedProblem.Window1"
                xmlns="http://schemas.microsoft.com/winfx/2006/xaml/presentation"
                xmlns:x="http://schemas.microsoft.com/winfx/2006/xaml"
                Title="Window1" Height="300" Width="300">
            <Grid>
                <TreeView x:Name="treeView"
                          VirtualizingStackPanel.IsVirtualizing="True"
                          VirtualizingStackPanel.CleanUpVirtualizedItem="TreeView_CleanUpVirtualizedItem">
                    <TreeView.ItemsPanel>
                        <ItemsPanelTemplate>
                            <VirtualizingStackPanel />
                        </ItemsPanelTemplate>
                    </TreeView.ItemsPanel>
                </TreeView>
            </Grid>
        </Window>

    C#:

        public partial class Window1 : Window
        {
            public Window1()
            {
                InitializeComponent();

                TreeViewItem rootItem = new TreeViewItem() { Header = "Item Level 0" };
                for (int i = 0; i < 1000; i++)
                {
                    TreeViewItem itemLevel1 = new TreeViewItem() { Header = "Item Level 1" };
                    itemLevel1.Items.Add(new TreeViewItem());
                    rootItem.Items.Add(itemLevel1);
                }
                treeView.Items.Add(rootItem);
            }

            private void TreeView_CleanUpVirtualizedItem(object sender, CleanUpVirtualizedItemEventArgs e)
            {
            }
        }

    Read the article

  • Xcode: Unable to open project... cannot be opened because the project file cannot be parsed...

    - by Chris Butler
    Hi everyone. I have been working for a while to create an iPhone app. Today, when my battery was low, I was working and constantly saving my source files; then the power went out. Now that my computer is plugged back in and getting good power, when I try to open my project file I get the error: "Unable to Open Project: Project ... cannot be opened because the project file cannot be parsed." Is there a known way to recover from this? I tried using an older project file, re-inserting it, and then compiling. It gives me a funky error, probably because it isn't finding all the files it wants. I really don't want to rebuild my project from scratch if possible. Thanks in advance. EDIT: OK, I did a diff between this and a slightly older project file that worked, and saw that there was some corruption in the file. After merging them (the good and newest parts) it is now working. Great points about SVN. I have one, but there has been some funkiness trying to sync Xcode with it. I'll definitely spend more time with it now... ;-) Thanks for everyone's comments and suggestions.

    Read the article

  • Reset push notification settings for app

    - by hanno
    I am developing an app with push notifications. To cover all possible kinds of user interaction, I'd like to test my app when a user declines to enable push notifications for it during the first start. The dialog (initiated by registerForRemoteNotificationTypes), however, appears only once per app. How do I reset the iPhone OS's memory of my app? Deleting the app and reinstalling doesn't help.

    Read the article

  • Android: OutOfMemoryError: bitmap size exceeds VM budget, with no reason I can see

    - by Meymann
    Hi. I am getting an OutOfMemory exception with a gallery of 600x800-pixel JPEGs.

    The environment: I've been using Gallery with JPG images of around 600x800 pixels. Since my content may be a bit more complex than just images, each view is a RelativeLayout that wraps an ImageView holding the JPG. To "speed up" the user experience, I have a simple four-slot cache that prefetches (in a looper) about one image to the left and one to the right of the displayed image, keeping them in a four-slot HashMap.

    The platform: an AVD with 256 MB RAM and a 128 MB heap size, with a 600x800 screen. It also happens on an Entourage Edge target, except that on the device it's harder to debug.

    The problem: I get the exception OutOfMemoryError: bitmap size exceeds VM budget, and it happens when fetching the fifth image. I have tried changing the size of my image cache, and the result is the same.

    The strange thing is that there should not be a memory problem. To make sure the heap limit is far above what I need, I define a dummy 8 MB array at startup and leave it unreferenced so it is immediately collectable. It is a member of the activity thread and is defined as follows:

        static {
            @SuppressWarnings("unused")
            byte dummy[] = new byte[8 * 1024 * 1024];
        }

    The result is that the heap size is nearly 11 MB and it's all free. Note that I added this trick after the crashes began; it makes OutOfMemory less frequent. Now, I am using DDMS. Just before the crash (it does not change much after the crash), DDMS shows:

        ID  Heap Size  Allocated  Free      %Used   #Objects
        1   11.195 MB  2.428 MB   8.767 MB  21.69%  47,156

    And the detail table shows:

        Type  Count  Total Size  Smallest  Largest   Median  Average
        free  1,536  8.739 MB    16 B      7.750 MB  24 B    5.825 KB

    The largest free block is 7.7 MB. And yet LogCat says:

        ERROR/dalvikvm-heap(1923): 925200-byte external allocation too large for this process.

    Judging by the relation between the median and the average, most of the available blocks are probably very small. However, there is a block large enough for the bitmap, at 7.7 MB; how come it is still not enough? Note: I recorded a heap trace, and looking at the amount of data allocated, it does not feel like more than 2 MB is allocated, which matches the free-memory report from DDMS. Could it be that I am experiencing something like heap fragmentation? How do I solve or work around the problem? Is the heap shared by all threads? Could it be that I am interpreting the DDMS readout the wrong way, and there really is no 900 KB block to allocate? If so, can anybody please tell me where I can see that? Thanks a lot, Meymann
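
    One standard mitigation for gallery-sized JPEGs, independent of the fragmentation question, is to decode them subsampled so each bitmap needs a much smaller contiguous block. A sketch using BitmapFactory's two-pass decode; the class name, method name, and size limits are illustrative:

        import android.graphics.Bitmap;
        import android.graphics.BitmapFactory;

        final class BitmapLoader {
            // Read the image bounds first, then reload at a power-of-two subsample
            // so the decoded bitmap fits comfortably within the heap budget.
            static Bitmap decodeScaled(String path, int maxWidth, int maxHeight) {
                BitmapFactory.Options opts = new BitmapFactory.Options();
                opts.inJustDecodeBounds = true;      // first pass: dimensions only, no pixels
                BitmapFactory.decodeFile(path, opts);

                int sample = 1;
                while (opts.outWidth / (sample * 2) >= maxWidth
                        && opts.outHeight / (sample * 2) >= maxHeight) {
                    sample *= 2;                     // halve resolution while it still fits
                }

                opts.inJustDecodeBounds = false;     // second pass: decode for real
                opts.inSampleSize = sample;
                return BitmapFactory.decodeFile(path, opts);
            }
        }

    Calling recycle() on bitmaps that leave the four-slot cache also helps on pre-3.0 devices, where bitmap pixels live in an external native allocation that is billed against, but not visible in, the Dalvik heap; that is consistent with DDMS showing plenty of free heap while the allocation still fails.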

    Read the article

  • Is there LINQ to NHibernate for stateless sessions?

    - by Jenea
    I was using a regular session for loading some items from the database via LINQ. The problem is that it caches the entities, and memory load increases unnecessarily. Is there a way to replace the session with a stateless session without introducing many changes in the client code?

    Read the article

  • Performing a simple stack overflow on Mac OS 10.6

    - by REALFREE
    I'm trying to learn about stack-based overflows and wrote some simple code to exploit the stack, but somehow it doesn't work at all, showing only "Abort trap" on my machine (Mac OS Leopard). I guess Mac OS treats overflows differently and won't let me overwrite memory from C code. For example:

        strcpy(buffer, input);  // let's say char buffer[6], but input is 7 bytes

    On a Linux machine this code successfully overwrites the adjacent stack memory, but on Mac OS it is prevented (Abort trap). Does anyone know how to perform a simple stack-based overflow on a Mac machine?

    Read the article
