Search Results

The search found 22,500 results on 900 pages for 'memory issues'.

Page 300 of 900

  • OpenLayers FramedCloud Autosizing

    - by Kyle
    According to the documentation, I should be able to configure the size of my OpenLayers popup by passing an OpenLayers.Size object to the FramedCloud constructor:

        this.popup = new OpenLayers.Popup.FramedCloud(
            'featurePopup',
            this.options.feature.geometry.getBounds().getCenterLonLat(),
            new OpenLayers.Size(80, 60),
            this.template(this.formattedAttrs()),
            null,
            true,
            this.onPopupClose
        );

    Currently the rendered popup is autosized no matter what dimensions I pass in the constructor. I've tried manually setting the autosize attribute of the FramedCloud to false, as well as manually adjusting the CSS styling for the popup, without achieving the results I need. I checked and found some similar issues in the OpenLayers 2.11 issue list, but I haven't found a workaround. Any ideas?

    Read the article

  • How to write a DataTable to an Excel workbook on the client side

    - by senthilkumar
    Hi guys, I am currently working on a project in which we need to generate a report from a database. Since my server memory is too low, I get an 'Out of Memory' exception when I build the file on the server side. When I instead write directly to an Excel file by sending it with an Excel content-type header, I am not able to create multiple sheets, and my database table is huge (more than 65,536 rows). I have seen many solutions that use a third-party tool, but I can't use those in my project. If anyone has already worked on this, please give me some direction. I also tried using JavaScript, but for that I would need a DataGrid on the server side, which I am not allowed to use in this project.

    Read the article

  • jKey (JavaScript key shortcut plugin) Issue

    - by Oscar Godson
    A friend and I are writing a jQuery plugin that makes it easy for developers to add keyboard shortcuts, and we're close but no cigar. We're having issues with the key combos, specifically when the same selector is bound multiple times on a page. Try pressing alt+a: you'll see it works once, then gets all mangled up. Does anyone know how to fix it? It will be on GitHub once it's corrected, and I'd be happy to add a "thank you" link in the header, next to the copyright info, for whoever fixes this. :) It's nicely documented and all the code is here: http://jsbin.com/azaha4 So... anyone?

    Read the article

  • How to return a string literal from a function

    - by skydoor
    Hi, I am always confused about returning a string literal or a string from a function. I was told that there might be a memory leak, because you don't know when the memory will be freed. For example, in the code below, how should foo() be implemented so that the output of the program is "Hello World"?

        void foo ( ) // you can add parameters here.
        {
        }

        int main ()
        {
            char *c;
            foo ( );
            printf ("%s", c);
            return 0;
        }

    Also, if the return type of foo() is not void but char*, what should it return? (A sketch of both options follows this entry.)

    Read the article
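
    A minimal sketch of the two usual ways to make the program print "Hello World", assuming the only goal is to hand back a string literal; the names make_greeting and foo_out are illustrative, not from the question. A string literal has static storage duration, so returning a pointer to it leaks nothing and never dangles:

        #include <cstdio>

        // Option 1: return a pointer to the literal. String literals have static
        // storage duration, so the pointer stays valid for the whole program and
        // nothing ever has to be freed.
        const char *make_greeting()
        {
            return "Hello World";
        }

        // Option 2: keep the void-style signature from the question and fill in a
        // pointer supplied by the caller (hypothetical out-parameter variant).
        void foo_out(const char **out)
        {
            *out = "Hello World";
        }

        int main()
        {
            const char *c = make_greeting();
            std::printf("%s\n", c);

            const char *d = nullptr;
            foo_out(&d);
            std::printf("%s\n", d);
            return 0;
        }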

  • Type Declaration - Pointer Asterisk Position

    - by sahs
    Hello. In C++, the following means "allocate memory for an int pointer":

        int* number;

    So the asterisk is part of the variable's type; without it, the declaration would mean something else (that's why I usually don't separate the asterisk from the type name). Then why is the asterisk treated as something else, instead of being part of the type? For example, it would seem better if the following meant "allocate memory for two int pointers":

        int* number1, number2;

    Am I wrong? (A short illustration follows this entry.)

    Read the article
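
    A short illustration of the behaviour being asked about: in a comma-separated declaration the asterisk binds to each individual declarator, not to the type, so only the first name below is a pointer. This is standard C and C++ behaviour, shown here as a self-contained check:

        #include <type_traits>

        int main()
        {
            int* number1, number2;   // number1 is int*, number2 is a plain int

            static_assert(std::is_same<decltype(number1), int*>::value, "number1 is a pointer");
            static_assert(std::is_same<decltype(number2), int>::value,  "number2 is a plain int");

            // To get two pointers, the asterisk has to be repeated for each declarator:
            int *p1, *p2;

            (void)number1; (void)number2; (void)p1; (void)p2;   // silence unused-variable warnings
            return 0;
        }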

  • Thread management in the n-body code of the CUDA SDK

    - by xnov
    When I read the n-body code in the CUDA SDK, I went through some lines in the code and found that it is a little bit different from the paper in GPU Gems 3, "Fast N-Body Simulation with CUDA". My questions are: First, why is blockIdx.x still involved in loading memory from global to shared memory, as written in the following code?

        for (int tile = blockIdx.y; tile < numTiles + blockIdx.y; tile++)
        {
            sharedPos[threadIdx.x+blockDim.x*threadIdx.y] =
                multithreadBodies ?
                positions[WRAP(blockIdx.x + q * tile + threadIdx.y, gridDim.x) * p + threadIdx.x] : // this line
                positions[WRAP(blockIdx.x + tile, gridDim.x) * p + threadIdx.x];                    // this line

            __syncthreads();

            // This is the "tile_calculation" function from the GPUG3 article.
            acc = gravitation(bodyPos, acc);

            __syncthreads();
        }

    Isn't it supposed to look like this, according to the paper? I wonder why it isn't:

        sharedPos[threadIdx.x+blockDim.x*threadIdx.y] =
            multithreadBodies ?
            positions[WRAP(q * tile + threadIdx.y, gridDim.x) * p + threadIdx.x] :
            positions[WRAP(tile, gridDim.x) * p + threadIdx.x];

    Second, in the multiple-threads-per-body case, why is threadIdx.x still involved? Isn't it supposed to be a fixed value, or not involved at all, since the sum only depends on threadIdx.y?

        if (multithreadBodies)
        {
            SX_SUM(threadIdx.x, threadIdx.y).x = acc.x; // this line
            SX_SUM(threadIdx.x, threadIdx.y).y = acc.y; // this line
            SX_SUM(threadIdx.x, threadIdx.y).z = acc.z; // this line

            __syncthreads();

            // Save the result in global memory for the integration step
            if (threadIdx.y == 0)
            {
                for (int i = 1; i < blockDim.y; i++)
                {
                    acc.x += SX_SUM(threadIdx.x,i).x; // this line
                    acc.y += SX_SUM(threadIdx.x,i).y; // this line
                    acc.z += SX_SUM(threadIdx.x,i).z; // this line
                }
            }
        }

    Can anyone explain this to me? Is it some kind of optimization to make the code faster? (A small sketch of what the offset does follows this entry.)

    Read the article
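
    A small host-side C++ sketch of what the extra blockIdx.x offset does, under the usual reading of the SDK code (the "stagger the blocks so they read different tiles at the same time" motivation is the commonly cited one, not something stated in the question): WRAP(blockIdx.x + tile, gridDim.x) only rotates the order in which a block walks the tiles, so every block still loads exactly the same set of tiles and the summed result is unchanged. The macro below mirrors the shape of the SDK's WRAP; the grid size is a made-up value:

        #include <cstdio>
        #include <set>

        // Same shape as the SDK macro: wrap x back into [0, m), assuming x < 2*m.
        #define WRAP(x, m) (((x) < (m)) ? (x) : ((x) - (m)))

        int main()
        {
            const int gridDimX = 8;   // number of tiles / blocks (made-up value)

            for (int blockIdxX = 0; blockIdxX < gridDimX; ++blockIdxX) {
                std::set<int> visited;
                std::printf("block %d visits tiles:", blockIdxX);
                for (int tile = 0; tile < gridDimX; ++tile) {
                    int t = WRAP(blockIdxX + tile, gridDimX);   // offset, then wrap around
                    visited.insert(t);
                    std::printf(" %d", t);
                }
                // Every block visits all gridDimX tiles, just starting at a different one.
                std::printf("   (distinct tiles: %d)\n", (int)visited.size());
            }
            return 0;
        }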

  • Silverlight binding: user controls inside a DataGrid

    - by Eric
    I have a DataGrid in my Silverlight app that has a few columns. A couple of basic columns bind with no issues. One column has a UserControl in it, and the XAML is as follows:

        <data:DataGridTemplateColumn Header="" CanUserSort="True" Width="107">
            <data:DataGridTemplateColumn.CellTemplate>
                <DataTemplate>
                    <local:StaticPageEnlistment EnlistmentName="{Binding SiteName}" Width="400" Height="150"/>
                </DataTemplate>
            </data:DataGridTemplateColumn.CellTemplate>
        </data:DataGridTemplateColumn>

    I have a public string property called EnlistmentName that I have bound to the SiteName value. I use this same "{Binding SiteName}" in all my other columns with no issues, so why can't the user control accept the same binding string?

    Read the article

  • C++ conditional compilation

    - by Shaown
    I have the following code snippet:

        #ifdef DO_LOG
        #define log(p) record(p)
        #else
        #define log(p)
        #endif

        void record(char *data){
            .....
            .....
        }

    Now, if I call log("hello world") in my code and DO_LOG isn't defined, will the line be compiled? In other words, will it use up memory for the string "hello world"? P.S. There are a lot of record calls in the program and it is memory-sensitive, so is there any other way to conditionally compile so that it only depends on #define DO_LOG? Thanks in advance. (A sketch follows this entry.)

    Read the article
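
    A minimal sketch of what the preprocessor does with the snippet above, assuming nothing beyond the macro shown (the parameter type is changed to const char* only to keep the sketch valid C++): when DO_LOG is not defined, log("hello world") expands to an empty statement before compilation, so the string literal never reaches the compiler and takes up no space in the binary.

        #include <cstdio>

        // Toggle logging at compile time, e.g. with: g++ -DDO_LOG main.cpp
        #ifdef DO_LOG
        #define log(p) record(p)
        #else
        #define log(p)              // expands to nothing; the argument never reaches the compiler
        #endif

        void record(const char *data)
        {
            std::fprintf(stderr, "log: %s\n", data);
        }

        int main()
        {
            // Without -DDO_LOG this whole line preprocesses to just ";", so the
            // literal "hello world" is never emitted into the object file.
            log("hello world");
            return 0;
        }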

  • Read random lines from huge CSV file in Python

    - by jbssm
    I have a quite big CSV file (15 GB) and I need to read about 1 million random lines from it. As far as I can see (and have implemented), the csv utility in Python only allows iterating over the file sequentially. It is very memory-consuming to read the whole file into memory just to pick lines at random, and it is very time-consuming to go through the whole file discarding some values and choosing others. So, is there any way to choose a random line from the CSV file and read only that line? I tried the following without success:

        import csv

        with open('linear_e_LAN2A_F_0_435keV.csv') as file:
            reader = csv.reader(file)
            print reader[someRandomInteger]

    A sample of the CSV file:

        331.093,329.735
        251.188,249.994
        374.468,373.782
        295.643,295.159
        83.9058,0
        380.709,116.221
        352.238,351.891
        183.809,182.615
        257.277,201.302
        61.4598,40.7106

    Read the article

  • Are runtime-linked library globals shared among plugins loaded with dlopen?

    - by conejoroy
    I have a C++ program that links at runtime with, let's say, mylib.so. The same program then uses dlopen()/dlsym() to load a function from myplugin.so, a dynamic library that in turn depends on mylib.so. My question is: will the program AND the function in the plugin access the same globals defined in mylib.so, in the same memory area reserved for the process, or will each be given its own, unrelated copy? If the latter is the default behaviour, is it possible to change that? Thanks in advance =)! (A sketch of the setup follows this entry.)

    Read the article
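
    A minimal sketch of the setup being described, with hypothetical names (mylib_counter, plugin_touch) and file layout; the point it illustrates is that the dynamic linker normally loads a given shared object only once per process, so both the executable and the dlopen()ed plugin resolve the global to the same copy, subject to how the libraries were linked and to symbol visibility:

        // mylib.so (hypothetical) defines and exports:    int mylib_counter = 0;
        // myplugin.so (hypothetical), linked against mylib.so, exports:
        //     extern "C" void plugin_touch() { ++mylib_counter; }

        #include <dlfcn.h>
        #include <cstdio>

        extern int mylib_counter;   // resolved against mylib.so at link time

        int main()
        {
            ++mylib_counter;        // touch the global from the executable

            void *handle = dlopen("./myplugin.so", RTLD_NOW);
            if (!handle) {
                std::fprintf(stderr, "dlopen failed: %s\n", dlerror());
                return 1;
            }

            auto plugin_touch = reinterpret_cast<void (*)()>(dlsym(handle, "plugin_touch"));
            if (plugin_touch) {
                plugin_touch();     // touches the global from inside the plugin
            }

            // If this prints 2, both sides saw one shared copy of the global.
            std::printf("mylib_counter = %d\n", mylib_counter);

            dlclose(handle);
            return 0;
        }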

  • How to set a bean property before executing this f:event listener

    - by user
    How can I set a bean property from the JSF page before this f:event listener executes?

        <f:event type="preRenderComponent" listener="#{bean.method}"/>

    I tried the code below, but it does not set the value on the bean property:

        <f:event type="preRenderComponent" listener="#{bean.method}">
            <f:setPropertyActionListener target="#{bean.howMany}" value="2"/>
        </f:event>

    I am using JSF 2.1.6 with PrimeFaces 3.3. EDIT: Are there any issues with the code below? (This works! I just want to confirm whether there are any problems with it.)

        <f:event type="preRenderComponent" listener="#{bean.setHowMany(15)}"/>
        <f:event type="preRenderComponent" listener="#{bean.method}"/>

    Read the article

  • jQuery UI Slider Problem

    - by OneNerd
    I have been using jQuery UI sliders for about a week now without issues in my project, but I just hit a problem. I am adding three sliders to my page, and all three are added the exact same way, like this:

        $('#slider_id').slider({value: 100, 'slide': function(e, ui){ /* some code */ }});

    Two work properly. One does not work: it gives me a Firebug error, 'f is undefined', when I drag the slider handle. The only glaring difference I can see is that the one giving the error is inside a jQuery UI dialog(). Interestingly, when I place it outside of the dialog, it works! So I'm wondering if there are known issues with sliders inside dialogs, and/or if there are any workarounds. Thanks.

    Read the article

  • How can I position QDockWidgets as the screenshot shows, using code?

    - by Nathan
    I want a Qt window to come up with the following arrangement of dock widgets on the right. Qt allows you to provide an argument to the addDockWidget method of QMainWindow to specify the area (top, bottom, left or right), but apparently not how two QDockWidgets placed in the same area will be arranged. Here is the code that adds the dock widgets; it uses PyQt4, but it should be the same for Qt with C++:

        self.memUseGraph = mem_use_widget(self)
        self.memUseDock = QDockWidget("Memory Usage")
        self.memUseDock.setObjectName("Memory Usage")
        self.memUseDock.setWidget(self.memUseGraph)
        self.addDockWidget(Qt.DockWidgetArea(Qt.RightDockWidgetArea), self.memUseDock)

        self.diskUsageGraph = disk_usage_widget(self)
        self.diskUsageDock = QDockWidget("Disk Usage")
        self.diskUsageDock.setObjectName("Disk Usage")
        self.diskUsageDock.setWidget(self.diskUsageGraph)
        self.addDockWidget(Qt.DockWidgetArea(Qt.RightDockWidgetArea), self.diskUsageDock)

    When this code is used to add both of them to the right side, one ends up above the other, not like the screenshot I made. The way I made that shot was to drag them there with the mouse after starting the program, but I need the window to start that way. (A sketch follows this entry.)

    Read the article
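
    A hedged sketch of one way to force that starting arrangement, written in C++ since the question notes the code should be the same as in PyQt4; the dock contents are placeholders, and it assumes the screenshot shows the two docks side by side within the right area (use Qt::Vertical otherwise). QMainWindow::splitDockWidget() divides the space occupied by the first dock between the two docks in the given orientation:

        #include <QtGui>   // Qt 4 style module header, matching the PyQt4 era of the question

        int main(int argc, char *argv[])
        {
            QApplication app(argc, argv);
            QMainWindow window;
            window.setCentralWidget(new QTextEdit);   // placeholder central widget

            QDockWidget *memDock = new QDockWidget("Memory Usage", &window);
            memDock->setWidget(new QLabel("memory graph goes here"));    // placeholder content

            QDockWidget *diskDock = new QDockWidget("Disk Usage", &window);
            diskDock->setWidget(new QLabel("disk graph goes here"));     // placeholder content

            // Both docks go to the right area; by default they stack one above the other.
            window.addDockWidget(Qt::RightDockWidgetArea, memDock);
            window.addDockWidget(Qt::RightDockWidgetArea, diskDock);

            // Re-divide the space of the first dock between the two, side by side.
            window.splitDockWidget(memDock, diskDock, Qt::Horizontal);

            window.show();
            return app.exec();
        }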

  • How to manage maintenance/bug-fix branches in Subversion when third-party installers are involved?

    - by Mike Spross
    We have a suite of related products written in VB6, with some C# and VB.NET projects, and all of the source is kept in a single Subversion repository. We haven't been using branches in Subversion (although we do tag releases now); we simply do all development in trunk and create new releases when the trunk is stable enough. This causes no end of grief when we release a new version, issues are found with it, and we have already begun working on new features or major changes to the trunk. In the past, we would address this in one of two ways, depending on the severity of the issues and how stable we thought the trunk was:

        1. Hurry to stabilize the trunk, fix the issues, and then release a maintenance update based on the HEAD revision. This had the side effect of releases that fixed the reported bugs but introduced new issues, because of half-finished features or bug fixes that were sitting in trunk.
        2. Make customers wait until the next official release, which is usually a few months away.

    We want to change our policies to deal with this situation better. I was considering creating a "maintenance branch" in Subversion whenever I tag an official release. New development would then continue in trunk, and I could periodically merge specific fixes from trunk into the maintenance branch and create a maintenance release when enough fixes have accumulated, while we continue to work on the next major update in parallel. I know we could also keep a more stable trunk and create a branch for new updates instead, but keeping current development in trunk seems simpler to me.

    The major problem is that while we can easily branch the source code from a release tag and recompile it to get the binaries for that release, I'm not sure how to handle the setup and installer projects. We use QSetup to create all of our setup programs, and right now, when we need to modify a setup project, we just edit the project file in place (all the setup projects, and any dependencies that we don't compile ourselves, are stored on a separate server, and we make sure to always compile the setup projects on that machine only). However, since we may add or remove files from the setup as our code changes, there is no guarantee that today's setup projects will work with yesterday's source code.

    I was going to put all the QSetup projects in Subversion to deal with this, but I see some problems with that approach. I want the creation of setup programs to be as automated as possible; at the very least, I want a separate build machine where I can build the release that I want (grabbing the code from Subversion first), grab the setup project for that release from Subversion, recompile the setup, and then copy the setup to another place on the network for QA testing and eventual release to customers. However, when someone needs to change a setup project (to add a new dependency that trunk now requires, or to make other changes), there is a problem: if they treat it like a source file and check it out on their own machine to edit it, they won't be able to add files to the project unless they first copy the files they need to add to the build machine (so they are available to other developers) and then copy all the other dependencies from the build machine to their own machine, matching the folder structure exactly, because QSetup uses absolute paths for any files added to a setup project. This means installing a bunch of setup dependencies onto development machines, which seems messy (and which could destabilize the development environment if someone accidentally runs the setup project on their machine).

    Also, how do we manage third-party dependencies? For example, if the current maintenance branch used MSXML 3.0 and the trunk now requires MSXML 4.0, we can't go back and create a maintenance release if we have already replaced the MSXML library on the build machine with the latest version (assuming both versions have the same filename). The only solutions I can think of are to either put all the third-party dependencies in Subversion along with the source code, or to make sure we put different library versions in separate folders (i.e. C:\Setup\Dependencies\MSXML\v3.0 and C:\Setup\Dependencies\MSXML\v4.0). Is one way "better" or more common than the other? Are there any best practices for dealing with this situation? Basically, if we release v2.0 of our software, we want to be able to release v2.0.1, v2.0.2, and v2.0.3 while we work on v2.1, but the whole setup/installation project and setup-dependency issue is making this more complicated than the typical "just create a branch in Subversion and recompile as needed" answer.

    Read the article

  • Efficient alternatives to merge for larger data.frames in R

    - by Etienne Low-Décarie
    I am looking for an efficient (both in computer resources and in learning/implementation effort) method to merge two larger data frames (size: 1 million rows / a 300 KB RData file). "merge" in base R and "join" in plyr appear to use up all my memory, effectively crashing my system. For example, load the test data frame and try:

        test.merged <- merge(test, test)

    or

        test.merged <- join(test, test, type = "all")

    The following post provides a list of merge alternatives: How to join data frames in R (inner, outer, left, right)? The following allows object size inspection: https://heuristically.wordpress.com/2010/01/04/r-memory-usage-statistics-variable/ Data produced by anonym.

    Read the article

  • Side effects of reordering columns in PostgreSQL

    - by Summer
    I sometimes re-order the columns in my Postgres DB. Since Postgres can only add columns at the end of tables, I end up re-ordering by adding new columns at the end of the table, setting them equal to existing columns, and then dropping the original columns. My question is: what does PostgreSQL do with the memory that's freed by dropped columns? Does it automatically re-use the memory, so a single record consumes the same amount of space as it did beforehand? But that would require a re-write of the whole table, so to avoid that, does it just keep a bunch of blank space around in each record? Thanks! ~S

    Read the article

  • How does Flask load blueprints internally?

    - by Ignas B.
    I'm just interested in how Flask's blueprints get imported. After all the work Flask does, it still imports the Python module, and if I'm right, Python does two things when importing: it registers the module name in the namespace and then executes the module if needed. So if a Flask blueprint is fully initialized when it gets registered, the whole module ends up in memory, and if there are lots of blueprints to register, memory just gets wasted, because one request basically uses one blueprint. Not a big loss, but still... But if the blueprint is only registered in the namespace and initialized only when needed (when a real request reaches it), then it makes sense to register them all at once (which, as I understood, is the recommended way). I guess this is the case here. :) I just wanted to ask and understand it a bit more deeply.

    Read the article

  • Maximum executable size on the iPhone?

    - by Sirp
    Is there a maximum size for executables, or for executables plus shared objects, on the iPhone? I've been developing an app that has been crashing on startup or early in execution with SIGSYS. Removing code from the program has helped, though structuring the data so the code is simply not executed does not. This could be memory corruption of some kind; however, when I compile with -Os rather than -O2 or -O3, the size of my executable goes down from 5.15 MB to 3.60 MB and the application runs perfectly. I also use a bunch of libraries, of course. I'm wondering: is there a limit on the size of executable code on the iPhone? Or am I just 'getting lucky' and masking memory corruption when I use -Os?

    Read the article

  • What is the best approach to iterate over and "store" the files in a directory in C (Linux)?

    - by Andrei Ciobanu
    I have written a function that checks whether two files are duplicates or not. Its signature is:

        int check_dup_memmap(char *f1_name, char *f2_name)

    It returns:

        (-1) - if something went wrong;
        ( 0) - if the two files are similar;
        (+1) - if the two files are different.

    The next step is to write a function that iterates through all the files in a certain directory, applies the previous function, and reports every existing duplicate. Initially I thought of writing a function that generates a file with all the filenames in a certain directory and then reads that file again and again, comparing every two files. Here is that version of the function, which collects all the filenames in a certain directory:

        void *build_dir_tree(char *dirname, FILE *f)
        {
            DIR *cdir = NULL;
            struct dirent *ent = NULL;
            struct stat buf;
            if(f == NULL){
                fprintf(stderr, "NULL file submitted. [build_dir_tree].\n");
                exit(-1);
            }
            if(dirname == NULL){
                fprintf(stderr, "NULL dirname submitted. [build_dir_tree].\n");
                exit(-1);
            }
            if((cdir = opendir(dirname)) == NULL){
                char emsg[MFILE_LEN];
                sprintf(emsg, "Cannot open dir: %s [build_dir_tree]\t", dirname);
                perror(emsg);
            }
            chdir(dirname);
            while ((ent = readdir(cdir)) != NULL) {
                lstat(ent->d_name, &buf);
                if (S_ISDIR(buf.st_mode)) {
                    if (strcmp(".", ent->d_name) == 0 || strcmp("..", ent->d_name) == 0) {
                        continue;
                    }
                    build_dir_tree(ent->d_name, f);
                }
                else {
                    fprintf(f, "/%s/%s\n", util_get_cwd(), ent->d_name);
                }
            }
            chdir("..");
            closedir(cdir);
        }

    Still, I consider this approach a little inefficient, as I have to parse the file again and again. In your opinion, which of these approaches should I follow?

        1. Write a data structure and hold the files in it instead of writing them to a file? I think that, for a directory with a lot of files, the memory will become very fragmented.
        2. Hold all the filenames in an auto-expanding array, so that I can easily access every file by its index, because they will be in a contiguous memory location.
        3. Map the file into memory using mmap()? But mmap may fail, as the file gets too big.

    Any opinions on this? I want to choose the most efficient path and use as few resources as possible; this is the requirement of the program... (A sketch of the array approach follows this entry.)

    EDIT: Is there a way to get the number of files in a certain directory without iterating through it?

    Read the article
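
    A minimal sketch of option 2, the auto-expanding array of filenames, written here in C++ for brevity (std::vector plays the role that a realloc-grown char** array would play in plain C); the start directory is a placeholder, and check_dup_memmap() from the question would replace the printout:

        #include <dirent.h>
        #include <sys/stat.h>
        #include <cstdio>
        #include <string>
        #include <vector>

        // Recursively collect regular-file paths into one growing, contiguous array.
        static void collect_files(const std::string &dirname, std::vector<std::string> &out)
        {
            DIR *dir = opendir(dirname.c_str());
            if (dir == NULL) {
                std::perror(dirname.c_str());
                return;
            }
            struct dirent *ent;
            while ((ent = readdir(dir)) != NULL) {
                std::string name = ent->d_name;
                if (name == "." || name == "..")
                    continue;
                std::string path = dirname + "/" + name;
                struct stat sb;
                if (lstat(path.c_str(), &sb) != 0)
                    continue;
                if (S_ISDIR(sb.st_mode))
                    collect_files(path, out);   // recurse into subdirectories
                else if (S_ISREG(sb.st_mode))
                    out.push_back(path);        // amortized O(1) append, contiguous index access
            }
            closedir(dir);
        }

        int main()
        {
            std::vector<std::string> files;
            collect_files(".", files);          // "." is a placeholder start directory

            // Pairwise comparison by index; check_dup_memmap(f1, f2) would go here.
            for (size_t i = 0; i < files.size(); ++i)
                for (size_t j = i + 1; j < files.size(); ++j)
                    std::printf("compare %s <-> %s\n", files[i].c_str(), files[j].c_str());
            return 0;
        }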

  • Python dictionary with a constant value type

    - by s.kap
    Hi there, I bumped into a case where I need a big (= huge) Python dictionary, which turned out to be quite memory-consuming. However, since all of the values are of a single type (long), as are the keys, I figured I could use a Python (or numpy, it doesn't really matter) array for the values, and wrap the needed interface (in: x; out: d[x]) with an object that actually uses these arrays for the key and value storage. I can use an index-conversion object (mapping each input to an index 1..n, where n is the number of distinct values) and return array[index]. I can elaborate on some techniques for implementing such an indexing method with a reasonable memory requirement; it works, and even pretty well. However, I wonder whether such a data-structure object already exists (in Python, or wrapped for Python from C/C++), in any package (I checked collections, and did some Google searches). Any comment will be welcome, thanks.

    Read the article

  • Anyone Know a Great Sparse One Dimensional Array Library in Python?

    - by TheJacobTaylor
    I am working on an algorithm in Python that uses arrays heavily. The arrays are typically sparse and are read from and written to constantly. I am currently using relatively large native arrays; the performance is good, but the memory usage is high (as expected). I would like the array implementation not to waste space on values that are not used, and to allow an index offset other than zero. As an example, if my numbers start at 1,000,000, I would like to be able to index my array starting at 1,000,000 and not be required to waste memory on a million unused values. Array reads and writes need to be fast. Expanding into new territory can incur a small delay, but reads and writes should be O(1) if possible. Does anybody know of a library that can do this? Thanks!

    Read the article

  • C++: Why don't I need to check if references are invalid/null?

    - by jbu
    Hi all, reading http://www.cprogramming.com/tutorial/references.html, it says:

        In general, references should always be valid because you must always initialize a reference. This means that barring some bizarre circumstances (see below), you can be certain that using a reference is just like using a plain old non-reference variable. You don't need to check to make sure that a reference isn't pointing to NULL, and you won't get bitten by an uninitialized reference that you forgot to allocate memory for.

    My question is: how do I know that the object's memory hasn't been freed/deleted AFTER the reference was initialized? What it comes down to is that I can't take this advice on faith, and I need a better explanation. Can anyone shed some light? (A sketch follows this entry.) Thanks, jbu

    Read the article
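
    A short sketch of the gap the quoted tutorial glosses over; this is standard C++ semantics rather than anything specific to that page. A reference is guaranteed to be bound to something when it is initialized, but nothing stops the referred-to object from being destroyed afterwards, and from that point on using the reference is undefined behaviour:

        #include <iostream>

        int main()
        {
            int *p = new int(42);
            int &r = *p;             // r is valid here: it refers to the int that p owns

            std::cout << r << '\n';  // fine, prints 42

            delete p;                // the object r refers to is now gone...

            // std::cout << r;       // ...so this would be undefined behaviour (a dangling
                                     // reference); the compiler generally cannot warn about it.
            return 0;
        }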

  • Data structure for counting frequencies in a database-table-like format

    - by user373312
    I was wondering if there is a data structure optimized for counting frequencies over data stored in a database-table-like format. For example, the data comes in a (comma-)delimited format like this:

        col1, col2, col3
        x, a, green
        x, b, blue
        ...
        y, c, green

    Now I simply want to count the frequency of col1=x, or of col1=x and col3=green. I have been storing the data in a database table, but from my profiling and from empirical observation, the database connection is the bottleneck. I have tried in-memory database solutions too, and they work quite well; the only problems are the memory requirements and the quirky init/destroy calls. Also, I work mainly with Java (but have experience with .NET), and I was wondering whether there is any API for working with "tabular" data in a LINQ-like way in Java. Any help is appreciated.

    Read the article
