Search Results

Search found 260258 results on 10411 pages for 'stack size'.

  • A very basic auto-expanding list/array

    - by MainMa
    Hi, I have a method which returns an array of fixed-type objects (let's say MyObject). The method creates a new empty Stack<MyObject>. Then it does some work and pushes some number of MyObjects onto the Stack. Finally, it returns Stack.ToArray(). It does not change already-added items or their properties, nor remove them. The number of elements to add is not known in advance, and growing the collection is where the performance cost lies. There is no need to sort/order the elements. Is Stack the best thing to use, or should I switch to Collection or List to ensure better performance and/or lower memory cost?
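
    A minimal C# sketch of the List<T> alternative (the MyObject type and element count stand in for the question's own): List<T> grows by doubling its backing array, so Add has the same amortized O(1) cost as Stack<T>.Push, and ToArray() makes one final copy either way; List<T> simply hands the elements back in insertion order instead of reversed.

        using System;
        using System.Collections.Generic;

        class MyObject { }

        static class Demo
        {
            // Collects an unknown-in-advance number of items and returns
            // them as an array, exactly like the Stack<T>-based method.
            static MyObject[] Collect(int count)
            {
                var results = new List<MyObject>();
                for (int i = 0; i < count; i++)
                    results.Add(new MyObject());   // amortized O(1) append
                return results.ToArray();          // single final copy
            }

            static void Main()
            {
                Console.WriteLine(Collect(1000).Length); // 1000
            }
        }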

    Read the article

  • Stack trace in website project, when debug = false

    - by chandmk
    We have a website project. We are logging unhandled exceptions via an AppDomain-level exception handler. When we set debug="true" in web.config, the exception log shows the offending line numbers in the stack trace. But when we set debug="false" in web.config, the log does not display the line numbers. We are not in a position to convert the website project into a web application project at this time: it's a legacy application and almost all the code is in the aspx pages. We also need to leave the project in 'updatable' mode, i.e. we can't use the pre-compile option. We are generating PDB files. Is there any way to tell this kind of website project to generate the PDB files and show the line numbers in the stack trace?

    Read the article

  • OutOfMemoryException, stack size is huge, large number of threads

    - by Captain Comic
    Hello, I was profiling my .NET Windows service, trying to track down an OutOfMemoryException, and discovered that my stack usage is huge and growing because the number of threads keeps growing. Each thread gets 1024 KB on a Windows x64 machine, so with 754 threads the thread stacks alone account for roughly 754 MB. The problem is that I don't know where these threads come from: initially my app has a very limited number of threads, and they keep growing with time. I have two suspicions: either these threads are created by WCF or by database connections. My application uses both WCF and datasets. Also, when I profile the app in ANTS, the trace shows a large number of System.ServiceModel.Channels.ClientReliableDuplexSessionChannel objects, and this number is increasing with time; I can see thousands of these objects created. So what I want to know is who is creating the threads (tools to discover this, profilers) and whether it is WCF that is creating them.
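
    As a first, crude diagnostic (a sketch of one approach, not tied to any particular profiler), the process's OS-level thread count can be logged over time and correlated with WCF or database activity:

        using System;
        using System.Diagnostics;
        using System.Threading;

        class ThreadCountMonitor
        {
            static void Main()
            {
                // Log the OS thread count every 30 seconds; a steady climb
                // that tracks WCF traffic (or dataset usage) points at the
                // culprit.
                while (true)
                {
                    int count = Process.GetCurrentProcess().Threads.Count;
                    Console.WriteLine("{0:T} threads: {1}", DateTime.Now, count);
                    Thread.Sleep(TimeSpan.FromSeconds(30));
                }
            }
        }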

    Read the article

  • Changing the background colour of lines in the stack

    - by Mongus Pong
    I have just changed the colour scheme of my Visual Studio 2008 environment to have a dark background with light text, which is so much easier on the eyes. The only problem is the lines that are on the call stack, the ones referred to in the thread 'In Visual Studio some lines of code have a light grey background while debugging'. These lines get a bright grey background, which against my light text means I cannot read the text at all. I have been through every single colour in Tools - Options - Fonts and Colours and cannot find one that matches. How can I change the background for lines on the current call stack?

    Read the article

  • CSS font-size increment - proportional?

    - by George
    Hello. I have several elements with already-set font sizes, like:

        <div style="font-size: 10px"> some text </div>
        <div style="font-size: 20px"> some text </div>

    I want to increase each font size proportionally, e.g. to:

        <div style="font-size: 15px"> ...
        <div style="font-size: 30px"> ...

    Is that possible? A rule like div { font-size: whatever } simply overwrites the values.
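
    Pure CSS cannot scale an element's own inline font-size by a factor, so one workaround is script-based; a sketch in plain JavaScript, assuming the divs from the question are the ones to scale:

        // Scale every div's current font size by 1.5 (10px -> 15px,
        // 20px -> 30px). getComputedStyle resolves the size to pixels, so
        // this works whether the size came from a stylesheet or an inline
        // style attribute.
        document.querySelectorAll('div').forEach(function (div) {
          var current = parseFloat(window.getComputedStyle(div).fontSize);
          div.style.fontSize = (current * 1.5) + 'px';
        });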

    Read the article

  • How to implement three stacks using a single array

    - by buried-shopno
    Hi, I came across this problem on an interview website. The problem asks you to efficiently implement three stacks in a single array, such that no stack overflows until there is no space left anywhere in the array. For implementing 2 stacks in an array it's pretty obvious: the 1st stack grows from LEFT to RIGHT and the 2nd stack grows from RIGHT to LEFT; when the two top indexes cross, it signals an overflow. Thanks in advance for your insightful answers.
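
    For reference, a sketch in C of the two-stack layout described above; the three-stack generalization (the actual interview question) needs something cleverer, such as linked free lists inside the array:

        #include <assert.h>
        #include <stdio.h>

        #define N 16
        static int buf[N];
        static int top1 = -1; /* stack 1 grows left to right */
        static int top2 = N;  /* stack 2 grows right to left */

        /* A push fails only when the two tops meet, i.e. the array is full. */
        static int push1(int v) { if (top1 + 1 == top2) return 0; buf[++top1] = v; return 1; }
        static int push2(int v) { if (top2 - 1 == top1) return 0; buf[--top2] = v; return 1; }
        static int pop1(void)   { assert(top1 >= 0); return buf[top1--]; }
        static int pop2(void)   { assert(top2 < N);  return buf[top2++]; }

        int main(void) {
            push1(42);
            push2(7);
            printf("%d %d\n", pop1(), pop2()); /* prints: 42 7 */
            return 0;
        }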

    Read the article

  • Recursive function causing a stack overflow

    - by dbyrne
    I am trying to write a simple sieve function to calculate prime numbers in Clojure. I've seen this question about writing an efficient sieve function, but I am not at that point yet. Right now I am just trying to write a very simple (and slow) sieve. Here is what I have come up with:

        (defn sieve [potentials primes]
          (if-let [p (first potentials)]
            (recur (filter #(not= (mod % p) 0) potentials) (conj primes p))
            primes))

    For small ranges it works fine, but it causes a stack overflow for large ranges:

        user=> (sieve (range 2 30) [])
        [2 3 5 7 11 13 17 19 23 29]
        user=> (sieve (range 2 15000) [])
        java.lang.StackOverflowError (NO_SOURCE_FILE:0)

    I thought that by using recur this would be a non-stack-consuming looping construct? What am I missing?
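
    For reference, a sketch of one common fix: recur itself is fine, but filter is lazy, so each iteration wraps the previous unrealized sequence in another lazy seq, and realizing that tower of thousands of nested seqs at the end is what overflows the stack. Forcing each step with doall keeps the nesting flat:

        (defn sieve [potentials primes]
          (if-let [p (first potentials)]
            ;; doall realizes the filtered seq now, so no tower of pending
            ;; lazy seqs builds up across iterations
            (recur (doall (filter #(not= (mod % p) 0) potentials))
                   (conj primes p))
            primes))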

    Read the article

  • Manual stack backtrace on Windows mobile (SEH)

    - by caahab
    Following situation: I'm developing a Windows Mobile application using SDK 6. The target machine is a Nautiz X7. To improve error reporting I want to catch structured exceptions (SEH) and do a stack backtrace to store some information for analysis. So far I have the information where the exception was thrown (in the Windows core DLL), and I can backtrace the return addresses through the stack. But what I want to know is: which instruction in my code caused the exception? Does anyone know how to use the available exception and context information to get the appropriate function/instruction address? Unfortunately the Windows Mobile 6 SDK for Pocket PC does not support all the helper functions to do stack walks or minidumps.

    Read the article

  • Is this implementation truly tail-recursive?

    - by CFP
    Hello everyone! I've come up with the following code to compute, in a tail-recursive way, the result of an expression such as 3 4 * 1 + cos 8 * (aka 8*cos(1+(3*4))). The code is in OCaml. I'm using a list ref to emulate a stack.

        type token = Num of float | Fun of (float->float) | Op of (float->float->float);;

        let pop l = let top = (List.hd !l) in l := List.tl (!l); top;;
        let push x l = l := (x::!l);;
        let empty l = (!l = []);;

        let pile = ref [];;

        let eval data =
          let stack = ref data in
          let rec _eval cont =
            match (pop stack) with
            | Num(n) -> cont n
            | Fun(f) -> _eval (fun x -> cont (f x))
            | Op(op) -> _eval (fun x -> cont (op x (_eval (fun y -> y))))
          in _eval (fun x -> x)
        ;;

        eval [Fun(fun x -> x**2.); Op(fun x y -> x+.y); Num(1.); Num(3.)];;

    I've used continuations to ensure tail recursion, but since my stack implements some sort of a tree, and therefore provides quite a bad interface to what should be handled as a disjoint union type, the call to my function to evaluate the left branch with an identity continuation somehow irks a little. Yet it's working perfectly, but I have the feeling that something wrong must be happening in the _eval (fun y->y) bit, since it doesn't seem that this call can replace the previous one in the stack structure... Am I misunderstanding something here? I mean, I understand that with only the first call to _eval there wouldn't be any problem optimizing the calls, but here it seems to me that evaluating the _eval (fun y->y) call will require a new stack frame, and therefore will fill the stack, possibly leading to an overflow... Thanks!
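
    The suspicion is right: the inner _eval (fun y->y) is a non-tail call, so deeply nested operators can still consume stack. A sketch of a fully continuation-passing variant (assuming the token type and pop from the question) moves the second operand's evaluation into the continuation, making every call to _eval a tail call:

        let eval data =
          let stack = ref data in
          let rec _eval cont =
            match (pop stack) with
            | Num(n) -> cont n
            | Fun(f) -> _eval (fun x -> cont (f x))
            (* evaluate the second operand inside the continuation of the
               first, instead of with a nested non-tail call *)
            | Op(op) -> _eval (fun x -> _eval (fun y -> cont (op x y)))
          in _eval (fun x -> x)
        ;;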

    Read the article

  • Determine cluster size of file system in Python

    - by Philip Fourie
    I would like to calculate the "size on disk" of a file in Python. Therefore I would like to determine the cluster size of the file system where the file is stored. How do I determine the cluster size in Python? Alternatively, another built-in method that calculates the "size on disk" would also work. I looked at os.path.getsize, but it returns the file size in bytes, not taking the FS's block size into consideration. I am hoping that this can be done in an OS-independent way...
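
    A sketch for POSIX systems (Windows would need ctypes and GetDiskFreeSpaceW instead): os.statvfs reports the filesystem's allocation unit, and rounding the byte size up to a whole number of those units approximates the size on disk:

        import os

        def size_on_disk(path):
            # f_frsize is the fundamental (allocation) block size of the
            # filesystem holding 'path'; fall back to f_bsize if it is 0.
            st = os.statvfs(path)
            block = st.f_frsize or st.f_bsize
            size = os.path.getsize(path)
            # Round up to a whole number of allocation units.
            return ((size + block - 1) // block) * block

        print(size_on_disk("example.txt"))  # "example.txt" is a placeholder

    Where available, os.stat(path).st_blocks * 512 gives the blocks actually allocated, which also accounts for sparse files.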

    Read the article

  • R: How to get a stack trace from the snow package

    - by Keith
    How can I get a stack trace back from a snow node after an error occurs? I am using the snow package (version 0.3-3) on R 2.10.1, and I'm getting errors when I use parSapply that do not occur when I use sapply. snow is nice enough to give me the error message, but it would be much more useful to have the kind of stack trace you can get from traceback(). So far I have tried:

        options(showWarnCalls = T, showErrorCalls = T)
        setDefaultClusterOptions(outfile = "/dev/tty")

    and

        options(error = traceback)
        setDefaultClusterOptions(outfile = "/dev/tty")

    without luck. I'm currently just testing with a local cluster, i.e. makeSOCKcluster(c("localhost","localhost")), but I will eventually be using an MPI cluster. Thanks.

    Read the article

  • Save NSWindow Size on Resize & Close For User

    - by incarna
    I've noticed that applications on OS X seem to save the size you set them to; the next time you open one, it's typically in the same position and at the same size. I'm making an app, and I've noticed that after resizing, if I launch the application again it's just the size set in Xcode 4's IB, not the size I resized it to. Do I have to manually save the window size each time it changes, or is there an easier way to do this through IB? (My window does have a minimum size set, if that changes anything.)
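
    AppKit can persist the frame by itself once the window has an autosave name; a sketch in modern Swift ("MainWindow" is an arbitrary key, and the same setting is exposed in IB as the window's autosave name):

        import AppKit

        // Sketch: once a window has a frame autosave name, AppKit writes
        // the frame to user defaults on every move/resize and restores it
        // on the next launch.
        func configure(window: NSWindow) {
            _ = window.setFrameUsingName("MainWindow")     // restore saved frame, if any
            _ = window.setFrameAutosaveName("MainWindow")  // keep saving future changes
        }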

    Read the article

  • stack dump accessing malloc char array

    - by robUK
    Hello, gcc 4.4.3, C89. I have the following source code, and I am getting a stack dump on the printf:

        char **devices;
        devices = malloc(10 * sizeof(char*));

        strcpy(devices[0], "smxxxx1");
        printf("[ %s ]\n", devices[0]); /* stack dump trying to print */

    I am thinking that this should create a char* array like this:

        devices[0]
        devices[1]
        devices[2]
        devices[3]
        etc.

    and that in each element I can store my strings. Many thanks for any suggestions.
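
    A sketch of the missing step: the malloc only allocated the array of ten pointers, so each devices[i] is itself uninitialized, and strcpy(devices[0], ...) writes through a garbage pointer. Each slot needs its own buffer first:

        #include <stdio.h>
        #include <stdlib.h>
        #include <string.h>

        int main(void) {
            char **devices = malloc(10 * sizeof *devices);
            if (devices == NULL)
                return 1;

            /* Give the slot its own buffer before copying into it. */
            devices[0] = malloc(strlen("smxxxx1") + 1);
            if (devices[0] == NULL)
                return 1;
            strcpy(devices[0], "smxxxx1");

            printf("[ %s ]\n", devices[0]);

            free(devices[0]);
            free(devices);
            return 0;
        }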

    Read the article

  • Messing with the stack in assembly and C++

    - by user246100
    I want to do the following: I have a function that is not mine (it really doesn't matter here, just that I don't have control over it) and that I want to patch so that it calls a function of mine, preserving the argument list (jumping is not an option). What I'm trying to do is to restore the stack pointer to what it was before that function was called and then call mine (like going back and doing the same thing again, but with a different function). This doesn't work straight away because the stack gets messed up: I believe that when I do the call, it replaces the return address. So I added a step that preserves the return address by saving it in a global variable, and it works, but this is not OK because it won't survive recursion. Anyway, I'm a newbie in assembly, so that's why I'm here. Please don't tell me about ready-made software for this, because I want to do things my way. Of course, this code has to be compiler- and optimization-independent. My code (if it is bigger than what is acceptable, please tell me how to post it):

        // A function that is not mine, but to which I have access and want
        // to patch so that it calls a function of mine with its original
        // arguments
        void real(int a, int b, int c, int d)
        {
        }

        // A function that I want to be called, receiving the original
        // arguments
        void receiver(int a, int b, int c, int d)
        {
            printf("Arguments %d %d %d %d\n", a, b, c, d);
        }

        long helper;

        // A patch to apply to the "real" function, from which I will call
        // "receiver" with the same arguments that "real" received.
        __declspec( naked ) void patch()
        {
            _asm {
                // These first two instructions save the return address in a
                // global variable. If I don't save and restore it, the
                // program won't work correctly. I want to do this without
                // having to use a global variable.
                mov eax, [ebp+4]
                mov helper, eax
                push ebp
                mov ebp, esp
                // Make the stack as it was before the real function was
                // called
                add esp, 8
                // Call our receiver
                call receiver
                mov esp, ebp
                pop ebp
                // Restore the previously saved return address
                mov eax, helper
                mov [ebp+4], eax
                ret
            }
        }

        int _tmain(int argc, _TCHAR* argv[])
        {
            FlushInstructionCache(GetCurrentProcess(), &real, 5);
            DWORD oldProtection;
            VirtualProtect(&real, 5, PAGE_EXECUTE_READWRITE, &oldProtection);

            // Patch the real function to go to my patch
            ((unsigned char*)real)[0] = 0xE9;
            *((long*)((long)(real) + sizeof(unsigned char))) = (char*)patch - (char*)real - 5;

            // Call the real function. (I call it with inline assembly
            // because otherwise it seems to work as if it were unpatched;
            // that is strange but irrelevant here.)
            _asm {
                push 666
                push 1337
                push 69
                push 100
                call real
                add esp, 16
            }

            return 0;
        }

    Read the article

  • C++ Stack Overflow

    - by PhilMAN
    Here is some code:

        void main()
        {
            GameEngine ge("phil", "anotherguy");
            string response;
            do {
                ge.playGame();
                cout << endl << "Do you want to (r)eplay the same battle, (s)tart a new battle, or (q)uit? ";
                cin >> response;
            } while (response == "r" || response == "R" || response == "s" || response == "S");
        }

        GameEngine::GameEngine(string name1, string name2)
        {
            p1Name = name1;
            p2Name = name2;
        }

        void GameEngine::playGame()
        {
            cout << "PLAY GAME" << endl;
            Army p1, p2;
            Battlefield testField;
            RuleSet rs;
            int xSize = 13; // Number of rows
            int ySize = 13; // Number of columns
            loadData(p1, p2, testField, rs, xSize, ySize);
            ...
        }

        void GameEngine::loadData(Army& p1, Army& p2, Battlefield& testField, RuleSet& rs, int& xSize, int& ySize)
        {
            string terrain = BattlefieldUtils::pickTerrain();
            string armySplit[14]; // id index 1
            string ruleSplit[19]; // in index 7
            string armyP1, armyP2, ruleSet;
            Skill p1Skills[8];
            Skill p2Skills[8];
            CreatureStack p1Stacks[20];
            CreatureStack p2Stacks[20];
            ...
        }

        CreatureStack() { quantity = 0; isLive = false; id = -1; };
        Army() {};
        Battlefield() {};
        RuleSet() {};

    I have posted every line of code that executes before the program crashes. This code ran fine for a long time; I added some stuff that does not even execute until way after the code posted here, and bam, a stack overflow that occurs at the GameEngine::loadData() line CreatureStack p2Stacks[20]; will not go away. What am I doing wrong here? Is that all the stack can handle? I increased the stack size in Visual Studio and the error went away, but that slowed things down considerably, so I would rather get to the source of the issue and fix that.

    Read the article

  • Size of a class with 'this' pointer

    - by psvaibhav
    The size of a class with no data members is reported as 1 byte, even though there is an implicit 'this' pointer. Shouldn't the size be 4 bytes (on a 32-bit machine)? I came across articles which indicated that the 'this' pointer is not counted when calculating the size of the object, but I am unable to understand the reason for this. Also, if any member function is declared virtual, the size of the class is then reported as 4 bytes, which means the vptr is counted when calculating the size of the object. Why is the vptr considered, but the 'this' pointer ignored, when calculating the size of an object?
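
    A sketch illustrating the distinction: 'this' is not stored in the object at all (it is simply the object's address, passed to member functions as a hidden argument), whereas a vptr must live inside each object so the correct virtual table can be found at run time. The sizes below are typical, not guaranteed by the standard:

        #include <iostream>

        struct Empty { };                                  // no per-object storage
        struct WithVirtual { virtual ~WithVirtual() {} };  // hidden vptr per object

        int main() {
            std::cout << sizeof(Empty) << '\n';        // typically 1
            std::cout << sizeof(WithVirtual) << '\n';  // typically 4 (32-bit), 8 (64-bit)
            return 0;
        }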

    Read the article

  • Investigating on xVelocity (VertiPaq) column size

    - by Marco Russo (SQLBI)
    In January I published an article about how to optimize high cardinality columns in VertiPaq. In the meantime, VertiPaq has been rebranded to xVelocity: the official name is now "xVelocity in-memory analytics engine (VertiPaq)", but using xVelocity or VertiPaq when we talk about Analysis Services has the same meaning. In this post I'll show how to investigate the column sizes of an existing Tabular database so that you can find the most important columns to be optimized.

    A first approach can be looking in the DataDir of Analysis Services for the folder containing the database. Then look for the biggest files in all subfolders, and you will find the name of a file that contains the name of the most expensive column. However, this heuristic process is not very efficient. A better approach is using a DMV that provides the exact information. For example, by using the following query (open SSMS, open an MDX query on the database you are interested in, and execute it) you will see all database objects sorted by used size in descending order.

        SELECT * FROM $SYSTEM.DISCOVER_STORAGE_TABLE_COLUMN_SEGMENTS
        ORDER BY used_size DESC

    You can look at the first rows to understand which are the most expensive columns in your tabular model. The interesting data provided are:

    - TABLE_ID: the name of the object; it can also be a dictionary or an index
    - COLUMN_ID: the column the object belongs to; you may also see ID_TO_POS and POS_TO_ID, which refer to internal indexes
    - RECORDS_COUNT: the number of rows in the column
    - USED_SIZE: the memory used by the object

    By looking at the ratio between USED_SIZE and RECORDS_COUNT you can understand what you can do in order to optimize your tabular model. Your options are:

    - Remove the column. Yes, if it contains data you will never use in a query, simply remove the column from the tabular model.
    - Change granularity. If you are tracking time and you included milliseconds but seconds would be enough, round the data source column to the nearest second. If you have a floating point number but two decimals are good enough (i.e. the temperature), round the number to the nearest decimal that is relevant to you.
    - Split the column. Create two or more columns that have to be combined together in order to produce the original value. This technique is described in the VertiPaq optimization article.
    - Sort the table by that column. When you read the data source, you might consider sorting data by this column so that the compression will be more efficient. However, this technique works better on columns that don't have too many distinct values, and you will probably move the problem to another column. Sorting data starting from the lower-density columns (those with a few distinct values) and going to the higher-density columns (those with high cardinality) is the technique that provides the best compression ratio.

    After the optimization you should be able to reduce the used size and improve the count/size ratio you measured before. If you are interested in a longer discussion about internal storage in VertiPaq and want to understand why this approach can save you space (and time), you can attend my 24 Hours of PASS session "VertiPaq Under the Hood" on March 21 at 08:00 GMT.

    Read the article

  • Stack, data and address space limits on an Ubuntu server

    - by PaulDaviesC
    I am running an Ubuntu server which has around 5000 users who are allowed to SSH in to the system. In order to cap the memory used by any one process, I have capped the address space limit using limits.conf. So my question is: should I also be limiting data and stack? I feel that is not required, since I am capping address space. Are there any pitfalls if I do not cap the stack and data limits?
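
    For reference, a sketch of the limits.conf entries in question (the 512 MB figure is an arbitrary example, not taken from this server). Since both the data segment and the stack are carved out of the process's address space, the 'as' cap already bounds them:

        # /etc/security/limits.conf -- values in KB
        # <domain>  <type>  <item>  <value>
        *           hard    as      524288    # total address space: 512 MB
        # 'data' and 'stack' caps would only tighten limits inside the
        # same 512 MB envelope, e.g.:
        # *         hard    data    262144
        # *         hard    stack   16384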

    Read the article
