Search Results

Search found 10366 results on 415 pages for 'const char pointer'.

Page 389/415

  • Are Objective-C initializers allowed to share the same name?

    - by NattKatt
    I'm running into an odd issue in Objective-C when I have two classes using initializers of the same name, but differently-typed arguments. For example, let's say I create classes A and B:

    A.h:

        #import <Cocoa/Cocoa.h>

        @interface A : NSObject {
        }
        - (id)initWithNum:(float)theNum;
        @end

    A.m:

        #import "A.h"

        @implementation A
        - (id)initWithNum:(float)theNum {
            self = [super init];
            if (self != nil) {
                NSLog(@"A: %f", theNum);
            }
            return self;
        }
        @end

    B.h:

        #import <Cocoa/Cocoa.h>

        @interface B : NSObject {
        }
        - (id)initWithNum:(int)theNum;
        @end

    B.m:

        #import "B.h"

        @implementation B
        - (id)initWithNum:(int)theNum {
            self = [super init];
            if (self != nil) {
                NSLog(@"B: %d", theNum);
            }
            return self;
        }
        @end

    main.m:

        #import <Foundation/Foundation.h>
        #import "A.h"
        #import "B.h"

        int main (int argc, const char * argv[]) {
            NSAutoreleasePool * pool = [[NSAutoreleasePool alloc] init];
            A *a = [[A alloc] initWithNum:20.0f];
            B *b = [[B alloc] initWithNum:10];
            [a release];
            [b release];
            [pool drain];
            return 0;
        }

    When I run this, I get the following output:

        2010-04-26 20:44:06.820 FnTest[14617:a0f] A: 20.000000
        2010-04-26 20:44:06.823 FnTest[14617:a0f] B: 1

    If I reverse the order of the imports so it imports B.h first, I get:

        2010-04-26 20:45:03.034 FnTest[14635:a0f] A: 0.000000
        2010-04-26 20:45:03.038 FnTest[14635:a0f] B: 10

    For some reason, it seems like it's using the argument type declared in whichever @interface gets included first for both classes. I did some stepping through the debugger and found that the isa pointer for both the a and b objects ends up the same. I also found out that if I no longer make the alloc and init calls inline, both initializations seem to work properly, e.g.:

        A *a = [A alloc];
        [a initWithNum:20.0f];

    If I use this convention when I create both a and b, I get the right output and the isa pointers seem to be different for each object. Am I doing something wrong? I would have thought multiple classes could have the same initializer names, but perhaps that is not the case.


  • casting doubles to integers in order to gain speed

    - by antirez
    Hello all. In Redis (http://code.google.com/p/redis) there are scores associated with elements, so that the elements can be taken sorted. These scores are doubles, even though many users actually sort by integers (for instance Unix times). When the database is saved, we need to write these doubles to disk. This is what is used currently:

        snprintf((char*)buf+1,sizeof(buf)-1,"%.17g",val);

    Additionally, infinity and not-a-number conditions are checked in order to also represent these in the final database file. Unfortunately, converting a double into its string representation is pretty slow. We do have a function in Redis that converts an integer into a string representation in a much faster way. So my idea was to check whether a double can be cast to an integer without loss of data, and if so, use the faster integer function to turn it into a string. For this to provide a good speedup, the test for integer "equivalence" must of course be fast. So I used a trick that is probably undefined behavior but worked very well in practice. Something like this:

        double x = ... some value ...
        if (x == (double)((long long)x))
            use_the_fast_integer_function((long long)x);
        else
            use_the_slow_snprintf(x);

    In my reasoning, the double cast above converts the double into a long long, and then back into a double. If the range fits, and there is no decimal part, the number will survive the conversion and be exactly the same as the initial number. As I wanted to make sure this would not break things on some system, I joined #c on freenode and got a lot of insults ;) So I'm now trying here. Is there a standard way to do what I'm trying to do without going outside ANSI C? Otherwise, is the above code supposed to work on all the POSIX systems that Redis currently targets — that is, archs where Linux / Mac OS X / *BSD / Solaris run nowadays? What I can add to make the code saner is an explicit check of the double's range before attempting the cast at all. Thank you for any help.
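
    For illustration, here is a minimal sketch of the range-guarded version alluded to above; the bound, helper name, and printed labels are assumptions for the example, not Redis code:

        #include <cstdio>

        // Only attempt the round-trip test when the double is already known
        // to lie inside the range of long long. Note that 2^63 itself is
        // exactly representable as a double while LLONG_MAX is not, hence
        // the strict comparisons against the power-of-two bound.
        static int double_fits_long_long(double x, long long *out) {
            const double kBound = 9223372036854775808.0;  /* 2^63 */
            if (!(x > -kBound && x < kBound))  /* also rejects NaN */
                return 0;
            long long ll = (long long)x;
            if ((double)ll != x)               /* non-zero fractional part */
                return 0;
            *out = ll;
            return 1;
        }

        int main() {
            double samples[] = { 42.0, 42.5, 1e300 };
            for (int i = 0; i < 3; ++i) {
                long long v;
                if (double_fits_long_long(samples[i], &v))
                    std::printf("%g -> fast integer path (%lld)\n", samples[i], v);
                else
                    std::printf("%g -> slow snprintf path\n", samples[i]);
            }
            return 0;
        }

    Whether this stays strictly inside ANSI C is exactly the question being asked; the guard merely ensures the cast is never attempted on a value whose conversion would be undefined.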


  • unable to trigger event in IE during cloning

    - by Abhimanyu
    Following is code that clones a set of divs together with their events (onclick). It works fine in Firefox, but in IE it does not fire the events associated with each div.

        <html>
        <head>
        <style type='text/css'>
        .firstdiv { border: 1px solid red; }
        </style>
        <script language="JavaScript">
        function show_tooltip(idx, condition, ev) {
            alert(idx + "==" + condition + "==" + ev);
        }
        function createCloneNode() {
            var cloneObj = document.getElementById("firstdiv").cloneNode(true);
            document.getElementById("maindiv").appendChild(cloneObj);
        }
        function init() {
            var mainDiv = document.createElement("div");
            mainDiv.id = 'maindiv';
            var firstDiv = document.createElement("div");
            firstDiv.id = 'firstdiv';
            firstDiv.className = 'firstdiv';
            for (var j = 0; j < 4; j++) {
                var summaryDiv = document.createElement("div");
                summaryDiv.id = "sDiv" + j;
                summaryDiv.className = 'summaryDiv';
                summaryDiv.onmouseover = function() { this.setAttribute("style", "text-decoration:underline;cursor:pointer;"); };
                summaryDiv.onmouseout = function() { this.setAttribute("style", "text-decoration:none;"); };
                summaryDiv.setAttribute("onclick", "show_tooltip(" + j + ",'view_month',event)");
                summaryDiv.innerHTML = 'Div' + j;
                firstDiv.appendChild(summaryDiv);
            }
            mainDiv.appendChild(firstDiv);
            var secondDiv = document.createElement("div");
            var linkDiv = document.createElement("div");
            linkDiv.innerHTML = 'create clone of above element';
            linkDiv.onclick = function() { createCloneNode(); };
            secondDiv.appendChild(linkDiv);
            mainDiv.appendChild(secondDiv);
            document.body.appendChild(mainDiv);
        }
        </script>
        </head>
        <body>
        <script language="JavaScript">
        init()
        </script>
        </body>
        </html>

    Can anybody tell me what the problem is in the above code, and correct me?


  • Force close only when an alphabetical string is put in the EditText

    - by Abdullah Al Mubarok
    So, I made a checker for whether an id is in the database or not. The id is a numerical string, though its type in the database is char(6). This is my code:

        public class input extends Activity {
            /** Called when the activity is first created. */
            @Override
            public void onCreate(Bundle savedInstanceState) {
                super.onCreate(savedInstanceState);
                setContentView(R.layout.input);
                final EditText edittext = (EditText)findViewById(R.id.editText1);
                Button button = (Button)findViewById(R.id.button1);
                button.setOnClickListener(new OnClickListener() {
                    @Override
                    public void onClick(View arg0) {
                        // TODO Auto-generated method stub
                        String nopel = edittext.getText().toString();
                        if (nopel.length() == 0) {
                            Toast.makeText(getApplicationContext(), "error", Toast.LENGTH_SHORT).show();
                        } else {
                            List<NameValuePair> pairs = new ArrayList<NameValuePair>();
                            pairs.add(new BasicNameValuePair("nopel", nopel));
                            JSON json_dp = new JSON();
                            JSONObject jobj_dp = json_dp.getJSON("http://10.0.2.2/KP/pdam/nopel.php", pairs);
                            try {
                                if (jobj_dp.getInt("row") == 0) {
                                    Toast.makeText(getApplicationContext(), "error", Toast.LENGTH_SHORT).show();
                                } else {
                                    String snopel = jobj_dp.getString("nopel");
                                    String snama = jobj_dp.getString("nama");
                                    String salamat = jobj_dp.getString("alamat");
                                    String sgolongan = jobj_dp.getString("golongan");
                                    Intent i = new Intent(input.this, list.class);
                                    i.putExtra("nopel", snopel);
                                    i.putExtra("nama", snama);
                                    i.putExtra("alamat", salamat);
                                    i.putExtra("golongan", sgolongan);
                                    startActivity(i);
                                }
                            } catch (JSONException e) {
                                // TODO Auto-generated catch block
                                e.printStackTrace();
                            }
                        }
                    }
                });
            }
        }

    The first check tests whether the input is empty, and so far it works. The second check tests whether the id is in the database, and that is the problem. When I try an id with a numerical value like "0001" or "02013" it's fine and runs, but as soon as I put in "abushd" it force closes. Does anyone know why I get this?


  • How to return a value from a MySQL stored proc?

    - by karthik
    I am using the stored proc below to generate the INSERT statements for a specified table. It builds without any errors. Now I want to return the result set in v_string as the output of the SP.

        DELIMITER $$

        DROP PROCEDURE IF EXISTS `demo`.`InsertGenerator` $$
        CREATE DEFINER=`root`@`localhost` PROCEDURE `InsertGenerator`()
        SWL_return:
        BEGIN
            -- SQLWAYS_EVAL# to retrieve column specific information
            -- SQLWAYS_EVAL# table
            DECLARE v_string NATIONAL VARCHAR(3000);     -- SQLWAYS_EVAL# first half
                                                         -- SQLWAYS_EVAL# tement
            DECLARE v_stringData NATIONAL VARCHAR(3000); -- SQLWAYS_EVAL# data
                                                         -- SQLWAYS_EVAL# statement
            DECLARE v_dataType NATIONAL VARCHAR(1000);   -- SQLWAYS_EVAL#
                                                         -- SQLWAYS_EVAL# columns
            DECLARE v_colName NATIONAL VARCHAR(50);
            DECLARE NO_DATA INT DEFAULT 0;
            DECLARE cursCol CURSOR FOR
                SELECT column_name, data_type FROM `columns`
                -- WHERE table_name = v_tableName;
                WHERE table_name = 'tbl_users';
            DECLARE CONTINUE HANDLER FOR SQLEXCEPTION
            BEGIN
                SET NO_DATA = -2;
            END;
            DECLARE CONTINUE HANDLER FOR NOT FOUND SET NO_DATA = -1;

            OPEN cursCol;
            SET v_string = CONCAT('INSERT ', v_tableName, '(');
            SET v_stringData = '';
            SET NO_DATA = 0;
            FETCH cursCol INTO v_colName, v_dataType;
            IF NO_DATA <> 0 then
                -- NOT SUPPORTED print CONCAT('Table ',@tableName, ' not found, processing skipped.')
                close cursCol;
                LEAVE SWL_return;
            end if;
            WHILE NO_DATA = 0 DO
                IF v_dataType in ('varchar','char','nchar','nvarchar') then
                    SET v_stringData = CONCAT(v_stringData,'SQLWAYS_EVAL# ll(',v_colName,'SQLWAYS_EVAL# ''+');
                ELSE
                    if v_dataType in ('text','ntext') then
                        -- SQLWAYS_EVAL#
                        -- SQLWAYS_EVAL# else
                        SET v_stringData = CONCAT(v_stringData,'SQLWAYS_EVAL# ll(cast(',v_colName,'SQLWAYS_EVAL# 00)),'''')+'''''',''+');
                    ELSE IF v_dataType = 'money' then
                        -- SQLWAYS_EVAL# doesn't get converted
                        -- SQLWAYS_EVAL# implicitly
                        SET v_stringData = CONCAT(v_stringData,'SQLWAYS_EVAL# y,''''''+ isnull(cast(',v_colName,'SQLWAYS_EVAL# 0)),''0.0000'')+''''''),''+');
                    ELSE IF v_dataType = 'datetime' then
                        SET v_stringData = CONCAT(v_stringData,'SQLWAYS_EVAL# time,''''''+ isnull(cast(',v_colName,'SQLWAYS_EVAL# 0)),''0'')+''''''),''+');
                    ELSE IF v_dataType = 'image' then
                        SET v_stringData = CONCAT(v_stringData,'SQLWAYS_EVAL# ll(cast(convert(varbinary,',v_colName,'SQLWAYS_EVAL# 6)),''0'')+'''''',''+');
                    ELSE
                        SET v_stringData = CONCAT(v_stringData,'SQLWAYS_EVAL# ll(cast(',v_colName,'SQLWAYS_EVAL# 0)),''0'')+'''''',''+');
                    end if;
                    end if;
                    end if;
                    end if;
                end if;
                SET v_string = CONCAT(v_string, v_colName, ',');
                SET NO_DATA = 0;
                FETCH cursCol INTO v_colName, v_dataType;
            END WHILE;
        END $$

        DELIMITER ;


  • B-trees, databases, sequential inputs, and speed.

    - by IanC
    I know from experience that B-trees have awful performance when data is added to them sequentially (regardless of the direction). However, when data is added randomly, the best performance is obtained. This is easy to demonstrate with the likes of an RB-tree: sequential writes cause a maximum number of tree rebalances to be performed. I know very few databases use binary trees, using n-order balanced trees instead, but I logically assume they suffer a similar fate to binary trees when it comes to sequential inputs. This sparked my curiosity.

    If this is so, then one could deduce that writing sequential IDs (such as with IDENTITY(1,1)) would cause multiple rebalances of the tree to occur. I have seen many posts argue against GUIDs because "these will cause random writes". I never use GUIDs, but it struck me that this "bad" point was in fact a good point. So I decided to test it. Here is my code:

        SET ANSI_NULLS ON
        GO
        SET QUOTED_IDENTIFIER ON
        GO
        CREATE TABLE [dbo].[T1](
            [ID] [int] NOT NULL
            CONSTRAINT [T1_1] PRIMARY KEY CLUSTERED ([ID] ASC)
        )
        GO
        CREATE TABLE [dbo].[T2](
            [ID] [uniqueidentifier] NOT NULL
            CONSTRAINT [T2_1] PRIMARY KEY CLUSTERED ([ID] ASC)
        )
        GO

        declare @i int, @t1 datetime, @t2 datetime, @t3 datetime, @c char(300)

        set @t1 = GETDATE()
        set @i = 1
        while @i < 2000 begin
            insert into T2 values (NEWID(), @c)
            set @i = @i + 1
        end

        set @t2 = GETDATE()
        WAITFOR delay '0:0:10'
        set @t3 = GETDATE()

        set @i = 1
        while @i < 2000 begin
            insert into T1 values (@i, @c)
            set @i = @i + 1
        end

        select DATEDIFF(ms, @t1, @t2) AS [Int], DATEDIFF(ms, @t3, getdate()) AS [GUID]

        drop table T1
        drop table T2

    Note that I am not subtracting any time for the creation of the GUID, nor for the considerably larger size of the row. The results on my machine were as follows:

        Int: 17,340 ms
        GUID: 6,746 ms

    This means that in this test, random inserts of 16 bytes were almost 3 times faster than sequential inserts of 4 bytes. Would anyone like to comment on this?

    P.S. I get that this isn't a question; it's an invitation to discussion, and that is relevant to learning optimum programming.


  • Casting to derived type problem in C++

    - by GONeale
    Hey there everyone. I am quite new to C++, but have worked with C# for years; however, that is not helping me here! :) My problem: I have an Actor class from which Ball and Peg both derive, in an Objective-C iPhone game I am working on. As I test for collision, I wish to set an instance of Ball and Peg appropriately, depending on the actual runtime type of actorA or actorB. My code that tests this is as follows:

        // Actors that collided
        Actor *actorA = (Actor*) bodyA->GetUserData();
        Actor *actorB = (Actor*) bodyB->GetUserData();

        Ball* ball;
        Peg* peg;

        if (static_cast<Ball*> (actorA)) { // true
            ball = static_cast<Ball*> (actorA);
        } else if (static_cast<Ball*> (actorB)) {
            ball = static_cast<Ball*> (actorB);
        }

        if (static_cast<Peg*> (actorA)) { // also true?!
            peg = static_cast<Peg*> (actorA);
        } else if (static_cast<Peg*> (actorB)) {
            peg = static_cast<Peg*> (actorB);
        }

        if (peg != NULL) {
            [peg hitByBall];
        }

    Once ball and peg are set, I proceed to run the hitByBall method (Objective-C). Where my problem really lies is in the casting procedure. Ball casts fine from actorA; the first if (static_cast<>) statement steps in and sets the ball pointer appropriately. The second step is to assign the appropriate type to peg. I know peg should be a Peg type, and I know beforehand it will be actorB. However, at runtime, detecting the types, I was surprised to find that it was actually the third if (static_cast<>) statement that stepped in and set it — the statement that checks whether actorA is a Peg, when we already know actorA is a Ball! Why did it step in here and not in the fourth if statement? The only thing I can assume is that casting works differently from C#: actorA, which is actually of type Ball, derives from Actor, and static_cast<Peg*>(actorA) finds that Peg derives from Actor too, so this is considered a valid cast? This could all come down to how I have misunderstood the use of static_cast. How can I achieve what I need? :) I'm really uneasy about what feels to me like a long-winded brute-casting attempt here, with a ton of ridiculous if statements. I'm sure there is a more elegant way to achieve a simple cast to Peg and cast to Ball depending on the actual type held in actorA and actorB. Hope someone out there can help! :) Thanks a lot.
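
    For reference, a minimal sketch of the distinction in play here: static_cast never consults the runtime type (a downcast from Actor* always "succeeds" and yields a non-null pointer), while dynamic_cast on a polymorphic base returns the null pointer on mismatch. The class bodies below are stand-ins for the poster's:

        #include <cstdio>

        struct Actor { virtual ~Actor() {} };  // a virtual member is required for dynamic_cast
        struct Ball : Actor {};
        struct Peg  : Actor { void hitByBall() { std::puts("peg was hit"); } };

        int main() {
            Ball ballObj;
            Actor* actorA = &ballObj;  // static type Actor*, dynamic type Ball

            // static_cast<Peg*>(actorA) would compile and be non-null,
            // which is why the third if statement "stepped in".
            Peg*  peg  = dynamic_cast<Peg*>(actorA);   // null: not a Peg
            Ball* ball = dynamic_cast<Ball*>(actorA);  // non-null: it is a Ball

            std::printf("as Peg: %p, as Ball: %p\n", (void*)peg, (void*)ball);
            if (peg) peg->hitByBall();  // skipped for this object
            return 0;
        }

    Note also that in the original snippet, ball and peg are read uninitialized whenever neither branch fires; initializing both to NULL is what makes the final != NULL test meaningful.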


  • why do I get a segmentation fault when my code runs

    - by gcc
    Why do I keep running into segmentation faults? I still have not found my mistake. Sometimes the program emits a segmentation fault, and sometimes the memory/stack limit is exceeded. I think you will (easily) understand what the code is for:

        void my_main() {
            char *tutar[50], tempc;
            int i = 0, temp, g = 0, a = 0, j = 0, d = 1, current = 0, command = 0, command2 = 0;

            do {
                tutar[i] = malloc(sizeof(int));
                scanf("%s", tutar[i]);
                temp = *tutar[i];
            } while (temp == 'x');

            i = 0;
            num_arrays = atoi(tutar[i]);
            i = 1;
            tempc = *tutar[i];
            if (tempc != 'x' && tempc != 'd' && tempc != 'a' && tempc != 'j') {
                current = 1;
                arrays[current] = malloc(sizeof(int));
                l_arrays[current] = calloc(1, sizeof(int));
                c_arrays[current] = calloc(1, sizeof(int));
            }
            i = 1;
            current = 1;
            while (1) {
                tempc = *tutar[i];
                if (tempc == 'x')
                    break;
                if (tempc == 'n') {
                    ++current;
                    arrays[current] = malloc(sizeof(int));
                    l_arrays[current] = calloc(1, sizeof(int));
                    c_arrays[current] = calloc(1, sizeof(int));
                    ++i;
                    continue;
                }
                if (tempc == 'd') {
                    ++i;
                    command = atoi(tutar[i]) - 1;
                    free(arrays[command]);
                    free(l_arrays[command]);
                    free(c_arrays[command]);
                    ++i;
                    continue;
                }
                if (tempc == 'j') {
                    ++i;
                    current = atoi(tutar[i]) - 1;
                    if (arrays[current] == NULL) {
                        arrays[current] = malloc(sizeof(int));
                        l_arrays[current] = calloc(1, sizeof(int));
                        c_arrays[current] = calloc(1, sizeof(int));
                        ++i;
                    } else {
                        a = l_arrays[current];
                        j = l_arrays[current];
                        ++i;
                    }
                    continue;
                }
            }
        }
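
    One likely culprit, offered as an aside: each token is read with scanf("%s") into a buffer obtained from malloc(sizeof(int)) — typically 4 bytes — so any longer token writes past the allocation, and the do/while never advances i, so every read lands in tutar[0]. A minimal sketch of a bounded read loop (buffer size and the stop condition are assumptions, not a drop-in fix):

        #include <stdio.h>
        #include <stdlib.h>
        #include <string.h>

        int main(void) {
            char buf[64];
            char *tutar[50];
            int n = 0;

            /* %63s bounds the read so a long token cannot overflow buf. */
            while (n < 50 && scanf("%63s", buf) == 1) {
                tutar[n] = (char*)malloc(strlen(buf) + 1); /* +1 for the '\0' */
                if (tutar[n] == NULL)
                    break;
                strcpy(tutar[n], buf);
                ++n;                    /* advance, unlike the original do/while */
                if (buf[0] == 'x')      /* assumed sentinel ending the input */
                    break;
            }

            /* ... use tutar[0..n-1] here ... */

            while (n > 0)
                free(tutar[--n]);
            return 0;
        }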


  • I'm having trouble traversing a newly appended DOM element with jQuery

    - by culov
    I have a form that I want to use to add entries. Once an entry is added, the original form should be reset to prepare it for the next entry, and the saved form should be duplicated prior to resetting and appended to a 'storedEntries' div. This much is working (for the most part), but I'm having trouble accessing the newly created form. I need to change the value attribute of the submit button from 'add' to 'edit' to properly communicate what clicking that button should do. Here's my form:

        <div class="newTruck">
        <form id="addNewTruck" class='updateschedule'
              action="javascript:sub(sTime.value, eTime.value, lat.value, lng.value, street.value);">
            <b style="color:green;">Opening at: </b>
            <input id="sTime" name="sTime" title="Opening time" value="Click to set opening time" class="datetimepicker"/>
            <b style="color:red;">Closing at: </b>
            <input id="eTime" name="eTime" title="Closing time" value="Click to set closing time" class="datetimepicker"/>
            <label for='street'>Address</label>
            <input type='text' name='street' id='street' class='text' autocomplete='off'/>
            <input id='submit' class='submit' style="cursor: pointer; cursor: hand;" type="submit" value='Add new stop'/>
            <div id='suggests' class='auto_complete' style='display:none'></div>
            <input type='hidden' name='lat' id='lat'/>
            <input type='hidden' name='lng' id='lng'/>

    I've tried using a hundred different selectors with jQuery, to no avail. Here's my script as it stands:

        function cloneAndClear() {
            var id = name + now;
            $j("#addNewTruck").clone(true).attr("id", id).appendTo(".scheduledTrucks");
            $j('#' + id).filter('#submit').attr('value', 'Edit');
            $j("#addNewTruck")[0].reset();
            createPickers();
        }

    The element is properly cloned and inserted into the div, but I can't find a way to access this element; the third line in the script never works. Another problem I am having is that the 'values' in the cloned form revert back to the values in the HTML source rather than what the user entered. Advice on how to solve either of these issues is greatly appreciated!


  • Daemonize() issues on Debian

    - by djTeller
    Hi, I'm currently writing a multi-process client and a multi-threaded server for a project of mine. The server is a daemon. In order to accomplish that, I'm using the following daemonize() code:

        static void daemonize(void)
        {
            pid_t pid, sid;

            /* already a daemon */
            if (getppid() == 1)
                return;

            /* Fork off the parent process */
            pid = fork();
            if (pid < 0) {
                exit(EXIT_FAILURE);
            }
            /* If we got a good PID, then we can exit the parent process. */
            if (pid > 0) {
                exit(EXIT_SUCCESS);
            }

            /* At this point we are executing as the child process */

            /* Change the file mode mask */
            umask(0);

            /* Create a new SID for the child process */
            sid = setsid();
            if (sid < 0) {
                exit(EXIT_FAILURE);
            }

            /* Change the current working directory. This prevents the current
               directory from being locked; hence not being able to remove it. */
            if ((chdir("/")) < 0) {
                exit(EXIT_FAILURE);
            }

            /* Redirect standard files to /dev/null */
            freopen("/dev/null", "r", stdin);
            freopen("/dev/null", "w", stdout);
            freopen("/dev/null", "w", stderr);
        }

        int main(int argc, char *argv[])
        {
            daemonize();

            /* Now we are a daemon -- do the work for which we were paid */
            return 0;
        }

    I get a strange side effect when testing the server on Debian (Ubuntu): the accept() function always fails to accept connections; the descriptor returned is -1. I have no idea what is causing this, since on RedHat and CentOS it works well. When I remove the call to daemonize(), everything works well on Debian; when I add it back, the same accept() error reproduces. I've been monitoring /proc/<pid>/fd, and everything looks good. Something about daemonize() and this Debian release just doesn't seem to work. (Debian GNU/Linux 5.0, Linux 2.6.26-2-286 #1 SMP.) Any idea what is causing this? Thank you
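
    The exact cause can't be pinned down from the post, but one practical step, offered as a sketch: after daemonize() runs, stderr points at /dev/null, so the reason accept() fails is silently discarded. Logging errno through syslog survives daemonization (the wrapper name below is illustrative):

        #include <cerrno>
        #include <cstring>
        #include <syslog.h>
        #include <sys/socket.h>

        // Wrap accept() so failures are recorded even with stderr redirected.
        int accept_logged(int listen_fd)
        {
            int fd = accept(listen_fd, NULL, NULL);
            if (fd < 0)
                syslog(LOG_ERR, "accept on fd %d failed: %s",
                       listen_fd, strerror(errno));
            return fd;
        }

    Calling openlog("myserver", LOG_PID, LOG_DAEMON) at startup makes the entries easy to find, and the errno text (EBADF, EINVAL, ...) usually narrows down whether the listening descriptor survived the fork/setsid sequence intact.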


  • Refactoring Singleton Overuse

    - by drharris
    Today I had an epiphany, and it was that I was doing everything wrong. Some history: I inherited a C# application, which was really just a collection of static methods, a completely procedural mess of C# code. I refactored this the best I knew at the time, bringing in lots of post-college OOP knowledge. To make a long story short, many of the entities in the code have turned out to be Singletons. Today I realized I needed 3 new classes, which would each follow the same Singleton pattern to match the rest of the software. If I keep tumbling down this slippery slope, eventually every class in my application will be a Singleton, which will really be no logically different from the original group of static methods.

    I need help rethinking this. I know about Dependency Injection, and that would generally be the strategy to use in breaking the Singleton curse. However, I have a few specific questions related to this refactoring, all about best practices for doing so.

    How acceptable is the use of static variables to encapsulate configuration information? I have a brain block on using static, and I think it is due to an early OO class in college where the professor said static was bad. But should I have to reconfigure the class every time I access it?

    When accessing hardware, is it OK to leave a static pointer to the addresses and variables needed, or should I continually perform Open() and Close() operations?

    Right now I have a single method acting as the controller. Specifically, I continually poll several external instruments (via hardware drivers) for data. Should this type of controller be the way to go, or should I spawn separate threads for each instrument at the program's startup? If the latter, how do I make this object-oriented? Should I create classes called InstrumentAListener and InstrumentBListener? Or is there some standard way to approach this?

    Is there a better way to do global configuration? Right now I simply have Configuration.Instance.Foo sprinkled liberally throughout the code. Almost every class uses it, so perhaps keeping it as a Singleton makes sense. Any thoughts?

    A lot of my classes are things like SerialPortWriter or DataFileWriter, which must sit around waiting for this data to stream in. Since they are active the entire time, how should I arrange these in order to listen for the events generated when data comes in?

    Any other resources, books, or comments about how to get away from Singletons and other pattern overuse would be helpful.
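
    As background for the Dependency Injection route mentioned above, the core move is passing collaborators in through the constructor instead of reaching for Type.Instance inside the class. A minimal sketch follows — written in C++ rather than the poster's C#, and with invented names; the pattern itself is language-neutral:

        #include <cstdio>

        // The dependency is expressed as an interface...
        struct IConfiguration {
            virtual ~IConfiguration() {}
            virtual int pollIntervalMs() const = 0;
        };

        struct FileConfiguration : IConfiguration {
            int pollIntervalMs() const { return 250; }
        };

        // ...and handed to the collaborator explicitly, instead of the
        // collaborator calling Configuration::Instance() internally.
        class InstrumentPoller {
            const IConfiguration& cfg_;
        public:
            explicit InstrumentPoller(const IConfiguration& cfg) : cfg_(cfg) {}
            void poll() { std::printf("polling every %d ms\n", cfg_.pollIntervalMs()); }
        };

        int main() {
            FileConfiguration cfg;          // composed once, at the entry point
            InstrumentPoller poller(cfg);   // dependency visible and swappable in tests
            poller.poll();
            return 0;
        }

    The one-instance property, where genuinely needed, then lives at the composition root rather than being baked into every class.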


  • C++ class member functions instantiated by traits

    - by Jive Dadson
    I am reluctant to say I can't figure this out, but I can't figure this out. I've googled and searched Stack Overflow, and come up empty. The abstract, and possibly overly vague, form of the question is: how can I use the traits pattern to instantiate non-virtual member functions?

    The question came up while modernizing a set of multivariate function optimizers that I wrote more than 10 years ago. The optimizers all operate by selecting a straight-line path through the parameter space away from the current best point (the "update"), then finding a better point on that line (the "line search"), then testing for the "done" condition, and if not done, iterating. There are different methods for doing the update, the line search, and conceivably for the done test, and other things. Mix and match. Different update formulae require different state-variable data. For example, the LMQN update requires a vector, and the BFGS update requires a matrix. If evaluating gradients is cheap, the line search should do so. If not, it should use function evaluations only. Some methods require more accurate line searches than others. Those are just some examples.

    The original version instantiates several of the combinations by means of virtual functions. Some traits are selected by setting mode bits that are tested at runtime. Yuck. It would be trivial to define the traits with #defines and the member functions with #ifdefs and macros. But that's so twenty years ago. It bugs me that I cannot figure out a whiz-bang modern way. If there were only one trait that varied, I could use the curiously recurring template pattern. But I see no way to extend that to arbitrary combinations of traits.

    I tried doing it using boost::enable_if, etc. The specialized state information was easy. I managed to get the functions done, but only by resorting to non-friend external functions that take the this-pointer as a parameter. I never even figured out how to make the functions friends, much less member functions. The compiler (VC++ 2008) always complained that things didn't match. I would yell, "SFINAE, you moron!" but the moron is probably me. Perhaps tag dispatch is the key; I haven't gotten very deeply into that. Surely it's possible, right? If so, what is best practice?
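
    For what it's worth, the usual modern answer to "mix and match several independently varying behaviors without virtual dispatch" is policy-based design: one template parameter per axis of variation, each policy contributing its own state and non-virtual members. A minimal sketch with invented policy names:

        #include <cstdio>

        // One policy per axis; each carries exactly the state it needs.
        struct LMQNUpdate {
            double v[8];                                   // e.g. a vector
            void update() { std::puts("LMQN update"); }
        };
        struct BFGSUpdate {
            double m[8][8];                                // e.g. a matrix
            void update() { std::puts("BFGS update"); }
        };
        struct AccurateLineSearch { void search() { std::puts("accurate search"); } };
        struct CheapLineSearch    { void search() { std::puts("cheap search"); } };

        // The optimizer composes the policies; every call resolves at
        // compile time, so there are no vtables and no mode bits.
        template <class UpdatePolicy, class LineSearchPolicy>
        class Optimizer : UpdatePolicy, LineSearchPolicy {
        public:
            void iterate() {
                this->update();
                this->search();
            }
        };

        int main() {
            Optimizer<LMQNUpdate, AccurateLineSearch> a; a.iterate();
            Optimizer<BFGSUpdate, CheapLineSearch>    b; b.iterate();
            return 0;
        }

    Arbitrary combinations of traits then fall out of the template parameter list, which is exactly where CRTP alone runs out of road.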


  • Remote XP -> Win98 WMI Connection

    - by Logan Young
    I've asked this on TechNet, but because Win98 is no longer supported, I can't get any decent information; I was hoping there might be some "old school" developers here who might be able to help me.

    There is an application that we use a lot at work. This application should run 8am-5pm with as little interruption as possible. Most of the computers where this application runs are using Win98, and we have no way to upgrade them because we can't buy new hardware at the moment. My computer is running WinXP, so I thought of a way to make sure that this application runs all the time: the idea was to develop a Windows service that executes a VBScript file containing a WMI query to get a list of processes from each computer. Each list is then examined and, depending on whether or not the target application is running, it will either do nothing or execute another VBScript file containing a WMI query that starts the target application remotely. I later found a way to do this all with one VBScript file (see code below).

    My problem is in the remote connection to the target computers. I've installed WMI Core 1.5 on them, but every time I try the remote connection, I get the following:

        The remote server is unavailable or does not exist: 'GetObject'
        VBScript runtime error 800A01CE

    I've done some research, and all I've found is info about DCOM Config and Windows Firewall, but Win98 has neither of these.

        ' #### Variables and constants ####
        Const HIDDEN_WINDOW = 12
        Dim T
        ' #### End Variables and constants ####

        Main()

        Sub Main()
            ' #### Get Process information from WMI
            Computer = "."
            Set WMI = GetObject("winmgmts:" & _
                "{ImpersonationLevel=Impersonate}!\\" & Computer & "\root\cimv2")
            Set Settings = WMI.ExecQuery("SELECT * FROM Win32_Process")

            For Each Process In Settings
                ' #### If the application is found to be running, set a value to indicate this
                If Process.Name = "NOTEPAD.EXE" Then
                    T = True
                End If
            Next

            ' #### T will only have a value if the application is not running. We therefore
            ' #### evaluate it to determine if it has a value or not. If not, start the application
            If Not T Then
                'MsgBox("Application not found.")
                Set Startup = WMI.Get("Win32_ProcessStartup")
                Set Config = Startup.SpawnInstance_
                Config.ShowWindow = HIDDEN_WINDOW
                Set Process = GetObject("winmgmts:root\cimv2:Win32_Process")
                errReturn = Process.Create( _
                    "C:\Windows\notepad.exe", null, Config, intProcessID)
            End If
        End Sub

    This uses WMI to get the list of processes from the local computer; if the target application is running, it does nothing, otherwise it forcefully starts the target application. The problem is that this works only if I specify the local computer; if I target another computer, I get the error mentioned above. Does anyone have any ideas? Thanks in advance for the help!


  • What are GC holes?

    - by tianyi
    I wrote a long-connection TCP socket server in C#, and a memory spike happens in my server. I used .NET Memory Profiler (a tool) to detect where the memory leaks. The profiler indicates the private heap is huge, and the memory looks something like the report below (the numbers are not real; what I want to show is that the GC0 and GC2 "Holes" are very, very large, while the data size is normal):

        Managed heaps - 1,500,000KB
          Normal heap - 1400,000KB
            Generation #0 - 600,000KB
              Data    - 100,000KB
              "Holes" - 500,000KB
            Generation #1 - xxKB
              Data    - 0KB
              "Holes" - xKB
            Generation #2 - xxxxxxxxxxxxxKB
              Data    - 100,000KB
              "Holes" - 700,000KB
          Large heap - 131072KB
            Large heap      - 83KB
            Overhead/unused - 130989KB
              Overhead - 0KB

    However, what is a GC hole? I read an article about them: http://kaushalp.blogspot.com/2007/04/what-is-gc-hole-and-how-to-create-gc.html

    The author said: "The code snippet below is the simplest way to introduce a GC hole into the system."

        //OBJECTREF is a typedef for Object*.
        {
            PointerTable *pTBL = o_pObjectClass->GetPointerTable();
            OBJECTREF aObj = AllocateObjectMemory(pTBL);
            OBJECTREF bObj = AllocateObjectMemory(pTBL);

            //WRONG!!! "aObj" may point to garbage if the second
            //"AllocateObjectMemory" triggered a GC.
            DoSomething (aObj, bObj);
        }

    "All it does is allocate two managed objects, and then does something with them both. This code compiles fine, and if you run simple pre-checkin tests, it will probably 'work.' But this code will crash eventually. Why? If the second call to 'AllocateObjectMemory' triggers a GC, that GC discards the object instance you just assigned to 'aObj'. This code, like all C++ code inside the CLR, is compiled by a non-managed compiler and the GC cannot know that 'aObj' holds a root reference to an object you want kept live."

    I can't understand his explanation. Does the sample mean that aObj becomes a wild pointer after the GC? Is it the same as:

        {
            aObj = malloc(sizeof(object));
            free(aObj);
            function(aObj);
        }

    I hope somebody can explain it.
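
    For illustration only (nothing CLR-specific), the blog's point can be modeled in plain C++. It is less like the explicit free() in the question and more like a pointer left stale by realloc: a later allocation is allowed to relocate storage, and the allocator has no idea the raw pointer exists, so it cannot update it:

        #include <cstdio>
        #include <cstdlib>
        #include <cstring>

        // Toy "compacting heap": any allocation may move earlier objects,
        // the way a moving GC may.
        struct ToyHeap {
            char*  base;
            size_t used;
            ToyHeap() : base(0), used(0) {}
            char* alloc(const char* v) {
                size_t n = std::strlen(v) + 1;
                base = (char*)std::realloc(base, used + n); // may relocate everything
                char* p = base + used;
                std::memcpy(p, v, n);
                used += n;
                return p;
            }
        };

        int main() {
            ToyHeap heap;
            char* a = heap.alloc("object A");  // raw reference, registered nowhere
            char* b = heap.alloc("object B");  // this call may move "object A"
            // 'a' is now the GC hole in miniature: it may still point at the
            // old location even though the object lives elsewhere.
            std::printf("b = %s\n", b);
            (void)a;
            std::free(heap.base);
            return 0;
        }

    In the CLR snippet, registering aObj as a root — which the managed compiler normally does for you — is what would tell the collector to update or preserve it; hand-written unmanaged code inside the CLR has to do that explicitly.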


  • Illegal Instruction When Programming C++ on Linux

    - by remagen
    Heyo. My program, which does exactly the same thing every time it runs (moves a point sprite into the distance), will randomly fail with the text 'Illegal Instruction' on the terminal. My googling has found people encountering this when writing assembly, which makes sense, because assembly throws those kinds of errors. But why would g++ be generating an illegal instruction like this? It's not like I'm compiling for Windows and then running on Linux (and even then, as long as both are on x86, that shouldn't AFAIK cause an illegal instruction). I'll post the main file below. I can't reliably reproduce the error, although if I make random changes (add a space here, change a constant there) that force a recompile, I can get a binary which will fail with Illegal Instruction every time it is run — until I try setting a breakpoint, which makes the illegal instruction 'disappear'. :(

        #include <stdio.h>
        #include <stdlib.h>
        #include <GL/gl.h>
        #include <GL/glu.h>
        #include <SDL/SDL.h>
        #include "Screen.h"       //Simple SDL wrapper
        #include "Textures.h"     //Simple OpenGL texture wrapper
        #include "PointSprites.h" //Simple point sprites wrapper

        double counter = 0;

        /* Here goes our drawing code */
        int drawGLScene()
        {
            /* These are to calculate our fps */
            static GLint T0 = 0;
            static GLint Frames = 0;

            /* Move Left 1.5 Units And Into The Screen 6.0 */
            glLoadIdentity();
            glTranslatef(0.0f, 0.0f, -6);
            glClear(GL_COLOR_BUFFER_BIT | GL_DEPTH_BUFFER_BIT | GL_STENCIL_BUFFER_BIT);
            glEnable(GL_POINT_SPRITE_ARB);
            glTexEnvi(GL_POINT_SPRITE, GL_COORD_REPLACE, GL_TRUE);
            glBegin(GL_POINTS); /* Drawing Using Triangles */
                glVertex3d(0.0, 0.0, 0);
                glVertex3d(1.0, 0.0, 0);
                glVertex3d(1.0, 1.0, counter);
                glVertex3d(0.0, 1.0, 0);
            glEnd(); /* Finished Drawing The Triangle */

            /* Move Right 3 Units */

            /* Draw it to the screen */
            SDL_GL_SwapBuffers();

            /* Gather our frames per second */
            Frames++;
            {
                GLint t = SDL_GetTicks();
                if (t - T0 >= 50) {
                    GLfloat seconds = (t - T0) / 1000.0;
                    GLfloat fps = Frames / seconds;
                    printf("%d frames in %g seconds = %g FPS\n", Frames, seconds, fps);
                    T0 = t;
                    Frames = 0;
                    counter -= .1;
                }
            }
            return 1;
        }

        GLuint objectID;

        int main(int argc, char **argv)
        {
            Screen screen;
            screen.init();
            screen.resize(800, 600);
            LoadBMP("./dist/Debug/GNU-Linux-x86/particle.bmp");
            InitPointSprites();
            while (true) { drawGLScene(); }
        }


  • jQuery .live() not working.

    - by Silvio Iannone
    Hi there, my actual problem is that the .live() jQuery method is not working. This is the code where I use it:

        jQuery.fn.sb_animateMenuItem = function() {
            var mousehoverColor = '#0089F7';
            var duration = 250;
            return this.each(function() {
                var originalColor = $(this).css('background-color');
                $(this).live('mouseover', function() {
                    this.style.cursor = 'pointer';
                    $(this).animate().stop();
                    $(this).animate({ backgroundColor: mousehoverColor }, duration);
                });
                $(this).live('mouseout', function() {
                    this.style.cursor = 'default';
                    $(this).animate({ backgroundColor: originalColor }, duration);
                });
            });
        };

    This snippet is used in another page this way:

        <script type="text/javascript" src="ui/js/jquery-1.4.2.js"></script>
        <script type="text/javascript" src="ui/js/jquery-ui-1.8.1.custom.min.js"></script>
        <script type="text/javascript" src="ui/js/color.js"></script>
        <script type="text/javascript" src="engine/js/tiny_mce/tiny_mce.js"></script>
        <script type="text/javascript" src="ui/js/ui.js"></script>
        <script type="text/javascript">
            // UI effects
            $(document).ready(function() {
                $('button').sb_animateButton();
                $('input').sb_animateInput();
                $('.top_menu_item').sb_animateMenuItem();
                $('.top_menu_item_right').sb_animateMenuItem();
                $('.left_menu_item').sb_animateMenuItem();
            });
        </script>

    Since my site uses AJAX requests, I used the .live() method in the first snippet, but when I load the page the effects are not applied to the button/input... tags. If I remove the .live() method and use the 'normal' way, the UI effects defined in the first snippet are applied, but only to the elements loaded before any AJAX request; the elements loaded after an AJAX request are not affected by the first snippet (though they have the same selector). Thanks for helping.


  • closing a file with fclose(), but the file is still in use

    - by Marco
    Hi all, I've got a problem with deleting/overwriting a file while it is also being used (read) by my program. The problem seems to be that because my program is reading data from the file (output.txt), the file is put in an 'in use' state, which makes it impossible to delete or overwrite it. I don't understand why the file stays 'in use', because I close it after use with fclose(). This is my code:

        bool bBool = true;
        while (bBool) {
            // Run myprogram.exe to generate (a new) output.txt

            // Create file pointer and open file
            FILE* pInputFile = NULL;
            pInputFile = fopen("output.txt", "r");

            // ...then I do some reading using fscanf()...

            // And when I'm done reading, I close the file using fclose()
            fclose(pInputFile);

            // The next step is deleting the output.txt
            if (remove("output.txt") == -1) {
                // ERROR
            } else {
                // Successful
            }
        }

    I use fclose() to close the file, but the file remains in use by my program until my program is totally shut down. What is the solution to free the file so it can be deleted/overwritten? In reality my code isn't a loop without an end ; ) Thanks in advance! Marco

    Update

    As asked, here is another part of my code which also leaves the file 'in use'. This is not a loop, and this function is called from main(). Here is the code:

        int iShapeNr = 0;

        void firstRun()
        {
            // Run program that generates output.txt
            runProgram();

            // Open shape data file
            FILE* pInputFile = NULL;
            int iNumber = 0;
            pInputFile = fopen("output.txt", "r");

            // Put all orientations of all detected shapes in an array
            int iShapeNr = 0;
            int iRotationBuffer[1024]; // 1024 is the maximum of detectable shapes, can be changed in RoboRealm
            int iXMinBuffer[1024];
            int iXMaxBuffer[1024];
            int iYMinBuffer[1024];
            int iYMaxBuffer[1024];

            while (feof(pInputFile) == 0) {
                for (int i = 0; i < 9; i++) {
                    fscanf(pInputFile, "%d", &iNumber);
                    fscanf(pInputFile, ",");
                    if (i == 1) {
                        iRotationBuffer[iShapeNr] = iNumber;
                    }
                    if (i == 3) { // xmin
                        iXMinBuffer[iShapeNr] = iNumber;
                    }
                    if (i == 4) { // xmax
                        iXMaxBuffer[iShapeNr] = iNumber;
                    }
                    if (i == 5) { // ymin
                        iYMinBuffer[iShapeNr] = iNumber;
                    }
                    if (i == 6) { // ymax
                        iYMaxBuffer[iShapeNr] = iNumber;
                    }
                }
                iShapeNr++;
            }

            fflush(pInputFile);
            fclose(pInputFile);
        }

    The while loop parses the file. output.txt contains sets of 9 variables; the number of sets is unknown, but they always come in sets of 9. output.txt could contain, for example:

        0,1,2,3,4,5,6,7,8,8,7,6,5,4,1,2,3,0
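
    As an aside, both fclose() and remove() report the reason when they fail; a minimal sketch of checking them (illustrative, not a diagnosis of the sharing issue itself):

        #include <cstdio>
        #include <cerrno>
        #include <cstring>

        int main() {
            FILE* f = std::fopen("output.txt", "r");
            if (!f) { std::perror("fopen"); return 1; }
            /* ... fscanf() reads ... */
            if (std::fclose(f) != 0)
                std::perror("fclose");  // the stream is unusable either way
            if (std::remove("output.txt") != 0)
                std::printf("remove failed: %s\n", std::strerror(errno));
            return 0;
        }

    On Windows, the errno text (for example EACCES) at least distinguishes "still open somewhere" from other failure modes, which helps locate whichever handle was never closed.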


  • Syntax error while copying a multidimensional array to another in C

    - by mantuko
    We are programming an ST269 microcontroller which has two IR distance sensors. To calibrate these sensors we made one table for each sensor, with the distance we measured and the corresponding value we get from the ADC. Now we want to use one function to approximate the values in between. So we defined two two-dimensional arrays (one for each sensor) as global variables. In our function we then want to copy the one array we want to work with into a working array and approximate our values. So here's the code:

        ...
        unsigned int ir_werte_re[][] = { {8,553}, ... {83,133} };
        unsigned int ir_werte_li[][] = { {8,566}, ... {83,147} };
        ...

        unsigned int geradenaproximation(unsigned int messwert, unsigned int seite)
        {
            unsigned int working_array[16][16];
            unsigned int i = 0;
            if (seite == 0) {
                for (i = 0; i < sizeof(working_array); i++) {
                    working_array[i][0] = ir_werte_li[i][0];
                    i++;
                }
            } else {
                for (i = 0; i < sizeof(working_array); i++) {
                    working_array[i][0] = ir_werte_re[i][0];
                    i++;
                }
            }
            i = 0;
            unsigned int y1 = 0;
            unsigned int x1 = 0;
            ...
        }

    This code is in a file called sensor.c. We didn't write anything about our global arrays in sensor.h — should we? sensor.h is of course included in our main.c, and there the function is called. We also tried to copy the arrays via:

        memcpy(working_array, ir_werte_li, sizeof(working_array));

    And every way we do this, we get a syntax error near 'unsigned' in the line where we declare unsigned int y1 = 0; and I'm pretty sure there is no syntax error in that line : ) The last time I spent coding in C is a few years back, so I'm not sure the way we are trying to do this is good. Perhaps we can solve this by using a pointer instead of really copying the array, or something. So please help me out; I'll appreciate your bits on this.
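
    For reference, a minimal sketch of copying a two-dimensional array (sizes illustrative): sizeof yields a byte count rather than an element count, and all but the outermost array dimension must be spelled out:

        #include <cstring>
        #include <cstdio>

        int main() {
            // The inner dimension is required; only the outer one may be
            // left for the initializer to determine.
            unsigned int src[][2] = { {8, 553}, {42, 300}, {83, 133} };
            const size_t rows = sizeof(src) / sizeof(src[0]);  // 3 rows, not 24 bytes

            unsigned int dst[3][2];
            std::memcpy(dst, src, sizeof(src));  // a byte count is correct here

            // Equivalent element-wise copy, for comparison:
            for (size_t i = 0; i < rows; ++i)
                for (size_t j = 0; j < 2; ++j)
                    dst[i][j] = src[i][j];

            std::printf("%u %u\n", dst[2][0], dst[2][1]);  // 83 133
            return 0;
        }

    As an aside, a strict C89 compiler also rejects declarations placed after statements inside a block, which is a classic source of a "syntax error near 'unsigned'" at exactly a line like unsigned int y1 = 0;.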


  • Making sense of Notification Watcher source (Objective-C)

    - by Chris
    From the Notification Watcher source:

        - (void)selectNotification:(NSNotification*)aNotification
        {
            id sender = [aNotification object];

            [selectedDistNotification release];
            selectedDistNotification = nil;
            [selectedWSNotification release];
            selectedWSNotification = nil;

            NSNotification **targetVar;
            NSArray **targetList;
            if (sender == distNotificationList) {
                targetVar = &selectedDistNotification;
                targetList = &distNotifications;
            } else {
                targetVar = &selectedWSNotification;
                targetList = &wsNotifications;
            }

            if ([sender selectedRow] != -1) {
                [*targetVar autorelease];
                *targetVar = [[*targetList objectAtIndex:[sender selectedRow]] retain];
            }

            if (*targetVar == nil) {
                [objectText setStringValue:@""];
            } else {
                id obj = [*targetVar object];
                NSMutableAttributedString *objStr = nil;
                if (obj == nil) {
                    NSFont *aFont = [objectText font];
                    NSDictionary *attrDict = italicAttributesForFont(aFont);
                    objStr = [[NSMutableAttributedString alloc] initWithString:@"(null)" attributes:attrDict];
                } else {
                    /* LINE 1 */
                    objStr = [[NSMutableAttributedString alloc] initWithString:
                        [NSString stringWithFormat:@" (%@)", [obj className]]];
                    [objStr addAttributes:italicAttributesForFont([objectText font])
                                    range:NSMakeRange(1, [[obj className] length] + 2)];
                    if ([obj isKindOfClass:[NSString class]]) {
                        [objStr replaceCharactersInRange:NSMakeRange(0,0) withString:obj];
                    } else if ([obj respondsToSelector:@selector(stringValue)]) {
                        [objStr replaceCharactersInRange:NSMakeRange(0,0)
                                              withString:[obj performSelector:@selector(stringValue)]];
                    } else {
                        // Remove the space since we have no value to display
                        [objStr replaceCharactersInRange:NSMakeRange(0,1) withString:@""];
                    }
                }
                [objectText setObjectValue:objStr];
                /* LINE 2 */
                [objStr release];
            }

            [userInfoList reloadData];
        }

    At LINE 2, objStr is being released. Is this because we assigned it with alloc at LINE 1? Also, why is LINE 1 not written as:

        objStr = [NSMutableAttributedString* initWithString:@"(null)" attributes:attrDict]

    If I create a new string like:

        (NSString*) str = [NSString initWithString:@"test"];
        ...
        str = @"another string";

    would I have to release str? Or is this wrong, and if I do that, do I have to use [[NSString alloc] initWithString:@"test"]? Why isn't the pointer symbol used, as in [[NSString* alloc] ...? Thanks


  • random quote generator with PHP, AJAX and MySQL

    - by fusion
    I've tried using this code and this to make a random quote generator, but it doesn't display anything. My questions are: what is wrong with my code? And, in the above tutorial the quote is generated on a button click, whereas I'd like a random quote to be displayed automatically every 30 minutes — how do I do this?

    quote.html:

        <!DOCTYPE html>
        <script src="ajax.js" type="text/javascript"></script>
        <body>
        <!-- create the div for the quotes to land in -->
        <div id="quote"><strong>this</strong></div>
        <div><a style="cursor:pointer" onclick="run_query();">Next quote …</a></div>
        </body>
        </html>

    quote.php:

        <?php
        include 'config.php';

        // 'text' is the name of your table that contains
        // the information you want to pull from
        $rowcount = mysql_query("select count(*) as rows from quotes");

        // Gets the total number of items pulled from database.
        while ($row = mysql_fetch_assoc($rowcount)) {
            $max = $row["rows"];
        }

        // Selects an item's index at random
        $rand = rand(1,$max)-1;
        $result = mysql_query("select * from quotes limit $rand, 1");
        $row = mysql_fetch_array($result);
        $randomOutput = $row['storedText'];
        echo '<p>' . $randomOutput . '</p>';

    ajax.js:

        var xmlHttp;

        function run_query() {
            xmlHttp = GetXmlHttpObject();
            if (xmlHttp == null) {
                alert("This browser does not support HTTP Request");
                return;
            } // end if
            var url = "quote.php";
            xmlHttp.onreadystatechange = stateChanged;
            xmlHttp.open("GET", url, true);
            xmlHttp.send(null);
        } // end function

        function stateChanged() {
            if (xmlHttp.readyState == 4 || xmlHttp.readyState == "complete") {
                document.getElementById("quote").innerHTML = xmlHttp.responseText;
            } // end if
        } // end function

        function GetXmlHttpObject() {
            var xmlHttp = null;
            try {
                // For these browsers: Firefox, Opera 8.0+, Safari
                xmlHttp = new XMLHttpRequest();
            } catch (e) {
                // For Internet Explorer
                try {
                    xmlHttp = new ActiveXObject("Msxml2.XMLHTTP");
                } catch (e) {
                    xmlHttp = new ActiveXObject("Microsoft.XMLHTTP");
                }
            }
            return xmlHttp;
        } // end function


  • Where are the function literals in C++?

    - by academicRobot
    First of all, maybe "literals" is not the right term for this concept, but it's the closest I could think of (not literals in the sense of functions as first-class citizens). The idea is that when you make a conventional function call, it compiles to something like this:

        callq <immediate address>

    But if you make a function call using a function pointer, it compiles to something like this:

        mov <memory location>,%rax
        callq *%rax

    Which is all well and good. However, what if I'm writing a template library that requires a callback of some sort with a specified argument list, and the user of the library is expected to know what function they want to call at compile time? Then I would like to write my template to accept a function literal as a template parameter. So, similar to:

        template <int int_literal>
        struct my_template {...};

    I'd like to write:

        template <func_literal_t func_literal>
        struct my_template {...};

    and have calls to func_literal within my_template compile to callq <immediate address>.

    Is there a facility in C++ for this, or a workaround to achieve the same effect? If not, why not (e.g. some cataclysmic side effects)? How about C++0x or another language? Solutions that are not portable are fine. Solutions that include the use of member function pointers would be ideal. I'm not particularly interested in being told "You are a <socially unacceptable term for a person of low IQ>, just use function pointers/functors." This is a curiosity-based question, and it seems that it might be useful in some (albeit limited) applications. It seems like this should be possible, since function names are just placeholders for a (relative) memory address, so why not allow more liberal use (e.g. aliasing) of this placeholder?

    P.S. I use function pointers and function objects all the time, and they are great. But this post got me thinking about the don't-pay-for-what-you-don't-use principle in relation to function calls, and it seems like forcing the use of function pointers or a similar facility when the function is known at compile time is a violation of this principle, though a small one.
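
    For reference, C++ does accept a function pointer as a non-type template parameter; the callee is then fixed at compile time, so the compiler is free to emit a direct callq or inline the call entirely. A minimal sketch, including the member-pointer variant asked about:

        #include <cstdio>

        void callback(int x) { std::printf("got %d\n", x); }

        // The function pointer is a compile-time constant of the instantiation.
        template <void (*F)(int)>
        struct my_template {
            void run() { F(42); }
        };

        // Pointers to members are also valid non-type template parameters:
        struct Widget { void poke() { std::puts("poked"); } };

        template <void (Widget::*M)()>
        struct member_caller {
            void run(Widget& w) { (w.*M)(); }
        };

        int main() {
            my_template<&callback> t;  // '&' is optional for free functions
            t.run();
            Widget w;
            member_caller<&Widget::poke> m;
            m.run(w);
            return 0;
        }

    This has been valid since C++98; what it cannot express is a runtime-chosen function, which is precisely the case that falls back to the pointer-indirect call sequence shown above.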


  • Loosely coupled implicit conversion

    - by ltjax
    Implicit conversion can be really useful when types are semantically equivalent. For example, imagine two libraries that implement a type identically, but in different namespaces. Or just a type that is mostly identical, except for some semantic sugar here and there. Now you cannot pass one type into a function (in one of those libraries) that was designed to use the other, unless that function is a template. If it's not, you have to somehow convert one type into the other. This should be trivial (or otherwise the types are not so identical after all!), but calling the conversion explicitly bloats your code with mostly meaningless function calls. While such conversion functions might actually copy some values around, they essentially do nothing from a high-level "programmer's" point of view.

    Implicit conversion constructors and operators could obviously help, but they introduce coupling, so that one of those types has to know about the other. Usually, at least when dealing with libraries, that is not the case, because the presence of one of those types makes the other one redundant. Also, you cannot always change libraries.

    Now I see two options for how to make implicit conversion work in user code.

    The first would be to provide a proxy type that implements conversion operators and conversion constructors (and assignments) for all the involved types, and always use that.

    The second requires a minimal change to the libraries, but allows great flexibility: add a conversion constructor for each involved type that can optionally be enabled externally. For example, for a type A add a constructor:

        template <class T>
        A( const T& src,
           typename boost::enable_if<conversion_enabled<T,A>>::type* ignore=0 )
        {
            *this = convert(src);
        }

    and a template

        template <class X, class Y>
        struct conversion_enabled : public boost::mpl::false_ {};

    that disables the implicit conversion by default. Then to enable conversion between two types, specialize the template:

        template <>
        struct conversion_enabled<OtherA, A> : public boost::mpl::true_ {};

    and implement a convert function that can be found through ADL.

    I would personally prefer to use the second variant, unless there are strong arguments against it.

    Now to the actual question(s): What's the preferred way to associate types for implicit conversion? Are my suggestions good ideas? Are there any downsides to either approach? Is allowing conversions like that dangerous? Should library implementers in general supply the second method when it's likely that their type will be replicated in software it is most likely being used with? (I'm thinking of 3D-rendering middleware here, where most of those packages implement a 3D vector.)
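
    To make the second variant concrete, here is a self-contained sketch of the opt-in constructor in use; the library and member names are invented for the example:

        #include <cstdio>
        #include <boost/utility/enable_if.hpp>
        #include <boost/mpl/bool.hpp>

        // Conversions are off by default, as proposed above.
        template <class X, class Y>
        struct conversion_enabled : boost::mpl::false_ {};

        namespace libB { struct OtherA { int x; }; }

        namespace libA {
        struct A {
            int x;
            A() : x(0) {}
            template <class T>
            A(const T& src,
              typename boost::enable_if<conversion_enabled<T, A> >::type* = 0)
            { *this = convert(src); }  // convert() found via ADL
        };
        }

        // User code opts in and supplies the ADL-visible convert():
        template <>
        struct conversion_enabled<libB::OtherA, libA::A> : boost::mpl::true_ {};

        namespace libB {
        libA::A convert(const OtherA& o) { libA::A a; a.x = o.x; return a; }
        }

        void draw(const libA::A& v) { std::printf("A{%d}\n", v.x); }

        int main() {
            libB::OtherA o = { 7 };
            draw(o);  // implicit: the enabled constructor runs convert(o)
            return 0;
        }

    Note that conversion_enabled<A, A> stays false_, so the templated constructor drops out via SFINAE for plain copies and the ordinary copy constructor is used.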


  • How does System.TraceListener prepend message with process name?

    - by btlog
    I have been looking at using System.Diagnostics.Trace for doing logging in a very basic app. Generally it does all I need it to do. The downside is that if I call

        Trace.TraceInformation("Some info");

    the output is "SomeApp.Exe Information: 0: Some info". Initially this entertained me, but no longer. I would like to output just "Some info" to the console. So I thought writing a custom TraceListener, rather than using the built-in ConsoleTraceListener, would solve the problem. I can see a specific format that I want: all the text after the second colon. Here is my attempt to see if this would work:

        class LogTraceListener : TraceListener
        {
            public override void Write(string message)
            {
                int firstColon = message.IndexOf(":");
                int secondColon = message.IndexOf(":", firstColon + 1);
                Console.Write(message);
            }

            public override void WriteLine(string message)
            {
                int firstColon = message.IndexOf(":");
                int secondColon = message.IndexOf(":", firstColon + 1);
                Console.WriteLine(message);
            }
        }

    If I output the value of firstColon, it is always -1. If I put a breakpoint there, the message is always just "Some info". Where does all the other information come from? So I had a look at the call stack just before Console.WriteLine was called. The method that called my WriteLine method is:

        System.dll!System.Diagnostics.TraceListener.TraceEvent(System.Diagnostics.TraceEventCache eventCache, string source, System.Diagnostics.TraceEventType eventType, int id, string message) + 0x33 bytes

    When I use Reflector to look at this method, it all seems pretty straightforward. I can't see any code that changes the value of the string after I have sent it to Console.WriteLine. The only method that could possibly change the underlying string value is a call to UnsafeNativeMethods.EventWriteString, which has a parameter that is a pointer to the message. Does anyone understand what is going on here, and whether I can change the output to be just my message without the additional fluff? It seems like evil magic that I can pass the string "Some info" to Console.WriteLine (or any other method, for that matter) and the string that is output is different.


  • Hi, this is my CSS horizontal menu — I need the submenus for this

    - by kalaivani
        /* CSS Document */

        .rhm1 {
            width: 780px;
            height: 64px;
            margin: 0 auto;
            background: url(images/rhm1_bg.gif) repeat-x;
        }
        .rhm1-left {
            background: url(images/rhm1_l.gif) no-repeat;
            width: 15px;
            height: 64px;
            float: left;
        }
        .rhm1-right {
            background: url(images/rhm1_r.gif) no-repeat;
            width: 15px;
            height: 64px;
            float: right;
        }
        .rhm1-bg {
            background: url(images/rhm1_bg.gif) repeat-x;
            height: 64px;
        }
        .rhm1-bg ul {
            list-style: none;
            margin: 0 auto;
        }
        .rhm1-bg li {
            float: left;
            list-style: none;
        }
        .rhm1-bg li a {
            float: left;
            display: block;
            color: #ffe8cc;
            text-decoration: none;
            font: 12px 'Lucida Sans', sans-serif;
            font-weight: bold;
            padding: 0 0 0 18px;
            height: 64px;
            line-height: 40px;
            text-align: center;
            cursor: pointer;
        }
        .rhm1-bg li a span {
            float: left;
            display: block;
            padding: 0 32px 0 18px;
        }
        .rhm1-bg li.current a {
            color: #fff;
            background: url(images/rhm1_hover_l.gif) no-repeat left 5px;
        }
        .rhm1-bg li.current a span {
            color: #fff;
            background: url(images/rhm1_hover_r.gif) no-repeat right 5px;
        }
        .rhm1-bg li a:hover {
            color: #fff;
            background: url(images/rhm1_hover_l.gif) no-repeat left 5px;
        }
        .rhm1-bg li a:hover span {
            color: #fff;
            background: url(images/rhm1_hover_r.gif) no-repeat right 5px;
        }


  • .NET Free memory usage (how to prevent overallocation / release memory to the OS)

    - by Ronan Thibaudau
    I'm currently working on a website that makes heavy use of cached data to avoid round trips. At startup we get a "large" graph (hundreds of thousands of different kinds of objects). Those objects are retrieved over WCF and deserialized (we use protocol buffers for serialization). I'm using Red Gate's memory profiler to debug memory issues (the memory didn't seem to fit with how much memory we should need "after" we're done initializing), and I end up with this report.

    What we can gather from this report is:

    1) Most of the memory .NET allocated is free. It may have been rightfully allocated during deserialization, but now that it's free, I'd like it to return to the OS.

    2) The memory is fragmented, which is bad: every time I refresh the cache I need to redo the memory-hungry deserialization process, and this in turn creates large objects that may throw an OutOfMemoryException due to fragmentation.

    3) I have no clue why the space is fragmented, because when I look at the large object heap there are only 30 instances: 15 object[] arrays directly attached to the GC and totally unrelated to me, 1 char array also attached directly to the GC heap, and the remaining 15 are mine but are not the cause of this, as I get the same report if I comment them out in the code.

    So my question is: what can I do to go further with this? I'm not really sure what to look for in debugging/tools, as it seems my memory is fragmented, but not by me, and huge amounts of free space are allocated by .NET which I can't release. Also, please make sure you understand the question well before answering: I'm not looking for a way to free memory within .NET (GC.Collect), but to release memory that is already free in .NET to the system, as well as to defragment said memory. Note that a slow solution is fine; if it's possible to manually defragment the large heap I'd be all for it, as I can call it at the end of RefreshCache and it's OK if it takes 1 or 2 seconds to run.

    Thanks for your help!

    A few notes I forgot:

    1) The project is a .NET 2.0 website; I get the same results running it in a .NET 4 pool, and likewise if I run it in a .NET 4 pool, convert it to .NET 4, and recompile.

    2) These are the results of a release build, so a debug build cannot be the issue.

    3) And this is probably quite important: I do not get these issues at all in the web dev server, only in IIS. In the web dev server I get memory consumption rather close to my actual consumption (well, more, but not 5-10x more!).

