Search Results

Search found 5202 results on 209 pages for 'char'.

Page 194/209

  • Why does my gcc-compiled code crash with a segmentation fault when it runs?

    - by gcc
    every time ,why have I encountered with segmentation fault ? still,I have not found my fault sometimes, gcc emits segmentation fault ,sometimes, memory satck limit is exceeded I think you will understand (easily) what is for that code void my_main() { char *tutar[50],tempc; int i=0,temp,g=0,a=0,j=0,d=1,current=0,command=0,command2=0; do{ tutar[i]=malloc(sizeof(int)); scanf("%s",tutar[i]); temp=*tutar[i]; } while(temp=='x'); i=0; num_arrays=atoi(tutar[i]); i=1; tempc=*tutar[i]; if(tempc!='x' && tempc!='d' && tempc!='a' && tempc!='j') { current=1; arrays[current]=malloc(sizeof(int)); l_arrays[current]=calloc(1,sizeof(int)); c_arrays[current]=calloc(1,sizeof(int));} i=1; current=1; while(1) { tempc=*tutar[i]; if(tempc=='x') break; if(tempc=='n') { ++current; arrays[current]=malloc(sizeof(int)); l_arrays[current]=calloc(1,sizeof(int)); c_arrays[current]=calloc(1,sizeof(int)); ++i; continue; } if(tempc=='d') { ++i; command=atoi(tutar[i])-1; free(arrays[command]); free(l_arrays[command]); free(c_arrays[command]); ++i; continue; } if(tempc=='j') { ++i; current=atoi(tutar[i])-1; if(arrays[current]==NULL) { arrays[current]=malloc(sizeof(int)); l_arrays[current]=calloc(1,sizeof(int)); c_arrays[current]=calloc(1,sizeof(int)); ++i; } else { a=l_arrays[current]; j=l_arrays[current]; ++i;} continue; } } }
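
     One likely contributor, offered as a guess rather than a diagnosis: each tutar[i] gets only sizeof(int) bytes, but scanf("%s", ...) writes a token of arbitrary length into it, and i is never advanced inside the first loop, so every token lands in the same four-byte block. A minimal sketch of a safer read loop follows; the 64-byte token limit and the lone "x" sentinel are assumptions made for illustration.

         #include <cstdio>
         #include <cstdlib>
         #include <cstring>

         int main() {
             char *tutar[50];
             int i = 0;
             while (i < 50) {
                 tutar[i] = static_cast<char *>(std::malloc(64));  // room for a whole token, not sizeof(int)
                 if (std::scanf("%63s", tutar[i]) != 1) break;     // bounded read: cannot overflow the buffer
                 if (std::strcmp(tutar[i], "x") == 0) break;       // stop on the sentinel token
                 ++i;                                              // advance, unlike the original loop
             }
             for (int k = 0; k <= i && k < 50; ++k) std::free(tutar[k]);
             return 0;
         }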

  • Haskell mini-interpreter: implementing the eval function

    - by mohamed elshikh
     I have been asked to implement this project and I am stuck on part (b), the eval function. Here is the full description of the project.
     You are required to implement an interpreter for a mini-Haskell language. An interpreter is defined in Wikipedia as a computer program that executes, i.e. performs, instructions written in a programming language. The interpreter should be able to evaluate functions written in a special notation, which you will define. A function is defined by its name, its input parameters (a list of variables) and its body. The body of the function can be any of the following statements:
     a) Variable: the function may return any of the input variables.
     b) Arithmetic expressions: input variables combined with addition, subtraction, multiplication, division and modulus operations on arithmetic expressions.
     c) Boolean expressions: orderings of arithmetic expressions (applying the relationships <, =<, >, >= or =) and the and-ing, or-ing and negation of Boolean expressions.
     d) If-then-else statements: the if keyword is followed by a Boolean expression; the then and else parts may be followed by any of the statements described here.
     e) Guarded expressions: each case consists of a Boolean expression and any of the statements described here. The expression consists of any number of cases; the first case whose condition is true has its body evaluated. The guarded expression has to terminate with an otherwise case.
     f) Function calls: the body of the function may contain a call to another function.
     Note that all inputs passed to a function will be of type Int. The output of a function can be of type Int or Bool.
     To implement the interpreter, you are required to implement the following:
     a) Define a datatype for the following expressions: variables, arithmetic expressions, Boolean expressions, if-then-else statements, guarded expressions and functions.
     b) Implement the function eval, which evaluates a function. It takes three inputs: the name of the function to be evaluated (a string), a list of inputs to that function (always of type Int), and a list of functions, each represented as an instance of the datatype you created for functions.
     c) Implement the function get_type, which returns the type of the function as a string. Its input is the same as in part (b).
     Here is what I have done so far:
         data Variable = v(char)
         data Arth = va Variable | Add Arth Arth | Sub Arth Arth | Times Arth Arth | Divide Arth Arth
         data Bol = Great Arth Arth | Small Arth Arth | Geq Arth Arth | Seq Arth Arth | And Bol Bol | Or Bol Bol | Neg Bol
         data Cond =
         data Guard =
         data Fun = cons String [Variable] Body
         data Body = bodycons(String) | Bol | Cond | Guard | Arth

  • Why do I get a segmentation fault?

    - by frx08
    Why I get a segmentation fault? int main() { int height, width, step, step_mono, channels; int y, x; char str[15]; uchar *data, *data_mono; CvMemStorage* storage = cvCreateMemStorage(0); CvSeq* contour = 0; CvPoint* p; CvFont font; CvCapture *capture; IplImage *frame = 0, *mono_thres = 0; capture = cvCaptureFromAVI("source.avi"); //capture video if(!cvGrabFrame(capture)) exit(0); frame = cvRetrieveFrame(capture); //capture the first frame from video source cvNamedWindow("Result", 1); while(1){ mono_thres = cvCreateImage(cvGetSize(frame), 8, 1); height = frame -> height; width = frame -> width; step = frame -> widthStep; step_mono = mono_thres -> widthStep; channels = frame -> nChannels; data = (uchar *)frame -> imageData; data_mono = (uchar *)mono_thres -> imageData; //converts the image to a binary highlighting the lightest zone for(y=0;y < height;y++) for(x=0;x < width;x++) data_mono[y*step_mono+x*1+0] = data[y*step+x*channels+0]; cvThreshold(mono_thres, mono_thres, 230, 255, CV_THRESH_BINARY); cvFindContours(mono_thres, storage, &contour, sizeof(CvContour), CV_RETR_CCOMP, CV_CHAIN_APPROX_SIMPLE); //gets the coordinates of the contours and draws a circle and the coordinates in that point p = CV_GET_SEQ_ELEM(CvPoint, contour, 1); cvCircle(frame, *p, 1, CV_RGB(0,0,0), 2); cvInitFont(&font, CV_FONT_HERSHEY_PLAIN, 1.1, 1.1, 0, 1); sprintf(str, "(%d ,%d)", p->x, p->y); cvPutText(frame, str, cvPoint(p->x+5,p->y-5), &font, CV_RGB(0,0,0)); cvShowImage("Result", mono_thres); //next frame if(!cvGrabFrame(capture)) break; frame = cvRetrieveFrame(capture); if((cvWaitKey(10) & 255) == 27) break; } cvReleaseCapture(&capture); cvDestroyWindow("Result"); return 0; }

  • Are Objective-C initializers allowed to share the same name?

    - by NattKatt
    I'm running into an odd issue in Objective-C when I have two classes using initializers of the same name, but differently-typed arguments. For example, let's say I create classes A and B: A.h: #import <Cocoa/Cocoa.h> @interface A : NSObject { } - (id)initWithNum:(float)theNum; @end A.m: #import "A.h" @implementation A - (id)initWithNum:(float)theNum { self = [super init]; if (self != nil) { NSLog(@"A: %f", theNum); } return self; } @end B.h: #import <Cocoa/Cocoa.h> @interface B : NSObject { } - (id)initWithNum:(int)theNum; @end B.m: #import "B.h" @implementation B - (id)initWithNum:(int)theNum { self = [super init]; if (self != nil) { NSLog(@"B: %d", theNum); } return self; } @end main.m: #import <Foundation/Foundation.h> #import "A.h" #import "B.h" int main (int argc, const char * argv[]) { NSAutoreleasePool * pool = [[NSAutoreleasePool alloc] init]; A *a = [[A alloc] initWithNum:20.0f]; B *b = [[B alloc] initWithNum:10]; [a release]; [b release]; [pool drain]; return 0; } When I run this, I get the following output: 2010-04-26 20:44:06.820 FnTest[14617:a0f] A: 20.000000 2010-04-26 20:44:06.823 FnTest[14617:a0f] B: 1 If I reverse the order of the imports so it imports B.h first, I get: 2010-04-26 20:45:03.034 FnTest[14635:a0f] A: 0.000000 2010-04-26 20:45:03.038 FnTest[14635:a0f] B: 10 For some reason, it seems like it's using the data type defined in whichever @interface gets included first for both classes. I did some stepping through the debugger and found that the isa pointer for both a and b objects ends up the same. I also found out that if I no longer make the alloc and init calls inline, both initializations seem to work properly, e.g.: A *a = [A alloc]; [a initWithNum:20.0f]; If I use this convention when I create both a and b, I get the right output and the isa pointers seem to be different for each object. Am I doing something wrong? I would have thought multiple classes could have the same initializer names, but perhaps that is not the case.

  • How to return a value from a MySQL stored procedure?

    - by karthik
    I am using the below storedproc to generate the Insert statements of a specified table It is build-ed without any errors. Now i want to return the result set in "V_string" as output of the SP DELIMITER $$ DROP PROCEDURE IF EXISTS `demo`.`InsertGenerator` $$ CREATE DEFINER=`root`@`localhost` PROCEDURE `InsertGenerator`() SWL_return: BEGIN -- SQLWAYS_EVAL# to retrieve column specific information -- SQLWAYS_EVAL# table DECLARE v_string NATIONAL VARCHAR(3000); -- SQLWAYS_EVAL# first half -- SQLWAYS_EVAL# tement DECLARE v_stringData NATIONAL VARCHAR(3000); -- SQLWAYS_EVAL# data -- SQLWAYS_EVAL# statement DECLARE v_dataType NATIONAL VARCHAR(1000); -- SQLWAYS_EVAL# -- SQLWAYS_EVAL# columns DECLARE v_colName NATIONAL VARCHAR(50); DECLARE NO_DATA INT DEFAULT 0; DECLARE cursCol CURSOR FOR SELECT column_name,data_type FROM `columns` -- WHERE table_name = v_tableName; WHERE table_name = 'tbl_users'; DECLARE CONTINUE HANDLER FOR SQLEXCEPTION BEGIN SET NO_DATA = -2; END; DECLARE CONTINUE HANDLER FOR NOT FOUND SET NO_DATA = -1; OPEN cursCol; SET v_string = CONCAT('INSERT ',v_tableName,'('); SET v_stringData = ''; SET NO_DATA = 0; FETCH cursCol INTO v_colName,v_dataType; IF NO_DATA <> 0 then -- NOT SUPPORTED print CONCAT('Table ',@tableName, ' not found, processing skipped.') close cursCol; LEAVE SWL_return; end if; WHILE NO_DATA = 0 DO IF v_dataType in('varchar','char','nchar','nvarchar') then SET v_stringData = CONCAT(v_stringData,'SQLWAYS_EVAL# ll(',v_colName,'SQLWAYS_EVAL# ''+'); ELSE if v_dataType in('text','ntext') then -- SQLWAYS_EVAL# -- SQLWAYS_EVAL# else SET v_stringData = CONCAT(v_stringData,'SQLWAYS_EVAL# ll(cast(',v_colName,'SQLWAYS_EVAL# 00)),'''')+'''''',''+'); ELSE IF v_dataType = 'money' then -- SQLWAYS_EVAL# doesn't get converted -- SQLWAYS_EVAL# implicitly SET v_stringData = CONCAT(v_stringData,'SQLWAYS_EVAL# y,''''''+ isnull(cast(',v_colName,'SQLWAYS_EVAL# 0)),''0.0000'')+''''''),''+'); ELSE IF v_dataType = 'datetime' then SET v_stringData = CONCAT(v_stringData,'SQLWAYS_EVAL# time,''''''+ isnull(cast(',v_colName, 'SQLWAYS_EVAL# 0)),''0'')+''''''),''+'); ELSE IF v_dataType = 'image' then SET v_stringData = CONCAT(v_stringData,'SQLWAYS_EVAL# ll(cast(convert(varbinary,',v_colName, 'SQLWAYS_EVAL# 6)),''0'')+'''''',''+'); ELSE SET v_stringData = CONCAT(v_stringData,'SQLWAYS_EVAL# ll(cast(',v_colName,'SQLWAYS_EVAL# 0)),''0'')+'''''',''+'); end if; end if; end if; end if; end if; SET v_string = CONCAT(v_string,v_colName,','); SET NO_DATA = 0; FETCH cursCol INTO v_colName,v_dataType; END WHILE; END $$ DELIMITER ;

  • Daemonize() issues on Debian

    - by djTeller
    Hi, I'm currently writing a multi-process client and a multi-treaded server for some project i have. The server is a Daemon. In order to accomplish that, i'm using the following daemonize() code: static void daemonize(void) { pid_t pid, sid; /* already a daemon */ if ( getppid() == 1 ) return; /* Fork off the parent process */ pid = fork(); if (pid < 0) { exit(EXIT_FAILURE); } /* If we got a good PID, then we can exit the parent process. */ if (pid > 0) { exit(EXIT_SUCCESS); } /* At this point we are executing as the child process */ /* Change the file mode mask */ umask(0); /* Create a new SID for the child process */ sid = setsid(); if (sid < 0) { exit(EXIT_FAILURE); } /* Change the current working directory. This prevents the current directory from being locked; hence not being able to remove it. */ if ((chdir("/")) < 0) { exit(EXIT_FAILURE); } /* Redirect standard files to /dev/null */ freopen( "/dev/null", "r", stdin); freopen( "/dev/null", "w", stdout); freopen( "/dev/null", "w", stderr); } int main( int argc, char *argv[] ) { daemonize(); /* Now we are a daemon -- do the work for which we were paid */ return 0; } I have a strange side effect when testing the server on Debian (Ubuntu). The accept() function always fail to accept connections, the pid returned is -1 I have no idea what causing this, since in RedHat & CentOS it works well. When i remove the call to daemonize(), everything works well on Debian, when i add it back, same accept() error reproduce. I've been monitring the /proc//fd, everything looks good. Something in the daemonize() and the Debian release just doesn't seem to work. (Debian GNU/Linux 5.0, Linux 2.6.26-2-286 #1 SMP) Any idea what causing this? Thank you
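
     One thing worth checking first, stated as a suggestion rather than a diagnosis: after daemonize() the process's stderr points at /dev/null, so whatever reason accept() has for returning -1 is never seen. The sketch below logs errno through syslog and retries on EINTR; the function and parameter names (accept_client, listen_fd) are illustrative, not taken from the original program.

         #include <cerrno>
         #include <cstring>
         #include <syslog.h>
         #include <sys/socket.h>

         // Accept one client on an already-listening socket, reporting failures to syslog
         // (still visible after daemonizing, unlike stderr).
         int accept_client(int listen_fd) {
             for (;;) {
                 int fd = accept(listen_fd, NULL, NULL);
                 if (fd >= 0)
                     return fd;
                 if (errno == EINTR)
                     continue;                                   // interrupted by a signal: retry
                 syslog(LOG_ERR, "accept failed: %s", strerror(errno));
                 return -1;
             }
         }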

  • Having trouble wrapping functions in the linux kernel

    - by Corey Henderson
    I've written a LKM that implements Trusted Path Execution (TPE) into your kernel: https://github.com/cormander/tpe-lkm I run into an occasional kernel OOPS (describe at the end of this question) when I define WRAP_SYSCALLS to 1, and am at my wit's end trying to track it down. A little background: Since the LSM framework doesn't export its symbols, I had to get creative with how I insert the TPE checking into the running kernel. I wrote a find_symbol_address() function that gives me the address of any function I need, and it works very well. I can call functions like this: int (*my_printk)(const char *fmt, ...); my_printk = find_symbol_address("printk"); (*my_printk)("Hello, world!\n"); And it works fine. I use this method to locate the security_file_mmap, security_file_mprotect, and security_bprm_check functions. I then overwrite those functions with an asm jump to my function to do the TPE check. The problem is, the currently loaded LSM will no longer execute the code for it's hook to that function, because it's been totally hijacked. Here is an example of what I do: int tpe_security_bprm_check(struct linux_binprm *bprm) { int ret = 0; if (bprm->file) { ret = tpe_allow_file(bprm->file); if (IS_ERR(ret)) goto out; } #if WRAP_SYSCALLS stop_my_code(&cs_security_bprm_check); ret = cs_security_bprm_check.ptr(bprm); start_my_code(&cs_security_bprm_check); #endif out: return ret; } Notice the section between the #if WRAP_SYSCALLS section (it's defined as 0 by default). If set to 1, the LSM's hook is called because I write the original code back over the asm jump and call that function, but I run into an occasional kernel OOPS with an "invalid opcode": invalid opcode: 0000 [#1] SMP RIP: 0010:[<ffffffff8117b006>] [<ffffffff8117b006>] security_bprm_check+0x6/0x310 I don't know what the issue is. I've tried several different types of locking methods (see the inside of start/stop_my_code for details) to no avail. To trigger the kernel OOPS, write a simple bash while loop that endlessly starts a backgrounded "ls" command. After a minute or so, it'll happen. I'm testing this on a RHEL6 kernel, also works on Ubuntu 10.04 LTS (2.6.32 x86_64). While this method has been the most successful so far, I have tried another method of simply copying the kernel function to a pointer I created with kmalloc but when I try to execute it, I get: kernel tried to execute NX-protected page - exploit attempt? (uid: 0). If anyone can tell me how to kmalloc space and have it marked as executable, that would also help me solve the above problem. Any help is appreciated!

  • MySQL: changing the delimiter

    - by jimsmith
    I'm trying to add this function using php myadmin, first off I get on error line 5, which is apparently because you need to change the delimiter from ; to something else so i tried this DELIMITER | CREATE FUNCTION LEVENSHTEIN (s1 VARCHAR(255), s2 VARCHAR(255)) RETURNS INT DETERMINISTIC BEGIN DECLARE s1_len, s2_len, i, j, c, c_temp, cost INT; DECLARE s1_char CHAR; DECLARE cv0, cv1 VARBINARY(256); SET s1_len = CHAR_LENGTH(s1), s2_len = CHAR_LENGTH(s2), cv1 = 0x00, j = 1, i = 1, c = 0; IF s1 = s2 THEN RETURN 0; ELSEIF s1_len = 0 THEN RETURN s2_len; ELSEIF s2_len = 0 THEN RETURN s1_len; ELSE WHILE j <= s2_len DO SET cv1 = CONCAT(cv1, UNHEX(HEX(j))), j = j + 1; END WHILE; WHILE i <= s1_len DO SET s1_char = SUBSTRING(s1, i, 1), c = i, cv0 = UNHEX(HEX(i)), j = 1; WHILE j <= s2_len DO SET c = c + 1; IF s1_char = SUBSTRING(s2, j, 1) THEN SET cost = 0; ELSE SET cost = 1; END IF; SET c_temp = CONV(HEX(SUBSTRING(cv1, j, 1)), 16, 10) + cost; IF c > c_temp THEN SET c = c_temp; END IF; SET c_temp = CONV(HEX(SUBSTRING(cv1, j+1, 1)), 16, 10) + 1; IF c > c_temp THEN SET c = c_temp; END IF; SET cv0 = CONCAT(cv0, UNHEX(HEX(c))), j = j + 1; END WHILE; SET cv1 = cv0, i = i + 1; END WHILE; END IF; RETURN c; END DELIMITER ; But I get this error: #1064 - You have an error in your SQL syntax; check the manual that corresponds to your MySQL server version for the right syntax to use near 'delimiter | Please help !?

  • Force close only when an alphabetical string is entered in the EditText

    - by Abdullah Al Mubarok
    So, I make a checker if an id is in the database or not, the id is in numerical string, the type in database is char(6) though. So this is my code public class input extends Activity{ /** Called when the activity is first created. */ @Override public void onCreate(Bundle savedInstanceState) { super.onCreate(savedInstanceState); setContentView(R.layout.input); final EditText edittext = (EditText)findViewById(R.id.editText1); Button button = (Button)findViewById(R.id.button1); button.setOnClickListener(new OnClickListener(){ @Override public void onClick(View arg0) { // TODO Auto-generated method stub String nopel = edittext.getText().toString(); if(nopel.length() == 0){ Toast.makeText(getApplicationContext(), "error", Toast.LENGTH_SHORT).show(); }else{ List<NameValuePair> pairs = new ArrayList<NameValuePair>(); pairs.add(new BasicNameValuePair("nopel", nopel)); JSON json_dp = new JSON(); JSONObject jobj_dp = json_dp.getJSON("http://10.0.2.2/KP/pdam/nopel.php", pairs); try { if(jobj_dp.getInt("row") == 0){ Toast.makeText(getApplicationContext(), "error", Toast.LENGTH_SHORT).show(); }else{ String snopel = jobj_dp.getString("nopel"); String snama = jobj_dp.getString("nama"); String salamat = jobj_dp.getString("alamat"); String sgolongan = jobj_dp.getString("golongan"); Intent i = new Intent(input.this, list.class); i.putExtra("nopel", snopel); i.putExtra("nama", snama); i.putExtra("alamat", salamat); i.putExtra("golongan", sgolongan); startActivity(i); } } catch (JSONException e) { // TODO Auto-generated catch block e.printStackTrace(); } } } }); } } the first check is to check if an input is null, it's going right for now, the second check is to check if an id in the database, and it's the problem. When I try some id in numerical value like "0001" or "02013" it's fine, and can run. but when I just got to put "abushd" it forced close. anyone know why I got this?

  • Implementing default constructors

    - by James
     Implement the default constructor and the constructors with one and two int parameters. The one-parameter constructor should initialize the first member of the pair; the second member of the pair is to be 0. Overload the binary operator + to add pairs as follows: (a, b) + (c, d) = (a + c, b + d). Overload - analogously. Overload * on a pair and an int as follows: (a, b) * c = (a * c, b * c). Write a program to test all the member functions and overloaded operators in your class definition. You will also need to write accessor (get) functions for each member. The definition of the class Pairs:
         class Pairs
         {
         public:
             Pairs();
             Pairs(int first, int second);
             Pairs(int first);
             // other members and friends
             friend istream& operator>> (istream&, Pair&);
             friend ostream& operator<< (ostream&, const Pair&);
         private:
             int f;
             int s;
         };
     Self-Test Exercise #17:
         istream& operator>> (istream& ins, Pair& second)
         {
             char ch;
             ins >> ch;        // discard initial '('
             ins >> second.f;
             ins >> ch;        // discard comma ','
             ins >> second.s;
             ins >> ch;        // discard final ')'
             return ins;
         }
         ostream& operator<< (ostream& outs, const Pair& second)
         {
             outs << '(';
             outs << second.f;
             outs << ", ";     // I followed the author's suggestion here.
             outs << second.s;
             outs << ")";
             return outs;
         }
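
     A sketch of what the requested members might look like, assuming the class is the Pairs definition quoted above; the accessor names first() and second() are an illustrative choice, not part of the assignment text.

         class Pairs
         {
         public:
             Pairs() : f(0), s(0) {}                              // default: (0, 0)
             Pairs(int first, int second) : f(first), s(second) {}
             Pairs(int first) : f(first), s(0) {}                 // second member defaults to 0
             int first() const { return f; }                      // accessor (get) functions
             int second() const { return s; }
             friend Pairs operator+(const Pairs& a, const Pairs& b) { return Pairs(a.f + b.f, a.s + b.s); }
             friend Pairs operator-(const Pairs& a, const Pairs& b) { return Pairs(a.f - b.f, a.s - b.s); }
             friend Pairs operator*(const Pairs& a, int c)          { return Pairs(a.f * c, a.s * c); }
         private:
             int f;
             int s;
         };

     A test program would then exercise expressions such as Pairs(1, 2) + Pairs(3, 4) and Pairs(2, 5) * 3 and check the results through the accessors.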

  • Can't iterate over a nested dict in Django

    - by fredrik
    Hi, Im trying to iterate over a nestled dict list. The first level works fine. But the second level is treated like a string not dict. In my template I have this: {% for product in Products %} <li> <p>{{ product }}</p> {% for partType in product.parts %} <p>{{ partType }}</p> {% for part in partType %} <p>{{ part }}</p> {% endfor %} {% endfor %} </li> {% endfor %} It's the {{ part }} that just list 1 char at the time based on partType. And it seams that it's treated like a string. I can however via dot notation reach all dict but not with a for loop. The current output looks like this: Color C o l o r Style S ..... The Products object looks like this in the log: [{'product': <models.Products.Product object at 0x1076ac9d0>, 'parts': {u'Color': {'default': u'Red', 'optional': [u'Red', u'Blue']}, u'Style': {'default': u'Nice', 'optional': [u'Nice']}, u'Size': {'default': u'8', 'optional': [u'8', u'8.5']}}}] What I trying to do is to pair together a dict/list for a product from a number of different SQL queries. The web handler looks like this: typeData = Products.ProductPartTypes.all() productData = Products.Product.all() langCode = 'en' productList = [] for product in productData: typeDict = {} productDict = {} for type in typeData: typeDict[type.typeId] = { 'default' : '', 'optional' : [] } productDict['product'] = product productDict['parts'] = typeDict defaultPartsData = Products.ProductParts.gql('WHERE __key__ IN :key', key = product.defaultParts) optionalPartsData = Products.ProductParts.gql('WHERE __key__ IN :key', key = product.optionalParts) for defaultPart in defaultPartsData: label = Products.ProductPartLabels.gql('WHERE __key__ IN :key AND partLangCode = :langCode', key = defaultPart.partLabelList, langCode = langCode).get() productDict['parts'][defaultPart.type.typeId]['default'] = label.partLangLabel for optionalPart in optionalPartsData: label = Products.ProductPartLabels.gql('WHERE __key__ IN :key AND partLangCode = :langCode', key = optionalPart.partLabelList, langCode = langCode).get() productDict['parts'][optionalPart.type.typeId]['optional'].append(label.partLangLabel) productList.append(productDict) logging.info(productList) templateData = { 'Languages' : Settings.Languges.all().order('langCode'), 'ProductPartTypes' : typeData, 'Products' : productList } I've tried making the dict in a number of different ways. Like first making a list, then a dict, used tulpes anything I could think of. Any help is welcome! Bouns: If someone have an other approach to the SQL quires, that is more then welcome. I feel that it kinda stupid to run that amount of quires. What is happening that each product part has a different label base on langCode. ..fredrik

  • .NET Free memory usage (how to prevent overallocation / release memory to the OS)

    - by Ronan Thibaudau
     I'm currently working on a website that makes heavy use of cached data to avoid round trips. At startup we get a "large" graph (hundreds of thousands of objects of different kinds). Those objects are retrieved over WCF and deserialized (we use protocol buffers for serialization). I'm using Red Gate's memory profiler to debug memory issues (the memory use didn't seem to match how much memory we should need "after" we're done initializing) and I end up with this report. What we can gather from the report is:
     1) Most of the memory .NET allocated is free (it may have been rightfully allocated during deserialisation, but now that it's free, I'd like it to be returned to the OS).
     2) The memory is fragmented (which is bad: every time I refresh the cache I need to redo the memory-hungry deserialisation process, and this in turn creates large objects that may throw an OutOfMemoryException due to fragmentation).
     3) I have no clue why the space is fragmented, because when I look at the large object heap there are only 30 instances: 15 object[] are directly attached to the GC and totally unrelated to me, 1 is a char array also attached directly to the GC heap, and the remaining 15 are mine but are not the cause of this, as I get the same report if I comment them out in code.
     So my question is, what can I do to go further with this? I'm not really sure what to look for in debugging or which tools to use, as it seems my memory is fragmented, but not by me, and huge amounts of free space are allocated by .NET which I can't release. Also please make sure you understand the question well before answering: I'm not looking for a way to free memory within .NET (GC.Collect), but to release memory that is already free in .NET back to the system, as well as to defragment said memory. Note that a slow solution is fine; if it's possible to manually defragment the large object heap I'd be all for it, as I can call it at the end of RefreshCache and it's fine if it takes a second or two to run. Thanks for your help! A few notes I forgot:
     1) The project is a .NET 2.0 website; I get the same results running it in a .NET 4 pool, and likewise if I run it in a .NET 4 pool, convert it to .NET 4 and recompile.
     2) These are the results of a release build, so a debug build cannot be the issue.
     3) And this is probably quite important: I do not get these issues at all in the web dev server, only in IIS. In the web dev server I get memory consumption rather close to my actual consumption (well, more, but not 5-10x more!).

  • B-trees, databases, sequential inputs, and speed.

    - by IanC
    I know from experience that b-trees have awful performance when data is added to them sequentially (regardless of the direction). However, when data is added randomly, best performance is obtained. This is easy to demonstrate with the likes of an RB-Tree. Sequential writes cause a maximum number of tree balances to be performed. I know very few databases use binary trees, but rather used n-order balanced trees. I logically assume they suffer a similar fate to binary trees when it comes to sequential inputs. This sparked my curiosity. If this is so, then one could deduce that writing sequential IDs (such as in IDENTITY(1,1)) would cause multiple re-balances of the tree to occur. I have seen many posts argue against GUIDs as "these will cause random writes". I never use GUIDs, but it struck me that this "bad" point was in fact a good point. So I decided to test it. Here is my code: SET ANSI_NULLS ON GO SET QUOTED_IDENTIFIER ON GO CREATE TABLE [dbo].[T1]( [ID] [int] NOT NULL CONSTRAINT [T1_1] PRIMARY KEY CLUSTERED ([ID] ASC) ) GO CREATE TABLE [dbo].[T2]( [ID] [uniqueidentifier] NOT NULL CONSTRAINT [T2_1] PRIMARY KEY CLUSTERED ([ID] ASC) ) GO declare @i int, @t1 datetime, @t2 datetime, @t3 datetime, @c char(300) set @t1 = GETDATE() set @i = 1 while @i < 2000 begin insert into T2 values (NEWID(), @c) set @i = @i + 1 end set @t2 = GETDATE() WAITFOR delay '0:0:10' set @t3 = GETDATE() set @i = 1 while @i < 2000 begin insert into T1 values (@i, @c) set @i = @i + 1 end select DATEDIFF(ms, @t1, @t2) AS [Int], DATEDIFF(ms, @t3, getdate()) AS [GUID] drop table T1 drop table T2 Note that I am not subtracting any time for the creation of the GUID nor for the considerably extra size of the row. The results on my machine were as follows: Int: 17,340 ms GUID: 6,746 ms This means that in this test, random inserts of 16 bytes was almost 3 times faster than sequential inserts of 4 bytes. Would anyone like to comment on this? Ps. I get that this isn't a question. It's an invite to discussion, and that is relevant to learning optimum programming.

  • Problem storing dynamic data in an NSMutableArray

    - by Rajendra Bhole
    I want to develop an application in which firstly i was develop an structure code for storing X and y axis. struct TCo_ordinates { float x; float y; }; . Then in drawRect method i generating an object of structure like. struct TCo_ordinates *tCoordianates; Now I drawing the graph of Y-Axis its code is. fltX1 = 30; fltY1 = 5; fltX2 = fltX1; fltY2 = 270; CGContextMoveToPoint(ctx, fltX1, fltY1); CGContextAddLineToPoint(ctx, fltX2, fltY2); NSArray *hoursInDays = [[NSArray alloc] initWithObjects:@"1",@"2",@"3",@"4",@"5",@"6",@"7",@"8",@"9",@"10",@"11",@"12", nil]; for(int intIndex = 0 ; intIndex < [hoursInDays count] ; fltY2-=20, intIndex++) { CGContextSetRGBStrokeColor(ctx, 2, 2, 2, 1); //CGContextSetRGBStrokeColor(ctx, 1.0f/255.0f, 1.0f/255.0f, 1.0f/255.0f, 1.0f); CGContextMoveToPoint(ctx, fltX1-3 , fltY2-40); CGContextAddLineToPoint(ctx, fltX1+3, fltY2-40); CGContextSelectFont(ctx, "Helvetica", 14.0, kCGEncodingMacRoman); CGContextSetTextDrawingMode(ctx, kCGTextFill); CGContextSetRGBFillColor(ctx, 0, 255, 255, 1); CGAffineTransform xform = CGAffineTransformMake( 1.0, 0.0, 0.0, -1.0, 0.0, 0.0); CGContextSetTextMatrix(ctx, xform); const char *arrayDataForYAxis = [[hoursInDays objectAtIndex:intIndex] UTF8String]; float x1 = fltX1-23; float y1 = fltY2-37; CGContextShowTextAtPoint(ctx, x1, y1, arrayDataForYAxis, strlen(arrayDataForYAxis)); CGContextStrokePath(ctx); Now i want to store generated the values of x1 and y1 in NSMutableArray dynamically, for that i was written the code. NSMutableArray *yAxisCoordinates = [[NSMutableArray alloc] autorelease]; for(int yObject = 0; yObject < intIndex; yObject++) { [yAxisCoordinates insertObject:(tCoordianates->x = x1,tCoordianates->y = y1) atIndex:yObject]; } But it didn't working. How i store the x1 and y1 values in yAxisCoordinates object. The above code is correct?????????????

  • No matching function error when inserting into a list in C++

    - by Josh Curren
    I am getting an error when I try to insert an item into a list (in C++). The error is that there is no matching function for call to the insert(). I also tried push_front() but got the same error. Here is the error message: main.cpp:38: error: no matching function for call to ‘std::list<Salesperson, std::allocator<Salesperson> >::insert(Salesperson&)’ /usr/lib/gcc/i686-pc-cygwin/4.3.4/include/c++/bits/list.tcc:99: note: candidates are: std::_List_iterator<_Tp> std::list<_Tp, _Alloc>::insert(std::_List_iterator<_Tp>, const _Tp&) [with _Tp = Salesperson, _Alloc = std::allocator<Salesperson>] /usr/lib/gcc/i686-pc-cygwin/4.3.4/include/c++/bits/stl_list.h:961: note: void std::list<_Tp, _Alloc>::insert(std::_List_iterator<_Tp>, size_t, const _Tp&) [with _Tp = Salesperson, _Alloc = std::allocator<Salesperson>] Here is the code: #include <stdlib.h> #include <iostream> #include <fstream> #include <string> #include <list> #include "Salesperson.h" #include "Salesperson.cpp" #include "OrderedList.h" #include "OrderedList.cpp" using namespace std; int main(int argc, char** argv) { cout << "\n------------ Asn 8 - Sales Report ------------" << endl; list<Salesperson> s; int id; string fName, lName; int numOfSales; string year; std::ifstream input("Sales.txt"); while( !std::getline(input, year, ',').eof() ) { input >> id; input >> lName; input >> fName; input >> numOfSales; Salesperson sp = Salesperson( id, fName, lName ); s.insert( sp ); //THIS IS LINE 38 ************************** for( int i = 0; i < numOfSales; i++ ) { double sale; input >> sale; sp.sales.insert( sale ); } } cout << endl; return (EXIT_SUCCESS); }
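
     The candidates in the error message show the problem: every std::list::insert overload takes an iterator for the position, and there is no single-argument form. The sketch below shows the minimal change, with a stripped-down stand-in for the real Salesperson class (its members here are assumptions); if sales is also a standard sequence container, the sp.sales.insert(sale) call further down needs the same treatment.

         #include <list>
         #include <string>

         struct Salesperson {                 // illustrative stand-in for the class from Salesperson.h
             int id;
             std::string first, last;
         };

         int main() {
             std::list<Salesperson> s;
             Salesperson sp = { 1, "Jane", "Doe" };
             s.push_back(sp);                 // appends at the end; equivalent to s.insert(s.end(), sp)
             return 0;
         }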

  • What can cause System.Move to occasionally give wrong results?

    - by Fredrik Loftheim
    The last few days we have had some strange problems with our database components developed by a third party. There has been no changes to these components for months. The code that HAS changed the last few days is our own code and we have also updated our gui-components developed by another third party. After debugging I have found that a call to System.Move in one of the database component procedures occationaly gives wrong results! Please take a look at the code below from the database components and read my comments. How can this inconsistent behaviour happen? Can anyone give me an idea of how to procede to find the cause of this inconsistent behaviour? NB! I dont think there is anything wrong with THIS code, it is only shown to explain the problem "symptoms". My guess is that there is some sort of memory corruption or something, caused by our code or the updated gui-component-code. Edit: Take a look at the blogpost linked below. It seems that it could be related to my problem. At least as I read it it confirms that System.Move can give wrong results: http://blog.excastle.com/2007/08/28/delphi-bug-of-the-day-fpu-stack-leak/ Procedure InternalDescribe; var cbufl: sb4; //sb4=LongInt cbuf: array[0..30] of char; cbufp: PChar; //.... begin //..Some code repeat //...Some code to initialize cbufp and cbufl //On the 15. iteration the values immediately Before Move are always these: //cbufp = 'STDPRODUCTSTOREDELEMENTSCOUNT' //cbuf = ('S', 'T', 'A', 'T', 'U', 'S', #0, 'E', 'V', 'A', 'R', 'R', 'E', 'C', 'I', 'D', #0, 'D', 'U', 'C', 'T', 'I', 'D', #0, #0, #0, #0, #0, #0, #0, #0) //cbufl = 29 Move(cbufp^, cbuf, cbufl); //Values immediately After Move should then be: //cbuf = ('S', 'T', 'D', 'P', 'R', 'O', 'D', 'U', 'C', 'T', 'S', 'T', 'O', 'R', 'E', 'D', 'E', 'L', 'E', 'M', 'E', 'N', 'T', 'S', 'C', 'O', 'U', 'N', 'T', #0, #0) //But sometimes this Move results in this value( 1 in 5..15 times): //cbuf = ('S', 'T', 'D', 'P', 'R', 'O', 'D', 'U', 'C', 'T', 'S', 'T', 'O', 'R', 'E', 'D', #0, #0, #0, #0, #0, 'N', 'T', 'S', 'C', 'O', 'U', 'N', 'T', #0, #0) } until SomeCondition; //...Some more code end;

  • No matter what, I can't get this stupid progress bar to update from a thread!

    - by Synthetix
    I have a Windows app written in C (using gcc/MinGW) that works pretty well except for a few UI problems. One, I simply cannot get the progress bar to update from a thread. In fact, I probably can't get ANY UI stuff to update. Basically, I have a spawned thread that does some processing, and from that thread I attempt to update the progress bar in the main thread. I tried this by using PostMessage() to the main hwnd, but no luck even though I can do other things like open message boxes. However, it's unclear whether the message box is getting called within the thread or on the main thread. Here's some code: //in header/globally accessible HWND wnd; //main application window HWND progress_bar; //progress bar typedef struct { //to pass to thread DWORD mainThreadId; HWND mainHwnd; char *filename; } THREADSTUFF; //callback function LRESULT CALLBACK WndProc(HWND hwnd, UINT msg, WPARAM wParam, LPARAM lParam){ switch(msg){ case WM_CREATE:{ //create progress bar progress_bar = CreateWindowEx( 0, PROGRESS_CLASS, (LPCTSTR)NULL, WS_CHILD | WS_VISIBLE, 79,164,455,15, hwnd, (HMENU)20, NULL, NULL); break; } case WM_COMMAND:{ if(LOWORD(wParam)==2){ //do some processing in a thread //struct of stuff I need to pass to thread THREADSTUFF *threadStuff; threadStuff = (THREADSTUFF*)malloc(sizeof(*threadStuff)); threadStuff->mainThreadId = GetCurrentThreadId(); threadStuff->mainHwnd = hwnd; threadStuff->filename = (void*)&filename; hThread1 = CreateThread(NULL,0,convertFile (LPVOID)threadStuff,0,NULL); }else if(LOWORD(wParam)==5){ //update progress bar MessageBox(hwnd,"I got a message!", "Message", MB_OK | MB_ICONINFORMATION); PostMessage(progress_bar,PBM_STEPIT,0,CLR_DEFAULT); } break; } } } This all seems to work okay. The problem is in the thread: DWORD WINAPI convertFile(LPVOID params){ //get passed params, this works perfectly fine THREADSTUFF *tData = (THREADSTUFF*)params; MessageBox(tData->mainHwnd,tData->filename,"File name",MB_OK | MB_ICONINFORMATION); //yep PostThreadMessage(tData->mainThreadId,WM_COMMAND,5,0); //only shows message PostMessage(tData->mainHwnd,WM_COMMAND,5,0); //only shows message } When I say, "only shows message," that means the MessageBox() function in the callback works, but not the PostMessage() to update the position of the progress bar. What am I missing?
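
     A sketch of the usual pattern rather than a drop-in fix: the worker thread never touches a control, it only posts a private WM_APP-range message to the main window, and the main thread forwards PBM_STEPIT to the bar. The message name, the 100-step loop and passing the HWND as the thread parameter are assumptions made for the sketch. It is also worth confirming that InitCommonControlsEx was called with ICC_PROGRESS_CLASS before creating the PROGRESS_CLASS window, and that the real CreateThread call has a comma between convertFile and its parameter (the excerpt above is missing one).

         #include <windows.h>
         #include <commctrl.h>

         #define WM_WORK_PROGRESS (WM_APP + 1)     // private message: "one step of work done"

         DWORD WINAPI convertFile(LPVOID params) {
             HWND mainHwnd = (HWND)params;         // main window handle passed to CreateThread
             for (int step = 0; step < 100; ++step) {
                 Sleep(10);                        // placeholder for one unit of real work
                 PostMessage(mainHwnd, WM_WORK_PROGRESS, 0, 0);
             }
             return 0;
         }

         /* In the main window's WndProc, which runs on the UI thread:
            case WM_WORK_PROGRESS:
                SendMessage(progress_bar, PBM_STEPIT, 0, 0);
                return 0;
         */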

  • Java program grading

    - by pasito15
    I've been working on this program for hours and I can't figure out how to get the program to actually print the grades from the scores Text file public class Assign7{ private double finalScore; private double private_quiz1; private double private_quiz2; private double private_midTerm; private double private_final; private final char grade; public Assign7(double finalScore){ private_quiz1 = 1.25; private_quiz2 = 1.25; private_midTerm = 0.25; private_final = 0.50; if (finalScore >= 90) { grade = 'A'; } else if (finalScore >= 80) { grade = 'B'; } else if (finalScore >= 70) { grade = 'C'; } else if (finalScore>= 60) { grade = 'D'; } else { grade = 'F'; } } public String toString(){ return finalScore+":"+private_quiz1+":"+private_quiz2+":"+private_midTerm+":"+private_final; } } this code compiles as well as this one import java.util.*; import java.io.*; public class Assign7Test{ public static void main(String[] args)throws Exception{ int q1,q2; int m = 0; int f = 0; int Record ; String name; Scanner myIn = new Scanner( new File("scores.txt") ); System.out.println( myIn.nextLine() +" avg "+"letter"); while( myIn.hasNext() ){ name = myIn.next(); q1 = myIn.nextInt(); q2 = myIn.nextInt(); m = myIn.nextInt(); f = myIn.nextInt(); Record myR = new Record( name, q1,q2,m,f); System.out.println(myR); } } public static class Record { public Record() { } public Record(String name, int q1, int q2, int m, int f) { } } } once a compile the code i get this which dosent exactly compute the numbers I have in the scores.txt Name quiz1 quiz2 midterm final avg letter Assign7Test$Record@4bcc946b Assign7Test$Record@642423 Exception in thread "main" java.until.InputMismatchException at java.until.Scanner.throwFor(Unknown Source) at java.until.Scanner.next(Unknown Source) at java.until.Scanner.nextInt(Unknown Source) at java.until.Scanner.nextInt(Unknown Source) at Assign7Test.main(Assign7Test.java:25)

  • Linux termios VTIME not working?

    - by San Jacinto
    We've been bashing our heads off of this one all morning. We've got some serial lines setup between an embedded linux device and an Ubuntu box. Our reads are getting screwed up because our code usually returns two (sometimes more, sometimes exactly one) message reads instead of one message read per actual message sent. Here is the code that opens the serial port. InterCharTime is set to 4. void COMBaseClass::OpenPort() { cerr<< "openning port"<< port <<"\n"; struct termios newtio; this->fd = -1; int fdTemp; fdTemp = open( port, O_RDWR | O_NOCTTY); if (fdTemp < 0) { portOpen = 0; cerr<<"problem openning "<< port <<". Retrying"<<endl; usleep(1000000); return; } newtio.c_cflag = BaudRate | CS8 | CLOCAL | CREAD ;//| StopBits; newtio.c_iflag = IGNPAR; newtio.c_oflag = 0; /* set input mode (non-canonical, no echo,...) */ newtio.c_lflag = 0; newtio.c_cc[VTIME] = InterCharTime; /* inter-character timer in .1 secs */ newtio.c_cc[VMIN] = readBufferSize; /* blocking read until 1 char received */ tcflush(fdTemp, TCIFLUSH); tcsetattr(fdTemp,TCSANOW,&newtio); this->fd = fdTemp; portOpen = 1; } The other end is configured similarly for communication, and has one small section of particular iterest: while (1) { sprintf(out, "\r\nHello world %lu", ++ulCount); puts(out); WritePort((BYTE *)out, strlen(out)+1); sleep(2); } //while Now, when I run a read thread on the receiving machine, "hello world" is usually broken up over a couple messages. Here is some sample output: 1: Hello 2: world 1 3: Hello 4: world 2 5: Hello 6: world 3 where number followed by a colon is one message recieved. Can you see any error we are making? Thank you. Edit: For clarity, please view section 3.2 of this resource href="http://www.faqs.org/docs/Linux-HOWTO/Serial-Programming-HOWTO.html. To my understanding, with a VTIME of a couple seconds (meaning vtime is set anywhere between 10 and 50, trial-and-error), and a VMIN of 1, there should be no reason that the message is broken up over two separate messages.
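
     A point worth making explicit, as a sketch rather than a definitive fix: VMIN and VTIME only control when read() returns, not where one logical message ends, so a serial read can quite legitimately hand back half of "Hello world N". The usual remedy is to accumulate bytes until the sender's terminator. The NUL terminator assumed below matches the WritePort(..., strlen(out)+1) call on the sending side, but that is an assumption about the protocol, and the helper name read_message is illustrative.

         #include <sys/types.h>
         #include <unistd.h>

         // Read one NUL-terminated message from fd into buf (capacity len).
         // Returns the message length excluding the terminator, or -1 on a read error.
         ssize_t read_message(int fd, char *buf, size_t len) {
             size_t used = 0;
             while (used + 1 < len) {
                 char c;
                 ssize_t n = read(fd, &c, 1);
                 if (n < 0)
                     return -1;
                 if (n == 0)
                     continue;                 // inter-character timer expired with nothing new
                 if (c == '\0')
                     break;                    // end of one message
                 buf[used++] = c;
             }
             buf[used] = '\0';
             return (ssize_t)used;
         }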

  • JNA array structure

    - by Burny
    I want to use a dll (IEC driver) in Java, for that I am using JNA. The problem in pseudo code: start the server allocate new memory for an array (JNA) client connect writing values from an array to the memory sending this array to the client client disconnect new client connect allocate new memory for an array (JNA) - JVM crash (EXCEPTION_ACCESS_VIOLATION) The JVM crash not by primitve data types and if the values will not writing from the array to the memory. the code in c: struct DataAttributeData CrvPtsArrayDAData = {0}; CrvPtsArrayDAData.ucType = DATATYPE_ARRAY; CrvPtsArrayDAData.pvData = XYValDAData; XYValDAData[0].ucType = FLOAT; XYValDAData[0].uiBitLength = sizeof(Float32)*8; XYValDAData[0].pvData = &(inUpdateValue.xVal); XYValDAData[1].ucType = FLOAT; XYValDAData[1].uiBitLength = sizeof(Float32)*8; XYValDAData[1].pvData = &(inUpdateValue.yVal); Send(&CrvPtsArrayDAData, 1); the code in Java: DataAttributeData[] data_array = (DataAttributeData[]) new DataAttributeData() .toArray(d.bitLength); for (DataAttributeData d_temp : data_array) { d_temp.data = new Memory(size / 8); d_temp.type = type_iec; d_temp.bitLength = size; d_temp.write(); } d.data = data_array[0].getPointer(); And then writing values whith this code: for (int i = 0; i < arraySize; i++) { DataAttributeData dataAttr = new DataAttributeData(d.data.share(i * d.size())); dataAttr.read(); dataAttr.data.setFloat(0, f[i]); dataAttr.write(); } the struct in c: struct DataAttributeData{ unsigned char ucType; int iArrayIndex; unsigned int uiBitLength; void * pvData;}; the struct in java: public static class DataAttributeData extends Structure { public DataAttributeData(Pointer p) { // TODO Auto-generated constructor stub super(p); } public DataAttributeData() { // TODO Auto-generated constructor stub super(); } public byte type; public int iArrayIndex; public int bitLength; public Pointer data; @Override protected List<String> getFieldOrder() { // TODO Auto-generated method stub return Arrays.asList(new String[] { "type", "iArrayIndex", "bitLength", "data" }); } } Can anybody help me?

  • Non-blocking TCP acceptor not reading from socket

    - by Abruzzo Forte e Gentile
    I have the code below implementing a NON-Blocking TCP acceptor. Clients are able to connect without any problem and the writing seems occurring as well, but the acceptor doesn't read anything from the socket and the call to read() blocks indefinitely. Am I using some wrong setting for the acceptor? Kind Regards AFG int main(){ create_programming_socket(); poll_programming_connect(); while(1){ poll_programming_read(); } } int create_programming_socket(){ int cnt = 0; p_listen_socket = socket( AF_INET, SOCK_STREAM, 0 ); if( p_listen_socket < 0 ){ return 1; } int flags = fcntl( p_listen_socket, F_GETFL, 0 ); if( fcntl( p_listen_socket, F_SETFL, flags | O_NONBLOCK ) == -1 ){ return 1; } bzero( (char*)&p_serv_addr, sizeof(p_serv_addr) ); p_serv_addr.sin_family = AF_INET; p_serv_addr.sin_addr.s_addr = INADDR_ANY; p_serv_addr.sin_port = htons( p_port ); if( bind( p_listen_socket, (struct sockaddr*)&p_serv_addr , sizeof(p_serv_addr) ) < 0 ) { return 1; } listen( p_listen_socket, 5 ); return 0; } int poll_programming_connect(){ int retval = 0; static socklen_t p_clilen = sizeof(p_cli_addr); int res = accept( p_listen_socket, (struct sockaddr*)&p_cli_addr, &p_clilen ); if( res > 0 ){ p_conn_socket = res; int flags = fcntl( p_conn_socket, F_GETFL, 0 ); if( fcntl( p_conn_socket, F_SETFL, flags | O_NONBLOCK ) == -1 ){ retval = 1; }else{ p_connected = true; } }else if( res == -1 && ( errno == EWOULDBLOCK || errno == EAGAIN ) ) { //printf( "poll_sock(): accept(c_listen_socket) would block\n"); }else{ retval = 1; } return retval; } int poll_programming_read(){ int retval = 0; bzero( p_buffer, 256 ); int numbytes = read( p_conn_socket, p_buffer, 255 ); if( numbytes > 0 ) { fprintf( stderr, "poll_sock(): read() read %d bytes\n", numbytes ); pkt_struct2_t tx_buf; int fred; int i; } else if( numbytes == -1 && ( errno == EWOULDBLOCK || errno == EAGAIN ) ) { //printf( "poll_sock(): read() would block\n"); } else { close( p_conn_socket ); p_connected = false; retval = 1; } return retval; }
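
     One hypothesis to rule out, not a confirmed diagnosis: poll_programming_connect() runs only once, before the read loop, and the listening socket is non-blocking, so if no client is connected at that instant p_conn_socket is never assigned. If the global happens to start out as 0, read(p_conn_socket, ...) then reads file descriptor 0 (stdin) and blocks forever. A sketch of the guard follows; the -1 sentinel and retrying the accept inside the main loop are the suggested changes.

         #include <cerrno>
         #include <unistd.h>

         int p_conn_socket = -1;                   // 0 would silently alias stdin

         int poll_programming_read() {
             if (p_conn_socket < 0)
                 return 0;                         // no client yet; keep polling accept()
             char buffer[256] = {0};
             int numbytes = read(p_conn_socket, buffer, sizeof(buffer) - 1);
             if (numbytes > 0) {
                 /* handle the received bytes */
             } else if (numbytes == -1 && (errno == EWOULDBLOCK || errno == EAGAIN)) {
                 /* nothing available yet on the non-blocking socket */
             } else {
                 close(p_conn_socket);             // peer closed, or a real error
                 p_conn_socket = -1;
                 return 1;
             }
             return 0;
         }

         /* and in main():  while (1) { if (p_conn_socket < 0) poll_programming_connect(); poll_programming_read(); } */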

  • Illegal Instruction When Programming C++ on Linux

    - by remagen
    Heyo, My program, which does exactly the same thing every time it runs (moves a point sprite into the distance) will randomly fail with the text on the terminal 'Illegal Instruction'. My googling has found people encountering this when writing assembly which makes sense because assembly throws those kinds of errors. But why would g++ be generating an illegal instruction like this? It's not like I'm compiling for Windows then running on Linux (which even then, as long as both are on x86 shouldn't AFAIK cause an Illegal Instruction). I'll post the main file below. I can't reliably reproduce the error. Although, if I make random changes (add a space here, change a constant there) that force a recompile I can get a binary which will fail with Illegal Instruction every time it is run, until I try setting a break point, which makes the illegal instruction 'dissapear'. :( #include <stdio.h> #include <stdlib.h> #include <GL/gl.h> #include <GL/glu.h> #include <SDL/SDL.h> #include "Screen.h" //Simple SDL wrapper #include "Textures.h" //Simple OpenGL texture wrapper #include "PointSprites.h" //Simple point sprites wrapper double counter = 0; /* Here goes our drawing code */ int drawGLScene() { /* These are to calculate our fps */ static GLint T0 = 0; static GLint Frames = 0; /* Move Left 1.5 Units And Into The Screen 6.0 */ glLoadIdentity(); glTranslatef(0.0f, 0.0f, -6); glClear(GL_COLOR_BUFFER_BIT | GL_DEPTH_BUFFER_BIT | GL_STENCIL_BUFFER_BIT); glEnable(GL_POINT_SPRITE_ARB); glTexEnvi(GL_POINT_SPRITE, GL_COORD_REPLACE, GL_TRUE); glBegin( GL_POINTS ); /* Drawing Using Triangles */ glVertex3d(0.0,0.0, 0); glVertex3d(1.0,0.0, 0); glVertex3d(1.0,1.0, counter); glVertex3d(0.0,1.0, 0); glEnd( ); /* Finished Drawing The Triangle */ /* Move Right 3 Units */ /* Draw it to the screen */ SDL_GL_SwapBuffers( ); /* Gather our frames per second */ Frames++; { GLint t = SDL_GetTicks(); if (t - T0 >= 50) { GLfloat seconds = (t - T0) / 1000.0; GLfloat fps = Frames / seconds; printf("%d frames in %g seconds = %g FPS\n", Frames, seconds, fps); T0 = t; Frames = 0; counter -= .1; } } return 1; } GLuint objectID; int main( int argc, char **argv ) { Screen screen; screen.init(); screen.resize(800,600); LoadBMP("./dist/Debug/GNU-Linux-x86/particle.bmp"); InitPointSprites(); while(true){drawGLScene();} }

  • FILE_NOT_FOUND when trying to open a COM port in C++

    - by Moutabreath
    I am trying to open a com port for reading and writing using C++ but I can't seem to pass the first stage of actually opening it. I get an INVALID_HANDLE_VALUE on the handle with GetLastError FILE_NOT_FOUND. I have searched around the web for a couple of days I'm fresh out of ideas. I have searched through all the questions regarding COM on this website too. I have scanned through the existing ports (or so I believe) to get the name of the port right. I also tried combinations of _T("COM1") with the slashes, without the slashes, with colon, without colon and without the _T I'm using windows 7 on 64 bit machine. this is the code i got I'll be glad for any input on this void SendToCom(char* data, int len) { DWORD cbNeeded = 0; DWORD dwPorts = 0; EnumPorts(NULL, 1, NULL, 0, &cbNeeded, &dwPorts); //What will be the return value BOOL bSuccess = FALSE; LPCSTR COM1 ; BYTE* pPorts = static_cast<BYTE*>(malloc(cbNeeded)); bSuccess = EnumPorts(NULL, 1, pPorts, cbNeeded, &cbNeeded, &dwPorts); if (bSuccess){ PORT_INFO_1* pPortInfo = reinterpret_cast<PORT_INFO_1*>(pPorts); for (DWORD i=0; i<dwPorts; i++) { //If it looks like "COMX" then size_t nLen = _tcslen(pPortInfo->pName); if (nLen > 3) { if ((_tcsnicmp(pPortInfo->pName, _T("COM"), 3) == 0) ){ COM1 =pPortInfo->pName; //COM1 ="\\\\.\\COM1"; HANDLE m_hCommPort = CreateFile( COM1 , GENERIC_READ|GENERIC_WRITE, // access ( read and write) 0, // (share) 0:cannot share the COM port NULL, // security (None) OPEN_EXISTING, // creation : open_existing FILE_FLAG_OVERLAPPED, // we want overlapped operation NULL // no templates file for COM port... ); if (m_hCommPort==INVALID_HANDLE_VALUE) { DWORD err = GetLastError(); if (err == ERROR_FILE_NOT_FOUND) { MessageBox(hWnd,"ERROR_FILE_NOT_FOUND",NULL,MB_ABORTRETRYIGNORE); } else if(err == ERROR_INVALID_NAME) { MessageBox(hWnd,"ERROR_INVALID_NAME",NULL,MB_ABORTRETRYIGNORE); } else { MessageBox(hWnd,"unkown error",NULL,MB_ABORTRETRYIGNORE); } } else{ WriteAndReadPort(m_hCommPort,data); } } pPortInfo++; } } } }
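
     Two things commonly bite here, offered as suggestions to try rather than a certain diagnosis: the names EnumPorts reports for local serial ports usually carry a trailing colon ("COM1:"), which CreateFile will not accept as-is, and ports numbered COM10 and above can only be opened through the device namespace. The sketch below strips the colon and adds the \\.\ prefix; the helper name open_com_port is illustrative.

         #include <windows.h>
         #include <string>

         HANDLE open_com_port(const std::string& name) {          // e.g. "COM1" or "COM1:"
             std::string clean = name;
             if (!clean.empty() && clean[clean.size() - 1] == ':')
                 clean.erase(clean.size() - 1);                   // drop the trailing colon from EnumPorts
             std::string path = "\\\\.\\" + clean;                // device namespace form: \\.\COM1
             return CreateFileA(path.c_str(),
                                GENERIC_READ | GENERIC_WRITE,
                                0,                                // COM ports cannot be shared
                                NULL,
                                OPEN_EXISTING,
                                FILE_FLAG_OVERLAPPED,
                                NULL);
         }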

  • What is wrong with this C++ code?

    - by mr.bio
    Hi .. i am a beginner and i have a problem : this code doesnt compile : main.cpp: #include <stdlib.h> #include "readdir.h" #include "mysql.h" #include "readimage.h" int main(int argc, char** argv) { if (argc>1){ readdir(argv[1]); // test(); return (EXIT_SUCCESS); } std::cout << "Bitte Pfad angeben !" << std::endl ; return (EXIT_FAILURE); } readimage.cpp #include <Magick++.h> #include <iostream> #include <vector> using namespace Magick; using namespace std; void readImage(std::vector<string> &filenames) { for (unsigned int i = 0; i < filenames.size(); ++i) { try { Image img("binary/" + filenames.at(i)); for (unsigned int y = 1; y < img.rows(); y++) { for (unsigned int x = 1; x < img.columns(); x++) { ColorRGB rgb(img.pixelColor(x, y)); // cout << "x: " << x << " y: " << y << " : " << rgb.red() << endl; } } cout << "done " << i << endl; } catch (Magick::Exception & error) { cerr << "Caught Magick++ exception: " << error.what() << endl; } } } readimage.h #ifndef _READIMAGE_H #define _READIMAGE_H #include <Magick++.h> #include <iostream> #include <vector> #include <string> using namespace Magick; using namespace std; void readImage(vector<string> &filenames) #endif /* _READIMAGE_H */ If want to compile it with this code : g++ main.cpp Magick++-config --cflags --cppflags --ldflags --libs readimage.cpp i get this error message : main.cpp:5: error: expected initializer before ‘int’ i have no clue , why ? :( Can somebody help me ? :)
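
     The "expected initializer before 'int'" at main.cpp:5 is consistent with a declaration in one of the headers not being terminated: in readimage.h the prototype void readImage(vector<string> &filenames) has no trailing semicolon, so the next declaration the compiler sees (int main in main.cpp) is parsed as a continuation of it. A corrected header could look like the sketch below; dropping the using-directives from the header is an optional extra clean-up, not required for the fix.

         #ifndef _READIMAGE_H
         #define _READIMAGE_H

         #include <string>
         #include <vector>

         void readImage(std::vector<std::string> &filenames);   // the missing ';' was the error

         #endif /* _READIMAGE_H */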

  • Can't insert a number into a C++ custom streambuf/ostream

    - by 0xbe5077ed
    I have written a custom std::basic_streambuf and std::basic_ostream because I want an output stream that I can get a JNI string from in a manner similar to how you can call std::ostringstream::str(). These classes are quite simple. namespace myns { class jni_utf16_streambuf : public std::basic_streambuf<char16_t> { JNIEnv * d_env; std::vector<char16_t> d_buf; virtual int_type overflow(int_type); public: jni_utf16_streambuf(JNIEnv *); jstring jstr() const; }; typedef std::basic_ostream<char16_t, std::char_traits<char16_t>> utf16_ostream; class jni_utf16_ostream : public utf16_ostream { jni_utf16_streambuf d_buf; public: jni_utf16_ostream(JNIEnv *); jstring jstr() const; }; // ... } // namespace myns In addition, I have made four overloads of operator<<, all in the same namespace: namespace myns { // ... utf16_ostream& operator<<(utf16_ostream&, jstring) throw(std::bad_cast); utf16_ostream& operator<<(utf16_ostream&, const char *); utf16_ostream& operator<<(utf16_ostream&, const jni_utf16_string_region&); jni_utf16_ostream& operator<<(jni_utf16_ostream&, jstring); // ... } // namespace myns The implementation of jni_utf16_streambuf::overflow(int_type) is trivial. It just doubles the buffer width, puts the requested character, and sets the base, put, and end pointers correctly. It is tested and I am quite sure it works. The jni_utf16_ostream works fine inserting unicode characters. For example, this works fine and results in the stream containing "hello, world": myns::jni_utf16_ostream o(env); o << u"hello, wor" << u'l' << u'd'; My problem is as soon as I try to insert an integer value, the stream's bad bit gets set, for example: myns::jni_utf16_ostream o(env); if (o.badbit()) throw "bad bit before"; // does not throw int32_t x(5); o << x; if (o.badbit()) throw "bad bit after"; // throws :( I don't understand why this is happening! Is there some other method on std::basic_streambuf I need to be implementing????
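
     One common explanation, which would need verifying against this code: formatted insertion of an int goes through locale facets (std::num_put and std::ctype) specialised for the stream's character type, and the default locale is only required to provide them for char and wchar_t, so the facet lookup fails for char16_t and the stream sets badbit. A workaround that sidesteps the facet machinery entirely is to format with a narrow stream and widen the digits; the helper name to_u16 is an illustrative choice.

         #include <sstream>
         #include <string>

         // Format an integer as UTF-16 text without relying on char16_t locale facets.
         std::u16string to_u16(long value) {
             std::ostringstream narrow;
             narrow << value;                                  // char-based formatting always has facets
             const std::string s = narrow.str();
             return std::u16string(s.begin(), s.end());        // digits and '-' are plain ASCII
         }

         // usage with the stream from the question:  o << to_u16(x).c_str();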
