Search Results

Search found 188 results on 8 pages for 'memset'.

Page 3/8 | < Previous Page | 1 2 3 4 5 6 7 8  | Next Page >

  • UTF-8 to Unicode conversion

    - by sandeep
    Hi, I am having problems converting UTF-8 to Unicode. Below is the code: int charset_convert( char * string, char * to_string,char* charset_from, char* charset_to) { char *from_buf, *to_buf, *pointer; size_t inbytesleft, outbytesleft, ret; size_t TotalLen; iconv_t cd; if (!charset_from || !charset_to || !string) /* sanity check */ return -1; if (strlen(string) < 1) return 0; /* we are done, nothing to convert */ cd = iconv_open(charset_to, charset_from); /* Did I succeed in getting a conversion descriptor ? */ if (cd == (iconv_t)(-1)) { /* I guess not */ printf("Failed to convert string from %s to %s ", charset_from, charset_to); return -1; } from_buf = string; inbytesleft = strlen(string); /* allocate max sized buffer, assuming target encoding may be 4 byte unicode */ outbytesleft = inbytesleft * 4; pointer = to_buf = (char *)malloc(outbytesleft); memset(to_buf,0,outbytesleft); memset(pointer,0,outbytesleft); ret = iconv(cd, &from_buf, &inbytesleft, &pointer, &outbytesleft); memcpy(to_string, to_buf, (pointer - to_buf)); } main(): int main() { char UTF []= {'A', 'B'}; char Unicode[1024]= {0}; char* ptr; int x=0; iconv_t cd; charset_convert(UTF,Unicode,"UTF-8","UNICODE"); ptr = Unicode; while(*ptr != '\0') { printf("Unicode %x \n",*ptr); ptr++; } return 0; } It should give A and B but I am getting: ffffffff fffffffe 41 Thanks, Sandeep
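
    That output is actually well-formed UTF-16: ff fe is the byte-order mark that the "UNICODE" target emits, 41 00 is 'A' in UTF-16LE, and the while loop stops at that embedded 00 byte before 'B' is ever reached (the values print sign-extended because char is signed). A minimal sketch of the fixes, assuming charset_convert is changed to return the number of bytes written (the out_len below is hypothetical):

        char UTF[] = "AB";                 /* NUL-terminated, so strlen() is defined */
        char Unicode[1024] = {0};
        /* "UTF-16LE" emits no BOM, unlike "UNICODE" */
        int out_len = charset_convert(UTF, Unicode, "UTF-8", "UTF-16LE");
        for (int i = 0; i < out_len; i++)  /* walk by length: UTF-16 is full of 0 bytes */
            printf("%02x ", (unsigned char)Unicode[i]);  /* cast avoids sign extension */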

    Read the article

  • select always returns -1 while trying to read from socket and stdin

    - by Aleyna
    Hello, I have the following code implemented in C++ (Linux) to check on my listening socket and stdin using select. select, however, keeps returning -1 no matter what I try! What's wrong with this code? I will appreciate any help. Thanks highsock = m_sock; /* listening socket */ memset((char *) &connectlist, 0, sizeof(connectlist)); memset((char *) &socks, 0, sizeof(socks)); int readsocks; struct timeval timeout; timeout.tv_sec = 60; timeout.tv_usec = 0; while (1) { updateSelectList(); /* cout << "highest sock: " << highsock << endl; */ tempreadset = readset; readsocks = select(highsock+1, &tempreadset, NULL, NULL, &timeout); /* cout << "# ready: " << readsocks << endl; */ if (readsocks < 0) { if (errno == EINTR) continue; cout << "select error" << endl; } if (readsocks > 0) { readFromSocks(); } } void readFromSocks() { if (FD_ISSET(STDIN, &readset)) { ... } else if (FD_ISSET(m_sock, &readset)) { ... } } void updateSelectList() { FD_ZERO(&readset); FD_SET(STDIN, &readset); FD_SET(m_sock, &readset); for (int i=0; i<MAXCONNECTIONS; i++) { if (connectlist[i] != 0) { FD_SET(connectlist[i], &readset); if (connectlist[i] > highsock) highsock = connectlist[i]; } } highsock = max(max(m_sock, STDIN), highsock); }
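
    A persistent -1 cannot be diagnosed without printing errno, so that is the first step. Two further points, hedged since the surrounding class code is not shown: Linux's select() modifies its timeout argument, so the struct must be re-armed on every pass, and readFromSocks() tests readset while select() actually filled in tempreadset, so even a successful call would inspect the wrong set (STDIN is also not a standard macro; STDIN_FILENO from <unistd.h> is). A sketch of the loop with those changes:

        while (1) {
            updateSelectList();
            tempreadset = readset;
            timeout.tv_sec = 60;            /* re-arm: select() may have zeroed it */
            timeout.tv_usec = 0;
            readsocks = select(highsock + 1, &tempreadset, NULL, NULL, &timeout);
            if (readsocks < 0) {
                if (errno == EINTR) continue;
                cout << "select error: " << strerror(errno) << endl; /* name the cause */
                break;                      /* e.g. EBADF would otherwise loop forever */
            }
            if (readsocks > 0) readFromSocks();  /* and test tempreadset inside */
        }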

    Read the article

  • create an independent hidden process

    - by Jessica
    I'm creating an application with its main window hidden by using the following code: STARTUPINFO siStartupInfo; PROCESS_INFORMATION piProcessInfo; memset(&siStartupInfo, 0, sizeof(siStartupInfo)); memset(&piProcessInfo, 0, sizeof(piProcessInfo)); siStartupInfo.cb = sizeof(siStartupInfo); siStartupInfo.dwFlags = STARTF_USESHOWWINDOW | STARTF_FORCEOFFFEEDBACK | STARTF_USESTDHANDLES; siStartupInfo.wShowWindow = SW_HIDE; if(CreateProcess(MyApplication, "", 0, 0, FALSE, 0, 0, 0, &siStartupInfo, &piProcessInfo) == FALSE) { /* blah */ return 0; } Everything works correctly except my main application (the one calling this code) window loses focus when I open the new program. I tried lowering the priority of the new process but the focus problem is still there. Is there any way to avoid this? Furthermore, is there any way to create another process without using CreateProcess (or any of the APIs that call CreateProcess, like ShellExecute)? My guess is that my app is losing focus because it was given to the new process, even when it's hidden. To those of you curious out there who will certainly ask the usual "why do you want to do this", my answer is because I have a watchdog process that cannot be a service and it gets started whenever I open my main application. Satisfied? Thanks for the help. Code will be appreciated. Jess.
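
    One concrete bug, independent of the focus question: the flags include STARTF_USESTDHANDLES, but hStdInput/hStdOutput/hStdError are never filled in, so the child inherits indeterminate handles. Dropping the flag is safe here; it may not cure the focus steal (Windows grants foreground rights to a freshly launched process), but it removes an unrelated failure source. A sketch:

        /* only name the flags whose fields are actually populated */
        siStartupInfo.dwFlags = STARTF_USESHOWWINDOW | STARTF_FORCEOFFFEEDBACK;
        siStartupInfo.wShowWindow = SW_HIDE;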

    Read the article

  • Access array of c-structs using Python ctypes

    - by sadris
    I have a C function that allocates memory at the address passed to it and is accessed via Python. The pointed-to memory does contain an array of structs in the C code, but I am unable to get ctypes to access the array properly beyond the 0th element. How can I get the proper memory offset to be able to access the non-zero elements? ctypes.memset complains about TypeErrors if I try to use it. typedef struct td_Group { unsigned int group_id; char groupname[256]; char date_created[32]; char date_modified[32]; unsigned int user_modified; unsigned int user_created; } Group; int getGroups(LIBmanager * handler, Group ** unallocatedPointer); Python code below: class Group(Structure): _fields_ = [("group_id", c_uint), ("groupname", c_char*256), ("date_created", c_char*32), ("date_modified", c_char*32), ("user_modified", c_uint), ("user_created", c_uint)] myGroups = c_void_p() count = libnativetest.getGroups( nativePointer, byref(myGroups) ) casted = cast( myGroups, POINTER(Group*count) ) for x in range(0,count): theGroup = cast( casted[x], POINTER(Group) ) print "~~~~~~~~~~" + theGroup.contents.groupname # this only works for the first entry in the array Related: Access c_char_p_Array_256 in Python using ctypes

    Read the article

  • Problem with passing array of pointers to struct among functions in C

    - by karatemonkey
    The code that follows segfaults on the call to strncpy and I can't see what I am doing wrong. I need another set of eyes to look at this. Essentially I am trying to allocate memory for a struct that is pointed to by an element in an array of pointers to struct. #include <stdio.h> #include <stdlib.h> #include <string.h> #define MAX_POLICY_NAME_SIZE 64 #define POLICY_FILES_TO_BE_PROCESSED "SPFPolicyFilesReceivedOffline\0" typedef struct TarPolicyPair { int AppearanceTime; char *IndividualFile; char *FullPolicyFile; } PolicyPair; enum { bwlist = 0, fzacts, atksig, rules, MaxNumberFileTypes }; void SPFCreateIndividualPolicyListing(PolicyPair *IndividualPolicyPairtoCreate ) { IndividualPolicyPairtoCreate = (PolicyPair *) malloc(sizeof(PolicyPair)); IndividualPolicyPairtoCreate->IndividualFile = (char *)malloc((MAX_POLICY_NAME_SIZE * sizeof(char))); IndividualPolicyPairtoCreate->FullPolicyFile = (char *)malloc((MAX_POLICY_NAME_SIZE * sizeof(char))); IndividualPolicyPairtoCreate->AppearanceTime = 0; memset(IndividualPolicyPairtoCreate->IndividualFile, '\0', (MAX_POLICY_NAME_SIZE * sizeof(char))); memset(IndividualPolicyPairtoCreate->FullPolicyFile, '\0', (MAX_POLICY_NAME_SIZE * sizeof(char))); } void SPFCreateFullPolicyListing(SPFPolicyPair **CurrentPolicyPair, char *PolicyName, char *PolicyRename) { int i; for(i = 0; i < MaxNumberFileTypes; i++) { CreateIndividualPolicyListing((CurrentPolicyPair[i])); /* segfaults on this call */ strncpy((*CurrentPolicyPair)[i].IndividualFile, POLICY_FILES_TO_BE_PROCESSED, (SPF_POLICY_NAME_SIZE * sizeof(char))); } } int main() { SPFPolicyPair *CurrentPolicyPair[MaxNumberFileTypes] = {NULL, NULL, NULL, NULL}; int i; CreateFullPolicyListing(&CurrentPolicyPair, POLICY_FILES_TO_BE_PROCESSED, POLICY_FILES_TO_BE_PROCESSED); return 0; }
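
    The crash follows from pass-by-value: SPFCreateIndividualPolicyListing receives a copy of the caller's pointer, assigns malloc's result to that copy, and the caller's array slot stays NULL, so the later strncpy dereferences NULL (the mismatched names, SPFPolicyPair vs. PolicyPair and so on, look like transcription slips and are ignored here). A sketch of the fix, taking the slot's address instead:

        void SPFCreateIndividualPolicyListing(PolicyPair **slot)
        {
            *slot = malloc(sizeof(PolicyPair));      /* now visible to the caller */
            (*slot)->IndividualFile = calloc(MAX_POLICY_NAME_SIZE, 1);
            (*slot)->FullPolicyFile = calloc(MAX_POLICY_NAME_SIZE, 1);
            (*slot)->AppearanceTime = 0;
        }
        /* call site: SPFCreateIndividualPolicyListing(&CurrentPolicyPair[i]); */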

    Read the article

  • Initializing an object to all zeroes

    - by dash-tom-bang
    Often a data structure's valid initialization is to set all members to zero. Even when programming in C++, one may need to interface with an external API for which this is the case. Is there any practical difference between: some_struct s; memset(&s, 0, sizeof(s)); and simply some_struct s = { 0 }; Do folks find themselves using both, with a method for choosing which is more appropriate for a given application? For myself, as mostly a C++ programmer who doesn't use memset much, I'm never certain of the function signature, so I find the second example just easier to use, in addition to being less typing, more compact, and maybe even more obvious, since it says "this object is initialized to zero" right in the declaration rather than waiting for the next line of code and seeing, "oh, this object is zero initialized." When creating classes and structs in C++ I tend to use initialization lists; I'm curious about folks' thoughts on the two "C style" initializations above rather than a comparison against what is available in C++, since I suspect many of us interface with C libraries even if we code mostly in C++ ourselves.
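
    For reference, a small sketch of the practical difference, hedged on one point of the standard's wording:

        struct some_struct s1;
        memset(&s1, 0, sizeof s1);      /* every byte zeroed, padding included */

        struct some_struct s2 = { 0 };  /* every member zero-initialized; padding
                                           bytes are not formally guaranteed to be
                                           zero, though compilers typically zero
                                           them in practice */

    For an API that hashes or memcmp()s whole structs, the padding guarantee can matter, which is one defensible reason to reach for memset; otherwise the initializer form also works on objects the compiler can keep in registers.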

    Read the article

  • Receiving broadcast packets using packet socket

    - by user314336
    Hello, I try to send DHCP RENEW packets to the network and receive the responses. I broadcast the packet and I can see that it's successfully sent using Wireshark. But I have difficulties receiving the responses. I use packet sockets to catch the packets. I can see that there are responses to my RENEW packet using Wireshark, but my function 'packet_receive_renew' sometimes catches the packets and sometimes does not. I set the file descriptor using FD_SET, but the 'select' in my code cannot see that there are new packets for that file descriptor, and a timeout occurs. I cannot figure out why it sometimes catches the packets and sometimes doesn't. Anybody have an idea? Thanks in advance. Here's the receive function. int packet_receive_renew(struct client_info* info) { int fd; struct sockaddr_ll sock, si_other; struct sockaddr_in si_me; fd_set rfds; struct timeval tv; time_t start, end; int bcast = 1; int ret = 0, try = 0; char buf[1500] = {'\0'}; uint8_t tmp[BUFLEN] = {'\0'}; struct dhcp_packet pkt; socklen_t slen = sizeof(si_other); struct dhcps* new_dhcps; memset((char *) &si_me, 0, sizeof(si_me)); memset((char *) &si_other, 0, sizeof(si_other)); memset(&pkt, 0, sizeof(struct dhcp_packet)); #define SERVER_AND_CLIENT_PORTS ((67 << 16) + 68) static const struct sock_filter filter_instr[] = { /* check for udp */ BPF_STMT(BPF_LD|BPF_B|BPF_ABS, 9), BPF_JUMP(BPF_JMP|BPF_JEQ|BPF_K, IPPROTO_UDP, 0, 4), /* L5, L1, is UDP? */ /* skip IP header */ BPF_STMT(BPF_LDX|BPF_B|BPF_MSH, 0), /* L5: */ /* check udp source and destination ports */ BPF_STMT(BPF_LD|BPF_W|BPF_IND, 0), BPF_JUMP(BPF_JMP|BPF_JEQ|BPF_K, SERVER_AND_CLIENT_PORTS, 0, 1), /* L3, L4 */ /* returns */ BPF_STMT(BPF_RET|BPF_K, 0x0fffffff ), /* L3: pass */ BPF_STMT(BPF_RET|BPF_K, 0), /* L4: reject */ }; static const struct sock_fprog filter_prog = { .len = sizeof(filter_instr) / sizeof(filter_instr[0]), /* casting const away: */ .filter = (struct sock_filter *) filter_instr, }; printf("opening raw socket on ifindex %d\n", info->interf.if_index); if (-1==(fd = socket(PF_PACKET, SOCK_DGRAM, htons(ETH_P_IP)))) { perror("packet_receive_renew::socket"); return -1; } printf("got raw socket fd %d\n", fd); /* Use only if standard ports are in use */ /* Ignoring error (kernel may lack support for this) */ if (-1==setsockopt(fd, SOL_SOCKET, SO_ATTACH_FILTER, &filter_prog, sizeof(filter_prog))) perror("packet_receive_renew::setsockopt"); sock.sll_family = AF_PACKET; sock.sll_protocol = htons(ETH_P_IP); /* sock.sll_pkttype = PACKET_BROADCAST; */ sock.sll_ifindex = info->interf.if_index; if (-1 == bind(fd, (struct sockaddr *) &sock, sizeof(sock))) { perror("packet_receive_renew::bind"); close(fd); return -3; } if (-1 == setsockopt(fd, SOL_SOCKET, SO_BROADCAST, &bcast, sizeof(bcast))) { perror("packet_receive_renew::setsockopt"); close(fd); return -1; } FD_ZERO(&rfds); FD_SET(fd, &rfds); tv.tv_sec = TIMEOUT; tv.tv_usec = 0; ret = time(&start); if (-1 == ret) { perror("packet_receive_renew::time"); close(fd); return -1; } while(1) { ret = select(fd + 1, &rfds, NULL, NULL, &tv); time(&end); if (TOTAL_PENDING <= (end - start)) { fprintf(stderr, "End receiving\n"); break; } if (-1 == ret) { perror("packet_receive_renew::select"); close(fd); return -4; } else if (ret) { new_dhcps = (struct dhcps*)calloc(1, sizeof(struct dhcps)); if (-1 == recvfrom(fd, buf, 1500, 0, (struct sockaddr*)&si_other, &slen)) { perror("packet_receive_renew::recvfrom"); close(fd); return -4; } deref_packet((unsigned char*)buf, &pkt, info); if (-1!=(ret=get_option_val(pkt.options, DHO_DHCP_SERVER_IDENTIFIER, tmp))) { sprintf((char*)tmp, "%d.%d.%d.%d", tmp[0],tmp[1],tmp[2],tmp[3]); fprintf(stderr, "Received renew from %s\n", tmp); } else { fprintf(stderr, "Couldnt get DHO_DHCP_SERVER_IDENTIFIER%s\n", tmp); close(fd); return -5; } new_dhcps->dhcps_addr = strdup((char*)tmp); /* add to list */ if (info->dhcps_list) info->dhcps_list->next = new_dhcps; else info->dhcps_list = new_dhcps; new_dhcps->next = NULL; } else { try++; tv.tv_sec = TOTAL_PENDING - try * TIMEOUT; tv.tv_usec = 0; fprintf(stderr, "Timeout occured\n"); } } close(fd); printf("close fd:%d\n", fd); return 0; }
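
    A likely cause of the intermittent timeouts is visible in the loop: rfds is built with FD_ZERO/FD_SET once, before the while, but select() overwrites the set on every return, so after the first timeout (and on some wake-ups) the set no longer contains fd and select() can never again report it ready. The set, like the timeout, must be rebuilt each iteration; a sketch:

        while (1) {
            FD_ZERO(&rfds);                 /* rebuild: select() clobbers the set */
            FD_SET(fd, &rfds);
            tv.tv_sec = TIMEOUT;            /* Linux select() also modifies tv */
            tv.tv_usec = 0;
            ret = select(fd + 1, &rfds, NULL, NULL, &tv);
            /* ... handling unchanged ... */
        }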

    Read the article

  • WinVerifyTrust API problem

    - by Shayan
    I'm using the WinVerifyTrust API on Windows XP and I don't want any kind of user interaction. But when I set the WTD_UI_NONE attribute, although it doesn't show any dialog boxes, it waits for a long time on the files that would in fact have asked for user interaction (I mean files for which, had I not requested no UI, the user would have been prompted). This is my code: WINTRUST_FILE_INFO FileData; memset(&FileData, 0, sizeof(FileData)); FileData.cbStruct = sizeof(WINTRUST_FILE_INFO); wchar_t fileName[32769]; FileData.pcwszFilePath = fileName; FileData.hFile = NULL; FileData.pgKnownSubject = NULL; /* WVTPolicyGUID specifies the policy to apply on the file WINTRUST_ACTION_GENERIC_VERIFY_V2 policy checks: 1) The certificate used to sign the file chains up to a root certificate located in the trusted root certificate store. This implies that the identity of the publisher has been verified by a certification authority. 2) In cases where user interface is displayed (which this example does not do), WinVerifyTrust will check for whether the end entity certificate is stored in the trusted publisher store, implying that the user trusts content from this publisher. 3) The end entity certificate has sufficient permission to sign code, as indicated by the presence of a code signing EKU or no EKU. */ GUID WVTPolicyGUID = WINTRUST_ACTION_GENERIC_VERIFY_V2; WINTRUST_DATA WinTrustData; /* Initialize the WinVerifyTrust input data structure. Default all fields to 0. */ memset(&WinTrustData, 0, sizeof(WinTrustData)); WinTrustData.cbStruct = sizeof(WinTrustData); /* Use default code signing EKU. */ WinTrustData.pPolicyCallbackData = NULL; /* No data to pass to SIP. */ WinTrustData.pSIPClientData = NULL; /* Disable WVT UI. */ WinTrustData.dwUIChoice = WTD_UI_NONE; /* No revocation checking. */ WinTrustData.fdwRevocationChecks = WTD_REVOKE_NONE; /* Verify an embedded signature on a file. */ WinTrustData.dwUnionChoice = WTD_CHOICE_FILE; /* Default verification. */ WinTrustData.dwStateAction = 0; /* Not applicable for default verification of embedded signature. */ WinTrustData.hWVTStateData = NULL; /* Not used. */ WinTrustData.pwszURLReference = NULL; /* Default. */ WinTrustData.dwProvFlags = WTD_REVOCATION_CHECK_END_CERT; /* This is not applicable if there is no UI because it changes the UI to accommodate running applications instead of installing applications. */ WinTrustData.dwUIContext = 0; /* Set pFile. */ WinTrustData.pFile = &FileData; /* WinVerifyTrust verifies signatures as specified by the GUID and Wintrust_Data. */ lStatus = WinVerifyTrust( (HWND)INVALID_HANDLE_VALUE, &WVTPolicyGUID, &WinTrustData); printf("%x\n", lStatus);
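
    A plausible source of the long waits, hedged since it depends on network conditions: dwProvFlags is set to WTD_REVOCATION_CHECK_END_CERT, which forces an online certificate-revocation lookup and directly contradicts the "No revocation checking" comment next to fdwRevocationChecks; on a machine with slow or blocked access to the CRL/OCSP endpoints, that lookup stalls for its full network timeout with no dialog to show. A sketch that makes the flags match the comment:

        /* skip online revocation work entirely (or keep the check but forbid
           network fetches by adding WTD_CACHE_ONLY_URL_RETRIEVAL instead) */
        WinTrustData.fdwRevocationChecks = WTD_REVOKE_NONE;
        WinTrustData.dwProvFlags = WTD_REVOCATION_CHECK_NONE;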

    Read the article

  • error in coding a lexer in c

    - by mekasperasky
    #include<stdio.h> #include<ctype.h> #include<string.h> /* this is a lexer which recognizes constants , variables ,symbols, identifiers , functions , comments and also header files . It stores the lexemes in 3 different files . One file contains all the headers and the comments . Another file will contain all the variables , another will contain all the symbols. */ int main() { int i=0,j,k,count=0; char a,b[100],c[10000],d[100]; memset ( d, 0, 100 ); j=30; FILE *fp1,*fp2; fp1=fopen("source.txt","r"); /* the source file is opened in read only mode and will be passed through the lexer */ fp2=fopen("lext.txt","w"); /* now lets remove all the white spaces and store the rest of the words in a file */ if(fp1==NULL) { perror("failed to open source.txt"); /* return EXIT_FAILURE; */ } i=0; k=0; while(!feof(fp1)) { a=fgetc(fp1); if(a!=' '&&a!='\n') { if (!isalpha(a)) { switch(a) { case '+':{fprintf(fp2,"+ ----> PLUS \n"); i=0;break;} case '-':{fprintf(fp2,"- ---> MINUS \n"); i=0;break;} case '*':{fprintf(fp2, "* --->MULT \n"); i=0;break;} case '/':{fprintf(fp2, "/ --->DIV \n"); i=0;break;} /* case '+=':fprintf(fp2, "%.20s\n", "ADD_ASSIGN"); */ /* case '-=':fprintf(fp2, "%.20s\n", "SUB_ASSIGN"); */ case '=':{fprintf(fp2, "= ---> ASSIGN \n"); i=0;break;} case '%':{fprintf(fp2, "% ---> MOD \n"); i=0;break;} case '<':{fprintf(fp2, "< ---> LESSER_THAN \n"); i=0;break;} case '>':{fprintf(fp2, "> --> GREATER_THAN \n"); i=0;break;} /* case '++':fprintf(fp2, "%.20s\n", "INCREMENT"); */ /* case '--':fprintf(fp2, "%.20s\n", "DECREMENT"); */ /* case '==':fprintf(fp2, "%.20s\n", "ASSIGNMENT"); */ case ';':{fprintf(fp2, "; --->SEMI_COLUMN \n"); i=0;break;} case ':':{fprintf(fp2, ": --->COLUMN \n"); i=0;break;} case '(':{fprintf(fp2, "( --->LPAR \n"); i=0;break;} case ')':{fprintf(fp2, ") --->RPAR \n"); i=0;break;} case '{':{fprintf(fp2, "{ --->LBRACE \n"); i=0;break;} case '}':{fprintf(fp2, "} ---> RBRACE \n"); i=0;break;} } } else { d[i]=a; /* printf("%c\n",d[i]); */ i=i+1; } /* } */ /* we can make the lexer more complex by including even more depths of checks for the symbols */ } else { d[i+1]='\0'; printf("\n"); if((strcmp(d,"if ")==0)){fprintf(fp2,"if ----> IDENTIFIER \n"); /* printf("%s \n",d); */ memset ( d, 0, 100 ); /* printf("%s \n",d); */ count=count+1;} else if(strcmp(d,"then")==0){fprintf(fp2,"then ----> IDENTIFIER \n"); count=count+1;} else if(strcmp(d,"else")==0){fprintf(fp2,"else ----> IDENTIFIER \n"); count=count+1;} else if(strcmp(d,"switch")==0){fprintf(fp2,"switch ----> IDENTIFIER \n"); count=count+1;} else if(strcmp(d,"printf")==0){fprintf(fp2,"prtintf ----> IDENTIFIER \n"); count=count+1;} else if(strcmp(d,"scanf")==0){fprintf(fp2,"scanf ----> IDENTIFIER \n"); count=count+1;} else if(strcmp(d,"NULL")==0){fprintf(fp2,"NULL ----> IDENTIFIER \n"); count=count+1;} else if(strcmp(d,"int")==0){fprintf(fp2,"INT ----> IDENTIFIER \n"); count=count+1;} else if(strcmp(d,"char")==0){fprintf(fp2,"char ----> IDENTIFIER \n"); count=count+1;} else if(strcmp(d,"float")==0){fprintf(fp2,"float ----> IDENTIFIER \n"); count=count+1;} else if(strcmp(d,"long")==0){fprintf(fp2,"long ----> IDENTIFIER \n"); count=count+1;} else if(strcmp(d,"double")==0){fprintf(fp2,"double ----> IDENTIFIER \n"); count=count+1;} else if(strcmp(d,"const")==0){fprintf(fp2,"const ----> IDENTIFIER \n"); count=count+1;} else if(strcmp(d,"continue")==0)fprintf(fp2,"continue ----> IDENTIFIER \n"); else if(strcmp(d,"size of")==0){fprintf(fp2,"size of ----> IDENTIFIER \n"); count=count+1;} else if(strcmp(d,"register")==0){fprintf(fp2,"register ----> IDENTIFIER \n"); count=count+1;} else if(strcmp(d,"short")==0){fprintf(fp2,"short ----> IDENTIFIER \n"); count=count+1;} else if(strcmp(d,"auto")==0){fprintf(fp2,"auto ----> IDENTIFIER \n"); count=count+1;} else if(strcmp(d,"while")==0){fprintf(fp2,"while ----> IDENTIFIER \n"); count=count+1;} else if(strcmp(d,"do")==0){fprintf(fp2,"do ----> IDENTIFIER \n"); count=count+1;} else if(strcmp(d,"case")==0){fprintf(fp2,"case ----> IDENTIFIER \n"); count=count+1;} else if (isdigit(d[i])) { fprintf(fp2,"%s ---->NUMBER",d); } else if (isalpha(a)) { fprintf(fp2,"%s ----> Variable",d); /* printf("%s",d); */ /* memset ( d, 0, 100 );} */ /* fprintf(fp2, "s\n", b); */ i=0; k=k+1; continue; } i=i+1; k=k+1; } fclose(fp1); fclose(fp2); printf("%d",count); return 0; } In this code, my source.txt has if (a+b) stored. But only '(', '+' and ')' are getting written into lext.txt, and not the keyword if or the variables a and b. Any particular reason why?
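
    Three bugs in the flush branch account for that, stated with reasonable confidence from the code alone: the token is terminated with d[i+1]='\0' rather than d[i]='\0', so comparisons see stale bytes; the first comparison uses "if " with a trailing space, which strcmp can never match once whitespace has been stripped; and the variable branch tests isalpha(a), where a holds the space or newline that ended the token, so it is always false. A sketch of a corrected flush path:

        d[i] = '\0';                              /* terminate at the current index */
        if (strcmp(d, "if") == 0)                 /* no trailing space */
            fprintf(fp2, "if ----> IDENTIFIER \n");
        /* ... remaining keyword comparisons unchanged ... */
        else if (isdigit((unsigned char)d[0]))
            fprintf(fp2, "%s ----> NUMBER \n", d);
        else if (isalpha((unsigned char)d[0]))    /* test the token, not the delimiter */
            fprintf(fp2, "%s ----> Variable \n", d);
        memset(d, 0, sizeof d);
        i = 0;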

    Read the article

  • CreateProcessAsUser error 1314

    - by kampi
    Hi! I want to create a process under another user. So I use LogonUser and CreateProcessAsUser. But my problem is that CreateProcessAsUser always returns the error code 1314, which means "A required privilege is not held by the client". So my question is, what am I doing wrong? Or how can I give the privileges to the handle? (I think the handle should have the privileges, or am I wrong?) Sorry for my English mistakes, but my English knowledge isn't the best :) Please help if anyone knows how to correct my application. This is a part of my code. STARTUPINFO StartInfo; PROCESS_INFORMATION ProcInfo; TOKEN_PRIVILEGES tp; memset(&ProcInfo, 0, sizeof(ProcInfo)); memset(&StartInfo, 0 , sizeof(StartInfo)); StartInfo.cb = sizeof(StartInfo); HANDLE handle = NULL; if (!OpenProcessToken(GetCurrentProcess(), TOKEN_ALL_ACCESS, &handle)) printf("\nOpenProcessError"); if (!LookupPrivilegeValue(NULL, SE_TCB_NAME, /* SE_TCB_NAME, */ &tp.Privileges[0].Luid)) { printf("\nLookupPriv error"); } tp.PrivilegeCount = 1; tp.Privileges[0].Attributes = SE_PRIVILEGE_ENABLED; /* SE_PRIVILEGE_ENABLED; */ if (!AdjustTokenPrivileges(handle, FALSE, &tp, 0, NULL, 0)) { printf("\nAdjustToken error"); } i = LogonUser(user, domain, password, LOGON32_LOGON_INTERACTIVE, LOGON32_PROVIDER_DEFAULT, &handle); printf("\nLogonUser return : %d",i); i = GetLastError(); printf("\nLogonUser getlast : %d",i); if (! ImpersonateLoggedOnUser(handle) ) printf("\nImpLoggedOnUser!"); i = CreateProcessAsUser(handle, "c:\\windows\\system32\\notepad.exe",NULL, NULL, NULL, true, CREATE_UNICODE_ENVIRONMENT |NORMAL_PRIORITY_CLASS | CREATE_NEW_CONSOLE, NULL, NULL, &StartInfo, &ProcInfo); printf("\nCreateProcessAsUser return : %d",i); i = GetLastError(); printf("\nCreateProcessAsUser getlast : %d",i); CloseHandle(handle); CloseHandle(ProcInfo.hProcess); CloseHandle(ProcInfo.hThread); Thanks in advance!
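
    Error 1314 here is expected: CreateProcessAsUser requires SE_INCREASE_QUOTA_NAME and, in most configurations, SE_ASSIGNPRIMARYTOKEN_NAME, and an ordinary user's token simply does not hold those, so AdjustTokenPrivileges cannot enable them (it only enables privileges the token already contains). The usual way around this is CreateProcessWithLogonW, which performs both the logon and the launch without those privileges; a sketch, assuming wide-string credentials and a zeroed STARTUPINFOW:

        STARTUPINFOW si;
        PROCESS_INFORMATION pi;
        memset(&si, 0, sizeof(si)); si.cb = sizeof(si);
        memset(&pi, 0, sizeof(pi));
        if (!CreateProcessWithLogonW(L"user", L"domain", L"password",
                LOGON_WITH_PROFILE, L"c:\\windows\\system32\\notepad.exe", NULL,
                CREATE_UNICODE_ENVIRONMENT, NULL, NULL, &si, &pi))
            printf("CreateProcessWithLogonW failed: %lu\n", GetLastError());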

    Read the article

  • Changing code at runtime

    - by Pagis
    I have a pointer to a function (which I get from a vtable) and I want to edit the function by changing the assembler code (changing a few bytes) at runtime. I tried using memset and also tried assigning the new values directly (something like mPtr[0] = X, mPtr[1] = Y, etc.) but I keep getting a segmentation fault. How can I change the code? (I'm using C++)
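
    The fault is the memory protection, not the writing technique: code pages are normally mapped read+execute, so any store into them traps. The pages covering the target bytes have to be made writable first; a minimal POSIX sketch (the segmentation fault suggests Linux; on Windows the equivalent is VirtualProtect with PAGE_EXECUTE_READWRITE):

        #include <stdint.h>
        #include <sys/mman.h>
        #include <unistd.h>

        /* make the page(s) containing [fn, fn+len) writable before patching */
        bool make_writable(void* fn, size_t len) {
            uintptr_t pagesz = (uintptr_t)sysconf(_SC_PAGESIZE);
            uintptr_t start  = (uintptr_t)fn & ~(pagesz - 1);          /* round down */
            uintptr_t end    = ((uintptr_t)fn + len + pagesz - 1) & ~(pagesz - 1);
            return mprotect((void*)start, end - start,
                            PROT_READ | PROT_WRITE | PROT_EXEC) == 0;
        }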

    Read the article

  • Timestamp issue with localtime and mktime

    - by egiakoum1984
    Please see the code below: #include <iostream> #include <stdlib.h> #include <time.h> using namespace std; int main(void) { time_t current_time = 1270715952; cout << "Subscriber current timestamp:" << current_time << endl; tm* currentTm = localtime(&current_time); char tmp_str[256]; /* 2010-04-08T11:39:12 */ snprintf(tmp_str, sizeof(tmp_str), "%04d%02d%02d%02d%02d%02d.000", currentTm->tm_year+1900, currentTm->tm_mon+1, currentTm->tm_mday, currentTm->tm_hour, currentTm->tm_min, currentTm->tm_sec); cout << "Subscriber current date:" << tmp_str << endl; tm tmpDateScheduleFrom, tmpDateScheduleTo; memset(&tmpDateScheduleFrom, 0, sizeof(tm)); memset(&tmpDateScheduleTo, 0, sizeof(tm)); /* 2010-04-08T11:00 */ tmpDateScheduleFrom.tm_sec = 0; tmpDateScheduleFrom.tm_min = 0; tmpDateScheduleFrom.tm_hour = 11; tmpDateScheduleFrom.tm_mday = 8; tmpDateScheduleFrom.tm_mon = 3; tmpDateScheduleFrom.tm_year = 110; /* 2010-04-08T12:00 */ tmpDateScheduleTo.tm_sec = 0; tmpDateScheduleTo.tm_min = 0; tmpDateScheduleTo.tm_hour = 12; tmpDateScheduleTo.tm_mday = 8; tmpDateScheduleTo.tm_mon = 3; tmpDateScheduleTo.tm_year = 110; time_t localFrom = mktime(&tmpDateScheduleFrom); time_t localTo = mktime(&tmpDateScheduleTo); cout << "Subscriber current timestamp:" << current_time << endl; cout << "Subscriber localFrom:" << localFrom << endl; cout << "Subscriber localTo:" << localTo << endl; return 0; } The results are the following: Subscriber current timestamp:1270715952 Subscriber current date:20100408113912.000 Subscriber current timestamp:1270715952 Subscriber localFrom:1270717200 Subscriber localTo:1270720800 Why is the current subscriber timestamp (subscriber date and time: 2010-04-08T11:39:12) not in the range between localFrom (timestamp of date/time: 2010-04-08T11:00:00) and localTo (timestamp of date/time: 2010-04-08T12:00:00)?
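
    The memset is the culprit: it leaves tm_isdst at 0, which tells mktime() that daylight saving is not in effect, and on 2010-04-08 most DST-observing zones are on summer time, so both converted values come out one hour late (1270717200 is 12:00 local in such a zone, not 11:00, which is why 11:39:12 falls outside the range). Setting tm_isdst to -1 lets mktime() determine DST itself:

        tmpDateScheduleFrom.tm_isdst = -1;  /* "unknown": mktime() consults the zone */
        tmpDateScheduleTo.tm_isdst   = -1;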

    Read the article

  • ReadFile doesn't work asynchronously on Win7 and Win2k8

    - by f0b0s
    According to MSDN, ReadFile can read data in 2 different ways: synchronously and asynchronously. I need the second one. The following code demonstrates usage with an OVERLAPPED struct: #include <windows.h> #include <stdio.h> #include <time.h> void Read() { HANDLE hFile = CreateFileA("c:\\1.avi", GENERIC_READ, 0, NULL, OPEN_EXISTING, FILE_FLAG_OVERLAPPED, NULL); if ( hFile == INVALID_HANDLE_VALUE ) { printf("Failed to open the file\n"); return; } int dataSize = 256 * 1024 * 1024; char* data = (char*)malloc(dataSize); memset(data, 0xFF, dataSize); OVERLAPPED overlapped; memset(&overlapped, 0, sizeof(overlapped)); printf("reading: %d\n", time(NULL)); BOOL result = ReadFile(hFile, data, dataSize, NULL, &overlapped); printf("sent: %d\n", time(NULL)); DWORD bytesRead; result = GetOverlappedResult(hFile, &overlapped, &bytesRead, TRUE); /* wait until completion - returns immediately */ printf("done: %d\n", time(NULL)); CloseHandle(hFile); } int main() { Read(); } On Windows XP the output is: reading: 1296651896 sent: 1296651896 done: 1296651899 It means that ReadFile didn't block and returned immediately in the same second, whereas the reading process continued for 3 seconds. That is normal async reading. But on Windows 7 and Windows 2008 I get the following results: reading: 1296661205 sent: 1296661209 done: 1296661209. That is the behavior of sync reading. MSDN says that an async ReadFile can sometimes behave as sync (when the file is compressed or encrypted, for example). But the return value in this situation should be TRUE and GetLastError() == NO_ERROR. On Windows 7 I get FALSE and GetLastError() == ERROR_IO_PENDING. So WinApi tells me that it is an async call, but when I look at the test I see that it is not! I'm not the only one who found this "bug": read the comments on the ReadFile MSDN page. So what's the solution? Does anybody know? It has been 14 months since Denis found this strange behavior.
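
    One workaround worth trying, offered tentatively since the behavior change is not well documented: with buffered I/O, Windows 7 appears willing to do a large part of the cached read synchronously inside ReadFile before returning ERROR_IO_PENDING, whereas unbuffered reads go to the device and stay asynchronous. That means opening with FILE_FLAG_NO_BUFFERING, which obliges the caller to sector-align the buffer, offset, and length (VirtualAlloc's page-aligned memory satisfies typical sector sizes):

        HANDLE hFile = CreateFileA("c:\\1.avi", GENERIC_READ, 0, NULL, OPEN_EXISTING,
                                   FILE_FLAG_OVERLAPPED | FILE_FLAG_NO_BUFFERING, NULL);
        /* page-aligned allocation to meet the sector-alignment requirement */
        char* data = (char*)VirtualAlloc(NULL, dataSize, MEM_COMMIT | MEM_RESERVE,
                                         PAGE_READWRITE);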

    Read the article

  • C - How to use both aio_read() and aio_write().

    - by Slav
    I am implementing a game server where I need to both read and write. So I accept an incoming connection and start reading from it using aio_read(), but when I need to send something, I stop reading using aio_cancel() and then use aio_write(). Within the write's callback I resume reading. So, I read all the time, but when I need to send something I pause reading. It works ~20% of the time; in the other cases the call to aio_cancel() fails with "Operation now in progress" and I cannot cancel it (even within a permanent while cycle). So, my added write operation never happens. How do I use these functions well? What did I miss? EDIT: Used under Linux 2.6.35. Ubuntu 10 - 32 bit. Example code: void handle_read(union sigval sigev_value) { /* handle data or disconnection */ } void handle_write(union sigval sigev_value) { /* free writing buffer memory */ } void start() { const int acceptorSocket = socket(AF_INET, SOCK_STREAM, 0); struct sockaddr_in addr; memset(&addr, 0, sizeof(struct sockaddr_in)); addr.sin_family = AF_INET; addr.sin_addr.s_addr = INADDR_ANY; addr.sin_port = htons(port); bind(acceptorSocket, (struct sockaddr*)&addr, sizeof(struct sockaddr_in)); listen(acceptorSocket, SOMAXCONN); struct sockaddr_in address; socklen_t addressLen = sizeof(struct sockaddr_in); for(;;) { const int incomingSocket = accept(acceptorSocket, (struct sockaddr*)&address, &addressLen); if(incomingSocket == -1) { /* handle error ... */} else { /* say socket to append outcoming messages at writing: */ const int currentFlags = fcntl(incomingSocket, F_GETFL, 0); if(currentFlags < 0) { /* handle error ... */ } if(fcntl(incomingSocket, F_SETFL, currentFlags | O_APPEND) == -1) { /* handle another error ... */ } /* start reading: */ struct aiocb* readingAiocb = new struct aiocb; memset(readingAiocb, 0, sizeof(struct aiocb)); readingAiocb->aio_nbytes = MY_SOME_BUFFER_SIZE; readingAiocb->aio_fildes = socketDesc; readingAiocb->aio_buf = mySomeReadBuffer; readingAiocb->aio_sigevent.sigev_notify = SIGEV_THREAD; readingAiocb->aio_sigevent.sigev_value.sival_ptr = (void*)mySomeData; readingAiocb->aio_sigevent.sigev_notify_function = handle_read; if(aio_read(readingAiocb) != 0) { /* handle error ... */ } } } } /* called at any time from server side: */ send(void* data, const size_t dataLength) { /* ... some thread-safety precautions not needed here ... */ const int cancellingResult = aio_cancel(socketDesc, readingAiocb); if(cancellingResult != AIO_CANCELED) { /* this one happens ~80% of the time - embracing the previous call in a permanent while cycle does not help: */ if(cancellingResult == AIO_NOTCANCELED) { puts(strerror(aio_return(readingAiocb))); /* "Operation now in progress" */ /* don't know what to do... */ } } /* otherwise it's okay to send: */ else { aio_write(...); } }
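
    The cancellation is unnecessary, which sidesteps the unreliable aio_cancel() entirely: POSIX AIO tracks each operation by its own aiocb, so an aio_read() and an aio_write() on the same descriptor can be outstanding at the same time. A sketch of the send path without the cancel, reusing handle_write from the example (renamed, since send() collides with the socket call):

        void send_data(void* data, const size_t dataLength) {
            struct aiocb* writingAiocb = (struct aiocb*)calloc(1, sizeof(struct aiocb));
            writingAiocb->aio_fildes = socketDesc;
            writingAiocb->aio_buf = data;
            writingAiocb->aio_nbytes = dataLength;
            writingAiocb->aio_sigevent.sigev_notify = SIGEV_THREAD;
            writingAiocb->aio_sigevent.sigev_notify_function = handle_write;
            if (aio_write(writingAiocb) != 0) { /* handle error */ }
            /* the pending aio_read() stays armed; no aio_cancel() needed */
        }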

    Read the article

  • stxxl Assertion `it != root_node_.end()' failed

    - by Fabrizio Silvestri
    I am receiving this assertion-failed error when trying to insert an element into a stxxl map. The entire assertion error is the following: resCache: /usr/include/stxxl/bits/containers/btree/btree.h:470: std::pair , bool stxxl::btree::btree::insert(const value_type&) [with KeyType = e_my_key, DataType = unsigned int, CompareType = comp_type, unsigned int RawNodeSize = 16384u, unsigned int RawLeafSize = 131072u, PDAllocStrategy = stxxl::SR, stxxl::btree::btree::value_type = std::pair]: Assertion `it != root_node_.end()' failed. Aborted Any idea? Edit: Here's the code fragment void request_handler::handle_request(my_key& query, reply& rep) { c_++; strip(query.content); std::cout << "Received query " << query.content << " by thread " << boost::this_thread::get_id() << ". It is number " << c_ << "\n"; strcpy(element.first.content, query.content); element.second = c_; testcache_.insert(element); STXXL_MSG("Records in map: " << testcache_.size()); } Edit 2: here are more details (I omit constants, e.g. MAX_QUERY_LEN) struct comp_type : std::binary_function<my_key, my_key, bool> { bool operator () (const my_key & a, const my_key & b) const { return strncmp(a.content, b.content, MAX_QUERY_LEN) < 0; } static my_key max_value() { return max_key; } static my_key min_value() { return min_key; } }; typedef stxxl::map<my_key, my_data, comp_type> cacheType; cacheType testcache_; request_handler::request_handler() :testcache_(NODE_CACHE_SIZE, LEAF_CACHE_SIZE) { c_ = 0; memset(max_key.content, (std::numeric_limits<unsigned char>::max)(), MAX_QUERY_LEN); memset(min_key.content, (std::numeric_limits<unsigned char>::min)(), MAX_QUERY_LEN); testcache_.enable_prefetching(); STXXL_MSG("Records in map: " << testcache_.size()); }

    Read the article

  • Applications: How to create a custom dialog box for Windows Mobile 6 (native)

    - by TechTwaddle
    Ashraf, on the MSDN forum, asks, “Is there a way to make a default choice for the messagebox that happens after a period of time if the user doesn't choose (Clicked) Yes or No buttons.” To elaborate, the requirement is to show a message box to the user with certain options to select, and if the user does not respond within a predefined time limit (say 8 seconds) then the message box must dismiss itself and select a default option. Now such a functionality is not available with the MessageBox() api, so you will have to write your own custom dialog box. Surely, creating a dialog box is quite a simple task using the DialogBox() api, and we have been creating full screen dialog boxes all the while. So how will this custom message box be any different? It's not much different from a regular dialog box except for a few changes in its properties. First, it has a title bar but no buttons on the title bar (no 'x' or 'ok' button on the title bar), it doesn't occupy the full screen, and it contains the controls that you put into it, thus justifying the title 'custom'. So in this post we create a custom dialog box with two buttons, 'Black' and 'White'. The user is given 8 seconds to select one of those colours; if the user doesn't make a selection in 8 seconds, the default option 'Black' is selected. Before going into the implementation here is a video of how the dialog box works; Custom dialog box To start off, add a new dialog resource into your application, size it appropriately and add whatever controls you need to the dialog. In my case, I added two static text labels and two buttons, as below; Now we need to write up the window procedure for this dialog, here is the complete function; BOOL CALLBACK CustomDialogProc(HWND hDlg, UINT uMessage, WPARAM wParam, LPARAM lParam) { int wmID, wmEvent; PAINTSTRUCT ps; HDC hdc; static int timeCount = 0; switch(uMessage) { case WM_INITDIALOG: { SHINITDLGINFO shidi; memset(&shidi, 0, sizeof(shidi)); shidi.dwMask = SHIDIM_FLAGS; /* shidi.dwFlags = SHIDIF_DONEBUTTON | SHIDIF_SIPDOWN | SHIDIF_SIZEDLGFULLSCREEN | SHIDIF_EMPTYMENU; */ shidi.dwFlags = SHIDIF_SIPDOWN | SHIDIF_EMPTYMENU; shidi.hDlg = hDlg; SHInitDialog(&shidi); SHDoneButton(hDlg, SHDB_HIDE); timeCount = 0; SetWindowText(GetDlgItem(hDlg, IDC_STATIC_TIME_REMAINING), L"Time remaining: 8 second(s)"); SetTimer(hDlg, MY_TIMER, 1000, NULL); } return TRUE; case WM_COMMAND: { wmID = LOWORD(wParam); wmEvent = HIWORD(wParam); switch(wmID) { case IDC_BUTTON_BLACK: KillTimer(hDlg, MY_TIMER); EndDialog(hDlg, IDC_BUTTON_BLACK); break; case IDC_BUTTON_WHITE: KillTimer(hDlg, MY_TIMER); EndDialog(hDlg, IDC_BUTTON_WHITE); break; } } break; case WM_TIMER: { if (wParam == MY_TIMER) { WCHAR wszText[128]; memset(&wszText, 0, sizeof(wszText)); timeCount++; /* 8 seconds are over, dismiss the dialog, select default value */ if (timeCount >= 8) { KillTimer(hDlg, MY_TIMER); EndDialog(hDlg, IDC_BUTTON_BLACK_DEF); } wsprintf(wszText, L"Time remaining: %d second(s)", 8-timeCount); SetWindowText(GetDlgItem(hDlg, IDC_STATIC_TIME_REMAINING), wszText); UpdateWindow(GetDlgItem(hDlg, IDC_STATIC_TIME_REMAINING)); } } break; case WM_PAINT: { hdc = BeginPaint(hDlg, &ps); EndPaint(hDlg, &ps); } break; } return FALSE; } The MSDN documentation mentions that you need to specify the flag WS_NONAVDONEBUTTON, but I got an error saying that the value could not be found, so we can ignore this for now. Next up, while calling SHInitDialog() for your custom dialog, make sure that you don't specify SHIDIF_DONEBUTTON in the dwFlags member of the SHINITDLGINFO structure; this member makes the 'ok' button appear on the dialog title bar. Finally, we need to call SHDoneButton() with the SHDB_HIDE flag to, well, hide the Done button. The 'Done' button is the same as the 'ok' button, so this step might seem redundant, and the dialog works fine without calling SHDoneButton() too, but it's better to stick with the documentation (; So you can see that we have followed all these steps above, under WM_INITDIALOG. We also set up a few things, like a variable to keep track of the time, and setting off a one second timer. Every time the timer fires, we receive a WM_TIMER message. We then update the static label displaying the amount of time left to the user. If 8 seconds go by without the user selecting any option, we kill the timer and end the dialog with IDC_BUTTON_BLACK_DEF. This is just a #define'd integer value, make sure it's unique. You'll see why this is important. If the user makes a selection, either Black or White, we kill the timer and end the dialog with the corresponding selection the user made, that is, either IDC_BUTTON_BLACK or IDC_BUTTON_WHITE. Ok, so now our custom dialog is ready to be used. I invoke the custom dialog from a menu entry in the main window as below, case IDM_MENU_CUSTOMDLG: { int ret = DialogBox(g_hInst, MAKEINTRESOURCE(IDD_CUSTOM_DIALOG), hWnd, CustomDialogProc); switch(ret) { case IDC_BUTTON_BLACK_DEF: SetWindowText(g_hStaticSelection, L"You Selected: Black (default)"); break; case IDC_BUTTON_BLACK: SetWindowText(g_hStaticSelection, L"You Selected: Black"); break; case IDC_BUTTON_WHITE: SetWindowText(g_hStaticSelection, L"You Selected: White"); break; } UpdateWindow(g_hStaticSelection); } break; So you see why ending the dialog with the corresponding value was important, that's what the DialogBox() api returns with. And in the main window I update a static text label to show which option was selected. I cranked this out in about an hour, and unfortunately don't have time for a managed C# version. That will have to be another post, if I manage to get it working that is (;

    Read the article

  • 'inode table usage' spiking every morning at 8am

    - by Harry Wood
    I installed munin on my Ubuntu server. It's showing my 'inode table usage' spiking every morning at 8am. It then rapidly curves down and settles over the course of several hours. What might cause this? I thought it might be something running in /etc/cron.daily, but this was set to run at 6am and I've changed it to 4am. The spike remains at 8am. I also enabled cron logging, but can't see anything getting launched at 8am. It's a virtual server hosted by Memset. Could it be caused by something happening on the virtual host?

    Read the article

  • How do I sign my certificate using the root certificate

    - by Asif Alam
    I am using certificate-based authentication between my server and client. I have generated a Root Certificate. My client, at the time of installation, will generate a new Certificate and use the Root Certificate to sign it. I need to use the Windows API; I cannot use any Windows tools like makecert. Till now I have been able to install the Root certificate in the store. Below is the code: X509Certificate2 ^ certificate = gcnew X509Certificate2("C:\\rootcert.pfx","test123"); X509Store ^ store = gcnew X509Store( "teststore",StoreLocation::CurrentUser ); store->Open( OpenFlags::ReadWrite ); store->Add( certificate ); store->Close(); Then I open the installed root certificate to get the context: GetRootCertKeyInfo(){ HCERTSTORE hCertStore; PCCERT_CONTEXT pSignerCertContext=NULL; DWORD dwSize = NULL; CRYPT_KEY_PROV_INFO* pKeyInfo = NULL; DWORD dwKeySpec; if ( !( hCertStore = CertOpenStore(CERT_STORE_PROV_SYSTEM, 0, NULL, CERT_SYSTEM_STORE_CURRENT_USER,L"teststore"))) { _tprintf(_T("Error 0x%x\n"), GetLastError()); } pSignerCertContext = CertFindCertificateInStore(hCertStore,MY_ENCODING_TYPE,0,CERT_FIND_ANY,NULL,NULL); if(NULL == pSignerCertContext) { _tprintf(_T("Error 0x%x\n"), GetLastError()); } if(!(CertGetCertificateContextProperty( pSignerCertContext, CERT_KEY_PROV_INFO_PROP_ID, NULL, &dwSize))) { _tprintf(_T("Error 0x%x\n"), GetLastError()); } if(pKeyInfo) free(pKeyInfo); if(!(pKeyInfo = (CRYPT_KEY_PROV_INFO*)malloc(dwSize))) { _tprintf(_T("Error 0x%x\n"), GetLastError()); } if(!(CertGetCertificateContextProperty( pSignerCertContext, CERT_KEY_PROV_INFO_PROP_ID, pKeyInfo, &dwSize))) { _tprintf(_T("Error 0x%x\n"), GetLastError()); } return pKeyInfo; } Then finally I created the certificate and signed it with the pKeyInfo: /* Acquire key container */ if (!CryptAcquireContext(&hCryptProv, _T("trykeycon"), NULL, PROV_RSA_FULL, CRYPT_MACHINE_KEYSET)) { _tprintf(_T("Error 0x%x\n"), GetLastError()); /* Try to create a new key container */ _tprintf(_T("CryptAcquireContext... ")); if (!CryptAcquireContext(&hCryptProv, _T("trykeycon"), NULL, PROV_RSA_FULL, CRYPT_NEWKEYSET | CRYPT_MACHINE_KEYSET)) { _tprintf(_T("Error 0x%x\n"), GetLastError()); return 0; } else { _tprintf(_T("Success\n")); } } else { _tprintf(_T("Success\n")); } /* Generate new key pair */ _tprintf(_T("CryptGenKey... ")); if (!CryptGenKey(hCryptProv, AT_SIGNATURE, 0x08000000 /*RSA-2048-BIT_KEY*/, &hKey)) { _tprintf(_T("Error 0x%x\n"), GetLastError()); return 0; } else { _tprintf(_T("Success\n")); } /* some code */ CERT_NAME_BLOB SubjectIssuerBlob; memset(&SubjectIssuerBlob, 0, sizeof(SubjectIssuerBlob)); SubjectIssuerBlob.cbData = cbEncoded; SubjectIssuerBlob.pbData = pbEncoded; /* Prepare algorithm structure for self-signed certificate */ CRYPT_ALGORITHM_IDENTIFIER SignatureAlgorithm; memset(&SignatureAlgorithm, 0, sizeof(SignatureAlgorithm)); SignatureAlgorithm.pszObjId = szOID_RSA_SHA1RSA; /* Prepare Expiration date for self-signed certificate */ SYSTEMTIME EndTime; GetSystemTime(&EndTime); EndTime.wYear += 5; /* Create self-signed certificate */ _tprintf(_T("CertCreateSelfSignCertificate... ")); CRYPT_KEY_PROV_INFO* aKeyInfo; aKeyInfo = GetRootCertKeyInfo(); pCertContext = CertCreateSelfSignCertificate(NULL, &SubjectIssuerBlob, 0, aKeyInfo, &SignatureAlgorithm, 0, &EndTime, 0); With the above code I am able to create the certificate, but it does not look to be signed by the root certificate. I am unable to figure out whether what I did is right or not. Any help will be greatly appreciated. Thanks, Asif
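
    The result is expected: CertCreateSelfSignCertificate, as the name says, always signs the certificate with its own subject key, whatever CRYPT_KEY_PROV_INFO is passed in, so nothing built this way will chain to the root. With plain Windows APIs, the usual route is to fill in a CERT_INFO whose Issuer is the root's subject name and sign/encode it with CryptSignAndEncodeCertificate using the root's private key. A rough sketch, assuming hRootProv is a provider handle acquired from the root's key container (via the CRYPT_KEY_PROV_INFO already retrieved) and pRootCtx is the root's PCCERT_CONTEXT; serial number, validity, and the subject's public key info are elided:

        CERT_INFO ci;
        memset(&ci, 0, sizeof(ci));
        ci.dwVersion = CERT_V3;
        ci.Issuer  = pRootCtx->pCertInfo->Subject;   /* root is the issuer */
        ci.Subject = SubjectIssuerBlob;              /* the client's name */
        ci.SignatureAlgorithm = SignatureAlgorithm;
        /* ... SerialNumber, NotBefore/NotAfter, SubjectPublicKeyInfo ... */

        DWORD cb = 0;
        CryptSignAndEncodeCertificate(hRootProv, AT_SIGNATURE, X509_ASN_ENCODING,
            X509_CERT_TO_BE_SIGNED, &ci, &SignatureAlgorithm, NULL, NULL, &cb);
        BYTE* pbCert = (BYTE*)malloc(cb);
        CryptSignAndEncodeCertificate(hRootProv, AT_SIGNATURE, X509_ASN_ENCODING,
            X509_CERT_TO_BE_SIGNED, &ci, &SignatureAlgorithm, NULL, pbCert, &cb);
        /* pbCert/cb now hold the DER-encoded, root-signed certificate */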

    Read the article

  • Reciving UDP packets on iPhone

    - by Eli
    I'm trying to establish UDP communication between a Mac and an iPod through Wi-Fi. At this point I'm able to send packets from the iPod and I can see those packets have the right MAC and IP addresses (I'm using Wireshark to monitor the network), but the Mac receives the packets only when Wireshark is on; otherwise recvfrom() returns -1. When I try to transmit from the Mac to the iPhone I have the same result: I can see the packets are sent, but the iPhone doesn't seem to get them. I'm using the next code to send: struct addrinfo hints; int rv; memset(&hints, 0, sizeof hints); hints.ai_family = AF_UNSPEC; hints.ai_socktype = SOCK_DGRAM; if ((rv = getaddrinfo(IP, SERVERPORT, &hints, &servinfo)) != 0) { fprintf(stderr, "getaddrinfo: %s\n", gai_strerror(rv)); return 1; } /* loop through all the results and make a socket */ for(p = servinfo; p != NULL; p = p->ai_next) { if ((sockfd = socket(p->ai_family, p->ai_socktype, p->ai_protocol)) == -1) { perror("talker: socket"); continue; } break; } if (p == NULL) { fprintf(stderr, "talker: failed to bind socket\n"); return 2; } while (cond) sntBytes += sendto(sockfd, message, strlen(message), 0, p->ai_addr, p->ai_addrlen); return 0; and this code to receive: struct addrinfo hints, *p; int rv; memset(&hints, 0, sizeof hints); hints.ai_family = AF_UNSPEC; /* set to AF_INET to force IPv4 */ hints.ai_socktype = SOCK_DGRAM; hints.ai_flags = AI_PASSIVE; /* use my IP */ if ((rv = getaddrinfo(NULL, MYPORT, &hints, &servinfo)) != 0) { fprintf(stderr, "getaddrinfo: %s\n", gai_strerror(rv)); return 1; } /* loop through all the results and bind to the first we can */ for(p = servinfo; p != NULL; p = p->ai_next) { if ((sockfd = socket(p->ai_family, p->ai_socktype, p->ai_protocol)) == -1) { perror("listener: socket"); continue; } if (bind(sockfd, p->ai_addr, p->ai_addrlen) == -1) { close(sockfd); perror("listener: bind"); continue; } break; } if (p == NULL) { fprintf(stderr, "listener: failed to bind socket\n"); return 2; } addr_len = sizeof their_addr; fcntl(sockfd, F_SETFL,O_NONBLOCK); int rcvbuf_size = 128 * 1024; /* That's 128Kb of buffer space. */ setsockopt(sockfd, SOL_SOCKET, SO_RCVBUF, &rcvbuf_size, sizeof(rcvbuf_size)); printf("listener: waiting to recvfrom...\n"); while (cond) rcvBytes = recvfrom(sockfd, buf, MAXBUFLEN-1 , 0, (struct sockaddr *)&their_addr, &addr_len); return 0; What am I missing?
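
    Two things stand out, offered as likely suspects rather than certainties. First, reception working only while Wireshark runs is the classic signature of promiscuous mode: the capture puts the interface into promiscuous mode, so frames whose destination MAC the adapter would normally filter out suddenly get through, which makes it worth checking that the sender's ARP entry for the receiver is current. Second, with ai_family = AF_UNSPEC, getaddrinfo() is free to return an IPv6 entry first, so sender and receiver can silently end up on different protocols; pinning both sides to IPv4 removes that variable:

        memset(&hints, 0, sizeof hints);
        hints.ai_family = AF_INET;        /* IPv4 only, on both ends */
        hints.ai_socktype = SOCK_DGRAM;
        hints.ai_flags = AI_PASSIVE;      /* receiver side only */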

    Read the article

  • FreeType2 Bitmap to System::Drawing::Bitmap.

    - by Dennis Roche
    Hi, I'm trying to convert a FreeType2 bitmap to a System::Drawing::Bitmap in C++/CLI. FT_Bitmap has an unsigned char* buffer that contains the data to write. I have got it somewhat working, saving it to disk as a *.tga, but when saving as a *.bmp it renders incorrectly. I believe that the size of byte[] is incorrect and that my data is truncated. Any hints/tips/ideas on what is going on here would be greatly appreciated. Links to articles explaining byte layout and pixel formats etc. would be helpful. Thanks!! C++/CLI code. FT_Bitmap *bitmap = &face->glyph->bitmap; int width = (face->bitmap->metrics.width / 64); int height = (face->bitmap->metrics.height / 64); /* must be aligned on a 32 bit boundary or 4 bytes */ int depth = 8; int stride = ((width * depth + 31) & ~31) >> 3; int bytes = (int)(stride * height); /* as *.tga */ void *buffer = bytes ? malloc(bytes) : NULL; if (buffer) { memset(buffer, 0, bytes); for (int i = 0; i < glyph->rows; ++i) memcpy((char *)buffer + (i * width), glyph->buffer + (i * glyph->pitch), glyph->pitch); WriteTGA("Test.tga", buffer, width, height); } array<Byte>^ values = gcnew array<Byte>(bytes); Marshal::Copy((IntPtr)glyph->buffer, values, 0, bytes); /* as *.bmp */ Bitmap^ systemBitmap = gcnew Bitmap(width, height, PixelFormat::Format24bppRgb); /* create bitmap data, lock pixels to be written. */ BitmapData^ bitmapData = systemBitmap->LockBits(Rectangle(0, 0, width, height), ImageLockMode::WriteOnly, bitmap->PixelFormat); Marshal::Copy(values, 0, bitmapData->Scan0, bytes); systemBitmap->UnlockBits(bitmapData); systemBitmap->Save("Test.bmp"); Reference, FT_Bitmap typedef struct FT_Bitmap_ { int rows; int width; int pitch; unsigned char* buffer; short num_grays; char pixel_mode; char palette_mode; void* palette; } FT_Bitmap; Reference, WriteTGA bool WriteTGA(const char *filename, void *pxl, uint16 width, uint16 height) { FILE *fp = NULL; fopen_s(&fp, filename, "wb"); if (fp) { TGAHeader header; memset(&header, 0, sizeof(TGAHeader)); header.imageType = 3; header.width = width; header.height = height; header.depth = 8; header.descriptor = 0x20; fwrite(&header, sizeof(header), 1, fp); fwrite(pxl, sizeof(uint8) * width * height, 1, fp); fclose(fp); return true; } return false; }
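
    The truncation has a mechanical cause, stated with reasonable confidence: the glyph buffer is 8 bits per pixel with its own row pitch, but it is copied in one bulk Marshal::Copy into a 24bpp bitmap whose scan lines are padded to BitmapData->Stride, so neither the pixel size nor the row lengths line up. Copying row by row into an 8bpp indexed bitmap (with a grayscale palette installed) keeps both in step; a C++/CLI sketch:

        Bitmap^ systemBitmap = gcnew Bitmap(width, height, PixelFormat::Format8bppIndexed);
        BitmapData^ bd = systemBitmap->LockBits(Rectangle(0, 0, width, height),
                                                ImageLockMode::WriteOnly,
                                                PixelFormat::Format8bppIndexed);
        unsigned char* dst = (unsigned char*)bd->Scan0.ToPointer();
        for (int y = 0; y < height; ++y)   /* source pitch != destination stride */
            memcpy(dst + y * bd->Stride, bitmap->buffer + y * bitmap->pitch, width);
        systemBitmap->UnlockBits(bd);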

    Read the article

  • unsigned char* buffer (FreeType2 Bitmap) to System::Drawing::Bitmap.

    - by Dennis Roche
    Hi, I'm trying to convert a FreeType2 bitmap to a System::Drawing::Bitmap in C++/CLI. FT_Bitmap has an unsigned char* buffer that contains the data to write. I have got it somewhat working, saving it to disk as a *.tga, but when saving as a *.bmp it renders incorrectly. I believe that the size of byte[] is incorrect and that my data is truncated. Any hints/tips/ideas on what is going on here would be greatly appreciated. Links to articles explaining byte layout and pixel formats etc. would be helpful. Thanks!! C++/CLI code. FT_Bitmap *bitmap = &face->glyph->bitmap; int width = (face->bitmap->metrics.width / 64); int height = (face->bitmap->metrics.height / 64); /* must be aligned on a 32 bit boundary or 4 bytes */ int depth = 8; int stride = ((width * depth + 31) & ~31) >> 3; int bytes = (int)(stride * height); /* as *.tga */ void *buffer = bytes ? malloc(bytes) : NULL; if (buffer) { memset(buffer, 0, bytes); for (int i = 0; i < glyph->rows; ++i) memcpy((char *)buffer + (i * width), glyph->buffer + (i * glyph->pitch), glyph->pitch); WriteTGA("Test.tga", buffer, width, height); } /* as *.bmp */ array<Byte>^ values = gcnew array<Byte>(bytes); Marshal::Copy((IntPtr)glyph->buffer, values, 0, bytes); Bitmap^ systemBitmap = gcnew Bitmap(width, height, PixelFormat::Format24bppRgb); /* create bitmap data, lock pixels to be written. */ BitmapData^ bitmapData = systemBitmap->LockBits(Rectangle(0, 0, width, height), ImageLockMode::WriteOnly, bitmap->PixelFormat); Marshal::Copy(values, 0, bitmapData->Scan0, bytes); systemBitmap->UnlockBits(bitmapData); systemBitmap->Save("Test.bmp"); Reference, FT_Bitmap typedef struct FT_Bitmap_ { int rows; int width; int pitch; unsigned char* buffer; short num_grays; char pixel_mode; char palette_mode; void* palette; } FT_Bitmap; Reference, WriteTGA bool WriteTGA(const char *filename, void *pxl, uint16 width, uint16 height) { FILE *fp = NULL; fopen_s(&fp, filename, "wb"); if (fp) { TGAHeader header; memset(&header, 0, sizeof(TGAHeader)); header.imageType = 3; header.width = width; header.height = height; header.depth = 8; header.descriptor = 0x20; fwrite(&header, sizeof(header), 1, fp); fwrite(pxl, sizeof(uint8) * width * height, 1, fp); fclose(fp); return true; } return false; } Update FT_Bitmap *bitmap = &face->glyph->bitmap; /* stride must be aligned on a 32 bit boundary or 4 bytes */ int depth = 8; int stride = ((width * depth + 31) & ~31) >> 3; int bytes = (int)(stride * height); target = gcnew Bitmap(width, height, PixelFormat::Format8bppIndexed); /* create bitmap data, lock pixels to be written. */ BitmapData^ bitmapData = target->LockBits(Rectangle(0, 0, width, height), ImageLockMode::WriteOnly, target->PixelFormat); array<Byte>^ values = gcnew array<Byte>(bytes); Marshal::Copy((IntPtr)bitmap->buffer, values, 0, bytes); Marshal::Copy(values, 0, bitmapData->Scan0, bytes); target->UnlockBits(bitmapData);

    Read the article

  • Win32 window capture with BitBlt not displaying border

    - by user292533
    I have written some C++ code to capture a window to a .bmp file. BITMAPFILEHEADER get_bitmap_file_header(int width, int height) { BITMAPFILEHEADER hdr; memset(&hdr, 0, sizeof(BITMAPFILEHEADER)); hdr.bfType = ((WORD) ('M' << 8) | 'B'); /* is always "BM" */ hdr.bfSize = 0; /* sizeof(BITMAPFILEHEADER) + sizeof(BITMAPINFOHEADER) + (width * height * sizeof(int)); */ hdr.bfReserved1 = 0; hdr.bfReserved2 = 0; hdr.bfOffBits = (DWORD)(sizeof(BITMAPFILEHEADER) + sizeof(BITMAPINFOHEADER)); return hdr; } BITMAPINFO get_bitmap_info(int width, int height) { BITMAPINFO bmi; memset(&bmi.bmiHeader, 0, sizeof(BITMAPINFOHEADER)); /* initialize bitmap header */ bmi.bmiHeader.biSize = sizeof(BITMAPINFOHEADER); bmi.bmiHeader.biWidth = width; bmi.bmiHeader.biHeight = height; bmi.bmiHeader.biPlanes = 1; bmi.bmiHeader.biBitCount = 4 * 8; bmi.bmiHeader.biCompression = BI_RGB; bmi.bmiHeader.biSizeImage = width * height * 4; return bmi; } void get_bitmap_from_window(HWND hWnd, int * imageBuff) { HDC hDC = GetWindowDC(hWnd); SIZE size = get_window_size(hWnd); HDC hMemDC = CreateCompatibleDC(hDC); RECT r; HBITMAP hBitmap = CreateCompatibleBitmap(hDC, size.cx, size.cy); HBITMAP hOld = (HBITMAP)SelectObject(hMemDC, hBitmap); BitBlt(hMemDC, 0, 0, size.cx, size.cy, hDC, 0, 0, SRCCOPY); /* PrintWindow(hWnd, hMemDC, 0); */ BITMAPINFO bmi = get_bitmap_info(size.cx, size.cy); GetDIBits(hMemDC, hBitmap, 0, size.cy, imageBuff, &bmi, DIB_RGB_COLORS); SelectObject(hMemDC, hOld); DeleteDC(hMemDC); ReleaseDC(NULL, hDC); } void save_image(HWND hWnd, char * name) { int * buff; RECT r; SIZE size; GetWindowRect(hWnd, &r); size.cx = r.right-r.left; size.cy = r.bottom-r.top; buff = (int*)malloc(size.cx * size.cy * sizeof(int)); get_bitmap_from_window(hWnd, buff); BITMAPINFO bmi = get_bitmap_info(size.cx, size.cy); BITMAPFILEHEADER hdr = get_bitmap_file_header(size.cx, size.cy); FILE * fout = fopen(name, "w"); fwrite(&hdr, 1, sizeof(BITMAPFILEHEADER), fout); fwrite(&bmi.bmiHeader, 1, sizeof(BITMAPINFOHEADER), fout); fwrite(buff, 1, size.cx * size.cy * sizeof(int), fout); fflush(fout); fclose(fout); free(buff); } It works fine under XP, but under Vista the border of the window is transparent. Using PrintWindow solves the problem, but is unacceptable for performance reasons. Is there a performant code change, or a setting that can be changed, to make the border non-transparent?
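
    A plausible explanation: with desktop composition (Aero) enabled, the glass frame is drawn by the Desktop Window Manager into the composed screen image, not into the window's own device context, so a BitBlt from GetWindowDC() never sees it (PrintWindow asks the window to repaint itself, which is why it works but costs more). One workaround is to blit from the screen DC at the window's rectangle, accepting that whatever overlaps the window on screen is captured too; a sketch:

        RECT r;
        GetWindowRect(hWnd, &r);
        HDC hScreen = GetDC(NULL);                  /* the composed desktop */
        BitBlt(hMemDC, 0, 0, r.right - r.left, r.bottom - r.top,
               hScreen, r.left, r.top, SRCCOPY);
        ReleaseDC(NULL, hScreen);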

    Read the article

< Previous Page | 1 2 3 4 5 6 7 8  | Next Page >