Search Results

Search found 389 results on 16 pages for 'dword'.

Page 7/16 | < Previous Page | 3 4 5 6 7 8 9 10 11 12 13 14  | Next Page >

  • Registry remotely hacked on Win 7, need help tracking the perp

    - by user577229
    I was writing some .VBS code at the office that would allow certain file extensions to be downloaded without a warning dialog on a Win7 x32 system. The system I was writing this on is in a lab on a segmented subnet. All web access is via a proxy server. The only means of accessing my machine is via the internet or from within the lab's MSFT AD domain. While writing and testing my code I found a message of sorts. Upon refreshing the registry to verify my code had changed a DWORD, instead the message HELLO was written and visible in regedit where the DWORD value was called for. I took a screenshot and proceeded to edit my code. This same weird behavior occurred the last time I was writing registry code, except on another internal server. I understand that remote registry access exists for Windows systems. I will block this immediately once I return to the office. What I want to know is: can I trace who made this connection? How would I do this? I suspect the cause of this is the cause of other "odd" behaviors I'm experiencing at work, such as losing control of my Input Director master control for over an hour and unchanged code that all of a sudden fails for no logical reason. These failures occur at funny times, whenever I'm about to give a demonstration of my test code. I know this sounds crazy, however knowledge of the registry component makes this believable. Once the registry can be accessed, the entire system is compromised. Any help or sanity checking is appreciated.

    Read the article

  • Destroy process-less console windows left by Visual Studio debug sessions

    - by jon hanson
    A known bug with security update KB978037 can occur with Visual Studio 2003 (and 2008) where sometimes, if you restart a debugging session on a console app, the console window doesn't get closed even though the owner process no longer exists. The problem is discussed further here: http://stackoverflow.com/questions/2402875/visual-studio-debug-console-sometimes-stays-open-and-is-impossible-to-close These zombie windows then cannot be closed via the Taskbar or the Task Manager, and typically require a power off/on to get rid of them. Over the period of even a single day you can accumulate quite a few of them, which clog up your Taskbar and are generally annoying. I thought I would knock up a simple C++ Win32 utility to attempt to call DestroyWindow() on these windows by passing the window handle as a command-line argument and converting it to an HWND. I'm converting the handle from a string by parsing it as a DWORD, then casting the DWORD to an HWND. This appears to work: if I call GetWindowInfo() on the handle it succeeds. However, calling DestroyWindow() on the handle fails with error 5 (access denied), presumably because the caller process (i.e. my app) doesn't own the window in question. Any ideas as to how I might get rid of the zombie windows, either via the above approach or any other alternative short of rebooting? I'm in a corporate environment so installing/uninstalling updates/service packs etc. isn't an option.
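
    One avenue worth trying, sketched below under the assumption that the handle arrives as a hex string on the command line: since DestroyWindow only works from the thread that owns the window, ask the window to close instead - WM_CLOSE with a short timeout, then EndTask, which is the call the taskbar itself falls back to. This is only a sketch of the approach, not a confirmed fix for these zombie windows.

        // zombiekill.cpp - sketch: try to close a window given its handle on the command line,
        // e.g.  zombiekill 0x00061A2C
        #include <windows.h>
        #include <stdio.h>
        #include <stdlib.h>

        int main(int argc, char* argv[])
        {
            if (argc < 2) { printf("usage: zombiekill <hwnd>\n"); return 1; }

            HWND hwnd = (HWND)(UINT_PTR)strtoul(argv[1], NULL, 0);
            if (!IsWindow(hwnd)) { printf("not a window handle\n"); return 1; }

            // Ask politely first; a window with no owning thread will not answer, so time out quickly.
            DWORD_PTR result = 0;
            SendMessageTimeout(hwnd, WM_CLOSE, 0, 0, SMTO_ABORTIFHUNG, 1000, &result);

            if (IsWindow(hwnd))
            {
                // EndTask is what the taskbar uses; the TRUE force flag is the heavy hammer.
                if (!EndTask(hwnd, FALSE, TRUE))
                    printf("EndTask failed, error %lu\n", GetLastError());
            }
            return IsWindow(hwnd) ? 1 : 0;
        }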

    Read the article

  • Crash generated during destruction of hash_map

    - by Alien01
    I am using hash_map in the application as:

        typedef hash_map<DWORD, CComPtr<IInterfaceXX> > MapDword2Interface;

    In the main application I am using a static instance of this map:

        static MapDword2Interface m_mapDword2Interface;

    I have got one crash dump from one of the client machines which points to a crash while clearing this map. I opened that crash dump and here is the assembly during debugging:

        call std::list<std::pair<unsigned long const ,ATL::CComPtr<IInterfaceXX> >,std::allocator<std::pair<unsigned long const ,ATL::CComPtr<IInterfaceXX> > > >::clear
        mov  eax, dword ptr [CMainApp::m_mapDword2Interface+8 (49XXXXX)]

    Here is the code the crash dump is pointing to. The code below is from the STL <list> file:

        void clear()
        {   // erase all
        #if _HAS_ITERATOR_DEBUGGING
            this->_Orphan_ptr(*this, 0);
        #endif /* _HAS_ITERATOR_DEBUGGING */
            _Nodeptr _Pnext;
            _Nodeptr _Pnode = _Nextnode(_Myhead);
            _Nextnode(_Myhead) = _Myhead;
            _Prevnode(_Myhead) = _Myhead;
            _Mysize = 0;
            for (; _Pnode != _Myhead; _Pnode = _Pnext)
            {   // delete an element
                _Pnext = _Nextnode(_Pnode);
                this->_Alnod.destroy(_Pnode);
                this->_Alnod.deallocate(_Pnode, 1);
            }
        }

    The crash is pointing to the this->_Alnod.destroy(_Pnode); statement in the above code. I am not able to guess what the reason could be. Any ideas? How can I make sure that, even if there is something wrong with the map, it does not crash?
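
    The crash pattern (a static map of CComPtr being cleared) is consistent with the map being destroyed during static/atexit teardown, after COM or the DLL providing IInterfaceXX has already gone away - that is an assumption, not something the dump proves. A defensive sketch, with IUnknown standing in for IInterfaceXX so it is self-contained:

        #include <atlbase.h>
        #include <hash_map>

        // Stand-in for the question's IInterfaceXX so the sketch compiles on its own.
        typedef stdext::hash_map<DWORD, CComPtr<IUnknown> > MapDword2Interface;

        // Call this from a controlled shutdown point (e.g. ExitInstance, before
        // CoUninitialize), instead of letting a static instance clear itself during
        // static destruction, when COM / the owning DLL may already be torn down.
        void ShutdownInterfaceMap(MapDword2Interface& map)
        {
            map.clear();   // CComPtr destructors Release() each interface here
        }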

    Read the article

  • List local printers

    - by vladimir
    Hi all, I am using this routine to list the local printers installed on a machine:

        var
          p: pointer;
          hpi: _PRINTER_INFO_2A;
          hGlobal: cardinal;
          dwNeeded, dwReturned: DWORD;
          bFlag: boolean;
          i: dword;
        begin
          p := nil;
          EnumPrinters(PRINTER_ENUM_LOCAL, nil, 2, p, 0, dwNeeded, dwReturned);
          if (dwNeeded = 0) then exit;
          GetMem(p, dwNeeded);
          if (p = nil) then exit;
          bFlag := EnumPrinters(PRINTER_ENUM_LOCAL, nil, 2, p, dwNeeded, dwNeeded, dwReturned);
          if (not bFlag) then exit;
          CbLblPrinterPath.Properties.Items.Clear;
          for i := 0 to dwReturned - 1 do
          begin
            CbLblPrinterPath.Properties.Items.Add(TPrinterInfos(p^)[i].pPrinterName);
          end;
          FreeMem(p);

    TPrinterInfos(p^)[i].pPrinterName returns just 'P' for the printer name. I have PdfCreator installed as a printer. TPrinterInfos is an array of _PRINTER_INFO_2A. How can I fix this?
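
    One thing worth checking: getting just 'P' back is the classic symptom of pairing the ANSI PRINTER_INFO_2A layout with the Unicode EnumPrintersW entry point - a UTF-16 string read as ANSI stops at the first zero byte. For comparison, a minimal Win32/C++ sketch with the A structure and the A entry point matched throughout; only a sketch of the call pattern, not a drop-in Delphi fix:

        #include <windows.h>
        #include <winspool.h>
        #include <stdio.h>
        #include <stdlib.h>
        #pragma comment(lib, "winspool.lib")

        int main()
        {
            DWORD needed = 0, returned = 0;
            EnumPrintersA(PRINTER_ENUM_LOCAL, NULL, 2, NULL, 0, &needed, &returned);
            if (needed == 0) return 0;

            BYTE* buf = (BYTE*)malloc(needed);
            if (!EnumPrintersA(PRINTER_ENUM_LOCAL, NULL, 2, buf, needed, &needed, &returned))
            { free(buf); return 1; }

            PRINTER_INFO_2A* info = (PRINTER_INFO_2A*)buf;   // A struct with the A entry point
            for (DWORD i = 0; i < returned; i++)
                printf("%s\n", info[i].pPrinterName);

            free(buf);
            return 0;
        }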

    Read the article

  • CEIL is one too high for exact integer divisions

    - by Synetech
    This morning I lost a bunch of files, but because the volume they were on was both internally and externally defragmented, all of the information necessary for a 100% recovery is available; I just need to fill in the FAT where required. I wrote a program to do this and tested it on a copy of the FAT that I dumped to a file, and it works perfectly except that for a few of the files (17 out of 526), the FAT chain is one single cluster too long, and thus cross-linked with the next file. Fortunately I know exactly what the problem is. I used ceil in my EOF calculation because even a single byte over will require a whole extra cluster:

        // Cluster  is the starting cluster of the file
        // Size     is the size (in bytes) of the file
        // BPC      is the number of bytes per cluster
        // NumClust is the number of clusters in the file
        // EOF      is the last cluster of the file's FAT chain
        DWORD NumClust = ceil( (float)(Size / BPC) );
        DWORD EOF = Cluster + NumClust;

    This algorithm works fine for everything except files whose size happens to be exactly a multiple of the cluster size, in which case they end up being one cluster too long. I thought about it for a while but am at a loss as to a way to do this. It seems like it should be simple, but somehow it is surprisingly tricky. What formula would work for files of any size?
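
    A sketch of the usual all-integer way to round up: the cast in ceil((float)(Size / BPC)) happens after the integer division has already truncated, so ceil never sees a fraction, and EOF = Cluster + NumClust is one past the last cluster whenever NumClust itself is right - which would explain why only the exact multiples showed up wrong. Assuming the same variables as the question:

        #include <windows.h>

        // Integer ceiling division: one extra cluster only when there is a remainder.
        // (Watch for overflow if Size can approach 0xFFFFFFFF; widen to 64 bits if so.)
        DWORD ClustersForSize(DWORD Size, DWORD BPC)
        {
            return (Size + BPC - 1) / BPC;   // exact multiple -> no extra cluster
        }

        // Last cluster of the chain (what the question calls EOF), for a non-empty file:
        //   DWORD NumClust    = ClustersForSize(Size, BPC);
        //   DWORD LastCluster = Cluster + NumClust - 1;   // not Cluster + NumClust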

    Read the article

  • Closing thread using ExitThread - C

    - by Jamie Keeling
    I have a simple program that creates a thread, loops twenty times and then makes a call to close itself and perform the necessary cleanup. When I debug the program it reaches the ExitThread() call and pauses, ignoring the printf() I have set up after it to signal to me that it has closed. Is this normal, or am I forgetting to do something? I'm new to threading in C.

    Main():

        void main()
        {
            Time t;
            int i = 0;
            StartTimer();
            for (i = 0; i < 20; i++)
            {
                t = GetTime();
                printf("%d.%.3d\n", t.seconds, t.milliseconds);
                Sleep(100);
            }
            StopTimer();
        }

    Thread creation:

        void StartTimer()
        {
            DWORD threadId;
            seconds = 0;
            milliseconds = 0;
            // Create child thread
            hThread = CreateThread(
                NULL,         // lpThreadAttributes (default)
                0,            // dwStackSize (default)
                ThreadFunc,   // lpStartAddress
                NULL,         // lpParameter
                0,            // dwCreationFlags
                &threadId     // lpThreadId (returned by function)
            );
            // Check child thread was created successfully
            if (hThread == NULL)
            {
                printf("Error creating thread\n");
            }
        }

    Thread close:

        void StopTimer()
        {
            DWORD exitCode;
            if (GetExitCodeThread(hThread, &exitCode) != 0)
            {
                ExitThread(exitCode);
                printf("Thread closed");
                if (CloseHandle(hThread))
                {
                    printf("Handle closed");
                }
            }
        }
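
    A sketch of what StopTimer is probably meant to do: ExitThread always exits the thread that calls it, so calling it from main is exactly why the program stops at that line and never reaches the printf. The usual shape is to signal the worker, wait for it, then close the handles; the hStopEvent name below is an assumption (create it in StartTimer with CreateEvent).

        #include <windows.h>
        #include <stdio.h>

        static HANDLE hThread;
        static HANDLE hStopEvent;   // CreateEvent(NULL, TRUE, FALSE, NULL) in StartTimer

        void StopTimer(void)
        {
            SetEvent(hStopEvent);                       // ask the timer thread to finish
            WaitForSingleObject(hThread, INFINITE);     // wait until ThreadFunc returns

            DWORD exitCode;
            if (GetExitCodeThread(hThread, &exitCode) != 0)
                printf("Thread closed with code %lu\n", exitCode);

            CloseHandle(hThread);
            CloseHandle(hStopEvent);
            printf("Handle closed\n");
        }

        // Inside ThreadFunc, replace the endless loop with something like:
        //   while (WaitForSingleObject(hStopEvent, 100) == WAIT_TIMEOUT) { /* tick */ }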

    Read the article

  • Reverse reading WORD from a binary file?

    - by Angel
    Hi, I have a structure:

        struct JFIF_HEADER
        {
            WORD marker[2];      // = 0xFFD8FFE0
            WORD length;         // = 0x0010
            BYTE signature[5];   // = "JFIF\0"
            BYTE versionhi;      // = 1
            BYTE versionlo;      // = 1
            BYTE xyunits;        // = 0
            WORD xdensity;       // = 1
            WORD ydensity;       // = 1
            BYTE thumbnwidth;    // = 0
            BYTE thumbnheight;   // = 0
        };

    This is how I read it from the file:

        HANDLE file = CreateFile(filename, GENERIC_READ, FILE_SHARE_READ, NULL,
                                 OPEN_EXISTING, FILE_ATTRIBUTE_NORMAL, 0);
        DWORD tmp = 0;
        DWORD size = GetFileSize(file, &tmp);
        BYTE *DATA = new BYTE[size];
        ReadFile(file, DATA, size, &tmp, 0);
        JFIF_HEADER header;
        memcpy(&header, DATA, sizeof(JFIF_HEADER));

    This is how the beginning of my file looks in a hex editor:

        0xFF 0xD8 0xFF 0xE0 0x00 0x10 0x4A 0x46 0x49 0x46 0x00 0x01 0x01 0x00 0x00 0x01

    When I print header.marker, it shows exactly what it should (0xFFD8FFE0). But when I print header.length, it shows 0x1000 instead of 0x0010. The same thing happens with xdensity and ydensity. Why do I get wrong data when reading a WORD? Thank you.
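
    JFIF stores its multi-byte fields big-endian, while x86 is little-endian, so the bytes 00 10 land in a little-endian WORD as 0x1000. A small sketch of swapping the affected fields after the memcpy:

        #include <windows.h>

        static WORD SwapWord(WORD v)
        {
            return (WORD)((v >> 8) | (v << 8));   // big-endian file value -> native
        }

        // after memcpy(&header, DATA, sizeof(JFIF_HEADER)):
        header.length   = SwapWord(header.length);     // 0x1000 -> 0x0010
        header.xdensity = SwapWord(header.xdensity);
        header.ydensity = SwapWord(header.ydensity);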

    Read the article

  • Delphi: How to avoid EIntOverflow underflow when subtracting?

    - by Ian Boyd
    Microsoft already says, in the documentation for GetTickCount, that you should never compare tick counts to check if an interval has passed, e.g.:

    Incorrect (pseudo-code):

        DWORD endTime = GetTickCount + 10000; // 10 s from now
        ...
        if (GetTickCount > endTime)
            break;

    The above code is bad because it is susceptible to rollover of the tick counter. For example, assume that the clock is near the end of its range:

        endTime = 0xfffffe00 + 10000
                = 0x00002510; // 9,488 decimal

    Then you perform your check:

        if (GetTickCount > endTime)

    which is satisfied immediately, since GetTickCount is larger than endTime:

        if (0xfffffe01 > 0x00002510)

    The solution: instead you should always subtract the two time values:

        DWORD startTime = GetTickCount;
        ...
        if (GetTickCount - startTime) > 10000 // if it's been 10 seconds
            break;

    Looking at the same math:

        if (GetTickCount - startTime) > 10000
        if (0xfffffe01 - 0xfffffe00) > 10000
        if (1 > 10000)

    which is all well and good in C/C++, where the compiler behaves a certain way. But what about Delphi? When I perform the same math in Delphi, with overflow checking on ({$Q+}, {$OVERFLOWCHECKS ON}), the subtraction of the two tick counts generates an EIntOverflow exception when the TickCount rolls over:

        if (0x00000100 - 0xffffff00) > 10000
        0x00000100 - 0xffffff00 = 0x00000200

    What is the intended solution for this problem? Edit: I've tried to temporarily turn off OVERFLOWCHECKS:

        {$OVERFLOWCHECKS OFF}
        delta = GetTickCount - startTime;
        {$OVERFLOWCHECKS ON}

    But the subtraction still throws an EIntOverflow exception. Is there a better solution, involving casts and larger intermediate variable types?
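
    For reference, and only as a sketch of why the C idiom is immune: in C/C++, unsigned 32-bit arithmetic is defined to wrap modulo 2^32, so the elapsed-time subtraction is safe across a rollover without any overflow trap. The Delphi-side fix is a separate question; this just shows the behavior the subtraction pattern is counting on.

        #include <windows.h>
        #include <stdio.h>

        int main()
        {
            DWORD startTime = 0xFFFFFE00;        // pretend GetTickCount() just before rollover
            DWORD now       = 0x00000100;        // pretend GetTickCount() just after rollover
            DWORD elapsed   = now - startTime;   // wraps to 0x300 = 768 ms, no exception
            printf("elapsed: %lu ms\n", elapsed);
            return 0;
        }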

    Read the article

  • C++ NetUserAdd() not working?

    - by Brett Powell
    I posted earlier about how to do this, got some great replies, and have managed to get the code written based off the MSDN example. However, it does not seem to be working properly. It's printing out the ERROR_ACCESS_DENIED message, but I'm not sure why, as I am running it as a full admin. I was initially trying to create a USER_PRIV_ADMIN, but MSDN says it can only use USER_PRIV_USER; sadly neither works. I'm hoping someone can spot a mistake or has an idea. Thanks!

        void AddRDPUser()
        {
            USER_INFO_1 ui;
            DWORD dwLevel = 1;
            DWORD dwError = 0;
            NET_API_STATUS nStatus;

            ui.usri1_name = L"DummyUserAccount";
            ui.usri1_password = L"a2cDz3rQpG8";
            //ignored by NetUserAdd
            //ui.usri1_password_age = -1;
            ui.usri1_priv = USER_PRIV_USER; //USER_PRIV_ADMIN;
            ui.usri1_home_dir = NULL;
            ui.usri1_comment = NULL;
            ui.usri1_flags = UF_SCRIPT;
            ui.usri1_script_path = NULL;

            nStatus = NetUserAdd(NULL, dwLevel, (LPBYTE)&ui, &dwError);
            switch (nStatus)
            {
                case NERR_Success:
                    Msg("SUCCESS!\n");
                    break;
                case NERR_InvalidComputer:
                    fprintf(stderr, "A system error has occurred: NERR_InvalidComputer\n");
                    break;
                case NERR_NotPrimary:
                    fprintf(stderr, "A system error has occurred: NERR_NotPrimary\n");
                    break;
                case NERR_GroupExists:
                    fprintf(stderr, "A system error has occurred: NERR_GroupExists\n");
                    break;
                case NERR_UserExists:
                    fprintf(stderr, "A system error has occurred: NERR_UserExists\n");
                    break;
                case NERR_PasswordTooShort:
                    fprintf(stderr, "A system error has occurred: NERR_PasswordTooShort\n");
                    break;
                case ERROR_ACCESS_DENIED:
                    fprintf(stderr, "A system error has occurred: ERROR_ACCESS_DENIED\n");
                    break;
            }
        }
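
    One thing to check before the code itself: under UAC, "running as a full admin" usually still means the process got the filtered, non-elevated token unless it was explicitly launched elevated, and NetUserAdd then comes back with ERROR_ACCESS_DENIED. A small sketch of querying what the process actually has:

        #include <windows.h>
        #include <stdio.h>

        BOOL IsElevated(void)
        {
            HANDLE hToken = NULL;
            TOKEN_ELEVATION elevation = {0};
            DWORD cb = sizeof(elevation);
            BOOL elevated = FALSE;

            if (OpenProcessToken(GetCurrentProcess(), TOKEN_QUERY, &hToken))
            {
                if (GetTokenInformation(hToken, TokenElevation, &elevation, cb, &cb))
                    elevated = elevation.TokenIsElevated != 0;
                CloseHandle(hToken);
            }
            return elevated;
        }

        // if (!IsElevated()) fprintf(stderr, "Re-launch elevated before calling NetUserAdd\n");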

    Read the article

  • How come some C++ functions with unspecified linkage build with C linkage?

    - by christoffer
    This is something that makes me fairly perplexed. I have a C++ file that implements a set of functions, and a header file that defines prototypes for them. When building with Visual Studio or MinGW gcc, I get linking errors on two of the functions, and adding an extern "C" qualifier resolves the error. How is this possible?

    Header file, "some_header.h":

        // Definition of struct DEMO_GLOBAL_DATA omitted
        DWORD WINAPI ThreadFunction(LPVOID lpData);
        void WriteLogString(void *pUserData, const char *pString, unsigned long nStringLen);
        void CheckValid(DEMO_GLOBAL_DATA *pData);
        int HandleStart(DEMO_GLOBAL_DATA *pData, TCHAR *pLogFileName);
        void HandleEnd(DEMO_GLOBAL_DATA *pData);

    C++ file, "some_implementation.cpp":

        #include "some_header.h"
        DWORD WINAPI ThreadFunction(LPVOID lpData) { /* omitted */ }
        void WriteLogString(void *pUserData, const char *pString, unsigned long nStringLen) { /* omitted */ }
        void CheckValid(DEMO_GLOBAL_DATA *pData) { /* omitted */ }
        int HandleStart(DEMO_GLOBAL_DATA *pData, TCHAR *pLogFileName) { /* omitted */ }
        void HandleEnd(DEMO_GLOBAL_DATA *pData) { /* omitted */ }

    The implementations compile without warnings, but when linking with the UI code that calls these, I get:

        error LNK2001: unresolved external symbol "int __cdecl HandleStart(struct _DEMO_GLOBAL_DATA *, wchar_t *)
        error LNK2001: unresolved external symbol "void __cdecl CheckValid(struct _DEMO_MAIN_GLOBAL_DATA *

    What really confuses me now is that only these two functions (HandleStart and CheckValid) seem to be built with C linkage. Explicitly adding extern "C" declarations for only these two resolves the linking error, and the application builds and runs. Adding extern "C" to some other function, such as HandleEnd, introduces a new linking error, so that one is obviously compiled correctly. The implementation file is never modified in any of this, only the prototypes.
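
    One possible explanation to rule out, purely a guess from the error text: the two unresolved symbols name struct _DEMO_GLOBAL_DATA and struct _DEMO_MAIN_GLOBAL_DATA, i.e. two different types, and because C++ mangles parameter types into symbol names, a header mismatch between the UI code and the implementation would produce different symbols for just those functions; extern "C" only appears to fix it because it strips the parameter types out of the name. A two-file sketch of the effect (all type names here are illustrative, not the question's real headers):

        // a.cpp -- what the implementation file actually compiles against
        struct DEMO_GLOBAL_DATA { int x; };
        int HandleStart(DEMO_GLOBAL_DATA*, wchar_t*) { return 0; }
        // mangled with DEMO_GLOBAL_DATA in the name

        // b.cpp -- what the UI code sees through a *different* header
        struct DEMO_MAIN_GLOBAL_DATA { int x; };
        int HandleStart(DEMO_MAIN_GLOBAL_DATA*, wchar_t*);
        // different parameter type => different mangled name => LNK2001

        // extern "C" int HandleStart(...) would reduce both sides to plain _HandleStart,
        // hiding the mismatch rather than fixing it. Inspect with: dumpbin /symbols a.obj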

    Read the article

  • How do you convert bytes of bitmap into x, y location of pixels?

    - by Jon
    I have a Win32 program that creates a bitmap screenshot. I am trying to figure out the x and y coordinates of the bmBits. Below is the code I have so far:

        UINT32 nScreenX = GetSystemMetrics(SM_CXSCREEN);
        UINT32 nScreenY = GetSystemMetrics(SM_CYSCREEN);
        HDC hdc = GetDC(NULL);
        HDC hdcScreen = CreateCompatibleDC(hdc);
        HBITMAP hbmpScreen = CreateDIBSection(hdcDesk, (BITMAPINFO*)&bitmapInfo.bmiHeader,
                                              DIB_RGB_COLORS, &bitmapDataPtr, NULL, 0);
        SelectObject(hdcScreen, hbmpScreen);
        BitBlt(hdcScreen, 0, 0, nScreenX, nScreenY, hdc, 0, 0, SRCCOPY);
        ReleaseDC(NULL, hdc);

        BITMAP bmpScreen;
        GetObject(hbmpScreen, sizeof(bmpScreen), &bmpScreen);

        DWORD *pScreenPixels = (DWORD*)bmpScreen.bmBits;
        UINT32 x = 0;
        UINT32 y = 0;
        UINT32 nCntPixels = nScreenX * nScreenY;
        for (int n = 0; n < nCntPixels; n++)
        {
            x = n % nScreenX;
            y = n / nScreenX;
            // do stuff with the x and y vals
        }

    The code seems correct to me, but when I use it the x and y values appear to be off. Where does the first pixel of bmBits start, i.e. when x and y are both 0? Is that the top left, bottom left, bottom right or top right? Thanks.
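
    A sketch of the indexing that usually matches what CreateDIBSection hands back: with a positive biHeight the DIB is stored bottom-up (the first row in memory is the bottom scanline of the screen), and each row is padded to a 4-byte boundary, which at 32 bpp is exactly nScreenX DWORDs. Assuming 32 bpp and a positive biHeight, as the question's DWORD-per-pixel cast implies:

        // Continues the question's code: bmpScreen, nScreenX, nScreenY as above.
        DWORD* pixels = (DWORD*)bmpScreen.bmBits;

        for (UINT32 row = 0; row < nScreenY; row++)
        {
            for (UINT32 x = 0; x < nScreenX; x++)
            {
                // Row 0 in memory is the *bottom* scanline, so flip to get screen coordinates.
                UINT32 y = nScreenY - 1 - row;
                DWORD pixel = pixels[row * nScreenX + x];   // 32bpp: stride is nScreenX DWORDs
                // ... do stuff with (x, y, pixel)
            }
        }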

    Read the article

  • How can I dial GPRS/EDGE in Win CE

    - by brontes
    Hello all. I am developing an application in Python on Windows CE which needs a connection to the internet (via GPRS/EDGE). When I turn on the device the internet connection is not active; it becomes active if I open Internet Explorer. I would like to activate the connection from my application. I'm trying to do this with the RasDial function via the ctypes library, but I can't get it to work. Is this the right way, or should I do something else? Below is my current code. The RasDial function keeps returning error 87 – Invalid parameter. I don't know anymore what is wrong with it. I would really appreciate any kind of help. Thanks in advance.

        # encoding: utf-8
        import ppygui as gui
        from ctypes import *
        import os

        class MainFrame(gui.CeFrame):
            def __init__(self, parent=None):
                gui.CeFrame.__init__(self, title=u"Zgodovina dokumentov", menu="Menu")
                DWORD = c_ulong
                TCHAR = c_wchar
                ULONG_PTR = c_ulong

                class RASDIALPARAMS(Structure):
                    _fields_ = [("dwSize", DWORD),
                                ("szEntryName", TCHAR*21),
                                ("szPhoneNumber", TCHAR*129),
                                ("szCallbackNumber", TCHAR*49),
                                ("szUserName", TCHAR*257),
                                ("szPassword", TCHAR*257),
                                ("szDomain", TCHAR*16),
                               ]
                try:
                    param = RASDIALPARAMS()
                    param.dwSize = 1462  # also tried 1464 and sizeof(RASDIALPARAMS()); makes no difference
                    param.szEntryName = u"My Connection"
                    param.szPhoneNumber = u"0"
                    param.szCallbackNumber = u"0"
                    param.szUserName = u"0"
                    param.szPassword = u"0"
                    param.szDomain = u"0"
                    iNasConn = c_ulong(0)
                    ras = windll.coredll.RasDial(None, None, param, c_ulong(0xFFFFFFFF),
                                                 c_void_p(self._w32_hWnd), byref(iNasConn))
                    print ras, repr(iNasConn)  # this prints 87 c_ulong(0L)
                except Exception, e:
                    print "Error"
                    print e

        if __name__ == '__main__':
            app = gui.Application(MainFrame(None))  # create an application bound to our main frame instance
            app.run()  # launch the app!

    Read the article

  • How to backup using backup API's in c++

    - by user1603185
    I am writing an application used to back up some specified files, therefore using the backup API calls, i.e. CreateFile, BackupRead and WriteFile. I am getting "Access violation reading location" errors. I have attached the code below.

        #include <windows.h>

        int main()
        {
            HANDLE hInput, hOutput;
            // m_filename is a variable holding the file path to read from
            hInput = CreateFile(L"C:\\Key.txt", GENERIC_READ, 0, NULL,
                                OPEN_EXISTING, FILE_FLAG_BACKUP_SEMANTICS, NULL);
            // strLocation contains the path of the file I want to create.
            hOutput = CreateFile(L"C:\\tmp\\", GENERIC_WRITE, NULL, NULL,
                                 CREATE_ALWAYS, NULL, NULL);

            DWORD dwBytesToRead = 1024 * 1024 * 10;
            BYTE *buffer = new BYTE[dwBytesToRead];
            BOOL bReadSuccess = false, bWriteSuccess = false;
            DWORD dwBytesRead, dwBytesWritten;
            LPVOID lpContext;

            // Now comes the important bit:
            do
            {
                bReadSuccess = BackupRead(hInput, buffer, sizeof(BYTE) * dwBytesToRead,
                                          &dwBytesRead, false, true, &lpContext);
                bWriteSuccess = WriteFile(hOutput, buffer, sizeof(BYTE) * dwBytesRead,
                                          &dwBytesWritten, NULL);
            } while (dwBytesRead == dwBytesToRead);

            return 0;
        }

    Can anyone suggest how to use these APIs? Thanks.
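
    Two details of the BackupRead contract are the most likely source of the access violation, sketched below against the question's variables: lpContext must be NULL before the first call (BackupRead reads it), and a final call with bAbort = TRUE releases the context afterwards. The output handle should also name a file rather than the directory C:\tmp\; the .bak name here is just an illustration.

        // Continues the question's code: hInput, buffer, dwBytesToRead as above.
        HANDLE hOutput = CreateFile(L"C:\\tmp\\Key.bak", GENERIC_WRITE, 0, NULL,
                                    CREATE_ALWAYS, FILE_ATTRIBUTE_NORMAL, NULL);

        LPVOID lpContext = NULL;                 // must start as NULL -- BackupRead reads it
        DWORD dwBytesRead = 0, dwBytesWritten = 0;
        BOOL ok;

        do
        {
            ok = BackupRead(hInput, buffer, dwBytesToRead, &dwBytesRead,
                            FALSE, TRUE, &lpContext);
            if (!ok || dwBytesRead == 0)
                break;
            WriteFile(hOutput, buffer, dwBytesRead, &dwBytesWritten, NULL);
        } while (dwBytesRead == dwBytesToRead);

        // Final call with bAbort = TRUE frees the context BackupRead allocated.
        BackupRead(hInput, NULL, 0, &dwBytesRead, TRUE, TRUE, &lpContext);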

    Read the article

  • Improving File Read Performance (single file, C++, Windows)

    - by david
    I have large (hundreds of MB or more) files that I need to read blocks from using C++ on Windows. Currently the relevant functions are:

        errorType LargeFile::read(void* data_out, __int64 start_position, __int64 size_bytes) const
        {
            if (!m_open)
            {
                // return error
            }
            else
            {
                seekPosition(start_position);
                DWORD bytes_read;
                BOOL result = ReadFile(m_file, data_out, DWORD(size_bytes), &bytes_read, NULL);
                if (size_bytes != bytes_read || result != TRUE)
                {
                    // return error
                }
            }
            // return no error
        }

        void LargeFile::seekPosition(__int64 position) const
        {
            LARGE_INTEGER target;
            target.QuadPart = LONGLONG(position);
            SetFilePointerEx(m_file, target, NULL, FILE_BEGIN);
        }

    The performance of the above does not seem to be very good. Reads are in 4K blocks of the file. Some reads are coherent; most are not. A couple of questions: Is there a good way to profile the reads? What might improve the performance? For example, would sector-aligning the data be useful? I'm relatively new to file I/O optimization, so suggestions or pointers to articles/tutorials would be helpful.
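
    One alternative worth profiling, and only offered as a sketch: map the file and let the memory manager page blocks in on demand, which removes the per-read seek/ReadFile round trip for scattered 4K reads. (For profiling the existing path, wrapping each read with QueryPerformanceCounter, or capturing a file-I/O trace with xperf/Windows Performance Recorder, is the usual starting point.) In real code the mapping would be created once and reused, not per call as shown here.

        #include <windows.h>
        #include <string.h>

        // Sketch: read a block out of a large, already-open file through a file mapping.
        // Error handling trimmed. View offsets must be multiples of the allocation
        // granularity (typically 64K), so round down and adjust.
        bool ReadBlock(HANDLE hFile, unsigned __int64 offset, void* out, size_t size)
        {
            HANDLE hMap = CreateFileMapping(hFile, NULL, PAGE_READONLY, 0, 0, NULL);
            if (!hMap) return false;

            SYSTEM_INFO si; GetSystemInfo(&si);
            unsigned __int64 base = offset - (offset % si.dwAllocationGranularity);
            size_t delta = (size_t)(offset - base);

            void* view = MapViewOfFile(hMap, FILE_MAP_READ,
                                       (DWORD)(base >> 32), (DWORD)(base & 0xFFFFFFFF),
                                       delta + size);
            if (view)
            {
                memcpy(out, (char*)view + delta, size);
                UnmapViewOfFile(view);
            }
            CloseHandle(hMap);
            return view != NULL;
        }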

    Read the article

  • C++ read registry string value in char*

    - by Sunny
    I'm reading a registry value like this:

        char mydata[2048];
        DWORD dataLength = sizeof(mydata);
        DWORD dwType = REG_SZ;
        // ..... open key, etc.
        RegQueryValueEx(hKey, keyName, 0, &dwType, (BYTE*)mydata, &dataLength);

    My problem is that after this, mydata's content looks like [63, 00, 3A, 00, 5C, 00, ...], i.e. it looks like Unicode. I need to convert this somehow to a normal char array, without the [00]s, as they break a simple logging function I have. I.e. if I call WriteMessage(mydata), it outputs only "c", which is the first char in the registry. I have calls to this logging function all over the place, so I'd better not modify it, but somehow "fix" the registry value. Here is the log function:

        void Logger::WriteMessage(const char *msg)
        {
            time_t now = time(0);
            struct tm* tm = localtime(&now);
            std::ofstream logFile;
            logFile.open(filename, std::ios::out | std::ios::app);
            if (logFile.is_open())
            {
                logFile << tm->tm_mon << '/' << tm->tm_mday << '/' << tm->tm_year << ' ';
                logFile << tm->tm_hour << ':' << tm->tm_min << ':' << tm->tm_sec << "> ";
                logFile << msg << "\n";
                logFile.close();
            }
        }
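
    The [63, 00, 3A, 00, ...] pattern is UTF-16: in a Unicode build, RegQueryValueEx resolves to RegQueryValueExW and hands back wide characters, and anything treating the buffer as char* stops at the first 00 byte. Two sketches of low-impact fixes, assuming keyName is declared to match the A or W call being used:

        // Option 1: ask for ANSI explicitly and keep the existing char buffer.
        RegQueryValueExA(hKey, keyName, 0, &dwType, (BYTE*)mydata, &dataLength);

        // Option 2: keep the Unicode query and convert before logging.
        wchar_t wdata[2048];
        DWORD wlen = sizeof(wdata);
        RegQueryValueExW(hKey, keyNameW, 0, &dwType, (BYTE*)wdata, &wlen);

        char mydataA[2048];
        WideCharToMultiByte(CP_ACP, 0, wdata, -1, mydataA, sizeof(mydataA), NULL, NULL);
        // then pass mydataA to Logger::WriteMessage() unchanged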

    Read the article

  • throwing exception from APCProc crashes program

    - by lazy_banana
    I started to do some research on how to terminate a multithreaded application properly and I found these two posts (first, second) about how to use QueueUserAPC to signal other threads to terminate. I thought I should give it a try, and the application keeps crashing when I throw the exception from the APC proc.

    Code:

        #include <stdio.h>
        #include <windows.h>

        class ExitException
        {
        public:
            char *desc;
            DWORD exit_code;
            ExitException(char *desc, int exit_code): desc(desc), exit_code(exit_code) {}
        };

        // I use this class to check if objects are destructed upon termination
        class Test
        {
        public:
            char *s;
            Test(char *s): s(s) { printf("%s ctor\n", s); }
            ~Test()             { printf("%s dctor\n", s); }
        };

        DWORD CALLBACK ThreadProc(void *useless)
        {
            try
            {
                Test t("thread_test");
                SleepEx(INFINITE, true);
                return 0;
            }
            catch (ExitException &e)
            {
                printf("Thread exits\n%s %lu", e.desc, e.exit_code);
                return e.exit_code;
            }
        }

        void CALLBACK exit_apc_proc(ULONG_PTR param)
        {
            puts("In APCProc");
            ExitException e("Application exit signal!", 1);
            throw e;
            return;
        }

        int main()
        {
            HANDLE thread = CreateThread(NULL, 0, ThreadProc, NULL, 0, NULL);
            Sleep(1000);
            QueueUserAPC(exit_apc_proc, thread, 0);
            WaitForSingleObject(thread, INFINITE);
            puts("main: bye");
            return 0;
        }

    My question is: why does this happen? I use MinGW for compilation and my OS is 64-bit. Can this be the reason? I read that you shouldn't call QueueUserAPC from a 32-bit app for a thread which runs in a 64-bit process or vice versa, but this shouldn't be the case here.
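
    A sketch of the usual safer pattern, which throws nothing across the SleepEx frame: the APC only sets a flag, and the alertable wait returning (with WAIT_IO_COMPLETION) is the thread's cue to leave its loop and return normally. The structure below mirrors the question's code; whether the original crash is specific to MinGW's unwinding is left open.

        #include <windows.h>
        #include <stdio.h>

        static volatile LONG g_exitRequested = 0;

        void CALLBACK exit_apc_proc(ULONG_PTR /*param*/)
        {
            // Do the minimum here; don't throw through the APC dispatch frames.
            InterlockedExchange(&g_exitRequested, 1);
        }

        DWORD CALLBACK ThreadProc(void* /*unused*/)
        {
            while (!g_exitRequested)
            {
                // Alertable wait: returns WAIT_IO_COMPLETION once an APC has run.
                SleepEx(INFINITE, TRUE);
            }
            puts("Thread exits cleanly");   // destructors of locals run normally
            return 1;
        }

        // main() stays as in the question: CreateThread, QueueUserAPC(exit_apc_proc, ...),
        // WaitForSingleObject(thread, INFINITE).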

    Read the article

  • Running exe built in VC++ on XP and WIN7

    - by rajivpradeep
    sprintf_s(cmd, "%c:\index.exe", driver); STARTUPINFOA si; PROCESS_INFORMATION pi; ::SecureZeroMemory(&si, sizeof(STARTUPINFO)); ::SecureZeroMemory(&pi, sizeof(PROCESS_INFORMATION)); si.dwFlags = STARTF_USESHOWWINDOW | STARTF_USESTDHANDLES; si.wShowWindow = SW_SHOW; RES = ::CreateProcessA(NULL, cmd, NULL, NULL, NULL, NULL, NULL, NULL, &si, &pi); DWORD exitcode; DWORD err; do { Sleep(100); GetExitCodeProcess(pi.hProcess, &exitcode); } while (exitcode !=0); GetExitCodeThread(pi.hThread, &exitcode); RES = TerminateThread(pi.hThread, exitcode); if (RES == 0) err = GetLastError(); I am trying to run a flash file, the application is built in VS 2008 , on win 7. The application works well on WIN7 but fails in XP. Ie the application launches but doesn't complete the task. I see the application running in Task Manager

    Read the article

  • Encrypting a file in win API

    - by Kristian
    Hi, I have to write a Windows API program that encrypts a file by adding three to each character. So I wrote this; now it's not doing anything... where did I go wrong?

        #include "stdafx.h"
        #include <windows.h>

        int _tmain(int argc, _TCHAR* argv[])
        {
            HANDLE filein, fileout;
            filein  = CreateFile(L"d:\\test.txt", GENERIC_READ, 0, NULL,
                                 OPEN_ALWAYS, FILE_ATTRIBUTE_NORMAL, NULL);
            fileout = CreateFile(L"d:\\test.txt", GENERIC_WRITE, 0, NULL,
                                 CREATE_ALWAYS, FILE_ATTRIBUTE_NORMAL, NULL);
            DWORD really; // later this will be used to store how many bytes I succeeded in reading
            do
            {
                BYTE x[1024]; // the buffer I'm using to read in
                ReadFile(filein, x, 1024, &really, NULL);
                for (int i = 0; i < really; i++)
                {
                    x[i] = (x[i] + 3) % 256;
                }
                DWORD really2;
                WriteFile(fileout, x, really, &really2, NULL);
            } while (really == 1024);
            CloseHandle(filein);
            CloseHandle(fileout);
            return 0;
        }

    And if I'm right, how can I know that it worked?
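
    A sketch of the smallest changes that make the loop workable, assuming the intent is to write the encrypted bytes to a second file: both CreateFile calls currently name d:\test.txt with a share mode of 0, so the second open fails (and CREATE_ALWAYS would truncate the input anyway); write to another path and check the handles against INVALID_HANDLE_VALUE. The output name below is illustrative.

        HANDLE filein  = CreateFile(L"d:\\test.txt", GENERIC_READ, FILE_SHARE_READ, NULL,
                                    OPEN_EXISTING, FILE_ATTRIBUTE_NORMAL, NULL);
        HANDLE fileout = CreateFile(L"d:\\test.enc", GENERIC_WRITE, 0, NULL,
                                    CREATE_ALWAYS, FILE_ATTRIBUTE_NORMAL, NULL);

        if (filein == INVALID_HANDLE_VALUE || fileout == INVALID_HANDLE_VALUE)
            return 1;   // CreateFile reports failure with INVALID_HANDLE_VALUE, not NULL

        BYTE x[1024];
        DWORD really = 0, really2 = 0;
        do
        {
            if (!ReadFile(filein, x, sizeof(x), &really, NULL) || really == 0)
                break;
            for (DWORD i = 0; i < really; i++)
                x[i] = (BYTE)((x[i] + 3) % 256);   // the +3 "encryption" from the question
            WriteFile(fileout, x, really, &really2, NULL);
        } while (really == sizeof(x));

        CloseHandle(filein);
        CloseHandle(fileout);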

    Read the article

  • Convert one delphi code line to c++

    - by user1332636
    How can I write this line in C++? This is the code in Delphi:

        type
          TSettings = record
            sFileName: String[50];
            siInstallFolder: Byte;
            bRunFile: Boolean;
            ...
          end;
        ..
        var
          i: dword;
          sZdData: PChar;
          Settings: TSettings;
        begin
          ....
          ZeroMemory(@Settings, sizeof(TSettings));
          Settings := TSettings(Pointer(@sZdData[i])^); // this code to c++

    C++ code (I hope the rest is OK):

        struct TSettings
        {
            char sFileName[50];
            byte siInstallFolder;
            bool bRunFile;
            ...
        } Settings;
        ...
        DWORD i;
        LPBYTE sZdData;
        ZeroMemory(&Settings, sizeof(TSettings));
        Settings = ?????;  // I'm failing here, I don't know what to do
                           // I need the same as in the Delphi code above
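
    Assuming sZdData points at a raw byte buffer and i is the offset where the record starts, as in the Delphi code, the Delphi line is just an untyped block copy, so the straightforward C++ equivalent is a memcpy. Two caveats, flagged here because they are assumptions about the Delphi side: the struct must match the record's packing, and Delphi's String[50] is a length-prefixed ShortString (1 length byte + 50 characters), not a plain char array.

        #include <windows.h>
        #include <string.h>

        #pragma pack(push, 1)          // match the Delphi record layout if it is packed
        struct TSettings
        {
            char sFileName[51];        // String[50] = 1 length byte + 50 chars
            BYTE siInstallFolder;
            bool bRunFile;
            // ... remaining fields from the record
        };
        #pragma pack(pop)

        // sZdData (LPBYTE) and i (DWORD) come from the surrounding code, as in the question.
        TSettings Settings;
        ZeroMemory(&Settings, sizeof(TSettings));
        memcpy(&Settings, &sZdData[i], sizeof(TSettings));
        // equivalent of: Settings := TSettings(Pointer(@sZdData[i])^);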

    Read the article

  • Windows File I/O Reading

    - by eyeanand
    I am currently working on opening/reading images in VC++. Some examples I came across on the internet use Windows.h I/O routines like ReadFile... but there seems to be an inconsistency in their declaration. Here's what I have got. So I have this function to load a file:

        BYTE* LoadFile(int* width, int* height, long* size, LPCWSTR bmpfile)
        {
            BITMAPFILEHEADER bmpheader;
            BITMAPINFOHEADER bmpinfo;
            DWORD bytesread = 0;
            HANDLE file = CreateFile(bmpfile, GENERIC_READ, FILE_SHARE_READ, NULL,
                                     OPEN_EXISTING, FILE_FLAG_SEQUENTIAL_SCAN, NULL);
            if (NULL == file)
                return NULL;
            if (ReadFile(file, &bmpheader, sizeof(BITMAPFILEHEADER), &bytesread, NULL) == false)
            {
                CloseHandle(file);
                return NULL;
            }
            ...
            return appropriate value;
        }

    Now the ReadFile API function is declared as follows in WinBase.h:

        WINBASEAPI BOOL WINAPI ReadFile(
            _In_        HANDLE       hFile,
            _Out_       LPVOID       lpBuffer,
            _In_        DWORD        nNumberOfBytesToRead,
            _Out_opt_   LPDWORD      lpNumberOfBytesRead,
            _Inout_opt_ LPOVERLAPPED lpOverlapped
            );

    And in MSDN examples they call this function like this:

        ReadFile(hFile, chBuffer, BUFSIZE, &dwBytesRead, NULL)

    which expects "bytesRead" to be an out parameter, so it gives me the number of bytes read. But in my code it gives this error message:

        'ReadFile' : cannot convert parameter 4 from 'LPDWORD *' to 'LPDWORD'

    So I just initialized bytesRead to 0 and passed it by value (which is wrong, but just to check if it works). Then it gives this exception:

        Unhandled exception at 0x774406ae in ImProc.exe: 0xC0000005: Access violation writing location 0x00000000.

    Kindly suggest a fix, and kindly point out any code I missed, including while forming the question itself. Thanks.
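
    A sketch of the call the way the ReadFile prototype wants it. The 'LPDWORD *' error suggests that in the code actually being compiled, bytesread ended up declared as an LPDWORD (a pointer) rather than the plain DWORD shown, so &bytesread became a pointer-to-pointer - that is an inference from the error text, not something visible in the snippet. Passing 0 for lpNumberOfBytesRead with a NULL lpOverlapped is what produces the access violation; and CreateFile failure should be tested against INVALID_HANDLE_VALUE, not NULL.

        HANDLE file = CreateFile(bmpfile, GENERIC_READ, FILE_SHARE_READ, NULL,
                                 OPEN_EXISTING, FILE_FLAG_SEQUENTIAL_SCAN, NULL);
        if (file == INVALID_HANDLE_VALUE)       // CreateFile does not return NULL on failure
            return NULL;

        BITMAPFILEHEADER bmpheader;
        DWORD bytesRead = 0;                    // a DWORD, so &bytesRead really is an LPDWORD

        if (!ReadFile(file, &bmpheader, sizeof(bmpheader), &bytesRead, NULL) ||
            bytesRead != sizeof(bmpheader))
        {
            CloseHandle(file);
            return NULL;
        }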

    Read the article

  • Disable shift override for autostart programs per user

    - by Jens
    When starting up Windows, one can normally disable all programs in the autostart folder by pressing and holding the SHIFT key. This behavior can be disabled by creating the value ignoreShiftOverride, with a DWORD setting of 1, under the registry key HKEY_LOCAL_MACHINE\Software\Microsoft\Windows NT\CurrentVersion\Winlogon. I'd like to be able to configure this setting for each individual user. How can this be done?

    Read the article

  • Windows 2008 RenderFarm Service: CreateProcessAsUser "Session 0 Isolation" and OpenGL

    - by holtavolt
    Hello, I have a legacy Windows server service and (spawned) application that works fine on XP-64 and W2K3, but fails on W2K8. I believe it is because of the new "Session 0 isolation" feature. (Note: as a StackOverflow newbie I'm limited to one link in this post, so you'll need to scroll to the bottom to look up the links for the marked items.) Consequently, I'm looking for code samples/security settings mojo that let you create a new process from a Windows service on Windows 2008 Server such that I can restore (and possibly surpass) the previous behavior. I need a solution that: Creates the new process in a non-zero session to get around session 0 isolation restrictions (no access to graphics hardware from session 0) - the official MS line on this is: Because Session 0 is no longer a user session, services that are running in Session 0 do not have access to the video driver. This means that any attempt that a service makes to render graphics fails. Querying the display resolution and color depth in Session 0 reports the correct results for the system up to a maximum of 1920x1200 at 32 bits per pixel. The new process gets a window station/desktop (e.g. winsta0/default) that can be used to create window DCs. I've found a solution (that launches OK in an interactive session) for this here: (Starting an Interactive Client Process in C++ - 2). The window DC, when used as the basis for an (OpenGL DescribePixelFormat enumeration - 3), is able to find and use the hardware-accelerated format (on a system appropriately equipped with OpenGL hardware). Note that our current solution works OK on XP-64 and W2K3, except if a terminal services session is running (VNC works fine). A solution that also allowed the process to work (i.e. run with OpenGL hardware acceleration even when a terminal services session is open) would be fantastic, although not required. I'm stuck at item #1 currently, and although there are some similar postings that discuss this (like (this - 4) and (this - 5)), they are not suitable solutions, as there is no guarantee of a user session already logged in to "take" a session id from, nor am I running from a LocalSystem account (I'm running from a domain account for the service, whose privileges I can adjust within reason, although I'd prefer not to have to escalate privileges to include SeTcbPrivilege). For instance, here's a stub that I think should work, but it always returns error 1314 on the SetTokenInformation call (even though AdjustTokenPrivileges returned no errors). I've also used some alternate strategies involving LogonUser (instead of opening the existing process token), but I can't seem to swap out the session id. I'm also dubious about using WTSActiveConsoleSessionId in all cases (for instance, if no interactive user is logged in), although a quick test of the service running with no sessions logged in seemed to return a reasonable session value (1). I've removed error handling for ease of reading (still a bit messy - apologies):

        // Also tried using LogonUser(..) here
        OpenProcessToken(GetCurrentProcess(),
            TOKEN_QUERY | TOKEN_ADJUST_PRIVILEGES | TOKEN_ADJUST_SESSIONID |
            TOKEN_ADJUST_DEFAULT | TOKEN_ASSIGN_PRIMARY | TOKEN_DUPLICATE, &hToken);

        GetTokenInformation(hToken, TokenSessionId, &logonSessionId, sizeof(DWORD), &dwTokenLength);

        DWORD consoleSessionId = WTSGetActiveConsoleSessionId();

        /* Can't use this - requires very elevated privileges (LOCAL only, SeTcbPrivileges as well)
        if( !WTSQueryUserToken(consoleSessionId, &hToken)) ...
        */

        DuplicateTokenEx(hToken,
            (TOKEN_QUERY | TOKEN_ADJUST_PRIVILEGES | TOKEN_ADJUST_SESSIONID |
             TOKEN_ADJUST_DEFAULT | TOKEN_ASSIGN_PRIMARY | TOKEN_DUPLICATE),
            NULL, SecurityIdentification, TokenPrimary, &hDupToken);

        // Look up the LUID for the TCB Name privilege.
        LookupPrivilegeValue(NULL, SE_TCB_NAME, &tp.Privileges[0].Luid);

        // Enable the TCB Name privilege in the token.
        tp.PrivilegeCount = 1;
        tp.Privileges[0].Attributes = SE_PRIVILEGE_ENABLED;
        if (!AdjustTokenPrivileges(hDupToken, FALSE, &tp, sizeof(TOKEN_PRIVILEGES), NULL, 0))
        {
            DisplayError("AdjustTokenPrivileges");
            ...
        }
        if (GetLastError() == ERROR_NOT_ALL_ASSIGNED)
        {
            DEBUG("Token does not have the necessary privilege.\n");
        }
        else
        {
            DEBUG("No error reported from AdjustTokenPrivileges!\n");
        }
        // Never errors here
        DEBUG(LM_INFO, "Attempting setting of sessionId to: %d\n", consoleSessionId);
        if (!SetTokenInformation(hDupToken, TokenSessionId, &consoleSessionId, sizeof(DWORD)))
            // *** ALWAYS FAILS WITH 1314 HERE ***

    All the debug output looks fine up until the SetTokenInformation call - I see that session 0 is my current process session, and in my case it's trying to set session 1 (the result of WTSGetActiveConsoleSessionId). (Note that I'm logged into the W2K8 box via VNC, not RDC.) So, the questions: Is this approach valid, or are all service-initiated processes restricted to session 0 intentionally? Is there a better approach (short of "launch on logon" and auto-logon for the servers)? Is there something wrong with this code, or a different way to create a process token where I can swap out the session id to indicate I want to spawn the process in a new session? I did try using LogonUser instead of OpenProcessToken, but that didn't work either. (I don't care if all spawned processes share the same non-zero session or not at this point.) Any help much appreciated - thanks! (You need to replace the 'zttp' with 'http' - StackOverflow restriction on one link in my newbie post) 2: http://msdn.microsoft.com/en-us/library/aa379608(VS.85).aspx 3: http://www.opengl.org/resources/faq/technical/mswindows.htm 4: http://stackoverflow.com/questions/2237696/creating-a-process-in-a-non-zero-session-from-a-service-in-windows-2008-server 5: http://stackoverflow.com/questions/1602996/how-can-i-lauch-a-process-which-has-a-ui-from-windows-service

    Read the article

  • HTML5 and CSS3 Editing in Windows Live Writer

    - by Rick Strahl
    Windows Live Writer is a wonderful tool for editing blog posts and getting them posted to your blog. What makes it nice is that it has a small set of useful features, plus a simple plug-in model that has spawned many useful add-ins. A small tool with a reasonably decent plug-in model to extend equals a great solution to a simple problem. If you're running Windows, have a blog and aren't using Live Writer, you're probably doing it wrong…

    One of Live Writer's nice features is that it can download your blog's CSS for preview and edit displays. It lets you edit your content inside the context of that CSS using the WYSIWYG editor, so your content actually looks very close to what you'll see on your blog while you're editing your post. Unfortunately, Live Writer renders the HTML content in the Web Browser Control's default IE 7 rendering mode. Yeah, you read that right: IE 7 is the default for the Web Browser control, and most applications that use it are stuck in this mode unless the application explicitly overrides the default. The Web Browser control does not use the version of Internet Explorer installed on the system (IE 10 on my Win8 machine) but uses IE 7 mode for 'compatibility' with old applications. If you are importing your blog's CSS, that may suck if you're using rich HTML5 and CSS3 formatting.

    Hack the Registry to get Live Writer to render using IE 9 or 10

    In order to get Live Writer (or any other application that uses the Web Browser Control, for that matter) to render properly, you can apply a registry hack that overrides the Web Browser Control engine usage for a specific application. I wrote about this in detail in a previous blog post a couple of years back. Here's how you can set up Windows Live Writer to render your CSS3 by making a change in your registry. The setup below is for a 64-bit machine, where I configure Live Writer (a 32-bit application) to use IE 10 rendering. The keys set are as follows.

    32-bit configuration on a 64-bit machine:

        HKEY_LOCAL_MACHINE\SOFTWARE\Wow6432Node\Microsoft\Internet Explorer\Main\FeatureControl\FEATURE_BROWSER_EMULATION
        Key:   WindowsLiveWriter.exe
        Value: 9000 or 10000 (IE 9 or 10 respectively) (DWORD value)

    On a 32-bit only machine:

        HKEY_LOCAL_MACHINE\SOFTWARE\Microsoft\Internet Explorer\Main\FeatureControl\FEATURE_BROWSER_EMULATION
        Key:   WindowsLiveWriter.exe
        Value: 9000 or 10000 (IE 9 or 10 respectively) (DWORD value)

    Use decimal values of 9000, 10000 or 11000 to specify specific versions of Internet Explorer. This is a minor tweak, but it's nice to actually see my blog posts now with the proper CSS formatting intact. Notice the rounded borders and shadow on the code blocks, as well as the overflow-x and scrollbars that show up. In this particular case I can see what the code blocks actually look like at a specific resolution – much better than the old plain view, which just chopped things off at the end of the window frame. There are a few other elements that now show properly in the editor as well, including block quotes and note boxes that I occasionally use. It's minor stuff, but it makes the editing experience better and closer to the final thing, so there are fewer republish operations than I previously had. Sweet!

    Note that this approach of putting an IE version override into the registry works with most applications that use the Web Browser control. If you are using the Web Browser control in your own applications, it's a good idea to switch the browser to a more recent version so you can take advantage of HTML5 and CSS3 in your browser-displayed content, by automatically setting this flag in the registry or as part of the application's startup routine if no dedicated setup tool is used. At the very least you might set it to 9000 (IE 9), which supports most of the basic CSS3 features and is a decent baseline that works for most Windows 7 and 8 machines. If running pre-IE9, the browser will fall back to IE 7 rendering and look bad, but at least more recent browsers will see an improved experience. I'm surprised that there aren't more vendors and third-party apps using this feature. You can see in my first screenshot that there are only very few entries in the registry key group on my machine – any other apps using the Web Browser control are using IE 7. Go figure. Certainly Windows Live Writer should write this key into the registry automatically as part of installation to support this functionality out of the box, but alas, since it does not, this registry hack lets you get your way anyway…

    Resources: .reg files to register Live Writer browser emulation (set for IE9); Specifying Internet Explorer Version for Applications; SnagIt LiveWriter Plug-in; Download Windows Live Writer; Download Windows Live Writer with Chocolatey

    © Rick Strahl, West Wind Technologies, 2005-2013. Posted in Live Writer, Windows
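
    If you would rather set the override from code than from a .reg file, here is a small sketch of writing the same value with the registry API (it must run elevated, since it writes under HKLM; the 64-bit Wow6432Node path from the post is assumed):

        #include <windows.h>

        // Sketch: set the browser-emulation override for Live Writer.
        // 10000 = IE10 rendering; use 9000 for IE9.
        int main()
        {
            HKEY hKey;
            const wchar_t* path =
                L"SOFTWARE\\Wow6432Node\\Microsoft\\Internet Explorer\\Main"
                L"\\FeatureControl\\FEATURE_BROWSER_EMULATION";

            if (RegCreateKeyExW(HKEY_LOCAL_MACHINE, path, 0, NULL, 0,
                                KEY_SET_VALUE, NULL, &hKey, NULL) == ERROR_SUCCESS)
            {
                DWORD mode = 10000;
                RegSetValueExW(hKey, L"WindowsLiveWriter.exe", 0, REG_DWORD,
                               (const BYTE*)&mode, sizeof(mode));
                RegCloseKey(hKey);
            }
            return 0;
        }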

    Read the article

  • Platform Builder: Disable the USB Driver Dialog

    - by Bruce Eitman
    For a long time, Windows CE developers and users have wanted to disable the USB Driver Dialog that is displayed when an unknown USB device is plugged into the host controller. Of course, the question is always: why would you want to do such a thing? The simple answer is that there are USB devices that are needed, like printers, which expose multiple functions to the bus, like scanners and faxes, for which no Windows CE driver exists. So the printer quietly loads a driver, but then the other functions cause a dialog to be shown. One solution is to create a USB class driver that loads by default if no other driver has been loaded. This driver just accepts anything that it sees and then does nothing with it. Starting with the Windows Embedded CE 6.0 R3 March QFE/update, the USB 2.0 driver has a registry value to disable the dialog:

        [HKEY_LOCAL_MACHINE\Drivers\USB\LoadClients]
            "DoNotPromptUser"=dword:0

    Setting the DoNotPromptUser value to 1 disables the dialog. The default value is zero, so the driver continues to behave the same way it always did unless you change this registry value.

    Copyright © 2010 – Bruce Eitman. All Rights Reserved.

    Read the article

  • Something in the world of Firewall Hosted SSL VPN's

    - by AreYouSerious
    I run a physical firewall at my residence. Call me paranoid, but I appreciate the added security. I have been working to get the VPN to work properly, but until today had not managed it. I made sure that the VPN configuration was correct and that the port filters were correct; I could connect to the firewall GUI, but never to the VPN. It turns out that in Windows 7, if you add one registry value, it suddenly works:

        Computer\HKEY_LOCAL_MACHINE\SYSTEM\CurrentControlSet\Control\SecurityProviders\SCHANNEL
        Add DWORD (32-bit) value: SendExtraRecord --> value 2

    and voilà, suddenly you're presented with the login screen. I won't mention the specific vendor, as they don't have this listed in their fixes... but there are several vendors where this is an issue. So, if you are having an issue connecting to an SSL VPN (web VPN), this might just be the solution that you need.

    Read the article
