Search Results

Search found 1466 results on 59 pages for 'sizeof'.

Page 53/59 | < Previous Page | 49 50 51 52 53 54 55 56 57 58 59  | Next Page >

  • Delphi: Minimize application to systray

    - by marco92w
    I want to minimize a Delphi application to the systray instead of the task bar. The necessary steps seem to be the following: Create icon which should then be displayed in the systray. When the user clicks the [-] to minimize the application, do the following: Hide the form. Add the icon (step #1) to the systray. Hide/delete the application's entry in the task bar. When the user double-clicks the application's icon in the systray, do the following: Show the form. Un-minimize the application again and bring it to the front. If "WindowState" is "WS_Minimized" set to "WS_Normal". Hide/delete the application's icon in the systray. When the user terminates the application, do the following: Hide/delete the application's icon in the systray. That's it. Right? How could one implement this in Delphi? I've found the following code but I don't know why it works. It doesn't follow my steps described above ... unit uMinimizeToTray; interface uses Windows, Messages, SysUtils, Variants, Classes, Graphics, Controls, Forms, Dialogs, StdCtrls, ShellApi; const WM_NOTIFYICON = WM_USER+333; type TMinimizeToTray = class(TForm) procedure FormCreate(Sender: TObject); procedure FormClose(Sender: TObject; var Action: TCloseAction); procedure CMClickIcon(var msg: TMessage); message WM_NOTIFYICON; private { Private-Deklarationen } public { Public-Deklarationen } end; var MinimizeToTray: TMinimizeToTray; implementation {$R *.dfm} procedure TMinimizeToTray.CMClickIcon(var msg: TMessage); begin if msg.lparam = WM_LBUTTONDBLCLK then Show; end; procedure TMinimizeToTray.FormCreate(Sender: TObject); VAR tnid: TNotifyIconData; HMainIcon: HICON; begin HMainIcon := LoadIcon(MainInstance, 'MAINICON'); Shell_NotifyIcon(NIM_DELETE, @tnid); tnid.cbSize := sizeof(TNotifyIconData); tnid.Wnd := handle; tnid.uID := 123; tnid.uFlags := NIF_MESSAGE or NIF_ICON or NIF_TIP; tnid.uCallbackMessage := WM_NOTIFYICON; tnid.hIcon := HMainIcon; tnid.szTip := 'Tooltip'; Shell_NotifyIcon(NIM_ADD, @tnid); end; procedure TMinimizeToTray.FormClose(Sender: TObject; var Action: TCloseAction); begin Action := caNone; Hide; end; end.

  • How to Determine the Size of MSADO Command Parameters

    - by Adam
    I am new to MS ADO and trying to understand how to set the size on command parameters as created by the command.CreateParameter (Name, Type, Direction, Size, Value) The documentation says the following: Size Optional. A Long value that specifies the maximum length for the parameter value in characters or bytes. ... If you specify a variable-length data type in the Type argument, you must either pass a Size argument or set the Size property of the Parameter object before appending it to the Parameters collection; otherwise, an error occurs. 1.) What should one pass for fixed-size parameters? Is it a "don't care"? I was a bit confused by the example found here, in which they set size to 3 for an adInteger parameter with Value set to a variant of type VT_I2 pPrmByRoyalty->Type = adInteger; pPrmByRoyalty->Size = 3; pPrmByRoyalty->Direction = adParamInput; pPrmByRoyalty->Value = vtroyal; VT_I2 implies two bytes. A tagVARIANT struct is 16 bytes. How did they land on three? I see that the enum value for adInteger happens to be three, but I suspect that is just a coincidence. So it's a bit confusing what to pass for fixed-size parameters. The team I'm working with has always passed sizeof(int) for adInteger, and it seems to work. Is that correct? Now, for "variable-length" parameters: we are instructed by the documentation to pass "the maximum length .. in characters or bytes". 2.) For adVarChar, is it sufficient to pass the max width as defined in the database? 3.) What about the Wide types (e.g. adVarWChar)? Is it characters or bytes? 4.) How about adVariant, which could contain fixed- or variable-length data? 5.) Do arrays ever come into play here? (we don't pass them as parameters, just curious) Any references or personal insights are welcome.
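
    A hedged sketch of the CreateParameter(Name, Type, Direction, Size, Value) call quoted above, assuming raw ADO pulled in via #import of msado15.dll and an already-created _CommandPtr named cmd; the parameter names, the width of 40 and the sample values are illustrative, not taken from the question. The documentation quoted above only requires Size for variable-length types, so for adInteger it is effectively a don't-care (0 or sizeof(int) are both commonly passed), while for adVarChar/adVarWChar it is generally the column's maximum length in characters.

        // Assumed context: #import "msado15.dll" (raw ADO) and an open _CommandPtr cmd.
        _ParameterPtr byRoyalty = cmd->CreateParameter(
            _bstr_t("royalty"), adInteger, adParamInput, sizeof(int), _variant_t(5L));
        cmd->Parameters->Append(byRoyalty);

        // adVarChar is variable-length, so Size carries the column's maximum
        // width in characters (e.g. a VARCHAR(40) column -> 40).
        _ParameterPtr lastName = cmd->CreateParameter(
            _bstr_t("au_lname"), adVarChar, adParamInput, 40, _variant_t("Smith"));
        cmd->Parameters->Append(lastName);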

  • Rails + AMcharts (with export image php script) - PHP script converted to controller?

    - by Elliot
    Hey Guys, This one might be a little confusing. I'm using AMCharts with rails. Amcharts comes with a PHP script to export images called "export.php" I'm trying to figure out how to take the code in export.php and put it into a controller. Here is the code: <?php // amcharts.com export to image utility // set image type (gif/png/jpeg) $imgtype = 'jpeg'; // set image quality (from 0 to 100, not applicable to gif) $imgquality = 100; // get data from $_POST or $_GET ? $data = &$_POST; // get image dimensions $width = (int) $data['width']; $height = (int) $data['height']; // create image object $img = imagecreatetruecolor($width, $height); // populate image with pixels for ($y = 0; $y < $height; $y++) { // innitialize $x = 0; // get row data $row = explode(',', $data['r'.$y]); // place row pixels $cnt = sizeof($row); for ($r = 0; $r < $cnt; $r++) { // get pixel(s) data $pixel = explode(':', $row[$r]); // get color $pixel[0] = str_pad($pixel[0], 6, '0', STR_PAD_LEFT); $cr = hexdec(substr($pixel[0], 0, 2)); $cg = hexdec(substr($pixel[0], 2, 2)); $cb = hexdec(substr($pixel[0], 4, 2)); // allocate color $color = imagecolorallocate($img, $cr, $cg, $cb); // place repeating pixels $repeat = isset($pixel[1]) ? (int) $pixel[1] : 1; for ($c = 0; $c < $repeat; $c++) { // place pixel imagesetpixel($img, $x, $y, $color); // iterate column $x++; } } } // set proper content type header('Content-type: image/'.$imgtype); header('Content-Disposition: attachment; filename="chart.'.$imgtype.'"'); // stream image $function = 'image'.$imgtype; if ($imgtype == 'gif') { $function($img); } else { $function($img, null, $imgquality); } // destroy imagedestroy($img); ?>

  • Regular expression to validate email in C

    - by Liju Mathew
    Hi, We need to write a email validation program in C. We are planning to use GNU Cregex.h) regular expression. The regular expression we prepared is [a-z0-9!#$%&'*+/=?^_`{|}~-]+(?:\.[a-z0-9!#$%&'*+/=?^_`{|}~-]+)*@(?:[a-z0-9](?:[a-z0-9-]*[a-z0-9])?\.)+[a-z0-9](?:[a-z0-9-]*[a-z0-9])? But the below code is failing while compiling the regex. #include <stdio.h> #include <regex.h> int main(const char *argv, int argc) { const char *reg_exp = "[a-z0-9!#$%&'*+/=?^_`{|}~-]+(?:.[a-z0-9!#$%&'*+/=?^_`{|}~-]+)*@(?:[a-z0-9](?:[a-z0-9-]*[a-z0-9])?.)+[a-z0-9](?:[a-z0-9-]*[a-z0-9])?"; int status = 1; char email[71]; regex_t preg; int rc; printf("The regex = %s\n", reg_exp); rc = regcomp(&preg, reg_exp, REG_EXTENDED|REG_NOSUB); if (rc != 0) { if (rc == REG_BADPAT || rc == REG_ECOLLATE) fprintf(stderr, "Bad Regex/Collate\n"); if (rc == REG_ECTYPE) fprintf(stderr, "Invalid Char\n"); if (rc == REG_EESCAPE) fprintf(stderr, "Trailing \\\n"); if (rc == REG_ESUBREG || rc == REG_EBRACK) fprintf(stderr, "Invalid number/[] error\n"); if (rc == REG_EPAREN || rc == REG_EBRACE) fprintf(stderr, "Paren/Bracket error\n"); if (rc == REG_BADBR || rc == REG_ERANGE) fprintf(stderr, "{} content invalid/Invalid endpoint\n"); if (rc == REG_ESPACE) fprintf(stderr, "Memory error\n"); if (rc == REG_BADRPT) fprintf(stderr, "Invalid regex\n"); fprintf(stderr, "%s: Failed to compile the regular expression:%d\n", __func__, rc); return 1; } while (status) { fgets(email, sizeof(email), stdin); status = email[0]-48; rc = regexec(&preg, email, (size_t)0, NULL, 0); if (rc == 0) { fprintf(stderr, "%s: The regular expression is a match\n", __func__); } else { fprintf(stderr, "%s: The regular expression is not a match: %d\n", __func__, rc); } } regfree(&preg); return 0; } The regex compilation is failing with the below error. The regex = [a-z0-9!#$%&'*+/=?^_`{|}~-]+(?:.[a-z0-9!#$%&'*+/=?^_`{|}~-]+)*@(?:[a-z0-9](?:[a-z0-9-]*[a-z0-9])?.)+[a-z0-9](?:[a-z0-9-]*[a-z0-9])? Invalid regex main: Failed to compile the regular expression:13 What is the cause of this error? Whether the regex need to be modified? Thanks, Mathew Liju
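
    A likely cause: POSIX extended regular expressions (what regcomp compiles with REG_EXTENDED) have no Perl-style (?: ... ) non-capturing groups, so the ? immediately after ( is parsed as a repetition operator with nothing to repeat, which is what REG_BADRPT (the 13 printed above) reports. A minimal sketch with the groups rewritten as plain ( ... ) and the literal dots escaped for a C string; the test address is made up.

        #include <regex.h>
        #include <stdio.h>

        int main(void)
        {
            /* Same pattern as above, with "(?:" replaced by "(" */
            const char *reg_exp =
                "[a-z0-9!#$%&'*+/=?^_`{|}~-]+(\\.[a-z0-9!#$%&'*+/=?^_`{|}~-]+)*"
                "@([a-z0-9]([a-z0-9-]*[a-z0-9])?\\.)+[a-z0-9]([a-z0-9-]*[a-z0-9])?";
            regex_t preg;
            char msg[256];

            int rc = regcomp(&preg, reg_exp, REG_EXTENDED | REG_NOSUB);
            if (rc != 0) {
                regerror(rc, &preg, msg, sizeof(msg));
                fprintf(stderr, "regcomp failed: %s\n", msg);
                return 1;
            }
            rc = regexec(&preg, "liju.mathew@example.com", 0, NULL, 0);
            printf("%s\n", rc == 0 ? "match" : "no match");
            regfree(&preg);
            return 0;
        }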

  • How to start a new browser window at a specified location with a specified size

    - by Pritorian
    Hi all! I create a new instance and trying to resize new instance of browser like this: [System.Runtime.InteropServices.DllImport("user32.dll")] private static extern bool GetWindowInfo(IntPtr hwnd, ref tagWINDOWINFO pwi); [System.Runtime.InteropServices.StructLayout(System.Runtime.InteropServices.LayoutKind.Sequential)] public struct tagRECT { /// LONG->int public int left; /// LONG->int public int top; /// LONG->int public int right; /// LONG->int public int bottom; } [System.Runtime.InteropServices.StructLayout(System.Runtime.InteropServices.LayoutKind.Sequential)] public struct tagWINDOWINFO { /// DWORD->unsigned int public uint cbSize; /// RECT->tagRECT public tagRECT rcWindow; /// RECT->tagRECT public tagRECT rcClient; /// DWORD->unsigned int public uint dwStyle; /// DWORD->unsigned int public uint dwExStyle; /// DWORD->unsigned int public uint dwWindowStatus; /// UINT->unsigned int public uint cxWindowBorders; /// UINT->unsigned int public uint cyWindowBorders; /// ATOM->WORD->unsigned short public ushort atomWindowType; /// WORD->unsigned short public ushort wCreatorVersion; } [System.Runtime.InteropServices.DllImport("user32.dll")] private static extern bool MoveWindow(IntPtr hWnd, int X, int Y, int nWidth, int nHeight, bool bRepaint); [System.Runtime.InteropServices.DllImport("user32.dll")] private static extern bool UpdateWindow(IntPtr hWnd); private void button2_Click(object sender, EventArgs e) { using (System.Diagnostics.Process browserProc = new System.Diagnostics.Process()) { browserProc.StartInfo.FileName = webBrowser1.Url.ToString(); browserProc.StartInfo.WindowStyle = System.Diagnostics.ProcessWindowStyle.Minimized; int i= browserProc.Id; tagWINDOWINFO info = new tagWINDOWINFO(); info.cbSize = (uint)System.Runtime.InteropServices.Marshal.SizeOf(info); browserProc.Start(); GetWindowInfo(browserProc.MainWindowHandle, ref info); browserProc.WaitForInputIdle(); string str = browserProc.MainWindowTitle; MoveWindow(browserProc.MainWindowHandle, 100, 100, 100, 100, true); UpdateWindow(browserProc.MainWindowHandle); } } But I get an "No process is associated with this object". Could anyone help? Or mb other ideas how to run new browser window whith specified size and location?

  • Drawing a texture with an alpha channel doesn't work -- draws black

    - by DevDevDev
    I am modifying GLPaint to use a different background, so in this case it is white. Anyway the existing stamp they are using assumes the background is black, so I made a new background with an alpha channel. When I draw on the canvas it is still black, what gives? When I actually draw, I just bind the texture and it works. Something is wrong in this initialization. Here is the photo - (id)initWithCoder:(NSCoder*)coder { CGImageRef brushImage; CGContextRef brushContext; GLubyte *brushData; size_t width, height; if (self = [super initWithCoder:coder]) { CAEAGLLayer *eaglLayer = (CAEAGLLayer *)self.layer; eaglLayer.opaque = YES; // In this application, we want to retain the EAGLDrawable contents after a call to presentRenderbuffer. eaglLayer.drawableProperties = [NSDictionary dictionaryWithObjectsAndKeys: [NSNumber numberWithBool:YES], kEAGLDrawablePropertyRetainedBacking, kEAGLColorFormatRGBA8, kEAGLDrawablePropertyColorFormat, nil]; context = [[EAGLContext alloc] initWithAPI:kEAGLRenderingAPIOpenGLES1]; if (!context || ![EAGLContext setCurrentContext:context]) { [self release]; return nil; } // Create a texture from an image // First create a UIImage object from the data in a image file, and then extract the Core Graphics image brushImage = [UIImage imageNamed:@"test.png"].CGImage; // Get the width and height of the image width = CGImageGetWidth(brushImage); height = CGImageGetHeight(brushImage); // Texture dimensions must be a power of 2. If you write an application that allows users to supply an image, // you'll want to add code that checks the dimensions and takes appropriate action if they are not a power of 2. // Make sure the image exists if(brushImage) { brushData = (GLubyte *) calloc(width * height * 4, sizeof(GLubyte)); brushContext = CGBitmapContextCreate(brushData, width, width, 8, width * 4, CGImageGetColorSpace(brushImage), kCGImageAlphaPremultipliedLast); CGContextDrawImage(brushContext, CGRectMake(0.0, 0.0, (CGFloat)width, (CGFloat)height), brushImage); CGContextRelease(brushContext); glGenTextures(1, &brushTexture); glBindTexture(GL_TEXTURE_2D, brushTexture); glTexParameteri(GL_TEXTURE_2D, GL_TEXTURE_MIN_FILTER, GL_LINEAR); glTexImage2D(GL_TEXTURE_2D, 0, GL_RGBA, width, height, 0, GL_RGBA, GL_UNSIGNED_BYTE, brushData); free(brushData); } //Set up OpenGL states glMatrixMode(GL_PROJECTION); CGRect frame = self.bounds; glOrthof(0, frame.size.width, 0, frame.size.height, -1, 1); glViewport(0, 0, frame.size.width, frame.size.height); glMatrixMode(GL_MODELVIEW); glDisable(GL_DITHER); glEnable(GL_TEXTURE_2D); glEnable(GL_BLEND); glBlendFunc(GL_SRC_ALPHA, GL_ONE_MINUS_DST_ALPHA); glEnable(GL_POINT_SPRITE_OES); glTexEnvf(GL_POINT_SPRITE_OES, GL_COORD_REPLACE_OES, GL_TRUE); glPointSize(width / kBrushScale); } return self; }

  • Use native HBitmap in C# while preserving alpha channel/transparency. Please check this code, it works on my computer...

    - by David
    Let's say I get a HBITMAP object/handle from a native Windows function. I can convert it to a managed bitmap using Bitmap.FromHbitmap(nativeHBitmap), but if the native image has transparency information (alpha channel), it is lost by this conversion. There are a few questions on Stack Overflow regarding this issue. Using information from the first answer of this question (How to draw ARGB bitmap using GDI+?), I wrote a piece of code that I've tried and it works. It basically gets the native HBitmap width, height and the pointer to the location of the pixel data using GetObject and the BITMAP structure, and then calls the managed Bitmap constructor: Bitmap managedBitmap = new Bitmap(bitmapStruct.bmWidth, bitmapStruct.bmHeight, bitmapStruct.bmWidth * 4, PixelFormat.Format32bppArgb, bitmapStruct.bmBits); As I understand (please correct me if I'm wrong), this does not copy the actual pixel data from the native HBitmap to the managed bitmap, it simply points the managed bitmap to the pixel data from the native HBitmap. And I don't draw the bitmap here on another Graphics (DC) or on another bitmap, to avoid unnecessary memory copying, especially for large bitmaps. I can simply assign this bitmap to a PictureBox control or the the Form BackgroundImage property. And it works, the bitmap is displayed correctly, using transparency. When I no longer use the bitmap, I make sure the BackgroundImage property is no longer pointing to the bitmap, and I dispose both the managed bitmap and the native HBitmap. The Question: Can you tell me if this reasoning and code seems correct. I hope I will not get some unexpected behaviors or errors. And I hope I'm freeing all the memory and objects correctly. private void Example() { IntPtr nativeHBitmap = IntPtr.Zero; /* Get the native HBitmap object from a Windows function here */ // Create the BITMAP structure and get info from our nativeHBitmap NativeMethods.BITMAP bitmapStruct = new NativeMethods.BITMAP(); NativeMethods.GetObjectBitmap(nativeHBitmap, Marshal.SizeOf(bitmapStruct), ref bitmapStruct); // Create the managed bitmap using the pointer to the pixel data of the native HBitmap Bitmap managedBitmap = new Bitmap( bitmapStruct.bmWidth, bitmapStruct.bmHeight, bitmapStruct.bmWidth * 4, PixelFormat.Format32bppArgb, bitmapStruct.bmBits); // Show the bitmap this.BackgroundImage = managedBitmap; /* Run the program, use the image */ MessageBox.Show("running..."); // When the image is no longer needed, dispose both the managed Bitmap object and the native HBitmap this.BackgroundImage = null; managedBitmap.Dispose(); NativeMethods.DeleteObject(nativeHBitmap); } internal static class NativeMethods { [StructLayout(LayoutKind.Sequential)] public struct BITMAP { public int bmType; public int bmWidth; public int bmHeight; public int bmWidthBytes; public ushort bmPlanes; public ushort bmBitsPixel; public IntPtr bmBits; } [DllImport("gdi32", CharSet = CharSet.Auto, EntryPoint = "GetObject")] public static extern int GetObjectBitmap(IntPtr hObject, int nCount, ref BITMAP lpObject); [DllImport("gdi32.dll")] internal static extern bool DeleteObject(IntPtr hObject); }

  • Wrapping boost::ublas with SWIG

    - by leon
    I am trying to pass data around the numpy and boost::ublas layers. I have written an ultra thin wrapper because swig cannot parse ublas' header correctly. The code is shown below #include <boost/numeric/ublas/vector.hpp> #include <boost/numeric/ublas/matrix.hpp> #include <boost/lexical_cast.hpp> #include <algorithm> #include <sstream> #include <string> using std::copy; using namespace boost; typedef boost::numeric::ublas::matrix<double> dm; typedef boost::numeric::ublas::vector<double> dv; class dvector : public dv{ public: dvector(const int rhs):dv(rhs){;}; dvector(); dvector(const int size, double* ptr):dv(size){ copy(ptr, ptr+sizeof(double)*size, &(dv::data()[0])); } ~dvector(){} }; with the SWIG interface that looks something like %apply(int DIM1, double* INPLACE_ARRAY1) {(const int size, double* ptr)} class dvector{ public: dvector(const int rhs); dvector(); dvector(const int size, double* ptr); %newobject toString; char* toString(); ~dvector(); }; I have compiled them successfully via gcc 4.3 and vc++9.0. However when I simply run a = dvector(array([1.,2.,3.])) it gives me a segfault. This is the first time I use swigh with numpy and not have fully understanding between the data conversion and memory buffer passing. Does anyone see something obvious I have missed? I have tried to trace through with a debugger but it crashed within the assmeblys of python.exe. I have no clue if this is a swig problem or of my simple wrapper. Anything is appreciated.
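
    One thing worth checking independently of SWIG: in the constructor above, ptr is a double*, so pointer arithmetic is already scaled by the element size, and copy(ptr, ptr+sizeof(double)*size, ...) copies eight times too many elements, overrunning both the numpy buffer and the ublas storage. A minimal sketch of the wrapper with that factor removed (same typedefs as above):

        #include <boost/numeric/ublas/vector.hpp>
        #include <algorithm>

        typedef boost::numeric::ublas::vector<double> dv;

        class dvector : public dv {
        public:
            // ptr + size already spans `size` elements, not `size` bytes
            dvector(const int size, double* ptr) : dv(size) {
                std::copy(ptr, ptr + size, &(dv::data()[0]));
            }
        };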

  • C Programming - My program is good enough for my assignment but I know it's not good

    - by Joe
    Hi there I'm just starting an assignment for uni and it's raised a question for me. I don't understand how to return a string from a function without having a memory leak. char* trim(char* line) { int start = 0; int end = strlen(line) - 1; /* find the start position of the string */ while(isspace(line[start]) != 0) { start++; } //printf("start is %d\n", start); /* find the position end of the string */ while(isspace(line[end]) != 0) { end--; } //printf("end is %d\n", end); /* calculate string length and add 1 for the sentinel */ int len = end - start + 2; /* initialise char array to len and read in characters */ int i; char* trimmed = calloc(sizeof(char), len); for(i = 0; i < (len - 1); i++) { trimmed[i] = line[start + i]; } trimmed[len - 1] = '\0'; return trimmed; } as you can see I am returning a pointer to char which is an array. I found that if I tried to make the 'trimmed' array by something like: char trimmed[len]; then the compiler would throw up a message saying that a constant was expected on this line. I assume this meant that for some reason you can't use variables as the array length when initialising an array, although something tells me that can't be right. So instead I made my array by allocating some memory to a char pointer. I understand that this function is probably waaaaay sub-optimal for what it is trying to do, but what I really want to know is: 1. Can you normally initialise an array using a variable to declare the length like: char trimmed[len]; ? 2. If I had an array that was of that type (char trimmed[]) would it have the same return type as a pointer to char (ie char*). 3. If I make my array by callocing some memory and allocating it to a char pointer, how do I free this memory. It seems to me that once I have returned this array, I can't access it to free it as it is a local variable. Many thanks in advance Joe
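
    For what it's worth: char trimmed[len] is a C99 variable-length array, which the Visual C++ compiler in use here does not support (hence the "constant expected" message), and a function cannot return a local array in any case, so heap allocation with caller-side free() is the usual shape. A minimal sketch of that ownership pattern, with a made-up helper standing in for trim():

        #include <stdio.h>
        #include <stdlib.h>
        #include <string.h>

        /* make_copy stands in for trim(): it returns heap memory that the
           caller owns and must release with free(). */
        static char *make_copy(const char *src)
        {
            char *out = (char *)malloc(strlen(src) + 1);
            if (out != NULL)
                strcpy(out, src);
            return out;
        }

        int main(void)
        {
            char *s = make_copy("example");
            if (s != NULL) {
                printf("%s\n", s);
                free(s);   /* the caller frees the callee-allocated buffer */
            }
            return 0;
        }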

  • Marshal.PtrToStructure (and back again) and generic solution for endianness swapping

    - by cgyDeveloper
    I have a system where a remote agent sends serialized structures (from and embedded C system) for me to read and store via IP/UDP. In some cases I need to send back the same structure types. I thought I had a nice setup using Marshal.PtrToStructure (receive) and Marshal.StructureToPtr (send). However, a small gotcha is that the network big endian integers need to be converted to my x86 little endian format to be used locally. When I'm sending them off again, big endian is the way to go. Here are the functions in question: private static T BytesToStruct<T>(ref byte[] rawData) where T: struct { T result = default(T); GCHandle handle = GCHandle.Alloc(rawData, GCHandleType.Pinned); try { IntPtr rawDataPtr = handle.AddrOfPinnedObject(); result = (T)Marshal.PtrToStructure(rawDataPtr, typeof(T)); } finally { handle.Free(); } return result; } private static byte[] StructToBytes<T>(T data) where T: struct { byte[] rawData = new byte[Marshal.SizeOf(data)]; GCHandle handle = GCHandle.Alloc(rawData, GCHandleType.Pinned); try { IntPtr rawDataPtr = handle.AddrOfPinnedObject(); Marshal.StructureToPtr(data, rawDataPtr, false); } finally { handle.Free(); } return rawData; } And a quick example structure that might be used like this: byte[] data = this.sock.Receive(ref this.ipep); Request request = BytesToStruct<Request>(ref data); Where the structure in question looks like: [StructLayout(LayoutKind.Sequential, CharSet = CharSet.Ansi, Pack = 1)] private struct Request { public byte type; public short sequence; [MarshalAs(UnmanagedType.ByValArray, SizeConst = 5)] public byte[] address; } What (generic) way can I swap the endianness when marshalling the structures? My need is such that the locally stored 'public short sequence' in this example will be little-endian for displaying to the user. I don't want to have to swap the endianness on a structure-specific way. My first thought was to use Reflection, but I'm not very familiar with that feature. Also, I hoped that there would be a better solution out there that somebody could point me towards. Thanks in advance :)

  • Segfault (possibly due to casting)

    - by BSchlinker
    I don't normally go to stackoverflow for sigsegv errors, but I have done all I can with my debugger at the moment. The segmentation fault error is thrown following the completion of the function. Any ideas what I'm overlooking? I suspect that it is due to the casting of the sockaddr to the sockaddr_in, but I am unable to find any mistakes there. (Removing that line gets rid of the seg fault -- but I know that may not be the root cause here). // basic setup int sockfd; char str[INET_ADDRSTRLEN]; sockaddr* sa; socklen_t* sl; struct addrinfo hints, *servinfo, *p; int rv; memset(&hints, 0, sizeof hints); hints.ai_family = AF_UNSPEC; hints.ai_socktype = SOCK_DGRAM; // return string string foundIP; // setup the struct for a connection with selected IP if ((rv = getaddrinfo("4.2.2.1", NULL, &hints, &servinfo)) != 0) { fprintf(stderr, "getaddrinfo: %s\n", gai_strerror(rv)); return "1"; } // loop through all the results and make a socket for(p = servinfo; p != NULL; p = p->ai_next) { if ((sockfd = socket(p->ai_family, p->ai_socktype, p->ai_protocol)) == -1) { perror("talker: socket"); continue; } break; } if (p == NULL) { fprintf(stderr, "talker: failed to bind socket\n"); return "2"; } // connect the UDP socket to something connect(sockfd, p->ai_addr, p->ai_addrlen); // we need to connect to get the systems local IP // get information on the local IP from the socket we created getsockname(sockfd, sa, sl); // convert the sockaddr to a sockaddr_in via casting struct sockaddr_in *sa_ipv4 = (struct sockaddr_in *)sa; // get the IP from the sockaddr_in and print it inet_ntop(AF_INET, &(sa_ipv4->sin_addr), str, INET_ADDRSTRLEN); printf("%s\n", str); // return the IP return foundIP; }
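
    A likely culprit sits before the cast: sa and sl are uninitialized pointers, so getsockname(sockfd, sa, sl) writes through garbage addresses and the later dereference of sa_ipv4 reads from them. A minimal sketch of the same lookup using real storage, assuming the connected socket descriptor from the code above is passed in:

        #include <arpa/inet.h>
        #include <netinet/in.h>
        #include <sys/socket.h>
        #include <string>

        static std::string local_ip_of(int sockfd)
        {
            sockaddr_in local;                        // real buffer on the stack
            socklen_t len = sizeof(local);            // real length variable
            char str[INET_ADDRSTRLEN] = {0};

            if (getsockname(sockfd, reinterpret_cast<sockaddr *>(&local), &len) == 0)
                inet_ntop(AF_INET, &local.sin_addr, str, sizeof(str));
            return str;
        }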

  • error C2065: 'CComQIPtr' : undeclared identifier

    - by Ken Smith
    I'm still feeling my way around C++, and am a complete ATL newbie, so I apologize if this is a basic question. I'm starting with an existing VC++ executable project that has functionality I'd like to expose as an ActiveX object (while sharing as much of the source as possible between the two projects). I've approached this by adding an ATL project to the solution in question, and in that project have referenced all the .h and .cpp files from the executable project, added all the appropriate references, and defined all the preprocessor macros. So far so good. But I'm getting a compiler error in one file (HideDesktop.cpp). The relevant parts look like this: #include "stdafx.h" #define WIN32_LEAN_AND_MEAN #include <Windows.h> #include <WinInet.h> // Shell object uses INTERNET_MAX_URL_LENGTH (go figure) #if _MSC_VER < 1400 #define _WIN32_IE 0x0400 #endif #include <atlbase.h> // ATL smart pointers #include <shlguid.h> // shell GUIDs #include <shlobj.h> // IActiveDesktop #include "stdhdrs.h" struct __declspec(uuid("F490EB00-1240-11D1-9888-006097DEACF9")) IActiveDesktop; #define PACKVERSION(major,minor) MAKELONG(minor,major) static HRESULT EnableActiveDesktop(bool enable) { CoInitialize(NULL); HRESULT hr; CComQIPtr<IActiveDesktop, &IID_IActiveDesktop> pIActiveDesktop; // <- Problematic line (throws errors 2065 and 2275) hr = pIActiveDesktop.CoCreateInstance(CLSID_ActiveDesktop, NULL, CLSCTX_INPROC_SERVER); if (!SUCCEEDED(hr)) { return hr; } COMPONENTSOPT opt; opt.dwSize = sizeof(opt); opt.fActiveDesktop = opt.fEnableComponents = enable; hr = pIActiveDesktop->SetDesktopItemOptions(&opt, 0); if (!SUCCEEDED(hr)) { CoUninitialize(); // pIActiveDesktop->Release(); return hr; } hr = pIActiveDesktop->ApplyChanges(AD_APPLY_REFRESH); CoUninitialize(); // pIActiveDesktop->Release(); return hr; } This code is throwing the following compiler errors: error C2065: 'CComQIPtr' : undeclared identifier error C2275: 'IActiveDesktop' : illegal use of this type as an expression error C2065: 'pIActiveDesktop' : undeclared identifier The two weird bits: (1) CComQIPtr is defined in atlcomcli.h, which is included in atlbase.h, which is included in HideDesktop.cpp; and (2) this file is only throwing these errors when it's referenced in my new ATL/AX project: it's not throwing them in the original executable project, even though they have basically the same preprocessor definitions. (The ATL AX project, naturally enough, defines _ATL_DLL, but I can't see where that would make a difference.) My current workaround is to use a normal "dumb" pointer, like so: IActiveDesktop *pIActiveDesktop; HRESULT hr = ::CoCreateInstance(CLSID_ActiveDesktop, NULL, // no outer unknown CLSCTX_INPROC_SERVER, IID_IActiveDesktop, (void**)&pIActiveDesktop); And that works, provided I remember to release it. But I'd rather be using the ATL smart stuff. Any thoughts?

  • Problem getting correct parameters for C# P/Invoke call to C++ dll

    - by Jim Jones
    Trying to Interop a functionality from the Outside In API from Oracle. Have the following function: SCCERR EXOpenExport {VTHDOC hDoc, VTDWORD dwOutputId, VTDWORD dwSpecType, VTLPVOID pSpec, VTDWORD dwFlags, VTSYSPARAM dwReserved, VTLPVOID pCallbackFunc, VTSYSPARAM dwCallbackData, VTLPHEXPORT phExport); From the header files I reduced the parameters to: typedef VTSYSPARAM VTHDOC, VTLPHDOC * typedef DWORD_PTR VTSYSPARAM typedef unsigned long DWORD_PTR typedef unsigned long VTDWORD typedef VTVOID* VTLPVOID #define VTVOID void typedef VTHDOC VTHEXPORT, *VTLPEXPORT These are for 32 bit windows Going through the header files, the example programs, and the documentation I found: 1. That pSpec could be a pointer to a buffer or NULL, so I set it to a IntPtr.Zero (documentation). 2. That dwFlags and dwReserved according to the documentation "Must be set by the developer to 0". 3. That pCallbackFunc can be set to NULL if I don't want to handle callbacks. 4. That the last two are based on structs that I wrote C# wrappers for using the [StructLayout(LayoutKind.Sequential)]. Then instatiated an instance and generated the parameters by first creating a IntPtr with Marshal.AllocHGlobal(Marshal.SizeOf(instance)), then getting the address value which is passed as a uint for dwCallbackData and a IntPtr for phExport. The final parameter list is as follows: 1. phDoc as a IntPtr which was loaded with an address by the DAOpenDocument function called before. 2. dwOutputId as uint set to 1535 which represents FI_JPEGFIF 3. dwSpecType as int set to 2 which represents IOTYPE_ANSIPATH 4. pSpec as an IntPtr.Zero where the output will be written 5. dwFlags as uint set to 0 as directed 6. dwReserved as uint set to 0 as directed 7. pCallbackFunc as IntPtr set to NULL as I will handle results 8. dwCallBackDate as uint the address of a buffer for a struct 9. phExport as IntPtr to another struct buffer still get an undefined error from the API. Meaning that the call returns a 961 which is not defined in any of the header files. In the past I have gotten this when my choice of parameter types are incorrect. I started out using Interop Assistant which was helpful in learning how many of the parameter types get translated. It is however limited by how well I am able to glean the correct native type from the header files. For example the hDoc parameter used in the preceding function was defined as a non-filesytem handle, so attempted to use Marshal to create a handle, then used an IntPtr, and finally it turned out to be an int (actually it was &phDoc used here). So is there a more scientific way of doing this, other than trial and error? Jim

  • When is #include <new> library required in C++?

    - by Czarak
    Hi, According to this reference entry for operator new ( http://www.cplusplus.com/reference/std/new/operator%20new/ ) : Global dynamic storage operator functions are special in the standard library: All three versions of operator new are declared in the global namespace, not in the std namespace. The first and second versions are implicitly declared in every translation unit of a C++ program: The header does not need to be included for them to be present. This seems to me to imply that the third version of operator new (placement new) is not implicitly declared in every translation unit of a C++ program and the header <new> does need to be included for it to be present. Is that correct? If so, how is it that using both g++ and MS VC++ Express compilers it seems I can compile code using the third version of new without #include <new> in my source code? Also, the MSDN Standard C++ Library reference entry on operator new gives some example code for the three forms of operator new which contains the #include <new> statement, however the example seems to compile and run just the same for me without this include? // new_op_new.cpp // compile with: /EHsc #include<new> #include<iostream> using namespace std; class MyClass { public: MyClass( ) { cout << "Construction MyClass." << this << endl; }; ~MyClass( ) { imember = 0; cout << "Destructing MyClass." << this << endl; }; int imember; }; int main( ) { // The first form of new delete MyClass* fPtr = new MyClass; delete fPtr; // The second form of new delete char x[sizeof( MyClass )]; MyClass* fPtr2 = new( &x[0] ) MyClass; fPtr2 -> ~MyClass(); cout << "The address of x[0] is : " << ( void* )&x[0] << endl; // The third form of new delete MyClass* fPtr3 = new( nothrow ) MyClass; delete fPtr3; } Could anyone shed some light on this and when and why you might need to #include <new> - maybe some example code that will not compile without #include <new> ? Thanks.
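
    For illustration, a minimal sketch that uses the placement and nothrow forms with the header included explicitly. The likely reason the example compiles without it is that on many implementations other standard headers (the <iostream> chain, for instance) include <new> transitively; portable code should not rely on that.

        #include <new>        // declares the placement and nothrow forms of operator new
        #include <iostream>

        int main()
        {
            int storage = 0;                        // raw, correctly aligned storage

            int *p = new (&storage) int(42);        // placement form, declared in <new>
            int *q = new (std::nothrow) int(7);     // nothrow form, std::nothrow lives in <new>

            std::cout << *p << " " << (q != 0 ? *q : 0) << std::endl;
            delete q;                               // int is trivially destructible, so *p
                                                    // needs no explicit destructor call
            return 0;
        }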

  • Chaining multiple ShellExecute calls

    - by IVlad
    Consider the following code and its executable - runner.exe: #include <iostream> #include <string> #include <windows.h> using namespace std; int main(int argc, char *argv[]) { SHELLEXECUTEINFO shExecInfo; shExecInfo.cbSize = sizeof(SHELLEXECUTEINFO); shExecInfo.fMask = NULL; shExecInfo.hwnd = NULL; shExecInfo.lpVerb = "open"; shExecInfo.lpFile = argv[1]; string Params = ""; for ( int i = 2; i < argc; ++i ) Params += argv[i] + ' '; shExecInfo.lpParameters = Params.c_str(); shExecInfo.lpDirectory = NULL; shExecInfo.nShow = SW_SHOWNORMAL; shExecInfo.hInstApp = NULL; ShellExecuteEx(&shExecInfo); return 0; } These two batch files both do what they're supposed to, which is run notepad.exe and run notepad.exe and tell it to try to open test.txt: 1. runner.exe notepad.exe 2. runner.exe notepad.exe test.txt Now, consider this batch file: 3. runner.exe runner.exe notepad.exe This one should run runner.exe and send notepad.exe as one of its command line arguments, shouldn't it? Then, that second instance of runner.exe should run notepad.exe - which doesn't happen, I get a "Windows cannot find 'am'. Make sure you typed the name correctly, and then try again" error. If I print the argc argument, it's 14 for the second instance of runner.exe, and they are all weird stuff like Files\Microsoft, SQL, Files\Common and so on. I can't figure out why this happens. I want to be able to string as many runner.exe calls using command line arguments as possible, or at least 2. How can I do that? I am using Windows 7 if that makes a difference.
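
    One detail worth checking before blaming ShellExecuteEx: argv[i] is a char*, so Params += argv[i] + ' ' performs pointer arithmetic (it advances the pointer by 32, the value of ' ') instead of appending a space, which would explain the garbage arguments such as "am" and "Files\Microsoft" seen by the second runner.exe. A minimal sketch of the parameter builder with that fixed; the helper name is made up, and the returned string must stay alive until ShellExecuteEx has been called, as in the original function.

        #include <string>

        static std::string BuildParams(int argc, char *argv[])
        {
            std::string params;
            for (int i = 2; i < argc; ++i) {
                params += argv[i];   // std::string::operator+= appends the C string
                params += ' ';       // then append the separator explicitly
            }
            return params;
        }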

  • OpenGL suppresses exceptions in MFC dialog-based application

    - by Mikhail
    Hello. I have an MFC-driven dialog-based application created with MSVS2005. Here is my problem step by step. I have button on my dialog and corresponding click-handler with code like this: int* i = 0; *i = 3; I'm running debug version of program and when I click on the button, Visual Studio catches focus and alerts "Access violation writing location" exception, program cannot recover from the error and all I can do is to stop debugging. And this is the right behavior. Now I add some OpenGL initialization code in the OnInitDialog() method: HDC DC = GetDC(GetSafeHwnd()); static PIXELFORMATDESCRIPTOR pfd = { sizeof(PIXELFORMATDESCRIPTOR), // size of this pfd 1, // version number PFD_DRAW_TO_WINDOW | // support window PFD_SUPPORT_OPENGL | // support OpenGL PFD_DOUBLEBUFFER, // double buffered PFD_TYPE_RGBA, // RGBA type 24, // 24-bit color depth 0, 0, 0, 0, 0, 0, // color bits ignored 0, // no alpha buffer 0, // shift bit ignored 0, // no accumulation buffer 0, 0, 0, 0, // accum bits ignored 32, // 32-bit z-buffer 0, // no stencil buffer 0, // no auxiliary buffer PFD_MAIN_PLANE, // main layer 0, // reserved 0, 0, 0 // layer masks ignored }; int pixelformat = ChoosePixelFormat(DC, &pfd); SetPixelFormat(DC, pixelformat, &pfd); HGLRC hrc = wglCreateContext(DC); ASSERT(hrc != NULL); wglMakeCurrent(DC, hrc); Of course this is not exactly what I do, it is the simplified version of my code. Well now the strange things begin to happen: all initialization is fine, there are no errors in OnInitDialog(), but when I click the button... no exception is thrown. Nothing happens. At all. If I set a break-point at the *i = 3; and press F11 on it, the handler-function halts immediately and focus is returned to the application, which continue to work well. I can click button again and the same thing will happen. It seems like someone had handled occurred exception of access violation and silently returned execution into main application message-receiving cycle. If I comment the line wglMakeCurrent(DC, hrc);, all works fine as before, exception is thrown and Visual Studio catches it and shows window with error message and program must be terminated afterwards. I experience this problem under Windows 7 64-bit, NVIDIA GeForce 8800 with latest drivers (of 11.01.2010) available at website installed. My colleague has Windows Vista 32-bit and has no such problem - exception is thrown and application crashes in both cases. Well, hope good guys will help me :) PS The problem originally where posted under this topic.

  • C# interop with Ghostscript

    - by yodaj007
    I'm trying to access some Ghostscript functions like so: [DllImport(@"C:\Program Files\GPLGS\gsdll32.dll", EntryPoint = "gsapi_revision")] public static extern int Foo(gsapi_revision_t x, int len); public struct gsapi_revision_t { [MarshalAs(UnmanagedType.LPTStr)] string product; [MarshalAs(UnmanagedType.LPTStr)] string copyright; long revision; long revisiondate; } public static void Main() { gsapi_revision_t foo = new gsapi_revision_t(); Foo(foo, Marshal.SizeOf(foo)); This corresponds with these definitions from the iapi.h header from ghostscript: typedef struct gsapi_revision_s { const char *product; const char *copyright; long revision; long revisiondate; } gsapi_revision_t; GSDLLEXPORT int GSDLLAPI gsapi_revision(gsapi_revision_t *pr, int len); But my code is reading nothing into the string fields. If I add 'ref' to the function, it reads gibberish. However, the following code reads in the data just fine: public struct gsapi_revision_t { IntPtr product; IntPtr copyright; long revision; long revisiondate; } public static void Main() { gsapi_revision_t foo = new gsapi_revision_t(); IntPtr x = Marshal.AllocHGlobal(20); for (int i = 0; i < 20; i++) Marshal.WriteInt32(x, i, 0); int result = Foo(x, 20); IntPtr productNamePtr = Marshal.ReadIntPtr(x); IntPtr copyrightPtr = Marshal.ReadIntPtr(x, 4); long revision = Marshal.ReadInt64(x, 8); long revisionDate = Marshal.ReadInt64(x, 12); byte[] dest = new byte[1000]; Marshal.Copy(productNamePtr, dest, 0, 1000); string name = Read(productNamePtr); string copyright = Read(copyrightPtr); } public static string Read(IntPtr p) { List<byte> bits = new List<byte>(); int i = 0; while (true) { byte b = Marshal.ReadByte(new IntPtr(p.ToInt64() + i)); if (b == 0) break; bits.Add(b); i++; } return Encoding.ASCII.GetString(bits.ToArray()); } So what am I doing wrong with marshaling?

  • Counting Alphabetic Characters That Are Contained in an Array with C

    - by Craig
    Hello everyone, I am having trouble with a homework question that I've been working at for quite some time. I don't know exactly why the question is asking and need some clarification on that and also a push in the right direction. Here is the question: (2) Solve this problem using one single subscripted array of counters. The program uses an array of characters defined using the C initialization feature. The program counts the number of each of the alphabetic characters a to z (only lower case characters are counted) and prints a report (in a neat table) of the number of occurrences of each lower case character found. Only print the counts for the letters that occur at least once. That is do not print a count if it is zero. DO NOT use a switch statement in your solution. NOTE: if x is of type char, x-‘a’ is the difference between the ASCII codes for the character in x and the character ‘a’. For example if x holds the character ‘c’ then x-‘a’ has the value 2, while if x holds the character ‘d’, then x-‘a’ has the value 3. Provide test results using the following string: “This is an example of text for exercise (2).” And here is my source code so far: #include<stdio.h> int main() { char c[] = "This is an example of text for exercise (2)."; char d[26]; int i; int j = 0; int k; j = 0; //char s = 97; for(i = 0; i < sizeof(c); i++) { for(s = 'a'; s < 'z'; s++){ if( c[i] == s){ k++; printf("%c,%d\n", s, k); k = 0; } } } return 0; } As you can see, my current solution is a little anemic. Thanks for the help, and I know everyone on the net doesn't necessarily like helping with other people's homework. ;P
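
    A minimal sketch of the single-array-of-counters idea the hint points at (x - 'a' maps 'a'..'z' onto indices 0..25); it uses the test string from the assignment, counts only lower-case letters, prints only non-zero counts, and needs no switch statement:

        #include <stdio.h>

        int main(void)
        {
            const char c[] = "This is an example of text for exercise (2).";
            int counts[26] = {0};
            int i;

            for (i = 0; c[i] != '\0'; i++) {
                if (c[i] >= 'a' && c[i] <= 'z')    /* lower-case letters only */
                    counts[c[i] - 'a']++;
            }
            for (i = 0; i < 26; i++) {
                if (counts[i] > 0)                 /* skip letters that never occur */
                    printf("%c %d\n", 'a' + i, counts[i]);
            }
            return 0;
        }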

  • CUDA: more threads for the same work = longer run time despite better occupancy. Why?

    - by zenna
    I encountered a strange problem where increasing my occupancy by increasing the number of threads reduced performance. I created the following program to illustrate the problem: #include <stdio.h> #include <stdlib.h> #include <cuda_runtime.h> __global__ void less_threads(float * d_out) { int num_inliers; for (int j=0;j<800;++j) { //Do 12 computations num_inliers += threadIdx.x*1; num_inliers += threadIdx.x*2; num_inliers += threadIdx.x*3; num_inliers += threadIdx.x*4; num_inliers += threadIdx.x*5; num_inliers += threadIdx.x*6; num_inliers += threadIdx.x*7; num_inliers += threadIdx.x*8; num_inliers += threadIdx.x*9; num_inliers += threadIdx.x*10; num_inliers += threadIdx.x*11; num_inliers += threadIdx.x*12; } if (threadIdx.x == -1) d_out[blockIdx.x*blockDim.x+threadIdx.x] = num_inliers; } __global__ void more_threads(float *d_out) { int num_inliers; for (int j=0;j<800;++j) { // Do 4 computations num_inliers += threadIdx.x*1; num_inliers += threadIdx.x*2; num_inliers += threadIdx.x*3; num_inliers += threadIdx.x*4; } if (threadIdx.x == -1) d_out[blockIdx.x*blockDim.x+threadIdx.x] = num_inliers; } int main(int argc, char* argv[]) { float *d_out = NULL; cudaMalloc((void**)&d_out,sizeof(float)*25000); more_threads<<<780,128>>>(d_out); less_threads<<<780,32>>>(d_out); return 0; } Note both kernels should do the same amount of work in total, the (if threadIdx.x == -1 is a trick to stop the compiler optimising everything out and leaving an empty kernel). The work should be the same as more_threads is using 4 times as many threads but with each thread doing 4 times less work. Significant results form the profiler results are as followsL: more_threads: GPU runtime = 1474 us,reg per thread = 6,occupancy=1,branch=83746,divergent_branch = 26,instructions = 584065,gst request=1084552 less_threads: GPU runtime = 921 us,reg per thread = 14,occupancy=0.25,branch=20956,divergent_branch = 26,instructions = 312663,gst request=677381 As I said previously, the run time of the kernel using more threads is longer, this could be due to the increased number of instructions. Why are there more instructions? Why is there any branching, let alone divergent branching, considering there is no conditional code? Why are there any gst requests when there is no global memory access? What is going on here! Thanks

  • Serious Memory Clash: Variables clashing in C

    - by Soham
    int main(int argc, char*argv[]) { Message* newMessage; Asset* Check; //manipulation and initialization of Check, so that it holds proper values. newMessage = parser("N,2376,01/02/2011 09:15:01.342,JPASSOCIAT FUTSTK 24FEB2011,B,84.05,2000,0,0",newMessage); // MessageProcess(newMessage,AssetMap); printf("LAST TRADE ADDRESS %p LAST TRADE TIME %s\n",Check->TradeBook,Check->Time); } Message* parser(char *message,Message* new_Message) { char a[9][256]; char* tmp =message; bool inQuote=0; int counter=0; int counter2=0; new_Message = (Message*)malloc(sizeof(Message)); while(*tmp!='\0') { switch(*tmp) { case ',': if(!inQuote) { a[counter][counter2]='\0'; counter++; counter2=0; } break; case '"': inQuote=!inQuote; break; default: a[counter][counter2]=*tmp; counter2++; break; } tmp++; } a[counter][counter2]='\0'; new_Message->type = *a[0]; new_Message->Time = &a[2][11]; new_Message->asset = a[3]; if(*a[4]=='S') new_Message->BS = 0; else new_Message->BS = 1; new_Message->price1=atof(a[5]); new_Message->shares1=atol(a[6]); new_Message->price2=atof(a[7]); new_Message->shares2=atol(a[8]); new_Message->idNum = atoi(a[1]); return(new_Message); } Here there is a serious memory clash, in two variables of different scope. I have investigated using gdb and it seems the address of new_Message->Time is equalling to the address of Check->Time. They both are structures of different types I am trying to resolve this issue, because, when parser changes the value of new_Message->Time it manipulates the contents of Check-Time Please do suggest how to solve this problem. I have lost(spent) around 10 hours and counting to resolve this issue, and tons of hair. Soham
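
    One likely explanation: parser() stores pointers into the local array a (new_Message->Time = &a[2][11], new_Message->asset = a[3]), and a ceases to exist when parser() returns, so those fields end up aliasing whatever reuses that stack memory, which can easily overlap with another object such as Check. A sketch of the usual fix is to copy the text out of the buffer instead of keeping pointers into it (strdup is POSIX, declared in <string.h>; MSVC calls it _strdup):

        /* Inside parser(), replace the aliasing assignments with copies: */
        new_Message->Time  = strdup(&a[2][11]);
        new_Message->asset = strdup(a[3]);

        /* ...and free(new_Message->Time); free(new_Message->asset); when the
           Message is released, before freeing the Message itself. */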

  • How to implement code for multiple buttons using C++ in Silverlight for Windows Embedded

    - by Abhi
    Dear all I have referred the following link: Silverlight for Windows Embedded By referring this link i created a demo application which consist of two buttons created using Microsoft expression blend 2 tools. And then written a code referring the above site. Now my button names are "Browser Button" and "Media Button". On click of any one of the button i should able to launch the respective application. I was able to do for "Browser Button" but not for "Media Button" and if i do for "Media Button" then i am not able to do for "Browser Button".. I mean to say that how should i create event handler for both the buttons. This is the code in c++ which i should modify class BtnEventHandler { public: HRESULT OnClick(IXRDependencyObject* source,XRMouseButtonEventArgs* args) { RETAILMSG(1,(L"Browser event")); Execute(L"\\Windows\\iesample.exe",L""); return S_OK; } }; // entry point for the application. INT WINAPI WinMain(HINSTANCE hInstance,HINSTANCE hPrevInstance, LPWSTR lpCmdLine,int nCmdShow) { PrintMessage(); int exitCode = -1; HRESULT hr = S_OK; if (!XamlRuntimeInitialize()) return -1; HRESULT retcode; IXRApplicationPtr app; if (FAILED(retcode=GetXRApplicationInstance(&app))) return -1; if (FAILED(retcode=app->AddResourceModule(hInstance))) return -1; XRWindowCreateParams wp; ZeroMemory(&wp, sizeof(XRWindowCreateParams)); wp.Style = WS_OVERLAPPED; wp.pTitle = L"Bounce Test"; wp.Left = 0; wp.Top = 0; XRXamlSource xamlsrc; xamlsrc.SetResource(hInstance,TEXT("XAML"),MAKEINTRESOURCE(IDR_XAML1)); IXRVisualHostPtr vhost; if (FAILED(retcode=app->CreateHostFromXaml(&xamlsrc, &wp, &vhost))) return -1; IXRFrameworkElementPtr root; if (FAILED(retcode=vhost->GetRootElement(&root))) return -1; IXRButtonBasePtr btn; if (FAILED(retcode=root->FindName(TEXT("BrowserButton"), &btn))) return -1; IXRDelegate<XRMouseButtonEventArgs>* clickdelegate; BtnEventHandler handler; if(FAILED(retcode=CreateDelegate (&handler,&BtnEventHandler::OnClick,&clickdelegate))) return -1; if (FAILED(retcode=btn->AddClickEventHandler(clickdelegate))) return -1; UINT exitcode; if (FAILED(retcode=vhost->StartDialog(&exitcode))) return -1; return exitCode; } I have to add event handler for both the button so that on emulator whenever i click on any one of the button i should be able to launch the respective applications. Thanks in advance
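
    A sketch of one way to wire both buttons, reusing only calls that already appear in the code above: give the handler class one method per button, look each button up by name, and attach a separate delegate to each. "MediaButton" is assumed to be the x:Name in the XAML, and the media player path is a placeholder.

        // Handler class with one method per button; the signatures mirror OnClick above.
        class BtnEventHandler
        {
        public:
            HRESULT OnBrowserClick(IXRDependencyObject* source, XRMouseButtonEventArgs* args)
            {
                Execute(L"\\Windows\\iesample.exe", L"");
                return S_OK;
            }
            HRESULT OnMediaClick(IXRDependencyObject* source, XRMouseButtonEventArgs* args)
            {
                Execute(L"\\Windows\\ceplayer.exe", L"");   // placeholder player path
                return S_OK;
            }
        };

        // In WinMain, after GetRootElement(&root), wire each button separately:
        //   IXRButtonBasePtr btnBrowser, btnMedia;
        //   root->FindName(TEXT("BrowserButton"), &btnBrowser);
        //   root->FindName(TEXT("MediaButton"),   &btnMedia);      // assumed x:Name
        //   IXRDelegate<XRMouseButtonEventArgs> *browserDelegate, *mediaDelegate;
        //   CreateDelegate(&handler, &BtnEventHandler::OnBrowserClick, &browserDelegate);
        //   CreateDelegate(&handler, &BtnEventHandler::OnMediaClick,   &mediaDelegate);
        //   btnBrowser->AddClickEventHandler(browserDelegate);
        //   btnMedia->AddClickEventHandler(mediaDelegate);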

  • Unmanaged Code calling leads to heavy memory leak!!

    - by konnychen
    Maybe I need change the title as "Unmanaged Code calling leads to heavy memory leak!" The leak is around 30M/hour I think maybe I need complete my code here because the memory leak maybe not from a static string whereas my real code derive this string from external device (see new code attached). so I handle also unmanaged code. Could it be possible the leak comes from unmanaged code? But I freed the resouce by Marshal.FreeCoTaskMem(pos); oThread2 = new Thread(new ThreadStart(Cyclic_Call)); oThread2.Start(); delegate void SetText_lab_Statubar(string text); private void m_SetText_lab_Statubar(string text) { if (this.lab_Statubar.InvokeRequired) { SetText_lab_Statubar d = new SetText_lab_Statubar(m_SetText_lab_Statubar); this.Invoke(d, new object[] { text }); } else { this.lab_Statubar.Text = text; } } private void Cyclic_Call() { do { //... ... ReadMatrixCode(Station6, 0, str_Code); this.m_SetText_lab_Statubar(str_Code[4]); Thread.Sleep(100); } while (!b_AbortThraed); } private void ReadMatrixCode(Station st, int ItemNr, string[] str_Code) { IntPtr pItemStates = IntPtr.Zero; IntPtr pErrors = IntPtr.Zero; int NumItems = itemServerHandles.Length; m_SyncIO.Read(DataSrc, NumItems, itemServerHandles, out pItemStates, out pErrors); // This calls external dll which has some of "out IntPtr" errors = new int[NumItems]; Marshal.Copy(pErrors, errors, 0, NumItems); IntPtr pos = pItemStates; // Now get the read values and check errors for (int dwCount = 0; dwCount < NumItems; dwCount++) { result[dwCount] = (ITEMSTATE)Marshal.PtrToStructure(pos, typeof(ITEMSTATE)); pos = (IntPtr)(pos.ToInt32() + Marshal.SizeOf(typeof(ITEMSTATE))); } // Free allocated COM-ressouces Marshal.FreeCoTaskMem(pItemStates); Marshal.FreeCoTaskMem(pErrors); pItemStates = IntPtr.Zero; pErrors = IntPtr.Zero; } m_syncIO is a class and finally it will call COM component which is defined below [Guid("39C12B52-011E-11D0-9675-1020AFD8ADB3")] [InterfaceType(1)] [ComConversionLoss] public interface ISyncIO { void Read(DATASOURCE dwSource, int dwCount, int[] phServer, out IntPtr ppItemValues, out IntPtr ppErrors); void Write(int dwCount, int[] phServer, object[] pItemValues, out IntPtr ppErrors); }

  • Problem while displaying a texture image on a view that works fine on the iPhone simulator but not on the device

    - by yunas
    hello i am trying to display an image on iphone by converting it into texture and then displaying it on the UIView. here is the code to load an image from an UIImage object - (void)loadImage:(UIImage *)image mipmap:(BOOL)mipmap texture:(uint32_t)texture { int width, height; CGImageRef cgImage; GLubyte *data; CGContextRef cgContext; CGColorSpaceRef colorSpace; GLenum err; if (image == nil) { NSLog(@"Failed to load"); return; } cgImage = [image CGImage]; width = CGImageGetWidth(cgImage); height = CGImageGetHeight(cgImage); colorSpace = CGColorSpaceCreateDeviceRGB(); // Malloc may be used instead of calloc if your cg image has dimensions equal to the dimensions of the cg bitmap context data = (GLubyte *)calloc(width * height * 4, sizeof(GLubyte)); cgContext = CGBitmapContextCreate(data, width, height, 8, width * 4, colorSpace, kCGImageAlphaPremultipliedLast); if (cgContext != NULL) { // Set the blend mode to copy. We don't care about the previous contents. CGContextSetBlendMode(cgContext, kCGBlendModeCopy); CGContextDrawImage(cgContext, CGRectMake(0.0f, 0.0f, width, height), cgImage); glGenTextures(1, &(_textures[texture])); glBindTexture(GL_TEXTURE_2D, _textures[texture]); if (mipmap) glTexParameteri(GL_TEXTURE_2D, GL_TEXTURE_MIN_FILTER, GL_LINEAR_MIPMAP_LINEAR); else glTexParameteri(GL_TEXTURE_2D, GL_TEXTURE_MIN_FILTER, GL_LINEAR); glTexImage2D(GL_TEXTURE_2D, 0, GL_RGBA, width, height, 0, GL_RGBA, GL_UNSIGNED_BYTE, data); if (mipmap) glGenerateMipmapOES(GL_TEXTURE_2D); err = glGetError(); if (err != GL_NO_ERROR) NSLog(@"Error uploading texture. glError: 0x%04X", err); CGContextRelease(cgContext); } free(data); CGColorSpaceRelease(colorSpace); } The problem that i currently am facing is this code workd perfectly fine and displays the image on simulator where as on the device as seen on debugger an error is displayed i.e. Error uploading texture. glError: 0x0501 any idea how to tackle this bug.... thnx in advance 4 ur soluitons
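
    For reference, 0x0501 is GL_INVALID_VALUE. A common cause of glTexImage2D failing with it on first-generation iPhone hardware, while the simulator accepts the same call, is a texture whose width or height is not a power of two, which OpenGL ES 1.1 requires unless an extension relaxes it. A hypothetical helper for rounding the dimensions up before creating the bitmap context and uploading:

        #include <stddef.h>

        /* Rounds a texture dimension up to the next power of two. */
        static size_t NextPowerOfTwo(size_t n)
        {
            size_t p = 1;
            while (p < n)
                p <<= 1;
            return p;
        }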

  • Help me understand this C code

    - by Benjamin
    INT GetTree (HWND hWnd, HTREEITEM hItem, HKEY *pRoot, TCHAR *pszKey, INT nMax) { TV_ITEM tvi; TCHAR szName[256]; HTREEITEM hParent; HWND hwndTV = GetDlgItem (hWnd, ID_TREEV); memset (&tvi, 0, sizeof (tvi)); hParent = TreeView_GetParent (hwndTV, hItem); if (hParent) { // Get the parent of the parent of the... GetTree (hWnd, hParent, pRoot, pszKey, nMax); // Get the name of the item. tvi.mask = TVIF_TEXT; tvi.hItem = hItem; tvi.pszText = szName; tvi.cchTextMax = dim(szName); TreeView_GetItem (hwndTV, &tvi); //send the TVM_GETITEM message? lstrcat (pszKey, TEXT ("\\")); lstrcat (pszKey, szName); } else { *pszKey = TEXT ('\0'); szName[0] = TEXT ('\0'); // Get the name of the item. tvi.mask = TVIF_TEXT | TVIF_PARAM; tvi.hItem = hItem; tvi.pszText = szName; tvi.cchTextMax = dim(szName); if (TreeView_GetItem (hwndTV, &tvi)) //*pRoot = (HTREEITEM)tvi.lParam; //original hItem = (HTREEITEM)tvi.lParam; else { INT rc = GetLastError(); } } return 0; } The block of code that begins with the comment "Get the name of the item" does not make sense to me. If you are getting the listview item why does the code set the parameters of the item being retrieved? If you already had the values there would be no need to retrieve them. Secondly near the comment "original" is the original line of code which will compile with a warning under embedded visual c++ 4.0, but if you copy the exact same code into visual studio 2008 it will not compile. Since I did not write any of this code, and am trying to learn, is it possible the original author made a mistake on this line? The *pRoot should point to HKEY type yet he is casting to an HTREEITEM type which should never work since the data types don't match?

  • Establishing a TCP connection from within a DLL

    - by Nicholas Hollander
    I'm trying to write a piece of code that will allow me to establish a TCP connection from within a DLL file. Here's my situation: I have a ruby application that needs to be able to send and receive data over a socket, but I can not access the native ruby socket methods because of the environment in which it will be running. I can however access a DLL file and run the functions within that, so I figured I would create a wrapper for winsock. Unfortunately, attempting to take a piece of code that should connect to a TCP socket in a normal C++ application throws a slew of LNK2019 errors that I can not for the life of me resolve. This is the method I'm using to connect: //Socket variable SOCKET s; //Establishes a connection to the server int server_connect(char* addr, int port) { //Start up Winsock WSADATA wsadata; int error = WSAStartup(0x0202, &wsadata); //Check if something happened if (error) return -1; //Verify Winock version if (wsadata.wVersion != 0x0202) { //Clean up and close WSACleanup(); return -2; } //Get the information needed to finalize a socket SOCKADDR_IN target; target.sin_family = AF_INET; //Address family internet target.sin_port = _WINSOCKAPI_::htons(port); //Port # target.sin_addr.s_addr = inet_addr(addr); //Create the socket s = socket(AF_INET, SOCK_STREAM, IPPROTO_TCP); if (s == INVALID_SOCKET) { return -3; } //Try connecting if (connect(s, (SOCKADDR *)&target, sizeof(target)) == SOCKET_ERROR) { //Failed to connect return -4; } else { //Success return 1; } } The exact errors that I'm receiving are: Error 1 error LNK2019: unresolved external symbol _closesocket@4 referenced in function _server_disconnect [Project Path] Error 2 error LNK2019: unresolved external symbol _connect@12 referenced in function _server_connect [Project Path] Error 3 error LNK2019: unresolved external symbol _htons@4 referenced in function _server_connect [Project Path] Error 4 error LNK2019: unresolved external symbol _inet_addr@4 referenced in function _server_connect [Project Path] Error 5 error LNK2019: unresolved external symbol _socket@12 referenced in function _server_connect [Project Path] Error 6 error LNK2019: unresolved external symbol _WSAStartup@8 referenced in function _server_connect [Project Path] Error 7 error LNK2019: unresolved external symbol _WSACleanup@0 referenced in function _server_connect [Project Path] Error 8 error LNK1120: 7 unresolved externals [Project Path] 1 1 Many thanks!
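
    The unresolved symbols (WSAStartup, socket, connect, htons, closesocket, ...) all live in Ws2_32.lib, so the likely issue is simply that the new DLL project does not link that import library even though the original application project did. A minimal sketch of requesting it from code with MSVC; adding ws2_32.lib to the linker's Additional Dependencies achieves the same thing.

        #include <winsock2.h>
        #include <ws2tcpip.h>
        #pragma comment(lib, "ws2_32.lib")   // MSVC-specific: link the Winsock import library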
