Search Results

Search found 14548 results on 582 pages for 'const reference'.


  • "Access violation reading location" while accessing a global vector..

    - by djzmo
    Hello there, -- First of all, I don't know whether the vector can be called as a "global vector" if I declared it under a namespace, but not in a class or function. -- I'm now writing a simple Irrlicht (http://irrlicht.sourceforge.net) wrapper for my game to make things simpler and easier, but recently I got an "Access violation reading location" error when trying to push_back a vector declared in the global scope. Here is my code so far: irrwrap.cpp: namespace irrw { //......... IrrlichtDevice *device; IVideoDriver *driver; irr::core::array<irr::video::ITexture*> TextureCollector; vector<int> TextureConnector; //......... } //.............. void irrInit(int iGraphicsDriver, int iWindowWidth, int iWindowHeight, int iScreenDepth, bool bFullScreen) { E_DRIVER_TYPE drvT; if(iGraphicsDriver == GD_SOFTWARE) drvT = EDT_SOFTWARE; else if(iGraphicsDriver == GD_D3D8) drvT = EDT_DIRECT3D8; else if(iGraphicsDriver == GD_D3D9) drvT = EDT_DIRECT3D9; else if(iGraphicsDriver == GD_OPENGL) drvT = EDT_OPENGL; //.............. irrw::device = createDevice(drvT, dimension2d<u32>(iWindowWidth, iWindowHeight), iScreenDepth, bFullScreen); irrw::driver = irrw::device->getVideoDriver(); //.................. } void irrLoadImage(irr::core::stringc szFileName, int iID, int iTextureFlag) { //........ irrw::TextureCollector.push_back(irrw::driver->getTexture(szFileName)); // the call stack pointed to this line irrw::TextureConnector.push_back(iID); } main.cpp: //......... INT WINAPI WinMain(HINSTANCE hInst, HINSTANCE, LPSTR strCmdLine, INT) { //......... irrInit(GD_OPENGL, 800, 600, 16, false); irrLoadImage("picture.jpg", 100, 1); //......... } and the error: Unhandled exception at 0x692804d6 in Game.exe: 0xC0000005: Access violation reading location 0x00000558. Now I really got no idea on how to fix the problem. Any kind of help would be appreciated :) Here are some prototypes: virtual ITexture* getTexture(const io::path& filename) = 0; typedef core::string<fschar_t> path; // under 'io' namespace typedef char fschar_t; typedef string<c8> stringc; typedef char c8; Just FYI, I am using MSVC++ 2008 EE. (CODE UPDATED)
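    A fault address as small as 0x00000558 usually means a member is being read through a null (or never-initialized) pointer, not that the vector itself is corrupt. Below is a minimal sketch of the defensive checks that normally expose the real cause; it reuses the GD_* constants and the irrw namespace from the question, and it also gives drvT a default so it can never reach createDevice uninitialized:

      // Sketch only - relies on the declarations already shown in irrwrap.cpp
      void irrInit(int iGraphicsDriver, int iWindowWidth, int iWindowHeight,
                   int iScreenDepth, bool bFullScreen)
      {
          E_DRIVER_TYPE drvT = EDT_SOFTWARE;          // safe default instead of an uninitialized enum
          if (iGraphicsDriver == GD_D3D8)        drvT = EDT_DIRECT3D8;
          else if (iGraphicsDriver == GD_D3D9)   drvT = EDT_DIRECT3D9;
          else if (iGraphicsDriver == GD_OPENGL) drvT = EDT_OPENGL;

          irrw::device = createDevice(drvT, dimension2d<u32>(iWindowWidth, iWindowHeight),
                                      iScreenDepth, bFullScreen);
          if (!irrw::device)
              return;                                 // device creation failed - report it instead of crashing later
          irrw::driver = irrw::device->getVideoDriver();
      }

      void irrLoadImage(irr::core::stringc szFileName, int iID, int iTextureFlag)
      {
          if (!irrw::driver)                          // irrInit() was never called, or it failed
              return;
          irrw::TextureCollector.push_back(irrw::driver->getTexture(szFileName));
          irrw::TextureConnector.push_back(iID);
      }

    If the crash disappears with these guards in place, the underlying problem is initialization order or a failed createDevice call, not the global containers themselves.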

  • Error #1063: Argument count mismatch on com.flashden::MenuItem(). Expected 1, got 0.

    - by Suzanne
    I'm creating a new site using the below script embedded in my swf. But I keep getting this error on all the pages: Error #1063: Argument count mismatch on com.flashden::MenuItem(). Expected 1, got 0. package com.flashden { import flash.display.MovieClip; import flash.text.*; import flash.events.MouseEvent; import flash.events.*; import flash.net.URLRequest; import flash.display.Loader; public class MenuItem extends MovieClip { private var scope; public var closedX; :Number public static const OPEN_MENU = "openMenu"; public function MenuItem(scope) { // set scope to talk back to -------------------------------// this.scope = scope; // disable all items not to be clickable -------------------// txt_label.mouseEnabled = false; menuItemShine.mouseEnabled = false; menuItemArrow.mouseEnabled = false; // make background clip the item to be clicked (button) ----// menuItemBG.buttonMode = true; // add click event listener to the header background -------// menuItemBG.addEventListener(MouseEvent.CLICK, clickHandler); } private function clickHandler (e:MouseEvent) { scope.openMenuItem(this); } public function loadContent (contentURL:String) { var loader:Loader = new Loader(); configureListeners(loader.contentLoaderInfo); var request:URLRequest = new URLRequest(contentURL); loader.load(request); // place x position of content at the bottom of the header so the top is not cut off ----// loader.x = 30; // we add the content at level 1, because the background clip is at level 0 ----// addChildAt(loader, 1); } private function configureListeners(dispatcher:IEventDispatcher):void { dispatcher.addEventListener(Event.COMPLETE, completeHandler); dispatcher.addEventListener(HTTPStatusEvent.HTTP_STATUS, httpStatusHandler); dispatcher.addEventListener(Event.INIT, initHandler); dispatcher.addEventListener(IOErrorEvent.IO_ERROR, ioErrorHandler); dispatcher.addEventListener(Event.OPEN, openHandler); dispatcher.addEventListener(ProgressEvent.PROGRESS, progressHandler); dispatcher.addEventListener(Event.UNLOAD, unLoadHandler); } private function completeHandler(event:Event):void { //trace("completeHandler: " + event); // remove loader animation ----------------// removeChild(getChildByName("mc_preloader")); } private function httpStatusHandler(event:HTTPStatusEvent):void { // trace("httpStatusHandler: " + event); } private function initHandler(event:Event):void { //trace("initHandler: " + event); } private function ioErrorHandler(event:IOErrorEvent):void { //trace("ioErrorHandler: " + event); } private function openHandler(event:Event):void { //trace("openHandler: " + event); } private function progressHandler(event:ProgressEvent):void { //trace("progressHandler: bytesLoaded=" + event.bytesLoaded + " bytesTotal=" + event.bytesTotal); } private function unLoadHandler(event:Event):void { //trace("unLoadHandler: " + event); } } }

  • Implementing a logging library in .NET with a database as the storage medium

    - by Dave
    I'm just starting to work on a logging library that everyone can use to keep track of any sort of system information while the user is running our application. The simplest example so far is to track Info, Warnings, and Errors. I want all plugins to be able to use this feature, but since each developer might have a different idea of what's important to report, I want to keep this as generic as possible. In the C++ world, I would normally use something like a stl::pair<string,string> to act as a key value pair structure, and have a stl::list of these to act as a "row" in the log. The log cache would then be a list<list<pair<string,string>>> (ugh!). This way, the developers can use a const string key like INFO, WARNING, ERROR to have a consistent naming for a column in the database (for SELECTing specific types of information). I'd like the database to be able to deal with any number of distinct column names. For example, John might have an INFO row with a column called USER, and Bill might have an INFO row with a column called FILENAME. I want the log viewer to be able to display all information, and if one report doesn't have a value for INFO / FILENAME, those fields should just appear blank. So one option is to use List<List<KeyValuePair<String,String>>, and the another is to have the log library consumer somehow "register" its schema, and then have the database do an ALTER TABLE to handle this situation. Yet another idea is to have a table that's just for key value pairs, with a foreign key that maps the key value pairs back to the original log entry. I obviously don't want logging to bog down the system, so I only lock the log cache to make a copy of the data (and remove the already-copied data), then a background thread will dump the information to the database. My specific questions regarding this are: Do you see any performance issues? In other words, have you ever tried something like this and found that certain things just don't work well in practice? Is there a more .NETish way to implement the key value pairs, other than List<List<KeyValuePair<String,String>>>? Even if there is a way to do #2 better, is the ALTER TABLE idea I proposed above a Bad Thing? Would you recommend multiple databases over a single one? I don't yet have an idea of how frequently the log would get written to, but we ideally would like to have lots of low level information. Perhaps there should be a DB with a fixed schema only for the low level stuff, and then another DB that's more flexible for reporting information back to users.

  • User-defined copy ctor, and copy ctors further down the chain - compiler bug or programmer's brain bug?

    - by J.Colmsee
    Hi. i have a little problem, and I am not sure if it's a compiler bug, or stupidity on my side. I have this struct : struct BulletFXData { int time_next_fx_counter; int next_fx_steps; Particle particles[2];//this is the interesting one ParticleManager::ParticleId particle_id[2]; }; The member "Particle particles[2]" has a self-made kind of smart-ptr in it (resource-counted texture-class). this smart-pointer has a default constructor, that initializes to the ptr to 0 (but that is not important) I also have another struct, containing the BulletFXData struct : struct BulletFX { BulletFXData data; BulletFXRenderFunPtr render_fun_ptr; BulletFXUpdateFunPtr update_fun_ptr; BulletFXExplosionFunPtr explode_fun_ptr; BulletFXLifetimeOverFunPtr lifetime_over_fun_ptr; BulletFX( BulletFXData data, BulletFXRenderFunPtr render_fun_ptr, BulletFXUpdateFunPtr update_fun_ptr, BulletFXExplosionFunPtr explode_fun_ptr, BulletFXLifetimeOverFunPtr lifetime_over_fun_ptr) :data(data), render_fun_ptr(render_fun_ptr), update_fun_ptr(update_fun_ptr), explode_fun_ptr(explode_fun_ptr), lifetime_over_fun_ptr(lifetime_over_fun_ptr) { } /* //USER DEFINED copy-ctor. if it's defined things go crazy BulletFX(const BulletFX& rhs) :data(data),//this line of code seems to do a plain memory-copy without calling the right ctors render_fun_ptr(render_fun_ptr), update_fun_ptr(update_fun_ptr), explode_fun_ptr(explode_fun_ptr), lifetime_over_fun_ptr(lifetime_over_fun_ptr) { } */ }; If i use the user-defined copy-ctor my smart-pointer class goes crazy, and it seems that calling the CopyCtor / assignment operator aren't called as they should. So - does this all make sense ? it seems as if my own copy-ctor of struct BulletFX should do exactly what the compiler-generated would, but it seems to forget to call the right constructors down the chain. compiler bug ? me being stupid ? Sorry about the big code, some small example could have illustrated too. but often you guys ask for the real code, so well - here it is :D EDIT : more info : typedef ParticleId unsigned int; Particle has no user defined copyctor, but has a member of type : Particle { .... Resource<Texture> tex_res; ... } Resource is a smart-pointer class, and has all ctor's defined (also asignment operator) and it seems that Resource is copied bitwise. EDIT : henrik solved it... data(data) is stupid of course ! it should of course be rhs.data !!! sorry for huge amount of code, with a very little bug in it !!! (Guess you shouldn't code at 1 in the morning :D )
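    For reference, here is the corrected copy constructor spelled out: every member has to be copied from rhs. A mem-initializer like data(data) initializes the member from its own, not-yet-constructed self, which is why the smart pointer's copy constructor never ran:

      BulletFX(const BulletFX& rhs)
          : data(rhs.data),                              // rhs.data, not data
            render_fun_ptr(rhs.render_fun_ptr),
            update_fun_ptr(rhs.update_fun_ptr),
            explode_fun_ptr(rhs.explode_fun_ptr),
            lifetime_over_fun_ptr(rhs.lifetime_over_fun_ptr)
      {
      }

    Since this does exactly what the compiler-generated copy constructor would do anyway, leaving it out (commented away) is the simplest correct fix.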

  • How to solve a runtime error when sqlite3_column_text returns NULL to NSString?

    - by Ajeet Kumar Yadav
    I am new in iphone application developer i am using sqlite3 database and in app delegate i am wright following code and run properly we also find value from database to in my aplication, but immediately the application is going to crass why this is occurs i am not understand. code is given bellow -(void)Data { databaseName = @"dataa.sqlite"; NSArray *documentPaths = NSSearchPathForDirectoriesInDomains(NSDocumentDirectory, NSUserDomainMask, YES); NSString *documentsDir = [documentPaths objectAtIndex:0]; databasePath =[documentsDir stringByAppendingPathComponent:databaseName]; [self checkAndCreateDatabase]; list1 = [[NSMutableArray alloc] init]; sqlite3 *database; if (sqlite3_open([databasePath UTF8String], &database) == SQLITE_OK) { if(detailStmt == nil) { const char *sql = "Select * from Dataa"; if(sqlite3_prepare_v2(database, sql, -1, &detailStmt, NULL) == SQLITE_OK) { //NSLog(@"Hiiiiiii"); //sqlite3_bind_text(detailStmt, 1, [t1 UTF8String], -1, SQLITE_TRANSIENT); //sqlite3_bind_text(detailStmt, 2, [t2 UTF8String], -2, SQLITE_TRANSIENT); //sqlite3_bind_int(detailStmt, 3, t3); while(sqlite3_step(detailStmt) == SQLITE_ROW) { //NSLog(@"Helllloooooo"); NSString *item= [NSString stringWithUTF8String:(char *)sqlite3_column_text(detailStmt, 0)]; //NSString *fame= [NSString stringWithUTF8String:(char *)sqlite3_column_text(detailStmt, 1)]; //NSString *cinemax = [NSString stringWithUTF8String:(char *)sqlite3_column_text(detailStmt, 2)]; //NSString *big= [NSString stringWithUTF8String:(char *)sqlite3_column_text(detailStmt, 3)]; //pvr1 = pvr; item1=item; //NSLog(@"%@",item1); data = [[NSMutableArray alloc] init]; list *animal=[[list alloc] initWithName:item1]; // Add the animal object to the animals Array [list1 addObject:animal]; //[list1 addObject:item]; } sqlite3_reset(detailStmt); } sqlite3_finalize(detailStmt); // sqlite3_clear_bindings(detailStmt); } } detailStmt = nil; sqlite3_close(database); } when we see console they show the following error giving bellow 2010-03-09 10:02:40.262 SanjeevKapoor[430:20b] *** Terminating app due to uncaught exception 'NSInvalidArgumentException', reason: '*** +[NSString stringWithUTF8String:]: NULL cString' when we see debugger they show error in following line NSString *item= [NSString stringWithUTF8String:(char *)sqlite3_column_text(detailStmt, 0)]; I am not able to solve that problum plz help me.
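    The exception message points at the fix: sqlite3_column_text() returns NULL for a NULL column, and +[NSString stringWithUTF8String:] throws when handed a NULL C string. A small guard, sketched here against the plain SQLite C API and assumed to be applied before every stringWithUTF8String: call in the loop, avoids the crash:

      #include <sqlite3.h>

      // Returns the column text, or an empty string when the column is NULL,
      // so the result is always safe to pass to stringWithUTF8String:.
      static const char *safe_column_text(sqlite3_stmt *stmt, int col)
      {
          const unsigned char *raw = sqlite3_column_text(stmt, col);
          return (raw != NULL) ? (const char *)raw : "";
      }

    Whether a NULL column should become an empty string, a placeholder, or skip the row is a design choice - but it must be checked before the NSString is built.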

  • Using pointers, references, handles to generic datatypes, as generic and flexible as possible

    - by Patrick
    In my application I have lots of different data types, e.g. Car, Bicycle, Person, ... (they're actually other data types, but this is just for the example). Since I also have quite some 'generic' code in my application, and the application was originally written in C, pointers to Car, Bicycle, Person, ... are often passed as void-pointers to these generic modules, together with an identification of the type, like this: Car myCar; ShowNiceDialog ((void *)&myCar, DATATYPE_CAR); The 'ShowNiceDialog' method now uses meta-information (functions that map DATATYPE_CAR to interfaces to get the actual data out of Car) to get information of the car, based on the given data type. That way, the generic logic only has to be written once, and not every time again for every new data type. Of course, in C++ you could make this much easier by using a common root class, like this class RootClass { public: string getName() const = 0; }; class Car : public RootClass { ... }; void ShowNiceDialog (RootClass *root); The problem is that in some cases, we don't want to store the data type in a class, but in a totally different format to save memory. In some cases we have hundreds of millions of instances that we need to manage in the application, and we don't want to make a full class for every instance. Suppose we have a data type with 2 characteristics: A quantity (double, 8 bytes) A boolean (1 byte) Although we only need 9 bytes to store this information, putting it in a class means that we need at least 16 bytes (because of the padding), and with the v-pointer we possibly even need 24 bytes. For hundreds of millions of instances, every byte counts (I have a 64-bit variant of the application and in some cases it needs 6 GB of memory). The void-pointer approach has the advantage that we can almost encode anything in a void-pointer and decide how to use it if we want information from it (use it as a real pointer, as an index, ...), but at the cost of type-safety. Templated solutions don't help since the generic logic forms quite a big part of the application, and we don't want to templatize all this. Additionally, the data model can be extended at run time, which also means that templates won't help. Are there better (and type-safer) ways to handle this than a void-pointer? Any references to frameworks, whitepapers, research material regarding this?
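    One possible middle ground - a sketch, not a drop-in for the existing framework - is to keep the compact, caller-defined storage but pass it around behind a small tagged handle, so the only cast in the system is checked against a runtime type id:

      #include <cassert>

      typedef unsigned short TypeId;      // e.g. DATATYPE_CAR, DATATYPE_PERSON, ...

      struct Handle {
          const void *data;               // whatever compact representation the caller chose
          TypeId      type;               // runtime tag, checked at the boundary
      };

      inline Handle make_handle(const void *data, TypeId type)
      {
          Handle h = { data, type };
          return h;
      }

      // The single place where the void pointer is cast back; a mismatched
      // pairing is caught here instead of silently reinterpreting memory.
      template <TypeId Id, class T>
      const T *handle_cast(const Handle &h)
      {
          assert(h.type == Id);
          return static_cast<const T *>(h.data);
      }

    A call site would look like ShowNiceDialog(make_handle(&myCar, DATATYPE_CAR)), and the generic code would retrieve the object with handle_cast<DATATYPE_CAR, Car>(h). This does not answer the 9-bytes-per-instance question - the instances stay in whatever packed layout is chosen - it only makes the hand-off type-checked without templatizing the generic modules themselves.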

  • Converting OpenGL co-ordinates to lower UIView (and UIImagePickerController)

    - by John Qualis
    Hi, I am new to OpenGL over iPhone. I am developing an iPhone app similar to a barcode reader but with an extra OpenGL layer. The bottommost layer is UIImagePickerController, then I use UIView on top and draw a rectangle at certain co-ordinates on the iphone screen. So far everything is OK. Then I am trying to draw an OpenGL 3-D model in that rectangle. I am able to load a 3-D model in the iPhone based on this code here - http://iphonedevelopment.blogspot.com/2008/12/start-of-wavefront-obj-file-loader.html I am not able to transform the co-ordinates of the rectangle into OpenGL co-ordinates. Appreciate any help. Do I need to use a matrix to translate the currentPosition of the 3-D model so it is drawn within myRect? The code is given below.. Appreciate any help/pointers in this regards. John -(void)setupView:(GLView*)view { const GLfloat zNear = 0.01, zFar = 1000.0, fieldOfView = 45.0; GLfloat size; glEnable(GL_DEPTH_TEST); glMatrixMode(GL_PROJECTION); size = zNear * tanf(DEGREES_TO_RADIANS(fieldOfView) / 2.0); CGRect rect = view.bounds; glFrustumf(-size, size, -size / (rect.size.width / rect.size.height), size / (rect.size.width / rect.size.height), zNear, zFar); glViewport(0, 0, rect.size.width, rect.size.height); glMatrixMode(GL_MODELVIEW); glLoadIdentity(); glClearColor(0.0f, 0.0f, 0.0f, 0.0f); NSString *path = [[NSBundle mainBundle] pathForResource:@"plane" ofType:@"obj"]; OpenGLWaveFrontObject *theObject = [[OpenGLWaveFrontObject alloc] initWithPath:path]; Vertex3D position; position.z = -8.0; position.y = 3.0; position.x = 2.0; theObject.currentPosition = position; self.plane = theObject; [theObject release]; } (void)drawView:(GLView*)view; { static GLfloat rotation = 0.0; glClear(GL_COLOR_BUFFER_BIT | GL_DEPTH_BUFFER_BIT); glLoadIdentity(); glColor4f(0.0, 0.5, 1.0, 1.0); // the coordinates of the rectangle are // myRect.x, myRect.y, myRect.width, myRect.height // Do I need to use a matrix to translate the currentPosition of the // 3-D model so it is drawn within myRect? //glOrthof(-160.0f, 160.0f, -240.0f, 240.0f, -1.0f, 1.0f); [plane drawSelf]; }
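    There is no built-in conversion between UIKit points and the world units of that frustum, but the mapping is just similar triangles. Here is a sketch, assuming the same frustum as setupView: (horizontal field of view fovDegrees, left/right = ±zNear·tan(fov/2), top/bottom scaled by height/width) and a UIKit point with the origin at the top-left:

      #include <math.h>

      typedef struct { float x, y, z; } Vec3;

      // Maps a screen point (pixels, y down) onto the plane z = -dist in eye space.
      static Vec3 screenToWorld(float px, float py,
                                float viewW, float viewH,
                                float fovDegrees, float dist)
      {
          const float halfW = tanf(fovDegrees * 3.14159265f / 360.0f) * dist;  // half-width of the frustum at 'dist'
          const float halfH = halfW * (viewH / viewW);                         // matches the glFrustumf aspect handling
          Vec3 v;
          v.x = (px / viewW * 2.0f - 1.0f) * halfW;   // [-halfW, +halfW]
          v.y = (1.0f - py / viewH * 2.0f) * halfH;   // flipped, since UIKit y grows downwards
          v.z = -dist;
          return v;
      }

    Setting theObject.currentPosition to screenToWorld() of the centre of myRect, with the same distance used for the model (8.0 in the code above), should place the model over the rectangle; sizing it to fit the rectangle uses the same halfW/halfH ratios.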

  • Memory leak: freeing g_strdup'd strings

    - by Mike
    I'm trying to free g_strdup but I'm not sure what I'm doing wrong. Using valgrind --tool=memcheck --leak-check=yes ./a.out I keep getting: ==4506== 40 bytes in 10 blocks are definitely lost in loss record 2 of 9 ==4506== at 0x4024C1C: malloc (vg_replace_malloc.c:195) ==4506== by 0x40782E3: g_malloc (in /lib/libglib-2.0.so.0.2200.3) ==4506== by 0x4090CA8: g_strdup (in /lib/libglib-2.0.so.0.2200.3) ==4506== by 0x8048722: add_inv (dup.c:26) ==4506== by 0x80487E6: main (dup.c:47) ==4506== 504 bytes in 1 blocks are possibly lost in loss record 4 of 9 ==4506== at 0x4023E2E: memalign (vg_replace_malloc.c:532) ==4506== by 0x4023E8B: posix_memalign (vg_replace_malloc.c:660) ==4506== by 0x408D61D: ??? (in /lib/libglib-2.0.so.0.2200.3) ==4506== by 0x408E5AC: g_slice_alloc (in /lib/libglib-2.0.so.0.2200.3) ==4506== by 0x4061628: g_hash_table_new_full (in /lib/libglib-2.0.so.0.2200.3) ==4506== by 0x40616C7: g_hash_table_new (in /lib/libglib-2.0.so.0.2200.3) ==4506== by 0x8048795: main (dup.c:42) I've tried different ways to freed but no success so far. I'll appreciate any help. Thanks BTW: It compiles and runs fine. #include <stdio.h> #include <string.h> #include <stdlib.h> #include <glib.h> #include <stdint.h> struct s { char *data; }; static GHashTable *hashtable1; static GHashTable *hashtable2; static void add_inv(GHashTable *table, const char *key) { gpointer old_value, old_key; gint value; if(g_hash_table_lookup_extended(table,key, &old_key, &old_value)){ value = GPOINTER_TO_INT(old_value); value = value + 2; /*g_free (old_key);*/ } else { value = 5; } g_hash_table_replace(table, g_strdup(key), GINT_TO_POINTER(value)); } static void print_hash_kv (gpointer key, gpointer value, gpointer user_data){ gchar *k = (gchar *) key; gchar *h = (gchar *) value; printf("%s: %d \n",k, h); } int main(int argc, char *argv[]){ struct s t; t.data = "bar"; int i,j; hashtable1 = g_hash_table_new(g_str_hash, g_str_equal); hashtable2 = g_hash_table_new(g_str_hash, g_str_equal); for(i=0;i<10;i++){ add_inv(hashtable1, t.data); add_inv(hashtable2, t.data); } /*free(t.data);*/ /*free(t.data);*/ g_hash_table_foreach (hashtable1, print_hash_kv, NULL); g_hash_table_foreach (hashtable2, print_hash_kv, NULL); g_hash_table_destroy(hashtable1); g_hash_table_destroy(hashtable2); return 0; }
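    The 40 bytes valgrind reports are the g_strdup()'d keys: g_hash_table_new() installs no destroy functions, so neither g_hash_table_replace() nor g_hash_table_destroy() ever frees them. The usual fix - sketched here for the two creation lines in main(), using the stock GLib API - is to register g_free as the key-destroy callback:

      /* Keys handed in via g_strdup() are now released automatically, both when
         g_hash_table_replace() swaps an old key out and when the table is destroyed. */
      hashtable1 = g_hash_table_new_full(g_str_hash, g_str_equal, g_free, NULL);
      hashtable2 = g_hash_table_new_full(g_str_hash, g_str_equal, g_free, NULL);

    With that in place the commented-out g_free(old_key) in add_inv stays unnecessary, and the 504-byte "possibly lost" record from GLib's slice allocator is normally harmless. (Unrelated to the leak: print_hash_kv stores the value pointer in a gchar* but prints it with %d; GPOINTER_TO_INT(value) is the matching conversion.)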

  • Windows splash screen using GDI+

    - by Luther
    The eventual aim of this is to have a splash screen in windows that uses transparency but that's not what I'm stuck on at the moment. In order to create a transparent window, I'm first trying to composite the splash screen and text on an off screen buffer using GDI+. At the moment I'm just trying to composite the buffer and display it in response to a 'WM_PAINT' message. This isn't working out at the moment; all I see is a black window. I imagine I've misunderstood something with regards to setting up render targets in GDI+ and then rendering them (I'm trying to render the screen using straight forward GDI blit) Anyway, here's the code so far: //my window initialisation code void MyWindow::create_hwnd(HINSTANCE instance, const SIZE &dim) { DWORD ex_style = WS_EX_LAYERED ; //eventually I'll be making use of this layerd flag m_hwnd = CreateWindowEx( ex_style, szFloatingWindowClass , L"", WS_POPUP , 0, 0, dim.cx, dim.cy, null, null, instance, null); SetWindowLongPtr(m_hwnd ,0, (__int3264)(LONG_PTR)this); m_display_dc = GetDC(NULL); //This was sanity check test code - just loading a standard HBITMAP and displaying it in WM_PAINT. It worked fine //HANDLE handle= LoadImage(NULL , L"c:\\test_image2.bmp", IMAGE_BITMAP, 0, 0, LR_LOADFROMFILE); m_gdip_offscreen_bm = new Gdiplus::Bitmap(dim.cx, dim.cy); m_gdi_dc = Gdiplus::Graphics::FromImage(m_gdip_offscreen_bm);//new Gdiplus::Graphics(m_splash_dc );//window_dc ;m_splash_dc //this draws the conents of my splash screen - this works if I create a GDI+ context for the window, rather than for an offscreen bitmap. //For all I know, it might actually be working but when I try to display the contents on screen, it shows a black image draw_all(); //this is just to show that drawing something simple on the offscreen bit map seems to have no effect Gdiplus::Pen pen(Gdiplus::Color(255, 0, 0, 255)); m_gdi_dc->DrawLine(&pen, 0,0,100,100); DWORD last_error = GetLastError(); //returns '0' at this stage } And here's the snipit that handles the WM_PAINT message: ---8<----------------------- //Paint message snippit case WM_PAINT: { BITMAP bm; PAINTSTRUCT ps; HDC hdc = BeginPaint(vg->m_hwnd, &ps); //get the HWNDs DC HDC hdcMem = vg->m_gdi_dc->GetHDC(); //get the HDC from our offscreen GDI+ object unsigned int width = vg->m_gdip_offscreen_bm->GetWidth(); //width and height seem fine at this point unsigned int height = vg->m_gdip_offscreen_bm->GetHeight(); BitBlt(hdc, 0, 0, width, height, hdcMem, 0, 0, SRCCOPY); //this blits a black rectangle DWORD last_error = GetLastError(); //this was '0' vg->m_gdi_dc->ReleaseHDC(hdcMem); EndPaint(vg->m_hwnd, &ps); //end paint return 1; } ---8<----------------------- My apologies for the long post. Does anybody know what I'm not quite understanding regarding how you write to an offscreen buffer using GDI+ (or GDI for that matter)and then display this on screen? Thank you for reading.
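    The likely misunderstanding is the BitBlt: the HDC returned by Gdiplus::Graphics::GetHDC() on a bitmap-backed Graphics is a temporary GDI+ context, not a memory DC with the offscreen bitmap selected into it, so blitting from it typically yields black. Letting GDI+ copy the bitmap itself is the reliable route - a sketch of the WM_PAINT handler, assuming m_gdip_offscreen_bm really does hold the composited splash:

      case WM_PAINT:
      {
          PAINTSTRUCT ps;
          HDC hdc = BeginPaint(vg->m_hwnd, &ps);

          // Draw the offscreen Gdiplus::Bitmap straight onto the window DC.
          Gdiplus::Graphics screen(hdc);
          screen.DrawImage(vg->m_gdip_offscreen_bm, 0, 0);

          EndPaint(vg->m_hwnd, &ps);
          return 0;
      }

    (For the eventual WS_EX_LAYERED splash the same bitmap would instead be handed to UpdateLayeredWindow via a 32-bit ARGB DIB section, but getting the plain paint path working first isolates the compositing problem.)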

  • Calling cdecl Functions That Have Different Numbers of Arguments

    - by KlaxSmashing
    I have functions that I wish to call based on some input. Each function has different number of arguments. In other words, if (strcmp(str, "funcA") == 0) funcA(a, b, c); else if (strcmp(str, "funcB") == 0) funcB(d); else if (strcmp(str, "funcC") == 0) funcC(f, g); This is a bit bulky and hard to maintain. Ideally, these are variadic functions (e.g., printf-style) and can use varargs. But they are not. So exploiting the cdecl calling convention, I am stuffing the stack via a struct full of parameters. I'm wondering if there's a better way to do it. Note that this is strictly for in-house (e.g., simple tools, unit tests, etc.) and will not be used for any production code that might be subjected to malicious attacks. Example: #include <stdio.h> typedef struct __params { unsigned char* a; unsigned char* b; unsigned char* c; } params; int funcA(int a, int b) { printf("a = %d, b = %d\n", a, b); return a; } int funcB(int a, int b, const char* c) { printf("a = %d, b = %d, c = %s\n", a, b, c); return b; } int funcC(int* a) { printf("a = %d\n", *a); *a *= 2; return 0; } typedef int (*f)(params); int main(int argc, char**argv) { int val; int tmp; params myParams; f myFuncA = (f)funcA; f myFuncB = (f)funcB; f myFuncC = (f)funcC; myParams.a = (unsigned char*)100; myParams.b = (unsigned char*)200; val = myFuncA(myParams); printf("val = %d\n", val); myParams.c = (unsigned char*)"This is a test"; val = myFuncB(myParams); printf("val = %d\n", val); tmp = 300; myParams.a = (unsigned char*)&tmp; val = myFuncC(myParams); printf("a = %d, val = %d\n", tmp, val); return 0; } Output: gcc -o func func.c ./func a = 100, b = 200 val = 100 a = 100, b = 200, c = This is a test val = 200 a = 300 a = 600, val = 0
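    Calling every function through a pointer type that does not match its real signature is undefined behaviour even under cdecl, which is what makes the struct trick fragile. A common alternative that keeps the table-driven style is one thin wrapper per function, all sharing a single correct signature - a sketch, assuming the funcA/funcB/funcC declarations from the question are visible:

      #include <cstring>

      struct params {
          int         a;
          int         b;
          const char *c;      // used by funcB
          int        *out;    // used by funcC (read and written through the pointer)
      };

      // Each wrapper unpacks only the fields its target needs, so every table
      // entry has the same - and correct - function type.
      static int callA(const params &p) { return funcA(p.a, p.b); }
      static int callB(const params &p) { return funcB(p.a, p.b, p.c); }
      static int callC(const params &p) { return funcC(p.out); }

      struct dispatch_entry {
          const char *name;
          int (*call)(const params &);
      };

      static const dispatch_entry table[] = {
          { "funcA", callA },
          { "funcB", callB },
          { "funcC", callC },
      };

      int dispatch(const char *name, const params &p)
      {
          for (unsigned i = 0; i < sizeof(table) / sizeof(table[0]); ++i)
              if (std::strcmp(name, table[i].name) == 0)
                  return table[i].call(p);
          return -1;          // unknown name
      }

    The if/else chain collapses to dispatch(str, myParams), adding a function is one wrapper plus one table row, and no calling-convention assumptions are left in the code.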

  • [C++] Upload an image to ImageShack

    - by cinek1lol
    Hi! I would like to send pictures via a program written in C + +. - OK WinExec("C:\\curl\\curl.exe -H Expect: -F \"fileupload=@C:\\curl\\ok.jpg\" -F \"xml=yes\" -# \"http://www.imageshack.us/index.php\" -o data.txt -A \"Mozilla/5.0 (Windows; U; Windows NT 5.1; en-US; rv:1.8.1.1) Gecko/20061204 Firefox/2.0.0.1\" -e \"http://www.imageshack.us\"", NULL); It works, but I would like to send the pictures from pre-loaded carrier to a variable char (you know what I mean? First off, I load the pictures into a variable and then send the variable), cause now I have to specify the path of the picture on a disk. I wanted to write this program in c++ by using the curl library, not through exe. extension. I have also found such a program (which has been modified by me a bit) #include <stdio.h> #include <string.h> #include <iostream> #include <curl/curl.h> #include <curl/types.h> #include <curl/easy.h> int main(int argc, char *argv[]) { CURL *curl; CURLcode res; struct curl_httppost *formpost=NULL; struct curl_httppost *lastptr=NULL; struct curl_slist *headerlist=NULL; static const char buf[] = "Expect:"; curl_global_init(CURL_GLOBAL_ALL); /* Fill in the file upload field */ curl_formadd(&formpost, &lastptr, CURLFORM_COPYNAME, "send", CURLFORM_FILE, "nowy.jpg", CURLFORM_END); curl_formadd(&formpost, &lastptr, CURLFORM_COPYNAME, "nowy.jpg", CURLFORM_COPYCONTENTS, "nowy.jpg", CURLFORM_END); curl_formadd(&formpost, &lastptr, CURLFORM_COPYNAME, "submit", CURLFORM_COPYCONTENTS, "send", CURLFORM_END); curl = curl_easy_init(); headerlist = curl_slist_append(headerlist, buf); if(curl) { curl_easy_setopt(curl, CURLOPT_URL, "http://www.imageshack.us/index.php"); if ( (argc == 2) && (!strcmp(argv[1], "xml=yes")) ) curl_easy_setopt(curl, CURLOPT_HTTPHEADER, headerlist); curl_easy_setopt(curl, CURLOPT_HTTPPOST, formpost); res = curl_easy_perform(curl); curl_easy_cleanup(curl); curl_formfree(formpost); curl_slist_free_all (headerlist); } system("pause"); return 0; }
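    libcurl's form API can take the attachment from memory instead of a path, so the image can be loaded into a char buffer first. The relevant options are CURLFORM_BUFFER / CURLFORM_BUFFERPTR / CURLFORM_BUFFERLENGTH - a sketch of the changed curl_formadd call, where imageData and imageSize are hypothetical variables holding the raw JPEG bytes already loaded by the program:

      /* Attach the picture from memory; "fileupload" matches the field name the
         working curl.exe command line uses, and "ok.jpg" is only the file name
         reported to the server. */
      curl_formadd(&formpost, &lastptr,
                   CURLFORM_COPYNAME,     "fileupload",
                   CURLFORM_BUFFER,       "ok.jpg",
                   CURLFORM_BUFFERPTR,    imageData,          /* const char * to the bytes   */
                   CURLFORM_BUFFERLENGTH, (long)imageSize,    /* number of bytes in the buffer */
                   CURLFORM_END);

    Note that CURLFORM_BUFFERPTR does not copy the data, so the buffer must stay alive until curl_easy_perform() and curl_formfree() have run.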

  • Bad file descriptor from close() on a socket (C++)

    - by user321246
    hi everybody! I'm running out of file descriptors when my program can't connect another host. The close() system call doesn't work, the number of open sockets increases. I can se it with cat /proc/sys/fs/file-nr Print from console: connect: No route to host close: Bad file descriptor connect: No route to host close: Bad file descriptor .. Code: #include <stdio.h> #include <stdlib.h> #include <sys/socket.h> #include <netinet/in.h> #include <netdb.h> #include <string.h> #include <iostream> using namespace std; #define PORT 1238 #define MESSAGE "Yow!!! Are we having fun yet?!?" #define SERVERHOST "192.168.9.101" void write_to_server (int filedes) { int nbytes; nbytes = write (filedes, MESSAGE, strlen (MESSAGE) + 1); if (nbytes < 0) { perror ("write"); } } void init_sockaddr (struct sockaddr_in *name, const char *hostname, uint16_t port) { struct hostent *hostinfo; name->sin_family = AF_INET; name->sin_port = htons (port); hostinfo = gethostbyname (hostname); if (hostinfo == NULL) { fprintf (stderr, "Unknown host %s.\n", hostname); } name->sin_addr = *(struct in_addr *) hostinfo->h_addr; } int main() { for (;;) { sleep(1); int sock; struct sockaddr_in servername; /* Create the socket. */ sock = socket (PF_INET, SOCK_STREAM, 0); if (sock < 0) { perror ("socket (client)"); } /* Connect to the server. */ init_sockaddr (&servername, SERVERHOST, PORT); if (0 > connect (sock, (struct sockaddr *) &servername, sizeof (servername))) { perror ("connect"); sock = -1; } /* Send data to the server. */ if (sock > -1) write_to_server (sock); if (close (sock) != 0) perror("close"); } return 0; }
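    The descriptors leak because of what happens after a failed connect(): sock is overwritten with -1, so the still-open descriptor returned by socket() is lost, and the subsequent close(-1) is what prints "Bad file descriptor". A sketch of the corrected tail of the loop - the descriptor is always closed, and only the write is skipped on failure:

      /* Connect to the server. */
      init_sockaddr(&servername, SERVERHOST, PORT);
      int connected = (connect(sock, (struct sockaddr *) &servername,
                               sizeof(servername)) == 0);
      if (!connected)
          perror("connect");

      /* Send data only over a live connection... */
      if (connected)
          write_to_server(sock);

      /* ...but always close the descriptor socket() handed out. */
      if (close(sock) != 0)
          perror("close");

    (The earlier if (sock < 0) branch should also skip the rest of the iteration, e.g. with continue, rather than fall through and call connect() on an invalid descriptor.)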

  • C++: Templates for static functions?

    - by Rosarch
    I have a static Utils class. I want certain methods to be templated, but not the entire class. How do I do this? This fails: #pragma once #include <string> using std::string; class Utils { private: template<class InputIterator, class Predicate> static set<char> findAll_if_rec(InputIterator begin, InputIterator end, Predicate pred, set<char> result); public: static void PrintLine(const string& line, int tabLevel = 0); static string getTabs(int tabLevel); template<class InputIterator, class Predicate> static set<char> Utils::findAll_if(InputIterator begin, InputIterator end, Predicate pred); }; Error: utils.h(10): error C2143: syntax error : missing ';' before '<' utils.h(10): error C4430: missing type specifier - int assumed. Note: C++ does not support default-int utils.h(10): error C4430: missing type specifier - int assumed. Note: C++ does not support default-int utils.h(10): error C2238: unexpected token(s) preceding ';' utils.h(10): error C2988: unrecognizable template declaration/definition utils.h(10): error C2059: syntax error : '<' What am I doing wrong? What is the correct syntax for this? Incidentally, I'd like to templatize the return value, too. So instead of: template<class InputIterator, class Predicate> static set<char> findAll_if_rec(InputIterator begin, InputIterator end, Predicate pred, set<char> result); I'd have: template<class return_t, class InputIterator, class Predicate> static return_t findAll_if_rec(InputIterator begin, InputIterator end, Predicate pred, set<char> result); How would I specify that: 1) return_t must be a set of some sort 2) InputIterator must be an iterator 3) InputIterator's type must work with return_t's type. Thanks.
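    The errors have nothing to do with the templates themselves: set is never declared (no #include <set> or using std::set), and the in-class declaration of findAll_if must not carry the Utils:: qualifier. A corrected sketch of the header:

      #pragma once
      #include <set>
      #include <string>
      using std::set;       // 'set<char>' was the unrecognized token behind C2143/C4430
      using std::string;

      class Utils {
      private:
          template<class InputIterator, class Predicate>
          static set<char> findAll_if_rec(InputIterator begin, InputIterator end,
                                          Predicate pred, set<char> result);
      public:
          static void PrintLine(const string& line, int tabLevel = 0);
          static string getTabs(int tabLevel);

          // no 'Utils::' inside the class definition itself
          template<class InputIterator, class Predicate>
          static set<char> findAll_if(InputIterator begin, InputIterator end,
                                      Predicate pred);
      };

    For the templated return type, template<class ReturnT, class InputIterator, class Predicate> static ReturnT findAll_if(...) works, with the caller naming ReturnT explicitly, e.g. findAll_if<std::set<char> >(begin, end, pred). The three constraints listed cannot really be enforced in C++03 beyond documentation or enable_if-style tricks; mismatched types simply fail to compile at the point of use.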

  • C++: cdb_read returns extra characters on some reads

    - by Moe Be
    Hi All, I am using the following function to loop through a couple of open CDB hash tables. Sometimes the value for a given key is returned along with an additional character (specifically a CTRL-P (a DLE character/0x16/0o020)). I have checked the cdb key/value pairs with a couple of different utilities and none of them show any additional characters appended to the values. I get the character if I use cdb_read() or cdb_getdata() (the commented out code below). If I had to guess I would say I am doing something wrong with the buffer I create to get the result from the cdb functions. Any advice or assistance is greatly appreciated. char* HashReducer::getValueFromDb(const string &id, vector <struct cdb *> &myHashFiles) { unsigned char hex_value[BUFSIZ]; size_t hex_len; //construct a real hex (not ascii-hex) value to use for database lookups atoh(id,hex_value,&hex_len); char *value = NULL; vector <struct cdb *>::iterator my_iter = myHashFiles.begin(); vector <struct cdb *>::iterator my_end = myHashFiles.end(); try { //while there are more databases to search and we have not found a match for(; my_iter != my_end && !value ; my_iter++) { //cerr << "\n looking for this MD5:" << id << " hex(" << hex_value << ") \n"; if (cdb_find(*my_iter, hex_value, hex_len)){ //cerr << "\n\nI found the key " << id << " and it is " << cdb_datalen(*my_iter) << " long\n\n"; value = (char *)malloc(cdb_datalen(*my_iter)); cdb_read(*my_iter,value,cdb_datalen(*my_iter),cdb_datapos(*my_iter)); //value = (char *)cdb_getdata(*my_iter); //cerr << "\n\nThe value is:" << value << " len is:" << strlen(value)<< "\n\n"; }; } } catch (...){} return value; }
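    The stray character is almost certainly heap garbage past the end of the value: cdb stores raw bytes, not C strings, and the buffer is allocated with exactly cdb_datalen() bytes, so anything that later treats it as NUL-terminated (the strlen in the log line, the callers) reads one byte too far. A sketch of the fixed read:

      if (cdb_find(*my_iter, hex_value, hex_len)) {
          unsigned len = cdb_datalen(*my_iter);
          value = (char *)malloc(len + 1);                      // one extra byte for the terminator
          cdb_read(*my_iter, value, len, cdb_datapos(*my_iter));
          value[len] = '\0';                                    // cdb does not store the NUL
      }

    The same applies to cdb_getdata(): it returns a pointer into the mapped data with no terminator, so the length from cdb_datalen() has to travel with it (for example by building a std::string(ptr, len)).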

  • How to obtain the first cluster of the directory's data in FAT using C# (or at least C++) and Win32A

    - by DarkWalker
    So I have a FAT drive, lets say H: and a directory 'work' (full path 'H:\work'). I need to get the NUMBER of the first cluster of that directory. The number of the first cluster is 2-bytes value, that is stored in the 26th and 27th bytes of the folder enty (wich is 32 bytes). Lets say I am doing it with file, NOT a directory. I can use code like this: static public string GetDirectoryPtr(string dir) { IntPtr ptr = CreateFile(@"H:\Work\dover.docx", GENERIC_READ, FILE_SHARE_READ | FILE_SHARE_WRITE, IntPtr.Zero, OPEN_EXISTING, 0,//FILE_FLAG_BACKUP_SEMANTICS, IntPtr.Zero); try { const uint bytesToRead = 2; byte[] readbuffer = new byte[bytesToRead]; if (ptr.ToInt32() == -1) return String.Format("Error: cannot open direcotory {0}", dir); if (SetFilePointer(ptr, 26, 0, 0) == -1) return String.Format("Error: unable to set file pointer on file {0}", ptr); uint read = 0; // real count of read bytes if (!ReadFile(ptr, readbuffer, bytesToRead, out read, 0)) return String.Format("cant read from file {0}. Error #{1}", ptr, Marshal.GetLastWin32Error()); int result = readbuffer[0] + 16 * 16 * readbuffer[1]; return result.ToString();//ASCIIEncoding.ASCII.GetString(readbuffer); } finally { CloseHandle(ptr); } } And it will return some number, like 19 (quite real to me, this is the only file on the disk). But I DONT need a file, I need a folder. So I am puttin FILE_FLAG_BACKUP_SEMANTICS param for CreateFile call... and dont know what to do next =) msdn is very clear on this issue http://msdn.microsoft.com/en-us/library/aa365258(v=VS.85).aspx It sounds to me like: "There is no way you can get a number of the folder's first cluster". The most desperate thing is that my tutor said smth like "You are going to obtain this or you wont pass this course". The true reason why he is so sure this is possible is because for 10 years (or may be more) he recieved the folder's first cluster number as a HASH of the folder's addres (and I was stupid enough to point this to him, so now I cant do it the same way) PS: This is the most spupid task I have ever had!!! This value is not really used anythere in program, it is only fcking pointless integer.

  • How can SendMailMAPI be adjusted to support multiple file attachments?

    - by Tom
    I have this code that sends just one attachment by time, how can I adjust this code to send 1-2 attachments? function SendMailMAPI(const Subject, Body, FileName, SenderName, SenderEMail, RecepientName, RecepientEMail: String) : Integer; var message: TMapiMessage; lpSender, lpRecepient: TMapiRecipDesc; FileAttach: TMapiFileDesc; SM: TFNMapiSendMail; MAPIModule: HModule; begin FillChar(message, SizeOf(message), 0); with message do begin if (Subject<>'') then begin lpszSubject := PChar(Subject) end; if (Body<>'') then begin lpszNoteText := PChar(Body) end; if (SenderEMail<>'') then begin lpSender.ulRecipClass := MAPI_ORIG; if (SenderName='') then begin lpSender.lpszName := PChar(SenderEMail) end else begin lpSender.lpszName := PChar(SenderName) end; lpSender.lpszAddress := PChar('SMTP:'+SenderEMail); lpSender.ulReserved := 0; lpSender.ulEIDSize := 0; lpSender.lpEntryID := nil; lpOriginator := @lpSender; end; if (RecepientEMail<>'') then begin lpRecepient.ulRecipClass := MAPI_TO; if (RecepientName='') then begin lpRecepient.lpszName := PChar(RecepientEMail) end else begin lpRecepient.lpszName := PChar(RecepientName) end; lpRecepient.lpszAddress := PChar('SMTP:'+RecepientEMail); lpRecepient.ulReserved := 0; lpRecepient.ulEIDSize := 0; lpRecepient.lpEntryID := nil; nRecipCount := 1; lpRecips := @lpRecepient; end else begin lpRecips := nil end; if (FileName='') then begin nFileCount := 0; lpFiles := nil; end else begin FillChar(FileAttach, SizeOf(FileAttach), 0); FileAttach.nPosition := Cardinal($FFFFFFFF); FileAttach.lpszPathName := PChar(FileName); nFileCount := 1; lpFiles := @FileAttach; end; end; MAPIModule := LoadLibrary(PChar(MAPIDLL)); if MAPIModule=0 then begin Result := -1 end else begin try @SM := GetProcAddress(MAPIModule, 'MAPISendMail'); if @SM<>nil then begin Result := SM(0, Application.Handle, message, MAPI_DIALOG or MAPI_LOGON_UI, 0); end else begin Result := 1 end; finally FreeLibrary(MAPIModule); end; end; if Result<>0 then begin MessageDlg('Error sending mail ('+IntToStr(Result)+').', mtError, [mbOk], 0) end; end;

  • How to download a file into a string with a progress callback?

    - by Kaminari
    I would like to use the WebClient (or there is another better option?) but there is a problem. I understand that opening up the stream takes some time and this can not be avoided. However, reading it takes a strangely much more amount of time compared to read it entirely immediately. Is there a best way to do this? I mean two ways, to string and to file. Progress is my own delegate and it's working good. FIFTH UPDATE: Finally, I managed to do it. In the meantime I checked out some solutions what made me realize that the problem lies elsewhere. I've tested custom WebResponse and WebRequest objects, library libCURL.NET and even Sockets. The difference in time was gzip compression. Compressed stream lenght was simply half the normal stream lenght and thus download time was less than 3 seconds with the browser. I put some code if someone will want to know how i solved this: (some headers are not needed) public static string DownloadString(string URL) { WebClient client = new WebClient(); client.Headers["User-Agent"] = "Mozilla/5.0 (Windows; U; Windows NT 6.1; en-US) AppleWebKit/532.5 (KHTML, like Gecko) Chrome/4.1.249.1045 Safari/532.5"; client.Headers["Accept"] = "application/xml,application/xhtml+xml,text/html;q=0.9,text/plain;q=0.8,image/png,*/*;q=0.5"; client.Headers["Accept-Encoding"] = "gzip,deflate,sdch"; client.Headers["Accept-Charset"] = "ISO-8859-2,utf-8;q=0.7,*;q=0.3"; Stream inputStream = client.OpenRead(new Uri(URL)); MemoryStream memoryStream = new MemoryStream(); const int size = 32 * 4096; byte[] buffer = new byte[size]; if (client.ResponseHeaders["Content-Encoding"] == "gzip") { inputStream = new GZipStream(inputStream, CompressionMode.Decompress); } int count = 0; do { count = inputStream.Read(buffer, 0, size); if (count > 0) { memoryStream.Write(buffer, 0, count); } } while (count > 0); string result = Encoding.Default.GetString(memoryStream.ToArray()); memoryStream.Close(); inputStream.Close(); return result; } I think that asyncro functions will be almost the same. But i will simply use another thread to fire this function. I dont need percise progress indication.

  • C++: I need help with this program - every time I try to run it, I get a problem

    - by FOXMULDERIZE
    1-the program must read numeric data from a file. 2-only one line per number 3-half way between those numbers is a negative number. 4-the program must sum those who are above the negative number in a acumulator an those below the negative number in another acumulator. 5-the black screen shall print both results and determined who is grater or equal. include include using namespace std; void showvalues(int,int,int[]); void showvalues2(int,int); void sumtotal(int,int); int main() { int total1=0; int total2=0; const int SIZE_A= 9; int arreglo[SIZE_A]; int suma,total,a,b,c,d,e,f; ifstream archivo_de_entrada; archivo_de_entrada.open("numeros.txt"); //lee/// for(int count =0 ;count < SIZE_A;count++) archivo_de_entrada>>arreglo[count] ; archivo_de_entrada.close(); showvalues(0,3,arreglo); showvalues2(5,8); sumtotal(total1,total2); system("pause"); return 0; } void showvalues(int a,int b,int arreglos) { int total1=0; //muestra//////////////////////// cout<< "los num son "; for(int count = a ;count <= b;count++) total1 += arreglos[count]; cout < } void showvalues2(int c,int d) { ////////////////////////////// int total2=0; cout<< "los num 2 son "; for(count =5 ;count <=8;count++) total2 = total2 + arreglo[count]; cout < void sumtotal(int e,int f) { ///////////////////////////////// cout<<"la suma de t1 y t2 es "; total= total1 + total2; cout< }
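    The pasted code has lost most of its operators, so rather than patch it line by line, here is a minimal, self-contained sketch of the task as described: one number per line, a single negative number halfway down acting as the separator, and the negative marker itself not added to either sum.

      #include <fstream>
      #include <iostream>

      int main()
      {
          std::ifstream in("numeros.txt");
          if (!in) {
              std::cout << "cannot open numeros.txt\n";
              return 1;
          }

          double value, firstSum = 0, secondSum = 0;
          bool pastNegative = false;

          while (in >> value) {
              if (!pastNegative && value < 0) {
                  pastNegative = true;          // the negative separator is not accumulated
                  continue;
              }
              if (pastNegative)
                  secondSum += value;
              else
                  firstSum += value;
          }

          std::cout << "sum above the negative number: " << firstSum  << '\n';
          std::cout << "sum below the negative number: " << secondSum << '\n';
          if (firstSum > secondSum)
              std::cout << "the first sum is greater\n";
          else if (secondSum > firstSum)
              std::cout << "the second sum is greater\n";
          else
              std::cout << "both sums are equal\n";
          return 0;
      }

    If the assignment requires the array-and-functions structure of the original, the same loop can be split into a read step, two summing functions, and a comparison step - but the totals then have to be returned (or passed by reference) instead of recomputed from globals.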

  • Can't show image in GridView using templates

    - by n10i
    i am trying to load images from the northwind database (categories table, images that are stored in the database) into grid view control. But it dosenot seems to work. Plz! have a look... Default.aspx <asp:GridView ID="GridView1" runat="server" AllowSorting="True" AutoGenerateColumns="False" DataKeyNames="CategoryID" DataSourceID="NorthWindSQLExpressConnectionString" EnableModelValidation="True"> <Columns> <asp:BoundField DataField="CategoryID" HeaderText="CategoryID" InsertVisible="False" ReadOnly="True" SortExpression="CategoryID" /> <asp:BoundField DataField="CategoryName" HeaderText="CategoryName" SortExpression="CategoryName" /> <asp:BoundField DataField="Description" HeaderText="Description" SortExpression="Description" /> <asp:TemplateField HeaderText="Picture" SortExpression="Picture"> <EditItemTemplate> <asp:TextBox ID="TextBox1" runat="server" Text='<%# Bind("Picture") %>'></asp:TextBox> </EditItemTemplate> <ItemTemplate> <asp:Image ID="Image1" runat="server" ImageUrl='<%# RetriveImage(Eval("CategoryID")) %>' /> </ItemTemplate> </asp:TemplateField> </Columns> </asp:GridView> <asp:SqlDataSource ID="NorthWindSQLExpressConnectionString" runat="server" ConnectionString="<%$ ConnectionStrings:NorthwindSQLExpressConnectionString %>" SelectCommand="SELECT [CategoryID], [CategoryName], [Description], [Picture] FROM [Categories]"> </asp:SqlDataSource> </div> </form> [Partial] Default.aspx.cs protected string RetriveImage(object eval) { return ("ImageHandler.ashx?CategoryID=" + eval.ToString()); } [Partial] ImageHandler.ashx public void ProcessRequest(HttpContext context) { if (context.Request.QueryString == null) { } else { try { using (var sqlCon = new SqlConnection(conString)) { const string cmdString = "Select picture from Categories where CategoryID=@CategoryID"; using (var sqlCmd = new SqlCommand(cmdString, sqlCon)) { sqlCmd.Parameters.AddWithValue("@CategoryID", context.Request.QueryString["CategoryID"]); string trmp = sqlCmd.ToString(); sqlCon.Open(); using (var sqlDr = sqlCmd.ExecuteReader()) { sqlDr.Read(); context.Response.BinaryWrite((byte[])sqlDr["Picture"]); } } } } catch (Exception ex) { context.Response.Write(ex.Message); } } }

  • Are Objective-C initializers allowed to share the same name?

    - by NattKatt
    I'm running into an odd issue in Objective-C when I have two classes using initializers of the same name, but differently-typed arguments. For example, let's say I create classes A and B: A.h: #import <Cocoa/Cocoa.h> @interface A : NSObject { } - (id)initWithNum:(float)theNum; @end A.m: #import "A.h" @implementation A - (id)initWithNum:(float)theNum { self = [super init]; if (self != nil) { NSLog(@"A: %f", theNum); } return self; } @end B.h: #import <Cocoa/Cocoa.h> @interface B : NSObject { } - (id)initWithNum:(int)theNum; @end B.m: #import "B.h" @implementation B - (id)initWithNum:(int)theNum { self = [super init]; if (self != nil) { NSLog(@"B: %d", theNum); } return self; } @end main.m: #import <Foundation/Foundation.h> #import "A.h" #import "B.h" int main (int argc, const char * argv[]) { NSAutoreleasePool * pool = [[NSAutoreleasePool alloc] init]; A *a = [[A alloc] initWithNum:20.0f]; B *b = [[B alloc] initWithNum:10]; [a release]; [b release]; [pool drain]; return 0; } When I run this, I get the following output: 2010-04-26 20:44:06.820 FnTest[14617:a0f] A: 20.000000 2010-04-26 20:44:06.823 FnTest[14617:a0f] B: 1 If I reverse the order of the imports so it imports B.h first, I get: 2010-04-26 20:45:03.034 FnTest[14635:a0f] A: 0.000000 2010-04-26 20:45:03.038 FnTest[14635:a0f] B: 10 For some reason, it seems like it's using the data type defined in whichever @interface gets included first for both classes. I did some stepping through the debugger and found that the isa pointer for both a and b objects ends up the same. I also found out that if I no longer make the alloc and init calls inline, both initializations seem to work properly, e.g.: A *a = [A alloc]; [a initWithNum:20.0f]; If I use this convention when I create both a and b, I get the right output and the isa pointers seem to be different for each object. Am I doing something wrong? I would have thought multiple classes could have the same initializer names, but perhaps that is not the case.

  • Use format strings that contain %1, %2 etc. instead of %d, %s etc. - Linux, C++

    - by rursw1
    Hello, As a follow-up of this question (Message compiler replacement in Linux gcc), I have the following problem: When using MC.exe on Windows for compiling and generating messages, within the C++ code I call FormatMessage, which retrieves the message and uses the va_list *Arguments parameter to send the varied message arguments. For example: messages.mc file: MessageId=1 Severity=Error SymbolicName=MULTIPLE_MESSAGE_OCCURED Language=English message %1 occured %2 times. . C++ code: void GetMsg(unsigned int errCode, wstring& message,unsigned int paramNumber, ...) { HLOCAL msg; DWORD ret; LANGID lang = GetUserDefaultLangID(); try { va_list argList; va_start( argList, paramNumber ); const TCHAR* dll = L"MyDll.dll"; _hModule = GetModuleHandle(dll); ret =::FormatMessage( FORMAT_MESSAGE_ALLOCATE_BUFFER | FORMAT_MESSAGE_FROM_HMODULE|FORMAT_MESSAGE_IGNORE_INSERTS, _hModule, errCode, lang, (LPTSTR) &msg, 0, &argList ); if ( 0 != ret ) { unsigned int count = 0 ; message = msg; if (paramNumber>0) { wstring::const_iterator iter; for (iter = message.begin();iter!=message.end();iter++) { wchar_t xx = *iter; if (xx ==L'%') count++; } } if ((count == paramNumber) && (count >0)) { ::LocalFree( msg ); ret =::FormatMessage( FORMAT_MESSAGE_ALLOCATE_BUFFER | FORMAT_MESSAGE_FROM_HMODULE, _hModule, errCode, GetUserDefaultLangID(), (LPTSTR) &msg, 0, &argList ); } else if (count != paramNumber) { wstringstream tmp; wstring messNumber; tmp << (errCode & 0xFFFF); tmp >> messNumber; message = message +L"("+ messNumber + L"). Bad Format String. "; } } ::LocalFree( msg ); } catch (...) { message << L"last error: " << GetLastError(); } va_end( argList ); } Caller code: wstring message; GetMsg(MULTIPLE_MESSAGE_OCCURED, message,2, "Error message", 5); Now, I wrote a simple script to generate a .msg file from the .mc file, and then I use gencat to generate a catalog from it. But is there a way to use the formatted strings as they contain %1, %2, etc. and NOT the general (%d, %s...) format? Please note, that the solution has to be generic enough for each possible message with each posible types\ arguments order... Is it possible at all? Thank you.
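    Since gencat/catgets only hands back the raw catalog string, the %1/%2 substitution has to be done by hand - which is straightforward, because unlike printf formats the inserts are just positional markers. Here is a sketch of a formatter that keeps the Windows .mc syntax, taking the already-converted arguments as wide strings:

      #include <string>
      #include <vector>

      // Replaces %1..%9 in 'fmt' with the matching (1-based) entry of 'args';
      // markers with no matching argument are left untouched.
      std::wstring formatMessage(const std::wstring &fmt,
                                 const std::vector<std::wstring> &args)
      {
          std::wstring out;
          for (std::wstring::size_type i = 0; i < fmt.size(); ++i) {
              if (fmt[i] == L'%' && i + 1 < fmt.size() &&
                  fmt[i + 1] >= L'1' && fmt[i + 1] <= L'9') {
                  std::wstring::size_type idx = fmt[i + 1] - L'1';
                  if (idx < args.size()) {
                      out += args[idx];
                      ++i;                      // skip the digit just consumed
                      continue;
                  }
              }
              out += fmt[i];
          }
          return out;
      }

    The variadic GetMsg wrapper can keep its signature: convert each vararg to a std::wstring (numbers via a wstringstream), collect them into the vector, and call formatMessage on the catgets result. Counting '%' characters to validate the argument count still works, but counting the %1..%9 markers specifically is less fragile.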

  • boost::spirit::karma using the alternatives operator (|) with conditions

    - by Ingemar
    I'm trying to generate a string from my own class called Value using boost::spirit::karma, but i got stuck with this. I've tried to extract my problem into a simple example. I want to generate a String with karma from instances of the following class: class Value { public: enum ValueType { BoolType, NumericType }; Value(bool b) : type_(BoolType), value_(b) {} Value(const double d) : type_(NumericType), value_(d) {}; ValueType type() { return type_; } operator bool() { return boost::get<bool>(value_); } operator double() { return boost::get<double>(value_); } private: ValueType type_; boost::variant<bool, double> value_; }; Here you can see what I'm tying to do: int main() { using karma::bool_; using karma::double_; using karma::rule; using karma::eps; std::string generated; std::back_insert_iterator<std::string> sink(generated); rule<std::back_insert_iterator<std::string>, Value()> value_rule = bool_ | double_; Value bool_value = Value(true); Value double_value = Value(5.0); karma::generate(sink, value_rule, bool_value); std::cout << generated << "\n"; generated.clear(); karma::generate(sink, value_rule, double_value); std::cout << generated << "\n"; return 0; } The first call to karma::generate() works fine because the value is a bool and the first generator in my rule also "consumes" a bool. But the second karma::generate() fails with boost::bad_get because karma tries to eat a bool and calls therefore Value::operator bool(). My next thought was to modify my generator rule and use the eps() generator together with a condition but here i got stuck: value_rule = (eps( ... ) << bool_) | (eps( ... ) << double_); I'm unable to fill the brackets of the eps generator with sth. like this (of course not working): eps(value.type() == BoolType) I've tried to get into boost::phoenix, but my brain seems not to be ready for things like this. Please help me! here is my full example (compiling but not working): main.cpp

  • SQLite/iPhone read copyright symbol

    - by Marco A
    Hi All, I am having problems reading the copyright symbol from a sqlite db that I have for my App that I am developing. I import the information manually, ie, from an excel sheet. I have tried two ways of doing it and failed with both: 1) Tried replacing the copyright symbol with "\u00ae" (unicode combination) within excel and then importing the modified file. - Result: I get the combination of \u00ae as a part of the string, it doesnt detect the unicode combination. 2) Tried leaving as it is. Importing the excel with the copyright symbol. - Result: I get a symbol that is different from the copyright, its something like an AE put together.looks like this: Æ Heres my code how I read from DB: -(void) readCategoriesFromDatabase:(NSString *) rest_input { // Init the products Array categories = [[NSMutableArray alloc] init]; // Open the database from the users filessytem rest_input = [rest_input stringByAppendingString:@"'"]; NSString *newString; newString = [@"select distinct category from food where restaurant='" stringByAppendingString:rest_input]; const char *cat_sqlStatement = [newString UTF8String]; sqlite3_stmt *cat_compiledStatement; if(sqlite3_prepare_v2(database, cat_sqlStatement, -1, &cat_compiledStatement, NULL) == SQLITE_OK) { // Loop through the results and add them to the feeds array while(sqlite3_step(cat_compiledStatement) == SQLITE_ROW) { NSString *catName = [NSString stringWithUTF8String:(char *)sqlite3_column_text(cat_compiledStatement,0)]; // Create a new product object with the data from the database Product *category = [[Product alloc] initWithName:catName]; // Add the product object to the respective Array [categories addObject:category]; [category release]; } sqlite3_finalize(cat_compiledStatement); } NSLog(@"Finished Accessing Database to gather Categories...."); } I open the DB with this function: -(void) checkAndCreateDatabase{ NSLog(@"Checking/Creating Database...."); NSFileManager *fileManager = [NSFileManager defaultManager]; success = [fileManager fileExistsAtPath:databasePath]; [fileManager removeFileAtPath:databasePath handler:nil]; NSString *databasePathFromApp = [[[NSBundle mainBundle] resourcePath] stringByAppendingPathComponent:databaseName]; [fileManager copyItemAtPath:databasePathFromApp toPath:databasePath error:nil]; [fileManager release]; if (sqlite3_open([databasePath UTF8String], &database) != SQLITE_OK) { sqlite3_close(database); database = nil; } NSLog(@"Finished Checking/Creating Database...."); } Thanks to anything that can help me out.

  • Remote XP -> Win98 WMI Connection

    - by Logan Young
    I've asked this on Technet, but because Win98 is no longer supported, I can't get any decent information, I was hoping there might be some "old school" developers here who might be able to help me. There is an application that we use a lot at work. This application should run 8am-5pm with as little interruption as possible. Most of the computers where this application runs are using Win98, and we have no way to upgrade them because we can't buy new hardware at the moment. My computer is running WinXP, so I thought of a way to make sure that this application runs all the time: The idea I had was to develop a Windows Service that executes a VBScript file that contains a WMI query to get a list of processes from each computer. Each list is then examined, and, depending on whether or not the target application is running, it will either do nothing, or it will execute another VBScript file that contains a WMI query that will be used to start the target application remotely. I later found a way to do this all with 1 VBScript file (see code below) My problem is in the remote connection to the target computers. I've installed WMI Core 1.5 on them, but every time I try the remote connection, I get the following: The remote server is unavailable or does not exist: 'GetObject' VBScript runtime error 800A01CE I've done some research, and all I've found is info about DCOM Config and Windows Firewall, but Win98 doesn't have either of these. ' #### Variables and constants #### Const HIDDEN_WINDOW = 12 Dim T ' #### End Variables and constants #### Main() Sub Main() ' #### Get Process information from WMI Computer = "." Set WMI = GetObject("winmgmts:" & _ "{ImpersonationLevel=Impersonate}!\\" & Computer & "\root\cimv2") Set Settings = WMI.ExecQuery("SELECT * FROM Win32_Process") For Each Process In Settings ' #### If the application is found to be running, set a value to indicate this If Process.Name = "NOTEPAD.EXE" Then T = True End If Next ' #### T will only have a value if the application is not running. We therefore ' #### evaluate it to determine if it has a value or not. If not, start the application If Not T Then 'MsgBox("Application not found.") Set Startup = WMI.Get("Win32_ProcessStartup") Set Config = Startup.SpawnInstance_ Config.ShowWindow = HIDDEN_WINDOW Set Process = GetObject("winmgmts:root\cimv2:Win32_Process") errReturn = Process.Create(_ "C:\Windows\notepad.exe", null, Config, intProcessID) End If End Sub This uses WMI to get the list of processes from the local computer and, if the target application is running, it'll do nothing, otherwise it'll forcefully start the target application. The problem is that this works only if I specify the local comuter, if I target another computer, I get the error mentioned above. Does anyone have any ideas? Thanks in advance for the help!

  • Having trouble wrapping functions in the Linux kernel

    - by Corey Henderson
    I've written a LKM that implements Trusted Path Execution (TPE) into your kernel: https://github.com/cormander/tpe-lkm I run into an occasional kernel OOPS (describe at the end of this question) when I define WRAP_SYSCALLS to 1, and am at my wit's end trying to track it down. A little background: Since the LSM framework doesn't export its symbols, I had to get creative with how I insert the TPE checking into the running kernel. I wrote a find_symbol_address() function that gives me the address of any function I need, and it works very well. I can call functions like this: int (*my_printk)(const char *fmt, ...); my_printk = find_symbol_address("printk"); (*my_printk)("Hello, world!\n"); And it works fine. I use this method to locate the security_file_mmap, security_file_mprotect, and security_bprm_check functions. I then overwrite those functions with an asm jump to my function to do the TPE check. The problem is, the currently loaded LSM will no longer execute the code for it's hook to that function, because it's been totally hijacked. Here is an example of what I do: int tpe_security_bprm_check(struct linux_binprm *bprm) { int ret = 0; if (bprm->file) { ret = tpe_allow_file(bprm->file); if (IS_ERR(ret)) goto out; } #if WRAP_SYSCALLS stop_my_code(&cs_security_bprm_check); ret = cs_security_bprm_check.ptr(bprm); start_my_code(&cs_security_bprm_check); #endif out: return ret; } Notice the section between the #if WRAP_SYSCALLS section (it's defined as 0 by default). If set to 1, the LSM's hook is called because I write the original code back over the asm jump and call that function, but I run into an occasional kernel OOPS with an "invalid opcode": invalid opcode: 0000 [#1] SMP RIP: 0010:[<ffffffff8117b006>] [<ffffffff8117b006>] security_bprm_check+0x6/0x310 I don't know what the issue is. I've tried several different types of locking methods (see the inside of start/stop_my_code for details) to no avail. To trigger the kernel OOPS, write a simple bash while loop that endlessly starts a backgrounded "ls" command. After a minute or so, it'll happen. I'm testing this on a RHEL6 kernel, also works on Ubuntu 10.04 LTS (2.6.32 x86_64). While this method has been the most successful so far, I have tried another method of simply copying the kernel function to a pointer I created with kmalloc but when I try to execute it, I get: kernel tried to execute NX-protected page - exploit attempt? (uid: 0). If anyone can tell me how to kmalloc space and have it marked as executable, that would also help me solve the above problem. Any help is appreciated!
