Search Results

Search found 5165 results on 207 pages for 'const cast'.


  • ld: symbol(s) not found with OpenSSL (libssl)

    - by Benjamin
    Hi all,

    I'm trying to build TorTunnel on my Mac. I've successfully installed the Boost library and its development files. TorTunnel also requires OpenSSL and its development files. I've got them installed in /usr/lib/libssl.dylib and /usr/include/openssl/. When I run the make command, this is the error I'm getting:

        g++ -ggdb -g -O2 -lssl -lboost_system-xgcc42-mt-1_38 -o torproxy TorProxy.o HybridEncryption.o Connection.o Cell.o Directory.o ServerListing.o Util.o Circuit.o CellEncrypter.o RelayCellDispatcher.o CellConsumer.o ProxyShuffler.o CreateCell.o CreatedCell.o TorTunnel.o SocksConnection.o Network.o
        Undefined symbols:
        "_BN_hex2bn", referenced from: Circuit::initializeDhParameters() in Circuit.o
        "_BN_free", referenced from: Circuit::~Circuit() in Circuit.o, Circuit::~Circuit() in Circuit.o, CreatedCell::getKeyMaterial(unsigned char**, unsigned char**) in CreatedCell.o
        "_DH_generate_key", referenced from: Circuit::initializeDhParameters() in Circuit.o
        "_PEM_read_bio_RSAPublicKey", referenced from: ServerListing::getOnionKey() in ServerListing.o
        "_BIO_s_mem", referenced from: Connection::initializeSSL() in Connection.o, Connection::initializeSSL() in Connection.o
        "_DH_free", referenced from: Circuit::~Circuit() in Circuit.o
        "_BIO_ctrl_pending", referenced from: Connection::writeFromBuffer(boost::function) in Connection.o
        "_RSA_size", referenced from: HybridEncryption::encryptInSingleChunk(unsigned char*, int, unsigned char**, int*, rsa_st*) in HybridEncryption.o, HybridEncryption::encryptInHybridChunk(unsigned char*, int, unsigned char**, int*, rsa_st*) in HybridEncryption.o, HybridEncryption::encrypt(unsigned char*, int, unsigned char**, int*, rsa_st*) in HybridEncryption.o
        "_RSA_public_encrypt", referenced from: HybridEncryption::encryptInSingleChunk(unsigned char*, int, unsigned char**, int*, rsa_st*) in HybridEncryption.o, HybridEncryption::encryptInHybridChunk(unsigned char*, int, unsigned char**, int*, rsa_st*) in HybridEncryption.o
        "_BN_num_bits", referenced from: CreateCell::CreateCell(unsigned short, dh_st*, rsa_st*) in CreateCell.o, CreatedCell::getKeyMaterial(unsigned char**, unsigned char**) in CreatedCell.o, CreatedCell::getKeyMaterial(unsigned char**, unsigned char**) in CreatedCell.o, CreatedCell::isValid() in CreatedCell.o
        "_SHA1", referenced from: CellEncrypter::expandKeyMaterial(unsigned char*, int, unsigned char*, int) in CellEncrypter.o
        "_BN_bn2bin", referenced from: CreateCell::CreateCell(unsigned short, dh_st*, rsa_st*) in CreateCell.o
        "_BN_bin2bn", referenced from: CreatedCell::getKeyMaterial(unsigned char**, unsigned char**) in CreatedCell.o
        "_DH_compute_key", referenced from: CreatedCell::getKeyMaterial(unsigned char**, unsigned char**) in CreatedCell.o
        "_BIO_new", referenced from: Connection::initializeSSL() in Connection.o, Connection::initializeSSL() in Connection.o
        "_BIO_new_mem_buf", referenced from: ServerListing::getOnionKey() in ServerListing.o
        "_AES_ctr128_encrypt", referenced from: HybridEncryption::AES_encrypt(unsigned char*, int, unsigned char*, unsigned char*, int) in HybridEncryption.o, CellEncrypter::aesOperate(Cell&, aes_key_st*, unsigned char*, unsigned char*, unsigned int*) in CellEncrypter.o
        "_BIO_read", referenced from: Connection::writeFromBuffer(boost::function) in Connection.o
        "_SHA1_Update", referenced from: CellEncrypter::calculateDigest(SHAstate_st*, RelayCell&, unsigned char*) in CellEncrypter.o, CellEncrypter::initKeyMaterial(unsigned char*) in CellEncrypter.o, CellEncrypter::initKeyMaterial(unsigned char*) in CellEncrypter.o
        "_SHA1_Final", referenced from: CellEncrypter::calculateDigest(SHAstate_st*, RelayCell&, unsigned char*) in CellEncrypter.o
        "_DH_size", referenced from: CreatedCell::getKeyMaterial(unsigned char**, unsigned char**) in CreatedCell.o
        "_DH_new", referenced from: Circuit::initializeDhParameters() in Circuit.o
        "_BIO_write", referenced from: Connection::readIntoBufferComplete(boost::function, boost::system::error_code const&, unsigned long) in Connection.o
        "_RSA_free", referenced from: Circuit::~Circuit() in Circuit.o
        "_BN_dup", referenced from: Circuit::initializeDhParameters() in Circuit.o, Circuit::initializeDhParameters() in Circuit.o
        "_BN_new", referenced from: Circuit::initializeDhParameters() in Circuit.o, Circuit::initializeDhParameters() in Circuit.o
        "_SHA1_Init", referenced from: CellEncrypter::CellEncrypter() in CellEncrypter.o, CellEncrypter::CellEncrypter() in CellEncrypter.o
        "_RAND_bytes", referenced from: HybridEncryption::encryptInHybridChunk(unsigned char*, int, unsigned char**, int*, rsa_st*) in HybridEncryption.o, Util::getRandomId() in Util.o
        "_AES_set_encrypt_key", referenced from: HybridEncryption::AES_encrypt(unsigned char*, int, unsigned char*, unsigned char*, int) in HybridEncryption.o, CellEncrypter::initKeyMaterial(unsigned char*) in CellEncrypter.o, CellEncrypter::initKeyMaterial(unsigned char*) in CellEncrypter.o
        "_BN_set_word", referenced from: Circuit::initializeDhParameters() in Circuit.o
        "_RSA_new", referenced from: ServerListing::getOnionKey() in ServerListing.o
        ld: symbol(s) not found
        collect2: ld returned 1 exit status
        make: *** [torproxy] Error 1

    Any idea how I could fix it?
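    Every missing symbol here (the BN_*, DH_*, RSA_*, BIO_*, SHA1*, AES_* and RAND_bytes functions) lives in libcrypto rather than libssl, so the link line is simply missing -lcrypto; some linkers are additionally picky about libraries appearing after the object files that use them. A likely fix, assuming the Makefile's link rule can be edited (object list abbreviated here):

        # sketch of a corrected link line
        g++ -ggdb -g -O2 -o torproxy TorProxy.o HybridEncryption.o ...rest of the .o files... \
            -lssl -lcrypto -lboost_system-xgcc42-mt-1_38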


  • How to use CFNetwork to get byte array from sockets?

    - by Vic
    Hi, I'm working on a project for the iPad. It is a small program, and I need it to communicate with another piece of software that runs on Windows and acts like a server, so the application that I'm creating for the iPad will be the client. I'm using CFNetwork to do socket communication. This is the way I'm establishing the connection:

        char ip[] = "192.168.0.244";
        NSString *ipAddress = [[NSString alloc] initWithCString:ip];

        /* Build our socket context; this ties an instance of self to the socket */
        CFSocketContext CTX = { 0, self, NULL, NULL, NULL };

        /* Create the server socket as a TCP IPv4 socket and set a callback */
        /* for calls to the socket's lower-level connect() function */
        TCPClient = CFSocketCreate(NULL, PF_INET, SOCK_STREAM, IPPROTO_TCP,
                                   kCFSocketDataCallBack, (CFSocketCallBack)ConnectCallBack, &CTX);
        if (TCPClient == NULL)
            return;

        /* Set the port and address we want to listen on */
        struct sockaddr_in addr;
        memset(&addr, 0, sizeof(addr));
        addr.sin_len = sizeof(addr);
        addr.sin_family = AF_INET;
        addr.sin_port = htons(PORT);
        addr.sin_addr.s_addr = inet_addr([ipAddress UTF8String]);
        CFDataRef connectAddr = CFDataCreate(NULL, (unsigned char *)&addr, sizeof(addr));
        CFSocketConnectToAddress(TCPClient, connectAddr, -1);

        CFRunLoopSourceRef sourceRef = CFSocketCreateRunLoopSource(kCFAllocatorDefault, TCPClient, 0);
        CFRunLoopAddSource(CFRunLoopGetCurrent(), sourceRef, kCFRunLoopCommonModes);
        CFRelease(sourceRef);
        CFRunLoopRun();

    And this is the way I send the data, which is basically a byte array:

        /* The native socket, used for various operations */
        // TCPClient is a CFSocketRef member variable
        CFSocketNativeHandle sock = CFSocketGetNative(TCPClient);
        Byte byteData[3];
        byteData[0] = 0;
        byteData[1] = 4;
        byteData[2] = 0;
        // note: strlen stops at the first zero byte (byteData[0] here), so this sends a
        // single byte; sizeof(byteData) is the safer length for binary data
        send(sock, byteData, strlen(byteData)+1, 0);

    Finally, as you may have noticed, when I create the socket I registered a callback for the kCFSocketDataCallBack type. This is the code:

        void ConnectCallBack(CFSocketRef socket, CFSocketCallBackType type, CFDataRef address, const void *data, void *info)
        {
            // SocketsViewController is the class that contains all the methods
            SocketsViewController *obj = (SocketsViewController*)info;
            UInt8 *unsignedData = (UInt8 *) CFDataGetBytePtr(data);
            char *value = (char*)unsignedData;
            NSString *text = [[NSString alloc] initWithCString:value length:strlen(value)];
            [obj writeToTextView:text];
            [text release];
        }

    Actually, this callback is being invoked in my code. The problem is that I don't know how I can get the data that the Windows application sent me. I'm expecting to receive an array of bytes, but I don't know how to get those bytes from the data param. If anyone can help me find a way to do this, or maybe point me to another way to get the data from the server in my client application, I would really appreciate it. Thanks.
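    For what it's worth, with a kCFSocketDataCallBack the data parameter is already a CFDataRef wrapping the bytes that just arrived, so they can be read with CFDataGetLength/CFDataGetBytePtr instead of strlen (which truncates binary data at the first zero byte). A minimal sketch of the callback body:

        /* sketch: extracting the raw bytes handed to a kCFSocketDataCallBack
           (a CFDataRef of length 0 signals that the peer closed the connection) */
        void ConnectCallBack(CFSocketRef socket, CFSocketCallBackType type, CFDataRef address,
                             const void *data, void *info)
        {
            CFDataRef received = (CFDataRef)data;
            CFIndex length = CFDataGetLength(received);
            const UInt8 *bytes = CFDataGetBytePtr(received);
            for (CFIndex i = 0; i < length; i++) {
                /* bytes[i] is one received byte; copy these out as needed */
            }
        }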


  • Setting up a subreport in iReport using an XML datasource

    - by shyam
    Can anyone explain in detail (if possible with screenshots) how to add a subreport (one-to-many relation)? This is the XML datasource:

        <addressbook>
          <category name="home">
            <person id="1">
              <LASTNAME>Davolio</LASTNAME>
              <FIRSTNAME>Nancy</FIRSTNAME>
              <hobbies>
                <hobby>Radio Control</hobby>
                <hobby>R/C Cars</hobby>
                <hobby>Micro R/C Cars</hobby>
                <hobby>Die-Cast Models</hobby>
              </hobbies>
              <email>[email protected]</email>
              <email>[email protected]</email>
            </person>
            <person id="2">
              <LASTNAME>Fuller</LASTNAME>
              <FIRSTNAME>Andrew</FIRSTNAME>
              <email>[email protected]</email>
              <email>[email protected]</email>
            </person>
            <person id="3">
              <LASTNAME>Leverling</LASTNAME>
              <FIRSTNAME>Janet</FIRSTNAME>
              <hobbies>
                <hobby>Rockets</hobby>
                <hobby>Puzzles</hobby>
                <hobby>Science Hobby</hobby>
                <hobby>Toy Horse</hobby>
              </hobbies>
              <email>[email protected]</email>
              <email>[email protected]</email>
            </person>
          </category>
          <category name="work">
            <person id="4">
              <LASTNAME>Peacock</LASTNAME>
              <FIRSTNAME>Margaret</FIRSTNAME>
              <hobbies>
                <hobby>Toy Horse</hobby>
              </hobbies>
              <email>[email protected]</email>
            </person>
            <person id="5">
              <LASTNAME>Buchanan</LASTNAME>
              <FIRSTNAME>Steven</FIRSTNAME>
              <hobbies>
              </hobbies>
              <email>[email protected]</email>
            </person>
            <person id="6">
              <LASTNAME>Suyama</LASTNAME>
              <FIRSTNAME>Michael</FIRSTNAME>
            </person>
            <person id="7">
              <LASTNAME>King</LASTNAME>
              <FIRSTNAME>Robert</FIRSTNAME>
            </person>
          </category>
          <category name="Other">
            <person id="8">
              <LASTNAME>Callahan</LASTNAME>
              <FIRSTNAME>Laura</FIRSTNAME>
              <email>[email protected]</email>
            </person>
            <person id="9">
              <LASTNAME>Dodsworth</LASTNAME>
              <email>[email protected]</email>
            </person>
          </category>
        </addressbook>


  • Being able to solve Google Code Jam problem sets

    - by JPro
    This is not a homework question, but rather my intention to know if this is what it takes to learn programming. I keep logging into TopCoder, not to actually participate but to get a basic understanding of how the problems are solved. But to my knowledge I don't understand what the problem is, or how to translate the problem into an algorithm that can solve it.

    Just now I happened to look at the ACM ICPC 2010 World Finals, which is being held in China. The teams were given problem sets, and one of them is this:

        Given at most 100 points on a plane with distinct x-coordinates, find the shortest cycle that passes through each point exactly once, goes from the leftmost point always to the right until it reaches the rightmost point, then goes always to the left until it gets back to the leftmost point. Additionally, two points are given such that the path from left to right contains the first point, and the path from right to left contains the second point.

        This seems to be a very simple DP: after processing the last k points, and with the first path ending in point a and the second path ending in point b, what is the smallest total length to achieve that? This is O(n^2) states, transitions in O(n). We deal with the two special points by forcing the first path to contain the first one, and the second path to contain the second one.

    Now I have no idea what I am supposed to solve after reading the problem set (a sketch of this recurrence appears below). And there's another one, from Google Code Jam:

        Problem

        In a big, square room there are two point light sources: one is red and the other is green. There are also n circular pillars. Light travels in straight lines and is absorbed by walls and pillars. The pillars therefore cast shadows: they do not let light through. There are places in the room where no light reaches (black), where only one of the two light sources reaches (red or green), and places where both lights reach (yellow). Compute the total area of each of the four colors in the room. Do not include the area of the pillars.

        Input

        * One line containing the number of test cases, T.

        Each test case contains, in order:
        * One line containing the coordinates x, y of the red light source.
        * One line containing the coordinates x, y of the green light source.
        * One line containing the number of pillars n.
        * n lines describing the pillars. Each contains 3 numbers x, y, r. The pillar is a disk with the center (x, y) and radius r.

        The room is the square described by 0 <= x, y <= 100. Pillars, room walls and light sources are all disjoint; they do not overlap or touch.

        Output

        For each test case, output:

        Case #X: black area red area green area yellow area

    Is it required that people who program should be able to solve these types of problems? I would appreciate it if anyone could help me interpret the Google Code Jam problem set, as I wish to participate in this year's Code Jam to see if I can do anything or not. Thanks.
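    For reference, the approach quoted in the ICPC analysis above is the classic "bitonic tour" DP. A minimal C++ sketch of that recurrence, ignoring the two forced-point constraints and with every name here invented for illustration:

        #include <cmath>
        #include <utility>
        #include <vector>

        // Bitonic-tour DP sketch. Points must already be sorted by x-coordinate.
        // dp[i][j] (i < j) = shortest pair of disjoint paths that both start at
        // point 0, end at points i and j, and together visit every point 0..j.
        double bitonicTour(const std::vector<std::pair<double, double>>& p)
        {
            const int n = (int)p.size();
            auto dist = [&](int a, int b) {
                return std::hypot(p[a].first - p[b].first, p[a].second - p[b].second);
            };
            const double INF = 1e18;
            std::vector<std::vector<double>> dp(n, std::vector<double>(n, INF));
            dp[0][1] = dist(0, 1);
            for (int j = 1; j + 1 < n; ++j)
                for (int i = 0; i < j; ++i) {
                    // point j+1 extends either the path ending at j ...
                    dp[i][j + 1] = std::min(dp[i][j + 1], dp[i][j] + dist(j, j + 1));
                    // ... or the path ending at i
                    dp[j][j + 1] = std::min(dp[j][j + 1], dp[i][j] + dist(i, j + 1));
                }
            double best = INF;
            for (int i = 0; i < n - 1; ++i)   // close the cycle at the rightmost point
                best = std::min(best, dp[i][n - 1] + dist(i, n - 1));
            return best;
        }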


  • Detect Client Computer name when an RDP session is open

    - by Ubiquitous Che
    Hey all,

    My manager has pointed out to me a few nifty things that one of our accounting applications can do because it can load different settings based on the machine name of the host and the machine name of the client when the package is opened in an RDP session. We want to provide similar functionality in one of my company's applications. I've found out on this site how to detect if I'm in an RDP session, but I'm having trouble finding information anywhere on how to detect the name of the client computer. Any pointers in the right direction would be great. I'm coding in C# for .NET 3.5.

    EDIT: The sample code I cobbled together from the advice below - it should be enough for anyone who has a use for WTSQuerySessionInformation to get a feel for what's going on. Note that this isn't necessarily the best way of doing it, just a starting point that I've found useful. When I run this locally, I get boring, expected answers. When I run it on our local office server in an RDP session, I see my own computer name in the WTSClientName property.

        using System;
        using System.Collections.Generic;
        using System.Linq;
        using System.Text;
        using System.Runtime.InteropServices;

        namespace TerminalServicesTest
        {
            class Program
            {
                const int WTS_CURRENT_SESSION = -1;
                static readonly IntPtr WTS_CURRENT_SERVER_HANDLE = IntPtr.Zero;

                static void Main(string[] args)
                {
                    StringBuilder sb = new StringBuilder();
                    uint byteCount;
                    foreach (WTS_INFO_CLASS item in Enum.GetValues(typeof(WTS_INFO_CLASS)))
                    {
                        Program.WTSQuerySessionInformation(WTS_CURRENT_SERVER_HANDLE, WTS_CURRENT_SESSION, item, out sb, out byteCount);
                        Console.WriteLine("{0}({1}): {2}", item.ToString(), byteCount, sb);
                    }
                    Console.WriteLine();
                    Console.WriteLine("Press any key to exit...");
                    Console.ReadKey();
                }

                [DllImport("Wtsapi32.dll")]
                public static extern bool WTSQuerySessionInformation(IntPtr hServer, int sessionId, WTS_INFO_CLASS wtsInfoClass, out StringBuilder ppBuffer, out uint pBytesReturned);
            }

            enum WTS_INFO_CLASS
            {
                WTSInitialProgram = 0, WTSApplicationName = 1, WTSWorkingDirectory = 2, WTSOEMId = 3,
                WTSSessionId = 4, WTSUserName = 5, WTSWinStationName = 6, WTSDomainName = 7,
                WTSConnectState = 8, WTSClientBuildNumber = 9, WTSClientName = 10, WTSClientDirectory = 11,
                WTSClientProductId = 12, WTSClientHardwareId = 13, WTSClientAddress = 14, WTSClientDisplay = 15,
                WTSClientProtocolType = 16, WTSIdleTime = 17, WTSLogonTime = 18, WTSIncomingBytes = 19,
                WTSOutgoingBytes = 20, WTSIncomingFrames = 21, WTSOutgoingFrames = 22, WTSClientInfo = 23,
                WTSSessionInfo = 24, WTSSessionInfoEx = 25, WTSConfigInfo = 26, WTSValidationInfo = 27,
                WTSSessionAddressV4 = 28, WTSIsRemoteSession = 29
            }
        }
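    For reference, the same query in its native Win32 form is sketched below; note the WTSFreeMemory call, since the buffer comes back API-allocated (which is also why the out StringBuilder marshalling above is fragile). A sketch, not drop-in code:

        // C++ sketch: ask the WTS API for the RDP client's machine name
        #include <windows.h>
        #include <wtsapi32.h>
        #include <string>
        #pragma comment(lib, "Wtsapi32.lib")

        bool GetRdpClientName(std::wstring& name)
        {
            LPWSTR buffer = NULL;
            DWORD bytes = 0;
            if (!WTSQuerySessionInformationW(WTS_CURRENT_SERVER_HANDLE, WTS_CURRENT_SESSION,
                                             WTSClientName, &buffer, &bytes))
                return false;          // GetLastError() explains why
            name = buffer;             // empty string when no RDP client is attached
            WTSFreeMemory(buffer);     // the API allocated this buffer
            return true;
        }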


  • Flex - Problem with auto resizing datagrid

    - by Marty Pitt
    Hi All,

    I'm trying to create a datagrid which will resize vertically to ensure all the renderers are displayed in full. Additionally:

    - Renderers are of variable height
    - Renderers can resize themselves

    Generally speaking, the flow of events is as follows:

    1. One of the item renderers resizes itself (normally in response to a user click etc.)
    2. It dispatches a bubbling event which the parent datagrid picks up
    3. The DataGrid attempts to resize to ensure that all renderers remain visible in full

    I'm currently using this code within the datagrid to calculate the height:

        height = measureHeightOfItems(0, dataProvider.length) + headerHeight;

    This appears to get an incorrect height. I've tried a number of variations, including callLater (to ensure the resize has completed so measure can work correctly), and overriding measure() and calling invalidateSize() / validateSize(), but neither works.

    Below are 3 classes which will illustrate the problem. Clicking the button in the item renderers resizes the renderer. The grid should also expand so that all of the 3 renderers are shown in their entirety. Any suggestions would be greatly appreciated.

    Regards,
    Marty

    DataGridProblem.mxml (Application file):

        <mx:Application xmlns:mx="http://www.adobe.com/2006/mxml" layout="vertical" xmlns:view="view.*">
            <mx:ArrayCollection id="dataProvider">
                <mx:String>Item A</mx:String>
                <mx:String>Item B</mx:String>
                <mx:String>Item C</mx:String>
            </mx:ArrayCollection>
            <view:TestDataGrid id="dg" dataProvider="{ dataProvider }" width="400">
                <view:columns>
                    <mx:DataGridColumn dataField="text" />
                    <mx:DataGridColumn itemRenderer="view.RendererButton" />
                </view:columns>
            </view:TestDataGrid>
        </mx:Application>

    view.TestDataGrid.as:

        package view
        {
            import flash.events.Event;
            import mx.controls.DataGrid;
            import mx.core.ScrollPolicy;

            public class TestDataGrid extends DataGrid
            {
                public function TestDataGrid()
                {
                    this.verticalScrollPolicy = ScrollPolicy.OFF;
                    this.variableRowHeight = true;
                    this.addEventListener(RendererButton.RENDERER_RESIZE, onRendererResize);
                }

                private function onRendererResize(event : Event) : void
                {
                    resizeDatagrid();
                }

                private function resizeDatagrid() : void
                {
                    height = measureHeightOfItems(0, dataProvider.length) + headerHeight;
                }
            }
        }

    view.RendererButton.mxml:

        <?xml version="1.0" encoding="utf-8"?>
        <mx:HBox xmlns:mx="http://www.adobe.com/2006/mxml">
            <mx:Button width="50" height="50" click="onClick()" />
            <mx:Script>
                <![CDATA[
                    public static const RENDERER_RESIZE : String = "resizeRenderer";

                    private function onClick() : void
                    {
                        this.height += 20;
                        dispatchEvent(new Event(RENDERER_RESIZE, true));
                    }
                ]]>
            </mx:Script>
        </mx:HBox>


  • sqlite3_prepare_v2 throws SQLITE_ERROR in iPhone code

    - by incognitii
    I have a piece of code that used to work, and other iterations of it that DO work within the same app; I have compared the code and they are identical in structure. This is driving me INSANE!!!!!!

    In this instance, sqlite3_prepare_v2 is throwing one of those useless SQLITE_ERROR exceptions. Apparently it can FIND the database, and OPEN the database, but it can't prepare a statement for it. Does ANYONE have any ideas? I'm desperate here.

        second_maindatabaseName = @"database.db";
        NSArray *documentPaths = NSSearchPathForDirectoriesInDomains(NSDocumentDirectory, NSUserDomainMask, YES);
        NSString *documentsDir = [documentPaths objectAtIndex:0];
        second_maindatabasePath = [documentsDir stringByAppendingPathComponent:second_maindatabaseName];

        // Setup the database object
        sqlite3 *database;

        // Init the entry requirements Array
        BOOL success;
        NSFileManager *fileManager = [NSFileManager defaultManager];
        success = [fileManager fileExistsAtPath:second_maindatabasePath];
        if (!success)
        {
            NSString *databasePathFromApp = [[[NSBundle mainBundle] resourcePath] stringByAppendingPathComponent:second_maindatabaseName];
            [fileManager copyItemAtPath:databasePathFromApp toPath:second_maindatabasePath error:nil];
            [fileManager release];   // note: defaultManager is a shared object the caller does not own; this release is an over-release
        }

        if (sqlite3_open([second_maindatabasePath UTF8String], &database) == SQLITE_OK)
        {
            NSUserDefaults *defaults = [NSUserDefaults standardUserDefaults];
            value1 = [defaults stringForKey:kConst1];
            value2 = [defaults stringForKey:kConst2];
            value3 = [defaults stringForKey:kConst3];
            value4 = [defaults stringForKey:kConst4];

            const char *sqlStatement = "SELECT * FROM myTable WHERE Field1 = ? AND Field2 = ? AND Field3 = ? AND Field4 = ?";
            sqlite3_stmt *compiledStatement;
            int error_code = sqlite3_prepare_v2(database, sqlStatement, -1, &compiledStatement, NULL);
            if (error_code == SQLITE_OK)
            {
                // Loop through the results and add them to the feeds array
                sqlite3_bind_text(compiledStatement, 1, [value1 UTF8String], -1, SQLITE_TRANSIENT);
                sqlite3_bind_text(compiledStatement, 2, [value2 UTF8String], -1, SQLITE_TRANSIENT);
                sqlite3_bind_text(compiledStatement, 3, [value3 UTF8String], -1, SQLITE_TRANSIENT);
                sqlite3_bind_text(compiledStatement, 4, [value4 UTF8String], -1, SQLITE_TRANSIENT);
                while (sqlite3_step(compiledStatement) == SQLITE_ROW)
                {
                    // Read the data from the result row
                    NSString *aNewValue = [NSString stringWithUTF8String:(char *)sqlite3_column_text(compiledStatement, 1)];
                }
            }
            sqlite3_finalize(compiledStatement);
        }
        sqlite3_close(database);
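    When sqlite3_prepare_v2 fails, the database handle carries a human-readable reason that narrows this down far faster than the bare SQLITE_ERROR code; a common culprit in this copy-if-missing pattern is "no such table: myTable", when an earlier run left an empty or stale database.db in Documents that the copy block never replaces. A small sketch, assuming the same variables as above:

        /* sketch: log SQLite's own description of the failure */
        int rc = sqlite3_prepare_v2(database, sqlStatement, -1, &compiledStatement, NULL);
        if (rc != SQLITE_OK) {
            fprintf(stderr, "prepare failed (%d): %s\n", rc, sqlite3_errmsg(database));
        }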


  • Possible to use Python with Intel's Atom Developer SDK (C/C++)?

    - by Jordan Magnuson
    So I've made a game in Python and PyGame. Now I'm interested in submitting the game to Intel's March Developer Challenge. However, the developer challenge requires use of Intel's Atom Developer SDK (http://appdeveloper.intel.com/en-us/sdk), which only has APIs for C and C++. I'm new to Python and PyGame, and have no experience in C or C++.

    My question is, would it be possible to somehow implement Intel's Atom SDK through/with/from a Python application (as the first link above suggests)? I've read up a little bit on embedding/extending Python into/with C, but I'm not entirely sure what to embed or where. I mean, I know I can do things like this in C:

        #include <Python.h>

        int main(int argc, char *argv[])
        {
            Py_Initialize();
            PyRun_SimpleString("from time import time,ctime\n"
                               "print 'Today is',ctime(time())\n");
            Py_Finalize();
            return 0;
        }

    But what do I do about all my dependencies on Python and PyGame, for people that don't have those installed on their machines? Normally Py2Exe takes care of compacting the required dependencies (I've managed to package my game into an exe/zip), but how do I take care of that stuff in the context of embedding within C? Can I somehow work with py2exe on this, or do I need to do something entirely different for embedding within C?

    It seems like it would be a lot easier to go the route of extending Python with the C validation code, rather than trying to embed my whole game within C, but I think that's not an option, "because the library provided is currently only available as a Visual Studio 2008 '.lib'", meaning the application has to be compiled with Visual Studio...?

    Any help, thoughts, or ideas are much appreciated! You can find the complete SDK Developer's Guide on the Intel site above, but here is their "Hello World" using the C Language API:

        #include <stdio.h>
        #include "adpcore.h"

        int main(int argc, char* argv[])
        {
            ADP_RET_CODE ret_code;
            const ADP_APPLICATIONID myApplicationId = {{ 0x12345678, 0x11112222, 0x33331234, 0x567890ab }};

            if ((ret_code = ADP_Initialize()) != ADP_SUCCESS) {
                printf("ERROR: exiting");
                exit(-1);
            }

            if ((ret_code = ADP_IsAuthorized(myApplicationId)) == ADP_AUTHORIZED)
                printf("Hello World");
            else
                printf("Not authorized to run");

            exit(0);
        }

    35-page SDK Developer Guide: http://appdeveloper.intel.com/sites/files/pages/SDK%20Developer%20Guide.pdf
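    On the extending side, the validation call could be wrapped in a small C extension module that the PyGame code imports at startup. A rough sketch (Python 2-era C API; the module name is invented, and the ADP calls are used exactly as the guide excerpt above shows them), which of course would still have to be compiled against the VS2008 .lib:

        #include <Python.h>
        #include "adpcore.h"

        /* sketch: expose ADP_IsAuthorized to Python as atomsdk.is_authorized() */
        static const ADP_APPLICATIONID myApplicationId =
            {{ 0x12345678, 0x11112222, 0x33331234, 0x567890ab }};

        static PyObject *is_authorized(PyObject *self, PyObject *args)
        {
            long ok = (ADP_IsAuthorized(myApplicationId) == ADP_AUTHORIZED);
            return PyBool_FromLong(ok);
        }

        static PyMethodDef methods[] = {
            { "is_authorized", is_authorized, METH_NOARGS, "Check the Atom SDK license." },
            { NULL, NULL, 0, NULL }
        };

        PyMODINIT_FUNC initatomsdk(void)
        {
            if (ADP_Initialize() != ADP_SUCCESS)
                return;   /* importing the module will then fail */
            Py_InitModule("atomsdk", methods);
        }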


  • A problem in socks.h

    - by janathan
    I use this library (http://www.codeproject.com/KB/IP/Socks.aspx) for socket programming in C++. I copied socks.h into my include folder and wrote this code:

        // five #include lines lost their header names when the question was posted
        // (the code uses printf/strlen/getch/CreateThread/send, so likely <stdio.h>,
        // <string.h>, <conio.h>, <windows.h> and <winsock2.h>)
        #include "socks.h"

        #define PORT 1001        // the port client will be connecting to
        #define MAXDATASIZE 100

        static void ReadThread(void* lp);
        int socketId;

        int main(int argc, char* argv[])
        {
            const char temp[] = "GET / HTTP/1.0\r\n\r\n";
            CSocks cs;
            cs.SetVersion(SOCKS_VER4);
            cs.SetSocksPort(1080);
            cs.SetDestinationPort(1001);
            cs.SetDestinationAddress("192.168.11.97");
            cs.SetSocksAddress("192.168.11.97");
            //cs.SetVersion(SOCKS_VER5);
            //cs.SetSocksAddress("128.0.21.200");
            socketId = cs.Connect();
            // if failed
            if (cs.m_IsError)
            {
                printf("\n%s", cs.GetLastErrorMessage());
                getch();
                return 0;
            }
            // send packet for requesting to a server
            if (socketId > 0)
            {
                send(socketId, temp, strlen(temp), 0);
                HANDLE ReadThreadID;   // handle for read thread id
                HANDLE handle;         // handle for thread handle
                handle = CreateThread((LPSECURITY_ATTRIBUTES)NULL,          // No security attributes.
                                      (DWORD)0,                              // Use same stack size.
                                      (LPTHREAD_START_ROUTINE)ReadThread,    // Thread procedure.
                                      (LPVOID)(void*)NULL,                   // Parameter to pass.
                                      (DWORD)0,                              // Run immediately.
                                      (LPDWORD)&ReadThreadID);
                WaitForSingleObject(handle, INFINITE);
            }
            else
            {
                printf("\nSocks Server / Destination Server not started..");
            }
            closesocket(socketId);
            getch();
            return 0;
        }

        // Thread Proc for reading from server socket.
        static void ReadThread(void* lp)
        {
            int numbytes;
            char buf[MAXDATASIZE];
            while (1)
            {
                if ((numbytes = recv(socketId, buf, MAXDATASIZE-1, 0)) == -1)
                {
                    printf("\nServer / Socks Server has been closed Receive thread Closed\0");
                    break;
                }
                if (numbytes == 0)
                    break;
                buf[numbytes] = '\0';
                printf("Received: %s\r\n", buf);
                send(socketId, buf, strlen(buf), 0);
            }
        }

    But when I compile this I get an error. Please help me. Thanks.


  • Strange iPhone memory leak in xml parser

    - by Chris
    Update: I edited the code, but the problem persists...

    Hi everyone, this is my first post here - I found this place a great resource for solving many of my questions. Normally I try my best to fix anything on my own, but this time I really have no idea what goes wrong, so I hope someone can help me out.

    I am building an iPhone app that parses a couple of XML files using TouchXML. I have a class XMLParser, which takes care of downloading and parsing the results. I am getting memory leaks when I parse an XML file more than once with the same instance of XMLParser. Here is one of the parsing snippets (just the relevant part):

        for (int counter = 0; counter < [item childCount]; counter++)
        {
            CXMLNode *child = [item childAtIndex:counter];
            if ([[child name] isEqualToString:@"PRODUCT"])
            {
                NSMutableDictionary *product = [[NSMutableDictionary alloc] init];
                for (int j = 0; j < [child childCount]; j++)
                {
                    CXMLNode *grandchild = [child childAtIndex:j];
                    if ([[grandchild stringValue] length] > 1)
                    {
                        // note: stringValue and stringByTrimming... return autoreleased
                        // objects, which live until the surrounding pool drains
                        NSString *trimmedString = [[grandchild stringValue] stringByTrimmingCharactersInSet:[NSCharacterSet whitespaceAndNewlineCharacterSet]];
                        [product setObject:trimmedString forKey:[grandchild name]];
                    }
                }
                // Add product to current category array
                switch (categoryId)
                {
                    case 0: [self.mobil addObject: product]; break;
                    case 1: [self.allgemein addObject: product]; break;
                    case 2: [self.besitzeIch addObject: product]; break;
                    case 3: [self.willIch addObject: product]; break;
                    default: break;
                }
                [product release];
            }
        }

    The first time I parse the XML, no leak shows up in Instruments; the next time I do so, I get a lot of leaks (NSCFString / NSCFDictionary). Instruments points me to this part inside CXMLNode.m when I dig into a leaked object:

        theStringValue = [NSString stringWithUTF8String:(const char *)theXMLString];
        if ( _node->type != CXMLTextKind )
            xmlFree(theXMLString);
        }
        return(theStringValue);

    I really spent a long time and tried multiple approaches to fix this, but to no avail so far; maybe I am missing something essential? Any help is highly appreciated, thank you!


  • Separating interface and implementation with normal functions

    - by ace
    This seems like it should be pretty simple; I'm probably leaving something simple out. This is the code I'm trying to run. It is 3 files: 2 .cpp files and 1 header.

    lab6.h:

        #ifndef LAB6_H_INCLUDED
        #define LAB6_H_INCLUDED

        int const arraySize = 10;
        int array1[arraySize];
        int array2[arraySize];

        void generateArray(int[], int);
        void displayArray(int[], int[], int);
        void reverseOrder(int[], int[], int);

        #endif // LAB6_H_INCLUDED

    lab6.cpp (the four stripped #include lines are reconstructed here from the using-declarations that followed them):

        #include <iostream>
        using std::cout;
        using std::endl;
        #include <cstdlib>
        using std::rand;
        using std::srand;
        #include <ctime>
        using std::time;
        #include <iomanip>
        using std::setw;
        #include "lab6.h"

        void generateArray(int array1[], int arraySize)
        {
            srand(time(0));
            for (int i = 0; i < 10; i++)
            {
                array1[i] = (rand() % 10);
            }
        }

        void displayArray(int array1[], int array2[], int arraySize)
        {
            cout << endl << "Array 1" << endl;
            for (int i = 0; i < arraySize; i++)
            {
                cout << array1[i] << ", ";
            }
            cout << endl << "Array 2" << endl;
            for (int i = 0; i < arraySize; i++)
            {
                cout << array2[i] << ", ";
            }
        }

        void reverseOrder(int array1[], int array2[], int arraySize)
        {
            for (int i = 0, j = arraySize - 1; i < arraySize; j--, i++)
            {
                array2[j] = array1[i];
            }
        }

    And finally main.cpp:

        #include "lab6.h"

        int main()
        {
            generateArray(array1, arraySize);
            reverseOrder(array1, array2, arraySize);
            displayArray(array1, array2, arraySize);
            return 0;
        }
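    The error isn't quoted, but defining array1 and array2 in the header means every .cpp file that includes lab6.h produces its own definition of each array, which typically ends in a "multiple definition of array1" link error. If that is the symptom, the usual split (a sketch of the conventional fix, not tested against this exact assignment) is:

        // lab6.h - declare the arrays only
        extern int array1[arraySize];
        extern int array2[arraySize];

        // lab6.cpp - define them exactly once
        int array1[arraySize];
        int array2[arraySize];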


  • Sending a file from memory (rather than disk) over HTTP using libcurl

    - by cinek1lol
    Hi! I would like to send pictures via a program written in C++. This works:

        WinExec("C:\\curl\\curl.exe -H Expect: -F \"fileupload=@C:\\curl\\ok.jpg\" -F \"xml=yes\" -# \"http://www.imageshack.us/index.php\" -o data.txt -A \"Mozilla/5.0 (Windows; U; Windows NT 5.1; en-US; rv:1.8.1.1) Gecko/20061204 Firefox/2.0.0.1\" -e \"http://www.imageshack.us\"", NULL);

    But I would like to send the pictures from a pre-loaded buffer in a char variable (you know what I mean? First I load the picture into a variable and then send the variable), because right now I have to specify the path of the picture on disk. I want to write this program in C++ using the curl library, not through an .exe. I have also found such a program (which has been modified by me a bit):

        #include <stdio.h>
        #include <string.h>
        #include <iostream>
        #include <curl/curl.h>
        #include <curl/types.h>
        #include <curl/easy.h>

        int main(int argc, char *argv[])
        {
            CURL *curl;
            CURLcode res;
            struct curl_httppost *formpost = NULL;
            struct curl_httppost *lastptr = NULL;
            struct curl_slist *headerlist = NULL;
            static const char buf[] = "Expect:";

            curl_global_init(CURL_GLOBAL_ALL);

            /* Fill in the file upload field */
            curl_formadd(&formpost, &lastptr,
                         CURLFORM_COPYNAME, "send",
                         CURLFORM_FILE, "nowy.jpg",
                         CURLFORM_END);
            curl_formadd(&formpost, &lastptr,
                         CURLFORM_COPYNAME, "nowy.jpg",
                         CURLFORM_COPYCONTENTS, "nowy.jpg",
                         CURLFORM_END);
            curl_formadd(&formpost, &lastptr,
                         CURLFORM_COPYNAME, "submit",
                         CURLFORM_COPYCONTENTS, "send",
                         CURLFORM_END);

            curl = curl_easy_init();
            headerlist = curl_slist_append(headerlist, buf);
            if (curl)
            {
                curl_easy_setopt(curl, CURLOPT_URL, "http://www.imageshack.us/index.php");
                if ((argc == 2) && (!strcmp(argv[1], "xml=yes")))
                    curl_easy_setopt(curl, CURLOPT_HTTPHEADER, headerlist);
                curl_easy_setopt(curl, CURLOPT_HTTPPOST, formpost);
                res = curl_easy_perform(curl);
                curl_easy_cleanup(curl);
                curl_formfree(formpost);
                curl_slist_free_all(headerlist);
            }
            system("pause");
            return 0;
        }
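    libcurl can post a form part straight from memory: CURLFORM_BUFFER / CURLFORM_BUFFERPTR / CURLFORM_BUFFERLENGTH take a file name to report to the server, a pointer to the bytes, and their length, replacing the CURLFORM_FILE line above. A sketch, with the buffer contents assumed to be loaded elsewhere:

        /* sketch: upload from a memory buffer instead of a file on disk */
        static char imageData[] = { /* JPEG bytes, loaded elsewhere */ 0 };
        long imageSize = (long)sizeof(imageData);
        curl_formadd(&formpost, &lastptr,
                     CURLFORM_COPYNAME, "fileupload",
                     CURLFORM_BUFFER, "ok.jpg",        /* file name reported to the server */
                     CURLFORM_BUFFERPTR, imageData,    /* not copied: keep alive until curl_easy_perform */
                     CURLFORM_BUFFERLENGTH, imageSize,
                     CURLFORM_END);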


  • HELP! Any ideas? I'm creating a new site using the below script embedded in my SWF, but I keep getting...

    - by Suzanne
        package com.flashden
        {
            import flash.display.MovieClip;
            import flash.text.*;
            import flash.events.MouseEvent;
            import flash.events.*;
            import flash.net.URLRequest;
            import flash.display.Loader;

            public class MenuItem extends MovieClip
            {
                private var scope;

                public var closedX:Number; // note: as posted this read "public var closedX; :Number", which is a syntax error

                public static const OPEN_MENU = "openMenu";

                public function MenuItem(scope)
                {
                    // set scope to talk back to -------------------------------//
                    this.scope = scope;

                    // disable all items not to be clickable -------------------//
                    txt_label.mouseEnabled = false;
                    menuItemShine.mouseEnabled = false;
                    menuItemArrow.mouseEnabled = false;

                    // make background clip the item to be clicked (button) ----//
                    menuItemBG.buttonMode = true;

                    // add click event listener to the header background -------//
                    menuItemBG.addEventListener(MouseEvent.CLICK, clickHandler);
                }

                private function clickHandler(e:MouseEvent)
                {
                    scope.openMenuItem(this);
                }

                public function loadContent(contentURL:String)
                {
                    var loader:Loader = new Loader();
                    configureListeners(loader.contentLoaderInfo);
                    var request:URLRequest = new URLRequest(contentURL);
                    loader.load(request);

                    // place x position of content at the bottom of the header so the top is not cut off ----//
                    loader.x = 30;

                    // we add the content at level 1, because the background clip is at level 0 ----//
                    addChildAt(loader, 1);
                }

                private function configureListeners(dispatcher:IEventDispatcher):void
                {
                    dispatcher.addEventListener(Event.COMPLETE, completeHandler);
                    dispatcher.addEventListener(HTTPStatusEvent.HTTP_STATUS, httpStatusHandler);
                    dispatcher.addEventListener(Event.INIT, initHandler);
                    dispatcher.addEventListener(IOErrorEvent.IO_ERROR, ioErrorHandler);
                    dispatcher.addEventListener(Event.OPEN, openHandler);
                    dispatcher.addEventListener(ProgressEvent.PROGRESS, progressHandler);
                    dispatcher.addEventListener(Event.UNLOAD, unLoadHandler);
                }

                private function completeHandler(event:Event):void
                {
                    //trace("completeHandler: " + event);
                    // remove loader animation ----------------//
                    removeChild(getChildByName("mc_preloader"));
                }

                private function httpStatusHandler(event:HTTPStatusEvent):void
                {
                    // trace("httpStatusHandler: " + event);
                }

                private function initHandler(event:Event):void
                {
                    //trace("initHandler: " + event);
                }

                private function ioErrorHandler(event:IOErrorEvent):void
                {
                    //trace("ioErrorHandler: " + event);
                }

                private function openHandler(event:Event):void
                {
                    //trace("openHandler: " + event);
                }

                private function progressHandler(event:ProgressEvent):void
                {
                    //trace("progressHandler: bytesLoaded=" + event.bytesLoaded + " bytesTotal=" + event.bytesTotal);
                }

                private function unLoadHandler(event:Event):void
                {
                    //trace("unLoadHandler: " + event);
                }
            }
        }


  • Architecture for Qt SIGNAL with subclass-specific, templated argument type

    - by Barry Wark
    I am developing a scientific data acquisition application using Qt. Since I'm not a deep expert in Qt, I'd like some architecture advice from the community on the following problem:

    The application supports several hardware acquisition interfaces, but I would like to provide a common API on top of those interfaces. Each interface has a sample data type and units for its data, so I'm representing a vector of samples from each device as a std::vector of Boost.Units quantities (i.e. std::vector<boost::units::quantity<unit,sample_type> >).

    I'd like to use a multi-cast style architecture, where each data source broadcasts newly received data to 1 or more interested parties. Qt's Signal/Slot mechanism is an obvious fit for this style. So, I'd like each data source to emit a signal like

        typedef std::vector<boost::units::quantity<unit,sample_type> > SampleVector;

        signals:
            void samplesAcquired(SampleVector sampleVector);

    for the unit and sample_type appropriate for that device. Since templated QObject subclasses aren't supported by the meta-object compiler, there doesn't seem to be a way to have a (templated) base class for all data sources which defines the samplesAcquired signal. In other words, the following won't work:

        template<T,U> //sample type and units
        class DataSource : public QObject
        {
            Q_OBJECT
            ...
        public:
            typedef std::vector<boost::units::quantity<U,T> > SampleVector;

        signals:
            void samplesAcquired(SampleVector sampleVector);
        };

    The best option I've been able to come up with is a two-layered approach:

        template<T,U> //sample type and units
        class IAcquiredSamples
        {
        public:
            typedef std::vector<boost::units::quantity<U,T> > SampleVector;

            virtual shared_ptr<SampleVector> acquiredData(TimeStamp ts, unsigned long nsamples);
        };

        class DataSource : public QObject
        {
            ...
        signals:
            void samplesAcquired(TimeStamp ts, unsigned long nsamples);
        };

    The samplesAcquired signal now gives a timestamp and number of samples for the acquisition, and clients must use the IAcquiredSamples API to retrieve those samples. Obviously data sources must subclass both DataSource and IAcquiredSamples.

    The disadvantage of this approach appears to be a loss of simplicity in the API... it would be much nicer if clients could get the acquired samples in the connected slot. Being able to use Qt's queued connections would also make threading issues easier, instead of having to manage them in the acquiredData method within each subclass.

    One other possibility is to use a QVariant argument. This necessarily puts the onus on subclasses to register their particular sample vector type with Q_REGISTER_METATYPE/qRegisterMetaType. Not really a big deal. Clients of the base class, however, will have no way of knowing what type the QVariant value type is, unless a tag struct is also passed with the signal. I consider this solution at least as convoluted as the one above, as it forces clients of the abstract base class API to deal with some of the gnarlier aspects of the type system.

    So, is there a way to achieve the templated signal parameter? Is there a better architecture than the one I've proposed?
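    For the QVariant route described in the second-to-last paragraph, the moving parts are small. A minimal sketch (Qt 4-era macros; a plain double vector stands in for the Boost.Units type, and all names here are invented):

        // sketch of the QVariant approach
        #include <QtCore/QObject>
        #include <QtCore/QVariant>
        #include <QtCore/QMetaType>
        #include <vector>

        typedef std::vector<double> SampleVector;   // stand-in for the units-typed vector
        Q_DECLARE_METATYPE(SampleVector)

        class DataSource : public QObject
        {
            Q_OBJECT
        signals:
            void samplesAcquired(QVariant samples); // usable with queued connections once registered
        };

        // once at startup:
        //     qRegisterMetaType<SampleVector>("SampleVector");
        // emitting, from a subclass that knows its concrete type:
        //     emit samplesAcquired(QVariant::fromValue(samples));
        // receiving slot:
        //     SampleVector v = samples.value<SampleVector>();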


  • OpenGL Polygon Stipple Not Working On Different Machine

    - by FranticPedantic
    I have a situation where I am trying to draw a semi-transparent rectangle over a background that is not using OpenGL, so I cannot use blending. I decided to use polygon stippling for a 'screen door transparency' effect, as recommended by some. It works fine on my machine and some others, but on some machines with slightly old Intel graphics cards it fails to render the rectangle at all. If I turn off polygon stipple, it renders fine (but without the stipple). I have compared many of the state variables that I thought might affect it (see code) between machines, and they are all the same, and I get no errors.

        static const GLubyte stipplePatternChkr[128]; // definition omitted for clarity, but works on my machine

        // stipple the box
        glPolygonMode(GL_FRONT_AND_BACK, GL_FILL);
        glColor4ubv(Color(COLORREF_PADGRAY));
        glEnable(GL_POLYGON_STIPPLE);
        glPolygonStipple(stipplePatternChkr);

        CRect rcStipple(dim);
        rcStipple.DeflateRect(padding - 1, padding - 1);
        glBegin(GL_QUADS);
        glVertex2i(rcStipple.left, rcStipple.bottom);
        glVertex2i(rcStipple.right, rcStipple.bottom);
        glVertex2i(rcStipple.right, rcStipple.top);
        glVertex2i(rcStipple.left, rcStipple.top);
        glEnd();
        glDisable(GL_POLYGON_STIPPLE);

        int err = glGetError();
        if (err != GL_NO_ERROR)
        {
            TRACE("glError(%s: %s)\n", s, gluErrorString(err));
        }

        float x;
        glGetFloatv(GL_UNPACK_ALIGNMENT, &x);
        TRACE("unpack alignment %f\n", x);
        glGetFloatv(GL_UNPACK_IMAGE_HEIGHT, &x);
        TRACE("unpack height %f\n", x);
        glGetFloatv(GL_UNPACK_LSB_FIRST, &x);
        TRACE("unpack lsb %f\n", x);
        glGetFloatv(GL_UNPACK_ROW_LENGTH, &x);
        TRACE("unpack length %f\n", x);
        glGetFloatv(GL_UNPACK_SKIP_PIXELS, &x);
        TRACE("unpack skip %f\n", x);
        glGetFloatv(GL_UNPACK_SWAP_BYTES, &x);
        TRACE("unpack swap %f\n", x);


  • NSKeyedUnarchiver chokes when trying to unarchive more than one object

    - by ajduff574
    We've got a custom matrix class, and we're attempting to archive and unarchive an NSArray containing four of them. The first seems to get unarchived fine (we can see that initWithCoder is called once), but then the program simply hangs, using 100% CPU. It doesn't continue or output any errors.

    These are the relevant methods from the matrix class (rows, columns, and matrix are our only instance variables):

        -(void)encodeWithCoder:(NSCoder*) coder
        {
            float temp[rows * columns];
            for (int i = 0; i < rows; i++) {
                for (int j = 0; j < columns; j++) {
                    temp[columns * i + j] = matrix[i][j];
                }
            }
            [coder encodeBytes:(const void *)temp length:rows*columns*sizeof(float) forKey:@"matrix"];
            [coder encodeInteger:rows forKey:@"rows"];
            [coder encodeInteger:columns forKey:@"columns"];
        }

        -(id)initWithCoder:(NSCoder *) coder
        {
            if (self = [super init]) {
                rows = [coder decodeIntegerForKey:@"rows"];
                columns = [coder decodeIntegerForKey:@"columns"];
                NSUInteger * len;   // note: len is an uninitialized pointer...
                *len = (unsigned int)(rows * columns * sizeof(float));   // ...so writing through it here is undefined behavior
                float * temp = (float * )[coder decodeBytesForKey:@"matrix" returnedLength:len];
                matrix = (float ** )calloc(rows, sizeof(float*));
                for (int i = 0; i < rows; i++) {
                    matrix[i] = (float*)calloc(columns, sizeof(float));
                }
                for (int i = 0; i < rows * columns; i++) {
                    matrix[i / columns][i % columns] = temp[i];
                }
            }
            return self;
        }

    And this is really all we're trying to do:

        NSArray * weightMatrices = [NSArray arrayWithObjects:w1,w2,w3,w4,nil];
        [NSKeyedArchiver archiveRootObject:weightMatrices toFile:@"weights.archive"];
        NSArray * newWeights = [NSKeyedUnarchiver unarchiveObjectWithFile:@"weights.archive"];

    What's driving us crazy is that we can archive and unarchive a single matrix just fine. We've done so (successfully) with a matrix many times larger than these four combined.


  • How can I return a value from a function

    - by Shadi Al Mahallawy
    I used a function to calculate information about certain instructions I initialized in a map, like this:

        void get_objectcode(char*& token1, const int &y)
        {
            map<string,int> operations;
            operations["ADD"] = 18;
            operations["AND"] = 40;
            operations["COMP"] = 28;
            operations["DIV"] = 24;
            operations["J"] = 0X3c;
            operations["JEQ"] = 30;
            operations["JGT"] = 34;
            operations["JLT"] = 38;
            operations["JSUB"] = 48;
            operations["LDA"] = 00;
            operations["LDCH"] = 50;
            operations["LDL"] = 55;
            operations["LDX"] = 04;
            operations["MUL"] = 20;
            operations["OR"] = 44;
            operations["RD"] = 0xd8;
            operations["RSUB"] = 0x4c;
            operations["STA"] = 0x0c;
            operations["STCH"] = 54;
            operations["STL"] = 14;
            operations["STSW"] = 0xe8;
            operations["STX"] = 10;
            operations["SUB"] = 0x1c;
            operations["TD"] = 0xe0;
            operations["TIX"] = 0x2c;
            operations["WD"] = 0xdc;

            // note: this entire chain just tests whether token1 is one of the map's keys;
            // operations.find(token1) != operations.end() would say the same thing
            if ((operations.find("ADD")->first==token1)||(operations.find("AND")->first==token1)||(operations.find("COMP")->first==token1)
                ||(operations.find("DIV")->first==token1)||(operations.find("J")->first==token1)||(operations.find("JEQ")->first==token1)
                ||(operations.find("JGT")->first==token1)||(operations.find("JLT")->first==token1)||(operations.find("JSUB")->first==token1)
                ||(operations.find("LDA")->first==token1)||(operations.find("LDCH")->first==token1)||(operations.find("LDL")->first==token1)
                ||(operations.find("LDX")->first==token1)||(operations.find("MUL")->first==token1)||(operations.find("OR")->first==token1)
                ||(operations.find("RD")->first==token1)||(operations.find("RSUB")->first==token1)||(operations.find("STA")->first==token1)
                ||(operations.find("STCH")->first==token1)||(operations.find("STCH")->first==token1)||(operations.find("STL")->first==token1)
                ||(operations.find("STSW")->first==token1)||(operations.find("STX")->first==token1)||(operations.find("SUB")->first==token1)
                ||(operations.find("TD")->first==token1)||(operations.find("TIX")->first==token1)||(operations.find("WD")->first==token1))
            {
                int y = operations.find(token1)->second;   // note: declares a new local y that shadows the const reference parameter
                //cout<<hex<<y<<endl;
            }
            return;
        }

    If I cout y inside the function, it gives me an answer just fine, which is what I need. But there is a problem trying to return the value from the function so that I can use it outside the function - it gives a whole different answer. What is the problem?
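    The inner "int y = ..." declares a brand-new local variable that shadows the const reference parameter y, so the looked-up value dies at the closing brace, and the parameter itself is const and can't carry anything out. The simplest fix is to make the lookup the return value. A sketch (the -1 convention for unknown mnemonics is this sketch's choice, not the original code's):

        // sketch: build the table once, look the mnemonic up, and return the opcode
        #include <map>
        #include <string>

        int get_objectcode(const std::string& token)
        {
            static std::map<std::string, int> operations;
            if (operations.empty()) {                 // filled on first call
                operations["ADD"] = 18; operations["AND"] = 40; operations["COMP"] = 28;
                // ... the rest of the table from the question ...
            }
            std::map<std::string, int>::const_iterator it = operations.find(token);
            if (it == operations.end())
                return -1;                            // unknown mnemonic
            return it->second;
        }

        // caller:
        //     int y = get_objectcode(token1);
        //     cout << hex << y << endl;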


  • Passing a variable from Excel 2007 Custom Task Pane to Hosted PowerShell

    - by Uros Calakovic
    I am testing PowerShell hosting using C#. Here is a console application that works:

        using System;
        using System.Collections;
        using System.Collections.Generic;
        using System.Collections.ObjectModel;
        using System.Management.Automation;
        using System.Management.Automation.Runspaces;
        using Microsoft.Office.Interop.Excel;

        namespace ConsoleApplication3
        {
            class Program
            {
                static void Main()
                {
                    Application app = new Application();
                    app.Visible = true;
                    app.Workbooks.Add(XlWBATemplate.xlWBATWorksheet);

                    Runspace runspace = RunspaceFactory.CreateRunspace();
                    runspace.Open();
                    runspace.SessionStateProxy.SetVariable("Application", app);
                    Pipeline pipeline = runspace.CreatePipeline("$Application");

                    Collection<PSObject> results = null;
                    try
                    {
                        results = pipeline.Invoke();
                        foreach (PSObject pob in results)
                        {
                            Console.WriteLine(pob);
                        }
                    }
                    catch (RuntimeException re)
                    {
                        Console.WriteLine(re.GetType().Name);
                        Console.WriteLine(re.Message);
                    }
                }
            }
        }

    I first create an Excel.Application instance and pass it to the hosted PowerShell instance as a variable named $Application. This works, and I can use this variable as if Excel.Application had been created from within PowerShell.

    I next created an Excel addin using VS 2008 and added a user control with two text boxes and a button to the addin (the user control appears as a custom task pane when Excel starts). The idea was this: when I click the button, a hosted PowerShell instance is created and I can pass it the current Excel.Application instance as a variable, just like in the first sample, so I can use this variable to automate Excel from PowerShell (one text box would be used for input and the other one for output). Here is the code:

        using System;
        using System.Windows.Forms;
        using System.Management.Automation;
        using System.Management.Automation.Runspaces;
        using System.Collections.ObjectModel;
        using Microsoft.Office.Interop.Excel;

        namespace POSHAddin
        {
            public partial class POSHControl : UserControl
            {
                public POSHControl()
                {
                    InitializeComponent();
                }

                private void btnRun_Click(object sender, EventArgs e)
                {
                    txtOutput.Clear();
                    Microsoft.Office.Interop.Excel.Application app = Globals.ThisAddIn.Application;

                    Runspace runspace = RunspaceFactory.CreateRunspace();
                    runspace.Open();
                    runspace.SessionStateProxy.SetVariable("Application", app);
                    Pipeline pipeline = runspace.CreatePipeline("$Application | Get-Member | Out-String");

                    app.ActiveCell.Value2 = "Test";

                    Collection<PSObject> results = null;
                    try
                    {
                        results = pipeline.Invoke();
                        foreach (PSObject pob in results)
                        {
                            txtOutput.Text += pob.ToString() + "-";
                        }
                    }
                    catch (RuntimeException re)
                    {
                        txtOutput.Text += re.GetType().Name;
                        txtOutput.Text += re.Message;
                    }
                }
            }
        }

    The code is similar to the first sample, except that the current Excel.Application instance is available to the addin via Globals.ThisAddIn.Application (VSTO generated), and I can see that it is really a Microsoft.Office.Interop.Excel.Application instance because I can use things like app.ActiveCell.Value2 = "Test" (this actually puts the text into the active cell). But when I pass the Excel.Application instance to the PowerShell instance, what gets there is an instance of System.__ComObject, and I can't figure out how to cast it to Excel.Application. When I examine the variable from PowerShell using $Application | Get-Member, this is the output I get in the second text box:

        TypeName: System.__ComObject

        Name                      MemberType Definition
        ----                      ---------- ----------
        CreateObjRef              Method     System.Runtime.Remoting.ObjRef CreateObj...
        Equals                    Method     System.Boolean Equals(Object obj)
        GetHashCode               Method     System.Int32 GetHashCode()
        GetLifetimeService        Method     System.Object GetLifetimeService()
        GetType                   Method     System.Type GetType()
        InitializeLifetimeService Method     System.Object InitializeLifetimeService()
        ToString                  Method     System.String ToString()

    My question is: how can I pass an instance of Microsoft.Office.Interop.Excel.Application from a VSTO-generated Excel 2007 addin to a hosted PowerShell instance, so I can manipulate it from PowerShell? (I have previously posted the question in the Microsoft C# forum without an answer.)


  • Placing & deleting element(s) from an object (stack)

    - by Chris
    Hello,

    Initialising 2 stack objects:

        Stack s1 = new Stack(), s2 = new Stack();
        // s1 = 0 0 0 0 0 0 0 0 0 0 (array of 10 elements which is empty to start with) top:0

        const int Rows = 10;
        int[] Table = new int[Rows];

        public void TableStack(int[] Table)
        {
            for (int i = 0; i < Table.Length; i++)
            {
            }
        }

    My question is how exactly do I place an element on the stack (push) or take an element from the stack (pop), as in the following:

    Push:

        s1.Push(5); // s1 = 5 0 0 0 0 0 0 0 0 0 (top:1)
        s1.Push(9); // s1 = 5 9 0 0 0 0 0 0 0 0 (top:2)

    Pop:

        int number = s1.Pop(); // s1 = 5 0 0 0 0 0 0 0 0 0 (top:1) 9 got removed

    Do I have to use get & set, and if so, how exactly do I implement this with an array? Not really sure what to use for this. Any hints highly appreciated.

    The program uses the following driver to test the Stack class (which cannot be changed or modified):

        public void ExecuteProgram()
        {
            Console.Title = "StackDemo";
            Stack s1 = new Stack(), s2 = new Stack();
            ShowStack(s1, "s1");
            ShowStack(s2, "s2");
            Console.WriteLine();

            int getal = TryPop(s1);
            ShowStack(s1, "s1");
            TryPush(s2, 17);
            ShowStack(s2, "s2");
            TryPush(s2, -8);
            ShowStack(s2, "s2");
            TryPush(s2, 59);
            ShowStack(s2, "s2");
            Console.WriteLine();

            for (int i = 1; i <= 3; i++)
            {
                TryPush(s1, 2 * i);
                ShowStack(s1, "s1");
            }
            Console.WriteLine();

            for (int i = 1; i <= 3; i++)
            {
                TryPush(s2, i * i);
                ShowStack(s2, "s2");
            }
            Console.WriteLine();

            for (int i = 1; i <= 6; i++)
            {
                getal = TryPop(s2); //use number
                ShowStack(s2, "s2");
            }
        } /*ExecuteProgram*/

    Regards.
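    The driver above is C#, but the array-backed bookkeeping it expects is language-independent: keep an array plus a top index pointing at the next free slot, store-then-advance on push, retreat-then-read on pop. Sketched here in C++ (the names and the bool-for-overflow convention are this sketch's choices, not the assignment's):

        // sketch: array-backed stack where 'top' is the index of the next free slot
        class Stack
        {
            static const int Rows = 10;
            int table[Rows];
            int top;
        public:
            Stack() : top(0) { for (int i = 0; i < Rows; i++) table[i] = 0; }
            bool Push(int value)
            {
                if (top == Rows) return false;   // stack full
                table[top++] = value;            // store, then advance top
                return true;
            }
            bool Pop(int& value)
            {
                if (top == 0) return false;      // stack empty
                value = table[--top];            // retreat top, then read
                table[top] = 0;                  // clear the slot, matching the "5 0 0 ..." display
                return true;
            }
        };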


  • iPhone - database reading method and memory leaks

    - by Do8821
    Hi, in my application, an RSS reader, I get memory leaks that I can't fix because I can't understand where they come from. Here is the code pointed out by Instruments:

        -(void) readArticlesFromDatabase
        {
            [self setDatabaseInfo];
            sqlite3 *database;
            articles = [[NSMutableArray alloc] init];

            if (sqlite3_open([databasePath UTF8String], &database) == SQLITE_OK)
            {
                const char *sqlStatement = "select * from articles";
                if (sqlite3_prepare_v2(database, sqlStatement, -1, &compiledStatement, NULL) == SQLITE_OK)
                {
                    while (sqlite3_step(compiledStatement) == SQLITE_ROW)
                    {
                        NSString *aName = [NSString stringWithUTF8String:(char *)sqlite3_column_text(compiledStatement, 1)];
                        NSString *aDate = [NSString stringWithUTF8String:(char *)sqlite3_column_text(compiledStatement, 2)];
                        NSString *aUrl = [NSString stringWithUTF8String:(char *)sqlite3_column_text(compiledStatement, 3)];
                        NSString *aCategory = [NSString stringWithUTF8String:(char *)sqlite3_column_text(compiledStatement, 4)];
                        NSString *aAuthor = [NSString stringWithUTF8String:(char *)sqlite3_column_text(compiledStatement, 5)];
                        NSString *aSummary = [NSString stringWithUTF8String:(char *)sqlite3_column_text(compiledStatement, 6)];
                        NSMutableString *aContent = [NSMutableString stringWithUTF8String:(char *)sqlite3_column_text(compiledStatement, 7)];
                        NSString *aNbrComments = [NSString stringWithUTF8String:(char *)sqlite3_column_text(compiledStatement, 8)];
                        NSString *aCommentsLink = [NSString stringWithUTF8String:(char *)sqlite3_column_text(compiledStatement, 9)];
                        NSString *aPermalink = [NSString stringWithUTF8String:(char *)sqlite3_column_text(compiledStatement, 11)];

                        [aContent replaceCharactersInRange: [aContent rangeOfString: @"http://www.mywebsite.com/img/action-on.gif"] withString: @"hellocoton-action-on.gif"];
                        [aContent replaceCharactersInRange: [aContent rangeOfString: @"hhttp://www.mywebsite.com/img/action-on-h.gif"] withString: @"hellocoton-action-on-h.gif"];
                        [aContent replaceCharactersInRange: [aContent rangeOfString: @"hthttp://www.mywebsite.com/img/hellocoton.gif"] withString: @"hellocoton-hellocoton.gif"];

                        NSString *imageURLBrut = [self parseArticleForImages:aContent];
                        NSString *imageURLCache = [imageURLBrut stringByReplacingOccurrencesOfString:@":" withString:@"_"];
                        imageURLCache = [imageURLCache stringByReplacingOccurrencesOfString:@"/" withString:@"_"];
                        imageURLCache = [imageURLCache stringByReplacingOccurrencesOfString:@" " withString:@"_"];

                        NSString *uniquePath = [tmp stringByAppendingPathComponent: imageURLCache];
                        if ([[NSFileManager defaultManager] fileExistsAtPath: uniquePath])
                        {
                            imageURLCache = [@"../tmp/" stringByAppendingString: imageURLCache];
                            [aContent replaceCharactersInRange: [aContent rangeOfString: imageURLBrut] withString: imageURLCache];
                        }

                        Article *article = [[Article alloc] initWithName:aName date:aDate url:aUrl category:aCategory author:aAuthor summary:aSummary content:aContent commentsNbr:aNbrComments commentsLink:aCommentsLink commentsRSS:@"" enclosure:aPermalink enclosure2:@"" enclosure3:@""];
                        [articles addObject:article];
                        article = nil;       // note: article is set to nil *before* the release below...
                        [article release];   // ...so this sends release to nil, and every Article leaks
                    }
                }
                sqlite3_finalize(compiledStatement);
            }
            sqlite3_close(database);
        }

    I have a lot of "Article" objects leaked, and NSStrings matching those created with:

        [NSString stringWithUTF8String:(char *)sqlite3_column_text(compiledStatement, X)];

    I tried a lot of different code; I always have these leaks. Anyone got an idea to help me?


  • NHibernate Proxy Creation

    - by Chris Meek
    I have a class structure like the following:

        class Container
        {
            public virtual int Id { get; set; }
            public IList<Base> Bases { get; set; }
        }

        class Base
        {
            public virtual int Id { get; set; }
            public virtual string Name { get; set; }
        }

        class EnemyBase : Base
        {
            public virtual int EstimatedSize { get; set; }
        }

        class FriendlyBase : Base
        {
            public virtual int ActualSize { get; set; }
        }

    Now when I ask the session for a particular Container, it normally gives me the concrete EnemyBase and FriendlyBase objects in the Bases collection. I can then (if I so choose) cast them to their concrete types and do something specific with them. However, sometimes I get a proxy of the "Base" class, which is not castable to the concrete types. The same method is used both times, with the only difference being that in the case where I get proxies I have added some related entities to the session (think the friendly base having a collection of people or something like that). Is there any way I can prevent it from doing the proxy creation, and why would it choose to do this in some scenarios?

    UPDATE

    The mappings are generated with the automap feature of Fluent NHibernate, but look something like this when exported:

        <hibernate-mapping xmlns="urn:nhibernate-mapping-2.2" default-access="property" auto-import="true" default-cascade="none" default-lazy="true">
          <class xmlns="urn:nhibernate-mapping-2.2" mutable="true" name="Base" table="`Base`">
            <id name="Id" type="System.Int32, mscorlib, Version=2.0.0.0, Culture=neutral, PublicKeyToken=b77a5c561934e089">
              <column name="Id" />
              <generator class="MyIdGenerator" />
            </id>
            <property name="Name" type="String">
              <column name="Name" />
            </property>
            <joined-subclass name="EnemyBase">
              <key>
                <column name="Id" />
              </key>
              <property name="EstimatedSize" type="Int">
                <column name="EstimatedSize" />
              </property>
            </joined-subclass>
            <joined-subclass name="FriendlyBase">
              <key>
                <column name="Id" />
              </key>
              <property name="ActualSize" type="Int">
                <column name="ActualSize" />
              </property>
            </joined-subclass>
          </class>
        </hibernate-mapping>

        <hibernate-mapping xmlns="urn:nhibernate-mapping-2.2" default-access="property" auto-import="true" default-cascade="none" default-lazy="true">
          <class xmlns="urn:nhibernate-mapping-2.2" mutable="true" name="Container" table="`Container`">
            <id name="Id" type="System.Int32, mscorlib, Version=2.0.0.0, Culture=neutral, PublicKeyToken=b77a5c561934e089">
              <column name="Id" />
              <generator class="MyIdGenerator" />
            </id>
            <bag cascade="all-delete-orphan" inverse="true" lazy="false" name="Bases" mutable="true">
              <key>
                <column name="ContainerId" />
              </key>
              <one-to-many class="Base" />
            </bag>
          </class>
        </hibernate-mapping>


  • Exception opening TAdoDataset: Arguments are of the wrong type, are out of acceptable range, or are in conflict with one another

    - by Dave Falkner
    I've been trying to debug the following problem for several weeks now. This method is called from several places within the same datamodule, but this exception (from the subject line of this post) only occurs when integers for a certain purpose (pickup orders vs. orders that we ship through a carrier) are used - and don't ask me how the application can tell the difference between one integer's purpose and another! Furthermore, I cannot duplicate this issue on my machine - the error occurs on a warehouse machine but not my own development machine, even when working with the same production database. I have suspected an MDAC version conflict between the two machines, but have run a version checker and confirmed that both machines are running 2.8, and additionally have confirmed this by logging the TAdoDataset's .Version property at runtime.

        function TdmESShip.SecondaryID(const PrimaryID : Integer): String;
        begin
          try
            with qESPackage2 do
            begin
              if Active then Close;
              LogMessage('-----------------------------------');
              LogMessage('Version: ' + FConnection.Version);
              LogMessage('DB Info: ' + FConnection.Properties['Initial Catalog'].Value + ' ' + FConnection.Properties['Data Source'].Value);
              LogMessage('Setting the parameter.');
              Parameters.ParamByName('ParameterName').Value := PrimaryID;
              LogMessage('Done setting the parameter.');
              Open;

    Ninety-nine times out of 100 this logging code logs a successful operation as follows:

        Version: 2.8
        DB Info: (database name and instance)
        Setting the parameter.
        Done setting the parameter.
        Opened the dataset.

    But then whenever a "pickup" order is processed, this exception gets thrown whenever the dataset is opened:

        Version: 2.8
        DB Info: (database name and instance)
        Setting the parameter.
        Done setting the parameter.
        GetESPackageID() threw an exception. Type: EOleException, Message: Arguments are of the wrong type, are out of acceptable range, or are in conflict with one another
        Error: Arguments are of the wrong type, are out of acceptable range, or are in conflict with one another for packageID 10813711

    I've tried eliminating the parameter and have built the commandtext for this dataset programmatically, suspecting that some part of the TParameter's configuration might be out of whack, but the same error occurs under the same circumstances. I've tried every combination of TParameter properties that I can think of - this is the millionth TParameter I've created for my millionth dataset, and I've never encountered this error. I've even created a second dataset from scratch and removed all references to the original dataset, in case some property of the original dataset in the .dfm might be corrupted, but the same error occurs under the same circumstances. The commandtext for this dataset is a simple

        select ValueA from TableName where ValueB = @ParameterB

    I'm about ready to do something extreme, such as writing a web service to look these values up - it feels right now as though I could destroy my machine, rebuild it, rewrite this entire application from scratch, and the application would still know to throw an exception whenever I try to look up a secondary value from a primary value, but only for pickup orders, and only from the one machine in the warehouse - but I'm probably missing something simple. So, any help anyone could provide would be greatly appreciated.

    Read the article

  • DataAnnotation attributes buddy class strangeness - ASP.NET MVC

    - by JK
    Given this POCO class that was automatically generated by an Entity Framework T4 template (it has not been and cannot be manually edited in any way):

        public partial class Customer {
            [Required]
            [StringLength(20, ErrorMessage = "Customer Number - Please enter no more than 20 characters.")]
            [DisplayName("Customer Number")]
            public virtual string CustomerNumber { get; set; }

            [Required]
            [StringLength(10, ErrorMessage = "ACNumber - Please enter no more than 10 characters.")]
            [DisplayName("ACNumber")]
            public virtual string ACNumber { get; set; }
        }

    Note that "ACNumber" is a badly named database field, so the autogenerator is unable to generate the correct display name and error message, which should be "Account Number". So we manually create this buddy class to add custom attributes that could not be automatically generated:

        [MetadataType(typeof(CustomerAnnotations))]
        public partial class Customer { }

        public class CustomerAnnotations {
            [NumberCode] // This line does not work
            public virtual string CustomerNumber { get; set; }

            [StringLength(10, ErrorMessage = "Account Number - Please enter no more than 10 characters.")]
            [DisplayName("Account Number")]
            public virtual string ACNumber { get; set; }
        }

    where [NumberCode] is a simple regex-based attribute that allows only digits and hyphens:

        [AttributeUsage(AttributeTargets.Property)]
        public class NumberCodeAttribute : RegularExpressionAttribute {
            private const string REGX = @"^[0-9-]+$";
            public NumberCodeAttribute() : base(REGX) { }
        }

    Now, when I load the page, the DisplayName attribute works correctly: it shows the display name from the buddy class, not the one from the generated class. The StringLength attribute does not work correctly: it shows the error message from the generated class ("ACNumber" instead of "Account Number"). And the [NumberCode] attribute in the buddy class does not even get applied to the CustomerNumber property:

        foreach (ValidationAttribute attrib in prop.Attributes.OfType<ValidationAttribute>()) {
            // This collection correctly contains all the [Required] and [StringLength]
            // attributes, BUT does not contain the [NumberCode] attribute
            ApplyValidation(generator, attrib);
        }

    Why does the prop.Attributes.OfType<ValidationAttribute>() collection not contain the [NumberCode] attribute? NumberCode inherits from RegularExpressionAttribute, which inherits from ValidationAttribute, so it should be there. If I manually move the [NumberCode] attribute to the autogenerated class, it is included in the prop.Attributes.OfType<ValidationAttribute>() collection. So what I don't understand is why this particular attribute does not work when in the buddy class, when other attributes in the buddy class do work - and why it works in the autogenerated class but not in the buddy class. Any ideas? Also, why does DisplayName get overridden by the buddy class when StringLength does not?
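    One diagnostic that may narrow this down - a sketch assuming .NET 4's TypeDescriptor API; the MetadataCheck wrapper and the console output are illustrative, not part of the question - is to register the buddy-class association explicitly and see whether [NumberCode] then surfaces. If it appears only after this registration, whatever reads prop.Attributes is not applying the MetadataType association itself:

        using System;
        using System.ComponentModel;
        using System.ComponentModel.DataAnnotations;
        using System.Linq;

        static class MetadataCheck
        {
            public static void Run()
            {
                // Make the CustomerAnnotations buddy class visible to TypeDescriptor
                TypeDescriptor.AddProviderTransparent(
                    new AssociatedMetadataTypeTypeDescriptionProvider(typeof(Customer)),
                    typeof(Customer));

                var attrs = TypeDescriptor.GetProperties(typeof(Customer))["CustomerNumber"]
                    .Attributes.OfType<ValidationAttribute>();

                foreach (var a in attrs)
                    Console.WriteLine(a.GetType().Name);  // NumberCodeAttribute should be listed
            }
        }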

    Read the article

  • cannot retrieve effect.fx file

    - by numerical25
    I am having issues loading my effect.fx with DirectX. When I step through my application, my ID3D10Effect *m_pDefaultEffect pointer remains empty - the address stays at 0x000000. Below is my code:

        #pragma once
        #include "stdafx.h"
        #include "resource.h"
        #include "d3d10.h"
        #include "d3dx10.h"
        #include "dinput.h"

        #define MAX_LOADSTRING 100

        class RenderEngine
        {
        protected:
            RECT m_screenRect;

            // Direct3D members
            ID3D10Device *m_pDevice;                      // The ID3D10Device interface
            ID3D10Texture2D *m_pBackBuffer;               // Pointer to the back buffer
            ID3D10RenderTargetView *m_pRenderTargetView;  // Pointer to render target view
            IDXGISwapChain *m_pSwapChain;                 // Pointer to the swap chain
            RECT m_rcScreenRect;                          // The dimensions of the screen
            ID3D10Texture2D *m_pDepthStencilBuffer;
            ID3D10DepthStencilState *m_pDepthStencilState;
            ID3D10DepthStencilView *m_pDepthStencilView;

            // Transformation matrices
            D3DXMATRIX g_mtxWorld;
            D3DXMATRIX g_mtxView;
            D3DXMATRIX g_mtxProj;

            // Effect members
            ID3D10Effect *m_pDefaultEffect;
            ID3D10EffectTechnique *m_pDefaultTechnique;

            ID3DX10Font *m_pFont;          // The font used for rendering text
            ID3DX10Sprite *m_pFontSprite;  // Sprites used to hold font characters

            ATOM RegisterEngineClass();
            void DoFrame(float);
            bool LoadEffects();

        public:
            static HINSTANCE m_hInst;
            HWND m_hWnd;
            int m_nCmdShow;
            TCHAR m_szTitle[MAX_LOADSTRING];        // The title bar text
            TCHAR m_szWindowClass[MAX_LOADSTRING];  // The main window class name

            void DrawTextString(int x, int y, D3DXCOLOR color, const TCHAR *strOutput);

            // Static functions
            static LRESULT CALLBACK WndProc(HWND hWnd, UINT message, WPARAM wParam, LPARAM lParam);
            static INT_PTR CALLBACK About(HWND hDlg, UINT message, WPARAM wParam, LPARAM lParam);

            bool InitWindow();
            bool InitDirectX();
            bool InitInstance();
            int Run();
            void ShutDown();

            RenderEngine()
            {
                m_screenRect.right = 800;
                m_screenRect.bottom = 600;
            }
        };

    Below is the implementation:

        bool RenderEngine::LoadEffects()
        {
            HRESULT hr;
            ID3D10Blob *pErrors = 0;

            // Create the default rendering effect
            hr = D3DX10CreateEffectFromFile(L"effect.fx", NULL, NULL, "fx_4_0",
                D3D10_SHADER_DEBUG, 0, m_pDevice, NULL, NULL,
                &m_pDefaultEffect, &pErrors, NULL);

            if (pErrors) // at this point, m_pDefaultEffect is still empty but pErrors
            {            // returns data, which means there are errors
                return false; // ends here
            }

            //m_pDefaultTechnique = m_pDefaultEffect->GetTechniqueByName("DefaultTechnique");
            return true;
        }

    My DirectX device does work. My effect.fx file is in the same folder as my solution files (the .cpp and header files).
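    Two things are worth checking here. First, D3DX10CreateEffectFromFile resolves a bare L"effect.fx" against the process's working directory, which is often the project or build-output folder rather than the folder holding the .cpp and header files, so an empty effect pointer frequently just means the file was not found. Second, the HRESULT and the error blob can be inspected rather than only tested. A hedged sketch (the same D3DX10 call as above; the strictness flag and the logging are additions, not the original code):

        // Hedged sketch: check the HRESULT and dump the compiler output in pErrors,
        // which names the actual failure (missing file, bad profile, HLSL error...).
        bool RenderEngine::LoadEffects()
        {
            ID3D10Blob *pErrors = NULL;
            HRESULT hr = D3DX10CreateEffectFromFile(L"effect.fx", NULL, NULL, "fx_4_0",
                D3D10_SHADER_ENABLE_STRICTNESS, 0, m_pDevice, NULL, NULL,
                &m_pDefaultEffect, &pErrors, NULL);

            if (FAILED(hr))
            {
                if (pErrors)
                {
                    // The blob holds a null-terminated ANSI string from the compiler
                    OutputDebugStringA((const char *)pErrors->GetBufferPointer());
                    pErrors->Release();
                }
                return false;
            }
            return true;
        }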

    Read the article

  • Qt - problem appending to QList of QList

    - by bullettime
    I'm trying to append items to a QList at runtime, but I'm running into an error message. Basically, what I'm trying to do is make a QList of QLists and add a few customClass objects to each of the inner lists. Here's my code:

    widget.h:

        class Widget : public QWidget
        {
            Q_OBJECT

        public:
            Widget(QWidget *parent = 0);
            ~Widget();

        public slots:
            static QList<QList<customClass> > testlist(){
                QList<QList<customClass> > mylist;
                for(int w=0 ; w<5 ; w++){
                    mylist.append(QList<customClass>());
                }
                for(int z=0 ; z<mylist.size() ; z++){
                    for(int x=0 ; x<10 ; x++){
                        customClass co = customClass();
                        mylist.at(z).append(co);
                    }
                }
                return mylist;
            }
        };

    customclass.h:

        class customClass
        {
        public:
            customClass(){
                this->varInt = 1;
                this->varQString = "hello world";
            }

            int varInt;
            QString varQString;
        };

    main.cpp:

        int main(int argc, char *argv[])
        {
            QApplication a(argc, argv);
            Widget w;
            QList<QList<customClass> > list;
            list = w.testlist();
            w.show();
            return a.exec();
        }

    But when I try to run the application, it gives this error:

        error: passing `const QList<customClass>' as `this' argument of
        `void QList<T>::append(const T&) [with T = customClass]' discards qualifiers

    I also tried inserting the objects using foreach:

        foreach(QList<customClass> list, mylist){
            for(int x=0 ; x<10 ; x++){
                list.append(customClass());
            }
        }

    The error was gone, but the customClass objects weren't appended. I verified that with a debugging loop in main that showed the inner QLists' sizes as zero. What am I doing wrong?
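    The root cause is that QList::at() returns a const reference, so append() cannot be called through it - hence the "discards qualifiers" error - while the foreach variant compiles but iterates over copies of the inner lists, which is why those appends vanish. A minimal sketch of the fix, using the same names as testlist() above:

        // operator[] on a non-const QList returns a modifiable reference,
        // which is exactly what at() deliberately does not do.
        for (int z = 0; z < mylist.size(); z++) {
            QList<customClass> &inner = mylist[z];  // mutable reference into mylist
            for (int x = 0; x < 10; x++) {
                inner.append(customClass());
            }
        }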

    Read the article
