Search Results

Search found 6123 results on 245 pages for 'unsigned char'.

  • C++ floating point precision

    - by Davinel
        double a = 0.3;
        std::cout.precision(20);
        std::cout << a << std::endl;

    result: 0.2999999999999999889

        double a, b;
        a = 0.3;
        b = 0;
        for (char i = 1; i <= 50; i++) {
            b = b + a;
        }
        std::cout.precision(20);
        std::cout << b << std::endl;

    result: 15.000000000000014211

    So 'a' is smaller than it should be, but if we add 'a' to itself 50 times, the result is bigger than it should be. Why is this? And how can I get the correct result in this case?
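
    For what it's worth, 0.3 has no finite binary representation, so 'a' is rounded to the nearest double (slightly below 0.3), and the loop then compounds fifty per-addition rounding errors in the other direction. One workaround, sketched below under the assumption that all values involved are exact multiples of 0.1, is to accumulate in integers (which are exact) and convert to double only once at the end:

        #include <iostream>

        int main() {
            // accumulate tenths as integers: 50 additions of 3 tenths is exact
            long tenths = 0;
            for (int i = 1; i <= 50; i++)
                tenths += 3;
            std::cout.precision(20);
            std::cout << tenths / 10.0 << std::endl;  // prints 15; 15 is exactly representable
            return 0;
        }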

  • Returning a tuple of multiple objects in the Python C API

    - by celil
    I am writing a native function that will return multiple Python objects:

        PyObject *V = PyList_New(0);
        PyObject *E = PyList_New(0);
        PyObject *F = PyList_New(0);
        return Py_BuildValue("ooo", V, E, F);

    This compiles fine; however, when I call it from a Python program, I get an error:

        SystemError: bad format char passed to Py_BuildValue

    How can this be done correctly?

    EDIT: The following works:

        PyObject *rslt = PyTuple_New(3);
        PyTuple_SetItem(rslt, 0, V);
        PyTuple_SetItem(rslt, 1, E);
        PyTuple_SetItem(rslt, 2, F);
        return rslt;

    However, isn't there a shorter way to do this?
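
    For what it's worth, the failing format string looks like a case issue: Py_BuildValue has no lowercase "o" code, only "O" (pass the object and increment its refcount) and "N" (pass it without touching the refcount). Since PyList_New already returns new references, a sketch of the one-line version:

        /* "N" steals each reference, matching the new references from
           PyList_New and avoiding a refcount leak */
        return Py_BuildValue("NNN", V, E, F);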

  • Get special numbers from a random number generator

    - by Wikeno
    I have a random number generator:

        #define GENERATOR_DEG 31  /* the table below has 31 entries */

        int32_t ksp_random_table[GENERATOR_DEG] = {
            -1726662223, 379960547, 1735697613, 1040273694, 1313901226,
            1627687941, -179304937, -2073333483, 1780058412, -1989503057,
            -615974602, 344556628, 939512070, -1249116260, 1507946756,
            -812545463, 154635395, 1388815473, -1926676823, 525320961,
            -1009028674, 968117788, -123449607, 1284210865, 435012392,
            -2017506339, -911064859, -370259173, 1132637927, 1398500161,
            -205601318,
        };

        int front_pointer = 3, rear_pointer = 0;

        int32_t ksp_rand()
        {
            int32_t result;
            ksp_random_table[front_pointer] += ksp_random_table[rear_pointer];
            result = (ksp_random_table[front_pointer] >> 1) & 0x7fffffff;
            front_pointer++, rear_pointer++;
            if (front_pointer >= GENERATOR_DEG) front_pointer = 0;
            if (rear_pointer >= GENERATOR_DEG) rear_pointer = 0;
            return result;
        }

        void ksp_srand(unsigned int seed)
        {
            int32_t i, dst = 0, kc = GENERATOR_DEG, word, hi, lo;
            word = ksp_random_table[0] = (seed == 0) ? 1 : seed;
            for (i = 1; i < kc; ++i) {
                hi = word / 127773, lo = word % 127773;
                word = 16807 * lo - 2836 * hi;
                if (word < 0) word += 2147483647;
                ksp_random_table[++dst] = word;
            }
            front_pointer = 3, rear_pointer = 0;
            kc *= 10;
            while (--kc >= 0) ksp_rand();
        }

    I'd like to know what type of pseudo-random number generation algorithm this is. My guess is a multiple linear congruential generator. And is there a way of seeding this algorithm so that after 987721 (1043 * 947) numbers it would return 15 either even-only, odd-only, or alternating odd and even numbers? It is part of an assignment for a long-term competition and I've got no idea how to solve it. I don't want the final solution; I'd like to learn how to do it myself.
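
    As a nudge in the right direction rather than a solution: whatever the structure of the generator, a candidate seed can always be checked mechanically, and the seed space is only 32 bits. A sketch of such a test harness (seed_matches is a hypothetical helper built on the ksp_srand/ksp_rand definitions above; swap the parity test to cover the odd-only and alternating cases):

        /* does `seed` yield 15 even-only outputs after 987721 draws? */
        int seed_matches(unsigned int seed)
        {
            ksp_srand(seed);
            for (long i = 0; i < 987721; ++i)
                ksp_rand();
            for (int i = 0; i < 15; ++i)
                if (ksp_rand() & 1)   /* low bit set: output is odd */
                    return 0;
            return 1;
        }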

  • Java: What are the various available security settings for applets

    - by bguiz
    I have an applet that throws this exception when trying to communicate with the server (running on localhost). The problem is limited to applets only; a POJO client is able to communicate with the exact same server without any problem.

        Exception in thread "AWT-EventQueue-1" java.security.AccessControlException:
            access denied (java.net.SocketPermission 127.0.0.1:9999 connect,resolve)
            at java.security.AccessControlContext.checkPermission(AccessControlContext.java:323)

    My applet.policy file's contents are:

        grant {
            permission java.security.AllPermission;
        };

    My question is: what are the other places where I need to modify my security settings to grant an applet more permissions? Thank you.

    EDIT: Further investigation has led me to find that this problem only occurs on some machines, but not others. So it could be a machine-level (global) setting that is causing this, rather than an application-specific setting such as the one in the applet.policy file.

    EDIT: Another SO question, "Socket connection to originating server of an unsigned Java applet", seems to describe the exact same problem, and Tom Hawtin - tackline's answer provides the reason why (a security patch was released that disallows applets from connecting to localhost). Bearing this in mind, how do I grant the applet the security settings such that it can indeed run on my machine? Also, why does it run as-is on other machines but not mine?
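
    On the machine-level angle, one thing worth trying (a sketch, and it assumes the JVM on the failing machine is actually reading the policy file in question, e.g. the JRE's lib/security/java.policy or a file passed with -Djava.security.policy) is a grant scoped to exactly the permission named in the exception:

        grant {
            permission java.net.SocketPermission "127.0.0.1:9999", "connect,resolve";
        };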

  • Why is setting HTML5's CanvasPixelArray values ridiculously slow and how can I do it faster?

    - by Nixuz
    I am trying to do some dynamic visual effects using the HTML5 canvas' pixel manipulation, but I am running into a problem where setting pixels in the CanvasPixelArray is ridiculously slow. For example, if I have code like:

        imageData = ctx.getImageData(0, 0, 500, 500);
        for (var i = 0; i < imageData.data.length; i += 4) {
            imageData.data[i]     = buffer[i];
            imageData.data[i + 1] = buffer[i + 1];
            imageData.data[i + 2] = buffer[i + 2];
        }
        ctx.putImageData(imageData, 0, 0);

    profiling with Chrome reveals it runs 44% slower than the following code, where the CanvasPixelArray is not used:

        tempArray = new Array(500 * 500 * 4);
        imageData = ctx.getImageData(0, 0, 500, 500);
        for (var i = 0; i < imageData.data.length; i += 4) {
            tempArray[i]     = buffer[i];
            tempArray[i + 1] = buffer[i + 1];
            tempArray[i + 2] = buffer[i + 2];
        }
        ctx.putImageData(imageData, 0, 0);

    My guess is that the reason for this slowdown is the conversion between the JavaScript doubles and the internal unsigned 8-bit integers used by the CanvasPixelArray. Is this guess correct? Is there any way to reduce the time spent setting values in the CanvasPixelArray?

  • jQueryUI Ajax.NET Postback Bug

    - by nigative
    Hi, I have an ASP.NET page with an ASP.NET UpdatePanel and jQueryUI droppable and sortable components. The page works fine in all browsers except Internet Explorer (IE8 tested). After I trigger an ASP.NET AJAX postback (by pressing my ASP.NET button inside the UpdatePanel), my sortable list stops working properly in IE, and the browser throws the following error:

        Message: Unspecified error.
        Line: 145
        Char: 186
        Code: 0
        URI: http://code.jquery.com/jquery-1.4.2.min.js

    I found out that the problem is caused by the code on line 66:

        $("#droppable").droppable();

    If I comment it out, the sortable list works fine after AJAX postbacks, but that doesn't make sense. Does anyone know what could be wrong? Thanks.

    P.S. I am using jQueryUI 1.8.1 and jQuery 1.4.2.

  • g++ C++0x enum class Compiler Warnings

    - by Travis G
    I've been refactoring my horrible mess of C++ type-safe pseudo-enums to the new C++0x type-safe enums because they're way more readable. Anyway, I use them in exported classes, so I explicitly mark them to be exported:

        enum class __attribute__((visibility("default"))) MyEnum : unsigned int
        {
            One = 1,
            Two = 2
        };

    Compiling this with g++ yields the following warning:

        type attributes ignored after type is already defined

    This seems very strange since, as far as I know, that warning is meant to prevent actual mistakes like:

        class __attribute__((visibility("default"))) MyClass { };
        class __attribute__((visibility("hidden"))) MyClass;

    Of course, I'm clearly not doing that, since I have only marked the visibility attributes at the definition of the enum class, and I'm not re-defining or declaring it anywhere else (I can duplicate this error with a single file). Ultimately, I can't make this bit of code actually cause a problem, save for the fact that if I change a value and re-compile the consumer without re-compiling the shared library, the consumer passes the new values and the shared library has no idea what to do with them (although I wouldn't expect that to work in the first place). Am I being way too pedantic? Can this be safely ignored? I suspect so, but at the same time having this warning prevents me from compiling with -Werror, which makes me uncomfortable. I would really like to see this problem go away.

  • VSS InitializeForBackup fails with return code E_UNEXPECTED

    - by suresh
    #include "vss.h" #include "vswriter.h" #include <VsBackup.h> #include <stdio.h> #define CHECK_PRINT(result) printf("%s\n",result==S_OK?"S_OK":"error") int main(int argc, char* argv[]) { BSTR xml; LPTSTR errorText; IVssBackupComponents *VssHandle; HRESULT result = CreateVssBackupComponents(&VssHandle); CHECK_PRINT(result); result = VssHandle->InitializeForBackup(); printf("unexpected%x\n",result); system("pause"); return 0; } in the above program intializeforbackup fails with error code E_UNEXPECTED. The VSS service is running . In the event log it shows as "Volume Shadow Copy Service error: Unexpected error calling routine CoCreateInstance. hr = 0x800401f0.".. Any solutions for the InitializeForBackup to return S_OK?

  • overriding enumeration base type using pragma or code change

    - by vprajan
    Problem: I am using a big C/C++ code base that works with the gcc and Visual Studio compilers, where the enum base type is by default 32 bits (integer type). This code also has lots of inline and embedded assembly that treats enums as integer type, and enum data is used as 32-bit flags in many cases. When we compiled this code with the RealView ARM RVCT 2.2 compiler, we started getting many issues, since the RealView compiler decides the enum base type automatically based on the values an enum is set to: http://www.keil.com/support/man/docs/armccref/armccref_Babjddhe.htm

    For example, consider the enum below:

        enum Scale {
            TimesOne,   // 0
            TimesTwo,   // 1
            TimesFour,  // 2
            TimesEight, // 3
        };

    This enum is used as a 32-bit flag, but the compiler optimizes the base type down to unsigned char for this enum. Using the --enum_is_int compiler option is not a good solution in our case, since it converts all the enums to 32-bit, which will break interaction with any external code compiled without --enum_is_int. This is a warning I found in the RVCT Compilers and Libraries Guide:

        The --enum_is_int option is not recommended for general use and is not required for
        ISO-compatible source. Code compiled with this option is not compliant with the ABI
        for the ARM Architecture (base standard) [BSABI], and incorrect use might result in
        a failure at runtime. This option is not supported by the C++ libraries.

    Question: How can I convert every enum's base type (by hand-coded changes) to 32 bits without affecting value ordering?

        enum Scale {
            TimesOne = 0x00000000,
            TimesTwo,   // 0x00000001
            TimesFour,  // 0x00000002
            TimesEight, // 0x00000003
        };

    I tried the above change, but the compiler optimizes this too, to our bad luck. :( There is some syntax in .NET like

        enum Scale : int

    Is this ISO C++ standard, and does the ARM compiler lack it? There is no #pragma to control enum size in the ARM RVCT 2.2 compiler. Is there any hidden pragma available?
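
    One portable trick worth sketching, on the assumption that RVCT follows the usual rule of widening the base type just enough to cover the largest enumerator: add a dummy enumerator that forces the range up to a full 32-bit int (Scale_Force32Bit is an illustrative name, not part of the original code):

        enum Scale {
            TimesOne,
            TimesTwo,
            TimesFour,
            TimesEight,
            Scale_Force32Bit = 0x7FFFFFFF  /* dummy value forcing an int-sized base type */
        };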

  • How large should my recv buffer be when calling recv in the socket library

    - by Silmaril89
    Hi, I have a few questions about the socket library in C. Here is a snippet of code I'll refer to in my questions:

        char recv_buffer[3000];
        recv(socket, recv_buffer, 3000, 0);

    First, how do I decide how big to make recv_buffer? I'm using 3000, but it's arbitrary. Second, what happens if recv() receives a packet bigger than my buffer? Third, how can I know if I have received the entire message without calling recv again and having it wait forever when there is nothing left to be received? And finally, is there a way I can make a buffer not have a fixed amount of space, so that I can keep adding to it without fear of running out of space? Maybe using strcat to concatenate the latest recv() response to the buffer? I know it's a lot of questions in one, but I would greatly appreciate any responses.
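
    On the last point, a hedged sketch of the usual growable-buffer pattern (recv_all is a hypothetical helper; note raw bytes plus realloc rather than strcat, since received data need not be NUL-terminated):

        #include <stdlib.h>
        #include <sys/types.h>
        #include <sys/socket.h>

        /* read until the peer closes, growing the buffer as needed;
           returns 0 and hands back the buffer/length, or -1 on error */
        int recv_all(int sock, char **out_buf, size_t *out_len)
        {
            size_t cap = 4096, len = 0;
            char *buf = malloc(cap);
            if (!buf) return -1;
            for (;;) {
                if (len == cap) {             /* buffer full: double the capacity */
                    char *tmp = realloc(buf, cap *= 2);
                    if (!tmp) { free(buf); return -1; }
                    buf = tmp;
                }
                ssize_t n = recv(sock, buf + len, cap - len, 0);
                if (n < 0) { free(buf); return -1; }  /* error */
                if (n == 0) break;                    /* peer closed the connection */
                len += (size_t)n;
            }
            *out_buf = buf;
            *out_len = len;
            return 0;
        }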

  • Create ntp time stamp from gettimeofday

    - by krunk
    I need to calculate an NTP timestamp using gettimeofday. Below is how I've done it, with comments on the method. Does it look good to you guys? (Minus error checking.) Also, here's a codepad link.

        #include <unistd.h>
        #include <sys/time.h>
        #include <stdint.h>

        const unsigned long EPOCH = 2208988800UL;   // delta between epoch time and ntp time
        const double NTP_SCALE_FRAC = 4294967295.0; // maximum value of the ntp fractional part

        int main()
        {
            struct timeval tv;
            uint64_t ntp_time;
            uint64_t tv_ntp;
            double tv_usecs;

            gettimeofday(&tv, NULL);
            tv_ntp = tv.tv_sec + EPOCH;

            // convert tv_usec to a fraction of a second
            // next, we multiply this fraction times the NTP_SCALE_FRAC, which represents
            // the maximum value of the fraction until it rolls over to one. Thus,
            // .05 seconds is represented in NTP as (.05 * NTP_SCALE_FRAC)
            tv_usecs = (tv.tv_usec * 1e-6) * NTP_SCALE_FRAC;

            // next we take the tv_ntp seconds value and shift it 32 bits to the left. This puts
            // the seconds in the proper location for NTP time stamps. I recognize this method
            // has an overflow hazard if used after around the year 2106.
            // Next we do a bitwise AND with the tv_usecs cast as a uint32_t, dropping the
            // fractional part
            ntp_time = ((tv_ntp << 32) & (uint32_t)tv_usecs);
        }
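
    One likely bug in the combine step, for whatever it's worth: ANDing the shifted seconds with a 32-bit value always yields zero, since the two operands have no overlapping set bits. Joining the halves wants a bitwise OR; a sketch of the corrected line:

        // OR merges the seconds (high 32 bits) with the fraction (low 32 bits)
        ntp_time = (tv_ntp << 32) | (uint32_t)tv_usecs;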

  • Having a little trouble with this Java code. Beginner probably mixing some things with C#.

    - by Sergio Tapia
        package practico1;

        /**
         * Programador: Sergio Tapia Gutierrez
         * Fecha: Lunes 10, Mayo - 2010
         * Practico: 1
         */
        public class Main {

            public static void main(String[] args) {
                System.out.println("Esta es una pequena aplicacion para mostrar los");
                System.out.println("distintos tipos de datos que existen en Java 6.");
                // boolean, char, byte, short, int, long, float, double, String
                ejemplosBoolean();
            }

            public void ejemplosBoolean() {
            }
        }

    So, I'm just testing some things out, but I'm getting an error claiming that I'm trying to run ejemplosBoolean() in a static context when it isn't a static method. My question is: in Java, do methods have to be static in order to be called this way, even if they are in the same class?

  • Use Google Test from Qt in Windows

    - by Dave
    I have a simple test file, TestMe.cpp:

        #include <gtest/gtest.h>

        TEST(MyTest, SomeTest)
        {
            EXPECT_EQ(1, 1);
        }

        int main(int argc, char **argv)
        {
            ::testing::InitGoogleTest(&argc, argv);
            return RUN_ALL_TESTS();
        }

    I have Google Test built as a static library. (I can provide the makefile if it's relevant.) I can compile TestMe.cpp from a command line with no problem:

        g++ TestMe.cpp -IC:\gtest-1.5.0\gtest-1.5.0\include -L../gtest/staticlib -lgtest -o TestMe.exe

    It runs as expected. However, I cannot get this to compile in Qt. My Qt project file, in the same directory:

        SOURCES += TestMe.cpp
        INCLUDEPATH += C:\gtest-1.5.0\gtest-1.5.0\include
        LIBS += -L../gtest/staticlib -lgtest

    This results in 17 "unresolved external symbol" errors related to gtest functions. I'm pulling my hair out here, as I'm sure it's something simple. Any ideas?

  • sprintf an LPCWSTR variable

    - by Julio
    Hello everyone. I'm trying to debug-print an LPCWSTR string, but I have a problem during the sprintf push into the buffer: it retrieves only the first character of the string. Here is the code:

        HANDLE WINAPI hookedCreateFileW(LPCWSTR lpFileName, DWORD dwDesiredAccess,
                                        DWORD dwShareMode,
                                        LPSECURITY_ATTRIBUTES lpSecurityAttributes,
                                        DWORD dwCreationDisposition,
                                        DWORD dwFlagsAndAttributes, HANDLE hTemplateFile)
        {
            char buffer[1024];
            sprintf_s(buffer, 1024, "CreateFileW: %s", lpFileName);
            OutputDebugString(buffer);
            return trueCreateFileW(lpFileName, dwDesiredAccess, dwShareMode,
                                   lpSecurityAttributes, dwFlagsAndAttributes,
                                   dwCreationDisposition, hTemplateFile);
        }

    For example, I get "CreateFileW: C" or "CreateFileW: \". How do I properly push it into the buffer? Thank you.
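
    The truncation is the classic narrow/wide mismatch: lpFileName is a wide string, so the narrow %s stops at the first interior zero byte. Two hedged sketches of a fix (wbuffer is an illustrative name), relying on the MSVC CRT convention that %s matches the function's own character width while %S means "the other width":

        // wide all the way through
        wchar_t wbuffer[1024];
        swprintf_s(wbuffer, 1024, L"CreateFileW: %s", lpFileName);  // %s is wide here
        OutputDebugStringW(wbuffer);

        // or: keep the narrow buffer and pass the wide argument via %S
        sprintf_s(buffer, 1024, "CreateFileW: %S", lpFileName);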

  • Trying to parse OpenCV YAML output with yaml-cpp

    - by Kenn Sebesta
    I've got a series of OpenCV-generated YAML files and would like to parse them with yaml-cpp. I'm doing okay on simple stuff, but the matrix representation is proving difficult:

        # Center of table
        tableCenter: !!opencv-matrix
           rows: 1
           cols: 2
           dt: f
           data: [ 240, 240]

    This should map into the vector [240, 240] with type float. My code looks like:

        #include "yaml.h"
        #include <fstream>
        #include <string>

        struct Matrix {
            int x;
        };

        void operator >> (const YAML::Node& node, Matrix& matrix) {
            unsigned rows;
            node["rows"] >> rows;
        }

        int main()
        {
            std::ifstream fin("monsters.yaml");
            YAML::Parser parser(fin);
            YAML::Node doc;
            Matrix m;
            doc["tableCenter"] >> m;
            return 0;
        }

    But I get:

        terminate called after throwing an instance of 'YAML::BadDereference'
          what():  yaml-cpp: error at line 0, column 0: bad dereference
        Abort trap

    I searched around for documentation for yaml-cpp, but there doesn't seem to be any aside from a short introductory example on parsing and emitting. Unfortunately, neither of those helps in this particular circumstance. As I understand it, the !! indicates that this is a user-defined type, but I don't see how to parse that with yaml-cpp.
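
    One thing that stands out, sketched against the old yaml-cpp (0.2.x) API this code appears to use: doc is never populated. The parser has to load a document into the node before any lookups; otherwise dereferencing the empty node throws exactly this kind of BadDereference:

        YAML::Parser parser(fin);
        YAML::Node doc;
        parser.GetNextDocument(doc);   // without this, doc stays empty
        doc["tableCenter"] >> m;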

  • How to set permissions on MSMQ Cluster queues?

    - by JorgeSandoval
    I've got a cluster with functioning private MSMQ 3.0 queues. I'm trying to programmatically set the permissions, but can't seem to connect via System.Messaging to the queues. The code below works just fine with local queues (using the .\ nomenclature for the local queue). How can I programmatically set the permissions on the clustered queues?

    PowerShell code executed from the active node:

        function set-msmqpermission ([string] $queuepath, [string] $account, [string] $accessright)
        {
            if (!([System.Messaging.MessageQueue]::Exists($queuepath))) {
                throw "$queuepath could not be found."
            }
            $q = New-Object System.Messaging.MessageQueue($queuepath)
            $q.SetPermissions($account, [System.Messaging.MessageQueueAccessRights]::$accessright,
                [System.Messaging.AccessControlEntryType]::Set)
        }

        set-msmqpermission "clusternetworkname\private$\qa1ack" "UserAccount" "FullControl"

        Exception calling "SetPermissions" with "3" argument(s): "Invalid queue path name."
        At line:30 char:19
        + $q.SetPermissions <<<< ($account,[System.Messaging.MessageQueueAccessRights]::$accessright,
            + CategoryInfo          : NotSpecified: (:) [], MethodInvocationException
            + FullyQualifiedErrorId : DotNetMethodException

  • Segmentation fault in strcpy

    - by Alien01
    Consider the program below:

        char str[5];
        strcpy(str, "Hello12345678");
        printf("%s", str);

    When run, this program gives a segmentation fault. But when the strcpy is replaced with the following, the program runs fine:

        strcpy(str, "Hello1234567");

    So the question is: it should crash when copying any string longer than 5 characters into str. Why is it not crashing for "Hello1234567" and only crashing for "Hello12345678", i.e. the string of length 13 or more? This program was run on a 32-bit machine.
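
    The 12-character case "working" is still undefined behavior that merely happens not to touch anything fatal: compilers typically round the 5-byte array up to an aligned stack slot, so the first few overflowing bytes land in padding before the copy reaches a saved return address or stack canary. As an aside, a sketch of a bounds-respecting version (snprintf truncates instead of writing past the array):

        char str[5];
        snprintf(str, sizeof str, "%s", "Hello12345678");  /* stores "Hell" + '\0' */
        printf("%s", str);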

  • getting "implicit declaration of function 'fcloseall' is invalid in C99" when compiling to gnu99

    - by Emanuel Ey
    Consider the following C code:

        #include <stdio.h>
        #include <stdlib.h>

        void fatal(const char* message)
        {
            /* Prints a message and terminates the program.
               Closes all open i/o streams before exiting. */
            printf("%s\n", message);
            fcloseall();
            exit(EXIT_FAILURE);
        }

    I'm using clang 2.8 to compile:

        clang -Wall -std=gnu99 -o <executable> <source.c>

    and get:

        implicit declaration of function 'fcloseall' is invalid in C99

    which is true, but I'm explicitly compiling to gnu99 [which should support fcloseall()], not to c99. Although the code runs, I don't like having unresolved warnings when compiling. How can I solve this?
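
    A sketch of the usual fix: with glibc, the declaration of fcloseall() in stdio.h is guarded by the _GNU_SOURCE feature-test macro, which -std=gnu99 does not define by itself (the flag only selects the language dialect), so define the macro before the first include:

        #define _GNU_SOURCE   /* exposes fcloseall() in <stdio.h> on glibc */
        #include <stdio.h>
        #include <stdlib.h>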

  • One-pass multiple-whitespace replace to single whitespace, eliminating leading and trailing whitespace

    - by Phoenix
        #include <ctype.h>  /* for isspace() */

        void RemoveSpace(char *String)
        {
            int i = 0, y = 0;
            int leading = 0;

            for (i = 0, y = 0; String[i] != '\0'; i++, y++) {
                String[y] = String[i];          // let us copy the current character
                if (isspace(String[i])) {       // is the current character a space?
                    if (isspace(String[i + 1]) || String[i + 1] == '\0' || leading != 1)
                        y--;                    // drop duplicate, trailing, or leading space
                } else
                    leading = 1;
            }
            String[y] = '\0';
        }

    Does this do the trick of removing leading and trailing whitespace and replacing multiple whitespace characters with single ones? I tested it on a null string, all whitespace, leading whitespace, and trailing whitespace. Do you think this is an efficient one-pass solution?

  • How to use LIKE query in sqlite & iPhone

    - by 4thSpace
    I'm using the following for a LIKE query. Is this technique for LIKE correct?

        selectstmtSearch = nil;
        if (selectstmtSearch == nil) {
            const char *sql = "SELECT col1, col2 FROM table1 t1 JOIN table2 t2 ON t1.cityid = t2.cityid where t1.cityname like ?001 order by t1.cityname";
            if (sqlite3_prepare_v2(databaseSearch, sql, -1, &selectstmtSearch, NULL) == SQLITE_OK) {
                sqlite3_bind_text(selectstmtSearch, 1,
                    [[NSString stringWithFormat:@"%%%@%%", searchText] UTF8String],
                    -1, SQLITE_TRANSIENT);
            }
        }

    The problem I'm having is that after a few uses of this, I get error 14 from sqlite3_open(), which is "unable to open database". If I replace the LIKE with something such as:

        SELECT col1, col2 FROM table1 t1 JOIN table2 t2 ON t1.cityid = t2.cityid
        where t1.cityname = ? order by t1.cityname

    it works fine. I open/close the DB before and after the above code. Is there a way to troubleshoot exactly why the database can't be opened, and what its relationship to my LIKE syntax is?
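
    On the troubleshooting angle, a hedged guess from the pattern shown: each pass prepares a fresh statement without finalizing the previous one, and statements left open can make a later sqlite3_close() fail, after which subsequent opens can misbehave. A sketch of the cleanup that pairs with the prepare:

        /* after stepping through the results */
        sqlite3_finalize(selectstmtSearch);   /* release the prepared statement */
        selectstmtSearch = nil;
        sqlite3_close(databaseSearch);        /* close can now fully succeed */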

  • UART speed possibly wrong

    - by Mike
    My brain is fried, so I thought I would pass this one to the community. When I send one character to my embedded system, it consistently thinks it receives two characters. The first received character seems to map to the transmitted character (in some unknown way), and the second received character is always 0xFF. Here is what I observed (Tx and Rx characters in hex; I left out the second Rx byte, which is always FF):

        Tx | Rx
        ---+---
        31 | 9D
        32 | 9B
        33 | 99
        61 | 3D
        62 | 3B
        63 | 39
        64 | 37
        65 | 35
        41 | 7D
        42 | 7B
        43 | 79

    I have checked my clocks and they seem to be OK. The only difference between this non-working version and the previous version is that I am now using an RS-485 chip. I have traced the signal all the way up to the MCU and it looks fine (I confirmed the bit values on the rx pin).
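
    One pattern worth checking against that table: every pair satisfies rx == ~(tx << 1) & 0xFF, which is the signature of an inverted line, exactly what a swapped RS-485 A/B pair would produce (an inverted idle level would also account for the trailing 0xFF byte). That is a hypothesis rather than a certainty, but the arithmetic is easy to verify:

        #include <stdio.h>
        #include <stdint.h>

        /* test the inverted-line hypothesis against the observed data */
        int main(void)
        {
            const uint8_t tx[] = { 0x31, 0x32, 0x33, 0x61, 0x62, 0x63,
                                   0x64, 0x65, 0x41, 0x42, 0x43 };
            for (size_t i = 0; i < sizeof tx; ++i)
                printf("tx %02X -> predicted rx %02X\n",
                       tx[i], (uint8_t)~(tx[i] << 1));  /* matches the table above */
            return 0;
        }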

  • how to use gettimeofday() or something equivalent with Visual Studio C++ 2008?

    - by make
    Hi, could someone please help me to use the gettimeofday() function with Visual Studio C++ 2008 on Windows XP? Here is some code that I found somewhere on the net:

        #include <time.h>
        #include <windows.h>

        #if defined(_MSC_VER) || defined(_MSC_EXTENSIONS)
            #define DELTA_EPOCH_IN_MICROSECS 11644473600000000Ui64
        #else
            #define DELTA_EPOCH_IN_MICROSECS 11644473600000000ULL
        #endif

        struct timezone {
            int tz_minuteswest; /* minutes W of Greenwich */
            int tz_dsttime;     /* type of dst correction */
        };

        int gettimeofday(struct timeval *tv, struct timezone *tz)
        {
            FILETIME ft;
            unsigned __int64 tmpres = 0;
            static int tzflag;

            if (NULL != tv) {
                GetSystemTimeAsFileTime(&ft);

                tmpres |= ft.dwHighDateTime;
                tmpres <<= 32;
                tmpres |= ft.dwLowDateTime;

                /* converting file time (100-ns units) to microseconds first,
                   then to unix epoch: the delta is itself in microseconds */
                tmpres /= 10;
                tmpres -= DELTA_EPOCH_IN_MICROSECS;
                tv->tv_sec = (long)(tmpres / 1000000UL);
                tv->tv_usec = (long)(tmpres % 1000000UL);
            }

            if (NULL != tz) {
                if (!tzflag) {
                    _tzset();
                    tzflag++;
                }
                tz->tz_minuteswest = _timezone / 60;
                tz->tz_dsttime = _daylight;
            }

            return 0;
        }

        ...
        // call gettimeofday()
        gettimeofday(&tv, &tz);
        // on VS2008 time_t is 64-bit by default, so copy tv_sec into a time_t
        // rather than passing &tv.tv_sec (a long*) straight to localtime
        time_t t = tv.tv_sec;
        tm = localtime(&t);

    Last year, when I tested this code with VC++ 6, it worked fine. But now, when I use VC++ 2008, I get an exception-handling error. So, is there any idea on how to use gettimeofday() or something equivalent? Thanks for your reply; any help would be very appreciated.

  • Getting the following warning in gcc compilation on a 32-bit architecture, but no such warning on 64-bit

    - by thetna
        symbol.c: In function 'symbol_FPrint':
        symbol.c:1209: warning: format '%ld' expects type 'long int', but argument 3 has type 'SYMBOL'
        symbol.c: In function 'symbol_FPrintOtter':
        symbol.c:1236: warning: format '%ld' expects type 'long int', but argument 3 has type 'SYMBOL'
        symbol.c:1239: warning: format '%ld' expects type 'long int', but argument 3 has type 'SYMBOL'
        symbol.c:1243: warning: format '%ld' expects type 'long int', but argument 3 has type 'SYMBOL'
        symbol.c:1266: warning: format '%ld' expects type 'long int', but argument 3 has type 'SYMBOL'

    In symbol.c:

        1198    #ifdef CHECK
        1199    else {
        1200        misc_StartErrorReport();
        1201        misc_ErrorReport("\n In symbol_FPrint: Cannot print symbol.\n");
        1202        misc_FinishErrorReport();
        1203    }
        1204    #endif
        1205    }
        1206    else if (symbol_SignatureExists())
        1207        fputs(symbol_Name(Symbol), File);
        1208    else
        1209        fprintf(File, "%ld", Symbol);
        1210    }

    And SYMBOL is defined as:

        typedef size_t SYMBOL;

    When I replaced '%ld' with '%zu', I got the following warning:

        symbol.c:1209: warning: ISO C90 does not support the 'z' printf length modifier

    Note (edited on 26 March 2010): the following problem has been added because of its similarity to the one above. I have the statement:

        printf("\n\t %4d:%4d:%4d:%4d:%4d:%s:%d", Index, S->info, S->weight,
               Precedence[Index], S->props, S->name, S->length);

    The warning I get while compiling on a 64-bit architecture is:

        format '%4d' expects type 'int', but argument 5 has type 'size_t'

    Here are the definitions of the parameters:

        NAT props;
        typedef unsigned int NAT;

    How can I get rid of this so that I can compile without warnings on both 32- and 64-bit architectures? What can the solution be?
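
    A hedged sketch of the usual C90-portable fix: cast the size_t value to unsigned long and print it with %lu, which is safe on both the 32- and 64-bit Linux ABIs (where size_t never exceeds unsigned long):

        fprintf(File, "%lu", (unsigned long)Symbol);

        /* and for the second case */
        printf("%4lu", (unsigned long)Precedence[Index]);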

  • Invalid Argument IE 8 jQuery

    - by Deshiknaves
    Hi, I have this particular script that runs so that the Flash elements don't show up on top of my slide-out navigation. It redraws the Flash element with wmode set to opaque, so it shows up under the navigation. This works perfectly in Chrome and Firefox, but not in IE. In IE I get an "Invalid Argument" error in jquery.min.js, code 0, line 103, character 460. Can anyone help me as to why? If I comment out the second line of code inside the function, there is no error, but then it doesn't work in Firefox. Any help is appreciated.

        $(window).load(function(){
            $('embed').attr('wmode', 'opaque');
            $('object').append('<param name="wmode" value="opaque">');
            $('object').wrap('<div>');
        });

  • Best practices for fixed-width processing in .NET

    - by jmgant
    I'm working on a .NET web service that will be processing a text file with a relatively long, multilevel record format. Each record in the file represents a different entity; the record contains multiple sub-types. (The same record format is currently being processed by a COBOL job, if that gives you a better picture of what we're looking at.) I've created a class structure (a DATA DIVISION, if you will) to hold the input data. My question is: what best practices have you found for processing large, complex fixed-width files in .NET? My general approach will be to read the entire line into a string and then parse the data from the string into the classes I've created, but I'm not sure whether I'll get better results working with the characters in the string as an array or with the string itself. I guess that's the specific question, string vs. char[], but I would appreciate any other pointers anyone has.
