Search Results

Search found 1486 results on 60 pages for 'unsigned'.


  • OpenGL Fast-Object Instancing Error

    - by HJ Media Studios
    I have some code that loops through a set of objects and renders instances of those objects. The list of objects that needs to be rendered is stored as a std::map, where an object of class MeshResource contains the vertices and indices with the actual data, and an object of classMeshRenderer defines the point in space the mesh is to be rendered at. My rendering code is as follows: glDisable(GL_BLEND); glEnable(GL_CULL_FACE); glDepthMask(GL_TRUE); glClear(GL_COLOR_BUFFER_BIT | GL_DEPTH_BUFFER_BIT); glEnable(GL_DEPTH_TEST); for (std::map<MeshResource*, std::vector<MeshRenderer*> >::iterator it = renderables.begin(); it != renderables.end(); it++) { it->first->setupBeforeRendering(); cout << "<"; for (unsigned long i =0; i < it->second.size(); i++) { //Pass in an identity matrix to the vertex shader- used here only for debugging purposes; the real code correctly inputs any matrix. uniformizeModelMatrix(Matrix4::IDENTITY); /** * StartHere fix rendering problem. * Ruled out: * Vertex buffers correctly. * Index buffers correctly. * Matrices correct? */ it->first->render(); } it->first->cleanupAfterRendering(); } geometryPassShader->disable(); glDepthMask(GL_FALSE); glDisable(GL_CULL_FACE); glDisable(GL_DEPTH_TEST); The function in MeshResource that handles setting up the uniforms is as follows: void MeshResource::setupBeforeRendering() { glEnableVertexAttribArray(0); glEnableVertexAttribArray(1); glEnableVertexAttribArray(2); glEnableVertexAttribArray(3); glEnableVertexAttribArray(4); glBindBuffer(GL_ELEMENT_ARRAY_BUFFER, iboID); glBindBuffer(GL_ARRAY_BUFFER, vboID); glVertexAttribPointer(0, 3, GL_FLOAT, GL_FALSE, sizeof(Vertex), 0); // Vertex position glVertexAttribPointer(1, 3, GL_FLOAT, GL_FALSE, sizeof(Vertex), (const GLvoid*) 12); // Vertex normal glVertexAttribPointer(2, 2, GL_FLOAT, GL_FALSE, sizeof(Vertex), (const GLvoid*) 24); // UV layer 0 glVertexAttribPointer(3, 3, GL_FLOAT, GL_FALSE, sizeof(Vertex), (const GLvoid*) 32); // Vertex color glVertexAttribPointer(4, 1, GL_UNSIGNED_SHORT, GL_FALSE, sizeof(Vertex), (const GLvoid*) 44); //Material index } The code that renders the object is this: void MeshResource::render() { glDrawElements(GL_TRIANGLES, geometry->numIndices, GL_UNSIGNED_SHORT, 0); } And the code that cleans up is this: void MeshResource::cleanupAfterRendering() { glDisableVertexAttribArray(0); glDisableVertexAttribArray(1); glDisableVertexAttribArray(2); glDisableVertexAttribArray(3); glDisableVertexAttribArray(4); } The end result of this is that I get a black screen, although the end of my rendering pipeline after the rendering code (essentially just drawing axes and lines on the screen) works properly, so I'm fairly sure it's not an issue with the passing of uniforms. If, however, I change the code slightly so that the rendering code calls the setup immediately before rendering, like so: void MeshResource::render() { setupBeforeRendering(); glDrawElements(GL_TRIANGLES, geometry->numIndices, GL_UNSIGNED_SHORT, 0); } The program works as desired. I don't want to have to do this, though, as my aim is to set up vertex, material, etc. data once per object type and then render each instance updating only the transformation information. The uniformizeModelMatrix works as follows: void RenderManager::uniformizeModelMatrix(Matrix4 matrix) { glBindBuffer(GL_UNIFORM_BUFFER, globalMatrixUBOID); glBufferSubData(GL_UNIFORM_BUFFER, 0, sizeof(Matrix4), matrix.ptr()); glBindBuffer(GL_UNIFORM_BUFFER, 0); }
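
    A possible direction, offered as a sketch rather than a confirmed fix (it assumes an OpenGL 3.x context or ARB_vertex_array_object is available, and the names below are illustrative): wrap the per-mesh attribute setup in a vertex array object so the bindings made in setupBeforeRendering() are recorded once and simply re-bound before each draw, instead of relying on global attribute state surviving between objects.

        // one-time setup per MeshResource: record vertex state in a VAO
        GLuint vao;
        glGenVertexArrays(1, &vao);
        glBindVertexArray(vao);
        // ... the same glBindBuffer / glVertexAttribPointer /
        // glEnableVertexAttribArray calls as in setupBeforeRendering() ...
        glBindVertexArray(0);

        // per draw: binding the VAO restores the element buffer and attribute pointers
        glBindVertexArray(vao);
        glDrawElements(GL_TRIANGLES, numIndices, GL_UNSIGNED_SHORT, 0);
        glBindVertexArray(0);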

    Read the article

  • CGBitmapContextCreate on the iPhone/iPad

    - by toastie
    Hello, I have a method that needs to parse through a bunch of large PNG images pixel by pixel (the PNGs are 600x600 pixels each). It seems to work great on the Simulator, but on the device (iPad), i get an EXC_BAD_ACCESS in some internal memory copying function. It seems the size is the culprit because if I try it on smaller images, everything seems to work. Here's the memory related meat of method below. + (CGRect) getAlphaBoundsForUImage: (UIImage*) image { CGImageRef imageRef = [image CGImage]; NSUInteger width = CGImageGetWidth(imageRef); NSUInteger height = CGImageGetHeight(imageRef); CGColorSpaceRef colorSpace = CGColorSpaceCreateDeviceRGB(); unsigned char *rawData = malloc(height * width * 4); memset(rawData,0,height * width * 4); NSUInteger bytesPerPixel = 4; NSUInteger bytesPerRow = bytesPerPixel * width; NSUInteger bitsPerComponent = 8; CGContextRef context = CGBitmapContextCreate(rawData, width, height, bitsPerComponent, bytesPerRow, colorSpace, kCGImageAlphaPremultipliedLast | kCGBitmapByteOrder32Big); CGColorSpaceRelease(colorSpace); CGContextDrawImage(context, CGRectMake(0, 0, width, height), imageRef); CGContextRelease(context); /* non-memory related stuff */ free(rawData); When I run this on a bunch of images, it runs 12 times and then craps out, while on the simulator it runs no problem. Do you guys have any ideas?
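
    One low-cost check worth adding, as a hedged suggestion rather than a confirmed diagnosis: CGBitmapContextCreate() returns NULL when it cannot allocate the context or rejects the parameters, and drawing into a NULL context on the device is a classic source of EXC_BAD_ACCESS, so it is worth bailing out early (the CGRectZero return is just an illustrative fallback for this method):

        CGContextRef context = CGBitmapContextCreate(rawData, width, height,
                bitsPerComponent, bytesPerRow, colorSpace,
                kCGImageAlphaPremultipliedLast | kCGBitmapByteOrder32Big);
        CGColorSpaceRelease(colorSpace);
        if (context == NULL) {      // allocation or parameter failure
            free(rawData);
            return CGRectZero;      // give up instead of dereferencing NULL
        }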

    Read the article

  • OO Design - polymorphism - how to design for handing streams of different file types

    - by Kache4
    I've little experience with advanced OO practices, and I want to design this properly as an exercise. I'm thinking of implementing the following, and I'm asking if I'm going about this the right way. I have a class PImage that holds the raw data and some information I need for an image file. Its header is currently something like this: #include <boost/filesytem.hpp> #include <vector> namespace fs = boost::filesystem; class PImage { public: PImage(const fs::path& path, const unsigned char* buffer, int bufferLen); const vector<char> data() const { return data_; } const char* rawData() const { return &data_[0]; } /*** other assorted accessors ***/ private: fs::path path_; int width_; int height_; int filesize_; vector<char> data_; } I want to fill the width_ and height_ by looking through the file's header. The trivial/inelegant solution would be to have a lot of messy control flow that identifies the type of image file (.gif, .jpg, .png, etc) and then parse the header accordingly. Instead of using vector<char> data_, I was thinking of having PImage use a class, RawImageStream data_ that inherits from vector<char>. Each type of file I plan to support would then inherit from RawImageStream, e.g. RawGifStream, RawPngStream. Each RawXYZStream would encapsulate the respective header-parsing functions, and PImage would only have to do something like height_ = data_.getHeight();. Am I thinking this through correctly? How would I create the proper RawImageStream subclass for data_ to be in the PImage ctor? Is this where I could use an object factory? Anything I'm forgetting?
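
    Yes, a small factory is a reasonable fit here. A minimal sketch (the subclass names and their (buffer, length) constructors are assumptions, not taken from the original code) that sniffs the magic bytes and hands back the matching RawImageStream:

        // hypothetical factory; needs <memory>, <cstring>, <stdexcept>
        std::unique_ptr<RawImageStream> makeRawImageStream(const unsigned char* buf, int len) {
            if (len >= 8 && std::memcmp(buf, "\x89PNG\r\n\x1a\n", 8) == 0)
                return std::unique_ptr<RawImageStream>(new RawPngStream(buf, len));
            if (len >= 6 && std::memcmp(buf, "GIF8", 4) == 0)
                return std::unique_ptr<RawImageStream>(new RawGifStream(buf, len));
            if (len >= 2 && buf[0] == 0xFF && buf[1] == 0xD8)   // JPEG SOI marker
                return std::unique_ptr<RawImageStream>(new RawJpgStream(buf, len));
            throw std::runtime_error("unrecognized image format");
        }

    For the polymorphic header parsing to survive, data_ would need to be held through a pointer (the unique_ptr itself, for example) rather than by value, since copying a derived stream into a base-class member slices off the subclass behaviour.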

    Read the article

  • Compressing IplImage to JPEG using libjpeg in OpenCV

    - by Petar
    Hi everybody. So I have this problem. I have an IplImage that I want to compress to JPEG and do something with it. I use libjpeg. I found a lot of answers "read through examples and docs" and such and did that. And I successfully wrote a function for that. FILE* convert2jpeg(IplImage* frame) { FILE* outstream = NULL; outstream=malloc(frame->imageSize*frame->nChannels*sizeof(char)); unsigned char *outdata = (uchar *) frame->imageData; struct jpeg_error_mgr jerr; struct jpeg_compress_struct cinfo; int row_stride; JSAMPROW row_ptr[1]; jpeg_create_compress(&cinfo); jpeg_stdio_dest(&cinfo, outstream); cinfo.image_width = frame->width; cinfo.image_height = frame->height; cinfo.input_components = frame->nChannels; cinfo.in_color_space = JCS_RGB; jpeg_set_defaults(&cinfo); jpeg_start_compress(&cinfo, TRUE); row_stride = frame->width * frame->nChannels; while (cinfo.next_scanline < cinfo.image_height) { row_ptr[0] = &outdata[cinfo.next_scanline * row_stride]; jpeg_write_scanlines(&cinfo, row_ptr, 1); } jpeg_finish_compress(&cinfo); jpeg_destroy_compress(&cinfo); return outstream; } Now this function is straight from the examples (except the part of allocating memory, but I need that since I'm not writing to a file), but it still doesn't work. It dies on the jpeg_start_compress(&cinfo, TRUE); part. Can anybody help?
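
    Two hedged observations on the snippet above rather than a tested patch: the libjpeg examples install the error handler with cinfo.err = jpeg_std_error(&jerr); before jpeg_create_compress(), which this code never does, and jpeg_stdio_dest() expects a real FILE* opened for writing, so passing a malloc'd buffer in its place will misbehave as soon as libjpeg tries to flush output. If the goal is a JPEG in memory, libjpeg 8 and libjpeg-turbo provide jpeg_mem_dest() for exactly that:

        cinfo.err = jpeg_std_error(&jerr);            // error handler must be installed first
        jpeg_create_compress(&cinfo);

        unsigned char* jpegBuf = NULL;                // libjpeg allocates and grows this buffer
        unsigned long jpegSize = 0;
        jpeg_mem_dest(&cinfo, &jpegBuf, &jpegSize);   // in-memory destination, no FILE* involved
        // ... set image_width/height/input_components, jpeg_set_defaults,
        //     start/write/finish exactly as in the function above ...
        // afterwards jpegBuf holds jpegSize bytes of JPEG data; free(jpegBuf) when done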

    Read the article

  • rand() generating the same number – even with srand(time(NULL)) in my main!

    - by Nick Sweet
    So, I'm trying to create a random vector (think geometry, not an expandable array), and every time I call my random vector function I get the same x value, thought y and z are different. int main () { srand ( (unsigned)time(NULL)); Vector<double> a; a.randvec(); cout << a << endl; return 0; } using the function //random Vector template <class T> void Vector<T>::randvec() { const int min=-10, max=10; int randx, randy, randz; const int bucket_size = RAND_MAX/(max-min); do randx = (rand()/bucket_size)+min; while (randx <= min && randx >= max); x = randx; do randy = (rand()/bucket_size)+min; while (randy <= min && randy >= max); y = randy; do randz = (rand()/bucket_size)+min; while (randz <= min && randz >= max); z = randz; } For some reason, randx will consistently return 8, whereas the other numbers seem to be following the (pseudo) randomness perfectly. However, if I put the call to define, say, randy before randx, randy will always return 8. Why is my first random number always 8? Am I seeding incorrectly?
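
    For comparison, a sketch that sidesteps the bucket arithmetic entirely (it assumes a C++11 toolchain with <random>, and it is not the original code): seed one engine once and draw each component from a uniform distribution. On some implementations the first rand() value after seeding with nearby time(NULL) seeds barely differs in its high-order bits, which is exactly the part the division by bucket_size keeps, so avoiding that path avoids the repeated 8.

        #include <random>

        template <class T>
        void Vector<T>::randvec() {
            static std::mt19937 engine(std::random_device{}());   // seeded once
            std::uniform_int_distribution<int> dist(-10, 10);
            x = dist(engine);
            y = dist(engine);
            z = dist(engine);
        }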

    Read the article

  • determine parity of a bit representation of a number in MIPS

    - by Hristo
    Is there some instruction in MIPS that will determine the parity of a certain bit representation? I know that the way to determine whether a "number" has even or odd parity is to XOR the individual bits of the binary representation together, but that seems computationally intensive for a set of MIPS instructions... and I need to do this as quickly as possible. Also, the number I'm working with is represented in Gray code... just to throw that in there. So is there some pseudo-instruction in MIPS to determine the parity of a "number", or do I have to do it by hand? If there is no MIPS instruction, which it seems very unlikely to be, any advice on how to do it by hand? Thanks, Hristo follow-up: I found an optimization, but my implementation isn't working. unsigned int v; // 32-bit word v ^= v >> 1; v ^= v >> 2; v = (v & 0x11111111U) * 0x11111111U; return (v >> 28) & 1;
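
    As far as standard MIPS32 goes there is no dedicated parity instruction, but the XOR-fold is only shifts and xors, so it stays cheap: each line of the sketch below maps to an srl plus an xor, roughly a dozen instructions in total, and it works on whatever 32-bit pattern is in the register, Gray-coded or not.

        // even/odd parity of a 32-bit word by folding the halves together
        unsigned parity32(unsigned v) {
            v ^= v >> 16;
            v ^= v >> 8;
            v ^= v >> 4;
            v ^= v >> 2;
            v ^= v >> 1;
            return v & 1u;   // 1 means an odd number of set bits
        }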

    Read the article

  • MySQL Error 1452 - Cannot add or update a child row: a foreign key constraint fails

    - by dscher
    I've looked at other people's questions on this topic but can't seem to find where my error is coming from. Any help would be greatly appreciated. I'm including as much as I can think of that might help find the problem: CREATE TABLE stocks ( id INT AUTO_INCREMENT NOT NULL, user_id INT(11) UNSIGNED NOT NULL, ticker VARCHAR(20) NOT NULL, name VARCHAR(20), rating INT(11), position ENUM("strong buy", "buy", "sell", "strong sell", "neutral"), next_look DATE, privacy ENUM("public", "private"), PRIMARY KEY(id), FOREIGN KEY (user_id) REFERENCES users(id) ON DELETE CASCADE ) ENGINE=InnoDB DEFAULT CHARSET=utf8; CREATE TABLE IF NOT EXISTS `stocks_tags` ( `stock_id` INT NOT NULL, `tag_id` INT NOT NULL, PRIMARY KEY (`stock_id`,`tag_id`), KEY `fk_stock_tag` (`tag_id`), KEY `fk_tag_stock` (`stock_id`) ) ENGINE=InnoDB DEFAULT CHARSET=utf8; ALTER TABLE `stocks_tags` ADD CONSTRAINT `fk_stock_tag` FOREIGN KEY (`tag_id`) REFERENCES `tags` (`id`) ON DELETE CASCADE ON UPDATE CASCADE, ADD CONSTRAINT `fk_tag_stock` FOREIGN KEY (`stock_id`) REFERENCES `stocks` (`id`) ON DELETE CASCADE ON UPDATE CASCADE; CREATE TABLE tags( id INT AUTO_INCREMENT NOT NULL PRIMARY KEY, tags VARCHAR(30), UNIQUE(tags) ) ENGINE=INNODB DEFAULT CHARSET=utf8; And the error I'm getting: Database_Exception [ 1452 ]: Cannot add or update a child row: a foreign key constraint fails (`ddmachine`.`stocks_tags`, CONSTRAINT `fk_stock_tag` FOREIGN KEY (`tag_id`) REFERENCES `tags` (`id`) ON DELETE CASCADE ON UPDATE CASCADE) [ INSERT INTO `stocks_tags` (`stock_id`, `tag_id`) VALUES (19, 'cash') ] I did see that someone else had a similar problem based on their enum columns but don't think that's it.

    Read the article

  • RLE Encoding... What's wrong?

    - by FILIaS
    I'm trying to make a Rle Encoder Programme.I read the way it works on notes on net. And i tried to fix my code! Regardless I think that the structure and logic of code are right,the code doesnt work! It appears some strange 'Z' as it runs. I really cant find what;s wrong! Could u please give me a piece of advice? Thanx in advance... #include <stdio.h> int main() { int count; unsigned char currChar,prevChar=EOF; while(currChar=getchar() != EOF) { if ( ( (currChar='A')&&(currChar='Z') ) || ( (currChar='a')&&(currChar='z') ) ) { printf("%c",currChar); if(prevChar==currChar) { count=0; currChar=getchar(); while(currChar!='EOF') { if (currChar==prevChar) count++; else { if(count<=9) printf("%d%c",count,prevChar); else { printf("%d%c",reverse(count),prevChar); } prevChar=currChar; break; } } } else prevChar=currChar; if(currChar=='EOF') { printf("%d",count); break; } } else { printf("Error Message:Only characters are accepted! Please try again! False input!"); break; } } return 0; } int reverse(int x) { int piliko,ypoloipo,r=0; x=(x<0)?-x:x; while (x>0) { ypoloipo=x%10; piliko=x/10; r=10*r+ypoloipo; x=piliko; } printf("%d",r); return 0; }
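
    For comparison with the code above, here is a minimal sketch of the usual run-length loop (a fresh illustration, not a patched version of the program): read ahead while the character repeats, emit the count and the character, and let the look-ahead character start the next run.

        #include <stdio.h>

        int main(void) {
            int prev = getchar();
            while (prev != EOF) {
                int count = 1, c;
                while ((c = getchar()) == prev)
                    count++;
                printf("%d%c", count, prev);   /* "aaab" becomes "3a1b" */
                prev = c;                      /* look-ahead begins the next run */
            }
            return 0;
        }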

    Read the article

  • error: switch quantity not an integer

    - by nikeunltd
    I have researched my issue all over StackOverflow and multi-google links, and I am still confused. I figured the best thing for me is ask... Im creating a simple command line calculator. Here is my code so far: const std::string Calculator::SIN("sin"); const std::string Calculator::COS("cos"); const std::string Calculator::TAN("tan"); const std::string Calculator::LOG( "log" ); const std::string Calculator::LOG10( "log10" ); void Calculator::set_command( std::string cmd ) { for(unsigned i = 0; i < cmd.length(); i++) { cmd[i] = tolower(cmd[i]); } command = cmd; } bool Calculator::is_legal_command() const { switch(command) { case TAN: case SIN: case COS: case LOG: case LOG10: return true; break; default: return false; break; } } the error i get is: Calculator.cpp: In member function 'bool Calculator::is_trig_command() const': Calculator.cpp: error: switch quantity not an integer Calculator.cpp: error: 'Calculator::TAN' cannot appear in a constant-expression Calculator.cpp: error: 'Calculator::SIN' cannot appear in a constant-expression Calculator.cpp: error: 'Calculator::COS' cannot appear in a constant-expression The mighty internet, it says strings are allowed to be used in switch statements. Thanks everyone, I appreciate your help.
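
    The diagnostic is accurate: ISO C++ only allows integral and enumeration types in a switch, so std::string commands have to be dispatched another way. A hedged rewrite of just that member, using the constants already declared above:

        bool Calculator::is_legal_command() const {
            // a switch cannot take a std::string, so compare directly instead
            return command == SIN || command == COS || command == TAN
                || command == LOG || command == LOG10;
        }

    A std::set<std::string> (or a std::map to an enum) does the same job once the command list grows beyond a handful of names.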

    Read the article

  • EXC_BREAKPOINT when starting iPhone app

    - by pgb
    A user of our app sent me the following crash log: Incident Identifier: 59D4D5E7-570A-4047-A679-3016B2A226C4 CrashReporter Key: d8284d671ee22ad17511360ce73409ebfa8b84bb Process: .... [63] Path: /var/mobile/Applications/.... Identifier: ... Version: ??? (???) Code Type: ARM (Native) Parent Process: launchd [1] Date/Time: 2010-03-08 17:00:15.437 -0800 OS Version: iPhone OS 2.2.1 (5H11a) Report Version: 103 Exception Type: EXC_BREAKPOINT (SIGTRAP) Exception Codes: 0x00000001, 0xe7ffdefe Crashed Thread: 0 Thread 0 Crashed: 0 dyld 0x2fe01060 dyld_fatal_error + 0 1 dyld 0x2fe088d4 dyld::_main(mach_header const*, unsigned long, int, char const**, char const**, char const**) + 3064 2 dyld 0x2fe0196c dyldbootstrap::start(mach_header const*, int, char const**, long) + 884 3 dyld 0x2fe01048 _dyld_start + 32 Thread 0 crashed with ARM Thread State: r0: 0x2fe23ca0 r1: 0x00000000 r2: 0x2fe23ca0 r3: 0x00000000 r4: 0x2ffff4e0 r5: 0x2ffff4bc r6: 0x2fe005c0 r7: 0x2ffffb00 r8: 0x00000004 r9: 0x2fe57cf0 r10: 0x2fe236c8 r11: 0x00000009 ip: 0x0000018d sp: 0x2ffff5b8 lr: 0x2fe088dc pc: 0x2fe01060 cpsr: 0x00000010 Binary Images: 0x2fe00000 - 0x2fe22fff dyld ??? (???) <f6a50d5f57a676b54276d0ecef46d5f0> /usr/lib/dyld My app uses OpenFeint and PinchMedia analytics. For PinchMedia, I'm linking using their provided .a file, and for OpenFeint, I'm compiling their code (as per their guidelines). The frameworks / libs I'm linking are: UIKit.framework (Weak) MapKit.framework (Weak) Foundation.framework CoreGraphics.framework OpenAL.framework AudioToolbox.framework libsqlite3 SystemConfiguration.framework CoreLocation.framework PinchMedia analytics Security.framework QuartzCore.framework CFNetwork.framework My base SDK is iPhone 3.0, and my Base OS Deployment Target is 2.2.1. There are two things I find weird: The app crashes even before the main method is invoked. The crash log looks exactly like the one posted here: http://stackoverflow.com/questions/2368689/objective-c-iphone-app-exc-breakpoint-sigtrap The user that sent me this crash is using a 2nd gen iPod Touch with OS 2.2.1. I wasn't able to reproduce the issue, but based on the comments in iTunes, it seems that more people is having the same issue.

    Read the article

  • [MFC] I can't re-parent a window

    - by John
    Following on from this question, now I have a clearer picture what's going on... I have a MFC application with no main window, which exposes an API to create dialogs. When I call some of these methods repeatedly, the dialogs created are parented to each other instead of all being parented to the desktop... I have no idea why. But anyway even after creation, I am unable to change the parent back to NULL or CWnd::GetDesktopWindow()... if I call SetParent followed by GetParent, nothing has changed. So apart from the really weird question of why Windows is magically parenting each dialog to the last one created, is there anything I'm missing to be able to set these windows as children of the desktop? UPDATED: I have found the reason for all this, but not the solution. From my dialog constructor, we end up in: BOOL CDialog::CreateIndirect(LPCDLGTEMPLATE lpDialogTemplate, CWnd* pParentWnd, void* lpDialogInit, HINSTANCE hInst) { ASSERT(lpDialogTemplate != NULL); if (pParentWnd == NULL) pParentWnd = AfxGetMainWnd(); m_lpDialogInit = lpDialogInit; return CreateDlgIndirect(lpDialogTemplate, pParentWnd, hInst); } Note: if (pParentWnd == NULL)pParentWnd = AfxGetMainWnd(); The call-stack from my dialog constructor looks like this: mfc80d.dll!CDialog::CreateIndirect(const DLGTEMPLATE * lpDialogTemplate=0x005931a8, CWnd * pParentWnd=0x00000000, void * lpDialogInit=0x00000000, HINSTANCE__ * hInst=0x00400000) mfc80d.dll!CDialog::CreateIndirect(void * hDialogTemplate=0x005931a8, CWnd * pParentWnd=0x00000000, HINSTANCE__ * hInst=0x00400000) mfc80d.dll!CDialog::Create(const char * lpszTemplateName=0x0000009d, CWnd * pParentWnd=0x00000000) mfc80d.dll!CDialog::Create(unsigned int nIDTemplate=157, CWnd * pParentWnd=0x00000000) MyApp.exe!CMyDlg::CMyDlg(CWnd * pParent=0x00000000)
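
    Given that CDialog::CreateIndirect() substitutes AfxGetMainWnd() whenever the parent is NULL, one workaround consistent with the code shown (a guess, not a verified fix) is to never let the NULL reach it: pass the desktop window explicitly at creation time so the fallback branch is skipped. The dialog ID below is illustrative.

        // hypothetical call site
        CMyDlg* dlg = new CMyDlg(CWnd::GetDesktopWindow());
        dlg->Create(IDD_MY_DIALOG, CWnd::GetDesktopWindow());

    If the windows instead have to be detached after creation, MSDN's guidance for SetParent is to clear WS_CHILD and set WS_POPUP around the call, since SetParent itself does not adjust those styles.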

    Read the article

  • Is there a way to receive data as unsigned char over UDP in Qt

    - by user269037
    I need to send floating point numbers over a UDP connection to a Qt application. Now in Qt the only function available is qint64 readDatagram ( char * data, qint64 maxSize, QHostAddress * address = 0, quint16 * port = 0 ), which accepts data in the form of a signed character buffer. I can convert my float into a string and send it, but it will obviously not be very efficient to convert a 4 byte float into a much longer character buffer. I got hold of these 2 functions to convert a 4 byte float into an unsigned 32 bit integer to transfer over the network, which works fine for a simple C++ UDP program, but for Qt I need to receive the data as unsigned char. Is it possible to avoid converting the floating point data into a string and then sending it? uint32_t htonf(float f) { uint32_t p; uint32_t sign; if (f < 0) { sign = 1; f = -f; } else { sign = 0; } p = ((((uint32_t)f)&0x7fff)<<16) | (sign<<31); // whole part and sign p |= (uint32_t)(((f - (int)f) * 65536.0f))&0xffff; // fraction return p; } float ntohf(uint32_t p) { float f = ((p>>16)&0x7fff); // whole part f += (p&0xffff) / 65536.0f; // fraction if (((p>>31)&0x1) == 0x1) { f = -f; } // sign bit set return f; }
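
    Yes: the char* that readDatagram() fills is just a byte buffer, so nothing forces a round trip through a string. A hedged sketch (udpSocket is assumed to be a bound QUdpSocket*, and it also assumes both endpoints agree on byte order and float format):

        char buf[sizeof(float)];                       // needs <cstring> for std::memcpy
        qint64 n = udpSocket->readDatagram(buf, sizeof(buf));
        if (n == (qint64)sizeof(float)) {
            float value;
            std::memcpy(&value, buf, sizeof(value));   // reinterpret the 4 raw bytes
            // use value ...
        }

    Casting the same buffer to unsigned char* for inspection is equally fine; the signedness of the pointer type does not change the bytes received.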

    Read the article

  • Java: What are the various available security settings for applets

    - by bguiz
    I have an applet that throws this exception when trying to communicate with the server (running on localhost). This problem is limited to Applets only - a POJO client is able to communicate with the exact same server without any problem. Exception in thread "AWT-EventQueue-1" java.security.AccessControlException: access denied (java.net .SocketPermission 127.0.0.1:9999 connect,resolve) at java.security.AccessControlContext.checkPermission(AccessControlContext.java:323) My applet.policy file's contents is: grant { permission java.security.AllPermission; }; My question is what are the other places where I need to modify my security settings to grant an Applet more security settings? Thank you. EDIT: Further investigation has lead me to find that this problem only occurs on some machines - but not others. So it could be a machine level (global) setting that is causing this, rather than a application-specific setting such as the one in the applet.policy file. EDIT: Another SO question: Socket connection to originating server of an unsigned Java applet This seems to describe the exact same problem, and Tom Hawtin - tackline 's answer provides the reason why (a security patch released that disallows applets from connecting to localhost). Bearing this in mind, how do I grant the applet the security settings such that in can indeed run on my machine. Also why does it run as-is on other machines but not mine?

    Read the article

  • Advice using leaks in instruments for noobs

    - by Gyozo Kudor
    Hello I am pretty new to iphone development. I have run my app for the first time using the "Leaks" from "Instruments". It shows me several leaks around 20 the smallest is 32 bytes and there is one with 1KB. I have followed the memory management guidelines, (i (think i) understand how and when to use release, not to use it when adding to autorelease pools, for every copy, retain, init there should be a release,... etc). I don't think I understand the output of the Leaks in instruments. What does "Responsible library" and "Responsible frame" mean. Because there are some classes and methods i never used directly. Are there any good tutorials for debugging memory leaks in instruments or other advice you can give me regarding leaks. Thanks in advance. Here are the largest 2 leaks. Leaked Object # Address Size Responsible Library Responsible Frame Malloc 1.00 KB 0x4827400 1024 CFNetwork std::vector *, std::allocator * ::reserve(unsigned long) // i have no idea what this is. Leaked Object # Address Size Responsible Library Responsible Frame Malloc 128 Bytes 5 640 UIKit UIImagePickerLoadPhotoLibraryIfNecessary // so this means UIImagePicker is leaking memory? The first leak i get Leaked Object # Address Size Responsible Library Responsible Frame Malloc 128 Bytes 0x442dfd0 128 UIKit UIKeyboardInputManagerClassForInputMode I don't understand any of those.

    Read the article

  • Get special numbers from a random number generator

    - by Wikeno
    I have a random number generator: int32_t ksp_random_table[GENERATOR_DEG] = { -1726662223, 379960547, 1735697613, 1040273694, 1313901226, 1627687941, -179304937, -2073333483, 1780058412, -1989503057, -615974602, 344556628, 939512070, -1249116260, 1507946756, -812545463, 154635395, 1388815473, -1926676823, 525320961, -1009028674, 968117788, -123449607, 1284210865, 435012392, -2017506339, -911064859, -370259173, 1132637927, 1398500161, -205601318, }; int front_pointer=3, rear_pointer=0; int32_t ksp_rand() { int32_t result; ksp_random_table[ front_pointer ] += ksp_random_table[ rear_pointer ]; result = ( ksp_random_table[ front_pointer ] >> 1 ) & 0x7fffffff; front_pointer++, rear_pointer++; if (front_pointer >= GENERATOR_DEG) front_pointer = 0; if (rear_pointer >= GENERATOR_DEG) rear_pointer = 0; return result; } void ksp_srand(unsigned int seed) { int32_t i, dst=0, kc=GENERATOR_DEG, word, hi, lo; word = ksp_random_table[0] = (seed==0) ? 1 : seed; for (i = 1; i < kc; ++i) { hi = word / 127773, lo = word % 127773; word = 16807 * lo - 2836 * hi; if (word < 0) word += 2147483647; ksp_random_table[++dst] = word; } front_pointer=3, rear_pointer=0; kc *= 10; while (--kc >= 0) ksp_rand(); } I'd like know what type of pseudo random number generation algorithm this is. My guess is a multiple linear congruential generator. And is there a way of seeding this algorithm so that after 987721(1043*947) numbers it would return 15 either even-only, odd-only or alternating odd and even numbers? It is a part of an assignment for a long term competition and i've got no idea how to solve it. I don't want the final solution, I'd like to learn how to do it myself.

    Read the article

  • g++ C++0x enum class Compiler Warnings

    - by Travis G
    I've been refactoring my horrible mess of C++ type-safe psuedo-enums to the new C++0x type-safe enums because they're way more readable. Anyway, I use them in exported classes, so I explicitly mark them to be exported: enum class __attribute__((visibility("default"))) MyEnum : unsigned int { One = 1, Two = 2 }; Compiling this with g++ yields the following warning: type attributes ignored after type is already defined This seems very strange, since, as far as I know, that warning is meant to prevent actual mistakes like: class __attribute__((visibility("default"))) MyClass { }; class __attribute__((visibility("hidden"))) MyClass; Of course, I'm clearly not doing that, since I have only marked the visibility attributes at the definition of the enum class and I'm not re-defining or declaring it anywhere else (I can duplicate this error with a single file). Ultimately, I can't make this bit of code actually cause a problem, save for the fact that, if I change a value and re-compile the consumer without re-compiling the shared library, the consumer passes the new values and the shared library has no idea what to do with them (although I wouldn't expect that to work in the first place). Am I being way too pedantic? Can this be safely ignored? I suspect so, but at the same time, having this error prevents me from compiling with Werror, which makes me uncomfortable. I would really like to see this problem go away.

    Read the article

  • Why is setting HTML5's CanvasPixelArray values ridiculously slow and how can I do it faster?

    - by Nixuz
    I am trying to do some dynamic visual effects using the HTML 5 canvas' pixel manipulation, but I am running into a problem where setting pixels in the CanvasPixelArray is ridiculously slow. For example if I have code like: imageData = ctx.getImageData(0, 0, 500, 500); for (var i = 0; i < imageData.length; i += 4){ imageData.data[i] = buffer[i]; imageData.data[i + 1] = buffer[i + 1]; imageData.data[i + 2] = buffer[i + 2]; } ctx.putImageData(imageData, 0, 0); Profiling with Chrome reveals, it runs 44% slower than the following code where CanvasPixelArray is not used. tempArray = new Array(500 * 500 * 4); imageData = ctx.getImageData(0, 0, 500, 500); for (var i = 0; i < imageData.length; i += 4){ tempArray[i] = buffer[i]; tempArray[i + 1] = buffer[i + 1]; tempArray[i + 2] = buffer[i + 2]; } ctx.putImageData(imageData, 0, 0); My guess is that the reason for this slowdown is due to the conversion between the Javascript doubles and the internal unsigned 8bit integers, used by the CanvasPixelArray. Is this guess correct? Is there anyway to reduce the time spent setting values in the CanvasPixelArray?

    Read the article

  • overriding enumeration base type using pragma or code change

    - by vprajan
    Problem: I am using a big C/C++ code base which works on gcc & visual studio compilers where enum base type is by default 32-bit(integer type). This code also has lots of inline + embedded assembly which treats enum as integer type and enum data is used as 32-bit flags in many cases. When compiled this code with realview ARM RVCT 2.2 compiler, we started getting many issues since realview compiler decides enum base type automatically based on the value an enum is set to. http://www.keil.com/support/man/docs/armccref/armccref_Babjddhe.htm For example, Consider the below enum, enum Scale { TimesOne, //0 TimesTwo, //1 TimesFour, //2 TimesEight, //3 }; This enum is used as a 32-bit flag. but compiler optimizes it to unsigned char type for this enum. Using --enum_is_int compiler option is not a good solution for our case, since it converts all the enum's to 32-bit which will break interaction with any external code compiled without --enum_is_int. This is warning i found in RVCT compilers & Library guide, The --enum_is_int option is not recommended for general use and is not required for ISO-compatible source. Code compiled with this option is not compliant with the ABI for the ARM Architecture (base standard) [BSABI], and incorrect use might result in a failure at runtime. This option is not supported by the C++ libraries. Question How to convert all enum's base type (by hand-coded changes) to use 32-bit without affecting value ordering? enum Scale { TimesOne=0x00000000, TimesTwo, // 0x00000001 TimesFour, // 0x00000002 TimesEight, //0x00000003 }; I tried the above change. But compiler optimizes this also for our bad luck. :( There is some syntax in .NET like enum Scale: int Is this a ISO C++ standard and ARM compiler lacks it? There is no #pragma to control this enum in ARM RVCT 2.2 compiler. Is there any hidden pragma available ?
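
    One widely used hand-coded trick, offered as a suggestion rather than anything RVCT documents for this purpose: add a sentinel enumerator whose value cannot fit in anything narrower than 32 bits. The compiler then has to pick an int-sized container, and the existing values and their order are untouched.

        enum Scale {
            TimesOne   = 0x00000000,
            TimesTwo,                     // 0x00000001
            TimesFour,                    // 0x00000002
            TimesEight,                   // 0x00000003
            Scale_Force32 = 0x7FFFFFFF    // sentinel only, never used as a real value
        };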

    Read the article

  • Create ntp time stamp from gettimeofday

    - by krunk
    I need to calculate an ntp time stamp using gettimeofday. Below is how I've done it with comments on method. Look good to you guys? (minus error checking). Also, here's a codepad link. #include <unistd.h> #include <sys/time.h> const unsigned long EPOCH = 2208988800UL; // delta between epoch time and ntp time const double NTP_SCALE_FRAC = 4294967295.0; // maximum value of the ntp fractional part int main() { struct timeval tv; uint64_t ntp_time; uint64_t tv_ntp; double tv_usecs; gettimeofday(&tv, NULL); tv_ntp = tv.tv_sec + EPOCH; // convert tv_usec to a fraction of a second // next, we multiply this fraction times the NTP_SCALE_FRAC, which represents // the maximum value of the fraction until it rolls over to one. Thus, // .05 seconds is represented in NTP as (.05 * NTP_SCALE_FRAC) tv_usecs = (tv.tv_usec * 1e-6) * NTP_SCALE_FRAC; // next we take the tv_ntp seconds value and shift it 32 bits to the left. This puts the // seconds in the proper location for NTP time stamps. I recognize this method has an // overflow hazard if used after around the year 2106 // Next we do a bitwise AND with the tv_usecs cast as a uin32_t, dropping the fractional // part ntp_time = ((tv_ntp << 32) & (uint32_t)tv_usecs); }
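
    One review comment on the sketch above rather than a tested patch: combining the two halves with & zeroes out the seconds, because the shifted seconds and the fraction occupy disjoint bit ranges. The usual composition is a bitwise OR, with the seconds widened before the shift:

        // seconds in the high 32 bits, fraction in the low 32 bits
        ntp_time = ((uint64_t)tv_ntp << 32) | (uint32_t)tv_usecs;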

    Read the article

  • Trying to parse OpenCV YAML output with yaml-cpp

    - by Kenn Sebesta
    I've got a series of OpenCv generated YAML files and would like to parse them with yaml-cpp I'm doing okay on simple stuff, but the matrix representation is proving difficult. # Center of table tableCenter: !!opencv-matrix rows: 1 cols: 2 dt: f data: [ 240, 240] This should map into the vector 240 240 with type float. My code looks like: #include "yaml.h" #include <fstream> #include <string> struct Matrix { int x; }; void operator >> (const YAML::Node& node, Matrix& matrix) { unsigned rows; node["rows"] >> rows; } int main() { std::ifstream fin("monsters.yaml"); YAML::Parser parser(fin); YAML::Node doc; Matrix m; doc["tableCenter"] >> m; return 0; } But I get terminate called after throwing an instance of 'YAML::BadDereference' what(): yaml-cpp: error at line 0, column 0: bad dereference Abort trap I searched around for some documentation for yaml-cpp, but there doesn't seem to be any, aside from a short introductory example on parsing and emitting. Unfortunately, neither of these two help in this particular circumstance. As I understand, the !! indicate that this is a user-defined type, but I don't see with yaml-cpp how to parse that.
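
    A hedged guess based on the old (pre-0.5) yaml-cpp API in use here: the document node is never populated, so indexing it throws BadDereference. Pulling the first document out of the parser before touching it usually looks like this:

        YAML::Parser parser(fin);
        YAML::Node doc;
        parser.GetNextDocument(doc);   // load the first YAML document into doc
        Matrix m;
        doc["tableCenter"] >> m;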

    Read the article

  • What is the proper query to get all the children in a tree?

    - by Nathan Adams
    Lets say I have the following MySQL structure: CREATE TABLE `domains` ( `id` INT(10) UNSIGNED NOT NULL AUTO_INCREMENT, `domain` CHAR(50) NOT NULL, `parent` INT(11) DEFAULT NULL, PRIMARY KEY (`id`) ) ENGINE=MYISAM AUTO_INCREMENT=10 DEFAULT CHARSET=latin1 insert into `domains`(`id`,`domain`,`parent`) values (1,'.com',0); insert into `domains`(`id`,`domain`,`parent`) values (2,'example.com',1); insert into `domains`(`id`,`domain`,`parent`) values (3,'sub1.example.com',2); insert into `domains`(`id`,`domain`,`parent`) values (4,'sub2.example.com',2); insert into `domains`(`id`,`domain`,`parent`) values (5,'s1.sub1.example.com',3); insert into `domains`(`id`,`domain`,`parent`) values (6,'s2.sub1.example.com',3); insert into `domains`(`id`,`domain`,`parent`) values (7,'sx1.s1.sub1.example.com',5); insert into `domains`(`id`,`domain`,`parent`) values (8,'sx2.s2.sub1.example.com',6); insert into `domains`(`id`,`domain`,`parent`) values (9,'x.sub2.example.com',4); In my mind that is enough to emulate a simple tree structure: .com | example / \ sub1 sub2 ect My problem is that give sub1.example.com I want to know all the children of sub1.example.com without using multiple queries in my code. I have tried joining the table to itself and tried to use subqueries, I can't think of anything that will reveal all the children. At work we are using MPTT to keep in hierarchal order a list of domains/subdomains however, I feel that there is an easier way to do it. I did some digging and someone did something similar but they required the use of a function in MySQL. I don't think for something simple like this we would need a whole function. Maybe I am just dumb and not seeing some sort of obvious solution. Also, feel free to alter the structure.

    Read the article

  • I am having the following warning in gcc compilation in 32-bit architecture but not having any such wa

    - by thetna
    symbol.c: In function 'symbol_FPrint': symbol.c:1209: warning: format '%ld' expects type 'long int', but argument 3 has type 'SYMBOL' symbol.c: In function 'symbol_FPrintOtter': symbol.c:1236: warning: format '%ld' expects type 'long int', but argument 3 has type 'SYMBOL' symbol.c:1239: warning: format '%ld' expects type 'long int', but argument 3 has type 'SYMBOL' symbol.c:1243: warning: format '%ld' expects type 'long int', but argument 3 has type 'SYMBOL' symbol.c:1266: warning: format '%ld' expects type 'long int', but argument 3 has type 'SYMBOL' In symbol.c 1198 #ifdef CHECK 1199 else { 1200 misc_StartErrorReport(); 1201 misc_ErrorReport("\n In symbol_FPrint: Cannot print symbol.\n"); 1202 misc_FinishErrorReport(); 1203 } 1204 #endif 1205 } 1206 else if (symbol_SignatureExists()) 1207 fputs(symbol_Name(Symbol), File); 1208 else 1209 fprintf(File, "%ld", Symbol); 1210 } And SYMBOL is defined as: typedef size_t SYMBOL When i replaced '%ld' with '%zu' , i got the following warning: symbol.c: In function 'symbol_FPrint': symbol.c:1209: warning: ISO C90 does not support the 'z' printf length modifier Note: From here it has been edited on 26th of march 2010 and and following problem has beeen added because of its similarity to the above mentioned problem. I have following statement: printf("\n\t %4d:%4d:%4d:%4d:%4d:%s:%d", Index, S->info, S->weight, Precedence[Index],S->props,S->name, S->length); The warning I get while compiling in 64 bit architecture is : format ‘%4d’ expects type ‘int’, but argument 5 has type ‘size_t’ here are the definitions of parameter: NAT props; typedef unsigned int NAT; How can i get rid of this so that i can compile without warning in 32 and 64 bit architecture? What can be its solution?
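
    Since %zu is rejected under C90 and %ld only matches long, a common middle ground (a general workaround, not specific to this code base) is to cast the value to a wide unsigned type whose conversion specifier C90 does know, which silences the warning on both 32- and 64-bit gcc targets:

        /* for the SYMBOL (size_t) case in symbol_FPrint: */
        fprintf(File, "%lu", (unsigned long) Symbol);

    The same pattern applies to the later printf: cast whichever argument gcc flags as size_t (or another unsigned type) to unsigned long and print it with %lu instead of %4d.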

    Read the article

  • how to use gettimeofday() or something equivalent with Visual Studio C++ 2008?

    - by make
    Hi, Could someone please help me to use gettimeofday() function with Visual Studio C++ 2008 on Windows XP? here is a code that I found somewhere on the net: #include < time.h > #include <windows.h> #if defined(_MSC_VER) || defined(_MSC_EXTENSIONS) #define DELTA_EPOCH_IN_MICROSECS 11644473600000000Ui64 #else #define DELTA_EPOCH_IN_MICROSECS 11644473600000000ULL #endif struct timezone { int tz_minuteswest; /* minutes W of Greenwich */ int tz_dsttime; /* type of dst correction */ }; int gettimeofday(struct timeval *tv, struct timezone *tz) { FILETIME ft; unsigned __int64 tmpres = 0; static int tzflag; if (NULL != tv) { GetSystemTimeAsFileTime(&ft); tmpres |= ft.dwHighDateTime; tmpres <<= 32; tmpres |= ft.dwLowDateTime; /*converting file time to unix epoch*/ tmpres -= DELTA_EPOCH_IN_MICROSECS; tmpres /= 10; /*convert into microseconds*/ tv->tv_sec = (long)(tmpres / 1000000UL); tv->tv_usec = (long)(tmpres % 1000000UL); } if (NULL != tz) { if (!tzflag) { _tzset(); tzflag++; } tz->tz_minuteswest = _timezone / 60; tz->tz_dsttime = _daylight; } return 0; } ... // call gettimeofday() gettimeofday(&tv, &tz); tm = localtime(&tv.tv_sec); Last yesr when I tested this code VC++6, it works fine. But now when I use VC++ 2008, I am getting error of exception handling. So is there any idea on how to use gettimeofday or something equivalent? Thanks for your reply and any help would be very appreciated:
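
    A hedged guess about the last two lines rather than the gettimeofday() port itself: under Visual C++ 2008 time_t is 64-bit by default, while tv.tv_sec is a 32-bit long, so handing its address straight to localtime() is no longer valid. Copying into a genuine time_t first keeps both compilers happy:

        time_t seconds = tv.tv_sec;           // widen the 32-bit tv_sec into a real time_t
        struct tm* t = localtime(&seconds);   // then convert as before

    If the message is instead the familiar C4530 "exception handler used, but unwind semantics are not enabled" one, enabling /EHsc under C/C++ > Code Generation is the usual remedy.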

    Read the article

  • "jpeglib.h: No such file or directory" ghostscript port in OPENBSD

    - by holms
    Hello I have a problem with compiling a ghostscript from ports in openbsd 4.7. SO i have jpeg-7 installed, I have latest port tree for obsd4.7. ===> Building for ghostscript-8.63p11 mkdir -p /usr/ports/pobj/ghostscript-8.63p11/ghostscript-8.63/obj gmake LDFLAGS='-L/usr/local/lib -shared' GS_XE=./obj/../obj/libgs.so.11.0 STDIO_IMPLEMENTATION=c DISPLAY_DEV=./obj/../obj/display.dev BINDIR=./obj/../obj GLGENDIR=./obj/../obj GLOBJDIR=./obj/../obj PSGENDIR=./obj/../obj PSOBJDIR=./obj/../obj CFLAGS='-O2 -fno-reorder-blocks -fno-reorder-functions -fomit-frame-pointer -march=i386 -fPIC -Wall -Wstrict-prototypes -Wmissing-declarations -Wmissing-prototypes -fno-builtin -fno-common -DGS_DEVS_SHARED -DGS_DEVS_SHARED_DIR=\"/usr/local/lib/ghostscript/8.63\"' prefix=/usr/local ./obj/../obj/gsc gmake[1]: Entering directory `/usr/ports/pobj/ghostscript-8.63p11/ghostscript-8.63' cc -I./obj/../obj -I./src -DHAVE_MKSTEMP -O2 -fno-reorder-blocks -fno-reorder-functions -fomit-frame-pointer -march=i386 -fPIC -Wall -Wstrict-prototypes -Wmissing-declarations -Wmissing-prototypes -fno-builtin -fno-common -DGS_DEVS_SHARED -DGS_DEVS_SHARED_DIR=\"/usr/local/lib/ghostscript/8.63\" -DGX_COLOR_INDEX_TYPE='unsigned long long' -o ./obj/../obj/sdctc.o -c ./src/sdctc.c In file included from src/sdctc.c:17: obj/jpeglib_.h:1:21: jpeglib.h: No such file or directory In file included from src/sdctc.c:19: src/sdct.h:58: error: field `err' has incomplete type src/sdct.h:70: error: field `err' has incomplete type src/sdct.h:72: error: field `cinfo' has incomplete type src/sdct.h:73: error: field `destination' has incomplete type src/sdct.h:84: error: field `err' has incomplete type src/sdct.h:87: error: field `dinfo' has incomplete type src/sdct.h:88: error: field `source' has incomplete type gmake[1]: *** [obj/../obj/sdctc.o] Error 1 gmake[1]: Leaving directory `/usr/ports/pobj/ghostscript-8.63p11/ghostscript-8.63' gmake: *** [so] Error 2 *** Error code 2 Stop in /usr/ports/print/ghostscript/gnu (line 2225 of /usr/ports/infrastructure/mk/bsd.port.mk). I tried to place one more param in CFLAGS in Makefile with value "-I/usr/local" but no luck =( People in irc [freenode server, #openbsd channel] refuses give any help for ports at all, and even more - because this is 4.7 unstable version. I have my reasons to use this version and ports believe me =) CFLAGS+= -DSYS_TYPES_HAS_STDINT_TYPES \ -I${LOCALBASE}/include \ -I${LOCALBASE}/include/ijs \ -I${LOCALBASE}/include/libpng \

    Read the article

  • What is the best way to send structs containing enum values via sockets in C?

    - by Axel
    I've lots of different structs containing enum members that I have to transmit via TCP/IP. While the communication endpoints are on different operating systems (Windows XP and Linux) meaning different compilers (gcc 4.x.x and MSVC 2008) both program parts share the same header files with type declarations. For performance reasons, the structures should be transmitted directly (see code sample below) without expensively serializing or streaming the members inside. So the question is how to ensure that both compilers use the same internal memory representation for the enumeration members (i.e. both use 32-bit unsigned integers). Or if there is a better way to solve this problem... //type and enum declaration typedef enum { A = 1, B = 2, C = 3 } eParameter; typedef enum { READY = 400, RUNNING = 401, BLOCKED = 402 FINISHED = 403 } eState; #pragma pack(push,1) typedef struct { eParameter mParameter; eState mState; int32_t miSomeValue; uint8_t miAnotherValue; ... } tStateMessage; #pragma pack(pop) //... send via socket tStateMessage msg; send(iSocketFD,(void*)(&msg),sizeof(tStateMessage)); //... receive message on the other side tStateMessage msg_received; recv(iSocketFD,(void*)(&msg_received),sizeof(tStateMessage)); Additionally... Since both endpoints are little endian maschines, endianess is not a problem here. And the pack #pragma solves alignment issues satisfactorily. Thx for your answers, Axel
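
    If relying on two compilers choosing the same enum representation feels fragile, a common compromise (sketched here, not taken from the original code) is to keep the on-the-wire struct made of fixed-width integers and convert at the boundary; the cost is a handful of assignments per message, not a full serialization pass.

        #pragma pack(push,1)
        typedef struct {
            uint32_t mParameter;     /* carries an eParameter value */
            uint32_t mState;         /* carries an eState value */
            int32_t  miSomeValue;
            uint8_t  miAnotherValue;
        } tStateMessageWire;         /* requires <stdint.h> / <cstdint> */
        #pragma pack(pop)

        /* sending side */
        tStateMessageWire wire;
        wire.mParameter     = (uint32_t)msg.mParameter;
        wire.mState         = (uint32_t)msg.mState;
        wire.miSomeValue    = msg.miSomeValue;
        wire.miAnotherValue = msg.miAnotherValue;
        send(iSocketFD, (const char*)&wire, sizeof(wire), 0);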

    Read the article
