Search Results

Search found 15637 results on 626 pages for 'memory efficient'.


  • Why are my Opteron cores running at only 75% capacity each? (25% CPU idle)

    - by Tim Cooper
    We've just taken delivery of a powerful 32-core AMD Opteron server with 128Gb. We have 2 x 6272 CPU's with 16 cores each. We are running a big long-running java task on 30 threads. We have the NUMA optimisations for Linux and java turned on. Our Java threads are mainly using objects that are private to that thread, sometimes reading memory that other threads will be reading, and very very occasionally writing or locking shared objects. We can't explain why the CPU cores are 25% idle. Below is a dump of "top": top - 23:06:38 up 1 day, 23 min, 3 users, load average: 10.84, 10.27, 9.62 Tasks: 676 total, 1 running, 675 sleeping, 0 stopped, 0 zombie Cpu(s): 64.5%us, 1.3%sy, 0.0%ni, 32.9%id, 1.3%wa, 0.0%hi, 0.0%si, 0.0%st Mem: 132138168k total, 131652664k used, 485504k free, 92340k buffers Swap: 5701624k total, 230252k used, 5471372k free, 13444344k cached ... top - 22:37:39 up 23:54, 3 users, load average: 7.83, 8.70, 9.27 Tasks: 678 total, 1 running, 677 sleeping, 0 stopped, 0 zombie Cpu0 : 75.8%us, 2.0%sy, 0.0%ni, 22.2%id, 0.0%wa, 0.0%hi, 0.0%si, 0.0%st Cpu1 : 77.2%us, 1.3%sy, 0.0%ni, 21.5%id, 0.0%wa, 0.0%hi, 0.0%si, 0.0%st Cpu2 : 77.3%us, 1.0%sy, 0.0%ni, 21.7%id, 0.0%wa, 0.0%hi, 0.0%si, 0.0%st Cpu3 : 77.8%us, 1.0%sy, 0.0%ni, 21.2%id, 0.0%wa, 0.0%hi, 0.0%si, 0.0%st Cpu4 : 76.9%us, 2.0%sy, 0.0%ni, 21.1%id, 0.0%wa, 0.0%hi, 0.0%si, 0.0%st Cpu5 : 76.3%us, 2.0%sy, 0.0%ni, 21.7%id, 0.0%wa, 0.0%hi, 0.0%si, 0.0%st Cpu6 : 12.6%us, 3.0%sy, 0.0%ni, 84.4%id, 0.0%wa, 0.0%hi, 0.0%si, 0.0%st Cpu7 : 8.6%us, 2.0%sy, 0.0%ni, 89.4%id, 0.0%wa, 0.0%hi, 0.0%si, 0.0%st Cpu8 : 77.0%us, 2.0%sy, 0.0%ni, 21.1%id, 0.0%wa, 0.0%hi, 0.0%si, 0.0%st Cpu9 : 77.0%us, 2.0%sy, 0.0%ni, 21.1%id, 0.0%wa, 0.0%hi, 0.0%si, 0.0%st Cpu10 : 77.6%us, 1.7%sy, 0.0%ni, 20.8%id, 0.0%wa, 0.0%hi, 0.0%si, 0.0%st Cpu11 : 75.7%us, 2.0%sy, 0.0%ni, 21.4%id, 1.0%wa, 0.0%hi, 0.0%si, 0.0%st Cpu12 : 76.6%us, 2.3%sy, 0.0%ni, 21.1%id, 0.0%wa, 0.0%hi, 0.0%si, 0.0%st Cpu13 : 76.6%us, 2.3%sy, 0.0%ni, 21.1%id, 0.0%wa, 0.0%hi, 0.0%si, 0.0%st Cpu14 : 76.2%us, 2.6%sy, 0.0%ni, 15.9%id, 5.3%wa, 0.0%hi, 0.0%si, 0.0%st Cpu15 : 76.6%us, 2.0%sy, 0.0%ni, 21.5%id, 0.0%wa, 0.0%hi, 0.0%si, 0.0%st Cpu16 : 73.6%us, 2.6%sy, 0.0%ni, 23.8%id, 0.0%wa, 0.0%hi, 0.0%si, 0.0%st Cpu17 : 74.5%us, 2.3%sy, 0.0%ni, 23.2%id, 0.0%wa, 0.0%hi, 0.0%si, 0.0%st Cpu18 : 73.9%us, 2.3%sy, 0.0%ni, 23.8%id, 0.0%wa, 0.0%hi, 0.0%si, 0.0%st Cpu19 : 72.9%us, 2.6%sy, 0.0%ni, 24.4%id, 0.0%wa, 0.0%hi, 0.0%si, 0.0%st Cpu20 : 72.8%us, 2.6%sy, 0.0%ni, 24.5%id, 0.0%wa, 0.0%hi, 0.0%si, 0.0%st Cpu21 : 72.7%us, 2.3%sy, 0.0%ni, 25.0%id, 0.0%wa, 0.0%hi, 0.0%si, 0.0%st Cpu22 : 72.5%us, 2.6%sy, 0.0%ni, 24.8%id, 0.0%wa, 0.0%hi, 0.0%si, 0.0%st Cpu23 : 73.0%us, 2.3%sy, 0.0%ni, 24.7%id, 0.0%wa, 0.0%hi, 0.0%si, 0.0%st Cpu24 : 74.7%us, 2.7%sy, 0.0%ni, 22.7%id, 0.0%wa, 0.0%hi, 0.0%si, 0.0%st Cpu25 : 74.5%us, 2.6%sy, 0.0%ni, 22.8%id, 0.0%wa, 0.0%hi, 0.0%si, 0.0%st Cpu26 : 73.7%us, 2.0%sy, 0.0%ni, 24.3%id, 0.0%wa, 0.0%hi, 0.0%si, 0.0%st Cpu27 : 74.1%us, 2.3%sy, 0.0%ni, 23.6%id, 0.0%wa, 0.0%hi, 0.0%si, 0.0%st Cpu28 : 74.1%us, 2.3%sy, 0.0%ni, 23.6%id, 0.0%wa, 0.0%hi, 0.0%si, 0.0%st Cpu29 : 74.0%us, 2.0%sy, 0.0%ni, 24.0%id, 0.0%wa, 0.0%hi, 0.0%si, 0.0%st Cpu30 : 73.2%us, 2.3%sy, 0.0%ni, 24.5%id, 0.0%wa, 0.0%hi, 0.0%si, 0.0%st Cpu31 : 73.1%us, 2.0%sy, 0.0%ni, 24.9%id, 0.0%wa, 0.0%hi, 0.0%si, 0.0%st Mem: 132138168k total, 131711704k used, 426464k free, 88336k buffers Swap: 5701624k total, 229572k used, 5472052k free, 13745596k cached PID USER PR NI VIRT RES SHR S %CPU %MEM TIME+ COMMAND 13865 root 20 0 122g 112g 3.1g S 2334.3 
89.6 20726:49 java 27139 jayen 20 0 15428 1728 952 S 2.6 0.0 0:04.21 top 27161 sysadmin 20 0 15428 1712 940 R 1.0 0.0 0:00.28 top 33 root 20 0 0 0 0 S 0.3 0.0 0:06.24 ksoftirqd/7 131 root 20 0 0 0 0 S 0.3 0.0 0:09.52 events/0 1858 root 20 0 0 0 0 S 0.3 0.0 1:35.14 kondemand/0 A dump of the Java stack confirms that none of the threads are anywhere near the few places where locks are used, nor anywhere near any disk or network I/O. I had trouble finding a clear explanation of what top means by "idle" versus "wait", but I get the impression that "idle" means "no more threads that need to be run", which doesn't make sense in our case: we're using an Executors.newFixedThreadPool(30), there are a large number of tasks pending, and each task lasts about 10 seconds. I suspect the explanation requires a good understanding of NUMA. Is the "idle" state what you see when a CPU is waiting for a non-local memory access? If not, then what is the explanation?

    Read the article

  • OpenGL textures trigger error 1281 and strange background behavior

    - by user3714670
    I am using SOIL to apply textures to VBOs, without textures i could change the background and display black (default color) vbos easily, but now with textures, openGL is giving an error 1281, the background is black and some textures are not applied. There must be something i didn't understand about applying/loading the textures. BUt the texture IS applied (nothing else is working though), the error is applied when i try to use the shader program however i checked the compilation of these and no problems were written. Here is the code i use to load textures, once loaded it is kept in memory, it mostly comes from the example of SOIL : texture = SOIL_load_OGL_single_cubemap( filename, SOIL_DDS_CUBEMAP_FACE_ORDER, SOIL_LOAD_AUTO, SOIL_CREATE_NEW_ID, SOIL_FLAG_POWER_OF_TWO | SOIL_FLAG_MIPMAPS | SOIL_FLAG_DDS_LOAD_DIRECT ); if( texture > 0 ) { glEnable( GL_TEXTURE_CUBE_MAP ); glEnable( GL_TEXTURE_GEN_S ); glEnable( GL_TEXTURE_GEN_T ); glEnable( GL_TEXTURE_GEN_R ); glTexGeni( GL_S, GL_TEXTURE_GEN_MODE, GL_REFLECTION_MAP ); glTexGeni( GL_T, GL_TEXTURE_GEN_MODE, GL_REFLECTION_MAP ); glTexGeni( GL_R, GL_TEXTURE_GEN_MODE, GL_REFLECTION_MAP ); glBindTexture( GL_TEXTURE_CUBE_MAP, texture ); std::cout << "the loaded single cube map ID was " << texture << std::endl; } else { std::cout << "Attempting to load as a HDR texture" << std::endl; texture = SOIL_load_OGL_HDR_texture( filename, SOIL_HDR_RGBdivA2, 0, SOIL_CREATE_NEW_ID, SOIL_FLAG_POWER_OF_TWO | SOIL_FLAG_MIPMAPS ); if( texture < 1 ) { std::cout << "Attempting to load as a simple 2D texture" << std::endl; texture = SOIL_load_OGL_texture( filename, SOIL_LOAD_AUTO, SOIL_CREATE_NEW_ID, SOIL_FLAG_POWER_OF_TWO | SOIL_FLAG_MIPMAPS | SOIL_FLAG_DDS_LOAD_DIRECT ); } if( texture > 0 ) { // enable texturing glEnable( GL_TEXTURE_2D ); // bind an OpenGL texture ID glBindTexture( GL_TEXTURE_2D, texture ); std::cout << "the loaded texture ID was " << texture << std::endl; } else { glDisable( GL_TEXTURE_2D ); std::cout << "Texture loading failed: '" << SOIL_last_result() << "'" << std::endl; } } and how i apply it when drawing : GLuint TextureID = glGetUniformLocation(shaderProgram, "myTextureSampler"); if(!TextureID) cout << "TextureID not found ..." << endl; // glEnableVertexAttribArray(TextureID); glActiveTexture(GL_TEXTURE0); if(SFML) sf::Texture::bind(sfml_texture); else { glBindTexture (GL_TEXTURE_2D, texture); // glTexImage2D(GL_TEXTURE_2D, 0, GL_RGB, 1024, 768, 0, GL_RGB, GL_UNSIGNED_BYTE, &texture); } glUniform1i(TextureID, 0); I am not sure that SOIL is adapted to my program as i want something as simple as possible (i used sfml's texture object which was the best but i can't anymore), but if i can get it to work it would be great. 
EDIT : After narrowing the code implied by the error, here is the code that provokes it, it is called between texture loading and bos drawing : glEnableClientState(GL_VERTEX_ARRAY); //this gives the error : glUseProgram(this->shaderProgram); if (!shaderLoaded) { std::cout << "Loading default shaders" << std::endl; if(textured) loadShaderProgramm(texture_vertexSource, texture_fragmentSource); else loadShaderProgramm(default_vertexSource,default_fragmentSource); } glm::mat4 Projection = camera->getPerspective(); glm::mat4 View = camera->getView(); glm::mat4 Model = glm::mat4(1.0f); Model[0][0] *= scale_x; Model[1][1] *= scale_y; Model[2][2] *= scale_z; glm::vec3 translate_vec(this->x,this->y,this->z); glm::mat4 object_transform = glm::translate(glm::mat4(1.0f),translate_vec); glm::quat rotation = QAccumulative.getQuat(); glm::mat4 matrix_rotation = glm::mat4_cast(rotation); object_transform *= matrix_rotation; Model *= object_transform; glm::mat4 MVP = Projection * View * Model; GLuint ModelID = glGetUniformLocation(this->shaderProgram, "M"); if(ModelID ==-1) cout << "ModelID not found ..." << endl; GLuint MatrixID = glGetUniformLocation(this->shaderProgram, "MVP"); if(MatrixID ==-1) cout << "MatrixID not found ..." << endl; GLuint ViewID = glGetUniformLocation(this->shaderProgram, "V"); if(ViewID ==-1) cout << "ViewID not found ..." << endl; glUniformMatrix4fv(MatrixID, 1, GL_FALSE, &MVP[0][0]); glUniformMatrix4fv(ModelID, 1, GL_FALSE, &Model[0][0]); glUniformMatrix4fv(ViewID, 1, GL_FALSE, &View[0][0]); drawObject();
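
    One thing worth checking, based purely on the snippets above: glGetUniformLocation returns a GLint, where -1 means "not found" and 0 is a perfectly valid location, so storing the result in a GLuint and testing it with if(!TextureID) can both miss failures and flag false ones. Below is a minimal, hedged sketch of how the uniform lookup and the call that reportedly raises error 1281 (GL_INVALID_VALUE) might be instrumented; shaderProgram and texture are the names from the question, while the GLEW include and everything else are illustrative assumptions, not a drop-in fix.

      // Hedged sketch: check uniform locations as signed ints and drain the GL
      // error queue around the call that reportedly raises 1281.
      #include <GL/glew.h>
      #include <cstdio>

      static void checkGl(const char *where)
      {
          // Report every pending error so the failing call can be pinned down.
          for (GLenum err = glGetError(); err != GL_NO_ERROR; err = glGetError())
              std::fprintf(stderr, "GL error 0x%04X after %s\n", err, where);
      }

      static void bindSamplerAndDraw(GLuint shaderProgram, GLuint texture)
      {
          glUseProgram(shaderProgram);      // GL_INVALID_VALUE (1281) here means the
          checkGl("glUseProgram");          // handle is neither 0 nor a GL-generated id

          // GLint, not GLuint: -1 means "not active / optimised out", 0 is valid.
          GLint samplerLoc = glGetUniformLocation(shaderProgram, "myTextureSampler");
          if (samplerLoc == -1)
              std::fprintf(stderr, "myTextureSampler is not active in this program\n");

          glActiveTexture(GL_TEXTURE0);
          glBindTexture(GL_TEXTURE_2D, texture);
          if (samplerLoc != -1)
              glUniform1i(samplerLoc, 0);   // sampler reads from texture unit 0
          checkGl("texture bind / glUniform1i");
      }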

    Read the article

  • Test of a procedure works fine, but calling it from a menu gives uninitialized-value errors (C)

    - by Delfic
    The language is portuguese, but I think you get the picture. My main calls only the menu function (the function in comment is the test which works). In the menu i introduce the option 1 which calls the same function. But there's something wrong. If i test it solely on the input: (1/1)x^2 //it reads the polinomyal (2/1) //reads the rational and returns 4 (you can guess what it does, calculates the value of an instace of x over a rational) My polinomyals are linear linked lists with a coeficient (rational) and a degree (int) int main () { menu_interactivo (); // instanciacao (); return 0; } void menu_interactivo(void) { int i; do{ printf("1. Instanciacao de um polinomio com um escalar\n"); printf("2. Multiplicacao de um polinomio por um escalar\n"); printf("3. Soma de dois polinomios\n"); printf("4. Multiplicacao de dois polinomios\n"); printf("5. Divisao de dois polinomios\n"); printf("0. Sair\n"); scanf ("%d", &i); switch (i) { case 0: exit(0); break; case 1: instanciacao (); break; case 2: multiplicacao_esc (); break; case 3: somar_pol (); break; case 4: multiplicacao_pol (); break; case 5: divisao_pol (); break; default:printf("O numero introduzido nao e valido!\n"); } } while (i != 0); } When i call it with the menu, with the same input, it does not stop reading the polinomyal (I know this because it does not ask me for the rational as on the other example) I've run it with valgrind --track-origins=yes returning the following: ==17482== Memcheck, a memory error detector ==17482== Copyright (C) 2002-2009, and GNU GPL'd, by Julian Seward et al. ==17482== Using Valgrind-3.5.0 and LibVEX; rerun with -h for copyright info ==17482== Command: ./teste ==17482== 1. Instanciacao de um polinomio com um escalar 2. Multiplicacao de um polinomio por um escalar 3. Soma de dois polinomios 4. Multiplicacao de dois polinomios 5. Divisao de dois polinomios 0. Sair 1 Introduza um polinomio na forma (n0/d0)x^e0 + (n1/d1)x^e1 + ... + (nk/dk)^ek, com ei > e(i+1): (1/1)x^2 ==17482== Conditional jump or move depends on uninitialised value(s) ==17482== at 0x401126: simplifica_f (fraccoes.c:53) ==17482== by 0x4010CB: le_f (fraccoes.c:30) ==17482== by 0x400CDA: le_pol (polinomios.c:156) ==17482== by 0x400817: instanciacao (t4.c:14) ==17482== by 0x40098C: menu_interactivo (t4.c:68) ==17482== by 0x4009BF: main (t4.c:86) ==17482== Uninitialised value was created by a stack allocation ==17482== at 0x401048: le_f (fraccoes.c:19) ==17482== ==17482== Conditional jump or move depends on uninitialised value(s) ==17482== at 0x400D03: le_pol (polinomios.c:163) ==17482== by 0x400817: instanciacao (t4.c:14) ==17482== by 0x40098C: menu_interactivo (t4.c:68) ==17482== by 0x4009BF: main (t4.c:86) ==17482== Uninitialised value was created by a stack allocation ==17482== at 0x401048: le_f (fraccoes.c:19) ==17482== I will now give you the functions which are called void le_pol (pol *p) { fraccao f; int e; char c; printf ("Introduza um polinomio na forma (n0/d0)x^e0 + (n1/d1)x^e1 + ... 
+ (nk/dk)^ek,\n"); printf("com ei > e(i+1):\n"); *p = NULL; do { le_f (&f); getchar(); getchar(); scanf ("%d", &e); if (f.n != 0) *p = add (*p, f, e); c = getchar (); if (c != '\n') { getchar(); getchar(); } } while (c != '\n'); } void instanciacao (void) { pol p1; fraccao f; le_pol (&p1); printf ("Insira uma fraccao na forma (n/d):\n"); le_f (&f); escreve_f(inst_esc_pol(p1, f)); } void le_f (fraccao *f) { int n, d; getchar (); scanf ("%d", &n); getchar (); scanf ("%d", &d); getchar (); assert (d != 0); *f = simplifica_f(cria_f(n, d)); } simplifica_f simplifies a rational and cria_f creates a rationa given the numerator and the denominator Can someone help me please? Thanks in advance. If you want me to provide some tests, just post it. See ya.
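
    A likely reason for the difference between the two call paths, judging from the valgrind trace: le_f reads the fraction with a fixed getchar/scanf sequence, and in the menu path the newline left behind by the menu's scanf("%d", &i) shifts that sequence by one character, so the scanf calls fail and n and d are used uninitialised. A hedged sketch of a more forgiving reader follows (written as C-style code; fraccao, cria_f and simplifica_f are the names from the question, and the stub definitions here are placeholders only so the fragment compiles on its own):

      #include <cassert>
      #include <cstdio>

      struct fraccao { int n, d; };                          // placeholder for the real type
      static fraccao cria_f(int n, int d) { fraccao f; f.n = n; f.d = d; return f; }
      static fraccao simplifica_f(fraccao f) { return f; }   // placeholder

      // Reads "(n/d)". The leading space in the format string skips any pending
      // whitespace (including a newline left over from a previous scanf), and the
      // return value is checked so n and d are never used uninitialised.
      static int le_f(fraccao *f)
      {
          int n, d;
          if (std::scanf(" (%d/%d)", &n, &d) != 2)
              return 0;                                      // let the caller decide how to recover
          assert(d != 0);
          *f = simplifica_f(cria_f(n, d));
          return 1;
      }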

    Read the article

  • EXC_BAD_ACCESS at UITableView on iOS

    - by Suprie
    Hi all, When scrolling through table, my application crash and console said it was EXC_BAD_ACCESS. I've look everywhere, and people suggest me to use NSZombieEnabled on my executables environment variables. I've set NSZombieEnabled, NSDebugEnabled, MallocStackLogging and MallocStackLoggingNoCompact to YES on my executables. But apparently i still can't figure out which part of my program that cause EXC_BAD_ACCESS. This is what my console said [Session started at 2010-12-21 21:11:21 +0700.] GNU gdb 6.3.50-20050815 (Apple version gdb-1510) (Wed Sep 22 02:45:02 UTC 2010) Copyright 2004 Free Software Foundation, Inc. GDB is free software, covered by the GNU General Public License, and you are welcome to change it and/or distribute copies of it under certain conditions. Type "show copying" to see the conditions. There is absolutely no warranty for GDB. Type "show warranty" for details. This GDB was configured as "x86_64-apple-darwin".sharedlibrary apply-load-rules all Attaching to process 9335. TwitterSearch(9335) malloc: recording malloc stacks to disk using standard recorder TwitterSearch(9335) malloc: process 9300 no longer exists, stack logs deleted from /tmp/stack-logs.9300.TwitterSearch.suirlR.index TwitterSearch(9335) malloc: stack logs being written into /tmp/stack- logs.9335.TwitterSearch.tQJAXk.index 2010-12-21 21:11:25.446 TwitterSearch[9335:207] View Did Load Program received signal: “EXC_BAD_ACCESS”. And this is when i tried to type backtrace on gdb : Program received signal: “EXC_BAD_ACCESS”. (gdb) backtrace #0 0x00f20a67 in objc_msgSend () #1 0x0565cd80 in ?? () #2 0x0033b7fa in -[UITableView(UITableViewInternal) _createPreparedCellForGlobalRow:withIndexPath:] () #3 0x0033177f in -[UITableView(UITableViewInternal) _createPreparedCellForGlobalRow:] () #4 0x00346450 in -[UITableView(_UITableViewPrivate) _updateVisibleCellsNow:] () #5 0x0033e538 in -[UITableView layoutSubviews] () #6 0x01ffc451 in -[CALayer layoutSublayers] () #7 0x01ffc17c in CALayerLayoutIfNeeded () #8 0x01ff537c in CA::Context::commit_transaction () #9 0x01ff50d0 in CA::Transaction::commit () #10 0x020257d5 in CA::Transaction::observer_callback () #11 0x00d9ffbb in __CFRUNLOOP_IS_CALLING_OUT_TO_AN_OBSERVER_CALLBACK_FUNCTION__ () #12 0x00d350e7 in __CFRunLoopDoObservers () #13 0x00cfdbd7 in __CFRunLoopRun () #14 0x00cfd240 in CFRunLoopRunSpecific () #15 0x00cfd161 in CFRunLoopRunInMode () #16 0x01a73268 in GSEventRunModal () #17 0x01a7332d in GSEventRun () #18 0x002d642e in UIApplicationMain () #19 0x00001d4e in main (argc=1, argv=0xbfffee34) at /Users/suprie/Documents/Projects/Self/cocoa/TwitterSearch/main.m:14 I really appreciate for any clue to help me debug my application. EDIT this is the Header file of table #import <UIKit/UIKit.h> @interface TwitterTableViewController : UITableViewController { NSMutableArray *twitters; } @property(nonatomic,retain) NSMutableArray *twitters; @end and the implementation file #import "TwitterTableViewController.h" @implementation TwitterTableViewController @synthesize twitters; #pragma mark - #pragma mark Table view data source - (NSInteger)numberOfSectionsInTableView:(UITableView *)tableView { // Return the number of sections. return 1; } - (NSInteger)tableView:(UITableView *)tableView numberOfRowsInSection:(NSInteger)section { // Return the number of rows in the section. 
return [twitters count]; } - (CGFloat)tableView:(UITableView *)tableView heightForRowAtIndexPath:(NSIndexPath *)indexPath { return 90.0f; } - (UITableViewCell *)tableView:(UITableView *)tableView cellForRowAtIndexPath:(NSIndexPath *)indexPath { const NSInteger TAG_IMAGE_VIEW = 1001; const NSInteger TAG_TWEET_VIEW = 1002; const NSInteger TAG_FROM_VIEW = 1003; static NSString *CellIdentifier = @"Cell"; UIImageView *imageView; UILabel *tweet; UILabel *from; UITableViewCell *cell = [tableView dequeueReusableCellWithIdentifier:CellIdentifier]; if (cell == nil) { cell = [[[UITableViewCell alloc] initWithStyle:UITableViewCellStyleDefault reuseIdentifier:CellIdentifier] autorelease]; // Image imageView = [[[[UIImageView alloc] initWithFrame:CGRectMake(5.0f, 5.0f, 60.0f, 60.0f)] autorelease] retain]; [cell.contentView addSubview:imageView]; imageView.tag = TAG_IMAGE_VIEW; // Tweet tweet = [[[UILabel alloc] initWithFrame:CGRectMake(105.0f, 5.0f, 200.0f, 50.0f)] autorelease]; [cell.contentView addSubview:tweet]; tweet.tag = TAG_TWEET_VIEW; tweet.numberOfLines = 2; tweet.font = [UIFont fontWithName:@"Helvetica" size:12]; tweet.textColor = [UIColor blackColor]; tweet.backgroundColor = [UIColor clearColor]; // From from = [[[UILabel alloc] initWithFrame:CGRectMake(105.0f, 55.0, 200.0f, 35.0f)] autorelease]; [cell.contentView addSubview:from]; from.tag = TAG_FROM_VIEW; from.numberOfLines = 1; from.font = [UIFont fontWithName:@"Helvetica" size:10]; from.textColor = [UIColor blackColor]; from.backgroundColor = [UIColor clearColor]; } // Configure the cell... NSMutableDictionary *twitter = [twitters objectAtIndex:(NSInteger) indexPath.row]; // cell.text = [twitter objectForKey:@"text"]; tweet.text = (NSString *) [twitter objectForKey:@"text"]; tweet.hidden = NO; from.text = (NSString *) [twitter objectForKey:@"from_user"]; from.hidden = NO; NSString *avatar_url = (NSString *)[twitter objectForKey:@"profile_image_url"]; NSData * imageData = [[NSData alloc] initWithContentsOfURL: [NSURL URLWithString: avatar_url]]; imageView.image = [UIImage imageWithData: imageData]; imageView.hidden = NO; return cell; } #pragma mark - #pragma mark Table view delegate - (void)tableView:(UITableView *)tableView didSelectRowAtIndexPath:(NSIndexPath *)indexPath { NSMutableDictionary *twitter = [twitters objectAtIndex:(NSInteger)indexPath.row]; NSLog(@"Twit ini kepilih :%@", [twitter objectForKey:@"text"]); } #pragma mark - #pragma mark Memory management - (void)didReceiveMemoryWarning { // Releases the view if it doesn't have a superview. [super didReceiveMemoryWarning]; } - (void)viewDidUnload { } - (void)dealloc { [super dealloc]; } @end

    Read the article

  • Array subscript is not an integer

    - by Dimitri
    Hello folks, following this previous question Malloc Memory Corruption in C, now i have another problem. I have the same code. Now I am trying to multiply the values contained in the arrays A * vc and store in res. Then A is set to zero and i do a second multiplication with res and vc and i store the values in A. (A and Q are square matrices and mc and vc are N lines two columns matrices or arrays). Here is my code : int jacobi_gpu(double A[], double Q[], double tol, long int dim){ int nrot, p, q, k, tid; double c, s; double *mc, *vc, *res; int i,kc; double vc1, vc2; mc = (double *)malloc(2 * dim * sizeof(double)); vc = (double *)malloc(2 * dim * sizeof(double)); vc = (double *)malloc(dim * dim * sizeof(double)); if( mc == NULL || vc == NULL){ fprintf(stderr, "pb allocation matricre\n"); exit(1); } nrot = 0; for(k = 0; k < dim - 1; k++){ eye(mc, dim); eye(vc, dim); for(tid = 0; tid < floor(dim /2); tid++){ p = (tid + k)%(dim - 1); if(tid != 0) q = (dim - tid + k - 1)%(dim - 1); else q = dim - 1; printf("p = %d | q = %d\n", p, q); if(fabs(A[p + q*dim]) > tol){ nrot++; symschur2(A, dim, p, q, &c, &s); mc[2*tid] = p; vc[2 * tid] = c; mc[2*tid + 1] = q; vc[2*tid + 1] = -s; mc[2*tid + 2*(dim - 2*tid) - 2] = p; vc[2*tid + 2*(dim - 2*tid) - 2 ] = s; mc[2*tid + 2*(dim - 2*tid) - 1] = q; vc[2 * tid + 2*(dim - 2*tid) - 1 ] = c; } } for( i = 0; i< dim; i++){ for(kc=0; kc < dim; kc++){ if( kc < floor(dim/2)) { vc1 = vc[2*kc + i*dim]; vc2 = vc[2*kc + 2*(dim - 2*kc) - 2]; }else { vc1 = vc[2*kc+1 + i*dim]; vc2 = vc[2*kc - 2*(dim - 2*kc) - 1]; } res[kc + i*dim] = A[mc[2*kc] + i*dim]*vc1 + A[mc[2*kc + 1] + i*dim]*vc2; } } zero(A, dim); for( i = 0; i< dim; i++){ for(kc=0; kc < dim; k++){ if( k < floor(dim/2)){ vc1 = vc[2*kc + i*dim]; vc2 = vc[2*kc + 2*(dim - 2*kc) - 2]; }else { vc1 = vc[2*kc+1 + i*dim]; vc2 = vc[2*kc - 2*(dim - 2*kc) - 1]; } A[kc + i*dim] = res[mc[2*kc] + i*dim]*vc1 + res[mc[2*kc + 1] + i*dim]*vc2; } } affiche(mc,dim,2,"Matrice creuse"); affiche(vc,dim,2,"Valeur creuse"); } free(mc); free(vc); free(res); return nrot; } When i try to compile, i have this error : jacobi_gpu.c: In function ‘jacobi_gpu’: jacobi_gpu.c:103: error: array subscript is not an integer jacobi_gpu.c:103: error: array subscript is not an integer jacobi_gpu.c:118: error: array subscript is not an integer jacobi_gpu.c:118: error: array subscript is not an integer make: *** [jacobi_gpu.o] Erreur 1 The corresponding lines are where I store the results in res and A : res[kc + i*dim] = A[mc[2*kc] + i*dim]*vc1 + A[mc[2*kc + 1] + i*dim]*vc2; and A[kc + i*dim] = res[mc[2*kc] + i*dim]*vc1 + res[mc[2*kc + 1] + i*dim]*vc2; Can someone explain me what is this error and how can i correct it? Thanks for your help. ;)
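
    For what it's worth, the message itself is about types: mc is declared as double *, so mc[2*kc] is a double, and C only accepts integral expressions inside []. A hedged sketch of the two usual fixes, reduced to the offending assignment (the names mirror the question; the surrounding loops and allocations are left out):

      // Option 1: cast the double-valued index to an integral type at the use site.
      void fix_with_cast(double *A, double *res, const double *mc,
                         double vc1, double vc2, int kc, int i, long dim)
      {
          res[kc + i * dim] = A[(int)mc[2 * kc] + i * dim] * vc1
                            + A[(int)mc[2 * kc + 1] + i * dim] * vc2;
      }

      // Option 2 (usually cleaner): mc only ever stores the indices p and q, so
      // declaring it as an int array removes the need for any cast.
      void fix_with_int_indices(double *A, double *res, const int *mc,
                                double vc1, double vc2, int kc, int i, long dim)
      {
          res[kc + i * dim] = A[mc[2 * kc] + i * dim] * vc1
                            + A[mc[2 * kc + 1] + i * dim] * vc2;
      }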

    Read the article

  • Database file is inexplicably locked during SQLite commit

    - by sweeney
    Hello, I'm performing a large number of INSERTS to a SQLite database. I'm using just one thread. I batch the writes to improve performance and have a bit of security in case of a crash. Basically I cache up a bunch of data in memory and then when I deem appropriate, I loop over all of that data and perform the INSERTS. The code for this is shown below: public void Commit() { using (SQLiteConnection conn = new SQLiteConnection(this.connString)) { conn.Open(); using (SQLiteTransaction trans = conn.BeginTransaction()) { using (SQLiteCommand command = conn.CreateCommand()) { command.CommandText = "INSERT OR IGNORE INTO [MY_TABLE] (col1, col2) VALUES (?,?)"; command.Parameters.Add(this.col1Param); command.Parameters.Add(this.col2Param); foreach (Data o in this.dataTemp) { this.col1Param.Value = o.Col1Prop; this. col2Param.Value = o.Col2Prop; command.ExecuteNonQuery(); } } this.TryHandleCommit(trans); } conn.Close(); } } I now employ the following gimmick to get the thing to eventually work: private void TryHandleCommit(SQLiteTransaction trans) { try { trans.Commit(); } catch (Exception e) { Console.WriteLine("Trying again..."); this.TryHandleCommit(trans); } } I create my DB like so: public DataBase(String path) { //build connection string SQLiteConnectionStringBuilder connString = new SQLiteConnectionStringBuilder(); connString.DataSource = path; connString.Version = 3; connString.DefaultTimeout = 5; connString.JournalMode = SQLiteJournalModeEnum.Persist; connString.UseUTF16Encoding = true; using (connection = new SQLiteConnection(connString.ToString())) { //check for existence of db FileInfo f = new FileInfo(path); if (!f.Exists) //build new blank db { SQLiteConnection.CreateFile(path); connection.Open(); using (SQLiteTransaction trans = connection.BeginTransaction()) { using (SQLiteCommand command = connection.CreateCommand()) { command.CommandText = DataBase.CREATE_MATCHES; command.ExecuteNonQuery(); command.CommandText = DataBase.CREATE_STRING_DATA; command.ExecuteNonQuery(); //TODO add logging } trans.Commit(); } connection.Close(); } } } I then export the connection string and use it to obtain new connections in different parts of the program. At seemingly random intervals, though at far too great a rate to ignore or otherwise workaround this problem, I get unhandled SQLiteException: Database file is locked. This occurs when I attempt to commit the transaction. No errors seem to occur prior to then. This does not always happen. Sometimes the whole thing runs without a hitch. No reads are being performed on these files before the commits finish. I have the very latest SQLite binary. I'm compiling for .NET 2.0. I'm using VS 2008. The db is a local file. All of this activity is encapsulated within one thread / process. Virus protection is off (though I think that was only relevant if you were connecting over a network?). As per Scotsman's post I have implemented the following changes: Journal Mode set to Persist DB files stored in C:\Docs + Settings\ApplicationData via System.Windows.Forms.Application.AppData windows call No inner exception Witnessed on two distinct machines (albeit very similar hardware and software) Have been running Process Monitor - no extraneous processes are attaching themselves to the DB files - the problem is definitely in my code... Does anyone have any idea whats going on here? I know I just dropped a whole mess of code, but I've been trying to figure this out for way too long. My thanks to anyone who makes it to the end of this question! 
brian UPDATES: Thanks for the suggestions so far! I've implemented many of the suggested changes, and I feel we are getting closer to the answer; however, the code above technically works but is non-deterministic. It is not guaranteed to do anything aside from spinning in neutral forever. In practice it seems to succeed somewhere between the 1st and 10th iteration. If I batch my commits at a reasonable interval the damage will be mitigated, but I really do not want to leave things in this state... More suggestions welcome!

    Read the article

  • Trying to reduce the speed overhead of an almost-but-not-quite-int number class

    - by Fumiyo Eda
    I have implemented a C++ class which behaves very similarly to the standard int type. The difference is that it has an additional concept of "epsilon" which represents some tiny value that is much less than 1, but greater than 0. One way to think of it is as a very wide fixed point number with 32 MSBs (the integer parts), 32 LSBs (the epsilon parts) and a huge sea of zeros in between. The following class works, but introduces a ~2x speed penalty in the overall program. (The program includes code that has nothing to do with this class, so the actual speed penalty of this class is probably much greater than 2x.) I can't paste the code that is using this class, but I can say the following: +, -, +=, <, > and >= are the only heavily used operators. Use of setEpsilon() and getInt() is extremely rare. * is also rare, and does not even need to consider the epsilon values at all. Here is the class: #include <limits> struct int32Uepsilon { typedef int32Uepsilon Self; int32Uepsilon () { _value = 0; _eps = 0; } int32Uepsilon (const int &i) { _value = i; _eps = 0; } void setEpsilon() { _eps = 1; } Self operator+(const Self &rhs) const { Self result = *this; result._value += rhs._value; result._eps += rhs._eps; return result; } Self operator-(const Self &rhs) const { Self result = *this; result._value -= rhs._value; result._eps -= rhs._eps; return result; } Self operator-( ) const { Self result = *this; result._value = -result._value; result._eps = -result._eps; return result; } Self operator*(const Self &rhs) const { return this->getInt() * rhs.getInt(); } // XXX: discards epsilon bool operator<(const Self &rhs) const { return (_value < rhs._value) || (_value == rhs._value && _eps < rhs._eps); } bool operator>(const Self &rhs) const { return (_value > rhs._value) || (_value == rhs._value && _eps > rhs._eps); } bool operator>=(const Self &rhs) const { return (_value >= rhs._value) || (_value == rhs._value && _eps >= rhs._eps); } Self &operator+=(const Self &rhs) { this->_value += rhs._value; this->_eps += rhs._eps; return *this; } Self &operator-=(const Self &rhs) { this->_value -= rhs._value; this->_eps -= rhs._eps; return *this; } int getInt() const { return(_value); } private: int _value; int _eps; }; namespace std { template<> struct numeric_limits<int32Uepsilon> { static const bool is_signed = true; static int max() { return 2147483647; } } }; The code above works, but it is quite slow. Does anyone have any ideas on how to improve performance? There are a few hints/details I can give that might be helpful: 32 bits are definitely insufficient to hold both _value and _eps. In practice, up to 24 ~ 28 bits of _value are used and up to 20 bits of _eps are used. I could not measure a significant performance difference between using int32_t and int64_t, so memory overhead itself is probably not the problem here. Saturating addition/subtraction on _eps would be cool, but isn't really necessary. Note that the signs of _value and _eps are not necessarily the same! This broke my first attempt at speeding this class up. Inline assembly is no problem, so long as it works with GCC on a Core i7 system running Linux!
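
    Since +, -, += and the comparisons dominate, one direction that is often suggested for this kind of class (a sketch only, and it leans on the limits stated above of roughly 28 bits for _value and 20 bits for _eps) is to keep the pair in a single signed 64-bit word as value * 2^32 + eps: addition, subtraction and ordering each become one machine operation, and the mixed-sign case falls out naturally because a negative epsilon simply borrows from the integer field. Whether this actually buys back the 2x in the full program is something only profiling can answer.

      #include <cstdint>

      // Sketch of a packed representation: v == value * 2^32 + eps. With |value|
      // around 2^28 and |eps| around 2^20 (as stated in the question) nothing can
      // overflow, and signed 64-bit ordering matches the original lexicographic
      // (value, eps) ordering exactly.
      struct Int32UepsilonPacked {
          int64_t v;

          Int32UepsilonPacked() : v(0) {}
          Int32UepsilonPacked(int i) : v(int64_t(i) * (int64_t(1) << 32)) {}

          void setEpsilon() {
              int32_t eps = int32_t(uint32_t(v));   // current epsilon field
              v += int64_t(1) - eps;                // force it to exactly 1, as in the original
          }

          // One 64-bit add/sub replaces two 32-bit ones.
          Int32UepsilonPacked operator+(Int32UepsilonPacked r) const { r.v += v; return r; }
          Int32UepsilonPacked operator-(Int32UepsilonPacked r) const { r.v = v - r.v; return r; }
          Int32UepsilonPacked &operator+=(Int32UepsilonPacked r) { v += r.v; return *this; }
          Int32UepsilonPacked &operator-=(Int32UepsilonPacked r) { v -= r.v; return *this; }

          bool operator<(Int32UepsilonPacked r) const { return v < r.v; }
          bool operator>(Int32UepsilonPacked r) const { return v > r.v; }
          bool operator>=(Int32UepsilonPacked r) const { return v >= r.v; }

          int getInt() const {
              // Strip the (small, possibly negative) epsilon field first so the
              // integer part comes back exactly, then divide out the scale.
              int32_t eps = int32_t(uint32_t(v));
              return int((v - eps) / (int64_t(1) << 32));
          }
          // Unary minus and the epsilon-discarding operator* follow the same pattern.
      };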

    Read the article

  • Decryption key value does not match

    - by Jitendra Jadav
    public class TrippleENCRSPDESCSP { public TrippleENCRSPDESCSP() { } public void EncryptIt(string sData,ref byte[] sEncData,ref byte[] Key1,ref byte[] Key2) { try { // Create a new TripleDESCryptoServiceProvider object // to generate a key and initialization vector (IV). TripleDESCryptoServiceProvider tDESalg = new TripleDESCryptoServiceProvider(); // Create a string to encrypt. // Encrypt the string to an in-memory buffer. byte[] Data = EncryptTextToMemory(sData,tDESalg.Key,tDESalg.IV); sEncData = Data; Key1 = tDESalg.Key; Key2 = tDESalg.IV; } catch (Exception) { throw; } } public string DecryptIt(byte[] sEncData) { //byte[] toEncrypt = new ASCIIEncoding().GetBytes(sEncData); //XElement xParser = null; //XmlDocument xDoc = new XmlDocument(); try { //string Final = ""; string sPwd = null; string sKey1 = null; string sKey2 = null; //System.Text.ASCIIEncoding encoding = new System.Text.ASCIIEncoding(); string soutxml = ""; //soutxml = encoding.GetString(sEncData); soutxml = ASCIIEncoding.ASCII.GetString(sEncData); sPwd = soutxml.Substring(18, soutxml.LastIndexOf("</EncPwd>") - 18); sKey1 = soutxml.Substring(18 + sPwd.Length + 15, soutxml.LastIndexOf("</Key1>") - (18 + sPwd.Length + 15)); sKey2 = soutxml.Substring(18 + sPwd.Length + 15 + sKey1.Length + 13, soutxml.LastIndexOf("</Key2>") - (18 + sPwd.Length + 15 + sKey1.Length + 13)); //xDoc.LoadXml(soutxml); //xParser = XElement.Parse(soutxml); //IEnumerable<XElement> elemsValidations = // from el in xParser.Elements("EmailPwd") // select el; #region OldCode //XmlNodeList objXmlNode = xDoc.SelectNodes("EmailPwd"); //foreach (XmlNode xmllist in objXmlNode) //{ // XmlNode xmlsubnode; // xmlsubnode = xmllist.SelectSingleNode("EncPwd"); // xmlsubnode = xmllist.SelectSingleNode("Key1"); // xmlsubnode = xmllist.SelectSingleNode("Key2"); //} #endregion //foreach (XElement elemValidation in elemsValidations) //{ // sPwd = elemValidation.Element("EncPwd").Value; // sKey1 = elemValidation.Element("Key1").Value; // sKey2 = elemValidation.Element("Key2").Value; //} //byte[] Key1 = encoding.GetBytes(sKey1); //byte[] Key2 = encoding.GetBytes(sKey2); //byte[] Data = encoding.GetBytes(sPwd); byte[] Key1 = ASCIIEncoding.ASCII.GetBytes(sKey1); byte[] Key2 = ASCIIEncoding.ASCII.GetBytes(sKey2); byte[] Data = ASCIIEncoding.ASCII.GetBytes(sPwd); // Decrypt the buffer back to a string. string Final = DecryptTextFromMemory(Data, Key1, Key2); return Final; } catch (Exception) { throw; } } public static byte[] EncryptTextToMemory(string Data,byte[] Key,byte[] IV) { try { // Create a MemoryStream. MemoryStream mStream = new MemoryStream(); // Create a CryptoStream using the MemoryStream // and the passed key and initialization vector (IV). CryptoStream cStream = new CryptoStream(mStream, new TripleDESCryptoServiceProvider().CreateEncryptor(Key, IV), CryptoStreamMode.Write); // Convert the passed string to a byte array. //byte[] toEncrypt = new ASCIIEncoding().GetBytes(Data); byte[] toEncrypt = ASCIIEncoding.ASCII.GetBytes(Data); // Write the byte array to the crypto stream and flush it. cStream.Write(toEncrypt, 0, toEncrypt.Length); cStream.FlushFinalBlock(); // Get an array of bytes from the // MemoryStream that holds the // encrypted data. byte[] ret = mStream.ToArray(); // Close the streams. cStream.Close(); mStream.Close(); // Return the encrypted buffer. 
return ret; } catch (CryptographicException e) { MessageBox.Show("A Cryptographic error occurred: {0}", e.Message); return null; } } public static string DecryptTextFromMemory(byte[] Data, byte[] Key, byte[] IV) { try { // Create a new MemoryStream using the passed // array of encrypted data. MemoryStream msDecrypt = new MemoryStream(Data); // Create a CryptoStream using the MemoryStream // and the passed key and initialization vector (IV). CryptoStream csDecrypt = new CryptoStream(msDecrypt, new TripleDESCryptoServiceProvider().CreateDecryptor(Key, IV), CryptoStreamMode.Write); csDecrypt.Write(Data, 0, Data.Length); //csDecrypt.FlushFinalBlock(); msDecrypt.Position = 0; // Create buffer to hold the decrypted data. byte[] fromEncrypt = new byte[msDecrypt.Length]; // Read the decrypted data out of the crypto stream // and place it into the temporary buffer. msDecrypt.Read(fromEncrypt, 0, msDecrypt.ToArray().Length); //csDecrypt.Close(); MessageBox.Show(ASCIIEncoding.ASCII.GetString(fromEncrypt)); //Convert the buffer into a string and return it. return new ASCIIEncoding().GetString(fromEncrypt); } catch (CryptographicException e) { MessageBox.Show("A Cryptographic error occurred: {0}", e.Message); return null; } } }

    Read the article

  • ASP.NET MVC 1.0 dynamic images returned from controller taking 154+ seconds to display in IE8 (Firefox is fine)

    - by julian guppy
    I have a curious problem with IE, IIS 6.0 dynamic PNG files and I am baffled as to how to fix.. Snippet from Helper (this returns the URL to the view for requesting the images from my Controller. string url = LinkBuilder.BuildUrlFromExpression(helper.ViewContext.RequestContext, helper.RouteCollection, c = c.FixHeight(ir.Filename, ir.AltText, "FFFFFF")); url = url.Replace("&", "&"); sb.Append(string.Format("<removed id=\"TheImage\" src=\"{0}\" alt=\"\" /", url)+Environment.NewLine); This produces a piece of html as follows:- img id="TheImage" src="/ImgText/FixHeight?sFile=Images%2FUser%2FJulianGuppy%2FMediums%2Fconservatory.jpg&backgroundColour=FFFFFF" alt="" / brackets missing because i cant post an image... even though I dont want to post an image I jsut want to post the markup... sigh Snippet from Controller ImgTextController /// <summary> /// This function fixes the height of the image /// </summary> /// <param name="sFile"></param> /// <param name="alternateText"></param> /// <param name="backgroundColour"></param> /// <returns></returns> [AcceptVerbs(HttpVerbs.Get)] public ActionResult FixHeight(string sFile, string alternateText, string backgroundColour) { #region File if (string.IsNullOrEmpty(sFile)) { return new ImgTextResult(); } // MVC specific change to prepend the new directory if (sFile.IndexOf("Content") == -1) { sFile = "~/Content/" + sFile; } // open the file System.Drawing.Image img; try { img = System.Drawing.Image.FromFile(Server.MapPath(sFile)); } catch { img = null; } // did we fail? if (img == null) { return new ImgTextResult(); } #endregion File #region Width // Sort out the width from the image passed to me Int32 nWidth = img.Width; #endregion Width #region Height Int32 nHeight = img.Height; #endregion Height // What is the ideal height given a width of 2100 this should be 1400. var nIdealHeight = (int)(nWidth / 1.40920096852); // So is the actual height of the image already greater than the ideal height? Int32 nSplit; if (nIdealHeight < nHeight) { // Yes, do nothing, well i need to return the iamge... nSplit = 0; } else { // rob wants to not show the white at the top or bottom, so if we were to crop the image how would be do it // 1. Calculate what the width should be If we dont adjust the heigt var newIdealWidth = (int)(nHeight * 1.40920096852); // 2. This newIdealWidth should be smaller than the existing width... so work out the split on that Int32 newSplit = (nWidth - newIdealWidth) / 2; // 3. Now recrop the image using 0-nHeight as the height (i.e. 
full height) // but crop the sides so that its the correct aspect ration var newRect = new Rectangle(newSplit, 0, newIdealWidth, nHeight); img = CropImage(img, newRect); nHeight = img.Height; nWidth = img.Width; nSplit = 0; } // No, so I want to place this image on a larger canvas and we do this by Creating a new image to be the size that we want System.Drawing.Image canvas = new Bitmap(nWidth, nIdealHeight, PixelFormat.Format24bppRgb); Graphics g = Graphics.FromImage(canvas); #region Color // Whilst we can set the background colour we shall default to white if (string.IsNullOrEmpty(backgroundColour)) { backgroundColour = "FFFFFF"; } Color bc = ColorTranslator.FromHtml("#" + backgroundColour); #endregion Color // Filling the background (which gives us our broder) Brush backgroundBrush = new SolidBrush(bc); g.FillRectangle(backgroundBrush, -1, -1, nWidth + 1, nIdealHeight + 1); // draw the image at the position var rect = new Rectangle(0, nSplit, nWidth, nHeight); g.DrawImage(img, rect); return new ImgTextResult { Image = canvas, ImageFormat = ImageFormat.Png }; } My ImgTextResult is a class that returns an Action result for me but embedding the image from a memory stream into the response.outputstream. snippet from my ImageResults /// <summary> /// Execute the result /// </summary> /// <param name="context"></param> public override void ExecuteResult(ControllerContext context) { // output context.HttpContext.Response.Clear(); context.HttpContext.Response.ContentType = "image/png"; try { var memStream = new MemoryStream(); Image.Save(memStream, ImageFormat.Png); context.HttpContext.Response.BinaryWrite(memStream.ToArray()); context.HttpContext.Response.Flush(); context.HttpContext.Response.Close(); memStream.Dispose(); Image.Dispose(); } catch (Exception ex) { string a = ex.Message; } } Now all of this works locally and lovely, and indeed all of this works on my production server BUT Only for Firefox, Safari, Chrome (and other browsers) IE has a fit and decides that it either wont display the image or it does display the image after approx 154seconds of waiting..... I have made sure my HTML is XHTML compliant, I have made sure I am getting no Routing errors or crashes in my event log on the server.... Now obviously I have been a muppet and have done something wrong... but what I cant fathom is why in development all works fine, and in production all non IE browsers also work fine, but IE 8 using IIS 6.0 production server is having some kind of problem in returning this PNG and I dont have an error to trace... so what I am looking for is guidance as to how I can debug this problem.

    Read the article

  • Core Animation bad access on device

    - by user1595102
    I'm trying to do a frame by frame animation with CAlayers. I'm doing this with this tutorial http://mysterycoconut.com/blog/2011/01/cag1/ but everything works with disable ARC, when I'm try to rewrite code with ARC, it's works on simulator perfectly but on device I got a bad access memory. Layer Class interface #import <Foundation/Foundation.h> #import <QuartzCore/QuartzCore.h> @interface MCSpriteLayer : CALayer { unsigned int sampleIndex; } // SampleIndex needs to be > 0 @property (readwrite, nonatomic) unsigned int sampleIndex; // For use with sample rects set by the delegate + (id)layerWithImage:(CGImageRef)img; - (id)initWithImage:(CGImageRef)img; // If all samples are the same size + (id)layerWithImage:(CGImageRef)img sampleSize:(CGSize)size; - (id)initWithImage:(CGImageRef)img sampleSize:(CGSize)size; // Use this method instead of sprite.sampleIndex to obtain the index currently displayed on screen - (unsigned int)currentSampleIndex; @end Layer Class implementation @implementation MCSpriteLayer @synthesize sampleIndex; - (id)initWithImage:(CGImageRef)img; { self = [super init]; if (self != nil) { self.contents = (__bridge id)img; sampleIndex = 1; } return self; } + (id)layerWithImage:(CGImageRef)img; { MCSpriteLayer *layer = [(MCSpriteLayer*)[self alloc] initWithImage:img]; return layer ; } - (id)initWithImage:(CGImageRef)img sampleSize:(CGSize)size; { self = [self initWithImage:img]; if (self != nil) { CGSize sampleSizeNormalized = CGSizeMake(size.width/CGImageGetWidth(img), size.height/CGImageGetHeight(img)); self.bounds = CGRectMake( 0, 0, size.width, size.height ); self.contentsRect = CGRectMake( 0, 0, sampleSizeNormalized.width, sampleSizeNormalized.height ); } return self; } + (id)layerWithImage:(CGImageRef)img sampleSize:(CGSize)size; { MCSpriteLayer *layer = [[self alloc] initWithImage:img sampleSize:size]; return layer; } + (BOOL)needsDisplayForKey:(NSString *)key; { return [key isEqualToString:@"sampleIndex"]; } // contentsRect or bounds changes are not animated + (id < CAAction >)defaultActionForKey:(NSString *)aKey; { if ([aKey isEqualToString:@"contentsRect"] || [aKey isEqualToString:@"bounds"]) return (id < CAAction >)[NSNull null]; return [super defaultActionForKey:aKey]; } - (unsigned int)currentSampleIndex; { return ((MCSpriteLayer*)[self presentationLayer]).sampleIndex; } // Implement displayLayer: on the delegate to override how sample rectangles are calculated; remember to use currentSampleIndex, ignore sampleIndex == 0, and set the layer's bounds - (void)display; { if ([self.delegate respondsToSelector:@selector(displayLayer:)]) { [self.delegate displayLayer:self]; return; } unsigned int currentSampleIndex = [self currentSampleIndex]; if (!currentSampleIndex) return; CGSize sampleSize = self.contentsRect.size; self.contentsRect = CGRectMake( ((currentSampleIndex - 1) % (int)(1/sampleSize.width)) * sampleSize.width, ((currentSampleIndex - 1) / (int)(1/sampleSize.width)) * sampleSize.height, sampleSize.width, sampleSize.height ); } @end I create the layer on viewDidAppear and start animate by clicking on button, but after init I got a bad access error -(void)viewDidAppear:(BOOL)animated { [super viewDidAppear:animated]; NSString *path = [[NSBundle mainBundle] pathForResource:@"mama_default.png" ofType:nil]; CGImageRef richterImg = [UIImage imageWithContentsOfFile:path].CGImage; CGSize fixedSize = animacja.frame.size; NSLog(@"wid: %f, heigh: %f", animacja.frame.size.width, animacja.frame.size.height); NSLog(@"%f", animacja.frame.size.width); richter = 
[MCSpriteLayer layerWithImage:richterImg sampleSize:fixedSize]; animacja.hidden = 1; richter.position = animacja.center; [self.view.layer addSublayer:richter]; } -(IBAction)animacja:(id)sender { if ([richter animationForKey:@"sampleIndex"]) {NSLog(@"jest"); } if (! [richter animationForKey:@"sampleIndex"]) { CABasicAnimation *anim = [CABasicAnimation animationWithKeyPath:@"sampleIndex"]; anim.fromValue = [NSNumber numberWithInt:0]; anim.toValue = [NSNumber numberWithInt:22]; anim.duration = 4; anim.repeatCount = 1; [richter addAnimation:anim forKey:@"sampleIndex"]; } } Have you got any idea how to fix it? Thanks a lot.

    Read the article

  • I'd like to know why a function executes fine when called from x but not when called from y

    - by Roland
    When called from archive(), readcont(char *filename) executes fine! Called from runoptions() though, it fails to list the files "archived"! why is this? The program must run in terminal. Use -h as a parameter to view the usage. This program is written to "archive" text files into ".rldzip" files. readcont( char *x) should show the files archived in file (*x) a) Successful call Use the program to archive 3 text files: rldzip.exe a.txt b.txt c.txt FILEXY -a archive() will call readcont and it will work showing the files archived after the binary FILEXY will be created. b) Unsuccessful call After the file is created, use: rldzip.exe FILEXY.rldzip -v You can see that the function crashes! I'd like to know why is this happening! /* Sa se scrie un program care: a) arhiveaza fisiere b) dezarhiveaza fisierele athivate */ #include<stdio.h> #include<stdlib.h> #include<conio.h> #include<string.h> struct content{ char *text; char *flname; }*arc; FILE *f; void readcont(char *x){ FILE *p; if((p = fopen(x, "rb")) == NULL){ perror("Critical error: "); exit(EXIT_FAILURE); } content aux; int i; fread(&i, sizeof(int), 1, p); printf("\nFiles in %s \n\n", x); while(i-- >1 && fread(&aux, sizeof(struct content), 1, p) != 0) printf("%s \n", aux.flname); fclose(p); printf("\n\n"); } void archive(int argc, char **argv){ int i; char inttext[5000], textline[1000]; //Allocate dynamic memory for the content to be archived! arc = (content*)malloc(argc * sizeof(content)); for(i=1; i< argc; i++) { if((f = fopen(argv[i], "r")) == NULL){ printf("%s: ", argv[i]); perror(""); exit(EXIT_FAILURE); } while(!feof(f)){ fgets(textline, 5000, f); strcat(inttext, textline); } arc[i-1].text = (char*)malloc(strlen(inttext) + 1); strcpy(arc[i-1].text, inttext); arc[i-1].flname = (char*)malloc(strlen(argv[i]) + 1); strcpy(arc[i-1].flname, argv[i]); fclose(f); } char *filen; filen=(char*)malloc(strlen(argv[argc])+1+7); strcpy(filen, argv[argc]); strcat(filen, ".rldzip"); f = fopen(filen, "wb"); fwrite(&argc, sizeof(int), 1, f); fwrite(arc, sizeof(content), argc, f); fclose(f); printf("Success! "); for(i=1; i< argc; i++) { (i==argc-1)? printf("and %s ", argv[i]) : printf("%s ", argv[i]); } printf("compressed into %s", filen); readcont(filen); free(filen); } void help(char *v){ printf("\n\n----------------------RLDZIP----------------------\n\nUsage: \n\n Archive n files: \n\n%s $file[1] $file[2] ... $file[n] $output -a\n\nExample:\n%s a.txt b.txt c.txt output -a\n\n\n\nView files:\n\n %s $file.rldzip -v\n\nExample:\n %s fileE.rldzip -v\n\n", v, v, v, v); } void runoptions(int c, char **v){ int i; if(c < 2){ printf("Arguments missing! Use -h for help"); } else{ for(i=0; i<c; i++) if(strcmp(v[i], "-h") == 0){ help(v[0]); exit(2); } for(i=0; i<c; i++) if(strcmp(v[i], "-v") == 0){ if(c != 3){ printf("Arguments misused! Use -h for help"); exit(2); } else { printf("-%s-", v[1]); readcont(v[1]); } } } if(strcmp(v[c-1], "-a") == 0) archive(c-2, v); } main(int argc, char **argv) { runoptions(argc, argv); }
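
    One detail in the code above explains the symptom rather well: struct content holds char * pointers, and archive() writes those structs to disk verbatim with fwrite, so the .rldzip file contains the pointer values rather than the text and filenames they point to. readcont() therefore appears to work when it is called from archive() (same process, the heap blocks are still alive) and fails when the program is started again with -v and the pointers no longer refer to anything. A hedged sketch of a serialisation that survives across runs follows (length-prefixed strings; the names are illustrative and error handling is trimmed):

      #include <cstdio>
      #include <cstdlib>
      #include <cstring>

      // Write one string as <length><bytes> so the file no longer depends on
      // pointer values that only exist inside the writing process.
      void write_string(FILE *f, const char *s)
      {
          size_t len = std::strlen(s);
          std::fwrite(&len, sizeof len, 1, f);
          std::fwrite(s, 1, len, f);
      }

      // Read it back; the caller owns (and must free) the returned buffer.
      char *read_string(FILE *f)
      {
          size_t len = 0;
          if (std::fread(&len, sizeof len, 1, f) != 1)
              return NULL;
          char *s = (char *)std::malloc(len + 1);
          if (s == NULL || std::fread(s, 1, len, f) != len) {
              std::free(s);
              return NULL;
          }
          s[len] = '\0';
          return s;
      }

      // Archive layout sketch: file count, then each file's name and contents.
      void write_archive(FILE *f, int nfiles, char **names, char **texts)
      {
          std::fwrite(&nfiles, sizeof nfiles, 1, f);
          for (int i = 0; i < nfiles; i++) {
              write_string(f, names[i]);
              write_string(f, texts[i]);
          }
      }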

    Read the article

  • Odd optimization problem under MSVC

    - by Goz
    I've seen this blog: http://igoro.com/archive/gallery-of-processor-cache-effects/ The "weirdness" in part 7 is what caught my interest. My first thought was "Thats just C# being weird". Its not I wrote the following C++ code. volatile int* p = (volatile int*)_aligned_malloc( sizeof( int ) * 8, 64 ); memset( (void*)p, 0, sizeof( int ) * 8 ); double dStart = t.GetTime(); for (int i = 0; i < 200000000; i++) { //p[0]++;p[1]++;p[2]++;p[3]++; // Option 1 //p[0]++;p[2]++;p[4]++;p[6]++; // Option 2 p[0]++;p[2]++; // Option 3 } double dTime = t.GetTime() - dStart; The timing I get on my 2.4 Ghz Core 2 Quad go as follows: Option 1 = ~8 cycles per loop. Option 2 = ~4 cycles per loop. Option 3 = ~6 cycles per loop. Now This is confusing. My reasoning behind the difference comes down to the cache write latency (3 cycles) on my chip and an assumption that the cache has a 128-bit write port (This is pure guess work on my part). On that basis in Option 1: It will increment p[0] (1 cycle) then increment p[2] (1 cycle) then it has to wait 1 cycle (for cache) then p[1] (1 cycle) then wait 1 cycle (for cache) then p[3] (1 cycle). Finally 2 cycles for increment and jump (Though its usually implemented as decrement and jump). This gives a total of 8 cycles. In Option 2: It can increment p[0] and p[4] in one cycle then increment p[2] and p[6] in another cycle. Then 2 cycles for subtract and jump. No waits needed on cache. Total 4 cycles. In option 3: It can increment p[0] then has to wait 2 cycles then increment p[2] then subtract and jump. The problem is if you set case 3 to increment p[0] and p[4] it STILL takes 6 cycles (which kinda blows my 128-bit read/write port out of the water). So ... can anyone tell me what the hell is going on here? Why DOES case 3 take longer? Also I'd love to know what I've got wrong in my thinking above, as i obviously have something wrong! Any ideas would be much appreciated! :) It'd also be interesting to see how GCC or any other compiler copes with it as well! Edit: Jerry Coffin's idea gave me some thoughts. I've done some more tests (on a different machine so forgive the change in timings) with and without nops and with different counts of nops case 2 - 0.46 00401ABD jne (401AB0h) 0 nops - 0.68 00401AB7 jne (401AB0h) 1 nop - 0.61 00401AB8 jne (401AB0h) 2 nops - 0.636 00401AB9 jne (401AB0h) 3 nops - 0.632 00401ABA jne (401AB0h) 4 nops - 0.66 00401ABB jne (401AB0h) 5 nops - 0.52 00401ABC jne (401AB0h) 6 nops - 0.46 00401ABD jne (401AB0h) 7 nops - 0.46 00401ABE jne (401AB0h) 8 nops - 0.46 00401ABF jne (401AB0h) 9 nops - 0.55 00401AC0 jne (401AB0h) I've included the jump statetements so you can see that the source and destination are in one cache line. You can also see that we start to get a difference when we are 13 bytes or more apart. Until we hit 16 ... then it all goes wrong. So Jerry isn't right (though his suggestion DOES help a bit), however something IS going on. I'm more and more intrigued to try and figure out what it is now. It does appear to be more some sort of memory alignment oddity rather than some sort of instruction throughput oddity. Anyone want to explain this for an inquisitive mind? :D Edit 3: Interjay has a point on the unrolling that blows the previous edit out of the water. With an unrolled loop the performance does not improve. You need to add a nop in to make the gap between jump source and destination the same as for my good nop count above. Performance still sucks. Its interesting that I need 6 nops to improve performance though. 
I wonder how many nops the processor can issue per cycle? If it's 3, then that accounts for the cache write latency... but if that's it, why is the latency occurring at all? Curiouser and curiouser...
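
    One experiment that can help separate a genuine same-cache-line effect from front-end or store-forwarding effects: repeat option 3 with the two counters forced onto separate 64-byte lines. If the timing moves, the shared line matters; if it does not, the bottleneck is more likely in the issue/store machinery. A small hedged sketch (MSVC-specific alignment, timing left to the t.GetTime() harness already used above):

      // Each counter gets its own 64-byte cache line; __declspec(align(64)) pads
      // sizeof(PaddedCounter) up to 64, so counters[0] and counters[1] never share.
      struct __declspec(align(64)) PaddedCounter {
          volatile int value;
      };

      static PaddedCounter counters[2];

      void run_option3_padded()
      {
          for (int i = 0; i < 200000000; i++) {
              counters[0].value++;   // stands in for p[0]++
              counters[1].value++;   // stands in for p[2]++
          }
      }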

    Read the article

  • How to handle failure to release a resource which is contained in a smart pointer?

    - by cj
    How should an error during resource deallocation be handled, when the object representing the resource is contained in a shared pointer? Smart pointers are a useful tool to manage resources safely. Examples of such resources are memory, disk files, database connections, or network connections. // open a connection to the local HTTP port boost::shared_ptr<Socket> socket = Socket::connect("localhost:80"); In a typical scenario, the class encapsulating the resource should be noncopyable and polymorphic. A good way to support this is to provide a factory method returning a shared pointer, and declare all constructors non-public. The shared pointers can now be copied from and assigned to freely. The object is automatically destroyed when no reference to it remains, and the destructor then releases the resource. /** A TCP/IP connection. */ class Socket { public: static boost::shared_ptr<Socket> connect(const std::string& address); virtual ~Socket(); protected: Socket(const std::string& address); private: // not implemented Socket(const Socket&); Socket& operator=(const Socket&); }; But there is a problem with this approach. The destructor must not throw, so a failure to release the resource will remain undetected. A common way out of this problem is to add a public method to release the resource. class Socket { public: virtual void close(); // may throw // ... }; Unfortunately, this approach introduces another problem: Our objects may now contain resources which have already been released. This complicates the implementation of the resource class. Even worse, it makes it possible for clients of the class to use it incorrectly. The following example may seem far-fetched, but it is a common pitfall in multi-threaded code. socket->close(); // ... size_t nread = socket->read(&buffer[0], buffer.size()); // wrong use! Either we ensure that the resource is not released before the object is destroyed, thereby losing any way to deal with a failed resource deallocation. Or we provide a way to release the resource explicitly during the object's lifetime, thereby making it possible to use the resource class incorrectly. There is a way out of this dilemma. But the solution involves using a modified shared pointer class. These modifications are likely to be controversial. Typical shared pointer implementations, such as boost::shared_ptr, require that no exception be thrown when their object's destructor is called. Generally, no destructor should ever throw, so this is a reasonable requirement. These implementations also allow a custom deleter function to be specified, which is called in lieu of the destructor when no reference to the object remains. The no-throw requirement is extended to this custom deleter function. The rationale for this requirement is clear: The shared pointer's destructor must not throw. If the deleter function does not throw, nor will the shared pointer's destructor. However, the same holds for other member functions of the shared pointer which lead to resource deallocation, e.g. reset(): If resource deallocation fails, no exception can be thrown. The solution proposed here is to allow custom deleter functions to throw. This means that the modified shared pointer's destructor must catch exceptions thrown by the deleter function. On the other hand, member functions other than the destructor, e.g. reset(), shall not catch exceptions of the deleter function (and their implementation becomes somewhat more complicated). 
Here is the original example, using a throwing deleter function: /** A TCP/IP connection. */ class Socket { public: static SharedPtr<Socket> connect(const std::string& address); protected: Socket(const std::string& address); virtual Socket() { } private: struct Deleter; // not implemented Socket(const Socket&); Socket& operator=(const Socket&); }; struct Socket::Deleter { void operator()(Socket* socket) { // Close the connection. If an error occurs, delete the socket // and throw an exception. delete socket; } }; SharedPtr<Socket> Socket::connect(const std::string& address) { return SharedPtr<Socket>(new Socket(address), Deleter()); } We can now use reset() to free the resource explicitly. If there is still a reference to the resource in another thread or another part of the program, calling reset() will only decrement the reference count. If this is the last reference to the resource, the resource is released. If resource deallocation fails, an exception is thrown. SharedPtr<Socket> socket = Socket::connect("localhost:80"); // ... socket.reset();
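
    For contrast with the modified SharedPtr proposed above: with an off-the-shelf boost::shared_ptr or std::shared_ptr the deleter must not throw, so the usual compromise is a deleter that catches the failure and reports it through a side channel (a log, a callback, an error flag) instead of to the caller. A minimal hedged sketch of that baseline, assuming close() is the operation that can fail; make_closing_shared is an illustrative helper, not part of any library:

      #include <iostream>
      #include <memory>

      // Standard-conforming custom deleter: exceptions from close() must not
      // escape, so a failed release can only be reported out-of-band.
      template <class Resource>
      std::shared_ptr<Resource> make_closing_shared(Resource *raw)
      {
          return std::shared_ptr<Resource>(raw, [](Resource *r) {
              try {
                  r->close();                       // may throw
              } catch (const std::exception &e) {
                  std::cerr << "release failed: " << e.what() << '\n';
              }
              delete r;
          });
      }

    The proposal in this question is precisely about lifting that limitation so reset() can surface the failure; the sketch only shows what today's shared pointers force you into.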

    Read the article

  • What could be the reason for continuous Full GCs during application startup?

    - by Kumar225
    What could be the reason for continuous Full GC's during application (webapplication deployed on tomcat) startup? JDK 1.6 Memory settings -Xms1024M -Xmx1024M -XX:PermSize=200M -XX:MaxPermSize=512M -XX:+UseParallelOldGC jmap output is below Heap Configuration: MinHeapFreeRatio = 40 MaxHeapFreeRatio = 70 MaxHeapSize = 1073741824 (1024.0MB) NewSize = 2686976 (2.5625MB) MaxNewSize = 17592186044415 MB OldSize = 5439488 (5.1875MB) NewRatio = 2 SurvivorRatio = 8 PermSize = 209715200 (200.0MB) MaxPermSize = 536870912 (512.0MB) 0.194: [GC [PSYoungGen: 10489K->720K(305856K)] 10489K->720K(1004928K), 0.0061190 secs] [Times: user=0.01 sys=0.00, real=0.00 secs] 0.200: [Full GC (System) [PSYoungGen: 720K->0K(305856K)] [ParOldGen: 0K->594K(699072K)] 720K->594K(1004928K) [PSPermGen: 6645K->6641K(204800K)], 0.0516540 secs] [Times: user=0.10 sys=0.00, real=0.06 secs] 6.184: [GC [PSYoungGen: 262208K->14797K(305856K)] 262802K->15392K(1004928K), 0.0354510 secs] [Times: user=0.18 sys=0.04, real=0.03 secs] 9.549: [GC [PSYoungGen: 277005K->43625K(305856K)] 277600K->60736K(1004928K), 0.0781960 secs] [Times: user=0.56 sys=0.07, real=0.08 secs] 11.768: [GC [PSYoungGen: 305833K->43645K(305856K)] 322944K->67436K(1004928K), 0.0584750 secs] [Times: user=0.40 sys=0.05, real=0.06 secs] 15.037: [GC [PSYoungGen: 305853K->43619K(305856K)] 329644K->72932K(1004928K), 0.0688340 secs] [Times: user=0.42 sys=0.01, real=0.07 secs] 19.372: [GC [PSYoungGen: 273171K->43621K(305856K)] 302483K->76957K(1004928K), 0.0573890 secs] [Times: user=0.41 sys=0.01, real=0.06 secs] 19.430: [Full GC (System) [PSYoungGen: 43621K->0K(305856K)] [ParOldGen: 33336K->54668K(699072K)] 76957K->54668K(1004928K) [PSPermGen: 36356K->36296K(204800K)], 0.4569500 secs] [Times: user=1.77 sys=0.02, real=0.46 secs] 19.924: [GC [PSYoungGen: 4280K->128K(305856K)] 58949K->54796K(1004928K), 0.0041070 secs] [Times: user=0.01 sys=0.00, real=0.01 secs] 19.928: [Full GC (System) [PSYoungGen: 128K->0K(305856K)] [ParOldGen: 54668K->54532K(699072K)] 54796K->54532K(1004928K) [PSPermGen: 36300K->36300K(204800K)], 0.3531480 secs] [Times: user=1.19 sys=0.10, real=0.35 secs] 20.284: [GC [PSYoungGen: 4280K->64K(305856K)] 58813K->54596K(1004928K), 0.0040580 secs] [Times: user=0.01 sys=0.00, real=0.01 secs] 20.288: [Full GC (System) [PSYoungGen: 64K->0K(305856K)] [ParOldGen: 54532K->54532K(699072K)] 54596K->54532K(1004928K) [PSPermGen: 36300K->36300K(204800K)], 0.2360580 secs] [Times: user=1.01 sys=0.01, real=0.24 secs] 20.525: [GC [PSYoungGen: 4280K->96K(305856K)] 58813K->54628K(1004928K), 0.0030960 secs] [Times: user=0.00 sys=0.00, real=0.00 secs] 20.528: [Full GC (System) [PSYoungGen: 96K->0K(305856K)] [ParOldGen: 54532K->54533K(699072K)] 54628K->54533K(1004928K) [PSPermGen: 36300K->36300K(204800K)], 0.2311320 secs] [Times: user=0.88 sys=0.00, real=0.23 secs] 20.760: [GC [PSYoungGen: 4280K->96K(305856K)] 58814K->54629K(1004928K), 0.0034940 secs] [Times: user=0.00 sys=0.00, real=0.00 secs] 20.764: [Full GC (System) [PSYoungGen: 96K->0K(305856K)] [ParOldGen: 54533K->54533K(699072K)] 54629K->54533K(1004928K) [PSPermGen: 36300K->36300K(204800K)], 0.2381600 secs] [Times: user=0.85 sys=0.01, real=0.24 secs] 21.201: [GC [PSYoungGen: 5160K->354K(305856K)] 59694K->54888K(1004928K), 0.0019950 secs] [Times: user=0.00 sys=0.00, real=0.00 secs] 21.204: [Full GC (System) [PSYoungGen: 354K->0K(305856K)] [ParOldGen: 54533K->54792K(699072K)] 54888K->54792K(1004928K) [PSPermGen: 36300K->36300K(204800K)], 0.2358570 secs] [Times: user=0.98 sys=0.01, real=0.24 secs] 21.442: [GC [PSYoungGen: 
4280K->64K(305856K)] 59073K->54856K(1004928K), 0.0022190 secs] [Times: user=0.01 sys=0.00, real=0.01 secs] 21.444: [Full GC (System) [PSYoungGen: 64K->0K(305856K)] [ParOldGen: 54792K->54792K(699072K)] 54856K->54792K(1004928K) [PSPermGen: 36300K->36300K(204800K)], 0.2475970 secs] [Times: user=0.95 sys=0.00, real=0.24 secs] 21.773: [GC [PSYoungGen: 11200K->741K(305856K)] 65993K->55534K(1004928K), 0.0030230 secs] [Times: user=0.01 sys=0.00, real=0.01 secs] 21.776: [Full GC (System) [PSYoungGen: 741K->0K(305856K)] [ParOldGen: 54792K->54376K(699072K)] 55534K->54376K(1004928K) [PSPermGen: 36538K->36537K(204800K)], 0.2550630 secs] [Times: user=1.05 sys=0.00, real=0.25 secs] 22.033: [GC [PSYoungGen: 4280K->96K(305856K)] 58657K->54472K(1004928K), 0.0032130 secs] [Times: user=0.00 sys=0.00, real=0.01 secs] 22.036: [Full GC (System) [PSYoungGen: 96K->0K(305856K)] [ParOldGen: 54376K->54376K(699072K)] 54472K->54376K(1004928K) [PSPermGen: 36537K->36537K(204800K)], 0.2507170 secs] [Times: user=1.01 sys=0.01, real=0.25 secs] 22.289: [GC [PSYoungGen: 4280K->96K(305856K)] 58657K->54472K(1004928K), 0.0038060 secs] [Times: user=0.00 sys=0.00, real=0.00 secs] 22.293: [Full GC (System) [PSYoungGen: 96K->0K(305856K)] [ParOldGen: 54376K->54376K(699072K)] 54472K->54376K(1004928K) [PSPermGen: 36537K->36537K(204800K)], 0.2640250 secs] [Times: user=1.07 sys=0.02, real=0.27 secs] 22.560: [GC [PSYoungGen: 4280K->128K(305856K)] 58657K->54504K(1004928K), 0.0036890 secs] [Times: user=0.01 sys=0.00, real=0.01 secs] 22.564: [Full GC (System) [PSYoungGen: 128K->0K(305856K)] [ParOldGen: 54376K->54377K(699072K)] 54504K->54377K(1004928K) [PSPermGen: 36537K->36536K(204800K)], 0.2585560 secs] [Times: user=1.08 sys=0.01, real=0.25 secs] 22.823: [GC [PSYoungGen: 4533K->96K(305856K)] 58910K->54473K(1004928K), 0.0020840 secs] [Times: user=0.00 sys=0.00, real=0.01 secs] 22.825: [Full GC (System) [PSYoungGen: 96K->0K(305856K)] [ParOldGen: 54377K->54377K(699072K)] 54473K->54377K(1004928K) [PSPermGen: 36536K->36536K(204800K)], 0.2505380 secs] [Times: user=0.99 sys=0.01, real=0.25 secs] 23.077: [GC [PSYoungGen: 4530K->32K(305856K)] 58908K->54409K(1004928K), 0.0016220 secs] [Times: user=0.00 sys=0.00, real=0.00 secs] 23.079: [Full GC (System) [PSYoungGen: 32K->0K(305856K)] [ParOldGen: 54377K->54378K(699072K)] 54409K->54378K(1004928K) [PSPermGen: 36536K->36536K(204800K)], 0.2320970 secs] [Times: user=0.95 sys=0.00, real=0.23 secs] 24.424: [GC [PSYoungGen: 87133K->800K(305856K)] 141512K->55179K(1004928K), 0.0038230 secs] [Times: user=0.01 sys=0.01, real=0.01 secs] 24.428: [Full GC (System) [PSYoungGen: 800K->0K(305856K)] [ParOldGen: 54378K->54950K(699072K)] 55179K->54950K(1004928K) [PSPermGen: 37714K->37712K(204800K)], 0.4105190 secs] [Times: user=1.25 sys=0.17, real=0.41 secs] 24.866: [GC [PSYoungGen: 4280K->256K(305856K)] 59231K->55206K(1004928K), 0.0041370 secs] [Times: user=0.01 sys=0.00, real=0.00 secs] 24.870: [Full GC (System) [PSYoungGen: 256K->0K(305856K)] [ParOldGen: 54950K->54789K(699072K)] 55206K->54789K(1004928K) [PSPermGen: 37720K->37719K(204800K)], 0.4160520 secs] [Times: user=1.12 sys=0.19, real=0.42 secs] 29.041: [GC [PSYoungGen: 262208K->12901K(275136K)] 316997K->67691K(974208K), 0.0170890 secs] [Times: user=0.11 sys=0.00, real=0.02 secs]
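
    The "(System)" tag on these repeated full collections means they were requested by explicit System.gc() calls (often from RMI distributed GC or a third-party library) rather than by heap pressure; -XX:+DisableExplicitGC is the usual flag for suppressing them while investigating. A minimal, self-contained demonstration (hypothetical class ExplicitGcDemo, not part of the application above) of how such calls show up in a verbose GC log:

        // Hypothetical demo class, not part of the application above.
        // Run with: java -verbose:gc -XX:+UseParallelOldGC ExplicitGcDemo
        // Each System.gc() call appears in the log as "[Full GC (System) ...]".
        public class ExplicitGcDemo {
            public static void main(String[] args) {
                for (int i = 0; i < 5; i++) {
                    byte[] garbage = new byte[4 * 1024 * 1024]; // create some garbage to collect
                    garbage[0] = 1;
                    System.gc(); // explicit request, logged with the "(System)" tag
                }
            }
        }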

    Read the article

  • Vector passed by reference does not keep its contents across functions

    - by kylepayne
    The referenced vector to functions does not hold the information in memory. Do I have to use pointers? Thanks. #include <iostream> #include <cstdlib> #include <vector> #include <string> using namespace std; void menu(); void addvector(vector<string>& vec); void subvector(vector<string>& vec); void vectorsize(const vector<string>& vec); void printvec(const vector<string>& vec); void printvec_bw(const vector<string>& vec); int main() { vector<string> svector; menu(); return 0; } //functions definitions void menu() { vector<string> svector; int choice = 0; cout << "Thanks for using this program! \n" << "Enter 1 to add a string to the vector \n" << "Enter 2 to remove the last string from the vector \n" << "Enter 3 to print the vector size \n" << "Enter 4 to print the contents of the vector \n" << "Enter 5 ----------------------------------- backwards \n" << "Enter 6 to end the program \n"; cin >> choice; switch(choice) { case 1: addvector(svector); menu(); break; case 2: subvector(svector); menu(); break; case 3: vectorsize(svector); menu(); break; case 4: printvec(svector); menu(); break; case 5: printvec_bw(svector); menu(); break; case 6: exit(1); default: cout << "not a valid choice \n"; // menu is structured so that all other functions are called from it. } } void addvector(vector<string>& vec) { //string line; //int i = 0; //cin.ignore(1, '\n'); //cout << "Enter the string please \n"; //getline(cin, line); vec.push_back("the police man's beard is half-constructed"); } void subvector(vector<string>& vec) { vec.pop_back(); return; } void vectorsize(const vector<string>& vec) { if (vec.empty()) { cout << "vector is empty"; } else { cout << vec.size() << endl; } return; } void printvec(const vector<string>& vec) { for(int i = 0; i < vec.size(); i++) { cout << vec[i] << endl; } return; } void printvec_bw(const vector<string>& vec) { for(int i = vec.size(); i > 0; i--) { cout << vec[i] << endl; } return; }
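
    Judging from the code above, one likely explanation is that menu() declares its own local svector and then calls itself recursively, so every menu level works on a fresh, empty vector and the one in main() is never used; note also that printvec_bw starts its loop at vec.size(), so its first read is one element past the end. The following sketch (function names reused from the question, menu reduced to three options) keeps a single vector alive by passing it into menu() and looping instead of recursing:

        #include <cstddef>
        #include <iostream>
        #include <string>
        #include <vector>

        void addvector(std::vector<std::string>& vec) {
            vec.push_back("example string");
        }

        void printvec(const std::vector<std::string>& vec) {
            for (std::size_t i = 0; i < vec.size(); ++i)  // unsigned index, stays inside the vector
                std::cout << vec[i] << '\n';
        }

        void menu(std::vector<std::string>& vec) {        // menu now receives the caller's vector
            int choice = 0;
            while (choice != 6) {                         // loop instead of calling menu() recursively
                std::cout << "1 = add a string, 4 = print the vector, 6 = quit\n";
                if (!(std::cin >> choice)) break;
                if (choice == 1) addvector(vec);          // every branch touches the same vector
                else if (choice == 4) printvec(vec);
            }
        }

        int main() {
            std::vector<std::string> svector;             // the only vector in the program
            menu(svector);                                // menu() fills and prints main's vector
        }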

    Read the article

  • Vector of Object Pointers, general help and confusion

    - by Staypuft
    Have a homework assignment in which I'm supposed to create a vector of pointers to objects Later on down the load, I'll be using inheritance/polymorphism to extend the class to include fees for two-day delivery, next day air, etc. However, that is not my concern right now. The final goal of the current program is to just print out every object's content in the vector (name & address) and find it's shipping cost (weight*cost). My Trouble is not with the logic, I'm just confused on few points related to objects/pointers/vectors in general. But first my code. I basically cut out everything that does not mater right now, int main, will have user input, but right now I hard-coded two examples. #include <iostream> #include <string> #include <vector> using namespace std; class Package { public: Package(); //default constructor Package(string d_name, string d_add, string d_zip, string d_city, string d_state, double c, double w); double calculateCost(double, double); void Print(); ~Package(); private: string dest_name; string dest_address; string dest_zip; string dest_city; string dest_state; double weight; double cost; }; Package::Package() { cout<<"Constucting Package Object with default values: "<<endl; string dest_name=""; string dest_address=""; string dest_zip=""; string dest_city=""; string dest_state=""; double weight=0; double cost=0; } Package::Package(string d_name, string d_add, string d_zip, string d_city, string d_state, string r_name, string r_add, string r_zip, string r_city, string r_state, double w, double c){ cout<<"Constucting Package Object with user defined values: "<<endl; string dest_name=d_name; string dest_address=d_add; string dest_zip=d_zip; string dest_city=d_city; string dest_state=d_state; double weight=w; double cost=c; } Package::~Package() { cout<<"Deconstructing Package Object!"<<endl; delete Package; } double Package::calculateCost(double x, double y){ return x+y; } int main(){ double cost=0; vector<Package*> shipment; cout<<"Enter Shipping Cost: "<<endl; cin>>cost; shipment.push_back(new Package("tom r","123 thunder road", "90210", "Red Bank", "NJ", cost, 10.5)); shipment.push_back(new Package ("Harry Potter","10 Madison Avenue", "55555", "New York", "NY", cost, 32.3)); return 0; } So my questions are: I'm told I have to use a vector of Object Pointers, not Objects. Why? My assignment calls for it specifically, but I'm also told it won't work otherwise. Where should I be creating this vector? Should it be part of my Package Class? How do I go about adding objects into it then? Do I need a copy constructor? Why? What's the proper way to deconstruct my vector of object pointers? Any help would be appreciated. I've searched for a lot of related articles on here and I realize that my program will have memory leaks. Using one of the specialized ptrs from boost:: will not be available for me to use. Right now, I'm more concerned with getting the foundation of my program built. That way I can actually get down to the functionality I need to create. Thanks.
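
    On the questions themselves: a vector of Package* rather than Package objects is what lets the same container later hold two-day or next-day packages polymorphically, without slicing them down to the base class; the vector can simply live in main() or whatever function drives the program; no copy constructor is needed merely to store pointers; and cleanup means deleting each element that was created with new (the "delete Package;" line in the destructor above deletes a type name and will not compile). A hedged sketch of that shape, with the class trimmed to a few members and a virtual destructor added:

        #include <iostream>
        #include <string>
        #include <utility>
        #include <vector>

        class Package {
        public:
            Package(std::string name, double weight, double costPerUnit)
                : dest_name(std::move(name)), weight(weight), cost(costPerUnit) {}
            virtual ~Package() {}                       // virtual: safe to delete through a base pointer
            virtual double calculateCost() const { return weight * cost; }
            virtual void print() const {
                std::cout << dest_name << " ships for " << calculateCost() << '\n';
            }
        private:
            std::string dest_name;                      // address fields omitted for brevity
            double weight;
            double cost;
        };

        int main() {
            std::vector<Package*> shipment;             // pointers, so derived package types fit later
            shipment.push_back(new Package("tom r", 10.5, 2.0));
            shipment.push_back(new Package("Harry Potter", 32.3, 2.0));

            for (const Package* p : shipment) p->print();

            for (Package* p : shipment) delete p;       // release everything new allocated
            shipment.clear();
        }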

    Read the article

  • OpenSSL in C++ email client - server closes connection with TLSv1 Alert message

    - by mice
    My app connects to a IMAP email server. One client configured his server to reject SSLv2 certificates, and now my app fails to connect to the server. All other email clients connect to this server successfully. My app uses openssl. I debugged by creating minimal openssl client and attempt to connect to the server. Below is the code with connects to the mail server (using Windows sockets, but same problem is with unix sockets). Server sends its initial IMAP greeting message, but after client sends 1st command, server closes connection. In Wireshark, I see that after sending command to server, it returns TLSv1 error message 21 (Encrypted Alert) and connection is gone. I'm looking for proper setup of OpenSSL for this connection to succeed. Thanks #include <stdio.h> #include <memory.h> #include <errno.h> #include <sys/types.h> #include <winsock2.h> #include <openssl/crypto.h> #include <openssl/x509.h> #include <openssl/pem.h> #include <openssl/ssl.h> #include <openssl/err.h> #define CHK_NULL(x) if((x)==NULL) exit(1) #define CHK_ERR(err,s) if((err)==-1) { perror(s); exit(1); } #define CHK_SSL(err) if((err)==-1) { ERR_print_errors_fp(stderr); exit(2); } SSL *ssl; char buf[4096]; void write(const char *s){ int err = SSL_write(ssl, s, strlen(s)); printf("> %s\n", s); CHK_SSL(err); } void read(){ int n = SSL_read(ssl, buf, sizeof(buf) - 1); CHK_SSL(n); if(n==0){ printf("Finished\n"); exit(1); } buf[n] = 0; printf("%s\n", buf); } void main(){ int err=0; SSLeay_add_ssl_algorithms(); SSL_METHOD *meth = SSLv23_client_method(); SSL_load_error_strings(); SSL_CTX *ctx = SSL_CTX_new(meth); CHK_NULL(ctx); WSADATA data; WSAStartup(0x202, &data); int sd = socket(AF_INET, SOCK_STREAM, IPPROTO_TCP); CHK_ERR(sd, "socket"); struct sockaddr_in sa; memset(&sa, 0, sizeof(sa)); sa.sin_family = AF_INET; sa.sin_addr.s_addr = inet_addr("195.137.27.14"); sa.sin_port = htons(993); err = connect(sd,(struct sockaddr*) &sa, sizeof(sa)); CHK_ERR(err, "connect"); /* ----------------------------------------------- */ /* Now we have TCP connection. Start SSL negotiation. */ ssl = SSL_new(ctx); CHK_NULL(ssl); SSL_set_fd(ssl, sd); err = SSL_connect(ssl); CHK_SSL(err); // Following two steps are optional and not required for data exchange to be successful. /* printf("SSL connection using %s\n", SSL_get_cipher(ssl)); X509 *server_cert = SSL_get_peer_certificate(ssl); CHK_NULL(server_cert); printf("Server certificate:\n"); char *str = X509_NAME_oneline(X509_get_subject_name(server_cert),0,0); CHK_NULL(str); printf(" subject: %s\n", str); OPENSSL_free(str); str = X509_NAME_oneline(X509_get_issuer_name (server_cert),0,0); CHK_NULL(str); printf(" issuer: %s\n", str); OPENSSL_free(str); // We could do all sorts of certificate verification stuff here before deallocating the certificate. X509_free(server_cert); */ printf("\n\n"); read(); // get initial IMAP greeting write("1 CAPABILITY\r\n"); // send 1st command read(); // get reply to cmd; server closes connection here write("2 LOGIN a b\r\n"); read(); SSL_shutdown(ssl); closesocket(sd); SSL_free(ssl); SSL_CTX_free(ctx); }
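
    Two changes often help in this situation, offered as a sketch rather than a confirmed fix for this particular server: keep SSLv23_client_method() for negotiation but disable the legacy SSLv2/SSLv3 protocols, which strictly configured servers reject, and when SSL_read or SSL_write returns zero or a negative value, ask SSL_get_error() for the reason instead of only testing for -1 the way CHK_SSL does, so the alert the server sends becomes visible on the client. The helper names below (make_client_ctx, checked_read) are illustrative, not part of the original program:

        #include <stdio.h>
        #include <openssl/err.h>
        #include <openssl/ssl.h>

        /* Build a client context that negotiates TLS but never offers SSLv2/SSLv3. */
        static SSL_CTX *make_client_ctx(void) {
            SSL_CTX *ctx = SSL_CTX_new(SSLv23_client_method());
            if (ctx != NULL)
                SSL_CTX_set_options(ctx, SSL_OP_NO_SSLv2 | SSL_OP_NO_SSLv3);
            return ctx;
        }

        /* Like the question's read(), but reports why a read failed. */
        static int checked_read(SSL *ssl, char *buf, int cap) {
            int n = SSL_read(ssl, buf, cap - 1);
            if (n <= 0) {
                int reason = SSL_get_error(ssl, n);  /* SSL_ERROR_ZERO_RETURN = peer sent close_notify */
                fprintf(stderr, "SSL_read failed, SSL_get_error() = %d\n", reason);
                ERR_print_errors_fp(stderr);         /* print whatever the error queue recorded */
                return n;
            }
            buf[n] = '\0';
            return n;
        }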

    Read the article

  • My window's handle is unused and cannot be evaluated

    - by numerical25
    I am trying to encapsulate my Win32 application into a class. My problem occurs when trying to initiate my primary window for the application below is my declaration and implementation... I notice the issue within my class method InitInstance(); declaration #pragma once #include "stdafx.h" #include "resource.h" #define MAX_LOADSTRING 100 class RenderEngine { protected: int m_width; int m_height; ATOM RegisterEngineClass(); public: static HINSTANCE m_hInst; HWND m_hWnd; int m_nCmdShow; TCHAR m_szTitle[MAX_LOADSTRING]; // The title bar text TCHAR m_szWindowClass[MAX_LOADSTRING]; // the main window class name bool InitWindow(); bool InitDirectX(); bool InitInstance(); //static functions static LRESULT CALLBACK WndProc(HWND hWnd, UINT message, WPARAM wParam, LPARAM lParam); static INT_PTR CALLBACK About(HWND hDlg, UINT message, WPARAM wParam, LPARAM lParam); int Run(); }; implementation #include "stdafx.h" #include "RenderEngine.h" HINSTANCE RenderEngine::m_hInst = NULL; bool RenderEngine::InitWindow() { RenderEngine::m_hInst = NULL; // Initialize global strings LoadString(m_hInst, IDS_APP_TITLE, m_szTitle, MAX_LOADSTRING); LoadString(m_hInst, IDC_RENDERENGINE, m_szWindowClass, MAX_LOADSTRING); if(!RegisterEngineClass()) { return false; } if(!InitInstance()) { return false; } return true; } ATOM RenderEngine::RegisterEngineClass() { WNDCLASSEX wcex; wcex.cbSize = sizeof(WNDCLASSEX); wcex.style = CS_HREDRAW | CS_VREDRAW; wcex.lpfnWndProc = RenderEngine::WndProc; wcex.cbClsExtra = 0; wcex.cbWndExtra = 0; wcex.hInstance = m_hInst; wcex.hIcon = LoadIcon(m_hInst, MAKEINTRESOURCE(IDI_RENDERENGINE)); wcex.hCursor = LoadCursor(NULL, IDC_ARROW); wcex.hbrBackground = (HBRUSH)(COLOR_WINDOW+1); wcex.lpszMenuName = MAKEINTRESOURCE(IDC_RENDERENGINE); wcex.lpszClassName = m_szWindowClass; wcex.hIconSm = LoadIcon(wcex.hInstance, MAKEINTRESOURCE(IDI_SMALL)); return RegisterClassEx(&wcex); } LRESULT CALLBACK RenderEngine::WndProc(HWND hWnd, UINT message, WPARAM wParam, LPARAM lParam) { int wmId, wmEvent; PAINTSTRUCT ps; HDC hdc; switch (message) { case WM_COMMAND: wmId = LOWORD(wParam); wmEvent = HIWORD(wParam); // Parse the menu selections: switch (wmId) { case IDM_ABOUT: DialogBox(m_hInst, MAKEINTRESOURCE(IDD_ABOUTBOX), hWnd, About); break; case IDM_EXIT: DestroyWindow(hWnd); break; default: return DefWindowProc(hWnd, message, wParam, lParam); } break; case WM_PAINT: hdc = BeginPaint(hWnd, &ps); // TODO: Add any drawing code here... EndPaint(hWnd, &ps); break; case WM_DESTROY: PostQuitMessage(0); break; default: return DefWindowProc(hWnd, message, wParam, lParam); } return 0; } bool RenderEngine::InitInstance() { m_hWnd = NULL;// When I step into my code it says on this line 0x000000 unused = ??? expression cannot be evaluated m_hWnd = CreateWindow(m_szWindowClass, m_szTitle, WS_OVERLAPPEDWINDOW, CW_USEDEFAULT, 0, CW_USEDEFAULT, 0, NULL, NULL, m_hInst, NULL); if (!m_hWnd)// At this point, memory has been allocated unused = ??. It steps over this { return FALSE; } if(!ShowWindow(m_hWnd, m_nCmdShow))// m_nCmdShow = 1 and m_hWnd is still unused and expression {//Still cannot be evaluated. This statement is true. and shuts down. return false; } UpdateWindow(m_hWnd); return true; } // Message handler for about box. 
INT_PTR CALLBACK RenderEngine::About(HWND hDlg, UINT message, WPARAM wParam, LPARAM lParam) { UNREFERENCED_PARAMETER(lParam); switch (message) { case WM_INITDIALOG: return (INT_PTR)TRUE; case WM_COMMAND: if (LOWORD(wParam) == IDOK || LOWORD(wParam) == IDCANCEL) { EndDialog(hDlg, LOWORD(wParam)); return (INT_PTR)TRUE; } break; } return (INT_PTR)FALSE; } int RenderEngine::Run() { MSG msg; HACCEL hAccelTable; hAccelTable = LoadAccelerators(m_hInst, MAKEINTRESOURCE(IDC_RENDERENGINE)); // Main message loop: while (GetMessage(&msg, NULL, 0, 0)) { if (!TranslateAccelerator(msg.hwnd, hAccelTable, &msg)) { TranslateMessage(&msg); DispatchMessage(&msg); } } return (int) msg.wParam; } and my winMain function that calls the class // RenderEngine.cpp : Defines the entry point for the application. #include "stdafx.h" #include "RenderEngine.h" // Global Variables: RenderEngine go; int APIENTRY _tWinMain(HINSTANCE hInstance, HINSTANCE hPrevInstance, LPTSTR lpCmdLine, int nCmdShow) { UNREFERENCED_PARAMETER(hPrevInstance); UNREFERENCED_PARAMETER(lpCmdLine); // TODO: Place code here. RenderEngine::m_hInst = hInstance; go.m_nCmdShow = nCmdShow; if(!go.InitWindow()) { return 0; } go.Run(); return 0; }
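
    Two things stand out in the code above, offered as observations rather than a certain diagnosis: InitWindow() starts by setting RenderEngine::m_hInst back to NULL, discarding the hInstance that _tWinMain just stored, and InitInstance() treats the return value of ShowWindow() as a success flag even though ShowWindow() returns the window's previous visibility state, which is zero for a freshly created (hidden) window, so that branch shuts the program down even when window creation worked. A sketch of InitInstance() without that check, using the question's member names:

        bool RenderEngine::InitInstance()
        {
            m_hWnd = CreateWindow(m_szWindowClass, m_szTitle, WS_OVERLAPPEDWINDOW,
                                  CW_USEDEFAULT, 0, CW_USEDEFAULT, 0,
                                  NULL, NULL, m_hInst, NULL);
            if (!m_hWnd)
            {
                DWORD err = GetLastError();   // inspect in the debugger, e.g. class not registered
                (void)err;                    // silences the unused-variable warning
                return false;
            }

            ShowWindow(m_hWnd, m_nCmdShow);   // return value = previous visibility, not an error code
            UpdateWindow(m_hWnd);
            return true;
        }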

    Read the article

  • Handling inheritance with overriding efficiently

    - by Fyodor Soikin
    I have the following two data structures. First, a list of properties applied to object triples: Object1 Object2 Object3 Property Value O1 O2 O3 P1 "abc" O1 O2 O3 P2 "xyz" O1 O3 O4 P1 "123" O2 O4 O5 P1 "098" Second, an inheritance tree: O1 O2 O4 O3 O5 Or viewed as a relation: Object Parent O2 O1 O4 O2 O3 O1 O5 O3 O1 null The semantics of this being that O2 inherits properties from O1; O4 - from O2 and O1; O3 - from O1; and O5 - from O3 and O1, in that order of precedence. NOTE 1: I have an efficient way to select all children or all parents of a given object. This is currently implemented with left and right indexes, but hierarchyid could also work. This does not seem important right now. NOTE 2: I have tiggers in place that make sure that the "Object" column always contains all possible objects, even when they do not really have to be there (i.e. have no parent or children defined). This makes it possible to use inner joins rather than severely less effiecient outer joins. The objective is: Given a pair of (Property, Value), return all object triples that have that property with that value either defined explicitly or inherited from a parent. NOTE 1: An object triple (X,Y,Z) is considered a "parent" of triple (A,B,C) when it is true that either X = A or X is a parent of A, and the same is true for (Y,B) and (Z,C). NOTE 2: A property defined on a closer parent "overrides" the same property defined on a more distant parent. NOTE 3: When (A,B,C) has two parents - (X1,Y1,Z1) and (X2,Y2,Z2), then (X1,Y1,Z1) is considered a "closer" parent when: (a) X2 is a parent of X1, or (b) X2 = X1 and Y2 is a parent of Y1, or (c) X2 = X1 and Y2 = Y1 and Z2 is a parent of Z1 In other words, the "closeness" in ancestry for triples is defined based on the first components of the triples first, then on the second components, then on the third components. This rule establishes an unambigous partial order for triples in terms of ancestry. For example, given the pair of (P1, "abc"), the result set of triples will be: O1, O2, O3 -- Defined explicitly O1, O2, O5 -- Because O5 inherits from O3 O1, O4, O3 -- Because O4 inherits from O2 O1, O4, O5 -- Because O4 inherits from O2 and O5 inherits from O3 O2, O2, O3 -- Because O2 inherits from O1 O2, O2, O5 -- Because O2 inherits from O1 and O5 inherits from O3 O2, O4, O3 -- Because O2 inherits from O1 and O4 inherits from O2 O3, O2, O3 -- Because O3 inherits from O1 O3, O2, O5 -- Because O3 inherits from O1 and O5 inherits from O3 O3, O4, O3 -- Because O3 inherits from O1 and O4 inherits from O2 O3, O4, O5 -- Because O3 inherits from O1 and O4 inherits from O2 and O5 inherits from O3 O4, O2, O3 -- Because O4 inherits from O1 O4, O2, O5 -- Because O4 inherits from O1 and O5 inherits from O3 O4, O4, O3 -- Because O4 inherits from O1 and O4 inherits from O2 O5, O2, O3 -- Because O5 inherits from O1 O5, O2, O5 -- Because O5 inherits from O1 and O5 inherits from O3 O5, O4, O3 -- Because O5 inherits from O1 and O4 inherits from O2 O5, O4, O5 -- Because O5 inherits from O1 and O4 inherits from O2 and O5 inherits from O3 Note that the triple (O2, O4, O5) is absent from this list. This is because property P1 is defined explicitly for the triple (O2, O4, O5) and this prevents that triple from inheriting that property from (O1, O2, O3). Also note that the triple (O4, O4, O5) is also absent. This is because that triple inherits its value of P1="098" from (O2, O4, O5), because it is a closer parent than (O1, O2, O3). The straightforward way to do it is the following. 
First, for every triple that a property is defined on, select all possible child triples: select Children1.Id as O1, Children2.Id as O2, Children3.Id as O3, tp.Property, tp.Value from TriplesAndProperties tp -- Select corresponding objects of the triple inner join Objects as Objects1 on Objects1.Id = tp.O1 inner join Objects as Objects2 on Objects2.Id = tp.O2 inner join Objects as Objects3 on Objects3.Id = tp.O3 -- Then add all possible children of all those objects inner join Objects as Children1 on Objects1.Id [isparentof] Children1.Id inner join Objects as Children2 on Objects2.Id [isparentof] Children2.Id inner join Objects as Children3 on Objects3.Id [isparentof] Children3.Id But this is not the whole story: if some triple inherits the same property from several parents, this query will yield conflicting results. Therefore, second step is to select just one of those conflicting results: select * from ( select Children1.Id as O1, Children2.Id as O2, Children3.Id as O3, tp.Property, tp.Value, row_number() over( partition by Children1.Id, Children2.Id, Children3.Id, tp.Property order by Objects1.[depthInTheTree] descending, Objects2.[depthInTheTree] descending, Objects3.[depthInTheTree] descending ) as InheritancePriority from ... (see above) ) where InheritancePriority = 1 The window function row_number() over( ... ) does the following: for every unique combination of objects triple and property, it sorts all values by the ancestral distance from the triple to the parents that the value is inherited from, and then I only select the very first of the resulting list of values. A similar effect can be achieved with a GROUP BY and ORDER BY statements, but I just find the window function semantically cleaner (the execution plans they yield are identical). The point is, I need to select the closest of contributing ancestors, and for that I need to group and then sort within the group. And finally, now I can simply filter the result set by Property and Value. This scheme works. Very reliably and predictably. It has proven to be very powerful for the business task it implements. The only trouble is, it is awfuly slow. One might point out the join of seven tables might be slowing things down, but that is actually not the bottleneck. According to the actual execution plan I'm getting from the SQL Management Studio (as well as SQL Profiler), the bottleneck is the sorting. The problem is, in order to satisfy my window function, the server has to sort by Children1.Id, Children2.Id, Children3.Id, tp.Property, Parents1.[depthInTheTree] descending, Parents2.[depthInTheTree] descending, Parents3.[depthInTheTree] descending, and there can be no indexes it can use, because the values come from a cross join of several tables. EDIT: Per Michael Buen's suggestion (thank you, Michael), I have posted the whole puzzle to sqlfiddle here. One can see in the execution plan that the Sort operation accounts for 32% of the whole query, and that is going to grow with the number of total rows, because all the other operations use indexes. Usually in such cases I would use an indexed view, but not in this case, because indexed views cannot contain self-joins, of which there are six. The only way that I can think of so far is to create six copies of the Objects table and then use them for the joins, thus enabling an indexed view. Did the time come that I shall be reduced to that kind of hacks? The despair sets in.

    Read the article

  • OpenGL texture shifted somewhat to the left when applied to a quad

    - by user308226
    I'm a bit new to OpenGL and I've been having a problem with using textures. The texture seems to load fine, but when I run the program, the texture displays shifted a couple pixels to the left, with the section cut off by the shift appearing on the right side. I don't know if the problem here is in the my TGA loader or if it's the way I'm applying the texture to the quad. Here is the loader: #include "texture.h" #include <iostream> GLubyte uncompressedheader[12] = {0,0, 2,0,0,0,0,0,0,0,0,0}; GLubyte compressedheader[12] = {0,0,10,0,0,0,0,0,0,0,0,0}; TGA::TGA() { } //Private loading function called by LoadTGA. Loads uncompressed TGA files //Returns: TRUE on success, FALSE on failure bool TGA::LoadCompressedTGA(char *filename,ifstream &texturestream) { return false; } bool TGA::LoadUncompressedTGA(char *filename,ifstream &texturestream) { cout << "G position status:" << texturestream.tellg() << endl; texturestream.read((char*)header, sizeof(header)); //read 6 bytes into the file to get the tga header width = (GLuint)header[1] * 256 + (GLuint)header[0]; //read and calculate width and save height = (GLuint)header[3] * 256 + (GLuint)header[2]; //read and calculate height and save bpp = (GLuint)header[4]; //read bpp and save cout << bpp << endl; if((width <= 0) || (height <= 0) || ((bpp != 24) && (bpp !=32))) //check to make sure the height, width, and bpp are valid { return false; } if(bpp == 24) { type = GL_RGB; } else { type = GL_RGBA; } imagesize = ((bpp/8) * width * height); //determine size in bytes of the image cout << imagesize << endl; imagedata = new GLubyte[imagesize]; //allocate memory for our imagedata variable texturestream.read((char*)imagedata,imagesize); //read according the the size of the image and save into imagedata for(GLuint cswap = 0; cswap < (GLuint)imagesize; cswap += (bpp/8)) //loop through and reverse the tga's BGR format to RGB { imagedata[cswap] ^= imagedata[cswap+2] ^= //1st Byte XOR 3rd Byte XOR 1st Byte XOR 3rd Byte imagedata[cswap] ^= imagedata[cswap+2]; } texturestream.close(); //close ifstream because we're done with it cout << "image loaded" << endl; glGenTextures(1, &texID); // Generate OpenGL texture IDs glBindTexture(GL_TEXTURE_2D, texID); // Bind Our Texture glTexParameterf(GL_TEXTURE_2D, GL_TEXTURE_MIN_FILTER, GL_LINEAR); // Linear Filtered glTexParameterf(GL_TEXTURE_2D, GL_TEXTURE_MAG_FILTER, GL_LINEAR); glTexImage2D(GL_TEXTURE_2D, 0, type, width, height, 0, type, GL_UNSIGNED_BYTE, imagedata); delete imagedata; return true; } //Public loading function for TGA images. Opens TGA file and determines //its type, if any, then loads it and calls the appropriate function. //Returns: TRUE on success, FALSE on failure bool TGA::loadTGA(char *filename) { cout << width << endl; ifstream texturestream; texturestream.open(filename,ios::binary); texturestream.read((char*)header,sizeof(header)); //read 6 bytes into the file, its the header. 
//if it matches the uncompressed header's first 6 bytes, load it as uncompressed LoadUncompressedTGA(filename,texturestream); return true; } GLubyte* TGA::getImageData() { return imagedata; } GLuint& TGA::getTexID() { return texID; } And here's the quad: void Square::show() { glEnable(GL_TEXTURE_2D); glBindTexture(GL_TEXTURE_2D, texture.texID); //Move to offset glTranslatef( x, y, 0 ); //Start quad glBegin( GL_QUADS ); //Set color to white glColor4f( 1.0, 1.0, 1.0, 1.0 ); //Draw square glTexCoord2f(0.0f, 0.0f); glVertex3f( 0, 0, 0 ); glTexCoord2f(1.0f, 0.0f); glVertex3f( SQUARE_WIDTH, 0, 0 ); glTexCoord2f(1.0f, 1.0f); glVertex3f( SQUARE_WIDTH, SQUARE_HEIGHT, 0 ); glTexCoord2f(0.0f, 1.0f); glVertex3f( 0, SQUARE_HEIGHT, 0 ); //End quad glEnd(); //Reset glLoadIdentity(); }
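
    A guess worth ruling out rather than a confirmed diagnosis: 24-bit TGA rows are tightly packed, but OpenGL's default unpack alignment is 4 bytes, so for widths where width * 3 is not a multiple of 4 the rows are reinterpreted with padding and the image appears shifted and wrapped horizontally. (A TGA file with a non-empty image-ID field would also offset the pixel data, since the loader above never skips that field.) Declaring tightly packed rows before the upload in LoadUncompressedTGA costs one extra line:

        glPixelStorei(GL_UNPACK_ALIGNMENT, 1);  // tell GL the rows carry no padding bytes
        glTexImage2D(GL_TEXTURE_2D, 0, type, width, height, 0, type, GL_UNSIGNED_BYTE, imagedata);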

    Read the article

  • Change English numbers to Persian and vice versa in MVC (HttpModule)?

    - by Mohammad
    I wanna change all English numbers to Persian for showing to users. and change them to English numbers again for giving all requests (Postbacks) e.g: we have something like this in view IRQ170, I wanna show IRQ??? to users and give IRQ170 from users. I know, I have to use Httpmodule, But I don't know how ? Could you please guide me? Edit : Let me describe more : I've written the following http module : using System; using System.Collections.Specialized; using System.Diagnostics; using System.IO; using System.Text; using System.Text.RegularExpressions; using System.Web; using System.Web.UI; using Smartiz.Common; namespace Smartiz.UI.Classes { public class PersianNumberModule : IHttpModule { private StreamWatcher _watcher; #region Implementation of IHttpModule /// <summary> /// Initializes a module and prepares it to handle requests. /// </summary> /// <param name="context">An <see cref="T:System.Web.HttpApplication"/> that provides access to the methods, properties, and events common to all application objects within an ASP.NET application </param> public void Init(HttpApplication context) { context.BeginRequest += ContextBeginRequest; context.EndRequest += ContextEndRequest; } /// <summary> /// Disposes of the resources (other than memory) used by the module that implements <see cref="T:System.Web.IHttpModule"/>. /// </summary> public void Dispose() { } #endregion private void ContextBeginRequest(object sender, EventArgs e) { HttpApplication context = sender as HttpApplication; if (context == null) return; _watcher = new StreamWatcher(context.Response.Filter); context.Response.Filter = _watcher; } private void ContextEndRequest(object sender, EventArgs e) { HttpApplication context = sender as HttpApplication; if (context == null) return; _watcher = new StreamWatcher(context.Response.Filter); context.Response.Filter = _watcher; } } public class StreamWatcher : Stream { private readonly Stream _stream; private readonly MemoryStream _memoryStream = new MemoryStream(); public StreamWatcher(Stream stream) { _stream = stream; } public override void Flush() { _stream.Flush(); } public override int Read(byte[] buffer, int offset, int count) { int bytesRead = _stream.Read(buffer, offset, count); string orgContent = Encoding.UTF8.GetString(buffer, offset, bytesRead); string newContent = orgContent.ToEnglishNumber(); int newByteCountLength = Encoding.UTF8.GetByteCount(newContent); Encoding.UTF8.GetBytes(newContent, 0, Encoding.UTF8.GetByteCount(newContent), buffer, 0); return newByteCountLength; } public override void Write(byte[] buffer, int offset, int count) { string strBuffer = Encoding.UTF8.GetString(buffer, offset, count); MatchCollection htmlAttributes = Regex.Matches(strBuffer, @"(\S+)=[""']?((?:.(?![""']?\s+(?:\S+)=|[>""']))+.)[""']?", RegexOptions.IgnoreCase | RegexOptions.Multiline); foreach (Match match in htmlAttributes) { strBuffer = strBuffer.Replace(match.Value, match.Value.ToEnglishNumber()); } MatchCollection scripts = Regex.Matches(strBuffer, "<script[^>]*>(.*?)</script>", RegexOptions.Singleline | RegexOptions.IgnoreCase | RegexOptions.Multiline | RegexOptions.IgnorePatternWhitespace); foreach (Match match in scripts) { MatchCollection values = Regex.Matches(match.Value, @"([""'])(?:(?=(\\?))\2.)*?\1", RegexOptions.Singleline | RegexOptions.IgnoreCase | RegexOptions.Multiline | RegexOptions.IgnorePatternWhitespace); foreach (Match stringValue in values) { strBuffer = strBuffer.Replace(stringValue.Value, stringValue.Value.ToEnglishNumber()); } } MatchCollection styles = 
Regex.Matches(strBuffer, "<style[^>]*>(.*?)</style>", RegexOptions.Singleline | RegexOptions.IgnoreCase | RegexOptions.Multiline | RegexOptions.IgnorePatternWhitespace); foreach (Match match in styles) { strBuffer = strBuffer.Replace(match.Value, match.Value.ToEnglishNumber()); } byte[] data = Encoding.UTF8.GetBytes(strBuffer); _memoryStream.Write(data, offset, count); _stream.Write(data, offset, count); } public override string ToString() { return Encoding.UTF8.GetString(_memoryStream.ToArray()); } #region Rest of the overrides public override bool CanRead { get { throw new NotImplementedException(); } } public override bool CanSeek { get { throw new NotImplementedException(); } } public override bool CanWrite { get { throw new NotImplementedException(); } } public override long Seek(long offset, SeekOrigin origin) { throw new NotImplementedException(); } public override void SetLength(long value) { throw new NotImplementedException(); } public override long Length { get { throw new NotImplementedException(); } } public override long Position { get { throw new NotImplementedException(); } set { throw new NotImplementedException(); } } #endregion } } It works well, but It converts all numbers in css and scripts files to Persian and it causes error.

    Read the article

  • Image loader can't load my live image URL

    - by Bindhu
    In my application i need to load the images in list view, when using locale(ip ported url) then no problem all images are loading properly, But when using live url then the images are not loading, My image loader class: public class ImageLoader { MemoryCache memoryCache = new MemoryCache(); FileCache fileCache; private Map<ImageView, String> imageViews = Collections .synchronizedMap(new WeakHashMap<ImageView, String>()); ExecutorService executorService; public ImageLoader(Context context) { fileCache = new FileCache(context); executorService = Executors.newFixedThreadPool(5); } final int stub_id = R.drawable.appointeesample; public void DisplayImage(String url, ImageView imageView) { imageViews.put(imageView, url); Bitmap bitmap = memoryCache.get(url); if (bitmap != null) imageView.setImageBitmap(bitmap); else { Log.d("stub", "stub" + stub_id); queuePhoto(url, imageView); imageView.setImageResource(stub_id); } } private void queuePhoto(String url, ImageView imageView) { PhotoToLoad p = new PhotoToLoad(url, imageView); executorService.submit(new PhotosLoader(p)); } private Bitmap getBitmap(String url) { File f = fileCache.getFile(url); // from SD cache Bitmap b = decodeFile(f); if (b != null) return b; // from web try { Bitmap bitmap = null; URL imageUrl = new URL(url); HttpURLConnection conn = (HttpURLConnection) imageUrl .openConnection(); conn.setConnectTimeout(30000); conn.setReadTimeout(30000); conn.setInstanceFollowRedirects(true); InputStream is = conn.getInputStream(); BufferedInputStream bis = new BufferedInputStream(is, 81960); BitmapFactory.Options opts = new BitmapFactory.Options(); opts.inJustDecodeBounds = true; OutputStream os = new FileOutputStream(f); Utils.CopyStream(bis, os); os.close(); bitmap = decodeFile(f); Log.d("bitmap", "Bit map" + bitmap); return bitmap; } catch (Exception ex) { ex.printStackTrace(); return null; } } // decodes image and scales it to reduce memory consumption private Bitmap decodeFile(File f) { try { try { BitmapFactory.Options o = new BitmapFactory.Options(); o.inJustDecodeBounds = true; BitmapFactory.decodeStream(new FileInputStream(f), null, o); final int REQUIRED_SIZE = 200; int scale = 1; while (o.outWidth / scale / 2 >= REQUIRED_SIZE && o.outHeight / scale / 2 >= REQUIRED_SIZE) scale *= 2; BitmapFactory.Options o2 = new BitmapFactory.Options(); o2.inSampleSize = scale; return BitmapFactory.decodeStream(new FileInputStream(f), null, o2); } catch (FileNotFoundException e) { } finally { System.gc(); } return null; } catch (Exception e) { } return null; } // Task for the queue private class PhotoToLoad { public String url; public ImageView imageView; public PhotoToLoad(String u, ImageView i) { url = u; imageView = i; } } class PhotosLoader implements Runnable { PhotoToLoad photoToLoad; PhotosLoader(PhotoToLoad photoToLoad) { this.photoToLoad = photoToLoad; } @Override public void run() { if (imageViewReused(photoToLoad)) return; Bitmap bmp = getBitmap(photoToLoad.url); memoryCache.put(photoToLoad.url, bmp); if (imageViewReused(photoToLoad)) return; BitmapDisplayer bd = new BitmapDisplayer(bmp, photoToLoad); Activity a = (Activity) photoToLoad.imageView.getContext(); a.runOnUiThread(bd); } } boolean imageViewReused(PhotoToLoad photoToLoad) { String tag = imageViews.get(photoToLoad.imageView); if (tag == null || !tag.equals(photoToLoad.url)) return true; return false; } // Used to display bitmap in the UI thread class BitmapDisplayer implements Runnable { Bitmap bitmap; PhotoToLoad photoToLoad; public BitmapDisplayer(Bitmap b, PhotoToLoad p) { 
bitmap = b; photoToLoad = p; } public void run() { if (imageViewReused(photoToLoad)) return; if (bitmap != null) photoToLoad.imageView.setImageBitmap(bitmap); else photoToLoad.imageView.setImageResource(stub_id); } } public void clearCache() { memoryCache.clear(); fileCache.clear(); } My live image URL is, for example: https://goappointed.com/images_upload/3330Torana_Logo.JPG I have searched Google but found no working solution. Thanks a lot in advance.
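
    One cause that fits these symptoms, although it cannot be confirmed from the code alone: HttpURLConnection will not follow a redirect that switches protocol (for example http to https) even with setInstanceFollowRedirects(true), so a live server that answers with a 301/302 leaves getInputStream() failing while a direct LAN URL works. Below is a hedged sketch of a small helper (the class name RedirectAwareStream is hypothetical, not part of the loader above) that follows one such redirect by hand; its stream could replace the conn.getInputStream() call inside getBitmap():

        import java.io.IOException;
        import java.io.InputStream;
        import java.net.HttpURLConnection;
        import java.net.URL;

        // Hypothetical helper, not part of the question's ImageLoader class.
        public final class RedirectAwareStream {
            public static InputStream open(String url) throws IOException {
                HttpURLConnection conn = (HttpURLConnection) new URL(url).openConnection();
                conn.setConnectTimeout(30000);
                conn.setReadTimeout(30000);
                conn.setInstanceFollowRedirects(true);
                int code = conn.getResponseCode();                      // performs the request
                if (code == HttpURLConnection.HTTP_MOVED_PERM
                        || code == HttpURLConnection.HTTP_MOVED_TEMP) {
                    String location = conn.getHeaderField("Location");  // target the server points to
                    conn.disconnect();
                    conn = (HttpURLConnection) new URL(location).openConnection();
                    conn.setConnectTimeout(30000);
                    conn.setReadTimeout(30000);
                }
                return conn.getInputStream();
            }
        }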

    Read the article

  • Numpy zero rank array indexing/broadcasting

    - by Lemming
    I'm trying to write a function that supports broadcasting and is fast at the same time. However, numpy's zero-rank arrays are causing trouble as usual. I couldn't find anything useful on google, or by searching here. So, I'm asking you. How should I implement broadcasting efficiently and handle zero-rank arrays at the same time? This whole post became larger than anticipated, sorry. Details: To clarify what I'm talking about I'll give a simple example: Say I want to implement a Heaviside step-function. I.e. a function that acts on the real axis, which is 0 on the negative side, 1 on the positive side, and from case to case either 0, 0.5, or 1 at the point 0. Implementation Masking The most efficient way I found so far is the following. It uses boolean arrays as masks to assign the correct values to the corresponding slots in the output vector. from numpy import * def step_mask(x, limit=+1): """Heaviside step-function. y = 0 if x < 0 y = 1 if x > 0 See below for x == 0. Arguments: x Evaluate the function at these points. limit Which limit at x == 0? limit > 0: y = 1 limit == 0: y = 0.5 limit < 0: y = 0 Return: The values corresponding to x. """ b = broadcast(x, limit) out = zeros(b.shape) out[x>0] = 1 mask = (limit > 0) & (x == 0) out[mask] = 1 mask = (limit == 0) & (x == 0) out[mask] = 0.5 mask = (limit < 0) & (x == 0) out[mask] = 0 return out List Comprehension The following-the-numpy-docs way is to use a list comprehension on the flat iterator of the broadcast object. However, list comprehensions become absolutely unreadable for such complicated functions. def step_comprehension(x, limit=+1): b = broadcast(x, limit) out = empty(b.shape) out.flat = [ ( 1 if x_ > 0 else ( 0 if x_ < 0 else ( 1 if l_ > 0 else ( 0.5 if l_ ==0 else ( 0 ))))) for x_, l_ in b ] return out For Loop And finally, the most naive way is a for loop. It's probably the most readable option. However, Python for-loops are anything but fast. And hence, a really bad idea in numerics. def step_for(x, limit=+1): b = broadcast(x, limit) out = empty(b.shape) for i, (x_, l_) in enumerate(b): if x_ > 0: out[i] = 1 elif x_ < 0: out[i] = 0 elif l_ > 0: out[i] = 1 elif l_ < 0: out[i] = 0 else: out[i] = 0.5 return out Test First of all a brief test to see if the output is correct. >>> x = array([-1, -0.1, 0, 0.1, 1]) >>> step_mask(x, +1) array([ 0., 0., 1., 1., 1.]) >>> step_mask(x, 0) array([ 0. , 0. , 0.5, 1. , 1. ]) >>> step_mask(x, -1) array([ 0., 0., 0., 1., 1.]) It is correct, and the other two functions give the same output. Performance How about efficiency? These are the timings: In [45]: xl = linspace(-2, 2, 500001) In [46]: %timeit step_mask(xl) 10 loops, best of 3: 19.5 ms per loop In [47]: %timeit step_comprehension(xl) 1 loops, best of 3: 1.17 s per loop In [48]: %timeit step_for(xl) 1 loops, best of 3: 1.15 s per loop The masked version performs best as expected. However, I'm surprised that the comprehension is on the same level as the for loop. Zero Rank Arrays But, 0-rank arrays pose a problem. Sometimes you want to use a function scalar input. And preferably not have to worry about wrapping all scalars in at least 1-D arrays. >>> step_mask(1) Traceback (most recent call last): File "<ipython-input-50-91c06aa4487b>", line 1, in <module> step_mask(1) File "script.py", line 22, in step_mask out[x>0] = 1 IndexError: 0-d arrays can't be indexed. 
>>> step_for(1) Traceback (most recent call last): File "<ipython-input-51-4e0de4fcb197>", line 1, in <module> step_for(1) File "script.py", line 55, in step_for out[i] = 1 IndexError: 0-d arrays can't be indexed. >>> step_comprehension(1) array(1.0) Only the list comprehension can handle 0-rank arrays. The other two versions would need special case handling for 0-rank arrays. Numpy gets a bit messy when you want to use the same code for arrays and scalars. However, I really like to have functions that work on as arbitrary input as possible. Who knows which parameters I'll want to iterate over at some point. Question: What is the best way to implement a function as the one above? Is there a way to avoid if scalar then like special cases? I'm not looking for a built-in Heaviside. It's just a simplified example. In my code the above pattern appears in many places to make parameter iteration as simple as possible without littering the client code with for loops or comprehensions. Furthermore, I'm aware of Cython, or weave & Co., or implementation directly in C. However, the performance of the masked version above is sufficient for the moment. And for the moment I would like to keep things as simple as possible.
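
    One way to sidestep the special-casing entirely, offered as a sketch rather than a claim about the fastest option: build the function from numpy.where, which broadcasts its arguments and never indexes into the output array, so 0-d arrays and plain Python scalars take the same code path as large arrays (step_where is an illustrative name, not part of the question's code):

        import numpy as np

        def step_where(x, limit=+1):
            """Heaviside step built from np.where: broadcasts, and accepts scalars/0-d arrays."""
            x = np.asarray(x)
            at_zero = np.where(limit > 0, 1.0, np.where(limit < 0, 0.0, 0.5))
            return np.where(x > 0, 1.0, np.where(x < 0, 0.0, at_zero))

        print(step_where(np.array([-1, -0.1, 0, 0.1, 1]), 0))   # [ 0.   0.   0.5  1.   1. ]
        print(step_where(1))                                    # scalar input works: prints 1.0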

    Read the article

  • MEF + Plug-In not updating

    - by mybrokengnome
    I asked this on the MEF Codeplex forum already, but I haven't gotten a response yet, so I figured I'd try StackOverflow. Here's the original post if anyone's interested (this is just a copy from it): MEF Codeplex "Let me first say that I'm completely new to MEF (just discovered it today) and am very happy with it so far. However, I've ran in to a problem that is very frustrating. I'm creating an app that will have a plugin architecture and the plugins will only be stored in a single DLL file (or coded into the main app). The DLL file needs to be able to be recompiled during run-time and the app should recognize this and re-load the plugins (I know this is difficult, but it's a requirement). To accomplish this I took the approach covered http://blog.maartenballiauw.be/category/MEF.aspx there (look for WebServerDirectoryCatalog). Basically the idea is to "monitor the plugins folder, copy the new/modified assemblies to the web application’s /bin folder and instruct MEF to load its exports from there." This is my code, which is probably not the correct way to do it but it's what I found in some samples around the net: main()... string myExecName = Assembly.GetExecutingAssembly().Location; string myPath = System.IO.Path.GetDirectoryName(myExecName); catalog = new AggregateCatalog(); pluginCatalog = new MyDirectoryCatalog(myPath + @"/Plugins"); catalog.Catalogs.Add(pluginCatalog); exportContainer = new CompositionContainer(catalog); CompositionBatch compBatch = new CompositionBatch(); compBatch.AddPart(this); compBatch.AddPart(catalog); exportContainer.Compose(compBatch); and private FileSystemWatcher fileSystemWatcher; public DirectoryCatalog directoryCatalog; private string path; private string extension; public MyDirectoryCatalog(string path) { Initialize(path, "*.dll", "*.dll"); } private void Initialize(string path, string extension, string modulePattern) { this.path = path; this.extension = extension; fileSystemWatcher = new FileSystemWatcher(path, modulePattern); fileSystemWatcher.Changed += new FileSystemEventHandler(fileSystemWatcher_Changed); fileSystemWatcher.Created += new FileSystemEventHandler(fileSystemWatcher_Created); fileSystemWatcher.Deleted += new FileSystemEventHandler(fileSystemWatcher_Deleted); fileSystemWatcher.Renamed += new RenamedEventHandler(fileSystemWatcher_Renamed); fileSystemWatcher.IncludeSubdirectories = false; fileSystemWatcher.EnableRaisingEvents = true; Refresh(); } void fileSystemWatcher_Renamed(object sender, RenamedEventArgs e) { RemoveFromBin(e.OldName); Refresh(); } void fileSystemWatcher_Deleted(object sender, FileSystemEventArgs e) { RemoveFromBin(e.Name); Refresh(); } void fileSystemWatcher_Created(object sender, FileSystemEventArgs e) { Refresh(); } void fileSystemWatcher_Changed(object sender, FileSystemEventArgs e) { Refresh(); } private void Refresh() { // Determine /bin path string binPath = Path.Combine(AppDomain.CurrentDomain.BaseDirectory, "Plugins"); string newPath = ""; // Copy files to /bin foreach (string file in Directory.GetFiles(path, extension, SearchOption.TopDirectoryOnly)) { try { DirectoryInfo dInfo = new DirectoryInfo(binPath); DirectoryInfo[] dirs = dInfo.GetDirectories(); int count = dirs.Count() + 1; newPath = binPath + "/" + count; DirectoryInfo dInfo2 = new DirectoryInfo(newPath); if (!dInfo2.Exists) dInfo2.Create(); File.Copy(file, System.IO.Path.Combine(newPath, System.IO.Path.GetFileName(file)), true); } catch { // Not that big deal... 
Blog readers will probably kill me for this bit of code :-) } } // Create new directory catalog directoryCatalog = new DirectoryCatalog(newPath, extension); directoryCatalog.Refresh(); } public override IQueryable<ComposablePartDefinition> Parts { get { return directoryCatalog.Parts; } } private void RemoveFromBin(string name) { string binPath = Path.Combine(AppDomain.CurrentDomain.BaseDirectory, ""); File.Delete(Path.Combine(binPath, name)); } So all this actually works, and after the end of the code in main my IEnumerable variable is actually filled with all the plugins in the DLL (which if you follow the code is located in Plugins/1 so that I can modify the dll in the plugins folder). So now at this point I should be able to re-compile the plugins DLL, drop it in to the Plugins folder, my FileWatcher detect that it's changed, and then copy it into folder "2" and directoryCatalog should point to the new folder. All this actually works! The problem is, even though it seems like every thing is pointed to the right place, my IEnumerable variable is never updated with the new plugins. So close, but yet so far! Any suggestions? I know the downsides of doing it this way, that no dll is actually getting unloaded and causing a memory leak, but it's a Windows App and will probably be started at least once a day, and the plugins are un-likely to change that often, but it's still a requirement from the client that it does this without re-loading the app. Thanks! Thanks for any help you all can provide, it's driving me crazy not being able to figure this out."

    Read the article

  • Strange behavior when including a class in PHP

    - by user1864539
    I'm experiencing a strange behavior with PHP. Basically I want to require a class within a PHP script. I know it is straight forward and I did it before but when I do so, it change the behavior of my jquery (1.8.3) ajax response. I'm running a wamp setup and my PHP version is 5.4.6. Here is a sample as for my index.html head (omitting the jquery js include) <script> $(document).ready(function(){ $('#submit').click(function(){ var action = $('#form').attr('action'); var form_data = { fname: $('#fname').val(), lname: $('#lname').val(), phone: $('#phone').val(), email: $('#email').val(), is_ajax: 1 }; $.ajax({ type: $('#form').attr('method'), url: action, data: form_data, success: function(response){ switch(response){ case 'ok': var msg = 'data saved'; break; case 'ko': var msg = 'Oops something wrong happen'; break; default: var msg = 'misc:<br/>'+response; break; } $('#message').html(msg); } }); return false; }); }); </script> body <div id="message"></div> <form id="form" action="handler.php" method="post"> <p> <input type="text" name="fname" id="fname" placeholder="fname"> <input type="text" name="lname" id="lname" placeholder="lname"> </p> <p> <input type="text" name="phone" id="phone" placeholder="phone"> <input type="text" name="email" id="email" placeholder="email"> </p> <input type="submit" name="submit" value="submit" id="submit"> </form> And as for the handler.php file: <?php require('class/Container.php'); $filename = 'xml/memory.xml'; $is_ajax = $_REQUEST['is_ajax']; if(isset($is_ajax) && $is_ajax){ $fname = $_REQUEST['fname']; $lname = $_REQUEST['lname']; $phone = $_REQUEST['phone']; $email = $_REQUEST['email']; $obj = new Container; $obj->insertData('fname',$fname); $obj->insertData('lname',$lname); $obj->insertData('phone',$phone); $obj->insertData('email',$email); $tmp = $obj->give(); $result = $tmp['_obj']; /* Push data inside array */ $array = array(); foreach($result as $key => $value){ array_push($array,$key,$value); } $xml = simplexml_load_file($filename); // check if there is any data in if(count($xml->elements->data) == 0){ // if not, create the structure $xml->elements->addChild('data',''); } // proceed now that we do have the structure if(count($xml->elements->data) == 1){ foreach($result as $key => $value){ $xml->elements->data->addChild($key,$value); } $xml->saveXML($filename); echo 'ok'; }else{ echo 'ko'; } } ? The Container class: <?php class Container{ private $_obj; public function __construct(){ $this->_obj = array(); } public function addData($data = array()){ if(!empty($data)){ $oldData = $this->_obj; $data = array_merge($oldData,$data); $this->_obj = $data; } } public function removeData($key){ if(!empty($key)){ $oldData = $this->_obj; unset($oldData[$key]); $this->_obj = $oldData; } } public function outputData(){ return $this->_obj; } public function give(){ return get_object_vars($this); } public function insertData($key,$value){ $this->_obj[$key] = $value; } } ? The strange thing is that my result always fall under the default switch statement and the ajax response fit both present statement. I noticed then if I just paste the Container class on the top of the handler.php file, everything works properly but it kind of defeat what I try to achieve. I tried different way to include the Container class but it seem to be than the issue is specific to this current scenario. I'm still learning PHP and my guess is that I'm missing something really basic. I also search on stackoverflow regarding the issue I'm experiencing as well as PHP.net, without success. 
Regards,

    Read the article

< Previous Page | 613 614 615 616 617 618 619 620 621 622 623 624  | Next Page >