Search Results

Search found 12061 results on 483 pages for 'non printable'.


  • Do you still limit line length in code?

    - by Noldorin
    This is a matter on which I would like to gauge the opinion of the community: do you still limit the length of lines of code to a fixed maximum? This was certainly a convention of the past for many languages; one would typically cap the number of characters per line at a value such as 80 (and more recently 100 or 120, I believe). As far as I understand, the primary reasons for limiting line length are:

    Readability - You don't have to scroll horizontally to see the ends of long lines.

    Printing - Admittedly (at least in my experience), most code that you work on never gets printed out on paper, but by limiting the number of characters you can ensure that the formatting doesn't get messed up when printed.

    Past editors (?) - Not sure about this one, but I suspect that at some point in the distant past of programming, (at least some) text editors may have been based on a fixed-width buffer. I'm sure there are points that I am still missing, so feel free to add to these...

    Now, when I observe C or C# code nowadays, I often see a number of different styles, the main ones being:

    Line length capped to 80, 100, or even 120 characters. As far as I understand, 80 is the traditional length, but the longer limits of 100 and 120 have appeared because of the widespread use of high resolutions and widescreen monitors nowadays.

    No line-length cap at all. This tends to be pretty horrible to read, and I don't see it too often, though it's certainly not rare either.

    Inconsistent capping of line length. Some lines are limited to a fixed maximum (or even a maximum that changes depending on the file/location in code), while others (possibly comments) are not limited at all.

    My personal preference (at least recently) has been to cap the line length at 100 in the Visual Studio editor. This means that in a decently sized window (on a non-widescreen monitor), the ends of lines are still fully visible. I can however see a few disadvantages in this, especially when you end up writing code that's indented 3 or 4 levels and then have to include a long string literal - though I often take this as a sign to refactor my code! In particular, I am curious what the C and C# coders (or anyone who uses Visual Studio, for that matter) think about this point, though I would be interested in hearing anyone's thoughts on the subject.

    Edit: Thanks for all the answers - I appreciate the variety of opinions here, all presenting sound reasons. The consensus does seem to be tipping in the direction of always (or almost always) limiting the line length. Interestingly, various coding standards limit the line length. Judging by some of the answers, both the Python and Google C++ guidelines set the limit at 80 chars. I haven't seen anything similar regarding C# or VB.NET, but I would be curious to see if such standards exist anywhere.

    Read the article

  • Is Zend Framework a total waste of my time?

    - by Citizen
    Ok, I'm about 50% done with the "30 minute" quickstart guide from Zend. I must be missing something, because this seems like a total waste of time. The point of this quickstart is to create a guestbook, something I could do in 5 minutes with regular naked non-framework PHP.

    Here's the path to my Zend framework: c:/program files/wamp/www/_zend/
    Here's the path to my quickstart project: c:/program files/wamp/www/_zend/bin/quickstart/

    I have a number of questions at this point (see http://framework.zend.com/docs/quickstart/create-a-model-and-database-table):

    1: I'm running the command line to run my database loading script. I get an error stating that it can't find Zend/AutoLoader.php because the path to the Zend library is wrong. I followed all of the steps. I defined the path to my Zend library in the main config file, but for some reason, it's defined again in my db loader. In all of these scripts that they have me load, the relative path to the Zend library is given as /../library. Problem is, there's nothing in that folder. To get to my actual Zend folder, you'd need to be (relatively) at /../../../../library. Which brings me to my 2nd question:

    2: Where the #$#$ are the main Zend files supposed to be? The install directions were basically "put it wherever you want", when the real answer (after a bunch of errors and wasted time) was "put it somewhere so that it's really easy to type the full path a thousand times on the command line" and "it had also better be in a runnable place on your webserver, since it's going to create your quickstart application in a subdirectory within Zend". Which brings us to the third question:

    3: Am I supposed to have this library in both the parent core Zend folder (wamp/_zend/library) AND my application (quickstart/library)?

    4: If that is the case, it seems like a ton of wasted files to be uploading. I'd like to use Zend to create products that my customers will download. 5 megs of overhead seems like a bit much. Zend claims you can use these library components separately, but it looks to me like I'm going to have to upload them every time. Which leads to the next question:

    5: It appears that perhaps Zend is more for a single application that is not supposed to be distributed. Is this not the case?

    6: According to their default file structure, everything but my /public folder would be above public_html on my server if I wanted this to rest on my TLD. I would need to rename every reference of /public/ to /public_html/, or am I missing something else?
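    On question 1, a minimal sketch of the usual ZF1 fix: pin the include_path in public/index.php (or the loader script) so Zend/Loader/Autoloader.php resolves no matter where the script runs from. The absolute path below is an assumption and should point at your actual library copy:

        <?php
        // Put the real Zend library location first on the include_path,
        // then fall back to whatever was configured before.
        set_include_path(implode(PATH_SEPARATOR, array(
            realpath('C:/Program Files/wamp/www/_zend/library'), // assumed location
            get_include_path(),
        )));

        require_once 'Zend/Loader/Autoloader.php';
        $autoloader = Zend_Loader_Autoloader::getInstance();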

    Read the article

  • Robust LINQ to XML query for sibling key-value pairs

    - by awshepard
    (First post, please be gentle!) I am just learning about LINQ to XML in all its glory and frailty, trying to hack it to do what I want to do. Given an XML file like this:

        <list>
          <!-- random data, keys, values, etc.-->
          <key>FIRST_WANTED_KEY</key>
          <value>FIRST_WANTED_VALUE</value>
          <key>SECOND_WANTED_KEY</key>
          <value>SECOND_WANTED_VALUE</value> <!-- wanted because it's first -->
          <key>SECOND_WANTED_KEY</key>
          <value>UNWANTED_VALUE</value> <!-- not wanted because it's second -->
          <!-- nonexistent <key>THIRD_WANTED_KEY</key> -->
          <!-- nonexistent <value>THIRD_WANTED_VALUE</value> -->
          <!-- more stuff-->
        </list>

    I want to extract the values of a set of known "wanted keys" in a robust fashion, i.e. if SECOND_WANTED_KEY appears twice, I only want SECOND_WANTED_VALUE, not UNWANTED_VALUE. Additionally, THIRD_WANTED_KEY may or may not appear, so the query should be able to handle that as well. I can assume that FIRST_WANTED_KEY will appear before other keys, but can't assume anything about the order of the other keys - if a key appears twice, its values aren't important, I only want the first one. An anonymous data type consisting of strings is fine. My attempt has centered around something along these lines:

        var z = from y in x.Descendants()
                where y.Value == "FIRST_WANTED_KEY"
                select new
                {
                    first_wanted_value = ((XElement)y.NextNode).Value,
                    //...
                };

    My question is what should that ... be? I've tried, for instance (ugly, I know):

        second_wanted_value = ((XElement)y.ElementsAfterSelf()
            .Where(w => w.Value == "SECOND_WANTED_KEY")
            .FirstOrDefault().NextNode).Value

    which should hopefully allow the key to be anywhere, or non-existent, but that hasn't worked out, since .NextNode on a null XElement doesn't seem to work. I've also tried to add in a

        .Select(t => { if (t == null) return new XElement("SECOND_WANTED_KEY", ""); else return t; })

    clause after the where, but that hasn't worked either. I'm open to suggestions, (constructive) criticism, links, references, suggestions of phrases to Google for, etc. I've done a fair share of Googling and checking around S.O., so any help would be appreciated. Thanks!
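    One null-safe shape the ... could take - a sketch, not the poster's final code - is to hoist the lookup into a helper that yields null when a key is missing, which sidesteps the NextNode-on-null problem entirely (here x is assumed to be the loaded document or <list> element):

        // Returns the text of the first <value> element following the first
        // <key> whose text matches 'key', or null if the key never appears.
        // The explicit (string) cast on a null XElement also yields null,
        // so no exception is thrown for missing keys.
        static string ValueForKey(XContainer x, string key)
        {
            return x.Descendants("key")
                    .Where(k => (string)k == key)
                    .Select(k => (string)k.ElementsAfterSelf("value").FirstOrDefault())
                    .FirstOrDefault();
        }

        var wanted = new
        {
            First  = ValueForKey(x, "FIRST_WANTED_KEY"),
            Second = ValueForKey(x, "SECOND_WANTED_KEY"),
            Third  = ValueForKey(x, "THIRD_WANTED_KEY") // null if absent
        };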

    Read the article

  • Using Effect For Fog of War

    - by Qua
    I'm trying to apply fog of war to areas on the screen not currently visible to the player. I do this by rendering the game content into one RenderTarget and the fog of war into another, and then I merge them with an effect file that takes the color from the game RenderTarget and the alpha from the fog-of-war render target. The FOW RenderTarget is black where the FOW appears, and white where it doesn't. This does work, but it colors the fog of war (the unrevealed locations) white instead of the intended color of black. Before applying the effect I clear the backbuffer of the device to white. When I try to clear it to black, none of the fog of war appears at all, which I assume is a product of alpha blending with black. It works for all other colors, however - giving the resulting screen a tint of that color. How do I achieve a black fog while still being able to do alpha blending between the two render targets?

    The rendering code for applying the FOW:

        private RenderTarget2D mainTarget;
        private RenderTarget2D lightTarget;

        private void CombineRenderTargetsAndDraw()
        {
            batch.GraphicsDevice.SetRenderTarget(null);
            batch.GraphicsDevice.Clear(Color.White);
            fogOfWar.Parameters["LightsTexture"].SetValue(lightTarget);
            batch.Begin(SpriteSortMode.Immediate, BlendState.AlphaBlend);
            fogOfWar.CurrentTechnique.Passes[0].Apply();
            batch.Draw(
                mainTarget,
                new Rectangle(0, 0,
                    batch.GraphicsDevice.PresentationParameters.BackBufferWidth,
                    batch.GraphicsDevice.PresentationParameters.BackBufferHeight),
                Color.White
            );
            batch.End();
        }

    The effect file I'm using to apply the FOW:

        texture LightsTexture;

        sampler ColorSampler : register(s0);
        sampler LightsSampler = sampler_state
        {
            Texture = <LightsTexture>;
        };

        struct VertexShaderOutput
        {
            float4 Position : POSITION0;
            float2 TexCoord : TEXCOORD0;
        };

        float4 PixelShaderFunction(VertexShaderOutput input) : COLOR0
        {
            float2 tex = input.TexCoord;
            float4 color = tex2D(ColorSampler, tex);
            float4 alpha = tex2D(LightsSampler, tex);
            return float4(color.r, color.g, color.b, alpha.r);
        }

        technique Technique1
        {
            pass Pass1
            {
                PixelShader = compile ps_2_0 PixelShaderFunction();
            }
        }
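    One direction worth trying - an untested sketch, not a confirmed fix - is to darken the colour in the shader itself rather than relying on alpha blending against the backbuffer, so unrevealed pixels come out black regardless of the clear colour:

        // Multiply the scene colour by the mask: 0 (fogged) forces black,
        // 1 (revealed) leaves the colour untouched. The output is opaque,
        // so the backbuffer clear colour no longer shows through.
        float4 PixelShaderFunction(VertexShaderOutput input) : COLOR0
        {
            float2 tex = input.TexCoord;
            float4 color = tex2D(ColorSampler, tex);
            float mask = tex2D(LightsSampler, tex).r;
            return float4(color.rgb * mask, 1.0);
        }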

    Read the article

  • iPhone: didSelectRowAtIndexPath not invoked

    - by soletan
    Hi, I know this issue has been mentioned before, but the resolutions there didn't apply. I'm having a UINavigationController with an embedded UITableViewController set up using IB. In IB the UITableView's delegate and dataSource are both set to my derivation of UITableViewController. This class has been added using Xcode's templates for UITableViewController classes. There is no custom UITableViewCell and the table view is using the default plain style with a single title only.

    Well, in the simulator the list is rendered properly, with two elements provided by the dataSource, so the dataSource is linked properly. If I remove the outlet link for dataSource in IB, an empty table is rendered instead. As soon as I tap on one of these two items, it flashes blue and GDB encounters an interruption in __forwarding__ in the scope of UITableView::_selectRowAtIndexPath. It's not reaching the breakpoint set in my non-empty method didSelectRowAtIndexPath. I checked the arguments and the method's name to exclude typos resulting in a different selector. I haven't yet verified whether the delegate is set properly, but as it is set equivalently to the dataSource, which is getting two elements from the same class, I expect it to be set properly. So, what's wrong? I'm running iPhone/iPad SDK 3.1.2 ... but tried with iPhone SDK 3.1 in the simulator as well.

    EDIT: This is the code of my UITableViewController derivation:

        #import "LocalBrowserListController.h"
        #import "InstrumentDescriptor.h"

        @implementation LocalBrowserListController

        - (void)viewDidLoad
        {
            [super viewDidLoad];
            [self listLocalInstruments];
        }

        - (void)viewDidAppear:(BOOL)animated
        {
            [super viewDidAppear:animated];
        }

        - (void)didReceiveMemoryWarning
        {
            [super didReceiveMemoryWarning];
        }

        - (void)viewDidUnload
        {
            [super viewDidUnload];
        }

        - (NSInteger)numberOfSectionsInTableView:(UITableView *)tableView
        {
            return 1;
        }

        - (NSInteger)tableView:(UITableView *)tableView numberOfRowsInSection:(NSInteger)section
        {
            return [entries count];
        }

        - (UITableViewCell *)tableView:(UITableView *)tableView cellForRowAtIndexPath:(NSIndexPath *)indexPath
        {
            static NSString *CellIdentifier = @"Cell";
            UITableViewCell *cell = [tableView dequeueReusableCellWithIdentifier:CellIdentifier];
            if (cell == nil)
                cell = [[[UITableViewCell alloc] initWithStyle:UITableViewCellStyleDefault reuseIdentifier:CellIdentifier] autorelease];
            if (([entries count] > 0) && ([indexPath length] > 0))
                cell.textLabel.text = [[[entries objectAtIndex:[indexPath indexAtPosition:[indexPath length] - 1]] label] retain];
            return cell;
        }

        - (void)tableView:(UITableView *)tableView didSelectRowAtIndexPath:(NSIndexPath *)indexPath
        {
            if (([entries count] > 0) && ([indexPath length] > 0))
            {
                ...
            }
        }

        - (void)dealloc
        {
            [super dealloc];
        }

        - (void)listLocalInstruments
        {
            NSMutableArray *result = [NSMutableArray arrayWithCapacity:10];
            [result addObject:[InstrumentDescriptor descriptorOn:[[NSBundle mainBundle] pathForResource:@"example" ofType:@"idl"] withLabel:@"Default 1"]];
            [result addObject:[InstrumentDescriptor descriptorOn:[[NSBundle mainBundle] pathForResource:@"example" ofType:@"xml"] withLabel:@"Default 2"]];
            [entries release];
            entries = [[NSArray alloc] initWithArray:result];
        }

        @end
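    One quick way to rule out a broken IB connection - a diagnostic sketch, not a confirmed fix - is to wire the delegate in code and log what the table view actually sees at runtime; if the log shows a delegate other than this controller, the crash in __forwarding__ would make sense:

        - (void)viewDidLoad
        {
            [super viewDidLoad];
            // Set both outlets explicitly, bypassing the nib.
            self.tableView.delegate = self;
            self.tableView.dataSource = self;
            NSLog(@"delegate=%@ dataSource=%@",
                  self.tableView.delegate, self.tableView.dataSource);
            [self listLocalInstruments];
        }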

    Read the article

  • How can I get image data from QTKit without color or gamma correction in Snow Leopard?

    - by Nick Haddad
    Since Snow Leopard, QTKit is now returning color-corrected image data from functions like QTMovie's frameImageAtTime:withAttributes:error:. Given an uncompressed AVI file, the same image data is displayed with larger pixel values in Snow Leopard vs. Leopard. Currently I'm using frameImageAtTime to get an NSImage, then asking for the tiffRepresentation of that image. After doing this, pixel values are slightly higher in Snow Leopard. For example, a file with the following pixel value in Leopard:

        [0 180 0]

    now has a pixel value like:

        [0 192 0]

    Is there any way to ask a QTMovie for video frames that are not color corrected? Should I be asking for a CGImageRef, CIImage, or CVPixelBufferRef instead? Is there a way to disable color correction altogether prior to reading in the video files? I've attempted to work around this issue by drawing into an NSBitmapImageRep with NSCalibratedRGBColorSpace, but that only gets me part of the way there:

        // Create a movie
        NSDictionary *dict = [NSDictionary dictionaryWithObjectsAndKeys:
            nsFileName, QTMovieFileNameAttribute,
            [NSNumber numberWithBool:NO], QTMovieOpenAsyncOKAttribute,
            [NSNumber numberWithBool:NO], QTMovieLoopsAttribute,
            [NSNumber numberWithBool:NO], QTMovieLoopsBackAndForthAttribute,
            (id)nil];
        _theMovie = [[QTMovie alloc] initWithAttributes:dict error:&error];

        // ....

        NSMutableDictionary *imageAttributes = [NSMutableDictionary dictionary];
        [imageAttributes setObject:QTMovieFrameImageTypeNSImage forKey:QTMovieFrameImageType];
        [imageAttributes setObject:[NSArray arrayWithObject:@"NSBitmapImageRep"] forKey:QTMovieFrameImageRepresentationsType];
        [imageAttributes setObject:[NSNumber numberWithBool:YES] forKey:QTMovieFrameImageHighQuality];

        NSError *err = nil;
        NSImage *image = (NSImage *)[_theMovie frameImageAtTime:frameTime withAttributes:imageAttributes error:&err];

        // copy NSImage into an NSBitmapImageRep (Objective-C)
        NSBitmapImageRep *bitmap = [[image representations] objectAtIndex:0];

        // Draw into a colorspace we know about
        NSBitmapImageRep *bitmapWhoseFormatIKnow = [[NSBitmapImageRep alloc]
            initWithBitmapDataPlanes:NULL
                          pixelsWide:getWidth()
                          pixelsHigh:getHeight()
                       bitsPerSample:8
                     samplesPerPixel:4
                            hasAlpha:YES
                            isPlanar:NO
                      colorSpaceName:NSCalibratedRGBColorSpace
                        bitmapFormat:0
                         bytesPerRow:(getWidth() * 4)
                        bitsPerPixel:32];
        [NSGraphicsContext saveGraphicsState];
        [NSGraphicsContext setCurrentContext:
            [NSGraphicsContext graphicsContextWithBitmapImageRep:bitmapWhoseFormatIKnow]];
        [bitmap draw];
        [NSGraphicsContext restoreGraphicsState];

    This does convert back to a 'non color corrected' colorspace, but the color values are NOT exactly the same as what is stored in the uncompressed AVI files we are testing with. Also this is much less efficient because it is converting from RGB -> "Device RGB" -> RGB. Also, I am working in a 64-bit application, so dropping down to the QuickTime C API is not an option. Thanks for your help.

    Read the article

  • How can I turn a string of text into a BigInteger representation for use in an El Gamal cryptosystem

    - by angstrom91
    I'm playing with the El Gamal cryptosystem, and my goal is to be able to encipher and decipher long sequences of text. I have come up with a method that works for short sequences, but does not work for long sequences, and I cannot figure out why. El Gamal requires the plaintext to be an integer. I have turned my string into a byte[] using the .getBytes() method for Strings, and then created a BigInteger out of the byte[]. After encryption/decryption, I turn the BigInteger into a byte[] using the .toByteArray() method for BigIntegers, and then create a new String object from the byte[]. This works perfectly when I call ElGamalEncipher with strings of up to 129 characters. With 130 or more characters, the output produced is garbled. Can someone suggest how to solve this issue? Is this an issue with my method of turning the string into a BigInteger? If so, is there a better way to turn my string of text into a BigInteger and back? Below is my encipher/decipher code.

        public static BigInteger[] ElGamalEncipher(String plaintext, BigInteger p, BigInteger g, BigInteger r) {
            // returns a BigInteger[] cipherText
            // cipherText[0] is c
            // cipherText[1] is d
            BigInteger[] cipherText = new BigInteger[2];
            BigInteger pText = new BigInteger(plaintext.getBytes());

            // 1: select a random integer k such that 1 <= k <= p-2
            BigInteger k = new BigInteger(p.bitLength() - 2, sr);

            // 2: compute c = g^k (mod p)
            BigInteger c = g.modPow(k, p);

            // 3: compute d = P*r^k = P(g^a)^k (mod p)
            BigInteger d = pText.multiply(r.modPow(k, p)).mod(p);

            // C = (c,d) is the ciphertext
            cipherText[0] = c;
            cipherText[1] = d;
            return cipherText;
        }

        public static String ElGamalDecipher(BigInteger c, BigInteger d, BigInteger a, BigInteger p) {
            // returns the plaintext enciphered as (c,d)
            // 1: use the private key a to compute the least non-negative residue
            //    of an inverse of (c^a)' (mod p)
            BigInteger z = c.modPow(a, p).modInverse(p);
            BigInteger P = z.multiply(d).mod(p);
            byte[] plainTextArray = P.toByteArray();
            String output = null;
            try {
                output = new String(plainTextArray, "UTF8");
            } catch (Exception e) {
            }
            return output;
        }
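    A likely culprit - an assumption worth checking, not a verified diagnosis - is that El Gamal can only encrypt integers in [0, p-1]; once the plaintext bytes form a BigInteger >= p (which happens right around 129-130 ASCII characters for a 1024-bit p), the mod-p arithmetic silently wraps and decryption returns different bytes. A common workaround is to split the text into blocks guaranteed smaller than p and encipher each block separately:

        import java.math.BigInteger;
        import java.util.ArrayList;
        import java.util.List;

        // Split the UTF-8 bytes of s into BigInteger blocks strictly less
        // than p, so each block is a legal El Gamal plaintext.
        static List<BigInteger> toBlocks(String s, BigInteger p) throws Exception {
            int blockBytes = (p.bitLength() - 1) / 8; // any value this short is < p
            byte[] all = s.getBytes("UTF8");
            List<BigInteger> blocks = new ArrayList<BigInteger>();
            for (int off = 0; off < all.length; off += blockBytes) {
                int len = Math.min(blockBytes, all.length - off);
                byte[] chunk = new byte[len];
                System.arraycopy(all, off, chunk, 0, len);
                blocks.add(new BigInteger(1, chunk)); // signum 1: non-negative
            }
            return blocks; // encipher each entry, then reassemble after deciphering
        }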

    Read the article

  • Matching unmatched strings based on an unknown pattern

    - by Polity
    Alright guys, I really hurt my brain over this one and I'm curious if you can give me any pointers towards the right direction I should be taking.

    The situation is this: let's say I have a collection of strings (let it be clear that the pattern of these strings is unknown; for a fact, I can say that the strings contain only signs from the ASCII table, so I don't have to worry about weird Chinese signs). For this example, I take the following collection of strings (note that the strings don't have to make any human sense, so don't try figuring them out :)):

        "[001].[FOO].[TEST] - 'foofoo.test'",
        "[002].[FOO].[TEST] - 'foofoo.test'",
        "[003].[FOO].[TEST] - 'foofoo.test'",
        "[001].[FOO].[TEST] - 'foofoo.test.sample'",
        "[002].[FOO].[TEST] - 'foofoo.test.sample'",
        "-001- BAR.[TEST] - 'bartest.xx1",
        "-002- BAR.[TEST] - 'bartest.xx1"

    Now, what I need is a way of finding logical groups (and subgroups) of this set of strings. In the above example, just by rational thinking, you can combine the first 3, the 2 after that, and the last 2. Also, the resulting groups from the first 5 can be combined into one main group with 2 subgroups. This should give you something like this:

        {
            {
                "[001].[FOO].[TEST] - 'foofoo.test'",
                "[002].[FOO].[TEST] - 'foofoo.test'",
                "[003].[FOO].[TEST] - 'foofoo.test'",
            }
            {
                "[001].[FOO].[TEST] - 'foofoo.test.sample'",
                "[002].[FOO].[TEST] - 'foofoo.test.sample'",
            }
            {
                "-001- BAR.[TEST] - 'bartest.xx1",
                "-002- BAR.[TEST] - 'bartest.xx1"
            }
        }

    Anyway, I'm not sure how to approach this problem (how to get the result desired as indicated above). First off, I thought of creating a huge set of regexes which would parse most known patterns, but the number of different patterns is just too huge for that to be realistic. Another thing I thought of was parsing each individual word within a string (so strip all non-alphabetic or non-numeric characters and split by those), and if X% matches, I can assume the strings belong to the same group (where X will probably be around 80/90). However, I find the area of speculation kind of big. For example, when matching strings of 20 words each, the chance of hitting above 80% is kind of big (that means that 4 words can differ), while when matching only 8 words, 2 words at most can differ.

    My question to you is: what would be a logical approach in the above situation? Thanks in advance!
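    The token-overlap idea from the post can be sketched in a few lines - this is a rough illustration (the 0.8 threshold and the choice to drop pure-numeric tokens are arbitrary assumptions to tune), not a finished algorithm:

        import re

        def tokens(s):
            # Split on anything non-alphanumeric; drop digit-only tokens,
            # on the assumption that serial numbers like 001/002 carry no
            # grouping signal.
            return set(t.lower() for t in re.split(r'[^A-Za-z0-9]+', s)
                       if t and not t.isdigit())

        def group(strings, threshold=0.8):
            groups = []
            for s in strings:
                ts = tokens(s)
                for g in groups:
                    rep = tokens(g[0])  # compare against the group's first member
                    overlap = len(ts & rep) / float(max(len(ts | rep), 1))
                    if overlap >= threshold:
                        g.append(s)
                        break
                else:
                    groups.append([s])
            return groups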

    Read the article

  • Problem with ColdFusion communicating with MySQL database

    - by Greg
    Hi, I have been working to migrate a non-profit website from a local server (running Windows XP) to a GoDaddy hosting account (running Linux). Most of the pages are written in ColdFusion. Things have gone smoothly, up until this point. There is a flash form within the site (see this page: http://www.preservenet.cornell.edu/employ/submitjob.cfm) which, when completed, takes the visitor to this page: submitjobaction.cfm. I'm not quite sure what to make of this error, since I copied exactly what had been in the old MySQL database, and the .cfm files are exactly as they had been when they worked on the old server. Am I missing something? Below is the code from the database that the error seems to refer to. When I change "Positionlat" to some default value in the database as it suggests in the error, it says that another field needs a default value, and it's a neverending chain of errors as I try to correct it. This is probably a stupid error that I'm missing, but I've been working at it for days and can't find what I'm missing. I would really appreciate any help. Thanks! -Greg

        DROP TABLE IF EXISTS employopp;
        CREATE TABLE employopp (
          POSTID int(10) NOT NULL auto_increment,
          USERID varchar(10) collate latin1_general_ci default NULL,
          STATUS varchar(10) collate latin1_general_ci NOT NULL default 'ACTIVE',
          TYPE varchar(50) collate latin1_general_ci default 'professional',
          JOBTITLE varchar(70) collate latin1_general_ci default NULL,
          NUMBER varchar(30) collate latin1_general_ci default NULL,
          SALARY varchar(40) collate latin1_general_ci default NULL,
          ORGNAME varchar(70) collate latin1_general_ci default NULL,
          DEPTNAME varchar(70) collate latin1_general_ci default NULL,
          ORGDETAILS mediumtext character set utf8 collate utf8_unicode_ci,
          ORGWEBSITE varchar(200) collate latin1_general_ci default NULL,
          ADDRESS varchar(60) collate latin1_general_ci default 'none given',
          ADDRESS2 varchar(60) collate latin1_general_ci default NULL,
          CITY varchar(30) collate latin1_general_ci default NULL,
          STATE varchar(30) collate latin1_general_ci default NULL,
          COUNTRY varchar(3) collate latin1_general_ci default 'USA',
          POSTALCODE varchar(10) collate latin1_general_ci default NULL,
          EMAIL varchar(75) collate latin1_general_ci default NULL,
          NOMAIL varchar(5) collate latin1_general_ci default NULL,
          PHONE varchar(20) collate latin1_general_ci default NULL,
          FAX varchar(20) collate latin1_general_ci default NULL,
          WEBSITE varchar(200) collate latin1_general_ci default NULL,
          POSTDATE varchar(10) collate latin1_general_ci default NULL,
          POSTUNTIL varchar(20) collate latin1_general_ci default 'select date',
          POSTUNTILFILLED varchar(20) collate latin1_general_ci NOT NULL default 'until filled',
          texteHTML mediumtext character set utf8 collate utf8_unicode_ci,
          HOWTOAPPLY mediumtext character set utf8 collate utf8_unicode_ci,
          CONFIRSTNM varchar(30) collate latin1_general_ci default NULL,
          CONLASTNM varchar(60) collate latin1_general_ci default NULL,
          POSITIONCITY varchar(30) collate latin1_general_ci default NULL,
          POSITIONSTATE varchar(30) collate latin1_general_ci default NULL,
          POSITIONCOUNTRY varchar(3) collate latin1_general_ci default 'USA',
          POSITIONLAT varchar(50) collate latin1_general_ci NOT NULL,
          POSITIONLNG varchar(50) collate latin1_general_ci NOT NULL,
          PRIMARY KEY (POSTID)
        ) ENGINE=MyISAM AUTO_INCREMENT=2007 DEFAULT CHARSET=latin1 COLLATE=latin1_general_ci;
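    One way to break the "neverending chain" - a sketch under the assumption that the insert simply omits these columns, so every NOT NULL column without a default eventually complains in turn - is to give the strict columns explicit defaults in one statement rather than editing them one error at a time:

        -- Give the two columns without any default an empty-string default
        -- (or switch them to allow NULL if the flash form may omit them).
        ALTER TABLE employopp
          MODIFY POSITIONLAT varchar(50) COLLATE latin1_general_ci NOT NULL DEFAULT '',
          MODIFY POSITIONLNG varchar(50) COLLATE latin1_general_ci NOT NULL DEFAULT '';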

    Read the article

  • What are five things you hate about your favorite language?

    - by brian d foy
    There's been a cluster of Perl-hate on Stack Overflow lately, so I thought I'd bring my "Five things you hate about your favorite language" question to Stack Overflow. Take your favorite language and tell me five things you hate about it. Those might be things that just annoy you, admitted design flaws, recognized performance problems, or any other category. You just have to hate it, and it has to be your favorite language. Don't compare it to another language, and don't talk about languages that you already hate. Don't talk about the things you like in your favorite language. I just want to hear the things that you hate but tolerate so you can use all of the other stuff, and I want to hear it about the language you wished other people would use.

    I ask this whenever someone tries to push their favorite language on me, and sometimes as an interview question. If someone can't find five things to hate about his favorite tool, he doesn't know it well enough to either advocate it or pull in the big dollars using it. He hasn't used it in enough different situations to fully explore it. He's advocating it as a culture or religion, which means that if I don't choose his favorite technology, I'm wrong.

    I don't care that much which language you use. Don't want to use a particular language? Then don't. You go through due diligence to make an informed choice and still don't use it? Fine. Sometimes the right answer is "You have a strong programming team with good practices and a lot of experience in Bar. Changing to Foo would be stupid."

    This is a good question for code reviews too. People who really know a codebase will have all sorts of suggestions for it, and those who don't know it so well have non-specific complaints. I ask things like "If you could start over on this project, what would you do differently?" In this fantasy land, users and programmers get to complain about anything and everything they don't like: "I want a better interface", "I want to separate the model from the view", "I'd use this module instead of this other one", "I'd rename this set of methods", or whatever they really don't like about the current situation. That's how I get a handle on how much a particular developer knows about the codebase. It's also a clue about how much of the programmer's ego is tied up in what he's telling me.

    Hate isn't the only dimension of figuring out how much people know, but I've found it to be a pretty good one. The things that they hate also give me a clue how well they are thinking about the subject.

    Read the article

  • Write binary stream to browser using PHP

    - by Dave Jarvis
    Background: Trying to stream a PDF report written using iReport through PHP to the browser. The general problem is: how do you write binary data to the browser using PHP?

    Working code: The following code does the job, but (for many reasons) it is not as efficient as it should be (the code writes a file and then sends the file contents to the browser).

        // Load the MySQL database driver.
        // java('java.lang.Class')->forName('com.mysql.jdbc.Driver');

        // Attempt a database connection.
        $conn = java('java.sql.DriverManager')->getConnection(
            "jdbc:mysql://localhost:3306/climate?user=$user&password=$password");

        // Extract parameters.
        $params = new java('java.util.HashMap');
        $params->put('DistrictCode', '101');
        $params->put('StationCode', '0066');
        $params->put('CategoryCode', '010');

        // Use the fill manager to produce the report.
        $fm = java('net.sf.jasperreports.engine.JasperFillManager');
        $pm = $fm->fillReport($report, $params, $conn);

        header('Cache-Control: no-cache private');
        header('Content-Description: File Transfer');
        header('Content-Disposition: attachment, filename=climate-report.pdf');
        header('Content-Type: application/pdf');
        header('Content-Transfer-Encoding: binary');
        header('Content-Length: ' . strlen($result));

        $path = realpath(".") . "/output.pdf";

        $em = java('net.sf.jasperreports.engine.JasperExportManager');
        $result = $em->exportReportToPdfFile($pm, $path);

        readfile($path);

        $conn->close();

    Non-working code: To remove the slight redundancy (i.e., write directly to the browser), the following code looks like it should work, but it does not:

        $em = java('net.sf.jasperreports.engine.JasperExportManager');
        $result = $em->exportReportToPdf($pm);

        header('Content-Length: ' . strlen($result));
        echo $result;

    Content is sent to the browser, but the file is corrupt (it begins with the PDF header) and cannot be read by any PDF reader.

    Question: How can I take out the middle step of writing to the file and write directly to the browser so that the PDF is not corrupted? Thank you!
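    A direction worth testing - a sketch that assumes the PHP/Java Bridge, whose java_values() unwraps a Java byte[] proxy into a native PHP string; on a different bridge the unwrap call will differ - is to convert the returned byte array explicitly before measuring or echoing it:

        // exportReportToPdf() hands back a Java byte[] proxy object, not a
        // PHP string, so strlen() and echo may see the proxy rather than
        // the raw bytes. Unwrap it first, then send the headers.
        $em = java('net.sf.jasperreports.engine.JasperExportManager');
        $result = java_values($em->exportReportToPdf($pm));

        header('Content-Type: application/pdf');
        header('Content-Disposition: attachment; filename=climate-report.pdf'); // note the ';'
        header('Content-Length: ' . strlen($result));
        echo $result;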

    Read the article

  • Oracle sample data problems

    - by Jay
    So, I have this Java-based data transformation / masking tool, which I wanted to test out on Oracle 10g. The good part with Oracle 10g is that you get a load of sample schemas with half a million records in some. The schemas are SH, OE, HR, IX, etc. So, I installed 10g and found out that the installation scripts are under ORACLE_HOME/demo/scripts. I customized these scripts a bit to run in batch mode. That solves one half of my requirement - to create source data for testing my data transformation software.

    The second half of the requirement is that I create the same schemas under different names (TR_HR, TR_OE, and so on...) without any data. These schemas would represent my target schemas. So, in short, my software would pick up data from a table in a schema and load it into the same table in a different schema.

    Now, I have two issues in creating my target schema and emptying it. I would like this in a batch job. But in the Oracle scripts you get, the sample schema names are not configurable. So, I tried creating a script, replacing OE with TR_OE, HR with TR_HR, and so on. However, this approach is kind of irritating because the sample schemas are kind of complicated in the way they are created; Oracle creates synonyms, views, materialized views, data types, and a lot of weird stuff.

    I would like the target schemas (TR_HR, TR_OE, ...) to be empty. But some of the schemas have circular references, which would not allow me to delete data. The only workaround seems to be removing certain foreign keys, deleting data, and then adding the constraints back.

    Is there any easy way to do all this, without all this fuss? I would need a complicated data set for my testing (complicated as in tables with triggers, multiple hierarchies... for instance, a child table that has children up to 5 levels, a parent table that refers to an IOT table, and an IOT table that refers to a non-IOT table, etc.). The sample schemas are just about perfect from a data-set perspective. The only challenge I see is in automating this whole process of loading up the source schemas, and then creating the target schemas and emptying them. Appreciate your help and suggestions.
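    For the circular-reference problem, a common pattern - sketched below, to be run as each TR_ schema owner; treat it as a starting point rather than a turnkey script - is to disable every foreign key, delete, then re-enable:

        -- Disable all referential constraints in the current schema...
        BEGIN
          FOR c IN (SELECT table_name, constraint_name
                      FROM user_constraints
                     WHERE constraint_type = 'R') LOOP
            EXECUTE IMMEDIATE 'ALTER TABLE "' || c.table_name ||
                              '" DISABLE CONSTRAINT "' || c.constraint_name || '"';
          END LOOP;
        END;
        /
        -- ...DELETE or TRUNCATE the tables here...
        -- then repeat the loop with ENABLE CONSTRAINT to put them back.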

    Read the article

  • Cannot populate form with ajax and populate jquery plugin

    - by Azriel_
    I'm trying to populate a form with jQuery's populate plugin, but using $.ajax. The idea is to retrieve data from my database according to the id in the links (e.g. link: get_result_edit.php?id=34), reformulate it to JSON, return it to my page, and fill up the form with the populate plugin. But somehow I cannot get it to work. Any ideas? Here's the code:

        $('a').click(function () {
            $('#updatediv').hide('slow');
            $.ajax({
                type: "GET",
                url: "get_result_edit.php",
                success: function (data) {
                    var $response = $(data);
                    $('#form1').populate($response);
                }
            });
            $('#updatediv').fadeIn('slow');
            return false;
        });

    whilst the PHP file states as follows:

        <?php
        $conn = new mysqli('localhost', 'XXXX', 'XXXXX', 'XXXXX');
        @$query = 'Select * FROM news WHERE id ="' . $_GET['id'] . '"';
        $stmt = $conn->query($query) or die($mysql->error());
        if ($stmt) {
            $results = $stmt->fetch_object(); // get database data
            $json = json_encode($results);    // convert to JSON format
            echo $json;
        }
        ?>

    Now the first thing is that MySQL returns NULL this way: is there something wrong with the declaration of the SQL statement in the $_GET part? Second is that even if I put in a specific record to bring up, populate doesn't populate.

    Update: I changed the populate library to the one called "PHP jQuery helper functions", and the difference is that finally it says something: I get an error saying "NO SUCH ELEMENT AS ...". I went into the library to have a look and up comes the following function:

        function populateFormElement(form, name, value) {
            // check that the named element exists in the form
            var name = name; // handle non-php naming
            var element = form[name];
            if (element == undefined) {
                debug('No such element as ' + name);
                return false;
            }
            // debug options
            if (options.debug) {
                _populate.elements.push(element);
            }
        }

    Now looking at it, one can see that it should also print out the name, but it's not printing it out. So I'm guessing that retrieving the name from the JSON is not working correctly. The link is at http://www.ocdmonline.org/michael/edit%5Fnews.php with username: Testing and pass: test123. Any ideas?
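    One gap that would explain the NULL result - a sketch; the href-parsing line is illustrative and depends on how the links are actually built - is that the $.ajax call never forwards the id from the clicked link, so get_result_edit.php sees no $_GET['id']:

        $('a').click(function () {
            $('#updatediv').hide('slow');
            $.ajax({
                type: "GET",
                url: "get_result_edit.php",
                // forward the id (e.g. from get_result_edit.php?id=34)
                data: { id: this.href.split('=')[1] },
                dataType: "json",
                success: function (data) {
                    $('#form1').populate(data);
                    // fade in only once the data has actually arrived
                    $('#updatediv').fadeIn('slow');
                }
            });
            return false;
        });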

    Read the article

  • Historical / auditable database

    - by Mark
    Hi all, this question is related to the schema that can be found in one of my other questions here. Basically in my database I store users, locations, and sensors, amongst other things. All of these things are editable in the system by users, and deletable. However - when an item is edited or deleted I need to store the old data; I need to be able to see what the data was before the change.

    There are also non-editable items in the database, such as "readings". They are more of a log, really. Readings are logged against sensors, because each is the reading for a particular sensor. If I generate a report of readings, I need to be able to see what the attributes for a location or sensor were at the time of the reading. Basically, I should be able to reconstruct the data for any point in time.

    Now, I've done this before and got it working well by adding the following columns to each editable table:

        valid_from
        valid_to
        edited_by

    If valid_to = 9999-12-31 23:59:59 then that's the current record. If valid_to equals valid_from, then the record is deleted. However, I was never happy with the triggers I needed to use to enforce foreign key consistency.

    I can possibly avoid triggers by using an extension to the PostgreSQL database. This provides a column type called "period" which allows you to store a period of time between two dates, and then allows you to use CHECK constraints to prevent overlapping periods. That might be an answer.

    I am wondering though if there is another way. I've seen people mention using special historical tables, but I don't really like the thought of maintaining 2 tables for almost every 1 table (though it still might be a possibility). Maybe I could cut down my initial implementation to not bother checking the consistency of records that aren't "current" - i.e. only bother to check constraints on records where the valid_to is 9999-12-31 23:59:59. After all, the people who use historical tables do not seem to have constraint checks on those tables (for the same reason, you'd need triggers). Does anyone have any thoughts about this?

    PS - the title also mentions auditable database. In the previous system I mentioned, there is always the edited_by field. This allowed all changes to be tracked so we could always see who changed a record. Not sure how much difference that might make. Thanks.
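    For what the period-type idea looks like in practice, here is a minimal sketch (modern PostgreSQL syntax, with a range type and an exclusion constraint standing in for the "period" extension; table and column names are illustrative):

        -- btree_gist lets the exclusion constraint mix = with &&.
        CREATE EXTENSION IF NOT EXISTS btree_gist;

        CREATE TABLE sensor_history (
            sensor_id  integer   NOT NULL,
            name       text      NOT NULL,
            edited_by  integer   NOT NULL,
            valid      tstzrange NOT NULL,
            -- no two versions of one sensor may overlap in time:
            EXCLUDE USING gist (sensor_id WITH =, valid WITH &&)
        );

        -- The "current" row is the one whose range is open-ended:
        SELECT * FROM sensor_history
         WHERE sensor_id = 42 AND upper_inf(valid);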

    Read the article

  • Possible uncommitted transactions causing "System.Data.SqlClient.SqlException: Timeout expired" error

    - by Michael
    My application requires a user to log in and allows them to edit a list of things. However, it seems that if the same user always logs in and out and edits the list, this user will run into a "System.Data.SqlClient.SqlException: Timeout expired" error. I've read comments about increasing the timeout period, but I've also read a comment about it possibly being caused by uncommitted transactions. And I do have one going in the application. I'll provide the code I'm working with, and there is an IF statement in there that I was a little iffy about, but it seemed like a reasonable thing to do.

    I'll just go over what's going on here: there is a list of objects to update or add into the database. New objects created in the application are given an ID of 0, while existing objects have their own IDs generated from the DB. If the user chooses to delete some objects, their IDs are stored in a separate list of Integers. Once the user is ready to save their changes, the two lists are passed into this method. By use of the IF statement, objects with an ID of 0 are added (using the Add stored procedure) and those objects with non-zero IDs are updated (using the Update stored procedure). After all this, a FOR loop goes through all the integers in the "removal" list and uses the Delete stored procedure to remove them. A transaction is used for all this.

        Public Shared Sub UpdateSomethings(ByVal SomethingList As List(Of Something), ByVal RemovalList As List(Of Integer))
            Using DBConnection As New SqlConnection(conn)
                DBConnection.Open()
                Dim MyTransaction As SqlTransaction
                MyTransaction = DBConnection.BeginTransaction()
                Try
                    For Each SomethingItem As Something In SomethingList
                        Using MyCommand As New SqlCommand()
                            MyCommand.Connection = DBConnection
                            If SomethingItem.ID > 0 Then
                                MyCommand.CommandText = "UpdateSomething"
                            Else
                                MyCommand.CommandText = "AddSomething"
                            End If
                            MyCommand.Transaction = MyTransaction
                            MyCommand.CommandType = CommandType.StoredProcedure
                            With MyCommand.Parameters
                                If MyCommand.CommandText = "UpdateSomething" Then
                                    .Add("@id", SqlDbType.Int).Value = SomethingItem.ID
                                End If
                                .Add("@stuff", SqlDbType.VarChar).Value = SomethingItem.Stuff
                            End With
                            MyCommand.ExecuteNonQuery()
                        End Using
                    Next
                    For Each ID As Integer In RemovalList
                        Using MyCommand As New SqlCommand("DeleteSomething", DBConnection)
                            MyCommand.Transaction = MyTransaction
                            MyCommand.CommandType = CommandType.StoredProcedure
                            With MyCommand.Parameters
                                .Add("@id", SqlDbType.Int).Value = ID
                            End With
                            MyCommand.ExecuteNonQuery()
                        End Using
                    Next
                    MyTransaction.Commit()
                Catch ex As Exception
                    MyTransaction.Rollback()
                    'Exception handling goes here
                End Try
            End Using
        End Sub

    There are three stored procedures used here, as well as some looping, so I can see how something could be holding everything up if the list is large enough. Other users can log in to the system at the same time just fine, though. I'm using Visual Studio 2008 to debug and am using SQL Server 2000 for the DB.
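    To test the uncommitted-transaction theory directly - a diagnostic sketch for SQL Server 2000; MyDatabase is a placeholder name - run this in Query Analyzer while a second save attempt is hanging:

        -- Reports the oldest transaction still open in the database; if it
        -- belongs to the app's SPID, something is keeping the transaction
        -- (and its locks) open after the save completed.
        DBCC OPENTRAN ('MyDatabase')

        -- sp_who2 shows which SPID is blocked and by whom (BlkBy column).
        EXEC sp_who2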

    Read the article

  • Are "EXC_BREAKPOINT (SIGTRAP)" exceptions caused by debugging breakpoints?

    - by Dennis
    I have a multithreaded app that is very stable on all my test machines and seems to be stable for almost every one of my users (based on no complaints of crashes). The app crashes frequently for one user, though, who was kind enough to send crash reports. All the crash reports (~10 consecutive reports) look essentially identical:

        Date/Time:       2010-04-06 11:44:56.106 -0700
        OS Version:      Mac OS X 10.6.3 (10D573)
        Report Version:  6

        Exception Type:  EXC_BREAKPOINT (SIGTRAP)
        Exception Codes: 0x0000000000000002, 0x0000000000000000
        Crashed Thread:  0  Dispatch queue: com.apple.main-thread

        Thread 0 Crashed:  Dispatch queue: com.apple.main-thread
        0  com.apple.CoreFoundation  0x90ab98d4 __CFBasicHashRehash + 3348
        1  com.apple.CoreFoundation  0x90adf610 CFBasicHashRemoveValue + 1264
        2  com.apple.CoreText        0x94e0069c TCFMutableSet::Intersect(__CFSet const*) const + 126
        3  com.apple.CoreText        0x94dfe465 TDescriptorSource::CopyMandatoryMatchableRequest(__CFDictionary const*, __CFSet const*) + 115
        4  com.apple.CoreText        0x94dfdda6 TDescriptorSource::CopyDescriptorsForRequest(__CFDictionary const*, __CFSet const*, long (*)(void const*, void const*, void*), void*, unsigned long) const + 40
        5  com.apple.CoreText        0x94e00377 TDescriptor::CreateMatchingDescriptors(__CFSet const*, unsigned long) const + 135
        6  com.apple.AppKit          0x961f5952 __NSFontFactoryWithName + 904
        7  com.apple.AppKit          0x961f54f0 +[NSFont fontWithName:size:] + 39
        (....more text follows)

    First, I spent a long time investigating [NSFont fontWithName:size:]. I figured that maybe the user's fonts were screwed up somehow, so that [NSFont fontWithName:size:] was requesting something non-existent and failing for that reason. I added a bunch of code using [[NSFontManager sharedFontManager] availableFontNamesWithTraits:NSItalicFontMask] to check for font availability in advance. Sadly, these changes didn't fix the problem.

    I've now noticed that I forgot to remove some debugging breakpoints, including _NSLockError, [NSException raise], and objc_exception_throw. However, the app was definitely built using "Release" as the active build configuration. I assume that using the "Release" configuration prevents setting of any breakpoints - but then again I am not sure exactly how breakpoints work or whether the program needs to be run from within gdb for breakpoints to have any effect.

    My questions are: could my having left the breakpoints set be the cause of the crashes observed by the user? If so, why would the breakpoints cause a problem only for this one user? If not, has anybody else had similar problems with [NSFont fontWithName:size:]? I will probably just try removing the breakpoints and sending a new build back to the user, but I'm not sure how much currency I have left with that user. And I'd like to understand more generally whether leaving the breakpoints set could possibly cause a problem (when the app is built using the "Release" configuration).

    Read the article

  • Using array as map value: Can't see the error

    - by Tom
    Hi all, I'm trying to create a map where the key is an int and the value is an array:

        int red[3] = {1,0,0};
        int green[3] = {0,1,0};
        int blue[3] = {0,0,1};

        std::map<int, int[3]> colours;
        colours.insert(std::pair<int,int[3]>(GLUT_LEFT_BUTTON,red)); //THIS IS LINE 24 !
        colours.insert(std::pair<int,int[3]>(GLUT_MIDDLE_BUTTON,blue));
        colours.insert(std::pair<int,int[3]>(GLUT_RIGHT_BUTTON,green));

    However, when I try to compile this code, I get the following error:

        g++ (Ubuntu 4.4.1-4ubuntu8) 4.4.1
        In file included from /usr/include/c++/4.4/bits/stl_algobase.h:66,
                         from /usr/include/c++/4.4/bits/stl_tree.h:62,
                         from /usr/include/c++/4.4/map:60,
                         from ../src/utils.cpp:9:
        /usr/include/c++/4.4/bits/stl_pair.h: In constructor 'std::pair<_T1, _T2>::pair(const _T1&, const _T2&) [with _T1 = int, _T2 = int [3]]':
        ../src/utils.cpp:24:   instantiated from here
        /usr/include/c++/4.4/bits/stl_pair.h:84: error: array used as initializer
        /usr/include/c++/4.4/bits/stl_pair.h: In constructor 'std::pair<_T1, _T2>::pair(const std::pair<_U1, _U2>&) [with _U1 = int, _U2 = int [3], _T1 = const int, _T2 = int [3]]':
        ../src/utils.cpp:24:   instantiated from here
        /usr/include/c++/4.4/bits/stl_pair.h:101: error: array used as initializer
        In file included from /usr/include/c++/4.4/map:61,
                         from ../src/utils.cpp:9:
        /usr/include/c++/4.4/bits/stl_map.h: In member function '_Tp& std::map<_Key, _Tp, _Compare, _Alloc>::operator[](const _Key&) [with _Key = int, _Tp = int [3], _Compare = std::less<int>, _Alloc = std::allocator<std::pair<const int, int [3]> >]':
        ../src/utils.cpp:30:   instantiated from here
        /usr/include/c++/4.4/bits/stl_map.h:450: error: conversion from 'int' to non-scalar type 'int [3]' requested
        make: *** [src/utils.o] Error 1

    I really can't see where the error is, or even if there's an error. Any help (please include an explanation to help me avoid this mistake) will be appreciated. Thanks in advance.
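    The underlying issue is that raw C arrays are neither copyable nor assignable, so they can't serve as map values. A minimal sketch of one fix is to wrap the three ints in a copyable type such as std::array (requires -std=c++0x/-std=c++11; on older compilers boost::array or a small struct works the same way):

        #include <array>
        #include <map>
        #include <GL/glut.h> // for the GLUT_*_BUTTON constants

        std::map<int, std::array<int, 3> > colours;

        void initColours() {
            // std::array is copyable, so plain insertion/assignment works.
            colours[GLUT_LEFT_BUTTON]   = {{1, 0, 0}}; // red
            colours[GLUT_MIDDLE_BUTTON] = {{0, 0, 1}}; // blue
            colours[GLUT_RIGHT_BUTTON]  = {{0, 1, 0}}; // green
        }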

    Read the article

  • NHibernate criteria query question

    - by Chris
    I have 3 related objects (Entry, GamePlay, Prize) and I'm trying to find the best way to query them for what I need using NHibernate. When a request comes in, I need to query the Entries table for a matching entry and, if found, get a) the latest game play, along with the first game play that has a prize attached. Prize is a child of GamePlay and each Entry object has a GamePlays property (IList). Currently, I'm working on a method that pulls the matching Entry and eagerly loads all game plays and associated prizes, but it seems wasteful to load all game plays just to find the latest one and any that contain a prize. Right now, my query looks like this:

        var entry = session.CreateCriteria<Entry>()
            .Add(Restrictions.Eq("Phone", phone))
            .AddOrder(Order.Desc("Created"))
            .SetFetchMode("GamePlays", FetchMode.Join)
            .SetMaxResults(1).UniqueResult<Entry>();

    Two problems with this:

    It loads all game plays up front. With 365 days of data, this could easily balloon to 300k of data per query.

    It doesn't eagerly load the Prize child property for each game. Therefore, my code that loops through the GamePlays list looking for a non-null Prize must make a call to load each Prize property I check.

    I'm not an NHibernate expert, but I know there has to be a better way to do this. Ideally, I'd like to do the following (pseudocode):

        entry = findEntry(phoneNumber)
        lastPlay = getLatestGamePlay(entry)
        firstWinningPlay = getFirstWinningGamePlay(entry)

    The end result of course is that I have the entry details, the latest game play, and the first winning game play. The catch is that I want to do this in as few database calls as possible, otherwise I'd just execute 3 separate queries. The object definitions look like:

        public class Entry
        {
            public Guid Id { get; set; }
            public string Phone { get; set; }
            public IList<GamePlay> GamePlays { get; set; }
            // ... other properties
        }

        public class GamePlay
        {
            public Guid Id { get; set; }
            public Entry Entry { get; set; }
            public Prize Prize { get; set; }
            // ... other properties
        }

        public class Prize
        {
            public Guid Id { get; set; }
            // ... other properties
        }

    The proper NHibernate mappings are in place, so I just need help figuring out how to set up the criteria query (not looking for HQL, don't use it).
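    One shape the answer could take - a sketch only; it assumes a Created property on GamePlay and a fetch-join on Prize, neither of which is confirmed by the post - is to batch three narrow criteria into a single round trip with MultiCriteria:

        // Three SetMaxResults(1) queries, one database round trip.
        var results = session.CreateMultiCriteria()
            .Add(DetachedCriteria.For<Entry>()
                .Add(Restrictions.Eq("Phone", phone))
                .AddOrder(Order.Desc("Created"))
                .SetMaxResults(1))
            .Add(DetachedCriteria.For<GamePlay>()          // latest play
                .CreateAlias("Entry", "e")
                .Add(Restrictions.Eq("e.Phone", phone))
                .AddOrder(Order.Desc("Created"))           // assumed property
                .SetMaxResults(1))
            .Add(DetachedCriteria.For<GamePlay>()          // first winning play
                .CreateAlias("Entry", "e")
                .Add(Restrictions.Eq("e.Phone", phone))
                .Add(Restrictions.IsNotNull("Prize"))
                .SetFetchMode("Prize", FetchMode.Join)
                .AddOrder(Order.Asc("Created"))            // assumed property
                .SetMaxResults(1))
            .List();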

    Read the article

  • Hosting a WCF Service Lib through a Windows service gets a System.InvalidOperationException: attempti

    - by JohnL
    I have a WCF service library containing five service contracts. The library is hosted through a Windows service. Most if not all of my configuration for the WCF library is declarative. The only thing I am doing in code for configuration is to pass the type of the class implementing the service contracts into ServiceHost. I then call Open on each of the services during the Windows service's OnStart event. Here is the error message I get:

        Service cannot be started. System.InvalidOperationException: Service '[Fubu.Conversion.Service1' has zero application (non-infrastructure) endpoints. This might be because no configuration file was found for your application, or because no service element matching the service name could be found in the configuration file, or because no endpoints were defined in the service element.
        at System.ServiceModel.Description.DispatcherBuilder.EnsureThereAreNonMexEndpoints(ServiceDescription description)
        at System.ServiceModel.Description.DispatcherBuilder.InitializeServiceHost(ServiceDescription description, ServiceHostBase serviceHost)
        at System.ServiceModel.ServiceHostBase.InitializeRuntime()
        at System.ServiceModel.ServiceHostBase.OnBeginOpen()
        at System.ServiceModel.ServiceHostBase.OnOpen(TimeSpan timeout)
        at System.ServiceModel.Channels.CommunicationObject.Open(TimeSpan timeout)
        at System.ServiceModel.Channels.CommunicationObject.Open()
        at Fubu.RemotingHost.RemotingHost.StartServ...

    The OnStart event:

        protected override void OnStart(string[] args)
        {
            // Uncomment to debug this properly
            //System.Diagnostics.Debugger.Break();
            StartService1();
            StartService2();
            StartService3();
            StartService4();
            StartService5();
        }

    Each of the above simply does the following:

        private void StartSecurityService()
        {
            host = new ServiceHost(typeof(Service1));
            host.Open();
        }

    Service library app.config summary:

        <services>
          <service behaviorConfiguration="DefaultServiceBehavior" name="Fubu.Conversion.Service1">
            <endpoint address="" binding="netTcpBinding" bindingConfiguration="TCPBindingConfig"
                      name="Service1" bindingName="TCPEndPoint" contract="Fubu.Conversion.IService1">
              <identity>
                <dns value="localhost" />
              </identity>
            </endpoint>
            <endpoint address="mex" binding="mexTcpBinding" bindingConfiguration=""
                      name="mexSecurity" bindingName="TcpMetaData" contract="IMetadataExchange" />
            <host>
              <baseAddresses>
                <add baseAddress="net.tcp://localhost:8025/Fubu/Conversion/Service1/" />
              </baseAddresses>
            </host>
          </service>
          ...

    The contract is set up as follows:

        namespace Fubu.Conversion.Service1
        {
            [ServiceContract(Namespace = "net.tcp://localhost:8025/Fubu")]
            public interface IService1
            {

    I have looked "high and low" for a solution without any luck. Is the answer obvious? The solution to this does not appear to be. Thanks
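    One thing worth checking first - a hypothesis, not a confirmed cause: the contract lives in namespace Fubu.Conversion.Service1, so if the implementing class is Fubu.Conversion.Service1.Service1, the config's name="Fubu.Conversion.Service1" would not match it, which produces exactly this zero-endpoint error. A quick diagnostic sketch to see what WCF actually resolved:

        // Dump the configuration name WCF derives from the service type and
        // whatever endpoints it managed to load from app.config.
        var host = new ServiceHost(typeof(Service1));
        Console.WriteLine(host.Description.ConfigurationName); // must equal <service name="...">
        foreach (var ep in host.Description.Endpoints)
            Console.WriteLine("{0} @ {1}", ep.Contract.ContractType, ep.Address);
        host.Open();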

    Read the article

  • Tracking upstream svn changes with git-svn and github?

    - by Joseph Turian
    How do I track upstream SVN changes using git-svn and github? I used git-svn to convert an SVN repo to git on github:

        $ git svn clone -s http://svn.osqa.net/svnroot/osqa/ osqa
        $ cd osqa
        $ git remote add origin git@github.com:turian/osqa.git
        $ git push origin master

    I then made a few changes in my git repo, committed, and pushed to github. Now, I am on a new machine. I want to take upstream SVN changes, merge them with my github repo, and push them to my github repo. This documentation says: "If you ever lose your local copy, just run the import again with the same settings, and you'll get another working directory with all the necessary SVN metainfo." So I did the following. But none of the commands work as desired. How do I track upstream SVN changes using git-svn and github? What am I doing wrong?

        $ git svn clone -s http://svn.osqa.net/svnroot/osqa/ osqa
        $ cd osqa
        $ git remote add origin git@github.com:turian/osqa.git
        $ git push origin master
        To git@github.com:turian/osqa.git
         ! [rejected]        master -> master (non-fast forward)
        error: failed to push some refs to 'git@github.com:turian/osqa.git'

        $ git pull
        remote: Counting objects: 21, done.
        remote: Compressing objects: 100% (17/17), done.
        remote: Total 17 (delta 7), reused 9 (delta 0)
        Unpacking objects: 100% (17/17), done.
        From git@github.com:turian/osqa
         * [new branch]      master     -> origin/master
        From git@github.com:turian/osqa
         * [new tag]         master     -> master
        You asked me to pull without telling me which branch you
        want to merge with, and 'branch.master.merge' in
        your configuration file does not tell me either. Please
        name which branch you want to merge on the command line and
        try again (e.g. 'git pull <repository> <refspec>').
        See git-pull(1) for details on the refspec.
        ...

        $ /usr/lib/git-core/git-svn rebase
        warning: refname 'master' is ambiguous.
        First, rewinding head to replay your work on top of it...
        Applying: Added forum/management/commands/dumpsettings.py
        error: Ref refs/heads/master is at 6acd747f95aef6d9bce37f86798a32c14e04b82e but expected a7109d94d813b20c230a029ecd67801e6067a452
        fatal: Cannot lock the ref 'refs/heads/master'.
        Could not move back to refs/heads/master
        rebase refs/remotes/trunk: command returned error: 1
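    A sequence that may untangle this - a sketch, not guaranteed for this exact repo state; note that the git pull above created a stray tag named 'master', which is what makes the refname ambiguous - keeps SVN as the upstream and treats github as a mirror:

        # Remove the accidental 'master' tag so refs/heads/master is unambiguous
        $ git tag -d master

        # Replay local commits on top of the latest SVN revisions
        $ git svn fetch
        $ git svn rebase

        # Mirror the result to github; --force only because the re-clone
        # (and the rebase) rewrote history relative to what github holds
        $ git push --force origin master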

    Read the article

  • What does N years of experience with a language really mean?

    - by marcgg
    I've been looking at job descriptions since I'm graduating soon and looking for a job, and what always comes back - I'm not teaching you anything - is the "N years of experience in this language". It has been discussed in this question that if you work professionally with, let's say, Ruby for 2 years, but during these two years you also did some C# and PHP and were actually coding in Ruby 50% of the time, do you say you have 1 year of experience in Ruby? 2 years?

    Another issue that hasn't been reviewed in the other post is "non-professional experience". I'll give you a personal example: I've been working with Ruby on Rails since 2004, while at school. I did a lot of personal projects and school projects using this technology. I also used Rails in two 6-month internships. Do I have 5 years of Rails experience (2004-now)? Do I have 1 year (2 internships)? Do I have nothing?

    I feel like I don't deserve the credit for 5 years, because in the first years I wasn't working a lot with Rails, but since last year I launched some websites and invested myself a lot in this technology, and just saying 1 year doesn't really reflect how much I know the technology...

    Another example: I learned C++ at school and did 1 big project with it (2-3 months of work and a semester of classes). I never used it in a company, but I'd be able to be productive fairly quickly if I had to work on a C++ project, and I have a good grasp of the concepts. Do I have no experience? 3 months? 6 months? ... something else?

    What I'm really trying to do is to find a way to present my skill set in a way that is compliant with what recruiters expect. I also don't want to end up at an interview that would go something like this...

    Recruiter (finding out the horrible truth): Oh, but you said that you had 2 years of experience with this when you have none! /slaps me in the face/

    Me (in pain): Oh! The irony!

    Recruiter (yelling): Get out of my office /calls security, punches me in the throat/

    Read the article

  • Why won't this hit test fire a second time? wpf

    - by csciguy
    All, I have a main window that contains two custom objects (AnimatedCharacter). These objects are nothing but images. These images might contain transparent portions. One of these objects slightly overlaps the other. There is a listener attached to the main window, as follows:

        private void Window_MouseLeftButtonUp_1(object sender, MouseButtonEventArgs e)
        {
            Point pt = e.GetPosition((UIElement)sender);

            //store off the mouse pt
            hitPointMouse = pt;

            //clear the result list
            hitResultsSubList.Clear();

            EllipseGeometry m_egHitArea = new EllipseGeometry(pt, 1, 1);
            VisualTreeHelper.HitTest(sender as Visual, HitTestFilterFuncNew,
                new HitTestResultCallback(HitTestCallback),
                new GeometryHitTestParameters(m_egHitArea));

            //Check all sub items you have now hit
            if (hitResultsSubList.Count > 0)
            {
                CheckSubHitItems(hitResultsSubList);
            }
        }

    The idea is to filter out only a select group of items (called AnimatedCharacters). The hit test and filters are as follows:

        public HitTestResultBehavior HitTestCallback(HitTestResult htrResult)
        {
            IntersectionDetail idDetail = ((GeometryHitTestResult)htrResult).IntersectionDetail;
            switch (idDetail)
            {
                case IntersectionDetail.FullyContains:
                    return HitTestResultBehavior.Continue;
                case IntersectionDetail.Intersects:
                    return HitTestResultBehavior.Continue;
                case IntersectionDetail.FullyInside:
                    return HitTestResultBehavior.Continue;
                default:
                    return HitTestResultBehavior.Stop;
            }
        }

        public HitTestFilterBehavior HitTestFilterFuncNew(DependencyObject potentialHitTestTarget)
        {
            if (potentialHitTestTarget.GetType() == typeof(AnimatedCharacter))
            {
                hitResultsSubList.Add(potentialHitTestTarget as AnimatedCharacter);
            }
            return HitTestFilterBehavior.Continue;
        }

    This returns me a list (called hitResultsSubList) that I then attempt to process further. I want to take everything in hitResultsSubList and run a hit test on it again. This time, the hit test will be checking alpha levels on the particular AnimatedCharacter object.

        private void CheckSubHitItems(List<DependencyObject> hitResultsSub)
        {
            for (int i = 0; i < hitResultsSub.Count; i++)
            {
                hitResultsList.Clear();
                AnimatedCharacter ac = hitResultsSub[i] as AnimatedCharacter;
                try
                {
                    //DEBUGGER SKIPS THIS NEXT LINE EVERY SINGLE TIME.
                    VisualTreeHelper.HitTest(ac, null,
                        new HitTestResultCallback(hitCallBack),
                        new PointHitTestParameters(hitPointMouse));
                }
                catch (Exception e)
                {
                    System.Diagnostics.Debug.WriteLine(e.StackTrace);
                }
                if (hitResultsList.Count > 0)
                {
                    //do something here
                }
            }
        }

    Here is my problem: the hit test in the second function (CheckSubHitItems) never gets called. There are definitely items (DependencyObjects of type AnimatedCharacter) in hitResultsSub, but no matter what, the second hit test will not fire. I can walk the for loop fine, but when that line is hit, I get the following console statement:

        Step into: Stepping over non-user code 'System.MulticastDelegate.CtorClosed'

    No exceptions are thrown. Any help is appreciated.
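    Two things may be conspiring here - offered as a sketch to try, not a confirmed diagnosis. First, the "Stepping over non-user code" message is just the debugger skipping framework internals, not proof the call never runs. Second, hitPointMouse is in window coordinates, while VisualTreeHelper.HitTest(ac, ...) expects a point in ac's own coordinate space, so the second test can legitimately miss every time. Translating the point first:

        // Convert the stored window-space point into the visual's local
        // space before hit testing that visual directly.
        Point localPt = this.TranslatePoint(hitPointMouse, ac); // 'this' = the Window
        VisualTreeHelper.HitTest(ac, null,
            new HitTestResultCallback(hitCallBack),
            new PointHitTestParameters(localPt));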

    Read the article

  • ActiveMQ 5.2.0 + REST + HTTP POST = java.lang.OutOfMemoryError

    - by Bruce Loth
    First off, I am a newbie when it comes to JMS and ActiveMQ. I have been looking into a messaging solution to serve as middleware for a message producer that will insert XML messages into a queue via HTTP POST. The producer is an existing system written in C++ that cannot be modified (so Java and the C++ API are out). Using the "demo" examples and some trial and error, I have cobbled together a working example of what I want to do (on a Windows box). The web.xml I configured in a test directory under "webapps" specifies that the HTTP POST messages received from the producer are to be handled by the MessageServlet. I also added a line for the test app in "activemq.xml" ('ow' is the test app dir).

    I created a test script to "insert" messages into the queue, which works well. The problem I am running into is that as I continue to insert messages via REST/HTTP POST, the memory consumption and thread count used by ActiveMQ continue to rise (this happens with timely consumers as well as with slow or non-existent consumers). When memory consumption reaches around 250 MB and the thread count exceeds 5,000 (as shown in Windows Task Manager), ActiveMQ crashes and I see this in the log:

        Exception in thread "ActiveMQ Transport Initiator: vm://localhost#3564"
        java.lang.OutOfMemoryError: unable to create new native thread

    It is as if Jetty is spawning a new thread to handle each HTTP POST and the thread never dies. I did look at this page: http://activemq.apache.org/javalangoutofmemory.html and tried its suggestions, but that didn't fix the problem (although I didn't fully understand the implications of the changes either). Does anyone have any ideas? Thanks!

    Bruce Loth

    PS - I included the "test message producer" Python script below for what it is worth. I created batches of 100 messages and continued to run the script manually from the command line while watching the memory consumption and thread count of ActiveMQ in Task Manager.

        def foo():
            import httplib, urllib
            body = "<?xml version='1.0' encoding='UTF-8'?>\n \
                    <ROOT>\n \
                    [snip: xml deleted to save space] </ROOT>"
            headers = {"content-type": "text/xml",
                       "content-length": str(len(body))}
            conn = httplib.HTTPConnection("127.0.0.1:8161")
            conn.request("POST", "/ow/message/RDRCP_Inbox?type=queue", body, headers)
            response = conn.getresponse()
            print response.status, response.reason
            data = response.read()
            conn.close()
        ## end method definition

        ## Begin test code
        count = 0
        while count < 100:  # test with batches of 100 msgs
            count += 1
            foo()
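
    PPS - For completeness, here is a minimal JMS consumer that could be used to drain the queue while testing. This is a sketch only: it assumes the ActiveMQ jar is on the classpath, the broker listens on the default openwire port, and the queue name matches the script above.

        // DrainQueue.java - sketch of a simple queue consumer.
        // Assumes activemq-core on the classpath and the default broker port.
        import javax.jms.*;
        import org.apache.activemq.ActiveMQConnectionFactory;

        public class DrainQueue {
            public static void main(String[] args) throws JMSException {
                ConnectionFactory factory =
                    new ActiveMQConnectionFactory("tcp://127.0.0.1:61616");
                Connection connection = factory.createConnection();
                connection.start();
                Session session =
                    connection.createSession(false, Session.AUTO_ACKNOWLEDGE);
                MessageConsumer consumer =
                    session.createConsumer(session.createQueue("RDRCP_Inbox"));

                // Pull messages until the queue stays empty for one second.
                Message msg;
                while ((msg = consumer.receive(1000)) != null) {
                    if (msg instanceof TextMessage) {
                        System.out.println(((TextMessage) msg).getText());
                    }
                }
                connection.close();
            }
        }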

    Read the article

  • One EntityManager finds the entity, the other does not

    - by Pitelk
    Hi all, I have some very strange behavior in my program. I have two classes (LogIn and CreateGame), and in each I have injected an EntityManager using the annotation:

        @PersistenceContext(unitName="myUnitPU") EntityManager entitymanger;

    At some point I remove an object called "user" from the database, using entitymanger.remove(user) in a method of the LogIn class. The business logic is that a user can host and join games (at the same time), so when removing the user, all the database entries for games the user has created are removed, as are all the entries showing which games the user has joined.

    After that, I call another function which checks whether the user exists, using entitymanger.find(user) in the LogIn class, which, surprisingly enough, finds the user. I then call a method in the CreateGame class which tries to find the user, again using entitymanger.find(user); the entity manager in that class fails to find the user (which is the expected result, since the user has been removed and is no longer in the database).

    So the question is: why does the entity manager in one class find the user (which is wrong) while the other does not? Has anyone ever seen the same problem?

    PS: This "bug" occurs when the user has hosted a game which is joined by another user (let's call him userB), and userB has made a game which is joined by the current user:

        GAME  | HOST  | CLIENTS
        game1 | user  | userB
        game2 | userB | user

    In this case, removing the user deletes game1 and removes the user from game2, so the result is:

        GAME  | HOST  | CLIENTS
        game2 | userB |

    PS2: The beans are EJB 3.0, and the methods are called from a delegate class. The beans in the delegate class are instantiated using the InitialContext.lookup() method. Note that for logging in, creating, and joining games, the appropriate delegate calls the corresponding EJB, which does the transactions. In the case of logout, the delegate calls an EJB to log the user out, but because other things must be done (as described above), this EJB calls other EJBs (again via lookup()) which have methods like removegame(), removeUserFromGame(), etc. After those methods are executed, the user is then logged out. Maybe it has something to do with the fact that the first entity manager is called by a delegate while the second is called from inside an EJB, and that's why one entity manager can see the non-existent user while the other cannot? Also, all the methods have TRANSACTIONTYPE.REQUIRED. Thank you in advance.
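
    PPS: One possible explanation worth ruling out: each EntityManager has its own first-level cache, so find() can be answered from the persistence context rather than from the database, and a manager that already holds a managed copy of the user may keep "finding" it after another manager has deleted the row. Below is a standalone sketch of that effect; it assumes a "myUnitPU" persistence unit and a hypothetical User entity with a Long id, and it is an illustration only, not the container-managed setup described above.

        // StaleFindSketch.java - shows an EntityManager answering find()
        // from its first-level cache after the row was deleted elsewhere.
        // Assumes a "myUnitPU" persistence unit and a User entity (hypothetical).
        import javax.persistence.EntityManager;
        import javax.persistence.EntityManagerFactory;
        import javax.persistence.Persistence;

        public class StaleFindSketch {
            public static void main(String[] args) {
                EntityManagerFactory emf =
                    Persistence.createEntityManagerFactory("myUnitPU");
                EntityManager em1 = emf.createEntityManager();
                EntityManager em2 = emf.createEntityManager();

                // em2 loads the user first, caching a managed copy.
                User cached = em2.find(User.class, 1L);

                // em1 deletes the row in its own transaction.
                em1.getTransaction().begin();
                em1.remove(em1.find(User.class, 1L));
                em1.getTransaction().commit();

                // em2 still "finds" the user: the result comes from its
                // persistence context, not the database.
                System.out.println(em2.find(User.class, 1L) != null); // true

                // Clearing the context forces a fresh database read.
                em2.clear();
                System.out.println(em2.find(User.class, 1L) != null); // false
            }
        }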

    Read the article

  • MySQL table organization and optimization (Rails)

    - by aguynamedloren
    I've been learning Ruby on Rails over the past few months with no prior programming experience. Lately, I've been thinking about database optimization and table organization. I know there are great books on the subject, but I typically learn by example / as I go.

    Here's a hypothetical situation: let's say I am building a social network for a niche community with 250,000 members (users). The users have the ability to attend events, and let's say there are 50,000 past/present/future events. Much like Facebook events, a user can attend any number of events and an event can have any number of attendees.

    In the database, there would be a table for users and a table for events, and somehow I would have to create an association between them. I could create an "events" column in the users table such that each user row contains a hash of event IDs, or an "attendees" column in the events table such that each event row contains a hash of user IDs. Neither of these solutions seems ideal, however. On a user's profile page, I want to display the list of events they are associated with, which would require scanning the 50,000 event rows for the user ID of said user if I put an "attendees" column in the events table. Likewise, on an event page, I want to display a list of attendees, which would require scanning the 250,000 user rows for the event ID of said event if I put an "events" column in the users table. Option 3 would be to create a third table that contains the attendee information for each and every event - but I don't see how this would solve any problems.

    Are these non-issues? Rails makes accessing all of this information easy, but I guess I'm worried about scale. It is entirely possible that I am under-estimating the speed and processing power of modern databases/servers. How long would it take to scan 250,000 user rows for specific event IDs - 10 ms? 100 ms? 1,000 ms? I guess that's not that bad. Am I just over-thinking this?
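
    For concreteness, that third table is the classic many-to-many join table (in Rails terms, roughly an Attendance model used with has_many :through), and with indexes on its two foreign keys neither lookup direction needs a full scan. Below is a rough schema sketch in plain JDBC; it assumes the H2 in-memory driver on the classpath, and the table and column names are illustrative, not taken from the post.

        // AttendanceSketch.java - join-table sketch using plain JDBC.
        // Assumes the H2 in-memory driver on the classpath (hypothetical setup).
        import java.sql.*;

        public class AttendanceSketch {
            public static void main(String[] args) throws SQLException {
                Connection c = DriverManager.getConnection("jdbc:h2:mem:demo");
                Statement s = c.createStatement();
                s.execute("CREATE TABLE users  (id INT PRIMARY KEY, name VARCHAR(64))");
                s.execute("CREATE TABLE events (id INT PRIMARY KEY, title VARCHAR(64))");
                // The third table: one row per (user, event) pair. The primary
                // key covers user_id lookups; the extra index covers event_id.
                s.execute("CREATE TABLE attendances (" +
                          " user_id INT NOT NULL REFERENCES users(id)," +
                          " event_id INT NOT NULL REFERENCES events(id)," +
                          " PRIMARY KEY (user_id, event_id))");
                s.execute("CREATE INDEX idx_attendances_event ON attendances(event_id)");

                s.execute("INSERT INTO users VALUES (1, 'loren')");
                s.execute("INSERT INTO events VALUES (10, 'meetup')");
                s.execute("INSERT INTO attendances VALUES (1, 10)");

                // "Events for a user" is an index lookup plus a join, not a scan
                // of 50,000 event rows; "attendees for an event" works the same
                // way in the other direction.
                PreparedStatement q = c.prepareStatement(
                    "SELECT e.title FROM events e " +
                    "JOIN attendances a ON a.event_id = e.id WHERE a.user_id = ?");
                q.setInt(1, 1);
                ResultSet rs = q.executeQuery();
                while (rs.next()) System.out.println(rs.getString(1));
                c.close();
            }
        }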

    Read the article
