Search Results

Search found 2347 results on 94 pages for 'slightly frustrated'.

  • Why does Python's math.factorial not play nice with threads?

    - by W1N9Zr0
    Why does math.factorial act so weird in a thread? Here is an example; it creates three threads: a thread that just sleeps for a while, a thread that increments an int for a while, and a thread that runs math.factorial on a large number. It calls start() on each thread, then join() with a timeout. The sleep and spin threads work as expected: they return from start() right away, and then sit in join() until the timeout. The factorial thread, on the other hand, does not return from start() until it runs to the end!

        import sys
        from threading import Thread
        from time import sleep, time
        from math import factorial

        # Helper class that stores a start time to compare to
        class timed_thread(Thread):
            def __init__(self, time_start):
                Thread.__init__(self)
                self.time_start = time_start

        # Thread that just executes sleep()
        class sleep_thread(timed_thread):
            def run(self):
                sleep(15)
                print "st DONE:\t%f" % (time() - time_start)

        # Thread that increments a number for a while
        class spin_thread(timed_thread):
            def run(self):
                x = 1
                while x < 120000000:
                    x += 1
                print "sp DONE:\t%f" % (time() - time_start)

        # Thread that calls math.factorial with a large number
        class factorial_thread(timed_thread):
            def run(self):
                factorial(50000)
                print "ft DONE:\t%f" % (time() - time_start)

        # the tests
        print
        print "sleep_thread test"
        time_start = time()
        st = sleep_thread(time_start)
        st.start()
        print "st.start:\t%f" % (time() - time_start)
        st.join(2)
        print "st.join:\t%f" % (time() - time_start)
        print "sleep alive:\t%r" % st.isAlive()

        print
        print "spin_thread test"
        time_start = time()
        sp = spin_thread(time_start)
        sp.start()
        print "sp.start:\t%f" % (time() - time_start)
        sp.join(2)
        print "sp.join:\t%f" % (time() - time_start)
        print "sp alive:\t%r" % sp.isAlive()

        print
        print "factorial_thread test"
        time_start = time()
        ft = factorial_thread(time_start)
        ft.start()
        print "ft.start:\t%f" % (time() - time_start)
        ft.join(2)
        print "ft.join:\t%f" % (time() - time_start)
        print "ft alive:\t%r" % ft.isAlive()

    And here is the output on Python 2.6.5 on CentOS x64:

        sleep_thread test
        st.start:       0.000675
        st.join:        2.006963
        sleep alive:    True

        spin_thread test
        sp.start:       0.000595
        sp.join:        2.010066
        sp alive:       True

        factorial_thread test
        ft DONE:        4.475453
        ft.start:       4.475589
        ft.join:        4.475615
        ft alive:       False
        st DONE:        10.994519
        sp DONE:        12.054668

    I've tried this on Python 2.6.5 on CentOS x64 and on 2.7.2 on Windows x86, and on both of them the factorial thread does not return from start() until the thread is done executing. I've also tried this with PyPy 1.8.0 on Windows x86, and there the result is slightly different: start() does return immediately, but then the join() doesn't time out!

        sleep_thread test
        st.start:       0.001000
        st.join:        2.001000
        sleep alive:    True

        spin_thread test
        sp.start:       0.000000
        sp DONE:        0.197000
        sp.join:        0.236000
        sp alive:       False

        factorial_thread test
        ft.start:       0.032000
        ft DONE:        9.011000
        ft.join:        9.012000
        ft alive:       False
        st DONE:        12.763000
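
    A plausible explanation (my reading, not confirmed in the question itself) is CPython's GIL: math.factorial(50000) is a single long-running C-level call, and CPython only switches threads between bytecodes, so nothing else, including the parent thread blocked in start() or join(), gets scheduled until the call returns. A minimal sketch of a workaround under that assumption: compute the product in pure Python, whose many small bytecodes give the interpreter switch points. The function name is mine, for illustration only.

        # Hedged sketch (Python 2, to match the code above): assumes the stall
        # is the GIL being held for the whole C-level math.factorial call.
        def cooperative_factorial(n):
            result = 1
            for i in xrange(2, n + 1):  # each iteration is a thread-switch point
                result *= i
            return result

        print cooperative_factorial(5)  # 120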

  • How to limit results in a SharePoint XSL query

    - by David
    Hello all, I am creating a SharePoint site that we will use to report issues with trucks used in our business. Linked to the list I have created will be a page that displays an overview of the trucks, where a little truck icon shows each truck's current status: green means the truck is okay (no open issues); red means the truck has an open issue with status "Undrivable"; orange means there are two open issues that require the user to look further into the truck before using it; and finally a gray truck means there is a newly created issue that has not been looked into yet (not sure if the truck is drivable or not). I have managed to create the "dashboard" and, with my limited XSL/XPath knowledge, have been able to add a truck and replicate the description above, but... in my test I created four issues. If, for example, three of them are changed to status Closed and one is left as Undrivable, I get four icons on the page: three green trucks and the last one red. So in theory it works, but I obviously only want to see the last truck, one truck; I am not interested in seeing the others.

        <xsl:template name="dvt_1.rowview">
          <xsl:variable name="CountReport" select="count(/dsQueryResponse/Rows/Row[@Highloader='GGEU12' and @Status!='Closed'])" />
          <xsl:variable name="MoreThan" select="$CountReport &gt; 1" />
          <xsl:variable name="NoReports" select="$CountReport = 0" />
          <xsl:variable name="Closed" select="@Highloader='GGEU12' and @Status='Closed'" />
          <xsl:choose>
            <xsl:when test="$MoreThan">
              <div class="ms-vb"><img title='More than one report exist!' border='0' alt='In Progress' src='highloader/Library/hl-orange.png' /></div>
            </xsl:when>
            <xsl:otherwise>
              <div class="ms-vb"><xsl:value-of disable-output-escaping="yes" select="@Icon" /></div>
            </xsl:otherwise>
          </xsl:choose>
        </xsl:template>

    My hope is that someone with slightly more knowledge can find the last piece of the puzzle for me! Thanks for reading, and please ask questions to fill any gap I left above. David

  • Calculating a Sample Covariance Matrix for Groups with plyr

    - by John A. Ramey
    I'm going to use the sample code from http://gettinggeneticsdone.blogspot.com/2009/11/split-apply-and-combine-in-r-using-plyr.html for this example. So, first, let's copy their example data:

        mydata = data.frame(X1 = rnorm(30),
                            X2 = rnorm(30, 5, 2),
                            SNP1 = c(rep("AA", 10), rep("Aa", 10), rep("aa", 10)),
                            SNP2 = c(rep("BB", 10), rep("Bb", 10), rep("bb", 10)))

    I am going to ignore SNP2 in this example and just pretend the values in SNP1 denote group membership. So then, I may want some summary statistics about each group in SNP1: "AA", "Aa", "aa". If I want to calculate the means for each variable, it makes sense (modifying their code slightly) to use:

        > ddply(mydata, c("SNP1"), function(df) data.frame(meanX1 = mean(df$X1), meanX2 = mean(df$X2)))
          SNP1      meanX1   meanX2
        1   aa  0.05178028 4.812302
        2   Aa  0.30586206 4.820739
        3   AA -0.26862500 4.856006

    But what if I want the sample covariance matrix for each group? Ideally, I would like a 3D array, where I have the covariance matrix for each group and the third dimension denotes the corresponding group. I tried a modified version of the previous code and got the following results, which have convinced me that I'm doing something wrong:

        > daply(mydata, c("SNP1"), function(df) cov(cbind(df$X1, df$X2)))
        , , = 1

        SNP1          1          2
          aa  1.4961210 -0.9496134
          Aa  0.8833190 -0.1640711
          AA  0.9942357 -0.9955837

        , , = 2

        SNP1          1        2
          aa -0.9496134 2.881515
          Aa -0.1640711 2.466105
          AA -0.9955837 4.938320

    I was thinking that the dim() of the 3rd dimension would be 3, but instead it is 2. Really, this is a sliced-up version of the covariance matrix for each group. If we manually compute the sample covariance matrix for aa, we get:

                   [,1]       [,2]
        [1,]  1.4961210 -0.9496134
        [2,] -0.9496134  2.8815146

    Using plyr, the following gives me what I want in list() form:

        > dlply(mydata, c("SNP1"), function(df) cov(cbind(df$X1, df$X2)))
        $aa
                   [,1]       [,2]
        [1,]  1.4961210 -0.9496134
        [2,] -0.9496134  2.8815146

        $Aa
                   [,1]       [,2]
        [1,]  0.8833190 -0.1640711
        [2,] -0.1640711  2.4661046

        $AA
                   [,1]       [,2]
        [1,]  0.9942357 -0.9955837
        [2,] -0.9955837  4.9383196

        attr(,"split_type")
        [1] "data.frame"
        attr(,"split_labels")
          SNP1
        1   aa
        2   Aa
        3   AA

    But like I said earlier, I would really like this in a 3D array. Any thoughts on where I went wrong with daply(), or suggestions? Of course, I could typecast the list from dlply() to a 3D array, but I'd rather not do this because I will be repeating this process many times in a simulation. As a side note, I found one method (http://www.mail-archive.com/[email protected]/msg86328.html) that provides the sample covariance matrix for each group, but the outputted object is bloated. Thanks in advance.
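
    As a cross-check on the shape being asked for, here is a quick NumPy sketch (my own illustration, not plyr): one 2x2 covariance matrix per group, stacked along the first axis into a (groups, 2, 2) array.

        import numpy as np

        # Hypothetical stand-in for mydata: 30 rows of (X1, X2) plus a group label.
        rng = np.random.default_rng(0)
        X = np.column_stack([rng.normal(size=30), rng.normal(5, 2, size=30)])
        groups = np.repeat(["AA", "Aa", "aa"], 10)

        # One 2x2 covariance matrix per group, stacked into shape (3, 2, 2):
        # the 3-D layout the question wants daply() to produce.
        labels = ["AA", "Aa", "aa"]
        cov3d = np.stack([np.cov(X[groups == g], rowvar=False) for g in labels])
        print(cov3d.shape)  # (3, 2, 2)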

  • On Mac, two jpg's whose color should match do not

    - by Tim
    So I'm designing a myspace page, and I have two images: one is a repeating bg image, and the other is an image that loads on a layer above it and acts as a header/masthead. For some reason, on Macs only, and only in the browser (tested in Safari and FF), the masthead renders slightly darker than the repeating bg image, creating a color inconsistency. The block that extends up behind the album artwork is a solid box made with CSS which blocks some of myspace's standard content. It actually renders in the proper color, blending in well with the bottom portion of this image (which is the repeating part of the background), but becomes noticeable as it extends up over the masthead, which is darker than it should be. Both images were created in GIMP and saved as jpg's using, as far as I can tell, the same settings. Here's the pic of what is going on: Screenshot - Click Me!!! Here is the code which controls this part of the design:

        <div class="masthead"><span></span></div>

        .masthead {
            width: 1600px;
            height: 1940px;
            background-image: url(http://www.sourtricks.com/myspace/bdww/myspace_bg09.jpg);
            position: absolute;
            margin-left: -800px;
            left: 50%;
            top: 0px;
            z-index: -1;
            overflow-x: hidden;
        }

        body.bodyContent {
            margin: 0 !important;
            padding: 0 !important;
            background-color: 000000 !important;
            font-size: 1px;
            background-image: url(http://www.sourtricks.com/myspace/bdww/bg_repeat05.jpg);
            background-position: center bottom;
            _background-position: right bottom;
            background-attachment: scroll;
            background-repeat: repeat-y;
            z-index: -2;
            overflow-x: hidden;
            font-family: arial, sans-serif !important;
        }

    Any help would be much, much, much appreciated. Thanks for your time, Tim

  • Performance of looping over an Unboxed array in Haskell

    - by Joey Adams
    First of all, it's great. However, I came across a situation where my benchmarks turned up weird results. I am new to Haskell, and this is the first time I've gotten my hands dirty with mutable arrays and monads. The code below is based on this example. I wrote a generic monadic for function that takes numbers and a step function rather than a range (like forM_ does). I compared using my generic for function (Loop A) against embedding an equivalent recursive function (Loop B). Having Loop A is noticeably faster than having Loop B. Weirder, having both Loop A and B together is faster than having Loop B by itself (but slightly slower than Loop A by itself). Some possible explanations I can think of for the discrepancies (note that these are just guesses):

    - Something I haven't learned yet about how Haskell extracts results from monadic functions.
    - Loop B faults the array in a less cache-efficient manner than Loop A. Why?
    - I made a dumb mistake; Loop A and Loop B are actually different.

    Note that in all 3 cases (having either or both of Loop A and Loop B), the program produces the same output. Here is the code. I tested it with ghc -O2 for.hs using GHC version 6.10.4.

        import Control.Monad
        import Control.Monad.ST
        import Data.Array.IArray
        import Data.Array.MArray
        import Data.Array.ST
        import Data.Array.Unboxed

        for :: (Num a, Ord a, Monad m) => a -> a -> (a -> a) -> (a -> m b) -> m ()
        for start end step f = loop start
            where
                loop i
                    | i <= end  = do f i
                                     loop (step i)
                    | otherwise = return ()

        primesToNA :: Int -> UArray Int Bool
        primesToNA n = runSTUArray $ do
            a <- newArray (2,n) True :: ST s (STUArray s Int Bool)
            let sr = floor . (sqrt::Double->Double) . fromIntegral $ n+1

            -- Loop A
            for 4 n (+ 2) $ \j ->
                writeArray a j False

            -- Loop B
            let f i | i <= n = do writeArray a i False
                                  f (i+2)
                    | otherwise = return ()
             in f 4

            forM_ [3,5..sr] $ \i -> do
                si <- readArray a i
                when si $
                    forM_ [i*i,i*i+i+i..n] $ \j ->
                        writeArray a j False

            return a

        primesTo :: Int -> [Int]
        primesTo n = [i | (i,p) <- assocs . primesToNA $ n, p]

        main = print $ primesTo 30000000

  • Text Obfuscation using base64_encode()

    - by user271619
    I'm playing around with encrypt/decrypt coding in PHP. Interesting stuff! However, I'm coming across some issues involving what the text gets encrypted into. Here are two functions that encrypt and decrypt a string. They use an encryption key, which I set to something obscure. I actually got this from a PHP book. I modified it slightly, but not in a way that changes its main goal. I created a small example below that anyone can test. But I notice that certain characters show up in the "encrypted" string, characters like "=" and "+". Sometimes I pass this encrypted string via the URL, and it may not quite make it to my receiving scripts intact. I'm guessing the browser does something to the string if certain characters are seen; I'm really only guessing. Is there another function I can use to ensure the browser doesn't touch the string? Or does anyone know enough about PHP's base64_encode() to disallow certain characters from being used? I'm really not expecting the latter to be possible, but I'm sure there's a work-around. Enjoy the code, whomever needs it!

        define('ENCRYPTION_KEY', "sjjx6a");

        function encrypt($string) {
            $result = '';
            for ($i = 0; $i < strlen($string); $i++) {
                $char = substr($string, $i, 1);
                $keychar = substr(ENCRYPTION_KEY, ($i % strlen(ENCRYPTION_KEY)) - 1, 1);
                $char = chr(ord($char) + ord($keychar));
                $result .= $char;
            }
            return base64_encode($result) . "/" . rand();
        }

        function decrypt($string) {
            $exploded = explode("/", $string);
            $string = $exploded[0];
            $result = '';
            $string = base64_decode($string);
            for ($i = 0; $i < strlen($string); $i++) {
                $char = substr($string, $i, 1);
                $keychar = substr(ENCRYPTION_KEY, ($i % strlen(ENCRYPTION_KEY)) - 1, 1);
                $char = chr(ord($char) - ord($keychar));
                $result .= $char;
            }
            return $result;
        }

        echo $encrypted = encrypt("reaplussign.jpg");
        echo "<br>";
        echo decrypt($encrypted);
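
    The usual work-around here (my note, not from the post) is a URL-safe base64 variant: '+' and '/' collide with URL syntax, so they are swapped for '-' and '_', and the '=' padding is stripped or percent-encoded. A quick Python sketch of the transform, since the idea is language-agnostic:

        import base64

        # URL-safe base64 replaces the two characters that collide with URLs
        # ('+' -> '-', '/' -> '_'); padding is stripped for cleanliness.
        token = base64.urlsafe_b64encode(b"reaplussign.jpg").rstrip(b"=")
        print(token)

        # Decoding: restore padding to a multiple of 4, then decode.
        padded = token + b"=" * (-len(token) % 4)
        print(base64.urlsafe_b64decode(padded))  # b'reaplussign.jpg'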

  • Why is this OpenGL ES code slow on iPhone?

    - by f3r3nc
    I've slightly modified the iPhone SDK's GLSprite example while learning OpenGL ES, and it turns out to be quite slow. It's slow even in the simulator (and worst on the hardware), so I must be doing something wrong, since it's only 400 textured triangles.

        const GLfloat spriteVertices[] = {
            0.0f,   0.0f,
            100.0f, 0.0f,
            0.0f,   100.0f,
            100.0f, 100.0f
        };

        const GLshort spriteTexcoords[] = {
            0,0,
            1,0,
            0,1,
            1,1
        };

        - (void)setupView {
            glViewport(0, 0, backingWidth, backingHeight);
            glMatrixMode(GL_PROJECTION);
            glLoadIdentity();
            glOrthof(0.0f, backingWidth, backingHeight, 0.0f, -10.0f, 10.0f);
            glMatrixMode(GL_MODELVIEW);
            glClearColor(0.3f, 0.0f, 0.0f, 1.0f);

            glVertexPointer(2, GL_FLOAT, 0, spriteVertices);
            glEnableClientState(GL_VERTEX_ARRAY);
            glTexCoordPointer(2, GL_SHORT, 0, spriteTexcoords);
            glEnableClientState(GL_TEXTURE_COORD_ARRAY);

            // sprite data is preloaded. 512x512 rgba8888
            glGenTextures(1, &spriteTexture);
            glBindTexture(GL_TEXTURE_2D, spriteTexture);
            glTexImage2D(GL_TEXTURE_2D, 0, GL_RGBA, width, height, 0, GL_RGBA, GL_UNSIGNED_BYTE, spriteData);
            free(spriteData);

            glTexParameteri(GL_TEXTURE_2D, GL_TEXTURE_MIN_FILTER, GL_LINEAR);
            glEnable(GL_TEXTURE_2D);
            glBlendFunc(GL_ONE, GL_ONE_MINUS_SRC_ALPHA);
            glEnable(GL_BLEND);
        }

        - (void)drawView {
            ..
            glClear(GL_COLOR_BUFFER_BIT);
            glLoadIdentity();
            glTranslatef(tx-100, ty-100, 10);
            for (int i = 0; i < 200; i++) {
                glTranslatef(1, 1, 0);
                glDrawArrays(GL_TRIANGLE_STRIP, 0, 4);
            }
            ..
        }

    drawView is called every time the screen is touched or the finger on the screen is moved, and tx,ty are set to the x,y coordinates where that touch happened. I've also tried using GLBuffer, where the translation was pre-generated and there was only one DrawArray, but it gave the same performance (~4 FPS).

    === EDIT ===

    Meanwhile I've modified this so that much smaller quads are used (sized 34x20) and much less overlapping is done. There are ~400 quads (800 triangles) spread over the whole screen. The texture is a 512x512 RGBA_8888 atlas, and the texture coordinates are floats. The code is very ugly in terms of API efficiency: there are two MatrixMode changes along with two loads and two translations, then a drawArrays for a triangle strip (quad). Now this produces ~45 FPS.

  • iphone quartz drawing 2 lines on top of each other causes worm effect

    - by Leonard
    I'm using Quartz 2D on the iPhone to display a route on a map. The route is colored according to temperature. Because some streets are colored yellow, I am using a slightly thicker black line under the route line to create a border effect, so that yellow parts of the route are spottable on yellow streets. But even though the black line is as thick as the route line, the whole route looks like a worm (very ugly). I thought this was because I was drawing lines from waypoint to waypoint, instead of using the last waypoint as the next starting waypoint; that way, if a couple of waypoints are missing, the route will still have no cuts. What do I need to do to display both lines without a worm effect?

        -(void) drawRect:(CGRect) rect
        {
            CSRouteAnnotation* routeAnnotation = (CSRouteAnnotation*)self.routeView.annotation;

            // only draw our lines if we're not in the middle of a transition and we
            // actually have some points to draw.
            if(!self.hidden && nil != routeAnnotation.points && routeAnnotation.points.count > 0)
            {
                CGContextRef context = UIGraphicsGetCurrentContext();

                Waypoint* fromWaypoint = [[Waypoint alloc] initWithDictionary:[routeAnnotation.points objectAtIndex:0]];
                Waypoint* toWaypoint;

                for(int idx = 1; idx < routeAnnotation.points.count; idx++)
                {
                    toWaypoint = [[Waypoint alloc] initWithDictionary:[routeAnnotation.points objectAtIndex:idx]];

                    CLLocation* fromLocation = [fromWaypoint getLocation];
                    CGPoint fromPoint = [self.routeView.mapView convertCoordinate:fromLocation.coordinate toPointToView:self];
                    CLLocation* toLocation = [toWaypoint getLocation];
                    CGPoint toPoint = [self.routeView.mapView convertCoordinate:toLocation.coordinate toPointToView:self];

                    routeAnnotation.lineColor = [fromWaypoint.weather getTemperatureColor];

                    CGContextBeginPath(context);
                    CGContextSetLineWidth(context, 3.0);
                    CGContextSetStrokeColorWithColor(context, [UIColor blackColor].CGColor);
                    CGContextMoveToPoint(context, fromPoint.x, fromPoint.y);
                    CGContextAddLineToPoint(context, toPoint.x, toPoint.y);
                    CGContextStrokePath(context);
                    CGContextClosePath(context);

                    CGContextBeginPath(context);
                    CGContextSetLineWidth(context, 3.0);
                    CGContextSetStrokeColorWithColor(context, routeAnnotation.lineColor.CGColor);
                    CGContextMoveToPoint(context, fromPoint.x, fromPoint.y);
                    CGContextAddLineToPoint(context, toPoint.x, toPoint.y);
                    CGContextStrokePath(context);
                    CGContextClosePath(context);

                    fromWaypoint = toWaypoint;
                }

                [fromWaypoint release];
                [toWaypoint release];
            }
        }

    Also, I get a <Error>: CGContextClosePath: no current point. error, which I think is bullshit. Please hint me! :)

  • JavaScript + Maths: Image zoom with CSS3 Transforms, How to set Origin? (with example)

    - by Sunday Ironfoot
    My math skills really suck! I'm trying to implement an image zoom effect, a bit like how zooming works in Google Maps, but with a grid of fixed-position images. I've uploaded an example of what I have so far here: http://www.dominicpettifer.co.uk/Files/MosaicZoom.html (uses CSS3 transforms, so it only works with Firefox, Opera, Chrome or Safari). Use your mouse wheel to zoom in/out. The HTML source is basically an outer div with an inner div, and that inner div contains 16 images arranged using absolute positioning. It's going to be a photo mosaic, basically. I've got the zoom bit working using CSS3 transforms:

        $(this).find('div').css('-moz-transform', 'scale(' + scale + ')');

    However, I'm relying on the mouse X/Y position on the outer div to zoom in on where the mouse cursor is, similar to how Google Maps functions. The problem is that if you zoom right in on an image, move the cursor to the bottom/left corner, and zoom again, instead of zooming to the bottom/left corner of the image it zooms to the bottom/left of the entire mosaic. This has the effect of appearing to jump about the mosaic as you zoom in closer while moving the mouse around, even slightly. That's basically the problem: I want the zoom to work exactly like Google Maps, where it zooms exactly to where your mouse cursor is, but I can't get my head around the math to calculate the transform-origin X/Y values correctly. Please help; I've been stuck on this for 3 days now. Here is the full code listing for the mouse wheel event:

        var scale = 1;
        $("#mosaicContainer").mousewheel(function(e, delta) {
            if (delta > 0) {
                scale += 1;
            } else {
                scale -= 1;
            }
            scale = scale < 1 ? 1 : (scale > 40 ? 40 : scale);

            var x = e.pageX - $(this).offset().left;
            var y = e.pageY - $(this).offset().top;

            $(this).find('div').css('-moz-transform', 'scale(' + scale + ')')
                               .css('-moz-transform-origin', x + 'px ' + y + 'px');
            return false;
        });
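
    For what it's worth, the underlying math is language-agnostic; what follows is my sketch of the standard derivation, not something from the post. With a transform screen = translate + scale * content, the content point under the cursor is w = (cursor - translate) / scale; keeping w under the cursor at the new scale means solving for a new translate. In Python:

        # Sketch of zoom-toward-cursor math; assumes the mapping
        # screen = translate + scale * content, applied per axis.
        def zoom_about_cursor(cursor, translate, old_scale, new_scale):
            w = (cursor - translate) / old_scale   # content point under the cursor
            return cursor - new_scale * w          # new translate keeping w in place

        # Example: cursor at 150 px, current translate 0, zooming 1x -> 2x
        print(zoom_about_cursor(150.0, 0.0, 1.0, 2.0))  # -150.0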

  • Strange behaviour when posting CGEvent to PSN

    - by Ben Packard
    If I set up a loop that posts some keyboard events to a PSN, I find that it works fine except when first launched. The event only seems to post when I do something with the mouse manually, even just moving it slightly. Here are the details, if they help. An external application has a list box of text lines, which I am reading by posting copy commands (and checking the pasteboard). Unfortunately this is my only way to get this text. Sometimes the application pulls focus away from the list, which I can detect. When this happens, the most reliable way to return focus is to send a mouse event to click on a text field directly above the list, then send a 'tab' keyboard event to shift the focus onto the list. So at launch, the loop runs fine, scrolling down the list and copying the text. When focus is shifted away, it is detected fine, and the events are sent to move focus back to the list. But nothing seems to happen. The loop continues detecting that focus has changed, but the events only work once I move the mouse, or even just use the scroll wheel. Strange. Once this has happened the first time, it works fine: each time focus moves, the PSN events switch it back without me having to do anything at all. Here's the code that runs in the loop, verified as working:

        // copy to pasteboard - CMD-C
        e3 = CGEventCreateKeyboardEvent(NULL, (CGKeyCode)8, true);
        CGEventSetFlags(e3, kCGEventFlagMaskCommand);
        CGEventPostToPSN(&psn, e3);
        CFRelease(e3);

        e4 = CGEventCreateKeyboardEvent(NULL, (CGKeyCode)8, false);
        CGEventPostToPSN(&psn, e4);
        CFRelease(e4);

        // move cursor down
        e1 = CGEventCreateKeyboardEvent(NULL, (CGKeyCode)125, true);
        CGEventPostToPSN(&psn, e1);
        CFRelease(e1);

        e2 = CGEventCreateKeyboardEvent(NULL, (CGKeyCode)125, false);
        CGEventPostToPSN(&psn, e2);
        CFRelease(e2);

    And here's where I switch focus, also working (except when first required):

        // click in text input box - point is derived earlier
        e6 = CGEventCreateMouseEvent(NULL, kCGEventLeftMouseDown, point, 0);
        CGEventPostToPSN(&psn, e6);
        CFRelease(e6);

        e7 = CGEventCreateMouseEvent(NULL, kCGEventLeftMouseUp, point, 0);
        CGEventPostToPSN(&psn, e7);
        CFRelease(e7);

        // press tab key to move to chat log table
        CGEventRef e = CGEventCreateKeyboardEvent(NULL, (CGKeyCode)48, true);
        //CGEventPost(kCGSessionEventTap, e);
        CGEventPostToPSN(&psn, e);
        CFRelease(e);

        CGEventRef e11 = CGEventCreateKeyboardEvent(NULL, (CGKeyCode)48, false);
        CGEventPostToPSN(&psn, e11);
        CFRelease(e11);

  • generated service mock: everything but RhinoMocks fails?

    - by hko
    I have the "quest" to search for the next Mocking Framework for my company, and basically it's down to NSubstitute (simplest syntax, but no strict mocks), FakeItEasy(best reviews, Roy Osherove bonus, and slightly better lib support than NSubstitute), Moq (best "other libs support", biggest featureset, downside: mock.Object). We definitely want to move on from RhinoMocks, e.g. because of the unusefull interactiontest error messages (it should tell me what the parameter was instead, when a verification fails). So I was pretty surprised the other day (that was yesterday) when I found out RhinoMocks could do a thing where every other mock framework fails at: Mocking an autogenerated SomethingService (a typical VS autogenerated service with a default construtor in a partial class). Please don't argue about the design.. I intend to write lightweight integration tests (and some unit tests), and I can't mess around with the service, the product is installed on too many customers system. See this code: // here the NSubstitute and FakeItEasy equivalents throw an exception.. see below TicketStoreService fakeTicketStoreService = MockRepository.GenerateMock<TicketStoreService>(); fakeTicketStoreService.Expect(service => service.DoSomething(Arg.Is(new Guid())).Return(new Guid()); fakeTicketStoreService.DoSomething(Arg.Is(new Guid())); fakeTicketStoreService.VerifyAllExpectations(); Note that DoSomething is a non-virtual methodcall in an autogenerated class. So it shouldn't work, according to common knowledge. But it does. Problem is that it's the only (non commercial) framework that can do this: Rhino.Mocks works, and verification works too FakeItEasy says it doesn't find a default constructor (probably just wrong exception message): No default constructor was found on the type SomeNamespace.TicketStoreService Moq gives something sane and understandable: Invalid setup on a non-virtual (overridable in VB) member: service=> service.DoSomething Nsubstitute gives a message System.NotSupportedException: Cannot serialize member System.ComponentModel.Component.Site of type System.ComponentModel.ISite because it is an interface. I'm really wondering what's going on here with the frameworks, except Moq. The "fancy new" frameworks seem to have an initial perf hit too, probably preparing some Type cache and serializing stuff, whilst RhinoMocks somehow manages to create a very "slim" mock without recursion. I have to admit I didn't like RhinoMocks very well, but here it shines.. unfortunately. So, is there a way to get that to work with newer (non-commercial!) mocking frameworks, or somehow get a sane error message out of Rhino.Mocks? And why can Rhino.Mocks achieve this, when clearly every Mocking framework states it can only work with virtual methods when given a concrete class? Let's not derail the discussion by talking about alternative approaches like Extract&Override or runtime-proxy Mocking frameworks like JustMock/TypeMock/Moles or the new Fakes framework, I know these, but that would be less ideal solutions, for reasons beyond this topic. Any help appreciated..

  • Is there any alternative to obfuscation to make it harder to get any string in javascript?

    - by MarceloRamires
    I use DropBox, and I've had some trouble reaching my files from other computers: I don't always want to log in to anything when I'm on a public computer, but I like being able to reach my stuff from wherever I am. So I've made a simple little application that, when put in the public folder, run, and given the right UID, creates (still in your public folder) an HTML page of all the content in the folder (including subfolders) as a tree of links. But I didn't risk loading it anywhere, since there are slightly private things in there (yes, I know that the folder's name is "PUBLIC"). So I've come up with the idea of making it a simple login page: given the right password, the rest of the page should load. Brilliant! But how? If I did this by redirecting to another HTML page in the same folder, I'd still put the HTML link in the web history and the "URLs accessed" history of the administrator, so I should generate it in the same page. I've done it. Currently the page is a textbox and a button, and only if you type in the right password (asked for by the generator) does the rest of the page load. The flaw is that everything (password, URLs) is easily reachable through the source code. Now, assuming I only want to keep silly people from getting it all too easily, not make a bulletproof, all-content-holding, NSA-certified website, I thought about some ways to make this information a bit harder to get. As you may have already figured, I use a StreamWriter to write a .HTM file (head, loop through links, bottom), so it's extremely configurable, and I can come up with pretty messy-but-working C# code, though my JavaScript knowledge is not that good. Public links in DropBox look like this: http://dl.dropbox.com/u/3045472/img.png Summarizing: how do I hide stuff (MAINLY the password, of course) in my source code so that no bumb-a-joe who can read, use a computer, and press CTRL+U can reach my stuff too easily?

    PS: It's not that personal. If someone REALLY wants it, it could never be 100% protected, and if it was that important, I wouldn't put it in the public folder. Also, if the dude really wants to get it that hard, he deserves it.
    PS2: "Use the ultra-3000'tron obfuscator!!11" is not a real answer, since my JavaScript is GENERATED by my C# program.
    PS3: I don't want other solutions like "use a server-side application and host it somewhere to redirect and bla bla" or "compress the links into a .RAR file and put a password on it", since I'm doing this ALSO to learn, and I want the thrill of it =)
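
    One pattern that fits the "harder than view-source" goal (my suggestion, not something from the post): don't put the password or the links in the generated page at all. Store only ciphertext, derive the key from whatever the visitor types, and a wrong password simply decodes to garbage. A toy Python sketch of the generator side, standard library only; the chained-hash keystream is obfuscation-grade, not real cryptography, and the password and salt values here are made up:

        import hashlib

        def keystream(password, salt, length):
            # Toy keystream: chained SHA-256 blocks keyed by the password.
            out, block = b"", salt
            while len(out) < length:
                block = hashlib.sha256(block + password.encode()).digest()
                out += block
            return out[:length]

        def xor(data, key):
            return bytes(a ^ b for a, b in zip(data, key))

        links = b"http://dl.dropbox.com/u/3045472/img.png\n"
        salt = b"public-page-salt"  # fine to embed in the page
        cipher = xor(links, keystream("hunter2", salt, len(links)))

        # Embed salt and cipher (hex) in the generated HTML; the page's script
        # rebuilds the keystream from the typed password and XORs it back.
        print(cipher.hex())
        print(xor(cipher, keystream("hunter2", salt, len(cipher))))  # round-trips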

  • xerces-c: Xml parsing multiple files

    - by user459811
    I'm attempting to learn xerces-c and was following this tutorial online: http://www.yolinux.com/TUTORIALS/XML-Xerces-C.html I was able to get the tutorial to compile and run through a memory checker (valgrind) with no problems. However, when I altered the program slightly, the memory checker reported some potentially leaked bytes. I only added a few extra lines to main to allow the program to read two files instead of one:

        int main()
        {
            string configFile = "sample.xml"; // stat file. Get ambiguous segfault otherwise.

            GetConfig appConfig;
            appConfig.readConfigFile(configFile);

            cout << "Application option A=" << appConfig.getOptionA() << endl;
            cout << "Application option B=" << appConfig.getOptionB() << endl;

            // Added code
            configFile = "sample1.xml";
            appConfig.readConfigFile(configFile);

            cout << "Application option A=" << appConfig.getOptionA() << endl;
            cout << "Application option B=" << appConfig.getOptionB() << endl;

            return 0;
        }

    I was wondering why adding the extra lines of code to read in another XML file results in the following output:

        ==776== Using Valgrind-3.6.0 and LibVEX; rerun with -h for copyright info
        ==776== Command: ./a.out
        ==776==
        Application option A=10
        Application option B=24
        Application option A=30
        Application option B=40
        ==776==
        ==776== HEAP SUMMARY:
        ==776==     in use at exit: 6 bytes in 2 blocks
        ==776==   total heap usage: 4,031 allocs, 4,029 frees, 1,092,045 bytes allocated
        ==776==
        ==776== 3 bytes in 1 blocks are definitely lost in loss record 1 of 2
        ==776==    at 0x4C28B8C: operator new(unsigned long) (vg_replace_malloc.c:261)
        ==776==    by 0x5225E9B: xercesc_3_1::MemoryManagerImpl::allocate(unsigned long) (MemoryManagerImpl.cpp:40)
        ==776==    by 0x53006C8: xercesc_3_1::IconvGNULCPTranscoder::transcode(unsigned short const*, xercesc_3_1::MemoryManager*) (IconvGNUTransService.cpp:751)
        ==776==    by 0x4038E7: GetConfig::readConfigFile(std::string&) (in /home/bonniehan/workspace/test/a.out)
        ==776==    by 0x403B13: main (in /home/bonniehan/workspace/test/a.out)
        ==776==
        ==776== 3 bytes in 1 blocks are definitely lost in loss record 2 of 2
        ==776==    at 0x4C28B8C: operator new(unsigned long) (vg_replace_malloc.c:261)
        ==776==    by 0x5225E9B: xercesc_3_1::MemoryManagerImpl::allocate(unsigned long) (MemoryManagerImpl.cpp:40)
        ==776==    by 0x53006C8: xercesc_3_1::IconvGNULCPTranscoder::transcode(unsigned short const*, xercesc_3_1::MemoryManager*) (IconvGNUTransService.cpp:751)
        ==776==    by 0x40393F: GetConfig::readConfigFile(std::string&) (in /home/bonniehan/workspace/test/a.out)
        ==776==    by 0x403B13: main (in /home/bonniehan/workspace/test/a.out)
        ==776==
        ==776== LEAK SUMMARY:
        ==776==    definitely lost: 6 bytes in 2 blocks
        ==776==    indirectly lost: 0 bytes in 0 blocks
        ==776==      possibly lost: 0 bytes in 0 blocks
        ==776==    still reachable: 0 bytes in 0 blocks
        ==776==         suppressed: 0 bytes in 0 blocks
        ==776==
        ==776== For counts of detected and suppressed errors, rerun with: -v
        ==776== ERROR SUMMARY: 2 errors from 2 contexts (suppressed: 2 from 2)

  • getline() sets failbit and skips last line

    - by Thanatos
    I'm using std::getline() to enumerate through the lines in a file, and it's mostly working. It's left me curious, however: std::getline() is skipping the very last line in my file, but only if it's blank. Using this minimal example:

        #include <iostream>
        #include <string>

        int main()
        {
            std::string line;
            while(std::getline(std::cin, line))
                std::cout << "Line: “" << line << "”\n";
            return 0;
        }

    If I feed it this:

        Line A
        Line B
        Line C

    I get those lines back at me. But this:

        Line A
        Line B
        Line C
        [* line is present but blank, i.e., the file end is: "...B\nLine C\n" *]

    (I unfortunately can't have a blank line in SO's little code box thing...) So, the first file has three lines (["Line A", "Line B", "Line C"]) and the second file has four (["Line A", "Line B", "Line C", ""]). This seems wrong to me: I have a four-line file, and enumerating it with getline() leaves me with 3. What's really got me scratching my head is that this is exactly what the standard says it should do (21.3.7.9). Even Python has similar behaviour (but it gives me the newlines too; C++ chops them off). Is this some weird thing where C++ expects lines to be terminated, not separated, by '\n', and I'm feeding it differently?

    Edit: Clearly, I need to expand a bit here. I've met up with two philosophies of determining what a "line" in a file is:

    - Lines are terminated by newlines. Dominant in systems such as Linux, and editors like vim. Possible to have a slightly "odd" file by not having a final '\n' (a "noeol" in vim). Impossible to have a blank line at the end of a file.
    - Lines are separated by newlines. Dominant in just about every Windows editor I've ever come across. Every file is valid, and it's possible for the last line to be blank.

    Of course, YMMV as to what a newline is. I've always treated these as two completely different schools of thought. One earlier point I tried to make was to ask whether the C++ standard is explicitly, or merely implicitly, following the first. (Curiously, where is Mac? Terminated or separated?)
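
    The terminator-vs-separator distinction is easy to see outside C++ too; here is a quick Python sketch (my illustration, not from the post) of how a trailing '\n' counts under each philosophy:

        data = "Line A\nLine B\nLine C\n"

        # "Terminator" reading: every line ends with '\n'; a trailing '\n'
        # closes the last line rather than opening an empty one.
        print(data.splitlines())   # ['Line A', 'Line B', 'Line C']

        # "Separator" reading: '\n' splits the stream, so a trailing '\n'
        # leaves an empty final field.
        print(data.split("\n"))    # ['Line A', 'Line B', 'Line C', '']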

  • SQL Server CTE referred in self joins slow

    - by Kharlos Dominguez
    Hello, I have written a table-valued UDF that starts with a CTE to return a subset of the rows from a large table. There are several joins in the CTE: a couple of inner joins and one left join to other tables, which don't contain a lot of rows. The CTE has a where clause that returns only the rows within a date range, in order to return only the rows needed. I'm then referencing this CTE in 4 self left joins, in order to build subtotals using different criteria. The query is quite complex, but here is a simplified pseudo-version of it:

        WITH DataCTE as
        (
            SELECT [columns] FROM table
            INNER JOIN table2 ON [...]
            INNER JOIN table3 ON [...]
            LEFT JOIN table3 ON [...]
        )
        SELECT [aggregates_columns of each subset]
        FROM DataCTE Main
        LEFT JOIN DataCTE BananasSubset
            ON [...]
            AND Product = 'Bananas'
            AND Quality = 100
        LEFT JOIN DataCTE DamagedBananasSubset
            ON [...]
            AND Product = 'Bananas'
            AND Quality < 20
        LEFT JOIN DataCTE MangosSubset
            ON [...]
        GROUP BY [

    I have the feeling that SQL Server gets confused and calls the CTE for each self join, which seems confirmed by looking at the execution plan, although I confess to not being an expert at reading those. I would have assumed SQL Server to be smart enough to perform the data retrieval from the CTE only once, rather than do it several times. I have tried the same approach, but rather than using a CTE to get the subset of the data, I used the same select query as in the CTE and made it output to a temp table instead. The version referencing the CTE takes 40 seconds; the version referencing the temp table takes between 1 and 2 seconds. Why isn't SQL Server smart enough to keep the CTE results in memory? I like CTEs, especially in this case, as my UDF is a table-valued one, so the CTE allowed me to keep everything in a single statement. To use a temp table, I would need to write a multi-statement table-valued UDF, which I find a slightly less elegant solution. Have any of you had this kind of performance issue with CTEs, and if so, how did you get it sorted? Thanks, Kharlos

  • How to handle environment-specific application configuration organization-wide?

    - by Stuart Lange
    Problem: Your organization has many separate applications, some of which interact with each other (to form "systems"). You need to deploy these applications to separate environments to facilitate staged testing (for example, DEV, QA, UAT, PROD). A given application needs to be configured slightly differently in each environment (each environment has a separate database, for example). You want this re-configuration to be handled by some sort of automated mechanism so that your release managers don't have to manually configure each application every time it is deployed to a different environment.

    Desired features: I would like to design an organization-wide configuration solution with the following properties (ideally):

    - Supports "one click" deployments (only the environment needs to be specified, and no manual re-configuration during/after deployment should be necessary).
    - There should be a single "system of record" where a shared environment-dependent property is specified (such as a database connection string that is shared by many applications).
    - Supports re-configuration of deployed applications (in the event that an environment-specific property needs to change), ideally without requiring a re-deployment of the application.
    - Allows an application to be run on the same machine, but in different environments (run a PROD instance and a DEV instance simultaneously).

    Possible solutions: I see two basic directions in which a solution could go (a small sketch of the first follows this question):

    - Make all applications "environment aware". You would pass the environment name (DEV, QA, etc.) at the command line to the app, and the app is then "smart" enough to figure out the environment-specific configuration values at run time. The app could fetch the values from flat files deployed along with the app, or from a central configuration service.
    - Applications are not "smart" as they are in #1, and simply fetch configuration by property name from config files deployed with the app. The values of these properties are injected into the config files at deploy time by the install program/script. That install script takes the environment name and fetches all relevant configuration values from a central configuration service.

    Question: How would/have you achieved a configuration solution that solves these problems and supports these desired features? Am I on target with the two possible solutions? Do you have a preference between those solutions? Also, please feel free to tell me that I'm thinking about the problem all wrong. Any feedback would be greatly appreciated.
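
    As a concreteness check on the first option (my own sketch, with made-up property names and values), an "environment aware" app boils down to: take the environment name on the command line, then resolve each property from a single system of record keyed by (environment, property):

        import sys

        # Hypothetical system of record: in practice this mapping would live in
        # a central configuration service or shared file, not hard-coded.
        CONFIG = {
            ("DEV",  "db.connection"): "Server=dev-db;Database=app",
            ("QA",   "db.connection"): "Server=qa-db;Database=app",
            ("PROD", "db.connection"): "Server=prod-db;Database=app",
        }

        def get_setting(env, prop):
            return CONFIG[(env, prop)]

        if __name__ == "__main__":
            env = sys.argv[1] if len(sys.argv) > 1 else "DEV"  # e.g. `app.py QA`
            print(get_setting(env, "db.connection"))

    Since the environment name is the only input, the same binary can run side by side as a PROD instance and a DEV instance on one machine, which covers the last desired feature.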

  • Delphi Unit local variables - how to make each instance unique?

    - by Justin
    Ok, this, I'm sure, is something simple that is easy to do. The problem: I've inherited scary spaghetti code and am slowly trying to better it when new features need adding, generally when a refactor makes adding the new feature neater. I've got a bunch of code I'm packing into a single unit which, in different places in the application, controls the same physical thing in the outside world. The control appears in several places in the application and operates slightly differently in each instance. What I've done is to create a unit with all of the features I need, which I can simply drop, as a frame, into each form that requires it. Each form then uses the unit's interface methods to customise the behaviour of each instance. The problem within the problem: In the unit in question (the frame) I have a variable declared in the IMPLEMENTATION section, local to the unit. I also have a procedure, declared in the TYPE section, which takes an argument and assigns that argument to the local variable in question; each form passes a unique variable to each instance of the frame/unit. What I want is for each instance of the frame to keep its own version of that variable, different from the others, and to use that to define how it operates. What seems to be happening, however, is that all instances are using the same value, even if I explicitly pass each instance a different variable. i.e.:

        Unit FlexibleUnit;

        interface

        uses
          //the uses stuff

        type
          TFlexibleUnit = class(TFrame)
            //declarations including
            procedure makeThisInstanceX(passMeTheVar: integer);
          private
            //
          public
            //
          end;

        implementation

        uses
          //the uses

        var
          myLocalVar;

        procedure makeThisInstanceX(passMeTheVar: integer);
        begin
          myLocalVar := passMeTheVar;
        end;

        //other procedures using myLocalVar
        //etc to the end;

    Now, somewhere in another form I've dropped this frame onto the design pane, sometimes two of these frames on one form, and have it declared in the proper places, etc. Each is unique in that:

        ThisFlexibleUnit : TFlexibleUnit;
        ThatFlexibleUnit : TFlexibleUnit;

    and when I do a:

        ThisFlexibleUnit.makeThisInstanceX(var1); //want to behave in way "var1"
        ThatFlexibleUnit.makeThisInstanceX(var2); //want to behave in way "var2"

    it seems that they both share the same variable "myLocalVar". Am I doing this wrong, in principle? If this is the correct method, then it's a matter of debugging what I have (which is too huge to post); but if this is not correct in principle, then is there a way to do what I am suggesting? Thanks in advance, Stack Overflow; you guys (and gals!) are legendary.
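
    The symptom matches state living at unit level rather than on each instance: a var in a unit's implementation section is one shared copy, much like a module-level variable in other languages. A quick Python analogue of the distinction (my illustration, not Delphi):

        # Module-level variable: one copy shared by every instance, like a var
        # declared in a Delphi unit's implementation section.
        shared_var = None

        class FlexibleUnit:
            def make_this_instance_x_shared(self, value):
                global shared_var
                shared_var = value      # every instance sees this change

            def make_this_instance_x_private(self, value):
                self.my_var = value     # per-instance, like a field on the class

        this_unit, that_unit = FlexibleUnit(), FlexibleUnit()
        this_unit.make_this_instance_x_private(1)
        that_unit.make_this_instance_x_private(2)
        print(this_unit.my_var, that_unit.my_var)  # 1 2: each keeps its own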

  • Specializing a template function that takes a universal reference parameter

    - by David Stone
    How do I specialize a template function that takes a universal reference parameter? foo.hpp:

        template<typename T>
        void foo(T && t);  // universal reference parameter

    foo.cpp:

        template<>
        void foo<Class>(Class && c)
        {
            // do something complicated
        }

    Here, Class is no longer a deduced type and thus is exactly Class; it cannot possibly be Class &, so reference-collapsing rules will not help me here. I could perhaps create another specialization that takes a Class & parameter (I'm not sure), but that implies duplicating all of the code contained within foo for every possible combination of rvalue/lvalue references for all parameters, which is what universal references are supposed to avoid. Is there some way to accomplish this? To be more specific about my problem, in case there is a better way to solve it: I have a program that can connect to multiple game servers, and each server, for the most part, calls everything by the same name. However, they have slightly different versions of a few things. There are a few different categories that these things can be in: a move, an item, etc. I have written a generic sort of "move string to move enum" set of functions for internal code to call, and my server interface code has similar functions. However, some servers have their own internal ID that they communicate with, some use strings, and some use both in different situations. Now what I want to do is make this a little more generic. I want to be able to call something like ServerNamespace::server_cast<Destination>(source). This would allow me to cast from a Move to a std::string or a ServerMoveID. Internally, I may need to make a copy (or move from the source) because some servers require that I keep a history of messages sent. Universal references seem to be the obvious solution to this problem. The header file I'm thinking of right now would expose simply this:

        namespace ServerNamespace {
        template<typename Destination, typename Source>
        Destination server_cast(Source && source);
        }

    And the implementation file would define all legal conversions as template specializations.

  • How can I connect to MSMQ over a workgroup?

    - by cyclotis04
    I'm writing a simple console client-server app using MSMQ, and I'm attempting to run it over the workgroup we have set up. The programs run just fine on the same computer, but I can't get them to connect over the network. I've tried adding Direct=, OS:, and a bunch of combinations of other prefixes, but I'm running out of ideas and obviously don't know the right way to do it. My queues don't have GUIDs, which is also slightly confusing. Whenever I attempt to connect to a remote machine, I get an invalid queue name message. What do I have to do to make this work?

    Server:

        class Program
        {
            static string _queue = @"\Private$\qim";
            static MessageQueue _mq;
            static readonly object _mqLock = new object();

            static void Main(string[] args)
            {
                _queue = Dns.GetHostName() + _queue;

                lock (_mqLock)
                {
                    if (!MessageQueue.Exists(_queue))
                        _mq = MessageQueue.Create(_queue);
                    else
                        _mq = new MessageQueue(_queue);
                }

                Console.Write("Starting server at {0}:\n\n", _mq.Path);

                _mq.Formatter = new BinaryMessageFormatter();
                _mq.BeginReceive(new TimeSpan(0, 1, 0), new object(), OnReceive);

                while (Console.ReadKey().Key != ConsoleKey.Escape) { }

                _mq.Close();
            }

            static void OnReceive(IAsyncResult result)
            {
                Message msg;
                lock (_mqLock)
                {
                    try
                    {
                        msg = _mq.EndReceive(result);
                        Console.Write(msg.Body);
                    }
                    catch (Exception ex)
                    {
                        Console.Write("\n" + ex.Message + "\n");
                    }
                }
                _mq.BeginReceive(new TimeSpan(0, 1, 0), new object(), OnReceive);
            }
        }

    Client:

        class Program
        {
            static MessageQueue _mq;

            static void Main(string[] args)
            {
                string queue;
                while (_mq == null)
                {
                    Console.Write("Enter the queue name:\n");
                    queue = Console.ReadLine();
                    //queue += @"\Private$\qim";
                    try
                    {
                        if (MessageQueue.Exists(queue))
                            _mq = new MessageQueue(queue);
                    }
                    catch (Exception ex)
                    {
                        Console.Write("\n" + ex.Message + "\n");
                        _mq = null;
                    }
                }

                Console.Write("Connected. Begin typing.\n\n");
                _mq.Formatter = new BinaryMessageFormatter();

                ConsoleKeyInfo key = new ConsoleKeyInfo();
                while (key.Key != ConsoleKey.Escape)
                {
                    key = Console.ReadKey();
                    _mq.Send(key.KeyChar.ToString());
                }
            }
        }

  • Helping linqtosql datacontext use implicit conversion between varchar column in the database and tab

    - by user213256
    I am creating an MSSQL database table, "Orders", that will contain a varchar(50) field, "Value", containing a string that represents a slightly complex data type, "OrderValue". I am using a LINQ to SQL DataContext class, which automatically types the "Value" column as a string. I gave the "OrderValue" class implicit conversion operators to and from a string, so I can easily use implicit conversion with the LINQ to SQL classes like this:

        // get an order from the orders table
        MyDataContext db = new MyDataContext();
        Order order = db.Orders.Single(o => o.id == 1);

        // use implicit conversion to turn the string representation of the order
        // value into the complex data type.
        OrderValue value = order.Value;

        // adjust one of the fields in the complex data type
        value.Shipping += 10;

        // use implicit conversion to store the string representation of the complex
        // data type back in the linqtosql order object
        order.Value = value;

        // save changes
        db.SubmitChanges();

    However, I would really like to be able to tell the LINQ to SQL class to type this field as "OrderValue" rather than as "string". Then I would be able to avoid complex code and re-write the above as:

        // get an order from the orders table
        MyDataContext db = new MyDataContext();
        Order order = db.Orders.Single(o => o.id == 1);

        // The Value field is already typed as the "OrderValue" type rather than as
        // string. When a string value was read from the database table, it was
        // implicitly converted to the complex data type.
        order.Value.Shipping += 10;

        // save changes
        db.SubmitChanges();

    In order to achieve this desired goal, I looked at the DataContext designer and selected the "Value" field of the "Order" table. Then, in properties, I changed "Type" to "global::MyApplication.OrderValue". The "Server Data Type" property was left as "VarChar(50) NOT NULL". The project built without errors. However, when reading from the database table, I was presented with the following error message:

        Could not convert from type 'System.String' to type 'MyApplication.OrderValue'.
           at System.Data.Linq.DBConvert.ChangeType(Object value, Type type)
           at Read_Order(ObjectMaterializer`1 )
           at System.Data.Linq.SqlClient.ObjectReaderCompiler.ObjectReader`2.MoveNext()
           at System.Linq.Buffer`1..ctor(IEnumerable`1 source)
           at System.Linq.Enumerable.ToArray[TSource](IEnumerable`1 source)
           at Example.OrdersProvider.GetOrders()
           at ... etc

    From the stack trace, I believe this error is happening while reading the data from the table. When presented with converting a string to my custom data type, even though the implicit conversion operators are present, the DBConvert class gets confused and throws an error. Is there anything I can do to help it not get confused and do the implicit conversion? Thanks in advance, and apologies if I have posted in the wrong forum. cheers / Ben

  • Replace Components.classesByID with document.implementation.createDocument

    - by Earl Smith
    I am not the author of this code, but it is no longer maintained, so I am trying to fix it myself, and I have very little experience in JavaScript. Since Firefox 9, Components.classesByID["{3a9cd622-264d-11d4-ba06-0060b0fc76dd}"] has been obsolete; instead, it is suggested that document.implementation.createDocument be used. Can someone here show me how to implement these changes? I seem to be just banging my head with everything I have tried. The example given at the Mozilla Developer Network is:

        var doc = document.implementation.createDocument("http://www.w3.org/1999/xhtml", "html", null);
        var body = document.createElementNS("http://www.w3.org/1999/xhtml", "body");
        body.setAttribute("id", "abc");
        doc.documentElement.appendChild(body);
        alert(doc.getElementById("abc"));  // [object HTMLBodyElement]

    and the code in the .jsm I am trying to fix is:

        this.fgImageData = {};
        this.fgImageData["check"] = [
            " *",
            " **",
            "* ***",
            "** *** ",
            "***** ",
            " *** ",
            " * "];
        this.fgImageData["radio"] = [
            " **** ",
            "******",
            "******",
            "******",
            "******",
            " **** "];
        this.fgImageData["menu-ltr"] = [
            "* ",
            "** ",
            "*** ",
            "****",
            "*** ",
            "** ",
            "* "];
        this.fgImageData["menu-rtl"] = [
            " *",
            " **",
            " ***",
            "****",
            " ***",
            " **",
            " *"];

        // I think I'm doing something slightly wrong when creating the document
        // but I'm not sure. It works though. *FIX*
        var domi = Components.classesByID["{3a9cd622-264d-11d4-ba06-0060b0fc76dd}"].
            createInstance(Components.interfaces.nsIDOMDOMImplementation);
        this.document = domi.createDocument("http://www.w3.org/1999/xhtml", "html", null);
        this.canvas = this.document.createElementNS("http://www.w3.org/1999/xhtml", "html:canvas");

        for (var name in this.fgImageData) {
            if (this.fgImageData.hasOwnProperty(name)) {
                var data = this.fgImageData[name];
                var width = data[0].length;
                var height = data.length;
                this.canvas.width = width;
                this.canvas.height = height;
                var g = this.canvas.getContext("2d");
                g.clearRect(0, 0, width, height);
                var idata = g.getImageData(0, 0, width, height);
                for (var y = 0, oy = 0; y < height; y++, oy += idata.width * 4)
                    for (var x = 0, ox = oy; x < width; x++, ox += 4)
                        idata.data[ox + 3] = data[y][x] == " " ? 0 : 255;
                this.fgImageData[name] = idata;
            }
        }
    },

  • Minutia on Objective-C Categories and Extensions.

    - by Matt Wilding
    I learned something new while trying to figure out why my readwrite property declared in a private category wasn't generating a setter. It was because my category was named:

        // .m
        @interface MyClass (private)
        @property (readwrite, copy) NSArray* myProperty;
        @end

    Changing it to:

        // .m
        @interface MyClass ()
        @property (readwrite, copy) NSArray* myProperty;
        @end

    and my setter is synthesized. I now know that a class extension is not just another name for an anonymous category. Leaving a category unnamed causes it to morph into a different beast: one that gives compile-time method implementation enforcement and allows you to add ivars. I now understand the general philosophies underlying each of these: categories are generally used to add methods to any class at runtime, and class extensions are generally used to enforce private API implementation and to add ivars. I accept this. But there are trifles that confuse me. First, at a high level: why differentiate like this? These concepts seem like similar ideas that can't decide if they are the same or different. If they are the same, I would expect the exact same things to be possible using a category with no name as with a named category (which they are not). If they are different (which they are), I would expect a greater syntactical disparity between the two. It seems odd to say, "Oh, by the way, to implement a class extension, just write a category, but leave out the name. It magically changes." Second, on the topic of compile-time enforcement: if you can't add properties in a named category, why does doing so convince the compiler that you did just that? To clarify, I'll illustrate with my example. I can declare a readonly property in the header file:

        // .h
        @interface MyClass : NSObject
        @property (readonly, copy) NSString* myString;
        @end

    Now, I want to head over to the implementation file and give myself private readwrite access to the property. If I do it correctly:

        // .m
        @interface MyClass ()
        @property (readwrite, copy) NSString* myString;
        @end

    I get a warning when I don't synthesize, and when I do, I can set the property and everything is peachy. But, frustratingly, if I happen to be slightly misguided about the difference between a category and a class extension, and I try:

        // .m
        @interface MyClass (private)
        @property (readwrite, copy) NSString* myString;
        @end

    the compiler is completely pacified into thinking that the property is readwrite. I get no warning, and not even the nice compile error "Object cannot be set - either readonly property or no setter found" upon setting myString that I would have gotten had I not declared the readwrite property in the category. I just get the "does not respond to selector" exception at runtime. If adding ivars and properties is not supported by (named) categories, is it too much to ask that the compiler play by the same rules? Am I missing some grand design philosophy?

  • Speed up a web service for auto complete and avoid too many method calls.

    - by jphenow
    So I've got my jQuery autocomplete 'working', but it's a little fidgety: since I call the web service method on each keydown() event, I get lots of method calls hanging, and sometimes, to get the "auto" part to work, I have to type the text out and backspace a bit, because I'm assuming the return value arrives a little too slowly. I've limited the query results to 8 to minimize time. Is there anything I can do to make this a little snappier? This thing seems near useless if I don't get it a little more responsive.

    JavaScript:

        $("#clientAutoNames").keydown(function () {
            $.ajax({
                type: "POST",
                url: "WebService.asmx/LoadData",
                data: "{'input':" + JSON.stringify($("#clientAutoNames").val()) + "}",
                contentType: "application/json; charset=utf-8",
                dataType: "json",
                success: function (data) {
                    if (data.d != null) {
                        var serviceScript = data.d;
                    }
                    $("#autoNames").html(serviceScript);
                    $('#clientAutoNames').autocomplete({
                        minLength: 2,
                        source: autoNames,
                        delay: 100,
                        focus: function (event, ui) {
                            $('#project').val(ui.item.label);
                            return false;
                        },
                        select: function (event, ui) {
                            $('#clientAutoNames').val(ui.item.label);
                            $('#projectid').val(ui.item.value);
                            $('#project-description').html(ui.item.desc);
                            pkey = $('#project-id').val;
                            return false;
                        }
                    })
                    .data("autocomplete")._renderItem = function (ul, item) {
                        return $("<li></li>")
                            .data("item.autocomplete", item)
                            .append("<a>" + item.label + "<br>" + item.desc + "</a>")
                            .appendTo(ul);
                    };
                }
            });
        });

    WebService.asmx:

        <WebMethod()> _
        Public Function LoadData(ByVal input As String) As String
            Dim result As String = "<script>var autoNames = ["
            Dim sqlOut As Data.SqlClient.SqlDataReader
            Dim connstring As String = *Datasource*
            Dim strSql As String = "SELECT TOP 2 * FROM v_Clients WHERE (SearchName Like '" + input + "%') ORDER BY SearchName"
            Dim cnn As Data.SqlClient.SqlConnection = New Data.SqlClient.SqlConnection(connstring)
            Dim cmd As Data.SqlClient.SqlCommand = New Data.SqlClient.SqlCommand(strSql, cnn)
            cnn.Open()
            sqlOut = cmd.ExecuteReader()
            Dim c As Integer = 0
            While sqlOut.Read()
                result = result + "{"
                result = result + "value: '" + sqlOut("ContactID").ToString() + "',"
                result = result + "label: '" + sqlOut("SearchName").ToString() + "',"
                'result = result + "desc: '" + title + " from " + company + "',"
                result = result + "},"
            End While
            result = result + "];</script>"
            sqlOut.Close()
            cnn.Close()
            Return result
        End Function

    I'm sure I'm just going about this slightly wrong, or not balancing the calls well, or something. Any help greatly appreciated!

  • WPF TextBlock in ListBox not clipping properly

    - by Tobias Funke
    Here's what I want: a ListBox whose items consist of a StackPanel with two TextBlocks. The TextBlocks need to support wrapping, the ListBox should not expand, and there should be no horizontal scrollbar. Here's the code I have so far. Copy and paste it into XamlPad and you'll see what I'm talking about:

        <ListBox Height="300" Width="300" x:Name="tvShows">
          <ListBox.Items>
            <ListBoxItem>
              <StackPanel>
                <TextBlock Width="{Binding ElementName=tvShows, Path=ActualWidth}" TextWrapping="Wrap">Lost is an American live-action television series. It follows the lives of plane crash survivors on a mysterious tropical island.</TextBlock>
                <TextBlock Width="{Binding ElementName=tvShows, Path=ActualWidth}" TextWrapping="Wrap">Lost is an American live-action television series. It follows the lives of plane crash survivors on a mysterious tropical island.</TextBlock>
              </StackPanel>
            </ListBoxItem>
            <ListBoxItem>
              <StackPanel>
                <TextBlock Width="{Binding ElementName=tvShows, Path=ActualWidth}" TextWrapping="Wrap">Lost is an American live-action television series. It follows the lives of plane crash survivors on a mysterious tropical island.</TextBlock>
                <TextBlock Width="{Binding ElementName=tvShows, Path=ActualWidth}" TextWrapping="Wrap">Lost is an American live-action television series. It follows the lives of plane crash survivors on a mysterious tropical island.</TextBlock>
              </StackPanel>
            </ListBoxItem>
          </ListBox.Items>
        </ListBox>

    This seems to be doing the job of keeping the TextBlocks from growing, but there's one problem: the TextBlocks seem to be slightly larger than the ListBox, causing the horizontal scrollbar to appear. This is strange because their widths are bound to the ListBox's ActualWidth. Also, if you add a few more items to the ListBox (just cut and paste in XamlPad), causing the vertical scrollbar to appear, the width of the TextBlocks does not adjust for the vertical scrollbar. How do I keep the TextBlocks inside the ListBox, with or without the vertical scrollbar?

  • Are there compelling reasons not to use Groovy?

    - by Leonard H Martin
    I'm developing a LoB application in Java after a long absence from the platform (having spent the last 8 years or so entrenched in Fortran, C, a smidgen of C++, and latterly .NET). Java, the language, is not much changed from how I remember it. I like its strengths, and I can work around its weaknesses; the platform has grown, and deciding among the myriad of different frameworks which appear to do much the same thing as one another is a different story, but that can wait for another day. All in all, I'm comfortable with Java. However, over the last couple of weeks I've become enamoured with Groovy, and purely from a selfish point of view: not just because it makes development against the JVM a more succinct and entertaining (and, well, "groovy") proposition than Java the language. What strikes me most about Groovy is its inherent maintainability. We all (I hope!) strive to write well-documented, easy-to-understand code. However, sometimes the languages we use themselves defeat us. An example: in 2001 I wrote a library in C to translate EDIFACT EDI messages into ANSI X12 messages. This is not a particularly complicated process, if slightly involved, and I thought at the time I had documented the code properly; I probably had. But some six years later, when I revisited the project (after becoming acclimatised to C#), I found myself lost in so much C boilerplate (mallocs, pointers, etc. etc.) that it took three days of thoughtful analysis before I finally understood what I'd been doing six years previously. This evening I've written about 2000 lines of Java (it is the day of rest, after all!). I've documented it as best as I know how, but of those 2000 lines of Java, a significant proportion is boilerplate. This is where I see Groovy and other dynamic languages winning through: maintainability and later comprehension. Groovy lets you concentrate on your intent without getting bogged down in platform-specific implementation; it's almost, but not quite, self-documenting. I see this as a huge boon to me when I revisit my current project (which I'll port to Groovy asap) in several years' time, and to my successors who will inherit it and carry on the good work. So, are there any reasons not to use Groovy?
