Search Results

Search found 6670 results on 267 pages for 'speed dial'.


  • Link checker: how to avoid false positives

    - by Burnzy
    I'm working on a link checker/broken-link finder and I am getting many false positives. After double-checking, I noticed that many URLs were throwing WebExceptions even though they were actually downloadable, and in some other cases the status code is 404 yet I can access the page from the browser. So here is the code; it's pretty ugly, and I'd like to have something more, I'd say, practical. All the status codes in that big if are used to filter out the ones I don't want to add to the broken-link list, because they are valid links (I tested them all). What I need to fix is the structure (if possible) and how to not get false 404s. Thank you!

        try
        {
            HttpWebRequest request = (HttpWebRequest)WebRequest.Create(uri);
            request.Method = "Head";
            request.MaximumResponseHeadersLength = 32; // FOR IE SLOW SPEED
            request.AllowAutoRedirect = true;
            using (HttpWebResponse response = (HttpWebResponse)request.GetResponse())
            {
                request.Abort();
            }
            /*
            WebClient wc = new WebClient();
            wc.DownloadString(uri);
            */
            _validlinks.Add(strUri);
        }
        catch (WebException wex)
        {
            if (!wex.Message.Contains("The remote name could not be resolved:") &&
                wex.Status != WebExceptionStatus.ServerProtocolViolation)
            {
                if (wex.Status != WebExceptionStatus.Timeout)
                {
                    HttpStatusCode code = ((HttpWebResponse)wex.Response).StatusCode;
                    if (code != HttpStatusCode.OK && code != HttpStatusCode.BadRequest &&
                        code != HttpStatusCode.Accepted && code != HttpStatusCode.InternalServerError &&
                        code != HttpStatusCode.Forbidden && code != HttpStatusCode.Redirect &&
                        code != HttpStatusCode.Found)
                    {
                        _brokenlinks.Add(new Href(new Uri(strUri, UriKind.RelativeOrAbsolute), UrlType.External));
                    }
                    else
                        _validlinks.Add(strUri);
                }
                else
                    _brokenlinks.Add(new Href(new Uri(strUri, UriKind.RelativeOrAbsolute), UrlType.External));
            }
            else
                _validlinks.Add(strUri);
        }
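    A frequent source of those false 404s is the HEAD request itself: quite a few servers mishandle HEAD and return 404 or 405 for pages that a browser fetches fine with GET, so the usual fix is to fall back to GET before classifying a link as broken. A minimal sketch of that fallback, written in Java for illustration (the class and method names here are made up, not part of the question's code):

        import java.net.HttpURLConnection;
        import java.net.URL;

        public final class LinkProbe {
            /** Returns the HTTP status for a URL, retrying with GET when HEAD is rejected. */
            public static int probe(String url) throws Exception {
                int status = request(url, "HEAD");
                // Some servers answer HEAD with 404/405 even though GET succeeds;
                // retry with GET before declaring the link broken.
                if (status == 404 || status == 405) {
                    status = request(url, "GET");
                }
                return status;
            }

            private static int request(String url, String method) throws Exception {
                HttpURLConnection conn = (HttpURLConnection) new URL(url).openConnection();
                conn.setRequestMethod(method);
                conn.setInstanceFollowRedirects(true);
                conn.setConnectTimeout(10_000);
                conn.setReadTimeout(10_000);
                try {
                    return conn.getResponseCode(); // 2xx/3xx => treat the link as valid
                } finally {
                    conn.disconnect();
                }
            }
        }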


  • How to access C arrays from assembler for Windows x64?

    - by 0xdword32
    I've written an assembler function to speed up a few things for image processing (images are created with CreateDIBSection). For Win32 the assembler code works without problems, but for Win64 I get a crash as soon as I try to access my array data. I put the relevant info in a struct and my assembler function gets a pointer to this struct. The struct pointer is put into ebx/rbx and, with indexing, I read the data from the struct. Any idea what I am doing wrong? I use NASM together with Visual Studio 2008, and for Win64 I set "default rel".

    C++ code:

        struct myData
        {
            tUInt32 ulParam1;
            void*   pData;
        };

        CallMyAssemblerFunction(&myData);

    Assembler code, Win32:

        ...
        push ebp
        mov  ebp, esp
        mov  ebx, [ebp + 8]   ; pointer to our struct
        mov  eax, [ebx]       ; ulParam1
        mov  esi, [ebx + 4]   ; pData, 4 byte pointer
        movd xmm0, [esi]
        ...

    Win64:

        ...
        mov  rbx, rcx         ; pointer to our struct
        mov  eax, [rbx]       ; ulParam1
        mov  rsi, [rbx + 4]   ; pData, 8 byte pointer
        movd xmm0, [rsi]      ; CRASH!
        ...


  • Self-describing file format for gigapixel images?

    - by Adam Goode
    In medical imaging, there appear to be two ways of storing huge gigapixel images:

    - Use lots of JPEG images (either packed into files or individually) and cook up some bizarre index format to describe what goes where, then tack on some metadata in some other format.
    - Use TIFF's tile and multi-image support to cleanly store the images as a single file, and provide downsampled versions for zooming speed; then abuse various TIFF tags to store metadata in non-standard ways, and store tiles with overlapping boundaries that must be individually translated later.

    In both cases, the reader must understand the format well enough to know how to draw things and read the metadata. Is there a better way to store these images? Is TIFF (or BigTIFF) still the right format for this? Does XMP solve the problem of metadata? The main issues are:

    - Storing images in a way that allows for rapid random access (tiling)
    - Storing downsampled images for rapid zooming (pyramid)
    - Handling cases where tiles are overlapping or sparse (scanners often work by moving a camera over a slide in 2D and capturing only where there is something to image)
    - Storing important metadata, including associated images like a slide's label and thumbnail
    - Support for lossy storage

    What kind of (hopefully non-proprietary) formats do people use to store large aerial photographs or maps? These images have similar properties.


  • need a near 100% uptime third-party web-accessible hosting for static web resources

    - by Jared Henderson
    I hope this makes sense: my business sells a website template, and we currently have about 10,000 users. For various reasons that are unimportant to this question, I try to keep the file size of the zipped template we give them as small as possible. Because of this, I have taken a bunch of images and a couple of static files used by the template and moved them to external hosting. They are referenced by absolute URL in the CSS and markup, instead of shipping all of those images and files with every template. So, basically, 10,000+ (and growing) users are requesting images and files from a third-party host. I don't use my own web hosting for this because I still use a medium-cheap shared host for my website, and if it goes down, 10,000+ users are potentially affected. Currently I'm having the template directly access files inside an open-source Google Code project that I created for just this purpose. But that seems like a bastardization of what a Google Code repository is for, and plus, Google Code (I've found out) often spews 502 Bad Gateway errors for hours at a time. So, anyway, my question is: where is the right kind of place to host these? Obviously I'm willing to pay. My main needs are speed and uptime, since the images and files are being requested from thousands of different websites every day. Is this something that I should use Amazon S3 for? I'm guessing there's some kind of service exactly for this kind of need, but I'm at a loss to figure out what it is.


  • How can I make this method more Scalalicious

    - by Neil Chambers
    I have a function that calculates the left and right node values for some collection of treeNodes, given a simple node.id / node.parentId association. It's very simple and works well enough... but, well, I am wondering if there is a more idiomatic approach. Specifically, is there a way to track the left/right values without using some externally tracked value, but still keep the tasty recursion?

        /*
         * A tree node
         */
        case class TreeNode(val id: String, val parentId: String) {
          var left: Int = 0
          var right: Int = 0
        }

        /*
         * a method to compute the left/right node values
         */
        def walktree(node: TreeNode) = {
          /*
           * increment state for the inner function
           */
          var c = 0

          /*
           * A method to set the increment state
           */
          def increment = { c += 1; c } // poo

          /*
           * the tasty inner method
           * treeNodes is a List[TreeNode]
           */
          def walk(node: TreeNode): Unit = {
            node.left = increment
            /*
             * recurse on all direct descendants
             */
            treeNodes filter (_.parentId == node.id) foreach (walk(_))
            node.right = increment
          }

          walk(node)
        }

        walktree(someRootNode)

    Edit - The list of nodes is taken from a database. Pulling the nodes into a proper tree would take too much time. I am pulling a flat list into memory, and all I have is an association via node ids between parents and children. Adding left/right node values allows me to get a snapshot of all children (and children's children) with a single SQL query. The calculation needs to run very quickly in order to maintain data integrity should parent-child associations change (which they do very frequently). In addition to using the awesome Scala collections, I've also boosted speed by using parallel processing for some pre/post filtering on the tree nodes. I wanted to find a more idiomatic way of tracking the left/right node values. After looking at the answers listed, I have settled on this synthesised version:

        def walktree(node: TreeNode) = {
          def walk(node: TreeNode, counter: Int): Int = {
            node.left = counter
            node.right = treeNodes
              .filter(_.parentId == node.id)
              .foldLeft(counter + 1) { (counter, curnode) => walk(curnode, counter) + 1 }
            node.right
          }
          walk(node, 1)
        }


  • Performance degrades for more than 2 threads on Xeon X5355

    - by zoolii
    Hi All, I am writing an application using Boost threads and Boost barriers to synchronize the threads. I have two machines on which to test the application. Machine 1 is a Core 2 Duo (T8300) machine (Windows XP Professional, 4 GB RAM), where I am getting the following performance figures:

    - 1 thread: 21 TPS
    - 2 threads: 35 TPS (66% improvement)

    Further increases in the number of threads decrease the TPS, but that is understandable, as the machine has only two cores. Machine 2 is a dual quad-core (Xeon X5355) machine (Windows Server 2003 with 4 GB RAM) and has 8 effective cores:

    - 1 thread: 21 TPS
    - 2 threads: 27 TPS (28% improvement)
    - 4 threads: 25 TPS
    - 8 threads: 24 TPS

    As you can see, performance degrades after 2 threads (though the machine has 8 cores). If the program had some bottleneck, it should have degraded at 2 threads as well. Any ideas? Explanations? Does the OS play some role in the performance? It seems like the Core 2 Duo (2.4 GHz) scales better than the Xeon X5355 (2.66 GHz), even though the Xeon has the higher clock speed. Thank you -Zoolii
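    One way to narrow down the cause - sketched below in Java as a hedged, language-agnostic illustration rather than the question's C++/Boost - is to compare how a purely CPU-bound task and a memory-bound task scale with thread count. On Xeons of the X5355 generation, every core reaches memory over the front-side bus, so memory-bound work often stops scaling after a couple of threads even though CPU-bound work keeps scaling; seeing that pattern would point at bandwidth rather than locking:

        import java.util.concurrent.ExecutorService;
        import java.util.concurrent.Executors;
        import java.util.concurrent.Future;

        public class ScalingProbe {
            static final int[] BIG = new int[32 << 20]; // ~128 MB; run with a big enough -Xmx

            public static void main(String[] args) throws Exception {
                for (int threads = 1; threads <= 8; threads *= 2) {
                    // every thread runs the same fixed amount of work, so with perfect
                    // scaling the wall time stays flat as the thread count grows
                    System.out.printf("threads=%d cpuBound=%dms memBound=%dms%n", threads,
                        run(threads, ScalingProbe::cpuBound), run(threads, ScalingProbe::memBound));
                }
            }

            static long run(int threads, Runnable task) throws Exception {
                ExecutorService pool = Executors.newFixedThreadPool(threads);
                long t0 = System.nanoTime();
                Future<?>[] fs = new Future<?>[threads];
                for (int i = 0; i < threads; i++) fs[i] = pool.submit(task);
                for (Future<?> f : fs) f.get();
                pool.shutdown();
                return (System.nanoTime() - t0) / 1_000_000;
            }

            static void cpuBound() { // arithmetic only: should scale with physical cores
                double x = 1.0;
                for (int i = 0; i < 50_000_000; i++) x = x * 1.0000001 + 0.3;
                if (x == 42.0) System.out.println(x); // defeat dead-code elimination
            }

            static void memBound() { // one read per cache line: limited by memory bandwidth
                long sum = 0;
                for (int i = 0; i < BIG.length; i += 16) sum += BIG[i];
                if (sum == 42) System.out.println(sum);
            }
        }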


  • is mysql index useful on column 'state' when only doing bit-operations on the column?

    - by Geert-Jan
    I have a lot of domain entities (stored in MySQL) which undergo lots of different operations. Each operation is executed from a different program. I need to keep (flow) state for these entities, which I implemented as a long field 'flowstate' used as a bitset. To query MySQL for entities which have undergone a certain operation, I do something like:

        select * from entities where state >> 7 & 1 = 1

    indicating that bit 7 (corresponding to operation 7) has run (<-- simplified). Anyway, I really didn't pay attention to the performance implications of this setup in the beginning, and I think I'm in a bit of trouble, since queries like the above run pretty slowly. What I'd like to know:

    - Does a MySQL index on 'flowstate' help at all? After all, it's not a single value MySQL can quickly find using a binary sort or whatever.
    - If it doesn't, are there any other things I could do to speed things up?
    - Are there special 'mask indices' for fields with use cases like the above?

    TIA, Geert-Jan


  • Dependency between operations in scala actors

    - by paradigmatic
    I am trying to parallelise some code using Scala actors. This is my first real code with actors, but I have some experience with Java multithreading and MPI in C. However, I am completely lost. The workflow I want to realise is a circular pipeline and can be described as follows:

    - Each worker actor has a reference to another one, thus forming a circle.
    - There is a coordinator actor which can trigger a computation by sending a StartWork() message.
    - When a worker receives a StartWork() message, it processes some stuff locally and sends a DoWork(...) message to its neighbour in the circle.
    - The neighbour does some other stuff and sends in turn a DoWork(...) message to its own neighbour. This continues until the initial worker receives a DoWork() message.
    - The coordinator can send a GetResult() message to the initial worker and wait for a reply. The point is that the coordinator should only receive a result when the data is ready.

    How can a worker wait until the job has come back around to it before answering the GetResult() message? To speed up computation, any worker can receive a StartWork() at any time. Here is my first-try pseudo-implementation of the worker:

        class Worker(neighbor: Worker, numWorkers: Int) extends Actor {
          var ready = Foo()

          def act() {
            loop {
              react {
                case StartWork() =>
                  val someData = doStuff()
                  neighbor ! DoWork(someData, numWorkers - 1)

                case DoWork(resultData, remaining) =>
                  if (remaining == 0) {
                    ready = resultData
                  } else {
                    val someOtherData = doOtherStuff(resultData)
                    neighbor ! DoWork(someOtherData, remaining - 1)
                  }

                case GetResult() =>
                  reply(ready)
              }
            }
          }
        }

    On the coordinator side:

        worker ! StartWork()
        val result = worker !? GetResult() // should wait
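    Whatever the actor library, the usual trick is not to answer GetResult() straight away: if the result is not ready, the worker parks the request and replies when the DoWork message has come full circle. Below is a rough analogue of that pattern in Java, using a CompletableFuture as the parked reply; every name here is illustrative rather than part of the question's API, and in a real actor these methods would run inside the actor's single-threaded message loop, so no extra locking is shown:

        import java.util.concurrent.CompletableFuture;

        class Worker {
            private Object ready;                        // last completed result
            private CompletableFuture<Object> pending;   // parked GetResult, if any

            // Called when DoWork has travelled the whole circle (remaining == 0).
            void onWorkComplete(Object resultData) {
                ready = resultData;
                if (pending != null) {
                    pending.complete(resultData);        // answer the parked request now
                    pending = null;
                }
            }

            // The coordinator blocks on the returned future instead of on a reply message.
            CompletableFuture<Object> getResult() {
                if (ready != null) return CompletableFuture.completedFuture(ready);
                pending = new CompletableFuture<>();
                return pending;
            }
        }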


  • ConcurrentLinkedQueue$Node remains in heap after remove()

    - by action8
    I have a multithreaded app writing and reading a ConcurrentLinkedQueue, which is conceptually used to back entries in a list/table. I originally used a ConcurrentHashMap for this, which worked well. A new requirement required tracking the order entries came in, so they could be removed in oldest-first order, depending on some conditions. ConcurrentLinkedQueue appeared to be a good choice, and functionally it works well. A configurable number of entries are held in memory, and when a new entry is offered once the limit is reached, the queue is searched in oldest-first order for one that can be removed. Certain entries are not to be removed by the system and wait for client interaction. What appears to be happening is that I have an entry at the front of the queue that occurred, say, 100K entries ago. The queue appears to hold the configured number of entries (size() == 100), but when profiling, I found that there were ~100K ConcurrentLinkedQueue$Node objects in memory. This appears to be by design; just glancing at the source for ConcurrentLinkedQueue, a remove merely clears the reference to the object being stored but leaves the linked-list node in place for iteration. Finally, my question: Is there a "better" lazy way to handle a collection of this nature? I love the speed of the ConcurrentLinkedQueue; I just can't afford the unbounded leak that appears to be possible in this case. If not, it seems like I'd have to create a second structure to track order, which may have the same issues, plus a synchronization concern.
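    One alternative that keeps concurrent, oldest-first traversal but genuinely unlinks removed entries is a ConcurrentSkipListMap keyed by an insertion sequence number. This is only a sketch of the idea under the assumptions above, not a drop-in replacement for the questioner's code:

        import java.util.Map;
        import java.util.concurrent.ConcurrentSkipListMap;
        import java.util.concurrent.atomic.AtomicLong;
        import java.util.function.Predicate;

        class OrderedEntries<V> {
            private final AtomicLong seq = new AtomicLong();
            private final ConcurrentSkipListMap<Long, V> entries = new ConcurrentSkipListMap<>();

            long add(V value) {
                long id = seq.incrementAndGet();
                entries.put(id, value);            // keys ascend, so iteration order == arrival order
                return id;
            }

            /** Walk oldest-first and remove the first entry the predicate accepts. */
            V removeOldest(Predicate<V> removable) {
                for (Map.Entry<Long, V> e : entries.entrySet()) {
                    if (removable.test(e.getValue()) && entries.remove(e.getKey(), e.getValue())) {
                        return e.getValue();       // remove() really unlinks the node here
                    }
                }
                return null;
            }

            int size() { return entries.size(); }
        }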


  • making check boxes clickable once in javascript?

    - by OVERTONE
    Sorry, but I'm an absolute noob with JavaScript. I've made a form for a simple quiz but can't figure out how to make radios only clickable once. I can select two or three buttons as my answer; I want to change this.

        <form name="Beginners Quiz">
          <p>Film speed refers to:</p>
          <p><input type="radio" name="Answer 1" id="Answer1" value="a" onclick="recordAnswer(1,this.value)"/>How long it takes to develop film.<br/>
          <p><input type="radio" name="Answer 2" id="Answer2" value="b" onclick="recordAnswer(1,this.value)"/>How fast film moves through the film-transport system.<br/>
          <p><input type="radio" name="Answer 3" id="Answer3" value="c" onclick="recordAnswer(1,this.value)"/>How sensitive the film is to light.<br/>
          <p><input type="radio" name="Answer 4" id="Answer4" value="d" onclick="recordAnswer(1,this.value)"/>None of these makes sense.<br/>

    I've been rooting around the W3Schools tutorials to no avail. Can someone shed some light?


  • Performance of Java matrix math libraries?

    - by dfrankow
    We are computing something whose runtime is bound by matrix operations. (Some details below if interested.) This experience prompted the following question: do folks have experience with the performance of Java libraries for matrix math (e.g., multiply, inverse, etc.)? For example:

    - JAMA: http://math.nist.gov/javanumerics/jama/
    - COLT: http://acs.lbl.gov/~hoschek/colt/
    - Apache Commons Math: http://commons.apache.org/math/

    I searched and found nothing. Details of our speed comparison: we are using Intel Fortran (ifort (IFORT) 10.1 20070913). We have reimplemented it in Java (1.6) using Apache Commons Math 1.2 matrix ops, and it agrees to all of its digits of accuracy. (We have reasons for wanting it in Java.) (Java doubles, Fortran real*8.) Fortran: 6 minutes; Java: 33 minutes, same machine. jvisualvm profiling shows much time spent in RealMatrixImpl.{getEntry,isValidCoordinate} (which appear to be gone in the unreleased Apache Commons Math 2.0, but 2.0 is no faster). Fortran is using ATLAS BLAS routines (dpotrf, etc.). Obviously this could depend on our code in each language, but we believe most of the time is in equivalent matrix operations. In several other computations that do not involve libraries, Java has not been much slower, and sometimes much faster.
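    When benchmarking libraries like these, a dependency-free baseline helps separate per-element call overhead from the arithmetic itself: if a library runs slower than the naive triple loop below, accessor costs (like the getEntry hotspot above) are the likely culprit rather than the math. A minimal sketch:

        /** Naive O(n^3) matrix multiply, C = A * B, as a dependency-free baseline. */
        public final class NaiveMultiply {
            public static double[][] multiply(double[][] a, double[][] b) {
                int n = a.length, k = b.length, m = b[0].length;
                double[][] c = new double[n][m];
                for (int i = 0; i < n; i++) {
                    for (int p = 0; p < k; p++) {
                        double aip = a[i][p];          // hoisted; keeps the inner loop streaming
                        for (int j = 0; j < m; j++) {
                            c[i][j] += aip * b[p][j];  // i-p-j order is row-major friendly
                        }
                    }
                }
                return c;
            }
        }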


  • fastest SCM tool available for Embedded software development

    - by wrapperm
    Hi All, in my company we are presently using Rational ClearCase as the software configuration management tool for our embedded software development. The software is basically for automobiles, to be specific for engines (I don't think this information really matters). But I find ClearCase to be very slow in performing most activities (accessing files, branching and labelling), in addition to which there are various other limitations. We have recently decided to research some free and open-source distributed version control systems which could handle our large projects with speed and efficiency. This tool should be a full-fledged repository with complete history and full revision-tracking capabilities, not dependent on network access or a central server. Branching and merging should be fast and easy to do. It should have a multisite development facility. With these above-mentioned requirements, we have come up with some of the tools presently available in the market: Git, Mercurial, Bazaar, Subversion, CVS, Perforce, and Visual SourceSafe. I need everybody's help in finding an appropriate SCM tool which meets the above requirements. Thanking you in advance, Rahamath.


  • linux new/delete, malloc/free large memory blocks

    - by brian_mk
    Hi folks, we have a Linux system (Kubuntu 7.10) that runs a number of CORBA server processes. The server software uses the glibc libraries for memory allocation. The Linux PC has 4 GB of physical memory. Swap is disabled for speed reasons. Upon receiving a request to process data, one of the server processes allocates a large data buffer (using the standard C++ operator 'new'). The buffer size varies depending upon a number of parameters but is typically around 1.2 GB. It can be up to about 1.9 GB. When the request has completed, the buffer is released using 'delete'. This works fine for several consecutive requests that allocate buffers of the same size, or if a request allocates a smaller size than the previous one. The memory appears to be freed OK - otherwise buffer allocation attempts would eventually fail after just a couple of requests. In any case, we can see the buffer memory being allocated and freed for each request using tools such as KSysGuard etc. The problem arises when a request requires a buffer larger than the previous one. In this case, operator 'new' throws an exception. It's as if the memory that has been freed from the first allocation cannot be re-allocated, even though there is sufficient free physical memory available. If I kill and restart the server process after the first operation, then the second request for a larger buffer size succeeds; i.e. killing the process appears to fully release the freed memory back to the system. Can anyone offer an explanation as to what might be going on here? Could it be some kind of fragmentation or mapping-table-size issue? I am thinking of replacing new/delete with malloc/free and using mallopt to tune the way the memory is released to the system. BTW - I'm not sure if it's relevant to our problem, but the server uses pthreads that get created and destroyed on each processing request. Cheers, Brian.


  • Use a vector to index a matrix without linear index

    - by David_G
    G'day, I'm trying to find a way to use a vector of [x,y] points to index into a large matrix in MATLAB. Usually, I would convert the subscript points to the linear index of the matrix (as in, for example, "Use a vector as an index to a matrix in MATLAB"). However, the matrix is four-dimensional, and I want to take all of the elements of the 3rd and 4th dimensions that have the same 1st and 2nd dimensions. Let me hopefully demonstrate with an example:

        Matrix = nan(4,4,2,2);  % where the dimensions are (x,y,depth,time)
        Matrix(1,2,:,:) = 999;  % note that this value could change in depth (3rd dim) and time (4th dim)
        Matrix(3,4,:,:) = 888;  % note that this value could change in depth (3rd dim) and time (4th dim)
        Matrix(4,4,:,:) = 124;

    Now, I want to be able to index with the subscripts (1,2) and (3,4), etc., and return not only the 999 and 888 which exist in Matrix(:,:,1,1), but also the contents which exist at Matrix(:,:,1,2), Matrix(:,:,2,1) and Matrix(:,:,2,2), and so on (IRL, the dimensions of Matrix might be more like size(Matrix) = (300 250 30 200)). I don't want to use linear indices because I would like the results to be in a similar vector fashion. For example, I would like a result which is something like:

        ans(time=1)
            999  888  124
            999  888  124
        ans(time=2)
            etc  etc  etc
            etc  etc  etc

    I'd also like to add that due to the size of the matrix I'm dealing with, speed is an issue here - thus why I'd like to use subscript indices to index into the data. I should also mention that (unlike this question: "Accessing values using subscripts without using sub2ind") since I want all the information stored in the extra dimensions 3 and 4 for the i-th and j-th indices, I don't think a slightly faster version of sub2ind would cut it.


  • Slow Databinding setup time in C# .NET 4.0

    - by Svisstack
    Hello, I have got a problem. I have a Windows Forms application with a dynamically generated layout, but I have a performance problem. In this form I use data binding from .NET 4.0, and once set up the binding works fine, but the binding setup time for ONE control blocks my application for approx. 0.7 seconds. I have many controls, and the total binding setup time is around 2 minutes. I have tried all possible solutions, and I don't have any ideas left short of writing my own binding class. What is wrong with my code?

        case "Boolean":
        {
            Binding b = new Binding("Checked", __bindingsource, __ep.Name);
            CheckBox cb = new CheckBox();
            /*
             * HERE is the problem
             */
            cb.DataBindings.Add(b);
            /*
             * HERE is the end of the problem
             */
            __flp.Controls.Add(cb);
            __bindingcontrol.AddBinding(b);
            break;
        }

    Without the problem code lines everything works fast, but without binding ;-( and I want binding turned on and running at normal speed. PS1. I have suspended layout during generation time. PS2. I have the same problem with binding TextBoxes and PictureBoxes; CheckBox is only an example. How to do that?


  • Perl Regex - Condensing groups of find/replace

    - by brydgesk
    I'm using Perl to perform some file cleansing and am running into performance issues. One of the major parts of my code involves standardizing name fields. I have several sections that look like this:

        sub substitute_titles {
            my ($inStr) = @_;

            ${$inStr} =~ s/ PHD./ PHD /;
            ${$inStr} =~ s/ P H D / PHD /;
            ${$inStr} =~ s/ PROF./ PROF /;
            ${$inStr} =~ s/ P R O F / PROF /;
            ${$inStr} =~ s/ DR./ DR /;
            ${$inStr} =~ s/ D.R./ DR /;
            ${$inStr} =~ s/ HON./ HON /;
            ${$inStr} =~ s/ H O N / HON /;
            ${$inStr} =~ s/ MR./ MR /;
            ${$inStr} =~ s/ MRS./ MRS /;
            ${$inStr} =~ s/ M R S / MRS /;
            ${$inStr} =~ s/ MS./ MS /;
            ${$inStr} =~ s/ MISS./ MISS /;
        }

    I'm passing by reference to try to get at least a little speed, but I fear that running so many (literally hundreds) of specific string replaces on tens of thousands (likely hundreds of thousands, eventually) of records is going to hurt performance. Is there a better way to implement this kind of logic than what I'm doing currently? Thanks. Edit: Quick note - not all the replace functions just remove periods and spaces; there are string deletions, soundex groups, etc.
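    The standard speed-up for long chains of fixed substitutions is a single pass: one alternation pattern plus a lookup table, so each record is scanned once instead of once per rule (in Perl that is one s/(...|...)/$map{$1}/g against a hash). A sketch of the same technique, shown here in Java; the title list is abbreviated and the keys are treated as literals:

        import java.util.Map;
        import java.util.regex.Matcher;
        import java.util.regex.Pattern;

        public final class TitleNormalizer {
            // literal variants -> canonical form; extend as needed
            private static final Map<String, String> TITLES = Map.of(
                " P H D ", " PHD ",
                " PHD.", " PHD ",
                " P R O F ", " PROF ",
                " PROF.", " PROF ",
                " MRS.", " MRS ",
                " MR.", " MR ",
                " DR.", " DR ",
                " MS.", " MS ");

            // one alternation of quoted literals, longest first so overlapping keys match correctly
            private static final Pattern ANY_TITLE = Pattern.compile(
                TITLES.keySet().stream()
                      .sorted((a, b) -> b.length() - a.length())
                      .map(Pattern::quote)
                      .reduce((a, b) -> a + "|" + b)
                      .orElseThrow(IllegalStateException::new));

            public static String normalize(String in) {
                Matcher m = ANY_TITLE.matcher(in);
                StringBuffer out = new StringBuffer();
                while (m.find()) {
                    // single scan: look up the replacement instead of re-running every rule
                    m.appendReplacement(out, Matcher.quoteReplacement(TITLES.get(m.group())));
                }
                return m.appendTail(out).toString();
            }
        }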


  • What are some typing patterns using a standard QWERTY keyboard that work well for you as a programmer?

    - by OrbMan
    After hunting and pecking for about 35 years, I have decided to learn to type. I am learning QWERTY and have learned about 2/3 of the letters so far. While learning, I have noticed how asymmetrical the keyboard is, which really bothers me. (I will probably switch to a symmetrical keyboard eventually, but for now am trying to do everything as standard and "correct" as possible.) Although I am not there yet in my lessons, it seems that many of the keys I am going to use as a C# web developer are supposed to be typed by the pinky of my right hand. Are there any typing patterns you have developed that are more ergonomic (or faster) when typing large volumes of code rife with braces, colons, semicolons and quotes? Or should I just accept the fact that every other key is going to be hit with my right pinky? It is not that speed is such a huge concern, as much as that it seems so inefficient to rely on one finger so much... As an example, some of the conventions I use as a hunt-and-pecker, like typing open and close braces right away with my index and middle finger and then hitting the left arrow key to fill in the inner content, don't seem to work as well with just a pinky. What are some typing patterns using a standard QWERTY keyboard that work really well for you as a programmer? Update: US layout, and I use the home row. Update 2: Despite my best efforts to the contrary, people are interpreting this question as "how do I learn to type" or "what keyboard should I use". Take it as a given that I will learn to type, and that I will be doing so on a standard QWERTY-layout keyboard, not Dvorak. I am interested in acquiring a skill that will be useful wherever I go.


  • Do fluent interfaces significantly impact runtime performance of a .NET application?

    - by stakx
    I'm currently occupying myself with implementing a fluent interface for an existing technology, which would allow code similar to the following snippet:

        using (var directory = Open.Directory(@"path\to\some\directory"))
        {
            using (var file = Open.File("foobar.html").In(directory))
            {
                // ...
            }
        }

    In order to implement such constructs, classes are needed that accumulate arguments and pass them on to other objects. For example, to implement the Open.File(...).In(...) construct, you would need two classes:

        // handles 'Open.XXX':
        public static class OpenPhrase
        {
            // handles 'Open.File(XXX)':
            public static OpenFilePhrase File(string filename)
            {
                return new OpenFilePhrase(filename);
            }

            // handles 'Open.Directory(XXX)':
            public static DirectoryObject Directory(string path)
            {
                // ...
            }
        }

        // handles 'Open.File(XXX).XXX':
        public class OpenFilePhrase
        {
            internal OpenFilePhrase(string filename)
            {
                _filename = filename;
            }

            // handles 'Open.File(XXX).In(XXX)':
            public FileObject In(DirectoryObject directory)
            {
                // ...
            }

            private readonly string _filename;
        }

    That is, the more constituent parts statements such as the initial examples have, the more objects need to be created purely for passing arguments on to subsequent objects in the chain until the actual statement can finally execute. Question: I am interested in some opinions. Does a fluent interface implemented using the above technique significantly impact the runtime performance of an application that uses it? By runtime performance, I refer to both speed and memory usage. Bear in mind that a potentially large number of temporary, argument-carrying objects would have to be created for only very brief timespans, which I assume may put a certain pressure on the garbage collector. If you think there is a significant performance impact, do you know of a better way to implement fluent interfaces?
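    For what it's worth, short-lived argument carriers like these are close to the best case for a generational collector, and JITs with escape analysis can often eliminate them entirely once the code is hot. A quick way to check is to microbenchmark an equivalent chain; here is a hedged sketch in Java (Java 16+ records, all type names hypothetical) rather than a definitive answer for .NET:

        public final class FluentCost {
            record FilePhrase(String name) {                    // the temporary argument carrier
                FileHandle in(String dir) { return new FileHandle(dir + "/" + name); }
            }
            record FileHandle(String path) {}

            static FilePhrase file(String name) { return new FilePhrase(name); }

            public static void main(String[] args) {
                long t0 = System.nanoTime();
                long sink = 0;
                for (int i = 0; i < 10_000_000; i++) {
                    // one temporary FilePhrase per iteration; scalar replacement
                    // usually keeps it off the heap once the loop is compiled
                    sink += file("foobar.html").in("some/dir").path().length();
                }
                System.out.println((System.nanoTime() - t0) / 1_000_000 + " ms, sink=" + sink);
            }
        }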


  • Render an SSRS report with a Map as an image map without actually having a ReportViewer on the page

    - by Erica Merchant
    I have a report that has a Map with spatial data. Clicking an object on that map sends you to other pages on the site. I have tried a few different ways of displaying the report:

    - If I put a ReportViewer on the actual page, the page sometimes takes 10+ seconds to load, but the ReportViewer creates a fully operable image map.
    - If I create a ReportViewer in the code-behind, I can use the Render method to format the report as HTML 4.0 and get the stream IDs, from which I can extract the image (painfully). This is pretty fast (1-2 seconds), but only gives me an image, no image map.
    - I can get very similar functionality to the above example by using rendering extensions on the report URL to create an image and then setting an image's source to this URL. This is the fastest method, but it still does not create an image map.

    So is there a way to create an image map from the report without having to use the ReportViewer? Or a way to substantially speed up the ReportViewer?


  • oracle query with inconsistent results

    - by Spencer Stejskal
    I'm having a very strange problem: I have a complicated view that returns incorrect data when I query on a particular column. Here's an example:

        select empname, has_garnishment
        from timecard_v2
        where empname = 'Testerson, Testy';

    This returns the single result 'Testerson, Testy', 'N'. However, if I use the query:

        select empname, has_garnishment
        from timecard_v2
        where empname = 'Testerson, Testy'
        and has_garnishment = 'Y';

    this returns the single result 'Testerson, Testy', 'Y'. The second query should return a subset of the first query, but it returns a different answer. I have dissected the view and determined that this section of the view definition is where the problem arises (note: I removed all of the select clause except the parts of interest for clarity; in the full query all joined tables are required):

        SELECT e.fullname empname,
               NVL2(ded.has_garn, 'Y', 'N') has_garnishment
          FROM timecard tc,
               orderdetail od,
               orderassign oa,
               employee e,
               employee3 e3,
               customer10 c10,
               order_misc om,
               (SELECT COUNT(*) has_garn, v_ssn
                  FROM deductions
                 WHERE yymmdd_stop = 0
                    OR (LENGTH(yymmdd_stop) = 7
                        AND to_date(SUBSTR(yymmdd_stop, 2), 'YYMMDD') > sysdate)
                 GROUP BY v_ssn) ded
         WHERE oa.lrn(+) = tc.lrn_order
           AND om.lrn(+) = od.lrn
           AND od.orderno = oa.orderno
           AND e.ssn = tc.ssn
           AND c10.custno = tc.custno
           AND e.lrn = e3.lrn
           AND e.ssn = ded.v_ssn(+)

    One thing of note about the definition of the 'ded' subquery: the v_ssn field is a virtual field on the deductions table. I'm not a DBA, I'm a software developer, but we recently lost our DBA and the new one is still getting up to speed, so I'm trying to debug this issue. That being said, please explain things a little more thoroughly than you would for a fellow Oracle expert. Thanks.


  • Testing performance of queries in MySQL

    - by Unreason
    I am trying to set up a script that would test the performance of queries on a development MySQL server. Here are more details:

    - I have root access
    - I am the only user accessing the server
    - Mostly interested in InnoDB performance
    - The queries I am optimizing are mostly search queries (SELECT ... LIKE '%xy%')

    What I want to do is create a reliable testing environment for measuring the speed of a single query, free from dependencies on other variables. Until now I have been using SQL_NO_CACHE, but sometimes the results of such tests also show caching behaviour - taking much longer to execute on the first run and taking less time on subsequent runs. If someone can explain this behaviour in full detail I might stick to using SQL_NO_CACHE; I do believe that it might be due to the file system cache and/or caching of the indexes used to execute the query, as this post explains. It is not clear to me when the buffer pool and key buffer get invalidated or how they might interfere with testing. So, short of restarting the MySQL server, how would you recommend setting up an environment that would be reliable in determining whether one query performs better than the other?
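    Note that SQL_NO_CACHE only bypasses the MySQL query cache; the InnoDB buffer pool and the OS file system cache stay warm, which is exactly why the first run is slower than the rest. A practical compromise is to time each run separately and treat run 0 as the cold-cache figure. A sketch of such a harness in Java/JDBC (connection details and table names are placeholders):

        import java.sql.Connection;
        import java.sql.DriverManager;
        import java.sql.PreparedStatement;
        import java.sql.ResultSet;

        public class QueryTimer {
            public static void main(String[] args) throws Exception {
                String sql = "SELECT SQL_NO_CACHE * FROM entities WHERE name LIKE ?";
                try (Connection conn = DriverManager.getConnection(
                        "jdbc:mysql://localhost/dev", "root", "secret")) {
                    for (int run = 0; run < 6; run++) {
                        long t0 = System.nanoTime();
                        int rows = 0;
                        try (PreparedStatement ps = conn.prepareStatement(sql)) {
                            ps.setString(1, "%xy%");
                            try (ResultSet rs = ps.executeQuery()) {
                                while (rs.next()) rows++;   // drain the result set fully
                            }
                        }
                        long ms = (System.nanoTime() - t0) / 1_000_000;
                        // run 0 is the cold run (file system cache + buffer pool empty);
                        // report it separately from the warm runs
                        System.out.printf("run=%d rows=%d time=%dms%n", run, rows, ms);
                    }
                }
            }
        }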


  • Adding time to a timer/counter

    - by BoneStarr
    I've looked all over the web, and everyone can teach you how to make a timer for your game or a countdown, but I can't seem to find out how to add time to an already-counting timer. So here is my counter class:

        package
        {
            import flash.display.MovieClip;
            import flash.display.Stage;
            import flash.text.TextField;
            import flash.events.Event;
            import flash.utils.Timer;
            import flash.events.TimerEvent;

            public class Score extends MovieClip
            {
                public var second:Number = 0;
                public var timer:Timer = new Timer(100);
                private var stageRef:Stage;

                public function Score(stageRef:Stage)
                {
                    x = 560.95;
                    y = 31.35;
                    this.stageRef = stageRef;
                    timer.addEventListener(TimerEvent.TIMER, scoreTimer);
                    timer.start();
                }

                public function scoreTimer(evt:TimerEvent):void
                {
                    second += 1;
                    scoreDisplay.text = String("Score: " + second);
                }
            }
        }

    That works without any issues or problems and just keeps counting upwards, ticking every 100 ms. What I want to know is how to add, say, 30 seconds if something happens in my game - say you kill an enemy, for example. Please help!
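    The usual pattern, in any language, is to keep the elapsed (or remaining) count in a field that game events can mutate while the timer keeps firing at its fixed rate. A small Java sketch of the same idea (names are illustrative, not ActionScript API):

        import java.util.Timer;
        import java.util.TimerTask;

        public class Score {
            private volatile int tenths = 0;          // 100 ms ticks, like Timer(100) above
            private final Timer timer = new Timer(true);

            public Score() {
                timer.scheduleAtFixedRate(new TimerTask() {
                    @Override public void run() {
                        tenths++;                     // the regular tick
                        System.out.println("Score: " + tenths);
                    }
                }, 100, 100);
            }

            /** Call from a game event (e.g. an enemy killed) to jump the counter forward. */
            public void addSeconds(int seconds) {
                tenths += seconds * 10;               // the running timer is untouched
            }
        }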


  • Zend_Form using subforms getValues() problem

    - by wiseguydigital
    Hi all, I am building a form in Zend Framework 1.9 using subforms, with Zend_JQuery enabled on those forms. The form itself is fine and all the error checking etc. is working as normal. But the issue I am having is that when I try to retrieve the values in my controller, I receive just the form entry for the last subform. E.g., my master form class (abbreviated for speed):

        class Master_Form extends Zend_Form
        {
            public function init()
            {
                ZendX_JQuery::enableForm($this);
                $this->setAction('actioninhere')
                     ...
                     ->setAttrib('id', 'mainForm');

                $sub_one = new Form_One();
                $sub_one->setDecorators(... in here I add the jQuery as per the docs);
                $this->addSubForm($sub_one, 'form-one');

                $sub_two = new Form_Two();
                $sub_two->setDecorators(... in here I add the jQuery as per the docs);
                $this->addSubForm($sub_two, 'form-two');
            }
        }

    So that all works as it should in the display, and when I submit without filling in the required values, the correct errors are returned. However, in my controller I have this:

        class My_Controller extends Zend_Controller_Action
        {
            public function createAction()
            {
                $request = $this->getRequest();
                $form = new Master_Form();
                if ($request->isPost()) {
                    if ($form->isValid($request->getPost())) {
                        // This is where I am having the problems
                        print_r($form->getValues());
                    }
                }
            }
        }

    When I submit this and it gets past isValid(), $form->getValues() is only returning the elements from the second subform, not the entire form.


  • General Address Parser for Freeform Text

    - by Daemonic
    We have a program that displays map data (think Google Maps, but with much more interactivity and custom layers for our clients). We allow navigation via a set of combo boxes that prefill certain fields with a bunch of data (i.e. Country: select Canada and the Province field is filled in; select Ontario and a list of Counties/Regions is filled in; select a county/region and a city is filled in, etc.). While this guarantees accurate addresses, it's a pain for the users if they don't know where a street address or a city is located (e.g. which county/region is Kitchener in?). So we are looking at trying to do an address parser with a freeform text field. The user could enter something like this (similar to Google Maps, Bing Maps, etc.):

        22 Main St, Kitchener, On

    and we could compartmentalize it into sections, do lookups on the data, and get to the point they are looking for (or suggest alternatives). The problem with this is: how do we properly compartmentalize the information? How do we break up the sections and find possible matches? I'm guessing we wouldn't be guaranteed that the user would enter data in a format we expect (obviously). A follow-up to this would be how to present the data if we don't find an exact match (or find multiple exact matches - two cities with the same street name in different counties, for example). We have a ton of data available in the mapping data (MapInfo TAB format, mostly), so we can do quick scans of street names, cities, states, etc. But I'm not sure about the best way to approach this problem. Sure, using Google Maps would be nice, but most of our clients are in closed networks where outside access is not usually allowed, and most aren't willing to rely on Google Maps (since it doesn't contain as much information as they need, such as custom map layers). They could, obviously, go to Google, get the proper location and then move to our software, but this would be time-consuming, and speed of the process can be quite important.
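    A common first pass is to split on commas, anchor on the most reliable token - a known state/province code at the end - and then resolve the city and street against lookup tables, collecting every candidate rather than demanding a single exact match. A toy sketch of that idea (the gazetteer here is made up; in practice it would be built from the mapping data mentioned above):

        import java.util.Arrays;
        import java.util.List;
        import java.util.Map;
        import java.util.Set;

        public final class AddressGuess {
            // toy gazetteer; real entries would come from the map data
            static final Set<String> PROVINCES = Set.of("ON", "QC", "BC", "AB");
            static final Map<String, List<String>> CITY_TO_REGIONS =
                Map.of("KITCHENER", List.of("Waterloo Region, ON"));

            public static void parse(String freeform) {
                String[] parts = freeform.toUpperCase().split("\\s*,\\s*");
                int last = parts.length - 1;

                // anchor 1: a trailing province/state code
                String province = PROVINCES.contains(parts[last]) ? parts[last--] : null;
                // anchor 2: a city name found in the gazetteer
                List<String> regions = last >= 0 ? CITY_TO_REGIONS.get(parts[last]) : null;
                // whatever precedes the city is treated as the street portion
                String street = last > 0 ? String.join(", ", Arrays.copyOf(parts, last)) : null;

                System.out.printf("street=%s city=%s province=%s candidateRegions=%s%n",
                    street, last >= 0 ? parts[last] : null, province, regions);
            }

            public static void main(String[] args) {
                parse("22 Main St, Kitchener, On");
                // -> street=22 MAIN ST city=KITCHENER province=ON candidateRegions=[Waterloo Region, ON]
            }
        }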


  • Copy Small Bitmaps on to Large Bitmap with Transparency Blend: What is faster than graphics.DrawImage?

    - by Glenn
    I have identified this call as a bottleneck in a high-pressure function:

        graphics.DrawImage(smallBitmap, x, y);

    Is there a faster way to blend small semi-transparent bitmaps into a larger semi-transparent one? Example usage:

        XY[] locations = GetLocs();
        Bitmap[] bitmaps = GetBmps(); // small images, sizes vary, approx 30px x 30px

        using (Bitmap large = new Bitmap(500, 500, PixelFormat.Format32bppPArgb))
        using (Graphics largeGraphics = Graphics.FromImage(large))
        {
            for (var i = 0; i < largeNumber; i++)
            {
                // this is the bottleneck
                largeGraphics.DrawImage(bitmaps[i], locations[i].x, locations[i].y);
            }
        }

        var done = new MemoryStream();
        large.Save(done, ImageFormat.Png);
        done.Position = 0;
        return (done);

    The DrawImage calls take small 32bppPArgb bitmaps and copy them into a larger bitmap, at locations that vary, and the small bitmaps might only partially overlap the larger bitmap's visible area. Both images have semi-transparent contents that get blended by DrawImage in a way that is important to the output. I've done some testing with BitBlt but haven't seen a significant speed improvement, and the alpha blending didn't come out the same in my tests. I'm open to just about any method, including a better call to BitBlt or unsafe C# code.

