Search Results

Search found 2899 results on 116 pages for 'rate limiting'.

Page 96/116

  • iPhone Audio Queue Service sample units

    - by pion
    I am looking at the Audio Queue Services documentation, specifically at the following code:

        // Writing an audio queue buffer to disk
        AudioFileWritePackets (              // 1
            pAqData->mAudioFile,             // 2
            false,                           // 3
            inBuffer->mAudioDataByteSize,    // 4
            inPacketDesc,                    // 5
            pAqData->mCurrentPacket,         // 6
            &inNumPackets,                   // 7
            inBuffer->mAudioData             // 8
        );

    inBuffer->mAudioDataByteSize is the number of bytes of audio data being written. inBuffer->mAudioData is the new audio data to write to the audio file. Assuming the sample rate is 44100:

        AudioStreamBasicDescription mDataFormat;
        mDataFormat.mSampleRate = 44100.0f;
        mDataFormat.mBitsPerChannel = 16;
        ...
        NSInteger numberSamples = inBuffer->mAudioDataByteSize / 2;
        SInt16 *audioSample = (SInt16 *)inBuffer->mAudioData;

    I use core-plot to plot the above, where the x axis is the sample number [1 .. numberSamples] and the y axis is audioSample[0] .. audioSample[numberSamples]. I can see the chart in "real time", and the y axis goes up and down depending on the loudness of my voice.

    Beginner questions: What does audioSample represent? What am I looking at here? What is the unit of audioSample? What do I need to do if I just want to plot the range between 50 and 100 Hz?

    Thanks in advance for your help.
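    The audioSample values here are 16-bit signed linear PCM amplitudes (-32768 to 32767), so the plot is a time-domain waveform with no frequency information per point; isolating 50-100 Hz requires a frequency-domain step such as an FFT or a band-pass filter. Below is a minimal sketch of that band extraction in Python with NumPy, purely for illustration (the function name and buffer handling are assumptions, not part of Audio Queue Services):

        import numpy as np

        SAMPLE_RATE = 44100

        def band_magnitudes(samples, lo=50.0, hi=100.0):
            """samples: one buffer of SInt16 values.
            Returns (frequencies, magnitudes) restricted to the lo..hi Hz band."""
            spectrum = np.fft.rfft(samples)
            freqs = np.fft.rfftfreq(len(samples), d=1.0 / SAMPLE_RATE)
            mask = (freqs >= lo) & (freqs <= hi)
            return freqs[mask], np.abs(spectrum[mask])

        # Note: a buffer of N samples only resolves SAMPLE_RATE / N Hz per bin,
        # so covering 50-100 Hz cleanly needs buffers of several thousand samples.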

    Read the article

  • Silverlight async calls and anonymous methods....

    - by JLewis
    I know there are a couple of debates on this kind of thing... At any rate, I have several cases where I need to populate combobox items based on enumerations returned from the WCF service. In an effort to keep code clean, I started this approach. After looking into it more, I don't think this works as well as initially thought... I am throwing this out to get recommendations/advice/code snippets on how you would do this or how you currently do this. I may be forced to have a separate, non-anonymous method. I hate doing that for something like this, but at the moment I don't see it working another way...

        EventHandler<GetEnumerationsForTypeCompletedEventArgs> ev = null;
        ev = delegate(object eventSender, GetEnumerationsForTypeCompletedEventArgs eventArgs)
        {
            if (eventArgs.Error == null)
            {
                //comboBox.ItemsSource = eventArgs.Result;
                // populate combobox for display purposes (for now)
                foreach (Enumeration e in eventArgs.Result)
                {
                    ComboBoxItem cbi = new ComboBoxItem();
                    cbi.Content = e.EnumerationValueDisplayed;
                    comboBox.Items.Add(cbi);
                }
                // remove event so we don't keep adding new events each time we need an enumeration
                proxy.GetEnumerationsForTypeCompleted -= ev;
            }
        };
        proxy.GetEnumerationsForTypeCompleted += ev;
        proxy.GetEnumerationsForTypeAsync(sEnumerationType);

    Basically, in this example we use ev to hold the anonymous method so we can then use ev from within the method to remove it from the event once called. This prevents the method from getting called more than one time. I suspect that the comboBox local var declared before this call, but within the same method, is not always the combobox originally intended, but I can't really verify that yet. I may add a tag to it to do some tests and populating to verify. Sorry if this is not clear. I can elaborate more if needed. Thanks, Jeff

    Read the article

  • Finding most efficient transmission size in varying network latency scenarios

    - by rwmnau
    I'm building a .NET remoting client/server that will be transmitting thousands of files of varying sizes (everything from a few bytes to hundreds of MB), and I'm curious about a general method for finding the appropriate transmission size. As I see it, there's the following tradeoff:

    Serialize the entire file into a transmission object and transmit it at once, regardless of size. This would be the fastest, but a failure during transmission requires that the whole file be re-transmitted.

    If the file size is larger than something small (like 4KB), break it into 4KB chunks and transmit those, re-assembling on the server. In addition to the complexity of this, it's slower because of continued round-trips and acknowledgements, though a failure of any one piece doesn't waste much time.

    The ideal transmission method (when taking into account negotiation latency vs. failure rate) is somewhere in between, and I'm wondering how to find the best size for a particular client. Do I have some dynamic tuning step in my transmission that looks at the current bytes/second average, and then raises the transmission size until the speed starts to drop (failures overwhelm negotiation cost)? Or is there some other method for determining the ideal transmission size? The application will be multi-threaded, so the number of threads also factors into the calculation. I'm not looking for a formula (though I'll take one if you've got it), but just what to consider as I create this process.
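    One way to picture the "dynamic tuning step" described above is a grow-while-it-helps, back-off-on-failure loop over the chunk size. The sketch below is illustrative only and assumes a hypothetical send_chunk(bytes) helper that returns True on success; it is not .NET remoting code:

        import time

        def tune_chunk_size(send_chunk, data, start=4096, max_size=1 << 20):
            """Grow the chunk while throughput improves, back off on failure.
            (A real implementation would also cap retries on repeated failure.)"""
            size, best_rate, offset = start, 0.0, 0
            while offset < len(data):
                chunk = data[offset:offset + size]
                began = time.monotonic()
                if send_chunk(chunk):
                    offset += len(chunk)
                    rate = len(chunk) / max(time.monotonic() - began, 1e-6)
                    if rate >= best_rate:
                        best_rate = rate
                        size = min(size * 2, max_size)   # still improving: grow
                    else:
                        size = max(size // 2, start)     # throughput dropped: shrink
                else:
                    size = max(size // 2, start)         # failed chunk: retry smaller
            return size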

    Read the article

  • SVG animation along path with Raphael

    - by Toby Hede
    I have a rather interesting issue with SVG animation. I am animating along a circular path using Raphael:

        obj = canvas.circle(x, y, size);
        path = canvas.circlePath(x, y, radius);
        path = canvas.path(path); //generate path from path value string
        obj.animateAlong(path, rate, false);

    The circlePath method is one I have created myself to generate the circle path in SVG path notation:

        Raphael.fn.circlePath = function(x, y, r) {
            var s = "M" + x + "," + (y-r) + "A"+r+","+r+",0,1,1,"+(x-0.1)+","+(y-r)+" z";
            return s;
        }

    So far, so good. This all works: I have my object (obj) animating along the circular path. BUT: the animation only works if I create the object at the same X, Y coords as the path itself. If I start the animation from any other coordinates (say, half-way along the path), the object animates in a circle of the correct radius, but it starts the animation from the object's X, Y coordinates rather than along the path as it is displayed visually.

    Ideally I would like to be able to stop/start the animation - the same problem occurs on restart. When I stop and then restart the animation, it animates in a circle starting from the stopped X, Y.

    Read the article

  • C# Thread Pool Cross-Thread Communication

    - by Goober
    The Scenario: I have a Windows Forms application containing a MAINFORM with a listbox on it. The MAINFORM also has a THREAD POOL that creates new threads and fires them off to do lots of different bits of processing. Whilst each of these worker threads is doing its job, I want to report its progress back to my MAINFORM; however, I can't, because that requires cross-thread communication.

    Progress: So far all of the tutorials etc. that I have seen relating to this topic involve custom(ish) threading implementations, whereas I literally have a fairly basic(ish) standard THREAD POOL implementation. Since I don't really want to modify any of my code (the application runs like a beast with no qualms), I'm after some advice as to how I can go about doing this cross-thread communication. ALTERNATIVELY - how to implement a different "LOGTOSCREEN" method altogether (obviously still bearing in mind the cross-thread communication thing).

    WARNING: I use this website at work, where we are locked down to IE6 only, and the javascript thus fails, meaning I cannot click accept on any answers during work, and thus my acceptance rate is low. I can't do anything about it I'm afraid, sorry.

    EDIT: I DO NOT HAVE INSTALL RIGHTS ON MY COMPUTER AT WORK. I do have Firefox, but the proxy at work fails when using this site on Firefox.

    FURTHER EDIT: I DO NOT WANT TO CHANGE MY THREADING IMPLEMENTATION. AT ALL! - Except to enable cross-thread communication... why would a BackgroundWorker help here!?
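    For context, the usual WinForms answer is to marshal the update back onto the UI thread (e.g. via Control.Invoke/BeginInvoke) rather than touching the listbox from a worker. The sketch below shows the same idea in a language-agnostic way - workers push messages onto a thread-safe queue and the UI side drains it on its own thread - written in Python purely for illustration; the names (worker, drain_queue) are hypothetical, not part of the questioner's code:

        import queue
        import threading

        messages = queue.Queue()   # shared, thread-safe

        def worker(job_id):
            # ... do the real processing off the UI thread ...
            messages.put(f"job {job_id} finished")   # never touch the UI here

        def drain_queue():
            # Called periodically on the UI thread (e.g. a timer tick):
            # pull any pending messages and append them to the listbox equivalent.
            while not messages.empty():
                print(messages.get_nowait())          # stand-in for LOGTOSCREEN

        for i in range(4):
            threading.Thread(target=worker, args=(i,)).start()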

    Read the article

  • SVM Classification - minimum number of input sets for each class

    - by Amol Joshi
    I'm trying to build an app to detect images which are advertisements on webpages. Once I detect those, I'll not be allowing them to be displayed on the client side. From the help that I got here on Stack Overflow, I thought SVM is the best approach for my aim. So, I have coded the SVM and an SMO myself. The dataset which I got from the UCI data repository has 3280 instances (link to dataset: http://archive.ics.uci.edu/ml/datasets/Internet+Advertisements), where around 400 of them are from the class representing advertisement images and the rest of them represent non-advertisement images.

    Right now I'm taking the first 2800 input sets and training the SVM. But after looking at the accuracy rate I realised that most of those 2800 input sets are from the non-advertisement image class, so I'm getting very good accuracy for that class. So what can I do here? About how many input sets shall I give to the SVM to train, and how many of them for each class?

    Thanks. Cheers. (Basically made a new question because the context was different from my previous question: http://stackoverflow.com/questions/1991113/optimization-of-neural-network-input-data)
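    With roughly 400 ad examples against ~2900 non-ads, taking the first 2800 rows trains on an extreme class imbalance; a common fix is to train on a class-balanced subset (or weight the minority class more heavily) and keep a held-out test set. A small illustrative sketch of the balanced-subset idea, with made-up helper names:

        import random

        def balanced_subset(samples, labels, per_class, seed=0):
            """Pick up to per_class examples from each class so the SVM cannot
            score well by always predicting the majority (non-ad) class."""
            random.seed(seed)
            by_class = {}
            for x, y in zip(samples, labels):
                by_class.setdefault(y, []).append(x)
            subset = []
            for y, xs in by_class.items():
                random.shuffle(xs)
                subset.extend((x, y) for x in xs[:per_class])
            random.shuffle(subset)
            return subset

        # e.g. train on ~300 ads and ~300 non-ads, evaluate on the rest, and
        # report per-class accuracy rather than one overall number.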

    Read the article

  • Use Session state within pLinq queries

    - by Dima
    Hi, I have a fairly simple LINQ query (simplified code):

        Dim x = From Product In lstProductList.AsParallel
                Order By Product.Price.GrossPrice Descending
                Select Product

    Product is a class. Product.Price is a child class and GrossPrice is one of its properties. In order to work out the price I need to use Session("exchange_rate"). So for each item in lstProductList there's a function that does the following:

        NetPrice = NetPrice * Session("exchange_rate")

    (and then GrossPrice returns NetPrice + VatAmount). No matter what I've tried, I cannot access session state. I have tried HttpContext.Current - but that returns Nothing. I've tried Implements IRequiresSessionState on the class (which helps in a similar situation in generic HTTP handlers [.ashx]) - no luck. I'm using simple InProc session state mode. The exchange rate has to be user specific. What can I do?

    I'm working with: web development, .NET 4, VB.NET. Step by step:

        page_load (in .aspx)
            dim objSearch as new SearchClass()
            dim output = objSearch.renderProductsFound()
        then in objSearch.renderProductsFound:
            lstProductList.Add(objProduct(1))
            ...
            lstProductList.Add(objProduct(n))
            dim x = From Product In lstProductList.AsParallel Order By Product.Price.GrossPrice Descending Select Product
        In Product.Price.GrossPrice Get: return me.NetPrice + me.VatAmount
        In Product.Price.NetPrice Get: return NetBasePrice * Session("exchange_rate")

    Again, simplified code - too much to paste in here. It works fine if I unwrap the query into For loops.
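    HttpContext.Current (and therefore Session) is tied to the request thread, so it is typically Nothing on the PLINQ worker threads; the usual workaround is to read Session("exchange_rate") once, before the query, and let the parallel work use the captured value. A language-agnostic sketch of that capture pattern (Python here purely for illustration; the dictionaries stand in for the session and product objects):

        from concurrent.futures import ThreadPoolExecutor

        session = {"exchange_rate": 1.27}                      # hypothetical request context
        products = [{"net": 10.0, "vat": 2.0}, {"net": 5.0, "vat": 1.0}]

        exchange_rate = session["exchange_rate"]               # read ONCE on the request thread

        def gross_price(p):
            # Use the captured value inside the parallel work instead of
            # reaching back into thread-bound session state.
            return p["net"] * exchange_rate + p["vat"]

        with ThreadPoolExecutor() as pool:
            ordered = sorted(pool.map(gross_price, products), reverse=True)
        print(ordered)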

    Read the article

  • Smoothing touch-based animation in iPhone OpenGL?

    - by quixoto
    I know this is vague, but I'm looking for general tips/help on this, as it's not an area of significant expertise for me. I have some iPhone code that's basically an EAGL view handling a single touch. The app draws (using GL) a circle via triangle fan at the touch point, moves it when the user moves the touch point, and then re-renders the view.

    When dragging a finger slowly, the circle keeps up and stays consistent with the finger as it moves. If I scribble my finger quickly back and forth across the screen, the rendering doesn't keep up with the touch motion, so you see an optical illusion of "multiple" discrete circles on the screen "at once" (the normal persistence-of-vision illusion). This optical illusion is jarring.

    How can I make this look more natural? Can I blur the motion of the circle somehow? Is this result evidence of some bad frame rate issue? I see this artifact even when nothing else is being rendered, so I think this might just be as fast as we can go. Any hints or suggestions? Much appreciated. Thanks.
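    One low-effort trick (an assumption on my part, not something from the question) is to interpolate the drawn position toward the latest touch point each frame, or to draw intermediate circles between successive touch points, so fast scribbles read as a continuous sweep instead of isolated pops. A toy sketch of per-frame exponential smoothing, names hypothetical:

        def smooth_toward(current, target, alpha=0.35):
            """Move the drawn position a fraction of the way toward the latest
            touch point every frame; large jumps get spread over several frames."""
            cx, cy = current
            tx, ty = target
            return (cx + alpha * (tx - cx), cy + alpha * (ty - cy))

        pos = (0.0, 0.0)
        for touch in [(100.0, 20.0), (300.0, 40.0), (60.0, 200.0)]:
            pos = smooth_toward(pos, touch)   # one iteration per rendered frame
            print(pos)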

    Read the article

  • Adding multiple star rating widgets to one page

    - by diglettpotato
    I am using this jQuery plugin for star ratings: http://orkans-tmp.22web.net/star_rating/ Everything is going great, including using AJAX, etc. The problem is that I need to have many per page and can't quite figure out how to work with the selectors to achieve this. I think this is probably very easy for someone who knows their jQuery... Here's some code below. There are multiple kids per page, and I need one rating widget for each kid. I can set up each #rat to be unique with a kid attached to it, like #1rat, #2rat, etc. But then how would I set up the function to run on each of them independently?

        <script type="text/javascript">
        $(function(){
            var kidval = 1;   // this needs to be set up for each kid... not just 1
            var uidval = 1;
            $('#rat').children().not("select, #rating_title").hide();
            var $caption = $('<div id="caption"/>');
            $('#rat').stars({
                inputType: "select",
                oneVoteOnly: false,
                callback: function(ui, type, value) {
                    $("#"+kidval+"messages").text("Saving...").fadeIn(30);
                    $.post("vote.php", {rate: value, kid: kidval, uid: uidval}, function(json) {
                        $("#"+kidval+"messages").text("Average");
                        ui.select(Math.round(json.avg));
                        $caption.text(" (" + json.votes + " votes, " + json.avg + " avg)");
                        $("#"+kidval+"messages").text("Rated " + value + " Thanks!").stop().css("opacity", 1).fadeIn(30);
                        setTimeout(function(){ $("#"+kidval+"messages").fadeOut(1000) }, 2000);
                    }, "json");
                }
            });
            $('#rat').stars("selectID", -1);
            $caption.appendTo('#rat');
            $('<div id="'+kidval+'messages"/>').appendTo('#rat');
        });
        </script>

    Read the article

  • How Can I Prevent Memory Leaks in IE Mobile?

    - by Jake Howlett
    Hi All, I've written an application for use offline (with Google Gears) on devices using IE Mobile. The devices are experiencing memory leaks at such a rate that they become unusable over time.

    The problem page fetches entries from the local Gears database and renders a table of each entry, with a link in the last column of each row to open the entry (the link is just onclick="open('myID')"). When they're done with the entry they return to the table, which is re-rendered. It's the repeated building of this table that appears to be the problem - mainly the onclick events. The table is generated, in essence, like this:

        var tmp = "";
        for (var i = 0; i < 100; i++){
            tmp += "<tr><td>row " + i + "</td><td><a href=\"#\" id=\"LINK-" + i + "\"" +
                   " onclick=\"afunction();return false;\">link</a></td></tr>";
        }
        document.getElementById('view').innerHTML = "<table>" + tmp + "</table>";

    I've read up on common causes of memory leaks and tried setting the onclick event for each link to "null" before re-rendering the table, but it still seems to leak. Anybody got any ideas? In case it matters, the function being called from each link looks like this:

        function afunction(){
            document.getElementById('view').style.display = "none";
        }

    Would that constitute a circular reference in any way? Jake

    Read the article

  • Best full text search for mysql?

    - by ConroyP
    We're currently running MySQL on a LAMP stack and have been looking at implementing a more thorough full-text search on our site. We've looked at MySQL's own full-text search, but it doesn't seem to cope well with large databases, which makes it far too slow for our needs.

    Our main requirements are speed returning results and simple updating of the index. In addition, our "nice to have"s are: ideally not something that requires adding a module to MySQL, and something that plays nicely with PHP (the majority of our dev work is done using PHP).

    There seem to be quite a few healthy open-source projects that add fast, reliable full-text search to MySQL, so I'm basically looking for recommendations/suggestions on the most useful product out there, the easiest to set up, etc. So far, the ones we've been starting to play around with are:
    - Sphinx: C++ based, used by craigslist, thepiratebay
    - Lucene: Java-based Apache project, powers zeoh.com and zoomf.com
    - Solr: Java-based offshoot of Lucene, used to power searches on Digg, CNet & AOL Channels

    Are there any better ones out there that we haven't come across yet? Can you recommend / suggest against any of the options we've gathered so far? Thanks for your help!

    Update: @Cletus suggested Google's Custom Search Engine. We recently trialled this on a couple of projects, and it's an almost-perfect fit for our needs. The problem is that entries on our site are updated quite regularly, and unfortunately the speed at which entries go in/get updated in Google's index was just too slow and erratic for us to rely on, even with the addition of sitemaps and requested crawl rate changes.

    Read the article

  • .NET Test Harness what should it have

    - by Conor
    Hi Folks, we have a software house developing code for us on a project - a .NET web service (WCF) - and we are also paying for a test harness to be built as a separate billable task on a daily rate. I have just joined the company, am reviewing what we are getting from the software house, and wanted to know what you guys in industry think about it.

    Basically what we got was a WinForm that calls the web service, with an input area (Web Service Request) to drop our XML into, a Submit button, and a response area for the result of the web response - and that's it. Our internal BA has created all the XML request documents, so there was no logic put into the harness around this.

    Looking on the net for a definition of a test harness I got this: http://en.wikipedia.org/wiki/Test_harness It states a test harness should do these three things:
    - Automate the testing process.
    - Execute test suites of test cases.
    - Generate associated test reports.

    Clearly we have got none of this, apart from a partial "automate the testing process" via a WinForm. From my development background, I would have expected someone to produce a WinForm as a test harness five years ago; today they really should be using some sort of tooling around this. I explicitly told the software house I expected some sort of tooling (NUnit, NBUnit, SoapUI) so we could create a regression test pack for future use. [Didn't get it, but I asked for this after the requirements were signed off, as I wasn't employed then :)]

    Would someone be able to clarify whether my requirement here is unrealistic? I know if I did this myself, I would use NUnit and TDD and then reuse the test harness as a regression test pack in future. I am interested to see what the community thinks. Cheers

    Read the article

  • How can I log and retrieve error messages from a client-side desktop app?

    - by KeyboardMonkey
    Update: the service-based answers below are most likely the way to go; I am also curious to see if there are any out-of-the-box solutions anyone has tried in the field.

    Our system uses a client-server architecture, and with more clients using it I'm thinking of better ways to log client application errors and get them sent to us. Currently we just show a simple error message with a button that preps an email (using the default system email client), and the clients send this on to our support address. This contains extra info like the stack trace. We also tried saving errors to a network share in the company, but I'm not too keen on that archaic solution either. Now, there are only two businesses that refer to clients as users, and I'm sure some of ours support both lifestyles, as they just ignore the email button and send a full screenshot wrapped nicely in a Word document.

    Some factors I'm thinking of include:
    - A solution to log errors, like the contrived one above.
    - A robust solution; logging to a SQL database won't work - if that fails too, then what?
    - It is at least semi-automated, preferably to the point where the logs reach my side.
    - It copes with load; our client base is growing, and the current solution (and our inboxes) won't hold up.
    - Minimise installing extra 3rd-party components on clients; I want to keep the SPOF to a minimum.

    I'd love to hear about any experience or suggestions you have on how I can implement such a solution.

    System details: it's a Microsoft .NET 2 based system with a SQL backend. Some users work remotely over the net, so network shares aren't always available (unless they VPN, which is awesomely slow at any rate). We have users across different companies; their DBs are hosted on-site. We have remote access to 90% of them.
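    One common shape for this (a sketch under my own assumptions, not a description of the questioner's system) is a tiny reporting client that POSTs each error to a central HTTP endpoint and spools to a local file when the network or VPN is unavailable, retrying later. Illustrated in Python with the standard library only; the endpoint URL and file name are made up:

        import json, time, urllib.request

        ENDPOINT = "https://support.example.com/api/client-errors"   # hypothetical
        SPOOL = "error_spool.jsonl"

        def report_error(message, stack_trace):
            record = {"ts": time.time(), "message": message, "stack": stack_trace}
            body = json.dumps(record).encode("utf-8")
            req = urllib.request.Request(ENDPOINT, data=body,
                                         headers={"Content-Type": "application/json"})
            try:
                urllib.request.urlopen(req, timeout=5)
            except OSError:
                # Offline or server unreachable: spool locally, re-send on next run.
                with open(SPOOL, "a", encoding="utf-8") as f:
                    f.write(json.dumps(record) + "\n")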

    Read the article

  • Simulating Google Appengine's Task Queue with Gearman

    - by sotangochips
    One of the characteristics I love most about Google's Task Queue is its simplicity. More specifically, I love that it takes a URL and some parameters and then posts to that URL when the task queue is ready to execute the task. This structure means that the tasks are always executing the most current version of the code. Conversely, my Gearman workers all run code within my Django project - so when I push a new version live, I have to kill off the old worker and run a new one so that it uses the current version of the code. My goal is to have the task queue be independent from the code base so that I can push a new live version without restarting any workers. So, I got to thinking: why not make tasks executable by URL, just like the Google App Engine task queue?

    The process would work like this:
    - A user request comes in and triggers a few tasks that shouldn't be blocking.
    - Each task has a unique URL, so I enqueue a Gearman task to POST to the specified URL.
    - The Gearman server finds a worker and passes the URL and POST data to the worker.
    - The worker simply posts to the URL with the data, thus executing the task.

    Assume the following:
    - Each request from a Gearman worker is signed somehow so that we know it's coming from a Gearman server and not a malicious request.
    - Tasks are limited to run in less than 10 seconds (there would be no long tasks that could time out).

    What are the potential pitfalls of such an approach? Here's one that worries me: the server can potentially get hammered with many requests all at once that are triggered by a previous request. So one user request might entail 10 concurrent HTTP requests. I suppose I could have a single worker with a sleep before every request to rate-limit. Any thoughts?
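    On the rate-limiting worry at the end: rather than a single worker with a fixed sleep, each worker can apply a token bucket so bursts triggered by one user request get spread out while overall throughput stays high. A rough sketch of that idea (the worker function and limits are assumptions, not Gearman API):

        import time, urllib.request

        class TokenBucket:
            """Allow roughly `rate` requests per second, with bursts up to `burst`."""
            def __init__(self, rate, burst):
                self.rate, self.capacity = rate, burst
                self.tokens, self.stamp = float(burst), time.monotonic()

            def acquire(self):
                while True:
                    now = time.monotonic()
                    self.tokens = min(self.capacity,
                                      self.tokens + (now - self.stamp) * self.rate)
                    self.stamp = now
                    if self.tokens >= 1:
                        self.tokens -= 1
                        return
                    time.sleep((1 - self.tokens) / self.rate)

        bucket = TokenBucket(rate=5, burst=10)   # tune to what the web tier can absorb

        def run_task(url, post_data):
            # Each worker calls this before hitting the task URL, so ten tasks
            # queued by one user request cannot all land on the server at once.
            bucket.acquire()
            urllib.request.urlopen(url, data=post_data, timeout=10)   # post_data as bytes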

    Read the article

  • What does SQL Server's BACKUPIO wait type mean?

    - by solublefish
    I'm using SQL Server 2008 ("R1"), with some maintenance plans that back up my databases to a network share. Some of my backup jobs show long waits of type "BACKUPIO". Of course it seems like this is an I/O subsystem limitation, but I'm skeptical. Perfmon stats for I/O on the production (source) server are well within normal trends for that server. The destination server shows a sustained 7 MB/s write rate, which seems incredibly low, even for a slow disk. The network link is gigabit ethernet and nowhere near saturated.

    The few docs I've turned up about BACKUPIO indicate that it's not specifically a wait on I/O, surprisingly enough. This MSFT doc says it's abnormal unless you're using a tape drive, which I'm not, but it doesn't say (or I don't understand) exactly what resource is missing: http://www.docstoc.com/docs/24580659/Performance-Tuning-in-SQL-Server-2005 And this piece says it's not related to I/O performance at all: http://www.informit.com/articles/article.aspx?p=686168&seqNum=5 "Note that BACKUPIO and IO_AUDIT_MUTEX are not related to IO performance."

    Anyway, does anyone know what BACKUPIO actually means and/or what I can do to diagnose or eliminate it?

    Read the article

  • Multiple sendto() using UDP socket

    - by ereOn
    Hi, I have a piece of network software which uses UDP to communicate with other instances of the same program. For various reasons, I must use UDP here. I recently had problems sending huge amounts of data over UDP and had to implement a fragmentation system to split my messages into small data chunks. So far it has worked well, but I now encounter an issue when I have to send a lot of data chunks.

    I have the following algorithm:
    - Split the message into small data chunks (around 1500 bytes).
    - Iterate over the list of data chunks and, for each one, send it using sendto().

    However, when I send a lot of data chunks, the receiver only gets the first 6 messages. Sometimes it misses the sixth and receives the seventh - it depends. In any case, sendto() always indicates success. This always happens when I test my software over a loopback interface (127.0.0.1) but never over my LAN. If I add something like std::cout << "test" << std::endl; between the sendto() calls, then every frame is received.

    I am aware that UDP allows packet loss and that my frames might be lost for many reasons, and I suppose it has to do with the rate at which I am sending the data chunks. What would be the right approach here?
    - Implementing some acknowledgement mechanism (just like TCP) seems overkill.
    - Adding some arbitrary waiting time between the sendto() calls is ugly and will probably decrease performance.
    - Increasing (if possible) the receiver's internal UDP buffer? I don't even know if this is possible.
    - Something else?

    I really need your advice here. Thanks very much.
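    For reference, the receive buffer can indeed be enlarged with SO_RCVBUF, and pacing the sender even slightly keeps loopback bursts from overflowing it between reads. A small sketch of both ideas using Python's socket module (illustrative only - the port, rate, and buffer size are arbitrary choices):

        import socket, time

        # Receiver: ask the OS for a larger kernel receive buffer so bursts of
        # ~1500-byte chunks are not dropped before the application reads them.
        rx = socket.socket(socket.AF_INET, socket.SOCK_DGRAM)
        rx.setsockopt(socket.SOL_SOCKET, socket.SO_RCVBUF, 4 * 1024 * 1024)
        rx.bind(("0.0.0.0", 9999))

        # Sender: pace the chunks instead of blasting them in a tight loop.
        def send_chunks(chunks, addr, per_second=2000):
            tx = socket.socket(socket.AF_INET, socket.SOCK_DGRAM)
            interval = 1.0 / per_second
            for chunk in chunks:              # each chunk <= ~1500 bytes
                tx.sendto(chunk, addr)
                time.sleep(interval)          # crude pacing; a token bucket is smoother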

    Read the article

  • move_uploaded_file() error

    - by James R.
    I have a website on free hosting (000webhost.com) that allows you to upload images. However, when I try to upload an image, I get these errors:

        Warning: move_uploaded_file(images/SmallSmileyFace.jpg) [function.move-uploaded-file]:
        failed to open stream: Permission denied in /home/a6621074/public_html/m/write.php on line 76

        Warning: move_uploaded_file() [function.move-uploaded-file]: Unable to move '/tmp/phpcmW3mo'
        to 'images/SmallSmileyFace.jpg' in /home/a6621074/public_html/m/write.php on line 76

    This is the code:

        if (!empty($_FILES['fileImage']['name'])) {
            // check image type and size
            if ((($imagetype == 'image/gif') || ($imagetype == 'image/jpeg') ||
                 ($imagetype == 'image/pjpeg') || ($imagetype == 'image/png')) &&
                ($imagesize > 0) && ($imagesize <= 32768)) {
                if ($_FILES['fileImage']['error'] == 0) {
                    // move file
                    $target = 'images/' . $image;
                    if (move_uploaded_file($_FILES['fileImage']['tmp_name'], $target)) {
                        $query = "INSERT INTO reviews (post_date, food_name, location, cafeteria, review, image, rating, user_id)
                                  VALUES (NOW(), '$foodname', '$location', '$cafeteria', '$review', '$image', $rate, $id)";
                        mysqli_query($dbc, $query);
                        // confirm success
                        echo '<p>Thank you for your submission!</p>';
                    } else {
                        echo '<p class="errmsg">There was a problem uploading your image.</p>';
                    }
                }
                @unlink($_FILES['fileImage']['tmp_name']);
            } else {
                echo '<p class="errmsg">The screen shot must be a GIF, JPEG, or PNG image file no greater than 32KB in size.</p>';
            }
        }

    Any ideas?

    Read the article

  • WPF performance on scaling a large scene

    - by Mark
    I have a full-screen app that I want to be able to zoom in on in certain areas. I have the code working fine, but I notice that when I get closer in, the zoom-in animation (which animates the ScaleTransform.ScaleX and ScaleTransform.ScaleY properties on a parent canvas) starts to jerk a little and the frame rate suffers. I'm not using any BitmapEffects or anything, and ideally I would like my scene to get more complicated than it currently is. The scene is quite large, 1980x1024; this is a requirement and cannot be changed. The current layout is like this:

        <Canvas x:Name="LayoutRoot">
            <Canvas x:Name="ContainerCanvas">
                <local:MyControl x:Name="c1" />
                <!-- numerous other controls and elements that compose the scene -->
            </Canvas>
        </Canvas>

    The code that zooms in just animates the RenderTransform of the ContainerCanvas, which in turn scales its children, which gives the desired effect. However, I'm wondering if I need to swap out the ContainerCanvas for a ViewBox or something like that? I've never really worked with ViewBox/Viewport controls in WPF before - can they even help me out here? Smooth zooming is a huge requirement of the client and I must get this resolved. All ideas are welcome. Thanks a lot, Mark

    Read the article

  • Record/Playback with AudioQueue on iPhone

    - by Biranchi
    Hi, I am currently using Audio Queues on the iPhone to record and play back audio. What I would like to be able to do is to record some audio, allow the user to pause the record queue, and to seek back and forward through the audio to select a position from which they can start recording again.

    I have got over the seeking issue by making the playback AudioQueueBuffer sizes small enough that the play audio queue callback happens at a rate that allows the user to use a slider control to hear the audio as they adjust the slider back and forth. I think I can achieve recording at a new position by setting the inStartingPacket parameter of the AudioFileWritePackets function that I call from the audio recording queue callback. The trouble is this only inserts audio over the previously recorded audio. The file length obviously doesn't change, so if the user were to go backwards and record less audio than before, the old audio would still remain after the end of the newly recorded audio.

    Is there a way I can get the AudioFile to truncate at the point the user starts to insert the new audio? Is there some other way I can remove the old audio starting at the insert position, or is there a better way of going about this task? Thanks

    Read the article

  • jquery access sibling TD in table

    - by Rob
    I have the following HTML code. What I'm trying to do is to have the div named javaRatingDiv be displayed once the checkbox with the name java is checked. I can't seem to figure out how to navigate to the next TD in a table via jQuery.

        <div id="languages">
            <table style="width:inherit">
                <tr style="height:50px; vertical-align:top">
                    <td>Select the languages that you are familiar with and rate your knowledge:</td>
                </tr>
                <tr>
                    <table style="width:75%;" align="center">
                        <tr id="tableRow">
                            <td id="firstTD"><input type="checkbox" name="java" value="java" />&nbsp;Java</td>
                            <td id="secondTD" style="width:200px;">
                                <div id="javaRatingDiv" style="display:none">
                                    <input name="javaRating" type="radio" value="1" class="star"/>
                                    <input name="javaRating" type="radio" value="2" class="star"/>
                                </div>
                            </td>
                        </tr>
                    </table>
                </tr>
            </table>
        </div>

    Read the article

  • Has the recent version of Subversion dealt with "Access Denied" errors from Windows services that monitor the filesystem?

    - by Eric LaForce
    Does anyone know if this Subversion "bug" has been dealt with? https://svn.apache.org/repos/asf/subversion/tags/1.6.9/www/faq.html#windows-access-denied

    The FAQ entry reads: "I'm getting occasional 'Access Denied' errors on Windows. They seem to happen at random. Why? These appear to be due to the various Windows services that monitor the filesystem for changes (anti-virus software, indexing services, the COM+ Event Notification Service). This is not really a bug in Subversion, which makes it difficult for us to fix. A summary of the current state of the investigation is available here. A workaround that should reduce the incidence rate for most people was implemented in revision 7598; if you have an earlier version, please update to the latest release."

    Currently I am experiencing this same behavior in version 1.5.6 when I try to do an SVN switch (I have suspected McAfee as the culprit for a while, and seeing this validates my suspicions). I read through the link given, but it seems pretty old, so I didn't know if this FAQ is just outdated or the issue has actually been resolved. Thanks for any help.

    Configuration: SVN 1.5.6, TortoiseSVN 1.5.9 Build 15518, Windows XP SP3 32-bit

    Read the article

  • Filtering documents against a dictionary key in MongoDB

    - by Thomas
    I have a collection of articles in MongoDB that has the following structure:

        {
            'category': 'Legislature',
            'updated': datetime.datetime(2010, 3, 19, 15, 32, 22, 107000),
            'byline': None,
            'tags': {
                'party': ['Peter Hoekstra', 'Virg Bernero', 'Alma Smith', 'Mike Bouchard', 'Tom George', 'Rick Snyder'],
                'geography': ['Michigan', 'United States', 'North America']
            },
            'headline': '2 Mich. gubernatorial candidates speak to students',
            'text': [
                'BEVERLY HILLS, Mich. (AP) \u2014 Two Democratic and Republican gubernatorial candidates found common ground while speaking to private school students in suburban Detroit',
                "Democratic House Speaker state Rep. Andy Dillon and Republican U.S. Rep. Pete Hoekstra said Friday a more business-friendly government can help reduce Michigan's nation-leading unemployment rate.",
                "The candidates were invited to Detroit Country Day Upper School in Beverly Hills to offer ideas for Michigan's future.",
                'Besides Dillon, the Democratic field includes Lansing Mayor Virg Bernero and state Rep. Alma Wheeler Smith. Other Republicans running are Oakland County Sheriff Mike Bouchard, Attorney General Mike Cox, state Sen. Tom George and Ann Arbor business leader Rick Snyder.',
                'Former Republican U.S. Rep. Joe Schwarz is considering running as an independent.'
            ],
            'dateline': 'BEVERLY HILLS, Mich.',
            'published': datetime.datetime(2010, 3, 19, 8, 0, 31),
            'keywords': "Governor's Race",
            '_id': ObjectId('4ba39721e0e16cb25fadbb40'),
            'article_id': 'urn:publicid:ap.org:0611e36fb084458aa620c0187999db7e',
            'slug': "BC-MI--Governor's Race,2nd Ld-Writethr"
        }

    If I wanted to write a query that looked for all articles that have at least one geography tag, how would I do that? I have tried writing db.articles.find( {'tags': 'geography'} ), but that doesn't appear to work. I've also thought about changing the search parameter to 'tags.geography', but am having a devil of a time figuring out what the search predicate would be.
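    For what it's worth, {'tags': 'geography'} matches documents whose tags value is (or contains) the string 'geography', which is why it finds nothing here; the dot-notation path 'tags.geography' combined with $exists (and $ne [] to rule out an empty list) expresses "has at least one geography tag". A short pymongo sketch - the client setup and database name are placeholders:

        from pymongo import MongoClient

        db = MongoClient().news   # hypothetical database name

        # 'tags.geography' reaches into the embedded tags document; $exists matches
        # articles that have the key at all, and $ne [] screens out empty lists.
        cursor = db.articles.find({"tags.geography": {"$exists": True, "$ne": []}})

        for article in cursor:
            print(article["headline"])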

    Read the article

  • Bubble Breaker Game Solver better than greedy?

    - by Gregory
    For a mental exercise I decided to try to solve the bubble breaker game found on many cell phones, an example of which is here: Bubble Break Game.

    The rules:
    - A random (N, M, C) board consists of N rows x M columns with C colors.
    - The goal is to get the highest score by picking the sequence of bubble groups that ultimately leads to the highest score.
    - A bubble group is 2 or more bubbles of the same color that are adjacent to each other in either the x or y direction. Diagonals do not count.
    - When a group is picked, the bubbles disappear; any holes are filled with bubbles from above first (i.e. shift down), then any remaining holes are filled by shifting right.
    - A bubble group's score = n * (n - 1), where n is the number of bubbles in the group.

    The first algorithm is a simple exhaustive recursive algorithm which explores going through the board row by row and column by column, picking bubble groups. Once a bubble group is picked, we create a new board and try to solve that board, recursively descending down. Some of the ideas I am using include normalized memoization: once a board is solved, we store the board and the best score in a memoization table.

    I created a prototype in Python which shows that a (2, 15, 5) board takes 8859 boards to solve in about 3 seconds. A (3, 15, 5) board takes 12,384,726 boards in 50 minutes on a server. The solver rate is ~3k-4k boards/sec and gradually decreases as the memoization search takes longer. The memoization table grows to 5,692,482 boards and is hit 6,713,566 times.

    What other approaches could yield high scores besides the exhaustive search? I don't see any obvious way to divide and conquer, but trending towards larger and larger bubble groups seems to be one approach. Thanks to David Locke for posting the paper link which talks about a window solver that uses a constant-depth lookahead heuristic.
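    Since the prototype is in Python, here is a minimal sketch of the group-finding and scoring step the rules above describe (the board representation and helper names are my own, not from the prototype): a flood fill over same-colored orthogonal neighbours, scored as n * (n - 1).

        def find_groups(board):
            """board: list of rows, each a list of color ints (None = empty cell).
            Returns a list of groups, each a list of (row, col) cells, for every
            group of two or more same-colored, orthogonally adjacent bubbles."""
            rows, cols = len(board), len(board[0])
            seen, groups = set(), []
            for r in range(rows):
                for c in range(cols):
                    if board[r][c] is None or (r, c) in seen:
                        continue
                    color, stack, group = board[r][c], [(r, c)], []
                    seen.add((r, c))
                    while stack:
                        cr, cc = stack.pop()
                        group.append((cr, cc))
                        for nr, nc in ((cr - 1, cc), (cr + 1, cc), (cr, cc - 1), (cr, cc + 1)):
                            if (0 <= nr < rows and 0 <= nc < cols
                                    and (nr, nc) not in seen and board[nr][nc] == color):
                                seen.add((nr, nc))
                                stack.append((nr, nc))
                    if len(group) >= 2:
                        groups.append(group)
            return groups

        def group_score(group):
            n = len(group)
            return n * (n - 1)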

    Read the article

  • Flash video slooow in AIR 2 HTMLLoader component

    - by shane
    I am working on a full-screen kiosk application in Flex 4/AIR 2 using Flash Builder 4. We have a company training website which staff can access via the kiosk, and the main content is interactive Flash training videos. Our target machines are by no means 'beefy': they are Atom N270s @ 1.6GHz with 1GB of RAM.

    As it stands, the videos are all but unusable when used from within the AIR application; the application becomes completely unresponsive (100% CPU usage, click events take approx 5-10 seconds to register). So far I have tried:
    - Increasing the default frame rate from 24fps to 60 (nativeWindow.stage.frameRate = 60;). No improvement.
    - Running the videos in a stripped-down version of my app - just a full-screen HTMLLoader component pointed at the training website. No better than before.
    - Disabling hyper-threading. The Atom CPU is split into two virtual cores, and the AIR app was only able to use one thread, so it maxed out at 50% CPU usage. Since the kiosk will only run the AIR app, I am happy to lose hyper-threading to increase the performance of the AIR app. Marginal improvement.

    The same website with the same videos is responsive if viewed in IE7 on the same machine, although Internet Explorer takes advantage of the CPU's hyper-threading. The Flash videos are built with Adobe Captivate and, from what I understand, employ JavaScript to relay results back to the server. I will add more information about the video content asap, as the training guru is back in the office later this week.

    Read the article

  • video streaming infrastructure

    - by alchemical
    We would like to set up a live video-chat web site and are looking for basic architectural advice and/or a recommendation for a particular framework to use. Here are the basic features of the site:
    - Most streams will be broadcast live by a single person with a webcam, etc., and viewed by typically 1-10 people, although there could be up to 100+ viewers on the high side.
    - Audio and video do not have to be super-high quality, but do need to be "good enough". The main point is to convey the basic info in the video (and audio). If occasionally the frame rate drops low and then goes back to normal fairly soon, we could live with that.
    - Budget is an issue, so we are in general looking for a lower-cost solution that will give us most of what we need in terms of performance and quality.

    We are looking at Peer1 for co-lo. The rest of our web site will be on the .NET / Windows platform. We are open to looking at any platform for the best streaming solution, although our technical expertise is currently more on the Windows side.

    Read the article
