Search Results

Search found 1552 results on 63 pages for 'bob deckx'.

Page 34 of 63

  • Trying to integrate Jeditable+MarkItUp and Thickbox/Lightbox

    - by Bob Sawyer
    I've managed to successfully integrate MarkItUp with Jeditable based on the instructions on those two sites. What I would really like to do, however, is to have the Jeditable/MarkItUp editing window appear in a Thickbox or Lightbox overlay. So far my attempts to do this have not been successful. So, at the moment, I have the standard code: $.editable.addInputType('markitup', { element : $.editable.types.textarea.element, plugin : function(settings, original) { $('textarea', this).markItUp(settings.markitup); } }); $(".editme").editable("/content/save", { event : 'dblclick', type : 'markitup', submit : 'OK', cancel : 'Cancel', width : 640, height : 'auto', tooltip : 'Double-click to edit...', onblur : 'ignore', markitup : mySettings }); I've found other posts here that show how to trigger the edit box by clicking on a link rather than the object itself, and I've tried integrating that with the Thickbox calls, to no avail. Would appreciate anyone pointing me in the right direction. Thanks!

    Read the article

  • Finding duplicate values in a SQL table

    - by Alex
    It's easy to find duplicates with one field: SELECT name, COUNT(email) FROM users GROUP BY email HAVING ( COUNT(email) > 1 ) So if we have a table ID NAME EMAIL 1 John [email protected] 2 Sam [email protected] 3 Tom [email protected] 4 Bob [email protected] 5 Tom [email protected] This query will give us John, Sam, Tom, Tom because they all have the same e-mails. But what I want is to get duplicates with the same e-mails and names. I want to get Tom, Tom. I made a mistake and allowed duplicate name and e-mail values to be inserted. Now I need to remove/change the duplicates, but I need to find them first.

    Read the article

  • Linq to SQL with INSTEAD OF Trigger and an Identity Column

    - by Bob Horn
    I need to use the clock on my SQL Server to write a time to one of my tables, so I thought I'd just use GETDATE(). The problem is that I'm getting an error because of my INSTEAD OF trigger. Is there a way to set one column to GETDATE() when another column is an identity column? This is the Linq-to-SQL: internal void LogProcessPoint(WorkflowCreated workflowCreated, int processCode) { ProcessLoggingRecord processLoggingRecord = new ProcessLoggingRecord() { ProcessCode = processCode, SubId = workflowCreated.SubId, EventTime = DateTime.Now // I don't care what this is. SQL Server will use GETDATE() instead. }; this.Database.Add<ProcessLoggingRecord>(processLoggingRecord); } This is the table. EventTime is what I want to have as GETDATE(). I don't want the column to be null. And here is the trigger: ALTER TRIGGER [Master].[ProcessLoggingEventTimeTrigger] ON [Master].[ProcessLogging] INSTEAD OF INSERT AS BEGIN SET NOCOUNT ON; SET IDENTITY_INSERT [Master].[ProcessLogging] ON; INSERT INTO ProcessLogging (ProcessLoggingId, ProcessCode, SubId, EventTime, LastModifiedUser) SELECT ProcessLoggingId, ProcessCode, SubId, GETDATE(), LastModifiedUser FROM inserted SET IDENTITY_INSERT [Master].[ProcessLogging] OFF; END Without getting into all of the variations I've tried, this last attempt produces this error: InvalidOperationException Member AutoSync failure. For members to be AutoSynced after insert, the type must either have an auto-generated identity, or a key that is not modified by the database after insert. I could remove EventTime from my entity, but I don't want to do that. If it was gone though, then it would be NULL during the INSERT and GETDATE() would be used. Is there a way that I can simply use GETDATE() on the EventTime column for INSERTs? Note: I do not want to use C#'s DateTime.Now for two reasons: 1. One of these inserts is generated by SQL Server itself (from another stored procedure) 2. Times can be different on different machines, and I'd like to know exactly how fast my processes are happening.
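
    One commonly suggested direction, sketched below under assumptions: drop the INSTEAD OF trigger, give EventTime a DEFAULT GETDATE() constraint in the table definition, and map the column as database-generated so LINQ to SQL omits it from the INSERT and reads the server-assigned value back afterwards. The entity shape and column types here are hypothetical, not the asker's actual mapping:

        using System;
        using System.Data.Linq.Mapping;

        // Hypothetical hand-written mapping; assumes the table now has
        // "EventTime DATETIME NOT NULL DEFAULT GETDATE()" and no trigger.
        [Table(Name = "Master.ProcessLogging")]
        public class ProcessLoggingRecord
        {
            [Column(IsPrimaryKey = true, IsDbGenerated = true)]
            public int ProcessLoggingId { get; set; }

            [Column]
            public int ProcessCode { get; set; }

            [Column]
            public int SubId { get; set; }

            // Database-generated: LINQ to SQL leaves it out of the INSERT and
            // syncs the GETDATE() value assigned by the server back into the object.
            [Column(IsDbGenerated = true, AutoSync = AutoSync.OnInsert)]
            public DateTime EventTime { get; set; }

            [Column]
            public string LastModifiedUser { get; set; }
        }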

    Read the article

  • fgets and strcmp [C]

    - by Blackbinary
    I'm trying to compare two strings: one is stored in a file, the other retrieved from the user (stdin). Here is a sample program: #include <stdio.h> #include <string.h> int main() { char targetName[50]; fgets(targetName,50,stdin); char aName[] = "bob"; printf("%d",strcmp(aName,targetName)); return 0; } In this program, strcmp returns a value of -1 when the input is 'bob'. Why is this? I thought they should be equal. How can I get it so that they are?

    Read the article

  • Tomcat6 ignores logging.properties partially

    - by Bob
    I'm using Tomcat 6, and this is my logging.properties: handlers = org.apache.juli.FileHandler, java.util.logging.ConsoleHandler .level=FINE org.apache.catalina.core.ApplicationContext.level = OFF org.apache.juli.FileHandler.level = ALL org.apache.juli.FileHandler.directory = ${catalina.base}/logs org.apache.juli.FileHandler.prefix = mylog. java.util.logging.ConsoleHandler.level = FINE java.util.logging.ConsoleHandler.formatter = java.util.logging.SimpleFormatter On the one hand, Tomcat seems to read this file, as it correctly saves the logfiles with the prefix "mylog" and prints only messages with log-level FINE and above. On the other hand, it keeps on writing log messages like this: Jun 8, 2010 9:53:30 PM org.apache.catalina.core.ApplicationContext log SEVERE: Error writing messages ClientAbortException: java.net.SocketException: Broken pipe I actually wanted to suppress all log messages from this class, as they flood my logfile, and the error is irrelevant for me. So why is the following line ignored? org.apache.catalina.core.ApplicationContext.level = OFF Is there any other way to suppress the log output of this class?

    Read the article

  • Is JSON used only for JavaScript?

    - by Bob Smith
    I am storing a JSON string in the database that represents a set of properties. In the code behind, I export it and use it for some custom logic. Essentially, I am using it only as a storage mechanism. I understand XML is better suited for this but I read that JSON is faster and preferred. Is it a good practice to use JSON if the intention is not to use the string on the client side?
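
    A minimal sketch of round-tripping such a property set entirely on the server, assuming .NET 3.5's built-in JavaScriptSerializer (System.Web.Extensions) is available; the Settings class and its properties are made up for illustration:

        using System.Web.Script.Serialization;

        // Hypothetical property bag persisted as a JSON string in a database column.
        public class Settings
        {
            public string Theme { get; set; }
            public int PageSize { get; set; }
        }

        public static class SettingsStore
        {
            // Serialize before saving the string to the database.
            public static string ToJson(Settings s)
            {
                return new JavaScriptSerializer().Serialize(s);
            }

            // Deserialize in the code-behind to drive the custom logic;
            // no client-side JavaScript is involved at any point.
            public static Settings FromJson(string json)
            {
                return new JavaScriptSerializer().Deserialize<Settings>(json);
            }
        }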

    Read the article

  • Install Linux guest on Windows host using a .iso file on VMware Server 2.0

    - by Bob
    I have done this in VMware Server 1.0: I set the CD to point to the .iso file, then I start the VM and it starts to install. Now I am on the summary screen and I have created a VM. I go to CD/DVD - Edit and change the radio button to Host Media, then I select ISO Image and Browse. Here is the problem: in my browser I get Inventory - Virtual Machine (directory), and I don't know where that directory is. I have my ISOs in a folder on my desktop, but I can't browse to my desktop. When I try to just put in a path such as c:\mydesktop\mylinuxdirectory\linux1.iso, I get the error "The ISO Image File Path must be a valid file path in the format: [datastore] /path/to/isoimage.iso". So how do I browse to an ISO? I can add the ISO after I start the VM, but by then it's too late and the VM says no Operating System is installed.

    Read the article

  • How to search an inbox using Zend Mail

    - by Bob Cavezza
    The following is a function from Zend_Mail_Protocol_Imap. I read that to search emails, I would want to override it using Zend_Mail_Storage_Imap (which is what I'm using now to grab email from Gmail). I copied and pasted the following function into Zend_Mail_Storage_Imap, but I'm having issues with the params. I can't find documentation on what to use for the array $params. I initially thought it was the search term before reading it more thoroughly. I'm out of ideas. Here's the function... /** * do a search request * * This method is currently marked as internal as the API might change and is not * safe if you don't take precautions. * * @internal * @return array message ids */ public function search(array $params) { $response = $this->requestAndResponse('SEARCH', $params); if (!$response) { return $response; } foreach ($response as $ids) { if ($ids[0] == 'SEARCH') { array_shift($ids); return $ids; } } return array(); } Initially I thought this would do the trick... $storage = new Zend_Mail_Storage_Imap($imap); $searchresults = $storage->search('search term'); But nope, I need to send the info in an array. Any ideas?

    Read the article

  • HttpWebRequest Timeouts After Ten Consecutive Requests

    - by Bob Mc
    I'm writing a web crawler for a specific site. The application is a VB.Net Windows Forms application that is not using multiple threads - each web request is consecutive. However, after ten successful page retrievals every successive request times out. I have reviewed the similar questions already posted here on SO, and have implemented the recommended techniques into my GetPage routine, shown below: Public Function GetPage(ByVal url As String) As String Dim result As String = String.Empty Dim uri As New Uri(url) Dim sp As ServicePoint = ServicePointManager.FindServicePoint(uri) sp.ConnectionLimit = 100 Dim request As HttpWebRequest = WebRequest.Create(uri) request.KeepAlive = False request.Timeout = 15000 Try Using response As HttpWebResponse = DirectCast(request.GetResponse, HttpWebResponse) Using dataStream As Stream = response.GetResponseStream() Using reader As New StreamReader(dataStream) If response.StatusCode <> HttpStatusCode.OK Then Throw New Exception("Got response status code: " + response.StatusCode) End If result = reader.ReadToEnd() End Using End Using response.Close() End Using Catch ex As Exception Dim msg As String = "Error reading page """ & url & """. " & ex.Message Logger.LogMessage(msg, LogOutputLevel.Diagnostics) End Try Return result End Function Have I missed something? Am I not closing or disposing of an object that should be? It seems strange that it always happens after ten consecutive requests. Notes: In the constructor for the class in which this method resides I have the following: ServicePointManager.DefaultConnectionLimit = 100 If I set KeepAlive to true, the timeouts begin after five requests. All the requests are for pages in the same domain. EDIT I added a delay between each web request of between two and seven seconds so that I do not appear to be "hammering" the site or attempting a DOS attack. However, the problem still occurs.

    Read the article

  • SQL Server Full Text Search with CONTAINSTABLE is very slow when used in a JOIN

    - by Bob
    Hello, I am using SQL Server 2008 full text search and I am having serious issues with performance depending on how I use CONTAINS or CONTAINSTABLE. Here are samples (table1 has about 5000 records and there is a covering index on table1 which has all the fields in the where clause; I tried to simplify the statements, so forgive me if there are syntax issues): Scenario 1: select * from table1 as t1 where t1.field1=90 and t1.field2='something' and Exists(select top 1 * from containstable(table1,*, 'something') as t2 where t2.[key]=t1.id) results: 10 seconds (very slow) Scenario 2: select * from table1 as t1 join containstable(table1,*, 'something') as t2 on t2.[key] = t1.id where t1.field1=90 and t1.field2='something' results: 10 seconds (very slow) Scenario 3: Declare @tbl Table(id uniqueidentifier primary key) insert into @tbl select [key] from containstable(table1,*, 'something') select * from table1 as t1 where t1.field1=90 and t1.field2='something' and Exists(select id from @tbl as tbl where id=t1.id) results: fraction of a second (super fast) Bottom line: it seems that if I use CONTAINSTABLE in any kind of join or where-clause condition of a select statement that also has other conditions, the performance is really bad. In addition, if you look at Profiler, the number of reads from the database goes through the roof. But if I first do the full text search and put the results in a table variable and use that variable, everything goes super fast, and the number of reads is also much lower. It seems in the "bad" scenarios it somehow gets stuck in a loop which causes it to read many times from the database, but of course I don't understand why. Now, question one is: why is that happening? And question two is: how scalable are table variables? What if the search returns tens of thousands of records? Is it still going to be fast? Any ideas? Thanks

    Read the article

  • Can someone clarify what this Joel On Software quote means: (functional programs have no side effects)

    - by Bob
    I was reading Joel On Software today and ran across this quote: Without understanding functional programming, you can't invent MapReduce, the algorithm that makes Google so massively scalable. The terms Map and Reduce come from Lisp and functional programming. MapReduce is, in retrospect, obvious to anyone who remembers from their 6.001-equivalent programming class that purely functional programs have no side effects and are thus trivially parallelizable. What does he mean when he says functional programs have no side effects? And how does this make parallelizing trivial?
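
    In other words, a pure function reads nothing and writes nothing outside its own arguments and return value, so calls can run in any order, on any thread, with no locking. A small illustrative C# sketch (using PLINQ, which assumes .NET 4 is available) of why a side-effect-free "map" parallelizes trivially:

        using System;
        using System.Linq;

        class PureMapDemo
        {
            // Pure: the result depends only on the input, and nothing shared is
            // read or modified, so every call is independent of every other call.
            static int Square(int x)
            {
                return x * x;
            }

            static void Main()
            {
                int[] input = Enumerable.Range(1, 1000).ToArray();

                // Because Square has no side effects, the runtime is free to split
                // the work across cores without any coordination between elements.
                int[] squares = input.AsParallel().AsOrdered().Select(Square).ToArray();

                Console.WriteLine(squares[999]); // 1000000
            }
        }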

    Read the article

  • Rails + MongoMapper + EmbeddedDocument form help

    - by Bob Martens
    I am working on a pretty simple web application (famous last words) and am working with Rails 2.3.5 + MongoMapper 0.7.2 and using embedded documents. I have two questions to ask: First, are there any example applications out there using Rails + MongoMapper + EmbeddedDocument? Preferably on GitHub or some other similar site so that I can take a look at the source and see where I am supposed to head? If not ... ... what is the best way to approach this task? How would I go about creating a form to handle an embedded document. What I am attempting to do is add addresses to users. I can toss up the two models in question if you would like. Thanks for the help!

    Read the article

  • How to: Searchlogic and Tags

    - by bob
    I have installed searchlogic and added will_paginate etc. I currently have a product model that has tagging enabled using the acts_as_taggable_on plugin. I want to search the tags using searchlogic. Here is the taggable plugin page: http://github.com/mbleigh/acts-as-taggable-on Each product has a "tag_list" that I can access using Product.tag_list, or I can access a specific tag using Product.tags[0]. However, I can't find the scope to use for searching the tags with Searchlogic. Here is part of my working form: <p> <%= f.label :name_or_description_like, "Name" %><br /> <%= f.text_field :name_or_description_like %> </p> I have tried :name_or_description_or_tagged_with_like and :name_or_description_or_tags_like and also :name_or_description_or_tags_list_like to try and get it to work, but I keep getting an error that says the options I have tried are not found (named scopes not found). I am wondering how I can get this working, or how to create my own named_scope that would allow me to search the tags added to each product by the taggable plugin. Thanks!

    Read the article

  • Make CSV from a list of strings in LINQ

    - by CmdrTallen
    Hi, I would like to take a list collection and generate a single CSV line. So take this: List<string> MakeStrings() { List<string> results = new List<string>(); results.Add("Bob"); results.Add("Nancy"); results.Add("Joe"); results.Add("Jack"); return results; } string ContactStringsTogether(List<string> parts) { StringBuilder sb = new StringBuilder(); foreach (string part in parts) { if (sb.Length > 0) sb.Append(", "); sb.Append(part); } return sb.ToString(); } This returns "Bob, Nancy, Joe, Jack". Looking for help on the LINQ to do this in a single statement. Thanks!
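
    For what it's worth, a self-contained sketch of doing it in one statement, either with plain String.Join or with LINQ's Aggregate (the latter throws on an empty list):

        using System;
        using System.Collections.Generic;
        using System.Linq;

        class CsvDemo
        {
            static void Main()
            {
                List<string> parts = new List<string> { "Bob", "Nancy", "Joe", "Jack" };

                // Simplest single statement: join the array with the separator.
                string csv = string.Join(", ", parts.ToArray());

                // LINQ alternative using Aggregate; note it throws if parts is empty.
                string csvLinq = parts.Aggregate((current, next) => current + ", " + next);

                Console.WriteLine(csv);     // Bob, Nancy, Joe, Jack
                Console.WriteLine(csvLinq); // Bob, Nancy, Joe, Jack
            }
        }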

    Read the article

  • Restructure XML nodes using XSLT

    - by Brian
    Looking to use XSLT to transform my XML. The sample XML is as follows: <root> <info> <firstname>Bob</firstname> <lastname>Joe</lastname> </info> <notes> <note>text1</note> <note>text2</note> </notes> <othernotes> <note>text3</note> <note>text4</note> </othernotes> </root> I'm looking to extract all "note" elements, and have them under a parent node "notes". The result I'm looking for is as follows: <root> <info> <firstname>Bob</firstname> <lastname>Joe</lastname> </info> <notes> <note>text1</note> <note>text2</note> <note>text3</note> <note>text4</note> </notes> </root> The XSLT I attempted to use is allowing me to extract all my "note" elements; however, I can't figure out how I can wrap them back within a "notes" node. Here's the XSLT I'm using: <xsl:stylesheet version="1.0" xmlns:xsl="http://www.w3.org/1999/XSL/Transform"> <xsl:output omit-xml-declaration="yes" indent="yes"/> <xsl:template match="notes|othernotes"> <xsl:apply-templates select="note"/> </xsl:template> <xsl:template match="*"> <xsl:copy><xsl:apply-templates/></xsl:copy> </xsl:template> </xsl:stylesheet> The result I'm getting with the above XSLT is: <root> <info> <firstname>Bob</firstname> <lastname>Joe</lastname> </info> <note>text1</note> <note>text2</note> <note>text3</note> <note>text4</note> </root> Thanks

    Read the article

  • Are lambda expressions/delegates in C# "pure", or can they be?

    - by Bob
    I recently asked about functional programs having no side effects, and learned what this means for making parallelized tasks trivial. Specifically, that "pure" functions make this trivial as they have no side effects. I've also recently been looking into LINQ and lambda expressions as I've run across examples many times here on StackOverflow involving enumeration. That got me to wondering if parallelizing an enumeration or loop can be "easier" in C# now. Are lambda expressions "pure" enough to pull off trivial parallelizing? Maybe it depends on what you're doing with the expression, but can they be pure enough? Would something like this be theoretically possible/trivial in C#?: Break the loop into chunks Run a thread to loop through each chunk Run a function that does something with the value from the current loop position of each thread For instance, say I had a bunch of objects in a game loop (as I am developing a game and was thinking about the possibility of multiple threads) and had to do something with each of them every frame, would the above be trivial to pull off? Looking at IEnumerable it seems it only keeps track of the current position, so I'm not sure I could use the normal generic collections to break the enumeration into "chunks". Sorry about this question. I used bullets above instead of pseudo-code because I don't even know enough to write pseudo-code off the top of my head. My .NET knowledge has been purely simple business stuff and I'm new to delegates and threads, etc. I mainly want to know if the above approach is good for pursuing, and if delegates/lambdas don't have to be worried about when it comes to their parallelization.
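
    A minimal sketch of the chunk-per-thread idea using the Task Parallel Library (assumes .NET 4; GameObject and Update are hypothetical stand-ins for whatever the game loop iterates over):

        using System.Collections.Generic;
        using System.Threading.Tasks;

        class GameObject
        {
            public float X;

            // Purely per-object work: touches only this instance's state.
            public void Update(float dt) { X += dt; }
        }

        static class GameLoop
        {
            public static void UpdateAll(IList<GameObject> objects, float dt)
            {
                // Parallel.ForEach partitions the list into chunks and runs the
                // lambda on worker threads. This is only safe because the lambda
                // has no side effects on shared state; each call touches exactly
                // one object.
                Parallel.ForEach(objects, obj => obj.Update(dt));
            }
        }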

    Read the article

  • Redirection fails in IE but is fine with Firefox

    - by Bob
    I use an <Authorize> attribute in ASP.NET MVC to secure a controller. My page loads portions of its content via AJAX. Here's a problem I have with IE8, but not Firefox 3.6: Sign in as user JohnDoe and navigate to http://www.example.com/AjaxPage. Everything works fine. AjaxPage is protected with the <Authorize> attribute. Sign out, which redirects me to http://www.example.com. That page doesn't use <Authorize>. Navigate to http://www.example.com/AjaxPage without signing in again. I should be redirected to the Sign In page since that controller has the <Authorize> attribute. Step 3 works with Firefox, but IE8 displays the non-Ajax portion of http://www.example.com/AjaxPage and then never loads the Ajax content. I'm surprised any content is displayed at all since I should be redirected to the Sign In page. My code redirects to the login page with: Return Redirect("https://login.live.com/wlogin.srf?appid=MY-APP-ID&alg=wsignin1.0") Why does Firefox handle this redirection, but IE doesn't? Since it works the first time (Step 1 above), is there a cache issue? EDIT: I used Fiddler to see if AjaxPage was being cached, but it appears not to be. I assume if it were cached, I'd get an HTTP Status Code 200 back. I may simply misunderstand this though.

    Read the article

  • Adobe Flex control missing vertical scroll bar

    - by Bob Spidell
    I have a canvas containing a datagrid. I set horz and vert scroll to 'off' for the canvas, and set both to 'auto' for the DG. This works until I have a larger number of columns in the DG (=16), then the vert scroll bar doesn't appear. Anyone seen this and, better yet, have an answer? TIA, Perflexed

    Read the article

  • A device-specific alpha bitmap fails after switching resolutions in Remote Desktop

    - by Bob
    All my alpha bitmaps, created using CreateCompatibleBitmap(..), start to receive an error code 87 after someone signs in with Remote Desktop. I am assuming that this is because the resolution changed and I am using a device-specific bitmap. I am wondering what the best route is to fix this issue without migrating to a device-independent bitmap? Some options are: 1) Detect the remote desktop change and flag all bitmaps to be reloaded (I have done this but it does not work as well as I would like). 2) Wait for error code 87 to happen on an alphablend image that previously worked, and then reload it (was going to try this next, I'm sure it will work, but a little hacky). 3) Detect a random event such as WM_DISPLAYCHANGE or _ that tells me when I should do this (i.e. a device change event I'm guessing, or maybe something more specific?) 4) _? Thanks for any help in advance

    Read the article

  • What does the question mark at the end of a CSS file mean/do?

    - by Bob Dylan
    I've noticed that on some websites (including SO) the link to the CSS will look like: <link rel="stylesheet" href="http://sstatic.net/so/all.css?v=6638"> I would say it's safe to assume that ?v=6638 tells the browser to load version 6638 of the CSS file. But can I do this on my websites, and can I include different versions of my CSS file just by changing the numbers?
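
    The query string is just a cache-buster: all.css?v=6638 and all.css?v=6639 are different URLs to the browser (and to any proxy), so bumping the number forces a fresh download while the server keeps serving the same physical file. You can do the same on your own sites. A rough ASP.NET-flavored sketch of generating such a link, where using the assembly version as the token is just one made-up convention:

        using System.Reflection;

        static class CssLink
        {
            // Builds a versioned stylesheet tag. Any token that changes when the
            // file changes will do; the assembly version is simply convenient.
            public static string Render(string href)
            {
                string version = Assembly.GetExecutingAssembly().GetName().Version.ToString();
                return string.Format("<link rel=\"stylesheet\" href=\"{0}?v={1}\">", href, version);
            }
        }

        // Usage from a view or code-behind:
        // string tag = CssLink.Render("/css/all.css");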

    Read the article

  • AL32UTF8 in Oracle with SQL Server and DB2 pulling data

    - by Bob
    I have a non-UTF8 Oracle database running on 11.1.0.7. We need to support Greek characters, so we have two options: (1) use nvarchar/nclob fields for those fields that need Greek (it is not all fields); we have tested this and gotten it to work with Java coding. Or (2) convert the Oracle database to AL32UTF8. I am not asking how to do this; I got this from the Oracle Site/Oracle Support, and I know what is involved: lossy data, etc., increasing the size of the database. My question is this: we have users of our system that connect to our database with database links but work on SQL Server and IBM DB2 databases. I do not have access to those databases and I do not have experience with them. If they are not UTF-8 databases, what happens when they pull UTF8 data? I would assume that English/ASCII characters are fine and the Greek will end up as junk data. I also ran the Oracle Character Set Scanner (an Oracle command-line utility you use to get info about the effects of a character set conversion). It says that my database will increase in size by about 20%. Does this have an effect on users with 3rd-party databases? These are customers of our data and there is a limit to how much access I can have to them to run tests. Any information you have would be welcome.

    Read the article

  • Fossil GPG workflow for teams

    - by Alex_coder
    I'm learning fossil and trying to reproduce a workflow for two people modifying the same source code tree. So, Alice and Bob both have local repositories of some source code. Both have autosync off. Alice hacks some more and does some commits, signing the check-ins with her GPG key. This part is fine: as Alice, I've managed to generate GPG keys, and fossil asked me for the key password when committing. I'm also aware of gpg-agent but don't use it yet, because I'm trying to keep things as simple as possible for now. Now, at some point Bob pulls changes from Alice's fossil repo. How would he verify Alice's signed check-ins?

    Read the article

  • Enclosing double quotes in an array

    - by Jared
    Hi all, I might be looking at this the wrong way, but I have a form that does its thing (sends emails etc.), and I also put in some code to make a simple flatfile CSV log with some of the user-entered details. If a user accidentally puts in, for instance, 'himynameis","bob' this would either break the CSV row (because the quotes weren't encapsulated) or, if I use htmlspecialchars() and stripslashes() on the data, I end up with an ugly data value of 'himynameis&quot;,&quot;bob'. My question is, how can I handle the incoming data to cater for '"' being put in the form without breaking my CSV file? This is my code for creating the CSV log file: @$name = htmlspecialchars(trim($_POST['name'])); @$emailCheck = htmlspecialchars(trim($_POST['email'])); @$title = htmlspecialchars(trim($_POST['title'])); @$phone = htmlspecialchars(trim($_POST['phone'])); function logFile($logText) { $path = 'D:\logs'; $filename = '\Log-' . date('Ym', time()) . '.csv'; $file = $path . $filename; if(!file_exists($file)) { $logHeader = array('Date', 'IP_Address', 'Title', 'Name', 'Customer_Email', 'Customer_Phone', 'file'); $fp = fopen($file, 'a'); fputcsv($fp, $logHeader); } $fp = fopen($file, 'a'); foreach ($logText as $record) { fputcsv($fp, $record); } } //Log submission to file $date = date("Y/m/d H:i:s"); $clientIp = getIpAddress(); //get clients IP address $nameLog = stripslashes($name); $titleLog = stripslashes($title); if($_FILES['uploadedfile']['error'] == 4) $filename = "No file attached."; //check if file uploaded and return $logText = array(array("$date", "$clientIp", "$titleLog", "$nameLog", "$emailCheck", "$phone", "$filename")); logFile($logText); //write form details to log Here is a sample of the incoming array data: Array ( [0] => Array ( [0] => 2010/05/17 10:22:27 [1] => xxx.xxx.xxx.xxx [2] => title [3] => """"himynameis","bob" [4] => [email protected] [5] => 346346 [6] => No file attached. ) ) TIA Jared

    Read the article

  • How to produce precisely-timed tone and silence?

    - by Bob Denny
    I have a C# project that plays Morse code for RSS feeds. I write it using Managed DirectX, only to discover that Managed DirectX is old and deprecated. The task I have is to play pure sine wave bursts interspersed with silence periods (the code) which are precisely timed as to their duration. I need to be able to call a function which plays a pure tone for so many milliseconds, then Thread.Sleep() then play another, etc. At its fastest, the tones and spaces can be as short as 40ms. It's working quite well in Managed DirectX. To get the precisely timed tone I create 1 sec. of sine wave into a secondary buffer, then to play a tone of a certain duration I seek forward to within x milliseconds of the end of the buffer then play. I've tried System.Media.SoundPlayer. It's a loser because you have to Play(), Sleep(), then Stop() for arbitrary tone lengths. The result is a tone that is too long, variable by CPU load. It takes an indeterminate amount of time to actually stop the tone. I then embarked on a lengthy attempt to use NAudio 1.3. I ended up with a memory resident stream providing the tone data, and again seeking forward leaving the desired length of tone remaining in the stream, then playing. This worked OK on the DirectSoundOut class for a while (see below) but the WaveOut class quickly dies with an internal assert saying that buffers are still on the queue despite PlayerStopped = true. This is odd since I play to the end then put a wait of the same duration between the end of the tone and the start of the next. You'd think that 80ms after starting Play of a 40 ms tone that it wouldn't have buffers on the queue. DirectSoundOut works well for a while, but its problem is that for every tone burst Play() it spins off a separate thread. Eventually (5 min or so) it just stops working. You can see thread after thread after thread exiting in the Output window while running the project in VS2008 IDE. I don't create new objects during playing, I just Seek() the tone stream then call Play() over and over, so I don't think it's a problem with orphaned buffers/whatever piling up till it's choked. I'm out of patience on this one, so I'm asking in the hopes that someone here has faced a similar requirement and can steer me in a direction with a likely solution.
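
    Not an answer to the library-lifetime problems, but for the buffer side of it, a small sketch of generating a precisely-sized block of 16-bit mono PCM for one tone element (sample rate, frequency, and amplitude here are illustrative; feeding the buffer to the audio API of choice is left out):

        using System;

        static class ToneBuffer
        {
            // Returns exactly durationMs worth of a sine tone as 16-bit samples,
            // so timing is controlled by buffer length rather than by Thread.Sleep.
            public static short[] Generate(double frequencyHz, int durationMs, int sampleRate)
            {
                int sampleCount = sampleRate * durationMs / 1000;
                short[] samples = new short[sampleCount];
                double step = 2.0 * Math.PI * frequencyHz / sampleRate;

                for (int i = 0; i < sampleCount; i++)
                {
                    // 0.8 leaves a little headroom below full scale.
                    samples[i] = (short)(Math.Sin(step * i) * 0.8 * short.MaxValue);
                }
                return samples;
            }
        }

        // Usage: a 40 ms dit at 700 Hz, 44.1 kHz.
        // short[] dit = ToneBuffer.Generate(700.0, 40, 44100);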

    Read the article
