Search Results

Search found 7651 results on 307 pages for 'execution plan'.

Page 281 of 307

  • A scheme for expiring downloaded content?

    - by Chad Johnson
    I am going to offer a web API service that allows users to download and "rent" content for a monthly subscription fee. The API will either be open to everyone or possibly just select parties (not sure yet). Each developer must agree to a license, and they receive a developer key for their person. Each software application will have its own key as well. So then end-users will download the software which will interact with my service's API. Each user will have a key for each application as well (probably using OAuth). Content will be cached on first download and accessible offline via just the third-party application that cached the content. If a user cancels their subscription, I plan on doing the following: Deactivate the user's OAuth key for all applications. Do not allow the user's account to download new content via the API (and subsequently any software that uses the API). Now, the big question is: how do I make content expire if they cancel their subscription? If they cancel, they should not have access to content anymore. Here are ideas I've thought of (some of these are half-solutions, not yet fully fleshed out): Require that applications encrypt downloaded content using the user's OAuth key, making it available to only the application. This will prevent most users from going to the cache directory and just copying and keeping files. Update the user's key once a month, forcing content to re-cache on a monthly basic. Users could then access content for a month after they cancel their subscription. Require applications to "phone home" [to the service] periodically and check whether the user's subscription has terminated. If so, require in the API developer license that applications expire cache. If it is found that applications do not comply, their keys (and possibly keys for all developers) are permanently deactivated as a consequence. One major worry is that some applications may blatantly ignore constraints of the license. Is it generally acceptable to rely on applications abiding by the licensing constraints? Bad idea? Any other ideas? Maybe a way to make content auto-expire after x days? Something else? I'm open to out-of-the-box ideas.
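
    Purely as an illustration of how ideas 1 and 2 above could be combined (not a confirmed design), here is a sketch in which the content key is derived from a per-user secret plus the current billing month and is only handed out over the API while the subscription is active; cached content then goes stale on its own once the key stops being issued. Every name below (user.secret, subscription_active) is invented for the example.

      # Hypothetical sketch of ideas 1 and 2 combined; all names are illustrative.
      import hashlib
      import hmac
      from datetime import date

      def monthly_content_key(user_secret, today):
          """Key the client fetches over the API to decrypt its cached content."""
          period = today.strftime("%Y-%m").encode("utf-8")   # rotates every month
          return hmac.new(user_secret, period, hashlib.sha256).digest()

      def issue_key(user, today=None):
          today = today or date.today()
          if not user.subscription_active:                   # assumed field on the user record
              raise RuntimeError("subscription cancelled - no key issued")
          return monthly_content_key(user.secret, today)     # user.secret assumed to be bytes

    A cancelled user then keeps, at worst, access until the end of the month in which they cancelled, which matches the monthly re-cache idea above.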

    Read the article

  • How to access controller dynamic properties within a base controller's constructor in Grails?

    - by 4h34d
    Basically, I want to be able to assign objects created within filters to members in a base controller from which every controller extends. Any possible way to do that? Here's how I tried, but haven't got to make it work. What I'm trying to achieve is to have all my controllers extend a base controller. The base controller's constructor would be used to assign values to its members, those values being pulled from the session map. Example below. File grails-app/controllers/HomeController.groovy: class HomeController extends BaseController { def index = { render username } } File grails-app/controllers/BaseController.groovy: abstract class BaseController { public String username public BaseController() { username = session.username } } When running the app, the output shown is: 2010-06-15 18:17:16,671 [main] ERROR [localhost].[/webapp] - Exception sending context initialized event to listener instance of class org.codehaus.groovy.grails.web.context.GrailsContextLoaderListener org.springframework.beans.factory.BeanCreationException: Error creating bean with name 'pluginManager' defined in ServletContext resource [/WEB-INF/applicationContext.xml]: Invocation of init method failed; nested exception is java.lang.RuntimeException: Unable to locate constructor with Class parameter for class org.codehaus.groovy.grails.commons.DefaultGrailsControllerClass ... Caused by: java.lang.RuntimeException: Unable to locate constructor with Class parameter for class org.codehaus.groovy.grails.commons.DefaultGrailsControllerClass ... Caused by: java.lang.reflect.InvocationTargetException ... Caused by: org.codehaus.groovy.grails.exceptions.NewInstanceCreationException: Could not create a new instance of class [com.my.package.controller.HomeController]! ... Caused by: groovy.lang.MissingPropertyException: No such property: session for class: com.my.package.controller.HomeController at com.my.package.controller.BaseController.<init>(BaseController.groovy:16) at com.my.package.controller.HomeController.<init>(HomeController.groovy) ... 2010-06-15 18:17:16,687 [main] ERROR core.StandardContext - Error listenerStart 2010-06-15 18:17:16,687 [main] ERROR core.StandardContext - Context [/webapp] startup failed due to previous errors And the app won't run. This is just an example as in my case I wouldn't want to assign a username to a string value, but rather a few objects pulled from the session map. The objects pulled from the session map are being set within filters. The alternative I see is being able to access the controller's instance within the filter's execution. Is that possible? Please help! Thanks a bunch!
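
    The MissingPropertyException in the trace is raised while Grails is instantiating the controller class during startup, before any request is in flight, which is the one moment the dynamic session property cannot exist. A hedged sketch of the usual workaround, assuming Grails 1.x/2.x conventions where a beforeInterceptor declared in a parent class runs before every action of its subclasses:

      // Sketch, assuming Grails 1.x/2.x: session is only wired up while a request
      // is being handled, so read it per request instead of in the constructor.
      abstract class BaseController {

          String username

          def beforeInterceptor = {
              // runs before every action of every subclass, where session exists
              username = session.username
          }
      }

      class HomeController extends BaseController {
          def index = {
              render username
          }
      }

    This sidesteps the question of reaching the controller instance from a filter entirely, since the assignment stays inside the controller hierarchy.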

    Read the article

  • Speeding up templates in GAE-Py by aggregating RPC calls

    - by Sudhir Jonathan
    Here's my problem: class City(Model): name = StringProperty() class Author(Model): name = StringProperty() city = ReferenceProperty(City) class Post(Model): author = ReferenceProperty(Author) content = StringProperty() The code isn't important... its this django template: {% for post in posts %} <div>{{post.content}}</div> <div>by {{post.author.name}} from {{post.author.city.name}}</div> {% endfor %} Now lets say I get the first 100 posts using Post.all().fetch(limit=100), and pass this list to the template - what happens? It makes 200 more datastore gets - 100 to get each author, 100 to get each author's city. This is perfectly understandable, actually, since the post only has a reference to the author, and the author only has a reference to the city. The __get__ accessor on the post.author and author.city objects transparently do a get and pull the data back (See this question). Some ways around this are Use Post.author.get_value_for_datastore(post) to collect the author keys (see the link above), and then do a batch get to get them all - the trouble here is that we need to re-construct a template data object... something which needs extra code and maintenance for each model and handler. Write an accessor, say cached_author, that checks memcache for the author first and returns that - the problem here is that post.cached_author is going to be called 100 times, which could probably mean 100 memcache calls. Hold a static key to object map (and refresh it maybe once in five minutes) if the data doesn't have to be very up to date. The cached_author accessor can then just refer to this map. All these ideas need extra code and maintenance, and they're not very transparent. What if we could do @prefetch def render_template(path, data) template.render(path, data) Turns out we can... hooks and Guido's instrumentation module both prove it. If the @prefetch method wraps a template render by capturing which keys are requested we can (atleast to one level of depth) capture which keys are being requested, return mock objects, and do a batch get on them. This could be repeated for all depth levels, till no new keys are being requested. The final render could intercept the gets and return the objects from a map. This would change a total of 200 gets into 3, transparently and without any extra code. Not to mention greatly cut down the need for memcache and help in situations where memcache can't be used. Trouble is I don't know how to do it (yet). Before I start trying, has anyone else done this? Or does anyone want to help? Or do you see a massive flaw in the plan?
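
    For reference, a minimal sketch of the manual version of option 1 above (collect the keys with get_value_for_datastore, then batch-get), using the old google.appengine.ext.db API the models are written against. It is exactly the re-constructed template data object the post complains about, but it shows the two-RPC shape the @prefetch idea would automate:

      # Sketch of the "collect keys, batch get" idea: two batch gets replace the
      # ~200 per-row dereferences. Model names match the question.
      from google.appengine.ext import db

      posts = Post.all().fetch(limit=100)

      author_keys = [Post.author.get_value_for_datastore(p) for p in posts]
      authors     = dict(zip(author_keys, db.get(author_keys)))      # 1 RPC

      city_keys = [Author.city.get_value_for_datastore(a) for a in authors.itervalues()]
      cities    = dict(zip(city_keys, db.get(city_keys)))            # 1 RPC

      rows = []
      for post in posts:
          author = authors[Post.author.get_value_for_datastore(post)]
          city   = cities[Author.city.get_value_for_datastore(author)]
          rows.append({'content': post.content,
                       'author_name': author.name,
                       'city_name': city.name})
      # hand `rows` to the template instead of the raw Post entities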

    Read the article

  • USB windows xp final USB access issues

    - by Lex Dean
    I basically understand you C++ people, Please do not get distracted because I'm writing in Delphi. I have a stable USB Listing method that accesses all my USB devices I get the devicepath, and this structure: TSPDevInfoData = packed record Size: DWORD; ClassGuid: TGUID; DevInst: DWORD; // DEVINST handle Reserved: DWord; end; I get my ProductID and VenderID successfully from my DevicePath Lists all USB devices connected to the computer at the time That enables me to access the registry data to each device in a stable way. What I'm lacking is a little direction Is friendly name able to be written inside the connected USB Micro chips by the firmware programmer? (I'm thinking of this to identify the device even further, or is this to help identify Bulk data transfer devices like memory sticks and camera's) Can I use SPDRP_REMOVAL_POLICY_OVERRIDE to some how reset these polices What else can I do with the registry details. Identifying when some one unplugs a device The program is using (in windows XP standard) I used a documented windows event that did not respond. Can I read a registry value to identify if its still connected? using CreateFileA (DevicePath) to send and receive data I have read when some one unplugs in the middle of a data transfer its difficult clearing resources. what can IoCreateDevice do for me and how does one use it for that task This two way point of connection status and system lock up situations is very concerning. Has some one read anything about this subject recently? My objectives are to 1. list connected USB devices identify a in development Micro Controller from everything else send and receive data in a stable and fast way to the limits of the controller No lock up's transferring data Note I'm not using any service packs I understand everything USB is in ANSI when windows xp is not and .Net is all about ANSI (what a waste of memory) I plan to continue this project into a .net at a later date as an addition. MSDN gives me Structures and Functions and what should link to what ok but say little to what they get used for. What is available in my language Delphi is way over priced that it needs a major price drop.

    Read the article

  • Problem fetching table contents when adding rows to the same table

    - by jasmine
    Im trying to write a function for adding category: function addCategory() { $cname = mysql_fix_string($_POST['cname']); $kabst = mysql_fix_string($_POST['kabst']); $kselect = $_POST['kselect']; $kradio = $_POST['kradio']; $ksubmit = $_POST['ksubmit']; $id = $_POST['id']; if($ksubmit){ $query = "INSERT INTO category VALUES (' ', '{$cname}', '{$kabst}', {$kselect}, {$kradio}, ' ') "; $result = mysql_query($query); if ($result) { echo "ok"; } else{ echo $query ; } } $text .= '<div class="form"> <h2>ADD new category</h2> <form action="?page=addCategory" method="post"> <ul> <li><label>Category</label></li> <li><input name="cname" type="text" class="inp" /></li> <li><label>Description</label></li> <li><textarea name="kabst" cols="40" rows="10" class="inx"></textarea></li> <li>Published:</li> <li> <select name="kselect" class="ins"> <option value="1">Active</option> <option value="0">Passive</option> </select> </li> <li>Show in home page:</li> <li> <input type="radio" name="kradio" value="1" /> yes <input type="radio" name="kradio" value="0" /> no </li> <li>Subcategory of</li> <li> <select>'; while ($row = mysql_fetch_assoc(mysql_query("SELECT * FROM category"))){ $text .= '<option>'.$row['name'].'</option>'; } $text .= '</select> </li> <li><input name="ksubmit" type="submit" value="ekle" class="int"/></li> </ul> </form> '; return $text;} And the error: Fatal error: Maximum execution time of 30 seconds exceeded What is wrong in my function?
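
    One thing worth noting about the subcategory loop: the while condition calls mysql_query() again on every pass, so mysql_fetch_assoc() only ever sees the first row of a fresh result set and the loop never ends, which would produce exactly this 30-second timeout. A sketch of the usual pattern, running the query once and iterating that single result (the select's name attribute is invented):

      <?php
      // Sketch: run the query once, keep the result handle, and let
      // mysql_fetch_assoc() walk it row by row (column name from the question).
      $result = mysql_query("SELECT * FROM category");
      $text  .= '<select name="kparent">';            // name attribute is invented
      while ($row = mysql_fetch_assoc($result)) {
          $text .= '<option>' . htmlspecialchars($row['name']) . '</option>';
      }
      $text .= '</select>';
      ?>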

    Read the article

  • Which programming language to choose? (for a specific problem/domain, details inside)

    - by Bijan
    I am building a trading portfolio management system that is responsible for production, optimization, and simulation of non-high frequency trading portfolios (dealing with 1min or 3min bars of data, not tick data). I plan on employing Amazon web services to take on the entire load of the application. I have four choices that I am considering as language. a) Java b) C++ c) C# d) Python Here is the scope of the extremes of the project scope. This isn't how it will be, maybe ever, but it's within the scope of the requirements: Weekly simulation of 10,000,000 trading systems. (Each trading system is expected to have its own data mining methods, including feature selection algorithms which are extremely computationally-expensive. Imagine 500-5000 features using wrappers. These are not run often by any means, but it's still a consideration) Real-time production of portfolio w/ 100,000 trading strategies Taking in 1 min or 3 min data from every stock/futures market around the globe (approx 100,000) Portfolio optimization of portfolios with up to 100,000 strategies. (rather intensive algorithm) Speed is a concern, but I believe that Java can handle the load. I just want to make sure that Java CAN handle the above requirements comfortably. I don't want to do the project in C++, but I will if it's required. The reason C# is on there is because I thought it was a good alternative to Java, even though I don't like Windows at all and would prefer Java if all things are the same. Python - I've read somethings on PyPy and pyscho that claim python can be optimized with JIT compiling to run at near C-like speeds.... That's pretty much the only reason it is on this list, besides that fact that Python is a great language and would probably be the most enjoyable language to code in, which is not a factor at all for this project, but a perk. To sum up: - real time production - weekly simulations of a large number of systems - weekly/monthly optimizations of portfolios - large numbers of connections to collect data from There is no dealing with millisecond or even second based trades. The only consideration is if Java can possibly deal with this kind of load when spread out of a necessary amount of EC2 servers. Thank you guys so much for your wisdom.

    Read the article

  • Inline HTML Syntax for Helpers in ASP.NET MVC

    - by kouPhax
    I have a class that extends the HtmlHelper in MVC and allows me to use the builder pattern to construct special output e.g. <%= Html.FieldBuilder<MyModel>(builder => { builder.Field(model => model.PropertyOne); builder.Field(model => model.PropertyTwo); builder.Field(model => model.PropertyThree); }) %> Which outputs some application specific HTML, lets just say, <ul> <li>PropertyOne: 12</li> <li>PropertyTwo: Test</li> <li>PropertyThree: true</li> </ul> What I would like to do, however, is add a new builder methid for defining some inline HTML without having to store is as a string. E.g. I'd like to do this. <% Html.FieldBuilder<MyModel>(builder => { builder.Field(model => model.PropertyOne); builder.Field(model => model.PropertyTwo); builder.ActionField(model => %> Generated: <%=DateTime.Now.ToShortDate()%> (<a href="#">Refresh</a>) <%); }).Render(); %> and generate this <ul> <li>PropertyOne: 12</li> <li>PropertyTwo: Test</li> <li>Generated: 29/12/2008 <a href="#">Refresh</a></li> </ul> Essentially an ActionExpression that accepts a block of HTML. However to do this it seems I need to execute the expression but point the execution of the block to my own StringWriter and I am not sure how to do this. Can anyone advise?
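
    A hedged sketch of one way this is commonly approached: rather than capturing the inline block into a string, have the builder accept an Action and invoke it at render time, so the inline <% %> content writes to the response in document order. FieldBuilder and ActionField mirror the names in the question, but everything else here is assumed, not the questioner's actual class:

      // Sketch only: the writer-based plumbing is an assumption about how the
      // real builder works; Field's existing behaviour is elided.
      using System;
      using System.Collections.Generic;
      using System.IO;

      public class FieldBuilder<TModel>
      {
          private readonly List<Action<TextWriter>> fields = new List<Action<TextWriter>>();

          // Existing behaviour (elided): format "PropertyName: value" into an <li>.
          public void Field(Func<TModel, object> property) { /* ... */ }

          // New idea: take a delegate instead of a string. When it runs, any inline
          // <% %> / <%= %> content in the view writes straight to the output.
          public void ActionField(Action inlineTemplate)
          {
              fields.Add(writer =>
              {
                  writer.Write("<li>");
                  inlineTemplate();          // inline markup is emitted here, in order
                  writer.Write("</li>");
              });
          }

          public void Render(TextWriter writer)
          {
              writer.Write("<ul>");
              foreach (var field in fields) field(writer);
              writer.Write("</ul>");
          }
      }

    In the view this would be driven with something like <% Html.FieldBuilder<MyModel>(b => { ... }).Render(Html.ViewContext.Writer); %>; ViewContext.Writer exists from MVC 2 onwards, and on MVC 1 the response output writer plays roughly that role.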

    Read the article

  • Summarising (permanently) data in a SQL table

    - by Cylindric
    Geetings, Stackers. I have a huge number of data-points in a SQL table, and I want to summarise them in a way reminiscent of RRD. Assuming a table such as ID | ENTITY_ID | SCORE_DATE | SCORE | SOME_OTHER_DATA ----+-----------+------------+-------+----------------- 1 | A00000001 | 01/01/2010 | 100 | some data 2 | A00000002 | 01/01/2010 | 105 | more data 3 | A00000003 | 01/01/2010 | 104 | various text ... | ......... | .......... | ..... | ... ... | A00009999 | 01/01/2010 | 101 | ... | A00000001 | 02/01/2010 | 104 | ... | A00000002 | 02/01/2010 | 119 | ... | A00000003 | 02/01/2010 | 119 | ... | ......... | .......... | ..... | ... | A00009999 | 02/01/2010 | 101 | arbitrary data ... | ......... | .......... | ..... | ... ... | A00000001 | 01/02/2010 | 104 | ... | A00000002 | 01/02/2010 | 119 | ... | A00000003 | 01/01/2010 | 119 | I want to end up with one record per entity, per month: ID | ENTITY_ID | SCORE_DATE | SCORE | ----+-----------+------------+-------+ ... | A00000001 | 01/01/2010 | 100 | ... | A00000002 | 01/01/2010 | 105 | ... | A00000003 | 01/01/2010 | 104 | ... | A00000001 | 01/02/2010 | 100 | ... | A00000002 | 01/02/2010 | 105 | ... | A00000003 | 01/02/2010 | 104 | (I Don't care about the SOME_OTHER_DATA - I'll pick something - either the first or last record probably.) What's an easy way of doing this on a regular basis, so that anything in the last calendar month is summarised in this way? At the moment my plan is kind of: For each EntityID For each month Find average score for all records in given month Update first record with results of previous step Delete all records that aren't the first I can't think of a neat way of doing it though, that doesn't involve lots of updates and iteration. This can either be done in a SQL Stored Procedure, or it can be incorporated into the .Net app that's generating this data, so the solution doesn't really need to be "one big SQL script", but can be :) (SQL-2005)
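
    A set-based sketch of that plan, assuming the table is called Scores, that ID is an identity column, and that SOME_OTHER_DATA is simply discarded: everything before the first of the current month is averaged per entity per month, the detail rows are deleted, and one summary row per group is re-inserted, all inside one transaction rather than per-entity iteration.

      -- Sketch (SQL Server 2005); "Scores" is a stand-in for the real table name.
      DECLARE @cutoff datetime;
      SET @cutoff = DATEADD(month, DATEDIFF(month, 0, GETDATE()), 0);  -- 1st of this month

      BEGIN TRAN;

      SELECT ENTITY_ID,
             DATEADD(month, DATEDIFF(month, 0, SCORE_DATE), 0) AS SCORE_DATE,  -- 1st of month
             AVG(SCORE) AS SCORE
      INTO   #monthly
      FROM   Scores
      WHERE  SCORE_DATE < @cutoff
      GROUP BY ENTITY_ID, DATEADD(month, DATEDIFF(month, 0, SCORE_DATE), 0);

      DELETE FROM Scores WHERE SCORE_DATE < @cutoff;

      INSERT INTO Scores (ENTITY_ID, SCORE_DATE, SCORE)
      SELECT ENTITY_ID, SCORE_DATE, SCORE FROM #monthly;  -- ID assumed to be an identity

      COMMIT;
      DROP TABLE #monthly;

    Wrapped in a stored procedure this can be run from a SQL Agent job or from the .Net app on the first of each month.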

    Read the article

  • Animating a pulsing UILabel?

    - by fuzzygoat
    I am trying to animate the color the the text on a UILabel to pulse from: [Black] to [White] to [Black] and repeat. - (void)timerFlash:(NSTimer *)timer { [[self navTitle] setTextColor:[[UIColor whiteColor] colorWithAlphaComponent:0.0]]; [UIView animateWithDuration:1 delay:0 options:UIViewAnimationOptionAllowUserInteraction animations:^{[[self navTitle] setTextColor:[[UIColor whiteColor] colorWithAlphaComponent:1.0]];} completion:nil]; } . [self setFadeTimer:[NSTimer scheduledTimerWithTimeInterval:1 target:self selector:@selector(timerFlash:) userInfo:nil repeats:YES]]; Firstly I am not sure of my method, my plan (as you can see above) was to set up a animation block and call it using a repeating NSTimer until canceled. My second problem (as you can see above) is that I am animating from black (alpha 0) to white (alpha 1) but I don't know how to animate back to black again so the animation loops seamlessly Essentially what I want is the text color to pulse on a UILabel until the user presses a button to continue. EDIT_001: I was getting into trouble because you can't animate [UILabel setColor:] you can however animated [UILabel setAlpha:] so I am going to give that a go. EDIT_002: - (void)timerFlash:(NSTimer *)timer { [[self navTitle] setAlpha:0.5]; [UIView animateWithDuration:2 delay:0 options:UIViewAnimationOptionAllowUserInteraction animations:^{[[self navTitle] setAlpha:0.9];} completion:nil]; } This works (BTW: I do want it to stop which is why I hooked it up to a NSTimer so I can cancel that) the only thing is that this animates from midGray to nearWhite and then pops back. Does anyone know how I would animate back from nearWhite to midGray so I get a nice smooth cycle? EDIT_003: (Solution) The code suggested by dave DeLong (see below) does indeed work when modified to use the CALayer opacity style attribute: UILabel *navTitle; @property(nonatomic, retain) UILabel *navTitle; . // ADD ANIMATION CABasicAnimation *anim = [CABasicAnimation animationWithKeyPath:@"opacity"]; [anim setTimingFunction:[CAMediaTimingFunction functionWithName:kCAMediaTimingFunctionEaseInEaseOut]]; [anim setFromValue:[NSNumber numberWithFloat:0.5]]; [anim setToValue:[NSNumber numberWithFloat:1.0]]; [anim setAutoreverses:YES]; [anim setDuration:0.5]; [[[self navTitle] layer] addAnimation:anim forKey:@"flash"]; . // REMOVE ANIMATION [[[self navTitle] layer] removeAnimationForKey:@"flash__"];

    Read the article

  • System.MissingMemberException was unhandled by user code

    - by AmRoSH
    I'm using this code: Dim VehiclesTable1 = dsVehicleList.Tables(0) Dim VT1 = (From d In VehiclesTable1.AsEnumerable _ Select VehicleTypeName = d.Item("VehicleTypeName") _ , VTypeID = d.Item("VTypeID") _ , ImageURL = d.Item("ImageURL") _ , DailyRate = d.Item("DailyRate") _ , RateID = d.Item("RateID")).Distinct its linq to dataset and I Take Data on THis Rotator: <telerik:RadRotator ID="RadRotatorVehicleType" runat="server" Width="620px" Height="145" ItemWidth="155" ItemHeight="145" ScrollDirection="Left" FrameDuration="1" RotatorType="Buttons"> <ItemTemplate> <div style="text-align: center; cursor: pointer; width: 150px"> <asp:Image ID="ImageVehicleType" runat="server" Width="150" ImageUrl='<%# Container.DataItem("ImageURL") %>' /> <asp:Label ID="lblVehicleType" runat="server" Text='<%# Container.DataItem("VehicleTypeName") %>' Font-Bold="true"></asp:Label> <br /> <asp:Label ID="lblDailyRate" runat="server" Text='<%# Container.DataItem("DailyRate") %>' Visible="False"></asp:Label> <input id="HiddenVehicleTypeID" type="hidden" value='<%# Container.DataItem("VTypeID") %>' name="HiddenVehicleTypeID" runat="server" /> <input id="HiddenRateID" type="hidden" value='<%# Container.DataItem("RateID") %>' name="HiddenRateID" runat="server" /> </div> </ItemTemplate> <ControlButtons LeftButtonID="img_left" RightButtonID="img_right" /> </telerik:RadRotator> and I got this Exception: No default member found for type 'VB$AnonymousType_0(Of Object,Object,Object,Object,Object)'. Description: An unhandled exception occurred during the execution of the current web request. Please review the stack trace for more information about the error and where it originated in the code. Exception Details: System.MissingMemberException: No default member found for type 'VB$AnonymousType_0(Of Object,Object,Object,Object,Object)'. I don't know whats up ? Any help please. Thanks for who tried to solve this but I got solution: using '<%# DataBinder.Eval(Container.DataItem,"ImageURL") %>' instead of '<%# Container.DataItem("RateID") %>' Thanks,

    Read the article

  • VB6 ADO Command to SQL Server

    - by Emtucifor
    I'm getting an inexplicable error with an ADO command in VB6 run against a SQL Server 2005 database. Here's some code to demonstrate the problem: Sub ADOCommand() Dim Conn As ADODB.Connection Dim Rs As ADODB.Recordset Dim Cmd As ADODB.Command Dim ErrorAlertID As Long Dim ErrorTime As Date Set Conn = New ADODB.Connection Conn.ConnectionString = "Provider=SQLOLEDB.1;Integrated Security=SSPI;Initial Catalog=database;Data Source=server" Conn.CursorLocation = adUseClient Conn.Open Set Rs = New ADODB.Recordset Rs.CursorType = adOpenStatic Rs.LockType = adLockReadOnly Set Cmd = New ADODB.Command With Cmd .Prepared = False .CommandText = "ErrorAlertCollect" .CommandType = adCmdStoredProc .NamedParameters = True .Parameters.Append .CreateParameter("@ErrorAlertID", adInteger, adParamOutput) .Parameters.Append .CreateParameter("@CreateTime", adDate, adParamOutput) Set .ActiveConnection = Conn Rs.Open Cmd ErrorAlertID = .Parameters("@ErrorAlertID").Value ErrorTime = .Parameters("@CreateTime").Value End With Debug.Print Rs.State ' Shows 0 - Closed Debug.Print Rs.RecordCount ' Of course this fails since the recordset is closed End Sub So this code was working not too long ago but now it's failing on the last line with the error: Run-time error '3704': Operation is not allowed when the object is closed Why is it closed? I just opened it and the SP returns rows. I ran a trace and this is what the ADO library is actually submitting to the server: declare @p1 int set @p1=1 declare @p2 datetime set @p2=''2010-04-22 15:31:07:770'' exec ErrorAlertCollect @ErrorAlertID=@p1 output,@CreateTime=@p2 output select @p1, @p2 Running this as a separate batch from my query editor yields: Msg 102, Level 15, State 1, Line 4 Incorrect syntax near '2010'. Of course there's an error. Look at the double single quotes in there. What the heck could be causing that? I tried using adDBDate and adDBTime as data types for the date parameter, and they give the same results. When I make the parameters adParamInputOutput, then I get this: declare @p1 int set @p1=default declare @p2 datetime set @p2=default exec ErrorAlertCollect @ErrorAlertID=@p1 output,@CreateTime=@p2 output select @p1, @p2 Running that as a separate batch yields: Msg 156, Level 15, State 1, Line 2 Incorrect syntax near the keyword 'default'. Msg 156, Level 15, State 1, Line 4 Incorrect syntax near the keyword 'default'. What the heck? SQL Server doesn't support this kind of syntax. You can only use the DEFAULT keyword in the actual SP execution statement. I should note that removing the extra single quotes from the above statement makes the SP run fine. ... Oh my. I just figured it out. I guess it's worth posting anyway.

    Read the article

  • Reliable session faulting for unknown reason

    - by Scarfman007
    I am trying to achieve the following - one client-side proxy instance (kept open) accessed by multiple threads using a reliable session. What I have managed so far is to have either A) a reliable session with a client-side proxy which is created and disposed per call or B) what I aim for, but without a reliable session. When I enable reliable sessions on my binding however, the following behaviour is exhibited: Client-side Upon application startup everything appears to work fine until roughly 18 messages in to the WCF session. I firstly get the proxy.InnerChannel.Faulted event raised, then an exception is caught at the point where I am calling the method on the proxy. The exception is a System.TimeoutException, with message: "The request channel timed out while waiting for a reply after 00:00:59.9062512. Increase the timeout value passed to the call to Request or increase the SendTimeout value on the Binding. The time allotted to this operation may have been a portion of a longer timeout." The inner exception has a similar message: "The request operation did not complete within the allotted timeout of 00:01:00. The time allotted to this operation may have been a portion of a longer timeout." With the method at the top of the inner stack trace being: System.ServiceModel.Channels.ReliableRequestSessionChannel.SyncRequest.WaitForReply(TimeSpan timeout) I then call proxy.Close followed by proxy.Abort (catching and ignoring exceptions). If I utilize the default settings (i.e. have simply <reliableSession/>), then calling proxy. Close results in another System.Timeout exception (although this time the allotted timeout is 00:00:00), however if I override the defaults as specified above no exception is thrown. Service-side Utilizing WCF tracing I get a System.ServiceModel.CommunicationException, with message: "The sequence has been terminated by the remote endpoint. The session has stopped waiting for a particular reply. Because of this the reliable session cannot continue. The reliable session was faulted." And a stack trace ending at: System.ServiceModel.AsyncResult.End[TAsyncResult](IAsyncResult result) When remotely attaching to the server I get the same message, which occurs when code execution steps over the return statement of my service in the service call which causes the error. The puzzling thing to me is that the service is stable and runs with options A) or B) as decribed at the beginning of my post, and occurs after a varying number of messages (around 18). The former fact points to there being nothing wrong with the code (indeed I have checked that no exceptions are thrown), and the latter just serves to confuse me and is why I modified the settings on the reliable session binding. I am quite stuck on this. Can anyone suggest why the reliable session would fault in such a way?

    Read the article

  • Selecting the contents of an ASP.NET TextBox in an UpdatePanel after a partial page postback

    - by Scott Mitchell
    I am having problems selecting the text within a TextBox in an UpdatePanel. Consider a very simple page that contains a single UpdatePanel. Within that UpdatePanel there are two Web controls: A DropDownList with three statically-defined list items, whose AutoPostBack property is set to True, and A TextBox Web control The DropDownList has a server-side event handler for its SelectedIndexChanged event, and in that event handler there's two lines of code: TextBox1.Text = "Whatever"; ScriptManager.RegisterStartupScript(this, this.GetType(), "Select-" + TextBox1.ClientID, string.Format("document.getElementById('{0}').select();", TextBox1.ClientID), true); The idea is that whenever a user chooses and item from the DropDownList there is a partial page postback, at which point the TextBox's Text property is set and selected (via the injected JavaScript). Unfortunately, this doesn't work as-is. (I have also tried putting the script in the pageLoad function with no luck, as in: ScriptManager.RegisterStartupScript(..., "function pageLoad() { ... my script ... }");) What happens is the code runs, but something else on the page receives focus at the conclusion of the partial page postback, causing the TextBox's text to be unselected. I can "fix" this by using JavaScript's setTimeout to delay the execution of my JavaScript code. For instance, if I update the emitted JavaScript to the following: setTimeout("document.getElementById('{0}').select();", 111); It "works." I put works in quotes because it works for this simple page on my computer. In a more complex page on a slower computer with more markup getting passed between the client and server on the partial page postback, I have to up the timeout to over a second to get it to work. I would hope that there is a more foolproof way to achieve this. Rather than saying, "Delay for X milliseconds," it would be ideal to say, "Run this when you're not going to steal the focus." What's perplexing is that the .Focus() method works beautifully. That is, if I scrap my JavaScript and replace it with a call to TextBox1.Focus(); then the TextBox receives focus (although the text is not selected). I've examined the contents of MicrosoftAjaxWebForms.js and see that the focus is set after the registered scripts run, but I'm my JavaScript skills are not strong enough to decode what all is happening here and why the selected text is unselected between the time it is selected and the end of the partial page postback. I've also tried using Firebug's JavaScript debugger and see that when my script runs the TextBox's text is selected. As I continue to step through it the text remains selected, but then after stepping off the last line of script (apparently) it all of the sudden gets unselected. Any ideas? I am pulling my hair out. Thanks in advance...
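
    A hedged variation on the timing workaround: instead of guessing a delay, register a handler for the PageRequestManager's endRequest event, which fires when the partial postback has finished, and still yield one tick with setTimeout(..., 0) on the assumption that the framework's own focus restoration is queued asynchronously. The script would be emitted once on the page rather than re-registered per postback; TextBox1 is the control from the question.

      // Sketch: run after the async postback completes instead of after a guessed delay.
      Sys.WebForms.PageRequestManager.getInstance().add_endRequest(function () {
          // yield one tick so any focus call queued by the framework runs first (assumption)
          window.setTimeout(function () {
              var box = document.getElementById('<%= TextBox1.ClientID %>');
              if (box) {
                  box.focus();
                  box.select();
              }
          }, 0);
      });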

    Read the article

  • dojo layout tutorial for version 1.7 doesn't work for 1.7.2

    - by Sheena
    This is sortof a continuation to dojo1.7 layout acting screwy. So I made some working widgets and tested them out, i then tried altering my work using the tutorial at http://dojotoolkit.org/documentation/tutorials/1.7/dijit_layout/ to make the layout nice. After failing at that in many interesting ways (thus my last question) I started on a new path. My plan is now to implement the layout tutorial example and then stick in my widgets. For some reason even following the tutorial wont work... everything loads then disappears and I'm left with a blank browser window. Any ideas? It just struck me that it could be browser compatibility issues, I'm working on Firefox 13.0.1. As far as I know Dojo is supposed to be compatible with this... anyway, have some code: HTML: <body class="claro"> <div id="appLayout" class="demoLayout" data-dojo-type="dijit.layout.BorderContainer" data-dojo-props="design: 'headline'"> <div class="centerPanel" data-dojo-type="dijit.layout.ContentPane" data-dojo-props="region: 'center'"> <div> <h4>Group 1 Content</h4> <p>stuff</p> </div> <div> <h4>Group 2 Content</h4> </div> <div> <h4>Group 3 Content</h4> </div> </div> <div class="edgePanel" data-dojo-type="dijit.layout.ContentPane" data-dojo-props="region: 'top'"> Header content (top) </div> <div id="leftCol" class="edgePanel" data-dojo-type="dijit.layout.ContentPane" data-dojo-props="region: 'left', splitter: true"> Sidebar content (left) </div> </div> </body> Dojo Configuration: var dojoConfig = { baseUrl: "${request.static_url('mega:static/js')}", //this is in a mako template tlmSiblingOfDojo: false, packages: [ { name: "dojo", location: "libs/dojo" }, { name: "dijit", location: "libs/dijit" }, { name: "dojox", location: "libs/dojox" }, ], parseOnLoad: true, has: { "dojo-firebug": true, "dojo-debug-messages": true }, async: true }; other js stuff: require(["dijit/layout/BorderContainer", "dijit/layout/TabContainer", "dijit/layout/ContentPane", "dojo/parser"]); css: html, body { height: 100%; margin: 0; overflow: hidden; padding: 0; } #appLayout { height: 100%; } #leftCol { width: 14em; }

    Read the article

  • jQuery post request is not sent until first post request is completed

    - by Champ
    I have a function which have a long execution time. public void updateCampaign() { context.Session[processId] = "0|Fetching Lead360 Campaign"; Lead360 objLead360 = new Lead360(); string campaignXML = objLead360.getCampaigns(); string todayDate = DateTime.Now.ToString("dd-MMMM-yyyy"); context.Session[processId] = "1|Creating File for Lead360 Campaign on " + todayDate; string fileName = HttpContext.Current.Server.MapPath("campaigns") + todayDate + ".xml"; objLead360.createFile(fileName, campaignXML); context.Session[processId] = "2|Reading The latest Lead360 Campaign"; string file = File.ReadAllText(fileName); context.Session[processId] = "3|Updating Lead360 Campaign"; string updateStatus = objLead360.updateCampaign(fileName); string[] statusArr = updateStatus.Split('|'); context.Session[processId] = "99|" + statusArr[0] + " New Inserted , " + statusArr[1] + " Updated , With " + statusArr[2] + " Error , "; } So to track the Progress of the function I wrote a another function public void getProgress() { if (context.Session[processId] == null) { string json = "{\"error\":true}"; Response.Write(json); Response.End(); }else{ string[] status = context.Session[processId].ToString().Split('|'); if (status[0] == "99") context.Session.Remove(processId); string json = "{\"error\":false,\"statuscode\":" + status[0] + ",\"statusmsz\":\"" + status[1] + "\" }"; Response.Write(json); Response.End(); } } To call this by jQuery post request is used reqUrl = "AjaxPages/lead360Campaign.aspx?processid=" + progressID + "&action=updatecampaign"; $.post(reqUrl); setTimeout(getProgress, 500); get getProgress is : function getProgress() { reqUrl = "AjaxPages/lead360Campaign.aspx?processid=" + progressID + "&action=getProgress"; $.post(reqUrl, function (response) { var progress = jQuery.parseJSON(response); console.log(progress) if (progress.error) { $("#fetchedCampaign .waitingMsz").html("Some error occured. Please try again later."); $("#fetchedCampaign .waitingMsz").css({ "background": "url(common/images/ajax_error.jpg) no-repeat center 6px" }); return; } if (progress.statuscode == 99) { $("#fetchedCampaign .waitingMsz").html("Update Status :"+ progress.statusmsz ); $("#fetchedCampaign .waitingMsz").css({ "background": "url(common/images/ajax_loded.jpg) no-repeat center 6px" }); return; } $("#fetchedCampaign .waitingMsz").html("Please Wait... " + progress.statusmsz); setTimeout(getProgress, 500); }); } But the problem is that I can't see the intermediate message. Only the last message is been displayed after a long lime of ajax loading message Also on the browser console I just see that after a long time first requested is completed and after that the second request is completed. but there should be for getProgress ? I have checked jquery.doc and it says that $post is an asynchronous request. Can anyone please explain what is wrong with the code or logic?

    Read the article

  • If array is thread safe, what's the issue with this function?

    - by Ajay Sharma
    I am totally lost with the things that is happening with my code.It make me to think & get clear with Array's thread Safe concept. Is NSMutableArray OR NSMutableDictionary Thread Safe ? While my code is under execution, the values for the MainArray get's changes although, that has been added to Array. Please try to execute this code, onyour system its very much easy.I am not able to get out of this Trap. It is the function where it is returning Array. What I am Looking to do is : -(Array) (Main Array) --(Dictionary) with Key Value (Multiple Dictionary in Main Array) ----- Above dictionary has 9 Arrays in it. This is the structure I am developing for Array.But even before #define TILE_ROWS 3 #define TILE_COLUMNS 3 #define TILE_COUNT (TILE_ROWS * TILE_COLUMNS) -(NSArray *)FillDataInArray:(int)counter { NSMutableArray *temprecord = [[NSMutableArray alloc] init]; for(int i = 0; i <counter;i++) { if([temprecord count]<=TILE_COUNT) { NSMutableDictionary *d1 = [[NSMutableDictionary alloc]init]; [d1 setValue:[NSString stringWithFormat:@"%d/2011",i+1] forKey:@"serial_data"]; [d1 setValue:@"Friday 13 Sep 12:00 AM" forKey:@"date_data"]; [d1 setValue:@"Description Details " forKey:@"details_data"]; [d1 setValue:@"Subject Line" forKey:@"subject_data"]; [temprecord addObject:d1]; d1= nil; [d1 release]; if([temprecord count]==TILE_COUNT) { NSMutableDictionary *holderKey = [[NSMutableDictionary alloc]initWithObjectsAndKeys:temprecord,[NSString stringWithFormat:@"%d",[casesListArray count]+1],nil]; [self.casesListArray addObject:holderKey]; [holderKey release]; holderKey =nil; [temprecord removeAllObjects]; } } else { [temprecord removeAllObjects]; NSMutableDictionary *d1 = [[NSMutableDictionary alloc]init]; [d1 setValue:[NSString stringWithFormat:@"%d/2011",i+1] forKey:@"serial_data"]; [d1 setValue:@"Friday 13 Sep 12:00 AM" forKey:@"date_data"]; [d1 setValue:@"Description Details " forKey:@"details_data"]; [d1 setValue:@"Subject Line" forKey:@"subject_data"]; [temprecord addObject:d1]; d1= nil; [d1 release]; } } return temprecord; [temprecord release]; } What is the problem with this Code ? Every time there are 9 records in Array, it just replaces the whole Array value instead of just for specific key Value.
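
    NSMutableArray and NSMutableDictionary are not documented as thread-safe, but the symptom described (every stored group ending up with the same values) usually is not a threading issue at all: the dictionary stores a reference to temprecord, so the later removeAllObjects / addObject: calls mutate what casesListArray already holds. A sketch of storing a snapshot instead, using manual retain/release to match the surrounding code:

      // Sketch: store an immutable snapshot instead of the reused mutable array,
      // so later removeAllObjects / addObject: calls cannot reach into casesListArray.
      NSArray *snapshot = [[temprecord copy] autorelease];
      NSMutableDictionary *holderKey =
          [[NSMutableDictionary alloc] initWithObjectsAndKeys:
              snapshot, [NSString stringWithFormat:@"%d", [casesListArray count] + 1], nil];
      [self.casesListArray addObject:holderKey];
      [holderKey release];
      [temprecord removeAllObjects];   // safe now: the stored snapshot is unaffected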

    Read the article

  • Neo4j 1.9.4 (REST Server, Cypher) performance issue

    - by user2968943
    I have Neo4j 1.9.4 installed on 24 core 24Gb ram (centos) machine and for most queries CPU usage spikes goes to 200% with only few concurrent requests. Domain: some sort of social application where few types of nodes(profiles) with 3-30 text/array properties and 36 relationship types with at least 3 properties. Most of nodes currently has ~300-500 relationships. Current data set footprint(from console): LogicalLogSize=4294907 (32MB) ArrayStoreSize=1675520 (12MB) NodeStoreSize=1342170 (10MB) PropertyStoreSize=1739548 (13MB) RelationshipStoreSize=6395202 (48MB) StringStoreSize=1478400 (11MB) which is IMHO really small. most queries looks like this one(with more or less WITH .. MATCH .. statements and few queries with variable length relations but the often fast): START targetUser=node({id}), currentUser=node({current}) MATCH targetUser-[contact:InContactsRelation]->n, n-[:InLocationRelation]->l, n-[:InCategoryRelation]->c WITH currentUser, targetUser,n, l,c, contact.fav is not null as inFavorites MATCH n<-[followers?:InContactsRelation]-() WITH currentUser, targetUser,n, l,c,inFavorites, COUNT(followers) as numFollowers RETURN id(n) as id, n.name? as name, n.title? as title, n._class as _class, n.avatar? as avatar, n.avatar_type? as avatar_type, l.name as location__name, c.name as category__name, true as isInContacts, inFavorites as isInFavorites, numFollowers it runs in ~1s-3s(for first run) and ~1s-70ms (for consecutive and it depends on query) and there is about 5-10 queries runs for each impression. Another interesting behavior is when i try run query from console(neo4j) on my local machine many consecutive times(just press ctrl+enter for few seconds) it has almost constant execution time but when i do it on server it goes slower exponentially and i guess it somehow related with my problem. Problem: So my problem is that neo4j is very CPU greedy(for 24 core machine its may be not an issue but its obviously overkill for small project). First time i used AWS EC2 m1.large instance but over all performance was bad, during testing, CPU always was over 100%. Some relevant parts of configuration: neostore.nodestore.db.mapped_memory=1280M wrapper.java.maxmemory=8192 note: I already tried configuration where all memory related parameters where HIGH and it didn't worked(no change at all). Question: Where to digg? configuration? scheme? queries? what i'm doing wrong? if need more info(logs, configs) just ask ;)

    Read the article

  • UIScrollView Infinite Scrolling

    - by Ben Robinson
    I'm attempting to setup a scrollview with infinite (horizontal) scrolling. Scrolling forward is easy - I have implemented scrollViewDidScroll, and when the contentOffset gets near the end I make the scrollview contentsize bigger and add more data into the space (i'll have to deal with the crippling effect this will have later!) My problem is scrolling back - the plan is to see when I get near the beginning of the scroll view, then when I do make the contentsize bigger, move the existing content along, add the new data to the beginning and then - importantly adjust the contentOffset so the data under the view port stays the same. This works perfectly if I scroll slowly (or enable paging) but if I go fast (not even very fast!) it goes mad! Heres the code: - (void) scrollViewDidScroll:(UIScrollView *)scrollView { float pageNumber = scrollView.contentOffset.x / 320; float pageCount = scrollView.contentSize.width / 320; if (pageNumber > pageCount-4) { //Add 10 new pages to end mainScrollView.contentSize = CGSizeMake(mainScrollView.contentSize.width + 3200, mainScrollView.contentSize.height); //add new data here at (320*pageCount, 0); } //*** the problem is here - I use updatingScrollingContent to make sure its only called once (for accurate testing!) if (pageNumber < 4 && !updatingScrollingContent) { updatingScrollingContent = YES; mainScrollView.contentSize = CGSizeMake(mainScrollView.contentSize.width + 3200, mainScrollView.contentSize.height); mainScrollView.contentOffset = CGPointMake(mainScrollView.contentOffset.x + 3200, 0); for (UIView *view in [mainContainerView subviews]) { view.frame = CGRectMake(view.frame.origin.x+3200, view.frame.origin.y, view.frame.size.width, view.frame.size.height); } //add new data here at (0, 0); } //** MY CHECK! NSLog(@"%f", mainScrollView.contentOffset.x); } As the scrolling happens the log reads: 1286.500000 1285.500000 1284.500000 1283.500000 1282.500000 1281.500000 1280.500000 Then, when pageNumber<4 (we're getting near the beginning): 4479.500000 4479.500000 Great! - but the numbers should continue to go down in the 4,000s but the next log entries read: 1278.000000 1277.000000 1276.500000 1275.500000 etc.... Continiuing from where it left off! Just for the record, if scrolled slowly the log reads: 1294.500000 1290.000000 1284.500000 1280.500000 4476.000000 4476.000000 4473.000000 4470.000000 4467.500000 4464.000000 4460.500000 4457.500000 etc.... Any ideas???? Thanks Ben.

    Read the article

  • CRM2011 - "The given key was not present in the dictionary"

    - by DJZorrow
    I am what you call a "n00b" in CRM plugin development. I am trying to write a plugin for Microsoft's Dynamics CRM 2011 that will create a new activity entity when you create a new contact. I want this activity entity to be associated with the contact entity. This is my current code: using System; using System.Collections.Generic; using System.Linq; using System.Text; using Microsoft.Xrm.Sdk; namespace ITPH_CRM_Deactivate_Account_SSP_Disable { public class SSPDisable_Plugin: IPlugin { public void Execute(IServiceProvider serviceProvider) { // Obtain the execution context from the service provider. IPluginExecutionContext context = (IPluginExecutionContext) serviceProvider.GetService(typeof(IPluginExecutionContext)); IOrganizationServiceFactory serviceFactory = (IOrganizationServiceFactory)serviceProvider.GetService(typeof(IOrganizationServiceFactory)); IOrganizationService service = serviceFactory.CreateOrganizationService(context.UserId); if (context.InputParameters.Contains("Target") && context.InputParameters["target"] is Entity) { Entity entity = context.InputParameters["Target"] as Entity; if (entity.LogicalName != "account") { return; } Entity followup = new Entity(); followup.LogicalName = "activitypointer"; followup.Attributes = new AttributeCollection(); followup.Attributes.Add("subject", "Created via Plugin."); followup.Attributes.Add("description", "This is generated by the magic of C# ..."); followup.Attributes.Add("scheduledstart", DateTime.Now.AddDays(3)); followup.Attributes.Add("actualend", DateTime.Now.AddDays(5)); if (context.OutputParameters.Contains("id")) { Guid regardingobjectid = new Guid(context.OutputParameters["id"].ToString()); string regardingobjectidType = "account"; followup["regardingobjectid"] = new EntityReference(regardingobjectidType, regardingobjectid); } service.Create(followup); } } } But when i try to run this code: I get an error when i try to create a new contact in the CRM environment. The error is: "The given key was not present in the dictionary" (Link *1). The error pops up right as i try to save the new contact. Link *1: http://puu.sh/4SXrW.png (Translated bold text: "Error on business process") Thanks for any help or suggestions :)
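
    One detail worth checking, offered as a hedged observation rather than a confirmed diagnosis: the code tests InputParameters.Contains("Target") but then reads InputParameters["target"]; the keys are case-sensitive, and indexing a key that is not present raises exactly "The given key was not present in the dictionary". A sketch with consistent casing, and with the logical-name check aligned to the stated goal of reacting to new contacts:

      // Sketch: same key, same casing, and guard before indexing.
      if (context.InputParameters.Contains("Target") &&
          context.InputParameters["Target"] is Entity)
      {
          Entity entity = (Entity)context.InputParameters["Target"];

          // The question talks about reacting to new contacts; the original
          // code compared against "account".
          if (entity.LogicalName != "contact")
              return;

          // ... build the follow-up entity and call service.Create(...) as above ...
      }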

    Read the article

  • SQL Server CTE referenced in self joins is slow

    - by Kharlos Dominguez
    Hello, I have written a table-valued UDF that starts by a CTE to return a subset of the rows from a large table. There are several joins in the CTE. A couple of inner and one left join to other tables, which don't contain a lot of rows. The CTE has a where clause that returns the rows within a date range, in order to return only the rows needed. I'm then referencing this CTE in 4 self left joins, in order to build subtotals using different criterias. The query is quite complex but here is a simplified pseudo-version of it WITH DataCTE as ( SELECT [columns] FROM table INNER JOIN table2 ON [...] INNER JOIN table3 ON [...] LEFT JOIN table3 ON [...] ) SELECT [aggregates_columns of each subset] FROM DataCTE Main LEFT JOIN DataCTE BananasSubset ON [...] AND Product = 'Bananas' AND Quality = 100 LEFT JOIN DataCTE DamagedBananasSubset ON [...] AND Product = 'Bananas' AND Quality < 20 LEFT JOIN DataCTE MangosSubset ON [...] GROUP BY [ I have the feeling that SQL Server gets confused and calls the CTE for each self join, which seems confirmed by looking at the execution plan, although I confess not being an expert at reading those. I would have assumed SQL Server to be smart enough to only perform the data retrieval from the CTE only once, rather than do it several times. I have tried the same approach but rather than using a CTE to get the subset of the data, I used the same select query as in the CTE, but made it output to a temp table instead. The version referring the CTE version takes 40 seconds. The version referring the temp table takes between 1 and 2 seconds. Why isn't SQL Server smart enough to keep the CTE results in memory? I like CTEs, especially in this case as my UDF is a table-valued one, so it allowed me to keep everything in a single statement. To use a temp table, I would need to write a multi-statement table valued UDF, which I find a slightly less elegant solution. Did some of you had this kind of performance issues with CTE, and if so, how did you get them sorted? Thanks, Kharlos

    Read the article

  • Trapping MySQL Warnings on Calls Wrapped in Classes -- Python

    - by chernevik
    I can't get Python's try/else blocks to catch MySQL warnings when the execution statements are wrapped in classes. I have a class that has as a MySQL connection object as an attribute, a MySQL cursor object as another, and a method that run queries through that cursor object. The cursor is itself wrapped in a class. These seem to run queries properly, but the MySQL warnings they generate are not caught as exceptions in a try/else block. Why don't the try/else blocks catch the warnings? How would I revise the classes or method calls to catch the warnings? Also, I've looked through the prominent sources and can't find a discussion that helps me understand this. I'd appreciate any reference that explains this. Please see code below. Apologies for verbosity, I'm newbie. #!/usr/bin/python import MySQLdb import sys import copy sys.path.append('../../config') import credentials as c # local module with dbase connection credentials #============================================================================= # CLASSES #------------------------------------------------------------------------ class dbMySQL_Connection: def __init__(self, db_server, db_user, db_passwd): self.conn = MySQLdb.connect(db_server, db_user, db_passwd) def getCursor(self, dict_flag=True): self.dbMySQL_Cursor = dbMySQL_Cursor(self.conn, dict_flag) return self.dbMySQL_Cursor def runQuery(self, qryStr, dict_flag=True): qry_res = runQueryNoCursor(qryStr=qryStr, \ conn=self, \ dict_flag=dict_flag) return qry_res #------------------------------------------------------------------------ class dbMySQL_Cursor: def __init__(self, conn, dict_flag=True): if dict_flag: dbMySQL_Cursor = conn.cursor(MySQLdb.cursors.DictCursor) else: dbMySQL_Cursor = conn.cursor() self.dbMySQL_Cursor = dbMySQL_Cursor def closeCursor(self): self.dbMySQL_Cursor.close() #============================================================================= # QUERY FUNCTIONS #------------------------------------------------------------------------------ def runQueryNoCursor(qryStr, conn, dict_flag=True): dbMySQL_Cursor = conn.getCursor(dict_flag) qry_res =runQueryFnc(qryStr, dbMySQL_Cursor.dbMySQL_Cursor) dbMySQL_Cursor.closeCursor() return qry_res #------------------------------------------------------------------------------ def runQueryFnc(qryStr, dbMySQL_Cursor): qry_res = {} qry_res['rows'] = dbMySQL_Cursor.execute(qryStr) qry_res['result'] = copy.deepcopy(dbMySQL_Cursor.fetchall()) qry_res['messages'] = copy.deepcopy(dbMySQL_Cursor.messages) qry_res['query_str'] = qryStr return qry_res #============================================================================= # USAGES qry = 'DROP DATABASE IF EXISTS database_of_armaments' dbConn = dbMySQL_Connection(**c.creds) def dbConnRunQuery(): # Does not trap an exception; warning displayed to standard error. try: dbConn.runQuery(qry) except: print "dbConn.runQuery() caught an exception." def dbConnCursorExecute(): # Does not trap an exception; warning displayed to standard error. dbConn.getCursor() # try/except block does catches error without this try: dbConn.dbMySQL_Cursor.dbMySQL_Cursor.execute(qry) except Exception, e: print "dbConn.dbMySQL_Cursor.execute() caught an exception." print repr(e) def funcRunQueryNoCursor(): # Does not trap an exception; no warning displayed try: res = runQueryNoCursor(qry, dbConn) print 'Try worked. %s' % res except Exception, e: print "funcRunQueryNoCursor() caught an exception." 
print repr(e) #============================================================================= if __name__ == '__main__': print '\n' print 'EXAMPLE -- dbConnRunQuery()' dbConnRunQuery() print '\n' print 'EXAMPLE -- dbConnCursorExecute()' dbConnCursorExecute() print '\n' print 'EXAMPLE -- funcRunQueryNoCursor()' funcRunQueryNoCursor() print '\n'
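
    The pattern in the examples (nothing is trapped, the message only shows on standard error) is consistent with MySQLdb reporting statements such as DROP DATABASE IF EXISTS on a non-existent database as Python warnings rather than exceptions, in which case no amount of class wrapping changes what try/except sees. A hedged sketch of escalating warnings into catchable exceptions, assuming an open cursor named cursor from the classes above:

      # Sketch: promote MySQLdb warnings to real exceptions so try/except can see them.
      import warnings
      import MySQLdb

      warnings.filterwarnings("error", category=MySQLdb.Warning)

      try:
          cursor.execute("DROP DATABASE IF EXISTS database_of_armaments")
      except MySQLdb.Warning, e:
          print "caught warning:", repr(e)      # reachable once warnings are errors
      except MySQLdb.Error, e:
          print "caught error:", repr(e)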

    Read the article

  • Why is my producer-consumer blocking?

    - by User007
    My code is here: http://pastebin.com/Fi3h0E0P Here is the output 0 Should we take order today (y or n): y Enter order number: 100 More customers (y or n): n Stop serving customers right now. Passing orders to cooker: There are total of 1 order(s) 1 Roger, waiter. I am processing order #100 The goal is waiter must take orders and then give them to the cook. The waiter has to wait cook finishes all pizza, deliver the pizza, and then take new orders. I asked how P-V work in my previous post here. I don't think it has anything to do with \n consuming? I tried all kinds of combination of wait(), but none work. Where did I make a mistake? The main part is here: //Producer process if(pid > 0) { while(1) { printf("0"); P(emptyShelf); // waiter as P finds no items on shelf; P(mutex); // has permission to use the shelf waiter_as_producer(); V(mutex); // cooker now can use the shelf V(orderOnShelf); // cooker now can pickup orders wait(); printf("2"); P(pizzaOnShelf); P(mutex); waiter_as_consumer(); V(mutex); V(emptyShelf); printf("3 "); } } if(pid == 0) { while(1) { printf("1"); P(orderOnShelf); // make sure there is an order on shelf P(mutex); //permission to work cooker_as_consumer(); // take order and put pizza on shelf printf("return from cooker"); V(mutex); //release permission printf("just released perm"); V(pizzaOnShelf); // pizza is now on shelf printf("after"); wait(); printf("4"); } } So I imagine this is the execution path: enter waiter_as_producer, then go to child process (cooker), then transfer the control back to parent, finish waiter_as_consumer, switch back to child. The two waits switch back to parent (like I said I tried all possible wait() combination...).

    Read the article

  • How to stream semi-live audio over the internet

    - by Thomas Tempelmann
    I want to write something like Skype, i.e. I have a constant audio stream on one computer and then recompress it in a format that's suitable for a latent internet connection, receive it on the other end and play it. Let's also assume that the internet connection is fairly modern and fast, i.e. DSL or alike, no slow connections over phone and such. The involved computers will also be rather modern (Dual Core Intel CPUs at 2GHz or more). I know how to handle the audio on the machines. What I don't know is how to transmit the audio in an efficient way. The challenges are: I'd like get good audio quality across the line. The stream should be received without drops. The stream may, however, be received with a little delay (a second delay is acceptable). I imagine that the transport software could first determine the average (and max) latency, then start the stream and tell the receiver to wait for that max latency before starting to play the audio. With that, if the latency doesn't get any higher, the entire stream will be playable on the other side without stutter or drops. If, due to unexpected IP latencies or blockages, the stream does get cut off, I want to be able to notice this so that I can take actions (e.g. abort the stream) and eventually start a new transmission. What are my options if I want do use ready-made software for the compression and tranmission? I have no intention to write my own audio compression engine, really. OTOH, I plan to sell the solution in a vertical market, meaning I can afford a few dollars of license fees per copy, but not $100s. I guess the simplest solution would be to just open a TCP stream, send a few packets back and forth to determine their running time (or even use UDP for that), then use the results as the guide for my max latency value, then simply fire the audio data in its raw form (uncompressed 16 bit stereo), along with a timing code over the TCP connection. The receiver reads the data and plays it with the pre-determined delay. That might just work with the type of fast connection I expect. I just wonder if there are better solutions to reach this goal, with better performance (lower latency) and less data (compressed). BTW, I first try to implement this on OS X, but might want to do it on Windows, too, if it proves successful.

    Read the article

  • How would you implement this "WorkerChain" functionality in .NET?

    - by Dan Tao
    Sorry for the vague question title -- not sure how to encapsulate what I'm asking below succinctly. (If someone with editing privileges can think of a more descriptive title, feel free to change it.) The behavior I need is this. I am envisioning a worker class that accepts a single delegate task in its constructor (for simplicity, I would make it immutable -- no more tasks can be added after instantiation). I'll call this task T. The class should have a simple method, something like GetToWork, that will exhibit this behavior: If the worker is not currently running T, then it will start doing so right now. If the worker is currently running T, then once it is finished, it will start T again immediately. GetToWork can be called any number of times while the worker is running T; the simple rule is that, during any execution of T, if GetToWork was called at least once, T will run again upon completion (and then if GetToWork is called while T is running that time, it will repeat itself again, etc.). Now, this is pretty straightforward with a boolean switch. But this class needs to be thread-safe, by which I mean, steps 1 and 2 above need to comprise atomic operations (at least I think they do). There is an added layer of complexity. I have need of a "worker chain" class that will consist of many of these workers linked together. As soon as the first worker completes, it essentially calls GetToWork on the worker after it; meanwhile, if its own GetToWork has been called, it restarts itself as well. Logically calling GetToWork on the chain is essentially the same as calling GetToWork on the first worker in the chain (I would fully intend that the chain's workers not be publicly accessible). One way to imagine how this hypothetical "worker chain" would behave is by comparing it to a team in a relay race. Suppose there are four runners, W1 through W4, and let the chain be called C. If I call C.StartWork(), what should happen is this: If W1 is at his starting point (i.e., doing nothing), he will start running towards W2. If W1 is already running towards W2 (i.e., executing his task), then once he reaches W2, he will signal to W2 to get started, immediately return to his starting point and, since StartWork has been called, start running towards W2 again. When W1 reaches W2's starting point, he'll immediately return to his own starting point. If W2 is just sitting around, he'll start running immediately towards W3. If W2 is already off running towards W3, then W2 will simply go again once he's reached W3 and returned to his starting point. The above is probably a little convoluted and written out poorly. But hopefully you get the basic idea. Obviously, these workers will be running on their own threads. Also, I guess it's possible this functionality already exists somewhere? If that's the case, definitely let me know!
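
    A minimal sketch of one way to get the "run again if poked while running" rule with a lock and a pending flag; the chaining is just each worker's completion callback calling the next worker's GetToWork. All names are invented and error handling is omitted:

      // Sketch: GetToWork either starts T or records that T must run once more.
      using System;
      using System.Threading;

      public sealed class Worker
      {
          private readonly Action _task;          // T
          private readonly Action _onFinished;    // e.g. the next worker's GetToWork
          private readonly object _gate = new object();
          private bool _running;
          private bool _pending;

          public Worker(Action task, Action onFinished = null)
          {
              _task = task;
              _onFinished = onFinished;
          }

          public void GetToWork()
          {
              lock (_gate)
              {
                  if (_running) { _pending = true; return; }   // rule 2: queue one more run
                  _running = true;                             // rule 1: start now
              }
              ThreadPool.QueueUserWorkItem(_ => RunLoop());
          }

          private void RunLoop()
          {
              while (true)
              {
                  _task();
                  if (_onFinished != null) _onFinished();      // hand off to the next worker
                  lock (_gate)
                  {
                      if (!_pending) { _running = false; return; }
                      _pending = false;                        // go round once more
                  }
              }
          }
      }

    A chain is then built back to front, so each worker can be handed the next one's GetToWork as its onFinished delegate.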

    Read the article

  • Java Socket Connection is flooding network OR resulting in high ping

    - by user1461100
    i have a little problem with my java socket code. I'm writing an android client application which is sending data to a java multithreaded socket server on my pc through direct(!) wireless connection. It works fine but i want to improve it for mobile applications as it is very power consuming by now. When i remove two special lines in my code, the cpu usage of my mobile device (htc one x) is totally okay but then my connection seems to have high ping rates or something like that... Here is a server code snippet where i receive the clients data: while(true) { try { .... Object obj = in.readObject(); if(obj != null) { Class clazz = obj.getClass(); String className = clazz.getName(); if(className.equals("java.lang.String")) { String cmd = (String)obj; if(cmd.equals("dc")) { System.out.println("Client "+id+" disconnected!"); Server.connectedClients[id-1] = false; break; } if(cmd.substring(0,1).equals("!")) { robot.keyRelease(PlayerEnum.getKey(cmd,id)); } else { robot.keyPress(PlayerEnum.getKey(cmd,id)); } } } } catch .... Heres the client part, where i send my data in a while loop: private void networking() { try { if(client != null) { .... out.writeObject(sendQueue.poll()); .... } } catch .... when i write it this why, i send data everytime the while loop gets executed.. when sendQueue is empty, a null "Object" will be send. this results in "high" network traffic and in "high" cpu usage. BUT: all send comments are received nearly immediately. when i change the code to following: while(true) ... if(sendQueue.peek() != null) { out.writeObject(sendQueue.poll()); } ... the cpu usage is totally okay but i'm getting some laggs.. the commands do not arrive fast enough.. as i said, it works fine (besides cpu usage) if i'm sending data(with that null objects) every while execution. but i'm sure that this is very rough coding style because i'm kind of flooding the network. any hints? what am i doing wrong?? Thanks for your Help! Sincerly yours, maaft
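
    The usual fix for this shape of problem is to make sendQueue a BlockingQueue and let the sender thread block in take() until there is something to send, instead of spinning and writing placeholder nulls. A hedged sketch, with class and field names invented:

      // Sketch: the sender thread parks on take() instead of busy-polling.
      import java.io.ObjectOutputStream;
      import java.util.concurrent.BlockingQueue;
      import java.util.concurrent.LinkedBlockingQueue;

      class Sender implements Runnable {
          private final BlockingQueue<String> sendQueue = new LinkedBlockingQueue<String>();
          private final ObjectOutputStream out;

          Sender(ObjectOutputStream out) { this.out = out; }

          void send(String cmd) { sendQueue.offer(cmd); }       // called from the game/UI code

          @Override
          public void run() {
              try {
                  while (true) {
                      String cmd = sendQueue.take();            // blocks until a command arrives
                      out.writeObject(cmd);
                      out.flush();                              // push it out immediately
                  }
              } catch (Exception e) {
                  // connection lost or interrupted - drop out of the loop
              }
          }
      }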

    Read the article
