Search Results

Search found 7651 results on 307 pages for 'execution plan'.


  • Choosing a distributed shared memory solution

    - by mindas
    I have a task to build a prototype for a massively scalable distributed shared memory (DSM) app. The prototype would only serve as a proof of concept, but I want to spend my time most effectively by picking the components which would be used in the real solution later on.

    The aim of this solution is to take data input from an external source, churn it, and make the result available to a number of frontends. Those "frontends" would just take the data from the cache and serve it without extra processing. The number of frontend hits on this data can literally be millions per second. The data itself is very volatile; it can (and does) change quite rapidly. However, the frontends should see "old" data until the newest has been processed and cached. The processing and writing is done by a single (redundant) node while other nodes only read the data. In other words: no read-through behaviour.

    I was looking into solutions like memcached; however, this particular one doesn't fulfil all our requirements, which are listed below:

    1. The solution must at least have a Java client API which is reasonably well maintained, as the rest of the app is written in Java and we are seasoned Java developers.
    2. The solution must be totally elastic: it should be possible to add new nodes without restarting other nodes in the cluster.
    3. The solution must be able to handle failover. Yes, I realize this means some overhead, but the overall served data size isn't big (1G max) so this shouldn't be a problem. By "failover" I mean seamless execution without hardcoding/changing server IP address(es), as you must in memcached clients when a node goes down.
    4. Ideally it should be possible to specify the degree of data overlapping (e.g. how many copies of the same data should be stored in the DSM cluster).
    5. There is no need to permanently store all the data, but there might be a need for post-processing of some of the data (e.g. serialization to the DB).
    6. Price. Obviously we prefer free/open source, but we're happy to pay a reasonable amount if a solution is worth it. In any case, a paid 24hr/day support contract is a must.
    7. The whole thing has to be hosted in our data centers, so SaaS offerings like Amazon SimpleDB are out of scope. We would only consider this if no other options were available.
    8. Ideally the solution would be strictly consistent (as in CAP); however, eventual consistency can be considered as an option.

    Thanks in advance for any ideas.
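
    (For what it's worth, in-memory data grids advertise exactly this combination of a Java client, elasticity, and configurable replication; Hazelcast is one frequently mentioned candidate. The snippet below is only a minimal sketch of that style of API - it assumes Hazelcast's Hazelcast/IMap classes and a made-up map name, and is not a claim that any particular product meets every requirement above.)

        import com.hazelcast.core.Hazelcast;
        import com.hazelcast.core.HazelcastInstance;
        import com.hazelcast.core.IMap;

        public class DsmSketch {
            public static void main(String[] args) {
                // Each node started this way discovers the cluster and joins it
                // without restarting existing members; the number of backup
                // copies per entry (the "degree of data overlapping") is set
                // in the cluster configuration.
                HazelcastInstance hz = Hazelcast.newHazelcastInstance();

                // The single writer node and the many reader frontends share
                // this distributed map.
                IMap<String, byte[]> cache = hz.getMap("processed-results");

                cache.put("latest-result", new byte[] {1, 2, 3}); // writer side
                byte[] snapshot = cache.get("latest-result");     // frontend side
                System.out.println("read " + snapshot.length + " bytes");
            }
        }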

  • ArithmeticException thrown during BigDecimal.divide

    - by polygenelubricants
    I thought java.math.BigDecimal is supposed to be The Answer™ to the need of performing infinite precision arithmetic with decimal numbers. Consider the following snippet:

        import java.math.BigDecimal;
        //...
        final BigDecimal one = BigDecimal.ONE;
        final BigDecimal three = BigDecimal.valueOf(3);
        final BigDecimal third = one.divide(three);
        assert third.multiply(three).equals(one); // this should pass, right?

    I expect the assert to pass, but in fact the execution doesn't even get there: one.divide(three) causes ArithmeticException to be thrown!

        Exception in thread "main" java.lang.ArithmeticException:
        Non-terminating decimal expansion; no exact representable decimal result.
            at java.math.BigDecimal.divide

    It turns out that this behavior is explicitly documented in the API:

        In the case of divide, the exact quotient could have an infinitely long
        decimal expansion; for example, 1 divided by 3. If the quotient has a
        non-terminating decimal expansion and the operation is specified to
        return an exact result, an ArithmeticException is thrown. Otherwise,
        the exact result of the division is returned, as done for other
        operations.

    Browsing around the API further, one finds that in fact there are various overloads of divide that perform inexact division, i.e.:

        final BigDecimal third = one.divide(three, 33, RoundingMode.DOWN);
        System.out.println(three.multiply(third)); // prints "0.999999999999999999999999999999999"

    Of course, the obvious question now is "What's the point???". I thought BigDecimal is the solution when we need exact arithmetic, e.g. for financial calculations. If we can't even divide exactly, then how useful can this be? Does it actually serve a general purpose, or is it only useful in a very niche application where you fortunately just don't need to divide at all?

    If this is not the right answer, what CAN we use for exact division in financial calculation? (I mean, I don't have a finance major, but they still use division, right???)
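
    (A note on the financial-calculation angle: the usual pattern is to fix the scale and rounding policy up front, either per division or via a MathContext, and to account for the rounded-away remainder explicitly. A small sketch:)

        import java.math.BigDecimal;
        import java.math.MathContext;
        import java.math.RoundingMode;

        public class FinancialDivision {
            public static void main(String[] args) {
                BigDecimal total = new BigDecimal("100.00");
                BigDecimal parts = BigDecimal.valueOf(3);

                // Pick a scale and rounding mode for this one division.
                BigDecimal share = total.divide(parts, 2, RoundingMode.HALF_EVEN);
                System.out.println(share); // 33.33

                // Or carry a MathContext through a longer calculation.
                BigDecimal precise = total.divide(parts, MathContext.DECIMAL128);
                System.out.println(precise.setScale(2, RoundingMode.HALF_EVEN)); // 33.33

                // The remainder stays recoverable, which is what "exact"
                // means in practice for money.
                BigDecimal remainder = total.subtract(share.multiply(parts));
                System.out.println(remainder); // 0.01
            }
        }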

  • dojo layout tutorial for version 1.7 doesn't work for 1.7.2

    - by Sheena
    This is sort of a continuation of "dojo1.7 layout acting screwy". So I made some working widgets and tested them out; I then tried altering my work using the tutorial at http://dojotoolkit.org/documentation/tutorials/1.7/dijit_layout/ to make the layout nice. After failing at that in many interesting ways (thus my last question) I started on a new path. My plan is now to implement the layout tutorial example and then stick in my widgets. For some reason even following the tutorial won't work... everything loads, then disappears, and I'm left with a blank browser window. Any ideas?

    It just struck me that it could be browser compatibility issues; I'm working on Firefox 13.0.1. As far as I know Dojo is supposed to be compatible with this... anyway, have some code.

    HTML:

        <body class="claro">
            <div id="appLayout" class="demoLayout" data-dojo-type="dijit.layout.BorderContainer" data-dojo-props="design: 'headline'">
                <div class="centerPanel" data-dojo-type="dijit.layout.ContentPane" data-dojo-props="region: 'center'">
                    <div>
                        <h4>Group 1 Content</h4>
                        <p>stuff</p>
                    </div>
                    <div>
                        <h4>Group 2 Content</h4>
                    </div>
                    <div>
                        <h4>Group 3 Content</h4>
                    </div>
                </div>
                <div class="edgePanel" data-dojo-type="dijit.layout.ContentPane" data-dojo-props="region: 'top'">
                    Header content (top)
                </div>
                <div id="leftCol" class="edgePanel" data-dojo-type="dijit.layout.ContentPane" data-dojo-props="region: 'left', splitter: true">
                    Sidebar content (left)
                </div>
            </div>
        </body>

    Dojo configuration:

        var dojoConfig = {
            baseUrl: "${request.static_url('mega:static/js')}", // this is in a mako template
            tlmSiblingOfDojo: false,
            packages: [
                { name: "dojo", location: "libs/dojo" },
                { name: "dijit", location: "libs/dijit" },
                { name: "dojox", location: "libs/dojox" },
            ],
            parseOnLoad: true,
            has: {
                "dojo-firebug": true,
                "dojo-debug-messages": true
            },
            async: true
        };

    Other JS:

        require(["dijit/layout/BorderContainer", "dijit/layout/TabContainer",
                 "dijit/layout/ContentPane", "dojo/parser"]);

    CSS:

        html, body {
            height: 100%;
            margin: 0;
            overflow: hidden;
            padding: 0;
        }
        #appLayout {
            height: 100%;
        }
        #leftCol {
            width: 14em;
        }

  • System.MissingMemberException was unhandled by user code

    - by AmRoSH
    I'm using this code:

        Dim VehiclesTable1 = dsVehicleList.Tables(0)
        Dim VT1 = (From d In VehiclesTable1.AsEnumerable _
                   Select VehicleTypeName = d.Item("VehicleTypeName") _
                   , VTypeID = d.Item("VTypeID") _
                   , ImageURL = d.Item("ImageURL") _
                   , DailyRate = d.Item("DailyRate") _
                   , RateID = d.Item("RateID")).Distinct

    It's LINQ to DataSet, and I take the data on this Rotator:

        <telerik:RadRotator ID="RadRotatorVehicleType" runat="server" Width="620px" Height="145" ItemWidth="155"
            ItemHeight="145" ScrollDirection="Left" FrameDuration="1" RotatorType="Buttons">
            <ItemTemplate>
                <div style="text-align: center; cursor: pointer; width: 150px">
                    <asp:Image ID="ImageVehicleType" runat="server" Width="150" ImageUrl='<%# Container.DataItem("ImageURL") %>' />
                    <asp:Label ID="lblVehicleType" runat="server" Text='<%# Container.DataItem("VehicleTypeName") %>' Font-Bold="true"></asp:Label>
                    <br />
                    <asp:Label ID="lblDailyRate" runat="server" Text='<%# Container.DataItem("DailyRate") %>' Visible="False"></asp:Label>
                    <input id="HiddenVehicleTypeID" type="hidden" value='<%# Container.DataItem("VTypeID") %>' name="HiddenVehicleTypeID" runat="server" />
                    <input id="HiddenRateID" type="hidden" value='<%# Container.DataItem("RateID") %>' name="HiddenRateID" runat="server" />
                </div>
            </ItemTemplate>
            <ControlButtons LeftButtonID="img_left" RightButtonID="img_right" />
        </telerik:RadRotator>

    And I got this exception:

        No default member found for type 'VB$AnonymousType_0(Of Object,Object,Object,Object,Object)'.
        Description: An unhandled exception occurred during the execution of the current web request.
        Please review the stack trace for more information about the error and where it originated in the code.
        Exception Details: System.MissingMemberException: No default member found for type
        'VB$AnonymousType_0(Of Object,Object,Object,Object,Object)'.

    I don't know what's up. Any help please?

    Thanks to everyone who tried to solve this, but I got the solution: using

        <%# DataBinder.Eval(Container.DataItem, "ImageURL") %>

    instead of

        <%# Container.DataItem("ImageURL") %>

    Thanks,

  • How to access controller dynamic properties within a base controller's constructor in Grails?

    - by 4h34d
    Basically, I want to be able to assign objects created within filters to members in a base controller from which every controller extends. Is there any possible way to do that? Here's how I tried, but I haven't managed to make it work.

    What I'm trying to achieve is to have all my controllers extend a base controller. The base controller's constructor would be used to assign values to its members, those values being pulled from the session map. Example below.

    File grails-app/controllers/HomeController.groovy:

        class HomeController extends BaseController {
            def index = { render username }
        }

    File grails-app/controllers/BaseController.groovy:

        abstract class BaseController {
            public String username

            public BaseController() {
                username = session.username
            }
        }

    When running the app, the output shown is:

        2010-06-15 18:17:16,671 [main] ERROR [localhost].[/webapp] - Exception sending context initialized event to listener instance of class org.codehaus.groovy.grails.web.context.GrailsContextLoaderListener
        org.springframework.beans.factory.BeanCreationException: Error creating bean with name 'pluginManager' defined in ServletContext resource [/WEB-INF/applicationContext.xml]: Invocation of init method failed; nested exception is java.lang.RuntimeException: Unable to locate constructor with Class parameter for class org.codehaus.groovy.grails.commons.DefaultGrailsControllerClass
        ...
        Caused by: java.lang.RuntimeException: Unable to locate constructor with Class parameter for class org.codehaus.groovy.grails.commons.DefaultGrailsControllerClass
        ...
        Caused by: java.lang.reflect.InvocationTargetException
        ...
        Caused by: org.codehaus.groovy.grails.exceptions.NewInstanceCreationException: Could not create a new instance of class [com.my.package.controller.HomeController]!
        ...
        Caused by: groovy.lang.MissingPropertyException: No such property: session for class: com.my.package.controller.HomeController
            at com.my.package.controller.BaseController.<init>(BaseController.groovy:16)
            at com.my.package.controller.HomeController.<init>(HomeController.groovy)
        ...
        2010-06-15 18:17:16,687 [main] ERROR core.StandardContext - Error listenerStart
        2010-06-15 18:17:16,687 [main] ERROR core.StandardContext - Context [/webapp] startup failed due to previous errors

    And the app won't run. This is just an example: in my case I wouldn't want to assign a username to a string value, but rather a few objects pulled from the session map. The objects pulled from the session map are being set within filters.

    The alternative I see is being able to access the controller's instance within the filter's execution. Is that possible? Please help! Thanks a bunch!

  • Which programming language to choose? (for a specific problem/domain, details inside)

    - by Bijan
    I am building a trading portfolio management system that is responsible for production, optimization, and simulation of non-high-frequency trading portfolios (dealing with 1-min or 3-min bars of data, not tick data). I plan on employing Amazon Web Services to take on the entire load of the application.

    I have four choices that I am considering as the language:

    a) Java
    b) C++
    c) C#
    d) Python

    Here are the extremes of the project scope. This isn't how it will be, maybe ever, but it's within the scope of the requirements:

    - Weekly simulation of 10,000,000 trading systems. (Each trading system is expected to have its own data mining methods, including feature selection algorithms which are extremely computationally expensive. Imagine 500-5000 features using wrappers. These are not run often by any means, but they're still a consideration.)
    - Real-time production of a portfolio with 100,000 trading strategies.
    - Taking in 1-min or 3-min data from every stock/futures market around the globe (approx 100,000).
    - Portfolio optimization of portfolios with up to 100,000 strategies (a rather intensive algorithm).

    Speed is a concern, but I believe that Java can handle the load. I just want to make sure that Java CAN handle the above requirements comfortably. I don't want to do the project in C++, but I will if it's required. The reason C# is on there is because I thought it was a good alternative to Java, even though I don't like Windows at all and would prefer Java if all things are the same. Python - I've read some things on PyPy and Psyco that claim Python can be optimized with JIT compiling to run at near C-like speeds... That's pretty much the only reason it is on this list, besides the fact that Python is a great language and would probably be the most enjoyable language to code in, which is not a factor at all for this project, but a perk.

    To sum up:
    - real-time production
    - weekly simulations of a large number of systems
    - weekly/monthly optimizations of portfolios
    - large numbers of connections to collect data from

    There is no dealing with millisecond or even second-based trades. The only consideration is whether Java can possibly deal with this kind of load when spread out over a necessary number of EC2 servers. Thank you guys so much for your wisdom.

  • VB6 ADO Command to SQL Server

    - by Emtucifor
    I'm getting an inexplicable error with an ADO command in VB6 run against a SQL Server 2005 database. Here's some code to demonstrate the problem:

        Sub ADOCommand()
            Dim Conn As ADODB.Connection
            Dim Rs As ADODB.Recordset
            Dim Cmd As ADODB.Command
            Dim ErrorAlertID As Long
            Dim ErrorTime As Date

            Set Conn = New ADODB.Connection
            Conn.ConnectionString = "Provider=SQLOLEDB.1;Integrated Security=SSPI;Initial Catalog=database;Data Source=server"
            Conn.CursorLocation = adUseClient
            Conn.Open

            Set Rs = New ADODB.Recordset
            Rs.CursorType = adOpenStatic
            Rs.LockType = adLockReadOnly

            Set Cmd = New ADODB.Command
            With Cmd
                .Prepared = False
                .CommandText = "ErrorAlertCollect"
                .CommandType = adCmdStoredProc
                .NamedParameters = True
                .Parameters.Append .CreateParameter("@ErrorAlertID", adInteger, adParamOutput)
                .Parameters.Append .CreateParameter("@CreateTime", adDate, adParamOutput)
                Set .ActiveConnection = Conn
                Rs.Open Cmd
                ErrorAlertID = .Parameters("@ErrorAlertID").Value
                ErrorTime = .Parameters("@CreateTime").Value
            End With

            Debug.Print Rs.State       ' Shows 0 - Closed
            Debug.Print Rs.RecordCount ' Of course this fails since the recordset is closed
        End Sub

    So this code was working not too long ago but now it's failing on the last line with the error:

        Run-time error '3704': Operation is not allowed when the object is closed

    Why is it closed? I just opened it and the SP returns rows. I ran a trace, and this is what the ADO library is actually submitting to the server:

        declare @p1 int
        set @p1=1
        declare @p2 datetime
        set @p2=''2010-04-22 15:31:07:770''
        exec ErrorAlertCollect @ErrorAlertID=@p1 output,@CreateTime=@p2 output
        select @p1, @p2

    Running this as a separate batch from my query editor yields:

        Msg 102, Level 15, State 1, Line 4
        Incorrect syntax near '2010'.

    Of course there's an error. Look at the double single quotes in there. What the heck could be causing that? I tried using adDBDate and adDBTime as data types for the date parameter, and they give the same results. When I make the parameters adParamInputOutput, I get this:

        declare @p1 int
        set @p1=default
        declare @p2 datetime
        set @p2=default
        exec ErrorAlertCollect @ErrorAlertID=@p1 output,@CreateTime=@p2 output
        select @p1, @p2

    Running that as a separate batch yields:

        Msg 156, Level 15, State 1, Line 2
        Incorrect syntax near the keyword 'default'.
        Msg 156, Level 15, State 1, Line 4
        Incorrect syntax near the keyword 'default'.

    What the heck? SQL Server doesn't support this kind of syntax. You can only use the DEFAULT keyword in the actual SP execution statement. I should note that removing the extra single quotes from the above statement makes the SP run fine.

    ... Oh my. I just figured it out. I guess it's worth posting anyway.

  • sharepoint custom aspx page with database connection

    - by Megini
    Hi there. I have created a custom ASPX page within my SharePoint site, with a SQL Server connection to a database on that server to select data. When I view the page it works, but when another user tries to view it, it gives the following error:

        Server Error in '/' Application.
        Login failed for user 'GRINCOR\GuguK'.
        Description: An unhandled exception occurred during the execution of the current web request.
        Please review the stack trace for more information about the error and where it originated in the code.
        Exception Details: System.Data.SqlClient.SqlException: Login failed for user 'GRINCOR\GuguK'.

        Source Error:
        The source code that generated this unhandled exception can only be shown when compiled in debug
        mode. To enable this, please follow one of the below steps, then request the URL:
        1) Add a "Debug=true" directive at the top of the file that generated the error.
           Example: <%@ Page Language="C#" Debug="true" %>
        or:
        2) Add the following section to the configuration file of your application:
        Note that this second technique will cause all files within a given application to be compiled in
        debug mode. The first technique will cause only that particular file to be compiled in debug mode.
        Important: Running applications in debug mode does incur a memory/performance overhead. You should
        make sure that an application has debugging disabled before deploying into production scenario.

        Stack Trace:
        [SqlException (0x80131904): Login failed for user 'GRINCOR\GuguK'.]
           System.Data.SqlClient.SqlInternalConnection.OnError(SqlException exception, Boolean breakConnection) +248
           System.Data.SqlClient.TdsParser.ThrowExceptionAndWarning(TdsParserStateObject stateObj) +245
           System.Data.SqlClient.TdsParser.Run(RunBehavior runBehavior, SqlCommand cmdHandler, SqlDataReader dataStream, BulkCopySimpleResultSet bulkCopyHandler, TdsParserStateObject stateObj) +2811
           System.Data.SqlClient.SqlInternalConnectionTds.CompleteLogin(Boolean enlistOK) +53
           System.Data.SqlClient.SqlInternalConnectionTds.AttemptOneLogin(ServerInfo serverInfo, String newPassword, Boolean ignoreSniOpenTimeout, Int64 timerExpire, SqlConnection owningObject) +327
           System.Data.SqlClient.SqlInternalConnectionTds.LoginNoFailover(String host, String newPassword, Boolean redirectedUserInstance, SqlConnection owningObject, SqlConnectionString connectionOptions, Int64 timerStart) +2445370
           System.Data.SqlClient.SqlInternalConnectionTds.OpenLoginEnlist(SqlConnection owningObject, SqlConnectionString connectionOptions, String newPassword, Boolean redirectedUserInstance) +2445224
           System.Data.SqlClient.SqlInternalConnectionTds..ctor(DbConnectionPoolIdentity identity, SqlConnectionString connectionOptions, Object providerInfo, String newPassword, SqlConnection owningObject, Boolean redirectedUserInstance) +354
           System.Data.SqlClient.SqlConnectionFactory.CreateConnection(DbConnectionOptions options, Object poolGroupProviderInfo, DbConnectionPool pool, DbConnection owningConnection) +703
           System.Data.ProviderBase.DbConnectionFactory.CreatePooledConnection(DbConnection owningConnection, DbConnectionPool pool, DbConnectionOptions options) +54
           System.Data.ProviderBase.DbConnectionPool.CreateObject(DbConnection owningObject) +2414776
           System.Data.ProviderBase.DbConnectionPool.UserCreateRequest(DbConnection owningObject) +92
           System.Data.ProviderBase.DbConnectionPool.GetConnection(DbConnection owningObject) +1657
           System.Data.ProviderBase.DbConnectionFactory.GetConnection(DbConnection owningConnection) +84
           System.Data.ProviderBase.DbConnectionClosed.OpenConnection(DbConnection outerConnection, DbConnectionFactory connectionFactory) +1645767
           System.Data.SqlClient.SqlConnection.Open() +258
           ASP.d7922f0d_ac20_4f87_91a2_a99a52c2b2fa__233736835.DisplayData() in C:\inetpub\wwwroot\wss\VirtualDirectories\80\sites\hrportal2\tester.aspx:151
           ASP.d7922f0d_ac20_4f87_91a2_a99a52c2b2fa_233736835._RenderMain(HtmlTextWriter __w, Control parameterContainer) in C:\inetpub\wwwroot\wss\VirtualDirectories\80\sites\hrportal2\tester.aspx:346
           System.Web.UI.Control.RenderChildrenInternal(HtmlTextWriter writer, ICollection children) +115
           System.Web.UI.Control.RenderChildrenInternal(HtmlTextWriter writer, ICollection children) +240
           System.Web.UI.HtmlControls.HtmlContainerControl.Render(HtmlTextWriter writer) +42
           System.Web.UI.Control.RenderChildrenInternal(HtmlTextWriter writer, ICollection children) +240
           System.Web.UI.HtmlControls.HtmlForm.RenderChildren(HtmlTextWriter writer) +253
           System.Web.UI.HtmlControls.HtmlForm.Render(HtmlTextWriter output) +87
           System.Web.UI.HtmlControls.HtmlForm.RenderControl(HtmlTextWriter writer) +53
           System.Web.UI.Control.RenderChildrenInternal(HtmlTextWriter writer, ICollection children) +240
           System.Web.UI.HtmlControls.HtmlContainerControl.Render(HtmlTextWriter writer) +42
           System.Web.UI.Control.RenderChildrenInternal(HtmlTextWriter writer, ICollection children) +240
           System.Web.UI.Control.RenderChildrenInternal(HtmlTextWriter writer, ICollection children) +240
           System.Web.UI.Page.Render(HtmlTextWriter writer) +38
           System.Web.UI.Page.ProcessRequestMain(Boolean includeStagesBeforeAsyncPoint, Boolean includeStagesAfterAsyncPoint) +4240

        Version Information: Microsoft .NET Framework Version:2.0.50727.3603; ASP.NET Version:2.0.50727.3601

    Can someone give me a solution to this problem? I am using SharePoint Services 3.0.

  • How to optimize this SQL query for a rectangular region?

    - by Andrew B.
    I'm trying to optimize the following query, but it's not clear to me which index or indexes would be best. I'm storing tiles in a two-dimensional plane and querying for rectangular regions of that plane. The table has, for the purposes of this question, the following columns:

    - id: a primary key integer
    - world_id: an integer foreign key which acts as a namespace for a subset of tiles
    - tileY: the Y-coordinate integer
    - tileX: the X-coordinate integer
    - value: the contents of this tile, a varchar if it matters

    I have the following indexes:

        "ywot_tile_pkey" PRIMARY KEY, btree (id)
        "ywot_tile_world_id_key" UNIQUE, btree (world_id, "tileY", "tileX")
        "ywot_tile_world_id" btree (world_id)

    And this is the query I'm trying to optimize:

        ywot=> EXPLAIN ANALYZE SELECT * FROM "ywot_tile" WHERE ("world_id" = 27685 AND "tileY" <= 6 AND "tileX" <= 9 AND "tileX" >= -2 AND "tileY" >= -1 );
                                                                          QUERY PLAN
        -------------------------------------------------------------------------------------------------------------------------------------------
         Bitmap Heap Scan on ywot_tile  (cost=11384.13..149421.27 rows=65989 width=168) (actual time=79.646..80.075 rows=96 loops=1)
           Recheck Cond: ((world_id = 27685) AND ("tileY" <= 6) AND ("tileY" >= (-1)) AND ("tileX" <= 9) AND ("tileX" >= (-2)))
           ->  Bitmap Index Scan on ywot_tile_world_id_key  (cost=0.00..11367.63 rows=65989 width=0) (actual time=79.615..79.615 rows=125 loops=1)
                 Index Cond: ((world_id = 27685) AND ("tileY" <= 6) AND ("tileY" >= (-1)) AND ("tileX" <= 9) AND ("tileX" >= (-2)))
         Total runtime: 80.194 ms

    So the world is fixed, and we are querying for a rectangular region of tiles. Some more information that might be relevant:

    - All the tiles for a queried region may or may not be present.
    - The height and width of a queried rectangle are typically about 10x10-20x20.
    - For any given (world, X) or (world, Y) pair, there may be an unbounded number of matching tiles, but the worst case is currently around 10,000, and typically there are far fewer.
    - New tiles are created far less frequently than existing ones are updated (changing the 'value'), and that itself is far less frequent than just reading, as in the query above.

    The only thing I can think of would be to index on (world, X) and (world, Y). My guess is that the database would be able to take those two sets and intersect them. The problem is that there is a potentially unbounded number of matches for either of those. Is there some other kind of index that would be more appropriate?

  • A scheme for expiring downloaded content?

    - by Chad Johnson
    I am going to offer a web API service that allows users to download and "rent" content for a monthly subscription fee. The API will either be open to everyone or possibly just select parties (not sure yet). Each developer must agree to a license, and they receive a developer key for their person. Each software application will have its own key as well. End-users will then download the software, which will interact with my service's API. Each user will have a key for each application as well (probably using OAuth).

    Content will be cached on first download and accessible offline via just the third-party application that cached the content. If a user cancels their subscription, I plan on doing the following:

    1. Deactivate the user's OAuth key for all applications.
    2. Do not allow the user's account to download new content via the API (and subsequently any software that uses the API).

    Now, the big question is: how do I make content expire if they cancel their subscription? If they cancel, they should not have access to content anymore. Here are ideas I've thought of (some of these are half-solutions, not yet fully fleshed out):

    1. Require that applications encrypt downloaded content using the user's OAuth key, making it available only to the application. This will prevent most users from going to the cache directory and just copying and keeping files.
    2. Update the user's key once a month, forcing content to re-cache on a monthly basis. Users could then access content for a month after they cancel their subscription.
    3. Require applications to "phone home" [to the service] periodically and check whether the user's subscription has terminated. If so, require in the API developer license that applications expire the cache. If it is found that applications do not comply, their keys (and possibly keys for all developers) are permanently deactivated as a consequence.

    One major worry is that some applications may blatantly ignore the constraints of the license. Is it generally acceptable to rely on applications abiding by the licensing constraints? Bad idea?

    Any other ideas? Maybe a way to make content auto-expire after x days? Something else? I'm open to out-of-the-box ideas.
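
    (One concrete building block for ideas 2 and 3 is a key that carries its own expiry date and can be checked offline. A minimal sketch using the standard javax.crypto HMAC API - the token format and all names here are hypothetical, and a production version would want a constant-time comparison:)

        import java.nio.charset.StandardCharsets;
        import java.util.Base64;
        import javax.crypto.Mac;
        import javax.crypto.spec.SecretKeySpec;

        public class ExpiringKey {
            // Issued by the service: "userId:expiresAtMillis:signature".
            static String issue(String userId, long expiresAt, byte[] secret) throws Exception {
                String payload = userId + ":" + expiresAt;
                Mac mac = Mac.getInstance("HmacSHA256");
                mac.init(new SecretKeySpec(secret, "HmacSHA256"));
                byte[] sig = mac.doFinal(payload.getBytes(StandardCharsets.UTF_8));
                return payload + ":" + Base64.getEncoder().encodeToString(sig);
            }

            // Checked by the client application before serving cached content.
            static boolean stillValid(String token, byte[] secret) throws Exception {
                String[] parts = token.split(":", 3);
                if (parts.length != 3) return false;
                long expiresAt = Long.parseLong(parts[1]);
                if (System.currentTimeMillis() > expiresAt) return false; // key lapsed
                // Recompute and compare the signature so a user cannot simply
                // edit the expiry date inside the cached token.
                return issue(parts[0], expiresAt, secret).equals(token);
            }
        }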

  • Why is my producer-consumer blocking?

    - by User007
    My code is here: http://pastebin.com/Fi3h0E0P

    Here is the output:

        0
        Should we take order today (y or n): y
        Enter order number: 100
        More customers (y or n): n
        Stop serving customers right now.
        Passing orders to cooker:
        There are total of 1 order(s)
        1
        Roger, waiter. I am processing order #100

    The goal is that the waiter must take orders and then give them to the cook. The waiter has to wait for the cook to finish all pizzas, deliver the pizzas, and then take new orders. I asked how P-V works in my previous post here. I don't think it has anything to do with \n consuming? I tried all kinds of combinations of wait(), but none work. Where did I make a mistake? The main part is here:

        //Producer process
        if(pid > 0) {
            while(1) {
                printf("0");
                P(emptyShelf);       // waiter as P finds no items on shelf;
                P(mutex);            // has permission to use the shelf
                waiter_as_producer();
                V(mutex);            // cooker now can use the shelf
                V(orderOnShelf);     // cooker now can pickup orders
                wait();
                printf("2");
                P(pizzaOnShelf);
                P(mutex);
                waiter_as_consumer();
                V(mutex);
                V(emptyShelf);
                printf("3 ");
            }
        }

        if(pid == 0) {
            while(1) {
                printf("1");
                P(orderOnShelf);      // make sure there is an order on shelf
                P(mutex);             // permission to work
                cooker_as_consumer(); // take order and put pizza on shelf
                printf("return from cooker");
                V(mutex);             // release permission
                printf("just released perm");
                V(pizzaOnShelf);      // pizza is now on shelf
                printf("after");
                wait();
                printf("4");
            }
        }

    So I imagine this is the execution path: enter waiter_as_producer, then go to the child process (cooker), then transfer control back to the parent, finish waiter_as_consumer, switch back to the child. The two waits switch back to the parent (like I said, I tried all possible wait() combinations...).
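
    (For comparison, here is a hedged Java analogue of the intended handoff using java.util.concurrent.Semaphore. The point is that the three semaphores alone sequence the waiter and the cook - no wait() calls are needed - and the mutex is omitted here because the single slot is never touched by both sides at once. All names are mine, not from the original C code:)

        import java.util.concurrent.Semaphore;

        public class WaiterCook {
            static final Semaphore emptyShelf = new Semaphore(1);   // shelf starts empty
            static final Semaphore orderOnShelf = new Semaphore(0);
            static final Semaphore pizzaOnShelf = new Semaphore(0);
            static String shelf; // single shared slot, guarded by the semaphores

            public static void main(String[] args) throws InterruptedException {
                Thread cook = new Thread(() -> {
                    try {
                        while (true) {
                            orderOnShelf.acquire();                 // P(orderOnShelf)
                            shelf = shelf.replace("order", "pizza");
                            pizzaOnShelf.release();                 // V(pizzaOnShelf)
                        }
                    } catch (InterruptedException e) { /* done */ }
                });
                cook.setDaemon(true);
                cook.start();

                for (int order = 100; order < 103; order++) {
                    emptyShelf.acquire();                           // P(emptyShelf)
                    shelf = "order #" + order;
                    orderOnShelf.release();                         // V(orderOnShelf)

                    pizzaOnShelf.acquire();    // blocks until the cook is done
                    System.out.println("delivered " + shelf);
                    emptyShelf.release();                           // V(emptyShelf)
                }
            }
        }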

  • jQuery post request is not sent until first post request is completed

    - by Champ
    I have a function which has a long execution time:

        public void updateCampaign()
        {
            context.Session[processId] = "0|Fetching Lead360 Campaign";
            Lead360 objLead360 = new Lead360();
            string campaignXML = objLead360.getCampaigns();
            string todayDate = DateTime.Now.ToString("dd-MMMM-yyyy");
            context.Session[processId] = "1|Creating File for Lead360 Campaign on " + todayDate;
            string fileName = HttpContext.Current.Server.MapPath("campaigns") + todayDate + ".xml";
            objLead360.createFile(fileName, campaignXML);
            context.Session[processId] = "2|Reading The latest Lead360 Campaign";
            string file = File.ReadAllText(fileName);
            context.Session[processId] = "3|Updating Lead360 Campaign";
            string updateStatus = objLead360.updateCampaign(fileName);
            string[] statusArr = updateStatus.Split('|');
            context.Session[processId] = "99|" + statusArr[0] + " New Inserted , " + statusArr[1] + " Updated , With " + statusArr[2] + " Error , ";
        }

    So to track the progress of the function I wrote another function:

        public void getProgress()
        {
            if (context.Session[processId] == null)
            {
                string json = "{\"error\":true}";
                Response.Write(json);
                Response.End();
            }
            else
            {
                string[] status = context.Session[processId].ToString().Split('|');
                if (status[0] == "99")
                    context.Session.Remove(processId);
                string json = "{\"error\":false,\"statuscode\":" + status[0] + ",\"statusmsz\":\"" + status[1] + "\" }";
                Response.Write(json);
                Response.End();
            }
        }

    To call this, a jQuery post request is used:

        reqUrl = "AjaxPages/lead360Campaign.aspx?processid=" + progressID + "&action=updatecampaign";
        $.post(reqUrl);
        setTimeout(getProgress, 500);

    And getProgress is:

        function getProgress() {
            reqUrl = "AjaxPages/lead360Campaign.aspx?processid=" + progressID + "&action=getProgress";
            $.post(reqUrl, function (response) {
                var progress = jQuery.parseJSON(response);
                console.log(progress)
                if (progress.error) {
                    $("#fetchedCampaign .waitingMsz").html("Some error occured. Please try again later.");
                    $("#fetchedCampaign .waitingMsz").css({ "background": "url(common/images/ajax_error.jpg) no-repeat center 6px" });
                    return;
                }
                if (progress.statuscode == 99) {
                    $("#fetchedCampaign .waitingMsz").html("Update Status :" + progress.statusmsz);
                    $("#fetchedCampaign .waitingMsz").css({ "background": "url(common/images/ajax_loded.jpg) no-repeat center 6px" });
                    return;
                }
                $("#fetchedCampaign .waitingMsz").html("Please Wait... " + progress.statusmsz);
                setTimeout(getProgress, 500);
            });
        }

    But the problem is that I can't see the intermediate messages. Only the last message is displayed, after a long time of the ajax loading message. Also, on the browser console I just see that after a long time the first request is completed, and after that the second request is completed - but there should be several for getProgress, shouldn't there? I have checked the jQuery docs and they say that $.post is an asynchronous request. Can anyone please explain what is wrong with the code or logic?

  • USB windows xp final USB access issues

    - by Lex Dean
    I basically understand you C++ people. Please do not get distracted because I'm writing in Delphi.

    I have a stable USB listing method that accesses all my USB devices. I get the DevicePath, and this structure:

        TSPDevInfoData = packed record
            Size: DWORD;
            ClassGuid: TGUID;
            DevInst: DWORD; // DEVINST handle
            Reserved: DWord;
        end;

    I get my ProductID and VendorID successfully from my DevicePath, and I list all USB devices connected to the computer at the time. That enables me to access the registry data of each device in a stable way.

    What I'm lacking is a little direction:

    1. Is the friendly name able to be written inside the connected USB microchips by the firmware programmer? (I'm thinking of this to identify the device even further - or is this to help identify bulk data transfer devices like memory sticks and cameras?)
    2. Can I use SPDRP_REMOVAL_POLICY_OVERRIDE to somehow reset these policies? What else can I do with the registry details?
    3. Identifying when someone unplugs a device the program is using (in Windows XP Standard). I used a documented Windows event that did not respond. Can I read a registry value to identify if it's still connected?
    4. Using CreateFileA(DevicePath) to send and receive data. I have read that when someone unplugs in the middle of a data transfer, it's difficult clearing resources.
    5. What can IoCreateDevice do for me, and how does one use it for that task? This two-way point of connection status and system lock-up situations is very concerning. Has someone read anything about this subject recently?

    My objectives are to:

    1. List connected USB devices.
    2. Identify an in-development microcontroller from everything else.
    3. Send and receive data in a stable and fast way, to the limits of the controller.
    4. No lock-ups transferring data.

    Note I'm not using any service packs. I understand everything USB is in ANSI when Windows XP is not, and .NET is all about ANSI (what a waste of memory). I plan to continue this project into .NET at a later date as an addition. MSDN gives me structures and functions and what should link to what, OK, but says little about what they get used for. What is available in my language? Delphi is way overpriced; it needs a major price drop.

  • CRM2011 - "The given key was not present in the dictionary"

    - by DJZorrow
    I am what you call a "n00b" in CRM plugin development. I am trying to write a plugin for Microsoft Dynamics CRM 2011 that will create a new activity entity when you create a new contact. I want this activity entity to be associated with the contact entity. This is my current code:

        using System;
        using System.Collections.Generic;
        using System.Linq;
        using System.Text;
        using Microsoft.Xrm.Sdk;

        namespace ITPH_CRM_Deactivate_Account_SSP_Disable
        {
            public class SSPDisable_Plugin : IPlugin
            {
                public void Execute(IServiceProvider serviceProvider)
                {
                    // Obtain the execution context from the service provider.
                    IPluginExecutionContext context = (IPluginExecutionContext)serviceProvider.GetService(typeof(IPluginExecutionContext));
                    IOrganizationServiceFactory serviceFactory = (IOrganizationServiceFactory)serviceProvider.GetService(typeof(IOrganizationServiceFactory));
                    IOrganizationService service = serviceFactory.CreateOrganizationService(context.UserId);

                    if (context.InputParameters.Contains("Target") && context.InputParameters["target"] is Entity)
                    {
                        Entity entity = context.InputParameters["Target"] as Entity;
                        if (entity.LogicalName != "account")
                        {
                            return;
                        }

                        Entity followup = new Entity();
                        followup.LogicalName = "activitypointer";
                        followup.Attributes = new AttributeCollection();
                        followup.Attributes.Add("subject", "Created via Plugin.");
                        followup.Attributes.Add("description", "This is generated by the magic of C# ...");
                        followup.Attributes.Add("scheduledstart", DateTime.Now.AddDays(3));
                        followup.Attributes.Add("actualend", DateTime.Now.AddDays(5));

                        if (context.OutputParameters.Contains("id"))
                        {
                            Guid regardingobjectid = new Guid(context.OutputParameters["id"].ToString());
                            string regardingobjectidType = "account";
                            followup["regardingobjectid"] = new EntityReference(regardingobjectidType, regardingobjectid);
                        }

                        service.Create(followup);
                    }
                }
            }
        }

    But when I try to run this code, I get an error when I try to create a new contact in the CRM environment. The error is: "The given key was not present in the dictionary" (screenshot below). The error pops up right as I try to save the new contact.

    Screenshot: http://puu.sh/4SXrW.png (translated bold text: "Error on business process")

    Thanks for any help or suggestions :)

  • Neo4j 1.9.4 (REST Server,CYPHER) performance issue

    - by user2968943
    I have Neo4j 1.9.4 installed on a 24-core, 24GB RAM (CentOS) machine, and for most queries CPU usage spikes to 200% with only a few concurrent requests.

    Domain: some sort of social application, where a few types of nodes (profiles) have 3-30 text/array properties, and there are 36 relationship types with at least 3 properties each. Most nodes currently have ~300-500 relationships.

    Current data set footprint (from the console):

        LogicalLogSize=4294907 (32MB)
        ArrayStoreSize=1675520 (12MB)
        NodeStoreSize=1342170 (10MB)
        PropertyStoreSize=1739548 (13MB)
        RelationshipStoreSize=6395202 (48MB)
        StringStoreSize=1478400 (11MB)

    which is IMHO really small. Most queries look like this one (with more or fewer WITH .. MATCH .. statements, and a few queries with variable-length relations, but those are often fast):

        START targetUser=node({id}), currentUser=node({current})
        MATCH targetUser-[contact:InContactsRelation]->n,
              n-[:InLocationRelation]->l,
              n-[:InCategoryRelation]->c
        WITH currentUser, targetUser, n, l, c, contact.fav is not null as inFavorites
        MATCH n<-[followers?:InContactsRelation]-()
        WITH currentUser, targetUser, n, l, c, inFavorites, COUNT(followers) as numFollowers
        RETURN id(n) as id, n.name? as name, n.title? as title, n._class as _class,
               n.avatar? as avatar, n.avatar_type? as avatar_type,
               l.name as location__name, c.name as category__name,
               true as isInContacts, inFavorites as isInFavorites, numFollowers

    It runs in ~1s-3s (for the first run) and ~1s-70ms (for consecutive runs, depending on the query), and there are about 5-10 queries run for each impression. Another interesting behavior: when I run a query from the neo4j console on my local machine many consecutive times (just press Ctrl+Enter for a few seconds) it has an almost constant execution time, but when I do it on the server it gets slower exponentially, and I guess that is somehow related to my problem.

    Problem: my problem is that Neo4j is very CPU-greedy (for a 24-core machine that may not be an issue, but it's obviously overkill for a small project). At first I used an AWS EC2 m1.large instance, but overall performance was bad; during testing, CPU was always over 100%.

    Some relevant parts of the configuration:

        neostore.nodestore.db.mapped_memory=1280M
        wrapper.java.maxmemory=8192

    Note: I already tried a configuration where all memory-related parameters were HIGH, and it didn't work (no change at all).

    Question: where to dig? Configuration? Schema? Queries? What am I doing wrong? If you need more info (logs, configs), just ask ;)

  • If array is thread safe, what is the issue with this function?

    - by Ajay Sharma
    I am totally lost with the things that are happening with my code. It makes me think about, and want to get clear on, the array thread-safety concept: is NSMutableArray or NSMutableDictionary thread safe?

    While my code is executing, the values for the MainArray get changed, even though they have already been added to the array. Please try to execute this code on your system; it's very easy. I am not able to get out of this trap.

    It is the function where it is returning the array. What I am looking to build is:

        (Array) (Main Array)
        -- (Dictionary) with key value (multiple dictionaries in Main Array)
        ----- Each dictionary above has 9 arrays in it.

    This is the structure I am developing for the array. But even before that:

        #define TILE_ROWS 3
        #define TILE_COLUMNS 3
        #define TILE_COUNT (TILE_ROWS * TILE_COLUMNS)

        -(NSArray *)FillDataInArray:(int)counter
        {
            NSMutableArray *temprecord = [[NSMutableArray alloc] init];
            for(int i = 0; i < counter; i++)
            {
                if([temprecord count] <= TILE_COUNT)
                {
                    NSMutableDictionary *d1 = [[NSMutableDictionary alloc] init];
                    [d1 setValue:[NSString stringWithFormat:@"%d/2011", i+1] forKey:@"serial_data"];
                    [d1 setValue:@"Friday 13 Sep 12:00 AM" forKey:@"date_data"];
                    [d1 setValue:@"Description Details " forKey:@"details_data"];
                    [d1 setValue:@"Subject Line" forKey:@"subject_data"];
                    [temprecord addObject:d1];
                    d1 = nil;
                    [d1 release];

                    if([temprecord count] == TILE_COUNT)
                    {
                        NSMutableDictionary *holderKey = [[NSMutableDictionary alloc] initWithObjectsAndKeys:temprecord, [NSString stringWithFormat:@"%d", [casesListArray count]+1], nil];
                        [self.casesListArray addObject:holderKey];
                        [holderKey release];
                        holderKey = nil;
                        [temprecord removeAllObjects];
                    }
                }
                else
                {
                    [temprecord removeAllObjects];
                    NSMutableDictionary *d1 = [[NSMutableDictionary alloc] init];
                    [d1 setValue:[NSString stringWithFormat:@"%d/2011", i+1] forKey:@"serial_data"];
                    [d1 setValue:@"Friday 13 Sep 12:00 AM" forKey:@"date_data"];
                    [d1 setValue:@"Description Details " forKey:@"details_data"];
                    [d1 setValue:@"Subject Line" forKey:@"subject_data"];
                    [temprecord addObject:d1];
                    d1 = nil;
                    [d1 release];
                }
            }
            return temprecord;
            [temprecord release];
        }

    What is the problem with this code? Every time there are 9 records in the array, it just replaces the whole array value instead of just the one for the specific key.

  • Reliable session faulting for unknown reason

    - by Scarfman007
    I am trying to achieve the following: one client-side proxy instance (kept open) accessed by multiple threads using a reliable session. What I have managed so far is to have either A) a reliable session with a client-side proxy which is created and disposed per call, or B) what I aim for, but without a reliable session. When I enable reliable sessions on my binding, however, the following behaviour is exhibited.

    Client-side: upon application startup everything appears to work fine until roughly 18 messages into the WCF session. I first get the proxy.InnerChannel.Faulted event raised, then an exception is caught at the point where I am calling the method on the proxy. The exception is a System.TimeoutException, with message:

        "The request channel timed out while waiting for a reply after 00:00:59.9062512. Increase the
        timeout value passed to the call to Request or increase the SendTimeout value on the Binding.
        The time allotted to this operation may have been a portion of a longer timeout."

    The inner exception has a similar message:

        "The request operation did not complete within the allotted timeout of 00:01:00. The time
        allotted to this operation may have been a portion of a longer timeout."

    with the method at the top of the inner stack trace being:

        System.ServiceModel.Channels.ReliableRequestSessionChannel.SyncRequest.WaitForReply(TimeSpan timeout)

    I then call proxy.Close followed by proxy.Abort (catching and ignoring exceptions). If I utilize the default settings (i.e. have simply <reliableSession/>), then calling proxy.Close results in another System.TimeoutException (although this time the allotted timeout is 00:00:00); however, if I override the defaults as specified above, no exception is thrown.

    Service-side: utilizing WCF tracing I get a System.ServiceModel.CommunicationException, with message:

        "The sequence has been terminated by the remote endpoint. The session has stopped waiting for a
        particular reply. Because of this the reliable session cannot continue. The reliable session was
        faulted."

    and a stack trace ending at:

        System.ServiceModel.AsyncResult.End[TAsyncResult](IAsyncResult result)

    When remotely attaching to the server I get the same message, which occurs when code execution steps over the return statement of my service method in the call that causes the error.

    The puzzling thing to me is that the service is stable and runs with options A) or B) as described at the beginning of my post, and the fault occurs after a varying number of messages (around 18). The former fact points to there being nothing wrong with the code (indeed I have checked that no exceptions are thrown), and the latter just serves to confuse me, and is why I modified the settings on the reliable session binding.

    I am quite stuck on this. Can anyone suggest why the reliable session would fault in such a way?

  • UIScrollView Infinite Scrolling

    - by Ben Robinson
    I'm attempting to set up a scrollview with infinite (horizontal) scrolling.

    Scrolling forward is easy - I have implemented scrollViewDidScroll, and when the contentOffset gets near the end I make the scrollview contentSize bigger and add more data into the space (I'll have to deal with the crippling effect this will have later!).

    My problem is scrolling back. The plan is to see when I get near the beginning of the scroll view; when I do, I make the contentSize bigger, move the existing content along, add the new data to the beginning, and then - importantly - adjust the contentOffset so the data under the viewport stays the same.

    This works perfectly if I scroll slowly (or enable paging), but if I go fast (not even very fast!) it goes mad! Here's the code:

        - (void) scrollViewDidScroll:(UIScrollView *)scrollView {
            float pageNumber = scrollView.contentOffset.x / 320;
            float pageCount = scrollView.contentSize.width / 320;

            if (pageNumber > pageCount-4) {
                // Add 10 new pages to end
                mainScrollView.contentSize = CGSizeMake(mainScrollView.contentSize.width + 3200, mainScrollView.contentSize.height);
                // add new data here at (320*pageCount, 0);
            }

            //*** the problem is here - I use updatingScrollingContent to make sure it's only called once (for accurate testing!)
            if (pageNumber < 4 && !updatingScrollingContent) {
                updatingScrollingContent = YES;
                mainScrollView.contentSize = CGSizeMake(mainScrollView.contentSize.width + 3200, mainScrollView.contentSize.height);
                mainScrollView.contentOffset = CGPointMake(mainScrollView.contentOffset.x + 3200, 0);
                for (UIView *view in [mainContainerView subviews]) {
                    view.frame = CGRectMake(view.frame.origin.x+3200, view.frame.origin.y, view.frame.size.width, view.frame.size.height);
                }
                // add new data here at (0, 0);
            }

            //** MY CHECK!
            NSLog(@"%f", mainScrollView.contentOffset.x);
        }

    As the scrolling happens, the log reads:

        1286.500000
        1285.500000
        1284.500000
        1283.500000
        1282.500000
        1281.500000
        1280.500000

    Then, when pageNumber < 4 (we're getting near the beginning):

        4479.500000
        4479.500000

    Great! - but the numbers should continue to go down in the 4,000s. Instead, the next log entries read:

        1278.000000
        1277.000000
        1276.500000
        1275.500000
        etc....

    Continuing from where it left off! Just for the record, if scrolled slowly the log reads:

        1294.500000
        1290.000000
        1284.500000
        1280.500000
        4476.000000
        4476.000000
        4473.000000
        4470.000000
        4467.500000
        4464.000000
        4460.500000
        4457.500000
        etc....

    Any ideas???? Thanks, Ben.

  • Summarising (permanently) data in a SQL table

    - by Cylindric
    Greetings, Stackers. I have a huge number of data points in a SQL table, and I want to summarise them in a way reminiscent of RRD. Assuming a table such as:

        ID  | ENTITY_ID | SCORE_DATE | SCORE | SOME_OTHER_DATA
        ----+-----------+------------+-------+-----------------
        1   | A00000001 | 01/01/2010 | 100   | some data
        2   | A00000002 | 01/01/2010 | 105   | more data
        3   | A00000003 | 01/01/2010 | 104   | various text
        ... | ......... | .......... | ..... | ...
        ... | A00009999 | 01/01/2010 | 101   |
        ... | A00000001 | 02/01/2010 | 104   |
        ... | A00000002 | 02/01/2010 | 119   |
        ... | A00000003 | 02/01/2010 | 119   |
        ... | ......... | .......... | ..... | ...
        ... | A00009999 | 02/01/2010 | 101   | arbitrary data
        ... | ......... | .......... | ..... | ...
        ... | A00000001 | 01/02/2010 | 104   |
        ... | A00000002 | 01/02/2010 | 119   |
        ... | A00000003 | 01/01/2010 | 119   |

    I want to end up with one record per entity, per month:

        ID  | ENTITY_ID | SCORE_DATE | SCORE |
        ----+-----------+------------+-------+
        ... | A00000001 | 01/01/2010 | 100   |
        ... | A00000002 | 01/01/2010 | 105   |
        ... | A00000003 | 01/01/2010 | 104   |
        ... | A00000001 | 01/02/2010 | 100   |
        ... | A00000002 | 01/02/2010 | 105   |
        ... | A00000003 | 01/02/2010 | 104   |

    (I don't care about the SOME_OTHER_DATA - I'll pick something, either the first or last record probably.)

    What's an easy way of doing this on a regular basis, so that anything in the last calendar month is summarised in this way? At the moment my plan is kind of:

        For each EntityID
            For each month
                Find average score for all records in given month
                Update first record with results of previous step
                Delete all records that aren't the first

    I can't think of a neat way of doing it, though, that doesn't involve lots of updates and iteration. This can either be done in a SQL stored procedure, or it can be incorporated into the .NET app that's generating this data, so the solution doesn't really need to be "one big SQL script", but can be :) (SQL 2005)
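
    (For what it's worth, the nested loop above can usually be collapsed into two set-based statements on SQL Server 2005: update the first row of each entity/month to the partition average, then delete the rest, both via ROW_NUMBER over an updatable CTE. Below is a hedged sketch wrapped in JDBC purely for illustration - the table/column names follow the question, while the connection string, the cutoff expression, and the updatable-CTE trick are assumptions to verify; note also that AVG over an int column truncates:)

        import java.sql.Connection;
        import java.sql.DriverManager;
        import java.sql.Statement;

        public class MonthlyRollup {
            public static void main(String[] args) throws Exception {
                // Rows strictly before the start of the current month.
                String cutoff = "SCORE_DATE < DATEADD(month, DATEDIFF(month, 0, GETDATE()), 0)";

                String summarise =
                    ";WITH monthly AS (" +
                    "  SELECT SCORE," +
                    "         AVG(SCORE) OVER (PARTITION BY ENTITY_ID, YEAR(SCORE_DATE), MONTH(SCORE_DATE)) AS avg_score," +
                    "         ROW_NUMBER() OVER (PARTITION BY ENTITY_ID, YEAR(SCORE_DATE), MONTH(SCORE_DATE) ORDER BY ID) AS rn" +
                    "  FROM Scores WHERE " + cutoff + ")" +
                    "UPDATE monthly SET SCORE = avg_score WHERE rn = 1;";

                String prune =
                    ";WITH monthly AS (" +
                    "  SELECT ROW_NUMBER() OVER (PARTITION BY ENTITY_ID, YEAR(SCORE_DATE), MONTH(SCORE_DATE) ORDER BY ID) AS rn" +
                    "  FROM Scores WHERE " + cutoff + ")" +
                    "DELETE FROM monthly WHERE rn > 1;";

                try (Connection con = DriverManager.getConnection(
                         "jdbc:sqlserver://localhost;databaseName=scores;integratedSecurity=true");
                     Statement st = con.createStatement()) {
                    st.executeUpdate(summarise); // first record per entity/month gets the average
                    st.executeUpdate(prune);     // every later record for that month goes away
                }
            }
        }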

  • SQL Server CTE referred in self joins slow

    - by Kharlos Dominguez
    Hello, I have written a table-valued UDF that starts with a CTE to return a subset of the rows from a large table. There are several joins in the CTE: a couple of inner joins and one left join to other tables, which don't contain a lot of rows. The CTE has a where clause that returns the rows within a date range, in order to return only the rows needed.

    I'm then referencing this CTE in 4 self left joins, in order to build subtotals using different criteria. The query is quite complex, but here is a simplified pseudo-version of it:

        WITH DataCTE as (
            SELECT [columns] FROM table
            INNER JOIN table2 ON [...]
            INNER JOIN table3 ON [...]
            LEFT JOIN table3 ON [...]
        )
        SELECT [aggregates_columns of each subset]
        FROM DataCTE Main
        LEFT JOIN DataCTE BananasSubset
            ON [...]
            AND Product = 'Bananas'
            AND Quality = 100
        LEFT JOIN DataCTE DamagedBananasSubset
            ON [...]
            AND Product = 'Bananas'
            AND Quality < 20
        LEFT JOIN DataCTE MangosSubset
            ON [...]
        GROUP BY [

    I have the feeling that SQL Server gets confused and calls the CTE for each self join, which seems confirmed by looking at the execution plan, although I confess to not being an expert at reading those.

    I would have assumed SQL Server to be smart enough to only perform the data retrieval from the CTE once, rather than do it several times. I have tried the same approach, but rather than using a CTE to get the subset of the data, I used the same select query as in the CTE, but made it output to a temp table instead. The version referencing the CTE takes 40 seconds. The version referencing the temp table takes between 1 and 2 seconds.

    Why isn't SQL Server smart enough to keep the CTE results in memory? I like CTEs, especially in this case, as my UDF is a table-valued one, so it allowed me to keep everything in a single statement. To use a temp table, I would need to write a multi-statement table-valued UDF, which I find a slightly less elegant solution.

    Did some of you have this kind of performance issue with CTEs, and if so, how did you get it sorted?

    Thanks,
    Kharlos

  • Selecting the contents of an ASP.NET TextBox in an UpdatePanel after a partial page postback

    - by Scott Mitchell
    I am having problems selecting the text within a TextBox in an UpdatePanel.

    Consider a very simple page that contains a single UpdatePanel. Within that UpdatePanel there are two Web controls:

    - A DropDownList with three statically-defined list items, whose AutoPostBack property is set to True, and
    - A TextBox Web control

    The DropDownList has a server-side event handler for its SelectedIndexChanged event, and in that event handler there are two lines of code:

        TextBox1.Text = "Whatever";
        ScriptManager.RegisterStartupScript(this, this.GetType(), "Select-" + TextBox1.ClientID,
            string.Format("document.getElementById('{0}').select();", TextBox1.ClientID), true);

    The idea is that whenever a user chooses an item from the DropDownList there is a partial page postback, at which point the TextBox's Text property is set and selected (via the injected JavaScript).

    Unfortunately, this doesn't work as-is. (I have also tried putting the script in the pageLoad function with no luck, as in: ScriptManager.RegisterStartupScript(..., "function pageLoad() { ... my script ... }");) What happens is the code runs, but something else on the page receives focus at the conclusion of the partial page postback, causing the TextBox's text to be unselected.

    I can "fix" this by using JavaScript's setTimeout to delay the execution of my JavaScript code. For instance, if I update the emitted JavaScript to the following:

        setTimeout("document.getElementById('{0}').select();", 111);

    It "works." I put works in quotes because it works for this simple page on my computer. In a more complex page, on a slower computer, with more markup getting passed between the client and server on the partial page postback, I have to up the timeout to over a second to get it to work.

    I would hope that there is a more foolproof way to achieve this. Rather than saying, "Delay for X milliseconds," it would be ideal to say, "Run this when you're not going to steal the focus."

    What's perplexing is that the .Focus() method works beautifully. That is, if I scrap my JavaScript and replace it with a call to TextBox1.Focus(); then the TextBox receives focus (although the text is not selected).

    I've examined the contents of MicrosoftAjaxWebForms.js and see that the focus is set after the registered scripts run, but my JavaScript skills are not strong enough to decode what all is happening here and why the selected text is unselected between the time it is selected and the end of the partial page postback.

    I've also tried using Firebug's JavaScript debugger and see that when my script runs the TextBox's text is selected. As I continue to step through it the text remains selected, but then after stepping off the last line of script (apparently) it all of a sudden gets unselected.

    Any ideas? I am pulling my hair out. Thanks in advance...

  • How to stream semi-live audio over internet

    - by Thomas Tempelmann
    I want to write something like Skype, i.e. I have a constant audio stream on one computer, and I want to recompress it in a format that's suitable for a latent internet connection, receive it on the other end, and play it.

    Let's also assume that the internet connection is fairly modern and fast, i.e. DSL or alike, no slow connections over phone and such. The involved computers will also be rather modern (dual-core Intel CPUs at 2GHz or more).

    I know how to handle the audio on the machines. What I don't know is how to transmit the audio in an efficient way. The challenges are:

    - I'd like to get good audio quality across the line.
    - The stream should be received without drops. The stream may, however, be received with a little delay (a one-second delay is acceptable). I imagine that the transport software could first determine the average (and max) latency, then start the stream and tell the receiver to wait for that max latency before starting to play the audio. With that, if the latency doesn't get any higher, the entire stream will be playable on the other side without stutter or drops.
    - If, due to unexpected IP latencies or blockages, the stream does get cut off, I want to be able to notice this so that I can take action (e.g. abort the stream) and eventually start a new transmission.

    What are my options if I want to use ready-made software for the compression and transmission? I have no intention of writing my own audio compression engine, really. OTOH, I plan to sell the solution in a vertical market, meaning I can afford a few dollars of license fees per copy, but not $100s.

    I guess the simplest solution would be to just open a TCP stream, send a few packets back and forth to determine their running time (or even use UDP for that), then use the results as the guide for my max latency value, then simply fire the audio data in its raw form (uncompressed 16-bit stereo), along with a timing code, over the TCP connection. The receiver reads the data and plays it with the pre-determined delay. That might just work with the type of fast connection I expect. I just wonder if there are better solutions to reach this goal, with better performance (lower latency) and less data (compressed).

    BTW, I first try to implement this on OS X, but might want to do it on Windows, too, if it proves successful.
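
    (Sketching the "measure first, then delay playback" idea from the last paragraph in plain Java sockets, purely as an illustration - uncompressed PCM over TCP, every host name and constant assumed, no error handling, and the receiver is assumed to echo ping bytes back:)

        import java.io.DataInputStream;
        import java.io.DataOutputStream;
        import java.net.Socket;

        public class NaiveAudioSender {
            public static void main(String[] args) throws Exception {
                Socket s = new Socket("receiver.example", 9999); // hypothetical peer
                s.setTcpNoDelay(true); // small frames should not be batched
                DataOutputStream out = new DataOutputStream(s.getOutputStream());
                DataInputStream in = new DataInputStream(s.getInputStream());

                // 1. Estimate the worst-case round trip with a few ping frames.
                long maxRttMs = 0;
                for (int i = 0; i < 10; i++) {
                    long t0 = System.nanoTime();
                    out.writeByte(0);
                    out.flush();
                    in.readByte(); // echoed by the receiver
                    maxRttMs = Math.max(maxRttMs, (System.nanoTime() - t0) / 1_000_000);
                }

                // 2. Tell the receiver how long to buffer before playing.
                out.writeLong(maxRttMs * 2); // jitter margin; cap near 1s per the requirements

                // 3. Stream timestamped raw frames. The receiver compares each
                //    timing code against its clock and aborts the stream when a
                //    frame arrives later than the agreed budget.
                byte[] frame = new byte[4096]; // one chunk of 16-bit stereo PCM
                while (true) {
                    // ... fill `frame` from the sound card here ...
                    out.writeLong(System.currentTimeMillis()); // timing code
                    out.write(frame);
                    out.flush();
                }
            }
        }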

  • Unlimited SMS API/Gateway (Sending and Receiving)

    - by Naif
    I am creating a chat application which requires that users be able to send and receive SMS messages through a web interface. It would be somewhat similar to the text messaging service available in Yahoo Mail or in AOL Instant Messenger.

    The situation is this: given the high quantity of messages that would be sent and received, paying on a per-message basis is not economically feasible. How is it that sites such as Yahoo, AIM, and Twitter can send and receive unlimited SMS messages? Essentially, I am looking for a way to send and receive unlimited SMS from my computer.

    Below is a list of some approaches I've come up with, but have run into problems with as well. If just one of the approaches can be utilized effectively, then I am fine.

    As a note on the nature of my application: I will only be sending messages to users that explicitly sign up for the service and permit the receiving of messages. They can unsubscribe at any time. This is to prevent spam.

    1. I am aware of software such as Kannel which allows one to connect to a provider's SMSC gateway. However, this adds the risk of not being approved by the provider, which would be unacceptable. Is there any way to significantly mitigate this risk?
    2. Utilizing a gateway provider eliminates this risk, but adds the issue of per-message pricing.
    3. I am also aware of email-to-SMS. However, I have done some testing with that, and it appears that this method results in many messages being undelivered or delivered VERY late. If it weren't for that, this approach would have been ideal. Is there any way to negotiate with carriers to remove me from their filters (considering the nature of my service as stated before)?
    4. I could use a GSM modem, but even with an "unlimited" plan on a SIM card, there are still limits (around 100,000 messages or so). Furthermore, from my understanding, GSM modems are capable of sending out only around a dozen messages per minute. I need to be able to send out as much as several hundred messages per second. During the first 2 months, around 10 messages per second would suffice.
    5. There are ways to send out ads with messages to cover the per-message costs. However, this is a deal-breaker, since it has a high chance of tarnishing the quality of service. Furthermore, I know it is possible to do it without such ads, since Yahoo, AIM, and Twitter do not send ads with their messages.

  • Java Socket Connection is flooding network OR resulting in high ping

    - by user1461100
    I have a little problem with my Java socket code.

    I'm writing an Android client application which sends data to a Java multithreaded socket server on my PC through a direct(!) wireless connection. It works fine, but I want to improve it for mobile applications, as it is currently very power-consuming. Here is a server code snippet where I receive the client's data:

        while(true) {
            try {
                ....
                Object obj = in.readObject();
                if(obj != null) {
                    Class clazz = obj.getClass();
                    String className = clazz.getName();
                    if(className.equals("java.lang.String")) {
                        String cmd = (String)obj;
                        if(cmd.equals("dc")) {
                            System.out.println("Client "+id+" disconnected!");
                            Server.connectedClients[id-1] = false;
                            break;
                        }
                        if(cmd.substring(0,1).equals("!")) {
                            robot.keyRelease(PlayerEnum.getKey(cmd,id));
                        } else {
                            robot.keyPress(PlayerEnum.getKey(cmd,id));
                        }
                    }
                }
            } catch ....

    Here's the client part, where I send my data in a while loop:

        private void networking() {
            try {
                if(client != null) {
                    ....
                    out.writeObject(sendQueue.poll());
                    ....
                }
            } catch ....
        }

    When I write it this way, I send data every time the while loop executes. When sendQueue is empty, a null "object" is sent. This results in "high" network traffic and "high" CPU usage. BUT: all sent commands are received nearly immediately.

    When I change the code to the following:

        while(true)
        ...
        if(sendQueue.peek() != null) {
            out.writeObject(sendQueue.poll());
        }
        ...

    the CPU usage is totally okay, but I'm getting some lag... the commands do not arrive fast enough.

    As I said, it works fine (besides CPU usage) if I'm sending data (with those null objects) on every loop iteration. But I'm sure that this is very rough coding style, because I'm kind of flooding the network. Any hints? What am I doing wrong?

    Thanks for your help!

    Sincerely yours, maaft
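
    (A common way out of this poll-or-flood dilemma is to make the sender block until there is actually something to send, e.g. with java.util.concurrent.BlockingQueue: take() parks the thread until an element arrives, so there is no busy loop and no null filler traffic. Below is a hedged sketch of the sender side only - the element type and all names are assumptions, not the original code:)

        import java.io.ObjectOutputStream;
        import java.util.concurrent.BlockingQueue;
        import java.util.concurrent.LinkedBlockingQueue;

        public class BlockingSender implements Runnable {
            private final BlockingQueue<String> sendQueue = new LinkedBlockingQueue<>();
            private final ObjectOutputStream out;

            public BlockingSender(ObjectOutputStream out) {
                this.out = out;
            }

            // Called from the UI thread whenever a command is produced.
            public void send(String command) {
                sendQueue.offer(command);
            }

            @Override
            public void run() {
                try {
                    while (!Thread.currentThread().isInterrupted()) {
                        String cmd = sendQueue.take(); // blocks; no CPU burned while idle
                        out.writeObject(cmd);
                        out.flush();  // push the packet out right away
                        out.reset();  // keep ObjectOutputStream from caching references
                    }
                } catch (Exception e) {
                    // socket closed or thread interrupted: let the sender end
                }
            }
        }

    (If lag persists even with immediate flushes, Socket.setTcpNoDelay(true) on both ends is also worth trying, since Nagle's algorithm deliberately batches small writes.)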

  • Proper HTML technique to create a web form out of an image

    - by Lars
    I plan to create an interactive golf scorecard for my website (XHTML). (Btw, that's how such a scorecard looks: ScoreCard.) In the end, one should be able to insert a score for each hole in the appropriate input field in the virtual scorecard on the website.

    For me it is very important that the interactive scorecard really looks the same as the original (paper) scorecard does, so my first approach was to scan and slice the scorecard image to reach that appearance. The idea was to insert an HTML text input for each score field.

    After I sliced the image, I reconstructed it using an HTML table. To do that, I put the image slices in as the cell backgrounds:

        <table>
            <tr>
                <td style="background: url('slice1.jpg')" width="58px" height="25px">
                    <input type="text" />
                </td>
            </tr>
            ...
        </table>

    At first this worked fine (Gimp offers quite a nice feature for this slicing). Then the problem was that I had to create an HTML table to recreate the exact layout. The lower part of the layout is split up into 3 columns. The middle column is split up into several rows (one for each hole), so the left and right columns have to be spanned over those rows.

    OK, finally that worked, but it led to some kind of scaling problem. If I zoom in or out on the table, the middle column (and only that one) is not scaled the right way. I am not able to fix this, and so I am starting to doubt whether this is the right technique for HTML image virtualization.

    I am really no specialist in the area of creating websites, so I would really appreciate any help on this. Maybe there is a completely different and better technique for this, as I think it is a common job in web creation. I couldn't find any nice examples or tutorials on it.
