Search Results

Search found 7583 results on 304 pages for 'roger guess'.


  • Measure width() with jQuery after DOM refresh

    - by o_O Tync
    My script dynamically creates a <ul> with left-floating <li>s inside: it's a paginator. Afterwards, the script measures the width of all the <li>s and sums them up. The problem is that after the nodes are injected into the document, the browser refreshes the DOM and applies CSS styles, which takes a while. This has a negative effect on my script: when these operations are not complete before I measure the width, my script gets a wrong value. If I perform the measurement a second later, everything is OK.

    What I'm looking for is a way to detect the moment when the <ul> is fully drawn, styles are applied and the width has stabilized, or at least a way to detect every dimension change. Of course I can use setTimeout(..., 100), but it's ugly and, I guess, not a solution at all. If there's a way to detect width stabilization, I would do the measuring right after it to get the correct values.

    HTML code generated by the DOM:

        <div>
          <ul>
            <li><a href="...">1</a></li>
            <li><a href="...">2</a></li>
            ....
          </ul>
        </div>

    P.S. Why I need this: my paginator's left-floating <li> items tend to move to the next line when the <ul> tries to become wider than the page itself. Even though most of the <li>s are invisible because of the parent <div>'s width restriction:

        div { width: 500px; overflow: hidden; }
        div ul { width: 100%; white-space: nowrap; }
        div ul li { display: block; float: left; }

    they still go down unless I specify the actual summed width of the <ul> with the script.

    Read the article

  • Approach for authentication and storing user details.

    - by cappuccino
    Hey folks, I am using the Zend Framework but my question is broadly about sessions / databases / auth (PHP, MySQL). Currently this is my approach to authentication:

    1) The user signs in and the details are checked in the database. Standard stuff really.

    2) If the details are correct, only the user's unique ID is stored in the session along with a security token (user unique ID + IP + browser info + salt). The session is written to the filesystem. I've been reading around and many say that storing stuff in sessions is not a good idea, and that you should really only write a unique ID which refers back to the user's details, plus a security token to prevent session hijacking. So this is the approach I've taken: I used to write the user's details into the session, but I've moved that out. I wanted to know your opinions on this. I'm keeping sessions in the filesystem since I don't run on multiple servers, and since I'm only writing a tiny, tiny bit of data to sessions, I thought performance would be better keeping sessions in the filesystem to reduce load on the database. Once the session is written on authentication, it really is only read from then on.

    3) The rest of the user's details (subscription details, permissions, account info, etc.) are cached in the filesystem (this can always be easily moved to memory if I want even more performance). So rather than keeping the user's details in the session, the user's details are cached in the filesystem. I'm using Zend_Cache and the unique cache ID is something like md5(/cache/auth/2892), where the number is the unique ID of the user. I guess the benefit of this method is that once the user is logged in, there are essentially no database queries being run to get the user's details. I just wonder if this approach is better than keeping the whole lot in the session...

    4) As the user moves throughout the site, the only things that are checked are the ID in the session and the security token.

    So, overall, the questions are: 1) is the filesystem more efficient than a database for this purpose, 2) have I taken enough security precautions, and 3) is separating the user's details from the session into a cached file a pointless task? Thanks.

    Read the article

  • VB6 ADO Command to SQL Server

    - by Emtucifor
    I'm getting an inexplicable error with an ADO command in VB6 run against a SQL Server 2005 database. Here's some code to demonstrate the problem:

        Sub ADOCommand()
            Dim Conn As ADODB.Connection
            Dim Rs As ADODB.Recordset
            Dim Cmd As ADODB.Command
            Dim ErrorAlertID As Long
            Dim ErrorTime As Date

            Set Conn = New ADODB.Connection
            Conn.ConnectionString = "Provider=SQLOLEDB.1;Integrated Security=SSPI;Initial Catalog=database;Data Source=server"
            Conn.CursorLocation = adUseClient
            Conn.Open

            Set Rs = New ADODB.Recordset
            Rs.CursorType = adOpenStatic
            Rs.LockType = adLockReadOnly

            Set Cmd = New ADODB.Command
            With Cmd
                .Prepared = False
                .CommandText = "ErrorAlertCollect"
                .CommandType = adCmdStoredProc
                .NamedParameters = True
                .Parameters.Append .CreateParameter("@ErrorAlertID", adInteger, adParamOutput)
                .Parameters.Append .CreateParameter("@CreateTime", adDate, adParamOutput)
                Set .ActiveConnection = Conn
                Rs.Open Cmd
                ErrorAlertID = .Parameters("@ErrorAlertID").Value
                ErrorTime = .Parameters("@CreateTime").Value
            End With

            Debug.Print Rs.State       ' Shows 0 - Closed
            Debug.Print Rs.RecordCount ' Of course this fails since the recordset is closed
        End Sub

    So this code was working not too long ago but now it's failing on the last line with the error:

        Run-time error '3704': Operation is not allowed when the object is closed

    Why is it closed? I just opened it and the SP returns rows. I ran a trace and this is what the ADO library is actually submitting to the server:

        declare @p1 int
        set @p1=1
        declare @p2 datetime
        set @p2=''2010-04-22 15:31:07:770''
        exec ErrorAlertCollect @ErrorAlertID=@p1 output,@CreateTime=@p2 output
        select @p1, @p2

    Running this as a separate batch from my query editor yields:

        Msg 102, Level 15, State 1, Line 4
        Incorrect syntax near '2010'.

    Of course there's an error. Look at the double single quotes in there. What the heck could be causing that? I tried using adDBDate and adDBTime as data types for the date parameter, and they give the same results. When I make the parameters adParamInputOutput, then I get this:

        declare @p1 int
        set @p1=default
        declare @p2 datetime
        set @p2=default
        exec ErrorAlertCollect @ErrorAlertID=@p1 output,@CreateTime=@p2 output
        select @p1, @p2

    Running that as a separate batch yields:

        Msg 156, Level 15, State 1, Line 2
        Incorrect syntax near the keyword 'default'.
        Msg 156, Level 15, State 1, Line 4
        Incorrect syntax near the keyword 'default'.

    What the heck? SQL Server doesn't support this kind of syntax. You can only use the DEFAULT keyword in the actual SP execution statement. I should note that removing the extra single quotes from the above statement makes the SP run fine. ... Oh my. I just figured it out. I guess it's worth posting anyway.

    Read the article

  • IP address numbers in MySQL subquery

    - by Iain Collins
    I have a problem with a subquery involving IPv4 addresses stored in MySQL (MySQL 5.0). The IP addresses are stored in two tables, both in network number format, i.e. the format output by MySQL's INET_ATON(). The first table ('events') contains lots of rows with IP addresses associated with them; the second table ('network_providers') contains a list of provider information for given netblocks.

    events table (~4,000,000 rows):
        event_id (int)
        event_name (varchar)
        ip_address (unsigned 4 byte int)

    network_providers table (~60,000 rows):
        ip_start (unsigned 4 byte int)
        ip_end (unsigned 4 byte int)
        provider_name (varchar)

    Simplified for the purposes of the problem I'm having, the goal is to create an export along the lines of:

        event_id,event_name,ip_address,provider_name

    If I do a query along the lines of either of the following, I get the result I expect:

        SELECT provider_name FROM network_providers
        WHERE INET_ATON('192.168.0.1') >= network_providers.ip_start
        ORDER BY network_providers.ip_start DESC LIMIT 1

        SELECT provider_name FROM network_providers
        WHERE 3232235521 >= network_providers.ip_start
        ORDER BY network_providers.ip_start DESC LIMIT 1

    That is to say, it returns the correct provider_name for whatever IP I look up (of course I'm not really using 192.168.0.1 in my queries). However, when performing this same query as a subquery, in the following manner, it doesn't yield the result I would expect:

        SELECT event.id, event.event_name,
               (SELECT provider_name FROM network_providers
                WHERE event.ip_address >= network_providers.ip_start
                ORDER BY network_providers.ip_start DESC LIMIT 1) as provider
        FROM events

    Instead, a different (incorrect) value for network_provider is returned; over 90% (but curiously not all) of the values returned in the provider column contain the wrong provider information for that IP. Using event.ip_address in a subquery just to echo out the value confirms it contains the value I'd expect and that the subquery can parse it. Replacing event.ip_address with an actual network number also works; it's just using it dynamically in the subquery in this manner that doesn't work for me. I suspect there is something fundamental and important about subqueries in MySQL that I don't get. I've worked with IP addresses like this in MySQL quite a bit before, but haven't previously done lookups for them using a subquery.

    The question: I'd really appreciate an example of how I could get the output I want, and if someone here knows, some enlightenment as to why what I'm doing doesn't work, so I can avoid making this mistake again.

    Notes: The actual real-world usage I'm trying to do is considerably more complicated (involving joining two or three tables); this is a simplified version to avoid overly complicating the question. Additionally, I know I'm not using a BETWEEN on ip_start & ip_end. That's intentional (the DBs can be out of date, and in such cases the owner in the DB is almost always in the next specified range, and a 'best guess' is fine in this context); however, I'm grateful for any suggestions for improvement that relate to the question. Efficiency is always nice, but in this case absolutely not essential. Any help appreciated.

    Read the article

  • User-defined copy ctor, and copy ctors further down the chain - compiler bug? Programmer's brain bug?

    - by J.Colmsee
    Hi. I have a little problem, and I am not sure if it's a compiler bug or stupidity on my side. I have this struct:

        struct BulletFXData
        {
            int time_next_fx_counter;
            int next_fx_steps;
            Particle particles[2]; // this is the interesting one
            ParticleManager::ParticleId particle_id[2];
        };

    The member "Particle particles[2]" has a self-made kind of smart-ptr in it (a resource-counted texture class). This smart pointer has a default constructor that initializes the ptr to 0 (but that is not important). I also have another struct, containing the BulletFXData struct:

        struct BulletFX
        {
            BulletFXData data;
            BulletFXRenderFunPtr render_fun_ptr;
            BulletFXUpdateFunPtr update_fun_ptr;
            BulletFXExplosionFunPtr explode_fun_ptr;
            BulletFXLifetimeOverFunPtr lifetime_over_fun_ptr;

            BulletFX(
                BulletFXData data,
                BulletFXRenderFunPtr render_fun_ptr,
                BulletFXUpdateFunPtr update_fun_ptr,
                BulletFXExplosionFunPtr explode_fun_ptr,
                BulletFXLifetimeOverFunPtr lifetime_over_fun_ptr)
                : data(data),
                  render_fun_ptr(render_fun_ptr),
                  update_fun_ptr(update_fun_ptr),
                  explode_fun_ptr(explode_fun_ptr),
                  lifetime_over_fun_ptr(lifetime_over_fun_ptr)
            {
            }

            /*
            // USER DEFINED copy-ctor. if it's defined things go crazy
            BulletFX(const BulletFX& rhs)
                : data(data), // this line of code seems to do a plain memory-copy without calling the right ctors
                  render_fun_ptr(render_fun_ptr),
                  update_fun_ptr(update_fun_ptr),
                  explode_fun_ptr(explode_fun_ptr),
                  lifetime_over_fun_ptr(lifetime_over_fun_ptr)
            {
            }
            */
        };

    If I use the user-defined copy-ctor, my smart-pointer class goes crazy, and it seems that the copy ctor / assignment operator aren't called as they should be. So, does this all make sense? It seems as if my own copy-ctor of struct BulletFX should do exactly what the compiler-generated one would, but it seems to forget to call the right constructors down the chain. Compiler bug? Me being stupid? Sorry about the big code; some small example could have illustrated it too, but often you guys ask for the real code, so well, here it is :D

    EDIT: more info:

        typedef ParticleId unsigned int;

    Particle has no user-defined copy ctor, but has a member of type:

        Particle {
            ....
            Resource<Texture> tex_res;
            ...
        }

    Resource is a smart-pointer class and has all ctors defined (also the assignment operator), and it seems that Resource is copied bitwise.

    EDIT: henrik solved it... data(data) is stupid of course! It should of course be rhs.data!!! Sorry for the huge amount of code with a very little bug in it!!! (Guess you shouldn't code at 1 in the morning :D)

    Read the article

  • Is this a valid pattern for raising events in C#?

    - by Will Vousden
    Update: For the benefit of anyone reading this, since .NET 4 the lock is unnecessary due to changes in the synchronization of auto-generated events, so I just use this now:

        public static void Raise<T>(this EventHandler<T> handler, object sender, T e) where T : EventArgs
        {
            if (handler != null)
            {
                handler(sender, e);
            }
        }

    And to raise it:

        SomeEvent.Raise(this, new FooEventArgs());

    Having been reading one of Jon Skeet's articles on multithreading, I've tried to encapsulate the approach he advocates for raising an event in an extension method like so (with a similar generic version):

        public static void Raise(this EventHandler handler, object @lock, object sender, EventArgs e)
        {
            EventHandler handlerCopy;
            lock (@lock)
            {
                handlerCopy = handler;
            }
            if (handlerCopy != null)
            {
                handlerCopy(sender, e);
            }
        }

    This can then be called like so:

        protected virtual void OnSomeEvent(EventArgs e)
        {
            this.someEvent.Raise(this.eventLock, this, e);
        }

    Are there any problems with doing this? Also, I'm a little confused about the necessity of the lock in the first place. As I understand it, the delegate is copied in the example in the article to avoid the possibility of it changing (and becoming null) between the null check and the delegate call. However, I was under the impression that access/assignment of this kind is atomic, so why is the lock necessary?

    Update: With regard to Mark Simpson's comment below, I threw together a test:

        static class Program
        {
            private static Action foo;
            private static Action bar;
            private static Action test;

            static void Main(string[] args)
            {
                foo = () => Console.WriteLine("Foo");
                bar = () => Console.WriteLine("Bar");

                test += foo;
                test += bar;
                test.Test();

                Console.ReadKey(true);
            }

            public static void Test(this Action action)
            {
                action();
                test -= foo;
                Console.WriteLine();
                action();
            }
        }

    This outputs:

        Foo
        Bar

        Foo
        Bar

    This illustrates that the delegate parameter to the method (action) does not mirror the argument that was passed into it (test), which is kind of expected, I guess. My question is: will this affect the validity of the lock in the context of my Raise extension method?

    Update: Here is the code I'm now using. It's not quite as elegant as I'd have liked, but it seems to work:

        public static void Raise<T>(this object sender, ref EventHandler<T> handler, object eventLock, T e) where T : EventArgs
        {
            EventHandler<T> copy;
            lock (eventLock)
            {
                copy = handler;
            }
            if (copy != null)
            {
                copy(sender, e);
            }
        }

    Read the article

  • How to create a system-wide independent universal counter object, primarily for database keys?

    - by andora
    I would like to create/use a system-wide, independent, universal 'counter object' that can be called via COM in a thread-safe manner. The counter object will be passed an ID to identify which counter to return, handle the counting, 'persist' the count (occasionally), have reasonable performance (as fast as possible, perhaps capable of 1000 counts per second or better (1 ms)) and be accessible cross-process/out-of-process. The current count status must be persisted between object restarts/shutdowns.

    The counter object is likely to be a 'singleton' type object implemented in some form of free-threaded dictionary, containing maybe 10 counters (perhaps 50 max). The count needs to be monotonic and consistent (i.e. guaranteed unique sequential values). Each counter should have a few methods, like reset, inc, dec, set, clear, remove. As a luxury, I would like to have a variable increment (i.e. a 'step by' value). To support thread-safety, perhaps some form of critical-section or mutex call. It just needs to return a long/4-byte signed integer.

    I really want something that can be called from anywhere, including VBScript, so I figure COM is my preferred solution. The primary use of this is for database keys. I am unable to use autoinc or guid type keys and have ruled out database-generated counting systems at this point.

    I've spent days researching this and I have really struggled to find a solution. The best I can find is a free-threaded dictionary object that can be instantiated using COM+ from Motobit; it seems to offer all the 'basics' and I guess I could create some form of wrapper for this.

    So, here are my questions: Does such a general-purpose counter object already exist? Can you direct me to it? (MS did do an IIS/ASP object called 'MSWC.Counter', but this isn't a cross-process/out-of-process component and isn't thread-safe. But if it was, it would do!) What is the best way of creating such a component? (I'd prefer VB6 right now [don't ask!] but can do it in VB.NET 2005 if I had to.) I don't have the skills/knowledge/tools to use anything else. I am desperate for a workable solution. I need specific guidance! If anybody can code something up for me I am prepared to pay for it.

    Read the article

  • Understanding WebRequest

    - by Nai
    I found this snippet of code here that allows you to log into a website and get the response from the logged-in page. However, I'm having trouble understanding all the parts of the code. I've tried my best to fill in whatever I understand so far. Hope you guys can fill in the blanks for me. Thanks.

        string nick = "mrbean";
        string password = "12345";

        // this is the query data that is getting posted by the website.
        // the query parameters 'nick' and 'password' must match the
        // name of the form you're trying to log into. you can find the input names
        // by using firebug and inspecting the text field
        string postData = "nick=" + nick + "&password=" + password;

        // this puts the postData in a byte array with a specific encoding
        // Why must the data be in a byte array?
        byte[] data = Encoding.ASCII.GetBytes(postData);

        // this basically creates the login page of the site you want to log into
        WebRequest request = WebRequest.Create("http://www.mrbeanandme.com/login/");

        // im guessing these parameters need to be set but i dont know why?
        request.Method = "POST";
        request.ContentType = "application/x-www-form-urlencoded";
        request.ContentLength = data.Length;

        // this opens a stream for writing the post variables.
        // im not sure what a stream class does. need to do some reading into this.
        Stream stream = request.GetRequestStream();

        // you write the postData to the website and then close the connection?
        stream.Write(data, 0, data.Length);
        stream.Close();

        // this receives the response after the log in
        WebResponse response = request.GetResponse();
        stream = response.GetResponseStream();

        // i guess you need a stream reader to read a stream?
        StreamReader sr = new StreamReader(stream);

        // this outputs the code to console and terminates the program
        Console.WriteLine(sr.ReadToEnd());
        Console.ReadLine();
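    For comparison, here is a rough Java sketch of the same flow: form-encode the credentials, POST them with the right Content-Type, then read the response body. The URL and the 'nick'/'password' field names are simply taken from the snippet above; treat this as an illustration of the mechanics, not of any particular site's login form.

        import java.io.BufferedReader;
        import java.io.InputStreamReader;
        import java.io.OutputStream;
        import java.net.HttpURLConnection;
        import java.net.URL;
        import java.net.URLEncoder;
        import java.nio.charset.StandardCharsets;

        public class FormLogin {
            public static void main(String[] args) throws Exception {
                // Build the form body; URL-encode values in case they contain '&' or '='.
                String postData = "nick=" + URLEncoder.encode("mrbean", "UTF-8")
                        + "&password=" + URLEncoder.encode("12345", "UTF-8");
                byte[] body = postData.getBytes(StandardCharsets.US_ASCII);

                HttpURLConnection conn = (HttpURLConnection)
                        new URL("http://www.mrbeanandme.com/login/").openConnection();
                conn.setRequestMethod("POST");                 // we are sending data, not just fetching
                conn.setDoOutput(true);                        // allow writing a request body
                conn.setRequestProperty("Content-Type", "application/x-www-form-urlencoded");
                conn.setFixedLengthStreamingMode(body.length); // equivalent of ContentLength

                // Write the form data to the request stream and close it.
                try (OutputStream out = conn.getOutputStream()) {
                    out.write(body);
                }

                // Read the response the server sends back after the login attempt.
                try (BufferedReader in = new BufferedReader(
                        new InputStreamReader(conn.getInputStream(), StandardCharsets.UTF_8))) {
                    String line;
                    while ((line = in.readLine()) != null) {
                        System.out.println(line);
                    }
                }
            }
        }

    The byte array question from the comments has the same answer in both languages: HTTP bodies are raw bytes on the wire, so the text has to be encoded (here as ASCII) before it can be written to the request stream.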

    Read the article

  • Estimating the boundary of arbitrarily distributed data

    - by Dave
    I have two-dimensional discrete spatial data. I would like to make an approximation of the spatial boundaries of this data so that I can produce a plot with another dataset on top of it. Ideally, this would be an ordered set of (x,y) points that matplotlib can plot with the plt.Polygon() patch.

    My initial attempt is very inelegant: I place a fine grid over the data, and where data is found in a cell, a square matplotlib patch is created for that cell. The resolution of the boundary thus depends on the sampling frequency of the grid. Here is an example, where the grey region is the cells containing data, black where no data exists.

    OK, problem solved. Why am I still here? Well... I'd like a more "elegant" solution, or at least one that is faster (i.e. I don't want to get on with "real" work, I'd like to have some fun with this!). The best way I can think of is a ray-tracing approach, e.g.:

    1. from xmin to xmax, at y=ymin, check if the data boundary is crossed in intervals dx
    2. at y=ymin+dy, do 1
    3. do 1-2, but now sample in y

    An alternative is defining a centre and sampling in r-theta space, i.e. radial spokes in dtheta increments. Both would produce a set of (x,y) points, but then how do I order/link neighbouring points to create the boundary? A nearest-neighbour approach is not appropriate as, for example (to borrow from geography), an isthmus (think of Panama connecting N & S America) could then close off and isolate regions. This also might not deal very well with the holes seen in the data, which I would like to represent as a different plt.Polygon.

    The solution perhaps comes from solving an area maximisation problem: for a set of points defining the data limits, what is the maximum contiguous area contained within those points? To form the enclosed area, what are the neighbouring points for the nth point? How will the holes be treated in this scheme? Is this erring into topology now?

    Apologies, much of this is me thinking out loud. I'd be grateful for some hints, suggestions or solutions. I suspect this is an oft-studied problem with many solution techniques, but I'm looking for something simple to code and quick to run... I guess everyone is, really! Cheers, David

    Read the article

  • Should I add try/catch when casting an attribute of a JSP implicit object?

    - by Michael Mao
    Hi all: Basically what I mean is like this:

        List<String[]> routes = (List<String[]>) application.getAttribute("routes");

    This code tries to get an attribute named "routes" from the JSP implicit object application. But as everyone knows, after this line of code routes may very well contain null, which means the application hasn't got an attribute named "routes". Is this "casting on null" good programming practice in Java or not? Basically I try to avoid exceptions such as java.lang.ClassCastException. I reckon things like this are more a "heritage" of Java 1.4, when generic types were not introduced to the language. So I guess everything stored in application attributes is stored as Objects (similar to a traditional ArrayList), and when you do a downcast there might be invalid casts. What would you do in this case?

    Update: I just found that although in the implicit object application I did store a List of String arrays, when I do this:

        List<double[]> routes = (List<double[]>) application.getAttribute("routes");

    it doesn't produce any error... and I already feel uncomfortable. And even with this code:

        out.print(routes.get(0));

    it does print out something strange:

        [Ljava.lang.String;@18b352f

    Am I printing a "pointer to String"? I can finally get an exception like this:

        out.print(routes.get(0)[1]);

    with the error:

        java.lang.ClassCastException: [Ljava.lang.String;

    Because it was me who added the application attribute, I know which type it should be cast to. I feel this is not "good enough": what if I forget the exact type? I know there are still some cases where this sort of thing happens, such as in JSP/Servlets when you cast session attributes. Is there a better way to do this?

    Before you say "OMG, why don't you use Servlets???", I should justify my reason: because my uni sucks, its server can only work with Servlets generated by JSP, and I don't really want to learn how to fix issues like that (look at my previous question on this), and this is uni homework, so I've got no other choice. Forget all about WAR, WEB-INF, etc., and "code everything directly into JSP", because the professors do that too. :)
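    One common way to make this retrieval a little safer is to wrap the lookup in a small helper that checks the attribute with instanceof before casting and falls back to a default otherwise. Below is a minimal sketch; the helper class and its behaviour (returning an empty list instead of throwing) are my own choices, not anything from the question, and because of type erasure the element type of the list still cannot be verified at runtime.

        import java.util.Collections;
        import java.util.List;
        import javax.servlet.ServletContext;

        public final class Attributes {

            private Attributes() {}

            // Fetch a List attribute from the ServletContext, or an empty list if it
            // is missing or not a List. The unchecked cast of the element type remains
            // unavoidable, which is why the stored and expected types must be kept in sync.
            @SuppressWarnings("unchecked")
            public static <T> List<T> getList(ServletContext ctx, String name) {
                Object value = ctx.getAttribute(name);
                if (value instanceof List) {
                    return (List<T>) value;
                }
                return Collections.emptyList();
            }
        }

    In a JSP it would be used as List<String[]> routes = Attributes.getList(application, "routes"); which at least centralises the null handling and the cast in one place.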

    Read the article

  • Android HelloGoogleMaps to OSMdroid (Open Street Maps)

    - by birgit
    I am trying to reproduce a working HelloGoogleMaps app in Open Street Maps, but I have trouble including the itemized overlay in osmdroid. I have looked at several resources but I cannot figure out how to fix the error on OsmItemizedOverlay. I guess I am constructing OsmItemizedOverlay wrongly, or have a mixup with OsmItemizedOverlay and ItemizedOverlay? But everything I tried to change just raised more errors:

        "Implicit super constructor ItemizedOverlay() is undefined. Must explicitly invoke another constructor"
        "Cannot make a static reference to the non-static method setMarker(Drawable) from the type OverlayItem"

    I hope someone can help me get the class definition straight? Thanks so much!

        package com.example.osmdroiddemomap;

        import java.util.ArrayList;

        import android.app.AlertDialog;
        import android.content.Context;
        import android.graphics.Point;
        import android.graphics.drawable.Drawable;

        import org.osmdroid.api.IMapView;
        import org.osmdroid.views.*;
        import org.osmdroid.views.overlay.*;
        import org.osmdroid.views.overlay.OverlayItem.HotspotPlace;

        public class OsmItemizedOverlay extends ItemizedOverlay<OverlayItem> {

            Context mContext;
            private ArrayList<OverlayItem> mOverlays = new ArrayList<OverlayItem>();

            // ERRORS are raised by the following 3 lines:
            public OsmItemizedOverlay(Drawable defaultMarker, Context context) {
                OverlayItem.setMarker(defaultMarker);
                OverlayItem.setMarkerHotspot(HotspotPlace.CENTER);
                mContext = context;
            }

            public void addOverlay(OverlayItem overlay) {
                mOverlays.add(overlay);
                populate();
            }

            @Override
            protected OverlayItem createItem(int i) {
                return mOverlays.get(i);
            }

            @Override
            public int size() {
                return mOverlays.size();
            }

            protected boolean onTap(int index) {
                OverlayItem item = mOverlays.get(index);
                AlertDialog.Builder dialog = new AlertDialog.Builder(mContext);
                dialog.setTitle(item.getTitle());
                dialog.setMessage(item.getSnippet());
                dialog.show();
                return true;
            }

            @Override
            public boolean onSnapToItem(int arg0, int arg1, Point arg2, IMapView arg3) {
                // TODO Auto-generated method stub
                return false;
            }
        }

    Read the article

  • Android: Autostart app and load preferences

    - by BBoom
    Hi, I have a problem with initializing my app properly after the autostart. I've managed to get an autostart to work: after a reboot the app is shown as started, but the timers are not. My guess is that the "onCreate" function of MyApp is not called when I call context.startService(). The timers are set in the doActivity() function of MyApp. I would greatly appreciate any tips on what I could be doing wrong, or links to good tutorials. :)

    The manifest:

        <activity android:name=".MyApp" android:label="@string/app_name">
            <intent-filter>
                <action android:name="android.intent.action.MAIN" />
                <category android:name="android.intent.category.LAUNCHER" />
            </intent-filter>
        </activity>
        <receiver android:name="MyApp_Receiver">
            <intent-filter>
                <action android:name="android.intent.action.BOOT_COMPLETED" />
                <category android:name="android.intent.category.DEFAULT" />
            </intent-filter>
        </receiver>

    MyApp_Receiver is a BroadcastReceiver with the following two functions:

        public void onReceive(Context context, Intent intent) {
            // Do Autostart if intent is "BOOT_COMPLETED"
            if ((intent.getAction() != null)
                    && (intent.getAction().equals("android.intent.action.BOOT_COMPLETED"))) {
                // Start the service
                context.startService(new Intent(context, MyApp.class));
            }
            // Else do activity
            else
                MAIN_ACTIVITY.doActivity();
        }

        public static void setMainActivity(MyApp activity) {
            MAIN_ACTIVITY = activity;
        }

    MyApp extends PreferenceActivity and has an onCreate() and a doActivity(); doActivity() reads out the preferences and sets a timer depending on them.

        public void onCreate(Bundle savedInstanceState) {
            super.onCreate(savedInstanceState);

            // Show preferences
            addPreferencesFromResource(R.xml.preferences);

            // Register Preference Click Listeners
            getPreferenceScreen().getSharedPreferences()
                    .registerOnSharedPreferenceChangeListener(this);

            // Prepare for one-shot alarms
            if (mIntent == null) {
                mIntent = new Intent(MyApp.this, MyApp_Receiver.class);
                mSender = PendingIntent.getBroadcast(MyApp.this, 0, mIntent, 0);
                MyApp_Receiver.setMainActivity(this);
            }

            // Refresh and set all timers on start
            doActivity();
        }
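    One likely explanation for the missing onCreate() call: MyApp is an Activity, and context.startService() only starts Service components, so pointing it at an Activity class will not run the Activity's lifecycle. A common arrangement is to let the boot receiver start a small Service that owns the timer setup. The sketch below is only an illustration of that split; the class name TimerService and the scheduleTimers() placeholder are invented, not part of the original code.

        import android.app.Service;
        import android.content.Intent;
        import android.os.IBinder;

        // Hypothetical service started from the BOOT_COMPLETED receiver with
        // context.startService(new Intent(context, TimerService.class));
        public class TimerService extends Service {

            @Override
            public int onStartCommand(Intent intent, int flags, int startId) {
                scheduleTimers();     // read preferences and (re)arm the alarms here
                return START_STICKY;  // ask the system to keep the service around
            }

            private void scheduleTimers() {
                // placeholder: load SharedPreferences and set AlarmManager alarms,
                // i.e. the work currently done in MyApp.doActivity()
            }

            @Override
            public IBinder onBind(Intent intent) {
                return null;          // not a bound service
            }
        }

    The PreferenceActivity then stays purely a UI for editing settings, and both it and the boot receiver delegate the actual timer scheduling to the service.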

    Read the article

  • Cepstral Analysis for pitch detection

    - by Ohmu
    Hi! I'm looking to extract pitches from a sound signal. Someone on IRC just explained to me how taking a double FFT achieves this. Specifically:

    1. take FFT
    2. take log of square of absolute value (can be done with a lookup table)
    3. take another FFT
    4. take absolute value

    I am attempting this using vDSP. I can't understand how I didn't come across this technique earlier. I did a lot of hunting and asking questions; several weeks' worth. More to the point, I can't understand why I didn't think of it.

    It looks as though vDSP has functions to handle all of these tasks. However, I'm wondering about the accuracy of the final result. I have previously used a technique which scours the frequency bins of a single FFT for local maxima. When it encounters one, it uses a cunning technique (the change in phase since the last FFT) to more accurately place the actual peak within the bin. I am worried that this precision will be lost with the technique I'm presenting here. I guess the technique could be used after the second FFT to get the fundamental accurately, but it kind of looks like the information is lost in step 2.

    As this is a potentially tricky process, could someone with some experience just look over what I'm doing and check it for sanity? Also, I've heard there is an alternative technique involving fitting a quadratic over neighbouring bins. Is this of comparable accuracy? If so, I would favour it, as it doesn't involve remembering bin phases.

    So, questions: Does this approach make sense? Can it be improved? I'm a bit worried about the log-square component; there seems to be a vDSP function to do exactly that, vDSP_vdbcon, however there is no indication it precalculates a log table. I assume it doesn't, as the FFT function requires an explicit pre-calculation function to be called and passed into it, and this function doesn't. Is there some danger of harmonics being picked up? Is there any cunning way of making vDSP pull out the maxima, biggest first? Can anyone point me towards some research or literature on this technique?

    The main question: is it accurate enough? Can the accuracy be improved? I have just been told by an expert that the accuracy IS INDEED not sufficient. Is this the end of the line?

    Pi

    PS I get SO annoyed (npi) when I want to create tags, but cannot. :| I have suggested to the maintainers that SO keep track of attempted tags, but I'm sure I was ignored. We need tags for vDSP, Accelerate framework, cepstral analysis.
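    For reference, the four steps above amount to the standard power-cepstrum computation. In the usual notation (hedging a little: conventions differ on whether the second transform is a forward or inverse FFT and on whether the final magnitude is squared, but the peak positions come out the same) it can be written as

        C(q) = \left| \mathrm{FFT}\!\left( \log \left| \mathrm{FFT}(x) \right|^{2} \right) \right|

    where q is the quefrency axis measured in samples; a peak at quefrency q then corresponds to a fundamental of roughly sampleRate / q Hz, which is also why the quefrency resolution (and hence pitch accuracy) is limited by the FFT length.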

    Read the article

  • Neo4j 1.9.4 (REST Server,CYPHER) performance issue

    - by user2968943
    I have Neo4j 1.9.4 installed on a 24-core, 24 GB RAM (CentOS) machine, and for most queries CPU usage spikes to 200% with only a few concurrent requests.

    Domain: some sort of social application where a few types of nodes (profiles) have 3-30 text/array properties, plus 36 relationship types with at least 3 properties. Most nodes currently have ~300-500 relationships.

    Current data set footprint (from console):

        LogicalLogSize=4294907 (32MB)
        ArrayStoreSize=1675520 (12MB)
        NodeStoreSize=1342170 (10MB)
        PropertyStoreSize=1739548 (13MB)
        RelationshipStoreSize=6395202 (48MB)
        StringStoreSize=1478400 (11MB)

    which is IMHO really small. Most queries look like this one (with more or fewer WITH .. MATCH .. statements, and a few queries with variable-length relations, but those are often fast):

        START targetUser=node({id}), currentUser=node({current})
        MATCH targetUser-[contact:InContactsRelation]->n,
              n-[:InLocationRelation]->l,
              n-[:InCategoryRelation]->c
        WITH currentUser, targetUser, n, l, c, contact.fav is not null as inFavorites
        MATCH n<-[followers?:InContactsRelation]-()
        WITH currentUser, targetUser, n, l, c, inFavorites, COUNT(followers) as numFollowers
        RETURN id(n) as id, n.name? as name, n.title? as title, n._class as _class,
               n.avatar? as avatar, n.avatar_type? as avatar_type,
               l.name as location__name, c.name as category__name,
               true as isInContacts, inFavorites as isInFavorites, numFollowers

    It runs in ~1s-3s for the first run and ~1s-70ms for consecutive runs (it depends on the query), and about 5-10 queries are run for each impression.

    Another interesting behaviour: when I run a query from the console (neo4j) on my local machine many consecutive times (just pressing ctrl+enter for a few seconds), it has an almost constant execution time, but when I do it on the server it gets exponentially slower, and I guess that is somehow related to my problem.

    Problem: So my problem is that Neo4j is very CPU greedy (for a 24-core machine that may not be an issue, but it's obviously overkill for a small project). The first time I used an AWS EC2 m1.large instance, but overall performance was bad; during testing, CPU was always over 100%.

    Some relevant parts of the configuration:

        neostore.nodestore.db.mapped_memory=1280M
        wrapper.java.maxmemory=8192

    Note: I already tried a configuration where all memory-related parameters were HIGH and it didn't work (no change at all).

    Question: Where to dig? Configuration? Schema? Queries? What am I doing wrong? If you need more info (logs, configs) just ask ;)

    Read the article

  • Setting up a "to-many" relationship value dependency for a transient Core Data attribute

    - by Greg Combs
    I've got a relatively complicated Core Data relationship structure and I'm trying to figure out how to set up value dependencies (or observations) across various to-many relationships. Let me start out with some basic info. I've got a classroom with students, assignments, and grades (students X assignments). For simplicity's sake, we don't really have to focus much on the assignments yet.

        StudentObj <--->> ScoreObj <<---> AssignmentObj

    Each ScoreObj has a to-one relation with the StudentObj and the AssignmentObj. ScoreObj has real attributes for the numerical grade, the turnInDate, and notes. AssignmentObj.scores is the set of Score objects for that assignment (N = all students). AssignmentObj has real attributes for name, dueDate, curveFunction, gradeWeight, and maxPoints. StudentObj.scores is the set of Score objects for that student (N = all assignments). StudentObj also has real attributes like name, studentID, email, etc.

    StudentObj also has a transient (calculated, not stored) attribute called gradeTotal. This last item, gradeTotal, is the real pickle. It calculates the student's overall semester grade using the scores (ScoreObj) from all their assignments, their associated assignment gradeWeights, curves, and maxPoints, and various other things. This gradeTotal value is displayed in a table column, along with all the students and their individual assignment grades.

    Determining the value of gradeTotal is a relatively expensive operation, particularly with a large class, therefore I want to run it only when necessary. For simplicity's sake, I'm not storing that gradeTotal value in the Core Data model. I don't mind caching it somewhere, but I'm having a bitch of a time determining where and how to best update that cache. I need to run that calculation for each student whenever any value changes that affects their gradeTotal. If this were a simple to-one relationship, I know I could use something like keyPathsForValuesAffectingGradeTotal... but it's more like a many-to-one-to-many relationship.

    Does anyone know of an elegant (and KVC-correct) solution? I guess I could tear through all those score and assignment objects and tell them to register their students as observers, but this seems like a blunt-force approach.

    Read the article

  • How to add new object to an IList mapped as a one-to-many with NHibernate?

    - by Jørn Schou-Rode
    My model contains a class Section which has an ordered list of Statics that are part of this section. Leaving all the other properties out, the implementation of the model looks like this:

        public class Section
        {
            public virtual int Id { get; private set; }
            public virtual IList<Static> Statics { get; private set; }
        }

        public class Static
        {
            public virtual int Id { get; private set; }
        }

    In the database, the relationship is implemented as a one-to-many, where the table Static has a foreign key pointing to Section and an integer column Position to store its index position in the list it is part of. The mapping is done in Fluent NHibernate like this:

        public SectionMap()
        {
            Id(x => x.Id);
            HasMany(x => x.Statics).Cascade.All().LazyLoad()
                .AsList(x => x.WithColumn("Position"));
        }

        public StaticMap()
        {
            Id(x => x.Id);
            References(x => x.Section);
        }

    Now I am able to load existing Statics, and I am also able to update the details of those. However, I cannot seem to find a way to add new Statics to a Section and have this change persisted to the database. I have tried several combinations of:

        mySection.Statics.Add(myStatic)
        session.Update(mySection)
        session.Save(myStatic)

    but the closest I have gotten (using the first two statements) is an SQL exception reading: "Cannot insert the value NULL into column 'Position'". Clearly an INSERT is attempted here, but NHibernate does not seem to automatically append the index position to the SQL statement. What am I doing wrong? Am I missing something in my mappings? Do I need to expose the Position column as a property and assign a value to it myself?

    EDIT: Apparently everything works as expected if I remove the NOT NULL constraint on the Static.Position column in the database. I guess NHibernate makes the insert and immediately afterwards updates the row with a Position value. While this is an answer to the question, I am not sure if it is the best one. I would prefer the Position column to be not nullable, so I still hope there is some way to make NHibernate provide a value for that column directly in the INSERT statement. Thus, the question is still open. Any other solutions?

    Read the article

  • Arbitrary Form Processing with Drupal

    - by Aaron
    I am writing a module for my organization to cache XML feeds to static files in an arbitrary place on our webserver. I am new to Drupal development, and would like to know if I am approaching this the right way. Basically I:

    1. Expose a URL via the menu hook, where a user can enter an output directory on the webserver and press the "dump" button, and then have PHP go to Drupal and get the feed XML. I don't need help with that functionality, because I actually have a prototype working in Python (outside of Drupal).

    2. Provide a callback for the form where I can do my logic, using the form parameters.

    Here's the menu hook:

        function ncbi_cache_files_menu() {
          $items = array();

          $items['admin/content/ncbi_cache_files'] = array(
            'title' => 'NCBI Cache File Module',
            'description' => 'Cache Guide static content to files',
            'page callback' => 'drupal_get_form',
            'page arguments' => array( 'ncbi_cache_files_show_submit'),
            'access arguments' => array( 'administer site configuration' ),
            'type' => MENU_NORMAL_ITEM,
          );

          return $items;
        }

    I generate the form in:

        function ncbi_cache_files_show_submit() {
          $DEFAULT_OUT = 'http://myorg/foo';

          $form[ 'ncbi_cache_files' ] = array(
            '#type' => 'textfield',
            '#title' => t('Output Directory'),
            '#description' => t('Where you want the static files to be dumped. This should be a directory that www has write access to, and should be accessible from the foo server'),
            '#default_value' => t( $DEFAULT_OUT ),
            '#size' => strlen( $DEFAULT_OUT ) + 5,
          );

          $form['dump'] = array(
            '#type' => 'submit',
            '#value' => 'Dump',
            '#submit' => array( 'ncbi_cache_files_dump'),
          );

          return system_settings_form( $form );
        }

    Then the functionality is in the callback:

        function ncbi_cache_files_dump( $p, $q) {
          //dpm( get_defined_vars() );
          $outdir = $p['ncbi_cache_files']['#post']['ncbi_cache_files'];
          drupal_set_message('outdir: ' . $outdir );
        }

    The questions: Is this a decent way of processing an arbitrary form in Drupal? I don't really need to listen for any Drupal hooks, because I am basically just doing some URL and file processing. What are those arguments that I'm getting in the callback ($q)? That's the form array, I guess, with the post values? Is this the best way to get the form parameters to work on? Thanks for any advice.

    Read the article

  • Autoconf -- building with static library (newbie)

    - by EB
    I am trying to migrate my application from a manual build to autoconf, which is working very nicely so far. But I have one static library that I can't figure out how to integrate.

    That library will NOT be located in the usual library locations: the location of the binary (.a file) and header (.h file) will be given as a configure argument. (Notably, even if I move the .a file to /usr/lib or anywhere else I can think of, it still won't work.) It is also not named traditionally (it does not start with "lib" or "l"). Manual compilation is working with these (the directory is not predictable, this is just an example):

        gcc ... -I/home/john/mystuff /home/john/mystuff/helper.a

    (Uh, I actually don't understand why the .a file is referenced directly, not with -L or anything. Yes, I have a half-baked understanding of building C programs.)

    So, in my configure.ac, I can use the relevant configure argument to successfully find the header (.h file) using AC_CHECK_HEADER. Inside the AC_CHECK_HEADER I then add the location to CFLAGS, and the #include of the header file in the actual C code picks it up nicely. Given a configure argument that has been put into $location and the needed files being named helper.h and helper.a (both in the same directory), here is what works so far:

        AC_CHECK_HEADER([$location/helper.h],
            [AC_DEFINE([HAVE_HELPER_H], [1], [found helper.h])
             CFLAGS="$CFLAGS -I$location"])

    Where I run into difficulties is getting the binary (.a file) linked in. No matter what I try, I always get an error about undefined references to the function calls for that library. I'm pretty sure it's a linkage issue, because I can fuss with the C code and make an intentional error in the function calls to that library, which produces earlier errors indicating that the function prototypes have been loaded and used to compile.

    I tried adding the location that contains the .a file to LDFLAGS and then doing an AC_CHECK_LIB, but it is not found. Maybe my syntax is wrong, or maybe I'm missing something more fundamental, which would not be surprising since I'm a newbie and don't really know what I'm doing. Here is what I have tried:

        AC_CHECK_HEADER([$location/helper.h],
            [AC_DEFINE([HAVE_HELPER_H], [1], [found helper.h])
             CFLAGS="$CFLAGS -I$location";
             LDFLAGS="$LDFLAGS -L$location";
             AC_CHECK_LIB(helper)])

    No dice. AC_CHECK_LIB is looking for -lhelper I guess (or libhelper?), so I'm not sure if that's a problem, so I tried this too (omit AC_CHECK_LIB and include the .a directly in LDFLAGS), without luck:

        AC_CHECK_HEADER([$location/helper.h],
            [AC_DEFINE([HAVE_HELPER_H], [1], [found helper.h])
             CFLAGS="$CFLAGS -I$location";
             LDFLAGS="$LDFLAGS -L$location/helper.a"])

    To emulate the manual compilation, I tried removing the -L, but that doesn't help either:

        AC_CHECK_HEADER([$location/helper.h],
            [AC_DEFINE([HAVE_HELPER_H], [1], [found helper.h])
             CFLAGS="$CFLAGS -I$location";
             LDFLAGS="$LDFLAGS $location/helper.a"])

    I tried other combinations and permutations, but I think I might be missing something more fundamental...

    Read the article

  • How to stream semi-live audio over internet

    - by Thomas Tempelmann
    I want to write something like Skype, i.e. I have a constant audio stream on one computer and then recompress it in a format that's suitable for a latent internet connection, receive it on the other end and play it. Let's also assume that the internet connection is fairly modern and fast, i.e. DSL or alike, no slow connections over phone and such. The involved computers will also be rather modern (dual-core Intel CPUs at 2 GHz or more).

    I know how to handle the audio on the machines. What I don't know is how to transmit the audio in an efficient way. The challenges are:

    - I'd like to get good audio quality across the line.
    - The stream should be received without drops.
    - The stream may, however, be received with a little delay (a one-second delay is acceptable). I imagine that the transport software could first determine the average (and max) latency, then start the stream and tell the receiver to wait for that max latency before starting to play the audio. With that, if the latency doesn't get any higher, the entire stream will be playable on the other side without stutter or drops.
    - If, due to unexpected IP latencies or blockages, the stream does get cut off, I want to be able to notice this so that I can take action (e.g. abort the stream) and eventually start a new transmission.

    What are my options if I want to use ready-made software for the compression and transmission? I have no intention of writing my own audio compression engine, really. OTOH, I plan to sell the solution in a vertical market, meaning I can afford a few dollars of license fees per copy, but not $100s.

    I guess the simplest solution would be to just open a TCP stream, send a few packets back and forth to determine their running time (or even use UDP for that), then use the results as the guide for my max latency value, then simply fire the audio data in its raw form (uncompressed 16-bit stereo), along with a timing code, over the TCP connection. The receiver reads the data and plays it with the pre-determined delay. That might just work with the type of fast connection I expect. I just wonder if there are better solutions to reach this goal, with better performance (lower latency) and less data (compressed).

    BTW, I will first try to implement this on OS X, but might want to do it on Windows, too, if it proves successful.

    Read the article

  • How can I group an array of rectangles into "Islands" of connected regions?

    - by Eric
    The problem

    I have an array of java.awt.Rectangles. For those who are not familiar with this class, the important piece of information is that they provide an .intersects(Rectangle b) method.

    I would like to write a function that takes this array of Rectangles and breaks it up into groups of connected rectangles. Let's say, for example, that these are my rectangles (the constructor takes the arguments x, y, width, height):

        Rectangle[] rects = new Rectangle[]
        {
            new Rectangle(0, 0, 4, 2), //A
            new Rectangle(1, 1, 2, 4), //B
            new Rectangle(0, 4, 8, 2), //C
            new Rectangle(6, 0, 2, 2)  //D
        }

    A quick drawing shows that A intersects B and B intersects C. D intersects nothing. A tediously drawn piece of ascii art does the job too:

        +-------+ +---+
        ¦A+---+ ¦ ¦ D ¦
        +-+---+-+ +---+
          ¦ B ¦
        +-+---+---------+
        ¦ +---+ C       ¦
        +---------------+

    Therefore, the output of my function should be:

        new Rectangle[][]
        {
            new Rectangle[] {A,B,C},
            new Rectangle[] {D}
        }

    The failed code

    This was my attempt at solving the problem:

        public List<Rectangle> getIntersections(ArrayList<Rectangle> list, Rectangle r)
        {
            List<Rectangle> intersections = new ArrayList<Rectangle>();
            for(Rectangle rect : list)
            {
                if(r.intersects(rect))
                {
                    list.remove(rect);
                    intersections.add(rect);
                    intersections.addAll(getIntersections(list, rect));
                }
            }
            return intersections;
        }

        public List<List<Rectangle>> mergeIntersectingRects(Rectangle... rectArray)
        {
            List<Rectangle> allRects = new ArrayList<Rectangle>(rectArray);
            List<List<Rectangle>> groups = new ArrayList<ArrayList<Rectangle>>();
            for(Rectangle rect : allRects)
            {
                allRects.remove(rect);
                ArrayList<Rectangle> group = getIntersections(allRects, rect);
                group.add(rect);
                groups.add(group);
            }
            return groups;
        }

    Unfortunately, there seems to be an infinite recursion loop going on here. My uneducated guess would be that Java does not like me doing this:

        for(Rectangle rect : allRects)
        {
            allRects.remove(rect);
            //...
        }

    Can anyone shed some light on the issue?
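    For what it's worth, here is a small self-contained sketch of the same grouping done iteratively (a breadth-first flood fill over the intersection relation), which sidesteps the remove-while-iterating problem entirely. The class and method names are mine, and it only relies on java.awt.Rectangle.intersects:

        import java.awt.Rectangle;
        import java.util.ArrayDeque;
        import java.util.ArrayList;
        import java.util.Deque;
        import java.util.List;

        public class RectangleIslands {

            // Groups rectangles into "islands" of transitively intersecting rectangles.
            public static List<List<Rectangle>> group(Rectangle... rects) {
                List<List<Rectangle>> groups = new ArrayList<List<Rectangle>>();
                boolean[] visited = new boolean[rects.length];

                for (int i = 0; i < rects.length; i++) {
                    if (visited[i]) continue;

                    // Start a new island and flood-fill from rectangle i.
                    List<Rectangle> island = new ArrayList<Rectangle>();
                    Deque<Integer> queue = new ArrayDeque<Integer>();
                    queue.add(i);
                    visited[i] = true;

                    while (!queue.isEmpty()) {
                        int current = queue.remove();
                        island.add(rects[current]);
                        for (int j = 0; j < rects.length; j++) {
                            if (!visited[j] && rects[current].intersects(rects[j])) {
                                visited[j] = true;
                                queue.add(j);
                            }
                        }
                    }
                    groups.add(island);
                }
                return groups;
            }

            public static void main(String[] args) {
                List<List<Rectangle>> groups = group(
                        new Rectangle(0, 0, 4, 2),  // A
                        new Rectangle(1, 1, 2, 4),  // B
                        new Rectangle(0, 4, 8, 2),  // C
                        new Rectangle(6, 0, 2, 2)); // D
                System.out.println(groups.size() + " islands: " + groups);
            }
        }

    With the four rectangles from the question this prints two islands, {A,B,C} and {D}. Note that Rectangle.intersects requires a positive overlap area, so rectangles that merely touch along an edge would land in separate islands.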

    Read the article

  • How to Work Around Limitations in Generic Type Constraints in C#?

    - by Jose
    Okay, I'm looking for some input. I'm pretty sure this is not currently supported in .NET 3.5, but here goes. I want to require a generic type passed into my class to have a constructor like this:

        new(IDictionary<string,object>)

    so the class would look like this:

        public MyClass<T> where T : new(IDictionary<string,object>)
        {
            T CreateObject(IDictionary<string,object> values)
            {
                return new T(values);
            }
        }

    But the compiler doesn't support this; it doesn't really know what I'm asking.

    Some of you might ask: why do you want to do this? Well, I'm working on a pet project of an ORM, so I get values from the DB and then create the object and load the values. I thought it would be cleaner to allow the object to just create itself with the values I give it. As far as I can tell I have two options:

    1) Use reflection (which I'm trying to avoid) to grab the PropertyInfo[] array and then use that to load the values.

    2) Require T to support an interface like so:

        public interface ILoadValues
        {
            void LoadValues(IDictionary values);
        }

    and then do this:

        public MyClass<T> where T : new(), ILoadValues
        {
            T CreateObject(IDictionary<string,object> values)
            {
                T obj = new T();
                obj.LoadValues(values);
                return obj;
            }
        }

    The problem I have with the interface, I guess, is philosophical: I don't really want to expose a public method for people to load the values. Using the constructor, the idea was that if I had an object like this:

        namespace DataSource.Data
        {
            public class User
            {
                protected internal User(IDictionary<string,object> values)
                {
                    //Initialize
                }
            }
        }

    as long as MyClass<T> was in the same assembly, the constructor would be available. I personally think the type constraint should ask "Do I have access to this constructor? I do? Great!" Anyway, any input is welcome.

    Read the article

  • libXcodeDebuggerSupport.dylib is missing in iOS 4.2.1 development SDK

    - by Kalle
    Note: creating a symbolic link to use the 4.2 lib seems to work fine; maybe:

        cd /Developer/Platforms/iPhoneOS.platform/DeviceSupport/4.2.1\ \(8C148\)/Symbols/
        sudo ln -s ../../4.2\ \(8C134\)/Symbols/Developer

    Request: See end of this question!

    After upgrading from 4.2.0 (beta, I believe) to 4.2.1, the libXcodeDebuggerSupport.dylib file is missing, which results in:

        warning: Unable to read symbols for /Developer/Platforms/iPhoneOS.platform/DeviceSupport/4.2.1 (8C148)/Symbols/Developer/usr/lib/libXcodeDebuggerSupport.dylib (file not found).

    which I guess isn't good. Looking at the directory in question I note:

        .../DeviceSupport/4.2 (8C134)/Symbols/Developer/usr/lib/libXcodeDebuggerSupport.dylib

    but

        .../DeviceSupport/4.2.1 (8C148)/Symbols/System/
        .../DeviceSupport/4.2.1 (8C148)/Symbols/usr/

    The above two dirs make up all the content in the 4.2.1 folder. No "Developer" folder. Checking the /usr/ dir there, I find no libXcodeDebuggerSupport.dylib file in the lib dir either, so ln -s'ing isn't an option.

    Worth mentioning: after the upgrade, I plugged the iPad in and had to click "Use for development" in Xcode Organizer. Doing so, I got a message about symbols missing for that version, and Xcode proceeded to generate such, then failed. I restored the iPad and did "Use for development" again, and nothing about missing symbols appeared...

    Update: deletion of /Developer and reinstallation of Xcode from scratch does not fix this issue.

    Update 2: I just realized that after the reinstall of Xcode, .../DeviceSupport/4.2 (8C134)/Symbols is now a symbolic link,

        lrwxr-xr-x 1 root admin 36 Dec 3 17:17 Symbols -> ../../Developer/SDKs/iPhoneOS4.2.sdk

    and the directory in question has the appropriate files. Maybe this is simply a matter of linking the 4.2.1 dir in the same fashion? I'll try that and see if Xcode freaks out.

    If someone who has this file could provide an md5 sum, that would be splendid. This is what it says for me:

        $ md5 /Developer/Platforms/iPhoneOS.platform/DeviceSupport/4.2\ \(8C134\)/Symbols/Developer/usr/lib/libXcodeDebuggerSupport.dylib
        MD5 (/Developer/Platforms/iPhoneOS.platform/DeviceSupport/4.2 (8C134)/Symbols/Developer/usr/lib/libXcodeDebuggerSupport.dylib) = 08f93a0a2e3b03feaae732691f112688

    If the MD5 sum is identical to the output of

        $ md5 /Developer/Platforms/iPhoneOS.platform/DeviceSupport/4.2.1\ \(8C148\)/Symbols/Developer/usr/lib/libXcodeDebuggerSupport.dylib

    then we're all set.

    Read the article

  • Is this a legitimate implementation of a 'remember me' function for my web app?

    - by user246114
    Hi, I'm trying to add a "remember me" feature to my web app to let a user stay logged in between browser restarts. I think I've got the bulk of it. I'm using Google App Engine for the backend, which lets me use Java servlets. Here is some pseudo-code to demonstrate:

        public class MyServlet {

            public void handleRequest() {
                if (getThreadLocalRequest().getSession().getAttribute("user") != null) {
                    // User already has a session running for them.
                } else {
                    // No session, but check if they chose 'remember me' during
                    // their initial login; if so we can have them 'auto log in'
                    // now.
                    Cookie[] cookies = getThreadLocalRequest().getCookies();
                    if (cookies.find("rememberMePlz").exists()) {
                        // The value of this cookie is the cookie id, which is a
                        // unique string that is in no way based upon the user's
                        // name/email/id, and is hard to randomly generate.
                        String cookieid = cookies.find("rememberMePlz").value();

                        // Get the user object associated with this cookie id from
                        // the data store; would probably be a two-step process like:
                        //
                        //   select * from cookies where cookieid = 'cookieid';
                        //   select * from users where userid = 'userid fetched from above select';
                        User user = DataStore.getUserByCookieId(cookieid);

                        if (user != null) {
                            // Start a session for them.
                            getThreadLocalRequest().getSession().setAttribute("user", user);
                        } else {
                            // Either couldn't find a matching cookie with the
                            // supplied id, or maybe we expired the cookie on
                            // our side or blocked it.
                        }
                    }
                }
            }
        }

        // On first login, if the user wanted us to remember them, we'd generate
        // an instance of this object for them in the data store. We send the
        // cookieid value down to the client and they persist it on their side
        // in the "rememberMePlz" cookie.
        public class CookieLong {
            private String mCookieId;
            private String mUserId;
            private long mExpirationDate;
        }

    Alright, this all makes sense. The only frightening thing is: what happens if someone finds out the value of the cookie? A malicious individual could set that cookie in their browser, access my site, and essentially be logged in as the user associated with it! On the same note, I guess this is why the cookie IDs must be difficult to randomly generate, because a malicious user doesn't even have to steal someone's cookie: they could just randomly assign cookie values and start logging in as whichever user happens to be associated with that cookie, if any, right?

    Scary stuff. I feel like I should at least include the username in the client cookie, such that when it presents itself to the server, I won't auto-login unless the username + cookieid match in the DataStore. Any comments would be great; I'm new to this and trying to figure out a best practice. I'm not writing a site which contains any sensitive personal information, but I'd like to minimize any potential for abuse all the same. Thanks
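    On the "hard to randomly generate" point, the usual approach is to mint the token from a cryptographically secure RNG with enough bits that guessing is infeasible, and to store only a hash of it server-side so a leaked table doesn't hand out valid cookies. Below is a small sketch using only the JDK; the class and method names are mine, not from the question, and it assumes Java 8's Base64 is available.

        import java.security.MessageDigest;
        import java.security.SecureRandom;
        import java.util.Base64;

        public final class RememberMeTokens {

            private static final SecureRandom RNG = new SecureRandom();

            private RememberMeTokens() {}

            // 32 random bytes is roughly 256 bits of entropy, which is effectively
            // unguessable; this string is what goes into the "rememberMePlz" cookie.
            public static String newToken() {
                byte[] raw = new byte[32];
                RNG.nextBytes(raw);
                return Base64.getUrlEncoder().withoutPadding().encodeToString(raw);
            }

            // Store this hash (rather than the token itself) next to the user id and
            // expiry, and compare hashes when the cookie comes back in.
            public static String hash(String token) throws Exception {
                MessageDigest sha256 = MessageDigest.getInstance("SHA-256");
                byte[] digest = sha256.digest(token.getBytes("UTF-8"));
                return Base64.getUrlEncoder().withoutPadding().encodeToString(digest);
            }
        }

    With a token like this, the "randomly assign cookie values until one matches" attack becomes a brute-force search over a 2^256 space, which is the property the question is really after.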

    Read the article

  • clarification on the concept of "web service"

    - by udit
    I'm a little confused by the varying definitions and implementations of web services available, and need some clarification please.

    Ones I have used till now:

    1. If a vendor gives me a specific format of XML that I can populate with data and send as a request, and I make a simple HTTP POST over the internet passing in the XML string as the payload, is this a web service call? If so, is there a specific name for this kind of web service? Because obviously it does not use anything like Axis, WSDL or SOAP to establish the connection.

    2. A variant of this: the vendor gives me an XSD, I use JAXB to make a Java class out of it and pass in the serialized version of the object, which eventually works out to be the same as option 1.

    3. RESTful web service: the vendor gives me a URL like http://restfulservice/products and I can make HTTP requests to the URL, and depending on what HTTP verb I use, the appropriate action is called and the response sent over the wire.

    Ones I have only read about / have a vague idea about:

    4. SOAP. How does this work? I've read the W3Schools tutorial and I understand that there is a very specific form of XML, standardized according to W3C standards, that we use to pass the same kind of messages as in option 1. But how does this work in real life? The vendor sends me what? Do I generate classes? Do I serialize some objects and HTTP POST them over to an address? Or do the generated objects themselves have connection methods that will do this for me?

    5. What about WSDL? When does a vendor send me WSDL and what do I do with it? I guess I can generate classes from it. If yes, then what do I do with the generated classes? When do I need that Axis jar to generate classes from something that the vendor sends?

    As you can see, I have some clear and other mostly vague ideas about the different kinds of web services available. It would help if someone could clarify and/or point to more real-world resources. I've looked a little bit into Java web services on the internet, and the numerous four-letter acronyms that get thrown at me make me dizzy. Thanks
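    To make option 2 a little more concrete, here is roughly what the JAXB side can look like once a class exists for the vendor's XSD (whether generated by xjc or hand-annotated): you marshal the object to XML and then send that string as the HTTP POST body exactly as in option 1. The ProductRequest class and its fields are invented for illustration; a real schema-generated class would of course look different.

        import java.io.StringWriter;
        import javax.xml.bind.JAXBContext;
        import javax.xml.bind.Marshaller;
        import javax.xml.bind.annotation.XmlRootElement;

        public class PoxClient {

            // Stand-in for a class that xjc would generate from the vendor's XSD.
            @XmlRootElement(name = "productRequest")
            public static class ProductRequest {
                public String sku;
                public int quantity;
            }

            public static void main(String[] args) throws Exception {
                ProductRequest req = new ProductRequest();
                req.sku = "ABC-123";
                req.quantity = 2;

                // Marshal the object to the XML the vendor expects.
                JAXBContext ctx = JAXBContext.newInstance(ProductRequest.class);
                Marshaller m = ctx.createMarshaller();
                m.setProperty(Marshaller.JAXB_FORMATTED_OUTPUT, Boolean.TRUE);

                StringWriter xml = new StringWriter();
                m.marshal(req, xml);

                // This string is what would be sent as the HTTP POST payload in option 1.
                System.out.println(xml);
            }
        }

    This style of sending plain schema-defined XML over HTTP, without a SOAP envelope, is often informally called "POX over HTTP" (plain old XML), which may be the name the first question is looking for.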

    Read the article

  • How would you implement this "WorkerChain" functionality in .NET?

    - by Dan Tao
    Sorry for the vague question title; not sure how to encapsulate what I'm asking below succinctly. (If someone with editing privileges can think of a more descriptive title, feel free to change it.)

    The behavior I need is this. I am envisioning a worker class that accepts a single delegate task in its constructor (for simplicity, I would make it immutable; no more tasks can be added after instantiation). I'll call this task T. The class should have a simple method, something like GetToWork, that will exhibit this behavior:

    1. If the worker is not currently running T, then it will start doing so right now.
    2. If the worker is currently running T, then once it is finished, it will start T again immediately.

    GetToWork can be called any number of times while the worker is running T; the simple rule is that, during any execution of T, if GetToWork was called at least once, T will run again upon completion (and then if GetToWork is called while T is running that time, it will repeat itself again, etc.).

    Now, this is pretty straightforward with a boolean switch. But this class needs to be thread-safe, by which I mean steps 1 and 2 above need to comprise atomic operations (at least I think they do).

    There is an added layer of complexity. I have need of a "worker chain" class that will consist of many of these workers linked together. As soon as the first worker completes, it essentially calls GetToWork on the worker after it; meanwhile, if its own GetToWork has been called, it restarts itself as well. Logically, calling GetToWork on the chain is essentially the same as calling GetToWork on the first worker in the chain (I would fully intend that the chain's workers not be publicly accessible).

    One way to imagine how this hypothetical "worker chain" would behave is by comparing it to a team in a relay race. Suppose there are four runners, W1 through W4, and let the chain be called C. If I call C.StartWork(), what should happen is this:

    - If W1 is at his starting point (i.e., doing nothing), he will start running towards W2.
    - If W1 is already running towards W2 (i.e., executing his task), then once he reaches W2, he will signal to W2 to get started, immediately return to his starting point and, since StartWork has been called, start running towards W2 again.
    - When W1 reaches W2's starting point, he'll immediately return to his own starting point.
    - If W2 is just sitting around, he'll start running immediately towards W3.
    - If W2 is already off running towards W3, then W2 will simply go again once he's reached W3 and returned to his starting point.

    The above is probably a little convoluted and written out poorly, but hopefully you get the basic idea. Obviously, these workers will be running on their own threads. Also, I guess it's possible this functionality already exists somewhere? If that's the case, definitely let me know!
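    The question is about C#, but the core "run again if asked while running" trick is language-agnostic, so here is a hedged Java sketch of a single worker using a small atomic state machine, purely to illustrate the idea (it is not the poster's API, and the class name is invented). Chaining would then amount to each worker's task ending with a call to the next worker's getToWork().

        import java.util.concurrent.ExecutorService;
        import java.util.concurrent.Executors;
        import java.util.concurrent.atomic.AtomicInteger;

        public final class RepeatOnRequestWorker {

            private static final int IDLE = 0;
            private static final int RUNNING = 1;
            private static final int RUNNING_RERUN_REQUESTED = 2;

            private final Runnable task;
            private final ExecutorService executor = Executors.newSingleThreadExecutor();
            private final AtomicInteger state = new AtomicInteger(IDLE);

            public RepeatOnRequestWorker(Runnable task) {
                this.task = task;
            }

            // "GetToWork": start now if idle, otherwise remember to run once more.
            public void getToWork() {
                while (true) {
                    int s = state.get();
                    if (s == IDLE) {
                        if (state.compareAndSet(IDLE, RUNNING)) {
                            executor.execute(this::runLoop);
                            return;
                        }
                    } else if (s == RUNNING) {
                        if (state.compareAndSet(RUNNING, RUNNING_RERUN_REQUESTED)) {
                            return;
                        }
                    } else {
                        return; // a rerun is already queued; extra calls coalesce
                    }
                }
            }

            private void runLoop() {
                while (true) {
                    task.run();
                    // Nobody asked again while we ran: go idle and stop.
                    if (state.compareAndSet(RUNNING, IDLE)) {
                        return;
                    }
                    // A rerun was requested: consume the request and go around again.
                    state.set(RUNNING);
                }
            }
        }

    The compare-and-set calls make steps 1 and 2 of the description atomic: a call arriving mid-run can only ever move the state from RUNNING to RUNNING_RERUN_REQUESTED, which the run loop consumes exactly once per completed pass.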

    Read the article
