Search Results

  • Optimizing Levenshtein Distance Algorithm

    - by Matt
    I have a stored procedure that uses Levenshtein distance to determine the result closest to what the user typed. The only thing really affecting the speed is the function that calculates the Levenshtein distance for all the records before selecting the record with the lowest distance (I've verified this by putting a 0 in place of the call to the Levenshtein function). The table has 1.5 million records, so even the slightest adjustment may shave off a few seconds. Right now the entire thing runs over 10 minutes. Here's the method I'm using:

        ALTER FUNCTION dbo.Levenshtein
        (
            @Source nvarchar(200),
            @Target nvarchar(200)
        )
        RETURNS int
        AS
        BEGIN
            DECLARE @Source_len int, @Target_len int, @i int, @j int,
                    @Source_char nchar, @Dist int, @Dist_temp int,
                    @Distv0 varbinary(8000), @Distv1 varbinary(8000)

            SELECT @Source_len = LEN(@Source), @Target_len = LEN(@Target),
                   @Distv1 = 0x0000, @j = 1, @i = 1, @Dist = 0

            -- Initialize the previous row to 1..@Target_len.
            WHILE @j <= @Target_len
            BEGIN
                SELECT @Distv1 = @Distv1 + CAST(@j AS binary(2)), @j = @j + 1
            END

            WHILE @i <= @Source_len
            BEGIN
                SELECT @Source_char = SUBSTRING(@Source, @i, 1), @Dist = @i,
                       @Distv0 = CAST(@i AS binary(2)), @j = 1

                WHILE @j <= @Target_len
                BEGIN
                    SET @Dist = @Dist + 1
                    SET @Dist_temp = CAST(SUBSTRING(@Distv1, @j+@j-1, 2) AS int) +
                        CASE WHEN @Source_char = SUBSTRING(@Target, @j, 1) THEN 0 ELSE 1 END
                    IF @Dist > @Dist_temp SET @Dist = @Dist_temp

                    SET @Dist_temp = CAST(SUBSTRING(@Distv1, @j+@j+1, 2) AS int) + 1
                    IF @Dist > @Dist_temp SET @Dist = @Dist_temp

                    SELECT @Distv0 = @Distv0 + CAST(@Dist AS binary(2)), @j = @j + 1
                END

                SELECT @Distv1 = @Distv0, @i = @i + 1
            END

            RETURN @Dist
        END

    Anyone have any ideas? Any input is appreciated. Thanks, Matt
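
    For intuition, here is a minimal Python sketch (not from the question) of the same two-row dynamic-programming scheme the T-SQL function encodes with varbinary rows:

        def levenshtein(source, target):
            # previous[j] holds the distance between the first i-1 source
            # characters and the first j target characters.
            previous = list(range(len(target) + 1))
            for i, s_char in enumerate(source, start=1):
                current = [i]
                for j, t_char in enumerate(target, start=1):
                    cost = 0 if s_char == t_char else 1
                    current.append(min(previous[j] + 1,          # deletion
                                       current[j - 1] + 1,       # insertion
                                       previous[j - 1] + cost))  # substitution
                previous = current
            return previous[-1]

    Since the stored procedure only needs the record with the lowest distance, one common speed-up (an assumption about the workload, not something from the question) is to carry the best distance found so far and abandon a computation as soon as the minimum of the current row exceeds it; row minimums never decrease, so the final distance can only be larger.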

  • Incoming connections from 2 different Internet connections

    - by RJClinton
    I am hosting a game server on my Windows Server 2003 32-bit machine. Due to some limitations of my ISP, I have to take two Internet connections in order to satisfy my bandwidth requirement. I am facing serious problems here. The Internet is supplied over WiMAX: I am given a PoE unit for each connection, the RJ45 cables from both PoE units (two connections) are plugged into a Netgear 100 Mbps switch, and another RJ45 cable connects the server to the switch. I have tried to configure the IP addresses and gateways of both connections on the single NIC in the server. The IP ranges and gateways of the two Internet connections are very different. I know that this is not the proper approach, because alternate gateways are only meant as backups in case the main gateway fails, and which gateway is used depends on the metric (correct me if I am wrong). I am planning to get a second PCI NIC so that I can connect each Internet connection to its own NIC. If I use this approach, will I be able to accept connections on both NICs? Also, please suggest any other alternatives that I might use.

  • AbstractMethodError when invoking createArrayOf, with postgresql 8.4 jdbc4 and JBoss 5.1GA

    - by Francesco
    Hi, when using this method:

        public List<Field> getFieldWithoutId(List<Integer> idSections) throws Exception {
            try {
                Connection conn = this.getConnection();
                Array arraySections = conn.createArrayOf("int4", idSections.toArray());
                this.log.info("Recupero field");
                List<Field> fields = this.getJdbcTemplate().query(
                        getFieldWithoutIdQuery,
                        new Object[] { arraySections },
                        ParameterizedBeanPropertyRowMapper.newInstance(Field.class));
                /* if (!conn.isClosed()) conn.close(); */
                releaseConnection(conn);
                return fields;
            } catch (Exception e) {
                e.printStackTrace();
                throw new Exception("Errore.");
            }
        }

    I get an exception at conn.createArrayOf("int4", idSections.toArray()). The exception is:

        javax.ejb.EJBException: Unexpected Error
        java.lang.AbstractMethodError: org.jboss.resource.adapter.jdbc.jdk5.WrappedConnectionJDK5.createArrayOf(Ljava/lang/String;[Ljava/lang/Object;)Ljava/sql/Array;

    postgresql-8.4-701.jdbc4.jar is in the jboss/server/all/lib directory. The application is Spring-based with EJB3. When working locally with the same setup everything is fine; this only happens in a preproduction environment. The only difference is that locally I run JBoss in the default configuration, while in the other case there are two JBoss instances in the all configuration. I can't track down the cause of this error. Could someone help me please?

  • Django custom managers - how do I return only objects created by the logged-in user?

    - by Tom Tom
    I want to overwrite the custom objects model manager to only return objects a specific user created. Admin users should still get all objects via the objects model manager. Now I have found an approach that could work. They propose creating your own middleware looking like this:

        #### myproject/middleware/threadlocals.py
        try:
            from threading import local
        except ImportError:
            # Python 2.3 compatibility
            from django.utils._threading_local import local

        _thread_locals = local()

        def get_current_user():
            return getattr(_thread_locals, 'user', None)

        class ThreadLocals(object):
            """Middleware that gets various objects from the request
            object and saves them in thread local storage."""
            def process_request(self, request):
                _thread_locals.user = getattr(request, 'user', None)
        #### end

    And in the custom manager you could call the get_current_user() method to return only objects a specific user created:

        class UserContactManager(models.Manager):
            def get_query_set(self):
                return super(UserContactManager, self).get_query_set().filter(creator=get_current_user())

    Is this a good approach to this use-case? Will it work? Or is this like "using a sledgehammer to crack a nut"? ;-) Just using Contact.objects.filter(created_by=user) in each view doesn't look very neat to me.

    EDIT: Do not use this middleware approach!!! Use the approach stated by Jack M. below. After a while of testing, this approach behaved pretty strangely, and it mixes up global state with the current request. Use the approach presented below instead. It is really easy, and there is no need to hack around with middleware. Create a custom manager in your model with a function that expects the current user (or any other user) as an input:

        # in your models.py
        class HourRecordManager(models.Manager):
            def for_user(self, user):
                return self.get_query_set().filter(created_by=user)

        class HourRecord(models.Model):
            # Managers
            objects = HourRecordManager()

        # in your view you can call the manager like this and get back only
        # the objects created by the currently logged-in user:
        hr_set = HourRecord.objects.for_user(request.user)
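
    As a side note, here is a minimal sketch of the same pattern with a chainable QuerySet; this assumes a newer Django (1.7 or later, where QuerySet.as_manager() exists) and is not part of the question:

        from django.db import models

        class HourRecordQuerySet(models.QuerySet):
            def for_user(self, user):
                return self.filter(created_by=user)

        class HourRecord(models.Model):
            created_by = models.ForeignKey('auth.User', on_delete=models.CASCADE)
            # as_manager() builds a Manager that exposes for_user().
            objects = HourRecordQuerySet.as_manager()

        # HourRecord.objects.for_user(request.user) works as before, and
        # for_user() can now also be chained after other queryset methods.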

  • How to clean up an Entity Framework object context?

    - by Daniel Brückner
    I am adding several entities to an object context:

        try
        {
            foreach (var document in documents)
            {
                this.Validate(document); // May throw a ValidationException.
                this.objectContext.AddToDocuments(document);
            }
            this.objectContext.SaveChanges();
        }
        catch
        {
            // How to clean up the object context here?
            throw;
        }

    If some of the documents pass the validation and one fails, all documents that passed the validation remain added to the object context. I have to clean up the object context because it may be reused, and the following can happen:

        var documentA = new Document { Id = 1, Data = "ValidData" };
        var documentB = new Document { Id = 2, Data = "InvalidData" };
        var documentC = new Document { Id = 3, Data = "ValidData" };

        try
        {
            // Adding document B will cause a ValidationException, but only
            // after document A is added to the object context.
            this.DocumentStore.AddDocuments(new[] { documentA, documentB, documentC });
        }
        catch (ValidationException)
        {
        }

        // Try again without the invalid document B.
        this.DocumentStore.AddDocuments(new[] { documentA, documentC });

    This will again add document A to the object context, and in consequence SaveChanges() will throw an exception because of a duplicate primary key. So I have to remove all already-added documents in the case of a validation error. I could of course perform the validation first and only add the documents after they have all been successfully validated. But sadly this does not solve the whole problem - if SaveChanges() fails, all documents still remain added but unsaved. I tried to detach all objects returned by this.objectContext.ObjectStateManager.GetObjectStateEntries(EntityState.Added), but I am getting an exception stating that the object is not attached. So how do I get rid of all added but unsaved objects?

  • How to upgrade a project built in Visual Studio 2005 to Visual Studio 2008?

    - by Shailesh Jaiswal
    I have an OPC (OLE for Process Control) server project which was developed in Visual Studio 2005, and I want to run it in Visual Studio 2008. The coding for the OPC server project is done in VC++. I want to connect my OPC client to this OPC server. When I opened the OPC server project (built in Visual Studio 2005) in Visual Studio 2008 for the first time, it asked for the conversion wizard; I went through that wizard and finished it successfully. But when I build (by right-clicking on the project and choosing Build Solution), it gives lots of errors - around 64 of them. Most of the errors are like:

        fatal error C1083: Cannot open type library file: 'msxml4.dll': No such file or directory
        fatal error LNK1181: cannot open input file 'rpcndr.lib'
        error C2051: case expression not constant

    Only these 3 kinds of errors appear; they are repeated in the Error List, adding up to a bunch of 64 errors. Please provide me with a solution for this issue. Can you give me any suggestion or link, or any way through which I can resolve it?

  • x86 opcode alignment references and guidelines

    - by mrjoltcola
    I'm generating some opcodes dynamically in a JIT compiler and I'm looking for guidelines on opcode alignment.

    1) I've read comments that briefly "recommend" alignment by adding nops after calls.
    2) I've also read about using nop for optimizing sequences for parallelism.
    3) I've read that alignment of ops is good for "cache" performance.

    Usually these comments don't give any supporting references. It's one thing to read a blog or a comment that says "it's a good idea to do such and such", but it's another to actually write a compiler that implements specific op sequences and realize most material online, especially blogs, is not useful for practical application. So I'm a believer in finding things out myself (disassembly, etc., to see what real-world apps do). This is one case where I need some outside info. I notice compilers will usually start an odd-byte instruction immediately after whatever previous instruction sequence there was, so the compiler is not taking any special care in most cases. I see "nop" here or there, but usually it seems nop is used sparingly, if at all. How critical is opcode alignment? Can you provide references for cases that I can actually use for implementation? Thanks.

  • Scheme homework: blackjack help

    - by octavio
    So I need to write a blackjack simulator, but I can't seem to figure out what's wrong with the shuffle. It's supposed to take a card randomly from the pack and put it on top of the pack, then delete it from the rest. So given:

        (ace)(2)(3)(4)(5)...(k)

    if the random card is, let's say, 5:

        (5)(ace)(2)(3)(4)(5)...(k)

    then it deletes the second 5:

        (5)(ace)(2)(3)(4)(6)...(k)

    Here is the code:

        (define deck
          '((A . C) (2 . C) (3 . C) (4 . C) (5 . C) (6 . C) (7 . C)
            (8 . C) (9 . C) (10 . C) (V . C) (Q . C) (K . C)))

        ; auxiliary function for shuffle: lets you randomly select a card
        (define shuffAux
          (lambda (t)
            (define cardR
              (lambda (t)
                (list-ref t (random 13))))
            (cardR t)))

        ; auxiliary function used to remove the card after the car, to prevent
        ; removing the randomly selected card from the car (beginning of the deck)
        (define (removeDupC card deck)
          (delete card (cdr deck)))

        (define shuffle2ndtry
          (lambda (deck seed)
            (define do-shuffle
              (lambda (deck seed)
                (if (> seed 0)
                    ((cons (shuffAux deck) deck)
                     (removeDupC (car deck) deck)
                     (- 1 seed))
                    (write deck))))
            (do-shuffle deck seed)))

        (define (shuffle deck seed)
          (define cards (cons (shuffAux deck) deck))
          (write cards)
          (case (> seed 0)
            [(#t) (removeDupC (car cards) (cdr cards))
                  (shuffle cards (- seed 1))]
            [(#f) (write cards)]))

        (define random
          (let ((seed 0) (a 3141592653) (c 2718281829) (m (expt 2 35)))
            (lambda (limit)
              (cond ((and (integer? limit))
                     (set! seed (modulo (+ (* seed a) c) m))
                     (quotient (* seed limit) m))
                    (else (/ (* limit (random 34359738368)) 34359738368))))))

        ; function with which you can delete an element from a list
        (define delete
          (lambda (item list)
            (cond ((equal? item (car list)) (cdr list))
                  (else (cons (car list) (delete item (cdr list)))))))
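
    For reference, here is the shuffle step being described, sketched in Python rather than Scheme (the function name is made up): pick a random card, delete its old occurrence, and put it on top.

        import random

        def move_random_card_to_top(deck):
            card = random.choice(deck)   # pick a random card...
            deck.remove(card)            # ...delete it from where it was...
            deck.insert(0, card)         # ...and put it on top of the pack
            return deck

    In the Scheme version, that means the list returned by removeDupC has to be consed onto and passed along to the next round; the posted code computes these lists but discards their results.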

  • Beginner question - Loop invariants (Specifically Ch.3 of "Accelerated C++")

    - by Owen
    Hi - as I said, a complete beginner question here. I'm currently working my way through "Accelerated C++" and just came across this in chapter 3:

        // invariant:
        //   we have read count grades so far, and
        //   sum is the sum of the first count grades
        while (cin >> x)
        {
            ++count;
            sum += x;
        }

    The authors follow this by explaining that the invariant needs special attention paid to it, because when the input is read into the variable x, we will have read count+1 grades and thus the invariant will be untrue. Similarly, when we have incremented the counter, the variable sum will no longer be the sum of the first count grades (in case you hadn't guessed, it's the traditional program for calculating student marks). What I don't understand is why this matters. Surely for just about any other loop, a similar statement would be true? For example, here is the book's first while loop (the output is filled in later):

        // invariant: we have written r rows so far
        while (r != rows)
        {
            // write a row of output
            std::cout << std::endl;
            ++r;
        }

    Once we have written the appropriate row of output, surely the invariant is false until we have incremented r, just as in the other example? It's probably something really obvious, but if anyone could enlighten me as to what makes these two cases different, that'd be great - and thanks in advance for taking the time to answer such a complete novice question. Owen
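
    To make the general point concrete, here is the grades loop as a small Python sketch (not from the book), with comments marking exactly where the invariant holds and where it is temporarily broken:

        def mean_grade(grades):
            count = 0
            total = 0.0
            # Invariant: we have read `count` grades and `total` is their sum.
            # True here, before the first iteration: 0 grades read, sum 0.
            for x in grades:
                # A new grade x has been read: `count` is momentarily one behind.
                count += 1   # now `total` is one grade behind instead
                total += x   # invariant restored for the top of the next pass
            # Invariant plus loop exit: `total` is the sum of all grades read.
            return total / count if count else 0.0

    The invariant is only claimed at the top of each iteration (and after the loop finishes); it is allowed to be false partway through the body, in both of the book's examples.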

  • Named pipe blocking with user nobody

    - by dnagirl
    I have 2 short scripts. The first, an awk script, processes a large file and prints to a named pipe 'myfifo.dat'. The second, a Perl script, runs a LOAD DATA LOCAL INFILE 'myfifo.dat'... command. Both of these scripts work when run locally like so:

        lee.awk big.file & lee.pl

    However, when I call these scripts from a PHP webpage, the named pipe blocks:

        $awk = "/path/to/lee.awk {$_FILES['uploadfile']['tmp_name']} &";
        $sql = "/path/to/lee.pl";
        if (!exec($awk, $return, $err))
            throw new ZException(print_r($err, true)); // blocks here
        if (!exec($sql, $return, $err))
            throw new ZException(print_r($err, true));

    If I modify the awk and Perl scripts so that they write and read to a normal file, everything works fine from PHP. The permissions on the FIFO and on the normal file are 666 (for testing purposes). These operations run much more quickly through a named pipe, so I'd prefer to use one. Any ideas how to unblock it? P.S. In case you're wondering why I'm going to all this aggravation, see this SO question.
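
    For background, here is a tiny Python illustration (hypothetical path, not from the question, and not necessarily the cause here) of the FIFO property that usually produces this kind of hang: opening a named pipe for writing blocks until some process has it open for reading, and vice versa.

        import os

        path = "/tmp/myfifo.dat"
        if not os.path.exists(path):
            os.mkfifo(path, 0o666)

        # This open() does not return until a reader opens the FIFO, so a
        # writer launched while no reader is attached will sit here forever:
        fifo = open(path, "w")
        fifo.write("hello\n")
        fifo.close()

    So if the process that was supposed to read the pipe never starts, or only starts after the writer is awaited, the writer - and whatever launched it - appears to block.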

  • How can I add a border to a ListViewItem (ListView in GridView mode)?

    - by Andrew
    Hello! I want to have a border around each ListViewItem (each row, in my case). The ListView's source and columns are generated at runtime. In XAML I have this structure:

        <ListView Name="listViewRaw">
            <ListView.View>
                <GridView>
                </GridView>
            </ListView.View>
        </ListView>

    At runtime I bind the ListView to a DataTable, adding the necessary columns and bindings:

        var view = (listView.View as GridView);
        view.Columns.Clear();
        for (int i = 0; i < table.Columns.Count; i++)
        {
            GridViewColumn col = new GridViewColumn();
            col.Header = table.Columns[i].ColumnName;
            col.DisplayMemberBinding = new Binding(string.Format("[{0}]", i.ToString()));
            view.Columns.Add(col);
        }
        listView.CoerceValue(ListView.ItemsSourceProperty);
        listView.DataContext = table;
        listView.SetBinding(ListView.ItemsSourceProperty, new Binding());

    So I want to add a border around each row, and set the border's behavior (color etc.) with DataTriggers (for example, if the value in the first column is "Visible", set the border color to black). Can I apply the border through a DataTemplate in ItemTemplate? I know the solution where you manipulate CellTemplates, but I don't really like it. I want something like this, if it is even possible:

        <DataTemplate>
            <Border Name="Border" BorderBrush="Transparent" BorderThickness="2">
                <ListViewItemRow>
                    <!-- Put my row here, but I'll know about the table
                         structure only at runtime -->
                </ListViewItemRow>
            </Border>
        </DataTemplate>

  • Fastest image iteration in Python

    - by Greg
    I am creating a simple green screen app with Python 2.7.4 but am getting quite slow results. I am currently using PIL 1.1.7 to load and iterate the images, and saw huge speed-ups changing from the old getpixel() to the newer load() and pixel-access object indexing. However, the following loop still takes around 2.5 seconds to run for an image of around 720p resolution:

        def colorclose(Cb_p, Cr_p, Cb_key, Cr_key, tola, tolb):
            temp = math.sqrt((Cb_key - Cb_p)**2 + (Cr_key - Cr_p)**2)
            if temp < tola:
                return 0.0
            elif temp < tolb:
                return (temp - tola) / (tolb - tola)
            else:
                return 1.0

        ....

        for x in range(width):
            for y in range(height):
                Y, cb, cr = fg_cbcr_list[x, y]
                mask = colorclose(cb, cr, cb_key, cr_key, tola, tolb)
                mask = 1 - mask
                bgr, bgg, bgb = bg_list[x, y]
                fgr, fgg, fgb = fg_list[x, y]
                pixels[x, y] = (
                    int(fgr - mask*key_color[0] + mask*bgr),
                    int(fgg - mask*key_color[1] + mask*bgg),
                    int(fgb - mask*key_color[2] + mask*bgb))

    Am I doing anything hugely inefficient here which makes it run so slow? I have seen similar, simpler examples where the loop is replaced by a boolean matrix, for instance, but for this case I can't see a way to replace the loop. The pixels[x,y] assignment seems to take the most time, but not knowing Python very well I am unsure of a more efficient way to do this. Any help would be appreciated.
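
    For what it's worth, here is a hedged sketch of the usual remedy: move the per-pixel arithmetic into NumPy (assumed available; the array names are made up) so the double loop and the million-odd colorclose() calls become a handful of whole-array operations.

        import numpy as np

        def chroma_mask(cb, cr, cb_key, cr_key, tola, tolb):
            # cb, cr: float arrays of shape (height, width).
            dist = np.sqrt((cb_key - cb)**2 + (cr_key - cr)**2)
            # Same piecewise ramp as colorclose(): 0 inside tola,
            # linear between tola and tolb, 1 beyond tolb.
            return np.clip((dist - tola) / (tolb - tola), 0.0, 1.0)

        # fg, bg: float arrays of shape (height, width, 3), e.g. from
        # np.asarray(pil_image, dtype=float); key_color: length-3 array.
        # mask = (1.0 - chroma_mask(cb, cr, cb_key, cr_key, tola, tolb))[..., None]
        # out = np.clip(fg - mask*key_color + mask*bg, 0, 255).astype(np.uint8)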

  • WPF Background Thread Invocation

    - by jeffn825
    Maybe I'm misremembering how WinForms works, or I'm overcomplicating the hell out of this, but here's my problem. I have a WPF client application that talks to a server over WCF. The current user may "log out" of the WPF client, which closes all open screens, leaves only the navigation pane, and minimizes the program window. When the user re-maximizes the program window, they are prompted to log in. Simple. But sometimes things happen on background threads - like every 5 minutes, the client makes a WCF call that refreshes some cached data. And what if the user is logged out when this 5-minute timer triggers? Well, then the user should be prompted to log back in... and this must of course happen on the UI thread.

        private static ISecurityContext securityContext;

        public static ISecurityContext SecurityContext
        {
            get
            {
                if (securityContext == null)
                {
                    // Login method shows a window and prompts the user to log in.
                    Application.Current.Dispatcher.Invoke((Action)Login);
                }
                return securityContext;
            }
        }

    So far so good, right? But what happens when multiple threads hit this spot of code? Well, my first intuition was that since I'm synchronizing across Application.Current.Dispatcher, I should be fine, and whichever thread hits first would be responsible for showing the login form and getting the user logged in... Not the case... Thread 1 will hit the code and call ShowDialog on the login form. Thread 2 will also hit the code and will call Login as soon as Thread 1 has called ShowDialog, since calling ShowDialog unblocked Thread 1 (I believe because of the way the WPF message pump works). All I want is a synchronized way of getting the user logged back into the application... what am I missing here? Thanks in advance.

  • T-SQL Right Joins to ALL Entries inc Selected Column

    - by Pace
    Hi experts, I have the following query, which produces the output below:

        SELECT TBLUSERS.USERID, TBLUSERS.ADusername,
               TBLACCESSLEVELS.ACCESSLEVELID, TBLACCESSLEVELS.AccessLevelName
        FROM TBLACCESSLEVELS
        INNER JOIN TBLACCESSRIGHTS
            ON TBLACCESSLEVELS.ACCESSLEVELID = TBLACCESSRIGHTS.ACCESSLEVELID
        INNER JOIN TBLUSERS
            ON TBLACCESSRIGHTS.USERID = TBLUSERS.USERID

    The output is this:

        29 administrator 1 AllUsers
        29 administrator 2 JobQueue
        29 administrator 3 Telephone Directory Admin
        29 administrator 4 Jobqueueadmin
        29 administrator 5 UserAdmin
        29 administrator 6 Product System
        27 alan          1 AllUsers
        97 andy          1 AllUsers
        26 barry         1 AllUsers
        26 barry         2 JobQueue
        26 barry         3 Telephone Directory Admin
        26 barry         4 Jobqueueadmin
        26 barry         5 UserAdmin
        26 barry         6 Product System
        26 barry         7 Newseditor
        26 barry         8 GreetingBoard

    What I would like to do is modify the query so I get all access levels regardless of whether there is an entry for that user. What I would also like is some sort of EXISTS case, so that I get output like the following:

        29 administrator 1 AllUsers                  True
        29 administrator 2 JobQueue                  True
        29 administrator 3 Telephone Directory Admin True
        29 administrator 4 Jobqueueadmin             True
        29 administrator 5 UserAdmin                 True
        29 administrator 6 Product System            True
        29 administrator 7 Newseditor                False
        29 administrator 8 GreetingBoard             False
        27 alan          1 AllUsers                  True
        27 alan          2 JobQueue                  False
        27 alan          3 Telephone Directory Admin False
        27 alan          4 Jobqueueadmin             False
        27 alan          5 UserAdmin                 False
        27 alan          6 Product System            False
        27 alan          7 Newseditor                False
        27 alan          8 GreetingBoard             False
        97 andy          1 AllUsers                  True
        97 andy          2 JobQueue                  False
        97 andy          3 Telephone Directory Admin False
        97 andy          4 Jobqueueadmin             False
        97 andy          5 UserAdmin                 False
        97 andy          6 Product System            False
        97 andy          7 Newseditor                False
        97 andy          8 GreetingBoard             False
        26 Barry         1 AllUsers                  True
        26 Barry         2 JobQueue                  True
        26 Barry         3 Telephone Directory Admin True
        26 Barry         4 Jobqueueadmin             True
        26 Barry         5 UserAdmin                 True
        26 Barry         6 Product System            True
        26 Barry         7 Newseditor                True
        26 Barry         8 GreetingBoard             True
        .........................................

    So the rules are: ALWAYS show ALL entries from ACCESSLEVELS, and where an entry EXISTS in ACCESSRIGHTS, produce a True/False to show this. I hope this makes sense, and hopefully you don't need the table definitions, as everything I need to work with is in the original query. I just need a way of manipulating it slightly and getting the join in the right place. Thank you. Pace
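
    The shape being asked for - every user paired with every access level, plus a flag for whether the pair exists in the rights table - is easy to see in a language-neutral sketch (Python here, with made-up collections; not from the question):

        # users:  [(26, 'barry'), ...]; levels: [(1, 'AllUsers'), ...]
        # rights: {(26, 1), (26, 2), ...} - the (user_id, level_id) pairs
        rows = [(uid, uname, lid, lname, (uid, lid) in rights)
                for (uid, uname) in users       # cross join users...
                for (lid, lname) in levels]     # ...with all access levels

    In SQL terms, that is a cross join of users and access levels, left-joined back to the rights table, with a CASE over whether the left join found a match.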

  • Rails: keeping DRY with ActiveRecord models that share similar complex attributes

    - by Greg
    This seems like it should have a straightforward answer, but after much time on Google and SO I can't find it. It might be a case of missing the right keywords. In my RoR application I have several models that share a specific kind of string attribute with special validation and other functionality. The closest similar example I can think of is a string that represents a URL. This leads to a lot of duplication in the models (and even more duplication in the unit tests), but I'm not sure how to make it more DRY. I can think of several possible directions:

    1. create a plugin along the lines of the "validates_url_format_of" plugin, but that would only make the validations DRY
    2. give this special string its own model, but this seems like a very heavy solution
    3. create a Ruby class for this special string, but how do I get ActiveRecord to associate this class with the model attribute that is a string in the db?

    Number 3 seems the most reasonable, but I can't figure out how to extend ActiveRecord to handle anything other than the base data types. Any pointers? Finally, if there is a way to do this, where in the folder hierarchy would you put the new class that is not a model? Many thanks.

  • How to separate ear classloader and system classloader in JBoss 6?

    - by dskiles
    I'm trying to upgrade from JBoss 4.2.1 to JBoss 6. In JBoss 4.2.1, we are manually deploying our application as an exploded war and everything works beautifully. I'm running into problems because the application that I am trying to deploy uses versions of 3rd-party libraries that are older than the ones JBoss 6 now includes by default. The result of this is that I'm getting classloader conflicts all over the place and the application won't even start. At present, I have tried using the JBoss Classloading documentation as well as the scanty bits of documentation for jboss-classloading.xml, and haven't had any success. Has anyone out there managed to do this successfully? If you have, how did you do it? I've included a stack trace below in case it offers any useful information.

        Caused by: java.lang.Error: Error visiting "/C:/jboss6/server/default/deploy/app.war/WEB-INF/lib/jaxb-xjc-2.1.12.jar/1.0/com/sun/codemodel/JConditional.class"
            at org.jboss.classloading.plugins.vfs.VFSResourceVisitor.visit(VFSResourceVisitor.java:268) [jboss-classloading-vfs.jar:2.2.0.Alpha9]
            at org.jboss.vfs.VirtualFile.visit(VirtualFile.java:407) [jboss-vfs.jar:3.0.0.CR5]
            at org.jboss.vfs.VirtualFile.visit(VirtualFile.java:409) [jboss-vfs.jar:3.0.0.CR5]
            at org.jboss.vfs.VirtualFile.visit(VirtualFile.java:409) [jboss-vfs.jar:3.0.0.CR5]
            at org.jboss.vfs.VirtualFile.visit(VirtualFile.java:409) [jboss-vfs.jar:3.0.0.CR5]
            at org.jboss.vfs.VirtualFile.visit(VirtualFile.java:409) [jboss-vfs.jar:3.0.0.CR5]
            at org.jboss.vfs.VirtualFile.visit(VirtualFile.java:395) [jboss-vfs.jar:3.0.0.CR5]
            at org.jboss.classloading.plugins.vfs.VFSResourceVisitor.visit(VFSResourceVisitor.java:102) [jboss-classloading-vfs.jar:2.2.0.Alpha9]
            at org.jboss.deployers.vfs.plugins.classloader.VFSDeploymentClassLoaderPolicyModule.visit(VFSDeploymentClassLoaderPolicyModule.java:181) [:2.2.0.Alpha8]
            at org.jboss.scanning.plugins.DeploymentUnitScanner.scan(DeploymentUnitScanner.java:111) [:1.0.0.Alpha7]
            at org.jboss.scanning.spi.helpers.UrlScanner.scan(UrlScanner.java:96) [:1.0.0.Alpha7]
            at org.jboss.scanning.deployers.ScanningDeployer.deploy(ScanningDeployer.java:90) [:1.0.0.Alpha7]
            at org.jboss.deployers.plugins.deployers.DeployerWrapper.deploy(DeployerWrapper.java:179) [:2.2.0.Alpha8]
            ... 41 more

  • Help with Donald B. Johnson's algorithm - I cannot understand the pseudocode

    - by Pitelk
    Hi, does anyone know Donald B. Johnson's algorithm, which enumerates all the elementary circuits (cycles) in a directed graph? I have the paper he published in 1975, but I cannot understand the pseudocode. My goal is to implement this algorithm in Java. One question I have is, for example, what the matrix Ak it refers to is. The pseudocode mentions that

        Ak := adjacency structure of strong component K with least
              vertex in subgraph of G induced by {s, s+1, ..., n};

    Does that mean I have to implement another algorithm that finds the Ak matrix? Another question is what the following means:

        begin logical f;

    Also, does the line "logical procedure CIRCUIT (integer value v);" mean that the CIRCUIT procedure returns a logical variable? The pseudocode also has the line "CIRCUIT := f;" - what does this mean? It would be great if someone could translate this 1970s pseudocode into a more modern style of pseudocode so I can understand it. In case you are interested in helping but cannot find the paper, please email me at [email protected] and I will send you the paper. Thanks in advance
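
    Not an answer from the paper itself, but here is a hedged Python transcription of the core of Johnson's procedure, with the Ak machinery replaced by a plain vertex filter. (In the paper, Ak is the adjacency structure of the strongly connected component containing the least vertex of the subgraph induced by {s, ..., n}, so a full implementation does run an SCC algorithm such as Tarjan's for each s; skipping it only affects the running-time bound, not which cycles are found. "logical" is Algol for boolean: f is a boolean flag, and CIRCUIT := f sets the procedure's boolean return value.)

        def elementary_cycles(adj):
            """adj: dict of vertex -> list of successors; every vertex must
            appear as a key, and vertices must be comparable (e.g. ints)."""
            cycles = []
            vertices = sorted(adj)
            for s in vertices:                      # s = least vertex of each cycle
                blocked = {v: False for v in vertices}
                B = {v: set() for v in vertices}    # the B-lists from the paper
                stack = []

                def unblock(u):
                    blocked[u] = False
                    while B[u]:
                        w = B[u].pop()
                        if blocked[w]:
                            unblock(w)

                def circuit(v):                     # returns the paper's logical f
                    found = False
                    stack.append(v)
                    blocked[v] = True
                    for w in adj[v]:
                        if w < s:                   # stay in the subgraph {s..n}
                            continue
                        if w == s:                  # closed a cycle back to s
                            cycles.append(stack + [s])
                            found = True
                        elif not blocked[w] and circuit(w):
                            found = True
                    if found:
                        unblock(v)
                    else:
                        for w in adj[v]:
                            if w >= s:
                                B[w].add(v)
                    stack.pop()
                    return found

                circuit(s)
            return cycles

        # Example: a 3-cycle plus a 2-cycle.
        print(elementary_cycles({1: [2], 2: [3, 1], 3: [1]}))
        # [[1, 2, 3, 1], [1, 2, 1]]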

  • Large number of simultaneous long-running operations in Qt

    - by Hostile Fork
    I have some long-running operations that number in the hundreds. At the moment they are each on their own thread. My main goal in using threads is not to speed these operations up. The more important thing in this case is that they appear to run simultaneously. I'm aware of cooperative multitasking and fibers. However, I'm trying to avoid anything that would require touching the code in the operations, e.g. peppering them with things like yieldToScheduler(). I also don't want to prescribe that these routines be stylized to emit queues of bite-sized task items... I want to treat them as black boxes. For the moment I can live with these downsides:

      - Maximum # of threads tends to be O(1000)
      - Cost per thread is O(1MB)

    To address the bad cache performance due to context switches, I did have the idea of a timer which would juggle the priorities such that only idealThreadCount() threads were ever at Normal priority, with all the rest set to Idle. This would let me widen the timeslices, which would mean fewer context switches and still be okay for my purposes.

    Question #1: Is that a good idea at all? One certain downside is that it won't work on Linux (the docs say there is no QThread::setPriority() there).

    Question #2: Any other ideas or approaches? Is QtConcurrent designed with this scenario in mind?

    (Some related reading: how-many-threads-does-it-take-to-make-them-a-bad-choice, many-threads-or-as-few-threads-as-possible, maximum-number-of-threads-per-process-in-linux)

  • UniqueConstraint in EmbeddedConfiguration

    - by LantisGaius
    I just started using db4o on C#, and I'm having trouble setting a unique constraint on the DB. Here's the db4o configuration:

        static IObjectContainer db = Db4oEmbedded.OpenFile(dbase.Configuration(), "data.db4o");

        static IEmbeddedConfiguration Configuration()
        {
            IEmbeddedConfiguration dbConfig = Db4oEmbedded.NewConfiguration();

            // Initialize Replication
            dbConfig.File.GenerateUUIDs = ConfigScope.Globally;
            dbConfig.File.GenerateVersionNumbers = ConfigScope.Globally;

            // Initialize Indexes
            dbConfig.Common.ObjectClass(typeof(DAObs.Environment)).ObjectField("Key").Indexed(true);
            dbConfig.Common.Add(new Db4objects.Db4o.Constraints.UniqueFieldValueConstraint(
                typeof(DAObs.Environment), "Key"));

            return dbConfig;
        }

    and the object to serialize:

        class Environment
        {
            public string Key { get; set; }
            public string Value { get; set; }
        }

    Every time I get to committing some values, an "Object reference not set to an instance of an object." exception pops up, with a stack trace pointing to the UniqueFieldValueConstraint. Also, when I comment out the two lines after the "Initialize Indexes" comment, everything runs fine (except that you can save non-unique keys, which is a problem). Commit code (in case I'm doing something wrong in this part too):

        public static void Create(string key, string value)
        {
            try
            {
                db.Store(new DAObs.Environment() { Key = key, Value = value });
                db.Commit();
            }
            catch (Db4objects.Db4o.Events.EventException ex)
            {
                System.Console.WriteLine(DateTime.Now + " :: Environment.Create\n" +
                    ex.InnerException.Message + "\n" + ex.InnerException.StackTrace);
                db.Rollback();
            }
        }

    Help please? Thanks in advance~

  • C++ Exceptions and Inheritance from std::exception

    - by fbrereto
    Given this sample code:

        #include <iostream>
        #include <stdexcept>

        class my_exception_t : std::exception
        {
        public:
            explicit my_exception_t() { }

            virtual const char* what() const throw()
            { return "Hello, world!"; }
        };

        int main()
        {
            try { throw my_exception_t(); }
            catch (const std::exception& error)
            { std::cerr << "Exception: " << error.what() << std::endl; }
            catch (...)
            { std::cerr << "Exception: unknown" << std::endl; }

            return 0;
        }

    I get the following output:

        Exception: unknown

    Yet simply making the inheritance of my_exception_t from std::exception public, I get the following output:

        Exception: Hello, world!

    Could someone please explain to me why the type of inheritance matters in this case? Bonus points for a reference in the standard.

  • overriding enumeration base type using pragma or code change

    - by vprajan
    Problem: I am using a big C/C++ code base which works with the gcc and Visual Studio compilers, where the enum base type is by default 32 bits (integer type). This code also has lots of inline and embedded assembly which treats enums as integer type, and enum data is used as 32-bit flags in many cases. When we compiled this code with the RealView ARM RVCT 2.2 compiler, we started getting many issues, since the RealView compiler decides the enum base type automatically based on the values the enum is set to (http://www.keil.com/support/man/docs/armccref/armccref_Babjddhe.htm). For example, consider the enum below, which is used as a 32-bit flag; the compiler optimizes the base type of this enum down to unsigned char:

        enum Scale {
            TimesOne,   // 0
            TimesTwo,   // 1
            TimesFour,  // 2
            TimesEight, // 3
        };

    Using the --enum_is_int compiler option is not a good solution in our case, since it converts all enums to 32-bit, which will break interaction with any external code compiled without --enum_is_int. This is the warning I found in the RVCT Compilers and Libraries Guide:

        The --enum_is_int option is not recommended for general use and is not
        required for ISO-compatible source. Code compiled with this option is
        not compliant with the ABI for the ARM Architecture (base standard)
        [BSABI], and incorrect use might result in a failure at runtime. This
        option is not supported by the C++ libraries.

    Question: How can I convert the base type of all enums (by hand-coded changes) to 32-bit without affecting value ordering? I tried the change below, but to our bad luck the compiler optimizes this too. :(

        enum Scale {
            TimesOne = 0x00000000,
            TimesTwo,   // 0x00000001
            TimesFour,  // 0x00000002
            TimesEight, // 0x00000003
        };

    There is syntax in .NET like

        enum Scale : int

    Is this in the ISO C++ standard, and does the ARM compiler lack it? There is no #pragma to control enum size in the ARM RVCT 2.2 compiler. Is there any hidden pragma available?

  • Programmatically implementing an interface that combines some instances of the same interface in various ways

    - by namin
    What is the best way to implement an interface that combines some instances of the same interface in various specified ways? I need to do this for multiple interfaces, and I want to minimize the boilerplate and still achieve good efficiency, because I need this for a critical production system. Here is a sketch of the problem. Abstractly, I have a generic combiner class which takes the instances and specifies the various combinators:

        class Combiner<I> {
            I[] instances;

            <T> T combineSomeWay(InstanceMethod<I, T> method) {
                // ... method.call(instances[i]) ... combined in some way ...
            }

            // more combinators
        }

    Now, let's say I want to implement the following interface, among many others:

        interface Foo {
            String bar(int baz);
        }

    I want to end up with code like this:

        class FooCombiner implements Foo {
            Combiner<Foo> combiner;

            @Override
            public String bar(final int baz) {
                return combiner.combineSomeWay(new InstanceMethod<Foo, String>() {
                    @Override
                    public String call(Foo instance) {
                        return instance.bar(baz);
                    }
                });
            }
        }

    Now, this can quickly get long-winded if the interfaces have lots of methods. I know I could use a dynamic proxy from the Java reflection API to implement such interfaces, but method access via reflection is a hundred times slower. So what are the alternatives to boilerplate and reflection in this case?

  • Indy Write Buffering / Efficient TCP communication

    - by Smasher
    I know, I'm asking a lot of questions... but as a new Delphi developer I keep stumbling over all these questions. :) This one deals with TCP communication using Indy 10. To make communication efficient, I encode a client operation request as a single byte (in most scenarios followed by other data bytes, of course, but in this case only one single byte). The problem is that

        var
          Bytes : TBytes;
        ...
        SetLength(Bytes, 1);
        Bytes[0] := OpCode;
        FConnection.IOHandler.Write(Bytes, 1);
        ErrorCode := Connection.IOHandler.ReadByte;

    does not send that byte immediately (at least the server's OnExecute handler is not invoked). If I change the '1' to a '9', for example, everything works fine. I assumed that Indy buffers the outgoing bytes, and tried to disable write buffering with

        FConnection.IOHandler.WriteBufferClose;

    but it did not help. How can I send a single byte and make sure that it is immediately sent? And - I'll add another little question here - what is the best way to send an integer using Indy? Unfortunately I can't find a function like WriteInteger in the IOHandler of TIdTCPServer... and WriteLn(IntToStr(SomeIntVal)) seems not very efficient to me. Does it make a difference whether I use multiple write commands in a row or pack things together in a byte array and send that once? Thanks for any answers!

    EDIT: I added a hint that I'm using Indy 10, since there seem to be major changes concerning the read and write procedures.

  • How to not persist NSManagedObjects retrieved from NSManagedObjectContext

    - by RickiG
    Hi, I parse an XML file containing books; for each new node I go:

        Book *book = (Book*)[NSEntityDescription insertNewObjectForEntityForName:@"Book"
                                                 inManagedObjectContext:managedObjectContext];

    to obtain an NSManagedObject of my Core Data Book entity. I then proceed to populate the managed Book object with data, add it to an array, rinse, repeat. When I am done, I present the list of books to the user. I have not yet executed the save:

        NSError *error;
        if (![managedObjectContext save:&error]) {
            NSLog(@"%@", [error domain]);
        }

    The user now selects one of the books. This is the one I would like to persist - but only this one; all the other books are of no interest to me any more. The Book entity does not have, and is not part of, any relationships. It is just a "single" entity. If I pull the "save lever", every Book object will be persisted, and I will have to delete everything but my desired one. How would I get around this challenge? I can't really seem to find that particular use-case in the Core Data Programming Guide, which sort of also bugs me a bit - am I going against best practice here? Thanks for any help given.

  • Handling mach exceptions in 64bit OS X application

    - by Brad S
    I have been able to register my own Mach port to capture Mach exceptions in my applications, and it works beautifully when I target 32-bit. However, when I target 64-bit, my exception handler catch_exception_raise() gets called, but the array of exception codes that is passed to the handler is 32 bits wide. This is expected in a 32-bit build, but not in 64-bit. In the case where I catch EXC_BAD_ACCESS, the first code is the error number and the second code should be the address of the fault. Since the second code is 32 bits wide, the high 32 bits of the 64-bit fault address are truncated. I found a flag in <mach/exception_types.h> that I can pass to task_set_exception_ports(), called MACH_EXCEPTION_CODES, which from looking at the Darwin sources appears to control the size of the codes passed to the handler. It looks like it is meant to be ORed with the behavior passed in to task_set_exception_ports(). However, when I do this and trigger an exception, my Mach port gets notified and I call exc_server(), but my handler never gets called, and when the reply message is sent back to the kernel I get the default exception behavior. I am targeting the 10.6 SDK. I really wish Apple would document this stuff better. Anyone have any ideas?
