Search Results

Search found 39405 results on 1577 pages for 'zeta two'.

Page 216 of 1577

  • What would you choose for your project between .NET and Java at this point in time?

    - by Basic
    You are just starting a new project and you have two technologies to choose from, Java and .NET. The project you are working on doesn't involve features that would make the choice obvious (e.g. ".NET has this thing I need and Java does not"), and both should work just fine for you (though you only need one, of course). Take into account: performance; available tools (including 3rd-party tools); cross-platform compatibility; libraries (especially 3rd-party libraries); cost (Oracle seems to be trying to monetize Java); and the development process (easiest/fastest). Also keep in mind that Linux is not your main platform, but you would like to port your project to Linux/macOS as well. You should definitely keep in mind the trouble that has been revolving around Oracle and the Java community, as well as the limitations of Mono and of Java itself. It would be much appreciated if people with experience in both could give an overview and their own subjective view of which they would choose and why.

    Read the article

  • Application stuck in TCP retransmit

    - by SandeepJ
    I am running Linux kernel 3.13 (Ubuntu 14.04) on two virtual machines, each of which runs inside a different server running ESXi 5.1. There is a ZeroMQ client-server application running between the two VMs. After running for about 10-30 minutes, this application consistently hangs due to an inability to retransmit a lost packet. When I run the same setup on Ubuntu 12.04 (Linux 3.11), the application never fails.
    If you notice below, "ss" (socket statistics) shows 1 packet lost, sk_wmem_queued of 14110 (i.e. w14110) and a high rto (120000):
      State Recv-Q Send-Q Local Address:Port Peer Address:Port
      ESTAB 0 12350 192.168.2.122:41808 192.168.2.172:55550
      timer:(on,16sec,10) uid:1000 ino:35042 sk:ffff880035bcb100 <-
      skmem:(r0,rb648720,t0,tb1164800,f2274,w14110,o0,bl0) ts sack cubic wscale:7,7 rto:120000 rtt:7.5/3 ato:40 mss:8948 cwnd:1 ssthresh:21 send 9.5Mbps unacked:1 retrans:1/10 lost:1 rcv_rtt:1476 rcv_space:37621
    Since this has happened so consistently, I was able to capture the TCP log in Wireshark. I found that the packet which is lost does get retransmitted and is even acknowledged by the TCP stack in the other OS (the sequence number is seen in the ACK), but the sender doesn't seem to understand this ACK and continues retransmitting. MTU is 9000 on both virtual machines and throughout the route, and the packets being sent are large. As I said earlier, this does not happen on Ubuntu 12.04 (kernel 3.11). So I did a diff of the TCP config options (seen via "sysctl -a | grep tcp") between 14.04 and 12.04 and found the following differences. I also noticed that net.ipv4.tcp_mtu_probing=0 in both configurations. Left side is 3.11, right side is 3.13:
      << net.ipv4.tcp_abc = 0
      << net.ipv4.tcp_cookie_size = 0
      << net.ipv4.tcp_dma_copybreak = 4096
      14c11
      << net.ipv4.tcp_early_retrans = 2
      ---
      >> net.ipv4.tcp_early_retrans = 3
      17c14
      << net.ipv4.tcp_fastopen = 0
      >> net.ipv4.tcp_fastopen = 1
      20d16
      << net.ipv4.tcp_frto_response = 0
      26,27c22
      << net.ipv4.tcp_max_orphans = 16384
      << net.ipv4.tcp_max_ssthresh = 0
      >> net.ipv4.tcp_max_orphans = 4096
      29,30c24,25
      << net.ipv4.tcp_max_tw_buckets = 16384
      << net.ipv4.tcp_mem = 94377 125837 188754
      >> net.ipv4.tcp_max_tw_buckets = 4096
      >> net.ipv4.tcp_mem = 23352 31138 46704
      34a30
      >> net.ipv4.tcp_notsent_lowat = -1
    My question to the networking experts on this forum: are there any other debugging tools or options I can install/enable to dig further into why this TCP retransmit failure is occurring so consistently? Are there any configuration changes which might account for this weird behaviour?

    Read the article

  • Boot prompt hyphens

    - by purjuntu
    Booting an Ubuntu DVD, pressing F6 and then ESC presents the boot prompt with the default kernel options, with the possibility of editing them and adding extra options. Something like:
      kernel /casper/vmlinuz boot=casper quiet splash --
    Questions: What's the meaning of the two hyphens? When adding an extra option (such as "toram" or "vga=791"), is there any difference between adding it BEFORE or AFTER the hyphens? When typing commands in Bash, two hyphens in a row mean "options end here; anything that follows should be treated as an argument, even if it starts with a hyphen". But the hyphens must have another meaning at the boot prompt, as "toram" and "vga=791" really are options.

    Read the article

  • What is the best taxonomy from Google's perspective?

    - by ZakGottlieb
    I was wondering what the best way is to structure a new website in Google's eyes. Currently, it contains two top-level categories (X & Y), and clicking a term under either one results in the URL www.nameofsite.com/X/X type term or /Y/Y type term. Technically, it is correct to group all "X type terms" under X and "Y type terms" under Y, but we could probably be more granular and break all articles into 5-6 top-level categories by splitting Y up into more specific categories. Given that the current URL structure will eventually result in thousands of "X type terms" and "Y type terms" under just two top-level categories, would it be more advisable to have several top-level categories, as suggested? Thank you in advance.

    Read the article

  • Database Context and Singleton injection with IoC

    - by zaitsman
    All of the below relates to an ASP.NET C# app. I have a singleton Settings MemoryCache that reads values from the database on first access and caches them, then invalidates them via a SQL Service Broker message and re-reads as required. For standard controllers, I create my DbContext in request scope. However, this obviously means that I can't use the same context in the settings cache class, since that is a singleton and we have a scope collision. At the moment I have ended up with two DbContexts: the controllers get theirs via the IoC container, whereas the singleton just creates its own. However, I am not satisfied with this approach (mostly because of how I feel about having two contexts; the cache doesn't write anything to the db, so concurrency is not much of an issue). What is a better way to do it?
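    One commonly suggested pattern here is to register a context factory (or scope factory) as the singleton's dependency instead of a context, so the cache opens a short-lived context only while it is (re)loading and never shares an instance with the request-scoped one. The asker's stack is ASP.NET/C#; the sketch below is only a language-neutral illustration in Python, and load_settings is a hypothetical query helper on the context.
      class SettingsCache:
          """Long-lived cache that owns no context; it is handed a factory and
          opens a short-lived context only while (re)loading values."""

          def __init__(self, context_factory):
              self._context_factory = context_factory  # injected by the IoC container
              self._values = None

          def get(self, key):
              if self._values is None:
                  with self._context_factory() as ctx:          # fresh, throwaway context
                      self._values = dict(ctx.load_settings())  # hypothetical query helper
              return self._values[key]

          def invalidate(self):
              # Called from the Service Broker notification; the next get() re-reads.
              self._values = None
    In the C# world the equivalent would be injecting something like a Func<DbContext> or a scope factory into the singleton; the exact registration depends on the container.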

    Read the article

  • What is the difference between the push and pull development models?

    - by michelpm
    I was reading Extreme Programming Explained, Second Edition, and in chapter 11, "The Theory of Constraints", the authors talk about the old and obsolete "push" development model and the XP way, the "pull" development model. It looks like quite an important concept, but it gets only a very small paragraph and two images that are mere illustrations of the "waterfall" and iterative processes, with nothing specific about these models except the image captions. I searched, and the book doesn't go any further into it. I couldn't find any further explanations or discussions about it on the Internet either. If the only difference between them is that one is "waterfall" and the other is iterative, then why "push" and why "pull"? Does anyone understand what the difference between the two really is, and can you give some good examples?

    Read the article

  • Update Manager Partitions

    - by user170585
    Perhaps this is completely stupid, but here's my inquiry: I have Ubuntu 12.04 installed on an external hard drive. On that HD there are 4 partitions. Two for operating systems, two for swap (unnecessary but I like it that way). The actual computer itself has Windows 7. If I use the Update manager to update to 12.10 or even 13.04, would the new Ubuntu install itself on the same partition it already was on? The other operating system I'm running on the Hard Drive is Lubuntu, for when I need to run Linux on older computers, if that matters. Thanks, Adam

    Read the article

  • In setting up a dual boot with Windows XP and Ubuntu, which OS do I install first?

    - by markl
    I'd like to install both Ubuntu 12.04 and Windows XP on a Dell laptop using a dual-boot setup, with the bulk of my hard drive left as empty space for sharing files between the two operating systems (so the choice of file system type is very important in this set-up). The partitioning structure I would like to use is:
      Partition 1 - Ubuntu 12.04 (root) (20GB)
      Partition 2 - Ubuntu /home (20GB)
      Partition 3 - Free Space (560GB)
      Partition 4 - Windows XP (35GB)
      Partition 5 - SWAP (3GB)
    (Total hard drive capacity is ~640GB.) My question is: what is the best way to go about setting up this kind of system? Should I install Windows XP first, set up the partitions, and then install Ubuntu, which I believe will install the GRUB bootloader for choosing the OS at boot? Or do I install Ubuntu first, setting up the available partitions, and then perform a Windows install? Please let me know if there is anything in this setup that I have left out and should know about, including things related to setting particular partitions as logical or primary, and whether the boot partition and the filesystem partition should actually be two separate partitions.

    Read the article

  • Touchpad stops working

    - by Diegov
    I'm on an HP 430 notebook running Oneiric, and sometimes my touchpad just stops working :(... I haven't installed anything, just been playing safely with the terminal following linuxcommand.org. BTW, my touchpad has a "hole" that in Windows works like this: two taps block the touchpad, two taps unblock it. As far as I know, Ubuntu doesn't recognize this function, so I think the "hole" is not the problem... Also, I'm pretty sure I haven't even touched it... PD: Spanish speaker, sorry if "hole" is not the appropriate term haha.. Thanks!

    Read the article

  • What book do you recommend for the OCAJP certification (1Z0-803) and OCPJP (1Z0-804)? [on hold]

    - by Muhammad Gelbana
    I find completely contradictory reviews for the VERY same book on Amazon, and some book authors are even rewarding people for good reviews, so basically most of the reviews are totally fake! You can even tell from the reviewer's name, which you'll find is similar to the author's name; you can assume they might be from the same country and that the reviewer is just being helpful (to the book author, of course)! I can't make up my mind about which book I should buy! I only need a book or two that covers the Java associate and professional topics very well, not just an overview; I need material that covers everything from A to Z. I've been developing in Java for around 4.5 years, but there must still be a detail or two I don't know. Would someone kindly shed some light on a good book, based on actual experience with the book? THANK YOU!

    Read the article

  • How is WPF Data Binding using Object Data Source in Visual Studio 2010 done?

    - by Rob Perkins
    This is probably mostly a question about how to use the VS 2010 IDE tools in a way the Microsofties didn't specifically intend. But since this is something I immediately tried without success. I have defined a .NET 4.0 WPF Application project with a simple class that looks like this: Public Class Class1 Public Property One As String = "OneString" Public Property Two As String = "TwoString" End Class I then defined it as an "Object Data Source" in VS2010, using the IDE's "Add New Data Source..." feature. This exposes the class members in a GUI element in the IDE as given in this image: Dragging "Class1" from that tool to the surface of "Window1.xaml" in a default "WPF Application" results in the design view looking like this: And generated XAML like this: <Window x:Class="Window1" xmlns="http://schemas.microsoft.com/winfx/2006/xaml/presentation" xmlns:x="http://schemas.microsoft.com/winfx/2006/xaml" Title="Window1" Height="133" Width="170" xmlns:my="clr-namespace:WpfApplication1" mc:Ignorable="d" xmlns:d="http://schemas.microsoft.com/expression/blend/2008" xmlns:mc="http://schemas.openxmlformats.org/markup-compatibility/2006" > <Window.Resources> <CollectionViewSource x:Key="Class1ViewSource" d:DesignSource="{d:DesignInstance my:Class1, CreateList=True}" /> </Window.Resources> <Grid DataContext="{StaticResource Class1ViewSource}" HorizontalAlignment="Left" Name="Grid1" VerticalAlignment="Top"> <Grid.ColumnDefinitions> <ColumnDefinition Width="Auto" /> <ColumnDefinition Width="Auto" /> </Grid.ColumnDefinitions> <Grid.RowDefinitions> <RowDefinition Height="Auto" /> <RowDefinition Height="Auto" /> </Grid.RowDefinitions> <Label Content="One:" Grid.Column="0" Grid.Row="0" HorizontalAlignment="Left" Margin="3" VerticalAlignment="Center" /> <TextBlock Grid.Column="1" Grid.Row="0" Height="23" HorizontalAlignment="Left" Margin="3" Name="OneTextBlock" Text="{Binding Path=One}" VerticalAlignment="Center" /> <Label Content="Two:" Grid.Column="0" Grid.Row="1" HorizontalAlignment="Left" Margin="3" VerticalAlignment="Center" /> <TextBlock Grid.Column="1" Grid.Row="1" Height="23" HorizontalAlignment="Left" Margin="3" Name="TwoTextBlock" Text="{Binding Path=Two}" VerticalAlignment="Center" /> </Grid> Note the data bindings Text="{Binding Path=One}" and Text="{Binding Path=Two}" in the TextBlock elements. Code-behind for Window1.xaml has this in Window_Loaded: Class Window1 Private m_c1 As New Class1 Private Sub Window1_Loaded(ByVal sender As Object, ByVal e As System.Windows.RoutedEventArgs) Handles Me.Loaded Dim Class1ViewSource As System.Windows.Data.CollectionViewSource = CType(Me.FindResource("Class1ViewSource"), System.Windows.Data.CollectionViewSource) 'Load data by setting the CollectionViewSource.Source property: 'Class1ViewSource.Source = [generic data source] Me.DataContext = m_c1 End Sub End Class Running the application produces this output: The expected result was that "OneString" would appear next to "One" and "TwoString" next to "Two" in the running window. The question is: Why didn't this work? What will work instead? If I put bindings in a DataTemplate, it works. Blend, with its sample data stuff, implied that this should work, but it doesn't. I know I'm missing something pretty fundamental here; what is it?

    Read the article

  • How to get IMediaControl.Run() to start a file playing with no delay

    - by MusiGenesis
    I am attempting to use DirectShow to play two AVI files consecutively (one after the other) so that there is no interruption in the audio or video when the player transitions from one file to the next. I have two custom controls on my form. Each one is pre-loaded with an AVI file, and before playback begins I set up all the DirectShow interfaces, set the video windows and resize them, call IMediaControl.Run(), then IMediaControl.Pause(), then IMediaSeeking.SetPositions to reset to frame 0, on both controls. On the form, you can see that both files are paused at their initial frames. I then call IMediaControl.Run() on the first control, and wait for it to complete before calling Run() on the second control. Initially, I hooked into the first video's EC_COMPLETE notification message, and used this to start the second. Thinking that this event might be slow to arrive (turns out it is, but for a weird reason), I tried two other approaches: Check the first video's current position inside a timer that goes off every second or so (using IMediaPosition.get_CurrentPosition). When the current position is within a second of the video's stop time (known in advance from IMediaPosition.get_StopTime), I go into a tight while loop and wait for the current position to equal the stop time, and then call Run() on the second video. Same as the first, except I replace the while loop with a call to timeSetEvent from winmm.dll, with a delay set so that it fires right when the first file is supposed to end. I use the callback to Run() the second file. Either of these two methods substantially cuts down the delay between the end of the first file and the beginning of the second, indicating that the EC_COMPLETE message doesn't arrive immediately after the file is complete (I also tried hooking the EC_SEGMENT_COMPLETE message, which is supposed to be used for looping within a file, but apparently nobody supports this - it never occurs on my machine, at least). Doing all of the above has cut the transition delay from as much as a second, down to a barely perceptible glitch; about a third of the time the files transition with no interruption at all, which suggests there's no fundamental reason I can't get this to work all the time. The slight delay is still unacceptable, unfortunately. I assume (and I could easily be wrong) that the remaining delay is due to a slight variable delay between the call to IMediaControl.Run() and when the video actually starts playing. Does anybody know anything I can do to eliminate this little lag? It would also help to be told this is fundamentally impossible for whatever reason, which wouldn't surprise me. I've never encountered a video player in Windows that doesn't have this problem, so it may not be doable. More info: the AVI files I'm playing are completely uncompressed (video and audio are uncompressed), so I don't think the lag is due to DirectShow's having to uncompress the video ahead of play start, although it may still buffer ahead as matter of course (and this may be the source of the problem). I would have though that starting play, pausing and then rewinding to the beginning would fix this. Also, the way I'm handling the transition is to actually have the second control underneath the first; when the first completes playing, I start the second and then call BringToFront on it, creating the appearance of a single video transitioning between the two originals. 
I don't think the glitch is due to this, because it works perfectly some of the time, and even if this were problematic, it wouldn't explain the matching audio glitch. Even more: I just tried starting the second video 30-50 milliseconds "early" and that seemed to eliminate even more of the gap, so I'm guessing that the lag in Run() is about that long. It appears to be variable, though, so this is still not where I need it to be.

    Read the article

  • WebSphere Application Server EJB Optimization

    - by Chris Aldrich
    We are working on developing a Java EE based application. Our application is Java 1.5 compatible and will be deployed to WAS ND 6.1.0.21 with EBJ 3.0 and Web Services feature packs. The configuration is currently one cell with two clusters. Each cluster will have two nodes. Our application, or our system, as I should rather say, comes in two or three parts. Part 1: An ear deployed to one cluster that contains 3rd party vendor code combined with customization code. Their code is EJB 2.0 compliant and has a lot of Remote Home interfaces. Part 2: An ear deployed to the same cluster as the first ear. This ear contains EBJ 3's that make calls into the EJB 2's supplied by the vendor and the custom code. These EJB 3's are used by the JSF UI also packaged with the EAR, and some of them are also exposed as web services (JAX-WS 2.0 with SOAP 1.2 compliance) for other clients. Part 3: There may be other services that do not depend on our vendor/custom code app. These services will be EJB 3.0's and web services that are deployed to the other cluster. Per a recommendation from some IBM staff on site here, communication between nodes in a cluster can be EJB RMI. But if we are going across clusters and/or other cells, then the communication should be web services. That said, some of us are wondering about performance and optimizing communication for speed of our applications that will use our web services and EJB's. Right now most EJB's are exposed as remote. (and our vendor set theirs up that way, rather than also exposing local home interfaces). We are wondering if WAS does any optimizations between apps in the same node/cluster node space. If two apps are installed in the same area and they call each other via remote home interface, is WAS smart enough to make it a local home interface call? Are their other optimization techniques? Should we consider them? Should we not? What are the costs/benefits? Here is the question from one of our team members as sent in their email: The question is: Supposing we develop our EJBs as remote EJBs, where our UI controller code is talking to our EXT java services via EJB3...what are our options for performance optimization when both the EJB server and client are running in the same container? As one point of reference, google has given me some oooooold websphere performance tuning documentation from 2000 that explains a tuning configuration you can set to enable Call By Reference for EJB communication when they're in the same application server JVM. It states the following: Because EJBs are inherently location independent, they use a remote programming model. Method parameters and return values are serialized over RMI-IIOP and returned by value. This is the intrinsic RMI "Call By Value" model. WebSphere provides the "No Local Copies" performance optimization for running EJBs and clients (typically servlets) in the same application server JVM. The "No Local Copies" option uses "Call By Reference" and does not create local proxies for called objects when both the client and the remote object are in the same process. Depending on your workload, this can result in a significant overhead savings. Configure "No Local Copies" by adding the following two command line parameters to the application server JVM: * -Djavax.rmi.CORBA.UtilClass=com.ibm.CORBA.iiop.Util * -Dcom.ibm.CORBA.iiop.noLocalCopies=true CAUTION: The "No Local Copies" configuration option improves performance by changing "Call By Value" to "Call By Reference" for clients and EJBs in the same JVM. 
One side effect of this is that the Java object derived (non-primitive) method parameters can actually be changed by the called enterprise bean. Consider Figure 16a: Also, we will also be using Process Server 6.2 and WESB 6.2 as well in the future. Any ideas? recommendations? Thanks

    Read the article

  • Minimum-Waste Print Job Grouping Algorithm?

    - by Matt Mc
    I work at a publishing house and I am setting up one of our presses for "ganging", in other words, printing multiple jobs simultaneously. Given that different print jobs can have different quantities, and anywhere from 1 to 20 jobs might need to be considered at a time, the problem is to determine which jobs to group together to minimize waste (waste coming from over-printing the smaller-quantity jobs in a given set, that is). Given the following stable data:
      - All jobs are equal in terms of spatial size; placement on paper doesn't come into consideration.
      - There are three "lanes", meaning that three jobs can be printed simultaneously. Ideally, each lane has one job. Part of the problem is minimizing how many lanes each job is run on.
      - If necessary, one job could be run on two lanes, with a second job on the third lane.
      - The "grouping" waste from a given set of jobs (say their quantities are x, y and z, with x the highest) is the highest number minus each of the two lower numbers, i.e. (x - y) + (x - z). Otherwise stated, waste is produced by printing jobs Y and Z (in excess of their quantities) up to the quantity of X.
      - The grouping waste is a qualifier for a given set, meaning it cannot exceed a certain quantity or the job is simply printed alone.
    So the question is: how to determine which sets of jobs are grouped together, out of any given number of jobs, based on the qualifiers of 1) three similar quantities OR 2) two quantities where one is approximately double the other, AND with the aim of minimal total grouping waste across the various sets.
    (Edit) Quantity information: typical job quantities can be from 150 to 350 for foreign languages, or 500 to 1000 for English print runs. This data can be used to set up some scenarios for an algorithm. For example, let's say you had 5 jobs: 1000, 500, 500, 450, 250. By looking at it, I can see a couple of answers. Obviously (1000/500/500) is not efficient, as you'll have a grouping waste of 1000. (500/500/450) is better, as you'll have a waste of 50, but then you run (1000) and (250) alone. But you could also run (1000/500) with 1000 on two lanes, (500/250) with 500 on two lanes, and then (450) alone. In terms of trade-offs for lane minimization vs. wastage, we could say that any grouping waste over 200 is excessive. (End edit)
    ...Needless to say, quite a problem. (For me.) I am a moderately skilled programmer, but I do not have much familiarity with algorithms and I am not fully studied in the mathematics of the area. I'm in the process of writing a sort of brute-force program that simply tries all options, pruning any option tree that seems to have excessive grouping waste. However, I can't help but hope there's an easier and more efficient method. I've looked at various websites trying to find out more about algorithms in general and have been slogging my way through the symbology, but it's slow going. Unfortunately, Wikipedia's articles on the subject are very cross-dependent and it's difficult to find an "in". The only thing I've really been able to find is a rough name for the type of algorithm I need: "Exclusive Distance Clustering", one-dimensionally speaking. I did look at what seems to be the popularly referred-to algorithm on this site, the Bin Packing one, but I was unable to see exactly how it would work with my problem. Any help is appreciated. :)
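    A minimal brute-force sketch of the search the poster describes, in Python, under two stated assumptions: the waste of a group is sum(max - quantity) over its members, as defined above, and the two-lane "doubling" rule is ignored for simplicity. It minimises the number of press runs first and breaks ties on total waste, which is one reading of the lane-vs-waste trade-off; for 20 jobs this exhaustive recursion would need memoisation or pruning, but it shows the structure of the search.
      from itertools import combinations

      def grouping_waste(group):
          # Every lane in a gang prints up to the largest quantity,
          # so the smaller jobs in the group are over-printed.
          return sum(max(group) - q for q in group)

      def best_partition(quantities, max_group=3, waste_cap=200):
          """Exhaustively split jobs into groups of at most max_group lanes,
          rejecting any group whose waste exceeds waste_cap.  The score
          (press runs, total waste) is minimised lexicographically."""
          if not quantities:
              return (0, 0), []
          first, rest = quantities[0], quantities[1:]
          # Option 1: run the first job alone.
          (runs, waste), groups = best_partition(rest, max_group, waste_cap)
          best = ((runs + 1, waste), [[first]] + groups)
          # Option 2: gang the first job with one or two of the remaining jobs.
          for size in range(1, max_group):
              for picked in combinations(range(len(rest)), size):
                  group = [first] + [rest[i] for i in picked]
                  w = grouping_waste(group)
                  if w > waste_cap:
                      continue
                  remaining = [rest[i] for i in range(len(rest)) if i not in picked]
                  (runs, waste), groups = best_partition(remaining, max_group, waste_cap)
                  score = (runs + 1, waste + w)
                  if score < best[0]:
                      best = (score, [group] + groups)
          return best

      score, groups = best_partition([1000, 500, 500, 450, 250])
      print(score, groups)
      # Under these assumptions: 3 runs, [[1000], [500, 500, 450], [250]], 50 waste.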

    Read the article

  • One letter game problem?

    - by Alex K
    Recently at a job interview I was given the following problem: Write a script capable of running on the command line as python It should take in two words on the command line (or optionally if you'd prefer it can query the user to supply the two words via the console). Given those two words: a. Ensure they are of equal length b. Ensure they are both words present in the dictionary of valid words in the English language that you downloaded. If so compute whether you can reach the second word from the first by a series of steps as follows a. You can change one letter at a time b. Each time you change a letter the resulting word must also exist in the dictionary c. You cannot add or remove letters If the two words are reachable, the script should print out the path which leads as a single, shortest path from one word to the other. You can /usr/share/dict/words for your dictionary of words. My solution consisted of using breadth first search to find a shortest path between two words. But apparently that wasn't good enough to get the job :( Would you guys know what I could have done wrong? Thank you so much. import collections import functools import re def time_func(func): import time def wrapper(*args, **kwargs): start = time.time() res = func(*args, **kwargs) timed = time.time() - start setattr(wrapper, 'time_taken', timed) return res functools.update_wrapper(wrapper, func) return wrapper class OneLetterGame: def __init__(self, dict_path): self.dict_path = dict_path self.words = set() def run(self, start_word, end_word): '''Runs the one letter game with the given start and end words. ''' assert len(start_word) == len(end_word), \ 'Start word and end word must of the same length.' self.read_dict(len(start_word)) path = self.shortest_path(start_word, end_word) if not path: print 'There is no path between %s and %s (took %.2f sec.)' % ( start_word, end_word, find_shortest_path.time_taken) else: print 'The shortest path (found in %.2f sec.) is:\n=> %s' % ( self.shortest_path.time_taken, ' -- '.join(path)) def _bfs(self, start): '''Implementation of breadth first search as a generator. The portion of the graph to explore is given on demand using get_neighboors. Care was taken so that a vertex / node is explored only once. ''' queue = collections.deque([(None, start)]) inqueue = set([start]) while queue: parent, node = queue.popleft() yield parent, node new = set(self.get_neighbours(node)) - inqueue inqueue = inqueue | new queue.extend([(node, child) for child in new]) @time_func def shortest_path(self, start, end): '''Returns the shortest path from start to end using bfs. ''' assert start in self.words, 'Start word not in dictionnary.' assert end in self.words, 'End word not in dictionnary.' paths = {None: []} for parent, child in self._bfs(start): paths[child] = paths[parent] + [child] if child == end: return paths[child] return None def get_neighbours(self, word): '''Gets every word one letter away from the a given word. We do not keep these words in memory because bfs accesses a given vertex only once. ''' neighbours = [] p_word = ['^' + word[0:i] + '\w' + word[i+1:] + '$' for i, w in enumerate(word)] p_word = '|'.join(p_word) for w in self.words: if w != word and re.match(p_word, w, re.I|re.U): neighbours += [w] return neighbours def read_dict(self, size): '''Loads every word of a specific size from the dictionnary into memory. 
''' for l in open(self.dict_path): l = l.decode('latin-1').strip().lower() if len(l) == size: self.words.add(l) if __name__ == '__main__': import sys if len(sys.argv) not in [3, 4]: print 'Usage: python one_letter_game.py start_word end_word' else: g = OneLetterGame(dict_path = '/usr/share/dict/words') try: g.run(*sys.argv[1:]) except AssertionError, e: print e
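    The breadth-first search itself is the right idea and does give a shortest path. One guess at what a reviewer might object to is get_neighbours, which runs a regex over the entire dictionary for every word that gets expanded. A common trick is to bucket the words once by wildcard pattern ('cold' goes into '_old', 'c_ld', 'co_d', 'col_'), so that finding neighbours becomes a dictionary lookup; a minimal sketch, assuming Python 3:
      from collections import defaultdict, deque

      def build_buckets(words):
          # Index every word under each of its one-letter-wildcarded patterns.
          buckets = defaultdict(set)
          for w in words:
              for i in range(len(w)):
                  buckets[w[:i] + '_' + w[i + 1:]].add(w)
          return buckets

      def shortest_path(start, end, words):
          buckets = build_buckets(words)
          queue = deque([[start]])
          seen = {start}
          while queue:
              path = queue.popleft()
              word = path[-1]
              if word == end:
                  return path
              for i in range(len(word)):
                  for neighbour in buckets[word[:i] + '_' + word[i + 1:]]:
                      if neighbour not in seen:
                          seen.add(neighbour)
                          queue.append(path + [neighbour])
          return None
    Bidirectional BFS (expanding from both ends and stopping when the frontiers meet) is another easy win on a dictionary the size of /usr/share/dict/words.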

    Read the article

  • Parallel processing via multithreading in Java

    - by Robz
    There are certain algorithms whose running time can decrease significantly when one divides up a task and gets each part done in parallel. One of these algorithms is merge sort, where a list is divided into infinitesimally smaller parts and then recombined in a sorted order. I decided to do an experiment to test whether or not I could I increase the speed of this sort by using multiple threads. I am running the following functions in Java on a Quad-Core Dell with Windows Vista. One function (the control case) is simply recursive: // x is an array of N elements in random order public int[] mergeSort(int[] x) { if (x.length == 1) return x; // Dividing the array in half int[] a = new int[x.length/2]; int[] b = new int[x.length/2+((x.length%2 == 1)?1:0)]; for(int i = 0; i < x.length/2; i++) a[i] = x[i]; for(int i = 0; i < x.length/2+((x.length%2 == 1)?1:0); i++) b[i] = x[i+x.length/2]; // Sending them off to continue being divided mergeSort(a); mergeSort(b); // Recombining the two arrays int ia = 0, ib = 0, i = 0; while(ia != a.length || ib != b.length) { if (ia == a.length) { x[i] = b[ib]; ib++; } else if (ib == b.length) { x[i] = a[ia]; ia++; } else if (a[ia] < b[ib]) { x[i] = a[ia]; ia++; } else { x[i] = b[ib]; ib++; } i++; } return x; } The other is in the 'run' function of a class that extends thread, and recursively creates two new threads each time it is called: public class Merger extends Thread { int[] x; boolean finished; public Merger(int[] x) { this.x = x; } public void run() { if (x.length == 1) { finished = true; return; } // Divide the array in half int[] a = new int[x.length/2]; int[] b = new int[x.length/2+((x.length%2 == 1)?1:0)]; for(int i = 0; i < x.length/2; i++) a[i] = x[i]; for(int i = 0; i < x.length/2+((x.length%2 == 1)?1:0); i++) b[i] = x[i+x.length/2]; // Begin two threads to continue to divide the array Merger ma = new Merger(a); ma.run(); Merger mb = new Merger(b); mb.run(); // Wait for the two other threads to finish while(!ma.finished || !mb.finished) ; // Recombine the two arrays int ia = 0, ib = 0, i = 0; while(ia != a.length || ib != b.length) { if (ia == a.length) { x[i] = b[ib]; ib++; } else if (ib == b.length) { x[i] = a[ia]; ia++; } else if (a[ia] < b[ib]) { x[i] = a[ia]; ia++; } else { x[i] = b[ib]; ib++; } i++; } finished = true; } } It turns out that function that does not use multithreading actually runs faster. Why? Does the operating system and the java virtual machine not "communicate" effectively enough to place the different threads on different cores? Or am I missing something obvious?
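    Two details in the threaded version are worth a close look: ma.run() and mb.run() are called directly, and in Java (as in Python) Thread.run() simply executes the method on the calling thread, so no second thread is ever started; and the while(!ma.finished || !mb.finished) loop is a busy wait that burns a core instead of joining. A tiny Python illustration of the start()/run()/join() distinction (only an analogue of the Java API, not a rewrite of the code above):
      import threading
      import time

      def work(name):
          time.sleep(1)          # stand-in for sorting one half
          print(name, "done")

      t1 = threading.Thread(target=work, args=("left",))
      t2 = threading.Thread(target=work, args=("right",))

      # t1.run(); t2.run()      # would run both on the calling thread, one after the other
      t1.start()                 # actually spawns a new thread
      t2.start()
      t1.join()                  # wait for completion instead of spinning on a flag
      t2.join()
    Even with start(), spawning a thread per recursive call costs far more than it saves on small sub-arrays, so parallel merge sorts usually fall back to the sequential version below some size threshold.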

    Read the article

  • Android: inserting a JSON object into SQLite

    - by Nizmon
    I'm trying to insert the below json response from a server into my sqlite DB and then read from the DB. The problem I am getting is that when I run my code it compiles fine and runs with no errors. But when trying to read from the DB I just get back what seems like an empty string so I know that the table is being created. I have the correct permissions within the manifest. I have also reference all my classes within there. {"locations": [{"locations":"Newcastle","location_id":"1"},{"locations":"London","location_id":"2"},{"locations":"Sunderland","location_id":"3"}]} Below where I use: Log.v("one", one); Log.v("two", two); below It only prints out the first set within the JSON object so Newcastle and 1. I don't get any errors at all which is stumping me. When I call the method getName within the Location class I just seem to get a blank string back! // This class creates the table as well as inserts and returns data from the sqlite DB public class Location { private DatabaseHelper mDbHelper; private SQLiteDatabase mDb; private final Context mCtx; private static final String vd_location = ("CREATE TABLE " + TABLE_VD_LOCATION + " (" + LOCATION + " TEXT," + LOCATION_ID + " TEXT " +");"); private static class DatabaseHelper extends SQLiteOpenHelper { DatabaseHelper(Context context) { super(context, DATABASE_NAME, null, DATABASE_VERSION); } @Override public void onCreate(SQLiteDatabase db) { db.execSQL(vd_location); } @Override public void onUpgrade(SQLiteDatabase db, int oldVersion, int newVersion) { } } public Location(Context ctx) { this.mCtx = ctx; } public Location open() throws SQLException { this.mDbHelper = new DatabaseHelper(this.mCtx); this.mDb = this.mDbHelper.getWritableDatabase(); return this; } public void close() { this.mDbHelper.close(); } public void addOffer (JSONObject json){ try{ JSONArray earthquakes = json.getJSONArray("offers"); for(int i=0;i<earthquakes.length();i++){ open(); JSONObject e = earthquakes.getJSONObject(i); String one = e.getString("locations"); String two = e.getString("location_id"); Log.v("one", one); Log.v("two", two); mDb.execSQL("INSERT INTO " + TABLE_VD_LOCATION + " ( locations, location_id ) " + "VALUES (?, ?)", new Object [] { e.getString("locations"), e.getString("location_id")}); close(); } }catch(Exception e){ }finally{ close(); } } public String getName(long l) throws SQLException{ String[] columns = new String[]{ LOCATION, LOCATION_ID}; Cursor c = mDb.query(TABLE_VD_LOCATION, columns, null, null, null, null, null); String result = ""; int iRow = c.getColumnIndex(LOCATION); int iName = c.getColumnIndex(LOCATION_ID); for (c.moveToFirst(); !c.isAfterLast(); c.moveToNext()){ result = result + c.getString(iRow) + " " + c.getString(iName) + " " + "\n"; } return result; } } // This code reads from the DB, this is just some very hacked together code so excuse it, it also works when used on other tables public void getData(){ boolean didItWork = true; try { Location loc = new Location(this); loc.open(); String data = loc.getName(0); loc.close(); Dialog t = new Dialog(this); t.setTitle("get" + data); t.show(); } catch (Exception e) { didItWork = false; String error = e.toString(); Dialog d = new Dialog(this); d.setTitle("Dang it!"); TextView tv = new TextView(this); tv.setText(error); d.setContentView(tv); d.show(); } finally { if (didItWork) { Dialog d = new Dialog(this); d.setTitle("Heck Yea!"); TextView tv = new TextView(this); tv.setText("Success"); d.setContentView(tv); d.show(); //entry.close(); } } }

    Read the article

  • Why doesn't PHP's oci_connect return false?

    - by absolethe
    I have a situation in which we have two production databases that synchronize with one other. Server One is considered the primary. Sometimes due to maintenance or a disaster Server Two will become primary. In some of our code that means we have to manually go in and edit the server name for database connections. I find this annoying, so the last thing I wrote I put the server information for both and set up a loop. If oci_connect failed on the Server One 3 times it would move on to Server Two. If Server Two failed 3 times it would notify the user a connection couldn't be made. This has worked fine most times we've had the situation of switching the servers. Yesterday, for example, it worked fine. Today it didn't. It just sat and spun endlessly. No error in the PHP error log. No failure to move on from. No error output to the screen. Nothing for 5 minutes. So then I had to manually edit the stupid config file. I asked what could possibly be different and I was told "yesterday the database was down, but not the server. today the server is down." Okay...? But I don't see a distinction. I would expect oci_connect to return false if it can't establish any sort of communication with the server. I'd expect it to timeout and error. Not just pass it on when it receives an error code from the server. What if there's a network problem, for example? Is this a bug in oci_connect or is there a possibility that something in our PHP configuration gives oci_connect a crazily long timeout? If it is a sort of "bug" is there some way I can check to see if the server is up first? Like a ping? (Of course when I did a ping through the command prompt I got a response from Server One and then was told, "it's back now" although I am skeptical about the timing on that.) Anyway, if anyone could shed some light on why oci_connect might run endlessly without failing and how to keep it from doing so I'd be grateful. -- Edit: My code looks like the examples on PHP.net only in some loops. $count = count($servers); for($i = 0; $i < $count; $i++){ if((!isset($connection)) || ($connection == false)){ // Attempt to connect to the oracle database $connection = @oci_connect($servers[$i]["user"], $servers[$i]["pass"], $servers[$i]["conid"]) or ($conn_error = oracle_error()); // Try again if there was a failure if(($connection == false) || (isset($con_error))){ // Three (two more) tries per alternative for($j = $st; $j < $fn; $j++){ // Try again to connect $connection = @oci_connect($servers[$i]["user"], $servers[$i]["pass"], $servers[$i]["conid"]) or ($conn_error = oracle_error()); } // for($j = 2; $j < 4; $j++) } // if($connection == false) } // if(!isset($connection) || ($connection == false)) } // for($i = 0; $i < $count; $i++)
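    On the "check the server first" idea: one cheap pre-check that works regardless of why oci_connect hangs is to test whether a TCP connection to the listener can be opened within a short timeout before attempting the real connection (in PHP, fsockopen with its timeout argument can do the same job). A language-neutral sketch, written in Python here and assuming the default listener port 1521:
      import socket

      def listener_reachable(host, port=1521, timeout=3):
          # True only if a TCP connection to the Oracle listener
          # can be opened within `timeout` seconds.
          try:
              with socket.create_connection((host, port), timeout=timeout):
                  return True
          except OSError:
              return False

      # Try Server One only if it answers; otherwise fall back to Server Two
      # before ever calling the real connect.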

    Read the article

  • How do I recursively define a Hash in Ruby from supplied arguments?

    - by Sarah Beckham
    This snippet of code populates an @options hash. values is an Array which contains zero or more heterogeneous items. If you invoke populate with arguments that are Hash entries, it uses the value you specify for each entry to assume a default value. def populate(*args) args.each do |a| values = nil if (a.kind_of? Hash) # Converts {:k => "v"} to `a = :k, values = "v"` a, values = a.to_a.first end @options[:"#{a}"] ||= values ||= {} end end What I'd like to do is change populate such that it recursively populates @options. There is a special case: if the values it's about to populate a key with are an Array consisting entirely of (1) Symbols or (2) Hashes whose keys are Symbols (or some combination of the two), then they should be treated as subkeys rather than the values associated with that key, and the same logic used to evaluate the original populate arguments should be recursively re-applied. That was a little hard to put into words, so I've written some test cases. Here are some test cases and the expected value of @options afterwards: populate :a => @options is {:a => {}} populate :a => 42 => @options is {:a => 42} populate :a, :b, :c => @options is {:a => {}, :b => {}, :c => {}} populate :a, :b => "apples", :c => @options is {:a => {}, :b => "apples", :c => {}} populate :a => :b => @options is {:a => :b} # Because [:b] is an Array consisting entirely of Symbols or # Hashes whose keys are Symbols, we assume that :b is a subkey # of @options[:a], rather than the value for @options[:a]. populate :a => [:b] => @options is {:a => {:b => {}}} populate :a => [:b, :c => :d] => @options is {:a => {:b => {}, :c => :d}} populate :a => [:a, :b, :c] => @options is {:a => {:a => {}, :b => {}, :c => {}}} populate :a => [:a, :b, "c"] => @options is {:a => [:a, :b, "c"]} populate :a => [:one], :b => [:two, :three => "four"] => @options is {:a => :one, :b => {:two => {}, :three => "four"}} populate :a => [:one], :b => [:two => {:four => :five}, :three => "four"] => @options is {:a => :one, :b => { :two => { :four => :five } }, :three => "four" } } It is acceptable if the signature of populate needs to change to accommodate some kind of recursive version. There is no limit to the amount of nesting that could theoretically happen. Any thoughts on how I might pull this off?

    Read the article

  • Bulls and cows game -- programming algorithm (Python)

    - by IcyFlame
    This is a simulation of the game Cows and Bulls with three-digit numbers. I am trying to get the number of cows and bulls between two numbers, one of which is generated by the computer and the other guessed by the user. I have parsed the two numbers so that I now have two lists with three elements each, each element being one of the digits in the number. So 237 will give the list [2,3,7], and I make sure that the relative indices are maintained; the general pattern is (hundreds, tens, units). These two lists are stored in the lists machine and person.
    ALGORITHM 1
    So I wrote the following code, the most intuitive algorithm (cows and bulls are initialized to 0 before the start of this loop):
      for x in person:
          if x in machine:
              if machine.index(x) == person.index(x):
                  bulls += 1
                  print x, ' in correct place'
              else:
                  print x, ' in wrong place'
                  cows += 1
    And I started testing this with different types of numbers selected by the computer. Quite randomly, I decided on 277, and I guessed 447. Here I got the first clue that this algorithm may not work: I got 1 cow and 0 bulls, whereas I should have got 1 bull and 1 cow. This is a table of outputs with the first algorithm:
      Guess    Output            Expected Output
      447      0 bulls, 1 cow    1 bull, 1 cow
      477      2 bulls, 0 cows   2 bulls, 0 cows
      777      0 bulls, 3 cows   2 bulls, 0 cows
    So obviously this algorithm was not working when there are repeated digits in the number randomly selected by the computer. I tried to understand why these errors are taking place, but I could not. I have tried a lot, but I just could not see any mistake in the algorithm (probably because I wrote it!).
    ALGORITHM 2
    On thinking about this for a few days, I tried this (cows and bulls are initialized to 0 before the start of this loop):
      for x in range(3):
          for y in range(3):
              if x == y and machine[x] == person[y]:
                  bulls += 1
              if not (x == y) and machine[x] == person[y]:
                  cows += 1
    I was more hopeful about this one, but when I tested it, this is what I got:
      Guess    Output            Expected Output
      447      1 bull, 1 cow     1 bull, 1 cow
      477      2 bulls, 2 cows   2 bulls, 0 cows
      777      2 bulls, 4 cows   2 bulls, 0 cows
    The mistake I am making is quite clear here; the digits are being counted again and again, i.e. 277 versus 477: when you count the bulls, the 2 bulls come up and that's all right. But when you count the cows, the 7 in 277 at the units place is matched with the 7 in 477 at the tens place, and thus a cow is generated; and the 7 in 277 at the tens place is matched with the 7 in 477 at the units place, and thus another cow is generated. The matching here is exactly what I wrote the code to do, but it is not what I want. And I have no idea whatsoever what to do next.
    Furthermore, I would like to stress that both algorithms work perfectly if there are no repeated digits in the number selected by the computer. Please help me with this issue.
    P.S.: I have been thinking about this for over a week, but I could not post a question earlier as my account was blocked (from asking questions) because I asked a foolish question, and did not delete it even though I got 2 downvotes immediately after posting it.
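    For what it's worth, the usual Mastermind-style convention is that each digit can be paired off at most once: count the bulls positionally, count the total digit overlap between the two numbers ignoring position, and the cows are that overlap minus the bulls. A small Python sketch of that convention (note that under it, 447 against 277 scores 1 bull and 0 cows, which differs from the expected column above for guesses with repeated digits):
      from collections import Counter

      def score(secret, guess):
          # Bulls: right digit in the right place.
          bulls = sum(s == g for s, g in zip(secret, guess))
          # Overlap: shared digits regardless of position, with repeats
          # matched at most as often as they occur in both numbers.
          overlap = sum((Counter(secret) & Counter(guess)).values())
          return bulls, overlap - bulls

      print(score("277", "477"))  # (2, 0)
      print(score("277", "777"))  # (2, 0)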

    Read the article

  • SBS 2008 BPA Warnings After Migration From SBS 2003

    - by Nicholas Piasecki
    We just finished a we-know-just-enough-to-be-dangerous migration from SBS 2003 to SBS 2008, and things seem to have gone relatively smoothly. After running the SBS 2008 Best Practices Analyzer on the destination server, we've got three warning messages, and I can't tell if they're important or not. First, the easy one: SMTP Port (TCP 25 Status): The Edgetransport.exe process should listen on SMTP port 25, but that port is owned by the process. I don't think that this one is a big deal--e-mail is flowing through the SMTP connector. Since there are two spaces between "the" and "process," I'm assuming that for some reason BPA just couldn't figure out the owning process name and this is just some sloppy programming when displaying the message. (Indeed, on subsequent runs of the BPA this message goes away, and other times it comes back.) Now, two more scary sounding ones: No DNS name server records: There are no DNS name server (NS) resource records in the _msdcs sub-domain in the forward lookup zone for Windows SBS 2008. and, similarly, No DNS name server records: There are no DNS name server (NS) resource records in the _msdcs zone for Windows SBS 2008. Now for these two, everything appears to be functioning correctly--but I'm assuming this is a weird state as a result of the SBS 2003 to 2008 migration. Can anyone provide any pointers on how to fix it, or whether or not it can be safely ignored? Thanks!

    Read the article
