Search Results

Search found 734 results on 30 pages for 'yield'.

Page 21/30 | < Previous Page | 17 18 19 20 21 22 23 24 25 26 27 28  | Next Page >

  • Using awk to split text file every 10,000 lines

    - by Sneaky Wombat
    I have a large gzip'd text file. I'd like to do something like: zcat BIGFILE.GZ | awk (snag 10,000 lines and redirect to...)|gzip -9 smallerPartFile.gz For the awk part up there, I basically want it to take 10,000 lines, send them to gzip, and then repeat until all lines in the original input file are consumed. I found a script that claims to do this, but when I run it on my files and then diff the original to the ones that were split and then merged, lines are missing. So, something is wrong with the awk part and I'm not sure what part is broken. Here's the code. Can someone tell me why this doesn't yield a file that can be split and merged and then diff'd to the original successfully? # Generate files part0.dat.gz, part1.dat.gz, etc. # restore with: zcat foo* | gzip -9 > restoredFoo.sql.gz (or something like that) prefix="foo" count=0 suffix=".sql" lines=10000 # Split every 10000 lines. zcat /home/foo/foo.sql.gz | while true; do partname=${prefix}${count}${suffix} # Use awk to read the required number of lines from the input stream. awk -v lines=${lines} 'NR <= lines {print} NR == lines {exit}' >${partname} if [[ -s ${partname} ]]; then # Compress this part file. gzip -9 ${partname} (( ++count )) else # Last file generated is empty, delete it. rm -f ${partname} break fi done
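
    A likely reason the looped version drops lines: each new awk invocation reads ahead and buffers from the shared stdin, so the next loop iteration starts somewhere past the last line the previous awk printed. A sketch that avoids restarting awk entirely, assuming GNU coreutils split with --filter support (8.13 or later; file names are illustrative):

        zcat /home/foo/foo.sql.gz \
          | split -l 10000 -d --filter='gzip -9 > $FILE.sql.gz' - foo

    Here $FILE is expanded by split itself (hence the single quotes), each chunk becomes foo00.sql.gz, foo01.sql.gz and so on, and zcat foo*.sql.gz should reproduce the original stream.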

    Read the article

  • How do I find the cause for a huge difference in performance between two identical Ubuntu servers?

    - by the.duckman
    I am running two Dell R410 servers in the same rack of a data center. Both have the same hardware configuration, run Ubuntu 10.04, have the same packages installed and run the same Java web servers. No other load. One of them is 20-30% faster than the other, very consistently. I used dstat to figure out if there are more context switches, IO, swapping or anything, but I see no reason for the difference. With the same workload (no swapping, virtually no IO), the CPU usage and load are higher on one server. So the difference appears to be mainly CPU bound, but while a simple CPU benchmark using sysbench (with all other load turned off) did yield a difference, it was only 6%. So maybe it is not only CPU but also memory performance. I tried to figure out if the BIOS settings differ in some parameter, did a dump using dmidecode, but that yielded no difference. I compared /proc/cpuinfo, no difference. I compared the output of cpufreq-info, no difference. I am lost. What can I do to figure out what is going on?
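
    A few low-level checks worth running on both machines, assuming standard Linux sysfs and dmidecode paths (adjust as needed):

        # CPU frequency governor and the clock each core is actually running at
        cat /sys/devices/system/cpu/cpu*/cpufreq/scaling_governor
        grep "cpu MHz" /proc/cpuinfo | sort | uniq -c
        # DIMM layout and speed - a single slower or missing module can skew results
        sudo dmidecode -t memory | grep -E "Size|Speed|Locator"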

    Read the article

  • IIS 6 Denies access to the default document

    - by Jim
    I've got Windows Server 2k3 with IIS6 hosting a couple of ASP.NET MVC 2 applications (.NET 4), all in the Default Web Site. Most of them simply use Integrated authentication, but a couple use forms as well. All the applications work properly and are correctly accessible. The problem I'm trying to resolve is access to the default document. It is currently specified as index.htm. Both index.htm and the Default Web Site are configured to allow anonymous access (with none of the authenticated access boxes checked). However, access is denied to the file. Accessing via server.domain.tld/ and server.domain.tld/index.htm both yield 401 errors. However, server.domain.tld/default.htm (file does not exist) properly returns a 404. If I alter the file security on index.htm to allow integrated authentication, then requesting /index.htm directly works properly for users with domain accounts, but anonymous users get a login prompt/401. How can I configure IIS to allow all users to view index.htm via server.domain.tld/?

    Read the article

  • Is it possible to use ffmpeg to trim off X seconds from the beginning of a video with an unspecified length?

    - by marcelebrate
    I need to trim just the first 1 or 2 seconds off of a series of FLV recordings of varying, unspecified lengths. I've found plenty of resources for extracting a specified duration from a video (e.g. 30 second clips), but none for continuing to the end of a video. Both of these attempts just yield a copied version of the video, sans desired trimming: ffmpeg -ss 2 -vcodec copy -acodec copy -i input.flv output.flv ffmpeg -ss 2 -t 120 -vcodec copy -acodec copy -i input.flv output.flv The thought on the second one was: perhaps if I specified a length beyond what was possible, it'd just go to the end. No dice. I know it's not an issue with codecs or using seconds instead of timecode since the following worked like a charm: ffmpeg -ss 2 -t 5 -vcodec copy -acodec copy -i input.flv output.flv Any other ideas? I'm open to using other (Windows-based) command line tools, however I am strongly favoring ffmpeg since I'm already using it for thumbnail creation and am familiar with it. If it helps, my videos will all be under 2 minutes.
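
    One variation worth trying, offered as a sketch rather than a guaranteed fix: place -ss after -i so it applies to the output instead of being parsed as an input option. Exact behaviour varies between ffmpeg versions, and with stream copy the cut will land on the nearest keyframe rather than exactly at 2 seconds:

        ffmpeg -i input.flv -ss 2 -vcodec copy -acodec copy output.flv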

    Read the article

  • HAProxy appsession vs cookie precedence

    - by user1139473
    I am trying to find the best solution for balancing and keeping persistence on our application behind HAProxy. Here is our basic configuration: https://gist.github.com/endzyme/1804046b23c37beba520 After playing around with taking members down and up and also reloading the haproxy (with -sf) I have noticed that appsession isn't 100% effective, it would appear that sometimes it doesn't always 'request-learn'. I also tried to add a cookie JSESSION prefix to balance in case request-learn didn't take. Unfortunately it would present scenarios where the prefix would list svr2 but it was balanced to a different server. I am assuming it's because the appsession table takes first then sticks on that before using the cookie parameter. I have not tested with using cookie as an inserted option (not prefix on existing cookie) but I am thinking it would yield similar results. My question is: Which one is checked first, appsession or cookie, and is it an immediate catch after it reads the first one, or a fall through? Also as a follow up - is it not recommended to use both in the same backend? Cookie as I understand takes less memory resources, is agnostic to reloads and has way better reliability of persistence. Appsession I assume takes less cpu resource, since it's reading not writing. (Bonus Question: is there a way to inspect appsession/cookie table map? socket show table doesn't show anything except stick-tables) Many thanks in advance, -Nick
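
    For reference, a minimal insert-mode cookie sketch (server names and addresses are invented); a cookie that HAProxy inserts itself is independent of the application's JSESSIONID and of the appsession table, which makes the stickiness easier to reason about:

        backend app_servers
            balance roundrobin
            cookie SERVERID insert indirect nocache
            server svr1 10.0.0.1:8080 cookie svr1 check
            server svr2 10.0.0.2:8080 cookie svr2 check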

    Read the article

  • Solving embarrassingly parallel problems using Python multiprocessing

    - by gotgenes
    How does one use multiprocessing to tackle embarrassingly parallel problems? Embarassingly parallel problems typically consist of three basic parts: Read input data (from a file, database, tcp connection, etc.). Run calculations on the input data, where each calculation is independent of any other calculation. Write results of calculations (to a file, database, tcp connection, etc.). We can parallelize the program in two dimensions: Part 2 can run on multiple cores, since each calculation is independent; order of processing doesn't matter. Each part can run independently. Part 1 can place data on an input queue, part 2 can pull data off the input queue and put results onto an output queue, and part 3 can pull results off the output queue and write them out. This seems a most basic pattern in concurrent programming, but I am still lost in trying to solve it, so let's write a canonical example to illustrate how this is done using multiprocessing. Here is the example problem: Given a CSV file with rows of integers as input, compute their sums. Separate the problem into three parts, which can all run in parallel: Process the input file into raw data (lists/iterables of integers) Calculate the sums of the data, in parallel Output the sums Below is traditional, single-process bound Python program which solves these three tasks: #!/usr/bin/env python # -*- coding: UTF-8 -*- # basicsums.py """A program that reads integer values from a CSV file and writes out their sums to another CSV file. """ import csv import optparse import sys def make_cli_parser(): """Make the command line interface parser.""" usage = "\n\n".join(["python %prog INPUT_CSV OUTPUT_CSV", __doc__, """ ARGUMENTS: INPUT_CSV: an input CSV file with rows of numbers OUTPUT_CSV: an output file that will contain the sums\ """]) cli_parser = optparse.OptionParser(usage) return cli_parser def parse_input_csv(csvfile): """Parses the input CSV and yields tuples with the index of the row as the first element, and the integers of the row as the second element. The index is zero-index based. :Parameters: - `csvfile`: a `csv.reader` instance """ for i, row in enumerate(csvfile): row = [int(entry) for entry in row] yield i, row def sum_rows(rows): """Yields a tuple with the index of each input list of integers as the first element, and the sum of the list of integers as the second element. The index is zero-index based. :Parameters: - `rows`: an iterable of tuples, with the index of the original row as the first element, and a list of integers as the second element """ for i, row in rows: yield i, sum(row) def write_results(csvfile, results): """Writes a series of results to an outfile, where the first column is the index of the original row of data, and the second column is the result of the calculation. The index is zero-index based. 
:Parameters: - `csvfile`: a `csv.writer` instance to which to write results - `results`: an iterable of tuples, with the index (zero-based) of the original row as the first element, and the calculated result from that row as the second element """ for result_row in results: csvfile.writerow(result_row) def main(argv): cli_parser = make_cli_parser() opts, args = cli_parser.parse_args(argv) if len(args) != 2: cli_parser.error("Please provide an input file and output file.") infile = open(args[0]) in_csvfile = csv.reader(infile) outfile = open(args[1], 'w') out_csvfile = csv.writer(outfile) # gets an iterable of rows that's not yet evaluated input_rows = parse_input_csv(in_csvfile) # sends the rows iterable to sum_rows() for results iterable, but # still not evaluated result_rows = sum_rows(input_rows) # finally evaluation takes place as a chain in write_results() write_results(out_csvfile, result_rows) infile.close() outfile.close() if __name__ == '__main__': main(sys.argv[1:]) Let's take this program and rewrite it to use multiprocessing to parallelize the three parts outlined above. Below is a skeleton of this new, parallelized program, that needs to be fleshed out to address the parts in the comments: #!/usr/bin/env python # -*- coding: UTF-8 -*- # multiproc_sums.py """A program that reads integer values from a CSV file and writes out their sums to another CSV file, using multiple processes if desired. """ import csv import multiprocessing import optparse import sys NUM_PROCS = multiprocessing.cpu_count() def make_cli_parser(): """Make the command line interface parser.""" usage = "\n\n".join(["python %prog INPUT_CSV OUTPUT_CSV", __doc__, """ ARGUMENTS: INPUT_CSV: an input CSV file with rows of numbers OUTPUT_CSV: an output file that will contain the sums\ """]) cli_parser = optparse.OptionParser(usage) cli_parser.add_option('-n', '--numprocs', type='int', default=NUM_PROCS, help="Number of processes to launch [DEFAULT: %default]") return cli_parser def main(argv): cli_parser = make_cli_parser() opts, args = cli_parser.parse_args(argv) if len(args) != 2: cli_parser.error("Please provide an input file and output file.") infile = open(args[0]) in_csvfile = csv.reader(infile) outfile = open(args[1], 'w') out_csvfile = csv.writer(outfile) # Parse the input file and add the parsed data to a queue for # processing, possibly chunking to decrease communication between # processes. # Process the parsed data as soon as any (chunks) appear on the # queue, using as many processes as allotted by the user # (opts.numprocs); place results on a queue for output. # # Terminate processes when the parser stops putting data in the # input queue. # Write the results to disk as soon as they appear on the output # queue. # Ensure all child processes have terminated. # Clean up files. infile.close() outfile.close() if __name__ == '__main__': main(sys.argv[1:]) These pieces of code, as well as another piece of code that can generate example CSV files for testing purposes, can be found on github. I would appreciate any insight here as to how you concurrency gurus would approach this problem. Here are some questions I had when thinking about this problem. Bonus points for addressing any/all: Should I have child processes for reading in the data and placing it into the queue, or can the main process do this without blocking until all input is read? Likewise, should I have a child process for writing the results out from the processed queue, or can the main process do this without having to wait for all the results? 
Should I use a processes pool for the sum operations? If yes, what method do I call on the pool to get it to start processing the results coming into the input queue, without blocking the input and output processes, too? apply_async()? map_async()? imap()? imap_unordered()? Suppose we didn't need to siphon off the input and output queues as data entered them, but could wait until all input was parsed and all results were calculated (e.g., because we know all the input and output will fit in system memory). Should we change the algorithm in any way (e.g., not run any processes concurrently with I/O)?
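
    For comparison, a minimal sketch of the parallel middle stage built on multiprocessing.Pool, with the reader and writer left in the main process; pool.imap consumes the CSV reader lazily and returns results in input order, so parsing, summing and writing overlap. This is one possible shape, not the author's solution, and the helper names are invented:

        #!/usr/bin/env python
        # pool_sums.py - illustrative sketch only
        import csv
        import multiprocessing
        import sys

        def sum_row(indexed_row):
            """Sum one (index, row) pair; top-level so it can be pickled to workers."""
            i, row = indexed_row
            return i, sum(int(entry) for entry in row)

        def main(argv):
            infile = open(argv[0])
            outfile = open(argv[1], 'w')
            in_csv = csv.reader(infile)
            out_csv = csv.writer(outfile)
            pool = multiprocessing.Pool()  # defaults to cpu_count() workers
            # imap pulls rows from the reader as the workers need them and yields
            # results in order, so the whole file never has to sit in memory.
            for result in pool.imap(sum_row, enumerate(in_csv), chunksize=1000):
                out_csv.writerow(result)
            pool.close()
            pool.join()
            infile.close()
            outfile.close()

        if __name__ == '__main__':
            main(sys.argv[1:])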

    Read the article

  • Handling inheritance with overriding efficiently

    - by Fyodor Soikin
    I have the following two data structures. First, a list of properties applied to object triples: Object1 Object2 Object3 Property Value O1 O2 O3 P1 "abc" O1 O2 O3 P2 "xyz" O1 O3 O4 P1 "123" O2 O4 O5 P1 "098" Second, an inheritance tree: O1 O2 O4 O3 O5 Or viewed as a relation: Object Parent O2 O1 O4 O2 O3 O1 O5 O3 O1 null The semantics of this being that O2 inherits properties from O1; O4 - from O2 and O1; O3 - from O1; and O5 - from O3 and O1, in that order of precedence. NOTE 1: I have an efficient way to select all children or all parents of a given object. This is currently implemented with left and right indexes, but hierarchyid could also work. This does not seem important right now. NOTE 2: I have tiggers in place that make sure that the "Object" column always contains all possible objects, even when they do not really have to be there (i.e. have no parent or children defined). This makes it possible to use inner joins rather than severely less effiecient outer joins. The objective is: Given a pair of (Property, Value), return all object triples that have that property with that value either defined explicitly or inherited from a parent. NOTE 1: An object triple (X,Y,Z) is considered a "parent" of triple (A,B,C) when it is true that either X = A or X is a parent of A, and the same is true for (Y,B) and (Z,C). NOTE 2: A property defined on a closer parent "overrides" the same property defined on a more distant parent. NOTE 3: When (A,B,C) has two parents - (X1,Y1,Z1) and (X2,Y2,Z2), then (X1,Y1,Z1) is considered a "closer" parent when: (a) X2 is a parent of X1, or (b) X2 = X1 and Y2 is a parent of Y1, or (c) X2 = X1 and Y2 = Y1 and Z2 is a parent of Z1 In other words, the "closeness" in ancestry for triples is defined based on the first components of the triples first, then on the second components, then on the third components. This rule establishes an unambigous partial order for triples in terms of ancestry. For example, given the pair of (P1, "abc"), the result set of triples will be: O1, O2, O3 -- Defined explicitly O1, O2, O5 -- Because O5 inherits from O3 O1, O4, O3 -- Because O4 inherits from O2 O1, O4, O5 -- Because O4 inherits from O2 and O5 inherits from O3 O2, O2, O3 -- Because O2 inherits from O1 O2, O2, O5 -- Because O2 inherits from O1 and O5 inherits from O3 O2, O4, O3 -- Because O2 inherits from O1 and O4 inherits from O2 O3, O2, O3 -- Because O3 inherits from O1 O3, O2, O5 -- Because O3 inherits from O1 and O5 inherits from O3 O3, O4, O3 -- Because O3 inherits from O1 and O4 inherits from O2 O3, O4, O5 -- Because O3 inherits from O1 and O4 inherits from O2 and O5 inherits from O3 O4, O2, O3 -- Because O4 inherits from O1 O4, O2, O5 -- Because O4 inherits from O1 and O5 inherits from O3 O4, O4, O3 -- Because O4 inherits from O1 and O4 inherits from O2 O5, O2, O3 -- Because O5 inherits from O1 O5, O2, O5 -- Because O5 inherits from O1 and O5 inherits from O3 O5, O4, O3 -- Because O5 inherits from O1 and O4 inherits from O2 O5, O4, O5 -- Because O5 inherits from O1 and O4 inherits from O2 and O5 inherits from O3 Note that the triple (O2, O4, O5) is absent from this list. This is because property P1 is defined explicitly for the triple (O2, O4, O5) and this prevents that triple from inheriting that property from (O1, O2, O3). Also note that the triple (O4, O4, O5) is also absent. This is because that triple inherits its value of P1="098" from (O2, O4, O5), because it is a closer parent than (O1, O2, O3). The straightforward way to do it is the following. 
First, for every triple that a property is defined on, select all possible child triples: select Children1.Id as O1, Children2.Id as O2, Children3.Id as O3, tp.Property, tp.Value from TriplesAndProperties tp -- Select corresponding objects of the triple inner join Objects as Objects1 on Objects1.Id = tp.O1 inner join Objects as Objects2 on Objects2.Id = tp.O2 inner join Objects as Objects3 on Objects3.Id = tp.O3 -- Then add all possible children of all those objects inner join Objects as Children1 on Objects1.Id [isparentof] Children1.Id inner join Objects as Children2 on Objects2.Id [isparentof] Children2.Id inner join Objects as Children3 on Objects3.Id [isparentof] Children3.Id But this is not the whole story: if some triple inherits the same property from several parents, this query will yield conflicting results. Therefore, second step is to select just one of those conflicting results: select * from ( select Children1.Id as O1, Children2.Id as O2, Children3.Id as O3, tp.Property, tp.Value, row_number() over( partition by Children1.Id, Children2.Id, Children3.Id, tp.Property order by Objects1.[depthInTheTree] descending, Objects2.[depthInTheTree] descending, Objects3.[depthInTheTree] descending ) as InheritancePriority from ... (see above) ) where InheritancePriority = 1 The window function row_number() over( ... ) does the following: for every unique combination of objects triple and property, it sorts all values by the ancestral distance from the triple to the parents that the value is inherited from, and then I only select the very first of the resulting list of values. A similar effect can be achieved with a GROUP BY and ORDER BY statements, but I just find the window function semantically cleaner (the execution plans they yield are identical). The point is, I need to select the closest of contributing ancestors, and for that I need to group and then sort within the group. And finally, now I can simply filter the result set by Property and Value. This scheme works. Very reliably and predictably. It has proven to be very powerful for the business task it implements. The only trouble is, it is awfuly slow. One might point out the join of seven tables might be slowing things down, but that is actually not the bottleneck. According to the actual execution plan I'm getting from the SQL Management Studio (as well as SQL Profiler), the bottleneck is the sorting. The problem is, in order to satisfy my window function, the server has to sort by Children1.Id, Children2.Id, Children3.Id, tp.Property, Parents1.[depthInTheTree] descending, Parents2.[depthInTheTree] descending, Parents3.[depthInTheTree] descending, and there can be no indexes it can use, because the values come from a cross join of several tables. EDIT: Per Michael Buen's suggestion (thank you, Michael), I have posted the whole puzzle to sqlfiddle here. One can see in the execution plan that the Sort operation accounts for 32% of the whole query, and that is going to grow with the number of total rows, because all the other operations use indexes. Usually in such cases I would use an indexed view, but not in this case, because indexed views cannot contain self-joins, of which there are six. The only way that I can think of so far is to create six copies of the Objects table and then use them for the joins, thus enabling an indexed view. Did the time come that I shall be reduced to that kind of hacks? The despair sets in.

    Read the article

  • Hidden Features of C#?

    - by Serhat Özgel
    This came to my mind after I learned the following from this question: where T : struct We, C# developers, all know the basics of C#. I mean declarations, conditionals, loops, operators, etc. Some of us even mastered the stuff like Generics, anonymous types, lambdas, linq, ... But what are the most hidden features or tricks of C# that even C# fans, addicts, experts barely know? Here are the revealed features so far: Keywords yield by Michael Stum var by Michael Stum using() statement by kokos readonly by kokos as by Mike Stone as / is by Ed Swangren as / is (improved) by Rocketpants default by deathofrats global:: by pzycoman using() blocks by AlexCuse volatile by Jakub Šturc extern alias by Jakub Šturc Attributes DefaultValueAttribute by Michael Stum ObsoleteAttribute by DannySmurf DebuggerDisplayAttribute by Stu DebuggerBrowsable and DebuggerStepThrough by bdukes ThreadStaticAttribute by marxidad FlagsAttribute by Martin Clarke ConditionalAttribute by AndrewBurns Syntax ?? operator by kokos number flaggings by Nick Berardi where T:new by Lars Mæhlum implicit generics by Keith one-parameter lambdas by Keith auto properties by Keith namespace aliases by Keith verbatim string literals with @ by Patrick enum values by lfoust @variablenames by marxidad event operators by marxidad format string brackets by Portman property accessor accessibility modifiers by xanadont ternary operator (?:) by JasonS checked and unchecked operators by Binoj Antony implicit and explicit operators by Flory Language Features Nullable types by Brad Barker Currying by Brian Leahy anonymous types by Keith __makeref __reftype __refvalue by Judah Himango object initializers by lomaxx format strings by David in Dakota Extension Methods by marxidad partial methods by Jon Erickson preprocessor directives by John Asbeck DEBUG pre-processor directive by Robert Durgin operator overloading by SefBkn type inferrence by chakrit boolean operators taken to next level by Rob Gough pass value-type variable as interface without boxing by Roman Boiko programmatically determine declared variable type by Roman Boiko Static Constructors by Chris Easier-on-the-eyes / condensed ORM-mapping using LINQ by roosteronacid Visual Studio Features select block of text in editor by Himadri snippets by DannySmurf Framework TransactionScope by KiwiBastard DependantTransaction by KiwiBastard Nullable<T> by IainMH Mutex by Diago System.IO.Path by ageektrapped WeakReference by Juan Manuel Methods and Properties String.IsNullOrEmpty() method by KiwiBastard List.ForEach() method by KiwiBastard BeginInvoke(), EndInvoke() methods by Will Dean Nullable<T>.HasValue and Nullable<T>.Value properties by Rismo GetValueOrDefault method by John Sheehan Tips & Tricks nice method for event handlers by Andreas H.R. Nilsson uppercase comparisons by John access anonymous types without reflection by dp a quick way to lazily instantiate collection properties by Will JavaScript-like anonymous inline-functions by roosteronacid Other netmodules by kokos LINQBridge by Duncan Smart Parallel Extensions by Joel Coehoorn

    Read the article

  • Using RabbitMQ (Java client), is there a way to determine if network connection is closed during con

    - by MItch Branting
    I'm using RabbitMQ on RHEL 5.3 using the Java client. I have 2 nodes (machines). Node1 is consuming messages from a queue on Node2 using the Java helper class QueueingConsumer. QueueingConsumer consumer = new QueueingConsumer(channel); channel.basicConsume("MyQueueOnNode2", noAck, consumer); while (true) { QueueingConsumer.Delivery delivery = consumer.nextDelivery(); ... Process message - delivery.getBody() } If the interface is brought down on Node1 or Node2 (e.g. ifconfig eth1 down), the client (above) never knows the network isn't there anymore. Does RabbitMQ provide some type of configuration on the Java client that can be used to determine if the connection has gone away. Shutting down the RabbitMQ server on Node2 will trigger a ShutdownSignalException, which can be caught and the app can go into a reconnect loop. But bringing down the interface doesn't cause any type of exception to happen, so the code will be waiting forever on consumer.nextDelivery(). I've also tried using the timeout version of this call. e.g. QueueingConsumer consumer = new QueueingConsumer(channel); channel.basicConsume("MyQueueOnNode2", noAck, consumer); int timeout_ms = 30000; while (true) { QueueingConsumer.Delivery delivery = consumer.nextDelivery(timeout_ms); if (delivery == null) { if (channel.isOpen() == false) // Seems to always return true { throw new ShutdownSignalException(); } } else { ... Process message - delivery.getBody() } } but appears that this always returns true (even though the interface is down). I assume registering for the ShutdownListener on the connection will yield the same results, but haven't tried that yet. Is there a way to configure some sort of heartbeat, or do you just have to write custom lease logic (e.g. "I'm here now") in order to get this to work?
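
    One avenue to explore, sketched below: AMQP heartbeats, which the Java client exposes on ConnectionFactory. With heartbeats on, a dead interface should eventually surface as a ShutdownSignalException (or an IOException on the blocked call) instead of hanging forever. Method names may differ slightly between client versions, so treat this as an illustration:

        ConnectionFactory factory = new ConnectionFactory();
        factory.setHost("node2");                  // hostname is illustrative
        factory.setRequestedHeartbeat(30);         // seconds; 0 disables heartbeats
        Connection connection = factory.newConnection();   // throws IOException
        Channel channel = connection.createChannel();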

    Read the article

  • Nokogiri pull parser (Nokogiri::XML::Reader) issue with self closing tag

    - by Vlad Zloteanu
    I have a huge XML (400MB) file containing products. Using a DOM parser is therefore excluded, so I tried to parse and process it using a pull parser. Below is a snippet from the each_product(&block) method where I iterate over the product list. Basically, using a stack, I transform each <product> ... </product> node into a hash and process it. while (reader.read) case reader.node_type #start element when Nokogiri::XML::Node::ELEMENT_NODE elem_name = reader.name.to_s stack.push([elem_name, {}]) #text element when Nokogiri::XML::Node::TEXT_NODE, Nokogiri::XML::Node::CDATA_SECTION_NODE stack.last[1] = reader.value #end element when Nokogiri::XML::Node::ELEMENT_DECL return if stack.empty? elem = stack.pop parent = stack.last if parent.nil? yield(elem[1]) elem = nil next end key = elem[0] parent_childs = parent[1] # ... parent_childs[key] = elem[1] end The issue is with self-closing tags (e.g. <country/>), as I cannot tell the difference between a 'normal' and a 'self-closing' tag. They both are of type Nokogiri::XML::Node::ELEMENT_NODE and I am not able to find any other discriminator in the documentation. Any ideas on how to solve this issue?
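
    One possible discriminator, assuming a reasonably recent Nokogiri: Nokogiri::XML::Reader exposes an empty_element? predicate that is true for self-closing tags such as <country/>. Because such a node never produces a matching end-element event, the open branch has to close it immediately, roughly like this sketch:

        when Nokogiri::XML::Node::ELEMENT_NODE
          elem_name = reader.name.to_s
          if reader.empty_element?
            # no end-element event will follow, so record the empty child right away
            stack.last[1][elem_name] = {} unless stack.empty?
          else
            stack.push([elem_name, {}])
          end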

    Read the article

  • Eigenvector computation using OpenCV

    - by Andriyev
    Hi I have this matrix A, representing similarities of pixel intensities of an image. For example: Consider a 10 x 10 image. Matrix A in this case would be of dimension 100 x 100, and element A(i,j) would have a value in the range 0 to 1, representing the similarity of pixel i to j in terms of intensity. I am using OpenCV for image processing and the development environment is C on Linux. Objective is to compute the Eigenvectors of matrix A and I have used the following approach: static CvMat mat, *eigenVec, *eigenVal; static double A[100][100]={}, Ain1D[10000]={}; int cnt=0; //Converting matrix A into a one dimensional array //Reason: That is how cvMat requires it for(i = 0;i < affnDim;i++){ for(j = 0;j < affnDim;j++){ Ain1D[cnt++] = A[i][j]; } } mat = cvMat(100, 100, CV_32FC1, Ain1D); cvEigenVV(&mat, eigenVec, eigenVal, 1e-300); for(i=0;i < 100;i++){ val1 = cvmGet(eigenVal,i,0); //Fetching Eigen Value for(j=0;j < 100;j++){ matX[i][j] = cvmGet(eigenVec,i,j); //Fetching each component of Eigenvector i } } Problem: After execution I get nearly all components of all the Eigenvectors to be zero. I tried different images and also tried populating A with random values between 0 and 1, but the same result. Few of the top eigenvalues returned look like the following: 9805401476911479666115491135488.000000 -9805401476911479666115491135488.000000 -89222871725331592641813413888.000000 89222862280598626902522986496.000000 5255391142666987110400.000000 I am now thinking on the lines of using cvSVD() which performs singular value decomposition of real floating-point matrix and might yield me the eigenvectors. But before that I thought of asking it here. Is there anything absurd in my current approach? Am I using the right API i.e. cvEigenVV() for the right input matrix (my matrix A is a floating point matrix)? cheers
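
    Two likely problems stand out, sketched below: eigenVec and eigenVal are never allocated (they are uninitialized pointers), and the double[] buffer is wrapped in a CV_32FC1 header, so cvEigenVV reinterprets the bytes as 32-bit floats and produces garbage. Allocating the outputs and keeping everything in 64-bit floats should behave better; the exact cvEigenVV argument list varies by OpenCV version:

        /* allocate the outputs instead of leaving the pointers uninitialized */
        CvMat *eigenVec = cvCreateMat(100, 100, CV_64FC1);
        CvMat *eigenVal = cvCreateMat(100, 1, CV_64FC1);

        /* CV_64FC1 matches the double data in Ain1D */
        CvMat mat = cvMat(100, 100, CV_64FC1, Ain1D);

        /* note: cvEigenVV overwrites the input matrix; DBL_EPSILON is from <float.h> */
        cvEigenVV(&mat, eigenVec, eigenVal, DBL_EPSILON);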

    Read the article

  • IDataRecord.IsDBNull causes an System.OverflowException (Arithmetic Overflow)

    - by Ciddan
    Hi! I have a OdbcDataReader that gets data from a database and returns a set of records. The code that executes the query looks as follows: OdbcDataReader reader = command.ExecuteReader(); while (reader.Read()) { yield return reader.AsMovexProduct(); } The method returns an IEnumerable of a custom type (MovexProduct). The convertion from an IDataRecord to my custom type MovexProduct happens in an extension-method that looks like this (abbrev.): public static MovexProduct AsMovexProduct(this IDataRecord record) { var movexProduct = new MovexProduct { ItemNumber = record.GetString(0).Trim(), Name = record.GetString(1).Trim(), Category = record.GetString(2).Trim(), ItemType = record.GetString(3).Trim() }; if (!record.IsDBNull(4)) movexProduct.Status1 = int.Parse(record.GetString(4).Trim()); // Additional properties with IsDBNull checks follow here. return movexProduct; } As soon as I hit the if (!record.IsDBNull(4)) I get an OverflowException with the exception message "Arithmetic operation resulted in an overflow." StackTrace: System.OverflowException was unhandled by user code Message=Arithmetic operation resulted in an overflow. Source=System.Data StackTrace: at System.Data.Odbc.OdbcDataReader.GetSqlType(Int32 i) at System.Data.Odbc.OdbcDataReader.GetValue(Int32 i) at System.Data.Odbc.OdbcDataReader.IsDBNull(Int32 i) at JulaAil.DataService.Movex.Data.ExtensionMethods.AsMovexProduct(IDataRecord record) [...] I've never encountered this problem before and I cannot figure out why I get it. I have verified that the record exists and that it contains data and that the indexes I provide are correct. I should also mention that I get the same exception if I change the if-statemnt to this: if (record.GetString(4) != null). What does work is encapsulating the property-assignment in a try {} catch (NullReferenceException) {} block - but that can lead to performance-loss (can it not?). I am running the x64 version of Visual Studio and I'm using a 64-bit odbc driver. Has anyone else come across this? Any suggestions as to how I could solve / get around this issue? Many thanks!

    Read the article

  • Improving long-polling Ajax performance

    - by Bears will eat you
    I'm writing a webapp (Firefox-compatible only) which uses long polling (via jQuery's ajax abilities) to send more-or-less constant updates from the server to the client. I'm concerned about the effects of leaving this running for long periods of time, say, all day or overnight. The basic code skeleton is this: function processResults(xml) { // do stuff with the xml from the server } function fetch() { setTimeout(function () { $.ajax({ type: 'GET', url: 'foo/bar/baz', dataType: 'xml', success: function (xml) { processResults(xml); fetch(); }, error: function (xhr, type, exception) { if (xhr.status === 0) { console.log('XMLHttpRequest cancelled'); } else { console.debug(xhr); fetch(); } } }); }, 500); } (The half-second "sleep" is so that the client doesn't hammer the server if the updates are coming back to the client quickly - which they usually are.) After leaving this running overnight, it tends to make Firefox crawl. I'd been thinking that this could be partially caused by a large stack depth since I've basically written an infinitely recursive function. However, if I use Firebug and throw a breakpoint into fetch, it looks like this is not the case. The stack that Firebug shows me is only about 4 or 5 frames deep, even after an hour. One of the solutions I'm considering is changing my recursive function to an iterative one, but I can't figure out how I would insert the delay in between Ajax requests without spinning. I've looked at the JS 1.7 "yield" keyword but I can't quite wrap my head around it, to figure out if it's what I need here. Is the best solution just to do a hard refresh on the page periodically, say, once every hour? Is there a better/leaner long-polling design pattern that won't put a hurt on the browser even after running for 8 or 12 hours? Or should I just skip the long polling altogether and use a different "constant update" pattern since I usually know how frequently the server will have a response for me?
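
    For what it's worth, the setTimeout/success chain is not true recursion - each callback starts on a fresh stack - so stack depth is unlikely to be the culprit; leaked XHR/DOM references or Firebug's console.debug output accumulating overnight are more common causes. An iterative shape that still avoids spinning is easy to sketch with a guarded setInterval:

        var inFlight = false;
        setInterval(function () {
            if (inFlight) return;           // skip this tick if a request is still pending
            inFlight = true;
            $.ajax({
                type: 'GET',
                url: 'foo/bar/baz',
                dataType: 'xml',
                success: processResults,
                complete: function () { inFlight = false; }  // always release the guard
            });
        }, 500);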

    Read the article

  • Problem with boost::find_format_all, boost::regex_finder and custom regex formatter (bug boost 1.42)

    - by Nikko
    I have code that has been working for almost 4 years (since boost 1.33) and today I went from boost 1.36 to boost 1.42 and now I have a problem. I'm calling a custom formatter on a string to format parts of the string that match a REGEX. For instance, a string like: "abc;def:" will be changed to "abc\2Cdef\3B" if the REGEX contains "([;:])" boost::find_format_all( mystring, boost::regex_finder( REGEX ), custom_formatter() ); The custom formatter looks like this: struct custom_formatter { template< typename T > std::string operator()( const T & s ) const { std::string matchStr = s.match_results().str(1); // perform substitutions return matchStr; } }; This worked fine, but with boost 1.42 I now have "non initialized" s.match_results(), which yields a boost::exception_detail::clone_implINS0_::error_info_injectorISt11logic_errorEEEE - Attempt to access an uninitialzed boost::match_results< class. This means that sometimes I am in the functor to format a string but there is no match. Am I doing something wrong? Or is it normal to enter the functor when there is no match and I should check against something? For now my solution is to try{}catch(){} the exception and everything works fine, but somehow that doesn't feel very good. EDIT1: Actually I have a new empty match at the end of each string to parse. EDIT2: one solution inspired by ablaeul template< typename T > std::string operator()( const T & s ) const { if( s.begin() == s.end() ) return std::string(); std::string matchStr = s.match_results().str(1); // perform substitutions return matchStr; } EDIT3: Seems to be a bug in (at least) boost 1.42

    Read the article

  • How can I Fail a WebTest?

    - by craigb
    I'm using Microsoft WebTest and want to be able to do something similar to NUnit's Assert.Fail(). The best I have come up with is to throw new WebTestException(), but this shows in the test results as an Error rather than a Failure. Other than reflecting on the WebTest to set a private member variable to indicate the failure, is there something I've missed? EDIT: I have also used the Assert.Fail() method, but this still shows up as an error rather than a failure when used from within WebTest, and the Outcome property is read-only (has no public setter). EDIT: Well, now I'm really stumped. I used reflection to set the Outcome property to Failed but the test still passes! Here's the code that sets the Outcome to failed: public static class WebTestExtensions { public static void Fail(this WebTest test) { var method = test.GetType().GetMethod("set_Outcome", BindingFlags.NonPublic | BindingFlags.Instance); method.Invoke(test, new object[] {Outcome.Fail}); } } and here's the code that I'm trying to fail: public override IEnumerator<WebTestRequest> GetRequestEnumerator() { this.Fail(); yield return new WebTestRequest("http://google.com"); } Outcome is getting set to Outcome.Fail but apparently the WebTest framework doesn't really use this to determine test pass/fail results.

    Read the article

  • Can someone please explain this lazy evaluation code?

    - by Tejs
    So, this question was just asked on SO: http://stackoverflow.com/questions/2740001/how-to-handle-an-infinite-ienumerable My sample code: public static void Main(string[] args) { foreach (var item in Numbers().Take(10)) Console.WriteLine(item); Console.ReadKey(); } public static IEnumerable<int> Numbers() { int x = 0; while (true) yield return x++; } Can someone please explain why this is lazy evaluated? I've looked up this code in Reflector, and I'm more confused than when I began. Reflector outputs: public static IEnumerable<int> Numbers() { return new <Numbers>d__0(-2); } For the numbers method, and looks to have generated a new type for that expression: [DebuggerHidden] public <Numbers>d__0(int <>1__state) { this.<>1__state = <>1__state; this.<>l__initialThreadId = Thread.CurrentThread.ManagedThreadId; } This makes no sense to me. I would have assumed it was an infinite loop until I put that code together and executed it myself.
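
    The short version: the compiler rewrites the iterator body into that hidden state-machine class (<Numbers>d__0); nothing in Numbers() runs when you call it, and each MoveNext() executes only as far as the next yield return, which is why Take(10) can stop an infinite loop. A hand-written equivalent is roughly this sketch (simplified - the real generated type also implements IEnumerable<int> and tracks a state field):

        // using System; using System.Collections; using System.Collections.Generic;
        class NumbersEnumerator : IEnumerator<int>
        {
            private int x = 0;
            public int Current { get; private set; }
            object IEnumerator.Current { get { return Current; } }

            public bool MoveNext()
            {
                Current = x++;   // the loop body runs only when the caller asks for a value
                return true;     // "while (true)" means there is always another value
            }

            public void Reset() { throw new NotSupportedException(); }
            public void Dispose() { }
        }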

    Read the article

  • heroku time zone problem

    - by Ole Morten Amundsen
    Why does Time.now yield the server local time when I have set another time zone in my environment.rb? config.time_zone = 'Copenhagen' I've put this in a view <p> Time.zone <%= Time.zone %> </p> <p> Time.now <%= Time.now %> </p> <p> Time.now.utc <%= Time.now.utc %> </p> <p> Time.zone.now <%= Time.zone.now %> </p> <p> Time.zone.today <%= Time.zone.today %> </p> rendering this result in my app on heroku: Time.zone (GMT+01:00) Copenhagen Time.now Mon Apr 26 08:28:21 -0700 2010 Time.now.utc Mon Apr 26 15:28:21 UTC 2010 Time.zone.now 2010-04-26 17:28:21 +0200 Time.zone.today 2010-04-26 Time.zone.now yields the correct result. Do I have to switch from Time.now to Time.zone.now everywhere? Seems cumbersome. I truly don't care what the local time of the server is; it's giving me loads of trouble due to extensive use of Time.now. Am I misunderstanding anything fundamental here?

    Read the article

  • How to explain to users the advantages of dumb primary key?

    - by Hao
    Primary key attractiveness I have a boss(and also users) that wants primary key to be sophisticated/smart/attractive control number(sort of like Social Security number, or credit card number format) I just padded the primary key(in Views) with zeroes to appease their desire to make the control number sophisticated,smart and attractive. But they wanted it as: first 2 digits as client code, then 4 digits as year year, then last 4 digits as transaction number on that client on a given year, then reset the transaction number of client to 1 when next year flows. Each client's transaction starts with 1. e.g. WM20090001, WM20090002, BB2009001, WM20100001, BB20100001 But as I wanted to make things as simple as possible, I forgo embedding their suggested smartness in primary key, I just keep the primary key auto increments regardless of client and year. But to make it not dull-looking(they really are adamant to make the primary key as smart control number), I made the primary key appears to them smart, on view query, I put the client code and four digit year code on front of the eight-zero padded autoincrement key, i.e. WM200900000001. Sort of slug-like information on autoincremented primary key. Keeping primary key autoincrement regardless of any other information, we are able keep other potential side effects problem when they edit a record, for example, if they made a mistake of entering the transaction on WM, then they edit the client code to BB, if we use smart primary key, the primary keys of WM customer will have gaps in their control number. Or worse yet, instead of letting the control numbers have gaps/holes, the user will request that subsequent records of that gap should shift up to that gap and have their subsequent primary keys re-adjust(decremented). How do you deal with these user requests(reasonable or otherwise)? Do you yield to their request? Or just continue using dumb primary key and explain them the repercussions of having a very smart/sophisticated primary key and educate them the significant advantages of having a dumb primary key? P.S. quotable quote(http://articles.techrepublic.com.com/5100-10878_11-1044961.html): "If you hold your tongue the first time users ask what is for them a reasonable request, things will work a lot better in the end."

    Read the article

  • Multithreaded search with UISearchDisplayController

    - by Kulpreet
    I'm sort of new to any sort of multithreading and simply can't seem to get a simple search method working on a background thread properly. Everything seems to be in order with an NSAutoreleasePool and the UI being updated on the main thread. The app doesn't crash and does perform a search in the background but the search results yield the several of the same items several times depending on how fast I type it in. The search works properly without the multithreading (which is commented out), but is very slow because of the large amounts of data I am working with. Here's the code: - (void)filterContentForSearchText:(NSString*)searchText { NSAutoreleasePool *apool = [[NSAutoreleasePool alloc] init]; isSearching = YES; //[self.filteredListContent removeAllObjects]; // First clear the filtered array. for (Entry *entry in appDelegate.entries) { NSComparisonResult result = [entry.item compare:searchText options:(NSCaseInsensitiveSearch|NSDiacriticInsensitiveSearch) range:NSMakeRange(0, [searchText length])]; if (result == NSOrderedSame) { [self.filteredListContent addObject:entry]; } } isSearching=NO; [self.searchDisplayController.searchResultsTableView performSelectorOnMainThread:(@selector(reloadData)) withObject:nil waitUntilDone:NO]; //[self.searchDisplayController.searchResultsTableView reloadData]; [apool drain]; } - (BOOL)searchDisplayController:(UISearchDisplayController *)controller shouldReloadTableForSearchString:(NSString *)searchString { [NSObject cancelPreviousPerformRequestsWithTarget:self selector:@selector(filteredListContent:) object:searchString]; [self.filteredListContent removeAllObjects]; // First clear the filtered array. [self performSelectorInBackground:(@selector(filterContentForSearchText:)) withObject:searchString]; //[self filterContentForSearchText:searchString]; // Return YES to cause the search result table view to be reloaded. return NO; }

    Read the article

  • How do I encapsulate form/post/validation[/redirect] in ViewUserControl in ASP.Net MVC 2

    - by paul
    What I am trying to achieve: encapsulate a Login (or any) Form to be reused across site post to self when Login/validation fails, show original page with Validation Summary (some might argue to just post to Login Page and show Validation Summary there; if what I'm trying to achieve isn't possible, I will just go that route) when Login succeeds, redirect to /App/Home/Index also, want to: stick to PRG principles avoid ajax keep Login Form (UserController.Login()) as encapsulated as possible; avoid having to implement HomeController.Login() since the Login Form might appear elsewhere All but the redirect works. My approach thus far has been: Home/Index includes Login Form: <%Html.RenderAction("Login","User");%> User/Login ViewUserControl<UserLoginViewModel> includes: <%=Html.ValidationSummary("") % using(Html.BeginForm()){} includes hidden form field "userlogin"="1" public class UserController : BaseController { ... [AcceptPostWhenFieldExists(FieldName = "userlogin")] public ActionResult Login(UserLoginViewModel model, FormCollection form){ if (ModelState.IsValid) { if(checkUserCredentials()) { setUserCredentials() return this.RedirectToAction<Areas.App.Controllers.HomeController>(x = x.Index()); } else { return View(); } } ... } Works great when: ModelState or User Credentials fail -- return View() does yield to Home/Index and displays appropriate validation summary. (I have a Register Form on the same page, using the same structure. Each form's validation summary only shows when that form is submitted.) Fails when: ModelState and User Credentials valid -- RedirectToAction<>() gives following error: "Child actions are not allowed to perform redirect actions." It seems like in the Classic ASP days, this would've been solved with Response.Buffer=True. Is there an equivalent setting or workaround now? Btw, running: ASP.Net 4, MVC 2, VS 2010, Dev/Debugging Web Server I hope all of that makes sense. So, what are my options? Or where am I going wrong in my approach? tia!

    Read the article

  • Model Binding, a simple, simple question

    - by Paul Hatcherian
    I have a struct which works much like the System.Nullable type: public struct SpecialProperty<T> { public static implicit operator T(SpecialProperty<T> value) { return value.Value; } public static implicit operator SpecialProperty<T>(T value) { return new TrackChanges<T> { Value = value }; } T internalValue; public T Value { get { return internalValue; } set { internalValue = value; } } public override bool Equals(object other) { return Value.Equals(other); } public override int GetHashCode() { return Value.GetHashCode(); } public override string ToString() { return Value.ToString(); } } I'm trying to use it with ASP.NET MVC binding. Using the default customer model binder the property will always yield null. I can fix this by adding ".Value" to the end of every form input name, but I just want it to bind to the new type directly using some sort of custom model binder, but all the solutions I've tried seemed needlessly complex. I feel like I should be able to extend the default binder and with a few lines of code redirect the property binding to the entire model using implicit conversion. I don't quite get the binding paradigm of the default binder, but it seems really stuck on this distinction between the model and model properties. What is the simplest method to do this? Thanks!

    Read the article

  • Tools for Maintaining Branches in SVN

    - by Chris Conway
    My team uses SVN for source control. Recently, I've been working on a branch with occasional merges from the trunk and it's been a fairly annoying experience (cf. Joel Spolsky's "Subversion Story #1"), so I've been looking at alternative ways to manage branches and merging. Given that a centralized SVN repository is non-negotiable, what I'd like is a set of tools that satisfy the following conditions. Complete revision history should be stored in SVN for both trunk and branches. Merging in either direction (and potentially criss-crossing) should be relatively painless. Merging history should be stored in SVN to the greatest extent possible. I've looked at both git-svn and bzr-svn and neither seems to be up to the job—basically, given the revision history they can export from the SVN repository, they can't seem to do any better a job handling merges than SVN can. For example, after cloning the repository with git, the revision history for my branch shows the original branch off of trunk, but git doesn't "see" any of the interim SVN merges as "native" merges—the revision history is one long line. As a result, any attempts to merge from trunk in git yield just as many conflicts as an SVN merge would. (Besides, the git-svn documentation explicitly warns against using git to merge between branches.) Is there a way to adjust my workflow to make git satisfy the above requirements? Maybe I just need tips or tricks (or a separate merging tool?) to help SVN be better at merging into branches?

    Read the article

  • Cocoa memory management

    - by silvio
    At various points during my application's workflow, I need to show a view. That view is quite memory intensive, so I want it to be deallocated when it gets discarded by the user. So, I wrote the following code: - (MyView *)myView { if (myView != nil) return myView; myView = [[UIView alloc] initWithFrame:CGRectZero]; // allocate memory if necessary. // further init here return myView; } - (void)discardView { [myView discard]; // the discard method puts the view offscreen. [myView release]; // free memory! } - (void)showView { view = [self myView]; // more code that puts the view onscreen. } Unfortunately, this only works the first time. Subsequent requests to put the view onscreen result in "message sent to deallocated instance" errors. Apparently, a deallocated instance isn't the same thing as nil. I thought about putting an additional line after [myView release] that reads myView = nil. However, that could result in errors (any calls to myView after that line would probably yield errors). So, how can I solve this problem?

    Read the article

  • Spaces and backslashes in Visual Studio build events

    - by gencha
    I have an application that is supposed to aid my project in terms of pre- and post-build event handling. I'm using ndesk.options for command line argument parsing. Which gave me weird results when my project path contains spaces. I thought this was the fault of ndesk.options but I guess my own application is to blame. I call my application as a post-built event like so: build.exe --in="$(ProjectDir)" --out="c:\out\" A simple foreach over args[] displays the following: --in=c:\my project" --out=c:\out" What happened is that the last " in each parameter was treated as if it was escaped. Thus the trailing backslash was removed. And the whole thing is treated as a single argument. Now I thought I was being smart by simply escaping the first " as well, like so: build.exe --in=\"$(ProjectDir)" --out=\"c:\out\" In that case the resulting args[] look like this: --path="c:\my project" --out="c:\out" The trailing backslash in the parameters is still swallowed and the first parameter is now split up. Passing this args[] to ndesk.options will then yield wrong results. How should the right command line look so that the correct elements end up in the correct args[] slots? Alternatively, how is one supposed to parse command line arguments like these with or without ndesk.options? Any suggestion is welcome. Thanks in advance

    Read the article

  • Code Golf: Connect 4

    - by Matthieu M.
    If you don't know the Connect 4 game, follow the link :) I used to play it a lot when I was a child. At least until my little sister got bored with me winning... Anyway I was reading the Code Golf: Tic Tac Toe the other day and I thought that solving the Tic Tac Toe problem was simpler than solving the Connect 4... and wondered how much this would reflect on the number of characters a solution would yield. I thus propose a similar challenge: Find the winner The grid is given under the form of a string meant to passed as a parameter to a function. The goal of the code golf is to write the body of the function, the parameter will be b, of string type The image in the wikipedia article leads to the following representation: "....... ..RY... ..YYYR. ..RRYY. ..RYRY. .YRRRYR" (6 rows of 7 elements) but is obviously incomplete (Yellow has not won yet) There is a winner in the grid passed, no need to do error checking Remember that it might not be exactly 4 The expected output is the letter representing the winner (either R or Y) I expect perl mongers to produce the most unreadable script (along with Ook and whitespace, of course), but I am most interested in reading innovative solutions. I must admit the magic square solution for Tic Tac Toe was my personal fav and I wonder if there is a way to build a similar one with this. Well, happy Easter weekend :) Now I just have a few days to come up with a solution of my own!

    Read the article
