Search Results

Search found 2565 results on 103 pages for 'reduce'.

Page 35 of 103

  • Alternative to Amazon’s S3 service?

    - by Cory
    Just wondering: is there a good alternative to Amazon's S3 service? I like S3, but the bandwidth cost is high. I looked at Cloud Files from Rackspace, but its cost is even higher. I don't mind prepaying or making a monthly payment in order to greatly reduce the bandwidth cost. Thank you for any help.

  • PyQt4.QtWebKit.QWebFrame missing findAllElements() method on debian

    - by user249095
    I recompiled PyQt4 from source, but the issue still exists. On another Arch Linux system, it works well. Can someone help me? Thanks!

    On Debian:

        >>> import PyQt4.QtWebKit
        >>> dir(PyQt4.QtWebKit.QWebFrame.findAllElements)
        Traceback (most recent call last):
          File "<stdin>", line 1, in <module>
        AttributeError: type object 'QWebFrame' has no attribute 'findAllElements'

    On Arch Linux:

        >>> import PyQt4.QtWebKit
        >>> dir(PyQt4.QtWebKit.QWebFrame.findAllElements)
        ['__call__', '__class__', '__cmp__', '__delattr__', '__doc__', '__eq__', '__format__', '__ge__', '__getattribute__', '__gt__', '__hash__', '__init__', '__le__', '__lt__', '__module__', '__name__', '__ne__', '__new__', '__reduce__', '__reduce_ex__', '__repr__', '__self__', '__setattr__', '__sizeof__', '__str__', '__subclasshook__']
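
    QWebFrame.findAllElements() was introduced with the QWebElement API in Qt 4.6, so a likely explanation is that the Debian box has PyQt4 built against an older Qt. A quick version check (my own sketch, not from the post):

        # print the Qt and PyQt versions and whether the method is bound;
        # findAllElements() requires PyQt4 compiled against Qt >= 4.6
        from PyQt4.QtCore import QT_VERSION_STR, PYQT_VERSION_STR
        from PyQt4.QtWebKit import QWebFrame

        print("Qt: " + QT_VERSION_STR + ", PyQt: " + PYQT_VERSION_STR)
        print("has findAllElements: %s" % hasattr(QWebFrame, "findAllElements"))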

  • Boolean algebra simplification

    - by xbonez
    I need to reduce this Boolean expression to its simplest form. It's given that the simplest form contains 3 terms and 7 literals. The expression is: x'yz + w'x'z + x'y + wxy + w'y'z. We tried this in class, and even our recitation teacher could not figure it out. Any help would be appreciated.
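
    For what it's worth, Karnaugh-map grouping suggests one candidate meeting the stated 3-term, 7-literal bound: x'y + wy + w'y'z (the term w'x'z is the consensus of x'y and w'y'z, hence redundant). A brute-force equivalence check of that candidate (my own sketch, not from the post):

        # verify the candidate simplification against the original expression
        # over all 16 assignments of (w, x, y, z)
        from itertools import product

        def original(w, x, y, z):
            return ((not x) and y and z) or ((not w) and (not x) and z) or \
                   ((not x) and y) or (w and x and y) or ((not w) and (not y) and z)

        def candidate(w, x, y, z):
            return ((not x) and y) or (w and y) or ((not w) and (not y) and z)

        assert all(original(*v) == candidate(*v) for v in product((False, True), repeat=4))
        print("equivalent on all 16 assignments")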

  • How can I make the SmartGWT core smaller?

    - by Jon Anter
    I have recently written a Hello World application using SmartGWT and noticed that the size of the application is huge: in my case, over 600 KB just for that application. I think that size is obscene, so I narrowed the culprit down to two core libraries, ISC_Core and ISC_Foundation, which combine for a total size of 649 KB. Is there any way to reduce the bloat of these libraries? Any help would be appreciated.

  • Can I use Google's Protocol buffers for processing LDAP requests in my LDAP server?

    - by Naga
    Hi, I need to process incoming data in a predefined ASN.1 format (coming from a variety of clients that use a BER library to build it) in my application server. This is typically an LDAP server, where every request is in a predefined ASN.1 format. Can I use Google's Protocol Buffers to process the requests on the server side? Will it help in any way to improve the performance of my server's request handling? Will it in any way reduce the number of malloc() calls that happen while processing ASN.1 messages? Thanks, Naga

  • Can someone clarify what this Joel On Software quote means?

    - by Bob
    I was reading Joel On Software today and ran across this quote: Without understanding functional programming, you can't invent MapReduce, the algorithm that makes Google so massively scalable. The terms Map and Reduce come from Lisp and functional programming. MapReduce is, in retrospect, obvious to anyone who remembers from their 6.001-equivalent programming class that purely functional programs have no side effects and are thus trivially parallelizable. What does he mean when he says functional programs have no side effects? And how does this make parallelizing trivial?
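
    To make the parallelism point concrete, here is a minimal sketch (my own, not Joel's): because a pure function has no side effects, each application is independent of every other, so a map can be split across workers without changing the result.

        # square() is pure: its output depends only on its input and it touches
        # no shared state, so the runtime is free to evaluate the calls in any
        # order, or in parallel, and the result is identical to a serial map
        from multiprocessing import Pool

        def square(x):
            return x * x

        if __name__ == "__main__":
            with Pool(4) as pool:
                print(pool.map(square, range(10)))  # same as list(map(square, range(10)))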

  • Replace diacritic characters with "equivalent" ASCII in PHP?

    - by Dolph Mathews
    Related questions:
    http://stackoverflow.com/questions/2653739/how-to-replace-characters-in-a-java-string
    http://stackoverflow.com/questions/2393887/how-to-replace-special-characters-with-their-equivalent-such-as-a-for-a
    As in the questions above, I'm looking for a reliable, robust way to reduce any Unicode character to near-equivalent ASCII using PHP. I really want to avoid rolling my own lookup table. For example (taken from the first referenced question): Gracišce becomes Gracisce.
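
    The usual technique is to decompose the string to NFKD so each diacritic becomes a separate combining mark, then strip the marks. A sketch of the idea, shown in Python (PHP's Normalizer class and iconv()'s //TRANSLIT mode follow the same principle):

        # decompose to NFKD, then drop everything that won't encode as ASCII
        # (the combining marks), leaving the base letters behind
        import unicodedata

        def to_ascii(s):
            decomposed = unicodedata.normalize("NFKD", s)
            return decomposed.encode("ascii", "ignore").decode("ascii")

        print(to_ascii("Gracišce"))  # -> Gracisce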

  • Finding minimum cut-sets between bounded subgraphs

    - by Tore
    If a game map is partitioned into subgraphs, how do I minimize the number of edges between subgraphs? I'm trying to run A* searches through a grid-based game like Pac-Man or Sokoban, but I need to find "enclosures" first. What do I mean by enclosures? Subgraphs with as few cut edges as possible, given a maximum and minimum number of vertices per subgraph that act as soft constraints. Alternatively, you could say I am looking for the bridges between subgraphs, but it's generally the same problem.

    Given a game map like the one in the original screenshot, I want to find these enclosures (the colored regions) so that I can properly find the entrances to them and thus get a good heuristic for reaching vertices inside them.

    My motivation: the reason I'm not staying content with the performance of a simple Manhattan-distance heuristic is that an enclosure heuristic can give more optimal results without my actually having to run A* to get proper distance calculations. It would also later allow competitive blocking of opponents within these enclosures when playing Sokoban-type games, and the enclosure heuristic could be used in a minimax approach to finding goal vertices more properly.

    A possible solution to the problem is the Kernighan-Lin algorithm:

        function Kernighan-Lin(G(V,E)):
            determine a balanced initial partition of the nodes into sets A and B
            do
                A1 := A; B1 := B
                compute D values for all a in A1 and b in B1
                for (i := 1 to |V|/2)
                    find a[i] from A1 and b[i] from B1, such that
                        g[i] = D[a[i]] + D[b[i]] - 2*c[a][b] is maximal
                    move a[i] to B1 and b[i] to A1
                    remove a[i] and b[i] from further consideration in this pass
                    update D values for the elements of A1 = A1 \ a[i] and B1 = B1 \ b[i]
                end for
                find k which maximizes g_max, the sum of g[1],...,g[k]
                if (g_max > 0) then
                    exchange a[1],a[2],...,a[k] with b[1],b[2],...,b[k]
            until (g_max <= 0)
            return G(V,E)

    My problem with this algorithm is its O(n^2 * lg(n)) runtime; I am thinking of limiting the nodes in A1 and B1 to the border of each subgraph to reduce the amount of work done. I also don't understand the c[a][b] cost in the algorithm: if a and b have no edge between them, is the cost assumed to be 0 or infinity, or should I create an edge based on some heuristic?

    1. Do you know what c[a][b] is supposed to be when there is no edge between a and b?
    2. Do you think my problem is suitable for a multilevel approach? Why or why not?
    3. Do you have a good idea for how to reduce the work done by the Kernighan-Lin algorithm for my problem?
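
    On the c[a][b] question: in the standard formulation of Kernighan-Lin, c[a][b] is simply the weight of the edge between a and b, taken as 0 when no edge exists (not infinity). A minimal sketch of the D-value and gain computation under that convention (my own illustration, with a hypothetical toy graph):

        # D(v) = external cost - internal cost; missing edges contribute 0
        def D(v, own, other, c):
            return sum(c.get((v, u), 0) for u in other) - \
                   sum(c.get((v, u), 0) for u in own)

        # pick the pair (a, b) maximizing g = D[a] + D[b] - 2*c[a][b]
        def best_swap(A, B, c):
            return max(((D(a, A, B, c) + D(b, B, A, c) - 2 * c.get((a, b), 0), a, b)
                        for a in A for b in B), key=lambda t: t[0])

        # toy graph: two triangles joined by the single edge (3, 4);
        # each undirected edge is stored in both directions
        edges = {(1, 2): 1, (2, 3): 1, (1, 3): 1, (3, 4): 1,
                 (4, 5): 1, (5, 6): 1, (4, 6): 1}
        c = {**edges, **{(v, u): w for (u, v), w in edges.items()}}
        print(best_swap({1, 2, 3}, {4, 5, 6}, c))  # all gains <= 0: the cut is already minimal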

  • Mercurial Queues: combining patches

    - by Eamon Nerbonne
    I've been playing with Mercurial and Mercurial Queues, and now have a fairly reasonable working version. However, before I submit a patch, I'd like to take that spaghetti history and merge it into discrete, logical steps, rather than the semi-overlapping, repeated do-undo-redo-slightly-differently mess it is now, if only to reduce the number of patches. How do I do that?

  • When using a mocking framework and MSpec, where do you set up your stubs?

    - by Kev Hunter
    I am relatively new to using MSpec, and as I write more and more tests it becomes obvious that, to reduce duplication, you often have to use a base class for your setup, as per Rob Conery's article. I am happy with using the AssertWasCalled method to verify my expectations, but where do you set up a stub's return value? I find it useful to set the context in the base class, injecting my dependencies, but that (I think) means I need to set my stubs up in the Because delegate, which just feels wrong. Is there a better approach I am missing?

  • Portability among mobile platforms

    - by ElectricDialect
    Do any libraries or other development resources exist that can help reduce the effort involved in porting applications between various mobile platforms? In particular, I am interested in supporting iPhone, Android, and Windows Mobile. Some areas of concern include UI, client-server communication, and hardware support (e.g., camera, GPS).

  • Running a Python script where dependencies are not available: distributed computing

    - by sadhu_
    Hi, I have access to a grid (running Condor) that would potentially allow me to very substantially reduce how long my nltk-based NLP tasks take. Unfortunately, I don't have root access on the cluster, so I cannot install new packages, only run whatever is available on the Linux boxes. Python is of course available, but nltk isn't. I was wondering, however, if there might be a way around this somehow: is there a way I can still distribute the task in a self-contained 'package' of some sort? Thanks for your help.
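
    One common workaround (a sketch, assuming you can ship files with the Condor job or reach a shared filesystem; the "deps" directory name is my own): copy the nltk package directory next to your script and prepend it to sys.path before importing, so no root install is needed.

        # bundle dependencies in a ./deps directory shipped with the job,
        # then make the interpreter look there first
        import os
        import sys

        sys.path.insert(0, os.path.join(os.path.dirname(os.path.abspath(__file__)), "deps"))

        import nltk  # now resolved from ./deps instead of the system site-packages
        print(nltk.__file__)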

  • How do you solve the 15-puzzle with A-Star or Dijkstra's Algorithm?

    - by Sean
    I've read in one of my AI books that the popular algorithms for path-finding in simulations or games (A*, Dijkstra) are also used to solve the well-known "15-puzzle". Can anyone give me some pointers on how I would reduce the 15-puzzle to a graph of nodes and edges so that I could apply one of these algorithms? If I were to treat each node in the graph as a game state, wouldn't that tree become quite large? Or is that just the way to do it?
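
    A brief sketch of the usual reduction (my own illustration, not from the post): each node is a board configuration, each edge is a single slide of the blank, and the graph is generated lazily, so the search only materializes the states it actually visits rather than all 16!/2 reachable states up front.

        # a state is a tuple of 16 tiles in row-major order, 0 = the blank;
        # neighbors() yields every state reachable by one move
        def neighbors(state):
            i = state.index(0)                       # where the blank is
            row, col = divmod(i, 4)
            for dr, dc in ((-1, 0), (1, 0), (0, -1), (0, 1)):
                r, c = row + dr, col + dc
                if 0 <= r < 4 and 0 <= c < 4:
                    j = r * 4 + c
                    s = list(state)
                    s[i], s[j] = s[j], s[i]          # slide the adjacent tile
                    yield tuple(s)

        start = (1, 2, 3, 4, 5, 6, 7, 8, 9, 10, 11, 12, 13, 14, 0, 15)
        print(len(list(neighbors(start))))           # 3 legal moves from this state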

  • Regular Expression to split "\n"

    - by Gnaniyar Zubair
    I want to split on "\n" in my string in Java. For example, I have one String field that contains runs of two newlines ("\n"). I have to find every place where more than one newline occurs in a row and replace it with a single newline. For example:

        "This is Test Message

        thanks

        Zubair"

    In the example above, there are extra line breaks between "This is Test Message" and "thanks", and I want to reduce them to only one line break instead of two. How do I do that?
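
    The regex idea, sketched in Python (Java's String.replaceAll accepts the same pattern, e.g. s.replaceAll("\n{2,}", "\n")): match every run of two or more newlines and replace each run with a single one.

        # collapse every run of 2+ newlines down to one
        import re

        s = "This is Test Message\n\nthanks\n\nZubair"
        print(re.sub(r"\n{2,}", "\n", s))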

  • ORDER BY job failed in the Pig script while running EmbeddedPig using Java

    - by C.c. Huang
    I have the following Pig script, which works perfectly from the Grunt shell (it stores the results to HDFS without any issues); however, the last job (ORDER BY) fails if I run the same script using Java EmbeddedPig. If I replace the ORDER BY job with something else, such as GROUP or FOREACH ... GENERATE, the whole script succeeds under Java EmbeddedPig, so I think it's the ORDER BY that causes the issue. Does anyone have any experience with this? Any help would be appreciated!

    The Pig script:

        REGISTER pig-udf-0.0.1-SNAPSHOT.jar;
        user_similarity = LOAD '/tmp/sample-sim-score-results-31/part-r-00000' USING PigStorage('\t')
            AS (user_id: chararray, sim_user_id: chararray, basic_sim_score: float, alt_sim_score: float);
        simplified_user_similarity = FOREACH user_similarity GENERATE $0 AS user_id, $1 AS sim_user_id, $2 AS sim_score;
        grouped_user_similarity = GROUP simplified_user_similarity BY user_id;
        ordered_user_similarity = FOREACH grouped_user_similarity {
            sorted = ORDER simplified_user_similarity BY sim_score DESC;
            top = LIMIT sorted 10;
            GENERATE group, top;
        };
        top_influencers = FOREACH ordered_user_similarity GENERATE com.aol.grapevine.similarity.pig.udf.AssignPointsToTopInfluencer($1, 10);
        all_influence_scores = FOREACH top_influencers GENERATE FLATTEN($0);
        grouped_influence_scores = GROUP all_influence_scores BY bag_of_topSimUserTuples::user_id;
        influence_scores = FOREACH grouped_influence_scores GENERATE group AS user_id, SUM(all_influence_scores.bag_of_topSimUserTuples::points) AS influence_score;
        ordered_influence_scores = ORDER influence_scores BY influence_score DESC;
        STORE ordered_influence_scores INTO '/tmp/cc-test-results-1' USING PigStorage();

    The error log from Pig:

        12/04/05 10:00:56 INFO pigstats.ScriptState: Pig script settings are added to the job
        12/04/05 10:00:56 INFO mapReduceLayer.JobControlCompiler: mapred.job.reduce.markreset.buffer.percent is not set, set to default 0.3
        12/04/05 10:00:58 INFO mapReduceLayer.JobControlCompiler: Setting up single store job
        12/04/05 10:00:58 INFO jvm.JvmMetrics: Cannot initialize JVM Metrics with processName=JobTracker, sessionId= - already initialized
        12/04/05 10:00:58 INFO mapReduceLayer.MapReduceLauncher: 1 map-reduce job(s) waiting for submission.
        12/04/05 10:00:58 WARN mapred.JobClient: Use GenericOptionsParser for parsing the arguments. Applications should implement Tool for the same.
        12/04/05 10:00:58 INFO input.FileInputFormat: Total input paths to process : 1
        12/04/05 10:00:58 INFO util.MapRedUtil: Total input paths to process : 1
        12/04/05 10:00:58 INFO util.MapRedUtil: Total input paths (combined) to process : 1
        12/04/05 10:00:58 INFO filecache.TrackerDistributedCacheManager: Creating tmp-1546565755 in /var/lib/hadoop-0.20/cache/cchuang/mapred/local/archive/4334795313006396107_361978491_57907159/localhost/tmp/temp1725960134-work-6955502337234509704 with rwxr-xr-x
        12/04/05 10:00:58 INFO filecache.TrackerDistributedCacheManager: Cached hdfs://localhost/tmp/temp1725960134/tmp-1546565755#pigsample_854728855_1333645258470 as /var/lib/hadoop-0.20/cache/cchuang/mapred/local/archive/4334795313006396107_361978491_57907159/localhost/tmp/temp1725960134/tmp-1546565755
        12/04/05 10:00:58 INFO filecache.TrackerDistributedCacheManager: Cached hdfs://localhost/tmp/temp1725960134/tmp-1546565755#pigsample_854728855_1333645258470 as /var/lib/hadoop-0.20/cache/cchuang/mapred/local/archive/4334795313006396107_361978491_57907159/localhost/tmp/temp1725960134/tmp-1546565755
        12/04/05 10:00:58 WARN mapred.LocalJobRunner: LocalJobRunner does not support symlinking into current working dir.
        12/04/05 10:00:58 INFO mapred.TaskRunner: Creating symlink: /var/lib/hadoop-0.20/cache/cchuang/mapred/local/archive/4334795313006396107_361978491_57907159/localhost/tmp/temp1725960134/tmp-1546565755 <- /var/lib/hadoop-0.20/cache/cchuang/mapred/local/localRunner/pigsample_854728855_1333645258470
        12/04/05 10:00:58 INFO filecache.TrackerDistributedCacheManager: Creating symlink: /var/lib/hadoop-0.20/cache/cchuang/mapred/staging/cchuang402164468/.staging/job_local_0004/.job.jar.crc <- /var/lib/hadoop-0.20/cache/cchuang/mapred/local/localRunner/.job.jar.crc
        12/04/05 10:00:58 INFO filecache.TrackerDistributedCacheManager: Creating symlink: /var/lib/hadoop-0.20/cache/cchuang/mapred/staging/cchuang402164468/.staging/job_local_0004/.job.split.crc <- /var/lib/hadoop-0.20/cache/cchuang/mapred/local/localRunner/.job.split.crc
        12/04/05 10:00:59 INFO filecache.TrackerDistributedCacheManager: Creating symlink: /var/lib/hadoop-0.20/cache/cchuang/mapred/staging/cchuang402164468/.staging/job_local_0004/.job.splitmetainfo.crc <- /var/lib/hadoop-0.20/cache/cchuang/mapred/local/localRunner/.job.splitmetainfo.crc
        12/04/05 10:00:59 INFO filecache.TrackerDistributedCacheManager: Creating symlink: /var/lib/hadoop-0.20/cache/cchuang/mapred/staging/cchuang402164468/.staging/job_local_0004/.job.xml.crc <- /var/lib/hadoop-0.20/cache/cchuang/mapred/local/localRunner/.job.xml.crc
        12/04/05 10:00:59 INFO filecache.TrackerDistributedCacheManager: Creating symlink: /var/lib/hadoop-0.20/cache/cchuang/mapred/staging/cchuang402164468/.staging/job_local_0004/job.jar <- /var/lib/hadoop-0.20/cache/cchuang/mapred/local/localRunner/job.jar
        12/04/05 10:00:59 INFO filecache.TrackerDistributedCacheManager: Creating symlink: /var/lib/hadoop-0.20/cache/cchuang/mapred/staging/cchuang402164468/.staging/job_local_0004/job.split <- /var/lib/hadoop-0.20/cache/cchuang/mapred/local/localRunner/job.split
        12/04/05 10:00:59 INFO filecache.TrackerDistributedCacheManager: Creating symlink: /var/lib/hadoop-0.20/cache/cchuang/mapred/staging/cchuang402164468/.staging/job_local_0004/job.splitmetainfo <- /var/lib/hadoop-0.20/cache/cchuang/mapred/local/localRunner/job.splitmetainfo
        12/04/05 10:00:59 INFO filecache.TrackerDistributedCacheManager: Creating symlink: /var/lib/hadoop-0.20/cache/cchuang/mapred/staging/cchuang402164468/.staging/job_local_0004/job.xml <- /var/lib/hadoop-0.20/cache/cchuang/mapred/local/localRunner/job.xml
        12/04/05 10:00:59 INFO mapred.Task: Using ResourceCalculatorPlugin : null
        12/04/05 10:00:59 INFO mapred.MapTask: io.sort.mb = 100
        12/04/05 10:00:59 INFO mapred.MapTask: data buffer = 79691776/99614720
        12/04/05 10:00:59 INFO mapred.MapTask: record buffer = 262144/327680
        12/04/05 10:00:59 WARN mapred.LocalJobRunner: job_local_0004
        java.lang.RuntimeException: org.apache.hadoop.mapreduce.lib.input.InvalidInputException: Input path does not exist: file:/Users/cchuang/workspace/grapevine-rec/pigsample_854728855_1333645258470
            at org.apache.pig.backend.hadoop.executionengine.mapReduceLayer.partitioners.WeightedRangePartitioner.setConf(WeightedRangePartitioner.java:139)
            at org.apache.hadoop.util.ReflectionUtils.setConf(ReflectionUtils.java:62)
            at org.apache.hadoop.util.ReflectionUtils.newInstance(ReflectionUtils.java:117)
            at org.apache.hadoop.mapred.MapTask$NewOutputCollector.<init>(MapTask.java:560)
            at org.apache.hadoop.mapred.MapTask.runNewMapper(MapTask.java:639)
            at org.apache.hadoop.mapred.MapTask.run(MapTask.java:323)
            at org.apache.hadoop.mapred.LocalJobRunner$Job.run(LocalJobRunner.java:210)
        Caused by: org.apache.hadoop.mapreduce.lib.input.InvalidInputException: Input path does not exist: file:/Users/cchuang/workspace/grapevine-rec/pigsample_854728855_1333645258470
            at org.apache.hadoop.mapreduce.lib.input.FileInputFormat.listStatus(FileInputFormat.java:231)
            at org.apache.pig.backend.hadoop.executionengine.mapReduceLayer.PigFileInputFormat.listStatus(PigFileInputFormat.java:37)
            at org.apache.hadoop.mapreduce.lib.input.FileInputFormat.getSplits(FileInputFormat.java:248)
            at org.apache.pig.impl.io.ReadToEndLoader.init(ReadToEndLoader.java:153)
            at org.apache.pig.impl.io.ReadToEndLoader.<init>(ReadToEndLoader.java:115)
            at org.apache.pig.backend.hadoop.executionengine.mapReduceLayer.partitioners.WeightedRangePartitioner.setConf(WeightedRangePartitioner.java:112)
            ... 6 more
        12/04/05 10:00:59 INFO filecache.TrackerDistributedCacheManager: Deleted path /var/lib/hadoop-0.20/cache/cchuang/mapred/local/archive/4334795313006396107_361978491_57907159/localhost/tmp/temp1725960134/tmp-1546565755
        12/04/05 10:00:59 INFO mapReduceLayer.MapReduceLauncher: HadoopJobId: job_local_0004
        12/04/05 10:01:04 INFO mapReduceLayer.MapReduceLauncher: job job_local_0004 has failed! Stop running all dependent jobs
        12/04/05 10:01:04 INFO mapReduceLayer.MapReduceLauncher: 100% complete
        12/04/05 10:01:04 ERROR pigstats.PigStatsUtil: 1 map reduce job(s) failed!
        12/04/05 10:01:04 INFO pigstats.PigStats: Script Statistics:

        HadoopVersion   PigVersion    UserId   StartedAt            FinishedAt           Features
        0.20.2-cdh3u3   0.8.1-cdh3u3  cchuang  2012-04-05 10:00:34  2012-04-05 10:01:04  GROUP_BY,ORDER_BY

        Some jobs have failed! Stop running all dependent jobs

        Job Stats (time in seconds):
        JobId           Maps  Reduces  MaxMapTime  MinMapTIme  AvgMapTime  MaxReduceTime  MinReduceTime  AvgReduceTime  Alias  Feature  Outputs
        job_local_0001  0     0        0           0           0           0              0              0              all_influence_scores,grouped_user_similarity,simplified_user_similarity,user_similarity  GROUP_BY
        job_local_0002  0     0        0           0           0           0              0              0              grouped_influence_scores,influence_scores  GROUP_BY,COMBINER
        job_local_0003  0     0        0           0           0           0              0              0              ordered_influence_scores  SAMPLER

        Failed Jobs:
        JobId           Alias                     Feature   Message                       Outputs
        job_local_0004  ordered_influence_scores  ORDER_BY  Message: Job failed! Error - NA  /tmp/cc-test-results-1,

        Input(s):
        Successfully read 0 records from: "/tmp/sample-sim-score-results-31/part-r-00000"

        Output(s):
        Failed to produce result in "/tmp/cc-test-results-1"

        Counters:
        Total records written : 0
        Total bytes written : 0
        Spillable Memory Manager spill count : 0
        Total bags proactively spilled: 0
        Total records proactively spilled: 0

        Job DAG:
        job_local_0001 -> job_local_0002,
        job_local_0002 -> job_local_0003,
        job_local_0003 -> job_local_0004,
        job_local_0004

        12/04/05 10:01:04 INFO mapReduceLayer.MapReduceLauncher: Some jobs have failed! Stop running all dependent jobs

  • C# TabPage.Controls.Add too long

    - by Toto
    Hi, at run time I add one control to a TabPage, and I notice that it takes 0.5 s to do it. That is rather long, and I would like to reduce this time. I tried SuspendLayout/ResumeLayout, but for only one action it is not relevant and did not improve anything. Any ideas? Thanks

  • Unable to run MR on cluster

    - by RAVITEJA SATYAVADA
    I have a MapReduce program that runs successfully in standalone (Eclipse) mode, but when I try to run the same MR job on a cluster by exporting the jar, it shows a NullPointerException like this:

        13/06/26 05:46:22 ERROR mypackage.HHDriver: Error while configuring run method.
        java.lang.NullPointerException

    I double-checked that the run method parameters are not null, and it runs in standalone mode as well.

  • Any Open Source Pregel like framework for distributed processing of large Graphs?

    - by Akshay Bhat
    Google has described a novel framework for distributed processing on massive graphs: http://portal.acm.org/citation.cfm?id=1582716.1582723 I wanted to know whether, similar to Hadoop (MapReduce), there are any open-source implementations of this framework. I am actually in the process of writing a pseudo-distributed one using Python and the multiprocessing module, and thus wanted to know if someone else has also tried implementing it, since public information about this framework is extremely scarce (the link above and a blog post at Google Research).
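
    For readers unfamiliar with the model, here is a minimal sketch of Pregel's vertex-centric, superstep style of computation (my own illustration of the idea from the paper, not Google's code), using the classic maximum-value propagation example:

        # each superstep, every active vertex sends its value to its neighbors;
        # a vertex reactivates only when an incoming message raises its value,
        # and the computation halts when no vertex is active
        def propagate_max(graph, value):
            active = set(graph)                  # superstep 0: all vertices active
            while active:
                outbox = {}
                for v in active:
                    for u in graph[v]:
                        outbox.setdefault(u, []).append(value[v])
                active = set()
                for v, msgs in outbox.items():
                    m = max(msgs)
                    if m > value[v]:
                        value[v] = m
                        active.add(v)
            return value

        g = {"a": ["b"], "b": ["a", "c"], "c": ["b"]}
        print(propagate_max(g, {"a": 3, "b": 6, "c": 2}))  # {'a': 6, 'b': 6, 'c': 6}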

  • What would cause my Silverlight .xap to quadruple in size suddenly?

    - by Edward Tanguay
    I've been working on a Silverlight application, and I just noticed that the .xap file is now four times as big as it was. What could have caused that? Here is some other info:

    - There now seem to be many more language settings in the bin/Release directory.
    - I checked "Reduce size of .xap" under Properties/Silverlight, but that just brought it down from 1300 to 1200.
    - I reference the System.Windows.Controls.Toolkit DLL, but I was doing that even when it was 325K.

    Screenshot of build directories before and after:

  • Why is shrink_to_fit non-binding?

    - by Roger Pate
    The C++0x FCD states in 23.3.6.2 vector capacity:

        void shrink_to_fit();
        Remarks: shrink_to_fit is a non-binding request to reduce capacity() to size(). [Note: The request is non-binding to allow latitude for implementation-specific optimizations. —end note]

    Why is it non-binding, and what optimizations are intended to be allowed?
