Search Results

Search found 18546 results on 742 pages for 'evaluation order'.

  • Ruby on Rails export to CSV - maintain MySQL select statement order

    - by zekial
    Exporting some data from MySQL to a CSV file using FasterCSV. I'd like the columns in the output CSV to be in the same order as the select statement in my query. Example:

        rows = Data.find( :all, :select=>'name, age, height, weight' )
        headers = rows[0].attributes.keys
        FasterCSV.generate do |csv|
          csv << headers
          rows.each do |r|
            csv << r.attributes.values
          end
        end

    CSV output:

        height,weight,name,age
        74,212,bob,23
        70,201,fred,24
        ...

    I want the CSV columns in the same order as my select statement. Obviously the attributes method is not going to work, since the attributes hash does not preserve the select order. What is the best way to ensure that the columns in my CSV file are in the same order as the select statement? There is a lot of data, so performance is an issue, and the select statement is not static. I realize I could loop through the column names within the rows.each loop, but that seems kinda dirty.
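
    A minimal sketch of one way around this (same FasterCSV API as above): keep the column list in a single array and use it to build both the SELECT and each CSV row, so the two can never disagree.

        # Sketch: one column list drives both the query and the CSV,
        # so the output order always matches the select order.
        cols = %w[name age height weight]
        rows = Data.find(:all, :select => cols.join(', '))
        csv  = FasterCSV.generate do |out|
          out << cols                                # header row
          rows.each do |r|
            out << cols.map { |c| r.attributes[c] }  # values in column order
          end
        end

    The per-row map is one hash lookup per column, which should be noise next to the query and the CSV encoding themselves.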

  • The ORDER BY clause is invalid in views, inline functions, derived tables, subqueries, and common table expressions

    - by zurna
    I get "The ORDER BY clause is invalid in views, inline functions, derived tables, subqueries, and common table expressions, unless TOP or FOR XML is also specified." error with the following code. I initially had two tables, ADSAREAS & CATEGORIES. I started receiving this error when I removed CATEGORIES table. Select Case SIDX Case "ID" : SQLCONT1 = " AdsAreasID" Case "Page" : SQLCONT1 = " AdsAreasName" Case Else : SQLCONT1 = " AdsAreasID" End Select Select Case SORD Case "asc" : SQLCONT2 = " ASC" Case "desc" : SQLCONT2 = " DESC" Case Else : SQLCONT2 = " ASC" End Select ''# search feature ---> Select Case SEARCHFIELD Case "ID" : SQLSFIELD = "AND AdsAreasID" Case "Ads Areas" : SQLSFIELD = "AND AdsAreasName" Case Else : SQLSFIELD = "" End Select Select Case SEARCHOPER Case "eq" : SQLSOPER = " = " & SEARCHSTRING Case "ne" : SQLSOPER = " <> " & SEARCHSTRING Case "lt" : SQLSOPER = " <" & SEARCHSTRING Case "le" : SQLSOPER = " <= " & SEARCHSTRING Case "gt" : SQLSOPER = " >" & SEARCHSTRING Case "ge" : SQLSOPER = " >= " & SEARCHSTRING Case "bw" : SQLSOPER = " LIKE '" & SEARCHSTRING & "%' " Case "ew" : SQLSOPER = " LIKE '%" & SEARCHSTRING & "' " Case "cn" : SQLSOPER = " LIKE '%" & SEARCHSTRING & "%' " Case Else : SQLSOPER = "" End Select ''# search feature ---> SQL = "SELECT * FROM ( SELECT A.AdsAreasID, A.AdsAreasName, ROW_NUMBER() OVER (ORDER BY A.AdsAreasID) As Row" SQL = SQL & " FROM ADSAREAS A" SQL = SQL & " WHERE Row > ("& RecordsPageSize - RecordsPerPage &") AND Row <= ("& RecordsPageSize &") ORDER BY" & SQLCONT1 & SQLCONT2 Set objXML = objConn.Execute(SQL)

  • jQuery "Autcomplete" plugin is messing up the order of my data

    - by Max Williams
    I'm using Jorn Zaefferer's Autocomplete plugin on a couple of different pages. In both instances, the order of the displayed strings is a little bit messed up.

    Example 1: array of strings. They are basically in alphabetical order, except for General Knowledge, which has been pushed to the top:

        General Knowledge,Art and Design,Business Studies,Citizenship,Design and Technology,English,Geography,History,ICT,Mathematics,MFL French,MFL German,MFL Spanish,Music,Physical Education,PSHE,Religious Education,Science,Something Else

    Displayed strings:

        General Knowledge,Geography,Art and Design,Business Studies,Citizenship,Design and Technology,English,History,ICT,Mathematics,MFL French,MFL German,MFL Spanish,Music,Physical Education,PSHE,Religious Education,Science,Something Else

    Note that Geography has been moved up to be the second item, after General Knowledge. The rest are all fine.

    Example 2: as above, but with Cross-curricular instead of General Knowledge:

        Cross-curricular,Art and Design,Business Studies,Citizenship,Design and Technology,English,Geography,History,ICT,Mathematics,MFL French,MFL German,MFL Spanish,Music,Physical Education,PSHE,Religious Education,Science,Something Else

    Displayed strings:

        Cross-curricular,Citizenship,Art and Design,Business Studies,Design and Technology,English,Geography,History,ICT,Mathematics,MFL French,MFL German,MFL Spanish,Music,Physical Education,PSHE,Religious Education,Science,Something Else

    Here, Citizenship has been pushed into the second position. I've experimented a little, and it seems as if there is a bug that says "put items starting with the same letter as the first item immediately after the first item, and leave the rest alone". Kind of mystifying. I've tried a bit of debugging by triggering alerts inside the autocomplete plugin code, and everywhere I can see, it uses the correct order; it seems to go wrong only when the list is rendered. Any ideas anyone? max

  • Machine Learning Algorithm for Predicting Order of Events?

    - by user213060
    Simple machine learning question; there are probably numerous ways to solve this. There is an infinite stream of four possible events: 'event_1', 'event_2', 'event_3', 'event_4'. The events do not come in completely random order: we will assume that there are some complex patterns to the order in which most events arrive, and that the rest are just random. We do not know the patterns ahead of time, though. After each event is received, I want to predict what the next event will be, based on the order in which events have come in in the past. The predictor is then told what the next event actually was:

        Predictor = new_predictor()
        prev_event = False
        while True:
            event = get_event()
            if prev_event is not False:
                Predictor.last_event_was(prev_event)
            predicted_event = Predictor.predict_next_event(event)
            prev_event = event

    The question arises of how long a history the predictor should maintain, since maintaining infinite history will not be possible. I'll leave this up to you to answer, but the answer can't be infinite, for practicality. So I believe the predictions will have to be done with some kind of rolling history. Adding a new event and expiring an old event should therefore be rather efficient, and should not require rebuilding the entire predictor model, for example. Specific code, instead of research papers, would add immense value to your responses. Python or C libraries are nice, but anything will do. Thanks!

    Update: what if more than one event can happen simultaneously on each round? Does that change the solution?
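
    For a concrete starting point, here is a minimal sketch (not a tuned solution) of an order-k Markov predictor with a rolling history; the class and parameter names are made up for illustration:

        # Sketch: predict the next event from the last k events, keeping a
        # bounded history so memory stays constant. Context counts only
        # accumulate here; a decay/expiry scheme would be needed to make
        # old patterns fade as well.
        from collections import Counter, defaultdict, deque

        class MarkovPredictor(object):
            def __init__(self, k=3, window=10000):
                self.k = k
                self.history = deque(maxlen=window)   # rolling history
                self.counts = defaultdict(Counter)    # context -> next-event tally

            def last_event_was(self, event):
                ctx = tuple(self.history)[-self.k:]
                if len(ctx) == self.k:
                    self.counts[ctx][event] += 1      # learn: ctx was followed by event
                self.history.append(event)

            def predict_next_event(self):
                ctx = tuple(self.history)[-self.k:]
                tally = self.counts.get(ctx)
                if tally:
                    return tally.most_common(1)[0][0]  # most frequent follower
                return None                            # no data for this context

    Expiring an event is O(1) (the deque drops it), and prediction is a single dictionary lookup, so nothing is rebuilt between rounds.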

  • Best approach to storing image pixels in bottom-up order in Java

    - by finnw
    I have an array of bytes representing an image in Windows BMP format, and I would like my library to present it to the Java application as a BufferedImage, without copying the pixel data. The main problem is that all implementations of Raster in the JDK store image pixels in top-down, left-to-right order, whereas BMP pixel data is stored bottom-up, left-to-right. If this is not compensated for, the resulting image will be flipped vertically.

    The most obvious "solution" is to set the SampleModel's scanlineStride property to a negative value and change the band offsets (or the DataBuffer's array offset) to point to the top-left pixel, i.e. the first pixel of the last line in the array. Unfortunately this does not work, because all of the SampleModel constructors throw an exception if given a negative scanlineStride argument. I am currently working around it by forcing the scanlineStride field to a negative value using reflection, but I would like to do it in a cleaner and more portable way if possible. E.g. is there another way to fool the Raster or SampleModel into arranging the pixels in bottom-up order, but without breaking encapsulation? Or is there a library somewhere that will wrap the Raster and SampleModel, presenting the pixel rows in reverse order?

    I would prefer to avoid the following approaches:

    - Copying the whole image (for performance reasons: the code must process hundreds of large (>= 1 Mpixel) images per second, and although the whole image must be available to the application, it will normally access only a tiny, but hard to predict, portion of it).
    - Modifying the DataBuffer to perform coordinate transformation (this actually works, but is another "dirty" solution, because the buffer should not need to know about the scanline/pixel layout).
    - Re-implementing the Raster and/or SampleModel interfaces from scratch (though I have a hunch that I will be unable to avoid this).

  • HTTP POST parameters order / REST URLs

    - by pq
    Let's say that I'm uploading a large file via a POST HTTP request. Let's also say that I have another parameter (other than the file) that names the resource the file is updating. The resource cannot be part of the URL the way you would do it with REST (e.g. foo.com/bar/123); let's say this is due to a combination of technical and political reasons. The server needs to ignore the file if the resource name is invalid or if, say, the IP address and/or the logged-in user are not authorized to update the resource.

    It looks like, if this POST came from an HTML form that contains the resource name first and the file field second, most (all?) browsers preserve this order in the POST request. But it would be naive to fully rely on that, no? In other words, the order of HTTP parameters is insignificant, and a client is free to construct the POST in any order. Isn't that true? Which means that, at least in theory, the server may end up storing the whole large file before it can deny the request.

    It seems to me that this is a clear case where RESTful URLs have an advantage, since you don't have to look at the POST content to perform certain authorization/error checking on the request. Do you agree? What are your thoughts and experiences?

  • Order in many-to-many relation in Django model

    - by Pietro Speroni
    I am writing a small website to store the papers I have written. The papers <-> authors relation is important, but so is the order of the authors' names (which one is first author, which one is second, and so on). I am just learning Django, so I don't know much. In any case, so far I have done:

        from django.db import models

        class author(models.Model):
            Name = models.CharField(max_length=60)
            URLField = models.URLField(verify_exists=True, null=True, blank=True)
            def __unicode__(self):
                return self.Name

        class topic(models.Model):
            TopicName = models.CharField(max_length=60)
            def __unicode__(self):
                return self.TopicName

        class publication(models.Model):
            Title = models.CharField(max_length=100)
            Authors = models.ManyToManyField(author, null=True, blank=True)
            Content = models.TextField()
            Notes = models.TextField(blank=True)
            Abstract = models.TextField(blank=True)
            pub_date = models.DateField('date published')
            TimeInsertion = models.DateTimeField(auto_now=True)
            URLField = models.URLField(verify_exists=True, null=True, blank=True)
            Topic = models.ManyToManyField(topic, null=True, blank=True)
            def __unicode__(self):
                return self.Title

    This works fine in the sense that I can now define who the authors are, but I cannot order them. How should I do that? Of course I could add a series of relations (first author, second author, and so on), but that would be ugly and inflexible. Any better idea? Thanks
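
    A common pattern for this (a sketch with assumed names; the on_delete argument is only required on newer Django versions) is to make the join table explicit and give it a position column:

        class Authorship(models.Model):
            # One row per (publication, author) pair, carrying the
            # author's position in the byline.
            author = models.ForeignKey('author', on_delete=models.CASCADE)
            publication = models.ForeignKey('publication', on_delete=models.CASCADE)
            position = models.PositiveIntegerField()  # 0 = first author

            class Meta:
                ordering = ['position']

        # In publication, route the M2M through the new model:
        #     Authors = models.ManyToManyField(author, through='Authorship')

        # Hypothetical usage: authors of `pub` in byline order
        # [a.author for a in pub.authorship_set.all()]

    The through model keeps the many-to-many flexibility while letting each pairing store its own ordering data.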

  • Java split giving opposite order of Arabic characters

    - by MuhammadA
    I am splitting a string on \\| in Java (Android), using the IntelliJ 12 IDE. Everything is fine except the last part: somehow the split picks the Arabic fields up in the opposite order. (The original post illustrated this with a screenshot: positions 34, 35, 36 are correct according to the source string, but when they are picked out into split part number 5, they come out as 36, 35, 34.) Any way I can get them in the right order? My code:

        public ArrayList<Book> getBooksFromDatFile(Context context, String fileName) {
            ArrayList<Book> books = new ArrayList<Book>();
            try {
                // load csv from assets
                InputStream is = context.getAssets().open(fileName);
                try {
                    BufferedReader reader = new BufferedReader(new InputStreamReader(is));
                    String line;
                    while ((line = reader.readLine()) != null) {
                        String[] RowData = line.split("\\|");
                        books.add(new Book(RowData[0], RowData[1], RowData[2],
                                           RowData[3], RowData[4], RowData[5]));
                    }
                } catch (IOException ex) {
                    Log.e(TAG, "Error parsing csv file!");
                } finally {
                    try {
                        is.close();
                    } catch (IOException e) {
                        Log.e(TAG, "Error closing input stream!");
                    }
                }
            } catch (IOException ex) {
                Log.e(TAG, "Error reading .dat file from assets!");
            }
            return books;
        }
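
    If it helps narrow things down, a small check (just a sketch) is to dump each field with its index: split returns the fields in logical order, and bidi-aware consoles and debuggers often render Arabic right-to-left, which can make a correctly ordered field look reversed.

        // Sketch: print each field with its index and delimiters around it.
        // Arabic is stored in logical order; the display layer may reverse
        // it visually even though the array contents are correct.
        static void dumpFields(String line) {
            String[] parts = line.split("\\|");
            for (int i = 0; i < parts.length; i++) {
                System.out.println(i + ": [" + parts[i] + "]");
            }
        }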

  • Algorithm to produce Cartesian product of arrays in depth-first order

    - by Yuri Gadow
    I'm looking for an example of how, in Ruby, a C-like language, or pseudocode, to create the Cartesian product of a variable number of arrays of integers, each of differing length, and step through the results in a particular order. So given [1,2,3], [1,2,3], [1,2,3]:

        [1, 1, 1]
        [2, 1, 1]
        [1, 2, 1]
        [1, 1, 2]
        [2, 2, 1]
        [1, 2, 2]
        [2, 1, 2]
        [2, 2, 2]
        [3, 1, 1]
        [1, 3, 1]
        etc.

    instead of the typical result I've seen (including the example I give below):

        [1, 1, 1]
        [2, 1, 1]
        [3, 1, 1]
        [1, 2, 1]
        [2, 2, 1]
        [3, 2, 1]
        [1, 3, 1]
        [2, 3, 1]
        etc.

    The problem with the second ordering is that the third position isn't explored at all until all combinations of the first two have been tried. In the code that uses this, that means that even though the right answer is generally (the much larger equivalent of) 1,1,2, it will examine a few million possibilities instead of just a few thousand before finding it. I'm dealing with result sets of one million to hundreds of millions, so generating them and then sorting isn't doable here; it would defeat the point of the ordering in the first example, which is to find the correct answer sooner and break out of the Cartesian product generation earlier.

    Just in case it helps clarify any of the above, here's how I do this now (this has correct results and acceptable performance, but produces the second ordering above, not the one I want):

        def cartesian(a_of_a)
          a_of_a_len = a_of_a.size
          result = Array.new(a_of_a_len)
          j, k, a2, a2_len = nil, nil, nil, nil
          i = 0
          while 1 do
            j, k = i, 0
            while k < a_of_a_len
              a2 = a_of_a[k]
              a2_len = a2.size
              result[k] = a2[j % a2_len]
              j /= a2_len
              k += 1
            end
            return if j > 0
            yield result
            i += 1
          end
        end

    UPDATE: I didn't make it very clear that I'm after a solution where all the combinations of 1 and 2 are examined before 3 is added in, then all of 3 and 1, then all of 3, 2 and 1, then all of 3 and 2. In other words, explore all earlier combinations "horizontally" before "vertically". The precise order in which those possibilities are explored (i.e. 1,1,2 versus 2,1,1) doesn't matter, only that all combinations of 2 and 1 are explored before mixing in 3, and so on.
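
    One way to get that order (a sketch, taking the same array-of-arrays input): partition the product by the largest element index any coordinate uses, and emit the partitions in increasing order. Within a partition the order is arbitrary, which matches the requirement above.

        # Sketch: yield every tuple drawn only from the first (depth+1)
        # values of each array before any tuple that needs a later value.
        def tuples_with_max(arrays, depth, i = 0, acc = [], seen = false, &blk)
          if i == arrays.size
            blk.call(acc.dup) if seen   # keep only tuples that actually use `depth`
            return
          end
          limit = [depth, arrays[i].size - 1].min
          (0..limit).each do |j|
            acc << arrays[i][j]
            tuples_with_max(arrays, depth, i + 1, acc, seen || j == depth, &blk)
            acc.pop
          end
        end

        def cartesian_by_depth(arrays, &blk)
          (0...arrays.map(&:size).max).each do |depth|
            tuples_with_max(arrays, depth, &blk)
          end
        end

        # cartesian_by_depth([[1, 2, 3]] * 3) { |t| p t }
        # => all 8 tuples over {1,2} first, then the 19 involving 3.

    Each tuple is produced exactly once, because the partitions (grouped by maximum index used) are disjoint, and nothing is generated ahead of time, so you can break out as soon as the answer is found.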

  • How can I order by the result of a recursive SQL query

    - by Tony
    I have the following method, whose result I need to ORDER BY:

        def has_attachments?
          attachments.size > 0 || (!parent.nil? && parent.has_attachments?)
        end

    I have gotten this far:

        ORDER BY CASE
            WHEN attachments.size > 0 THEN 1
            ELSE (CASE WHEN parent_id IS NULL THEN 0
                  ELSE (CASE message.parent ...what goes here ) END
        END END

    I may be looking at this wrong, because I don't have experience with recursive SQL. Essentially I want to ORDER BY whether a message or any of its parents has attachments. If the message's attachment count is greater than zero, I can stop and return 1. If the message has an attachment count of zero, I then check whether it has a parent: if it has no parent, there is no attachment; if it does have a parent, I essentially have to apply the same CASE logic to the parent.

    UPDATE: the table looks like this:

        +---------------------+--------------+------+-----+---------+----------------+
        | Field               | Type         | Null | Key | Default | Extra          |
        +---------------------+--------------+------+-----+---------+----------------+
        | id                  | int(11)      | NO   | PRI | NULL    | auto_increment |
        | message_type_id     | int(11)      | NO   | MUL |         |                |
        | message_priority_id | int(11)      | NO   | MUL |         |                |
        | message_status_id   | int(11)      | NO   | MUL |         |                |
        | message_subject_id  | int(11)      | NO   | MUL |         |                |
        | from_user_id        | int(11)      | YES  | MUL | NULL    |                |
        | parent_id           | int(11)      | YES  | MUL | NULL    |                |
        | expires_at          | datetime     | YES  | MUL | NULL    |                |
        | subject_other       | varchar(255) | YES  |     | NULL    |                |
        | body                | text         | YES  |     | NULL    |                |
        | created_at          | datetime     | NO   | MUL |         |                |
        | updated_at          | datetime     | NO   |     |         |                |
        | lock_version        | int(11)      | NO   |     | 0       |                |
        +---------------------+--------------+------+-----+---------+----------------+

    where parent_id refers to the parent message, if it exists. Thanks!
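
    Since the schema is MySQL, the following sketch only works on versions with recursive CTEs (MySQL 8+, or e.g. PostgreSQL); the table and column names on the attachments side are assumptions:

        -- Sketch: walk each thread from the roots down, carrying a flag
        -- that becomes 1 as soon as the row itself or any ancestor has an
        -- attachment, then sort by that flag.
        WITH RECURSIVE thread (id, has_att) AS (
            SELECT m.id,
                   EXISTS (SELECT 1 FROM attachments a
                           WHERE a.message_id = m.id)
            FROM messages m
            WHERE m.parent_id IS NULL
            UNION ALL
            SELECT c.id,
                   GREATEST(t.has_att,
                            EXISTS (SELECT 1 FROM attachments a
                                    WHERE a.message_id = c.id))
            FROM messages c
            JOIN thread t ON c.parent_id = t.id
        )
        SELECT m.*
        FROM messages m
        JOIN thread t ON t.id = m.id
        ORDER BY t.has_att DESC, m.created_at;

    The recursion replaces the nested-CASE attempt: the flag is computed once per row on the way down instead of being re-derived per comparison.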

  • Is it possible to prevent out-of-order execution by using a single volatile

    - by Yan Cheng CHEOK
    The referenced article uses a pair of volatile fields to prevent out-of-order execution. I was wondering: is it possible to prevent it using a single volatile?

        void fun_by_thread_1() {
            this.isNuclearFactory = true;
            this.factory = new NuclearFactory();
        }

        void fun_by_thread_2() {
            Factory _factory = this.factory;
            if (this.isNuclearFactory) {
                // Do not operate nuclear factory!!!
                return;
            }
            // If out-of-order execution happens, _factory might
            // be a NuclearFactory instance.
            _factory.operate();
        }

        Factory factory = new FoodFactory();
        volatile boolean isNuclearFactory = false;

    The reason I ask is that I have a single guard flag (similar to the isNuclearFactory flag) guarding multiple variables (similar to many Factory fields), and I do not wish to mark all the Factory fields as volatile. Or shall I fall back to the following solution?

        void fun_by_thread_1() {
            try {
                writer.lock();
                this.isNuclearFactory = true;
                this.factory = new NuclearFactory();
            } finally {
                writer.unlock();
            }
        }

        void fun_by_thread_2() {
            Factory _factory;
            try {
                reader.lock();
                _factory = this.factory;
                if (this.isNuclearFactory) {
                    // Do not operate nuclear factory!!!
                    return;
                }
            } finally {
                reader.unlock();
            }
            _factory.operate();
        }

        Factory factory = new FoodFactory();
        boolean isNuclearFactory = false;

    P.S. I know about instanceof. Factory is just an example to demonstrate the out-of-order problem.
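
    For what it's worth, a sketch of what a single volatile can and cannot give you under the Java memory model (class names follow the example above): a volatile write publishes every earlier plain write to any thread whose volatile read observes the new value, and guarantees nothing to a thread that still reads the old value.

        interface Factory { void operate(); }
        class FoodFactory implements Factory { public void operate() { } }
        class NuclearFactory implements Factory { public void operate() { } }

        class Plant {
            Factory factory = new FoodFactory(); // plain field
            volatile boolean nuclearPublished = false;

            void writer() {
                factory = new NuclearFactory(); // 1. plain write first
                nuclearPublished = true;        // 2. volatile write last
            }

            void reader() {
                if (nuclearPublished) {
                    // Guaranteed to see the NuclearFactory assignment here.
                    return;
                }
                // NOT guaranteed to see FoodFactory here: the plain write may
                // become visible before the volatile one. For the "never
                // operate a nuclear factory" invariant, factory itself must
                // be volatile, or both paths must share a lock (as in the
                // second snippet above).
                factory.operate();
            }
        }

    So the one-volatile "publish last, read first" idiom works for safe publication, but it cannot enforce the stronger invariant this example needs.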

  • Match subpatterns in any order

    - by Yaroslav
    I have a long regexp with two complicated subpatterns inside. How can I match those subpatterns in any order? Simplified example:

        /(apple)?\s?(banana)?\s?(orange)?\s?(kiwi)?/

    I want it to match both of:

        apple banana orange kiwi
        apple orange banana kiwi

    This is a very simplified example. In my case banana and orange are long, complicated subpatterns, and I don't want to do something like:

        /(apple)?\s?((banana)?\s?(orange)?|(orange)?\s?(banana)?)\s?(kiwi)?/

    Is it possible to group subpatterns the way characters are grouped in a character class?

    UPDATE: real data, as requested:

        14:24 26,37 Mb
        108.53 01:19:02 06.07
        24.39 19:39
        46:00

    My strings are much longer, but this is the significant part. Here you can see the lines I need to match. The first has two values: a length (14 min 24 sec) and a size (26,37 Mb). The second has three values, in a different order: a size (108.53 Mb), a length (01 h 19 m 02 s) and a date (June 07). The third has a size and a length; the fourth has only a length. There are a couple more variations, and I need to parse all the values. I have a regexp that is pretty close, except that I can't figure out how to match the patterns in a different order without writing them twice:

        (?<size>\d{1,3}[.,]\d{1,2}\s+(?:Mb)?)?\s?
        (?<length>(?:(?:01:)?\d{1,2}:\d{2}))?\s*
        (?<date>\d{2}\.\d{2})?

    NOTE: this is only part of a big regexp that works fine already.
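
    One practical way out (a sketch in Ruby, shown on the simplified example): give up on encoding the order at all and scan with a single alternation, so each subpattern may match at any position; each long subpattern becomes one branch.

        # Sketch: each alternative is one named subpattern; scan picks the
        # components out in whatever order they occur in the string.
        PART = /(?<apple>apple)|(?<banana>banana)|(?<orange>orange)|(?<kiwi>kiwi)/

        "apple orange banana kiwi".scan(PART)
        # => [["apple", nil, nil, nil], [nil, nil, "orange", nil],
        #     [nil, "banana", nil, nil], [nil, nil, nil, "kiwi"]]

    For the real data, the size/length/date patterns become the branches. Note, though, that a date ("06.07") and a bare size ("24.39") have identical shapes, so that pair still needs positional or contextual disambiguation after the scan.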

  • Ignoring element order in XML validation against XSD

    - by segolas
    Hi, I am processing an email and saving some headers inside an XML document. I also need to validate the document against an XML schema. As the subject suggests, I need to validate while ignoring the order of the elements but, as far as I have read, this seems to be impossible. Am I correct?

    If I put the headers in an <xsd:sequence>, the order obviously matters. If I use <xsd:all>, the order is ignored, but for some strange reason this implies that the elements must occur at least once. My XML is something like this:

        <headers>
          <subject>bla bla bla</subject>
          <recipient>[email protected]</recipient>
          <recipient>rcp02@domain.com</recipient>
          <recipient>[email protected]</recipient>
        </headers>

    but I think the final document should be valid even if the subject and recipient elements are swapped. Is there really nothing to do?
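
    For reference, a sketch of the <xsd:all> limits in XSD 1.0 (element names taken from the sample document above): minOccurs="0" does make an element optional, but maxOccurs may not exceed 1, so the repeated <recipient> rules <xsd:all> out here. XSD 1.1 lifts that restriction.

        <!-- Sketch: order-insensitive, but in XSD 1.0 each child may
             appear at most once, so repeated <recipient> elements cannot
             be expressed this way. -->
        <xsd:all>
          <xsd:element name="subject"   type="xsd:string" minOccurs="0"/>
          <xsd:element name="recipient" type="xsd:string" minOccurs="0"/>
        </xsd:all>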

  • Is NFS capable of preserving order of operations?

    - by JustJeff
    I have a diskless host 'A' that has a directory NFS-mounted on server 'B'. A process on A writes to two files, F1 and F2, in that directory, and a process on B monitors these files for changes. Assume that B polls for changes faster than A is expected to make them. Process A seeks to the head of the files, writes data, and flushes. Process B seeks to the head of the files and does reads.

    Are there any guarantees about how the order of the changes performed by A will be detected at B? Specifically, if A alternately writes to one file and then the other, is it reasonable to expect that B will notice alternating changes to F1 and F2? Or could B conceivably detect a series of changes on F1 and then a series on F2?

    I know there are a lot of assumptions embedded in the question. For instance, I am virtually certain that, even operating on just one file, if A performs 100 operations on the file, B may see a smaller number of changes that give the same result, due to NFS caching some of the actions on A before they are communicated to B. And of course there would be issues with concurrent file access even if NFS weren't involved and both the reading and the writing process were running on the same real file system.

    The reason I'm even putting the question up here is that it seems like, most of the time, the setup described above does detect the changes at B in the same order they are made at A, but occasionally some events come through in transposed order. So, is it worth trying to make this work? Is there some way to tune NFS to make it work, perhaps cache settings or something? Or is fine-grained behavior like this just too much to expect from NFS?

  • ORDER BY job failed in the Pig script while running EmbeddedPig using Java

    - by C.c. Huang
    I have the following Pig script, which works perfectly from the Grunt shell (it stores the results to HDFS without any issues); however, the last job (the ORDER BY) fails if I run the same script using Java EmbeddedPig. If I replace the ORDER BY job with something else, such as GROUP or FOREACH GENERATE, the whole script succeeds under Java EmbeddedPig. So I think it's the ORDER BY that causes the issue. Does anyone have any experience with this? Any help would be appreciated!

    The Pig script:

        REGISTER pig-udf-0.0.1-SNAPSHOT.jar;
        user_similarity = LOAD '/tmp/sample-sim-score-results-31/part-r-00000' USING PigStorage('\t') AS (user_id: chararray, sim_user_id: chararray, basic_sim_score: float, alt_sim_score: float);
        simplified_user_similarity = FOREACH user_similarity GENERATE $0 AS user_id, $1 AS sim_user_id, $2 AS sim_score;
        grouped_user_similarity = GROUP simplified_user_similarity BY user_id;
        ordered_user_similarity = FOREACH grouped_user_similarity {
            sorted = ORDER simplified_user_similarity BY sim_score DESC;
            top = LIMIT sorted 10;
            GENERATE group, top;
        };
        top_influencers = FOREACH ordered_user_similarity GENERATE com.aol.grapevine.similarity.pig.udf.AssignPointsToTopInfluencer($1, 10);
        all_influence_scores = FOREACH top_influencers GENERATE FLATTEN($0);
        grouped_influence_scores = GROUP all_influence_scores BY bag_of_topSimUserTuples::user_id;
        influence_scores = FOREACH grouped_influence_scores GENERATE group AS user_id, SUM(all_influence_scores.bag_of_topSimUserTuples::points) AS influence_score;
        ordered_influence_scores = ORDER influence_scores BY influence_score DESC;
        STORE ordered_influence_scores INTO '/tmp/cc-test-results-1' USING PigStorage();

    The error log from Pig:

        12/04/05 10:00:56 INFO pigstats.ScriptState: Pig script settings are added to the job
        12/04/05 10:00:56 INFO mapReduceLayer.JobControlCompiler: mapred.job.reduce.markreset.buffer.percent is not set, set to default 0.3
        12/04/05 10:00:58 INFO mapReduceLayer.JobControlCompiler: Setting up single store job
        12/04/05 10:00:58 INFO jvm.JvmMetrics: Cannot initialize JVM Metrics with processName=JobTracker, sessionId= - already initialized
        12/04/05 10:00:58 INFO mapReduceLayer.MapReduceLauncher: 1 map-reduce job(s) waiting for submission.
        12/04/05 10:00:58 WARN mapred.JobClient: Use GenericOptionsParser for parsing the arguments. Applications should implement Tool for the same.
        12/04/05 10:00:58 INFO input.FileInputFormat: Total input paths to process : 1
        12/04/05 10:00:58 INFO util.MapRedUtil: Total input paths to process : 1
        12/04/05 10:00:58 INFO util.MapRedUtil: Total input paths (combined) to process : 1
        12/04/05 10:00:58 INFO filecache.TrackerDistributedCacheManager: Creating tmp-1546565755 in /var/lib/hadoop-0.20/cache/cchuang/mapred/local/archive/4334795313006396107_361978491_57907159/localhost/tmp/temp1725960134-work-6955502337234509704 with rwxr-xr-x
        12/04/05 10:00:58 INFO filecache.TrackerDistributedCacheManager: Cached hdfs://localhost/tmp/temp1725960134/tmp-1546565755#pigsample_854728855_1333645258470 as /var/lib/hadoop-0.20/cache/cchuang/mapred/local/archive/4334795313006396107_361978491_57907159/localhost/tmp/temp1725960134/tmp-1546565755
        12/04/05 10:00:58 INFO filecache.TrackerDistributedCacheManager: Cached hdfs://localhost/tmp/temp1725960134/tmp-1546565755#pigsample_854728855_1333645258470 as /var/lib/hadoop-0.20/cache/cchuang/mapred/local/archive/4334795313006396107_361978491_57907159/localhost/tmp/temp1725960134/tmp-1546565755
        12/04/05 10:00:58 WARN mapred.LocalJobRunner: LocalJobRunner does not support symlinking into current working dir.
        12/04/05 10:00:58 INFO mapred.TaskRunner: Creating symlink: /var/lib/hadoop-0.20/cache/cchuang/mapred/local/archive/4334795313006396107_361978491_57907159/localhost/tmp/temp1725960134/tmp-1546565755 <- /var/lib/hadoop-0.20/cache/cchuang/mapred/local/localRunner/pigsample_854728855_1333645258470
        12/04/05 10:00:58 INFO filecache.TrackerDistributedCacheManager: Creating symlink: /var/lib/hadoop-0.20/cache/cchuang/mapred/staging/cchuang402164468/.staging/job_local_0004/.job.jar.crc <- /var/lib/hadoop-0.20/cache/cchuang/mapred/local/localRunner/.job.jar.crc
        12/04/05 10:00:58 INFO filecache.TrackerDistributedCacheManager: Creating symlink: /var/lib/hadoop-0.20/cache/cchuang/mapred/staging/cchuang402164468/.staging/job_local_0004/.job.split.crc <- /var/lib/hadoop-0.20/cache/cchuang/mapred/local/localRunner/.job.split.crc
        12/04/05 10:00:59 INFO filecache.TrackerDistributedCacheManager: Creating symlink: /var/lib/hadoop-0.20/cache/cchuang/mapred/staging/cchuang402164468/.staging/job_local_0004/.job.splitmetainfo.crc <- /var/lib/hadoop-0.20/cache/cchuang/mapred/local/localRunner/.job.splitmetainfo.crc
        12/04/05 10:00:59 INFO filecache.TrackerDistributedCacheManager: Creating symlink: /var/lib/hadoop-0.20/cache/cchuang/mapred/staging/cchuang402164468/.staging/job_local_0004/.job.xml.crc <- /var/lib/hadoop-0.20/cache/cchuang/mapred/local/localRunner/.job.xml.crc
        12/04/05 10:00:59 INFO filecache.TrackerDistributedCacheManager: Creating symlink: /var/lib/hadoop-0.20/cache/cchuang/mapred/staging/cchuang402164468/.staging/job_local_0004/job.jar <- /var/lib/hadoop-0.20/cache/cchuang/mapred/local/localRunner/job.jar
        12/04/05 10:00:59 INFO filecache.TrackerDistributedCacheManager: Creating symlink: /var/lib/hadoop-0.20/cache/cchuang/mapred/staging/cchuang402164468/.staging/job_local_0004/job.split <- /var/lib/hadoop-0.20/cache/cchuang/mapred/local/localRunner/job.split
        12/04/05 10:00:59 INFO filecache.TrackerDistributedCacheManager: Creating symlink: /var/lib/hadoop-0.20/cache/cchuang/mapred/staging/cchuang402164468/.staging/job_local_0004/job.splitmetainfo <- /var/lib/hadoop-0.20/cache/cchuang/mapred/local/localRunner/job.splitmetainfo
        12/04/05 10:00:59 INFO filecache.TrackerDistributedCacheManager: Creating symlink: /var/lib/hadoop-0.20/cache/cchuang/mapred/staging/cchuang402164468/.staging/job_local_0004/job.xml <- /var/lib/hadoop-0.20/cache/cchuang/mapred/local/localRunner/job.xml
        12/04/05 10:00:59 INFO mapred.Task: Using ResourceCalculatorPlugin : null
        12/04/05 10:00:59 INFO mapred.MapTask: io.sort.mb = 100
        12/04/05 10:00:59 INFO mapred.MapTask: data buffer = 79691776/99614720
        12/04/05 10:00:59 INFO mapred.MapTask: record buffer = 262144/327680
        12/04/05 10:00:59 WARN mapred.LocalJobRunner: job_local_0004
        java.lang.RuntimeException: org.apache.hadoop.mapreduce.lib.input.InvalidInputException: Input path does not exist: file:/Users/cchuang/workspace/grapevine-rec/pigsample_854728855_1333645258470
            at org.apache.pig.backend.hadoop.executionengine.mapReduceLayer.partitioners.WeightedRangePartitioner.setConf(WeightedRangePartitioner.java:139)
            at org.apache.hadoop.util.ReflectionUtils.setConf(ReflectionUtils.java:62)
            at org.apache.hadoop.util.ReflectionUtils.newInstance(ReflectionUtils.java:117)
            at org.apache.hadoop.mapred.MapTask$NewOutputCollector.<init>(MapTask.java:560)
            at org.apache.hadoop.mapred.MapTask.runNewMapper(MapTask.java:639)
            at org.apache.hadoop.mapred.MapTask.run(MapTask.java:323)
            at org.apache.hadoop.mapred.LocalJobRunner$Job.run(LocalJobRunner.java:210)
        Caused by: org.apache.hadoop.mapreduce.lib.input.InvalidInputException: Input path does not exist: file:/Users/cchuang/workspace/grapevine-rec/pigsample_854728855_1333645258470
            at org.apache.hadoop.mapreduce.lib.input.FileInputFormat.listStatus(FileInputFormat.java:231)
            at org.apache.pig.backend.hadoop.executionengine.mapReduceLayer.PigFileInputFormat.listStatus(PigFileInputFormat.java:37)
            at org.apache.hadoop.mapreduce.lib.input.FileInputFormat.getSplits(FileInputFormat.java:248)
            at org.apache.pig.impl.io.ReadToEndLoader.init(ReadToEndLoader.java:153)
            at org.apache.pig.impl.io.ReadToEndLoader.<init>(ReadToEndLoader.java:115)
            at org.apache.pig.backend.hadoop.executionengine.mapReduceLayer.partitioners.WeightedRangePartitioner.setConf(WeightedRangePartitioner.java:112)
            ... 6 more
        12/04/05 10:00:59 INFO filecache.TrackerDistributedCacheManager: Deleted path /var/lib/hadoop-0.20/cache/cchuang/mapred/local/archive/4334795313006396107_361978491_57907159/localhost/tmp/temp1725960134/tmp-1546565755
        12/04/05 10:00:59 INFO mapReduceLayer.MapReduceLauncher: HadoopJobId: job_local_0004
        12/04/05 10:01:04 INFO mapReduceLayer.MapReduceLauncher: job job_local_0004 has failed! Stop running all dependent jobs
        12/04/05 10:01:04 INFO mapReduceLayer.MapReduceLauncher: 100% complete
        12/04/05 10:01:04 ERROR pigstats.PigStatsUtil: 1 map reduce job(s) failed!
        12/04/05 10:01:04 INFO pigstats.PigStats: Script Statistics:

        HadoopVersion   PigVersion     UserId   StartedAt            FinishedAt           Features
        0.20.2-cdh3u3   0.8.1-cdh3u3   cchuang  2012-04-05 10:00:34  2012-04-05 10:01:04  GROUP_BY,ORDER_BY

        Some jobs have failed! Stop running all dependent jobs

        Job Stats (time in seconds):
        JobId           Maps  Reduces  MaxMapTime  MinMapTime  AvgMapTime  MaxReduceTime  MinReduceTime  AvgReduceTime  Alias  Feature  Outputs
        job_local_0001  0     0        0           0           0           0              0              0              all_influence_scores,grouped_user_similarity,simplified_user_similarity,user_similarity  GROUP_BY
        job_local_0002  0     0        0           0           0           0              0              0              grouped_influence_scores,influence_scores  GROUP_BY,COMBINER
        job_local_0003  0     0        0           0           0           0              0              0              ordered_influence_scores  SAMPLER

        Failed Jobs:
        JobId           Alias                     Feature   Message               Outputs
        job_local_0004  ordered_influence_scores  ORDER_BY  Message: Job failed!  Error - NA  /tmp/cc-test-results-1,

        Input(s):
        Successfully read 0 records from: "/tmp/sample-sim-score-results-31/part-r-00000"

        Output(s):
        Failed to produce result in "/tmp/cc-test-results-1"

        Counters:
        Total records written : 0
        Total bytes written : 0
        Spillable Memory Manager spill count : 0
        Total bags proactively spilled: 0
        Total records proactively spilled: 0

        Job DAG:
        job_local_0001  ->  job_local_0002,
        job_local_0002  ->  job_local_0003,
        job_local_0003  ->  job_local_0004,
        job_local_0004

        12/04/05 10:01:04 INFO mapReduceLayer.MapReduceLauncher: Some jobs have failed! Stop running all dependent jobs
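
    One thing the log suggests (a sketch of a check, not a confirmed fix): the failing job ran under LocalJobRunner, and the pigsample_* file written by the ORDER BY's sampler job ends up somewhere the dependent job cannot find when exec modes are mixed. Pinning the embedded server to the same mode the Grunt shell used is worth trying; the script file name here is hypothetical:

        import org.apache.pig.ExecType;
        import org.apache.pig.PigServer;

        public class RunInfluenceScript {
            public static void main(String[] args) throws Exception {
                // Run embedded Pig in MAPREDUCE mode, as the Grunt shell did;
                // in LOCAL mode the sampler output (pigsample_*) lands where
                // the dependent ORDER BY job cannot find it.
                PigServer pig = new PigServer(ExecType.MAPREDUCE);
                pig.registerScript("influence.pig"); // hypothetical script name
            }
        }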

  • Will JavaScript evaluate a property's value if it's not part of an assignment statement?

    - by Bungle
    I've come across a fairly obscure problem having to do with measuring a document's height before the styling has been applied by the browser. More information here:

        http://sonspring.com/journal/jquery-iframe-sizing
        http://ajaxian.com/archives/safari-3-onload-firing-and-bad-timing

    In Dave Hyatt's comment from June 27, he advises simply checking the document.body.offsetLeft property to force Safari to do a reflow. Can I simply use the following statement:

        document.body.offsetLeft;

    or do I need to assign it to a variable, e.g.:

        var force_layout = document.body.offsetLeft;

    in order for browsers to calculate that value? I think this comes down to a more fundamental question: without being part of an assignment statement, will JavaScript still evaluate a property's value? And does this depend on whether a particular browser's JavaScript engine optimizes the code?

  • Debian 3.1 (Sarge) init.d boot order

    - by Adam Lewis
    I am using a TS-7800 single-board computer from Technologic Systems that ships with Debian 3.1 (Sarge). I had updated it to Squeeze, but due to various driver issues I have been forced to roll back to Sarge. I am attempting to apply the various driver settings and configurations needed by my application services before they start. Ideally I would call one init.d script that contains the drivers/configuration, and then call the other init.d scripts (one for each process). I am left scratching my head over how to guarantee the boot sequence. I know that in later versions of Debian I could use the LSB header to achieve this; but is there anything comparable to the LSB header in Sarge?
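
    On Sarge the ordering mechanism is simply the numeric prefix of the /etc/rcN.d symlinks, so one option (a sketch; the script names are assumptions) is to register each script with an explicit sequence number:

        # Sketch: lower sequence numbers start first at boot.
        update-rc.d my-drivers defaults 85    # drivers/configuration first
        update-rc.d my-app-one defaults 90    # then the application services
        update-rc.d my-app-two defaults 91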

  • Looking for updated BIOS for '99 Gateway in order to format/recognize >127 GB HD

    - by Jeff
    I have a '99 Gateway that's apparently too old for even Gateway to acknowledge it exists. I want to use it as a media hub and put in a 320 GB hard drive, but it will not format above 127 GB, even running Windows XP SP3. I read somewhere that upgrading the BIOS may do the trick, but I can't find the correct BIOS, and Gateway has been no help. I'm hoping I can just upgrade the BIOS, which is 11 years old. Any help would be much appreciated! I don't know where to look, and searches have been fruitless.

    System info:

        OS Name: Microsoft Windows XP Home Edition
        Version: 5.1.2600 Service Pack 3 Build 2600
        OS Manufacturer: Microsoft Corporation
        System Name: xxxx
        System Manufacturer: Gateway
        System Model: TABOR_II
        System Type: X86-based PC
        Processor: x86 Family 6 Model 7 Stepping 3 GenuineIntel ~596 Mhz
        BIOS Version/Date: Intel Corp. 4W4SB0X0.15A.0015.P10, 9/28/1999
        SMBIOS Version: 2.1

    BIOS info (from a free app I located):

        BIOS Type: Phoenix
        BIOS Date: September 28th 1999
        BIOS ID: 4W4SB0X0.15A.0015.P10.9909281445-None
        BIOS OEM: 4W4SB0X0.15A.0015.P10
        Chipset: Intel 440BX/ZX rev 3
        SuperIO: SMC 70x or 80x rev 0 at port 0370
        Manufacturer: Gateway
        Motherboard: WS440BX

  • How to configure apache and mod_proxy_ajp in order to forward ssl client certificate

    - by giovanni.cuccu
    I've developed a Java application that needs an SSL client certificate, and in the staging environment, with Apache 2.2 and mod_jk, it is working fine. In production the configuration uses mod_proxy_ajp instead of mod_jk. I'm looking for an example Apache configuration that sets up SSL and mod_proxy_ajp so that the SSL client certificate is forwarded to the Java application server (which listens with the AJP protocol). Thanks a lot
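
    Not an authoritative answer, but a sketch of the usual shape (the backend port, paths and context are assumptions): once mod_ssl exports the certificate data, mod_proxy_ajp carries it to the backend as the standard AJP ssl_client_cert request attribute.

        <VirtualHost *:443>
            SSLEngine on
            SSLVerifyClient require
            SSLCACertificateFile /etc/apache2/ssl/ca.crt   # assumed path
            # Export the client certificate so it travels over AJP:
            SSLOptions +ExportCertData +StdEnvVars

            ProxyPass        /app ajp://localhost:8009/app
            ProxyPassReverse /app ajp://localhost:8009/app
        </VirtualHost>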

  • Linux, fdisk: change order of partitions

    - by osgx
    I have a hard drive with 24 logical partitions on it. Half of them are Linux and half are Windows. The current ordering is: 3 Linux partitions, 12 Windows partitions, 9 Linux partitions. In this setup Windows can access any partition (it places no limit on the partition number), but Linux can't access sda16, sda17, and beyond. Can I change the numbering of the partitions without moving them on disk? I want all Linux partitions to be numbered below 16, and the Windows partitions 16 and above, so Linux will be able to access all the Linux partitions. I have fdisk/sfdisk, and it sees all the partitions.
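
    For reference, a sketch of the one renumbering operation fdisk offers without moving data; note its limits for this layout:

        # Sketch: fdisk's expert menu can renumber partitions in place, but
        # only into on-disk order, because logical partitions are numbered
        # by the EBR chain. It will not by itself move the trailing Linux
        # partitions below sda16 while they physically sit after the
        # Windows ones.
        fdisk /dev/sda
        #   x    enter expert mode
        #   f    fix (sort) partition order
        #   w    write the table and exit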

  • order of operations for environment variables

    - by alyda
    I want to understand how environment variables are set and reset (overridden). I'm running Apache 2.2.24 with PHP 5.4.14 on a Mac.

    My theory is this: environment variables can be set in bash, then overwritten in httpd.conf before any VirtualHost directive, which precedes php.ini, which can in turn be overwritten by .htaccess (if allowed) and finally by PHP.

    I tried the following:

    - Setting the variable in bash: I added export ENVIRONMENT='local' to my ~/.bashrc file, restarted Apache, and did not get any output from print_r($_ENV); (in a simple index.php at the root of my web server). I also tried putting ENVIRONMENT='local' into /etc/environment and restarting Apache: nothing. Same with /etc/bashrc: still nothing.
    - Setting the variable in httpd.conf: I added SetEnv ENVIRONMENT 'local-httpd' to the end of /etc/apache2/httpd.conf (but before I load other conf files, such as the virtual hosts [Include /private/etc/apache2/other/*.conf]). I now see the variable in print_r($_SERVER); but not in print_r($_ENV);.
    - Setting the variable in httpd-vhosts.conf: I added SetEnv ENVIRONMENT 'local-vhost' to /etc/apache2/extra/httpd-vhosts.conf, in my generic directive that points at my default document root. I now see the variable has been overwritten (local-httpd became local-vhost, so I know where it is getting set).
    - Setting the variable in php.ini: while searching for a proper place to put my variable, I noticed that variables_order = "GPCS" was set to the production value rather than EGPCS. I changed it and restarted the server, and found that I now get output from print_r($_ENV); but still not my custom variable. It also appears that I am not able to set a custom variable in this file; please tell me if I am wrong.
    - Setting the variable in .htaccess: I added SetEnv ENVIRONMENT 'local-htaccess'. This worked as expected, overwriting all other values that were set.
    - Setting / overwriting the variable in PHP: if (...) { putenv('ENVIRONMENT=local'); }

    I'm asking this question because I have a lot of local and remote testing servers, some of which may or may not allow me to modify httpd.conf, httpd-vhosts.conf, php.ini or environment variables. I want to understand what is best for those different scenarios (shared hosting, Heroku, local servers, etc.). I obviously don't know how to set the environment variable in bash in a way that PHP can use, and I'd like to know how to do that (I think Heroku does something similar with heroku config set ...).
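
    For the bash piece specifically, a sketch (the paths are typical, not verified for every install): Apache only inherits variables from the shell that launches it, usually via its envvars file, and PassEnv is what republishes an inherited variable to each request.

        # In /usr/sbin/envvars (or wherever apachectl sources its
        # environment), export the variable to the httpd process:
        export ENVIRONMENT='local'

        # In httpd.conf (mod_env), republish it to each request so PHP
        # sees it in $_SERVER / getenv():
        PassEnv ENVIRONMENT

    This also explains the ~/.bashrc result above: an interactive shell's exports never reach a daemon started by launchd or apachectl.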

  • how to correctly mount fat32 partition in Ubuntu in order to preserve case

    - by Dean
    I've found that a couple of problems might be related to how my FAT32 partition was mounted. I hope you can help me solve them. I've also included the commands I used, to help others who find this post (sorry to those who feel I should use less space). I have the following structure on my disk:

        dean@notebook:~$ sudo fdisk -l

        Disk /dev/sda: 160.0 GB, 160041885696 bytes
        255 heads, 63 sectors/track, 19457 cylinders
        Units = cylinders of 16065 * 512 = 8225280 bytes
        Disk identifier: 0x08860886

           Device Boot      Start         End      Blocks   Id  System
        /dev/sda1   *           1          13      102400    7  HPFS/NTFS
        Partition 1 does not end on cylinder boundary.
        /dev/sda2              13        5737    45978624    7  HPFS/NTFS
        /dev/sda3            5738       10600    39062047+  83  Linux
        /dev/sda4           10601       19457    71143852+   5  Extended
        /dev/sda5           10601       11208     4883728+  82  Linux swap / Solaris
        /dev/sda6           11209       15033    30720000    b  W95 FAT32
        /dev/sda7           15033       19457    35537920    7  HPFS/NTFS

    In /etc/fstab I've got:

        UUID=91c57a65-dc53-476b-b219-28dac3682d31  /             ext4     defaults                                        0 1
        UUID=BEA2A8AFA2A86D99                      /media/NTFS   ntfs-3g  quiet,defaults,locale=en_US.utf8,umask=0        0 0
        UUID=0C0C-9BB3                             /media/FAT32  vfat     user,auto,utf8,fmask=0111,dmask=0000,uid=1000   0 0
        /dev/sda5                                  swap          swap     sw                                              0 0
        /dev/sda1                                  /media/sda1   ntfs     nls=iso8859-1,ro,noauto,umask=000               0 0
        /dev/sda2                                  /media/sda2   ntfs     nls=iso8859-1,ro,noauto,umask=000               0 0

    I checked my id using id and got:

        dean@notebook:~$ id
        uid=1000(dean) gid=1000(dean) groups=4(adm),20(dialout),24(cdrom),46(plugdev),103(fuse),104(lpadmin),115(admin),120(sambashare),1000(dean)

    I don't know why, with these settings, I still have the problem of using svn like in this one. Thank you for your help!

  • Downloading Python 2.5.4 (from official website) in order to install it

    - by brilliant
    I was quite hesitant about whether I should post this question here on StackOverflow or on SuperUser, but finally decided to post it here, as Python is more a programming language than a piece of software. I've recently been using Python 2.5.4, which is installed on my computer, but at the moment I am not at home (and won't be for about two weeks), so I need to install the same version of Python on another computer. That computer has Windows XP installed, just like the one I have at home. The reason I need Python 2.5.4 is that I am using Google App Engine, and I was told that it only supports Python 2.5.

    However, when I went to the official Python page for the download, I discovered that certain things have changed, and I don't quite remember where exactly on that site I downloaded Python 2.5.4 for my computer at home. I found this page: http://www.python.org/download/releases/2.5.4/ (a screenshot of how it looked for me is at http://brad.cwahi.net/some_pictures/python_page.jpg).

    A few things here are not clear to me. It says:

        For x86 processors: python-2.5.4.msi
        For Win64-Itanium users: python-2.5.4.ia64.msi
        For Win64-AMD64 users: python-2.5.4.amd64.msi

    First of all, I don't know which processor I am using (whether mine is x86 or not); and also, I don't know whether I am a Win64-Itanium or a Win64-AMD64 user. Are Itanium and AMD64 also processors? Later it says:

        Windows XP and later already have MSI; many older machines will already have MSI installed.

    I guess that is my case, but then I am totally puzzled as to which link I should click, as it now seems I don't need those three previous links (since MSI is already installed on Windows XP), yet there is no fourth link provided for those who use Windows XP or older machines. Of course, there are these words after that:

        Windows users may also be interested in Mark Hammond's win32all package, available from Sourceforge.

    but it seems to me that this is something additional rather than the main file. So my question is simple: where on the official Python website can I download Python 2.5.4, and precisely which link should I click?
