Search Results

Search found 17924 results on 717 pages for 'in order'.

Page 19/717 | < Previous Page | 15 16 17 18 19 20 21 22 23 24 25 26  | Next Page >

  • PHP + MySQL banner rotator by order

    - by ilnur777
    I have a table of advertisements in MySQL. I would like to rotate banners in order (NOT randomly). What function or mechanism do I need to SELECT advertisements from the MySQL table so they are shown in order, like 1, then 2, then 3 ... then again 1, 2, 3 ... ?
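
    One way to do this, sketched here in Python for brevity (the underlying SQL is the same from PHP): keep a persistent impression counter and fetch the next banner in id order with LIMIT 1 OFFSET (counter mod total). The table and column names below (ads, id, banner) are assumptions for the sketch, not taken from the question.

        import sqlite3

        # In-memory stand-in for the MySQL table; assumed schema: ads(id, banner).
        conn = sqlite3.connect(":memory:")
        conn.execute("CREATE TABLE ads (id INTEGER PRIMARY KEY, banner TEXT)")
        conn.executemany("INSERT INTO ads (banner) VALUES (?)",
                         [("banner-1",), ("banner-2",), ("banner-3",)])

        def next_banner(counter):
            """Return the banner at position (counter mod row count), in id order."""
            (total,) = conn.execute("SELECT COUNT(*) FROM ads").fetchone()
            row = conn.execute(
                "SELECT banner FROM ads ORDER BY id LIMIT 1 OFFSET ?",
                (counter % total,),
            ).fetchone()
            return row[0]

        # The counter has to persist between page views, e.g. in its own one-row
        # table that is incremented on every impression.
        for counter in range(7):
            print(next_banner(counter))   # banner-1, banner-2, banner-3, banner-1, ...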

    Read the article

  • Determining the order of DOM elements

    - by sonofdelphi
    Given two DOM elements, say a and b, how can we determine which comes first in the document? I'm implementing drag and drop for a set of elements. The elements can be selected in any order, but when they are dragged, they need to be moved in the "correct" order.

    Read the article

  • C++ dictionary/map with added order

    - by Gopalakrishnan Subramani
    I want something similar to map, but while iterating I want the elements to come back in the same order in which they were added. Example: map.insert("one", 1); map.insert("two", 2); map.insert("three", 3); While iterating I want the items to come out as "one", "two", "three". By default, map doesn't preserve this insertion order. How can I get the map elements back in the order I added them? Anything in the STL is fine; other alternative suggestions are also welcome.

    Read the article

  • iPhone App Store Screenshot Order

    - by David L
    In iTunes Connect, the first screenshot (on the left) is not showing up as the first screenshot in the App Store. In one case the 3rd screenshot in iTunes Connect shows up 1st on the App Store, and in another case the 4th screenshot is 1st on the App Store. Does anyone know how to specify the order of images? I found this question, which says the 1st screenshot in iTunes Connect should be 1st on the App Store, but that is not the case for my 2 apps. iTunes connect screenshot order

    Read the article

  • Changing the Order in a .NET Generic Dictionary

    - by pm_2
    I have a class that inherits from a generic dictionary as follows: class myClass : System.Collections.Generic.Dictionary<int, Object> I have added a list of values to this in a particular order, but I now wish to change that order. Is there any way (without removing and re-adding) that I could effectively re-index the values, so that the object at index 1 is now at index 10, for example? This doesn't work: myClass[1].Index = 10;

    Read the article

  • Unwanted character being added to string in C

    - by Church
    I have a program that gives you shipping addresses from an input file. However at the beginning of one of the strings, order.add_one, a number is being added to the beginning of the string, that number is equivalent to the variable "choice" every time. Why is it doing this? #include <stdio.h> #include <math.h> #include <string.h> //structure typedef struct {char cust_name[25]; char cust_id[3]; char add_one[30]; char add_two[30]; char bike; char risky; int number_ordered; char cust_information[500]; }ORDER; ORDER order; int main(void){ fflush(stdin); system ( "clear" ); //initialize variables float price; float m = 359.95; float s = 279.95; //while loop, runs until user declares they no longer wish to input orders while (1==1){ printf("Options: \nEnter Customer information manually : 1 \nSearch Customer by ID(input.txt reader) : 2 \n"); int option = 0; scanf(" %d", &option); if (option == 1){ //Print and scan statements printf("Enter Customer Information\n"); printf("Customer Name: "); scanf(" %[^\n]s", &order.cust_name); printf("\nEnter Address Line One: "); scanf(" %[^\n]s", &order.add_one); printf("\nEnter Addres Line Two: "); scanf(" %[^\n]s", &order.add_two); printf("\nHow Many Bicycles Are Ordered: "); scanf(" %d", &order.number_ordered); printf("\nWhat Type Of Bike Is Ordered\n M Mountain Bike \n S Street Bike"); printf("\nChoose One (M or S): "); scanf(" %c", &order.bike); printf("\nIs The Customer Risky (Y/N): "); scanf(" %c", &order.risky); system ( "clear" ); } if (option == 2){ FILE *fpt; fpt = fopen("input.txt", "r"); if (fpt==NULL){ printf("Text file did not open\n"); return 1; } printf("Enter Customer ID: "); scanf("%s", &order.cust_id); char choice; choice = order.cust_id[0]; char x[3]; int w, u, y, z; char a[10], b[10], c[10], d[10], e[20], f[10], g[10], i[1], j[1]; int h; printf("%s value of c", c); if (choice >='1'){ while ((w = fgetc(fpt)) != '\n' ){ } } if (choice >='2'){ while ((u = fgetc(fpt)) != '\n' ){ } } if (choice >='3'){ while ((y = fgetc(fpt)) != '\n' ){ } } if (choice >= '4'){ while ((z = fgetc(fpt)) != '\n' ){ } } printf("\n"); fscanf(fpt, "%s", x); fscanf(fpt, "%s", a); printf("%s", a); strcat(order.cust_name, a); fscanf(fpt, " %s", b); printf(" %s", b); strcat(order.cust_name, " "); strcat(order.cust_name, b); fscanf(fpt, "%s", c); printf(" %s", c); strcat(order.add_one, "\0"); strcat(order.add_one, c); fscanf(fpt, "%s", d); printf(" %s", d); strcat(order.add_one, " "); strcat(order.add_one, d); fscanf(fpt, "%s", e); printf(" %s", e); strcat(order.add_two, e); fscanf(fpt, "%s", f); printf(" %s", f); strcat(order.add_two, " "); strcat(order.add_two, f); fscanf(fpt, "%s", g); printf(" %s", g); strcat(order.add_two, " "); strcat(order.add_two, g); strcat(order.add_two, "\0"); fscanf(fpt, "%d", &h); printf(" %d", h); order.number_ordered = h; fscanf(fpt, "%s", i); printf(" %s", i); order.bike = i[0]; fscanf(fpt, "%s", j); printf(" %s", j); order.risky = j[0]; fclose(fpt); printf("%s %s %s %d %c %c", order.cust_name, order.add_one, order.add_two, order.number_ordered, order.bike, order.risky); }

    Read the article

  • SPFileVersionCollection - why versions are sorted in mixed order?

    - by Janis Veinbergs
    SPFileVersionCollection and SPListItemVersionCollection versioning seems inconsistent to me. Inconsistency alone wouldn't be a problem for me, but the sort order is. SPListItemVersionCollection: I can understand the versioning of ListItems, as they are stored in descending order: SPContext.Current.ListItem.Versions.Count -> 5 SPContext.Current.ListItem.Versions[0].VersionId -> 1026 (2.2 latest version) SPContext.Current.ListItem.Versions[1].VersionId -> 1025 (2.1) SPContext.Current.ListItem.Versions[2].VersionId -> 1024 (2.0) ... [4].VersionId -> (oldest version) SPFileVersionCollection: However, I can't understand how version numbers are saved for a document library item: SPContext.Current.ListItem.File.Versions.Count -> 4 SPContext.Current.ListItem.File.Versions[0].ID -> 512 (1.0 oldest one) SPContext.Current.ListItem.File.Versions[1].ID -> 513 (1.1) SPContext.Current.ListItem.File.Versions[2].ID -> 1025 (2.1 latest version) SPContext.Current.ListItem.File.Versions[3].ID -> 1024 (2.0 (EDIT: IsCurrentVersion = True)) They are neither in ascending nor in descending order, but something mixed. Is there any reason for the SharePoint team to store SPFile versions like that? And do they expect me to write my own method to get the latest version, or is there a built-in one for that? A note: let me point out that SPListItem.File is not null for document library items.

    Read the article

  • Ruby on Rails export to csv - maintain mysql select statement order

    - by zekial
    Exporting some data from mysql to a csv file using FasterCSV. I'd like the columns in the outputted CSV to be in the same order as the select statement in my query. Example: rows = Data.find( :all, :select=>'name, age, height, weight' ) headers = rows[0].attributes.keys FasterCSV.generate do |csv| csv << headers rows.each do |r| csv << r.attributes.values end end CSV Output: height,weight,name,age 74,212,bob,23 70,201,fred,24 . . . I want the CSV columns in the same order as my select statement. Obviously the attributes method is not going to work. Any ideas on the best way to ensure that the columns in my csv file will be in the same order as the select statement? Got a lot of data and performance is an issue. The select statement is not static. I realize I could loop through column names within the rows.each loop but it seems kinda dirty.
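
    The usual fix is to drive both the header row and every data row from one explicit column list, kept in the same order as the :select clause, instead of relying on attributes.keys. The same idea sketched with Python's csv module (the rows here are assumed to be dict-like records; in the Rails version you would look up each attribute by name in the same way):

        import csv

        columns = ["name", "age", "height", "weight"]   # mirrors the :select order

        def export(rows, path):
            """Write rows to CSV with the columns in the declared order."""
            with open(path, "w", newline="") as f:
                writer = csv.writer(f)
                writer.writerow(columns)
                for row in rows:
                    writer.writerow([row[c] for c in columns])

        export([{"name": "bob", "age": 23, "height": 74, "weight": 212},
                {"name": "fred", "age": 24, "height": 70, "weight": 201}], "out.csv")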

    Read the article

  • The ORDER BY clause is invalid in views, inline functions, derived tables, subqueries, and common table expressions

    - by zurna
    I get "The ORDER BY clause is invalid in views, inline functions, derived tables, subqueries, and common table expressions, unless TOP or FOR XML is also specified." error with the following code. I initially had two tables, ADSAREAS & CATEGORIES. I started receiving this error when I removed CATEGORIES table. Select Case SIDX Case "ID" : SQLCONT1 = " AdsAreasID" Case "Page" : SQLCONT1 = " AdsAreasName" Case Else : SQLCONT1 = " AdsAreasID" End Select Select Case SORD Case "asc" : SQLCONT2 = " ASC" Case "desc" : SQLCONT2 = " DESC" Case Else : SQLCONT2 = " ASC" End Select ''# search feature ---> Select Case SEARCHFIELD Case "ID" : SQLSFIELD = "AND AdsAreasID" Case "Ads Areas" : SQLSFIELD = "AND AdsAreasName" Case Else : SQLSFIELD = "" End Select Select Case SEARCHOPER Case "eq" : SQLSOPER = " = " & SEARCHSTRING Case "ne" : SQLSOPER = " <> " & SEARCHSTRING Case "lt" : SQLSOPER = " <" & SEARCHSTRING Case "le" : SQLSOPER = " <= " & SEARCHSTRING Case "gt" : SQLSOPER = " >" & SEARCHSTRING Case "ge" : SQLSOPER = " >= " & SEARCHSTRING Case "bw" : SQLSOPER = " LIKE '" & SEARCHSTRING & "%' " Case "ew" : SQLSOPER = " LIKE '%" & SEARCHSTRING & "' " Case "cn" : SQLSOPER = " LIKE '%" & SEARCHSTRING & "%' " Case Else : SQLSOPER = "" End Select ''# search feature ---> SQL = "SELECT * FROM ( SELECT A.AdsAreasID, A.AdsAreasName, ROW_NUMBER() OVER (ORDER BY A.AdsAreasID) As Row" SQL = SQL & " FROM ADSAREAS A" SQL = SQL & " WHERE Row > ("& RecordsPageSize - RecordsPerPage &") AND Row <= ("& RecordsPageSize &") ORDER BY" & SQLCONT1 & SQLCONT2 Set objXML = objConn.Execute(SQL)

    Read the article

  • jQuery "Autcomplete" plugin is messing up the order of my data

    - by Max Williams
    I'm using Jorn Zaefferer's Autocomplete plugin on a couple of different pages. In both instances, the order of displayed strings is a little bit messed up. Example 1: array of strings: basically they are in alphabetical order except for General Knowledge which has been pushed to the top: General Knowledge,Art and Design,Business Studies,Citizenship,Design and Technology,English,Geography,History,ICT,Mathematics,MFL French,MFL German,MFL Spanish,Music,Physical Education,PSHE,Religious Education,Science,Something Else Displayed strings: General Knowledge,Geography,Art and Design,Business Studies,Citizenship,Design and Technology,English,History,ICT,Mathematics,MFL French,MFL German,MFL Spanish,Music,Physical Education,PSHE,Religious Education,Science,Something Else Note that Geography has been pushed to be the second item, after General Knowledge. The rest are all fine. Example 2: array of strings: as above but with Cross-curricular instead of General Knowledge. Cross-curricular,Art and Design,Business Studies,Citizenship,Design and Technology,English,Geography,History,ICT,Mathematics,MFL French,MFL German,MFL Spanish,Music,Physical Education,PSHE,Religious Education,Science,Something Else Displayed strings: Cross-curricular,Citizenship,Art and Design,Business Studies,Design and Technology,English,Geography,History,ICT,Mathematics,MFL French,MFL German,MFL Spanish,Music,Physical Education,PSHE,Religious Education,Science,Something Else Here, Citizenship has been pushed to the number 2 position. I've experimented a little, and it seems like there's a bug saying "put things that start with the same letter as the first item after the first item and leave the rest alone". Kind of mystifying. I've tried a bit of debugging by triggering alerts inside the autocomplete plugin code but everywhere i can see, it's using the correct order. it seems to be just when its rendered out that it goes wrong. Any ideas anyone? max

    Read the article

  • Machine Learning Algorithm for Predicting Order of Events?

    - by user213060
    Simple machine learning question. Probably numerous ways to solve this: There is an infinite stream of 4 possible events: 'event_1', 'event_2', 'event_3', 'event_4' The events do not arrive in completely random order. We will assume that there are some complex patterns to the order that most events come in, and the rest of the events are just random. We do not know the patterns ahead of time though. After each event is received, I want to predict what the next event will be based on the order in which events have come in in the past. The predictor will then be told what the next event actually was: Predictor=new_predictor() prev_event=False while True: event=get_event() if prev_event is not False: Predictor.last_event_was(prev_event) predicted_event=Predictor.predict_next_event(event) The question arises of how long a history the predictor should maintain, since maintaining infinite history will not be possible. I'll leave this up to you to answer. The answer can't be infinite though, for practicality. So I believe that the predictions will have to be done with some kind of rolling history. Adding a new event and expiring an old event should therefore be rather efficient, and not require rebuilding the entire predictor model, for example. Specific code, instead of research papers, would add immense value to your responses. Python or C libraries are nice, but anything will do. Thanks! Update: And what if more than one event can happen simultaneously in each round? Does that change the solution?
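
    One simple baseline that fits the loop sketched above is an order-k frequency (Markov-style) model over a rolling window: count how often each event followed the last k events, predict the most frequent continuation, and expire the oldest observation whenever a new one arrives so the model never grows. A minimal Python sketch; the class and method names are adapted slightly from the pseudocode, and the context and window sizes are arbitrary:

        from collections import Counter, defaultdict, deque

        class Predictor:
            def __init__(self, k=3, window_size=10000):
                self.k = k                               # context length
                self.context = deque(maxlen=k)           # the last k events seen
                self.window = deque(maxlen=window_size)  # rolling (context, event) history
                self.counts = defaultdict(Counter)       # context -> Counter of next events

            def last_event_was(self, event):
                ctx = tuple(self.context)
                if len(ctx) == self.k:
                    if len(self.window) == self.window.maxlen:
                        # Expire the oldest observation before the deque drops it.
                        old_ctx, old_event = self.window[0]
                        self.counts[old_ctx][old_event] -= 1
                    self.window.append((ctx, event))
                    self.counts[ctx][event] += 1
                self.context.append(event)

            def predict_next_event(self):
                seen = +self.counts[tuple(self.context)]   # keep only positive counts
                if seen:
                    return seen.most_common(1)[0][0]
                return None                                # nothing learned for this context yet

        predictor = Predictor(k=2, window_size=1000)
        for event in ["event_1", "event_2", "event_3", "event_1", "event_2"]:
            predictor.last_event_was(event)
        print(predictor.predict_next_event())              # "event_3" after seeing 1, 2 again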

    Read the article

  • Best approach to storing image pixels in bottom-up order in Java

    - by finnw
    I have an array of bytes representing an image in Windows BMP format and I would like my library to present it to the Java application as a BufferedImage, without copying the pixel data. The main problem is that all implementations of Raster in the JDK store image pixels in top-down, left-to-right order whereas BMP pixel data is stored bottom-up, left-to-right. If this is not compensated for, the resulting image will be flipped vertically. The most obvious "solution" is to set the SampleModel's scanlineStride property to a negative value and change the band offsets (or the DataBuffer's array offset) to point to the top-left pixel, i.e. the first pixel of the last line in the array. Unfortunately this does not work because all of the SampleModel constructors throw an exception if given a negative scanlineStride argument. I am currently working around it by forcing the scanlineStride field to a negative value using reflection, but I would like to do it in a cleaner and more portable way if possible. e.g. is there another way to fool the Raster or SampleModel into arranging the pixels in bottom-up order but without breaking encapsulation? Or is there a library somewhere that will wrap the Raster and SampleModel, presenting the pixel rows in reverse order? I would prefer to avoid the following approaches: Copying the whole image (for performance reasons. The code must process hundreds of large (= 1Mpixels) images per second and although the whole image must be available to the application, it will normally access only a tiny (but hard-to-predict) portion of the image.) Modifying the DataBuffer to perform coordinate transformation (this actually works but is another "dirty" solution because the buffer should not need to know about the scanline/pixel layout.) Re-implementing the Raster and/or SampleModel interfaces from scratch (but I have a hunch that I will be unable to avoid this.)

    Read the article

  • HTTP POST parameters order / REST URLs

    - by pq
    Let's say that I'm uploading a large file via a POST HTTP request. Let's also say that I have another parameter (other than the file) that names the resource which the file is updating. The resource cannot be part of the URL the way you can do it with REST (e.g. foo.com/bar/123). Let's say this is due to a combination of technical and political reasons. The server needs to ignore the file if the resource name is invalid or, say, the IP address and/or the logged-in user are not authorized to update the resource. It looks like, if this POST came from an HTML form that contains the resource name first and the file field second, this order is preserved in the POST request for most (all?) browsers. But it would be naive to fully rely on that, no? In other words, the order of HTTP parameters is insignificant and a client is free to construct the POST in any order. Isn't that true? Which means that, at least in theory, the server may end up storing the whole large file before it can deny the request. It seems to me that this is a clear case where RESTful URLs have an advantage, since you don't have to look at the POST content to perform certain authorization/error checking on the request. Do you agree? What are your thoughts and experiences?

    Read the article

  • Order in many to many relation in Django model

    - by Pietro Speroni
    I am writing a small website to store the papers I have written. The relation papers <- authors is important, but the order of the authors' names (which one is first author, which one is second, and so on) is also important. I am just learning Django so I don't know much. In any case, so far I have done: from django.db import models class author(models.Model): Name = models.CharField(max_length=60) URLField = models.URLField(verify_exists=True, null=True, blank=True) def __unicode__(self): return self.Name class topic(models.Model): TopicName = models.CharField(max_length=60) def __unicode__(self): return self.TopicName class publication(models.Model): Title = models.CharField(max_length=100) Authors = models.ManyToManyField(author, null=True, blank=True) Content = models.TextField() Notes = models.TextField(blank=True) Abstract = models.TextField(blank=True) pub_date = models.DateField('date published') TimeInsertion = models.DateTimeField(auto_now=True) URLField = models.URLField(verify_exists=True,null=True, blank=True) Topic = models.ManyToManyField(topic, null=True, blank=True) def __unicode__(self): return self.Title This works fine in the sense that I can now define who the authors are. But I cannot order them. How should I do that? Of course I could add a series of relations: first author, second author,... but it would be ugly, and would not be flexible. Any better idea? Thanks
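
    The usual Django answer is to make the many-to-many relation explicit with an intermediate ("through") model that stores each author's position on the paper. A sketch, with class and field names adjusted for readability; the field syntax follows the same Django era as the question:

        from django.db import models

        class Author(models.Model):
            name = models.CharField(max_length=60)

        class Publication(models.Model):
            title = models.CharField(max_length=100)
            authors = models.ManyToManyField(Author, through='Authorship')

        class Authorship(models.Model):
            publication = models.ForeignKey(Publication)
            author = models.ForeignKey(Author)
            position = models.PositiveIntegerField()   # 1 = first author, 2 = second, ...

            class Meta:
                ordering = ['position']

        # Authors of one paper, already sorted by position:
        #   [a.author for a in paper.authorship_set.all()]

    Adding, removing or reordering authors then means editing Authorship rows rather than adding new relation fields to the publication model.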

    Read the article

  • Java split giving opposite order of Arabic characters

    - by MuhammadA
    I am splitting the following string using \\| in java (android) using the IntelliJ 12 IDE. Everything is fine except the last part, somehow the split picks them up in the opposite order : As you can see the real positioning 34,35,36 is correct and according to the string, but when it gets picked out into split part no 5 its in the wrong order, 36,35,34 ... Any way I can get them to be in the right order? My Code: public ArrayList<Book> getBooksFromDatFile(Context context, String fileName) { ArrayList<Book> books = new ArrayList<Book>(); try { // load csv from assets InputStream is = context.getAssets().open(fileName); try { BufferedReader reader = new BufferedReader(new InputStreamReader(is)); String line; while ((line = reader.readLine()) != null) { String[] RowData = line.split("\\|"); books.add(new Book(RowData[0], RowData[1], RowData[2], RowData[3], RowData[4], RowData[5])); } } catch (IOException ex) { Log.e(TAG, "Error parsing csv file!"); } finally { try { is.close(); } catch (IOException e) { Log.e(TAG, "Error closing input stream!"); } } } catch (IOException ex) { Log.e(TAG, "Error reading .dat file from assets!"); } return books; }

    Read the article

  • Algorithm to produce Cartesian product of arrays in depth-first order

    - by Yuri Gadow
    I'm looking for an example of how, in Ruby, a C like language, or pseudo code, to create the Cartesian product of a variable number of arrays of integers, each of differing length, and step through the results in a particular order: So given, [1,2,3],[1,2,3],[1,2,3]: [1, 1, 1] [2, 1, 1] [1, 2, 1] [1, 1, 2] [2, 2, 1] [1, 2, 2] [2, 1, 2] [2, 2, 2] [3, 1, 1] [1, 3, 1] etc. Instead of the typical result I've seen (including the example I give below): [1, 1, 1] [2, 1, 1] [3, 1, 1] [1, 2, 1] [2, 2, 1] [3, 2, 1] [1, 3, 1] [2, 3, 1] etc. The problem with this example is that the third position isn't explored at all until all combinations of of the first two are tried. In the code that uses this, that means even though the right answer is generally (the much larger equivalent of) 1,1,2 it will examine a few million possibilities instead of just a few thousand before finding it. I'm dealing with result sets of one million to hundreds of millions, so generating them and then sorting isn't doable here and would defeat the reason for ordering them in the first example, which is to find the correct answer sooner and so break out of the cartesian product generation earlier. Just in case it helps clarify any of the above, here's how I do this now (this has correct results and right performance, but not the order I want, i.e., it creates results as in the second listing above): def cartesian(a_of_a) a_of_a_len = a_of_a.size result = Array.new(a_of_a_len) j, k, a2, a2_len = nil, nil, nil, nil i = 0 while 1 do j, k = i, 0 while k < a_of_a_len a2 = a_of_a[k] a2_len = a2.size result[k] = a2[j % a2_len] j /= a2_len k += 1 end return if j > 0 yield result i += 1 end end UPDATE: I didn't make it very clear that I'm after a solution where all the combinations of 1,2 are examined before 3 is added in, then all 3 and 1, then all 3, 2 and 1, then all 3,2. In other words, explore all earlier combinations "horizontally" before "vertically." The precise order in which those possibilities are explored, i.e., 1,1,2 or 2,1,1, doesn't matter, just that all 2 and 1 are explored before mixing in 3 and so on.
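
    One way to get that ordering is to emit the product in layers keyed by the largest index any position uses, so every combination drawn from the first k values of each array appears before anything that needs value k+1. A generator sketch, written in Python rather than Ruby; it re-scans the smaller sub-product on each layer, which is acceptable when you expect to break out early:

        from itertools import product

        def layered_product(*arrays):
            """Yield tuples so that all combinations of earlier values come first."""
            longest = max(len(a) for a in arrays)
            for layer in range(longest):
                # Only indices up to `layer` are allowed in this pass.
                ranges = [range(min(layer + 1, len(a))) for a in arrays]
                for idx in product(*ranges):
                    if max(idx) == layer:   # skip tuples already yielded by earlier layers
                        yield tuple(a[i] for a, i in zip(arrays, idx))

        for combo in layered_product([1, 2, 3], [1, 2, 3], [1, 2, 3]):
            print(combo)   # (1, 1, 1), then every mix of 1 and 2, then tuples involving 3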

    Read the article

  • How can I order by the result of a recursive SQL query

    - by Tony
    I have the following method I need to ORDER BY: def has_attachments? attachments.size > 0 || (!parent.nil? && parent.has_attachments?) end I have gotten this far: ORDER BY CASE WHEN attachments.size > 0 THEN 1 ELSE (CASE WHEN parent_id IS NULL THEN 0 ELSE (CASE message.parent ...what goes here ) END END END I may be looking at this wrong because I don't have experience with recursive SQL. Essentially I want to ORDER BY whether a message or any of its parents has attachments. If its attachment size is greater than 0, I can stop and return a 1. If the message has an attachment size of 0, I then check to see if it has a parent. If it has no parent then there is no attachment; however, if it does have a parent then I essentially have to apply the same CASE logic to the parent. UPDATE: The table looks like this:
    +---------------------+--------------+------+-----+---------+----------------+
    | Field               | Type         | Null | Key | Default | Extra          |
    +---------------------+--------------+------+-----+---------+----------------+
    | id                  | int(11)      | NO   | PRI | NULL    | auto_increment |
    | message_type_id     | int(11)      | NO   | MUL |         |                |
    | message_priority_id | int(11)      | NO   | MUL |         |                |
    | message_status_id   | int(11)      | NO   | MUL |         |                |
    | message_subject_id  | int(11)      | NO   | MUL |         |                |
    | from_user_id        | int(11)      | YES  | MUL | NULL    |                |
    | parent_id           | int(11)      | YES  | MUL | NULL    |                |
    | expires_at          | datetime     | YES  | MUL | NULL    |                |
    | subject_other       | varchar(255) | YES  |     | NULL    |                |
    | body                | text         | YES  |     | NULL    |                |
    | created_at          | datetime     | NO   | MUL |         |                |
    | updated_at          | datetime     | NO   |     |         |                |
    | lock_version        | int(11)      | NO   |     | 0       |                |
    +---------------------+--------------+------+-----+---------+----------------+
    Where the parent_id refers to the parent message, if it exists. Thanks!

    Read the article

  • Is it possible to prevent out-of-order execution by using a single volatile

    - by Yan Cheng CHEOK
    The referenced article uses a pair of volatile fields to prevent out-of-order execution. I was wondering, is it possible to prevent it using a single volatile? void fun_by_thread_1() { this.isNuclearFactory = true; this.factory = new NuclearFactory(); } void fun_by_thread_2() { Factory _factory = this.factory; if (this.isNuclearFactory) { // Do not operate nuclear factory!!! return; } // If out-of-order execution happens, _factory might // be NuclearFactory instance. _factory.operate(); } Factory factory = new FoodFactory(); volatile boolean isNuclearFactory = false; The reason I ask is that I have a single guard flag (similar to the isNuclearFactory flag) guarding multiple variables (similar to many Factory instances). I do not wish to mark all the Factory fields as volatile. Or should I fall back on the following solution? void fun_by_thread_1() { try { writer.lock(); this.isNuclearFactory = true; this.factory = new NuclearFactory(); } finally { writer.unlock(); } } void fun_by_thread_2() { try { reader.lock(); Factory _factory = this.factory; if (this.isNuclearFactory) { // Do not operate nuclear factory!!! return; } } finally { reader.unlock(); } _factory.operate(); } Factory factory = new FoodFactory(); boolean isNuclearFactory = false; P/S: I know about instanceof. Factory is just an example to demonstrate the out-of-order problem.

    Read the article

  • Match subpatterns in any order

    - by Yaroslav
    I have a long regexp with two complicated subpatterns inside. How can I match those subpatterns in any order? Simplified example: /(apple)?\s?(banana)?\s?(orange)?\s?(kiwi)?/ and I want to match both of
    apple banana orange kiwi
    apple orange banana kiwi
    This is a very simplified example. In my case banana and orange are long, complicated subpatterns and I don't want to do something like /(apple)?\s?((banana)?\s?(orange)?|(orange)?\s?(banana)?)\s?(kiwi)?/ Is it possible to group subpatterns like characters in a character class? UPD: Real data as requested:
    14:24 26,37 Mb
    108.53 01:19:02 06.07
    24.39 19:39
    46:00
    My strings are much longer, but this is the significant part. Here you can see four lines that I need to match. The first has two values: length (14 min 24 sec) and size 26.37 Mb. The second has three values, but in a different order: size 108.53 Mb, length 01 h 19 m 02 s and date June, 07. The third has two: size and length. The fourth has only length. There are a couple more variations and I need to parse all the values. I have a regexp that is pretty close, except I can't figure out how to match the patterns in a different order without writing it twice. (?<size>\d{1,3}\[.,]\d{1,2}\s+(?:Mb)?)?\s? (?<length>(?:(?:01:)?\d{1,2}:\d{2}))?\s* (?<date>\d{2}\.\d{2}))? NOTE: that is only part of a big regexp that works fine already.
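
    One way to sidestep the ordering problem entirely is to keep each complicated subpattern as its own compiled regex and search the line with each one independently, collecting whatever is found into a dict. A Python sketch; the three patterns below are simplified stand-ins for the real size/length/date subpatterns, and they would need the same care about ambiguous values as the original:

        import re

        SUBPATTERNS = {
            "size":   re.compile(r"\d{1,3}[.,]\d{1,2}(?:\s*Mb)?"),
            "length": re.compile(r"(?:\d{1,2}:)?\d{1,2}:\d{2}"),
            "date":   re.compile(r"\b\d{2}\.\d{2}\b"),
        }

        def parse_line(line):
            """Return whichever named values appear in the line, in any order."""
            found = {}
            for name, pattern in SUBPATTERNS.items():
                match = pattern.search(line)
                if match:
                    found[name] = match.group(0)
            return found

        print(parse_line("14:24 26,37 Mb"))          # size and length, order irrelevant
        print(parse_line("108.53 01:19:02 06.07"))   # size, length and date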

    Read the article

  • ignoring elements order in xml validation against xsd

    - by segolas
    Hi, I am processing an email and saving some headers inside an XML document. I also need to validate the document against an XML schema. As the subject suggests, I need to validate while ignoring the element order, but as far as I have read this seems to be impossible. Am I correct? If I put the headers in an <xsd:sequence>, the order obviously matters. If I use <xsd:all>, the order is ignored, but for some strange reason this implies that the elements must occur at least once. My xml is something like this: <headers> <subject>bla bla bla</subject> <recipient>rcp01@domain.com</recipient> <recipient>rcp02@domain.com</recipient> <recipient>rcp03@domain.com</recipient> </headers> but I think the final document should be valid even if the subject and recipient elements are swapped. Is there really nothing I can do?

    Read the article

  • Is NFS capable of preserving order of operations?

    - by JustJeff
    I have a diskless host 'A', that has a directory NFS mounted on server 'B'. A process on A writes to two files F1 and F2 in that directory, and a process on B monitors these files for changes. Assume that B polls for changes faster than A is expected to make them. Process A seeks the head of the files, writes data, and flushes. Process B seeks the head of the files and does reads. Are there any guarantees about how the order of the changes performed by A will be detected at B? Specifically, if A alternately writes to one file, and then the other, is it reasonable to expect that B will notice alternating changes to F1 and F2? Or could B conceivably detect a series of changes on F1 and then a series on F2? I know there are a lot of assumptions embedded in the question. For instance, I am virtually certain that, even operating on just one file, if A performs 100 operations on the file, B may see a smaller number of changes that give the same result, due to NFS caching some of the actions on A before they are communicated to B. And of course there would be issues with concurrent file access even if NFS weren't involved and both the reading and the writing process were running on the same real file system. The reason I'm even putting the question up here is that it seems like most of the time, the setup described above does detect the changes at B in the same order they are made at A, but that occasionally some events come through in transposed order. So, is it worth trying to make this work? Is there some way to tune NFS to make it work, perhaps cache settings or something? Or is fine-grained behavior like this just too much expect from NFS?

    Read the article

  • ORDER BY job failed in the Pig script while running EmbeddedPig using Java

    - by C.c. Huang
    I have this following pig script, which works perfectly using grunt shell (stored the results to HDFS without any issues); however, the last job (ORDER BY) failed if I ran the same script using Java EmbeddedPig. If I replace the ORDER BY job by others, such as GROUP or FOREACH GENERATE, the whole script then succeeded in Java EmbeddedPig. So I think it's the ORDER BY which causes the issue. Anyone has any experience with this? Any help would be appreciated! The Pig script: REGISTER pig-udf-0.0.1-SNAPSHOT.jar; user_similarity = LOAD '/tmp/sample-sim-score-results-31/part-r-00000' USING PigStorage('\t') AS (user_id: chararray, sim_user_id: chararray, basic_sim_score: float, alt_sim_score: float); simplified_user_similarity = FOREACH user_similarity GENERATE $0 AS user_id, $1 AS sim_user_id, $2 AS sim_score; grouped_user_similarity = GROUP simplified_user_similarity BY user_id; ordered_user_similarity = FOREACH grouped_user_similarity { sorted = ORDER simplified_user_similarity BY sim_score DESC; top = LIMIT sorted 10; GENERATE group, top; }; top_influencers = FOREACH ordered_user_similarity GENERATE com.aol.grapevine.similarity.pig.udf.AssignPointsToTopInfluencer($1, 10); all_influence_scores = FOREACH top_influencers GENERATE FLATTEN($0); grouped_influence_scores = GROUP all_influence_scores BY bag_of_topSimUserTuples::user_id; influence_scores = FOREACH grouped_influence_scores GENERATE group AS user_id, SUM(all_influence_scores.bag_of_topSimUserTuples::points) AS influence_score; ordered_influence_scores = ORDER influence_scores BY influence_score DESC; STORE ordered_influence_scores INTO '/tmp/cc-test-results-1' USING PigStorage(); The error log from Pig: 12/04/05 10:00:56 INFO pigstats.ScriptState: Pig script settings are added to the job 12/04/05 10:00:56 INFO mapReduceLayer.JobControlCompiler: mapred.job.reduce.markreset.buffer.percent is not set, set to default 0.3 12/04/05 10:00:58 INFO mapReduceLayer.JobControlCompiler: Setting up single store job 12/04/05 10:00:58 INFO jvm.JvmMetrics: Cannot initialize JVM Metrics with processName=JobTracker, sessionId= - already initialized 12/04/05 10:00:58 INFO mapReduceLayer.MapReduceLauncher: 1 map-reduce job(s) waiting for submission. 12/04/05 10:00:58 WARN mapred.JobClient: Use GenericOptionsParser for parsing the arguments. Applications should implement Tool for the same. 
12/04/05 10:00:58 INFO input.FileInputFormat: Total input paths to process : 1 12/04/05 10:00:58 INFO util.MapRedUtil: Total input paths to process : 1 12/04/05 10:00:58 INFO util.MapRedUtil: Total input paths (combined) to process : 1 12/04/05 10:00:58 INFO filecache.TrackerDistributedCacheManager: Creating tmp-1546565755 in /var/lib/hadoop-0.20/cache/cchuang/mapred/local/archive/4334795313006396107_361978491_57907159/localhost/tmp/temp1725960134-work-6955502337234509704 with rwxr-xr-x 12/04/05 10:00:58 INFO filecache.TrackerDistributedCacheManager: Cached hdfs://localhost/tmp/temp1725960134/tmp-1546565755#pigsample_854728855_1333645258470 as /var/lib/hadoop-0.20/cache/cchuang/mapred/local/archive/4334795313006396107_361978491_57907159/localhost/tmp/temp1725960134/tmp-1546565755 12/04/05 10:00:58 INFO filecache.TrackerDistributedCacheManager: Cached hdfs://localhost/tmp/temp1725960134/tmp-1546565755#pigsample_854728855_1333645258470 as /var/lib/hadoop-0.20/cache/cchuang/mapred/local/archive/4334795313006396107_361978491_57907159/localhost/tmp/temp1725960134/tmp-1546565755 12/04/05 10:00:58 WARN mapred.LocalJobRunner: LocalJobRunner does not support symlinking into current working dir. 12/04/05 10:00:58 INFO mapred.TaskRunner: Creating symlink: /var/lib/hadoop-0.20/cache/cchuang/mapred/local/archive/4334795313006396107_361978491_57907159/localhost/tmp/temp1725960134/tmp-1546565755 <- /var/lib/hadoop-0.20/cache/cchuang/mapred/local/localRunner/pigsample_854728855_1333645258470 12/04/05 10:00:58 INFO filecache.TrackerDistributedCacheManager: Creating symlink: /var/lib/hadoop-0.20/cache/cchuang/mapred/staging/cchuang402164468/.staging/job_local_0004/.job.jar.crc <- /var/lib/hadoop-0.20/cache/cchuang/mapred/local/localRunner/.job.jar.crc 12/04/05 10:00:58 INFO filecache.TrackerDistributedCacheManager: Creating symlink: /var/lib/hadoop-0.20/cache/cchuang/mapred/staging/cchuang402164468/.staging/job_local_0004/.job.split.crc <- /var/lib/hadoop-0.20/cache/cchuang/mapred/local/localRunner/.job.split.crc 12/04/05 10:00:59 INFO filecache.TrackerDistributedCacheManager: Creating symlink: /var/lib/hadoop-0.20/cache/cchuang/mapred/staging/cchuang402164468/.staging/job_local_0004/.job.splitmetainfo.crc <- /var/lib/hadoop-0.20/cache/cchuang/mapred/local/localRunner/.job.splitmetainfo.crc 12/04/05 10:00:59 INFO filecache.TrackerDistributedCacheManager: Creating symlink: /var/lib/hadoop-0.20/cache/cchuang/mapred/staging/cchuang402164468/.staging/job_local_0004/.job.xml.crc <- /var/lib/hadoop-0.20/cache/cchuang/mapred/local/localRunner/.job.xml.crc 12/04/05 10:00:59 INFO filecache.TrackerDistributedCacheManager: Creating symlink: /var/lib/hadoop-0.20/cache/cchuang/mapred/staging/cchuang402164468/.staging/job_local_0004/job.jar <- /var/lib/hadoop-0.20/cache/cchuang/mapred/local/localRunner/job.jar 12/04/05 10:00:59 INFO filecache.TrackerDistributedCacheManager: Creating symlink: /var/lib/hadoop-0.20/cache/cchuang/mapred/staging/cchuang402164468/.staging/job_local_0004/job.split <- /var/lib/hadoop-0.20/cache/cchuang/mapred/local/localRunner/job.split 12/04/05 10:00:59 INFO filecache.TrackerDistributedCacheManager: Creating symlink: /var/lib/hadoop-0.20/cache/cchuang/mapred/staging/cchuang402164468/.staging/job_local_0004/job.splitmetainfo <- /var/lib/hadoop-0.20/cache/cchuang/mapred/local/localRunner/job.splitmetainfo 12/04/05 10:00:59 INFO filecache.TrackerDistributedCacheManager: Creating symlink: /var/lib/hadoop-0.20/cache/cchuang/mapred/staging/cchuang402164468/.staging/job_local_0004/job.xml <- 
/var/lib/hadoop-0.20/cache/cchuang/mapred/local/localRunner/job.xml 12/04/05 10:00:59 INFO mapred.Task: Using ResourceCalculatorPlugin : null 12/04/05 10:00:59 INFO mapred.MapTask: io.sort.mb = 100 12/04/05 10:00:59 INFO mapred.MapTask: data buffer = 79691776/99614720 12/04/05 10:00:59 INFO mapred.MapTask: record buffer = 262144/327680 12/04/05 10:00:59 WARN mapred.LocalJobRunner: job_local_0004 java.lang.RuntimeException: org.apache.hadoop.mapreduce.lib.input.InvalidInputException: Input path does not exist: file:/Users/cchuang/workspace/grapevine-rec/pigsample_854728855_1333645258470 at org.apache.pig.backend.hadoop.executionengine.mapReduceLayer.partitioners.WeightedRangePartitioner.setConf(WeightedRangePartitioner.java:139) at org.apache.hadoop.util.ReflectionUtils.setConf(ReflectionUtils.java:62) at org.apache.hadoop.util.ReflectionUtils.newInstance(ReflectionUtils.java:117) at org.apache.hadoop.mapred.MapTask$NewOutputCollector.<init>(MapTask.java:560) at org.apache.hadoop.mapred.MapTask.runNewMapper(MapTask.java:639) at org.apache.hadoop.mapred.MapTask.run(MapTask.java:323) at org.apache.hadoop.mapred.LocalJobRunner$Job.run(LocalJobRunner.java:210) Caused by: org.apache.hadoop.mapreduce.lib.input.InvalidInputException: Input path does not exist: file:/Users/cchuang/workspace/grapevine-rec/pigsample_854728855_1333645258470 at org.apache.hadoop.mapreduce.lib.input.FileInputFormat.listStatus(FileInputFormat.java:231) at org.apache.pig.backend.hadoop.executionengine.mapReduceLayer.PigFileInputFormat.listStatus(PigFileInputFormat.java:37) at org.apache.hadoop.mapreduce.lib.input.FileInputFormat.getSplits(FileInputFormat.java:248) at org.apache.pig.impl.io.ReadToEndLoader.init(ReadToEndLoader.java:153) at org.apache.pig.impl.io.ReadToEndLoader.<init>(ReadToEndLoader.java:115) at org.apache.pig.backend.hadoop.executionengine.mapReduceLayer.partitioners.WeightedRangePartitioner.setConf(WeightedRangePartitioner.java:112) ... 6 more 12/04/05 10:00:59 INFO filecache.TrackerDistributedCacheManager: Deleted path /var/lib/hadoop-0.20/cache/cchuang/mapred/local/archive/4334795313006396107_361978491_57907159/localhost/tmp/temp1725960134/tmp-1546565755 12/04/05 10:00:59 INFO mapReduceLayer.MapReduceLauncher: HadoopJobId: job_local_0004 12/04/05 10:01:04 INFO mapReduceLayer.MapReduceLauncher: job job_local_0004 has failed! Stop running all dependent jobs 12/04/05 10:01:04 INFO mapReduceLayer.MapReduceLauncher: 100% complete 12/04/05 10:01:04 ERROR pigstats.PigStatsUtil: 1 map reduce job(s) failed! 12/04/05 10:01:04 INFO pigstats.PigStats: Script Statistics: HadoopVersion PigVersion UserId StartedAt FinishedAt Features 0.20.2-cdh3u3 0.8.1-cdh3u3 cchuang 2012-04-05 10:00:34 2012-04-05 10:01:04 GROUP_BY,ORDER_BY Some jobs have failed! Stop running all dependent jobs Job Stats (time in seconds): JobId Maps Reduces MaxMapTime MinMapTIme AvgMapTime MaxReduceTime MinReduceTime AvgReduceTime Alias Feature Outputs job_local_0001 0 0 0 0 0 0 0 0 all_influence_scores,grouped_user_similarity,simplified_user_similarity,user_similarity GROUP_BY job_local_0002 0 0 0 0 0 0 0 0 grouped_influence_scores,influence_scores GROUP_BY,COMBINER job_local_0003 0 0 0 0 0 0 0 0 ordered_influence_scores SAMPLER Failed Jobs: JobId Alias Feature Message Outputs job_local_0004 ordered_influence_scores ORDER_BY Message: Job failed! 
Error - NA /tmp/cc-test-results-1, Input(s): Successfully read 0 records from: "/tmp/sample-sim-score-results-31/part-r-00000" Output(s): Failed to produce result in "/tmp/cc-test-results-1" Counters: Total records written : 0 Total bytes written : 0 Spillable Memory Manager spill count : 0 Total bags proactively spilled: 0 Total records proactively spilled: 0 Job DAG: job_local_0001 -> job_local_0002, job_local_0002 -> job_local_0003, job_local_0003 -> job_local_0004, job_local_0004 12/04/05 10:01:04 INFO mapReduceLayer.MapReduceLauncher: Some jobs have failed! Stop running all dependent jobs

    Read the article

  • Debian 3.1 (Sarge) init.d boot order

    - by Adam Lewis
    I am using a TS-7800 single-board computer from Technologic Systems that ships with Debian 3.1 (Sarge). I have updated it to Squeeze, but due to various driver issues I have been forced to roll back to Sarge. I am attempting to configure the various drivers and configurations needed for my application services before they start. Ideally I would call one init.d script that contains the drivers / configurations and then call the other init.d scripts (one for each process). I am left scratching my head on how to guarantee the boot sequence. I know in later versions of Debian I could use the LSB header to achieve this, but is there anything comparable to the LSB header in Sarge?

    Read the article

< Previous Page | 15 16 17 18 19 20 21 22 23 24 25 26  | Next Page >