Search Results

Search found 23206 results on 929 pages for 'non root'.


  • Password not working for sudo ("Authentication failure")

    - by Souta
    Before I mention anything further, DO NOT give me a response saying that terminal won't show password input. I'm AWARE of that. I'm typing my user password in (not a capslock issue), and for some reason it still says 'Authentication Failure'. Is there some other password (one I'm not aware of) I'm supposed to be using other than my user password? I've had this ubuntu before, on another hard drive and I didn't have this problem. (And it was the same ubuntu, ubuntu 12.04 LTS) ai@AiNekoYokai:~$ groups ai adm cdrom sudo dip plugdev lpadmin sambashare ai@AiNekoYokai:~$ lsb_release -rd Description: Ubuntu 12.04 LTS Release: 12.04 ai@AiNekoYokai:~$ pkexec cat /etc/sudoers # # This file MUST be edited with the 'visudo' command as root. # # Please consider adding local content in /etc/sudoers.d/ instead of # directly modifying this file. # # See the man page for details on how to write a sudoers file. # Defaults env_reset Defaults secure_path="/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin" # Host alias specification # User alias specification # Cmnd alias specification # User privilege specification root ALL=(ALL:ALL) ALL # Members of the admin group may gain root privileges %admin ALL=(ALL) ALL # Allow members of group sudo to execute any command %sudo ALL=(ALL:ALL) ALL # See sudoers(5) for more information on "#include" directives: #includedir /etc/sudoers.d I can log in with my password, but it's not accepted as valid for authentication <-- That is pretty much my issue. (Although, I haven't gone into recovery mode.) I've ran: ai@AiNekoYokai:~$ ls /etc/sudoers.d README And also reinstalled sudo with: pkexec apt-get update pkexec apt-get --purge --reinstall install sudo pkexec usermod -a -G admin $USER <- Says admin does not exist su $USER <- worked for me, however, my password still does not do much (in sense of not working for other things) I changed my password with pkexec passwd $USER. I was able to change it no problem. gksudo xclock was something I was able to get into, no problem. (Clock showed) ai@AiNekoYokai:~$ gksudo xclock

    Read the article

  • Single complex or multiple simple autoload functions [on hold]

    - by Tyson of the Northwest
    Using the spl_autoload_register(), should I use a single autoload function that contains all the logic to determine where the include files are, or should I break each include grouping into its own function with its own logic to include the files for the called function? As the places where include files may reside expand, so too will the logic of a single function. If I break it into multiple functions I can add functions as new groupings are added, but the functions will be copy/pastes of each other with minor alterations. Currently I have a tool with a single registered autoload function that picks apart the class name and tries to predict where it is and then includes it. Due to naming conventions for the project this has been pretty simple. if has namespace if in template namespace look in Root\Templates else look in Root\Modules\Namespace else look in Root\System if file exists include But we are starting to include Interfaces and Traits into our codebase and it hurts me to include the type of a thing in its name. So instead of a single autoload function that digs through the class name, looks for the file, and has increasingly complex logic to it, we are looking at having multiple autoload functions registered. But each one follows the same pattern and any time I see that I get paranoid about code copying. function systemAutoloadFunc logic to create probable filename if filename exists in system include it and return true else return false function moduleAutoloadFunc logic to create probable filename if filename exists in modules include it and return true else return false Every autoload function will follow that pattern and the last part of each function, if filename exists, include return true else return false, is going to be identical code. This makes me paranoid about having to update it later across the board if the file_exists include pattern we are using ever changes. Or is it just that, paranoia, and are multiple functions with some identical code the best option?

    Read the article

  • How to input data into user defined variables into MySql query

    - by user292791
    Simple Shell script echo "Enter 1 for month of March" echo "Enter 2 for month of April" echo "Enter 3 for month of May" read Month case "$Month" in 1) echo "enter establishment name" read a; mysql -u root -p $a < "March.sql";; 2) echo "enter establishment name" read b; mysql -u root -p $b < "April.sql";; 3) echo "enter establishment name" read c; mysql -u root -p $c < "May.sql";; esac done In this I have three other query files, March.sql, April.sql, and May.sql, which I'm linking in the shell script. Example of a .sql file: SELECT DISTINCT substr( a.case_no, 3, 2 ), b.case_type, b.type_name, a.case_no into outfile '/tmp/April.csv' FIELDS TERMINATED BY ',' ENCLOSED BY '"' LINES TERMINATED BY '\r\n' FROM Civil_t AS a, Case_type_t AS b, disposal_proc AS c WHERE substr( a.case_no, 3, 2 ) = b.case_type AND a.date_of_decision BETWEEN '2014-04-01' AND '2014-04-30' AND a.case_no = c.case_no AND a.court_no =1; I have to alter the .sql script every time. Is there any method to read variables from the shell script and use them in MySQL? For example: echo "enter date" read a #input date Now I have read a "date" and I want to use it in the March.sql query in the WHERE clause. Is there any method of using this variable in the .sql query?

    Read the article

  • Blank screen during boot after clean Ubuntu 11.10 install (Intel N10 graphics)

    - by Coen
    After a clean install of Ubuntu 11.10 on my Asus eee PC 1005p, Ubuntu seems to boot correctly, except for initialization of the LCD screen. What I observe: I choose Ubuntu 11.10 in the GRUB 2 menu A blank screen with a blinking cursor in the top left of the screen, for 15-20 seconds. The ubuntu logo with 5 red dots in the center of the screen, for 1 second. The LCD screen is entirely blank The startup sound plays (Ubuntu is configured to auto-login) Still, the LCD screen is entirely blank. When I press Fn-F8 (the switch between LCD screen and external VGA), the LCD screen shows my desktop correctly and everything seems to work fine. Except for the adjust contrast buttons (Fn-F5 and Fn-F6), these seem to cycle through random brightness modes. Something like: 0% - 50% - 20% - 0% - 20% - 0% Any ideas what's causing this or how to solve this? coen@elpicu:~$ lspci -v 00:02.0 VGA compatible controller: Intel Corporation N10 Family Integrated Graphics Controller (prog-if 00 [VGA controller]) Subsystem: ASUSTeK Computer Inc. Device 83ac Flags: bus master, fast devsel, latency 0, IRQ 44 Memory at f7e00000 (32-bit, non-prefetchable) [size=512K] I/O ports at dc00 [size=8] Memory at d0000000 (32-bit, prefetchable) [size=256M] Memory at f7d00000 (32-bit, non-prefetchable) [size=1M] Expansion ROM at <unassigned> [disabled] Capabilities: <access denied> Kernel driver in use: i915 Kernel modules: i915 00:02.1 Display controller: Intel Corporation N10 Family Integrated Graphics Controller Subsystem: ASUSTeK Computer Inc. Device 83ac Flags: bus master, fast devsel, latency 0 Memory at f7e80000 (32-bit, non-prefetchable) [size=512K] Capabilities: <access denied>

    Read the article

  • after upgrade from 10.04 to 12.04 cannot boot with linux 3.2.0-24-generic-pae

    - by Ricardo
    After upgrading from 10.04 to 12.04, I cannot boot with linux 3.2.0-24-generic-pae: the process gets frozen at a Xubuntu initialization screen (I had Qimo installed). If I try the recovery mode (with the same Linux version), booting freezes after this message: Begin: Mounting root file system. If in the GRUB menu I choose Previous Linux versions, I can boot using Linux 2.6.32-41-generic-pae. But once logged in, some things don't seem to work (apt-get update fails, Update Manager fails, the HUD menu does not provide suggestions...) (to be honest, I have no idea whether this is part of the bigger issue). Reading through apparently similar problems on Ask Ubuntu, I decided to follow some advice: I got boot-repair and ran it. The problem remains & I got this report. I also ran as root in a terminal $ sudo update-initramfs -u and this is what I got: update-initramfs: Generating /boot/initrd.img-3.2.0-24-generic-pae cryptsetup: WARNING: failed to detect canonical device of /dev/sda1 cryptsetup: WARNING: could not determine root device from /etc/fstab /tmp/mkinitramfs_EIDlHy/scripts/classmate-bottom/45xconfig: 9: .: Can't open /scripts/casper-functions What else? My PC is an Intel® Core™ i7 CPU 870 @ 2.93GHz × 8, the graphics is a GeForce 8400 GS/PCIe/SSE2, and memory is 7.8 GiB. I have two questions: Is this a bug in the newest kernel I should report? Is there anything I can do apart from a fresh install?

    Read the article

  • JavaFX: Use a Screen with your Scene!

    - by user12610255
    Here's a handy tip for sizing your application. You can use the javafx.stage.Screen class to obtain the width and height of the user's screen, and then use those same dimensions when sizing your scene. The following code modifies the default "Hello World" application that appears when you create a new JavaFX project in NetBeans. package screendemo; import javafx.application.Application; import javafx.event.ActionEvent; import javafx.event.EventHandler; import javafx.scene.Group; import javafx.scene.Scene; import javafx.scene.control.Button; import javafx.stage.Stage; import javafx.stage.Screen; import javafx.geometry.Rectangle2D; public class ScreenDemo extends Application { public static void main(String[] args) { Application.launch(args); } @Override public void start(Stage primaryStage) { primaryStage.setTitle("Hello World"); Group root = new Group(); Rectangle2D screenBounds = Screen.getPrimary().getVisualBounds(); Scene scene = new Scene(root, screenBounds.getWidth(), screenBounds.getHeight()); Button btn = new Button(); btn.setLayoutX(100); btn.setLayoutY(80); btn.setText("Hello World"); btn.setOnAction(new EventHandler<ActionEvent>() { public void handle(ActionEvent event) { System.out.println("Hello World"); } }); root.getChildren().add(btn); primaryStage.setScene(scene); primaryStage.show(); } } Running this program will set the Stage boundaries to the visible bounds of the main screen. -- Scott Hommel

    Read the article

  • RSA decrypting data in C# (.NET 3.5) which was encrypted with openssl in php 5.3.2

    - by panny
    Maybe someone can clear me up. I have been surfing on this a while now. Step #1: Create a root certificate Key generation on unix 1) openssl req -x509 -nodes -days 3650 -newkey rsa:1024 -keyout privatekey.pem -out mycert.pem 2) openssl rsa -in privatekey.pem -pubout -out publickey.pem 3) openssl pkcs12 -export -out mycertprivatekey.pfx -in mycert.pem -inkey privatekey.pem -name "my certificate" Step #2: Does root certificate work on php: YES PHP side I used the publickey.pem to read it into php: $publicKey = "file://C:/publickey.pem"; $privateKey = "file://C:/privatekey.pem"; $plaintext = "123"; openssl_public_encrypt($plaintext, $encrypted, $publicKey); $transfer = base64_encode($encrypted); openssl_private_decrypt($encrypted, $decrypted, $privateKey); echo $decrypted; // "123" OR $server_public_key = openssl_pkey_get_public(file_get_contents("C:\publickey.pem")); // rsa encrypt openssl_public_encrypt("123", $encrypted, $server_public_key); and the privatekey.pem to check if it works: openssl_private_decrypt($encrypted, $decrypted, openssl_get_privatekey(file_get_contents("C:\privatekey.pem"))); echo $decrypted; // "123" Coming to the conclusion, that encryption/decryption works fine on the php side with these openssl root certificate files. Step #3: Does root certificate work on .NET: YES C# side In same manner I read the keys into a .net C# console program: X509Certificate2 myCert2 = new X509Certificate2(); RSACryptoServiceProvider rsa = new RSACryptoServiceProvider(); try { myCert2 = new X509Certificate2(@"C:\mycertprivatekey.pfx"); rsa = (RSACryptoServiceProvider)myCert2.PrivateKey; } catch (Exception e) { } byte[] test = {Convert.ToByte("123")}; string t = Convert.ToString(rsa.Decrypt(rsa.Encrypt(test, false), false)); Coming to the point, that encryption/decryption works fine on the c# side with these openssl root certificate files. Step #4: Enrypt in php and Decrypt in .NET: !!NO!! PHP side $onett = "123" .... openssl_public_encrypt($onett, $encrypted, $server_public_key); $onettbase64 = base64_encode($encrypted); copy - paste $onettbase64 ("LkU2GOCy4lqwY4vtPI1JcsxgDgS2t05E6kYghuXjrQe7hSsYXETGdlhzEBlp+qhxzTXV3pw+AS5bEg9CPxqHus8fXHOnXYqsd2HL20QSaz+FjZee6Kvva0cGhWkFdWL+ANDSOWRWo/OMhm7JVqU3P/44c3dLA1eu2UsoDI26OMw=") into c# program: C# side byte[] transfered_onettbase64 = Convert.FromBase64String("LkU2GOCy4lqwY4vtPI1JcsxgDgS2t05E6kYghuXjrQe7hSsYXETGdlhzEBlp+qhxzTXV3pw+AS5bEg9CPxqHus8fXHOnXYqsd2HL20QSaz+FjZee6Kvva0cGhWkFdWL+ANDSOWRWo/OMhm7JVqU3P/44c3dLA1eu2UsoDI26OMw="); string k = Convert.ToString(rsa.Decrypt(transfered_onettbase64, false)); // Bad Data exception == Exception while decrypting!!! Any ideas?
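
    A minimal sketch (not part of the original question) of the C# decryption half of the protocol described above. It assumes mycertprivatekey.pfx really does hold the private key matching the publickey.pem used by openssl_public_encrypt (which defaults to PKCS#1 v1.5 padding); note that Convert.ToString on a byte array only prints the type name, so Encoding is used to recover the text:

        using System;
        using System.Security.Cryptography;
        using System.Security.Cryptography.X509Certificates;
        using System.Text;

        class RsaInteropSketch
        {
            static void Main()
            {
                // Load the certificate that contains the private key, as in the question.
                var cert = new X509Certificate2(@"C:\mycertprivatekey.pfx");
                var rsa = (RSACryptoServiceProvider)cert.PrivateKey;

                // Base64 string produced in PHP by base64_encode(openssl_public_encrypt(...)).
                string transfer = "...base64 ciphertext from PHP...";
                byte[] cipher = Convert.FromBase64String(transfer);

                // false = PKCS#1 v1.5 padding, matching openssl_public_encrypt's default.
                byte[] plainBytes = rsa.Decrypt(cipher, false);

                // Recover the original text instead of calling Convert.ToString on the byte array.
                string plaintext = Encoding.UTF8.GetString(plainBytes);
                Console.WriteLine(plaintext);   // expected: "123"
            }
        }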

    Read the article

  • C#/.NET Little Wonders: The Concurrent Collections (1 of 3)

    - by James Michael Hare
    Once again we consider some of the lesser known classes and keywords of C#.  In the next few weeks, we will discuss the concurrent collections and how they have changed the face of concurrent programming. This week’s post will begin with a general introduction and discuss the ConcurrentStack<T> and ConcurrentQueue<T>.  Then in the following post we’ll discuss the ConcurrentDictionary<T> and ConcurrentBag<T>.  Finally, we shall close on the third post with a discussion of the BlockingCollection<T>. For more of the "Little Wonders" posts, see the index here. A brief history of collections In the beginning was the .NET 1.0 Framework.  And out of this framework emerged the System.Collections namespace, and it was good.  It contained all the basic things a growing programming language needs like the ArrayList and Hashtable collections.  The main problem, of course, with these original collections is that they held items of type object which means you had to be disciplined enough to use them correctly or you could end up with runtime errors if you got an object of a type you weren't expecting. Then came .NET 2.0 and generics and our world changed forever!  With generics the C# language finally got an equivalent of the very powerful C++ templates.  As such, the System.Collections.Generic was born and we got type-safe versions of all are favorite collections.  The List<T> succeeded the ArrayList and the Dictionary<TKey,TValue> succeeded the Hashtable and so on.  The new versions of the library were not only safer because they checked types at compile-time, in many cases they were more performant as well.  So much so that it's Microsoft's recommendation that the System.Collections original collections only be used for backwards compatibility. So we as developers came to know and love the generic collections and took them into our hearts and embraced them.  The problem is, thread safety in both the original collections and the generic collections can be problematic, for very different reasons. Now, if you are only doing single-threaded development you may not care – after all, no locking is required.  Even if you do have multiple threads, if a collection is “load-once, read-many” you don’t need to do anything to protect that container from multi-threaded access, as illustrated below: 1: public static class OrderTypeTranslator 2: { 3: // because this dictionary is loaded once before it is ever accessed, we don't need to synchronize 4: // multi-threaded read access 5: private static readonly Dictionary<string, char> _translator = new Dictionary<string, char> 6: { 7: {"New", 'N'}, 8: {"Update", 'U'}, 9: {"Cancel", 'X'} 10: }; 11:  12: // the only public interface into the dictionary is for reading, so inherently thread-safe 13: public static char? Translate(string orderType) 14: { 15: char charValue; 16: if (_translator.TryGetValue(orderType, out charValue)) 17: { 18: return charValue; 19: } 20:  21: return null; 22: } 23: } Unfortunately, most of our computer science problems cannot get by with just single-threaded applications or with multi-threading in a load-once manner.  Looking at  today's trends, it's clear to see that computers are not so much getting faster because of faster processor speeds -- we've nearly reached the limits we can push through with today's technologies -- but more because we're adding more cores to the boxes.  
With this new hardware paradigm, it is even more important to use multi-threaded applications to take full advantage of parallel processing to achieve higher application speeds. So let's look at how to use collections in a thread-safe manner. Using historical collections in a concurrent fashion The early .NET collections (System.Collections) had a Synchronized() static method that could be used to wrap the early collections to make them completely thread-safe.  This paradigm was dropped in the generic collections (System.Collections.Generic) because having a synchronized wrapper resulted in atomic locks for all operations, which could prove overkill in many multithreading situations.  Thus the paradigm shifted to having the user of the collection specify their own locking, usually with an external object: 1: public class OrderAggregator 2: { 3: private static readonly Dictionary<string, List<Order>> _orders = new Dictionary<string, List<Order>>(); 4: private static readonly object _orderLock = new object(); 5:  6: public void Add(string accountNumber, Order newOrder) 7: { 8: List<Order> ordersForAccount; 9:  10: // a complex operation like this should all be protected 11: lock (_orderLock) 12: { 13: if (!_orders.TryGetValue(accountNumber, out ordersForAccount)) 14: { 15: _orders.Add(accountNumber, ordersForAccount = new List<Order>()); 16: } 17:  18: ordersForAccount.Add(newOrder); 19: } 20: } 21: } Notice how we’re performing several operations on the dictionary under one lock.  With the Synchronized() static methods of the early collections, you wouldn’t be able to specify this level of locking (a more macro-level).  So in the generic collections, it was decided that if a user needed synchronization, they could implement their own locking scheme instead so that they could provide synchronization as needed. The need for better concurrent access to collections Here’s the problem: it’s relatively easy to write a collection that locks itself down completely for access, but anything more complex than that can be difficult and error-prone to write, let alone make perform efficiently!  For example, what if you have a Dictionary that has frequent reads but infrequent updates?  Do you want to lock down the entire Dictionary for every access?  This would be overkill and would prevent concurrent reads.  In such cases you could use something like a ReaderWriterLockSlim which allows for multiple readers in a lock, and then once a writer grabs the lock it blocks all further readers until the writer is done (in a nutshell); a minimal sketch of this pattern appears below.  This is all very complex stuff to consider. Fortunately, this is where the Concurrent Collections come in.  The Parallel Computing Platform team at Microsoft went through great pains to determine how to make a set of concurrent collections that would have the best performance characteristics for general case multi-threaded use. Now, as in all things involving threading, you should always make sure you evaluate all your container options based on the particular usage scenario and the degree of parallelism you wish to achieve. This article should not be taken to mean that these collections are always superior to the generic collections. Each fills a particular need for a particular situation. Understanding what each container is optimized for is key to the success of your application whether it be single-threaded or multi-threaded.
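
As an aside, here is a minimal sketch (not from the original post) of the ReaderWriterLockSlim pattern mentioned above, assuming the Order type from the earlier examples is defined: many readers may hold the read lock at once, while a writer takes the lock exclusively.

    using System.Collections.Generic;
    using System.Threading;

    public class OrderCache
    {
        private readonly Dictionary<string, Order> _orders = new Dictionary<string, Order>();
        private readonly ReaderWriterLockSlim _lock = new ReaderWriterLockSlim();

        public bool TryGet(string accountNumber, out Order order)
        {
            _lock.EnterReadLock();              // many readers can hold this concurrently
            try { return _orders.TryGetValue(accountNumber, out order); }
            finally { _lock.ExitReadLock(); }
        }

        public void AddOrUpdate(string accountNumber, Order order)
        {
            _lock.EnterWriteLock();             // exclusive: blocks readers until released
            try { _orders[accountNumber] = order; }
            finally { _lock.ExitWriteLock(); }
        }
    }
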
General points to consider with the concurrent collections The MSDN points out that the concurrent collections all support the ICollection interface. However, since the collections are already synchronized, the IsSynchronized property always returns false, and SyncRoot always returns null.  Thus you should not attempt to use these properties for synchronization purposes. Note that since the concurrent collections also may have different operations than the traditional data structures you may be used to.  Now you may ask why they did this, but it was done out of necessity to keep operations safe and atomic.  For example, in order to do a Pop() on a stack you have to know the stack is non-empty, but between the time you check the stack’s IsEmpty property and then do the Pop() another thread may have come in and made the stack empty!  This is why some of the traditional operations have been changed to make them safe for concurrent use. In addition, some properties and methods in the concurrent collections achieve concurrency by creating a snapshot of the collection, which means that some operations that were traditionally O(1) may now be O(n) in the concurrent models.  I’ll try to point these out as we talk about each collection so you can be aware of any potential performance impacts.  Finally, all the concurrent containers are safe for enumeration even while being modified, but some of the containers support this in different ways (snapshot vs. dirty iteration).  Once again I’ll highlight how thread-safe enumeration works for each collection. ConcurrentStack<T>: The thread-safe LIFO container The ConcurrentStack<T> is the thread-safe counterpart to the System.Collections.Generic.Stack<T>, which as you may remember is your standard last-in-first-out container.  If you think of algorithms that favor stack usage (for example, depth-first searches of graphs and trees) then you can see how using a thread-safe stack would be of benefit. The ConcurrentStack<T> achieves thread-safe access by using System.Threading.Interlocked operations.  This means that the multi-threaded access to the stack requires no traditional locking and is very, very fast! For the most part, the ConcurrentStack<T> behaves like it’s Stack<T> counterpart with a few differences: Pop() was removed in favor of TryPop() Returns true if an item existed and was popped and false if empty. PushRange() and TryPopRange() were added Allows you to push multiple items and pop multiple items atomically. Count takes a snapshot of the stack and then counts the items. This means it is a O(n) operation, if you just want to check for an empty stack, call IsEmpty instead which is O(1). ToArray() and GetEnumerator() both also take snapshots. This means that iteration over a stack will give you a static view at the time of the call and will not reflect updates. Pushing on a ConcurrentStack<T> works just like you’d expect except for the aforementioned PushRange() method that was added to allow you to push a range of items concurrently. 1: var stack = new ConcurrentStack<string>(); 2:  3: // adding to stack is much the same as before 4: stack.Push("First"); 5:  6: // but you can also push multiple items in one atomic operation (no interleaves) 7: stack.PushRange(new [] { "Second", "Third", "Fourth" }); For looking at the top item of the stack (without removing it) the Peek() method has been removed in favor of a TryPeek().  
This is because in order to do a peek the stack must be non-empty, but between the time you check for empty and the time you execute the peek the stack contents may have changed.  Thus the TryPeek() was created to be an atomic check for empty, and then peek if not empty: 1: // to look at top item of stack without removing it, can use TryPeek. 2: // Note that there is no Peek(), this is because you need to check for empty first. TryPeek does. 3: string item; 4: if (stack.TryPeek(out item)) 5: { 6: Console.WriteLine("Top item was " + item); 7: } 8: else 9: { 10: Console.WriteLine("Stack was empty."); 11: } Finally, to remove items from the stack, we have the TryPop() for single, and TryPopRange() for multiple items.  Just like the TryPeek(), these operations replace Pop() since we need to ensure atomically that the stack is non-empty before we pop from it: 1: // to remove items, use TryPop or TryPopRange to get multiple items atomically (no interleaves) 2: if (stack.TryPop(out item)) 3: { 4: Console.WriteLine("Popped " + item); 5: } 6:  7: // TryPopRange will only pop up to the number of spaces in the array, the actual number popped is returned. 8: var poppedItems = new string[2]; 9: int numPopped = stack.TryPopRange(poppedItems); 10:  11: foreach (var theItem in poppedItems.Take(numPopped)) 12: { 13: Console.WriteLine("Popped " + theItem); 14: } Finally, note that as stated before, GetEnumerator() and ToArray() gets a snapshot of the data at the time of the call.  That means if you are enumerating the stack you will get a snapshot of the stack at the time of the call.  This is illustrated below: 1: var stack = new ConcurrentStack<string>(); 2:  3: // adding to stack is much the same as before 4: stack.Push("First"); 5:  6: var results = stack.GetEnumerator(); 7:  8: // but you can also push multiple items in one atomic operation (no interleaves) 9: stack.PushRange(new [] { "Second", "Third", "Fourth" }); 10:  11: while(results.MoveNext()) 12: { 13: Console.WriteLine("Stack only has: " + results.Current); 14: } The only item that will be printed out in the above code is "First" because the snapshot was taken before the other items were added. This may sound like an issue, but it’s really for safety and is more correct.  You don’t want to enumerate a stack and have half a view of the stack before an update and half a view of the stack after an update, after all.  In addition, note that this is still thread-safe, whereas iterating through a non-concurrent collection while updating it in the old collections would cause an exception. ConcurrentQueue<T>: The thread-safe FIFO container The ConcurrentQueue<T> is the thread-safe counterpart of the System.Collections.Generic.Queue<T> class.  The concurrent queue uses an underlying list of small arrays and lock-free System.Threading.Interlocked operations on the head and tail arrays.  Once again, this allows us to do thread-safe operations without the need for heavy locks! The ConcurrentQueue<T> (like the ConcurrentStack<T>) has some departures from the non-concurrent counterpart.  Most notably: Dequeue() was removed in favor of TryDequeue(). Returns true if an item existed and was dequeued and false if empty. Count does not take a snapshot It subtracts the head and tail index to get the count.  This results overall in a O(1) complexity which is quite good.  It’s still recommended, however, that for empty checks you call IsEmpty instead of comparing Count to zero. ToArray() and GetEnumerator() both take snapshots. 
This means that iteration over a queue will give you a static view at the time of the call and will not reflect updates. The Enqueue() method on the ConcurrentQueue<T> works much the same as the generic Queue<T>: 1: var queue = new ConcurrentQueue<string>(); 2:  3: // adding to queue is much the same as before 4: queue.Enqueue("First"); 5: queue.Enqueue("Second"); 6: queue.Enqueue("Third"); For front item access, the TryPeek() method must be used to attempt to see the first item of the queue.  There is no Peek() method since, as you’ll remember, we can only peek on a non-empty queue, so we must have an atomic TryPeek() that checks for empty and then returns the first item if the queue is non-empty. 1: // to look at first item in queue without removing it, can use TryPeek. 2: // Note that there is no Peek(), this is because you need to check for empty first. TryPeek does. 3: string item; 4: if (queue.TryPeek(out item)) 5: { 6: Console.WriteLine("First item was " + item); 7: } 8: else 9: { 10: Console.WriteLine("Queue was empty."); 11: } Then, to remove items you use TryDequeue().  Once again this is for the same reason we have TryPeek() and not Peek(): 1: // to remove items, use TryDequeue. If queue is empty returns false. 2: if (queue.TryDequeue(out item)) 3: { 4: Console.WriteLine("Dequeued first item " + item); 5: } Just like the concurrent stack, the ConcurrentQueue<T> takes a snapshot when you call ToArray() or GetEnumerator() which means that subsequent updates to the queue will not be seen when you iterate over the results.  Thus once again the code below will only show the first item, since the other items were added after the snapshot. 1: var queue = new ConcurrentQueue<string>(); 2:  3: // adding to queue is much the same as before 4: queue.Enqueue("First"); 5:  6: var iterator = queue.GetEnumerator(); 7:  8: queue.Enqueue("Second"); 9: queue.Enqueue("Third"); 10:  11: // only shows First 12: while (iterator.MoveNext()) 13: { 14: Console.WriteLine("Dequeued item " + iterator.Current); 15: } Using collections concurrently You’ll notice in the examples above I stuck to using single-threaded examples so as to make them deterministic and the results obvious.  Of course, if we used these collections in a truly multi-threaded way the results would be less deterministic, but would still be thread-safe and with no locking on your part required! For example, say you have an order processor that takes an IEnumerable<Order> and handles each order in a multi-threaded fashion, then groups the responses together in a concurrent collection for aggregation.  This can be done easily with the TPL’s Parallel.ForEach(): 1: public static IEnumerable<OrderResult> ProcessOrders(IEnumerable<Order> orderList) 2: { 3: var proxy = new OrderProxy(); 4: var results = new ConcurrentQueue<OrderResult>(); 5:  6: // notice that we can process all these in parallel and put the results 7: // into our concurrent collection without needing any external locking! 8: Parallel.ForEach(orderList, 9: order => 10: { 11: var result = proxy.PlaceOrder(order); 12:  13: results.Enqueue(result); 14: }); 15:  16: return results; 17: } Summary Obviously, if you do not need multi-threaded safety, you don’t need to use these collections, but when you do need multi-threaded collections these are just the ticket! The plethora of features (I always think of the movie The Three Amigos when I say plethora) built into these containers and the amazing way they achieve thread-safe access in an efficient manner is wonderful to behold.
Stay tuned next week where we’ll continue our discussion with the ConcurrentBag<T> and the ConcurrentDictionary<TKey,TValue>. For some excellent information on the performance of the concurrent collections and how they perform compared to a traditional brute-force locking strategy, see this wonderful whitepaper by the Microsoft Parallel Computing Platform team here.   Technorati Tags: C#,.NET,Concurrent Collections,Collections,Multi-Threading,Little Wonders,BlackRabbitCoder,James Michael Hare

    Read the article

  • Tips on Migrating from AquaLogic .NET Accelerator to WebCenter WSRP Producer for .NET

    - by user647124
    This year I embarked on a journey to migrate a group of ASP.NET web applications developed to integrate with WebLogic Portal 9.2 via the AquaLogic® Interaction .NET Application Accelerator 1.0 to instead use the Oracle WebCenter WSRP Producer for .NET and integrate with WebLogic Portal 10.3.4. It has been a very winding path and this blog entry is intended to share both the lessons learned and relevant approaches that led to those learnings. Like most journeys of discovery, it was not a direct path, and there are notes to let you know when it is practical to skip a section if you are in a hurry to get from here to there. For the Curious From the perspective of necessity, this section would be better at the end. If it were there, though, it would probably be read by far fewer people, including those that are actually interested in these types of sections. Those in a hurry may skip past and be none the worse for it in dealing with the hands-on bits of performing a migration from .NET Accelerator to WSRP Producer. For others who want to talk about why they did what they did after they did it, or just want to know for themselves, enjoy. A Brief (and edited) History of the WSRP for .NET Technologies (as Relevant to this Post) Note: This section is for those who are curious about why the migration path is not as simple as many other Oracle technologies. You can skip this section in its entirety and still be just as competent in performing a migration as if you had read it. The currently deployed architecture that was to be migrated and upgraded achieved initial integration between .NET and J2EE over the WSRP protocol through the use of the AquaLogic Interaction .NET Application Accelerator. The .NET Accelerator allowed the applications that were written in ASP.NET and deployed on a Microsoft Internet Information Server (IIS) to interact with a WebLogic Portal application deployed on a WebLogic (J2EE application) Server (both version 9.2, the state of the art at the time of its creation). At the time this architectural decision for the application was made, both the AquaLogic and WebLogic brands were owned by BEA Systems. The AquaLogic brand included products acquired by BEA through the acquisition of Plumtree, whose flagship product was a portal platform available in both J2EE and .NET versions. As part of this dual technology support an adapter was created to facilitate the use of WSRP as a communication protocol where customers wished to integrate components from both versions of the Plumtree portal. The adapter evolved over several product generations to include a broad array of both standard and proprietary WSRP integration capabilities. Later, BEA Systems was acquired by Oracle. Over the course of several years Oracle has acquired a large number of portal applications and has taken the strategic direction to migrate users of these myriad (and formerly competitive) products to the Oracle WebCenter technology stack. As part of Oracle’s strategic technology roadmap, older portal products are being scheduled for end of life, including the portal products that were part of the BEA acquisition. The .NET Accelerator has been modified over a very long period of time with features driven by users of that product and developed under three different vendors (each a direct competitor in the same solution space prior to merger).
The Oracle WebCenter WSRP Producer for .NET was introduced much more recently with the key objective to specifically address the needs of the WebCenter customers developing solutions accessible through both J2EE and .NET platforms utilizing the WSRP specifications. The Oracle Product Development Team also provides these insights on the drivers for developing the WSRP Producer: ***************************************** Support for ASP.NET AJAX. Controls using the ASP.NET AJAX script manager do not function properly in the Application Accelerator for .NET. Support 2 way SSL in WLP. This was not possible with the proxy/bridge set up in the existing Application Accelerator for .NET. Allow developers to code portlets (Web Parts) using the .NET framework rather than a proprietary framework. Developers had to use the Application Accelerator for .NET plug-ins to Visual Studio to manage preferences and profile data. This is now replaced with the .NET Framework Personalization (for preferences) and Profile providers. The WSRP Producer for .NET was created as a new way of developing .NET portlets. It was never designed to be an upgrade path for the Application Accelerator for .NET. .NET developers would create new .NET portlets with the WSRP Producer for .NET and leave any existing .NET portlets running in the Application Accelerator for .NET. ***************************************** The advantage to creating a new solution for WSRP is a product that is far easier for Oracle to maintain and support which in turn improves quality, reliability and maintainability for their customers. No changes to J2EE applications consuming the WSRP portlets previously rendered by the .NET Accelerator are required to migrate from the AquaLogic WSRP solution. For some customers using the .NET Accelerator the challenge is adapting their current .NET applications to work with the WSRP Producer (or any other WSRP adapter as they are proprietary by nature). Part of this adaptation is the need to deploy the .NET applications as a child to the WSRP producer web application as root. Differences between .NET Accelerator and WSRP Producer Note: This section is for those who are curious about why the migration is not as pluggable as something such as changing security providers in WebLogic Server. You can skip this section in its entirety and still be just as competent in performing a migration as if you had read it. The basic terminology used to describe the participating applications in a WSRP environment is the same when applied to either the .NET Accelerator or the WSRP Producer: Producer and Consumer. In both cases the .NET application serves as what is referred to in a WSRP environment as the Producer. The difference lies in how the two adapters create the WSRP translation of the .NET application. The .NET Accelerator, as the name implies, is meant to serve as a quick way of adding WSRP capability to a .NET application. As such, at a high level, the .NET Accelerator behaves as a proxy for requests between the .NET application and the WSRP Consumer. A WSRP request is sent from the consumer to the .NET Accelerator, the .NET Accelerator transforms this request into an ASP.NET request, receives the response, then transforms the response into a WSRP response. The .NET Accelerator is deployed as a stand-alone application on IIS. The WSRP Producer is deployed as a parent application on IIS and all ASP.NET modules that will be made available over WSRP are deployed as children of the WSRP Producer application.
In this manner, the WSRP Producer acts more as a Request Filter than a proxy in the WSRP transactions between Producer and Consumer. Highly Recommended Enabling Logging Note: You can skip this section now, but you will most likely want to come back to it later, so why not just read it now? Logging is very helpful in tracking down the causes of any anomalies during testing of migrated portlets. To enable the WSRP Producer logging, update the Application_Start method in the Global.asax.cs for your .NET application by adding log4net.Config.XmlConfigurator.Configure(); IIS logs will usually (in a standard configuration) be in a sub folder under C:\WINDOWS\system32\LogFiles\W3SVC. WSRP Producer logs will be found at C:\Oracle\Middleware\WSRPProducerForDotNet\wsrpdefault\Logs\WSRPProducer.log InputTrace.webinfo and OutputTrace.webinfo are located under C:\Oracle\Middleware\WSRPProducerForDotNet\wsrpdefault and can be useful in debugging issues related to markup transformations. Things You Must Do Merge Web.Config Note: If you have been skipping all the sections that you can, now is the time to stop and pay attention. Because the existing .NET application will become a sub-application to the WSRP Producer, you will want to merge required settings from the existing Web.Config to the one in the WSRP Producer. Use the WSRP Producer Master Page The Master Page installed for the WSRP Producer provides common, hidden form fields and JavaScripts to facilitate portlet instance management and display configuration when the child page is being rendered over WSRP. You add the Master Page by including it in the <%@ Page declaration with MasterPageFile="~/portlets/Resources/MasterPages/WSRP.Master". You then replace: <!DOCTYPE HTML PUBLIC "-//W3C//DTD HTML 4.0 Transitional//EN" > <HTML> <HEAD> With <asp:Content ID="ContentHead1" ContentPlaceHolderID="wsrphead" Runat="Server"> And </HEAD> <body> <form id="theForm" method="post" runat="server"> With </asp:Content> <asp:Content ID="ContentBody1" ContentPlaceHolderID="Main" Runat="Server"> And finally </form> </body> </HTML> With </asp:Content> In the event you already use Master Pages, adapt your existing Master Pages to be sub masters. See Nested ASP.NET Master Pages for a detailed reference of how to do this. It Happened to Me, It Might Happen to You…Or Not Watch for Use of Session or Request in OnInit In the event the .NET application being modified has pages developed to assume the user has been authenticated in an earlier page request there may be direct or indirect references in the OnInit method to request or session objects that may not have been created yet. This will vary from application to application, so the recommended approach is to test first. If there is an issue with a page running as a WSRP portlet then check for potential references in the OnInit method (including references by methods called within OnInit) to session or request objects. If there are, the simplest solution is to create a new method and then call that method once the necessary object(s) is fully available. I find doing this at the start of the Page_Load method to be the simplest solution (a minimal sketch appears at the end of this entry). Case Sensitivity .NET languages are not case sensitive, but Java is. This means it is possible to have many variations of SRC= and src= or .JPG and .jpg. The preferred solution is to make these mark up instances all lower case in your .NET application. This will allow the default Rewriter rules in wsrp-producer.xml to work as is.
If this is not practical, then make duplicates of any rules where an issue is occurring due to upper or mixed case usage in the .NET application markup and match the case in use with the duplicate rule. For example: <RewriterRule> <LookFor>(href=\"([^\"]+)</LookFor> <ChangeToAbsolute>true</ChangeToAbsolute> <ApplyTo>.axd,.css</ApplyTo> <MakeResource>true</MakeResource> </RewriterRule> May need to be duplicated as: <RewriterRule> <LookFor>(HREF=\"([^\"]+)</LookFor> <ChangeToAbsolute>true</ChangeToAbsolute> <ApplyTo>.axd,.css</ApplyTo> <MakeResource>true</MakeResource> </RewriterRule> While it is possible to write a regular expression that will handle mixed case usage, it would be long and strenous to test and maintain, so the recommendation is to use duplicate rules. Is it Still Relative? Some .NET applications base relative paths with a fixed root location. With the introduction of the WSRP Producer, the root has moved up one level. References to ~/ will need to be updated to ~/portlets and many ../ paths will need another ../ in front. I Can See You But I Can’t Find You This issue was first discovered while debugging modules with code that referenced the form on a page from the code-behind by name and/or id. The initial error presented itself as run-time error that was difficult to interpret over WSRP but seemed clear when run as straight ASP.NET as it indicated that the object with the form name did not exist. Since the form name was no longer valid after implementing the WSRP Master Page, the likely fix seemed to simply update the references in the code. However, as the WSRP Master Page is external to the code, a compile time error resulted: Error      155         The name 'form1' does not exist in the current context                C:\Oracle\Middleware\WSRPProducerForDotNet\wsrpdefault\portlets\legacywebsite\module\Screens \Reporting.aspx.cs                51           52           legacywebsite.module Much hair-pulling research later it was discovered that it was the use of the FindControl method causing the issue. FindControl doesn’t work quite as expected once a Master Page has been introduced as the controls become embedded in controls, require a recursion to find them that is not part of the FindControl method. In code where the page form is referenced by name, there are two steps to the solution. First, the form needs to be referenced in code generically with Page.Form. For example, this: ToggleControl ctrl = new ToggleControl(frmManualEntry, FunctionLibrary.ParseArrayLst(userObj.Roles)); Becomes this: ToggleControl ctrl = new ToggleControl(Page.Form, FunctionLibrary.ParseArrayLst(userObj.Roles)); Generally the form id is referenced in most ASP.NET applications as a path to a control on the form. To reach the control once a MasterPage has been added requires an additional method to recurse through the controls collections within the form and find the control ID. The following method (found at Rick Strahl's Web Log) corrects this very nicely: public static Control FindControlRecursive(Control Root, string Id) { if (Root.ID == Id) return Root; foreach (Control Ctl in Root.Controls) { Control FoundCtl = FindControlRecursive(Ctl, Id); if (FoundCtl != null) return FoundCtl; } return null; } Where the form name is not referenced, simply using the FindControlRecursive method in place of FindControl will be all that is necessary. 
Following the second part of the example referenced earlier, the method called with Page.Form changes its value extraction code block from this: Label lblErrMsg = (Label)frmRef.FindControl("lblBRMsg" To this: Label lblErrMsg = (Label) FunctionLibrary.FindControlRecursive(frmRef, "lblBRMsg" The Master That Won’t Step Aside In most migrations it is preferable to make as few changes as possible. In one case I ran across an existing Master Page that would not function as a sub-Master Page. While it would probably have been educational to trace down why, the expedient process of updating it to take the place of the WSRP Master Page is the route I took. The changes are highlighted below: … <asp:ContentPlaceHolder ID="wsrphead" runat="server"></asp:ContentPlaceHolder> </head> <body leftMargin="0" topMargin="0"> <form id="TheForm" runat="server"> <input type="hidden" name="key" id="key" value="" /> <input type="hidden" name="formactionurl" id="formactionurl" value="" /> <input type="hidden" name="handle" id="handle" value="" /> <asp:ScriptManager ID="ScriptManager1" runat="server" EnablePartialRendering="true" > </asp:ScriptManager> This approach did not work for all existing Master Pages, but fortunately all of the other existing Master Pages I have run across worked fine as a sub-Master to the WSRP Master Page. Moving On In Enterprise Portals, even after you get everything working, the work is not finished. Next you need to get it where everyone will work with it. Migration Planning Providing that the server where IIS is running is adequately sized, it is possible to run both the .NET Accelerator and the WSRP Producer on the same server during the upgrade process. The upgrade can be performed incrementally, i.e., one portlet at a time, if server administration processes support it. Those processes would include the ability to manage a second producer in the consuming portal and to change over individual portlet instances from one provider to the other. If processes or requirements demand that all portlets be cut over at the same time, it needs to be determined if this cut over should include a new producer, updating all of the portlets in the consumer, or if the WSRP Producer portlet configuration must maintain the naming conventions used by the .NET Accelerator and simply change the WSRP end point configured in the consumer. In some enterprises it may even be necessary to maintain the same WSDL end point, at which point the IIS configuration will be where the updates occur. The downside to such a requirement is that it makes rolling back very difficult, should the need arise. Location, Location, Location Not everyone wants the web application to have the descriptively obvious wsrpdefault location, or needs to create a second WSRP site on the same server. The instructions below are from the product team and, while targeted towards making a second site, will work for creating a site with a different name and then remove the old site. You can also change just the name in IIS. Manually Creating a WSRP Producer Site Instructions (NOTE: all executables used are the same ones used by the installer and “wsrpdev” will be the name of the new instance): 1. Copy C:\Oracle\Middleware\WSRPProducerForDotNet\wsrpdefault to C:\Oracle\Middleware\WSRPProducerForDotNet\wsrpdev. 2. Bring up a command window as an administrator 3. 
Run C:\Oracle\Middleware\WSRPProducerForDotNet\uninstall_resources\IISAppAccelSiteCreator.exe install WSRPProducers wsrpdev "C:\Oracle\Middleware\WSRPProducerForDotNet\wsrpdev" 8678 2.0.50727 4. Run C:\Oracle\Middleware\WSRPProducerForDotNet\uninstall_resources\PermManage.exe add FileSystem C:\Oracle\Middleware\WSRPProducerForDotNet\wsrpdev "NETWORK SERVICE" 3 1 5. Run C:\Oracle\Middleware\WSRPProducerForDotNet\uninstall_resources\PermManage.exe add FileSystem C:\Oracle\Middleware\WSRPProducerForDotNet\wsrpdev EVERYONE 1 1 6. Open up C:\Oracle\Middleware\WSRPProducerForDotNet\wsdl\1.0\WSRPService.wsdl and replace wsrpdefault with wsrpdev 7. Open up C:\Oracle\Middleware\WSRPProducerForDotNet\wsdl\2.0\WSRPService.wsdl and replace wsrpdefault with wsrpdev Tests: 1. Bring up a browser on the host itself and go to http://localhost:8678/wsrpdev/wsdl/1.0/WSRPService.wsdl and make sure that the URLs in the XML returned include the wsrpdev changes you made in step 6. 2. Bring up a browser on the host itself and see if the default sample comes up: http://localhost:8678/wsrpdev/portlets/ASPNET_AJAX_sample/default.aspx 3. Register the producer in WLP and test the portlet. Changing the Port used by WSRP Producer The pre-configured port for the WSRP Producer is 8678. You can change this port by updating both the IIS configuration and C:\Oracle\Middleware\WSRPProducerForDotNet\[WSRP_APP_NAME]\wsdl\1.0\WSRPService.wsdl. Do You Need to Migrate? Oracle Premier Support ended in November of 2010 for AquaLogic Interaction .NET Application Accelerator 1.x and Extended Support ends in November 2012 (see http://www.oracle.com/us/support/lifetime-support/lifetime-support-software-342730.html for other related dates). This means that integration with products released after November of 2010 is not supported. If having such support is the policy within your enterprise, you do indeed need to migrate. If changes in your enterprise cause your current solution with the .NET Accelerator to no longer function properly, you may need to migrate. Migration is a choice, and if the goals of your enterprise are to take full advantage of newer technologies then migration is certainly one activity you should be planning for.
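
Relating to the "Watch for Use of Session or Request in OnInit" section above, here is a minimal, hypothetical sketch of that workaround; the page, control, and session key names are invented for illustration and are not from the original post. Anything that touches Session or Request is deferred until Page_Load, when those objects are guaranteed to exist:

    using System;
    using System.Web.UI;

    // Hypothetical page code-behind; names are placeholders.
    public partial class ReportingPage : Page
    {
        protected override void OnInit(EventArgs e)
        {
            base.OnInit(e);
            // Session- or Request-dependent setup used to live here and failed over WSRP
            // because those objects were not yet available at OnInit time.
        }

        protected void Page_Load(object sender, EventArgs e)
        {
            // Safe: by Page_Load the Session and Request objects are fully available.
            InitializeFromSession();
        }

        private void InitializeFromSession()
        {
            var user = Session["userObj"];              // example session access
            if (user == null)
            {
                Response.Redirect("~/portlets/legacywebsite/Login.aspx");   // hypothetical path
            }
        }
    }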

    Read the article

  • Can I query DOM Document with xpath expression from multiple threads safely?

    - by Dan
    I plan to use dom4j DOM Document as a static cache in an application where multiples threads can query the document. Taking into the account that the document itself will never change, is it safe to query it from multiple threads? I wrote the following code to test it, but I am not sure that it actually does prove that operation is safe? package test.concurrent_dom; import org.dom4j.Document; import org.dom4j.DocumentException; import org.dom4j.DocumentHelper; import org.dom4j.Element; import org.dom4j.Node; /** * Hello world! * */ public class App extends Thread { private static final String xml = "<Session>" + "<child1 attribute1=\"attribute1value\" attribute2=\"attribute2value\">" + "ChildText1</child1>" + "<child2 attribute1=\"attribute1value\" attribute2=\"attribute2value\">" + "ChildText2</child2>" + "<child3 attribute1=\"attribute1value\" attribute2=\"attribute2value\">" + "ChildText3</child3>" + "</Session>"; private static Document document; private static Element root; public static void main( String[] args ) throws DocumentException { document = DocumentHelper.parseText(xml); root = document.getRootElement(); Thread t1 = new Thread(){ public void run(){ while(true){ try { sleep(3); } catch (InterruptedException e) { e.printStackTrace(); } Node n1 = root.selectSingleNode("/Session/child1"); if(!n1.getText().equals("ChildText1")){ System.out.println("WRONG!"); } } } }; Thread t2 = new Thread(){ public void run(){ while(true){ try { sleep(3); } catch (InterruptedException e) { e.printStackTrace(); } Node n1 = root.selectSingleNode("/Session/child2"); if(!n1.getText().equals("ChildText2")){ System.out.println("WRONG!"); } } } }; Thread t3 = new Thread(){ public void run(){ while(true){ try { sleep(3); } catch (InterruptedException e) { e.printStackTrace(); } Node n1 = root.selectSingleNode("/Session/child3"); if(!n1.getText().equals("ChildText3")){ System.out.println("WRONG!"); } } } }; t1.start(); t2.start(); t3.start(); System.out.println( "Hello World!" ); } }

    Read the article

  • Generate Ant build file

    - by inakiabt
    I have the following project structure: root/ comp/ env/ version/ build.xml build.xml build.xml Where root/comp/env/version/build.xml is: <project name="comp-env-version" basedir="."> <import file="../build.xml" optional="true" /> <echo>Comp Env Version tasks</echo> <target name="run"> <echo>Comp Env Version run task</echo> </target> </project> root/comp/env/build.xml is: <project name="comp-env" basedir="."> <import file="../build.xml" optional="true" /> <echo>Comp Env tasks</echo> <target name="run"> <echo>Comp Env run task</echo> </target> </project> root/comp/build.xml is: <project name="comp" basedir="."> <echo>Comp tasks</echo> </project> Each build file imports the parent build file and each child inherits and overrides parent tasks/properties. What I need is to get the generated build XML without running anything. For example, if I run "ant" (or something like that) on root/comp/env/version/, I would like to get the following output: <project name="comp-env-version" basedir="."> <echo>Comp tasks</echo> <echo>Comp Env tasks</echo> <echo>Comp Env Version tasks</echo> <target name="run"> <echo>Comp Env Version run task</echo> </target> </project> Is there an Ant plugin to do this? With Maven? What are my options if not? EDIT: I need something like "mvn help:effective-pom" for Ant.

    Read the article

  • RVM can't switch to Ruby 1.9.1

    - by Marco
    I installed Ruby 1.8.7 through apt-get. I then installed 1.9.1 through RVM. The RVM 1.9.1 installation was successful: root: rvm install 1.9.1 Installing Ruby from source to: /usr/local/rvm/rubies/ruby-1.9.1-p378 /usr/local/rvm/src/ruby-1.9.1-p378 has already been extracted. Configuring ruby-1.9.1-p378, this may take a while depending on your cpu(s)... Compiling ruby-1.9.1-p378, this may take a while, depending on your cpu(s)... Installing ruby-1.9.1-p378 Installation of ruby-1.9.1-p378 is complete. Updating rubygems for /usr/local/rvm/gems/ruby-1.9.1-p378@global Updating rubygems for /usr/local/rvm/gems/ruby-1.9.1-p378 adjusting shebangs for ruby-1.9.1-p378 (gem irb erb ri rdoc testrb rake). Installing gems for ruby-1.9.1-p378 (rdoc rake). Installing rdoc to /usr/local/rvm/gems/ruby-1.9.1-p378@global Installing rdoc to /usr/local/rvm/gems/ruby-1.9.1-p378 Installing rake to /usr/local/rvm/gems/ruby-1.9.1-p378@global Installing rake to /usr/local/rvm/gems/ruby-1.9.1-p378 Installation of gems for ruby-1.9.1-p378 is complete. However, I cannot get RVM to switch to the new version: root: ruby -v ruby 1.8.7 (2008-08-11 patchlevel 72) [x86_64-linux] root: rvm 1.9.1 root: ruby -v ruby 1.8.7 (2008-08-11 patchlevel 72) [x86_64-linux] Despite that, it seems to have installed fine: root: /usr/local/rvm/bin/ruby-1.9.1-p378 -v ruby 1.9.1p378 (2010-01-10 revision 26273) [x86_64-linux] I also tried setting the rvm --default to 1.9.1 but that did not help. Why can't RVM switch to the new version? Should I just set an alias for ruby=1.9.1? *running Debian

    Read the article

  • Regular expression works normally, but fails when placed in an XML schema

    - by Eli Courtwright
    I have a simple doc.xml file which contains a single root element with a Timestamp attribute: <?xml version="1.0" encoding="utf-8"?> <root Timestamp="04-21-2010 16:00:19.000" /> I'd like to validate this document against a my simple schema.xsd to make sure that the Timestamp is in the correct format: <?xml version="1.0" encoding="utf-8"?> <xs:schema attributeFormDefault="unqualified" elementFormDefault="qualified" xmlns:xs="http://www.w3.org/2001/XMLSchema"> <xs:element name="root"> <xs:complexType> <xs:attribute name="Timestamp" use="required" type="timeStampType"/> </xs:complexType> </xs:element> <xs:simpleType name="timeStampType"> <xs:restriction base="xs:string"> <xs:pattern value="(0[0-9]{1})|(1[0-2]{1})-(3[0-1]{1}|[0-2]{1}[0-9]{1})-[2-9]{1}[0-9]{3} ([0-1]{1}[0-9]{1}|2[0-3]{1}):[0-5]{1}[0-9]{1}:[0-5]{1}[0-9]{1}.[0-9]{3}" /> </xs:restriction> </xs:simpleType> </xs:schema> So I use the lxml Python module and try to perform a simple schema validation and report any errors: from lxml import etree schema = etree.XMLSchema( etree.parse("schema.xsd") ) doc = etree.parse("doc.xml") if not schema.validate(doc): for e in schema.error_log: print e.message My XML document fails validation with the following error messages: Element 'root', attribute 'Timestamp': [facet 'pattern'] The value '04-21-2010 16:00:19.000' is not accepted by the pattern '(0[0-9]{1})|(1[0-2]{1})-(3[0-1]{1}|[0-2]{1}[0-9]{1})-[2-9]{1}[0-9]{3} ([0-1]{1}[0-9]{1}|2[0-3]{1}):[0-5]{1}[0-9]{1}:[0-5]{1}[0-9]{1}.[0-9]{3}'. Element 'root', attribute 'Timestamp': '04-21-2010 16:00:19.000' is not a valid value of the atomic type 'timeStampType'. So it looks like my regular expression must be faulty. But when I try to validate the regular expression at the command line, it passes: >>> import re >>> pat = '(0[0-9]{1})|(1[0-2]{1})-(3[0-1]{1}|[0-2]{1}[0-9]{1})-[2-9]{1}[0-9]{3} ([0-1]{1}[0-9]{1}|2[0-3]{1}):[0-5]{1}[0-9]{1}:[0-5]{1}[0-9]{1}.[0-9]{3}' >>> assert re.match(pat, '04-21-2010 16:00:19.000') >>> I'm aware that XSD regular expressions don't have every feature, but the documentation I've found indicates that every feature that I'm using should work. So what am I mis-understanding, and why does my document fail?

    Read the article

  • Cluster Node Recovery Using Second Node in Solaris Cluster

    - by Onur Bingul
    Assumptions: Node 0a is the cluster node that has crashed and cannot boot anymore. Node 0b is the node in the cluster and in production with services active. Both nodes have their boot disk mirrored via SDS/SVM. We have many options to clone the boot disk from node 0b:
    - make a copy via network using the ufsdump command and pipe to ufsrestore
    - make a copy inserting the disk locally on node 0b and creating the third mirror with SDS
    - make a copy inserting the disk locally on node 0b using the dd command
    In this procedure we are going to use the dd command (from my experience this is the best option). Bear in mind that in the examples provided we work on Sun Fire V240 systems which have SCSI internal disks. In the case of Fibre Channel (FC) internal disks you must pay attention to the unique identifier, or World Wide Name (WWN), associated with each FC disk (in this case take a look at infodoc #40133 in order to recreate the device tree correctly).
    Procedure: On node 0b the boot disk is c1t0d0 (c1t1d0 mirror) and this is the VTOC:
    * Partition  Tag  Flags    Sector     Count    Sector  Mount Directory
      0      2    00          0   2106432   2106431
      1      3    01    2106432  74630784  76737215
      2      5    00          0 143349312 143349311
      4      7    00   76737216  50340672 127077887
      5      4    00  127077888  14683968 141761855
      6      0    00  141761856   1058304 142820159
      7      0    00  142820160    529152 143349311
    We will insert the new disk on node 0b and it will be seen as c1t2d0.
    1) On node 0b we make a copy via dd from disk c1t0d0s2 to disk c1t2d0s2:
    # dd if=/dev/rdsk/c1t0d0s2 of=/dev/rdsk/c1t2d0s2 bs=8192k
    A copy of a 72GB disk will take approximately 45 minutes. Note: as an alternative, to make an identical copy of root over the network follow Document ID: 47498, Title: Sun[TM] Cluster 3.0: How to Rebuild a node with Veritas Volume Manager.
    2) Perform an fsck on disk c1t2d0 data slices:
    1. fsck -o f /dev/rdsk/c1t2d0s0 (root)
    2. fsck -o f /dev/rdsk/c1t2d0s4 (/var)
    3. fsck -o f /dev/rdsk/c1t2d0s5 (/usr)
    4. fsck -o f /dev/rdsk/c1t2d0s6 (/globaldevices)
    3) Mount the root file system in order to edit the following files for changing the node name:
    # mount /dev/dsk/c1t2d0s0 /mnt
    Change the hostname from 0b to 0a:
    # cd /mnt/etc
    # vi hosts
    # vi hostname.bge0
    # vi hostname.bge2
    # vi nodename
    4) Change the /mnt/etc/vfstab from the actual:
    /dev/md/dsk/d201        -       -       swap    -       no      -
    /dev/md/dsk/d200        /dev/md/rdsk/d200       /       ufs     1       no      -
    /dev/md/dsk/d205        /dev/md/rdsk/d205       /usr    ufs     1       no      logging
    /dev/md/dsk/d204        /dev/md/rdsk/d204       /var    ufs     1       no      logging
    #/dev/md/dsk/d206       /dev/md/rdsk/d206       /globaldevices  ufs     2       yes     logging
    swap    -       /tmp    tmpfs   -       yes     -
    /dev/md/dsk/d206        /dev/md/rdsk/d206       /global/.devices/node@2 ufs     2       no      global
    to this (unencapsulate the disk from SDS/SVM):
    /dev/dsk/c1t0d0s1       -       -       swap    -       no      -
    /dev/dsk/c1t0d0s0       /dev/rdsk/c1t0d0s0      /       ufs     1       no      -
    /dev/dsk/c1t0d0s5       /dev/rdsk/c1t0d0s5      /usr    ufs     1       no      logging
    /dev/dsk/c1t0d0s4       /dev/rdsk/c1t0d0s4      /var    ufs     1       no      logging
    #/dev/md/dsk/d206       /dev/md/rdsk/d206       /globaldevices  ufs     2       yes     logging
    swap    -       /tmp    tmpfs   -       yes     -
    /dev/dsk/c1t0d0s6       /dev/rdsk/c1t0d0s6      /global/.devices/node@1 ufs     2       no      global
    It is important that the global device partition (slice 6) in the new vfstab points to the physical partition of the disk (in our case slice 6). Be careful with the name you use for the new disk. In this case we define it as c1t0d0 because we will insert it as target 0 in node 0a. But this could be different based on the configuration you are working on.
    5) Remove the following entry from /mnt/etc/system (part of the unencapsulation procedure):
    rootdev:/pseudo/md@0:0,200,blk
    6) Correct the link shared -> ../../global/.devices/node@2/dev/md/shared in order to point to the nodeid of node 0a (in our case nodeid 1):
    # cd /mnt/dev/md
    How it is now (node 0b has nodeid 2):
    lrwxrwxrwx   1 root     root          42 Mar 10  2005 shared -> ../../global/.devices/node@2/dev/md/shared
    # rm shared
    # ln -s ../../global/.devices/node@1/dev/md/shared shared
    How it is going to be (with nodeid 1 for node 0a):
    lrwxrwxrwx   1 root     root          42 Mar 10  2005 shared -> ../../global/.devices/node@1/dev/md/shared
    7) Change the nodeid (in our case from 2 to 1):
    # cd /mnt/etc/cluster
    # vi nodeid
    8) Change the file /mnt/etc/path_to_inst in order to reflect the correct nodeid for node 0a:
    # cd /mnt/etc
    # vi path_to_inst
    Change entries from node@2 to node@1 with the vi command ":%s/node@2/node@1/g"
    9) Write the bootblock to the disk... just in case:
    # /usr/sbin/installboot /usr/platform/sun4u/lib/fs/ufs/bootblk /dev/rdsk/c1t2d0s0
    Now the disk is ready to be inserted in node 0a in order to boot up the node.
    10) Boot up node 0a with the command "boot -sx"... this is because we need to make some changes in the ccr files in order to recreate the did environment.
    11) Modify the cluster ccr:
    # cd /etc/cluster/ccr
    # rm did_instances
    # rm did_instances.bak
    # vi directory - remove the did_instances line.
    # /usr/cluster/lib/sc/ccradm -i /etc/cluster/ccr/directory
    # grep ccr_gennum /etc/cluster/ccr/directory
    ccr_gennum -1
    # /usr/cluster/lib/sc/ccradm -i /etc/cluster/ccr/infrastructure
    # grep ccr_gennum /etc/cluster/ccr/infrastructure
    ccr_gennum -1
    12) Bring node 0a down again to the ok prompt and then issue the command "boot -r". Now the node will join the cluster, and with the scstat and metaset commands you can verify functionality. The next step is to encapsulate the boot disk in SDS/SVM and create the mirrors. In our case node 0b has metadevice names starting from d200. For this reason on node 0a we need to create metadevices starting from d100. This is just an example, you can have different names. The important thing to remember is that metadevice boot disks have different names on each node.
    13) Remove the metadevices pointing to the boot and mirror disks (inherited from node 0b):
    # metaclear -r -f d200
    # metaclear -r -f d201
    # metaclear -r -f d204
    # metaclear -r -f d205
    # metaclear -r -f d206
    Verify from metastat that no metadevices are set for the boot and mirror disks.
    14) Encapsulate the boot disk:
    # metainit -f d110 1 1 c1t0d0s0
    # metainit d100 -m d110
    # metaroot d100
    15) Reboot node 0a.
    16) Create all the metadevices for the slices remaining on the boot disk:
    # metainit -f d111 1 1 c1t0d0s1
    # metainit d101 -m d111
    # metainit -f d114 1 1 c1t0d0s4
    # metainit d104 -m d114
    # metainit -f d115 1 1 c1t0d0s5
    # metainit d105 -m d115
    # metainit -f d116 1 1 c1t0d0s6
    # metainit d106 -m d116
    17) Edit the vfstab in order to specify the metadevices created. Old:
    /dev/dsk/c1t0d0s1       -       -       swap    -       no      -
    /dev/md/dsk/d100        /dev/md/rdsk/d100       /       ufs     1       no      -
    /dev/dsk/c1t0d0s5       /dev/rdsk/c1t0d0s5      /usr    ufs     1       no      logging
    /dev/dsk/c1t0d0s4       /dev/rdsk/c1t0d0s4      /var    ufs     1       no      logging
    #/dev/md/dsk/d206       /dev/md/rdsk/d206       /globaldevices  ufs     2       yes     logging
    swap    -       /tmp    tmpfs   -       yes     -
    /dev/dsk/c1t0d0s6       /dev/rdsk/c1t0d0s6      /global/.devices/node@1 ufs     2       no      global
    New:
    /dev/md/dsk/d101        -       -       swap    -       no      -
    /dev/md/dsk/d100        /dev/md/rdsk/d100       /       ufs     1       no      -
    /dev/md/dsk/d105        /dev/md/rdsk/d105       /usr    ufs     1       no      logging
    /dev/md/dsk/d104        /dev/md/rdsk/d104       /var    ufs     1       no      logging
    #/dev/md/dsk/d106       /dev/md/rdsk/d106       /globaldevices  ufs     2       yes     logging
    swap    -       /tmp    tmpfs   -       yes     -
    /dev/md/dsk/d106        /dev/md/rdsk/d106       /global/.devices/node@1 ufs     2       no      global
    18) Reboot node 0a in order to check the new SDS/SVM boot configuration.
    19) Label the mirror disk c1t1d0 with the VTOC of boot disk c1t0d0:
    # prtvtoc /dev/dsk/c1t0d0s2 > /var/tmp/VTOC_c1t0d0
    # fmthard -s /var/tmp/VTOC_c1t0d0 /dev/rdsk/c1t1d0s2
    20) Put DB replicas on slice 7 of disk c1t1d0:
    # metadb -a -c 3 /dev/dsk/c1t1d0s7
    21) Create the metadevices for mirror disk c1t1d0 and attach the new mirror side:
    # metainit d120 1 1 c1t1d0s0
    # metattach d100 d120
    # metainit d121 1 1 c1t1d0s1
    # metattach d101 d121
    # metainit d124 1 1 c1t1d0s4
    # metattach d104 d124
    # metainit d125 1 1 c1t1d0s5
    # metattach d105 d125
    # metainit d126 1 1 c1t1d0s6
    # metattach d106 d126

    Read the article

  • Restructure XML nodes using XSLT

    - by Brian
    Looking to use XSLT to transform my XML. The sample XML is as follows: <root> <info> <firstname>Bob</firstname> <lastname>Joe</lastname> </info> <notes> <note>text1</note> <note>text2</note> </notes> <othernotes> <note>text3</note> <note>text4</note> </othernotes> I'm looking to extract all "note" elements, and have them under a parent node "notes". The result I'm looking for is as follows: <root> <info> <firstname>Bob</firstname> <lastname>Joe</lastname> </info> <notes> <note>text1</note> <note>text2</note> <note>text3</note> <note>text4</note> </notes> </root> The XSLT I attempted to use is allowing me to extract all my "note", however, I can't figure out how I can wrap them back within a "notes" node. Here's the XSLT I'm using: <xsl:stylesheet version="1.0" xmlns:xsl="http://www.w3.org/1999/XSL/Transform"> <xsl:output omit-xml-declaration="yes" indent="yes"/> <xsl:template match="notes|othernotes"> <xsl:apply-templates select="note"/> </xsl:template> <xsl:template match="*"> <xsl:copy><xsl:apply-templates/></xsl:copy> </xsl:template> </xsl:stylesheet> The result I'm getting with the above XSLT is: <root> <info> <firstname>Bob</firstname> <lastname>Joe</lastname> </info> <note>text1</note> <note>text2</note> <note>text3</note> <note>text4</note> </root> Thanks
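    One way to get there (a sketch built on the identity transform, using only the element names shown in the question) is to let the first notes element rebuild itself from every note in the document and to drop the othernotes wrapper:

        <xsl:stylesheet version="1.0" xmlns:xsl="http://www.w3.org/1999/XSL/Transform">
          <xsl:output omit-xml-declaration="yes" indent="yes"/>

          <!-- identity: copy everything as-is by default -->
          <xsl:template match="@*|node()">
            <xsl:copy><xsl:apply-templates select="@*|node()"/></xsl:copy>
          </xsl:template>

          <!-- rebuild <notes> so it contains every <note> in the document -->
          <xsl:template match="notes">
            <notes>
              <xsl:apply-templates select="//note"/>
            </notes>
          </xsl:template>

          <!-- the now-redundant wrapper disappears -->
          <xsl:template match="othernotes"/>
        </xsl:stylesheet>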

    Read the article

  • Armchair Linguists: 'code' vs. 'codes'--or why I write 'code' and my manager asks for 'codes'

    - by Ukko
    I wanted to tap into the collective wisdom here to see if I can get some insight into one of my pet peeves, people who treat "code" as a countable noun. Let me also preface this by saying that I am not talking about anyone who speaks English as a second language, this is a native phenomenon. For those of us who slept through grammar class there are two classes of nouns which basically refer to things that are countable and non-countable (sometimes referred to as count and noncount). For instance 'sand' is a non-count noun and 'apple' is count. You can talk about "two apples" but "two sands" does not parse. The bright students then would point out a word like "beer" where it looks like this is violated. Beer as a substance is certainly a non-count noun, but I can ask for "two beers" without offending the grammar police. The reason is that there are actually two words tied up in that one utterance: Definition #1 is a yummy golden substance and Definition #2 is a colloquial term for a container of said substance. #1 is non-count and #2 is countable. This gets to my problem with "codes" as a countable noun. In my mind the code that we programmers write is non-count, "I wrote some code today." When used in the plural like "Have you got the codes" I can only assume that you are asking if I have the cryptographically significant numbers for launching a missile or the like. Every time my peer in marketing asks about when we will have the new codes ready I have a vision of rooms of code breakers going over the latest Enigma coded message. I corrected the usage in all the documents I am asked to review, but then I noticed that our customer was also using the word "codes" when they meant "code". At this point I have realized that there is a significant sub-population that uses "codes" and they seem to be impervious to what I see as the dominant "correct" usage. This is the part I want some help on, has anyone else noticed this phenomenon? Do you know what group it is associated with, old Fortran programmers perhaps? Is it a regionalism? I have become quick to change my terms when I notice a customer's usage, but it would be nice to know if I am sending a proposal somewhere what style they expect. I would hate to get canned with a review of "Ha, these guys must be morons they don't even know 'code' is plural!"

    Read the article

  • How can I bind a instance of a custom class to a WPF TreeListView without bugs?

    - by user327104
    First, from http://blogs.msdn.com/atc_avalon_team/archive/2006/03/01/541206.aspx I get a nice TreeListView. I left the original classes (TreeListView, TreeListViewItem, and LevelToIndentConverter) intact; the only code I have added is in the XAML, here it is: <Style TargetType="{x:Type l:TreeListViewItem}"> .... </ControlTemplate.Triggers> <ControlTemplate.Resources> <HierarchicalDataTemplate DataType="{x:Type local:FileInfo}" ItemsSource="{Binding Childs}"> </HierarchicalDataTemplate> </ControlTemplate.Resources> </ControlTemplate> ... Here is my custom class: public class FileInfo { private List<FileInfo> childs; public FileInfo() { childs = new List<FileInfo>(); } public bool IsExpanded { get; set; } public bool IsSelected { get; set; } public List<FileInfo> Childs { get { return childs; } set { } } public string FileName { get; set; } public string Size { get; set; } } And before I use my custom class I remove all the items inside the treeListView1: <l:TreeListView> <l:TreeListView.Columns> <GridViewColumn Header="Name" CellTemplate="{StaticResource CellTemplate_Name}" /> <GridViewColumn Header="IsAbstract" DisplayMemberBinding="{Binding IsAbstract}" Width="60" /> <GridViewColumn Header="Namespace" DisplayMemberBinding="{Binding Namespace}" /> </l:TreeListView.Columns> </l:TreeListView> So finally I add this code to bind an instance of my class to the TreeListView: private void Window_Loaded(object sender,RoutedEventArgs e) { FileInfo root = new FileInfo() { FileName = "mis hojos", Size="asdf" }; root.Childs.Add(new FileInfo(){FileName="sub", Size="123456" }); treeListView1.Items.Add(root); root.Childs[0].Childs.Add(new FileInfo() { FileName = "asdf" }); root.Childs[0].IsExpanded = true; } So the bug is that the buttons to expand the elements don't appear, and when a node is expanded by double click the child nodes don't look like child nodes. Please F1 F1 F1 F1 F1 F1 F1 F1.
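    Hard to pin down without the full template, but one thing the posted FileInfo class will not do is tell the tree when children are added after binding: List<FileInfo> raises no change notifications, so the grandchild added after the root is already in the tree never updates the expander state. A sketch of the data class using ObservableCollection and INotifyPropertyChanged (property names are the poster's; whether this alone fixes the expander also depends on where the HierarchicalDataTemplate lives, which usually belongs in the TreeListView's resources or item template rather than inside the ControlTemplate):

        using System.Collections.ObjectModel;
        using System.ComponentModel;

        public class FileInfo : INotifyPropertyChanged
        {
            public FileInfo()
            {
                Childs = new ObservableCollection<FileInfo>();
            }

            // ObservableCollection raises change notifications, so rows added after
            // the initial binding still show up in the tree with their expanders.
            public ObservableCollection<FileInfo> Childs { get; private set; }

            public string FileName { get; set; }
            public string Size { get; set; }

            private bool isExpanded;
            public bool IsExpanded
            {
                get { return isExpanded; }
                set { isExpanded = value; OnPropertyChanged("IsExpanded"); }
            }

            public event PropertyChangedEventHandler PropertyChanged;
            private void OnPropertyChanged(string name)
            {
                var handler = PropertyChanged;
                if (handler != null) handler(this, new PropertyChangedEventArgs(name));
            }
        }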

    Read the article

  • How to create nested directories in PhoneGap

    - by Stallman
    I have tried on this, but this didn't satisfy my request at all. I write a new one: var file_system; var fs_root; window.requestFileSystem(LocalFileSystem.PERSISTENT, 1024*1024, onInitFs, request_FS_fail); function onInitFs(fs) { file_system= fs; fs_root= file_system.root; alert("ini fs"); create_Directory(); alert("ini fs done."); } var string_array; var main_dir= "story_repository/"+ User_Editime; string_array= new Array("story_repository/",main_dir, main_dir+"/rec", main_dir+"/img","story_repository/"+ User_Name ); function create_Directory(){ var start= 0; var path=""; while(start < string_array.length) { path = string_array[start]; alert(start+" th created directory " +" is "+ path); fs_root.getDirectory ( path , {create: true, exclusive: false}, function(entry) { alert(path +"is created."); }, create_dir_err() ); start++; }//while loop }//create_Directory function create_dir_err() { alert("Recursively create directories error."); } function request_FS_fail() { alert("Failed to request File System "); } Although the directories are created, the it sends me ErrorCallback:"alert("Recursively create directories error.");" Firstly, I don't think this code will work since I have tried on this: This one failed: window.requestFileSystem( LocalFileSystem.PERSISTENT, 0, //request file system success callback. function(fileSys) { fileSys.root.getDirectory( "story_repository/"+ dir_name, {create: true, exclusive: false}, //Create directory story_repository/Stallman_time. function(directory) { alert("Create directory: "+ "story_repository/"+ dir_name); //create dir_name/img/ fileSys.root.getDirectory { "story_repository/"+ dir_name + "/img/", {create: true, exclusive: false}, function(directory) { alert("Create a directory: "+ "story_repository/"+ dir_name + "/img/"); //check. //create dir_name/rec/ fileSys.root.getDirectory { "story_repository/"+ dir_name + "/rec/", {create: true, exclusive: false}, function(directory) { alert("Create a directory: "+ "story_repository/"+ dir_name + "/rec/"); //check. //Go ahead. }, createError } //create dir_name/rec/ }, createError } //create dir_name/img }, createError); }, //Create directory story_repository/Stallman_time. createError()); } I just repeatedly call fs.root.getDirectory only but it failed. But the first one is almost the same... 1. What is the problem at all? Why does the first one always gives me the ErrorCallback? 2. Why can't the second one work? 3. Does anyone has a better solution?(no ErrorcallBack msg) ps: I work on Android and PhoneGap 1.7.0.
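    Two details stand out when reading the snippets as posted (a guess, not a confirmed diagnosis): in the first version the error handler is invoked on the spot, create_dir_err() with parentheses, instead of being passed as a callback, which by itself would explain the "Recursively create directories error." alert firing even though the directories are created; and in the second version fileSys.root.getDirectory { ... } uses braces where a parenthesised argument list is required. Also, getDirectory only creates one path segment at a time, so nested paths are usually built with a small recursive helper. A sketch (the helper and variable names are made up):

        // Create "a/b/c" under rootEntry one segment at a time.
        function createPath(rootEntry, path, onDone, onError) {
            var segments = path.split('/').filter(function (s) { return s.length > 0; });
            function next(entry, index) {
                if (index >= segments.length) { onDone(entry); return; }
                entry.getDirectory(segments[index], { create: true, exclusive: false },
                    function (child) { next(child, index + 1); },
                    onError);                     // passed as a reference, not called
            }
            next(rootEntry, 0);
        }

        window.requestFileSystem(LocalFileSystem.PERSISTENT, 0, function (fs) {
            createPath(fs.root, "story_repository/" + User_Editime + "/img",
                function () { alert("directories ready"); },
                function (err) { alert("create failed: " + err.code); });
        }, function () { alert("Failed to request File System"); });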

    Read the article

  • Ignore order of elements using xs:extension

    - by Peter Lang
    How can I design my xsd to ignore the sequence of elements? <root> <a/> <b/> </root> <root> <b/> <a/> </root> I need to use extension for code generation reasons, so I tried the following using all: <?xml version="1.0" encoding="UTF-8"?> <xs:schema targetNamespace="http://www.example.com/test" xmlns:xs="http://www.w3.org/2001/XMLSchema" xmlns:t="http://www.example.com/test" > <xs:complexType name="BaseType"> <xs:all> <xs:element name="a" type="xs:string" /> </xs:all> </xs:complexType> <xs:complexType name="ExtendedType"> <xs:complexContent> <xs:extension base="t:BaseType"> <xs:all> <!-- ERROR --> <xs:element name="b" type="xs:string" /> </xs:all> </xs:extension> </xs:complexContent> </xs:complexType> <xs:element name="root" type="t:ExtendedType"></xs:element> </xs:schema> This xsd is not valid though, the following error is reported at <!-- ERROR -->: cos-all-limited.1.2: An all model group must appear in a particle with {min occurs} = {max occurs} = 1, and that particle must be part of a pair which constitutes the {content type} of a complex type definition. Documentation of cos-all-limited.1.2 says: 1.2 the {term} property of a particle with {max occurs}=1 which is part of a pair which constitutes the {content type} of a complex type definition. I don't really understand this (neither xsd nor English native speaker :) ). Am I doing the wrong thing, am I doing the right thing wrong, or is there no way to achieve this? Thanks!
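    The message points at a genuine XSD 1.0 limitation rather than a slip in the schema: an xs:all group must make up the whole content model of a type, so it cannot be merged with the all group inherited through xs:extension (XSD 1.1 lifts this restriction). If the extension relationship is the hard requirement, the usual compromise is to accept a fixed element order, sketched below; if order-insensitivity is the hard requirement, a single type with one xs:all (no extension) or validating with an XSD 1.1 processor are the alternatives.

        <!-- Sketch: extension kept, but element order becomes fixed (a before b) -->
        <xs:complexType name="BaseType">
          <xs:sequence>
            <xs:element name="a" type="xs:string"/>
          </xs:sequence>
        </xs:complexType>

        <xs:complexType name="ExtendedType">
          <xs:complexContent>
            <xs:extension base="t:BaseType">
              <xs:sequence>
                <xs:element name="b" type="xs:string"/>
              </xs:sequence>
            </xs:extension>
          </xs:complexContent>
        </xs:complexType>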

    Read the article

  • mounting ext4 fs with block size of 65536

    - by seaquest
    I am doing some benchmarking on EXT4 performance on Compact Flash media. I have created an ext4 fs with block size of 65536. however I can not mount it on ubuntu-10.10-netbook-i386. (it is already mounting ext4 fs with 4096 bytes of block sizes) According to my readings on ext4 it should allow such big block sized fs. I want to hear your comments. root@ubuntu:~# mkfs.ext4 -b 65536 /dev/sda3 Warning: blocksize 65536 not usable on most systems. mke2fs 1.41.12 (17-May-2010) mkfs.ext4: 65536-byte blocks too big for system (max 4096) Proceed anyway? (y,n) y Warning: 65536-byte blocks too big for system (max 4096), forced to continue Filesystem label= OS type: Linux Block size=65536 (log=6) Fragment size=65536 (log=6) Stride=0 blocks, Stripe width=0 blocks 19968 inodes, 19830 blocks 991 blocks (5.00%) reserved for the super user First data block=0 1 block group 65528 blocks per group, 65528 fragments per group 19968 inodes per group Writing inode tables: done Creating journal (1024 blocks): done Writing superblocks and filesystem accounting information: done This filesystem will be automatically checked every 37 mounts or 180 days, whichever comes first. Use tune2fs -c or -i to override. root@ubuntu:~# tune2fs -l /dev/sda3 tune2fs 1.41.12 (17-May-2010) Filesystem volume name: <none> Last mounted on: <not available> Filesystem UUID: 4cf3f507-e7b4-463c-be11-5b408097099b Filesystem magic number: 0xEF53 Filesystem revision #: 1 (dynamic) Filesystem features: has_journal ext_attr resize_inode dir_index filetype extent flex_bg sparse_super large_file huge_file uninit_bg dir_nlink extra_isize Filesystem flags: signed_directory_hash Default mount options: (none) Filesystem state: clean Errors behavior: Continue Filesystem OS type: Linux Inode count: 19968 Block count: 19830 Reserved block count: 991 Free blocks: 18720 Free inodes: 19957 First block: 0 Block size: 65536 Fragment size: 65536 Blocks per group: 65528 Fragments per group: 65528 Inodes per group: 19968 Inode blocks per group: 78 Flex block group size: 16 Filesystem created: Sat Feb 5 14:39:55 2011 Last mount time: n/a Last write time: Sat Feb 5 14:40:02 2011 Mount count: 0 Maximum mount count: 37 Last checked: Sat Feb 5 14:39:55 2011 Check interval: 15552000 (6 months) Next check after: Thu Aug 4 14:39:55 2011 Lifetime writes: 70 MB Reserved blocks uid: 0 (user root) Reserved blocks gid: 0 (group root) First inode: 11 Inode size: 256 Required extra isize: 28 Desired extra isize: 28 Journal inode: 8 Default directory hash: half_md4 Directory Hash Seed: afb5b570-9d47-4786-bad2-4aacb3b73516 Journal backup: inode blocks root@ubuntu:~# mount -t ext4 /dev/sda3 /mnt/ mount: wrong fs type, bad option, bad superblock on /dev/sda3, missing codepage or helper program, or other error In some cases useful info is found in syslog - try dmesg | tail or so
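    The mkfs warning is the real answer here: Linux cannot mount a filesystem whose block size is larger than the CPU page size, which is 4 KiB on i386/x86_64, so a 65536-byte-block ext4 filesystem can be created but not mounted on this netbook; only architectures with larger pages can mount it. A quick check and the practical workaround (the exact dmesg wording varies by kernel):

        getconf PAGESIZE              # 4096 on i386/x86_64, the maximum mountable block size
        dmesg | tail                  # the failed mount usually logs the unsupported block size
        mkfs.ext4 -b 4096 /dev/sda3   # recreate with a block size the kernel can mount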

    Read the article

  • Infinite loop when adding a row to a list in a class in python3

    - by Margaret
    I have a script which contains two classes. (I'm obviously deleting a lot of stuff that I don't believe is relevant to the error I'm dealing with.) The eventual task is to create a decision tree, as I mentioned in this question. Unfortunately, I'm getting an infinite loop, and I'm having difficulty identifying why. I've identified the line of code that's going haywire, but I would have thought the iterator and the list I'm adding to would be different objects. Is there some side effect of list's .append functionality that I'm not aware of? Or am I making some other blindingly obvious mistake? class Dataset: individuals = [] #Becomes a list of dictionaries, in which each dictionary is a row from the CSV with the headers as keys def field_set(self): #Returns a list of the fields in individuals[] that can be used to split the data (i.e. have more than one value amongst the individuals def classified(self, predicted_value): #Returns True if all the individuals have the same value for predicted_value def fields_exhausted(self, predicted_value): #Returns True if all the individuals are identical except for predicted_value def lowest_entropy_value(self, predicted_value): #Returns the field that will reduce <a href="http://en.wikipedia.org/wiki/Entropy_%28information_theory%29">entropy</a> the most def __init__(self, individuals=[]): and class Node: ds = Dataset() #The data that is associated with this Node links = [] #List of Nodes, the offspring Nodes of this node level = 0 #Tree depth of this Node split_value = '' #Field used to split out this Node from the parent node node_value = '' #Value used to split out this Node from the parent Node def split_dataset(self, split_value): fields = [] #List of options for split_value amongst the individuals datasets = {} #Dictionary of Datasets, each one with a value from fields[] as its key for field in self.ds.field_set()[split_value]: #Populates the keys of fields[] fields.append(field) datasets[field] = Dataset() for i in self.ds.individuals: #Adds individuals to the datasets.dataset that matches their result for split_value datasets[i[split_value]].individuals.append(i) #<---Causes an infinite loop on the second hit for field in fields: #Creates subnodes from each of the datasets.Dataset options self.add_subnode(datasets[field],split_value,field) def add_subnode(self, dataset, split_value='', node_value=''): def __init__(self, level, dataset=Dataset()): My initialisation code is currently: if __name__ == '__main__': filename = (sys.argv[1]) #Takes in a CSV file predicted_value = "# class" #Identifies the field from the CSV file that should be predicted base_dataset = parse_csv(filename) #Turns the CSV file into a list of lists parsed_dataset = individual_list(base_dataset) #Turns the list of lists into a list of dictionaries root = Node(0, Dataset(parsed_dataset)) #Creates a root node, passing it the full dataset root.split_dataset(root.ds.lowest_entropy_value(predicted_value)) #Performs the first split, creating multiple subnodes n = root.links[0] n.split_dataset(n.ds.lowest_entropy_value(predicted_value)) #Attempts to split the first subnode.
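    The infinite loop is consistent with a classic Python pitfall visible in the excerpt (a hedged diagnosis, since the full script is not shown): individuals = [] and links = [] are class attributes, and individuals=[] / dataset=Dataset() are mutable defaults evaluated once, so every Dataset instance shares one list. The dataset being iterated and the dataset being appended to can therefore be the same object, and appending inside "for i in self.ds.individuals" grows the very list being iterated. A sketch of the instance-attribute form (class and attribute names from the question, other methods elided):

        class Dataset:
            def __init__(self, individuals=None):
                # A fresh list per instance; a shared class-level list (or a mutable
                # default argument) would be reused by every Dataset ever created.
                self.individuals = list(individuals) if individuals else []


        class Node:
            def __init__(self, level, dataset=None):
                self.ds = dataset if dataset is not None else Dataset()
                self.links = []          # per-instance, not shared across Nodes
                self.level = level
                self.split_value = ''
                self.node_value = ''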

    Read the article

  • Can XPath concatenate two nodeset values? (for use in XForms)

    - by iHeartGreek
    Hi! I am wanting to concatenate two nodeset values using XPath in XForms. I know that XPath has a concat(string, string) function, but how would I go about concatenating two nodeset values? BEGIN EDIT: I tried concat function.. I tried this.. and variations of it to make it work, but it doesn't <xf:value ref="concat(instance('param_choices')/choice/root, .)"/> END EDIT Below is a simplified code example of what I am trying to achieve. XForms model: <xf:instance id="param_choices" xmlns=""> <choices> <root label="Param Choices">/param</root> <choice label="Name">/@AAA</choice> <choice label="Value">/@BBB</choice> </choices> </xf:instance> XForms ui code that I currently have: <xf:select ref="instance('criteria_data')/criteria/criterion" appearance="full"> <xf:label>Param choices:</xf:label> <br/> <xf:itemset nodeset="instance('param_choices')/choice"> <xf:label ref="@label"></xf:label> <xf:value ref="."></xf:value> </xf:itemset> </xf:select> (if user selects "Name" checkbox..) the XML output is: <criterion>/@BBB</criterion> However! I want to combine the root nodeset value with the current choice nodeset value. Essentially: <xf:value ref="(instance('definition_choices')/choice/root) + ."/> to achieve the following XML output: <criterion>/param/@BBB</criterion> Any suggestions on how to do this? (I am fairly new to XPath and XForms) p.s. what I am asking makes sense to me when I typed it out, but if you have trouble figuring out what I'm asking, just let me know.. thanks!

    Read the article

  • HTML Aligning Text

    - by nevillejones
    I want to display text on a page like in the following way: My Text: Text Here My Text: More Text Here......................................................... Text from line above continued here. I have the following markup just to test: <html> <head> <style type="text/css"> body { font-family: arial; } form div { padding: 3px; } form .label { float: left; text-align: right; width: 150px; margin-right: 8px; } </style> </head> <body> <form> <div class="label">My Text:</div> <div>Aenean tellus diam, pharetra sed posuere sed, ullamcorper eget enim. Suspendisse quis posuere nisi. Integer sodales neque id erat luctus suscipit. Curabitur in nisi arcu. Curabitur suscipit tellus non lectus blandit non mollis risus aliquet. Proin et felis nulla. Integer ut felis nibh, eu condimentum elit. Sed tincidunt fermentum lorem, convallis ornare ipsum mattis sed. Vestibulum vel quam sed velit condimentum volutpat eu sed enim. Duis tincidunt, turpis id suscipit molestie, erat tortor tincidunt augue, eu venenatis erat neque nec nisi. Nunc malesuada bibendum elit eu bibendum.</div> <div class="label">My Text 2:</div> <div>Lorem ipsum dolor sit amet, consectetur adipiscing elit. Nulla suscipit arcu ut risus dapibus tincidunt et ut orci. Vestibulum vitae urna in ligula fringilla facilisis aliquet vel nisl. Quisque placerat risus eget arcu fermentum at consectetur arcu lobortis. In hac habitasse platea dictumst. In in libero non justo pellentesque cursus quis non nisi. Donec sit amet pharetra ipsum. Class aptent taciti sociosqu ad litora torquent per conubia nostra, per inceptos himenaeos. Nulla enim enim, fringilla vitae sodales id, ultrices non lacus. Etiam id augue ut dui convallis hendrerit. Vivamus urna justo, dignissim in suscipit eu, facilisis a libero. Class aptent taciti sociosqu ad litora torquent per conubia nostra, per inceptos himenaeos. .</div> </form> </body> When this is rendered about half the text is displayed underneath the corresponding "label" class div. I would like all the text to be appear to the right of the "label" div.
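    One way to get that layout (a sketch; the .value class is an addition, not in the posted markup) is to keep the label floated and stop the text from wrapping underneath it, either by giving the text block overflow: hidden, which creates a new block formatting context, or by giving it a left margin wider than the label:

        form .label {
            float: left;
            width: 150px;
            margin-right: 8px;
            text-align: right;
        }
        /* wrap each text block in <div class="value">...</div> */
        form .value {
            overflow: hidden;           /* keeps the text beside the floated label */
            /* or, without overflow:  margin-left: 158px;  (label width + its margin) */
        }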

    Read the article

  • Lighttpd + fastcgi + python (for django) slow on first request

    - by EagleOne
    I'm having a problem with a django website I host with lighttpd + fastcgi. It works great but it seems that the first request always takes up to 3seconds. Subsequent requests are much faster (<1s). I activated access logs in lighttpd in order to track the issue. But I'm kind of stuck. Here are logs where I 'lose' 4s (from 10:04:17 to 10:04:21): 2012-12-01 10:04:17: (mod_fastcgi.c.3636) handling it in mod_fastcgi 2012-12-01 10:04:17: (response.c.470) -- before doc_root 2012-12-01 10:04:17: (response.c.471) Doc-Root : /var/www 2012-12-01 10:04:17: (response.c.472) Rel-Path : /finderauto.fcgi 2012-12-01 10:04:17: (response.c.473) Path : 2012-12-01 10:04:17: (response.c.521) -- after doc_root 2012-12-01 10:04:17: (response.c.522) Doc-Root : /var/www 2012-12-01 10:04:17: (response.c.523) Rel-Path : /finderauto.fcgi 2012-12-01 10:04:17: (response.c.524) Path : /var/www/finderauto.fcgi 2012-12-01 10:04:17: (response.c.541) -- logical -> physical 2012-12-01 10:04:17: (response.c.542) Doc-Root : /var/www 2012-12-01 10:04:17: (response.c.543) Rel-Path : /finderauto.fcgi 2012-12-01 10:04:17: (response.c.544) Path : /var/www/finderauto.fcgi 2012-12-01 10:04:21: (response.c.128) Response-Header: HTTP/1.1 200 OK Last-Modified: Sat, 01 Dec 2012 09:04:21 GMT Expires: Sat, 01 Dec 2012 09:14:21 GMT Content-Type: text/html; charset=utf-8 Cache-Control: max-age=600 Transfer-Encoding: chunked Date: Sat, 01 Dec 2012 09:04:21 GMT Server: lighttpd/1.4.28 I guess that if there is a problem, it's whith my configuration. So here is the way I launch my django app: python manage.py runfcgi method=threaded host=127.0.0.1 port=3033 And here is my lighttpd conf: server.modules = ( "mod_access", "mod_alias", "mod_compress", "mod_redirect", "mod_rewrite", "mod_fastcgi", "mod_accesslog", ) server.document-root = "/var/www" server.upload-dirs = ( "/var/cache/lighttpd/uploads" ) server.errorlog = "/var/log/lighttpd/error.log" server.pid-file = "/var/run/lighttpd.pid" server.username = "www-data" server.groupname = "www-data" accesslog.filename = "/var/log/lighttpd/access.log" debug.log-request-header = "enable" debug.log-response-header = "enable" debug.log-file-not-found = "enable" debug.log-request-handling = "enable" debug.log-timeouts = "enable" debug.log-ssl-noise = "enable" debug.log-condition-cache-handling = "enable" debug.log-condition-handling = "enable" fastcgi.server = ( "/finderauto.fcgi" => ( "main" => ( # Use host / port instead of socket for TCP fastcgi "host" => "127.0.0.1", "port" => 3033, #"socket" => "/home/finderadmin/finderauto.sock", "check-local" => "disable", "fix-root-scriptname" => "enable", ) ), ) alias.url = ( "/media" => "/home/user/django/contrib/admin/media/", ) url.rewrite-once = ( "^(/media.*)$" => "$1", "^/favicon\.ico$" => "/media/favicon.ico", "^(/.*)$" => "/finderauto.fcgi$1", ) index-file.names = ( "index.php", "index.html", "index.htm", "default.htm", " index.lighttpd.html" ) url.access-deny = ( "~", ".inc" ) static-file.exclude-extensions = ( ".php", ".pl", ".fcgi" ) ## Use ipv6 if available #include_shell "/usr/share/lighttpd/use-ipv6.pl" dir-listing.encoding = "utf-8" server.dir-listing = "enable" compress.cache-dir = "/var/cache/lighttpd/compress/" compress.filetype = ( "application/x-javascript", "text/css", "text/html", "text/plain" ) include_shell "/usr/share/lighttpd/create-mime.assign.pl" include_shell "/usr/share/lighttpd/include-conf-enabled.pl" If any of you could help me finding out where I lose these 3 or 4 s. I would much appreciate. Thanks in advance!
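    Nothing in the log rules it out, so this is only a guess: with an external FastCGI backend the first request often pays for importing Django and the project's modules in a freshly started worker, and flup's spawner options can keep pre-warmed workers around. A sketch of the kind of invocation to experiment with (the option names are the flup/runfcgi options of that Django generation; the numbers are placeholders):

        python manage.py runfcgi method=prefork host=127.0.0.1 port=3033 \
            minspare=2 maxspare=4 maxchildren=8 maxrequests=500

    Hitting the site once with curl after each restart (a warm-up request) is the low-tech alternative if the delay only ever affects the very first hit.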

    Read the article

  • Nginx + Nagios : 502 Bad gateway

    - by MrROY
    I have a fully new install nagios, but I can't access to it. Here's my Nginx config: server{ listen 80; server_name 61.148.45.10; # blahblah # Nagios Monitoring location /nagios3/ { proxy_pass http://127.0.0.1:80; } } Nagios is installed step by step(From this Linode guide): sudo apt-get install -y nagios3 Then I try to visit http://ip-address/nagios3/, but it shows 502 bad gateway. How do I deal with this ? This is my /var/log/syslog: Oct 25 14:18:17 my-server nagios3: SERVICE ALERT: localhost;Disk Space;WARNING;SOFT;1;DISK WARNING - free space: /boot 43 MB (20% inode=99%): Oct 25 14:19:07 my-server nagios3: SERVICE ALERT: localhost;HTTP;WARNING;SOFT;1;HTTP WARNING: HTTP/1.1 403 Forbidden - 319 bytes in 0.000 second response time Oct 25 14:19:17 my-server nagios3: SERVICE ALERT: localhost;Disk Space;WARNING;SOFT;2;DISK WARNING - free space: /boot 43 MB (20% inode=99%): Oct 25 14:20:07 my-server nagios3: SERVICE ALERT: localhost;HTTP;WARNING;SOFT;2;HTTP WARNING: HTTP/1.1 403 Forbidden - 319 bytes in 0.000 second response time Oct 25 14:20:17 my-server nagios3: SERVICE ALERT: localhost;Disk Space;WARNING;SOFT;3;DISK WARNING - free space: /boot 43 MB (20% inode=99%): Oct 25 14:21:07 my-server nagios3: SERVICE ALERT: localhost;HTTP;WARNING;SOFT;3;HTTP WARNING: HTTP/1.1 403 Forbidden - 319 bytes in 0.000 second response time Oct 25 14:21:17 my-server nagios3: SERVICE ALERT: localhost;Disk Space;WARNING;HARD;4;DISK WARNING - free space: /boot 43 MB (20% inode=99%): Oct 25 14:21:17 my-server nagios3: SERVICE NOTIFICATION: root;localhost;Disk Space;WARNING;notify-service-by-email;DISK WARNING - free space: /boot 43 MB (20% inode=99%): Oct 25 14:21:17 my-server postfix/pickup[24474]: 4F89F394034C: uid=109 from=<nagios> Oct 25 14:21:17 my-server postfix/cleanup[27756]: 4F89F394034C: message-id=<20131025062117.4F89F394034C@my-server> Oct 25 14:21:17 my-server postfix/qmgr[24475]: 4F89F394034C: from=<nagios@[email protected]>, size=594, nrcpt=1 (queue active) Oct 25 14:21:17 my-server postfix/local[27758]: 4F89F394034C: to=<root@localhost>, relay=local, delay=0.15, delays=0.11/0/0/0.04, dsn=2.0.0, status=sent (delivered to mailbox) Oct 25 14:21:17 my-server postfix/qmgr[24475]: 4F89F394034C: removed Oct 25 14:22:07 my-server nagios3: SERVICE ALERT: localhost;HTTP;WARNING;HARD;4;HTTP WARNING: HTTP/1.1 403 Forbidden - 319 bytes in 0.000 second response time Oct 25 14:22:07 my-server nagios3: SERVICE NOTIFICATION: root;localhost;HTTP;WARNING;notify-service-by-email;HTTP WARNING: HTTP/1.1 403 Forbidden - 319 bytes in 0.000 second response time Oct 25 14:22:07 my-server postfix/pickup[24474]: 219CA3940381: uid=109 from=<nagios> Oct 25 14:22:07 my-server postfix/cleanup[27756]: 219CA3940381: message-id=<20131025062207.219CA3940381@my-server> Oct 25 14:22:07 my-server postfix/qmgr[24475]: 219CA3940381: from=<nagios@[email protected]>, size=605, nrcpt=1 (queue active) Oct 25 14:22:07 my-server postfix/local[27758]: 219CA3940381: to=<root@localhost>, relay=local, delay=0.12, delays=0.07/0/0/0.05, dsn=2.0.0, status=sent (delivered to mailbox) Oct 25 14:22:07 my-server postfix/qmgr[24475]: 219CA3940381: removed Oct 25 14:39:01 my-server CRON[28242]: (root) CMD ( [ -x /usr/lib/php5/maxlifetime ] && [ -d /var/lib/php5 ] && find /var/lib/php5/ -depth -mindepth 1 -maxdepth 1 -type f -cmin +$(/usr/lib/php5/maxlifetime) ! 
-execdir fuser -s {} 2>/dev/null \; -delete) And there're lot of 127.0.0.1 visit in nginx log, but I actually visit from a external ip: 127.0.0.1 - - [25/Oct/2013:14:21:02 +0800] "GET /nagios3/ HTTP/1.0" 502 575 "-" "Mozilla/5.0 (Macintosh; Intel Mac OS X 10_8_5) AppleWebKit/537.36 (KHTML, like Gecko) Chrome/3 0.0.1599.69 Safari/537.36" 127.0.0.1 - - [25/Oct/2013:14:21:02 +0800] "GET /nagios3/ HTTP/1.0" 502 575 "-" "Mozilla/5.0 (Macintosh; Intel Mac OS X 10_8_5) AppleWebKit/537.36 (KHTML, like Gecko) Chrome/3 0.0.1599.69 Safari/537.36" 127.0.0.1 - - [25/Oct/2013:14:21:02 +0800] "GET /nagios3/ HTTP/1.0" 502 575 "-" "Mozilla/5.0 (Macintosh; Intel Mac OS X 10_8_5) AppleWebKit/537.36 (KHTML, like Gecko) Chrome/3 0.0.1599.69 Safari/537.36"
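    The repeated 127.0.0.1 lines are a strong hint at the cause: location /nagios3/ proxies to 127.0.0.1:80, which is nginx itself, so every request re-enters the same location until the proxy gives up with a 502. On Debian, apt-get install nagios3 wires the web UI into Apache, so one fix (sketched with an assumed port 8080, adjust to the local setup) is to move Apache off port 80 and point the proxy at it:

        # /etc/apache2/ports.conf -- keep Apache for the Nagios CGIs, off port 80
        Listen 127.0.0.1:8080

        # nginx server block
        location /nagios3/ {
            proxy_pass http://127.0.0.1:8080;
            proxy_set_header Host $host;
            proxy_set_header X-Real-IP $remote_addr;
        }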

    Read the article

< Previous Page | 197 198 199 200 201 202 203 204 205 206 207 208  | Next Page >