Search Results

Search found 24172 results on 967 pages for 'mongodb update'.


  • How can I remove old kernels/install new ones when /boot is full?

    - by Marcel
    I know this question is asked many times before, however with me it is just a bit different I guess. # df -h Filesystem Size Used Avail Use% Mounted on /dev/sda3 224G 5.2G 208G 3% / udev 1.9G 4.0K 1.9G 1% /dev tmpfs 777M 260K 777M 1% /run none 5.0M 0 5.0M 0% /run/lock none 1.9G 0 1.9G 0% /run/shm /dev/sda2 90M 88M 0 100% /boot /dev/sda6 1.9G 514M 1.3G 29% /tmp My boot partition is full. Current Kernel: # uname -r 3.2.0-35-generic All Kernels: # dpkg --list | grep linux-image ii linux-image-3.2.0-32-generic 3.2.0-32.51 Linux kernel image for version 3.2.0 on 64 bit x86 SMP ii linux-image-3.2.0-34-generic 3.2.0-34.53 Linux kernel image for version 3.2.0 on 64 bit x86 SMP ii linux-image-3.2.0-35-generic 3.2.0-35.55 Linux kernel image for version 3.2.0 on 64 bit x86 SMP iF linux-image-3.2.0-37-generic 3.2.0-37.58 Linux kernel image for version 3.2.0 on 64 bit x86 SMP iF linux-image-3.2.0-38-generic 3.2.0-38.60 Linux kernel image for version 3.2.0 on 64 bit x86 SMP iU linux-image-generic 3.2.0.37.45 Generic Linux kernel image So I thought of removing the 3.2.0.32-generic kernel with: # sudo apt-get purge linux-image-3.2.0-32-generic Reading package lists... Done Building dependency tree Reading state information... Done You might want to run 'apt-get -f install' to correct these: The following packages have unmet dependencies: linux-generic : Depends: linux-headers-generic (= 3.2.0.37.45) but 3.2.0.38.46 is to be installed E: Unmet dependencies. Try 'apt-get -f install' with no packages (or specify a solution). No success. When I try apt-get -f install it also fails: # apt-get -f install Reading package lists... Done Building dependency tree Reading state information... Done Correcting dependencies... Done The following packages were automatically installed and are no longer required: linux-headers-3.2.0-34 linux-headers-3.2.0-35 linux-headers-3.2.0-34-generic linux-headers-3.2.0-35-generic Use 'apt-get autoremove' to remove them. The following extra packages will be installed: linux-generic linux-image-generic The following packages will be upgraded: linux-generic linux-image-generic 2 upgraded, 0 newly installed, 0 to remove and 9 not upgraded. 5 not fully installed or removed. Need to get 0 B/4,334 B of archives. After this operation, 0 B of additional disk space will be used. Do you want to continue [Y/n]? y Setting up initramfs-tools (0.99ubuntu13.1) ... update-initramfs: deferring update (trigger activated) Setting up linux-image-3.2.0-37-generic (3.2.0-37.58) ... Running depmod. update-initramfs: deferring update (hook will be called later) The link /initrd.img is a dangling linkto /boot/initrd.img-3.2.0-38-generic Examining /etc/kernel/postinst.d. run-parts: executing /etc/kernel/postinst.d/initramfs-tools 3.2.0-37-generic /boot/vmlinuz-3.2.0-37-generic update-initramfs: Generating /boot/initrd.img-3.2.0-37-generic gzip: stdout: No space left on device E: mkinitramfs failure cpio 141 gzip 1 update-initramfs: failed for /boot/initrd.img-3.2.0-37-generic with 1. run-parts: /etc/kernel/postinst.d/initramfs-tools exited with return code 1 Failed to process /etc/kernel/postinst.d at /var/lib/dpkg/info/linux-image-3.2.0-37-generic.postinst line 1010. dpkg: error processing linux-image-3.2.0-37-generic (--configure): subprocess installed post-installation script returned error exit status 2 Setting up linux-image-3.2.0-38-generic (3.2.0-38.60) ... Running depmod. 
update-initramfs: deferring update (hook will be called later) The link /initrd.img is a dangling linkto /boot/initrd.img-3.2.0-37-generic Examining /etc/kernel/postinst.d. run-parts: executing /etc/kernel/postinst.d/initramfs-tools 3.2.0-38-generic /boot/vmlinuz-3.2.0-38-generic update-initramfs: Generating /boot/initrd.img-3.2.0-38-generic gzip: stdout: No space left on device E: mkinitramfs failure cpio 141 gzip 1 update-initramfs: failed for /boot/initrd.img-3.2.0-38-generic with 1. run-parts: /etc/kernel/postinst.d/initramfs-tools exited with return code 1 Failed to process /etc/kernel/postinst.d at /var/lib/dpkg/info/linux-image-3.2.0-38-generic.postinst line 1010. dpkg: error processing linux-image-3.2.0-38-generic (--configure): subprocess installed post-installation script returned error exit status 2 dpkg: dependency problems prevent configuration of linux-image-generic: linux-image-generic depends on linux-image-3.2.0-37-generic; however: Package linux-image-3.2.0-37-generic is not configured yet. dpkg: error processing linux-image-generic (--configure): dependency problems - leaving unconfigured dpkg: dependency problems prevent configuration of linux-generic: linux-generic depends on linux-image-generic (= 3.2.0.37.45); however: Package linux-image-generic is not configured yet. linux-generic depends on linux-headers-generic (= 3.2.0.37.45); however: Version of linux-headers-generic on system is 3.2.0.38.46. dpkg: error processing linux-generic (--configure): dependency problems - leaving unconfigured Processing triggers for initramfs-tools ... No apport report written because the error message indicates its a followup error from a previous failure. No apport report written because MaxReports is reached already update-initramfs: Generating /boot/initrd.img-3.2.0-35-generic gzip: stdout: No space left on device E: mkinitramfs failure cpio 141 gzip 1 update-initramfs: failed for /boot/initrd.img-3.2.0-35-generic with 1. dpkg: error processing initramfs-tools (--configure): subprocess installed post-installation script returned error exit status 1 No apport report written because MaxReports is reached already Errors were encountered while processing: linux-image-3.2.0-37-generic linux-image-3.2.0-38-generic linux-image-generic linux-generic initramfs-tools E: Sub-process /usr/bin/dpkg returned an error code (1) Any help would really be appreciated. Update: I did: sudo rm /boot/*-3.2.0-32-generic /boot/*-3.2.0-34-generic After that the following problem with apt-get -f install: root@localhost:/# apt-get -f install Reading package lists... Done Building dependency tree Reading state information... Done Correcting dependencies... Done The following extra packages will be installed: linux-generic The following packages will be upgraded: linux-generic 1 upgraded, 0 newly installed, 0 to remove and 9 not upgraded. 1 not fully installed or removed. Need to get 0 B/1,722 B of archives. After this operation, 0 B of additional disk space will be used. Do you want to continue [Y/n]? y dpkg: dependency problems prevent configuration of linux-generic: linux-generic depends on linux-image-generic (= 3.2.0.37.45); however: Version of linux-image-generic on system is 3.2.0.38.46. linux-generic depends on linux-headers-generic (= 3.2.0.37.45); however: Version of linux-headers-generic on system is 3.2.0.38.46. 
dpkg: error processing linux-generic (--configure): dependency problems - leaving unconfigured No apport report written because the error message indicates its a followup error from a previous failure. Errors were encountered while processing: linux-generic E: Sub-process /usr/bin/dpkg returned an error code (1)
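    The update above shows the usual way out: delete the files of one or two old kernels from /boot by hand, re-run apt-get -f install, and then purge the now-orphaned linux-image packages so /boot does not fill up again. As a rough illustration of that clean-up step only (not taken from this thread, and the package-matching logic is an assumption), here is a small Python sketch that lists installed kernel image packages other than the running one and prints a purge command to review before running anything:

        #!/usr/bin/env python
        # Sketch: list installed linux-image packages that do not match the running
        # kernel, so they can be reviewed and purged to free space on /boot.
        # Assumes a Debian/Ubuntu system with dpkg-query, uname and apt-get.
        import subprocess

        def installed_kernel_packages():
            out = subprocess.check_output(
                ["dpkg-query", "-W", "-f", "${Package} ${Status}\n", "linux-image-*"]
            ).decode()
            for line in out.splitlines():
                fields = line.split()
                pkg, state = fields[0], fields[-1]
                # keep only fully installed, versioned kernel images
                if state == "installed" and pkg != "linux-image-generic":
                    yield pkg

        def removable_kernels():
            running = subprocess.check_output(["uname", "-r"]).decode().strip()
            return [p for p in installed_kernel_packages() if running not in p]

        if __name__ == "__main__":
            old = removable_kernels()
            if old:
                print("Review, then run:")
                print("  sudo apt-get purge " + " ".join(old))
            else:
                print("No old kernel image packages found.")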

    Read the article

  • Big Data – Operational Databases Supporting Big Data – Key-Value Pair Databases and Document Databases – Day 13 of 21

    - by Pinal Dave
    In yesterday’s blog post we learned the importance of the relational database and the NoSQL database in the Big Data story. In this article we will look at the role of key-value pair databases and document databases in supporting the Big Data story. Here are the operational databases we are covering: Relational Databases (yesterday’s post), NoSQL Databases (yesterday’s post), Key-Value Pair Databases (this post), Document Databases (this post), Columnar Databases (tomorrow’s post), Graph Databases (tomorrow’s post), Spatial Databases (tomorrow’s post).
    Key-Value Pair Databases: Key-value pair databases are also known as KVP databases. A key is a field name or attribute, an identifier; the content of that field is its value, the data being identified and stored. KVP databases are a very simple implementation of NoSQL database concepts. Because they have no schema, they are very flexible as well as scalable. The disadvantage of key-value pair (KVP) databases is that they do not follow the ACID (Atomicity, Consistency, Isolation, Durability) properties, and they require data architects to plan for data placement, replication, and high availability. In KVP databases the data is stored as strings. Here is a simple example of what a key-value database looks like: Name = Pinal Dave, Color = Blue, Twitter = @pinaldave, Name = Nupur Dave, Movie = The Hero. As the number of users grows, a key-value pair database starts getting difficult to manage. As there is no specific schema or set of rules associated with the database, there is also a chance that the database grows exponentially. It is very important to select a key-value pair database that offers an additional set of tools to manage the data and provides finer control over various business aspects of it.
    Riak: Riak is one of the most popular key-value databases. It is known for its scalability and performance with high-volume, high-velocity data. Additionally, it implements a mechanism for collecting keys and values which further helps in building a manageable system. We will discuss Riak further in future blog posts. Key-value databases are a good choice for social media, communities, and caching layers connecting other databases. In simpler words, whenever we need flexible data storage while keeping scalability in mind, KVP databases are good options to consider.
    Document Databases: There are two different kinds of document databases: 1) storage of full document content (web pages, Word docs, etc.) and 2) storage of document components. It is the second type of document database we are talking about here. These databases use JavaScript Object Notation (JSON) and Binary JSON (BSON) for the structure of the documents. JSON is very easy to understand and very easy to write from applications. There are two major JSON structures used for document databases: 1) name-value pairs and 2) ordered lists. MongoDB and CouchDB are two of the most popular open-source non-relational document databases.
    MongoDB: MongoDB databases are made up of collections. Each collection is built of documents and each document is composed of fields. MongoDB collections can be indexed for optimal performance. The MongoDB ecosystem is highly available, supports query services as well as MapReduce, and is often used in high-volume content management systems.
    CouchDB: CouchDB databases are composed of documents, which consist of fields and attachments (known as the description). It supports ACID properties. The main attraction of CouchDB is that it will continue to operate even when network connectivity is sketchy; because of this, CouchDB favors local data storage. A document database is a good choice when users have to generate dynamic reports from elements which change very frequently. A good example of document database usage is real-time analytics in social networking or content management systems.
    Tomorrow: In tomorrow’s blog post we will discuss various other operational databases supporting Big Data. Reference: Pinal Dave (http://blog.sqlauthority.com) Filed under: Big Data, PostADay, SQL, SQL Authority, SQL Query, SQL Server, SQL Tips and Tricks, T SQL
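    The document model described above maps directly onto a few lines of driver code. As a quick illustration only (the article itself shows no code; the pymongo driver, the database and collection names, and a locally running mongod are all assumptions here), this is how a document is inserted and then updated in place:

        # Minimal sketch of MongoDB's document model using pymongo (assumed driver);
        # requires a mongod instance listening on localhost:27017.
        from pymongo import MongoClient

        client = MongoClient("mongodb://localhost:27017")
        people = client["demo_db"]["people"]   # a database holds collections; a collection holds documents

        # A document is just a set of fields, much like the key-value rows above.
        people.insert_one({"name": "Pinal Dave", "color": "Blue", "twitter": "@pinaldave"})

        # An in-place update: set a new field on the matching document,
        # inserting the document first if it does not already exist (upsert).
        people.update_one(
            {"name": "Pinal Dave"},
            {"$set": {"movie": "The Hero"}},
            upsert=True,
        )

        print(people.find_one({"name": "Pinal Dave"}))

    The update touches only the single matching document, which is in line with the article's point that these stores trade cross-document guarantees for flexibility and scale.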

    Read the article

  • Upgrade issues due to broken "dependency problems prevent configuration of linux-image-generic" error

    - by tsukune1791
    okay, I've recently upgrade from 11.10 to 12.04 and I've been having some issues. I don't know if its a bug or not, but I thought I would submit it here. Okay here's a little background; I ran the distro update from the update manager and got a couple errors that I didn't catch. the computer restarted, and when I logged the Launcher and my top bar of the Ubuntu desktop didn't load. While it was trying to load a couple error messages came up, I think they were called "apport", saying they couldn't send the bug information for some reason. I believe it said somethings wrong with my internet connection, but nothing's wrong with it. Anyway I tried running some things in terminal, namely sudo apt-get -f install sudo apt-get upgrade sudo apt-get dist-upgrade and keep getting the following errors; dustin@marceau-laptop:~$ sudo apt-get dist-upgrade [sudo] password for dustin: Reading package lists... Done Building dependency tree Reading state information... Done Calculating upgrade... Done 0 upgraded, 0 newly installed, 0 to remove and 0 not upgraded. 4 not fully installed or removed. After this operation, 0 B of additional disk space will be used. Do you want to continue [Y/n]? Y Setting up initramfs-tools (0.99ubuntu13) ... update-initramfs: deferring update (trigger activated) Setting up linux-image-3.2.0-24-generic (3.2.0-24.37) ... Running depmod. update-initramfs: deferring update (hook will be called later) Examining /etc/kernel/postinst.d. run-parts: executing /etc/kernel/postinst.d/dkms 3.2.0-24-generic /boot/vmlinuz-3.2.0-24-generic run-parts: executing /etc/kernel/postinst.d/initramfs-tools 3.2.0-24-generic /boot/vmlinuz-3.2.0-24-generic update-initramfs: Generating /boot/initrd.img-3.2.0-24-generic run-parts: executing /etc/kernel/postinst.d/pm-utils 3.2.0-24-generic /boot/vmlinuz-3.2.0-24-generic run-parts: executing /etc/kernel/postinst.d/update-notifier 3.2.0-24-generic /boot/vmlinuz-3.2.0-24-generic run-parts: executing /etc/kernel/postinst.d/zz-runlilo 3.2.0-24-generic /boot/vmlinuz-3.2.0-24-generic Fatal: No images have been defined. run-parts: /etc/kernel/postinst.d/zz-runlilo exited with return code 1 Failed to process /etc/kernel/postinst.d at /var/lib/dpkg/info/linux-image-3.2.0-24-generic.postinst line 1010. dpkg: error processing linux-image-3.2.0-24-generic (--configure): subprocess installed post-installation script returned error exit status 2 dpkg: dependency problems prevent configuration of linux-image-generic: linux-image-generic depends on linux-image-3.2.0-24-generic; however: Package linux-image-3.2.0-24-generic is not configured yet. dpkg: error processing linux-image-generic (--configure): dependency problems - leaving unconfigured dpkg: dependency problems prevent configuration of linux-generic: linux-generic depends on linux-image-generic (= 3.2.0.24.26); however: Package linux-image-generic is not configured yet. dpkg: error processing linux-generic (--configure): dependency problems - leaving unconfigured Processing triggers for initramfs-tools ... No apport report written because the error message indicates its a followup error from a previous failure. No apport report written because the error message indicates its a followup error from a previous failure. update-initramfs: Generating /boot/initrd.img-3.2.0-24-generic Fatal: No images have been defined. 
run-parts: /etc/initramfs/post-update.d//runlilo exited with return code 1 dpkg: error processing initramfs-tools (--configure): subprocess installed post-installation script returned error exit status 1 No apport report written because MaxReports is reached already Errors were encountered while processing: linux-image-3.2.0-24-generic linux-image-generic linux-generic initramfs-tools localepurge: Disk space freed in /usr/share/locale: 0 KiB localepurge: Disk space freed in /usr/share/man: 0 KiB localepurge: Disk space freed in /usr/share/gnome/help: 0 KiB localepurge: Disk space freed in /usr/share/omf: 0 KiB localepurge: Disk space freed in /usr/share/doc/kde/HTML: 0 KiB Total disk space freed by localepurge: 0 KiB E: Sub-process /usr/bin/dpkg returned an error code (1) And my Ubuntu desktop is still not working. I can log into Gnome and Ubuntu 2D but the Launcher, I think it's call, doesn't load. Can someone help me fix these error, or point me in the right direction to get them fixed? It is much appriciated.

    Read the article

  • C#/.NET Little Wonders: The ConcurrentDictionary

    - by James Michael Hare
    Once again we consider some of the lesser known classes and keywords of C#.  In this series of posts, we will discuss how the concurrent collections have been developed to help alleviate these multi-threading concerns.  Last week’s post began with a general introduction and discussed the ConcurrentStack<T> and ConcurrentQueue<T>.  Today's post discusses the ConcurrentDictionary<T> (originally I had intended to discuss ConcurrentBag this week as well, but ConcurrentDictionary had enough information to create a very full post on its own!).  Finally next week, we shall close with a discussion of the ConcurrentBag<T> and BlockingCollection<T>. For more of the "Little Wonders" posts, see the index here. Recap As you'll recall from the previous post, the original collections were object-based containers that accomplished synchronization through a Synchronized member.  While these were convenient because you didn't have to worry about writing your own synchronization logic, they were a bit too finely grained and if you needed to perform multiple operations under one lock, the automatic synchronization didn't buy much. With the advent of .NET 2.0, the original collections were succeeded by the generic collections which are fully type-safe, but eschew automatic synchronization.  This cuts both ways in that you have a lot more control as a developer over when and how fine-grained you want to synchronize, but on the other hand if you just want simple synchronization it creates more work. With .NET 4.0, we get the best of both worlds in generic collections.  A new breed of collections was born called the concurrent collections in the System.Collections.Concurrent namespace.  These amazing collections are fine-tuned to have best overall performance for situations requiring concurrent access.  They are not meant to replace the generic collections, but to simply be an alternative to creating your own locking mechanisms. Among those concurrent collections were the ConcurrentStack<T> and ConcurrentQueue<T> which provide classic LIFO and FIFO collections with a concurrent twist.  As we saw, some of the traditional methods that required calls to be made in a certain order (like checking for not IsEmpty before calling Pop()) were replaced in favor of an umbrella operation that combined both under one lock (like TryPop()). Now, let's take a look at the next in our series of concurrent collections!For some excellent information on the performance of the concurrent collections and how they perform compared to a traditional brute-force locking strategy, see this wonderful whitepaper by the Microsoft Parallel Computing Platform team here. ConcurrentDictionary – the fully thread-safe dictionary The ConcurrentDictionary<TKey,TValue> is the thread-safe counterpart to the generic Dictionary<TKey, TValue> collection.  Obviously, both are designed for quick – O(1) – lookups of data based on a key.  If you think of algorithms where you need lightning fast lookups of data and don’t care whether the data is maintained in any particular ordering or not, the unsorted dictionaries are generally the best way to go. Note: as a side note, there are sorted implementations of IDictionary, namely SortedDictionary and SortedList which are stored as an ordered tree and a ordered list respectively.  While these are not as fast as the non-sorted dictionaries – they are O(log2 n) – they are a great combination of both speed and ordering -- and still greatly outperform a linear search. 
Now, once again keep in mind that if all you need to do is load a collection once and then allow multi-threaded reading you do not need any locking.  Examples of this tend to be situations where you load a lookup or translation table once at program start, then keep it in memory for read-only reference.  In such cases locking is completely non-productive. However, most of the time when we need a concurrent dictionary we are interleaving both reads and updates.  This is where the ConcurrentDictionary really shines!  It achieves its thread-safety with no common lock to improve efficiency.  It actually uses a series of locks to provide concurrent updates, and has lockless reads!  This means that the ConcurrentDictionary gets even more efficient the higher the ratio of reads-to-writes you have. ConcurrentDictionary and Dictionary differences For the most part, the ConcurrentDictionary<TKey,TValue> behaves like it’s Dictionary<TKey,TValue> counterpart with a few differences.  Some notable examples of which are: Add() does not exist in the concurrent dictionary. This means you must use TryAdd(), AddOrUpdate(), or GetOrAdd().  It also means that you can’t use a collection initializer with the concurrent dictionary. TryAdd() replaced Add() to attempt atomic, safe adds. Because Add() only succeeds if the item doesn’t already exist, we need an atomic operation to check if the item exists, and if not add it while still under an atomic lock. TryUpdate() was added to attempt atomic, safe updates. If we want to update an item, we must make sure it exists first and that the original value is what we expected it to be.  If all these are true, we can update the item under one atomic step. TryRemove() was added to attempt atomic, safe removes. To safely attempt to remove a value we need to see if the key exists first, this checks for existence and removes under an atomic lock. AddOrUpdate() was added to attempt an thread-safe “upsert”. There are many times where you want to insert into a dictionary if the key doesn’t exist, or update the value if it does.  This allows you to make a thread-safe add-or-update. GetOrAdd() was added to attempt an thread-safe query/insert. Sometimes, you want to query for whether an item exists in the cache, and if it doesn’t insert a starting value for it.  This allows you to get the value if it exists and insert if not. Count, Keys, Values properties take a snapshot of the dictionary. Accessing these properties may interfere with add and update performance and should be used with caution. ToArray() returns a static snapshot of the dictionary. That is, the dictionary is locked, and then copied to an array as a O(n) operation.  GetEnumerator() is thread-safe and efficient, but allows dirty reads. Because reads require no locking, you can safely iterate over the contents of the dictionary.  The only downside is that, depending on timing, you may get dirty reads. Dirty reads during iteration The last point on GetEnumerator() bears some explanation.  Picture a scenario in which you call GetEnumerator() (or iterate using a foreach, etc.) and then, during that iteration the dictionary gets updated.  This may not sound like a big deal, but it can lead to inconsistent results if used incorrectly.  The problem is that items you already iterated over that are updated a split second after don’t show the update, but items that you iterate over that were updated a split second before do show the update.  
Thus you may get a combination of items that are “stale” because you iterated before the update, and “fresh” because they were updated after GetEnumerator() but before the iteration reached them. Let’s illustrate with an example, let’s say you load up a concurrent dictionary like this: 1: // load up a dictionary. 2: var dictionary = new ConcurrentDictionary<string, int>(); 3:  4: dictionary["A"] = 1; 5: dictionary["B"] = 2; 6: dictionary["C"] = 3; 7: dictionary["D"] = 4; 8: dictionary["E"] = 5; 9: dictionary["F"] = 6; Then you have one task (using the wonderful TPL!) to iterate using dirty reads: 1: // attempt iteration in a separate thread 2: var iterationTask = new Task(() => 3: { 4: // iterates using a dirty read 5: foreach (var pair in dictionary) 6: { 7: Console.WriteLine(pair.Key + ":" + pair.Value); 8: } 9: }); And one task to attempt updates in a separate thread (probably): 1: // attempt updates in a separate thread 2: var updateTask = new Task(() => 3: { 4: // iterates, and updates the value by one 5: foreach (var pair in dictionary) 6: { 7: dictionary[pair.Key] = pair.Value + 1; 8: } 9: }); Now that we’ve done this, we can fire up both tasks and wait for them to complete: 1: // start both tasks 2: updateTask.Start(); 3: iterationTask.Start(); 4:  5: // wait for both to complete. 6: Task.WaitAll(updateTask, iterationTask); Now, if I you didn’t know about the dirty reads, you may have expected to see the iteration before the updates (such as A:1, B:2, C:3, D:4, E:5, F:6).  However, because the reads are dirty, we will quite possibly get a combination of some updated, some original.  My own run netted this result: 1: F:6 2: E:6 3: D:5 4: C:4 5: B:3 6: A:2 Note that, of course, iteration is not in order because ConcurrentDictionary, like Dictionary, is unordered.  Also note that both E and F show the value 6.  This is because the output task reached F before the update, but the updates for the rest of the items occurred before their output (probably because console output is very slow, comparatively). If we want to always guarantee that we will get a consistent snapshot to iterate over (that is, at the point we ask for it we see precisely what is in the dictionary and no subsequent updates during iteration), we should iterate over a call to ToArray() instead: 1: // attempt iteration in a separate thread 2: var iterationTask = new Task(() => 3: { 4: // iterates using a dirty read 5: foreach (var pair in dictionary.ToArray()) 6: { 7: Console.WriteLine(pair.Key + ":" + pair.Value); 8: } 9: }); The atomic Try…() methods As you can imagine TryAdd() and TryRemove() have few surprises.  Both first check the existence of the item to determine if it can be added or removed based on whether or not the key currently exists in the dictionary: 1: // try add attempts an add and returns false if it already exists 2: if (dictionary.TryAdd("G", 7)) 3: Console.WriteLine("G did not exist, now inserted with 7"); 4: else 5: Console.WriteLine("G already existed, insert failed."); TryRemove() also has the virtue of returning the value portion of the removed entry matching the given key: 1: // attempt to remove the value, if it exists it is removed and the original is returned 2: int removedValue; 3: if (dictionary.TryRemove("C", out removedValue)) 4: Console.WriteLine("Removed C and its value was " + removedValue); 5: else 6: Console.WriteLine("C did not exist, remove failed."); Now TryUpdate() is an interesting creature.  
You might think from it’s name that TryUpdate() first checks for an item’s existence, and then updates if the item exists, otherwise it returns false.  Well, note quite... It turns out when you call TryUpdate() on a concurrent dictionary, you pass it not only the new value you want it to have, but also the value you expected it to have before the update.  If the item exists in the dictionary, and it has the value you expected, it will update it to the new value atomically and return true.  If the item is not in the dictionary or does not have the value you expected, it is not modified and false is returned. 1: // attempt to update the value, if it exists and if it has the expected original value 2: if (dictionary.TryUpdate("G", 42, 7)) 3: Console.WriteLine("G existed and was 7, now it's 42."); 4: else 5: Console.WriteLine("G either didn't exist, or wasn't 7."); The composite Add methods The ConcurrentDictionary also has composite add methods that can be used to perform updates and gets, with an add if the item is not existing at the time of the update or get. The first of these, AddOrUpdate(), allows you to add a new item to the dictionary if it doesn’t exist, or update the existing item if it does.  For example, let’s say you are creating a dictionary of counts of stock ticker symbols you’ve subscribed to from a market data feed: 1: public sealed class SubscriptionManager 2: { 3: private readonly ConcurrentDictionary<string, int> _subscriptions = new ConcurrentDictionary<string, int>(); 4:  5: // adds a new subscription, or increments the count of the existing one. 6: public void AddSubscription(string tickerKey) 7: { 8: // add a new subscription with count of 1, or update existing count by 1 if exists 9: var resultCount = _subscriptions.AddOrUpdate(tickerKey, 1, (symbol, count) => count + 1); 10:  11: // now check the result to see if we just incremented the count, or inserted first count 12: if (resultCount == 1) 13: { 14: // subscribe to symbol... 15: } 16: } 17: } Notice the update value factory Func delegate.  If the key does not exist in the dictionary, the add value is used (in this case 1 representing the first subscription for this symbol), but if the key already exists, it passes the key and current value to the update delegate which computes the new value to be stored in the dictionary.  The return result of this operation is the value used (in our case: 1 if added, existing value + 1 if updated). Likewise, the GetOrAdd() allows you to attempt to retrieve a value from the dictionary, and if the value does not currently exist in the dictionary it will insert a value.  This can be handy in cases where perhaps you wish to cache data, and thus you would query the cache to see if the item exists, and if it doesn’t you would put the item into the cache for the first time: 1: public sealed class PriceCache 2: { 3: private readonly ConcurrentDictionary<string, double> _cache = new ConcurrentDictionary<string, double>(); 4:  5: // adds a new subscription, or increments the count of the existing one. 6: public double QueryPrice(string tickerKey) 7: { 8: // check for the price in the cache, if it doesn't exist it will call the delegate to create value. 9: return _cache.GetOrAdd(tickerKey, symbol => GetCurrentPrice(symbol)); 10: } 11:  12: private double GetCurrentPrice(string tickerKey) 13: { 14: // do code to calculate actual true price. 
15: } 16: } There are other variations of these two methods which vary whether a value is provided or a factory delegate, but otherwise they work much the same. Oddities with the composite Add methods The AddOrUpdate() and GetOrAdd() methods are totally thread-safe, on this you may rely, but they are not atomic.  It is important to note that the methods that use delegates execute those delegates outside of the lock.  This was done intentionally so that a user delegate (of which the ConcurrentDictionary has no control of course) does not take too long and lock out other threads. This is not necessarily an issue, per se, but it is something you must consider in your design.  The main thing to consider is that your delegate may get called to generate an item, but that item may not be the one returned!  Consider this scenario: A calls GetOrAdd and sees that the key does not currently exist, so it calls the delegate.  Now thread B also calls GetOrAdd and also sees that the key does not currently exist, and for whatever reason in this race condition it’s delegate completes first and it adds its new value to the dictionary.  Now A is done and goes to get the lock, and now sees that the item now exists.  In this case even though it called the delegate to create the item, it will pitch it because an item arrived between the time it attempted to create one and it attempted to add it. Let’s illustrate, assume this totally contrived example program which has a dictionary of char to int.  And in this dictionary we want to store a char and it’s ordinal (that is, A = 1, B = 2, etc).  So for our value generator, we will simply increment the previous value in a thread-safe way (perhaps using Interlocked): 1: public static class Program 2: { 3: private static int _nextNumber = 0; 4:  5: // the holder of the char to ordinal 6: private static ConcurrentDictionary<char, int> _dictionary 7: = new ConcurrentDictionary<char, int>(); 8:  9: // get the next id value 10: public static int NextId 11: { 12: get { return Interlocked.Increment(ref _nextNumber); } 13: } Then, we add a method that will perform our insert: 1: public static void Inserter() 2: { 3: for (int i = 0; i < 26; i++) 4: { 5: _dictionary.GetOrAdd((char)('A' + i), key => NextId); 6: } 7: } Finally, we run our test by starting two tasks to do this work and get the results… 1: public static void Main() 2: { 3: // 3 tasks attempting to get/insert 4: var tasks = new List<Task> 5: { 6: new Task(Inserter), 7: new Task(Inserter) 8: }; 9:  10: tasks.ForEach(t => t.Start()); 11: Task.WaitAll(tasks.ToArray()); 12:  13: foreach (var pair in _dictionary.OrderBy(p => p.Key)) 14: { 15: Console.WriteLine(pair.Key + ":" + pair.Value); 16: } 17: } If you run this with only one task, you get the expected A:1, B:2, ..., Z:26.  But running this in parallel you will get something a bit more complex.  My run netted these results: 1: A:1 2: B:3 3: C:4 4: D:5 5: E:6 6: F:7 7: G:8 8: H:9 9: I:10 10: J:11 11: K:12 12: L:13 13: M:14 14: N:15 15: O:16 16: P:17 17: Q:18 18: R:19 19: S:20 20: T:21 21: U:22 22: V:23 23: W:24 24: X:25 25: Y:26 26: Z:27 Notice that B is 3?  This is most likely because both threads attempted to call GetOrAdd() at roughly the same time and both saw that B did not exist, thus they both called the generator and one thread got back 2 and the other got back 3.  However, only one of those threads can get the lock at a time for the actual insert, and thus the one that generated the 3 won and the 3 was inserted and the 2 got discarded.  
This is why on these methods your factory delegates should be careful not to have any logic that would be unsafe if the value they generate will be pitched in favor of another item generated at roughly the same time.  As such, it is probably a good idea to keep those generators as stateless as possible. Summary The ConcurrentDictionary is a very efficient and thread-safe version of the Dictionary generic collection.  It has all the benefits of type-safety that it’s generic collection counterpart does, and in addition is extremely efficient especially when there are more reads than writes concurrently. Tweet Technorati Tags: C#, .NET, Concurrent Collections, Collections, Little Wonders, Black Rabbit Coder,James Michael Hare

    Read the article

  • Linux Mint 10 LXDE computer to act as LTSP Server without luck

    - by Rautamiekka
    So I've tried to make our screen-broken HP laptop to also serve as LTSP Server in addition to various other tasks, without luck, which may be cuz I'm running LM10 LXDE while the instructions are for Ubuntu. Excuse my ignorance. The entire output from Terminal after installing LTSP stuff along with a Server kernel and a load of other packages: administrator@rauta-mint-turion ~ $ sudo lt ltrace ltsp-build-client ltsp-chroot ltspfs ltspfsmounter ltsp-info ltsp-localapps ltsp-update-image ltsp-update-kernels ltsp-update-sshkeys administrator@rauta-mint-turion ~ $ sudo ltsp-update-sshkeys administrator@rauta-mint-turion ~ $ sudo ltsp-update-kernels find: `/opt/ltsp/': No such file or directory administrator@rauta-mint-turion ~ $ sudo ltsp-build-client /usr/share/ltsp/plugins/ltsp-build-client/common/010-chroot-tagging: line 3: /opt/ltsp/i386/etc/ltsp_chroot: No such file or directory error: LTSP client installation ended abnormally administrator@rauta-mint-turion ~ $ sudo ltsp-update-image Cannot determine assigned port. Assigning to port 2000. mkdir: cannot create directory `/opt/ltsp/i386/etc/ltsp': No such file or directory /usr/sbin/ltsp-update-image: 274: cannot create /opt/ltsp/i386/etc/ltsp/update-kernels.conf: Directory nonexistent /usr/sbin/ltsp-update-image: 274: cannot create /opt/ltsp/i386/etc/ltsp/update-kernels.conf: Directory nonexistent Regenerating kernel... chroot: failed to run command `/usr/share/ltsp/update-kernels': No such file or directory Done. Configuring inetd... Done. Updating pxelinux default configuration...Done. Skipping invalid chroot: /opt/ltsp/i386 chroot: failed to run command `test': No such file or directory administrator@rauta-mint-turion ~ $ sudo ltsp-chroot chroot: failed to run command `/bin/bash': No such file or directory administrator@rauta-mint-turion ~ $ bash administrator@rauta-mint-turion ~ $ exit exit administrator@rauta-mint-turion ~ $ sudo ls /opt/ltsp i386 administrator@rauta-mint-turion ~ $ sudo ls /opt/ltsp/i386/ administrator@rauta-mint-turion ~ $ sudo ltsp-build-client NOTE: Root directory /opt/ltsp/i386 already exists, this will lead to problems, please remove it before trying again. Exiting. 
error: LTSP client installation ended abnormally administrator@rauta-mint-turion ~ $ sudo rm -rv /opt/ltsp/i386 removed directory: `/opt/ltsp/i386' administrator@rauta-mint-turion ~ $ sudo ltsp-build-client /usr/share/ltsp/plugins/ltsp-build-client/common/010-chroot-tagging: line 3: /opt/ltsp/i386/etc/ltsp_chroot: No such file or directory error: LTSP client installation ended abnormally administrator@rauta-mint-turion ~ $ aptitude search ltsp p fts-ltsp-ldap - LDAP LTSP module for the TFTP/Fuse supplicant p ltsp-client - LTSP client environment p ltsp-client-core - LTSP client environment (core) p ltsp-cluster-accountmanager - Account creation and management daemon for LTSP p ltsp-cluster-control - Web based thin-client configuration management p ltsp-cluster-lbagent - LTSP loadbalancer agent offers variables about the state of the ltsp server p ltsp-cluster-lbserver - LTSP loadbalancer server returns the optimal ltsp server to terminal p ltsp-cluster-nxloadbalancer - Minimal NX loadbalancer for ltsp-cluster p ltsp-cluster-pxeconfig - LTSP-Cluster symlink generator p ltsp-controlaula - Classroom management tool with ltsp clients p ltsp-docs - LTSP Documentation p ltsp-livecd - starts an LTSP live server on an Ubuntu livecd session p ltsp-manager - Ubuntu LTSP server management GUI i A ltsp-server - Basic LTSP server environment i ltsp-server-standalone - Complete LTSP server environment i A ltspfs - Fuse based remote filesystem for LTSP thin clients p ltspfsd - Fuse based remote filesystem hooks for LTSP thin clients p ltspfsd-core - Fuse based remote filesystem daemon for LTSP thin clients p python-ltsp - provides ltsp related functions administrator@rauta-mint-turion ~ $ sudo aptitude purge ltsp-server ltsp-server-standalone ltspfs The following packages will be REMOVED: debconf-utils{u} debootstrap{u} dhcp3-server{u} gstreamer0.10-pulseaudio{u} ldm-server{u} libpulse-browse0{u} ltsp-server{p} ltsp-server-standalone{p} ltspfs{p} nbd-server{u} openbsd-inetd{u} pulseaudio{u} pulseaudio-esound-compat{u} pulseaudio-module-x11{u} pulseaudio-utils{u} squashfs-tools{u} tftpd-hpa{u} 0 packages upgraded, 0 newly installed, 17 to remove and 0 not upgraded. Need to get 0B of archives. After unpacking 6,996kB will be freed. Do you want to continue? [Y/n/?] (Reading database ... 158454 files and directories currently installed.) Removing ltsp-server-standalone ... Purging configuration files for ltsp-server-standalone ... Removing ltsp-server ... Purging configuration files for ltsp-server ... dpkg: warning: while removing ltsp-server, directory '/var/lib/tftpboot/ltsp' not empty so not removed. dpkg: warning: while removing ltsp-server, directory '/var/lib/tftpboot' not empty so not removed. Processing triggers for man-db ... (Reading database ... 158195 files and directories currently installed.) Removing debconf-utils ... Removing debootstrap ... Removing dhcp3-server ... * Stopping DHCP server dhcpd3 [ OK ] Removing gstreamer0.10-pulseaudio ... Removing ldm-server ... Removing pulseaudio-module-x11 ... Removing pulseaudio-esound-compat ... Removing pulseaudio ... * PulseAudio configured for per-user sessions Removing pulseaudio-utils ... Removing libpulse-browse0 ... Processing triggers for man-db ... Processing triggers for ureadahead ... ureadahead will be reprofiled on next reboot Processing triggers for libc-bin ... ldconfig deferred processing now taking place (Reading database ... 157944 files and directories currently installed.) Removing ltspfs ... Processing triggers for man-db ... 
(Reading database ... 157932 files and directories currently installed.) Removing nbd-server ... Stopping Network Block Device server: nbd-server. Removing openbsd-inetd ... * Stopping internet superserver inetd [ OK ] Removing squashfs-tools ... Removing tftpd-hpa ... tftpd-hpa stop/waiting Processing triggers for ureadahead ... Processing triggers for man-db ... administrator@rauta-mint-turion ~ $ sudo aptitude purge ~c The following packages will be REMOVED: dhcp3-server{p} libpulse-browse0{p} nbd-server{p} openbsd-inetd{p} pulseaudio{p} tftpd-hpa{p} 0 packages upgraded, 0 newly installed, 6 to remove and 0 not upgraded. Need to get 0B of archives. After unpacking 0B will be used. Do you want to continue? [Y/n/?] (Reading database ... 157881 files and directories currently installed.) Removing dhcp3-server ... Purging configuration files for dhcp3-server ... Removing libpulse-browse0 ... Purging configuration files for libpulse-browse0 ... Removing nbd-server ... Purging configuration files for nbd-server ... Removing openbsd-inetd ... Purging configuration files for openbsd-inetd ... Removing pulseaudio ... Purging configuration files for pulseaudio ... Removing tftpd-hpa ... Purging configuration files for tftpd-hpa ... Processing triggers for ureadahead ... administrator@rauta-mint-turion ~ $ sudo aptitude install ltsp-server-standalone The following NEW packages will be installed: debconf-utils{a} debootstrap{a} ldm-server{a} ltsp-server{a} ltsp-server-standalone ltspfs{a} nbd-server{a} openbsd-inetd{a} squashfs-tools{a} The following packages are RECOMMENDED but will NOT be installed: dhcp3-server pulseaudio-esound-compat tftpd-hpa 0 packages upgraded, 9 newly installed, 0 to remove and 0 not upgraded. Need to get 0B/498kB of archives. After unpacking 2,437kB will be used. Do you want to continue? [Y/n/?] Preconfiguring packages ... Selecting previously deselected package openbsd-inetd. (Reading database ... 157868 files and directories currently installed.) Unpacking openbsd-inetd (from .../openbsd-inetd_0.20080125-4ubuntu2_i386.deb) ... Processing triggers for man-db ... Processing triggers for ureadahead ... Setting up openbsd-inetd (0.20080125-4ubuntu2) ... * Stopping internet superserver inetd [ OK ] * Starting internet superserver inetd [ OK ] Selecting previously deselected package ldm-server. (Reading database ... 157877 files and directories currently installed.) Unpacking ldm-server (from .../ldm-server_2%3a2.1.3-0ubuntu1_all.deb) ... Selecting previously deselected package debconf-utils. Unpacking debconf-utils (from .../debconf-utils_1.5.32ubuntu3_all.deb) ... Selecting previously deselected package debootstrap. Unpacking debootstrap (from .../debootstrap_1.0.23ubuntu1_all.deb) ... Selecting previously deselected package nbd-server. Unpacking nbd-server (from .../nbd-server_1%3a2.9.14-2ubuntu1_i386.deb) ... Selecting previously deselected package squashfs-tools. Unpacking squashfs-tools (from .../squashfs-tools_1%3a4.0-8_i386.deb) ... Selecting previously deselected package ltsp-server. 
GNU nano 2.2.4 File: /etc/ltsp/ltsp-update-image.conf # Configuration file for ltsp-update-image # By default, do not compress the image # as it's reported to make it unstable NO_COMP="-noF -noD -noI -no-exports" [ Switched to /etc/ltsp/ltsp-update-image.conf ] administrator@rauta-mint-turion ~ $ ls /opt/ firefox/ ltsp/ mint-flashplugin/ administrator@rauta-mint-turion ~ $ ls /opt/ltsp/i386/ administrator@rauta-mint-turion ~ $ ls /opt/ltsp/ i386 administrator@rauta-mint-turion ~ $ sudo ltsp ltsp-build-client ltsp-chroot ltspfs ltspfsmounter ltsp-info ltsp-localapps ltsp-update-image ltsp-update-kernels ltsp-update-sshkeys administrator@rauta-mint-turion ~ $ sudo ltsp-build-client NOTE: Root directory /opt/ltsp/i386 already exists, this will lead to problems, please remove it before trying again. Exiting. error: LTSP client installation ended abnormally administrator@rauta-mint-turion ~ $ ^C administrator@rauta-mint-turion ~ $ sudo aptitude purge ltsp ltspfs ltsp-server ltsp-server-standalone administrator@rauta-mint-turion ~ $ sudo aptitude purge ltsp-server The following packages will be REMOVED: ltsp-server{p} 0 packages upgraded, 0 newly installed, 1 to remove and 0 not upgraded. Need to get 0B of archives. After unpacking 1,073kB will be freed. The following packages have unmet dependencies: ltsp-server-standalone: Depends: ltsp-server but it is not going to be installed. The following actions will resolve these dependencies: Remove the following packages: 1) ltsp-server-standalone Accept this solution? [Y/n/q/?] The following packages will be REMOVED: debconf-utils{u} debootstrap{u} ldm-server{u} ltsp-server{p} ltsp-server-standalone{a} ltspfs{u} nbd-server{u} openbsd-inetd{u} squashfs-tools{u} 0 packages upgraded, 0 newly installed, 9 to remove and 0 not upgraded. Need to get 0B of archives. After unpacking 2,437kB will be freed. Do you want to continue? [Y/n/?] (Reading database ... 158244 files and directories currently installed.) Removing ltsp-server-standalone ... (Reading database ... 158240 files and directories currently installed.) Removing ltsp-server ... Purging configuration files for ltsp-server ... dpkg: warning: while removing ltsp-server, directory '/var/lib/tftpboot/ltsp' not empty so not removed. dpkg: warning: while removing ltsp-server, directory '/var/lib/tftpboot' not empty so not removed. Processing triggers for man-db ... (Reading database ... 157987 files and directories currently installed.) Removing debconf-utils ... Removing debootstrap ... Removing ldm-server ... Removing ltspfs ... Removing nbd-server ... Stopping Network Block Device server: nbd-server. Removing openbsd-inetd ... * Stopping internet superserver inetd [ OK ] Removing squashfs-tools ... Processing triggers for man-db ... Processing triggers for ureadahead ... administrator@rauta-mint-turion ~ $ sudo aptitude purge ~c The following packages will be REMOVED: ltsp-server-standalone{p} nbd-server{p} openbsd-inetd{p} 0 packages upgraded, 0 newly installed, 3 to remove and 0 not upgraded. Need to get 0B of archives. After unpacking 0B will be used. Do you want to continue? [Y/n/?] (Reading database ... 157871 files and directories currently installed.) Removing ltsp-server-standalone ... Purging configuration files for ltsp-server-standalone ... Removing nbd-server ... Purging configuration files for nbd-server ... Removing openbsd-inetd ... Purging configuration files for openbsd-inetd ... Processing triggers for ureadahead ... 
administrator@rauta-mint-turion ~ $ sudo rm -rv /var/lib/t teamspeak-server/ tftpboot/ transmission-daemon/ administrator@rauta-mint-turion ~ $ sudo rm -rv /var/lib/tftpboot removed `/var/lib/tftpboot/ltsp/i386/pxelinux.cfg/default' removed directory: `/var/lib/tftpboot/ltsp/i386/pxelinux.cfg' removed directory: `/var/lib/tftpboot/ltsp/i386' removed directory: `/var/lib/tftpboot/ltsp' removed directory: `/var/lib/tftpboot' administrator@rauta-mint-turion ~ $ sudo find / -name "ltsp" /opt/ltsp administrator@rauta-mint-turion ~ $ sudo rm -rv /opt/ltsp removed directory: `/opt/ltsp/i386' removed directory: `/opt/ltsp' administrator@rauta-mint-turion ~ $ sudo aptitude install ltsp-server-standalone The following NEW packages will be installed: debconf-utils{a} debootstrap{a} ldm-server{a} ltsp-server{a} ltsp-server-standalone ltspfs{a} nbd-server{a} openbsd-inetd{a} squashfs-tools{a} The following packages are RECOMMENDED but will NOT be installed: dhcp3-server pulseaudio-esound-compat tftpd-hpa 0 packages upgraded, 9 newly installed, 0 to remove and 0 not upgraded. Need to get 0B/498kB of archives. After unpacking 2,437kB will be used. Do you want to continue? [Y/n/?] Preconfiguring packages ... Selecting previously deselected package openbsd-inetd. (Reading database ... 157868 files and directories currently installed.) Unpacking openbsd-inetd (from .../openbsd-inetd_0.20080125-4ubuntu2_i386.deb) ... Processing triggers for man-db ... GNU nano 2.2.4 New Buffer GNU nano 2.2.4 File: /etc/ltsp/dhcpd.conf # # Default LTSP dhcpd.conf config file. # authoritative; subnet 192.168.2.0 netmask 255.255.255.0 { range 192.168.2.70 192.168.2.79; option domain-name "jarvinen"; option domain-name-servers 192.168.2.254; option broadcast-address 192.168.2.255; option routers 192.168.2.254; # next-server 192.168.0.1; # get-lease-hostnames true; option subnet-mask 255.255.255.0; option root-path "/opt/ltsp/i386"; if substring( option vendor-class-identifier, 0, 9 ) = "PXEClient" { filename "/ltsp/i386/pxelinux.0"; } else { filename "/ltsp/i386/nbi.img"; } } [ Wrote 22 lines ] administrator@rauta-mint-turion ~ $ sudo service dhcp3-server start * Starting DHCP server dhcpd3 [ OK ] administrator@rauta-mint-turion ~ $ sudo ltsp-build-client /usr/share/ltsp/plugins/ltsp-build-client/common/010-chroot-tagging: line 3: /opt/ltsp/i386/etc/ltsp_chroot: No such file or directory error: LTSP client installation ended abnormally administrator@rauta-mint-turion ~ $ sudo cat /usr/share/ltsp/plugins/ltsp-build-client/common/010-chroot-tagging case "$MODE" in after-install) echo LTSP_CHROOT=$ROOT >> $ROOT/etc/ltsp_chroot ;; esac administrator@rauta-mint-turion ~ $ cd $ROOT/etc/ltsp_chroot bash: cd: /etc/ltsp_chroot: No such file or directory administrator@rauta-mint-turion ~ $ cd $ROOT/etc/ administrator@rauta-mint-turion /etc $ ls acpi chatscripts emacs group insserv.conf.d logrotate.conf mysql php5 rc6.d smbnetfs.conf UPower adduser.conf ConsoleKit environment group- iproute2 logrotate.d nanorc phpmyadmin rc.local snmp usb_modeswitch.conf alternatives console-setup esound grub.d issue lsb-base nbd-server pm rcS.d sound usb_modeswitch.d anacrontab cron.d firefox gshadow issue.net lsb-base-logging.sh ndiswrapper pnm2ppa.conf request-key.conf ssh ushare.conf apache2 cron.daily firefox-3.5 gshadow- java lsb-release netscsid.conf polkit-1 resolvconf ssl vga apm cron.hourly firestarter gtk-2.0 java-6-sun ltrace.conf network popularity-contest.conf resolv.conf sudoers vim apparmor cron.monthly fonts gtkmathview kbd ltsp 
NetworkManager ppp rmt sudoers.d vlc apparmor.d crontab foomatic gufw kernel lxdm networks printcap rpc su-to-rootrc vsftpd.conf apport cron.weekly fstab hal kernel-img.conf magic nsswitch.conf profile rsyslog.conf sweeprc w3m apt crypttab ftpusers hdparm.conf kerneloops.conf magic.mime ntp.conf profile.d rsyslog.d sysctl.conf wgetrc at.deny cups fuse.conf host.conf kompozer mailcap obex-data-server protocols samba sysctl.d wildmidi auto-apt dbconfig-common gai.conf hostname ldap mailcap.order ODBCDataSources psiconv sane.d teamspeak-server wodim.conf avahi dbus-1 gamin hosts ld.so.cache manpath.config odbc.ini pulse screenrc terminfo wpa_supplicant bash.bashrc debconf.conf gconf hosts.allow ld.so.conf menu openal purple securetty thunderbird X11 bash_completion debian_version gdb hosts.deny ld.so.conf.d menu-methods openoffice python security timezone xdg bash_completion.d default gdm hp legal mime.types opt python2.6 sensors3.conf transmission-daemon xml bindresvport.blacklist defoma ghostscript ifplugd lftp.conf mke2fs.conf pam.conf python2.7 sensors.d ts.conf xulrunner-1.9.2 blkid.conf deluser.conf gimp inetd.conf libpaper.d modprobe.d pam.d python3.1 services ucf.conf zsh_command_not_found blkid.tab depmod.d gnome init libreoffice modules pango rc0.d sgml udev bluetooth dhcp gnome-system-tools init.d linuxmint motd papersize rc1.d shadow ufw bonobo-activation dhcp3 gnome-vfs-2.0 initramfs-tools locale.alias mplayer passwd rc2.d shadow- updatedb.conf ca-certificates dictionaries-common gnome-vfs-mime-magic inputrc localtime mtab passwd- rc3.d shells update-manager ca-certificates.conf doc-base gre.d insserv logcheck mtab.fuselock pcmcia rc4.d skel update-motd.d calendar dpkg groff insserv.conf login.defs mtools.conf perl rc5.d smb2www update-notifier administrator@rauta-mint-turion /etc $ cd ltsp/ administrator@rauta-mint-turion /etc/ltsp $ ls dhcpd.conf ltsp-update-image.conf administrator@rauta-mint-turion /etc/ltsp $ cat dhcpd.conf # # Default LTSP dhcpd.conf config file. # authoritative; subnet 192.168.2.0 netmask 255.255.255.0 { range 192.168.2.70 192.168.2.79; option domain-name "jarvinen"; option domain-name-servers 192.168.2.254; option broadcast-address 192.168.2.255; option routers 192.168.2.254; # next-server 192.168.0.1; # get-lease-hostnames true; option subnet-mask 255.255.255.0; option root-path "/opt/ltsp/i386"; if substring( option vendor-class-identifier, 0, 9 ) = "PXEClient" { filename "/ltsp/i386/pxelinux.0"; } else { filename "/ltsp/i386/nbi.img"; } } administrator@rauta-mint-turion /etc/ltsp $

    Read the article

  • GAE Datastore Put()

    - by Ivan Slaughter
        def post(self):
            update = self.request.get('update')
            if users.get_current_user():
                if update:
                    personal = db.GqlQuery("SELECT * FROM Personal WHERE __key__ = :1", db.Key(update))
                    personal.name = self.request.get('name')
                    personal.gender = self.request.get('gender')
                    personal.mobile_num = self.request.get('mobile_num')
                    personal.birthdate = int(self.request.get('birthdate'))
                    personal.birthplace = self.request.get('birthplace')
                    personal.address = self.request.get('address')
                    personal.geo_pos = self.request.get('geo_pos')
                    personal.info = self.request.get('info')
                    photo = images.resize(self.request.get('img'), 0, 80)
                    personal.photo = db.Blob(photo)
                    personal.put()
                    self.redirect('/admin/personal')
                else:
                    personal = Personal()
                    personal.name = self.request.get('name')
                    personal.gender = self.request.get('gender')
                    personal.mobile_num = self.request.get('mobile_num')
                    personal.birthdate = int(self.request.get('birthdate'))
                    personal.birthplace = self.request.get('birthplace')
                    personal.address = self.request.get('address')
                    personal.geo_pos = self.request.get('geo_pos')
                    personal.info = self.request.get('info')
                    photo = images.resize(self.request.get('img'), 0, 80)
                    personal.photo = db.Blob(photo)
                    personal.put()
                    self.redirect('/admin/personal')
            else:
                self.response.out.write('I\'m sorry, you don\'t have permission to add this LP Personal Data.')

    Shouldn't this update the existing record when 'update' is a querystring parameter containing the datastore key? I tried this but it keeps adding a new record/entity. Please give me some suggestions on how to update the record/entity correctly. Correction?:

        def post(self):
            update = self.request.get('update')
            if users.get_current_user():
                if update:
                    personal = Personal.get(db.Key(update))
                    personal.name = self.request.get('name')
                    personal.gender = self.request.get('gender')
                    personal.mobile_num = self.request.get('mobile_num')
                    personal.birthdate = int(self.request.get('birthdate'))
                    personal.birthplace = self.request.get('birthplace')
                    personal.address = self.request.get('address')
                    personal.geo_pos = self.request.get('geo_pos')
                    personal.info = self.request.get('info')
                    photo = images.resize(self.request.get('img'), 0, 80)
                    personal.photo = db.Blob(photo)
                    personal.put()
                    self.redirect('/admin/personal')
                else:
                    personal = Personal()
                    personal.name = self.request.get('name')
                    personal.gender = self.request.get('gender')
                    personal.mobile_num = self.request.get('mobile_num')
                    personal.birthdate = int(self.request.get('birthdate'))
                    personal.birthplace = self.request.get('birthplace')
                    personal.address = self.request.get('address')
                    personal.geo_pos = self.request.get('geo_pos')
                    personal.info = self.request.get('info')
                    photo = images.resize(self.request.get('img'), 0, 80)
                    personal.photo = db.Blob(photo)
                    personal.put()
                    self.redirect('/admin/personal')
            else:
                self.response.out.write('I\'m sorry, you don\'t have permission to add this LP Personal Data.')
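    The reason the first version never updates anything is that db.GqlQuery() returns a Query object, not an entity: setting attributes on the query does nothing, so only the else branch ever calls put() on a real entity, and that always creates a new record. The correction above (fetching by key) is the usual fix. A minimal sketch of the two equivalent ways to load the existing entity, assuming the old google.appengine.ext.db API used in the question:

        from google.appengine.ext import db

        class Personal(db.Model):          # trimmed-down stand-in for the question's model
            name = db.StringProperty()
            info = db.TextProperty()

        def load_personal(update_key):
            # Option 1: fetch directly by key (what the corrected handler does).
            personal = Personal.get(db.Key(update_key))

            # Option 2: keep the GQL query, but call .get() on it to obtain the
            # first matching entity instead of the Query object itself.
            # personal = db.GqlQuery(
            #     "SELECT * FROM Personal WHERE __key__ = :1",
            #     db.Key(update_key)).get()

            return personal

        # Mutating the loaded entity's fields and calling personal.put() then
        # saves it back under the same key, so no new record is created.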

    Read the article

  • How to update all the SSIS packages’ Connection Managers in a BIDS project with PowerShell

    - by Luca Zavarella
    During the development of a BI solution, we all know that 80% of the time is spent in the ETL (Extract, Transform, Load) phase. If you use the BI stack provided by Microsoft SQL Server, this step is accomplished by developing a number of Integration Services (SSIS) packages. In general, the number of packages built in the ETL phase for a non-trivial BI solution is quite significant. An SSIS package extracts data from a source, "hammers" :) the data and then transfers it to a specific destination. Very often the connection to the source data is the same for all packages. With Integration Services, this results in having the same Connection Manager (perhaps with the same name) in every package: the source data of my BI solution comes from a Helper database (HLP), so for each package that imports this data I have the HLP Connection Manager (the use of a Shared Data Source is not recommended, because the Connection String is hard-wired into it and you therefore have to open the SSIS project and use the proper wizard to change it...). In order to change the HLP Connection String at runtime, we could use Package Configurations, or we could run our packages with DTLoggedExec by Davide Mauri (a must-have if you are developing with SQL Server 2005/2008). But my need was to change all the HLP connections in all the packages within the SSIS Visual Studio project, because I had to version them through Team Foundation Server (TFS). A good scribe with a lot of patience would have changed all the connections by hand, double-clicking the HLP Connection Manager of each package and then changing the referenced server/database. Not being endowed with such virtues :) I took a little time to write a small PowerShell script, exploiting the fact that an SSIS package (a .dtsx file) is nothing but an XML file and can therefore be changed quite easily. I'm not a PowerShell guru, but I managed more or less to put together the following lines of code:
        $LeftDelimiterString = "Initial Catalog="
        $RightDelimiterString = ";Provider="
        $ToBeReplacedString = "AstarteToBeReplaced"
        $ReplacingString = "AstarteReplacing"
        $MainFolder = "C:\MySSISPackagesFolder"
        $files = get-childitem "$MainFolder" *.dtsx `
              | Where-Object {!($_.PSIsContainer)}
        foreach ($file in $files)
        {
              (Get-Content $file.FullName) `
                    | % {$_ -replace "($LeftDelimiterString)($ToBeReplacedString)($RightDelimiterString)", "`$1$ReplacingString`$3"} `
                    | Set-Content $file.FullName;
        }
    The script above just opens every SSIS package (.dtsx) in the supplied folder, then for each of them searches for the following text: Initial Catalog=AstarteToBeReplaced;Provider= and replaces it with this: Initial Catalog=AstarteReplacing;Provider= I won't go into the details of each cmdlet used; I leave those for the reader to look up. Alternatively, you can use the specific object model exposed by some .NET assemblies provided by Integration Services, or you can use the Pacman utility. Enjoy! :) P.S. Using TFS as the versioning system, before running the script I checked out the packages and, after the script executed successfully, I checked them in again.
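    For readers who would rather not use PowerShell, the same trick works in any language, since it is just a regular-expression replace over the .dtsx XML files. A rough Python equivalent of the script above (not from the original post; the folder path and catalog names are the same placeholders):

        # Rough Python equivalent of the PowerShell script above: rewrite the
        # "Initial Catalog=...;Provider=" portion of every Connection String in
        # the .dtsx files under a folder. Paths and catalog names are placeholders.
        import re
        from pathlib import Path

        left = re.escape("Initial Catalog=")
        right = re.escape(";Provider=")
        old = "AstarteToBeReplaced"
        new = "AstarteReplacing"
        pattern = re.compile("({0}){1}({2})".format(left, re.escape(old), right))

        for dtsx in Path(r"C:\MySSISPackagesFolder").glob("*.dtsx"):
            text = dtsx.read_text(encoding="utf-8")
            dtsx.write_text(pattern.sub(r"\g<1>" + new + r"\g<2>", text),
                            encoding="utf-8")

    As with the original script, check the packages out of TFS first and spot-check one file before running it over the whole project.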

    Read the article

  • P2P synchronization: can a player update fields of other players?

    - by CherryQu
    I know that synchronization is a huge topic, so I have reduced the problem to this example case. Let's say Alice and Bob are playing a P2P game, fighting against each other. If Alice hits Bob, how should I design the network component so that Bob's HP decreases? I can think of two approaches:
    1. Alice performs Bob.HP--, then sends Bob's reduced HP to Bob.
    2. Alice sends an "I just hit Bob" signal to Bob. Bob checks it, reduces his own HP, then sends his new HP to everyone, including Alice.
    I think the second approach is better because I don't think a player in a P2P game should be able to modify other players' private fields; otherwise cheating would be too easy, right? My philosophy is that, in a P2P game especially, a player's attributes and all attributes of the objects belonging to that player should only be updated by the player himself. However, I can't prove that this is right. Could someone give me some evidence? Thanks :)
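    For what it's worth, the second approach usually boils down to something like the sketch below: each peer mutates only its own state, treats an incoming hit report as a request it can validate, and then broadcasts the authoritative result. This is only an illustration of the idea; HitMessage, StateUpdate and Network are hypothetical placeholders, not part of any particular engine.

        // Each peer is authoritative only for its own player.
        class Player {
            final String id;
            int hp = 100;
            Player(String id) { this.id = id; }
        }

        record HitMessage(String attackerId, String targetId, int damage) {}
        record StateUpdate(String playerId, int newHp) {}

        interface Network {
            void sendTo(String peerId, Object message);   // hypothetical transport abstraction
            void broadcast(Object message);
        }

        class LocalPeer {
            private static final int MAX_HIT = 50;        // sanity bound for basic cheat checking
            private final Player self;
            private final Network network;

            LocalPeer(Player self, Network network) {
                this.self = self;
                this.network = network;
            }

            // Alice calls this when she lands a hit: she only *reports* it.
            void onLocalHit(String targetId, int damage) {
                network.sendTo(targetId, new HitMessage(self.id, targetId, damage));
            }

            // Bob receives the report, validates it, applies it to his own HP, then tells everyone.
            void onHitMessage(HitMessage msg) {
                if (!msg.targetId().equals(self.id)) return;            // only act on hits aimed at me
                if (msg.damage() < 0 || msg.damage() > MAX_HIT) return; // reject implausible damage
                self.hp -= msg.damage();
                network.broadcast(new StateUpdate(self.id, self.hp));
            }
        }

    The validation step is the practical argument for the second approach: since only Bob ever writes Bob's HP, a modified client can lie about its own state but cannot silently rewrite anyone else's.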

    Read the article

  • How should I update my name server after I installed a new dedicated server?

    - by Jim Thio
    Say I got a dedicated server. The IP is 123.123.123.123. Now I have the domain name domainname.com, which will be the "main" domain name for that server. Should I:
    1. Set the name servers of domainname.com to ns1.domainname.com and ns2.domainname.com, and add child nameservers ns1.domainname.com and ns2.domainname.com pointing to that exact IP?
    Or should I:
    2. Point the name servers to my registrar's name servers, and set an A record for the name server pointing to my IP?
    Which one is right? Obviously I want ns1.domainname.com and ns2.domainname.com to point to my IP so I can then point hundreds of domains to that IP, but how exactly should I do that? Specifically, I simply use cPanel (CentOS with cPanel).

    Read the article

  • GWB | 30 in 60 Update – Enrique is almost there!

    - by Staff of Geeks
    We are very close to having our first blogger to reach 30 posts, Enrique Lima.  Stuart Brierley is over the hump with 16 posts and Dave Campbell and Eric Nelson are definitely in the running.  If you don't know what I am talking about, we are running a contest for our bloggers.  Anyone who blogs on Geekswithblogs who creates 30 posts from May 15th to July 13th will receive a custom Geekswithblogs.net t-shirt with their URL on the back.  This could be their Geekswithblogs.net address or their custom domain.  It is definitely not too late to get started and with TechEd or WWDC right around the corner, there is definitely a lot to talk about.
    Current Standings:
    Enrique Lima (28 posts) - http://geekswithblogs.net/enriquelima
    StuartBrierley (16 posts) - http://geekswithblogs.net/StuartBrierley
    Dave Campbell (12 posts) - http://geekswithblogs.net/WynApseTechnicalMusings
    Eric Nelson (10 posts) - http://geekswithblogs.net/iupdateable
    Christopher House (10 posts) - http://geekswithblogs.net/13DaysaWeek
    mbcrump (7 posts) - http://geekswithblogs.net/mbcrump
    Chris Williams (6 posts) - http://geekswithblogs.net/cwilliams
    Michael Stephenson (5 posts) - http://geekswithblogs.net/michaelstephenson
    Steve Michelotti (5 posts) - http://geekswithblogs.net/michelotti
    Liam McLennan (5 posts) - http://geekswithblogs.net/liammclennan
    Follow Us On Twitter: @StaffOfGeeks
    Technorati Tags: Geekswithblogs, 30 in 60, Standings

    Read the article

  • CodePlex Daily Summary for Saturday, June 09, 2012

    CodePlex Daily Summary for Saturday, June 09, 2012Popular ReleasesARCBots API Program: ARCBots API Program v3 STABLE: Stable Release, you can find the update notes in the source code.Microsoft SQL Server Product Samples: Database: AdventureWorks Sample Reports 2008 R2: AdventureWorks Sample Reports 2008 R2.zip contains several reports include Sales Reason Comparisons SQL2008R2.rdl which uses Adventure Works DW 2008R2 as a data source reference. For more information, go to Sales Reason Comparisons report.RULI Chain Code Image Generator: RULI Chain Code BW Image Generator v. 0.4: added features: - 3x3 bitmask support - 7x7 bitmask support - app icon added some refactoring for later library-creationThe Chronicles of Asku: Alpha Test v1.1: Welcome to the Chronicles of Asku alpha test. The current state of the game is 2 tomb floors, level 10 cap, and almost all core systems in the game, excluding a store that you buy armor in. This is just a test for downloading the game, and severe bug hunts... INSTRUCTIONS: When you download the folder, right click within the folder and select EXTRACT ALL Run Setup.exe Follow any on screen instructions DO NOT TRY TO LOAD A CHARACTER IF YOU HAVE NEVER SAVED ONE BEFORE Patch 1.1 -Added a...Json.NET: Json.NET 4.5 Release 7: Fix - Fixed Metro build to pass Windows Application Certification Kit on Windows 8 Release Preview Fix - Fixed Metro build error caused by an anonymous type Fix - Fixed ItemConverter not being used when serializing dictionaries Fix - Fixed an incorrect object being passed to the Error event when serializing dictionaries Fix - Fixed decimal properties not being correctly ignored with DefaultValueHandlingLINQ Extensions Library: 1.0.3.0: New to release 1.0.3.0:Combinatronics: Combinations (unique) Combinations (with repetition) Permutations (unique) Permutations (with repetition) Convert jagged arrays to fixed multidimensional arrays Convert fixed multidimensional arrays to jagged arrays ElementAtMax ElementAtMin ElementAtAverage New set of array extension (1.0.2.8):Rotate Flip Resize (maintaing data) Split Fuse Replace Append and Prepend extensions (1.0.2.7) IndexOf extensions (1.0.2.7) Ne...Audio Pitch & Shift: Audio Pitch And Shift 4.5.0: Added Instruments tab for modules Open folder content feature Some bug fixesPython Tools for Visual Studio: 1.5 Beta 1: We’re pleased to announce the release of Python Tools for Visual Studio 1.5 Beta. Python Tools for Visual Studio (PTVS) is an open-source plug-in for Visual Studio which supports programming with the Python language. 
PTVS supports a broad range of features including: • Supports CPython, IronPython, Jython and PyPy • Python editor with advanced member, signature intellisense and refactoring • Code navigation: “Find all refs”, goto definition, and object browser • Local and remote debugging •...Circuit Diagram: Circuit Diagram 2.0 Beta 1: New in this release: Automatically flip components when placing Delete components using keyboard delete key Resize document Document properties window Print document Recent files list Confirm when exiting with unsaved changes Thumbnail previews in Windows Explorer for CDDX files Show shortcut keys in toolbox Highlight selected item in toolbox Zoom using mouse scroll wheel while holding down ctrl key Plugin support for: Custom export formats Custom import formats Open...Umbraco CMS: Umbraco CMS 5.2 Beta: The future of Umbracov5 represents the future architecture of Umbraco, so please be aware that while it's technically superior to v4 it's not yet on a par feature or performance-wise. What's new? For full details see our http://progress.umbraco.org task tracking page showing all items complete for 5.2. In a nutshellPackage Builder Starter Kits Dynamic Extension Methods Querying / IsHelpers Friendly alt template URLs Localization Various bug fixes / performance enhancements Gett...JayData - The cross-platform HTML5 data-management library for JavaScript: JayData 1.0.5: JayData is a unified data access library for JavaScript developers to query and update data from different sources like WebSQL, IndexedDB, OData, Facebook or YQL. See it in action in this 6 minutes video New features in JayData 1.0.5http://jaydata.org/blog/jaydata-1.0.5-is-here-with-authentication-support-and-more http://jaydata.org/blog/release-notes Sencha Touch 2 module (read-only)This module can be used to bind data retrieved by JayData to Sencha Touch 2 generated user interface. (exam...32feet.NET: 3.5: This version changes the 32feet.NET library (both desktop and NETCF) to use .NET Framework version 3.5. Previously we compiled for .NET v2.0. There are no code changes from our version 3.4. See the 3.4 release for more information. Changes due to compiling for .NET 3.5Applications should be changed to use NET/NETCF v3.5. Removal of class InTheHand.Net.Bluetooth.AsyncCompletedEventArgs, which we provided on NETCF. We now just use the standard .NET System.ComponentModel.AsyncCompletedEvent...DotNetNuke® Links: 06.02.01: Added new DNN 6.2.0 beta social feature "friends" BugfixesApplication Architecture Guidelines: Application Architecture Guidelines 3.0.7: 3.0.7Jolt Environment: Jolt v2 Stable: Many new features. 
Follow development here for more information: http://www.rune-server.org/runescape-development/rs-503-client-server/projects/298763-jolt-environment-v2.html Setup instructions in downloadSharePoint Euro 2012 - UEFA European Football Predictor: havivi.euro2012.wsp (1.5): New fetures:Multilingual Support Max users property in Standings Web Part Games time zone change (UTC +1) bug fix - Version 1.4 locking problem http://euro2012.codeplex.com/discussions/358262 bug fix - Field Title not found (v.1.3) German SP http://euro2012.codeplex.com/discussions/358189#post844228 Bug fix - Access is denied.for users with contribute rights Bug fix - Installing on non-English version of SharePoint Bug fix - Title Rules Installing SharePoint Euro 2012 PredictorSharePoint E...myManga: myManga v1.0.0.4: ChangeLogUpdating from Previous Version: Extract contents of Release - myManga v1.0.0.4.zip to previous version's folder. Replaces: myManga.exe BakaBox.dll CoreMangaClasses.dll Manga.dll Plugins/MangaReader.manga.dll Plugins/MangaFox.manga.dll Plugins/MangaHere.manga.dll Plugins/MangaPanda.manga.dllMVVM Light Toolkit: V4RC (binaries only) including Windows 8 RP: This package contains all the latest DLLs for MVVM Light V4 RC. It includes the DLLs for Windows 8 Release Preview. An updated Nuget package is also available at http://nuget.org/packages/MvvmLightLibsPreviewExtAspNet: ExtAspNet v3.1.7: +2012-06-03 v3.1.7 -?????????BUG,??????RadioButtonList?,AJAX????????BUG(swtseaman、????)。 +?Grid?BoundField、HyperLinkField、LinkButtonField、WindowField??HtmlEncode?HtmlEncodeFormatString(TiDi)。 -HtmlEncode?HtmlEncodeFormatString??????true,??????HTML????????。 -??????Asp.Net??GridView?BoundField?????????。 -http://msdn.microsoft.com/en-us/library/system.web.ui.webcontrols.boundfield.htmlencode -?Grid?HyperLinkField、WindowField??UrlEncode??,????URL??(???true)。 -?????????????,?????????????...LiveChat Starter Kit: LCSK v1.5.2: New features: Visitor location (City - Country) from geo-location Pass configuration via javascript for the chat box New visitor identification (no more using the IP address as visitor identification) To update from 1.5.1 Run the /src/1.5.2-sql-updates.txt SQL script to update your database tables. If you have it installed via NuGet, simply update your package and the file will be included so you can run the update script. New installation The easiest way to add LCSK to your app is by...New ProjectsAdvanceWars: Advance Wars For NetARCBots API Program: This is a simple API program designed for quick use for the ARCBots API.AutoUpdaterdotNET: AutoUpdater.NET is a class library that allows .net developers to easily add auto update functionality to their project.C++ AMP Conformance Test Suite: C++ AMP Conformance Test Suite contains a set of tests to aid in verifying compiler, library, and runtime behaviors as specified in the C++ AMP open specification. DynamicObjectProxy: DOP (Dynamic Object Proxy) é uma biblioteca que permite que qualquer método de qualquer objeto possa ser interceptado através de um proxy dinâmico. Ao interceptar um método, pode-se decorar o objeto, alterando ou recuperando informações sobre o seu comportamento. Extremamente útil para logs, verificações de transações, etc. DOP (Dynamic Object Proxy) is a library that contains classes that makes it possible that any method of any object can be intercepted by a dynamic proxy. By interceptin...EAIP: ???????? 
Flatland: Artificial life, or A-Life, is a broad and ever emerging field that has found its applications in almost any field: economics, medicine, traffic planning, shopping habits, patterns in music. And it is evident that work with modelling logical life in artificial environments is a necessity for the future software developer. Abbots novella "Flatland: A Romance of Many Dimensions" describes the two dimensional world "Flatland", where life is geometrical shapes that have human-like emotions, but...GameAudioSystem: Simple Audio Game System written with OpenAL to be used with Ogre3D.HyperionSS2P - Simple scan to pdf solution: Simple scan to pdf solution.Issue Impala: Issue Impala is a powerful and elegant issue tracker.KoekyGL Wrapper: A .net wrapper for OpenGL aimed at making it easy to use OpenGL.Marik Sample Project: This is a sample by KitMongoDB Managment Client: Development MongoDB Web client. HTML 5 jQuery. One page web application. When Windows 8 metro is released then Metro style application To.MongoDB.Dynamic: MongoDB.Dynamic is a personal project that I’ve started when I was developing my first application targeting MongoDB as DBMS, using the “official driver” MongoDB.Driver, supported by 10gen. The objective is to provide a lightweight library with some interesting features that speed up development for desktop/web applications that accesses MongoDB databases. MongoDB.Dynamic is oriented to interfaces. You don’t need to create concrete classes of your entities, all you need to do is setup your...My Google Workspace: An easy way to create documents search at Google and read your emails and Much moreqzgfjava: git????,?android??“??”???SP Sticky Notes: SP Sticky Notes allows your users to add sticky notes to a page on your SharePoint site.Weather3: It's Metro Style AppXNA Scumm: This is a rewrite of the ScummVM engine using XNA. ScummVM is an engine that runs old school LucasArts graphical adventure games. It is written completely in C# and the first version will run on PC and Xbox 360. A Windows phone version will probably follow. Of course, you will need to own the original games in order to use it. I will start my work by The Secret of Monkey Island, more specifically the VGA CD version. Monkey Island 2: LeChuck's Revenge and Indiana Jones 4 and Fate of Atl...YoG Community Game: This project is a game, being developed by several members of the Yogscast Community Forums. It is a top-down shooter, based around the protagonist 'Joe', and his adventures through TV shows every day.

    Read the article

  • Azure eBook Update #1 – 16 authors so far!

    - by Eric Nelson
    I just wanted to share with folks where we are up to with the Windows Azure eBook (check out the original post for full details). I have had lots of great submissions from folks with some awesome stuff to share on Azure. Currently we have 16 authors and 25 proposed articles. There are still a couple of days left to submit your proposal if you would like to get involved (see the original post), and there are some topic suggestions below for which we don't currently have authors. It is official – I'm excited! :-)
    Article | Area | Accepted
    Wikipedia Explorer: A case study how we did it and why. | Case Study | Optional
    Patterns for the Windows Azure Platform (picking up 1 or 2 patterns that seem to be evolving) | Architecture | Optional
    Azure and cost-oriented architecture. | Architecture | Yes
    Code walkthrough of a comprehensive application submitted to the newCloudApp contest | Case Study | Yes
    Principles of highly scalable apps on Azure | Compute | Optional
    Auto-Scaling Azure | Compute | Yes
    Implementing a distributed cache using memcached with worker roles | Interop | Yes
    Building a content-based router service to direct requests to internal HTTP endpoints | Compute | Optional
    How to debug an Azure app with a custom TraceListener & the AppFabric Service Bus | AppFabric | Yes
    How to host Java apps in Azure | Interop | Yes
    Bing Maps Tile Servers using Azure Blob Storage | Interop | Yes
    Tricks for storing time and date fields in Table Storage | Storage | Yes
    Service Runtime in Windows Azure | Compute | Yes
    Azure Drive | Storage | Optional
    Queries in Azure Table | Storage | Optional
    Getting RubyOnRails running on Azure | Interop | Yes
    Consuming Azure services within Windows Phone | Interop | Yes
    De-risking Your First Azure Project | Architecture | Yes
    Designing for failure | Architecture | Optional
    Connecting to SQL Azure In x Minutes | SQLAzure | Yes
    Using Azure Table Service as a NoSQL store via the REST API | Storage | Yes
    Azure Table Service REST API | Storage | Optional
    Threading, Scalability and Reliability in the Cloud | Compute | Yes
    Azure Diagnostics | Compute | Yes
    5 steps to getting started with Windows Azure | Introduction | Yes
    The best tools for working with Windows Azure | Tools | Author Needed
    Understanding how SQL Azure works | SQLAzure | Author Needed
    Getting started with AppFabric Control Services | AppFabric | Author Needed
    Using the Microsoft Sync Framework with SQL Azure | SQLAzure | Author Needed
    Dallas - just a TV show or something more? | Dallas | Author Needed
    Comparing Azure to other cloud offerings | Interop | Author Needed
    Hybrid solutions using Azure and on-premise | Interop | Author Needed

    Read the article

  • Developer Preview available for the Java Access Bridge is now included in Java SE 7 Update 6

    - by Ragini Prasad
    The Java Access Bridge product is now being included with Java SE 7u6. Manual installation of the Java Access Bridge will no longer be required; all Access Bridge files will be automatically installed by the JRE and the JDK. The Developer Preview for this feature is now available and can be downloaded from http://jdk7.java.net/download.html. By default, the Java Access Bridge is disabled. In order to use the Java Access Bridge, enable it using the steps mentioned below and test your applications for accessibility.
    Enable the Java Access Bridge (use one of these mechanisms):
    1. Ease of Access control panel. On Windows Vista and later the Java Access Bridge can be enabled from the Ease of Access Center. Select "Use the computer without a display". In the "Other programs installed" section, select the "Enable Java Access Bridge" check box and apply.
    2. Or run the following command in the Command Window: %JRE_HOME%\bin\jabswitch -enable
    Note: You must restart your Assistive Technology software and Java application after enabling the bridge.
    Test the Java Access Bridge:
    1. Enable the Java Access Bridge as described above.
    2. Run an Assistive Technology that supports the Java Access Bridge.
    3. Run a Java application. Ensure that the Assistive Technology reads the values of your application.
    Disable the Java Access Bridge:
    Run the following command from the Command Window: %JRE_HOME%\bin\jabswitch -disable
    Note: The Ease of Access control panel cannot be used to disable the bridge. You must use jabswitch from the Command Window to disable the Java Access Bridge.

    Read the article

  • How to compare two XML files and update one of them in Java?

    - by prasad.gai
    I have created two XML files, and I would like to compare one with the other and update the first one accordingly. The files are as follows:

    main.xml

        <?xml version="1.0" encoding="utf-8"?>
        <ScrollView xmlns:android="http://schemas.android.com/apk/res/android"
            android:layout_width="fill_parent"
            android:layout_height="fill_parent"
            android:background="#00BFFF">
            <LinearLayout xmlns:android="http://schemas.android.com/apk/res/android"
                android:layout_width="fill_parent"
                android:layout_height="fill_parent"
                android:orientation="vertical">
                <LinearLayout android:id="@+id/linearLayout1"
                    android:layout_width="fill_parent"
                    android:layout_height="wrap_content"
                    android:gravity="center"
                    android:visibility="visible">
                </LinearLayout>
                <LinearLayout android:id="@+id/linearLayout2"
                    android:layout_width="fill_parent"
                    android:layout_height="wrap_content"
                    android:gravity="center"
                    android:visibility="visible">
                </LinearLayout>
            </LinearLayout>
        </ScrollView>

    properties.xml

        <?xml version='1.0' encoding='utf-8' standalone='yes' ?>
        <map>
            <int name="linearLayout1" value="8" />
            <int name="linearLayout2" value="0" />
        </map>

    From these two XML files, I would like to match the LinearLayout with android:id="@+id/linearLayout1" in main.xml against the int with name="linearLayout1" in properties.xml. Where properties.xml has name="linearLayout1" and value="8", I would like to update main.xml so that the LinearLayout with android:id="@+id/linearLayout1" gets the property android:visibility="gone". How can I update main.xml by comparing it against properties.xml? Please, can anybody help me?
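    One way to approach this with the standard JAXP DOM APIs is sketched below. It is only an illustration, assuming both files sit in the working directory and that, as described in the question, a value of 8 means the layout should become android:visibility="gone" while anything else stays "visible".

        import java.io.File;
        import javax.xml.parsers.DocumentBuilder;
        import javax.xml.parsers.DocumentBuilderFactory;
        import javax.xml.transform.Transformer;
        import javax.xml.transform.TransformerFactory;
        import javax.xml.transform.dom.DOMSource;
        import javax.xml.transform.stream.StreamResult;
        import org.w3c.dom.Document;
        import org.w3c.dom.Element;
        import org.w3c.dom.NodeList;

        public class LayoutVisibilityUpdater {

            private static final String ANDROID_NS = "http://schemas.android.com/apk/res/android";

            public static void main(String[] args) throws Exception {
                DocumentBuilderFactory factory = DocumentBuilderFactory.newInstance();
                factory.setNamespaceAware(true);                  // needed for the android: attributes
                DocumentBuilder builder = factory.newDocumentBuilder();
                Document main = builder.parse(new File("main.xml"));
                Document props = builder.parse(new File("properties.xml"));

                // Read the <int name="..." value="..."/> entries from properties.xml
                NodeList ints = props.getElementsByTagName("int");
                for (int i = 0; i < ints.getLength(); i++) {
                    Element prop = (Element) ints.item(i);
                    String name = prop.getAttribute("name");      // e.g. "linearLayout1"
                    String value = prop.getAttribute("value");    // e.g. "8"
                    String visibility = "8".equals(value) ? "gone" : "visible";

                    // Find the LinearLayout in main.xml whose android:id ends with that name
                    NodeList layouts = main.getElementsByTagName("LinearLayout");
                    for (int j = 0; j < layouts.getLength(); j++) {
                        Element layout = (Element) layouts.item(j);
                        String id = layout.getAttributeNS(ANDROID_NS, "id"); // "@+id/linearLayout1"
                        if (id.endsWith("/" + name)) {
                            layout.setAttributeNS(ANDROID_NS, "android:visibility", visibility);
                        }
                    }
                }

                // Write the updated main.xml back to disk
                Transformer transformer = TransformerFactory.newInstance().newTransformer();
                transformer.transform(new DOMSource(main), new StreamResult(new File("main.xml")));
            }
        }

    The matching on the id suffix is deliberately loose, since main.xml uses "@+id/linearLayout1" while properties.xml stores just "linearLayout1"; adjust it to whatever convention your files actually follow.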

    Read the article

  • Oracle VM 3: New Patch Set! (or Mega Millions winner?...you decide!..)

    - by Adam Hawley
    Today, my favorite number is 14736185 (despite the fact that it did not win me $249million in the MegaMillions lottery...or did it?)!  Why?  Because it is our latest patch release and it is chock-full of good stuff for the Oracle VM 3.0 user.  Oracle VM support customers can find it on My Oracle Support as patch number 14736185.   This can be installed on Oracle VM 3.0.x systems as an incremental patch on top of 3.0.3, so if you previously ran 3.0.3 GA or updated to 3.0.3 patch 1 ( build 150) this will just apply on top.  We're recommending you update to this patch set at your earliest convenience.  For more details, see below but also see Wim Coekaert's blog with related info here. Oracle VM Manager Update Instructions Oracle VM Manager 3.0.2 or 3.0.3 can be upgraded to this Oracle VM Manager 3.0.3 patch update. Unzip the patch file on the server running Oracle VM Manager and execute the runUpgrader.sh script. # ./runUpgrader.sh Please refer to Oracle VM Installation and Upgrade Guide for details. Upgrade Oracle VM Servers It's highly recommended to update Oracle VM Server 3.0.3 with the latest patch update. Please review Oracle VM 3.0.3 User Guide http://docs.oracle.com/cd/E26996_01/e18549/BABDDEGC.html for specific instructions how to use Yum repository to perform the server update. To receive notification on the software update delivered to Oracle Unbreakable Linux Network (ULN, http://linux.oracle.com) for Oracle VM, you can sign up here http://oss.oracle.com/mailman/listinfo/oraclevm-errata.  Additional Information Oracle VM documentation is available on the Oracle Technology Network (OTN):http://www.oracle.com/technetwork/server-storage/vm/documentation/index.html  Please refer to the Oracle VM 3.0.3 Release Notes for a list of features and known issues. For the latest information, best practices white papers and webinars, please visit http://oracle.com/virtualization

    Read the article

  • Is it possible to update an old database from a dbml file? (C#, .Net 4, Linq, SQL Server)

    - by Emil
    Hi all, I recently began a new job on a very interesting project (C#, .NET 4, LINQ, VS 2010 and SQL Server), and immediately got a very exciting challenge. I must implement either a new tool or integrate some logic at program start, but what must happen is the following: the customers have a previous application and database (full of their specific data). Now a new version is ready and the customer gets the update. In the meantime we made some modifications to the DB (a new table, new columns, maybe an old column deleted, and so on). I'm pretty new to LINQ and also to SQL databases, and my first idea is this: I check the application/database versions and apply all the changes step by step, comparing all tables, columns, keys, constraints, etc. (all the new information I have in my dbml, and the old I can query from the existing DB). And I would do this each time the version changes. But somehow I feel this is NOT a smart solution, so I am looking for a more general solution to this problem. Is there a way to update the customer's DB from the dbml file? Creating a new one is not a problem (CreateDatabase on the DataContext), but is there any update/alter database method? I guess I'm not the only one searching for such a solution (I found nothing on the internet – or I searched with the wrong keywords). How did you solve this problem? I would also consider an external tool, but first I am looking for a solution with C#, LINQ or something similar. Thank you in advance for any ideas! Best regards, Emil

    Read the article

  • Is there a rational reason to wait for the release date to download, install or update to the next version of Ubuntu?

    - by badp
    Today, October 6th 2010, Ubuntu 10.10 is in Feature Definition Freeze, Debian Import Freeze, Feature Freeze, User Interface Freeze, Beta Freeze, Documentation String Freeze, Final Freeze, Kernel Freeze and past the Translation Deadlines in both the non-language pack and language pack editions as the release schedule details. Basically, except for last minute bugfixes, the version of Ubuntu 10.10 you can download today is identical to the version of Ubuntu 10.10 you can download on the 10th when it gets released. If you downloaded and installed Ubuntu 10.10 today, you would: help find glaring issues for last minute fixing help defray the network load on October 10th see Ubuntu 10.10 in action without waiting Those sound like pretty strong arguments... to me, and indeed I've been using Ubuntu 10.10 for a month now roughly. However, most people prefer to make the jump with everybody else on release day. What are the rational reasons for that?

    Read the article

  • JSF 2.0 Problem

    - by Sarang
    I am doing a project where I am using JSF 2.0 & Primefaces UI Components. There is a tab view component with tabs, "Day","Week" & "Month". In all tab, I have to display Bar Charts in each. For the same, I am fetching three list using the following three method. In the following code, UpdateCountHelper is fetching the data from database. So, UpdateCountHelper is taking some time for fetching data. This is code for fetching lists : public List<UpdateCount> getDayUpdateCounts() { if (projectFlag == true) { if (displayFlag == 1) { dayUpdateCounts = UpdateCountHelper.getProjectUpdates(1); } else { dayUpdateCounts = UpdateCountHelper.getProjectUpdates(name, 1); } } else { dayUpdateCounts = UpdateCountHelper.getResourceUpdates(userName, 1); } return dayUpdateCounts; } public List<UpdateCount> getMonthUpdateCounts() { if (projectFlag == true) { if (displayFlag == 1) { monthUpdateCounts = UpdateCountHelper.getProjectUpdates(30); } else { monthUpdateCounts = UpdateCountHelper.getProjectUpdates(name, 30); } } else { monthUpdateCounts = UpdateCountHelper.getResourceUpdates(userName, 30); } return monthUpdateCounts; } public List<UpdateCount> getWeekUpdateCounts() { if (projectFlag == true) { if (displayFlag == 1) { weekUpdateCounts = UpdateCountHelper.getProjectUpdates(7); } else { weekUpdateCounts = UpdateCountHelper.getProjectUpdates(name, 7); } } else { weekUpdateCounts = UpdateCountHelper.getResourceUpdates(userName, 7); } return weekUpdateCounts; } This is code for UI of Bar Chart : <p:panel id="Chart"> <p:tabView dynamic="false" cache="false"> <p:tab title="Day"> <p:panel id="chartDayPanel"> <center> <h:outputText id="projectWiseDayText" rendered="#{systemUtilizationServiceBean.projectFlag}" value="Project Wise Day Update"/> <p:columnChart id="projectWiseDayUpdateChart" rendered="#{systemUtilizationServiceBean.projectFlag}" value="#{systemUtilizationServiceBean.dayUpdateCounts}" var="dayWiseUpdate" xfield="#{dayWiseUpdate.name}" height="200px" width="640px"> <p:chartSeries label="Project Wise Current Day Update" value="#{dayWiseUpdate.noUpdates}"/> </p:columnChart> <h:outputText id="resourceWiseDayText" rendered="#{systemUtilizationServiceBean.resourceFlag}" value="Resource Wise Day Update"/> <p:columnChart id="resourceWiseDayUpdateChart" rendered="#{systemUtilizationServiceBean.resourceFlag}" value="#{systemUtilizationServiceBean.dayUpdateCounts}" var="dayWiseResourceUpdate" xfield="#{dayWiseResourceUpdate.name}" height="200px" width="640px"> <p:chartSeries label="Resource Wise Current Day Update" value="#{dayWiseResourceUpdate.noUpdates}"/> </p:columnChart> </center> </p:panel> </p:tab> <p:tab title="Week"> <p:panel id="chartWeekPanel"> <center> <h:outputText id="projectWiseWeekText" rendered="#{systemUtilizationServiceBean.projectFlag}" value="Project Wise Week Update"/> <p:columnChart id="projectWiseWeekUpdateChart" rendered="#{systemUtilizationServiceBean.projectFlag}" value="#{systemUtilizationServiceBean.weekUpdateCounts}" var="weekWiseUpdate" xfield="#{weekWiseUpdate.name}" height="200px" width="640px"> <p:chartSeries label="Project Wise Current Week Update" value="#{weekWiseUpdate.noUpdates}"/> </p:columnChart> <h:outputText id="resourceWiseWeekText" rendered="#{systemUtilizationServiceBean.resourceFlag}" value="Resource Wise Week Update"/> <p:columnChart id="resourceWiseWeekUpdateChart" rendered="#{systemUtilizationServiceBean.resourceFlag}" value="#{systemUtilizationServiceBean.weekUpdateCounts}" var="weekWiseResourceUpdate" xfield="#{weekWiseResourceUpdate.name}" height="200px" 
width="640px"> <p:chartSeries label="Resource Wise Current Week Update" value="#{weekWiseResourceUpdate.noUpdates}"/> </p:columnChart> </center> </p:panel> </p:tab> <p:tab title="Month"> <p:panel id="chartMonthPanel"> <center> <h:outputText id="projectWiseMonthText" rendered="#{systemUtilizationServiceBean.projectFlag}" value="Project Wise Month Update"/> <p:columnChart id="projectWiseMonthUpdateChart" rendered="#{systemUtilizationServiceBean.projectFlag}" value="#{systemUtilizationServiceBean.monthUpdateCounts}" var="monthWiseUpdate" xfield="#{monthWiseUpdate.name}" height="200px" width="640px"> <p:chartSeries label="Project Wise Current Month Update" value="#{monthWiseUpdate.noUpdates}"/> </p:columnChart> <h:outputText id="resourceWiseMonthText" rendered="#{systemUtilizationServiceBean.resourceFlag}" value="Resource Wise Month Update"/> <p:columnChart id="resourceWiseMonthUpdateChart" rendered="#{systemUtilizationServiceBean.resourceFlag}" value="#{systemUtilizationServiceBean.monthUpdateCounts}" var="monthWiseResourceUpdate" xfield="#{monthWiseResourceUpdate.name}" height="200px" width="640px"> <p:chartSeries label="Resource Wise Current Month Update" value="#{monthWiseResourceUpdate.noUpdates}"/> </p:columnChart> </center> </p:panel> </p:tab> </p:tabView> </p:panel> Now, I have to display same data in other tabview with same tabs as mentioned above & only thing is now I have to display in Pie Chart. Now in pie chart, I am using the same lists. So, it will again fetch the data from database & waste time. To solve that problem I have created other three lists & have given only reference of those previous lists. So, now no database fetching occur. The Code for applying the reference is : public List<UpdateCount> getPieDayUpdateCounts() { pieDayUpdateCounts = dayUpdateCounts; return pieDayUpdateCounts; } public List<UpdateCount> getPieMonthUpdateCounts() { pieMonthUpdateCounts = monthUpdateCounts; return pieMonthUpdateCounts; } public List<UpdateCount> getPieWeekUpdateCounts() { pieWeekUpdateCounts = weekUpdateCounts; return pieWeekUpdateCounts; } But, over here the problem occurring is that only chart of which the tab is enable is displayed but the other remaining 2 tabs are not showing any chart. 
The code for UI is : <p:tabView dynamic="false" cache="false"> <p:tab title="Day"> <center> <p:panel id="pieChartDayPanel"> <h:outputText id="projectWiseDayPieChartText" rendered="#{systemUtilizationServiceBean.projectFlag}" value="Project Wise Day Update"/> <p:pieChart id="projectWiseDayUpdatePieChart" rendered="#{systemUtilizationServiceBean.projectFlag}" value="#{systemUtilizationServiceBean.dayUpdateCounts}" var="dayWisePieUpdate" categoryField="#{dayWisePieUpdate.name}" dataField="#{dayWisePieUpdate.noUpdates}" height="200" width="200"/> <h:outputText id="resourceWiseDayPieChartText" rendered="#{systemUtilizationServiceBean.resourceFlag}" value="Resource Wise Day Update"/> <p:pieChart id="resourceWiseDayUpdatePieChart" rendered="#{systemUtilizationServiceBean.resourceFlag}" value="#{systemUtilizationServiceBean.dayUpdateCounts}" var="dayWiseResourcePieUpdate" categoryField="#{dayWiseResourcePieUpdate.name}" dataField="#{dayWiseResourcePieUpdate.noUpdates}" height="200" width="200"/> </p:panel> </center> </p:tab> <p:tab title="Week"> <center> <p:panel id="pieChartWeekPanel"> <h:outputText id="projectWiseWeekPieChartText" rendered="#{systemUtilizationServiceBean.projectFlag}" value="Project Wise Week Update"/> <p:pieChart id="projectWiseWeekUpdatePieChart" rendered="#{systemUtilizationServiceBean.projectFlag}" value="#{systemUtilizationServiceBean.weekUpdateCounts}" var="weekWisePieUpdate" categoryField="#{weekWisePieUpdate.name}" dataField="#{weekWisePieUpdate.noUpdates}" height="200" width="200"/> <h:outputText id="resourceWiseWeekPieChartText" rendered="#{systemUtilizationServiceBean.resourceFlag}" value="Resource Wise Week Update"/> <p:pieChart id="resourceWiseWeekUpdatePieChart" rendered="#{systemUtilizationServiceBean.resourceFlag}" value="#{systemUtilizationServiceBean.weekUpdateCounts}" var="weekWiseResourcePieUpdate" categoryField="#{weekWiseResourcePieUpdate.name}" dataField="#{weekWiseResourcePieUpdate.noUpdates}" height="200" width="200"/> </p:panel> </center> </p:tab> <p:tab title="Month"> <center> <p:panel id="pieChartMonthPanel"> <h:outputText id="projectWiseMonthPieChartText" rendered="#{systemUtilizationServiceBean.projectFlag}" value="Project Wise Month Update"/> <p:pieChart id="projectWiseMonthUpdatePieChart" rendered="#{systemUtilizationServiceBean.projectFlag}" value="#{systemUtilizationServiceBean.monthUpdateCounts}" var="monthWisePieUpdate" categoryField="#{monthWisePieUpdate.name}" dataField="#{monthWisePieUpdate.noUpdates}" height="200" width="200"/> <h:outputText id="resourceWiseMonthPieChartText" rendered="#{systemUtilizationServiceBean.resourceFlag}" value="Resource Wise Month Update"/> <p:pieChart id="resourceWiseMonthUpdatePieChart" rendered="#{systemUtilizationServiceBean.resourceFlag}" value="#{systemUtilizationServiceBean.monthUpdateCounts}" var="monthWiseResourcePieUpdate" categoryField="#{monthWiseResourcePieUpdate.name}" dataField="#{monthWiseResourcePieUpdate.noUpdates}" height="200" width="200"/> </p:panel> </center> </p:tab> </p:tabView> What should be the reason behind this ?
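    The getters above go back to the database every time JSF evaluates them, and JSF may call a getter several times per request, which is why wiring up a second tab view multiplies the cost. A common way around this is to fetch each list once when the bean is created and let every chart, bar or pie, bind to the same property; the duplicate pieXxx lists then become unnecessary. The sketch below only illustrates that idea (it drops the projectFlag/displayFlag branches from the question for brevity and assumes a JSF 2.0 view-scoped managed bean); whether the inactive pie tabs render their charts is a separate tab-view question.

        // Sketch only: the projectFlag/displayFlag branches from the question are omitted for brevity.
        import java.util.List;
        import javax.annotation.PostConstruct;
        import javax.faces.bean.ManagedBean;
        import javax.faces.bean.ViewScoped;

        @ManagedBean
        @ViewScoped
        public class SystemUtilizationServiceBean {

            private List<UpdateCount> dayUpdateCounts;
            private List<UpdateCount> weekUpdateCounts;
            private List<UpdateCount> monthUpdateCounts;

            @PostConstruct
            public void init() {
                // Fetch each list exactly once per view; every chart then binds to the same property.
                dayUpdateCounts   = UpdateCountHelper.getProjectUpdates(1);
                weekUpdateCounts  = UpdateCountHelper.getProjectUpdates(7);
                monthUpdateCounts = UpdateCountHelper.getProjectUpdates(30);
            }

            public List<UpdateCount> getDayUpdateCounts()   { return dayUpdateCounts; }
            public List<UpdateCount> getWeekUpdateCounts()  { return weekUpdateCounts; }
            public List<UpdateCount> getMonthUpdateCounts() { return monthUpdateCounts; }
        }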

    Read the article

  • Security Alert For CVE-2010-4476 Released

    - by eric.maurice
    Hello, this is Eric Maurice again. Oracle just released a Security Alert with a fix for the vulnerability CVE-2010-4476, which affects Oracle Java SE and Oracle Java For Business. This vulnerability is present in Java running on servers as well as standalone Java desktop applications. Its successful exploitation by a malicious attacker can result in a complete denial of service for the affected servers. While only recently publicly disclosed, a number of Internet sites have since then reproduced details about this vulnerability, including exploit codes, which may result in allowing a malicious attacker to create a denial of service condition against the targeted system. Oracle therefore strongly recommends that affected organizations apply this fix as soon as possible. Please note that a fix for this vulnerability will also be included in the upcoming Java Critical Patch Update (Java SE and Java for Business Critical Patch Update - February 2011), which will be released on February 15th 2011. Note that the impact of this vulnerability on desktops is minimal: the affected applications or applets running in Internet browsers for example, might stop responding and may need to be restarted; however the desktop itself will not be compromised (i.e. no compromise at the desktop OS level). Oracle therefore recommends that consumers use the Java auto-update mechanism to get this fix. This will prompt them to install the latest version of the Java Runtime Environment 6 update 24 or higher (JRE), which includes the fix for this vulnerability. JRE 6 update 24 will also be distributed with the Java SE and Java for Business Critical Patch Update - February 2011. For More Information: The Critical Patch Updates and Security Alerts page is located at http://www.oracle.com/technetwork/topics/security/alerts-086861.html The Advisory for Security Alert CVE-2010-4476 is located at http://www.oracle.com/technetwork/topics/security/alert-cve-2010-4476-305811.html More information on Oracle Software Security Assurance is located at http://www.oracle.com/us/support/assurance/index.html Consumers can go to http://www.java.com/en/download/installed.jsp to ensure that they have the latest version of Java running on their desktops. More information on Java Update is available at http://www.java.com/en/download/help/java_update.xml

    Read the article

  • OSX launchd plist for node forever process

    - by lostintranslation
    I am trying to write a launchd.plist file for my node server. I am using forever to run my node server. I would like the server to start on boot. I would also like to wait for the mongodb launchd plist to run first. I installed mongobb using homebrew and it came with a launchd.plist already. I have executed the following: $ launchctl load ~/Library/LaunchAgents/homebrew.mxcl.mongodb.plist plist for mongodb is: <!DOCTYPE plist PUBLIC "-//Apple//DTD PLIST 1.0//EN" "http://www.apple.com/DTDs/PropertyList-1.0.dtd"> <plist version="1.0"> <dict> <key>Label</key> <string>homebrew.mxcl.mongodb</string> <key>ProgramArguments</key> <array> <string>/usr/local/opt/mongodb/mongod</string> <string>run</string> <string>--config</string> <string>/usr/local/etc/mongod.conf</string> </array> <key>RunAtLoad</key> <true/> <key>KeepAlive</key> <false/> <key>WorkingDirectory</key> <string>/usr/local</string> <key>StandardErrorPath</key> <string>/usr/local/var/log/mongodb/output.log</string> <key>StandardOutPath</key> <string>/usr/local/var/log/mongodb/output.log</string> <key>HardResourceLimits</key> <dict> <key>NumberOfFiles</key> <integer>1024</integer> </dict> <key>SoftResourceLimits</key> <dict> <key>NumberOfFiles</key> <integer>1024</integer> </dict> </dict> </plist> If I shutdown the computer and restart mongodb fires up as it should. However my node server is not starting. Any ideas? <?xml version="1.0" encoding="UTF-8"?> <!DOCTYPE plist PUBLIC "-//Apple//DTD PLIST 1.0//EN" "http://www.apple.com/DTDs/PropertyList-1.0.dtd"> <plist version="1.0"> <dict> <key>KeepAlive</key> <dict> <key>SuccessfulExit</key> <false/> </dict> <key>Label</key> <string>com.test.app</string> <key>ProgramArguments</key> <array> <string>/usr/local/bin/forever</string> <string>-a</string> <string>-l</string> <string>/var/log/app/app.log</string> <string>-e</string> <string>/var/log/app/app_error.log</string> <string>/data/server/app.js</string> </array> <key>RunAtLoad</key> <true/> <key>StartInterval</key> <integer>3600</integer> </dict> </plist> EDIT: writing to log file and I see this: env: node: No such file or directory I think this means that the node binary cannot be found. I can echo $PATH and /usr/local/bin is in my path. I can start node from the terminal. Ideas?

    Read the article

  • OOP concept: is it possible to update the class of an instantiated object?

    - by Federico
    I am trying to write a simple program that should allow a user to save and display sets of heterogeneous, but somehow related, data. For clarity's sake, I will use a representative example of vehicles. The program flow is like this:
    1. The program creates a Garage object, which is basically a class that can contain a list of Vehicle objects.
    2. Then the user creates Vehicle objects; these Vehicles each have a property, let's say License Plate Nr.
    3. Once created, the Vehicle object gets added to a list within the Garage object.
    4. Later on, the user can specify that a given Vehicle object is in fact a Car object or a Truck object (thus giving access to some specific attributes such as Number of seats for the Car, or Cargo weight for the Truck).
    At first sight, this might look like an OOP textbook question involving a base class and inheritance, but the problem is more subtle, because at object creation time (and until the user decides to give more info), the computer doesn't know the exact Vehicle type. Hence my question: how would you proceed to implement this program flow? Is OOP the way to go? Just to give an initial answer, here is what I've come up with until now. There is only one Vehicle class, and the various properties/values are handled by the main program (not the class) through a dictionary. However, I'm pretty sure that there must be a more elegant solution (I'm developing using VB.net):

        Public Class Garage
            Public GarageAdress As String
            Private _ListGarageVehicles As New List(Of Vehicles)

            Public Sub AddVehicle(Vehicle As Vehicles)
                _ListGarageVehicles.Add(Vehicle)
            End Sub
        End Class

        Public Class Vehicles
            Public LicensePlateNumber As String

            Public Enum VehicleTypes
                Generic = 0
                Car = 1
                Truck = 2
            End Enum

            Public VehicleType As VehicleTypes

            Public DictVehicleProperties As New Dictionary(Of String, String)
        End Class

    NOTE that in the example above the public/private modifiers do not necessarily reflect the original code.
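    Since an object's class cannot be changed after instantiation in most OOP languages (VB.NET included), one common alternative to the dictionary is composition: keep a single Vehicle class and attach a type-specific details object once the user supplies the extra information. Below is a rough sketch of that shape, written in Java purely as an illustration (the same structure ports directly to VB.NET); all the names are hypothetical.

        import java.util.ArrayList;
        import java.util.List;

        // The vehicle itself never changes class; only its optional details do.
        class Vehicle {
            final String licensePlate;
            private VehicleDetails details;          // null until the user picks Car/Truck

            Vehicle(String licensePlate) { this.licensePlate = licensePlate; }

            void specifyAs(VehicleDetails details) { this.details = details; }

            String describe() {
                return licensePlate + (details == null ? " (type not yet specified)" : " " + details.describe());
            }
        }

        interface VehicleDetails {
            String describe();
        }

        class CarDetails implements VehicleDetails {
            final int numberOfSeats;
            CarDetails(int numberOfSeats) { this.numberOfSeats = numberOfSeats; }
            public String describe() { return "car with " + numberOfSeats + " seats"; }
        }

        class TruckDetails implements VehicleDetails {
            final double cargoWeight;
            TruckDetails(double cargoWeight) { this.cargoWeight = cargoWeight; }
            public String describe() { return "truck carrying " + cargoWeight + " kg"; }
        }

        class Garage {
            private final List<Vehicle> vehicles = new ArrayList<>();
            void addVehicle(Vehicle v) { vehicles.add(v); }
            List<Vehicle> getVehicles() { return vehicles; }
        }

        public class GarageDemo {
            public static void main(String[] args) {
                Garage garage = new Garage();
                Vehicle v = new Vehicle("ABC-123");
                garage.addVehicle(v);                 // added while its exact type is still unknown
                v.specifyAs(new CarDetails(5));       // later on, the user says it is a car
                garage.getVehicles().forEach(x -> System.out.println(x.describe()));
            }
        }

    The garage never needs to know which concrete details a vehicle ends up with, which matches the requirement that the type is only decided later.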

    Read the article

  • Swing: what to do when a JTree update takes too long and freezes other GUI elements?

    - by java.is.for.desktop
    Hello, everyone! I know that GUI code in Java Swing must be put inside SwingUtilities.invokeAndWait or SwingUtilities.invokeLater. This way threading works fine. Sadly, in my situation, the GUI update is the thing that takes much longer than the background thread(s). More specifically: I update a JTree with only about 400 entries, nesting depth at most 4, so it should be nothing scary, right? But it sometimes takes one second! I need to ensure that the user is able to type in a JTextPane without delays. Well, guess what, the slow JTree updates do cause delays during input: the JTextPane refreshes only once the tree gets updated. I am using NetBeans and know empirically that a Java app can update lots of information without freezing the rest of the UI. How can it be done?
    NOTE 1: All those DefaultMutableTreeNodes are prepared outside the invokeAndWait.
    NOTE 2: When I replace invokeAndWait with invokeLater the tree doesn't get updated.
    NOTE 3: Found out that recursive tree expansion takes by far the most time.
    NOTE 4: I'm using a custom tree cell renderer; I will try without it and report.
    NOTE 4a: My tree cell renderer uses a map to cache and reuse created JTextComponents, keyed by tree node.
    CLUE 1: Wow! Without setting the custom cell renderer it's 10 times faster. I think I'll need a few good tutorials on writing custom tree cell renderers.
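    The clue at the end points at the usual culprit: a renderer that hands out cached JTextComponents is far heavier than Swing expects, because the renderer is invoked for every visible cell on every repaint. A common pattern, sketched below with hypothetical names and no claim to fix this particular app, is to reuse one lightweight DefaultTreeCellRenderer-based component and to build the DefaultMutableTreeNodes on a background thread with SwingWorker, applying only the final model swap on the EDT.

        import java.awt.Component;
        import java.util.List;
        import javax.swing.JTree;
        import javax.swing.SwingWorker;
        import javax.swing.tree.DefaultMutableTreeNode;
        import javax.swing.tree.DefaultTreeCellRenderer;
        import javax.swing.tree.DefaultTreeModel;

        // Reuses one lightweight label-style component for every cell,
        // instead of keeping a map of JTextComponents per node.
        class FastRenderer extends DefaultTreeCellRenderer {
            @Override
            public Component getTreeCellRendererComponent(JTree tree, Object value, boolean selected,
                    boolean expanded, boolean leaf, int row, boolean hasFocus) {
                super.getTreeCellRendererComponent(tree, value, selected, expanded, leaf, row, hasFocus);
                Object userObject = ((DefaultMutableTreeNode) value).getUserObject();
                setText(String.valueOf(userObject));   // cheap: just set the text on the shared label
                return this;
            }
        }

        // Builds the nodes off the EDT and swaps them in on the EDT in one go
        // (assumes the tree uses a DefaultTreeModel).
        class TreeRefresher extends SwingWorker<DefaultMutableTreeNode, Void> {
            private final JTree tree;
            private final List<String> entries;        // whatever data the tree is built from

            TreeRefresher(JTree tree, List<String> entries) {
                this.tree = tree;
                this.entries = entries;
            }

            @Override
            protected DefaultMutableTreeNode doInBackground() {
                DefaultMutableTreeNode root = new DefaultMutableTreeNode("root");
                for (String entry : entries) {
                    root.add(new DefaultMutableTreeNode(entry));  // expensive building happens off the EDT
                }
                return root;
            }

            @Override
            protected void done() {                    // runs on the EDT
                try {
                    ((DefaultTreeModel) tree.getModel()).setRoot(get());
                } catch (Exception e) {
                    e.printStackTrace();
                }
            }
        }

    Recursive expansion of hundreds of rows will still cost something on the EDT, but keeping the renderer trivial usually removes the worst of the freeze.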

    Read the article

  • How long does it take for Google to re-index pages or update the link titles?

    - by ElHaix
    On one of our classified sites, when doing site:[mysite.com] in Google, the link text is simply [product name] - [mysite.com], whereas it should read [product name] classifieds for sale in... I suspect that the site map may have been submitted when we just had [product name], and that we updated the page titles later. However, it has been a couple of weeks since I confirmed the longer page titles, and they still appear shortened in organic results. How can I get this to look right in Google's organic results?

    Read the article

  • How can I validate if a 13.10 update was complete?

    - by James
    I attempted to upgrade an Ubuntu 13.04 server to 13.10 tonight via the standard Linux text console, and it ran into some trouble. The machine boots and displays 13.10, but I am unsure exactly what or how much was successfully upgraded. Is there some command I can run which will validate that the system has all standard binaries upgraded to the 13.10 release? As for the issue: everything seemed to be going along OK until the screen displayed some kind of menu option regarding local edits to the Samba config file. There was a prompt requesting the root password or Ctrl-D to continue, but it would not take any input. From another terminal screen I tried killing the process displaying this Samba message, and then some screen/SCREEN processes. The hard drive activity picked up for a while and then all the processes on that pty were gone. As I said, the reboot was OK, but I have no idea if everything was upgraded. The machine seems to be acting normally, except that the upgrade killed my openvpn process, which I'll need to reload.

    Read the article

  • YOUR FREE, EXCLUSIVE, ONLINE UPDATE ON FANTASTIC NEW ORACLE PARTNER OPPORTUNITIES - REGISTER TODAY!

    - by Claudia Costa
    New products. New specializations. New opportunities. There really has never been a better time to be an Oracle partner! Find out exactly what Oracle's "Software. Hardware. Complete" strategy, and the latest developments in the OPN Specialized program, mean for your business. Register now for the Oracle PartnerNetwork Days Virtual Event on the 29th of June at 11:00h to learn:
    · How to use Oracle's uniquely comprehensive technology stack to grow your business
    · How specialization with Oracle can significantly improve your competitive position
    · How the Oracle PartnerNetwork is evolving to help you succeed
    Highlights include important updates from Oracle EMEA strategy, partner and product leaders, a live link to the Oracle FY11 Global Partner Kickoff, and interviews with local Oracle partners that are already enjoying the benefits of specialization. The event will also feature:
    · Live Q&A sessions with our speakers
    · Virtual information booths packed with useful information
    · Opportunities to network with Oracle experts and your peers
    · A special guest speaker, a former Microsoft executive who has used the principles of specialization with spectacular results to become one of the world's most successful social entrepreneurs
    Plus, at the end of the event, you can submit your feedback form for your chance to win two passes to Oracle OpenWorld in San Francisco this September! CLICK HERE TO REGISTER NOW!

    Read the article
