Search Results

Search found 6295 results on 252 pages for 'git push'.

Page 102 of 252

  • C#/.NET Little Wonders: The Concurrent Collections (1 of 3)

    - by James Michael Hare
    Once again we consider some of the lesser-known classes and keywords of C#. In the next few weeks, we will discuss the concurrent collections and how they have changed the face of concurrent programming. This week's post will begin with a general introduction and discuss the ConcurrentStack<T> and ConcurrentQueue<T>. Then in the following post we'll discuss the ConcurrentDictionary<T> and ConcurrentBag<T>. Finally, we shall close the third post with a discussion of the BlockingCollection<T>. For more of the "Little Wonders" posts, see the index here.

    A brief history of collections

    In the beginning was the .NET 1.0 Framework. And out of this framework emerged the System.Collections namespace, and it was good. It contained all the basic things a growing programming language needs, like the ArrayList and Hashtable collections. The main problem with these original collections is that they held items of type object, which means you had to be disciplined enough to use them correctly or you could end up with runtime errors if you got an object of a type you weren't expecting.

    Then came .NET 2.0 and generics, and our world changed forever! With generics the C# language finally got an equivalent of the very powerful C++ templates. As such, System.Collections.Generic was born and we got type-safe versions of all our favorite collections. The List<T> succeeded the ArrayList, the Dictionary<TKey,TValue> succeeded the Hashtable, and so on. The new versions of the library were not only safer because they checked types at compile time; in many cases they were more performant as well. So much so that it's Microsoft's recommendation that the original System.Collections collections only be used for backwards compatibility.

    So we as developers came to know and love the generic collections and took them into our hearts and embraced them. The problem is, thread safety in both the original collections and the generic collections can be problematic, for very different reasons. Now, if you are only doing single-threaded development you may not care – after all, no locking is required. Even if you do have multiple threads, if a collection is "load-once, read-many" you don't need to do anything to protect that container from multi-threaded access, as illustrated below:

        public static class OrderTypeTranslator
        {
            // because this dictionary is loaded once before it is ever accessed, we don't need to synchronize
            // multi-threaded read access
            private static readonly Dictionary<string, char> _translator = new Dictionary<string, char>
            {
                {"New", 'N'},
                {"Update", 'U'},
                {"Cancel", 'X'}
            };

            // the only public interface into the dictionary is for reading, so inherently thread-safe
            public static char? Translate(string orderType)
            {
                char charValue;
                if (_translator.TryGetValue(orderType, out charValue))
                {
                    return charValue;
                }

                return null;
            }
        }

    Unfortunately, most of our computer science problems cannot get by with just single-threaded applications or with multi-threading in a load-once manner. Looking at today's trends, it's clear to see that computers are not so much getting faster because of faster processor speeds – we've nearly reached the limits we can push through with today's technologies – but more because we're adding more cores to the boxes.
    With this new hardware paradigm, it is even more important to use multi-threaded applications to take full advantage of parallel processing to achieve higher application speeds. So let's look at how to use collections in a thread-safe manner.

    Using historical collections in a concurrent fashion

    The early .NET collections (System.Collections) had a Synchronized() static method that could be used to wrap the early collections to make them completely thread-safe. This paradigm was dropped in the generic collections (System.Collections.Generic) because having a synchronized wrapper resulted in atomic locks for all operations, which could prove overkill in many multithreading situations. Thus the paradigm shifted to having the user of the collection specify their own locking, usually with an external object:

        public class OrderAggregator
        {
            private static readonly Dictionary<string, List<Order>> _orders = new Dictionary<string, List<Order>>();
            private static readonly object _orderLock = new object();

            public void Add(string accountNumber, Order newOrder)
            {
                List<Order> ordersForAccount;

                // a complex operation like this should all be protected
                lock (_orderLock)
                {
                    if (!_orders.TryGetValue(accountNumber, out ordersForAccount))
                    {
                        _orders.Add(accountNumber, ordersForAccount = new List<Order>());
                    }

                    ordersForAccount.Add(newOrder);
                }
            }
        }

    Notice how we're performing several operations on the dictionary under one lock. With the Synchronized() static methods of the early collections, you wouldn't be able to specify this level of locking (a more macro level). So in the generic collections, it was decided that if a user needed synchronization, they could implement their own locking scheme instead so that they could provide synchronization as needed.

    The need for better concurrent access to collections

    Here's the problem: it's relatively easy to write a collection that locks itself down completely for access, but anything more complex than that can be difficult and error-prone to write, much less to make perform efficiently! For example, what if you have a Dictionary that has frequent reads but infrequent updates? Do you want to lock down the entire Dictionary for every access? This would be overkill and would prevent concurrent reads. In such cases you could use something like a ReaderWriterLockSlim, which allows multiple readers in a lock, and then once a writer grabs the lock it blocks all further readers until the writer is done (in a nutshell). This is all very complex stuff to consider.

    Fortunately, this is where the Concurrent Collections come in. The Parallel Computing Platform team at Microsoft went to great pains to determine how to make a set of concurrent collections that would have the best performance characteristics for general-case multi-threaded use. Now, as in all things involving threading, you should always make sure you evaluate all your container options based on the particular usage scenario and the degree of parallelism you wish to achieve. This article should not be taken to mean that these collections are always superior to the generic collections. Each fills a particular need for a particular situation. Understanding what each container is optimized for is key to the success of your application, whether it be single-threaded or multi-threaded.
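    As a rough sketch of the ReaderWriterLockSlim approach mentioned above (the class and member names here are illustrative, not from the post), a read-mostly cache might look something like this:

        // minimal sketch: guarding a read-mostly dictionary with ReaderWriterLockSlim
        // (requires System.Collections.Generic and System.Threading)
        public static class ReadMostlyCache
        {
            private static readonly Dictionary<string, string> _cache = new Dictionary<string, string>();
            private static readonly ReaderWriterLockSlim _lock = new ReaderWriterLockSlim();

            // many readers may hold the read lock at the same time
            public static bool TryGet(string key, out string value)
            {
                _lock.EnterReadLock();
                try { return _cache.TryGetValue(key, out value); }
                finally { _lock.ExitReadLock(); }
            }

            // a writer blocks all readers until it is done
            public static void Set(string key, string value)
            {
                _lock.EnterWriteLock();
                try { _cache[key] = value; }
                finally { _lock.ExitWriteLock(); }
            }
        }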
    General points to consider with the concurrent collections

    MSDN points out that the concurrent collections all support the ICollection interface. However, since the collections are already synchronized, the IsSynchronized property always returns false, and SyncRoot always returns null. Thus you should not attempt to use these properties for synchronization purposes.

    Note that the concurrent collections may also have different operations than the traditional data structures you are used to. Now you may ask why they did this, but it was done out of necessity to keep operations safe and atomic. For example, in order to do a Pop() on a stack you have to know the stack is non-empty, but between the time you check the stack's IsEmpty property and then do the Pop() another thread may have come in and made the stack empty! This is why some of the traditional operations have been changed to make them safe for concurrent use.

    In addition, some properties and methods in the concurrent collections achieve concurrency by creating a snapshot of the collection, which means that some operations that were traditionally O(1) may now be O(n) in the concurrent models. I'll try to point these out as we talk about each collection so you can be aware of any potential performance impacts. Finally, all the concurrent containers are safe for enumeration even while being modified, but some of the containers support this in different ways (snapshot vs. dirty iteration). Once again I'll highlight how thread-safe enumeration works for each collection.

    ConcurrentStack<T>: The thread-safe LIFO container

    The ConcurrentStack<T> is the thread-safe counterpart to the System.Collections.Generic.Stack<T>, which as you may remember is your standard last-in-first-out container. If you think of algorithms that favor stack usage (for example, depth-first searches of graphs and trees) then you can see how using a thread-safe stack would be of benefit.

    The ConcurrentStack<T> achieves thread-safe access by using System.Threading.Interlocked operations. This means that multi-threaded access to the stack requires no traditional locking and is very, very fast! For the most part, the ConcurrentStack<T> behaves like its Stack<T> counterpart with a few differences:

    - Pop() was removed in favor of TryPop(), which returns true if an item existed and was popped, and false if empty.
    - PushRange() and TryPopRange() were added, allowing you to push multiple items and pop multiple items atomically.
    - Count takes a snapshot of the stack and then counts the items, making it an O(n) operation; if you just want to check for an empty stack, call IsEmpty instead, which is O(1).
    - ToArray() and GetEnumerator() both also take snapshots, meaning that iteration over a stack gives you a static view at the time of the call and will not reflect updates.

    Pushing on a ConcurrentStack<T> works just like you'd expect, except for the aforementioned PushRange() method that was added to allow you to push a range of items concurrently:

        var stack = new ConcurrentStack<string>();

        // adding to stack is much the same as before
        stack.Push("First");

        // but you can also push multiple items in one atomic operation (no interleaves)
        stack.PushRange(new [] { "Second", "Third", "Fourth" });

    For looking at the top item of the stack (without removing it) the Peek() method has been removed in favor of a TryPeek().
    This is because in order to do a peek the stack must be non-empty, but between the time you check for empty and the time you execute the peek the stack contents may have changed. Thus TryPeek() was created to be an atomic check for empty, and then a peek if not empty:

        // to look at top item of stack without removing it, can use TryPeek.
        // Note that there is no Peek(), this is because you need to check for empty first. TryPeek does.
        string item;
        if (stack.TryPeek(out item))
        {
            Console.WriteLine("Top item was " + item);
        }
        else
        {
            Console.WriteLine("Stack was empty.");
        }

    Finally, to remove items from the stack, we have TryPop() for single items and TryPopRange() for multiple items. Just like TryPeek(), these operations replace Pop() since we need to ensure atomically that the stack is non-empty before we pop from it:

        // to remove items, use TryPop or TryPopRange to get multiple items atomically (no interleaves)
        if (stack.TryPop(out item))
        {
            Console.WriteLine("Popped " + item);
        }

        // TryPopRange will only pop up to the number of spaces in the array, the actual number popped is returned.
        var poppedItems = new string[2];
        int numPopped = stack.TryPopRange(poppedItems);

        foreach (var theItem in poppedItems.Take(numPopped))
        {
            Console.WriteLine("Popped " + theItem);
        }

    Finally, note that as stated before, GetEnumerator() and ToArray() take a snapshot of the data at the time of the call. That means if you are enumerating the stack you will get a snapshot of the stack as it was at the time of the call. This is illustrated below:

        var stack = new ConcurrentStack<string>();

        // adding to stack is much the same as before
        stack.Push("First");

        var results = stack.GetEnumerator();

        // but you can also push multiple items in one atomic operation (no interleaves)
        stack.PushRange(new [] { "Second", "Third", "Fourth" });

        while (results.MoveNext())
        {
            Console.WriteLine("Stack only has: " + results.Current);
        }

    The only item that will be printed out in the above code is "First", because the snapshot was taken before the other items were added. This may sound like an issue, but it's really for safety and is more correct. You don't want to enumerate a stack and have half a view of the stack before an update and half a view of the stack after an update, after all. In addition, note that this is still thread-safe, whereas iterating through a non-concurrent collection while updating it in the old collections would cause an exception.

    ConcurrentQueue<T>: The thread-safe FIFO container

    The ConcurrentQueue<T> is the thread-safe counterpart of the System.Collections.Generic.Queue<T> class. The concurrent queue uses an underlying list of small arrays and lock-free System.Threading.Interlocked operations on the head and tail arrays. Once again, this allows us to do thread-safe operations without the need for heavy locks!

    The ConcurrentQueue<T> (like the ConcurrentStack<T>) has some departures from its non-concurrent counterpart. Most notably:

    - Dequeue() was removed in favor of TryDequeue(), which returns true if an item existed and was dequeued, and false if empty.
    - Count does not take a snapshot; it subtracts the head and tail indexes to get the count, which results overall in O(1) complexity, which is quite good. It's still recommended, however, that for empty checks you call IsEmpty instead of comparing Count to zero.
    - ToArray() and GetEnumerator() both take snapshots.
      This means that iteration over a queue will give you a static view at the time of the call and will not reflect updates.

    The Enqueue() method on the ConcurrentQueue<T> works much the same as on the generic Queue<T>:

        var queue = new ConcurrentQueue<string>();

        // adding to queue is much the same as before
        queue.Enqueue("First");
        queue.Enqueue("Second");
        queue.Enqueue("Third");

    For front-item access, the TryPeek() method must be used to attempt to see the first item of the queue. There is no Peek() method since, as you'll remember, we can only peek on a non-empty queue, so we must have an atomic TryPeek() that checks for empty and then returns the first item if the queue is non-empty:

        // to look at first item in queue without removing it, can use TryPeek.
        // Note that there is no Peek(), this is because you need to check for empty first. TryPeek does.
        string item;
        if (queue.TryPeek(out item))
        {
            Console.WriteLine("First item was " + item);
        }
        else
        {
            Console.WriteLine("Queue was empty.");
        }

    Then, to remove items you use TryDequeue(). Once again this is for the same reason we have TryPeek() and not Peek():

        // to remove items, use TryDequeue. If queue is empty returns false.
        if (queue.TryDequeue(out item))
        {
            Console.WriteLine("Dequeued first item " + item);
        }

    Just like the concurrent stack, the ConcurrentQueue<T> takes a snapshot when you call ToArray() or GetEnumerator(), which means that subsequent updates to the queue will not be seen when you iterate over the results. Thus once again the code below will only show the first item, since the other items were added after the snapshot:

        var queue = new ConcurrentQueue<string>();

        // adding to queue is much the same as before
        queue.Enqueue("First");

        var iterator = queue.GetEnumerator();

        queue.Enqueue("Second");
        queue.Enqueue("Third");

        // only shows First
        while (iterator.MoveNext())
        {
            Console.WriteLine("Dequeued item " + iterator.Current);
        }

    Using collections concurrently

    You'll notice in the examples above I stuck to single-threaded examples so as to make them deterministic and the results obvious. Of course, if we used these collections in a truly multi-threaded way the results would be less deterministic, but still thread-safe and with no locking on your part required!

    For example, say you have an order processor that takes an IEnumerable<Order> and handles each order in a multi-threaded fashion, then groups the responses together in a concurrent collection for aggregation. This can be done easily with the TPL's Parallel.ForEach():

        public static IEnumerable<OrderResult> ProcessOrders(IEnumerable<Order> orderList)
        {
            var proxy = new OrderProxy();
            var results = new ConcurrentQueue<OrderResult>();

            // notice that we can process all these in parallel and put the results
            // into our concurrent collection without needing any external locking!
            Parallel.ForEach(orderList,
                order =>
                {
                    var result = proxy.PlaceOrder(order);

                    results.Enqueue(result);
                });

            return results;
        }

    Summary

    Obviously, if you do not need multi-threaded safety, you don't need to use these collections, but when you do need multi-threaded collections these are just the ticket! The plethora of features (I always think of the movie The Three Amigos when I say plethora) built into these containers and the amazing way they achieve thread-safe access in an efficient manner is wonderful to behold.
    Stay tuned next week, when we'll continue our discussion with the ConcurrentBag<T> and the ConcurrentDictionary<TKey,TValue>. For some excellent information on the performance of the concurrent collections and how they compare to a traditional brute-force locking strategy, see this wonderful whitepaper by the Microsoft Parallel Computing Platform team here.

    Technorati Tags: C#, .NET, Concurrent Collections, Collections, Multi-Threading, Little Wonders, BlackRabbitCoder, James Michael Hare

    Read the article

  • How to debug a .bash_profile

    - by Blankman
    I was updating my .bash_profile, and unfortunately I made a few updates and now I am getting:

        env: bash: No such file or directory
        env: bash: No such file or directory
        env: bash: No such file or directory
        env: bash: No such file or directory
        env: bash: No such file or directory
        -bash: tar: command not found
        -bash: grep: command not found
        -bash: cat: command not found
        -bash: find: command not found
        -bash: dirname: command not found
        -bash: /preexec.sh.lib: No such file or directory
        -bash: preexec_install: command not found
        -bash: sed: command not found
        -bash: git: command not found

    My bash_profile actually pulls in other .sh files (sources them), so I am not exactly sure which modification may have caused this. Now if I even try to do a listing of files, I get:

        > ls
        -bash: ls: command not found
        -bash: sed: command not found
        -bash: git: command not found

    Any tips on how to trace the source of the error, and how to be able to use the terminal for basic things like listing files etc.?
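    The symptoms above point at a clobbered PATH. A minimal recovery-and-tracing sketch (assuming standard binary locations; adjust for your system):

        # restore a sane PATH for the current session only
        export PATH=/usr/local/bin:/usr/bin:/bin:/usr/sbin:/sbin

        # then replay the profile with tracing to see which sourced file breaks it
        bash -x ~/.bash_profile 2>&1 | less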

    Read the article

  • Windows Azure Evolution - Web Sites (aka Antares) Part 1

    - by Shaun
    This is the 3rd post of my Windows Azure Evolution series, focusing on the new features and enhancements which came along with the Windows Azure Platform Upgrade June 2012, announced at the MEET Windows Azure event on 7th June. In the first post I introduced the new preview developer portal and how it works for the existing features such as cloud services, storage and SQL databases. In the second one I talked about the Windows Azure .NET SDK 1.7 on the latest Visual Studio 2012 RC on Windows 8. From this one I will begin to introduce some new features. Now let's have a look at the first of them, Windows Azure Web Sites.

    Overview

    Windows Azure Web Sites (WAWS), also known as Antares, was a new feature still in preview stage in this upgrade. It allows people to quickly and easily deploy websites to a highly scalable cloud environment, using the languages and open source apps of their choice, deploying with FTP, Git or TFS. It can also be integrated easily with Windows Azure services like SQL Database, Caching, CDN and Storage.

    After reading its introduction we may have a question: since we can deploy a website from both a cloud service web role and a web site, what's the difference between them? So, let's have a quick comparison.

                            CLOUD SERVICE                            WEB SITE
        OS                  Windows Server                           Windows Server
        Virtualization      Windows Azure Virtual Machine            Windows Azure Virtual Machine
        Host                IIS                                      IIS
        Platform            ASP.NET WebForm, ASP.NET MVC, WCF        ASP.NET WebForm, ASP.NET MVC, PHP
        Language            C#, VB.NET                               C#, VB.NET, PHP
        Database            SQL Database                             SQL Database, MySQL
        Architecture        Multi-layered, background worker,        Simple website with backend database.
                            message queuing, etc.
        VS Project          Windows Azure Cloud Service              ASP.NET Web Form, ASP.NET MVC, etc.
        Out-of-box Gallery  (none)                                   Drupal, DotNetNuke, WordPress, etc.
        Deployment          Package upload, Visual Studio publish    FTP, Git, TFS, WebMatrix
        Compute Mode        Dedicated VM                             Shared across VMs, Dedicated VM
        Scale               Scale up, scale out                      Scale up, scale out

    As you can see, there are many differences between the cloud service and the web site, but the main point is that the cloud service focuses on complex, multi-layered web applications. For example, if you want to build a website with a frontend layer, middle business layer and data access layer, with some background worker process connected through a message queue, then you had better use a cloud service, since it provides full control of your code and application. But if you just want to build a personal blog or a business portal, then you can use the web site. Since the web site has many galleries, you can create them even without any coding and configuration. David Pallmann has an awesome figure that explains the trade-offs between the cloud service, web site and virtual machine.

    Create a Personal Blog in Web Site from Gallery

    As I mentioned above, one of the big features in WAWS is building a website from an existing gallery, which means we don't need to code and configure anything. What we need to do is open the Windows Azure developer portal, click the NEW button, and select WEB SITE and FROM GALLERY. In the window that pops up there are many websites we can choose to use. For example, for a personal blog there are Orchard CMS and WordPress; for CMS there are DotNetNuke, Drupal 7 and mojoPortal. Let's select WordPress and click the next button.

    The next step is to configure the web site. We will need to specify the DNS name and select the subscription and region. Since WordPress uses MySQL as its backend database, we also need to create a MySQL database as well.
    Windows Azure Web Sites utilize ClearDB to host the MySQL databases; you cannot create a MySQL database directly from the SQL Databases section. Finally, since we selected to create a new MySQL database, we need to specify the database name and region in the last step. We also need to accept ClearDB's terms. Then the Windows Azure platform will download the WordPress code and deploy the MySQL database and website, and it will be ready to use. Select the website and click the BROWSE button, and the WordPress administration page will be shown. After configuring WordPress, here is my personal blog on the cloud. It took me no more than 10 minutes to establish, without any coding.

    Monitor, Configure, Scale and Linked Resources

    Let's click into the website I had just created in the portal and have a look at what we can do. In the website details page there are five sections.

    - Dashboard: The overall information about this website, such as the basic usage status, public URL, compute mode, FTP address, subscription, and links where we can specify the deployment credentials, TFS and Git publish settings, etc.

    - Monitor: Status information such as CPU usage, memory usage, errors, etc. We can add more metrics by clicking the ADD METRICS button at the bottom as well.

    - Configure: Here we can set the configuration of our website, such as the .NET and PHP runtime versions, diagnostics settings, application settings and the IIS default documents.

    - Scale: This is something interesting. In WAWS there are two compute modes, also called web site modes. One is "shared", which means our website will be shared with other web sites in a group of Windows Azure virtual machines. Each web site has its own process (w3wp.exe) with some sandbox technology to isolate it from others. When we need to scale out our web site in shared mode, we actually increase the worker process count. Hence in shared mode we cannot specify the virtual machine size, since the machines are shared across all web sites. This is a little bit different from the scaling mode of the cloud service (hosted service web role and worker role). The other mode is called "dedicated", which means our web site will use the whole Windows Azure virtual machine. This is the same hosting behavior as a cloud service web role: in a web role the application is deployed on the virtual machines we specified, and all of them are used only by us. In web sites dedicated mode, it's the same. In this mode, when we scale out our web site we use more virtual machines, each of which will only host our own website, and we can specify the virtual machine size. In the developer portal we can select which mode we are using from the scale section. In shared mode we can only specify the instance count, but in dedicated mode we can specify the instance size as well as the instance count.

    - Linked Resources: The MySQL database created along with the creation of our WordPress web site is a linked resource. We can add more linked resources in this section.

    Pricing

    For the web site itself, since this feature is in its preview period, if you are using shared mode you get up to 10 web sites for free. If you are using dedicated mode, the price is that of the virtual machines you are using. For example, if you are using dedicated mode and configured two medium-size virtual machines, you will pay $230.40 per month. If there is a SQL Database linked to your web site, it will be charged separately based on the Pay-As-You-Go price.
    For example, a 1GB web edition database costs $9.99 per month, and bandwidth is charged as well: for example, 10GB of outbound data transfer costs $1.20 per month. For more information about the pricing, please have a look at the Windows Azure pricing page.

    Summary

    Windows Azure Web Sites gives us an easier and quicker way to create, develop and deploy websites to the Windows Azure platform. Compared with the cloud service web role, WAWS has many out-of-box galleries we can use directly. So if you just want to build a blog, CMS or business portal, you don't need to learn ASP.NET, you don't need to learn how to configure DotNetNuke, and you don't need to learn how to prepare PHP and MySQL. By using a WAWS gallery you can establish a website within 10 minutes without writing a single line of code.

    But in some cases we do need to code ourselves. We may need to tweak the layout of our pages, or we may have a traditional ASP.NET or PHP web application that needs to be migrated to the cloud. Besides the gallery, WAWS also provides many features to download and upload code. It provides integration with version control services such as TFS and Git, and it provides deployment through FTP and Web Deploy as well. In the next post I will demonstrate how to use WebMatrix to download and modify the website, and how to use TFS and Git to deploy automatically once our code changes are committed.

    Hope this helps,
    Shaun

    All documents and related graphics and code are provided "AS IS" without warranty of any kind. Copyright © Shaun Ziyan Xu. This work is licensed under the Creative Commons License.

    Read the article

  • Meet the New Windows Azure

    - by ScottGu
    Today we are releasing a major set of improvements to Windows Azure. Below is a short summary of just a few of them:

    New Admin Portal and Command Line Tools

    Today's release comes with a new Windows Azure portal that will enable you to manage all features and services offered on Windows Azure in a seamless, integrated way. It is very fast and fluid, supports filtering and sorting (making it much easier to use for large deployments), works on all browsers, and offers a lot of great new features – including built-in VM, Web site, Storage, and Cloud Service monitoring support.

    The new portal is built on top of a REST-based management API within Windows Azure – and everything you can do through the portal can also be programmed directly against this Web API. We are also releasing command-line tools today (which, like the portal, call the REST management APIs) to make it even easier to script and automate your administration tasks. We are offering both a PowerShell (for Windows) and Bash (for Mac and Linux) set of tools to download. Like our SDKs, the code for these tools is hosted on GitHub under an Apache 2 license.

    Virtual Machines

    Windows Azure now supports the ability to deploy and run durable VMs in the cloud. You can easily create these VMs using a new Image Gallery built into the new Windows Azure Portal, or alternatively upload and run your own custom-built VHD images. Virtual Machines are durable (meaning anything you install within them persists across reboots) and you can use any OS with them. Our built-in image gallery includes both Windows Server images (including the new Windows Server 2012 RC) as well as Linux images (including Ubuntu, CentOS, and SUSE distributions). Once you create a VM instance you can easily Terminal Server or SSH into it in order to configure and customize the VM however you want (and optionally capture your own image snapshot of it to use when creating new VM instances). This provides you with the flexibility to run pretty much any workload within Windows Azure.

    The new Windows Azure Portal provides a rich set of management features for Virtual Machines – including the ability to monitor and track resource utilization within them. Our new Virtual Machine support also enables the ability to easily attach multiple data disks to VMs (which you can then mount and format as drives). You can optionally enable geo-replication support on these – which will cause Windows Azure to continuously replicate your storage to a secondary data center at least 400 miles away from your primary data center as a backup.

    We use the same VHD format that is supported with Windows virtualization today (and which we've released as an open spec), which enables you to easily migrate existing workloads you might already have virtualized into Windows Azure. We also make it easy to download VHDs from Windows Azure, which provides the flexibility to easily migrate cloud-based VM workloads to an on-premises environment. All you need to do is download the VHD file and boot it up locally – no import/export steps required.

    Web Sites

    Windows Azure now supports the ability to quickly and easily deploy ASP.NET, Node.js and PHP web sites to a highly scalable cloud environment that allows you to start small (and for free) and then scale up as your traffic grows.
    You can create a new web site in Azure and have it ready to deploy to in under 10 seconds. The new Windows Azure Portal provides built-in administration support for web sites – including the ability to monitor and track resource utilization in real time.

    You can deploy to web sites in seconds using FTP, Git, TFS and Web Deploy. We are also releasing tooling updates today for both Visual Studio and WebMatrix that enable developers to seamlessly deploy ASP.NET applications to this new offering. The VS and WebMatrix publishing support includes the ability to deploy SQL databases as part of web site deployment – as well as the ability to incrementally update database schema with a later deployment.

    You can integrate web application publishing with source control by selecting the "Set up TFS publishing" or "Set up Git publishing" links on a web site's dashboard. Doing so will enable integration with our new TFS online service (which enables a full TFS workflow – including elastic build and testing support), or create a Git repository that you can reference as a remote and push deployments to. Once you push a deployment using TFS or Git, the deployments tab will keep track of the deployments you make, and enable you to select an older (or newer) deployment and quickly redeploy your site to that snapshot of the code. This provides a very powerful DevOps workflow experience.

    Windows Azure now allows you to deploy up to 10 web sites into a free, shared/multi-tenant hosting environment (where a site you deploy will be one of multiple sites running on a shared set of server resources). This provides an easy way to get started on projects at no cost. You can then optionally upgrade your sites to run in a "reserved mode" that isolates them so that you are the only customer within a virtual machine, and you can elastically scale the amount of resources your sites use – allowing you to increase your reserved instance capacity as your traffic scales.

    Windows Azure automatically handles load balancing traffic across VM instances, and you get the same, super fast deployment options (FTP, Git, TFS and Web Deploy) regardless of how many reserved instances you use. With Windows Azure you pay for compute capacity on a per-hour basis – which allows you to scale your resources up and down to match only what you need.

    Cloud Services and Distributed Caching

    Windows Azure also supports the ability to build cloud services that support rich multi-tier architectures, automated application management, and scale to extremely large deployments. Previously we referred to this capability as "hosted services" – with this week's release we are now referring to it as "cloud services". We are also enabling a bunch of new features with them.

    Distributed Cache

    One of the really cool new features being enabled with cloud services is a new distributed cache capability that enables you to set up and use a low-latency, in-memory distributed cache within your applications. This cache is isolated for use just by your applications, and does not have any throttling limits. The cache can dynamically grow and shrink elastically (without you having to redeploy your app or make code changes), and supports the full richness of the AppFabric Cache Server API (including regions, high availability, notifications, local cache and more).
    In addition to supporting the AppFabric Cache Server API, it also now supports the Memcached protocol – allowing you to point code written against Memcached at it (no code changes required). The new distributed cache can be set up to run in one of two ways:

    1) Using a co-located approach. In this option you allocate a percentage of memory in your existing web and worker roles to be used by the cache, and the cache joins that memory into one large distributed cache. Any data put into the cache by one role instance can be accessed by other role instances in your application – regardless of whether the cached data is stored on it or another role. The big benefit of the "co-located" option is that it is free (you don't have to pay anything to enable it) and it allows you to use what might otherwise have been unused memory within your application VMs.

    2) Alternatively, you can add "cache worker roles" to your cloud service that are used solely for caching. These will also be joined into one large distributed cache ring that other roles within your application can access. You can use these roles to cache tens or hundreds of GBs of data in memory very effectively – and the cache can be elastically increased or decreased at runtime within your application.

    New SDKs and Tooling Support

    We have updated all of the Windows Azure SDKs with today's release to include new features and capabilities. Our SDKs are now available for multiple languages, and all of the source in them is published under an Apache 2 license and maintained in GitHub repositories. The .NET SDK for Azure has in particular seen a bunch of great improvements with today's release, and now includes tooling support for both VS 2010 and the VS 2012 RC. We are also now shipping Windows, Mac and Linux SDK downloads for languages that are offered on all of these systems – allowing developers to develop Windows Azure applications using any development operating system.

    Much, Much More

    The above is just a short list of some of the improvements shipping in either preview or final form today – there is a LOT more in today's release. This includes new Virtual Private Networking capabilities, new Service Bus runtime and tooling support, the public preview of the new Azure Media Services, new data centers, significantly upgraded network and storage hardware, SQL Reporting Services, new identity features, support within 40+ new countries and territories, and much, much more.

    You can learn more about Windows Azure and sign up to try it for free at http://windowsazure.com. You can also watch a live keynote I'm giving at 1pm June 7th (later today) where I'll walk through all of the new features. We will be opening up the new features I discussed above for public usage a few hours after the keynote concludes. We are really excited to see the great applications you build with them.

    Hope this helps,
    Scott

    Read the article

  • iCloud stuff stops working while connected to OpenVPN

    - by Taco Bob
    I have a fairly simple OpenVPN setup on an OpenVZ VPS with Ubuntu 11.10. The client is the Viscosity client on Mac OS X 10.8.2, and after some testing, we can rule out the client as being part of the problem. Everything has been working fine except for Apple's iCloud services. Web surfing, email, FTP, NNTP, and Skype are all working as expected. It's ONLY the iCloud services that cease to function. If I connect to the VPN, iCloud stuff stops working: I no longer get anything in Messages, Calendar items don't get updated, and Notifications stop working. If I disconnect, the iCloud stuff all starts working. Connect again, iCloud stops working.

    Here's the server.conf:

        status openvpn-status.log
        log /var/log/openvpn.log
        verb 4
        port 1194
        proto udp
        dev tun
        ca /etc/openvpn/ca.crt
        cert /etc/openvpn/server.crt
        key /etc/openvpn/server.key
        dh /etc/openvpn/dh1024.pem
        server 10.9.8.0 255.255.255.0
        ifconfig-pool-persist ipp.txt
        push "redirect-gateway def1"
        push "dhcp-option DNS 10.9.8.1"
        keepalive 10 120
        duplicate-cn
        cipher BF-CBC
        comp-lzo
        user nobody
        group nogroup
        persist-key
        persist-tun
        tun-mtu 1500
        mssfix 1400

    I'm using iptables in a script, and it's also fairly simplistic:

        iptables -F
        iptables -t nat -F
        iptables -t mangle -F
        iptables -A FORWARD -i tun0 -o venet0 -j ACCEPT
        iptables -A FORWARD -i venet0 -o tun0 -j ACCEPT
        iptables -A INPUT -p tcp --dport 22 -j ACCEPT
        iptables -A INPUT -p tcp --dport 1194 -j ACCEPT
        iptables -A INPUT -p udp --dport 1194 -j ACCEPT
        iptables -t nat -A POSTROUTING -s 10.9.8.0/24 -j SNAT --to-source <server's public ip>
        echo 1 > /proc/sys/net/ipv4/ip_forward

    I tried forwarding ports as well, with no success:

        iptables -A FORWARD -p tcp -d 10.9.8.0/24 --dport 5222:5230 -j ACCEPT
        iptables -t nat -A PREROUTING -p tcp --dport 5222:5230 -j DNAT --to-destination 10.9.8.6

    I am also sometimes behind a double-NAT situation that I have no control over: Client -> work VPN -> my OpenVPN box -> Internet, or Client -> AirPort Express -> ISP (which is doing NAT) -> my OpenVPN box -> Internet. Those two situations are just a fact of life where I am, and I cannot change them. I do have full control over my client and the OpenVPN server.

    I am completely out of ideas. I have posted a similar query at the OpenVPN forums, but it hasn't posted yet and seems to be in their moderation queue still. I tried the freenode IRC channels, but nobody is awake, so here I am. I have Googled extensively for this, and can find nothing that is related. Help me get iCloud stuff working again! (I tried Server Fault, where it was closed as off-topic. I'm trying here and the Unix site as well, because it's a more general audience that might know more about OpenVPN based on the number of questions I see asked about it.)

    EDIT: I have also tried upgrading to Version 2.3-beta1-debian0 – the issue persists. I removed all iptables rules except for the ones that flush, left the rule iptables -t nat -A POSTROUTING -s 10.9.8.0/24 -j SNAT --to-source (server ip), and added iptables -A FORWARD -m state --state RELATED,ESTABLISHED -j ACCEPT. Still, nothing works. I can see traffic in tcpdump on the server if I watch the tunnel:

        20:03:48.702835 IP nk11p01st-courier105-bz.push.apple.com.5223 > 10.9.8.6.60772: Flags [F.], seq 2635, ack 1218, win 76, options [nop,nop,TS val 914984811 ecr 745921298], length 0
        20:03:48.911244 IP 10.9.8.6.60772 > nk11p01st-courier105-bz.push.apple.com.5223: Flags [R], seq 3621143451, win 0, length 0

    But still, no push messages/notifications are ever delivered. :/
    EDIT: Further testing indicates that it might actually be the client after all.

    Read the article

  • Best solution for a team home server

    - by aliasbody
    I created a home server with Ubuntu 12.04 Server (using an old netbook with an Atom CPU and 512MB of RAM). The idea is for it to be used by a small team (ten people at most) who will have constant SSH access to the main projects, will be able to add features with Git, and will also each have their own directory (with a VirtualHost configured) for their personal projects. Everything is configured and running, but my question is: what is the best arrangement here for everyone to work? Is it to put them all in the http group, so they all have access as normal users to the /var/www folder (which also contains GitWeb and Drupal)? Or would it be to create a new user named after each project, where only those with the password could work on it (configured with VirtualHost)? Note: the idea is to have one person directly responsible for the server (since he is the one hosting it), plus two more people who will have root access from their homes in order to configure anything remotely, plus anyone else who joins the group without any root access – just the access necessary to do their personal work and use Git.
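    A minimal sketch of the shared-group option described above (the group and user names are placeholders; the question's "http" group would work the same way):

        # create a shared group and add each team member to it
        sudo addgroup webdev
        sudo usermod -aG webdev alice

        # give the group write access to the web root, and make new files inherit the group
        sudo chgrp -R webdev /var/www
        sudo chmod -R g+w /var/www
        sudo chmod g+s /var/www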

    Read the article

  • Identify my terminal session that started a particular process

    - by Sam
    I'm using GNOME on Ubuntu. I often have 8–20 terminal sessions open, and in some of them I have su'd to a different user. The specific problem that caused me to write this query happens when using git status, but it is a more general issue. git status will tell me I have an untracked file .foo.java.swp. This means that in one of my terminal sessions I have vi open on foo.java. I need a script or tool that would tell me in which terminal session that vi is running. I can do a "ps aux | grep vi" to pretty easily find the pid of the particular vi. It would be nice if the tool highlighted the terminal on my task bar in some way. Thanks. -Sam
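    For the lookup half of this, ps can already report the controlling terminal of a process; a minimal sketch (the pid shown is hypothetical):

        # find the vi editing foo.java, then print its controlling terminal (e.g. pts/3)
        pgrep -a vi | grep foo.java
        ps -o tty= -p 12345

    Each terminal tab corresponds to one of those pts devices, which the `tty` command prints when run inside it.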

    Read the article

  • Jumping around to work on different features when you get stuck, is it a source of project failures?

    - by codecompleting
    On personal projects (or at work), if you get stuck on a problem, or are waiting to figure out a solution to it, and you jump to another section of your code, don't you think that's a good way to end up with an application that is buggy – or worse, never gets completed? Assuming you are not using git and committing each feature to a specific branch, things can get out of hand, since you have 3 different features you are working on and unresolved issues in each. So when you get down to work, you get stressed out because you have these hanging issues and half-baked code lingering about. What's the best way to avoid this problem (if you have it)? I'm guessing using something like git and creating a branch per feature is the safest way to avoid this bad habit. Any other suggestions?
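    For what it's worth, the branch-per-feature idea mentioned above is cheap to try; a minimal sketch (branch names are hypothetical):

        git checkout -b feature-login     # isolate one feature on its own branch
        # ...work until stuck, then park the half-done state
        git stash                         # or: git commit -m "WIP"
        git checkout master
        git checkout -b feature-search    # switch features with a clean working tree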

    Read the article

  • Is there a version control system that can show changes to a specific method or function?

    - by chesles
    Sometimes it would be nice to be able to say something like:

        (git|svn|hg|etc) diff Foo.c:main
        (git|svn|hg|etc) log Foo.c:main

    to see the changes made to a specific function within a source file since the last commit, or the complete history of changes. My question is two-fold: Does something exist that does this? Would such a tool be practical? It would have to do some simple parsing of the code at each revision in order to compare different versions of the function; would the overhead be too much for it to be efficient?
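    As it happens, Git can get close to this: since version 1.8.4, git log accepts a -L option that traces the history of a function (located by a heuristic regex) or a line range within a single file:

        # full history of main() in Foo.c
        git log -L :main:Foo.c

        # or trace an explicit line range instead
        git log -L 10,40:Foo.c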

    Read the article

  • Release Note for 3/30/2012

    We have been pretty busy working on a new UI for CodePlex; I will have a preview post coming shortly. Here are the notes from today's release:

    - Updated the source code tab to show Author and Committer for Git (thanks to Brad Wilson for reporting)
    - Fixed an issue where pagination did not work correctly in topic view
    - Fixed an issue where additional comments on a given line of code would get overridden for Git projects

    Have ideas on how to improve CodePlex? Visit our ideas page! Vote for your favorite ideas or submit a new one. Got Twitter? Follow us and keep apprised of the latest releases and service status at @codeplex.

    Read the article

  • Why am I getting a " instance has no attribute '__getitem__' " error?

    - by Kevin Yusko
    Here's the code:

        class BinaryTree:
            def __init__(self,rootObj):
                self.key = rootObj
                self.left = None
                self.right = None
                root = [self.key, self.left, self.right]

            def getRootVal(root):
                return root[0]

            def setRootVal(newVal):
                root[0] = newVal

            def getLeftChild(root):
                return root[1]

            def getRightChild(root):
                return root[2]

            def insertLeft(self,newNode):
                if self.left == None:
                    self.left = BinaryTree(newNode)
                else:
                    t = BinaryTree(newNode)
                    t.left = self.left
                    self.left = t

            def insertRight(self,newNode):
                if self.right == None:
                    self.right = BinaryTree(newNode)
                else:
                    t = BinaryTree(newNode)
                    t.right = self.right
                    self.right = t

        def buildParseTree(fpexp):
            fplist = fpexp.split()
            pStack = Stack()
            eTree = BinaryTree('')
            pStack.push(eTree)
            currentTree = eTree
            for i in fplist:
                if i == '(':
                    currentTree.insertLeft('')
                    pStack.push(currentTree)
                    currentTree = currentTree.getLeftChild()
                elif i not in '+-*/)':
                    currentTree.setRootVal(eval(i))
                    parent = pStack.pop()
                    currentTree = parent
                elif i in '+-*/':
                    currentTree.setRootVal(i)
                    currentTree.insertRight('')
                    pStack.push(currentTree)
                    currentTree = currentTree.getRightChild()
                elif i == ')':
                    currentTree = pStack.pop()
                else:
                    print "error: I don't recognize " + i
            return eTree

        def postorder(tree):
            if tree != None:
                postorder(tree.getLeftChild())
                postorder(tree.getRightChild())
                print tree.getRootVal()

        def preorder(self):
            print self.key
            if self.left:
                self.left.preorder()
            if self.right:
                self.right.preorder()

        def inorder(tree):
            if tree != None:
                inorder(tree.getLeftChild())
                print tree.getRootVal()
                inorder(tree.getRightChild())

        class Stack:
            def __init__(self):
                self.items = []

            def isEmpty(self):
                return self.items == []

            def push(self, item):
                self.items.append(item)

            def pop(self):
                return self.items.pop()

            def peek(self):
                return self.items[len(self.items)-1]

            def size(self):
                return len(self.items)

        def main():
            parseData = raw_input( "Please enter the problem you wished parsed.(NOTE: problem must have parenthesis to seperate each binary grouping and must be spaced out.) " )
            tree = buildParseTree(parseData)
            print( "The post order is: ", + postorder(tree))
            print( "The post order is: ", + postorder(tree))
            print( "The post order is: ", + preorder(tree))
            print( "The post order is: ", + inorder(tree))

        main()

    And here is the error:

        Please enter the problem you wished parsed.(NOTE: problem must have parenthesis to seperate each binary grouping and must be spaced out.) ( 1 + 2 )
        Traceback (most recent call last):
          File "C:\Users\Kevin\Desktop\Python Stuff\Assignment 11\parseTree.py", line 108, in <module>
            main()
          File "C:\Users\Kevin\Desktop\Python Stuff\Assignment 11\parseTree.py", line 102, in main
            tree = buildParseTree(parseData)
          File "C:\Users\Kevin\Desktop\Python Stuff\Assignment 11\parseTree.py", line 46, in buildParseTree
            currentTree = currentTree.getLeftChild()
          File "C:\Users\Kevin\Desktop\Python Stuff\Assignment 11\parseTree.py", line 15, in getLeftChild
            return root[1]
        AttributeError: BinaryTree instance has no attribute '__getitem__'

    Read the article

  • Permission denied: .hg\store\lock

    - by harpo
    This smells like a Server Fault question, yet there are many similar questions here. Your call.

    I'm setting up Mercurial over IIS6, and thanks to a number of detailed blogs, it's working fine — almost. I can browse and clone the repositories fine, but this is what happens when I try to push:

        D:\sample2>hg push
        pushing to http://localhost/hg/sample2
        searching for changes
        abort: HTTP Error 500: Permission denied: .hg\store\lock

    First of all, there is no such file or folder. Second, the App Pool's logon has total permission on the repository's parent directory, with these inherited ad infinitum. The repository is located on another logical drive (on the same machine), and if I push to it directly, that also works:

        D:\sample2>hg push e:\hg\sample2
        pushing to e:\hg\sample2
        searching for changes
        adding changesets
        adding manifests
        adding file changes
        added 1 changesets with 1 changes to 1 files

    If I change the password in my hgrc, the message indicates a failed authorization, so I believe that's working. I've been fighting this for a couple of days, so any leads would be helpful. Thanks!

    Read the article

  • Strings in array are no longer strings after jQuery.each()

    - by Álvaro G. Vicario
    I'm pretty confused by the behaviour of arrays of strings when I loop over them with the jQuery.each() method. Apparently, the strings become jQuery objects inside the callback function. However, I cannot use the this.get() method to obtain the original string; doing so triggers a "this.get is not a function" error message. I suppose the reason is that it's not a DOM node. I can do $(this).get() but it makes my string become an array (from "foo" to ["f", "o", "o"]). How can I cast it back to a string? I need to get a variable of String type because I pass it to other functions that compare the values among them. I enclose a self-contained test case (requires Firebug's console):

        <!DOCTYPE html>
        <html>
        <head><title></title>
        <script type="text/javascript" src="http://ajax.googleapis.com/ajax/libs/jquery/1.4.2/jquery.min.js"></script>
        <script type="text/javascript"><!--
        $(function(){
            var foo = [];
            var $foo = $(foo);

            foo.push("987");
            $foo.push("987");
            foo.push("654");
            $foo.push("654");

            $.each(foo, function(i){
                console.log("foo[%d]: object=%o; value=%s; string=%o", i, this, this, $(this).get()); // this.get() does not exist
            });

            $foo.each(function(i){
                console.log("$foo[%d]: object=%o; value=%s; string=%o", i, this, this, $(this).get()); // this.get() does not exist
            });
        });
        //--></script>
        </head>
        <body>
        </body>
        </html>
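    Inside the $.each() callback, this is the string boxed into a String object (not a jQuery object), so converting back is a one-liner; a minimal sketch:

        $.each(foo, function(i){
            var s = String(this);               // unbox back to a primitive string
            console.log(typeof s, s === "987"); // "string", true on the first pass
        });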

    Read the article

  • posting nutch data into a BASIC auth secured Solr instance

    - by mlathe
    Hi. I've secured a Solr instance using BASIC auth, along the lines of what is shown here: http://blog.comtaste.com/2009/02/securing_your_solr_server_on_t.html

    Now I'm trying to update my batch processes to push data into the authenticated instance. The ones using curl are easy, but I also have a Nutch crawl that uses the "solrindex" command to push data into Solr. When I do that I get this error:

        2010-02-22 12:09:28,226 INFO  auth.AuthChallengeProcessor - basic authentication scheme selected
        2010-02-22 12:09:28,229 INFO  httpclient.HttpMethodDirector - No credentials available for BASIC 'Tomcat Manager Application'@ninja:5500
        2010-02-22 12:09:28,236 WARN  mapred.LocalJobRunner - job_local_0001
        org.apache.solr.common.SolrException: Unauthorized

        Unauthorized

        request: http://ninja:5500/solr/foo/update?wt=javabin&version=2.2
            at org.apache.solr.client.solrj.impl.CommonsHttpSolrServer.request(CommonsHttpSolrServer.java:343)
            at org.apache.solr.client.solrj.impl.CommonsHttpSolrServer.request(CommonsHttpSolrServer.java:183)
            at org.apache.solr.client.solrj.request.UpdateRequest.process(UpdateRequest.java:217)
            at org.apache.solr.client.solrj.SolrServer.add(SolrServer.java:48)
            at org.apache.nutch.indexer.solr.SolrWriter.close(SolrWriter.java:69)
            at org.apache.nutch.indexer.IndexerOutputFormat$1.close(IndexerOutputFormat.java:48)
            at org.apache.hadoop.mapred.ReduceTask.run(ReduceTask.java:447)
            at org.apache.hadoop.mapred.LocalJobRunner$Job.run(LocalJobRunner.java:170)
        2010-02-22 12:09:29,134 FATAL solr.SolrIndexer - SolrIndexer: java.io.IOException: Job failed!
            at org.apache.hadoop.mapred.JobClient.runJob(JobClient.java:1232)
            at org.apache.nutch.indexer.solr.SolrIndexer.indexSolr(SolrIndexer.java:73)
            at org.apache.nutch.indexer.solr.SolrIndexer.run(SolrIndexer.java:95)
            at org.apache.hadoop.util.ToolRunner.run(ToolRunner.java:65)
            at org.apache.nutch.indexer.solr.SolrIndexer.main(SolrIndexer.java:104)

    Apparently Nutch uses SolrJ to push the content, and after going through the SolrJ code, it's clear that it uses commons-httpclient without providing a way to set the credentials. Here are my questions:

    - Is this possible to do at all, i.e. push from Nutch into a BASIC-auth-secured Solr instance?
    - Is it possible to tell commons-httpclient about a credential without explicitly doing an _httpclient.getState().setCredentials(...)?
    - Any other ideas?

    One idea I had was to use an IP-filtering valve for just the "update" Solr web services. That would mean you could only make an update call from certain nodes. Thanks
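    On the IP-filtering idea: Tomcat ships a RemoteAddrValve that can do this, though it guards a whole context rather than just the update handler; a minimal sketch (the context path and allowed addresses are hypothetical):

        <Context path="/solr">
          <!-- only hosts whose address matches the pattern may reach this webapp -->
          <Valve className="org.apache.catalina.valves.RemoteAddrValve"
                 allow="127\.0\.0\.1,192\.168\.1\..*" />
        </Context>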

    Read the article

  • iPhone:Tabbar hides when pushing from TableView to UIViewController

    - by user187532
    Hello all. I have four tab bar items in a tab bar at the bottom of the view where I have the TableView. I am adding the tab bar and items programmatically (see the code below), not through Interface Builder. Clicking on any of the first three tab bar items shows the data in the same TableView itself, but clicking on the last tab bar item pushes to another UIViewController and shows the data there. The problem is that when I push to that view controller, the main tab bar gets removed.

    Tab bar code:

        UITabBar *tabBar = [[UITabBar alloc] initWithFrame:CGRectMake(0, 376, 320, 44)];
        item1 = [[UITabBarItem alloc] initWithTitle:@"First Tab" image:[UIImage imageNamed:@"first.png"] tag:0];
        item2 = [[UITabBarItem alloc] initWithTitle:@"Second Tab" image:[UIImage imageNamed:@"second.png"] tag:1];
        item3 = [[UITabBarItem alloc] initWithTitle:@"Third Tab" image:[UIImage imageNamed:@"third.png"] tag:2];
        item4 = [[UITabBarItem alloc] initWithTitle:@"Fourth Tab" image:[UIImage imageNamed:@"fourth.png"] tag:3];
        item5 = [[UITabBarItem alloc] initWithTitle:@"Fifth Tab" image:[UIImage imageNamed:@"fifth.png"] tag:4];

        NSArray *items = [NSArray arrayWithObjects:item1, item2, item3, item4, item5, nil];
        [tabBar setItems:items animated:NO];
        [tabBar setSelectedItem:item1];
        tabBar.delegate = self;
        [self.view addSubview:tabBar];

    Push controller code, clicking from the last tab bar item:

        myViewController = [[MyViewController alloc] initWithNibName:@"MyView" bundle:nil];
        myViewController.hidesBottomBarWhenPushed = NO;
        [[self navigationController] pushViewController:myViewController animated:NO];

    I do not see the bottom tab bar when I push from my current TableView to myViewController; I see a full-screen view there instead. I want to always see the bottom tab bar, whichever tab item is clicked. What might be the problem here? Could someone who has come across this issue please share your suggestions? Thank you.

    Read the article

  • How to make Stack.Pop threadsafe

    - by user260197
    I am using the BlockingQueue code posted in this question, but realized I needed to use a Stack instead of a Queue given how my program runs. I converted it to use a Stack and renamed the class as needed. For performance I removed locking in Push, since my producer code is single-threaded. My problem is: how can a thread working on the (now) thread-safe Stack know when it is empty? Even if I add another thread-safe wrapper around Count that locks on the underlying collection like Push and Pop do, I still run into the race condition that accessing Count and then Pop is not atomic. Possible solutions as I see them (which is preferred, and am I missing any that would work better?):

    1. Consumer threads catch the InvalidOperationException thrown by Pop().
    2. Pop() returns nullptr when _stack->Count == 0; however, C++/CLI does not have the default() operator à la C#.
    3. Pop() returns a boolean and uses an output parameter to return the popped element.

    Here is the code I am using right now:

        generic <typename T>
        public ref class ThreadSafeStack {
        public:
          ThreadSafeStack() {
            _stack = gcnew Collections::Generic::Stack<T>();
          }

          void Push(T element) {
            _stack->Push(element);
          }

          T Pop(void) {
            System::Threading::Monitor::Enter(_stack);
            try {
              return _stack->Pop();
            }
            finally {
              System::Threading::Monitor::Exit(_stack);
            }
          }

          property int Count {
            int get(void) {
              System::Threading::Monitor::Enter(_stack);
              try {
                return _stack->Count;
              }
              finally {
                System::Threading::Monitor::Exit(_stack);
              }
            }
          }

        private:
          Collections::Generic::Stack<T> ^_stack;
        };
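
    The third option is the shape the .NET 4 concurrent collections themselves settled on (ConcurrentStack<T>::TryPop returns a success flag and hands back the element through an out parameter), and it removes the race because the emptiness check and the pop happen under one lock. A minimal sketch of that pattern in Python terms, hypothetical and not from the original post:

        import threading

        class ThreadSafeStack:
            """Sketch of the TryPop pattern: test-empty and pop are one atomic step."""

            def __init__(self):
                self._items = []
                self._lock = threading.Lock()

            def push(self, element):
                with self._lock:
                    self._items.append(element)

            def try_pop(self):
                """Return (True, element) on success, (False, None) if empty.

                The emptiness test and the pop run under the same lock, so no
                other consumer can drain the stack between the two steps.
                """
                with self._lock:
                    if not self._items:
                        return False, None
                    return True, self._items.pop()

        # Consumer side: no separate Count check, hence no race window.
        stack = ThreadSafeStack()
        stack.push(42)
        ok, value = stack.try_pop()
        if ok:
            print(value)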

    Read the article

  • Help me understand Inorder Traversal without using recursion

    - by vito
    I am able to understand preorder traversal without using recursion, but I'm having a hard time with inorder traversal. I just don't seem to get it, perhaps because I haven't understood the inner workings of recursion. This is what I've tried so far:

        def traverseInorder(node):
            lifo = Lifo()
            lifo.push(node)
            while True:
                if node is None:
                    break
                if node.left is not None:
                    lifo.push(node.left)
                    node = node.left
                    continue
                prev = node
                while True:
                    if node is None:
                        break
                    print node.value
                    prev = node
                    node = lifo.pop()
                node = prev
                if node.right is not None:
                    lifo.push(node.right)
                    node = node.right
                else:
                    break

    The inner while-loop just doesn't feel right. Also, some of the elements are getting printed twice; maybe I can solve this by checking whether a node has been printed before, but that requires another variable, which, again, doesn't feel right. Where am I going wrong?

    I haven't tried postorder traversal, but I guess it's similar and I will face the same conceptual blockage there, too. Thanks for your time!

    P.S.: Definitions of Lifo and Node:

        class Node:
            def __init__(self, value, left=None, right=None):
                self.value = value
                self.left = left
                self.right = right

        class Lifo:
            def __init__(self):
                self.lifo = ()

            def push(self, data):
                self.lifo = (data, self.lifo)

            def pop(self):
                if len(self.lifo) == 0:
                    return None
                ret, self.lifo = self.lifo
                return ret
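
    For contrast, the canonical stack-based inorder traversal needs no inner loop and prints nothing twice: descend left while pushing, and only after a node has been popped and visited do you move to its right child. A minimal sketch in plain Python, reusing the asker's Node class:

        def traverse_inorder(node):
            stack = []
            while stack or node is not None:
                if node is not None:
                    # Defer this node until its left subtree is done.
                    stack.append(node)
                    node = node.left
                else:
                    # Left subtree finished: visit the node, then go right.
                    node = stack.pop()
                    print(node.value)
                    node = node.right

        # Example tree:   2
        #                / \
        #               1   3
        root = Node(2, Node(1), Node(3))
        traverse_inorder(root)  # prints 1, 2, 3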

    Read the article

  • Intel Assembly Programming

    - by Kay
    class MyString {
          char buf[100];
          int len;

          boolean append(MyString str) {
            int k;
            if (this.len + str.len > 100) {
              for (k = 0; k < str.len; k++) {
                this.buf[this.len] = str.buf[k];
                this.len++;
              }
              return false;
            }
            return true;
          }
        }

    Does the above translate to:

        start:
          push ebp              ; save calling ebp
          mov  ebp, esp         ; setup new ebp
          push esi              ;
          push ebx              ;
          mov  esi, [ebp + 8]   ; esi = 'this'
          mov  ebx, [ebp + 14]  ; ebx = str
          mov  ecx, 0           ; k = 0
          mov  edx, [esi + 200] ; edx = this.len
        append:
          cmp  edx + [ebx + 200], 100
          jle  ret_true         ; if (this.len + str.len) <= 100 then ret_true
          cmp  ecx, edx
          jge  ret_false        ; if k >= str.len then ret_false
          mov  [esi + edx], [ebx + 2*ecx] ; this.buf[this.len] = str.buf[k]
          inc  edx              ; this.len++
        aux:
          inc  ecx              ; k++
          jmp  append
        ret_true:
          pop  ebx              ; restore ebx
          pop  esi              ; restore esi
          pop  ebp              ; restore ebp
          ret  true
        ret_false:
          pop  ebx              ; restore ebx
          pop  esi              ; restore esi
          pop  ebp              ; restore ebp
          ret  false

    My greatest difficulty here is figuring out what to push onto the stack and the math for pointers. NOTE: I'm not allowed to use global variables, and I must assume 32-bit ints, 16-bit chars and 8-bit booleans.

    Read the article

  • Is there some advantage to filling a stack with nils and interpreting the "top" as the last non-nil value?

    - by dwilbank
    While working on a rubymonk exercise, I am asked to implement a stack with a hard size limit. It should return nil if I try to push too many values, or if I try to pop an empty stack. My solution is below, followed by their solution. Mine passes every test I can give it in my IDE, while it fails rubymonk's test. But that isn't my question. The question is: why did they choose to fill the stack with nils instead of letting it shrink and grow like it does in my version? It just makes their code more complex.

    Here's my solution:

        class Stack
          def initialize(size)
            @max = size
            @store = Array.new
          end

          def pop
            empty? ? nil : @store.pop
          end

          def push(element)
            return nil if full?
            @store.push(element)
          end

          def size
            @store.size
          end

          def look
            @store.last
          end

          private

          def full?
            @store.size == @max
          end

          def empty?
            @store.size == 0
          end
        end

    and here is the accepted answer:

        class Stack
          def initialize(size)
            @size = size
            @store = Array.new(@size)
            @top = -1
          end

          def pop
            if empty?
              nil
            else
              popped = @store[@top]
              @store[@top] = nil
              @top = @top.pred
              popped
            end
          end

          def push(element)
            if full? or element.nil?
              nil
            else
              @top = @top.succ
              @store[@top] = element
              self
            end
          end

          def size
            @size
          end

          def look
            @store[@top]
          end

          private

          def full?
            @top == (@size - 1)
          end

          def empty?
            @top == -1
          end
        end
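
    One concrete property the nil-filled version does buy, beyond taste: the buffer is allocated once up front and never reallocated, and writing nil over a popped slot drops the last reference to that object so it can be garbage collected. A hedged sketch of the same fixed-buffer pattern in Python terms, for comparison with the grow-and-shrink approach:

        class BoundedStack:
            """Fixed-size buffer plus a top index, mirroring the nil-filling style."""

            def __init__(self, size):
                self._size = size
                self._store = [None] * size  # pre-allocated; never grows or shrinks
                self._top = -1

            def push(self, element):
                if self._top == self._size - 1 or element is None:
                    return None  # full (or pushing the sentinel value itself)
                self._top += 1
                self._store[self._top] = element
                return self

            def pop(self):
                if self._top == -1:
                    return None  # empty
                popped = self._store[self._top]
                self._store[self._top] = None  # release the reference for GC
                self._top -= 1
                return popped

        s = BoundedStack(2)
        s.push(1); s.push(2)
        print(s.push(3))  # None: the hard size limit was hit
        print(s.pop())    # 2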

    Read the article

  • Intel IA-32 Assembly

    - by Kay
    I'm having a bit of difficulty converting the following Java code into Intel IA-32 assembly:

        class Person {
          char name[8];
          int age;

          void printName() { ... }

          static void printAdults(Person[] list) {
            for (int k = 0; k < 100; k++) {
              if (list[k].age >= 18) {
                list[k].printName();
              }
            }
          }
        }

    My attempt is:

        Person:
          push ebp            ; save caller's ebp
          mov  ebp, esp       ; set up new ebp
          push esi            ; esi will hold name
          push ebx            ; ebx will hold list
          push ecx            ; ecx will hold k
        init:
          mov  esi, [ebp + 8]
          mov  ebx, [ebp + 12]
          mov  ecx, 0         ; k = 0
        forloop:
          cmp  ecx, 100
          jge  end            ; if k >= 100 then break forloop
          cmp  [ebx + 4 * ecx], 18
          jl   auxloop        ; if list[k].age < 18 then go to auxloop
          jmp  printName
        printName:
        auxloop:
          inc  ecx
          jmp  forloop
        end:
          pop  ecx
          pop  ebx
          pop  esi
          pop  ebp

    Is my code correct? NOTE: I'm not allowed to use global variables.

    Read the article

  • How can I create a new Person object correctly in Javascript?

    - by TimDog
    I'm still struggling with this concept. I have two different Person objects, very simply:

        ;Person1 = (function() {
            function P(fname, lname) {
                P.FirstName = fname;
                P.LastName = lname;
                return P;
            }
            P.FirstName = '';
            P.LastName = '';
            var prName = 'private';
            P.showPrivate = function() { alert(prName); };
            return P;
        })();

        ;Person2 = (function() {
            var prName = 'private';
            this.FirstName = '';
            this.LastName = '';
            this.showPrivate = function() { alert(prName); };
            return function(fname, lname) {
                this.FirstName = fname;
                this.LastName = lname;
            }
        })();

    And let's say I invoke them like this:

        var s = new Array();

        // Person1
        s.push(new Person1("sal", "smith"));
        s.push(new Person1("bill", "wonk"));
        alert(s[0].FirstName);
        alert(s[1].FirstName);
        s[1].showPrivate();

        // Person2
        s.push(new Person2("sal", "smith"));
        s.push(new Person2("bill", "wonk"));
        alert(s[2].FirstName);
        alert(s[3].FirstName);
        s[3].showPrivate();

    The Person1 set alerts "bill" twice, then alerts "private" once -- so it recognizes the showPrivate function, but the local FirstName variable gets overwritten. The second Person2 set alerts "sal", then "bill", but it fails when the showPrivate function is called. The new keyword here works as I'd expect, but showPrivate (which I thought was a publicly exposed function within the closure) is apparently not public. I want to get my object to have distinct copies of all local variables and also expose public methods -- I've been studying closures quite a bit, but I'm still confused on this one. Thanks for your help.
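
    The Person1 failure mode, state stored on the constructor function itself rather than on each instance so every "instance" reads the same slot, has a close analogue in Python's class attributes, which may make the sharing easier to see. A hedged sketch, not from the original post:

        class Person:
            # Class attribute: ONE slot shared by every instance, like
            # Person1 assigning P.FirstName on the constructor P itself.
            shared_first_name = ''

            def __init__(self, first_name):
                # Instance attribute: one slot per object, like Person2
                # assigning this.FirstName inside the constructor body.
                self.first_name = first_name
                Person.shared_first_name = first_name  # clobbers the shared slot

        sal = Person("sal")
        bill = Person("bill")

        print(sal.first_name, bill.first_name)                # sal bill
        print(sal.shared_first_name, bill.shared_first_name)  # bill bill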

    Read the article

  • Formvalidator from the iframe.

    - by basit74
    Hello. I have the following form-validator function in my document:

        function formValidator(formid) {
            var form = cic.$(formid);
            if (!form) return ('');
            var errors = [];
            var len = form.elements.length;
            for (var elementIdx = 0; elementIdx < len; elementIdx++) {
                var element = form.elements[elementIdx];
                if (!element && !element.getAttribute('validationtype')) return ('');
                switch (element.getAttribute('validationtype')) {
                    case 'text':
                        if (cic.getValue(element).strip() == "")
                            errors.push(element.getAttribute('validationmsg'));
                        break;
                    case 'email':
                        if (!cic.isEmail(cic.getValue(element)))
                            errors.push(element.getAttribute('validationmsg'));
                        break;
                    case 'numeric':
                        if (isNaN(cic.getValue(element).replace(',', '.')))
                            errors.push(element.getAttribute('validationmsg'));
                        break;
                    case 'confirm':
                        if (cic.getValue(cic.$(element.getAttribute('sourcefield'))) !== cic.getValue(element))
                            errors.push(element.getAttribute('validationmsg'));
                        break;
                }
            }
            return (errors.length > 0) ? '<li>' + errors.uniq().join("<li>") : '';
        }

    It works fine. Now I have an iframe in my document, and that iframe contains the form to validate. What would be the best practice for changing this function so that it can validate the document's forms and the iframe's forms simultaneously? Thanks.

    Read the article

  • Pushing a local mercurial repository to a remote server or cloning at server from local

    - by Samaursa
    I have a local repository that I have now decided to push to a remote server (for example, I have a host that allows Mercurial repositories, and I am also trying to push to bitbucket). The repository has a lot of files and is a little more than 200 MB. Locally, I am able to clone the repository without problems.

    Now I have a lot of changes in this repository, and I have wasted a couple of days trying to figure out how to get the remote server to clone my repository. I cannot get hg serve to work outside of the LAN; I have tried everything. So instead, I created a new, empty repository at each remote location (both at the host and at bitbucket) and am now pushing the complete local repository to them. So far this has been unsuccessful: the push operation gets stuck on "searching for changes" and gives me no other useful output. I have let it run for about an hour with no change.

    Now my questions. First, what am I doing wrong as far as hg serve is concerned? I can access it locally but not remotely (through DynDns - I have configured it properly and the router forwards the ports correctly), which would let the server clone the repository the first time, after which I would push to it. Second, assuming the clone-at-server approach does not work (for example, if I were to push my current repository to bitbucket), is creating an empty repository at the server and then pushing a local repository to that new remote repository OK? Is that the source of the "searching for changes" problem? Any help in this regard would be greatly appreciated.
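
    On the second question: pushing a full local history into a freshly created empty remote repository is the normal Mercurial workflow, not a workaround; it is exactly how a first publish to bitbucket is usually done. A sketch of the typical commands, with a placeholder URL:

        cd /path/to/local/repo

        # Push every changeset the empty remote lacks (i.e. the whole history):
        hg push https://bitbucket.org/youruser/yourrepo

        # Optionally record the remote once in .hg/hgrc so a bare "hg push" works:
        #   [paths]
        #   default = https://bitbucket.org/youruser/yourrepo

        # For a ~200 MB history, "searching for changes" may simply precede a long
        # upload; -v (or --debug) shows more of what is actually happening:
        hg push -v https://bitbucket.org/youruser/yourrepo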

    Read the article

  • Building Active Record Conditions in an array - private method 'scan' called error

    - by Nick
    Hi, I'm attempting to build a set of conditions dynamically using an array, as suggested in the first answer here: http://stackoverflow.com/questions/1658990/one-or-more-params-in-model-find-conditions-with-ruby-on-rails. However, I seem to be doing something incorrectly, and I'm not sure whether what I'm trying is fundamentally unsound or I'm simply botching my syntax. I'm simplifying down to a single condition here to illustrate the issue, as I've tried to build a simple proof of concept before layering on the 5 different condition styles I'm contending with.

    This works:

        excluded.push 12
        excluded.push 30
        @allsites = Site.all(:conditions => ["id not in (?)", excluded])

    This results in a private method 'scan' called error:

        excluded.push 12
        excluded.push 30
        conditionsSet << ["id not in (?)", excluded]
        @allsites = Site.all(:conditions => conditionsSet)

    Thanks for any advice. I wasn't sure if the proper thing was to post this as a follow-up to the related question and answers noted at the top, since I've got a problem, not an answer. If there is a better way to post this relative to the existing post, please let me know.
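
    The likely culprit: :conditions expects a flat array whose first element is the SQL string, but << pushes the whole ["id not in (?)", excluded] array into conditionsSet as a nested element, so Active Record ends up treating an Array as the SQL statement and calling String methods (such as scan) on it. The usual dynamic-conditions pattern keeps the SQL fragments and the values in separate flat lists and joins them at the end. The same shape, sketched in Python with sqlite3 placeholders (the table and filters are hypothetical):

        import sqlite3

        def build_conditions(filters):
            """filters: list of (sql_fragment, values) pairs.

            Returns one WHERE string plus a flat parameter list, keeping
            fragments and values parallel instead of nesting arrays.
            """
            fragments = [sql for sql, _ in filters]
            params = [v for _, values in filters for v in values]
            return " AND ".join(fragments), params

        conn = sqlite3.connect(":memory:")
        conn.execute("CREATE TABLE sites (id INTEGER, active INTEGER)")
        conn.executemany("INSERT INTO sites VALUES (?, ?)",
                         [(12, 1), (30, 1), (7, 1)])

        where, params = build_conditions([("id NOT IN (?, ?)", [12, 30]),
                                          ("active = ?", [1])])
        rows = conn.execute("SELECT id FROM sites WHERE " + where, params).fetchall()
        print(rows)  # [(7,)]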

    Read the article
