Search Results

Search found 2495 results on 100 pages for 'hash of hashes'.


  • Inside the Concurrent Collections: ConcurrentDictionary

    - by Simon Cooper
    Using locks to implement a thread-safe collection is rather like using a sledgehammer - unsubtle, easy to understand, and tends to make any other tool redundant. Unlike the previous two collections I looked at, ConcurrentStack and ConcurrentQueue, ConcurrentDictionary uses locks quite heavily. However, it is careful to wield locks only where necessary to ensure that concurrency is maximised. This will, by necessity, be a higher-level look than my other posts in this series, as there is quite a lot of code and logic in ConcurrentDictionary. Therefore, I do recommend that you have ConcurrentDictionary open in a decompiler to have a look at all the details that I skip over. The problem with locks There are several things to bear in mind when using locks, as encapsulated by the lock keyword in C# and the System.Threading.Monitor class in .NET (if you're unsure as to what lock does in C#, I briefly covered it in my first post in the series): Locks block threads The most obvious problem is that threads waiting on a lock can't do any work at all. No preparatory work, no 'optimistic' work like in ConcurrentQueue and ConcurrentStack, nothing. They sit there, waiting to be unblocked. This is bad if you're trying to maximise concurrency. Locks are slow Whereas most of the methods on the Interlocked class can be compiled down to a single CPU instruction, ensuring atomicity at the hardware level, taking out a lock requires some heavy lifting by the CLR and the operating system. There's quite a bit of work required to take out a lock, block other threads, and wake them up again. If locks are used heavily, this impacts performance. Deadlocks When using locks there's always the possibility of a deadlock - two threads, each holding a lock, each trying to acquire the other's lock. Fortunately, this can be avoided with careful programming and structured lock-taking, as we'll see. So, it's important to minimise where locks are used to maximise the concurrency and performance of the collection. Implementation As you might expect, ConcurrentDictionary is similar in basic implementation to the non-concurrent Dictionary, which I studied in a previous post. I'll be using some concepts introduced there, so I recommend you have a quick read of it. So, if you were implementing a thread-safe dictionary, what would you do? The naive implementation is to simply have a single lock around all methods accessing the dictionary. This would work, but doesn't allow much concurrency. Fortunately, the bucketing used by Dictionary allows a simple but effective improvement to this - one lock per bucket. This allows different threads modifying different buckets to do so in parallel. Any thread making changes to the contents of a bucket takes the lock for that bucket, ensuring those changes are thread-safe. The method that maps each bucket to a lock is the GetBucketAndLockNo method: private void GetBucketAndLockNo( int hashcode, out int bucketNo, out int lockNo, int bucketCount) { // the bucket number is the hashcode (without the initial sign bit) // modulo the number of buckets bucketNo = (hashcode & 0x7fffffff) % bucketCount; // and the lock number is the bucket number modulo the number of locks lockNo = bucketNo % m_locks.Length; } However, this does require some changes to how the buckets are implemented. The 'implicit' linked list within a single backing array used by the non-concurrent Dictionary adds a dependency between separate buckets, as every bucket uses the same backing array.
    Instead, ConcurrentDictionary uses a strict linked list on each bucket: This ensures that each bucket is entirely separate from all other buckets; adding or removing an item from a bucket is independent of any changes to other buckets. Modifying the dictionary All the operations on the dictionary follow the same basic pattern: void AlterBucket(TKey key, ...) { int bucketNo, lockNo; 1: GetBucketAndLockNo( key.GetHashCode(), out bucketNo, out lockNo, m_buckets.Length); 2: lock (m_locks[lockNo]) { 3: Node headNode = m_buckets[bucketNo]; 4: Mutate the node linked list as appropriate } } For example, when adding another entry to the dictionary, you would iterate through the linked list to check whether the key exists already, and add the new entry as the head node. When removing items, you would find the entry to remove (if it exists), and remove the node from the linked list. Adding, updating, and removing items all follow this pattern. Performance issues There is a problem we have to address at this point. If the number of buckets in the dictionary is fixed in the constructor, then the performance will degrade from O(1) to O(n) when a large number of items are added to the dictionary. As more and more items get added to the linked lists in each bucket, the lookup operations will spend most of their time traversing a linear linked list. To fix this, the buckets array has to be resized once the number of items in each bucket has gone over a certain limit. (In ConcurrentDictionary this limit is when the size of the largest bucket is greater than the number of buckets for each lock. This check is done at the end of the TryAddInternal method.) Resizing the bucket array and re-hashing everything affects every bucket in the collection. Therefore, this operation needs to take out every lock in the collection. Taking out multiple locks at once inevitably summons the spectre of the deadlock; two threads each holding a lock, each trying to acquire the other's lock. How can we eliminate this? Simple - ensure that threads never try to 'swap' locks in this fashion. When taking out multiple locks, always take them out in the same order, and always take out all the locks you need before starting to release them. In ConcurrentDictionary, this is controlled by the AcquireLocks, AcquireAllLocks and ReleaseLocks methods. Locks are always taken out and released in the order they are in the m_locks array, and locks are all released right at the end of the method in a finally block. At this point, it's worth pointing out that the locks array is never re-assigned, even when the buckets array is increased in size. The number of locks is fixed in the constructor by the concurrencyLevel parameter. This simplifies programming the locks; you don't have to check if the locks array has changed or been re-assigned before taking out a lock object. And you can be sure that when a thread takes out a lock, another thread isn't going to re-assign the lock array. This would create a new series of lock objects, thus allowing another thread to ignore the existing locks (and any threads controlling them), breaking thread-safety. Consequences of growing the array Just because we're using locks doesn't mean that race conditions aren't a problem. We can see this by looking at the GrowTable method.
    The operation of this method can be boiled down to: private void GrowTable(Node[] buckets) { try { 1: Acquire first lock in the locks array // this causes any other thread trying to take out // all the locks to block because the first lock in the array // is always the one taken out first // check if another thread has already resized the buckets array // while we were waiting to acquire the first lock 2: if (buckets != m_buckets) return; 3: Calculate the new size of the backing array 4: Node[] array = new Node[size]; 5: Acquire all the remaining locks 6: Re-hash the contents of the existing buckets into array 7: m_buckets = array; } finally { 8: Release all locks } } As you can see, there's already a check for a race condition at step 2, for the case when the GrowTable method is called twice in quick succession on two separate threads. One will successfully resize the buckets array (blocking the second in the meantime); when the second thread is unblocked, it'll see that the array has already been resized and exit without doing anything. There is another case we need to consider; looking back at the AlterBucket method above, consider the following situation: Thread 1 calls AlterBucket; step 1 is executed to get the bucket and lock numbers. Thread 2 calls GrowTable and executes steps 1-5; thread 1 is blocked when it tries to take out the lock in step 2. Thread 2 re-hashes everything, re-assigns the buckets array, and releases all the locks (steps 6-8). Thread 1 is unblocked and continues executing, but the calculated bucket and lock numbers are no longer valid. Between calculating the correct bucket and lock number and taking out the lock, another thread has changed where everything is. Not exactly thread-safe. Well, a similar problem was solved in ConcurrentStack and ConcurrentQueue by storing a local copy of the state, doing the necessary calculations, then checking if that state is still valid. We can use a similar idea here: void AlterBucket(TKey key, ...) { while (true) { Node[] buckets = m_buckets; int bucketNo, lockNo; GetBucketAndLockNo( key.GetHashCode(), out bucketNo, out lockNo, buckets.Length); lock (m_locks[lockNo]) { // if the state has changed, go back to the start if (buckets != m_buckets) continue; Node headNode = m_buckets[bucketNo]; Mutate the node linked list as appropriate } break; } } TryGetValue and GetEnumerator And so, finally, we get onto TryGetValue and GetEnumerator. I've left these to the end because, well, they don't actually use any locks. How can this be? Whenever you change a bucket, you need to take out the corresponding lock, yes? Indeed you do. However, it is important to note that TryGetValue and GetEnumerator don't actually change anything. Just as immutable objects are, by definition, thread-safe, read-only operations don't need to take out a lock because they don't change anything. All lockless methods can happily iterate through the buckets and linked lists without worrying about locking anything. However, this does put restrictions on how the other methods operate. Because there could be another thread in the middle of reading the dictionary at any time (even if a lock is taken out), the dictionary has to be in a valid state at all times. Every change to state has to be made visible to other threads in a single atomic operation (all relevant variables are marked volatile to help with this).
    This restriction ensures that whatever the reading threads are doing, they never read the dictionary in an invalid state (eg items that should be in the collection temporarily removed from the linked list, or reading a node that has had its key and value removed before the node itself has been removed from the linked list). Fortunately, all the operations needed to change the dictionary can be done in that way. Bucket resizes are made visible when the new array is assigned back to the m_buckets variable. Any additions or modifications to a node are done by creating a new node, then splicing it into the existing list using a single variable assignment. Node removals are simply done by re-assigning the node's m_next pointer. Because the dictionary can be changed by another thread during execution of the lockless methods, the GetEnumerator method is liable to return dirty reads - changes made to the dictionary after GetEnumerator was called, but before the enumeration got to that point in the dictionary. It's worth listing at this point which methods are lockless, and which take out all the locks in the dictionary to ensure they get a consistent view of the dictionary: Lockless: TryGetValue, GetEnumerator, the indexer getter, and ContainsKey. Takes out every lock (lockfull?): Count, IsEmpty, Keys, Values, CopyTo, and ToArray. Concurrent principles That covers the overall implementation of ConcurrentDictionary. I haven't even begun to scratch the surface of this sophisticated collection. That I leave to you. However, we've looked at enough to be able to extract some useful principles for concurrent programming: Partitioning When using locks, the work is partitioned into independent chunks, each with its own lock. Each partition can then be modified concurrently with other partitions. Ordered lock-taking When a method does need to control the entire collection, locks are taken and released in a fixed order to prevent deadlocks (a minimal sketch of this idea follows at the end of this excerpt). Lockless reads Read operations that don't care about dirty reads don't take out any lock; the rest of the collection is implemented so that any reading thread always has a consistent view of the collection. That leads us to the final collection in this little series - ConcurrentBag. Lacking a non-concurrent analogue, it is quite different to any other collection in the class libraries. Prepare your thinking hats!
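    To make the ordered lock-taking principle above concrete, here is a minimal C# sketch of the idea, with simplified names. It is not the actual ConcurrentDictionary source, just an illustration of acquiring and releasing a fixed array of locks in a fixed order.

    using System.Threading;

    class StripedLocks
    {
        // One lock object per stripe; the array itself is never re-assigned.
        private readonly object[] _locks;

        public StripedLocks(int concurrencyLevel)
        {
            _locks = new object[concurrencyLevel];
            for (int i = 0; i < concurrencyLevel; i++)
                _locks[i] = new object();
        }

        // Take every lock, always in array order, so two threads can never
        // end up waiting on each other's locks (no lock "swapping", no deadlock).
        public void AcquireAll(ref int locksAcquired)
        {
            for (int i = 0; i < _locks.Length; i++)
            {
                Monitor.Enter(_locks[i]);
                locksAcquired++;
            }
        }

        // Release only the locks that were actually taken, again in array order.
        public void ReleaseAll(int locksAcquired)
        {
            for (int i = 0; i < locksAcquired; i++)
                Monitor.Exit(_locks[i]);
        }
    }

    A "global" operation such as resizing the bucket array would then be wrapped as: int acquired = 0; try { stripedLocks.AcquireAll(ref acquired); /* resize and re-hash */ } finally { stripedLocks.ReleaseAll(acquired); }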

    Read the article

  • New Replication, Optimizer and High Availability features in MySQL 5.6.5!

    - by Rob Young
    As the Product Manager for the MySQL database it is always great to announce when the MySQL Engineering team delivers another great product release. As a field DBA and developer it is even better when that release contains improvements and innovation that I know will help those currently using MySQL for apps that range from modest intranet sites to the most highly trafficked web sites on the web. That said, it is my pleasure to take my hat off to MySQL Engineering for today's release of the MySQL 5.6.5 Development Milestone Release ("DMR"). The new highlighted features in MySQL 5.6.5 are discussed here: New Self-Healing Replication Clusters: The 5.6.5 DMR improves MySQL Replication by adding Global Transaction Ids and automated utilities for self-healing Replication clusters. Prior to 5.6.5 this has been somewhat of a pain point for MySQL users, with most developing custom solutions or looking to costly, complex third-party solutions for these capabilities. With 5.6.5 these shackles are all but removed by a solution that is included with the GPL version of the database and supporting GPL tools. You can learn all about the details of the great, problem solving Replication features in MySQL 5.6 in Mat Keep's Developer Zone article. New Replication Administration and Failover Utilities: As mentioned above, the new Replication features, Global Transaction Ids specifically, are now supported by a set of automated GPL utilities that leverage the new GTIDs to provide administration and manual or auto failover to the most up to date slave (that is the default, but user configurable if needed) in the event of a master failure. The new utilities, along with links to Engineering related blogs, are discussed in detail in the DevZone article noted above. Better Query Optimization and Throughput: The MySQL Optimizer team continues to amaze with the latest round of improvements in 5.6.5. Along with much refactoring of the legacy code base, the Optimizer team has improved complex query optimization and throughput by adding these functional improvements: Subquery Optimizations - Subqueries are now included in the Optimizer path for runtime optimization. Better throughput of nested queries enables application developers to simplify and consolidate multiple queries and result sets into a single unit of work. Optimizer now uses CURRENT_TIMESTAMP as default for DATETIME columns - For simplification, this eliminates the need for application developers to assign this value when a column of this type is blank by default. Optimizations for Range based queries - Optimizer now uses ready statistics vs Index based scans for queries with multiple range values. Optimizations for queries using filesort and ORDER BY - the criteria/decision on execution method is now made at the optimization stage rather than the parsing stage. Print EXPLAIN in JSON format for hierarchical readability and Enterprise tool consumption. You can learn the details about these new features as well as all of the Optimizer based improvements in MySQL 5.6 by following the Optimizer team blog. You can download and try the MySQL 5.6.5 DMR here (look under "Development Releases"). Please let us know what you think! The new HA utilities for Replication Administration and Failover are available as part of the MySQL Workbench Community Edition, which you can download here. Also New in MySQL Labs: As has become our tradition when announcing DMRs we also like to provide "Early Access" development features to the MySQL Community via the MySQL Labs.
    Today is no exception, as we are also releasing the following to Labs for you to download, try and let us know your thoughts on where we need to improve: InnoDB Online Operations: MySQL 5.6 now provides Online ADD Index, FK Drop and Online Column RENAME. These operations are non-blocking and will continue to evolve in future DMRs. You can learn the grainy details by following John Russell's blog. InnoDB data access via Memcached API ("NotOnlySQL") - Improved refresh of an earlier feature release: Similar to Cluster 7.2, MySQL 5.6 provides direct NotOnlySQL access to InnoDB data via the familiar Memcached API. This provides the ultimate in flexibility for developers who need fast, simple key/value access and complex query support commingled within their applications. Improved Transactional Performance and Scale: The InnoDB Engineering team has once again under-promised and over-delivered in the area of improved performance and scale. These improvements are also included in the aggregated Spring 2012 labs release: InnoDB CPU cache performance improvements for modern, multi-core/CPU systems show great promise, with internal tests showing a 2x throughput improvement for read-only activity and a 6x throughput improvement for SELECT range; Read/Write benchmarks are in progress. More details on the above are available here. You can download all of the above in an aggregated "InnoDB 2012 Spring Labs Release" binary from the MySQL Labs. You can also learn more about these improvements and about related fixes to mysys mutex and hash sort by checking out the InnoDB team blog. MySQL 5.6.5 is another installment in what we believe will be the best release of the MySQL database ever. It also serves as a shining example of how the MySQL Engineering team at Oracle leads in MySQL innovation. You can get the overall Oracle message on the MySQL 5.6.5 DMR and Early Access labs features here. As always, thanks for your continued support of MySQL, the #1 open source database on the planet!

    Read the article

  • Kendo UI Mobile with Knockout for Master-Detail Views

    - by Steve Michelotti
    Lately I’ve been playing with Kendo UI Mobile to build iPhone apps. It’s similar to jQuery Mobile in that they are both HTML5/JavaScript based frameworks for building mobile apps. The primary thing that drew me to investigate Kendo UI was its innate ability to adaptively render a native looking app based on detecting the device it’s currently running on. In other words, it will render to look like a native iPhone app if it’s running on an iPhone and it will render to look like a native Droid app if it’s running on a Droid. This is in contrast to jQuery Mobile which looks the same on all devices and, therefore, it can never quite look native for whatever device it’s running on. My first impressions of Kendo UI were great. Using HTML5 data-* attributes to define “roles” for UI elements is easy, the rendering looked great, and the basic navigation was simple and intuitive. However, I ran into major confusion when trying to figure out how to “correctly” build master-detail views. Since I was already very familiar with KnockoutJS, I set out to use that framework in conjunction with Kendo UI Mobile to build the following simple scenario: I wanted to have a simple “Task Manager” application where my first screen just showed a list of tasks like this:   Then clicking on a specific task would navigate to a detail screen that would show all details of the specific task that was selected:   Basic navigation between views in Kendo UI is simple. The href of an <a> tag just needs to specify a hash tag followed by the ID of the view to navigate to as shown in this jsFiddle (notice the href of the <a> tag matches the id of the second view):   Direct link to jsFiddle: here. That is all well and good but the problem I encountered was: how to pass data between the views? Specifically, I need the detail view to display all the details of whichever task was selected. If I were doing this with my typical technique with KnockoutJS, I know exactly what I would do. First I would create a view model that had my collection of tasks and a property for the currently selected task like this: function ViewModel() { var self = this; self.tasks = ko.observableArray(data); self.selectedTask = ko.observable(null); } Then I would bind my list of tasks to the unordered list - I would attach a “click” handler to each item (each <li> in the unordered list) so that it would select the “selectedTask” for the view model. The problem I found is this approach simply wouldn’t work for Kendo UI Mobile. It completely ignored the click handlers that I was trying to attach to the <a> tags – it just wanted to look at the href (at least that’s what I observed). But if I can’t intercept this, then *how* can I pass data or any context to the next view? The only thing I was able to find in the Kendo documentation is that you can pass query string arguments on the view name you’re specifying in the href. This enabled me to do the following: Specify the task ID in each href – something like this: <a href=”#taskDetail?id=3”></a> Attach an “init method” (via the “data-show” attribute on the details view) that runs whenever the view is activated Inside this “init method”, grab the task ID passed from the query string to look up the item from my view model’s list of tasks in order to set the selected task I was able to get all that working with about 20 lines of JavaScript as shown in this jsFiddle.
    If you click on the Results tab, you can navigate between views and see that the detail screen is correctly binding to the selected item:   Direct link to jsFiddle: here.   With all that being done, I was very happy to get it working with the behavior I wanted. However, I have no idea if that is the “correct” way to do it or if there is a “better” way to do it. I know that Kendo UI comes with its own data binding framework but my preference is to be able to use (the well-documented) KnockoutJS since I’m already familiar with that framework rather than having to learn yet another new framework. While I think my solution above is probably “acceptable”, there are still a couple of things that bug me about it. First, it seems odd that I have to loop through my items to *find* my selected item based on the ID that was passed on the query string - normally, with Knockout I can just refer directly to my selected item from where it was used. Second, it didn’t feel exactly right that I had to rely on the “data-show” method of the details view to set my context – normally with Knockout, I could just attach a click handler to the <a> tag that was actually clicked by the user in order to set the “selected item.” I’m not sure if I’m being too picky. I know there are many people that have *way* more expertise in Kendo UI compared to me – I’d be curious to know if there are better ways to achieve the same results.

    Read the article

  • IndyTechFest Recap

    - by Johnm
    The sun had yet to rise above the horizon on Saturday, May 22nd, and I was traveling toward the location of the 2010 IndyTechFest. In my freshly awakened, pre-coffee state I reflected on the months that preceded this day and how quickly they slipped away. The big day had finally come, and the morning dew glistened with a unique brightness. What is this all about? For those who are unfamiliar with IndyTechFest, it is a regional conference held in Indianapolis and hosted by the Indianapolis .NET Developers Association (IndyNDA) and the Indianapolis Professional Association for SQL Server (IndyPASS). The event presents multiple tracks and sessions covering subjects such as Business Intelligence, Database Administration, .NET Development, SharePoint Development, and Windows Mobile Development, as well as non-Microsoft topics such as Lean and MongoDB. This year's event was the third hosting of IndyTechFest. No man is an island No event such as IndyTechFest is executed by a single person. My fellow co-founders, with their highly complementary skill sets and philanthropy, make the process very enjoyable. Our amazing volunteers and their aid were indispensable. The generous financial support of our sponsors made the event and fabulous prizes possible. The spectacular line-up of speakers came from near and far to donate their time and knowledge. Our beloved attendees sacrificed the first sunny Saturday in weeks to expand their skill sets and network with their peers. We are deeply appreciative. Challenges in preparation The preparation of any event comes with challenges. It is these challenges that make the process of planning an event so interesting. This year's largest challenge was the location of the event. In the past two years IndyTechFest was held at the Gene B. Glick Junior Achievement Center in Indianapolis. This facility has been the hub of the Indy technical community for many years. As the big day drew near, the facility's availability came into question due to some recent changes that had occurred with those who operated the facility. We began our search for an alternative option. Thankfully, the Marriott Indianapolis East was available, very spacious, and willing to work within the range of our budget. Within days of our event, the decision to move proved to be wise since the prior location had begun renovations to the interior. Whew! Always trust your gut. Every day it's getting better At the end of each year, we huddle together, review the evaluations and identify an area in which the event could improve. This year's big opportunity for improvement resided in the prize give-away portion at the end of the day. In the 2008 event, admittedly, this portion was rather chaotic, rushed and disorganized. This year, we broke the drawing into two sections, and each attendee received two tickets. The first ticket was a drawing for the mountain of books that were given away. The second ticket was a drawing for the big prizes, the 2 Xboxes, 3 laptops and iPad. We peppered the ticket drawings with gift card raffles and t-shirt tosses into the audience. If at first you don't succeed, try and try again Each year of IndyTechFest, we have offered a means for ad-hoc sessions or discussion groups to pop up. To our disappointment it was something that never quite took off. We have always believed that this unique type of session was valuable and wanted to figure out a way to make it work for this year.
    A special thanks to Alan Stevens, who took on and facilitated the "open space" track and made it an official success. Share with your tweety When the attendee badges were designed, we decided to place an emphasis on the attendee's Twitter account as well as the event's hash-tag (#IndyTechFest) to encourage some real-time buzz during the day. At the host table we displayed a Twitter feed for all to enjoy. It was quite successful and encouraged the use of social media. My badge was missing my Twitter account since it had recently been changed. For those who care to follow my rather sparse tweets, my address is @johnnydata. Man, this is one long blog post! All in all it was a very successful event. It is always great to see new faces and meet old friends. The planning for the 2011 IndyTechFest will kick off very soon. We have more capacity for future growth and a truck full of great ideas. Stay tuned!

    Read the article

  • Auto-Configuring SSIS Packages

    - by Davide Mauri
    SSIS Package Configurations are very useful to make packages flexible so that you can change object properties at run-time and thus make the package configurable without having to open and edit it. In a complex scenario where you have dozens of packages (even in the smallest BI project I worked on I had 50 packages), each package may have its own configuration needs. This means that each time you have to run the package you have to pass the correct Package Configuration. I usually use XML configuration files and I also force everyone that works with me to make sure that an object that is used in several packages has the same name in all packages where it is used, in order to simplify configurations usage. Connection Managers are a good example of one of those objects. For example, all the packages that need to access the Data Warehouse database must have a Connection Manager named DWH. Basically we define a set of “global” objects so that we can have one configuration file for them that can be used by all packages. If a package has some specific configuration needs, we create a specific – or “local” – XML configuration file or we set the value that needs to be configured at runtime using DTLoggedExec’s Package Parameters: http://dtloggedexec.davidemauri.it/Package%20Parameters.ashx Now, how can we improve this even more? I’d like to have a package that, when it’s run, automatically goes “somewhere”, searches for global or local configuration, loads it and applies it to itself. That’s the basic idea of Auto-Configuring Packages. The “somewhere” is a SQL Server table. In this table you’ll put the values that you want to be used at runtime by your package: The ConfigurationFilter column specifies to which package that configuration line has to be applied. A package will use that line only if the value specified in the ConfigurationFilter column is equal to its name. In the above sample, only the package named “simple-package” will use the line number two. There is an exception here: the $$Global value indicates a configuration row that has to be applied to any package. With this simple behavior it’s possible to replicate the “global” and the “local” configuration approach I’ve described before. The ConfigurationValue contains the value you want to be applied at runtime and the PackagePath contains the object to which that value will be applied. The ConfiguredValueType column defines the data type of the value and the Checksum column contains a calculated value that is simply the hash value of ConfigurationFilter plus PackagePath so that it can be used as a Primary Key to guarantee uniqueness of configuration rows. As you may have noticed the table is very similar to the table originally used by SSIS in order to put DTS Configuration into SQL Server tables: SQL Server SSIS Configuration Type: http://msdn.microsoft.com/en-us/library/ms141682.aspx Now, how does it work? It’s very easy: you just have to call DTLoggedExec with the /AC option: DTLoggedExec.exe /FILE:”mypackage.dtsx” /AC:"localhost;ssis_auto_configuration;ssiscfg.configuration" The /AC option expects a string with the following format: <database_server>;<database_name>;<table_name>; only Windows Authentication is supported. When DTLoggedExec finds an Auto-Configuration request, it injects a new connection manager in the loaded package.
    The injected connection manager is named $$DTLoggedExec_AutoConfigure and is used by the two SQL Server DTS Configurations ($$DTLoggedExec_Global and $$DTLoggedExec_Local), also injected by DTLoggedExec, to load the “local” and “global” configuration. Now, you may start to wonder why you couldn't skip all this machinery and simply always pass two XML DTS Configuration files to a package (one for the “local” and one for the “global” configuration), doing something like this: DTLoggedExec.exe /FILE:”mypackage.dtsx” /CONF:”global.dtsConfig” /CONF:”mypackage.dtsConfig” The problem is that this approach doesn’t work if you have, in one of the two configuration files, a value that has to be applied to an object that doesn’t exist in the loaded package. This situation will raise an error that will halt package execution. To solve this problem, you may want to create a configuration file for each package. Unfortunately this will make deployment and management harder, since you’ll have to deal with a great number of configuration files. The Auto-Configuration approach solves all these problems at once! We’re using it in a project where we have hundreds of packages and I can tell you that deployment of packages and their configuration for the pre-production and production environment has never been so easy! To use the Auto-Configuration option you have to download the latest DTLoggedExec release: http://dtloggedexec.codeplex.com/releases/view/62218 Feedback, as usual, is very welcome!
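    As an aside, the Checksum column described above (a hash of ConfigurationFilter plus PackagePath, used to keep configuration rows unique) could be computed with a few lines of C#. This is only a sketch of the concept - the hash algorithm and the helper name are assumptions, not the actual DTLoggedExec implementation:

    using System;
    using System.Security.Cryptography;
    using System.Text;

    static class ConfigChecksum
    {
        // Hypothetical helper: hash ConfigurationFilter + PackagePath so the
        // pair can serve as a unique key for a configuration row.
        public static string Compute(string configurationFilter, string packagePath)
        {
            using (var sha1 = SHA1.Create())
            {
                byte[] bytes = Encoding.UTF8.GetBytes(configurationFilter + packagePath);
                return BitConverter.ToString(sha1.ComputeHash(bytes)).Replace("-", "");
            }
        }
    }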

    Read the article

  • Converting LINQ to Twitter to Twitter API v1.1

    - by Joe Mayo
    Twitter recently updated their API to v1.1 (Current status: API v1.1). Naturally, LINQ to Twitter needed to be updated too. This blog post outlines the changes made to LINQ to Twitter during this conversion and highlights important features that LINQ to Twitter developers will want to know about. Overall Impact Generally speaking, Twitter API v1.1 is semantically very much the same as its predecessor. The base URL changed and so did a few resource segments, but the resources themselves are still intact. The good news is that LINQ to Twitter has always shielded the developer from this plumbing, so the entities, types, and filters didn’t change much at all. The following sections describe what did change. Authentication In Twitter API v1.0 authentication was not required for some resources, such as user timelines and search. However, that’s all changed because *all* queries must be authenticated in Twitter API v1.1. LINQ to Twitter has various types of authorizers you can use, supporting whatever OAuth options are available via Twitter. You can see the LINQ to Twitter documentation, Securing Your Applications, for more info on OAuth support. The New Search One of the larger changes to the API was Search. To be more specific, the Search entity now contains a List<Status>, named Statuses, to hold results. Additionally, any meta-data associated with the search is now in a property named SearchMetaData. The change to the Search entity and responses is the big change, but the good news is that your Search query syntax doesn’t change. Different Rate Limits The issue of rate limits itself is contentious, but this discussion is focused on the coding experience and I’ll leave the politics to those who prefer to engage in that activity. What’s important here is that both headers and resources have changed. You should review Twitter’s Rate Limit documentation to understand what the changes mean. A quick explanation is that rate limits are applied individually to each resource in 15-minute time intervals. In LINQ to Twitter these changes surface on the Help entity, via HelpType.RateLimits. The RateLimits query has a Resources filter where you can specify a comma-separated list of categories to return rate limit info for. The results materialize in the RateLimits dictionary, keyed on category. The Help entity also has a RateLimitsAuthorizationContext, holding the Access Token for the user performing queries – and to whom the rate limits apply. In addition to the new RateLimits query, there are new RateLimit headers that appear in the query response, whose HTTP header name is of the form X-Rate-Limit…, which is different from the previous header name. LINQ to Twitter surfaces these headers via the existing properties of the TwitterContext instance. For anyone who retrieved rate limit information via the Headers property of TwitterContext, you should be aware of the new header names. I haven’t done anything with Feature rate limit properties yet, but they appear to no longer be available – this will require more follow-up. Error Handling Twitter API v1.1 has a new format for Error Codes & Responses. LINQ to Twitter wraps these messages in the TwitterQueryException, which has been updated appropriately. The Message property of TwitterQueryException now reflects the Twitter error message, when available. There’s also a new ErrorCode that’s populated with the message error code.
Parameters Most parameters stayed the same, but one of interest is Include Entities (different from LINQ to Twitter data object entities). Entities are metadata hanging off tweets, that provide start/end position in the tweet and other information for mentions, urls, hash tags, and media. Entities used to not be included unless you specified you wanted them. Now, in v1.1, entities are included by default for all APIs that return a Status.  If you were always setting IncludeEntities to true, then you won’t see a change. However, be aware that you’ll now be receiving additional data in your response from Twitter, which will explain a sudden increase in bandwidth utilization. This might or might not  matter to you  depending on the requirements of your application, but you should be aware of it. Everything Else There might be small changes here and there that I haven’t mentioned, but these were the ones you should be most aware of.  Streams didn’t change, but Twitter will be deprecating username/password authentication on public streams, in favor of OAuth, so you’ll be seeing me make that change some time in the future.  Also, Twitter will continue to evolve the API and you can expect that LINQ to Twitter will change accordingly. Summary The big changes to Twitter API were Authentication, Search, Rate Limits, and Error Handling. All API calls must be authenticated. You’ll need to change your code to read Search results differently, but the query is much the same as you use now. There’s a new RateLimits API, one of the Help queries.  Also, the new error messages are integrated into TwitterQueryException. Besides these changes, I expect  most others to be small or affect a smaller percentage of developers.  You can get the latest version of LINQ to Twitter from NuGet or visit the LINQ to Twitter download page at CodePlex.com.   @JoeMayo
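    As a quick illustration of the point above that the Search query syntax stays the same while the results move into the Statuses collection, a query might look roughly like this. Treat the exact property names as approximations based on the description in this post, and check the LINQ to Twitter documentation for the authoritative API:

    // twitterCtx is assumed to be an authenticated TwitterContext.
    var searchResponse =
        (from search in twitterCtx.Search
         where search.Type == SearchType.Search &&
               search.Query == "LINQ to Twitter"
         select search)
        .SingleOrDefault();

    if (searchResponse != null && searchResponse.Statuses != null)
    {
        foreach (var status in searchResponse.Statuses)
        {
            // Each result is now a Status rather than a search-specific result type.
            Console.WriteLine(status.Text);
        }
    }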

    Read the article

  • Oracle Enterprise Manager Cloud Control 12c Release 2 (12.1.0.2) is Available Now !

    - by Anand Akela
    Oracle today announced the availability of Oracle Enterprise Manager Cloud Control 12c Release 2 (12.1.0.2). It is now available for download on OTN on ALL platforms. This is the first major release since the launch of Enterprise Manager 12c in October of 2011, and the first time an Enterprise Manager release has been available on all platforms simultaneously. This is primarily a stability release which incorporates fixes for many of the issues and much of the feedback reported by early adopters. In addition, this release contains many new features and enhancements in areas across the board. New Capabilities and Features Enhanced management capabilities for enterprise private clouds: Introduces new capabilities to allow customers to build and manage a Java Platform-as-a-Service (PaaS) cloud based on Oracle WebLogic Server. The new capabilities include guided setup of the PaaS Cloud, self-service provisioning, automatic scale out, and metering and chargeback. Enhanced lifecycle management capabilities for Oracle WebLogic Server environments: Combining in-context multiple domain, patching and configuration file synchronizations. Integrated Hardware-Software management for Oracle Exalogic Elastic Cloud through features such as rack schematics visualization and integrated monitoring of all hardware and software components. The latest management capabilities for business-critical applications include: Business Application Management: A new Business Application (BA) target type and dashboard with flexible definitions provides a logical view of an application’s business transactions, end-user experiences and the cloud infrastructure the monitored application is running on. Enhanced User Experience Reporting: Oracle Real User Experience Insight has been enhanced to provide reporting capabilities on client-side issues for applications running in the cloud and has been more tightly coupled with Oracle Business Transaction Management to help ensure that real-time user experience and transaction tracing data is provided to users in context. Several key improvements address ease of administration, reporting and extensibility for massively scalable cloud environments, including dynamic groups, self-updateable monitoring templates, bulk operations against many events, etc. New and Revised Plug-Ins: Several plug-ins have been updated as a part of this release, resulting in either new versions or revisions. Revised plug-ins contain only bug fixes, while new plug-ins incorporate both bug fixes and new functionality.
    Plug-in name and version: Enterprise Manager for Oracle Database 12.1.0.2 (revision); Enterprise Manager for Oracle Fusion Middleware 12.1.0.3 (new); Enterprise Manager for Chargeback and Capacity Planning 12.1.0.3 (new); Enterprise Manager for Oracle Fusion Applications 12.1.0.3 (new); Enterprise Manager for Oracle Virtualization 12.1.0.3 (new); Enterprise Manager for Oracle Exadata 12.1.0.3 (new); Enterprise Manager for Oracle Cloud 12.1.0.4 (new). Installation and Upgrade: All major platforms have been released simultaneously (Linux 32/64 bit, Solaris (SPARC), Solaris x86-64, IBM AIX 64-bit, and Windows x86-64 (64-bit)). Enterprise Manager 12.1.0.2 is a complete release that includes both the EM OMS and Agent versions of 12.1.0.2. Installation options available with EM 12.1.0.2: users can do a fresh install or an upgrade from EM 10.2.0.5, 11.1, or 12.1.0.1 (Bundle Patch 1 not mandatory). Upgrading to EM 12.1.0.2 from EM 12.1.0.1 is not a patch application (similar to Bundle Patch 1) but is achieved through a 1-system upgrade. Documentation: the Oracle Enterprise Manager Cloud Control Introduction Document provides a broad overview of capabilities and highlights "What's New" in EM 12.1.0.2. All updated Oracle Enterprise Manager documentation, including the Upgrade Guide, can be found on OTN. Please feel free to ask questions related to the new Oracle Enterprise Manager Cloud Control 12c Release 2 (12.1.0.2) at the Oracle Enterprise Manager Forum. You can also share your feedback on Twitter using the hash tag #em12c or on Facebook. Stay Connected: Twitter | Facebook | YouTube | LinkedIn | Newsletter

    Read the article

  • Access Control Service v2: Registering Web Identities in your Applications [concepts]

    - by Your DisplayName here!
    ACS v2 supports two fundamental types of client identities – I like to call them “enterprise identities” (WS-*) and “web identities” (Google, LiveID, OpenId in general…). I also see two different “mind sets” when it comes to application design using the above identity types: Enterprise identities – often the fact that a client can present a token from a trusted identity provider means he is a legitimate user of the application. Trust relationships and authorization details have been negotiated out of band (often on paper). Web identities – the fact that a user can authenticate with Google et al does not necessarily mean he is a legitimate (or registered) user of an application. Typically additional steps are necessary (like filling out a form, email confirmation etc). Sometimes a mixture of both approaches exists; for the sake of this post, I will focus on the web identity case. I got a number of questions about how to implement the web identity scenario, and after some conversations it turns out it is the old authentication vs. authorization problem that gets in the way. Many people use the IsAuthenticated property on IIdentity to make security decisions in their applications (or deny user=”?” in ASP.NET terms). That’s a very natural thing to do, because authentication was done inside the application and we knew exactly when the IsAuthenticated condition is true. Been there, done that. Guilty ;) The fundamental difference between these “old style” apps and federation is that authentication is not done by the application anymore. It is done by a third party service, and in the case of web identity providers, in services that are not under our control (nor do we have a formal business relationship with these providers). Now the issue is, when you switch to ACS, and someone with a Google account authenticates, indeed IsAuthenticated is true – because that’s what he is! This does not mean that he is also authorized to use the application. It just proves he was able to authenticate with Google. Now this obviously leads to confusion. How can we solve that? Easy answer: We have to deal with authentication and authorization separately. Job done ;) For many application types I see this general approach: Application uses ACS for authentication (maybe both enterprise and web identities; we focus on web identities, but you could easily have a dual approach here) Application offers to authenticate (or sign in) via web identity accounts like LiveID, Google, Facebook etc. Application also maintains a database of its “own” users. Typically you want to store additional information about the user. In such an application type it is important to have a unique identifier for your users (think the primary key of your user database). What would that be? Most web identity providers (and all the standard ACS v2 supported ones) emit a NameIdentifier claim. This is a stable ID for the client (scoped to the relying party – more on that later). Furthermore ACS emits a claim identifying the identity provider (like the original issuer concept in WIF). When you combine these two values together, you can be sure to have a unique identifier for the user, e.g.: Facebook-134952459903700\799880347 You can now check on incoming calls if the user is already registered and, if yes, swap the ACS claims with claims coming from your user database. One claim would maybe be a role like “Registered User” which can then be easily used to do authorization checks in the application.
    The WIF claims authentication manager is a perfect place to do the claims transformation. If the user is not registered, show a registration form. Maybe you can use some claims from the identity provider to pre-fill form fields (see here, where I show how to use the Facebook API to fetch additional user properties). After successful registration (which may include other mechanisms like a confirmation email), flip the bit in your database to make the web identity a registered user. This is all very theoretical. In the next post I will show some code and provide a download link for the complete sample. More on NameIdentifier Identity providers “guarantee” that the name identifier for a given user in your application will always be the same. But different applications (in the case of ACS – different ACS namespaces) will see different name identifiers. This is by design to protect the privacy of users because identical name identifiers could be used to create “profiles” of some sort for that user. In technical terms they create the name identifier approximately like this: name identifier = Hash((Provider Internal User ID) + (Relying Party Address)) Why is this important to know? Well – when you change the name of your ACS namespace, the name identifiers will change as well and you will lose your “connection” to your existing users. Oh and btw – never use any other claims (like email address or name) to form a unique ID – these can often be changed by users.
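    Until that post arrives, a rough WIF-style sketch of the claims transformation described above might look like the following. The identity provider claim type URI and the user-store lookup (LookupIsRegistered) are illustrative assumptions, not the author's sample code:

    using System.Linq;
    using Microsoft.IdentityModel.Claims;

    public class RegistrationClaimsAuthenticationManager : ClaimsAuthenticationManager
    {
        public override IClaimsPrincipal Authenticate(string resourceName, IClaimsPrincipal incomingPrincipal)
        {
            var identity = incomingPrincipal.Identity as IClaimsIdentity;
            if (identity == null || !identity.IsAuthenticated)
                return incomingPrincipal;

            // ACS emits a NameIdentifier claim plus a claim naming the identity provider.
            // The identity provider claim type below is an assumption - check the tokens
            // your namespace actually issues.
            string nameId = GetClaimValue(identity, ClaimTypes.NameIdentifier);
            string provider = GetClaimValue(identity,
                "http://schemas.microsoft.com/accesscontrolservice/2010/07/claims/identityprovider");

            // Combined, the two values form a stable, unique key for your user database,
            // e.g. "Facebook-134952459903700\799880347".
            string uniqueId = provider + "\\" + nameId;

            // LookupIsRegistered is a hypothetical call into your own user store.
            if (LookupIsRegistered(uniqueId))
            {
                identity.Claims.Add(new Claim(ClaimTypes.Role, "Registered User"));
            }

            return incomingPrincipal;
        }

        private static string GetClaimValue(IClaimsIdentity identity, string claimType)
        {
            return identity.Claims
                .Where(c => c.ClaimType == claimType)
                .Select(c => c.Value)
                .FirstOrDefault();
        }

        private static bool LookupIsRegistered(string uniqueId)
        {
            // Placeholder: query your own user table keyed on uniqueId.
            return false;
        }
    }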

    Read the article

  • How should I implement simple caches with concurrency on Redis?

    - by solublefish
    Background I have a 2-tier web service - just my app server and an RDBMS. I want to move to a pool of identical app servers behind a load balancer. I currently cache a bunch of objects in-process. I hope to move them to a shared Redis. I have a dozen or so caches of simple, small-sized business objects. For example, I have a set of Foos. Each Foo has a unique FooId and an OwnerId. One "owner" may own multiple Foos. In a traditional RDBMS this is just a table with an index on the PK FooId and one on OwnerId. I'm caching this in one process simply: Dictionary<int,Foo> _cacheFooById; Dictionary<int,HashSet<int>> _indexFooIdsByOwnerId; Reads come straight from here, and writes go here and to the RDBMS. I usually have this invariant: "For a given group [say by OwnerId], the whole group is in cache or none of it is." So when I cache miss on a Foo, I pull that Foo and all the owner's other Foos from the RDBMS. Updates make sure to keep the index up to date and respect the invariant. When an owner calls GetMyFoos I never have to worry that some are cached and some aren't. What I did already The first/simplest answer seems to be to use plain ol' SET and GET with a composite key and json value: SET( "ServiceCache:Foo:" + theFoo.Id, JsonSerialize(theFoo)); I later decided I liked: HSET( "ServiceCache:Foo", theFoo.FooId, JsonSerialize(theFoo)); That lets me get all the values in one cache as HVALS. It also felt right - I'm literally moving hashtables to Redis, so perhaps my top-level items should be hashes. This works to first order. If my high-level code is like: UpdateCache(myFoo); AddToIndex(myFoo); That translates into: HSET ("ServiceCache:Foo", theFoo.FooId, JsonSerialize(theFoo)); var myFoos = JsonDeserialize( HGET ("ServiceCache:FooIndex", theFoo.OwnerId) ); myFoos.Add(theFoo.OwnerId); HSET ("ServiceCache:FooIndex", theFoo.OwnerId, JsonSerialize(myFoos)); However, this is broken in two ways. Two concurrent operations can read/modify/write at the same time. The latter "wins" the final HSET and the former's index update is lost. Another operation could read the index in between the first and second lines. It would miss a Foo that it should find. So how do I index properly? I think I could use a Redis set instead of a json-encoded value for the index. That would solve part of the problem since the "add-to-index-if-not-already-present" would be atomic. I also read about using MULTI as a "transaction" but it doesn't seem like it does what I want. Am I right that I can't really MULTI; HGET; {update}; HSET; EXEC since it doesn't even do the HGET before I issue the EXEC? I also read about using WATCH and MULTI for optimistic concurrency, then retrying on failure. But WATCH only works on top-level keys. So it's back to SET/GET instead of HSET/HGET. And now I need a new index-like-thing to support getting all the values in a given cache. If I understand it right, I can combine all these things to do the job. Something like: while(!succeeded) { WATCH( "ServiceCache:Foo:" + theFoo.FooId ); WATCH( "ServiceCache:FooIndexByOwner:" + theFoo.OwnerId ); WATCH( "ServiceCache:FooIndexAll" ); MULTI(); SET ("ServiceCache:Foo:" + theFoo.FooId, JsonSerialize(theFoo)); SADD ("ServiceCache:FooIndexByOwner:" + theFoo.OwnerId, theFoo.FooId); SADD ("ServiceCache:FooIndexAll", theFoo.FooId); EXEC(); //TODO somehow set succeeded properly } Finally I'd have to translate this pseudocode into real code depending how my client library uses WATCH/MULTI/EXEC; it looks like they need some sort of context to hook them together. 
All in all this seems like a lot of complexity for what has to be a very common case; I can't help but think there's a better, smarter, Redis-ish way to do things that I'm just not seeing. How do I lock properly? Even if I had no indexes, there's still a (probably rare) race condition. A: HGET - cache miss B: HGET - cache miss A: SELECT B: SELECT A: HSET C: HGET - cache hit C: UPDATE C: HSET B: HSET ** this is stale data that's clobbering C's update. Note that C could just be a really-fast A. Again I think WATCH, MULTI, retry would work, but... ick. I know in some places people use special Redis keys as locks for other objects. Is that a reasonable approach here? Should those be top-level keys like ServiceCache:FooLocks:{Id} or ServiceCache:Locks:Foo:{Id}? Or make a separate hash for them - ServiceCache:Locks with subkeys Foo:{Id}, or ServiceCache:Locks:Foo with subkeys {Id} ? How would I work around abandoned locks, say if a transaction (or a whole server) crashes while "holding" the lock?
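    For what it's worth, the "TODO somehow set succeeded properly" part of the pseudocode above usually resolves like this: EXEC reports failure (commonly a null or empty reply) when any WATCHed key changed before EXEC ran, and the retry loop keys off that. The IRedisClient interface below is hypothetical pseudocode rather than any particular client library's API; Foo and JsonSerialize are the question's own types and helpers:

    // Hypothetical client interface - substitute your real Redis client's
    // WATCH/MULTI/EXEC calls. The key point: EXEC reports failure when any
    // WATCHed key was modified after WATCH, so the loop simply retries.
    public interface IRedisClient
    {
        void Watch(params string[] keys);        // WATCH
        void Multi();                            // MULTI
        void Set(string key, string value);      // SET
        void SetAdd(string key, string member);  // SADD
        bool Exec();                             // EXEC; false => transaction aborted
    }

    public static class FooCache
    {
        public static void Save(IRedisClient redis, Foo theFoo)
        {
            while (true)
            {
                redis.Watch(
                    "ServiceCache:Foo:" + theFoo.FooId,
                    "ServiceCache:FooIndexByOwner:" + theFoo.OwnerId,
                    "ServiceCache:FooIndexAll");

                redis.Multi();
                redis.Set("ServiceCache:Foo:" + theFoo.FooId, JsonSerialize(theFoo));
                redis.SetAdd("ServiceCache:FooIndexByOwner:" + theFoo.OwnerId, theFoo.FooId.ToString());
                redis.SetAdd("ServiceCache:FooIndexAll", theFoo.FooId.ToString());

                if (redis.Exec())
                    break; // nothing we watched changed; the writes went through

                // a watched key changed between WATCH and EXEC - retry from the top
            }
        }
    }

    On the abandoned-lock question at the end: the common pattern is a lock key written with an expiry (SETNX plus EXPIRE, or SET with the NX and EX options on newer Redis versions), so a lock "held" by a crashed process simply times out instead of blocking everyone forever.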

    Read the article

  • Spring Security - Interactive login attempt was unsuccessful

    - by Taylor L
    I've been trying to track down why Spring Security isn't creating the SPRING_SECURITY_REMEMBER_ME_COOKIE so I turned on logging for org.springframework.security.web.authentication.rememberme. At first glance, the logs make it seem like the login is failing but the login is actually successful in the sense that if I navigate to a page that requires authentication I am not redirected back to the login page. However, the logs appear to be saying the login credentials are invalid. Any ideas as to what is going on? Mar 16, 2010 10:05:56 AM org.springframework.security.web.authentication.rememberme.PersistentTokenBasedRememberMeServices onLoginSuccess FINE: Creating new persistent login for user [email protected] Mar 16, 2010 10:10:07 AM org.springframework.security.web.authentication.rememberme.AbstractRememberMeServices loginFail FINE: Interactive login attempt was unsuccessful. Mar 16, 2010 10:10:07 AM org.springframework.security.web.authentication.rememberme.AbstractRememberMeServices cancelCookie FINE: Cancelling cookie <http auto-config="false"> <intercept-url pattern="/css/**" filters="none" /> <intercept-url pattern="/img/**" filters="none" /> <intercept-url pattern="/js/**" filters="none" /> <intercept-url pattern="/app/admin/**" filters="none" /> <intercept-url pattern="/app/login/**" filters="none" /> <intercept-url pattern="/app/register/**" filters="none" /> <intercept-url pattern="/app/error/**" filters="none" /> <intercept-url pattern="/" filters="none" /> <intercept-url pattern="/**" access="ROLE_USER" /> <logout logout-success-url="/" /> <form-login login-page="/app/login" default-target-url="/" authentication-failure-url="/app/login?login_error=1" /> <session-management invalid-session-url="/app/login" /> <remember-me services-ref="rememberMeServices" key="myKey" /> </http> <authentication-manager alias="authenticationManager"> <authentication-provider user-service-ref="userDetailsService"> <password-encoder hash="sha-256" base64="true"> <salt-source user-property="username" /> </password-encoder> </authentication-provider> </authentication-manager> <beans:bean id="userDetailsService" class="com.my.service.auth.UserDetailsServiceImpl" /> <beans:bean id="rememberMeServices" class="org.springframework.security.web.authentication.rememberme.PersistentTokenBasedRememberMeServices"> <beans:property name="userDetailsService" ref="userDetailsService" /> <beans:property name="tokenRepository" ref="persistentTokenRepository" /> <beans:property name="key" value="myKey" /> </beans:bean> <beans:bean id="persistentTokenRepository" class="com.my.service.auth.PersistentTokenRepositoryImpl" />

    Read the article

  • Spring Security - Persistent Remember Me Issue

    - by Taylor L
    I've been trying to track down why Spring Security isn't creating the Spring Security remember me cookie (SPRING_SECURITY_REMEMBER_ME_COOKIE). At first glance, the logs make it seem like the login is failing, but the login is actually successful in the sense that if I navigate to a page that requires authentication I am not redirected back to the login page. However, the logs appear to be saying the login credentials are invalid. I'm using Spring 3.0.1, Spring Security 3.0.1, and Google App Engine 1.3.1. Any ideas as to what is going on? Mar 16, 2010 10:05:56 AM org.springframework.security.web.authentication.rememberme.PersistentTokenBasedRememberMeServices onLoginSuccess FINE: Creating new persistent login for user [email protected] Mar 16, 2010 10:10:07 AM org.springframework.security.web.authentication.rememberme.AbstractRememberMeServices loginFail FINE: Interactive login attempt was unsuccessful. Mar 16, 2010 10:10:07 AM org.springframework.security.web.authentication.rememberme.AbstractRememberMeServices cancelCookie FINE: Cancelling cookie Below is the relevant portion of the applicationContext-security.xml. <http auto-config="false"> <intercept-url pattern="/css/**" filters="none" /> <intercept-url pattern="/img/**" filters="none" /> <intercept-url pattern="/js/**" filters="none" /> <intercept-url pattern="/app/admin/**" filters="none" /> <intercept-url pattern="/app/login/**" filters="none" /> <intercept-url pattern="/app/register/**" filters="none" /> <intercept-url pattern="/app/error/**" filters="none" /> <intercept-url pattern="/" filters="none" /> <intercept-url pattern="/**" access="ROLE_USER" /> <logout logout-success-url="/" /> <form-login login-page="/app/login" default-target-url="/" authentication-failure-url="/app/login?login_error=1" /> <session-management invalid-session-url="/app/login" /> <remember-me services-ref="rememberMeServices" key="myKey" /> </http> <authentication-manager alias="authenticationManager"> <authentication-provider user-service-ref="userDetailsService"> <password-encoder hash="sha-256" base64="true"> <salt-source user-property="username" /> </password-encoder> </authentication-provider> </authentication-manager> <beans:bean id="userDetailsService" class="com.my.service.auth.UserDetailsServiceImpl" /> <beans:bean id="rememberMeServices" class="org.springframework.security.web.authentication.rememberme.PersistentTokenBasedRememberMeServices"> <beans:property name="userDetailsService" ref="userDetailsService" /> <beans:property name="tokenRepository" ref="persistentTokenRepository" /> <beans:property name="key" value="myKey" /> </beans:bean> <beans:bean id="persistentTokenRepository" class="com.my.service.auth.PersistentTokenRepositoryImpl" />

    Read the article

  • Resque Runtime Error at /workers: wrong number of arguments for 'exists' command

    - by Superflux
    I'm having a runtime errror when i'm looking at the "workers" tab on resque-web (localhost). Everything else works. Edit: when this error occurs, i also have some (3 or 4) unknown workers 'not working'. I think they are responsible for the error but i don't understand how they got here Can you help me on this ? Did i do something wrong ? Config: Resque 1.8.5 as a gem in a rails 2.3.8 app on Snow Leopard redis 1.0.7 / rack 1.1 / sinatra 1.0 / vegas 0.1.7 file: client.rb location: format_error_reply line: 558 BACKTRACE: * /Library/Ruby/Gems/1.8/gems/redis-1.0.7/lib/redis/client.rb in format_error_reply * 551. when DOLLAR then format_bulk_reply(line) 552. when ASTERISK then format_multi_bulk_reply(line) 553. else raise ProtocolError.new(reply_type) 554. end 555. end 556. 557. def format_error_reply(line) 558. raise "-" + line.strip 559. end 560. 561. def format_status_reply(line) 562. line.strip 563. end 564. 565. def format_integer_reply(line) * /Library/Ruby/Gems/1.8/gems/redis-1.0.7/lib/redis/client.rb in format_reply * 541. 542. def reconnect 543. disconnect && connect_to_server 544. end 545. 546. def format_reply(reply_type, line) 547. case reply_type 548. when MINUS then format_error_reply(line) 549. when PLUS then format_status_reply(line) 550. when COLON then format_integer_reply(line) 551. when DOLLAR then format_bulk_reply(line) 552. when ASTERISK then format_multi_bulk_reply(line) 553. else raise ProtocolError.new(reply_type) 554. end 555. end * /Library/Ruby/Gems/1.8/gems/redis-1.0.7/lib/redis/client.rb in read_reply * 478. disconnect 479. 480. raise Errno::EAGAIN, "Timeout reading from the socket" 481. end 482. 483. raise Errno::ECONNRESET, "Connection lost" unless reply_type 484. 485. format_reply(reply_type, @sock.gets) 486. end 487. 488. 489. if "".respond_to?(:bytesize) 490. def get_size(string) 491. string.bytesize 492. end * /Library/Ruby/Gems/1.8/gems/redis-1.0.7/lib/redis/client.rb in process_command * 448. return pipeline ? results : results[0] 449. end 450. 451. def process_command(command, argvv) 452. @sock.write(command) 453. argvv.map do |argv| 454. processor = REPLY_PROCESSOR[argv[0].to_s] 455. processor ? processor.call(read_reply) : read_reply 456. end 457. end 458. 459. def maybe_lock(&block) 460. if @thread_safe 461. @mutex.synchronize(&block) 462. else * /Library/Ruby/Gems/1.8/gems/redis-1.0.7/lib/redis/client.rb in map * 446. end 447. 448. return pipeline ? results : results[0] 449. end 450. 451. def process_command(command, argvv) 452. @sock.write(command) 453. argvv.map do |argv| 454. processor = REPLY_PROCESSOR[argv[0].to_s] 455. processor ? processor.call(read_reply) : read_reply 456. end 457. end 458. 459. def maybe_lock(&block) 460. if @thread_safe * /Library/Ruby/Gems/1.8/gems/redis-1.0.7/lib/redis/client.rb in process_command * 446. end 447. 448. return pipeline ? results : results[0] 449. end 450. 451. def process_command(command, argvv) 452. @sock.write(command) 453. argvv.map do |argv| 454. processor = REPLY_PROCESSOR[argv[0].to_s] 455. processor ? processor.call(read_reply) : read_reply 456. end 457. end 458. 459. def maybe_lock(&block) 460. if @thread_safe * /Library/Ruby/Gems/1.8/gems/redis-1.0.7/lib/redis/client.rb in raw_call_command * 435. @sock.write(command) 436. return true 437. end 438. # The normal command execution is reading and processing the reply. 439. results = maybe_lock do 440. begin 441. set_socket_timeout!(0) if requires_timeout_reset?(argvv[0][0].to_s) 442. process_command(command, argvv) 443. ensure 444. 
set_socket_timeout!(@timeout) if requires_timeout_reset?(argvv[0][0].to_s) 445. end 446. end 447. 448. return pipeline ? results : results[0] 449. end * /Library/Ruby/Gems/1.8/gems/redis-1.0.7/lib/redis/client.rb in synchronize * 454. processor = REPLY_PROCESSOR[argv[0].to_s] 455. processor ? processor.call(read_reply) : read_reply 456. end 457. end 458. 459. def maybe_lock(&block) 460. if @thread_safe 461. @mutex.synchronize(&block) 462. else 463. block.call 464. end 465. end 466. 467. def read_reply 468. * /Library/Ruby/Gems/1.8/gems/redis-1.0.7/lib/redis/client.rb in maybe_lock * 454. processor = REPLY_PROCESSOR[argv[0].to_s] 455. processor ? processor.call(read_reply) : read_reply 456. end 457. end 458. 459. def maybe_lock(&block) 460. if @thread_safe 461. @mutex.synchronize(&block) 462. else 463. block.call 464. end 465. end 466. 467. def read_reply 468. * /Library/Ruby/Gems/1.8/gems/redis-1.0.7/lib/redis/client.rb in raw_call_command * 432. end 433. # When in Pub/Sub mode we don't read replies synchronously. 434. if @pubsub 435. @sock.write(command) 436. return true 437. end 438. # The normal command execution is reading and processing the reply. 439. results = maybe_lock do 440. begin 441. set_socket_timeout!(0) if requires_timeout_reset?(argvv[0][0].to_s) 442. process_command(command, argvv) 443. ensure 444. set_socket_timeout!(@timeout) if requires_timeout_reset?(argvv[0][0].to_s) 445. end 446. end * /Library/Ruby/Gems/1.8/gems/redis-1.0.7/lib/redis/client.rb in call_command * 336. # try to reconnect just one time, otherwise let the error araise. 337. def call_command(argv) 338. log(argv.inspect, :debug) 339. 340. connect_to_server unless connected? 341. 342. begin 343. raw_call_command(argv.dup) 344. rescue Errno::ECONNRESET, Errno::EPIPE, Errno::ECONNABORTED 345. if reconnect 346. raw_call_command(argv.dup) 347. else 348. raise Errno::ECONNRESET 349. end 350. end * /Library/Ruby/Gems/1.8/gems/redis-1.0.7/lib/redis/client.rb in method_missing * 385. connect_to(@host, @port) 386. call_command([:auth, @password]) if @password 387. call_command([:select, @db]) if @db != 0 388. @sock 389. end 390. 391. def method_missing(*argv) 392. call_command(argv) 393. end 394. 395. def raw_call_command(argvp) 396. if argvp[0].is_a?(Array) 397. argvv = argvp 398. pipeline = true 399. else * /Library/Ruby/Gems/1.8/gems/redis-namespace-0.4.4/lib/redis/namespace.rb in send * 159. args = add_namespace(args) 160. args.push(last) if last 161. when :alternate 162. args = [ add_namespace(Hash[*args]) ] 163. end 164. 165. # Dispatch the command to Redis and store the result. 166. result = @redis.send(command, *args, &block) 167. 168. # Remove the namespace from results that are keys. 169. result = rem_namespace(result) if after == :all 170. 171. result 172. end 173. * /Library/Ruby/Gems/1.8/gems/redis-namespace-0.4.4/lib/redis/namespace.rb in method_missing * 159. args = add_namespace(args) 160. args.push(last) if last 161. when :alternate 162. args = [ add_namespace(Hash[*args]) ] 163. end 164. 165. # Dispatch the command to Redis and store the result. 166. result = @redis.send(command, *args, &block) 167. 168. # Remove the namespace from results that are keys. 169. result = rem_namespace(result) if after == :all 170. 171. result 172. end 173. * /Library/Ruby/Gems/1.8/gems/resque-1.8.5/lib/resque/worker.rb in state * 416. def idle? 417. state == :idle 418. end 419. 420. # Returns a symbol representing the current worker state, 421. # which can be either :working or :idle 422. def state 423. 
redis.exists("worker:#{self}") ? :working : :idle 424. end 425. 426. # Is this worker the same as another worker? 427. def ==(other) 428. to_s == other.to_s 429. end 430. * /Library/Ruby/Gems/1.8/gems/resque-1.8.5/lib/resque/server/views/workers.erb in __tilt_a2112543c5200dbe0635da5124b47311 * 46. <tr> 47. <th>&nbsp;</th> 48. <th>Where</th> 49. <th>Queues</th> 50. <th>Processing</th> 51. </tr> 52. <% for worker in (workers = resque.workers.sort_by { |w| w.to_s }) %> 53. <tr class="<%=state = worker.state%>"> 54. <td class='icon'><img src="<%=u state %>.png" alt="<%= state %>" title="<%= state %>"></td> 55. 56. <% host, pid, queues = worker.to_s.split(':') %> 57. <td class='where'><a href="<%=u "workers/#{worker}"%>"><%= host %>:<%= pid %></a></td> 58. <td class='queues'><%= queues.split(',').map { |q| '<a class="queue-tag" href="' + u("/queues/#{q}") + '">' + q + '</a>'}.join('') %></td> 59. 60. <td class='process'> * /Library/Ruby/Gems/1.8/gems/resque-1.8.5/lib/resque/server/views/workers.erb in each * /Library/Ruby/Gems/1.8/gems/resque-1.8.5/lib/resque/server/views/workers.erb in __tilt_a2112543c5200dbe0635da5124b47311 * /Library/Ruby/Gems/1.8/gems/sinatra-1.0/lib/sinatra/tilt.rb in send * /Library/Ruby/Gems/1.8/gems/sinatra-1.0/lib/sinatra/tilt.rb in evaluate * /Library/Ruby/Gems/1.8/gems/sinatra-1.0/lib/sinatra/tilt.rb in render * /Library/Ruby/Gems/1.8/gems/sinatra-1.0/lib/sinatra/base.rb in render * /Library/Ruby/Gems/1.8/gems/sinatra-1.0/lib/sinatra/base.rb in erb * /Library/Ruby/Gems/1.8/gems/resque-1.8.5/lib/resque/server.rb in show * /Library/Ruby/Gems/1.8/gems/resque-1.8.5/lib/resque/server.rb in GET /workers * /Library/Ruby/Gems/1.8/gems/sinatra-1.0/lib/sinatra/base.rb in call * /Library/Ruby/Gems/1.8/gems/sinatra-1.0/lib/sinatra/base.rb in route * /Library/Ruby/Gems/1.8/gems/sinatra-1.0/lib/sinatra/base.rb in instance_eval * /Library/Ruby/Gems/1.8/gems/sinatra-1.0/lib/sinatra/base.rb in route_eval * /Library/Ruby/Gems/1.8/gems/sinatra-1.0/lib/sinatra/base.rb in route! * /Library/Ruby/Gems/1.8/gems/sinatra-1.0/lib/sinatra/base.rb in catch * /Library/Ruby/Gems/1.8/gems/sinatra-1.0/lib/sinatra/base.rb in route! * /Library/Ruby/Gems/1.8/gems/sinatra-1.0/lib/sinatra/base.rb in each * /Library/Ruby/Gems/1.8/gems/sinatra-1.0/lib/sinatra/base.rb in route! * /Library/Ruby/Gems/1.8/gems/sinatra-1.0/lib/sinatra/base.rb in dispatch! * /Library/Ruby/Gems/1.8/gems/sinatra-1.0/lib/sinatra/base.rb in call! * /Library/Ruby/Gems/1.8/gems/sinatra-1.0/lib/sinatra/base.rb in instance_eval * /Library/Ruby/Gems/1.8/gems/sinatra-1.0/lib/sinatra/base.rb in invoke * /Library/Ruby/Gems/1.8/gems/sinatra-1.0/lib/sinatra/base.rb in catch * /Library/Ruby/Gems/1.8/gems/sinatra-1.0/lib/sinatra/base.rb in invoke * /Library/Ruby/Gems/1.8/gems/sinatra-1.0/lib/sinatra/base.rb in call! 
* /Library/Ruby/Gems/1.8/gems/sinatra-1.0/lib/sinatra/base.rb in call * /Volumes/Donnees/Users/**/.gem/ruby/1.8/gems/rack-1.1.0/lib/rack/showexceptions.rb in call * /Library/Ruby/Gems/1.8/gems/sinatra-1.0/lib/sinatra/base.rb in call * /Library/Ruby/Gems/1.8/gems/sinatra-1.0/lib/sinatra/base.rb in synchronize * /Library/Ruby/Gems/1.8/gems/sinatra-1.0/lib/sinatra/base.rb in call * /Volumes/Donnees/Users/**/.gem/ruby/1.8/gems/rack-1.1.0/lib/rack/content_length.rb in call * /Volumes/Donnees/Users/**/.gem/ruby/1.8/gems/rack-1.1.0/lib/rack/chunked.rb in call * /Volumes/Donnees/Users/**/.gem/ruby/1.8/gems/rack-1.1.0/lib/rack/handler/mongrel.rb in process * /System/Library/Frameworks/Ruby.framework/Versions/1.8/usr/lib/ruby/gems/1.8/gems/mongrel-1.1.5/lib/mongrel.rb in process_client * /System/Library/Frameworks/Ruby.framework/Versions/1.8/usr/lib/ruby/gems/1.8/gems/mongrel-1.1.5/lib/mongrel.rb in each * /System/Library/Frameworks/Ruby.framework/Versions/1.8/usr/lib/ruby/gems/1.8/gems/mongrel-1.1.5/lib/mongrel.rb in process_client * /System/Library/Frameworks/Ruby.framework/Versions/1.8/usr/lib/ruby/gems/1.8/gems/mongrel-1.1.5/lib/mongrel.rb in run * /System/Library/Frameworks/Ruby.framework/Versions/1.8/usr/lib/ruby/gems/1.8/gems/mongrel-1.1.5/lib/mongrel.rb in initialize * /System/Library/Frameworks/Ruby.framework/Versions/1.8/usr/lib/ruby/gems/1.8/gems/mongrel-1.1.5/lib/mongrel.rb in new * /System/Library/Frameworks/Ruby.framework/Versions/1.8/usr/lib/ruby/gems/1.8/gems/mongrel-1.1.5/lib/mongrel.rb in run * /System/Library/Frameworks/Ruby.framework/Versions/1.8/usr/lib/ruby/gems/1.8/gems/mongrel-1.1.5/lib/mongrel.rb in initialize * /System/Library/Frameworks/Ruby.framework/Versions/1.8/usr/lib/ruby/gems/1.8/gems/mongrel-1.1.5/lib/mongrel.rb in new * /System/Library/Frameworks/Ruby.framework/Versions/1.8/usr/lib/ruby/gems/1.8/gems/mongrel-1.1.5/lib/mongrel.rb in run * /Volumes/Donnees/Users/**/.gem/ruby/1.8/gems/rack-1.1.0/lib/rack/handler/mongrel.rb in run * /Library/Ruby/Gems/1.8/gems/vegas-0.1.7/lib/vegas/runner.rb in run! * /Library/Ruby/Gems/1.8/gems/vegas-0.1.7/lib/vegas/runner.rb in start * /Library/Ruby/Gems/1.8/gems/resque-1.8.5/bin/resque-web in new * /Library/Ruby/Gems/1.8/gems/resque-1.8.5/bin/resque-web in nil * /usr/bin/resque-web in load

    Read the article

  • jQuery UI tabs: Double loading of the default tab

    - by Stephane
    I use jQuery UI tabs to display a tabbed UI. Here is how my markup looks in a MasterPage: <div id="channel-tabs" class="ui-tabs"> <ul class="ui-tabs-nav"> <li><%=Html.ActionLink("Blogs", "Index", "Blog", new { query = Model.Query, lang = Model.SelectedLanguage, fromTo = Model.FromTo, filters = Model.FilterId }, new{ title="Blog Results" }) %></li> <li><%=Html.ActionLink("Forums", "Index", "Forums", new { query = Model.Query, lang = Model.SelectedLanguage, fromTo = Model.FromTo, filters = Model.FilterId }, null) %></li> <li><%=Html.ActionLink("Twitter", "Index", "Twitter", new { query = Model.Query, lang = Model.SelectedLanguage, fromTo = Model.FromTo, filters = Model.FilterId }, null) %></li> </ul> <div id="Blog_Results"> <asp:ContentPlaceHolder ID="ResultPlaceHolder" runat="server"> </asp:ContentPlaceHolder> </div> If the content is loaded via ajax, I return a partial view with the content of the tab. If the content is loaded directly, I load a page that includes the content in the ContentPlaceHolder, somewhat like this: <asp:Content ID="Content2" ContentPlaceHolderID="BlogPlaceHolder" runat="server"> <%=Html.Partial("Partial",Model) %> </asp:Content> //same goes for the other tabs. With this in place, if I access the url "/Forums", it loads the forum content into the Blog tab first, triggers the ajax load of the Blog tab, and replaces the content with the blog content. I tried putting a different placeholder for each tab, but that didn't fix everything either, since when loading "/Forums" it will load the forum tab, but the Blog tab will show up first. Furthermore, when using separate placeholders, if I load the "/Blogs" url, it will first load the content statically in the Blog contentplaceholder and then trigger an ajax call to load it a second time and replace it. If I just link the tab to the hashtag, then when loading the forum tab, I won't get the blog content... How would you achieve the expected behaviour? I feel like I might have a deeper problem in the organization of my views. Is putting the tabs in the masterpage the way to go? Maybe I should just hijax the links manually and not rely on jQuery UI tabs to do the work for me. I cannot load all tabs by default and display them using the hash tags; I need ajax loading because it is a search process that can be long. So to sum up: /Forums should load the forum tab and let the other tabs be loaded with an ajax call when clicking on them. /Twitter should load the twitter tab and let the other tabs... the same goes for /Blogs and any tabs I would add later. Any ideas on how to get this working properly?

    Read the article

  • How do I verify a DKIM signature in PHP?

    - by angrychimp
    I'll admit I'm not very adept at key verification. What I have is a script that downloads messages from a POP3 server, and I'm attempting to verify the DKIM signatures in PHP. I've already figured out the body hash (bh) validation check, but I can't figure out the header validation. http://www.dkim.org/specs/rfc4871-dkimbase.html#rfc.section.6.1.3 Below is an example of my message headers. I've been able to use the Mail::DKIM package to validate the signature in Perl, so I know it's good. I just can't seem to figure out the instructions in the RFC and translate them into PHP code. DomainKey-Signature: q=dns; a=rsa-sha1; c=nofws; s=angrychimp-1.bh; d=angrychimp.net; h=From:X-Outgoing; b=RVkenibHQ7GwO5Y3tun2CNn5wSnooBSXPHA1Kmxsw6miJDnVp4XKmA9cUELwftf9 nGiRCd3rLc6eswAcVyNhQ6mRSsF55OkGJgDNHiwte/pP5Z47Lo/fd6m7rfCnYxq3 DKIM-Signature: v=1; a=rsa-sha1; d=angrychimp.net; s=angrychimp-1.bh; c=relaxed/simple; q=dns/txt; [email protected]; t=1268436255; h=From:Subject:X-Outgoing:Date; bh=gqhC2GEWbg1t7T3IfGMUKzt1NCc=; b=ZmeavryIfp5jNDIwbpifsy1UcavMnMwRL6Fy6axocQFDOBd2KjnjXpCkHxs6yBZn Wu+UCFeAP+1xwN80JW+4yOdAiK5+6IS8fiVa7TxdkFDKa0AhmJ1DTHXIlPjGE4n5; To: [email protected] Message-ID: From: DKIM Tester Reply-To: [email protected] Subject: Automated DKIM Testing (angrychimp.net) X-Outgoing: dhaka Date: Fri, 12 Mar 2010 15:24:15 -0800 Content-Type: text/plain; charset=iso-8859-1 Content-Transfer-Encoding: quoted-printable Content-Disposition: inline MIME-Version: 1.0 Return-Path: [email protected] X-OriginalArrivalTime: 12 Mar 2010 23:25:50.0326 (UTC) FILETIME=[5A0ED160:01CAC23B] I can extract the public key from my DNS just fine, and I believe I'm canonicalizing the headers correctly, but I just can't get the signature validated. I don't think I'm preparing my key or computing the signature validation correctly. Is this something that's possible (do I need pear extensions or something?) or is manually validating a DKIM signature in PHP just not feasible?
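
    The header step is usually where this goes wrong, so here is a sketch, in Java rather than PHP, of the relaxed header canonicalization from RFC 4871 section 3.4.2 and of how the signed string is assembled. The header values below are illustrative placeholders, not the poster's real message: each header named in h= is canonicalized in that order with a trailing CRLF, then the DKIM-Signature header itself is appended with its b= value emptied and no trailing CRLF, and that byte string is what the RSA-SHA1 signature covers.

    import java.util.Locale;

    public class DkimHeaderCanonicalization {

        // Relaxed canonicalization of one header (RFC 4871, section 3.4.2):
        // lowercase the name, unfold continuation lines, collapse whitespace
        // runs to a single space, and trim the value.
        static String relaxedHeader(String name, String value) {
            String v = value.replaceAll("\r\n[ \t]+", " "); // unfold folded lines
            v = v.replaceAll("[ \t]+", " ");                // collapse WSP runs
            v = v.trim();                                   // drop WSP around the value
            return name.toLowerCase(Locale.ROOT) + ":" + v + "\r\n";
        }

        public static void main(String[] args) {
            // Headers listed in h=, canonicalized in that order...
            StringBuilder signed = new StringBuilder();
            signed.append(relaxedHeader("From", "DKIM Tester <tester@example.com>"));
            signed.append(relaxedHeader("Subject", "Automated  DKIM\r\n Testing"));
            // ...then the DKIM-Signature header with b= emptied and no trailing CRLF.
            String dkim = relaxedHeader("DKIM-Signature",
                    "v=1; a=rsa-sha1; c=relaxed/simple; bh=...; b=");
            signed.append(dkim, 0, dkim.length() - 2);
            System.out.println(signed);
            // These bytes are what Signature.getInstance("SHA1withRSA") verifies
            // against the public key published in the selector._domainkey TXT record.
        }
    }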

    Read the article

  • Rails Paperclip unable to access image from another view

    - by curiousCoder
    My app has a HABTM relation between listings and categories. From the categories index page, a user filters a select box to view listings on the show page. I am not able to access images attached to listings in the category show page. listing.rb attr_accessible :placeholder, :categories_ids has_and_belongs_to_many :categories has_attached_file :placeholder, :styles => { :medium => "300x300>", :thumb => "100x100>" }, :default_url => "/images/:style/missing.png", :url => "/system/:hash.:extension", :hash_secret => "longSecretString" categories controller def index @categories = Category.all end def show @categories = Category.find_by_sql ["select distinct l.* from listings l , categories c, categories_listings cl where c.id = cl.category_id and l.id = cl.listing_id and c.id in (?,?)" , params[:c][:id1] , params[:c][:id2]] end The SQL just filters and displays the listings on the show page, where I can show their attributes, but I can't access the placeholder. Note the plural @categories in show. categories show page <ul> <% @categories.each_with_index do |c, index| %> <% if index == 0 %> <li class="first"><%= c.place %></li> <%= image_tag c.placeholder.url(:thumb) %> <li><%= c.price %></li> <% else %> <li><%= c.place %></li> <li><%= c.price %></li> <%= image_tag c.placeholder.url(:thumb) %> <% end %> <% end %> </ul> The question "Access image from different view in a view with paperclip gem ruby on rails" said to make the object plural and loop over it, which should allow access to the image. It does not work in this case: undefined method `placeholder' for #<Category:0x5c78640> The strange thing is that placeholder will be displayed as an array of all images for all the listings if used as suggested in that Stack Overflow answer, which is obviously not the way I prefer. Where's the issue? What am I missing?

    Read the article

  • Optimizing multiple dispatch notification algorithm in C#?

    - by Robert Fraser
    Sorry about the title, I couldn't think of a better way to describe the problem. Basically, I'm trying to implement a collision system in a game. I want to be able to register a "collision handler" that handles any collision of two objects (given in either order) that can be cast to particular types. So if Player : Ship : Entity and Laser : Particle : Entity, and handlers for (Ship, Particle) and (Laser, Entity) are registered, then for a collision of (Laser, Player), both handlers should be notified, with the arguments in the correct order, and a collision of (Laser, Laser) should notify only the second handler. A code snippet says a thousand words, so here's what I'm doing right now (naive method): public IObservable<Collision<T1, T2>> onCollisionsOf<T1, T2>() where T1 : Entity where T2 : Entity { Type t1 = typeof(T1); Type t2 = typeof(T2); Subject<Collision<T1, T2>> obs = new Subject<Collision<T1, T2>>(); _onCollisionInternal += delegate(Entity obj1, Entity obj2) { if (t1.IsAssignableFrom(obj1.GetType()) && t2.IsAssignableFrom(obj2.GetType())) obs.OnNext(new Collision<T1, T2>((T1) obj1, (T2) obj2)); else if (t1.IsAssignableFrom(obj2.GetType()) && t2.IsAssignableFrom(obj1.GetType())) obs.OnNext(new Collision<T1, T2>((T1) obj2, (T2) obj1)); }; return obs; } However, this method is quite slow (measurably; I lost ~2 FPS after implementing this), so I'm looking for a way to shave a couple of cycles/allocations off this. I thought about (as in, spent an hour implementing and then slammed my head against a wall for being such an idiot) a method that put the types in an order based on their hash code, then put them into a dictionary, with each entry being a linked list of handlers for pairs of that type with a boolean indicating whether the handler wanted the order of arguments reversed. Unfortunately, this doesn't work for derived types, since if a derived type is passed in, it won't notify a subscriber for the base type. Can anyone think of a way better than checking every type pair (twice) to see if it matches? Thanks, Robert
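
    One way to claw most of that cost back, sketched here in Java since the idea is language-neutral: keep the registrations as (type, type, handler) triples, but resolve the assignability checks only once per concrete runtime type pair and cache the resulting handler list, so steady-state dispatch becomes a dictionary lookup plus direct calls. The class and method names below are made up for the sketch.

    import java.util.ArrayList;
    import java.util.HashMap;
    import java.util.List;
    import java.util.Map;
    import java.util.function.BiConsumer;

    final class CollisionDispatcher {

        private static final class Registration {
            final Class<?> first, second;
            final BiConsumer<Object, Object> handler;
            Registration(Class<?> first, Class<?> second, BiConsumer<Object, Object> handler) {
                this.first = first; this.second = second; this.handler = handler;
            }
        }

        private final List<Registration> registrations = new ArrayList<>();
        private final Map<String, List<BiConsumer<Object, Object>>> cache = new HashMap<>();

        @SuppressWarnings("unchecked")
        <A, B> void register(Class<A> a, Class<B> b, BiConsumer<A, B> handler) {
            registrations.add(new Registration(a, b, (BiConsumer<Object, Object>) handler));
            cache.clear(); // registrations changed, so cached resolutions are stale
        }

        void collide(Object x, Object y) {
            String key = x.getClass().getName() + "|" + y.getClass().getName();
            List<BiConsumer<Object, Object>> handlers =
                    cache.computeIfAbsent(key, k -> resolve(x.getClass(), y.getClass()));
            for (BiConsumer<Object, Object> h : handlers) {
                h.accept(x, y);
            }
        }

        // The reflection cost is paid once per concrete runtime type pair.
        private List<BiConsumer<Object, Object>> resolve(Class<?> cx, Class<?> cy) {
            List<BiConsumer<Object, Object>> matched = new ArrayList<>();
            for (Registration r : registrations) {
                if (r.first.isAssignableFrom(cx) && r.second.isAssignableFrom(cy)) {
                    matched.add(r.handler);                        // arguments as given
                } else if (r.first.isAssignableFrom(cy) && r.second.isAssignableFrom(cx)) {
                    matched.add((a, b) -> r.handler.accept(b, a)); // arguments swapped
                }
            }
            return matched;
        }
    }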

    Read the article

  • Securing a license key with an RSA key

    - by Jesse Knott
    Hello, it's late, I'm tired, and probably being quite dense... I have written an application that I need to secure so it will only run on machines that I generate a key for. What I am doing for now is getting the BIOS serial number and generating a hash from that; I then encrypt it using an XML RSA private key. I then sign the XML to ensure that it is not tampered with. I am trying to package the public key to decrypt and verify the signature with, but every time I try to execute the code as a different user than the one that generated the signature, I get a failure on the signature. Most of my code is modified from sample code I have found, since I am not as familiar with RSA encryption as I would like to be. Below is the code I was using and the code I thought I needed to use to get this working right... Any feedback would be greatly appreciated, as I am quite lost at this point. The original code I was working with was this; it works fine as long as the user launching the program is the same one that signed the document originally... CspParameters cspParams = new CspParameters(); cspParams.KeyContainerName = "XML_DSIG_RSA_KEY"; cspParams.Flags = CspProviderFlags.UseMachineKeyStore; // Create a new RSA signing key and save it in the container. RSACryptoServiceProvider rsaKey = new RSACryptoServiceProvider(cspParams) { PersistKeyInCsp = true, }; This code is what I believe I should be doing, but it's failing to verify the signature no matter what I do, regardless of whether it's the same user or a different one... RSACryptoServiceProvider rsaKey = new RSACryptoServiceProvider(); //Load the private key from xml file XmlDocument xmlPrivateKey = new XmlDocument(); xmlPrivateKey.Load("KeyPriv.xml"); rsaKey.FromXmlString(xmlPrivateKey.InnerXml); I believe this has something to do with the key container name (being a real dumbass here, please excuse me); I am quite certain that this is the line that is both causing it to work in the first case and preventing it from working in the second case... cspParams.KeyContainerName = "XML_DSIG_RSA_KEY"; Is there a way for me to sign/encrypt the XML with a private key when the application license is generated and then drop the public key in the app directory and use that to verify/decrypt the code? I can drop the encryption part if I can get the signature part working right. I was using it as a backup to obfuscate the origin of the license code I am keying from. Does any of this make sense? Am I a total dunce? Thanks for any help anyone can give me with this.
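
    For the last question (sign at license-generation time, verify with only a public key shipped next to the app), the split looks like this as a Java sketch using java.security.Signature; it is not the .NET fix, but it shows that verification never needs the private key or a per-user key container. The machine identifier below is a hypothetical placeholder.

    import java.nio.charset.StandardCharsets;
    import java.security.KeyPair;
    import java.security.KeyPairGenerator;
    import java.security.Signature;
    import java.util.Base64;

    public class LicenseSigningSketch {
        public static void main(String[] args) throws Exception {
            // Done once, offline: the private key never leaves the vendor.
            KeyPairGenerator gen = KeyPairGenerator.getInstance("RSA");
            gen.initialize(2048);
            KeyPair pair = gen.generateKeyPair();

            byte[] machineId = "hash-of-bios-serial".getBytes(StandardCharsets.UTF_8); // placeholder

            Signature signer = Signature.getInstance("SHA256withRSA");
            signer.initSign(pair.getPrivate());
            signer.update(machineId);
            String licenseKey = Base64.getEncoder().encodeToString(signer.sign());

            // Done in the shipped application: only the public key is present.
            Signature verifier = Signature.getInstance("SHA256withRSA");
            verifier.initVerify(pair.getPublic());
            verifier.update(machineId);
            boolean valid = verifier.verify(Base64.getDecoder().decode(licenseKey));
            System.out.println("license valid: " + valid);
        }
    }

    In practice the public half would be serialized (for example with getEncoded()) and bundled with the application, while the private half stays on the licensing machine.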

    Read the article

  • Potential issues using member's "from" address and the "sender" header

    - by Paul Burney
    Hi all, A major component of our application sends email to members on behalf of other members. Currently we set the "From" address to our system address and use a "Reply-to" header with the member's address. The issue is that replies from some email clients (and auto-replies/bounces) don't respect the "Reply-to" header so get sent to our system address, effectively sending them to a black hole. We're considering setting the "From" address to our member's address, and the "Sender" address to our system address. It appears this way would pass SPF and Sender-ID checks. Are there any reasons not to switch to this method? Are there any other potential issues? Thanks in advance, -Paul Here are way more details than you probably need: When the application was first developed, we just changed the "from" address to be that of the sending member as that was the common practice at the time (this was many years ago). We later changed that to have the "from" address be the member's name and our address, i.e., From: "Mary Smith" <[email protected]> With a "reply-to" header set to the member's address: Reply-To: "Mary Smith" <[email protected]> This helped with messages being mis-categorized as spam. As SPF became more popular, we added an additional header that would work in conjunction with our SPF records: Sender: <[email protected]> Things work OK, but it turns out that, in practice, some email clients and most MTA's don't respect the "Reply-To" header. Because of this, many members send messages to [email protected] instead of the desired member. So, I started envisioning various schemes to add data about the sender to the email headers or encode it in the "from" email address so that we could process the response and redirect appropriately. For example, From: "Mary Smith" <[email protected]> where the string after "messages" is a hash representing Mary Smith's member in our system. Of course, that path could lead to a lot of pain as we need to develop MTA functionality for our system address. I was looking again at the SPF documentation and found this page interesting: http://www.openspf.org/Best_Practices/Webgenerated They show two examples, that of evite.com and that of egreetings.com. Basically, evite.com is doing it the way we're doing it. The egreetings.com example uses the member's from address with an added "Sender" header. So the question is, are there any potential issues with using the egreetings method of the member's from address with a sender header? That would eliminate the replies that bad clients send to the system address. I don't believe that it solves the bounce/vacation/whitelist issue since those often send to the MAIL FROM even if Return Path is specified.
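
    For reference, the From-plus-Sender arrangement described above looks roughly like this when building a message with JavaMail; this is a sketch, the addresses are placeholders, and the library choice is an assumption rather than anything the question specifies.

    import java.util.Properties;
    import javax.mail.Session;
    import javax.mail.internet.InternetAddress;
    import javax.mail.internet.MimeMessage;

    public class MemberMailSketch {
        public static void main(String[] args) throws Exception {
            Session session = Session.getInstance(new Properties());
            MimeMessage msg = new MimeMessage(session);

            // From: the member, so replies and auto-replies go back to the member.
            msg.setFrom(new InternetAddress("member@example.com", "Mary Smith"));
            // Sender: the system address; Sender-ID's responsible-address rules
            // consult Sender before From.
            msg.setSender(new InternetAddress("messages@example.com"));

            msg.setRecipients(javax.mail.Message.RecipientType.TO,
                    InternetAddress.parse("recipient@example.com"));
            msg.setSubject("Message from Mary Smith");
            msg.setText("...");
            msg.saveChanges();
            // The envelope sender (MAIL FROM, hence Return-Path) is set by the SMTP
            // transport, not a header, so it can stay pointed at the system address
            // and bounces/vacation replies still come back there.
        }
    }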

    Read the article

  • Symfony 1.4 Layout footer glitch: Footer div is echoed out with $sf_content

    - by Parijat Kalia
    I have a very simple Layout for my application. A header, the main content, and a footer. Semantically, they are rendered like this: <body> <div id = "header"> </div> <div id = "content"> </div> <div id = "footer"> </div> </body> The corresponding CSS is very basic as well: #header{ width:100%; min-height:10%; } #center{ width:100%; min-height:80%; } #footer{ width:100%; min-height:10%: } As you would know in the layout page, here is how the content is rendered: <div id= "content"> <?php echo $sf_content; ?> </div> All of the above is very fine and it renders itself as it is supposed to. But there is a glitch with this, the moment i put in <?php echo $sf_content; ?> the footer is included as part of the content and not as a div that is after the #content markup. Essentially, I get this: <div id = "header"></div> <div id = "content> <div id ="symfony_template_to_be_rendered"> <!-- all web application related content like forms etc. --> </div> <div id = "footer">Footer material </div> </div> As you can see, for some weird reason, the footer moved up along with the symfony content. Clearly this is a glitch because if I remove the php hash $sf_content part from the div tags in my layouts, then the footer renders itself as and where it should be and everything takes up the required dimensions. What's going on here?

    Read the article

  • C# - How to override GetHashCode with Lists in object

    - by Christian
    Hi, I am trying to create a "KeySet" to modify UIElement behaviour. The idea is to create a special function if, e.g., the user clicks on an element while holding A, or Ctrl+A. My approach so far: first, let's create a container for all possible modifiers. If I simply allowed a single key, it would be no problem. I could use a simple Dictionary, with Dictionary<Keys, Action> _specialActionList. If the dictionary is empty, use the default action. If there are entries, check which action to use depending on the currently pressed keys. And if I wasn't greedy, that would be it... Now of course, I want more. I want to allow multiple keys or modifiers. So I created a wrapper class, which can be used as the key to my dictionary. There is an obvious problem when using a more complex class: currently, two different instances would create two different keys, and so the dictionary would never find my function (see the code to understand; it's really obvious). Now I checked this post: http://stackoverflow.com/questions/638761/c-gethashcode-override-of-object-containing-generic-array which helped a little. But my question is: is my basic design for the class OK? Should I use a HashSet to store the modifier and normal keyboard keys (instead of Lists)? And if so, what would the GetHashCode function look like? I know it's a lot of code to write (boring hash functions); some tips would be sufficient to get me started. Will post tryouts here... And here is the code so far; the Test obviously fails... public class KeyModifierSet { private readonly List<Key> _keys = new List<Key>(); private readonly List<ModifierKeys> _modifierKeys = new List<ModifierKeys>(); private static readonly Dictionary<KeyModifierSet, Action> _testDict = new Dictionary<KeyModifierSet, Action>(); public static void Test() { _testDict.Add(new KeyModifierSet(Key.A), () => Debug.WriteLine("nothing")); if (!_testDict.ContainsKey(new KeyModifierSet(Key.A))) throw new Exception("Not done yet, help :-)"); } public KeyModifierSet(IEnumerable<Key> keys, IEnumerable<ModifierKeys> modifierKeys) { foreach (var key in keys) _keys.Add(key); foreach (var key in modifierKeys) _modifierKeys.Add(key); } public KeyModifierSet(Key key, ModifierKeys modifierKey) { _keys.Add(key); _modifierKeys.Add(modifierKey); } public KeyModifierSet(Key key) { _keys.Add(key); } }
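
    The direction the question is already leaning (sets instead of lists, plus value-based equality) is the usual fix, and it carries over directly from C#. A small Java analogue with hypothetical names, just to show the shape: sets compare and hash by content regardless of order, so two independently built instances act as the same dictionary key.

    import java.util.HashMap;
    import java.util.HashSet;
    import java.util.Map;
    import java.util.Objects;
    import java.util.Set;

    final class KeyChord {
        private final Set<String> keys;
        private final Set<String> modifiers;

        KeyChord(Set<String> keys, Set<String> modifiers) {
            this.keys = new HashSet<>(keys);
            this.modifiers = new HashSet<>(modifiers);
        }

        @Override public boolean equals(Object o) {
            if (!(o instanceof KeyChord)) return false;
            KeyChord other = (KeyChord) o;
            return keys.equals(other.keys) && modifiers.equals(other.modifiers);
        }

        @Override public int hashCode() {
            // Set hash codes are order-insensitive, so {Ctrl, A} equals {A, Ctrl}.
            return Objects.hash(keys, modifiers);
        }

        public static void main(String[] args) {
            Map<KeyChord, Runnable> actions = new HashMap<>();
            actions.put(new KeyChord(Set.of("A"), Set.of("Ctrl")),
                    () -> System.out.println("select all"));
            // A freshly built, content-equal instance finds the same entry.
            System.out.println(actions.containsKey(new KeyChord(Set.of("A"), Set.of("Ctrl"))));
        }
    }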

    Read the article

  • Alternatives to requiring users to register for an account?

    - by jamieb
    I'm working on a side project to build a new web app idea of mine. For the sake of discussion, let's say this app displays a random photograph of a famous work of art. On a scale of 1 to 5, users are asked to rate how well they like each piece of art, and then are shown the next photo. Eventually, the app is able to get a sense of the person's style and is able to recommend artwork that he/she may find pleasing. The whole concept is similar to Netflix. I understand how to do all the preference-matching logic (although not as sophisticated as Netflix). But I'd like to find a way to do this without requiring that users create an account first. This is a novelty website that a typical user might use only a handful of times. Requiring registration is overkill and will likely drastically reduce its utility. I'd like to allow people to begin rating artwork within five seconds of their initial pageview, yet maintain the integrity of the voting (since recommendations are predicated on how other people have rated the various pieces of artwork). Can it be done? Some ideas: OpenID. The perfect solution except for the fact that it's not widely used and my target audience isn't the most technically adept demographic. Text message. User inputs a phone number and is texted a four-digit code to key into the web app. Quick, easy, and a great way to limit abuse. However, privacy concerns abound... people are probably even less likely to give me their phone number than their email address. Facebook login. I personally don't have a Facebook account due to privacy concerns. And I'd really hate to support such a proprietary platform. Hash code/Bookmark. The visitor's initial pageview generates a 5- or 6-digit alphanumeric code that is embedded in each subsequent URL. They can bookmark any page to save their state. Good: Very simple system that doesn't require any user action. Bad: Very easy to stuff the ballot box, and it might be difficult to account for users sharing the link containing their ID code via email or social networking sites.
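
    The hash code/bookmark option amounts to handing each first-time visitor an unguessable identifier. A rough sketch (Java, hypothetical helper; it makes guessing someone else's code impractical, but by itself does nothing to stop one person requesting many codes):

    import java.security.SecureRandom;
    import java.util.Base64;

    public class VisitorToken {
        private static final SecureRandom RANDOM = new SecureRandom();

        // Six random bytes -> eight URL-safe characters: short enough to embed
        // in every link or bookmark, far too sparse to guess or enumerate.
        static String newToken() {
            byte[] bytes = new byte[6];
            RANDOM.nextBytes(bytes);
            return Base64.getUrlEncoder().withoutPadding().encodeToString(bytes);
        }

        public static void main(String[] args) {
            System.out.println(newToken());
        }
    }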

    Read the article

  • Binary Search Help

    - by aloh
    Hi, for a project I need to implement a binary search. This binary search allows duplicates. I have to get all the index values that match my target. I've thought about doing it this way if a duplicate is found to be in the middle: Target = G. Say there is the following sorted array: B, D, E, F, G, G, G, G, G, G, Q, R, S, S, Z I get the mid, which is 7. Since there are target matches on both sides, and I need all the target matches, I thought a good way to get them all would be to check whether mid + 1 is the same value. If it is, keep moving mid to the right until it isn't. So, it would turn out like this: B, D, E, F, G, G, G, G, G, G (MID), Q, R, S, S, Z Then I would count from 0 to mid to count up the target matches and store their indexes into an array and return it. That was how I was thinking of doing it if the mid was a match and the duplicate happened to be in the mid the first time and on both sides of the array. Now, what if it isn't a match the first time? For example: B, D, E, F, G, G, J, K, L, O, Q, R, S, S, Z Then, as normal, it would grab the mid, then call binary search from first to mid-1. B, D, E, F, G, G, J Since G is greater than F, call binary search from mid+1 to last. G, G, J. The mid is a match. Since it is a match, search from mid+1 to last through a for loop, count up the number of matches, and store the match indexes into an array and return. Is this a good way for the binary search to grab all duplicates? Please let me know if you see problems in my algorithm, and any hints/suggestions are welcome. The only problem I see is that if all the elements were my target, I would basically be searching the whole array, but then again, if that were the case I would still need to get all the duplicates. Thank you. BTW, my instructor said we cannot use Vectors, Hash, or anything else. He wants us to stay at the array level and get used to using and manipulating arrays.
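
    A slightly cheaper variant of the same idea, sketched in Java (not the assignment's code): binary-search to any match, then walk left and right from that index, so only the run of duplicates is scanned instead of everything from index 0 to mid. It also stays at the plain-array level, which fits the stated constraint.

    public class DuplicateBinarySearch {

        // Returns every index whose value equals target, assuming the array is sorted.
        static int[] findAll(char[] sorted, char target) {
            int lo = 0, hi = sorted.length - 1, found = -1;
            while (lo <= hi) {                      // ordinary binary search first
                int mid = lo + (hi - lo) / 2;
                if (sorted[mid] == target) { found = mid; break; }
                if (sorted[mid] < target) lo = mid + 1; else hi = mid - 1;
            }
            if (found == -1) return new int[0];     // no match at all

            int left = found;
            while (left > 0 && sorted[left - 1] == target) left--;                    // walk left
            int right = found;
            while (right < sorted.length - 1 && sorted[right + 1] == target) right++; // walk right

            int[] matches = new int[right - left + 1];
            for (int i = 0; i < matches.length; i++) matches[i] = left + i;
            return matches;
        }

        public static void main(String[] args) {
            char[] a = {'B', 'D', 'E', 'F', 'G', 'G', 'G', 'G', 'G', 'G', 'Q', 'R', 'S', 'S', 'Z'};
            System.out.println(java.util.Arrays.toString(findAll(a, 'G'))); // [4, 5, 6, 7, 8, 9]
        }
    }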

    Read the article

  • shopify_app syntax error

    - by Pete171
    Edit: Debugging has got me further. Question clarified. We have installed Ruby, RubyGems and Rails and have forked the shopify_app project. We have created a new rails applications and added three items to the Gemfile: execjs, therubyracer and shopify_app. Running rails s in order to start our rails application returns this trace: root@ubuntu:/usr/local/pete-shopify/cart# rails s Faraday: you may want to install system_timer for reliable timeouts /var/lib/gems/1.8/gems/shopify_app-4.1.0/lib/shopify_app.rb:15:in `require': /var/lib /gems/1.8/gems/shopify_app-4.1.0/lib/shopify_app/login_protection.rb:5: syntax error, unexpected ':', expecting kEND (SyntaxError) ...rce::UnauthorizedAccess, with: :close_session ^ from /var/lib/gems/1.8/gems/shopify_app-4.1.0/lib/shopify_app.rb:15 from /var/lib/gems/1.8/gems/bundler-1.2.1/lib/bundler/runtime.rb:68:in `require' from /var/lib/gems/1.8/gems/bundler-1.2.1/lib/bundler/runtime.rb:68:in `require' from /var/lib/gems/1.8/gems/bundler-1.2.1/lib/bundler/runtime.rb:66:in `each' from /var/lib/gems/1.8/gems/bundler-1.2.1/lib/bundler/runtime.rb:66:in `require' from /var/lib/gems/1.8/gems/bundler-1.2.1/lib/bundler/runtime.rb:55:in `each' from /var/lib/gems/1.8/gems/bundler-1.2.1/lib/bundler/runtime.rb:55:in `require' from /var/lib/gems/1.8/gems/bundler-1.2.1/lib/bundler.rb:128:in `require' from /usr/local/pete-shopify/cart/config/application.rb:7 from /var/lib/gems/1.8/gems/railties-3.2.8/lib/rails/commands.rb:53:in `require' from /var/lib/gems/1.8/gems/railties-3.2.8/lib/rails/commands.rb:53 from /var/lib/gems/1.8/gems/railties-3.2.8/lib/rails/commands.rb:50:in `tap' from /var/lib/gems/1.8/gems/railties-3.2.8/lib/rails/commands.rb:50 from script/rails:6:in `require' from script/rails:6 I haven't modified any files since forking from Github. Lines 1 - 6 of login_protection.rb are as follows: module ShopifyApp::LoginProtection extend ActiveSupport::Concern included do rescue from ActiveResource::UnauthorizedAccess, with: :close_session end I've looked into this and it seems that the error is caused by a new-style hash syntax between Ruby 1.8 and 1.9; key : value instead of key => value. Running ruby -v from the command line returns ruby 1.9.3p0 (2011-10-30 revision 33570) [x86_64-linux]. This would seem to be OK... but I did some debugging, and inside the file /var/lib/gems/1.8/gems/shopify_app-4.1.0/lib/shopify_app.rb (at the top) by putting this: puts RUBY_VERSION exit It printed 1.8.7. **Why are ruby -v and RUBY_VERSION giving me different results? And am I correct in assuming this is the cause of my problems? Note: To upgrade Ruby I installed the later version with apt-get and then switched to it by using update-alternatives --config ruby and selecting option 2 like this: root@ubuntu:/usr/local/pete-shopify/cart# update-alternatives --config ruby There are 2 choices for the alternative ruby (providing /usr/bin/ruby). Selection Path Priority Status ------------------------------------------------------------ 0 /usr/bin/ruby1.8 50 auto mode 1 /usr/bin/ruby1.8 50 manual mode * 2 /usr/bin/ruby1.9.1 10 manual mode Also note: We're PHP/Python developers so this is all new to us! Summary: 1 - Am I right in determining the cause of the syntax error? 2 - Why does RUBY_VERSION and ruby -v give me different results?

    Read the article

  • Explanation of exporting an XML document as a relational database using XSLT

    - by Yaaqov
    I would like to better understand the basic steps needed to take an XML document like this Breakfast Menu... <?xml version="1.0" encoding="ISO-8859-1"?> <breakfast_menu> <food> <name>Belgian Waffles</name> <price>$5.95</price> <description>two of our famous Belgian Waffles with plenty of real maple syrup</description> <calories>650</calories> </food> <food> <name>Strawberry Belgian Waffles</name> <price>$7.95</price> <description>light Belgian waffles covered with strawberries and whipped cream</description> <calories>900</calories> </food> <food> <name>Berry-Berry Belgian Waffles</name> <price>$8.95</price> <description>light Belgian waffles covered with an assortment of fresh berries and whipped cream</description> <calories>900</calories> </food> <food> <name>French Toast</name> <price>$4.50</price> <description>thick slices made from our homemade sourdough bread</description> <calories>600</calories> </food> <food> <name>Homestyle Breakfast</name> <price>$6.95</price> <description>two eggs, bacon or sausage, toast, and our ever-popular hash browns</description> <calories>950</calories> </food> </breakfast_menu> and "export" it to, say, an Access or MySQL database using XSLT, creating two joined tables: Table: breakfast_menu Field: menu_item_id Field: food_id Table: food Field: food_id Field: name Field: price Field: description Field: calories If there are online tutorials on this that you know of, I'd be interested in learning more, as well. Thanks.
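
    XSLT is one way to do it (the stylesheet can emit INSERT statements as plain text), but the same split into the two tables above can be produced with any XML parser. Here is a sketch in Java using the standard DOM API, with a hypothetical file name, that prints SQL you could then run against Access or MySQL.

    import java.io.File;
    import javax.xml.parsers.DocumentBuilderFactory;
    import org.w3c.dom.Document;
    import org.w3c.dom.Element;
    import org.w3c.dom.NodeList;

    public class MenuToSql {
        public static void main(String[] args) throws Exception {
            Document doc = DocumentBuilderFactory.newInstance()
                    .newDocumentBuilder()
                    .parse(new File("breakfast_menu.xml")); // hypothetical file name

            NodeList foods = doc.getElementsByTagName("food");
            for (int i = 0; i < foods.getLength(); i++) {
                Element food = (Element) foods.item(i);
                int foodId = i + 1;
                // One row per <food> element in the food table...
                System.out.printf(
                        "INSERT INTO food (food_id, name, price, description, calories) "
                        + "VALUES (%d, '%s', '%s', '%s', %s);%n",
                        foodId,
                        text(food, "name").replace("'", "''"),
                        text(food, "price"),
                        text(food, "description").replace("'", "''"),
                        text(food, "calories"));
                // ...and a matching row in breakfast_menu that references it.
                System.out.printf(
                        "INSERT INTO breakfast_menu (menu_item_id, food_id) VALUES (%d, %d);%n",
                        foodId, foodId);
            }
        }

        private static String text(Element parent, String tag) {
            return parent.getElementsByTagName(tag).item(0).getTextContent();
        }
    }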

    Read the article

  • Xcache - No difference after using it

    - by Charles Yeung
    Hi I have installed Xcache in my site(using xampp), I have tested more then 10 times on several page and the result is same as default(no any cache installed), is it something wrong with the configure? Updated [xcache-common] ;; install as zend extension (recommended), normally "$extension_dir/xcache.so" zend_extension = /usr/local/lib/php/extensions/non-debug-non-zts-xxx/xcache.so zend_extension_ts = /usr/local/lib/php/extensions/non-debug-zts-xxx/xcache.so ;; For windows users, replace xcache.so with php_xcache.dll zend_extension_ts = C:\xampp\php\ext\php_xcache.dll ;; or install as extension, make sure your extension_dir setting is correct ; extension = xcache.so ;; or win32: ; extension = php_xcache.dll [xcache.admin] xcache.admin.enable_auth = On xcache.admin.user = "mOo" ; xcache.admin.pass = md5($your_password) xcache.admin.pass = "" [xcache] ; ini only settings, all the values here is default unless explained ; select low level shm/allocator scheme implemenation xcache.shm_scheme = "mmap" ; to disable: xcache.size=0 ; to enable : xcache.size=64M etc (any size > 0) and your system mmap allows xcache.size = 60M ; set to cpu count (cat /proc/cpuinfo |grep -c processor) xcache.count = 1 ; just a hash hints, you can always store count(items) > slots xcache.slots = 8K ; ttl of the cache item, 0=forever xcache.ttl = 0 ; interval of gc scanning expired items, 0=no scan, other values is in seconds xcache.gc_interval = 0 ; same as aboves but for variable cache xcache.var_size = 4M xcache.var_count = 1 xcache.var_slots = 8K ; default ttl xcache.var_ttl = 0 xcache.var_maxttl = 0 xcache.var_gc_interval = 300 xcache.test = Off ; N/A for /dev/zero xcache.readonly_protection = Off ; for *nix, xcache.mmap_path is a file path, not directory. ; Use something like "/tmp/xcache" if you want to turn on ReadonlyProtection ; 2 group of php won't share the same /tmp/xcache ; for win32, xcache.mmap_path=anonymous map name, not file path xcache.mmap_path = "/dev/zero" ; leave it blank(disabled) or "/tmp/phpcore/" ; make sure it's writable by php (without checking open_basedir) xcache.coredump_directory = "" ; per request settings xcache.cacher = On xcache.stat = On xcache.optimizer = Off [xcache.coverager] ; per request settings ; enable coverage data collecting for xcache.coveragedump_directory and xcache_coverager_start/stop/get/clean() functions (will hurt executing performance) xcache.coverager = Off ; ini only settings ; make sure it's readable (care open_basedir) by coverage viewer script ; requires xcache.coverager=On xcache.coveragedump_directory = "" Thanks you

    Read the article

< Previous Page | 90 91 92 93 94 95 96 97 98 99 100  | Next Page >