Search Results

Search found 26692 results on 1068 pages for 'virtual private cloud'.

Page 513/1068

  • Slot Machine Pay Out

    - by Kris.Mitchell
    I have done a lot of research into random number generators for slot machines, reel stop calculations and how to physically give the user a good chance of winning. What I can't figure out is how to properly ensure that the machine is going to have a payout rating of (let's say) 95%. So, I have a reel set up with 22 spaces on it, filled with 16 different symbols. When I get my random number, I mod-divide it by 64, take the remainder, and hop over to a look-up table to see how the virtual stop relates to the reel position. Now that I know how the reels are going to stop, how do I make sure the payout ratio is correct? For every dollar they put in, how do I make sure the machine will pay out 95 cents? Thanks for the ideas. I am working in ActionScript, if that helps with the language issues, but in general I am just looking for theory.
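    In theory terms, the payout percentage is just the expected value of one spin: enumerate (or sample) every combination of virtual stops, look up the pay for each outcome, and divide total pay by the number of equally likely combinations, then tune the paytable or the stop-to-symbol mapping until that expected value lands near 0.95. Below is a minimal C# sketch of that calculation; the stop-to-symbol mapping and the paytable are made-up placeholders (and it collapses the virtual-stop to physical-stop lookup into a single mapping for brevity), so it is an illustration of the math, not real slot design.

```csharp
// Minimal sketch: brute-force the expected payout of a 3-reel machine by
// enumerating every combination of 64 virtual stops per reel. The mapping and
// the paytable below are hypothetical; tune them until ExpectedReturn() ~ 0.95.
using System;

static class PayoutModel
{
    const int VirtualStops = 64;

    // Hypothetical: maps each of the 64 virtual stops to one of 16 symbols (0..15).
    static readonly int[] StopToSymbol = BuildExampleMapping();

    // Hypothetical paytable: pay (in credits) for three-of-a-kind of each symbol.
    static readonly int[] ThreeOfAKindPay =
        { 1000, 500, 200, 100, 50, 40, 30, 25, 20, 15, 10, 8, 5, 4, 2, 0 };

    static int[] BuildExampleMapping()
    {
        // Rarer (higher-paying) symbols get fewer virtual stops.
        var map = new int[VirtualStops];
        for (int stop = 0; stop < VirtualStops; stop++)
            map[stop] = Math.Min(15, (int)Math.Sqrt(stop * 4)); // skewed distribution
        return map;
    }

    static double ExpectedReturn()
    {
        long totalPay = 0;
        long combinations = 0;
        for (int a = 0; a < VirtualStops; a++)
        for (int b = 0; b < VirtualStops; b++)
        for (int c = 0; c < VirtualStops; c++)
        {
            combinations++;
            int s1 = StopToSymbol[a], s2 = StopToSymbol[b], s3 = StopToSymbol[c];
            if (s1 == s2 && s2 == s3)
                totalPay += ThreeOfAKindPay[s1];
        }
        // Expected return per 1-credit bet = total pay / number of equally likely outcomes.
        return (double)totalPay / combinations;
    }

    static void Main()
    {
        Console.WriteLine($"Expected return: {ExpectedReturn():P2}");
    }
}
```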

    Read the article

  • SPARC M7 Chip - 32 cores - Mind Blowing performance

    - by Angelo-Oracle
    The M7 Chip Oracle just announced its next-generation processor at the Hot Chips HC26 conference. As the Tech Lead in our Systems Division's partner group, I had a front-row seat to the extraordinary price/performance advantage of Oracle's current T5 and M6 based systems. Partner after partner tested these systems and was impressed with their performance. Just read some of the quotes to see what our partners have been saying about our hardware. We just announced our next-generation processor, the M7. It has 32 cores (up from 16 cores in the T5 and 12 cores in the M6). Built with 20 nm technology, this is our most advanced processor, and it has more cores than anything else in the industry today. Since the Sun acquisition, Oracle has released 5 processors in 4 years, and this is the 6th.  The S4 core  The M7 is built on the foundation of the S4 core, our next-generation core technology. Like its predecessor, the S4 has 8 dynamic threads. It increases the frequency while maintaining the pipeline depth. Each core has its own fine-grained power estimator that keeps the core within its power envelope at 250-nanosecond granularity. Each core also includes Software in Silicon features for application acceleration support, and features to improve Application Data Integrity with almost no performance loss. The core also allows using part of the virtual address to store metadata. User-level synchronization instructions are also part of the S4 core. Each core has 16 KB instruction and 16 KB data L1 caches.  The Core Clusters  The cores on the M7 chip are organized in sets of 4-core clusters, and the core clusters share L2 cache. All four cores in a cluster share 256 KB of 4-way set-associative L2 instruction cache, with over 1/2 TB/s of throughput. Two cores share 256 KB of 8-way set-associative L2 data cache, with over 1/2 TB/s of throughput. With this innovative core cluster architecture, the M7 doubles core execution bandwidth to maximize per-thread performance.  The Chip  Each M7 chip has 8 sets of these core clusters. The chip has 64 MB of on-chip L3 cache. This L3 cache is shared among all the cores and is partitioned into 8 x 8 MB chunks; each chunk is an 8-way set-associative cache. The aggregate bandwidth for the L3 cache on the chip is over 1.6 TB/s. Each chip has 4 DDR4 memory controllers and can support up to 16 DDR4 DIMMs, allowing for 2 TB of RAM per chip. The chip also includes 4 internal links of PCIe Gen3 I/O controllers. Each chip has 7 coherence links, allowing 8 of these chips to be connected together gluelessly. Also, 32 of these chips can be connected in an SMP configuration. A potential system with 32 chips would have 1,024 cores, 8,192 threads and 64 TB of RAM.  Software in Silicon The M7 chip has many application accelerators built into silicon. These features will be exposed to our software partners using the SPARC Accelerator Program.  The M7 has built-in logic to decompress data at the speed of memory access. This means that applications can work directly on compressed data in memory, increasing data access rates. The VA Masking feature allows the use of part of the virtual address to store metadata.  Realtime Application Data Integrity The Realtime Application Data Integrity feature helps applications safeguard against invalid or stale memory references and buffer overflows. The first 4 bits of the pointer can be used to store a version number, and this version number is also maintained in the memory and cache lines.
When a pointer accesses memory, the hardware checks that the two versions match; a SEGV signal is raised when there is a mismatch. This feature can be used by the database, applications and the OS.  M7 Database In-Memory Query Accelerator The M7 chip also includes in-silicon query engines. These accelerate tasks that work on in-memory columnar vectors. Oracle's In-Memory option stores data in column format. The M7 query engine can speed up in-memory format conversion, value and range comparisons, and set-membership lookups. This engine can work on compressed data, which means we are not only accelerating query performance but also increasing the effective memory bandwidth for queries.  SPARC Accelerated Program  At the Hot Chips conference we also introduced the SPARC Accelerated Program to provide our partners and third-party developers access to all the goodness of the M7's SPARC application acceleration features. Please get in touch with us if you are interested in knowing more about this program.
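    The version-matching idea can be pictured with a small, purely conceptual software simulation (this is not how the hardware works internally, nor any Oracle API; every name below is invented for illustration): a few bits of the handle carry a version that must match the version recorded for the allocation it references.

```csharp
// Conceptual simulation only: a 4-bit "version" travels in the upper bits of a
// handle and must match the version recorded for the allocation it points to,
// loosely analogous to the M7's Application Data Integrity check.
using System;
using System.Collections.Generic;

class VersionedHeap
{
    private readonly Dictionary<long, (byte Version, byte[] Data)> allocations = new();
    private long nextAddress = 0x1000;
    private byte nextVersion = 1;

    public long Allocate(int size)
    {
        long address = nextAddress;
        nextAddress += size;
        byte version = nextVersion++;
        if (nextVersion == 16) nextVersion = 1;            // versions fit in 4 bits
        allocations[address] = (version, new byte[size]);
        return ((long)version << 60) | address;            // tag the handle with the version
    }

    public void Free(long handle)
    {
        allocations.Remove(handle & 0x0FFF_FFFF_FFFF_FFFF);
    }

    public byte[] Access(long handle)
    {
        byte version = (byte)((ulong)handle >> 60);
        long address = handle & 0x0FFF_FFFF_FFFF_FFFF;
        if (!allocations.TryGetValue(address, out var entry) || entry.Version != version)
            throw new AccessViolationException("version mismatch (stale or invalid reference)");
        return entry.Data;
    }
}

class Demo
{
    static void Main()
    {
        var heap = new VersionedHeap();
        long p = heap.Allocate(64);
        heap.Access(p);          // fine: versions match
        heap.Free(p);
        heap.Access(p);          // throws: the stored version no longer matches
    }
}
```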

    Read the article

  • Wondering What to Expect from Master Data Management at OpenWorld 2012? Hold On to Your Seats…

    - by Mala Narasimharajan
    The Countdown begins – just 23 days till OpenWorld hits San Francisco. Oracle OpenWorld 2012 for MDM promises to be chock full of interesting sessions, specifically focused on our customers. We’ve made sure that our sessions are balanced between product information, strategy and real-world stories and, last but certainly not least, lessons learned – straight from our customers. Attendee / Presenters Toolkit Oracle Master Data Management FOCUS ON DOCUMENT – for all MDM sessions at OOW: where and when Oracle Schedule Builder – use search terms such as: MDM, master data, customer hub, product hub and master data management Oracle Music Festival – AMAZING line-up!!  Oracle Customer Appreciation Night – NOT TO BE MISSED!! Oracle OpenWorld LIVE On-Demand Stay on top of all that’s OpenWorld – when it comes to MDM. We’ll be posting not-to-miss sessions and blogs on what our customer lineup will be like at the big show. Look forward to seeing you at OOW – and in case you didn’t get approval to attend, take advantage of our virtual on-demand conference. See you at OpenWorld 2012!

    Read the article

  • GDL Presents: Van Gogh Meets Alan Turing

    GDL Presents: Van Gogh Meets Alan Turing How can art and daily life be joined together? Host Ido Green chats with creators Uri Shaked & Tom Teman about tackling this question with their "Music Room" -- a case study in the power of Android -- and with Emmanuel Witzthum on his project "Dissolving Realities," which aims to connect the virtual environment of the Internet using Google Street View. Host: Ido Green, Developer Advocate Guests: Uri Shaked and Emmanuel Witzthum From: GoogleDevelopers

    Read the article

  • How to comply with this guideline for submitting an application to the Software Center?

    - by George Edison
    I was reading through the Ubuntu Developer Programme Agreement for submitting applications to the Software Center and stumbled across the following clause: 3.1 You must first test Apps you submit to confirm they are compatible with all currently supported versions of Ubuntu (as listed on Canonical's website at the date of submission by you) and your Apps must comply with the Publishing Policy. Does this mean I must install both the 32 and 64 bit versions of Ubuntu 8.04, 10.04, 10.10, 11.04, and 11.10? If so, that's 10 installations of Ubuntu - is that really feasible (even with virtual machines)? Alternatively, does anyone have suggestions for testing the application without actually installing each version? Some sort of chroot tool, perhaps?

    Read the article

  • SQL SERVER – Faster SQL Server Databases and Applications – Power and Control with SafePeak Caching Options

    - by Pinal Dave
    Update: This blog post is based on SafePeak, which is available for free download. Today, I’d like to examine more closely one of my preferred technologies for accelerating SQL Server databases, SafePeak. SafePeak’s software provides a variety of advanced data caching options, techniques and tools to accelerate the performance and scalability of SQL Server databases and applications. I’d like to look more closely at some of these options, as some of these capabilities could help you address lagging database and application performance on your systems. To better understand the available options, it is best to start by understanding the difference between the usual “Basic Caching” and SafePeak’s “Dynamic Caching”. Basic Caching Basic Caching (or stale, static caching) is the ability to put the results from a query into cache for a certain period of time. It is based on TTL, or Time-to-Live, and the cached data stays put no matter what happens to the underlying data. For example, although the actual data can be modified by DML commands (update/insert/delete), the cache will still hold the same obsolete query data. In other words, Basic Caching is really a static/stale cache. As you can tell, this approach has its limitations. Dynamic Caching Dynamic Caching (or non-stale caching) is the ability to put the results from a query into cache while keeping the cache transaction-aware and watching for possible data modifications. The modifications can come as a result of: DML commands (update/insert/delete), indirect modifications due to triggers on other tables, executions of stored procedures with internal DML commands, and complex cases of stored procedures with multiple levels of internal stored procedure logic. When data modification commands arrive, the caching system identifies the related cache items and evicts them from cache immediately. In the dynamic caching option the TTL setting still exists, although its importance is reduced, since the main factor for cache invalidation (or cache eviction) becomes the actual data update commands. Now that we have a basic understanding of the differences between “basic” and “dynamic” caching, let’s dive in deeper. SafePeak: A comprehensive and versatile caching platform SafePeak comes with a wide range of caching options. Some of SafePeak’s caching options are automated, while others require manual configuration. Together they provide a complete solution for IT and data managers to reach excellent performance acceleration and application scalability for a wide range of business cases and applications. Automated caching of SQL Queries: Fully/semi-automated caching of all “read” SQL queries, containing any types of data, including Blobs, XMLs and Texts as well as all other standard data types. SafePeak automatically analyzes the incoming queries, categorizes them into SQL Patterns, identifying directly and indirectly accessed tables, views, functions and stored procedures; Automated caching of Stored Procedures: Fully or semi-automated caching of all “read” stored procedures, including procedures with complex sub-procedure logic as well as procedures with complex dynamic SQL code.
All procedures are analyzed in advance by SafePeak’s Metadata-Learning process and their SQL schemas are parsed, resulting in a full understanding of the underlying code and object dependencies (tables, views, functions, sub-procedures). This enables automated or semi-automated (manually review and activate with a mouse-click) cache activation, with full understanding of the transaction logic for real-time cache invalidation; Transaction aware cache: Automated cache awareness for SQL transactions (SQL and in-procs); Dynamic SQL Caching: Procedures with dynamic SQL are pre-parsed, enabling easy cache configuration, eliminating SQL Server parsing load and delivering high response times even in the most complicated use-cases; Fully Automated Caching: SQL Patterns (including SQL queries and stored procedures) that are categorized by SafePeak as “read and deterministic” are automatically activated for caching; Semi-Automated Caching: SQL Patterns categorized as “read and non-deterministic” are patterns of SQL queries and stored procedures that contain references to non-deterministic functions, like getdate(). Such SQL Patterns are reviewed by the SafePeak administrator and usually most of them are activated manually for caching (point-and-click activation); Fully Dynamic Caching: Automated detection of all dependent tables in each SQL Pattern, with automated real-time eviction of the relevant cache items in the event of “write” commands (a DML or a stored procedure) to one of the relevant tables. This is the default setting; Semi Dynamic Caching: A manual cache configuration option enabling you to reduce the sensitivity of specific SQL Patterns to “write” commands to certain tables/views. An optimization technique relevant for cases when the query data is either known to be static (like archived order details), or when the application's sensitivity to fresh data is not critical and the data can be stale for a short period of time (gaining better performance and reduced load); Scheduled Cache Eviction: A manual cache configuration option enabling scheduled SQL Pattern cache eviction at certain time(s) during the day. A very useful optimization technique when (for example) certain SQL Patterns can be cached but are time sensitive. Example: “select customers whose birthday is today”, a query with a getdate() function, which can and should be cached, but whose data stays relevant only until 00:00 (midnight); Parsing Exceptions Management: Stored procedures that were not fully parsed by SafePeak (due to overly complex dynamic SQL or unfamiliar syntax) are flagged as “Dynamic Objects” with the highest transaction safety settings (such as: full global cache eviction, DDL Check = lock cache and check for schema changes, and more). The SafePeak solution points the user to the Dynamic Objects that are important for cache effectiveness and provides an easy configuration interface, allowing you to improve cache hits and reduce global cache evictions. Usually this is the first configuration step in a deployment; Overriding Settings of Stored Procedures: Override the settings of stored procedures (or other object types) for cache optimization. For example, if a stored procedure SP1 has an “insert” into table T1, it will not be allowed to be cached. However, it is possible that T1 is just a “logging or instrumentation” table left by developers.
By overriding the settings, a user can allow caching of the problematic stored procedure; Advanced Cache Warm-Up: Creating an XML-based list of queries and stored procedures (with lists of parameters) for periodic automated pre-fetching and caching. An advanced tool allowing you to handle rarer but very performance-sensitive queries by pre-fetching them into cache, delivering high performance for users’ data access; Configuration Driven by Deep SQL Analytics: All SQL queries are continuously logged and analyzed, providing users with deep SQL analytics and performance monitoring. Reduce troubleshooting from days to minutes with a heat-map of database objects and SQL Patterns. The performance-driven configuration helps you focus on the most important settings that bring you the highest performance gains. Use of SafePeak SQL Analytics allows continuous performance monitoring and analysis, and easy identification of bottlenecks in both real-time and historical data; Cloud Ready: Available for instant deployment on Amazon Web Services (AWS). As you can see, there are many options for configuring SafePeak’s SQL Server database and application acceleration caching technology to best fit a lot of situations. If you’re not familiar with their technology, they offer free-trial software you can download that comes with a free “help session” to help get you started. You can access the free trial here. Also, SafePeak is available to use on the Amazon cloud. Reference: Pinal Dave (http://blog.sqlauthority.com) Filed under: PostADay, SQL, SQL Authority, SQL Performance, SQL Query, SQL Server, SQL Tips and Tricks, T SQL
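    To make the basic-versus-dynamic distinction concrete, here is a small C# sketch (illustrative only, not SafePeak's code or API) contrasting a TTL-only cache with one that also maps each cached query to the tables it touches and evicts those entries when a related table is written to:

```csharp
// Illustration only: a TTL-only "basic" cache can serve stale results after an
// UPDATE, while a "dynamic" cache also tracks which tables each cached query
// depends on and evicts the matching entries immediately on writes.
using System;
using System.Collections.Concurrent;

class QueryCache
{
    private record Entry(object Result, DateTime Expires, string[] Tables);

    private readonly ConcurrentDictionary<string, Entry> cache = new();
    private readonly TimeSpan ttl = TimeSpan.FromMinutes(5);

    public object GetOrAdd(string sql, string[] tables, Func<object> execute)
    {
        if (cache.TryGetValue(sql, out var hit) && hit.Expires > DateTime.UtcNow)
            return hit.Result;                                   // cache hit
        object result = execute();                               // run against the database
        cache[sql] = new Entry(result, DateTime.UtcNow + ttl, tables);
        return result;
    }

    // "Basic" caching stops above: entries live until the TTL expires, even if
    // the underlying rows changed. "Dynamic" caching adds this eviction step:
    public void OnWrite(string table)
    {
        foreach (var pair in cache)
            if (Array.Exists(pair.Value.Tables,
                    t => t.Equals(table, StringComparison.OrdinalIgnoreCase)))
                cache.TryRemove(pair.Key, out _);                // immediate eviction
    }
}

class Demo
{
    static void Main()
    {
        var cache = new QueryCache();
        string sql = "SELECT COUNT(*) FROM Orders";
        Console.WriteLine(cache.GetOrAdd(sql, new[] { "Orders" }, () => 42));  // miss: executes
        Console.WriteLine(cache.GetOrAdd(sql, new[] { "Orders" }, () => 43));  // hit: still 42
        cache.OnWrite("Orders");                                               // dynamic eviction
        Console.WriteLine(cache.GetOrAdd(sql, new[] { "Orders" }, () => 44));  // miss again: 44
    }
}
```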

    Read the article

  • TDE Tablespace Encryption 11.2.0.1 Certified with EBS 11i

    - by Steven Chan
    Oracle Advanced Security is an optionally licensed Oracle 11g Database add-on. Oracle Advanced Security Transparent Data Encryption (TDE) offers two different features: column encryption and tablespace encryption. TDE Tablespace Encryption 11.2.0.1 is now certified with Oracle E-Business Suite Release 11i. What is Transparent Data Encryption (TDE)? Oracle Advanced Security Transparent Data Encryption (TDE) allows you to protect data at rest. TDE helps address privacy and PCI requirements by encrypting personally identifiable information (PII) such as Social Security numbers and credit card numbers. TDE is completely transparent to existing applications, with no triggers, views or other application changes required. Data is transparently encrypted when written to disk and transparently decrypted after an application user has successfully authenticated and passed all authorization checks. Authorization checks include verifying the user has the necessary select and update privileges on the application table and checking Database Vault, Label Security and Virtual Private Database enforcement policies.

    Read the article

  • Citrix acquires German SaaS company Netviewer AG to strengthen its virtual and online offerings

    Citrix acquires German SaaS company Netviewer AG to strengthen its virtual and online offerings Desktop virtualization specialist Citrix has just announced that it has acquired Netviewer AG, a German vendor of SaaS-style online collaboration services and tools, and will fold it into its Citrix Online division. The newly absorbed company brings its on-demand expertise and its 18,000 customers in Europe. The transaction should allow Citrix to expand its catalog of online IT services, for example in videoconferencing, as well as its virtual computing offerings. "Collaboration and IT services delivered as SaaS have been the key to our growth...

    Read the article

  • VirtualBox

    - by DesigningCode
    I was wanting to play around with something in a VM the other day.  I was curious what was available for free, if anything, for Windows.   I quickly came across VirtualBox (http://www.virtualbox.org/).   Downloaded, installed, no problem!  Works really nicely.   It was commercial software (by Sun, now Oracle) that turned open source.   In terms of a license, it says: In summary, the VirtualBox PUEL allows you to use VirtualBox free of charge for personal use or, alternatively, for product evaluation. An interesting feature it has is built-in RDP, which is useful if you have a guest OS that doesn’t support RDP.   Speaking of RDP… which I will cover in my next blog post… I learnt something REALLY useful the other day.

    Read the article

  • Understanding C# async / await (2) Awaitable / Awaiter Pattern

    - by Dixin
    What is awaitable Part 1 shows that any Task is awaitable. Actually there are other awaitable types. Here is an example: Task<int> task = new Task<int>(() => 0); int result = await task.ConfigureAwait(false); // Returns a ConfiguredTaskAwaitable<TResult>. The returned ConfiguredTaskAwaitable<TResult> struct is awaitable. And it is not Task at all: public struct ConfiguredTaskAwaitable<TResult> { private readonly ConfiguredTaskAwaiter m_configuredTaskAwaiter; internal ConfiguredTaskAwaitable(Task<TResult> task, bool continueOnCapturedContext) { this.m_configuredTaskAwaiter = new ConfiguredTaskAwaiter(task, continueOnCapturedContext); } public ConfiguredTaskAwaiter GetAwaiter() { return this.m_configuredTaskAwaiter; } } It has one GetAwaiter() method. Actually in part 1 we have seen that Task has a GetAwaiter() method too: public class Task { public TaskAwaiter GetAwaiter() { return new TaskAwaiter(this); } } public class Task<TResult> : Task { public new TaskAwaiter<TResult> GetAwaiter() { return new TaskAwaiter<TResult>(this); } } Task.Yield() is another example: await Task.Yield(); // Returns a YieldAwaitable. The returned YieldAwaitable is not Task either: public struct YieldAwaitable { public YieldAwaiter GetAwaiter() { return default(YieldAwaiter); } } Again, it just has one GetAwaiter() method. In this article, we will look at what is awaitable. The awaitable / awaiter pattern By observing different awaitable / awaiter types, we can tell that an object is awaitable if: it has a GetAwaiter() method (instance method or extension method); its GetAwaiter() method returns an awaiter. An object is an awaiter if: it implements the INotifyCompletion or ICriticalNotifyCompletion interface; it has an IsCompleted property, which has a getter and returns a Boolean; it has a GetResult() method, which returns void or a result. This awaitable / awaiter pattern is very similar to the iterable / iterator pattern. Here are the interface definitions of iterable / iterator: public interface IEnumerable { IEnumerator GetEnumerator(); } public interface IEnumerator { object Current { get; } bool MoveNext(); void Reset(); } public interface IEnumerable<out T> : IEnumerable { IEnumerator<T> GetEnumerator(); } public interface IEnumerator<out T> : IDisposable, IEnumerator { T Current { get; } } In case you are not familiar with the out keyword, please find the explanation in Understanding C# Covariance And Contravariance (2) Interfaces. The “missing” IAwaitable / IAwaiter interfaces Similar to the IEnumerable and IEnumerator interfaces, awaitable / awaiter can be visualized by IAwaitable / IAwaiter interfaces too. This is the non-generic version: public interface IAwaitable { IAwaiter GetAwaiter(); } public interface IAwaiter : INotifyCompletion // or ICriticalNotifyCompletion { // INotifyCompletion has one method: void OnCompleted(Action continuation); // ICriticalNotifyCompletion implements INotifyCompletion, // also has this method: void UnsafeOnCompleted(Action continuation); bool IsCompleted { get; } void GetResult(); } Please notice GetResult() returns void here. Task.GetAwaiter() / TaskAwaiter.GetResult() is such a case. And this is the generic version: public interface IAwaitable<out TResult> { IAwaiter<TResult> GetAwaiter(); } public interface IAwaiter<out TResult> : INotifyCompletion // or ICriticalNotifyCompletion { bool IsCompleted { get; } TResult GetResult(); } Here the only difference is that GetResult() returns a result. Task<TResult>.GetAwaiter() / TaskAwaiter<TResult>.GetResult() is such a case.
Please notice .NET does not define these IAwaitable / IAwaiter interfaces at all. As an API designer, I guess the reason is that an IAwaitable interface would constrain GetAwaiter() to be an instance method. Actually C# supports both GetAwaiter() instance methods and GetAwaiter() extension methods. Here I use these interfaces only to better visualize what is awaitable / awaiter. Now, if we look at the above ConfiguredTaskAwaitable / ConfiguredTaskAwaiter, YieldAwaitable / YieldAwaiter, Task / TaskAwaiter pairs again, they all “implicitly” implement these “missing” IAwaitable / IAwaiter interfaces. In the next part, we will see how to implement awaitable / awaiter. Await any function / action In C#, await cannot be used with a lambda. This code: int result = await (() => 0); will cause a compiler error: Cannot await 'lambda expression' This is easy to understand because this lambda expression (() => 0) may be a function or an expression tree. Obviously we mean a function here, and we can tell the compiler in this way: int result = await new Func<int>(() => 0); It causes a different error: Cannot await 'System.Func<int>' OK, now the compiler is complaining about the type instead of the syntax. With an understanding of the awaitable / awaiter pattern, the Func<TResult> type can easily be made awaitable. GetAwaiter() instance method, using IAwaitable / IAwaiter interfaces First, similar to the above ConfiguredTaskAwaitable<TResult>, a FuncAwaitable<TResult> can be implemented to wrap Func<TResult>: internal struct FuncAwaitable<TResult> : IAwaitable<TResult> { private readonly Func<TResult> function; public FuncAwaitable(Func<TResult> function) { this.function = function; } public IAwaiter<TResult> GetAwaiter() { return new FuncAwaiter<TResult>(this.function); } } The FuncAwaitable<TResult> wrapper is used to implement IAwaitable<TResult>, so it has one instance method, GetAwaiter(), which returns an IAwaiter<TResult> that wraps the Func<TResult> too. FuncAwaiter<TResult> is used to implement IAwaiter<TResult>: public struct FuncAwaiter<TResult> : IAwaiter<TResult> { private readonly Task<TResult> task; public FuncAwaiter(Func<TResult> function) { this.task = new Task<TResult>(function); this.task.Start(); } bool IAwaiter<TResult>.IsCompleted { get { return this.task.IsCompleted; } } TResult IAwaiter<TResult>.GetResult() { return this.task.Result; } void INotifyCompletion.OnCompleted(Action continuation) { new Task(continuation).Start(); } } Now a function can be awaited in this way: int result = await new FuncAwaitable<int>(() => 0); GetAwaiter() extension method As IAwaitable shows, all that an awaitable needs is just a GetAwaiter() method. In the above code, FuncAwaitable<TResult> is created as a wrapper of Func<TResult> and implements IAwaitable<TResult>, so that there is a GetAwaiter() instance method. If a GetAwaiter() extension method can be defined for Func<TResult>, then FuncAwaitable<TResult> is no longer needed: public static class FuncExtensions { public static IAwaiter<TResult> GetAwaiter<TResult>(this Func<TResult> function) { return new FuncAwaiter<TResult>(function); } } So a Func<TResult> function can be directly awaited: int result = await new Func<int>(() => 0); Using the existing awaitable / awaiter - Task / TaskAwaiter Remember the most frequently used awaitable / awaiter - Task / TaskAwaiter.
With Task / TaskAwaiter, FuncAwaitable / FuncAwaiter are no longer needed: public static class FuncExtensions { public static TaskAwaiter<TResult> GetAwaiter<TResult>(this Func<TResult> function) { Task<TResult> task = new Task<TResult>(function); task.Start(); return task.GetAwaiter(); // Returns a TaskAwaiter<TResult>. } } Similarly, with this extension method: public static class ActionExtensions { public static TaskAwaiter GetAwaiter(this Action action) { Task task = new Task(action); task.Start(); return task.GetAwaiter(); // Returns a TaskAwaiter. } } an action can be awaited as well: await new Action(() => { }); Now any function / action can be awaited: await new Action(() => HelperMethods.IO()); // or: await new Action(HelperMethods.IO); If the function / action has parameter(s), a closure can be used: int arg0 = 0; int arg1 = 1; await new Action(() => HelperMethods.IO(arg0, arg1)); Using Task.Run() The above code is used to demonstrate how awaitable / awaiter can be implemented. Because it is a common scenario to await a function / action, .NET provides a built-in API, Task.Run(): public class Task2 { public static Task Run(Action action) { // The implementation is similar to: Task task = new Task(action); task.Start(); return task; } public static Task<TResult> Run<TResult>(Func<TResult> function) { // The implementation is similar to: Task<TResult> task = new Task<TResult>(function); task.Start(); return task; } } In reality, this is how we await a function: int result = await Task.Run(() => HelperMethods.IO(arg0, arg1)); and await an action: await Task.Run(() => HelperMethods.IO());
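    To tie the pieces of this post together, here is a self-contained file that uses the GetAwaiter() extension-method approach, built on Task.Run() as in the last section rather than manually starting a Task. It assumes C# 7.1 or later for the async Main entry point, and the lambda is just a placeholder workload:

```csharp
// Minimal, compilable recap of the extension-method approach: Task.Run turns the
// Func<TResult> into a Task<TResult>, and its TaskAwaiter<TResult> is what the
// await expression consumes.
using System;
using System.Runtime.CompilerServices;
using System.Threading.Tasks;

public static class FuncExtensions
{
    public static TaskAwaiter<TResult> GetAwaiter<TResult>(this Func<TResult> function)
    {
        return Task.Run(function).GetAwaiter();
    }
}

public static class Program
{
    public static async Task Main()
    {
        // The function runs on a thread-pool thread; await resumes when it finishes.
        int result = await new Func<int>(() => 40 + 2);
        Console.WriteLine(result); // 42
    }
}
```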

    Read the article

  • Podcast Show Notes: William Ulrich and Neal McWhorter on Business Architecture

    - by Bob Rhubart
    The latest ArchBeat podcast program features a four-part conversation with William Ulrich and Neal McWhorter, the authors of Business Architecture: The Art and Practice of Business Transformation, available from Meghan-Kiffer Press. Listen to Part 1 Bill and Neal cover the basics and discuss the effects of the lack of business architecture on organizations. Listen to Part 2 (Jan 19) What really happens to the billions of dollars annually invested in IT. Listen to Part 3 (Jan 26) Why the IT and business sides of many organizations can’t play nice. Listen to Part 4 (Feb 2) How IT architects and business architects can work together to get the ship back on course and keep it there. Connect William Ulrich Website | LinkedIn | Business Architecture Guild Neal McWhorter Website | LinkedIn | Business Architecture Group on OMG Coming Soon Bob Hensle, Director, Oracle Enterprise Architecture Group, discusses the recently launched IT Solutions from Oracle (ITSO) library of documents. Excerpts from a recent OTN Architect Community Virtual Meet-up. Tags: business architecture, enterprise architecture, arch2arch, archbeat, podcast, business transformation, oracle, oracle technology network

    Read the article

  • Online Multiplayer Game Architecture [on hold]

    - by Eric
    I am just starting to research online multiplayer game development and I have a high-level architectural question regarding how online multiplayer games function. I have server-side and client-side programming experience, and I understand how AJAX-esque transfer protocols operate. What I don't understand yet is how online multiplayer fits into all of that. For example, take an online multiplayer Tetris game. Would both players have the entire Tetris game built out on their client side and then get pushed "moves" from the other player via some AJAX-esque mechanism, in which case each client would have to be constantly listening via JavaScript for inbound "moves" and update the client appropriately? Or would each client build out the aesthetics and run a virtual server per game to which each client connects, and thus pull and push commands in real time via something like WebSockets? I apologize if this question is too high-level and general, but I couldn't find anything online that offered this high-level of a perspective on the topic.
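    For what it's worth, the second option usually takes the shape of a small authoritative or relay server that every client connects to, with the server rebroadcasting each received move to the other players. Below is a minimal, illustrative C# relay sketch over plain TCP (a WebSocket version would follow the same pattern); the port number and the one-move-per-line protocol are arbitrary choices made for this example, not a standard:

```csharp
// Minimal relay-server sketch: each client sends one move per line, and the
// server rebroadcasts it to every other connected client. A real game server
// would also validate moves and keep authoritative game state.
using System;
using System.Collections.Concurrent;
using System.IO;
using System.Net;
using System.Net.Sockets;
using System.Threading.Tasks;

class RelayServer
{
    static readonly ConcurrentDictionary<Guid, StreamWriter> Clients = new();

    static async Task Main()
    {
        var listener = new TcpListener(IPAddress.Any, 5000);
        listener.Start();
        Console.WriteLine("Relay listening on port 5000");
        while (true)
        {
            TcpClient client = await listener.AcceptTcpClientAsync();
            _ = HandleClientAsync(client);            // fire-and-forget per connection
        }
    }

    static async Task HandleClientAsync(TcpClient client)
    {
        var id = Guid.NewGuid();
        using (client)
        {
            var stream = client.GetStream();
            using var reader = new StreamReader(stream);
            await using var writer = new StreamWriter(stream) { AutoFlush = true };
            Clients[id] = writer;
            try
            {
                string line;
                while ((line = await reader.ReadLineAsync()) != null)
                {
                    // Broadcast the move to every other player.
                    foreach (var pair in Clients)
                        if (pair.Key != id)
                            await pair.Value.WriteLineAsync(line);
                }
            }
            finally
            {
                Clients.TryRemove(id, out _);
            }
        }
    }
}
```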

    Read the article

  • Mocking successive calls of similar type via sequential mocking

    - by mehfuzh
    In this post, I show how you can benefit from the sequential mocking feature [in JustMock] for setting up expectations with successive calls of the same type.  To start, let’s first consider the following dummy database and entity class. public class Person {     public virtual string Name { get; set; }     public virtual int Age { get; set; } }   public interface IDataBase {     T Get<T>(); } Now, our test goal is to return a different entity for successive calls on IDataBase.Get<T>(). By default, the behavior in JustMock is override, which is similar to other popular mocking tools. By override it means that the tool will always consider the latest user setup. Therefore, the first example will return the latest entity every time and will fail in line #12: Person person1 = new Person { Age = 30, Name = "Kosev" }; Person person2 = new Person { Age = 80, Name = "Mihail" };   var database = Mock.Create<IDataBase>();   Queue<Person> queue = new Queue<Person>();   Mock.Arrange(() => database.Get<Person>()).Returns(() => queue.Dequeue()); Mock.Arrange(() => database.Get<Person>()).Returns(person2);   // this will fail Assert.Equal(person1.GetHashCode(), database.Get<Person>().GetHashCode());   Assert.Equal(person2.GetHashCode(), database.Get<Person>().GetHashCode()); We can solve it the following way, using a Queue that dequeues an item on each call: Person person1 = new Person { Age = 30, Name = "Kosev" }; Person person2 = new Person { Age = 80, Name = "Mihail" };   var database = Mock.Create<IDataBase>();   Queue<Person> queue = new Queue<Person>();   queue.Enqueue(person1); queue.Enqueue(person2);   Mock.Arrange(() => database.Get<Person>()).Returns(() => queue.Dequeue());   Assert.Equal(person1.GetHashCode(), database.Get<Person>().GetHashCode()); Assert.Equal(person2.GetHashCode(), database.Get<Person>().GetHashCode()); This will ensure that the right entity is returned, but it is not an elegant solution. So, in JustMock we introduced a new option that lets you set up your expectations sequentially. Like: Person person1 = new Person { Age = 30, Name = "Kosev" }; Person person2 = new Person { Age = 80, Name = "Mihail" };   var database = Mock.Create<IDataBase>();   Mock.Arrange(() => database.Get<Person>()).Returns(person1).InSequence(); Mock.Arrange(() => database.Get<Person>()).Returns(person2).InSequence();   Assert.Equal(person1.GetHashCode(), database.Get<Person>().GetHashCode()); Assert.Equal(person2.GetHashCode(), database.Get<Person>().GetHashCode()); The “InSequence” modifier tells the mocking tool to return the expected results in the order they were specified by the user. The solution, though pretty simple, is neat (to me) and much simpler than using a collection to solve this type of case. Hope that helps. P.S. The example shown in my blog uses an interface, doesn't require a profiler, and you can even use a notepad and build it referencing Telerik.JustMock.dll, run it with GUI tools, and it will work. But this feature also applies to concrete methods (which require the JustMock profiler) and can be implemented for more complex scenarios.

    Read the article

  • A WPF Image Button

    - by psheriff
    Instead of a normal button with words, sometimes you want a button that is just graphical. Yes, you can put an Image control in the Content of a normal Button control, but you still have the button outline, and trying to change the style can be rather difficult. Instead I like creating a user control that simulates a button, but just accepts an image. Figure 1 shows an example of three of these custom user controls to represent minimize, maximize and close buttons for a borderless window. Notice the highlighted image button has a gray rectangle around it. You will learn how to highlight using the VisualStateManager in this blog post.Figure 1: Creating a custom user control for things like image buttons gives you complete control over the look and feel.I would suggest you read my previous blog post on creating a custom Button user control as that is a good primer for what I am going to expand upon in this blog post. You can find this blog post at http://weblogs.asp.net/psheriff/archive/2012/08/10/create-your-own-wpf-button-user-controls.aspx.The User ControlThe XAML for this image button user control contains just a few controls, plus a Visual State Manager. The basic outline of the user control is shown below:<Border Grid.Row="0"        Name="borMain"        Style="{StaticResource pdsaButtonImageBorderStyle}"        MouseEnter="borMain_MouseEnter"        MouseLeave="borMain_MouseLeave"        MouseLeftButtonDown="borMain_MouseLeftButtonDown">  <VisualStateManager.VisualStateGroups>  ... MORE XAML HERE ...  </VisualStateManager.VisualStateGroups>  <Image Style="{StaticResource pdsaButtonImageImageStyle}"         Visibility="{Binding Path=Visibility}"         Source="{Binding Path=ImageUri}"         ToolTip="{Binding Path=ToolTip}" /></Border>There is a Border control named borMain and a single Image control in this user control. That is all that is needed to display the buttons shown in Figure 1. The definition for this user control is in a DLL named PDSA.WPF. The Style definitions for both the Border and the Image controls are contained in a resource dictionary names PDSAButtonStyles.xaml. Using a resource dictionary allows you to create a few different resource dictionaries, each with a different theme for the buttons.The Visual State ManagerTo display the highlight around the button as your mouse moves over the control, you will need to add a Visual State Manager group. Two different states are needed; MouseEnter and MouseLeave. In the MouseEnter you create a ColorAnimation to modify the BorderBrush color of the Border control. You specify the color to animate as “DarkGray”. You set the duration to less than a second. The TargetName of this storyboard is the name of the Border control “borMain” and since we are specifying a single color, you need to set the TargetProperty to “BorderBrush.Color”. You do not need any storyboard for the MouseLeave state. 
Leaving this VisualState empty tells the Visual State Manager to put everything back the way it was before the MouseEnter event.<VisualStateManager.VisualStateGroups>  <VisualStateGroup Name="MouseStates">    <VisualState Name="MouseEnter">      <Storyboard>        <ColorAnimation             To="DarkGray"            Duration="0:0:00.1"            Storyboard.TargetName="borMain"            Storyboard.TargetProperty="BorderBrush.Color" />      </Storyboard>    </VisualState>    <VisualState Name="MouseLeave" />  </VisualStateGroup></VisualStateManager.VisualStateGroups>Writing the Mouse EventsTo trigger the Visual State Manager to run its storyboard in response to the specified event, you need to respond to the MouseEnter event on the Border control. In the code behind for this event call the GoToElementState() method of the VisualStateManager class exposed by the user control. To this method you will pass in the target element (“borMain”) and the state (“MouseEnter”). The VisualStateManager will then run the storyboard contained within the defined state in the XAML.private void borMain_MouseEnter(object sender,  MouseEventArgs e){  VisualStateManager.GoToElementState(borMain,    "MouseEnter", true);}You also need to respond to the MouseLeave event. In this event you call the VisualStateManager as well, but specify “MouseLeave” as the state to go to.private void borMain_MouseLeave(object sender, MouseEventArgs e){  VisualStateManager.GoToElementState(borMain,     "MouseLeave", true);}The Resource DictionaryBelow is the definition of the PDSAButtonStyles.xaml resource dictionary file contained in the PDSA.WPF DLL. This dictionary can be used as the default look and feel for any image button control you add to a window. <ResourceDictionary  ... >  <!-- ************************* -->  <!-- ** Image Button Styles ** -->  <!-- ************************* -->  <!-- Image/Text Button Border -->  <Style TargetType="Border"         x:Key="pdsaButtonImageBorderStyle">    <Setter Property="Margin"            Value="4" />    <Setter Property="Padding"            Value="2" />    <Setter Property="BorderBrush"            Value="Transparent" />    <Setter Property="BorderThickness"            Value="1" />    <Setter Property="VerticalAlignment"            Value="Top" />    <Setter Property="HorizontalAlignment"            Value="Left" />    <Setter Property="Background"            Value="Transparent" />  </Style>  <!-- Image Button -->  <Style TargetType="Image"         x:Key="pdsaButtonImageImageStyle">    <Setter Property="Width"            Value="40" />    <Setter Property="Margin"            Value="6" />    <Setter Property="VerticalAlignment"            Value="Top" />    <Setter Property="HorizontalAlignment"            Value="Left" />  </Style></ResourceDictionary>Using the Button ControlOnce you make a reference to the PDSA.WPF DLL from your WPF application you will see the “PDSAucButtonImage” control appear in your Toolbox. Drag and drop the button onto a Window or User Control in your application. 
I have not referenced the PDSAButtonStyles.xaml file within the control itself so you do need to add a reference to this resource dictionary somewhere in your application such as in the App.xaml.<Application.Resources>  <ResourceDictionary>    <ResourceDictionary.MergedDictionaries>      <ResourceDictionary         Source="/PDSA.WPF;component/PDSAButtonStyles.xaml" />    </ResourceDictionary.MergedDictionaries>  </ResourceDictionary></Application.Resources>This will give your buttons a default look and feel unless you override that dictionary on a specific Window or User Control or on an individual button. After you have given a global style to your application and you drag your image button onto a window, the following will appear in your XAML window.<my:PDSAucButtonImage ... />There will be some other attributes set on the above XAML, but you simply need to set the x:Name, the ToolTip and ImageUri properties. You will also want to respond to the Click event procedure in order to associate an action with clicking on this button. In the sample code you download for this blog post you will find the declaration of the Minimize button to be the following:<my:PDSAucButtonImage       x:Name="btnMinimize"       Click="btnMinimize_Click"       ToolTip="Minimize Application"       ImageUri="/PDSA.WPF;component/Images/Minus.png" />The ImageUri property is a dependency property in the PDSAucButtonImage user control. The x:Name and the ToolTip we get for free. You have to create the Click event procedure yourself. This is also created in the PDSAucButtonImage user control as follows:private void borMain_MouseLeftButtonDown(object sender,  MouseButtonEventArgs e){  RaiseClick(e);}public delegate void ClickEventHandler(object sender,  RoutedEventArgs e);public event ClickEventHandler Click;protected void RaiseClick(RoutedEventArgs e){  if (null != Click)    Click(this, e);}Since a Border control does not have a Click event you will create one by using the MouseLeftButtonDown on the border to fire an event you create called “Click”.SummaryCreating your own image button control can be done in a variety of ways. In this blog post I showed you how to create a custom user control and simulate a button using a Border and Image control. With just a little bit of code to respond to the MouseLeftButtonDown event on the border you can raise your own Click event. Dependency properties, such as ImageUri, allow you to set attributes on your custom user control. Feel free to expand on this button by adding additional dependency properties, change the resource dictionary, and even the animation to make this button look and act like you want.NOTE: You can download the sample code for this article by visiting my website at http://www.pdsa.com/downloads. Select “Tips & Tricks”, then select “A WPF Image  Button” from the drop down list.

    Read the article

  • How To Watch Netflix On Ubuntu with the Netflix Desktop App

    - by Chris Hoffman
    We previously covered watching Netflix on Linux and concluded that using a virtual machine was your best bet. There’s now an even better solution – a “Netflix Desktop” app that allows you to watch Netflix on Linux. This app is actually a package containing a patched version of Wine, the Windows build of Firefox, Microsoft Silverlight, and some tweaks to make it all work together. Previously, Silverlight would not run properly in Wine. Note: While this worked pretty well for us, it’s an unofficial solution that relies on Wine. Netflix doesn’t officially support it.

    Read the article

  • Are there any "traps" in cloning a VirtualBox VM for concurrent use on the same Host/LAN?

    - by fred.bear
    With Ubuntu as the Host, I want to run two similar/identical(?) instances of a VirtualBox Guest on the same Host, or perhaps on another Host which is on the same LAN... I have set up a Guest as a "base" for the two clones. I have exported it as an ovf appliance. I've imported this "base" guest OS back into VirtualBox, with a unique name and .vdk ... and I have started them both on the same Host, and all seems okay, but I do wonder if I have missed some significant point. ... e.g. Is the virtual NIC the same? That would throw the LAN into confusion (I think)... and what about UUIDs? I haven't actually tried 2 clones together yet... only the original and one clone, but I haven't gone beyond a simple startup ...

    Read the article

  • Hyper-V for Developers - presentation from London .NET Users and VBUG Bracknell

    - by Liam Westley
    Thanks to both the London .NET User Group and VBUG Bracknell for allowing me to present my Hyper-V for Developers talk last week.  A weekend at DDD Scotland followed by two user group presentations means I'm a bit late getting the presentations uploaded to the blog, so many apologies if you've been waiting.   LDNUG - www.tigernews.co.uk/blog-twickers/LDNUG-HyperV4Devs.zip   VBUG - www.tigernews.co.uk/blog-twickers/VBUG-HyperV4Devs.zip Also, at VBUG Bracknell I was asked if you could configure a Hyper-V server to use wireless networking (which might be useful if you have a laptop for demonstrations).  Well, here's the post from Ben Armstrong (Virtual PC Guy) which details how that can be configured:   http://blogs.msdn.com/virtual_pc_guy/archive/2008/01/09/using-hyper-v-with-a-wireless-network-adapter.aspx ... and it's also detailed on the TechNet wiki as part of running Hyper-V on a laptop:   social.technet.microsoft.com/wiki/contents/articles/hyper-v-how-to-run-hyper-v-on-a-laptop.aspx

    Read the article

  • What units does the ntp drift file use?

    - by arielf
    When the ntpd daemon is running, the file /var/lib/ntp/ntp.drift gets updated periodically. Example: 17:20 hostname 118 ~> ls -l /var/lib/ntp/ntp.drift -rw-r--r-- 1 ntp ntp 7 May 20 16:46 /var/lib/ntp/ntp.drift # So it looks like it was last updated ~34 minutes ago The file has one number in it; for example, looking at 4 virtual hosts, I find these values, respectively: -22.086 -10.214 -13.669 6.045 I assume these are seconds per day(?), but I'm not sure. man ntpd mentions a different drift file, /etc/ntp.drift, which doesn't seem to exist. The man page doesn't explain what units are being used for the drift. Questions: Is /etc/ntp.drift actually /var/lib/ntp/ntp.drift on Ubuntu? What units is the drift expressed in? Thanks!
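    For reference, the number ntpd writes to the drift file is the clock's frequency error in parts per million (PPM), not seconds per day. A quick sketch of converting the values quoted above into the equivalent seconds of drift per day:

```csharp
// Sketch: converting an ntp.drift value (frequency error in parts per million)
// into the equivalent clock drift in seconds per day (1 ppm = 86400e-6 s/day).
using System;

class DriftConversion
{
    static void Main()
    {
        double[] driftPpm = { -22.086, -10.214, -13.669, 6.045 };   // values from the question
        const double SecondsPerDay = 86_400;
        foreach (double ppm in driftPpm)
            Console.WriteLine($"{ppm,8:F3} ppm  ->  {ppm * SecondsPerDay / 1_000_000,7:F2} s/day");
    }
}
```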

    Read the article

  • The Data Scientist

    - by BuckWoody
    A new term - well, perhaps not that new - has come up and I’m actually very excited about it. The term is Data Scientist, and since it’s new, it’s fairly undefined. I’ll explain what I think it means, and why I’m excited about it. In general, I’ve found the term deals at its most basic with analyzing data. Of course, we all do that, and the term itself in that definition is redundant. There is no science that I know of that does not work with analyzing lots of data. But the term seems to refer to more than the common practices of looking at data visually, putting it in a spreadsheet or report, or even using simple coding to examine data sets. The term Data Scientist (as far as I can make out this early in its use) is someone who has a strong understanding of data sources, relevance (statistical and otherwise) and processing methods as well as front-end displays of large sets of complicated data. Some - but not all - Business Intelligence professionals have these skills. In other cases, senior developers, database architects or others fill these needs, but in my experience, many lack the strong mathematical skills needed to make these choices properly. I’ve divided the knowledge base for someone that would wear this title into three large segments. It remains to be seen if a given Data Scientist would be responsible for knowing all these areas or would specialize. There are pretty high requirements on the math side, specifically in graduate-degree level statistics, but in my experience a company will only have a few of these folks, so they are expected to know quite a bit in each of these areas. Persistence The first area is finding, cleaning and storing the data. In some cases, no cleaning is done prior to storage - it’s just identified and the cleansing is done in a later step. This area is where the professional would be able to tell if a particular data set should be stored in a Relational Database Management System (RDBMS), across a set of key/value pair storage (NoSQL) or in a file system like HDFS (part of the Hadoop landscape) or other methods. Or do you examine the stream of data without storing it in another system at all? This is an important decision - it’s a foundation choice that deals not only with the expense of purchasing systems or even using Cloud Computing (PaaS, SaaS or IaaS) to source it, but also the skillsets and other resources needed to care for and feed the system for a long time. The Data Scientist sets something into motion that will probably outlast his or her career at a company or organization. Often these choices are made by senior developers, database administrators or architects in a company. But sometimes each of these has a certain bias towards making a decision one way or another. The Data Scientist would examine these choices in light of the data itself, starting perhaps even before the business requirements are created. The business may not even be aware of all the strategic and tactical data sources that they have access to. Processing Once the decision is made to store the data, the next set of decisions is based around how to process the data. An RDBMS scales well to a certain level, and provides a high degree of ACID compliance as well as offering a well-known set-based language to work with this data. In other cases, scale should be spread among multiple nodes (as in the case of Hadoop landscapes or NoSQL offerings) or even across a Cloud provider like Windows Azure Table Storage.
In fact, in many cases - most of the ones I’m dealing with lately - the data should be split among multiple types of processing environments. This is a newer idea. Many data professionals simply pick a methodology (RDBMS with Star Schemas, NoSQL, etc.) and put all data there, regardless of its shape, processing needs and so on. A Data Scientist is familiar not only with the various processing methods, but how they work, so that they can choose the right one for a given need. This is a huge time commitment, hence the need for a dedicated title like this one. Presentation This is where the need for a Data Scientist is most often already being filled, sometimes with more or less success. The latest Business Intelligence systems are quite good at allowing you to create amazing graphics - but it’s the data behind the graphics that is the most important component of truly effective displays. This is where the mathematics requirement of the Data Scientist title is the most unforgiving. In fact, someone without a good foundation in statistics is not a good candidate for creating reports. Even a basic level of statistics can be dangerous. Anyone who works in analyzing data will tell you that there are multiple errors possible when data just seems right - and basic statistics bears out that you’re on the right track - that are only solvable when you understand why the statistical formula works the way it does. And there are lots of ways of presenting data. Sometimes all you need is a “yes” or “no” answer that can only come after heavy analysis work. In that case, a simple e-mail might be all the reporting you need. In others, complex relationships and multiple components require a deep understanding of the various graphical methods of presenting data. Knowing which kind of chart, color, graphic or shape conveys a particular datum best is essential knowledge for the Data Scientist. Why I’m excited I love this area of study. I like math, stats, and computing technologies, but it goes beyond that. I love what data can do - how it can help an organization. I’ve been fortunate enough in my professional career these past two decades to work with lots of folks who perform this role at companies from aerospace to medical firms, from manufacturing to retail. Interestingly, the size of the company really isn’t germane here. I worked with one very small bio-tech (cryogenics) company that worked deeply with analysis of complex interrelated data. So watch this space. No, I’m not leaving Azure or distributed computing or Microsoft. In fact, I think I’m perfectly situated to investigate this role further. We have a huge set of tools, from RDBMS to Hadoop, to allow me to explore. And I’m happy to share what I learn along the way.

    Read the article

  • Virtualhost entries gets over-written when apache httpd.conf is rebuilt

    - by Amitabh
    Background: We have been trying to get a wildcard SSL working on multiple sub domains on a single dedicated address.. We have two sub domains next.my-personal-website.com and blog.my-personal-website.com Part of our strategy has been to edit the httpd.conf and add the NameVirtualHost xx.xx.144.72:443 directive and the virtualhost entries for port 443 for the subdomains there. This works good if we just edit the httpd.conf, add the entries, save it and restart the apache. The problem: But if we add a new sub domain from cpanel or we run the # /usr/local/cpanel/bin/apache_conf_distiller --update # /scripts/rebuildhttpdconf the virtualhost entries that we added manually are no more there in the newly generated httpd.conf file. Only the virtualhost entry for the main domain for port 443 that was there before we made edits to the httpd.conf is there(assuming we are not discussing virtualhost entries for port 80). I understand we need to put the new virtualhost entries in some include files as mentioned here in the cpanel documentation. But am not sure where to. So the question would be where do I put the NameVirtualHost xx.xx.144.72:443 directive and the two virtualhost directive for port 443, so that they are not overwritten when httpd.conf is rebuilt/regenerated later. Virtualhost entries: The two virtualhost entries for the subdomains are: <VirtualHost xx.xx.144.72:443> ServerName next.my-personal-website.com ServerAlias www.next.my-personal-website.com DocumentRoot /home/myguardi/public_html/next.my-personal-website.com ServerAdmin [email protected] UseCanonicalName On CustomLog /usr/local/apache/domlogs/next.my-personal-website.com combined CustomLog /usr/local/apache/domlogs/next.my-personal-website.com-bytes_log "%{%s}t %I .\n%{%s}t %O ." ## User myguardi # Needed for Cpanel::ApacheConf <IfModule mod_suphp.c> suPHP_UserGroup myguardi myguardi </IfModule> <IfModule !mod_disable_suexec.c> SuexecUserGroup myguardi myguardi </IfModule> ScriptAlias /cgi-bin/ /home/myguardi/public_html/next.my-personal-website.com/cgi-bin/ SSLEngine on SSLCertificateFile /etc/ssl/certs/my-personal-website.com.crt SSLCertificateKeyFile /etc/ssl/private/my-personal-website.com.key SSLCACertificateFile /etc/ssl/certs/my-personal-website.com.cabundle CustomLog /usr/local/apache/domlogs/next.my-personal-website.com-ssl_log combined SetEnvIf User-Agent ".*MSIE.*" nokeepalive ssl-unclean-shutdown <Directory "/home/myguardi/public_html/cgi-bin"> SSLOptions +StdEnvVars </Directory> and <VirtualHost xx.xx.144.72:443> ServerName blog.my-personal-website.com ServerAlias www.blog.my-personal-website.com DocumentRoot /home/myguardi/public_html/blog.my-personal-website.com ServerAdmin [email protected] UseCanonicalName On CustomLog /usr/local/apache/domlogs/blog.my-personal-website.com combined CustomLog /usr/local/apache/domlogs/blog.my-personal-website.com-bytes_log "%{%s}t %I .\n%{%s}t %O ." 
## User myguardi # Needed for Cpanel::ApacheConf <IfModule mod_suphp.c> suPHP_UserGroup myguardi myguardi </IfModule> <IfModule !mod_disable_suexec.c> SuexecUserGroup myguardi myguardi </IfModule> ScriptAlias /cgi-bin/ /home/myguardi/public_html/blog.my-personal-website.com/cgi-bin/ SSLEngine on SSLCertificateFile /etc/ssl/certs/my-personal-website.com.crt SSLCertificateKeyFile /etc/ssl/private/my-personal-website.com.key SSLCACertificateFile /etc/ssl/certs/my-personal-website.com.cabundle CustomLog /usr/local/apache/domlogs/blog.my-personal-website.com-ssl_log combined SetEnvIf User-Agent ".*MSIE.*" nokeepalive ssl-unclean-shutdown <Directory "/home/myguardi/public_html/cgi-bin"> SSLOptions +StdEnvVars </Directory> and the automatically generated virtualhost entry for the main domain for port 443 is <VirtualHost xx.xx.144.72:443> ServerName my-personal-website.com ServerAlias www.my-personal-website.com DocumentRoot /home/myguardi/public_html ServerAdmin [email protected] UseCanonicalName Off CustomLog /usr/local/apache/domlogs/my-personal-website.com combined CustomLog /usr/local/apache/domlogs/my-personal-website.com-bytes_log "%{%s}t %I .\n%{%s}t %O ." ## User myguardi # Needed for Cpanel::ApacheConf <IfModule mod_suphp.c> suPHP_UserGroup myguardi myguardi </IfModule> <IfModule !mod_disable_suexec.c> SuexecUserGroup myguardi myguardi </IfModule> ScriptAlias /cgi-bin/ /home/myguardi/public_html/cgi-bin/ SSLEngine on SSLCertificateFile /etc/ssl/certs/my-personal-website.com.crt SSLCertificateKeyFile /etc/ssl/private/my-personal-website.com.key SSLCACertificateFile /etc/ssl/certs/my-personal-website.com.cabundle CustomLog /usr/local/apache/domlogs/my-personal-website.com-ssl_log combined SetEnvIf User-Agent ".*MSIE.*" nokeepalive ssl-unclean-shutdown <Directory "/home/myguardi/public_html/cgi-bin"> SSLOptions +StdEnvVars </Directory> # To customize this VirtualHost use an include file at the following location # Include "/usr/local/apache/conf/userdata/ssl/2/myguardi/my-personal-website.com/*.conf" I really appreciate if somebody can tell me how to proceed on this. Thank you. Update: Include directives present are: `Include "/usr/local/apache/conf/includes/pre_main_global.conf" Include "/usr/local/apache/conf/includes/pre_main_2.conf" Include "/usr/local/apache/conf/php.conf" Include "/usr/local/apache/conf/includes/errordocument.conf" Include "/usr/local/apache/conf/modsec2.conf" Include "/usr/local/apache/conf/includes/pre_virtualhost_global.conf" Include "/usr/local/apache/conf/includes/pre_virtualhost_2.conf" ` These are the entries that are generated before any virtualhost entry is defined. Towards the end of the httpd.conf file , the following two entries are added Include "/usr/local/apache/conf/includes/post_virtualhost_global.conf" Include "/usr/local/apache/conf/includes/post_virtualhost_2.conf" The older httpd.conf file before we added the virtualhost entries for sub domains for port 443 can be viewed here

    Read the article

  • Cant get lm-sensors to load ATI Radeon temp or fan

    - by woody
    New to Linux and having minor issues :/ . I followed this guide initially but did not receive the proper output, and it did not show my ATI Radeon HD 5000 temp or fan speed. Then I used this guide; the same problems were exhibited. No issues installing and no errors. I think it's not reading i2c for some reason. The proprietary driver is installed and functioning correctly according to fglrxinfo. I can use aticonfig commands and view both temp and fan. Any ideas on how to get it working under 'sensors'?

    When I run 'sudo sensors-detect' this is my output:

        # sensors-detect revision 5984 (2011-07-10 21:22:53 +0200)
        # System: LENOVO IdeaPad Y560 (laptop)
        # Board: Lenovo KL3
        This program will help you determine which kernel modules you need to load
        to use lm_sensors most effectively. It is generally safe and recommended to
        accept the default answers to all questions, unless you know what you're doing.
        Some south bridges, CPUs or memory controllers contain embedded sensors.
        Do you want to scan for them? This is totally safe. (YES/no): y
        Silicon Integrated Systems SIS5595... No
        VIA VT82C686 Integrated Sensors... No
        VIA VT8231 Integrated Sensors... No
        AMD K8 thermal sensors... No
        AMD Family 10h thermal sensors... No
        AMD Family 11h thermal sensors... No
        AMD Family 12h and 14h thermal sensors... No
        AMD Family 15h thermal sensors... No
        AMD Family 15h power sensors... No
        Intel digital thermal sensor... Success! (driver `coretemp')
        Intel AMB FB-DIMM thermal sensor... No
        VIA C7 thermal sensor... No
        VIA Nano thermal sensor... No
        Some Super I/O chips contain embedded sensors. We have to write to standard
        I/O ports to probe them. This is usually safe.
        Do you want to scan for Super I/O sensors? (YES/no): y
        Probing for Super-I/O at 0x2e/0x2f
        Trying family `National Semiconductor/ITE'... Yes
        Found unknown chip with ID 0x8502
        Probing for Super-I/O at 0x4e/0x4f
        Trying family `National Semiconductor/ITE'... No
        Trying family `SMSC'... No
        Trying family `VIA/Winbond/Nuvoton/Fintek'... No
        Trying family `ITE'... No
        Some hardware monitoring chips are accessible through the ISA I/O ports.
        We have to write to arbitrary I/O ports to probe them. This is usually safe
        though. Yes, you do have ISA I/O ports even if you do not have any ISA slots!
        Do you want to scan the ISA I/O ports? (YES/no): y
        Probing for `National Semiconductor LM78' at 0x290... No
        Probing for `National Semiconductor LM79' at 0x290... No
        Probing for `Winbond W83781D' at 0x290... No
        Probing for `Winbond W83782D' at 0x290... No
        Lastly, we can probe the I2C/SMBus adapters for connected hardware
        monitoring devices. This is the most risky part, and while it works
        reasonably well on most systems, it has been reported to cause trouble
        on some systems.
        Do you want to probe the I2C/SMBus adapters now? (YES/no): y
        Using driver `i2c-i801' for device 0000:00:1f.3: Intel 3400/5 Series (PCH)
        Now follows a summary of the probes I have just done.
        Just press ENTER to continue:
        Driver `coretemp':
          * Chip `Intel digital thermal sensor' (confidence: 9)
        To load everything that is needed, add this to /etc/modules:
        #----cut here----
        # Chip drivers
        coretemp
        #----cut here----
        If you have some drivers built into your kernel, the list above will
        contain too many modules. Skip the appropriate ones!
        Do you want to add these lines automatically to /etc/modules? (yes/NO)

    My output for 'sensors' is:

        acpitz-virtual-0
        Adapter: Virtual device
        temp1:        +58.0°C  (crit = +100.0°C)

        coretemp-isa-0000
        Adapter: ISA adapter
        Core 0:       +56.0°C  (high = +84.0°C, crit = +100.0°C)
        Core 1:       +57.0°C  (high = +84.0°C, crit = +100.0°C)
        Core 2:       +58.0°C  (high = +84.0°C, crit = +100.0°C)
        Core 3:       +57.0°C  (high = +84.0°C, crit = +100.0°C)

    and my '/etc/modules' is:

        # /etc/modules: kernel modules to load at boot time.
        #
        # This file contains the names of kernel modules that should be loaded
        # at boot time, one per line. Lines beginning with "#" are ignored.
        lp
        rtc
        # Generated by sensors-detect on Fri Nov 30 23:24:31 2012
        # Chip drivers
        coretemp
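    For context, lm-sensors only reports chips it can probe over I2C/ISA or through a kernel hwmon interface; the fglrx proprietary driver does not expose the Radeon's sensors that way, which is why they never appear under 'sensors' even though aticonfig can read them. As a hedged workaround sketch (the flag names are from memory of Catalyst-era aticonfig and may differ on your driver release), the GPU values can be read through the driver itself:

        # GPU core temperature via the proprietary driver (OverDrive query)
        aticonfig --odgt

        # Fan speed for fan index 0 (pplib command syntax varies by Catalyst version)
        aticonfig --pplib-cmd "get fanspeed 0"

        # CPU core temperatures keep coming from the coretemp module listed in /etc/modules
        sudo modprobe coretemp && sensors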

    Read the article

  • SQL SERVER – Core Concepts – Elasticity, Scalability and ACID Properties – Exploring NuoDB an Elastically Scalable Database System

    - by pinaldave
    I have been recently exploring Elasticity and Scalability attributes of databases. You can see that in my earlier blog posts about NuoDB where I wanted to look at Elasticity and Scalability concepts. The concepts are very interesting, and intriguing as well. I have discussed these concepts with my friend Joyti M and together we have come up with this interesting read. The goal of this article is to answer following simple questions What is Elasticity? What is Scalability? How ACID properties vary from NOSQL Concepts? What are the prevailing problems in the current database system architectures? Why is NuoDB  an innovative and welcome change in database paradigm? Elasticity This word’s original form is used in many different ways and honestly it does do a decent job in holding things together over the years as a person grows and contracts. Within the tech world, and specifically related to software systems (database, application servers), it has come to mean a few things - allow stretching of resources without reaching the breaking point (on demand). What are resources in this context? Resources are the usual suspects – RAM/CPU/IO/Bandwidth in the form of a container (a process or bunch of processes combined as modules). When it is about increasing resources the simplest idea which comes to mind is the addition of another container. Another container means adding a brand new physical node. When it is about adding a new node there are two questions which comes to mind. 1) Can we add another node to our software system? 2) If yes, does adding new node cause downtime for the system? Let us assume we have added new node, let us see what the new needs of the system are when a new node is added. Balancing incoming requests to multiple nodes Synchronization of a shared state across multiple nodes Identification of “downstate” and resolution action to bring it to “upstate” Well, adding a new node has its advantages as well. Here are few of the positive points Throughput can increase nearly horizontally across the node throughout the system Response times of application will increase as in-between layer interactions will be improved Now, Let us put the above concepts in the perspective of a Database. When we mention the term “running out of resources” or “application is bound to resources” the resources can be CPU, Memory or Bandwidth. The regular approach to “gain scalability” in the database is to look around for bottlenecks and increase the bottlenecked resource. When we have memory as a bottleneck we look at the data buffers, locks, query plans or indexes. After a point even this is not enough as there needs to be an efficient way of managing such large workload on a “single machine” across memory and CPU bound (right kind of scheduling)  workload. We next move on to either read/write separation of the workload or functionality-based sharing so that we still have control of the individual. But this requires lots of planning and change in client systems in terms of knowing where to go/update/read and for reporting applications to “aggregate the data” in an intelligent way. What we ideally need is an intelligent layer which allows us to do these things without us getting into managing, monitoring and distributing the workload. Scalability In the context of database/applications, scalability means three main things Ability to handle normal loads without pressure E.g. 
X users at the Y utilization of resources (CPU, Memory, Bandwidth) on the Z kind of hardware (4 processor, 32 GB machine with 15000 RPM SATA drives and 1 GHz Network switch) with T throughput Ability to scale up to expected peak load which is greater than normal load with acceptable response times Ability to provide acceptable response times across the system E.g. Response time in S milliseconds (or agreed upon unit of measure) – 90% of the time The Issue – Need of Scale In normal cases one can plan for the load testing to test out normal, peak, and stress scenarios to ensure specific hardware meets the needs. With help from Hardware and Software partners and best practices, bottlenecks can be identified and requisite resources added to the system. Unfortunately this vertical scale is expensive and difficult to achieve and most of the operational people need the ability to scale horizontally. This helps in getting better throughput as there are physical limits in terms of adding resources (Memory, CPU, Bandwidth and Storage) indefinitely. Today we have different options to achieve scalability: Read & Write Separation The idea here is to do actual writes to one store and configure slaves receiving the latest data with acceptable delays. Slaves can be used for balancing out reads. We can also explore functional separation or sharing as well. We can separate data operations by a specific identifier (e.g. region, year, month) and consolidate it for reporting purposes. For functional separation the major disadvantage is when schema changes or workload pattern changes. As the requirement grows one still needs to deal with scale need in manual ways by providing an abstraction in the middle tier code. Using NOSQL solutions The idea is to flatten out the structures in general to keep all values which are retrieved together at the same store and provide flexible schema. The issue with the stores is that they are compromising on mostly consistency (no ACID guarantees) and one has to use NON-SQL dialect to work with the store. The other major issue is about education with NOSQL solutions. Would one really want to make these compromises on the ability to connect and retrieve in simple SQL manner and learn other skill sets? Or for that matter give up on ACID guarantee and start dealing with consistency issues? Hybrid Deployment – Mac, Linux, Cloud, and Windows One of the challenges today that we see across On-premise vs Cloud infrastructure is a difference in abilities. Take for example SQL Azure – it is wonderful in its concepts of throttling (as it is shared deployment) of resources and ability to scale using federation. However, the same abilities are not available on premise. This is not a mistake, mind you – but a compromise of the sweet spot of workloads, customer requirements and operational SLAs which can be supported by the team. In today’s world it is imperative that databases are available across operating systems – which are a commodity and used by developers of all hues. An Ideal Database Ability List A system which allows a linear scale of the system (increase in throughput with reasonable response time) with the addition of resources A system which does not compromise on the ACID guarantees and require developers to learn new paradigms A system which does not force fit a new way interacting with database by learning Non-SQL dialect A system which does not force fit its mechanisms for providing availability across its various modules. 
Well NuoDB is the first database which has all of the above abilities and much more. In future articles I will cover my hands-on experience with it. Reference: Pinal Dave (http://blog.SQLAuthority.com) Filed under: PostADay, SQL, SQL Authority, SQL Query, SQL Server, SQL Tips and Tricks, T SQL, Technology Tagged: NuoDB

    Read the article

  • FTP Publishing with the new Windows Azure Release

    - by Harish Ranganathan
    There is a good chance you might have stumbled upon the new Windows Azure Release that we made on June 6th.  Scott Guthrie’s Post quite summarizes the overall new features. One of my favorite features is the Windows Azure Websites and the ability to do publish files to Azure using your FTP Client. Windows Azure Websites offers low cost (free upto 10 websites) web hosting where you can deploy any website that can run on IIS 7.0, quickly. The earlier releases of Azure SDKs and the Azure platform support .NET 3.5 & above for running your applications.  This was a constraint for many since there are/were a lot of ASP.NET 2.0 applications built over time and simply to put it on Azure, many of you were skeptical to migrate it to .NET 4. Windows Azure Websites offer the flexibility of running IIS 7.0 supported .NET Versions which means you can run .NET 1.1, 2.0, 3.5 and .NET 4.  Not just that! You can also run classic ASP Applications. Windows Azure Websites don’t need you to go through the complexity of adding the Cloud Project Template and then publishing the Configuration Files.  Lets take a step by step understanding of Websites and publishing using FTP. I downloaded the Club Website Starter Kit from http://www.asp.net/downloads/starter-kits/club It also requires a database and I downloaded the SQL Scripts and created a SQL Server Database called Club. This installs a Web Site Project Template.  Note that I am running Windows 8 Release Preview and Visual Studio 2012 RC.  After installing the template, select File – New – Website and don’t forget to choose the Framework version as .NET 2.0 You can see the “Club Website Starter Kit” .  Once you select the Website gets created.  You would encounter a warning indicating that the Club Website Starter Kit uses SQL Express and the recommended database is LocalDB Express.  Click ok to continue.  Once the Website is created open up the Web.config and locate the “ClubSiteDB” connection string.  By default, it points to a SQL Express Database.  Instead configure it to use your local SQL Server. Also, open up Global.asax and comment out the following line if (!Roles.RoleExists("Administrators")) Roles.CreateRole("Administrators"); There seems to be an issue in the code that doesn’t create the role.  Post that, hit CTRL+F5 and you should be able to see the Website Running, as below So, now we have the Club Starter Kit site up running locally.  Moving to Azure Visit http://manage.windowsazure.com/ and sign up for a trial account.  This allows you to host up to 10 websites for free and a host of other benefits.  The free Websites can be extended to an year without any charge.  Once you have signed up, sign in to the portal using the Live ID used for sign up. After signing in, you would be presented with the “All Items” listing page which lists, Websites, Cloud Services, Databases etc.,  If this is the first time, you wouldn’t find anything. Click on the “Websites” link from the left menu.  Click on “New” in the bottom and it should show up a dialog.  In the same, select Website and click on “Quick Create” and in the URL Textbox, specify “MyFirstDemo” and click the “Create Web Site” link below. It should take a few seconds to create the Website.  Once the Website is created, click on the listing and it should open up the Dashboard.  Since we haven’t done anything yet, there shouldn’t be any statistics Click on the “Download publish profile” link in the right bottom.  This file has the FTP publishing settings. 
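    For orientation, the FTP entry inside the downloaded publish profile looks roughly like the sketch below. Only the attribute names called out in this post (publishMethod, publishUrl, userName, userPWD) are taken from the file itself; every value, and the exact shape of the surrounding XML, is a placeholder rather than a faithful copy:

        <publishData>
          <!-- Sketch only: values are placeholders, not real credentials -->
          <publishProfile profileName="MyFirstDemo - FTP"
                          publishMethod="FTP"
                          publishUrl="ftp://waws-xxxx-xxx-xxxx/site/wwwroot"
                          userName="MyFirstDemo\$MyFirstDemo"
                          userPWD="your-generated-password"
                          destinationAppUrl="http://MyFirstDemo.azurewebsites.net" />
        </publishData>

    The publishUrl and the credentials from this entry are what go into the Host, Username and Password boxes of the FTP client later in the walkthrough.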
Also, if you scroll down you can see the FTP URL for this site.  It should typically start ftp://waws-xxxx-xxx-xxxx In the downloaded publish profile file, you can also find the ftp URL.  Pick the following from this file publishUrl (the 2nd one, the one that features after publishMethod =”FTP”) and the userName and userPWD that follows. Note that we have everything required to publish the files.  But since the Club Starter Kit uses Databases, we need to have the Database running on SQL Azure.  Go back to the Main Menu and click on “New” in the bottom but this time select “SQL Database” and provide “Club” as Database name for “Quick Create” If this is the first time a Server would be created.  Otherwise, it would pickup the existing server name. Once the database is created, you can use the SQL Azure Migration Wizard http://sqlazuremw.codeplex.com/ and provide the credentials to connect to local database and then the SQL Azure database for migrating the “Club” database.  The migration wizard UI hasn’t changed much and is the same as explained by me in one my posts earlier http://geekswithblogs.net/ranganh/archive/2009/09/29/taking-your-northwind-database-to-sql-azure-and-binding-it.aspx Once the database is migrated, come back to the main screen and click on the Database base in the Azure Management Portal.  It opens up the dashboard of the database.  Click on “Show connection Strings” and it would popup a list of connection string formats.  Choose the ADO.NET connection string and after editing the password with the password that you provided when creating the database server in the Azure Portal, paste it into the config file of the Club Starter Kit Website.  Just to reiterate, the connection string key is ClubSiteDB. Try running the Website once to ensure that the application though running locally could connect to the SQL Database running on Azure. Once you are able to run the website successfully, we are all set to do the FTP Publishing. Download your favorite FTP tool.  I use http://filezilla-project.org/ In the Host Textbox, paste the FTP URL that you picked up from the publish profile file and also paste the username and password.  Click on “QuickConnect”.  If everything is fine, you should be able to connect to the remote server.  If it is successfully connected, you can see the wwwroot folder of the Website, running in Azure Make sure on the “Local Site” in the left, you choose the path to the folder of your Website.  Open up the Website folder on the left such that it lists all the files and folders inside.  Select all of them and click select “Upload” or simply drag and drop all the files to the root folder that is listed above.  Once the publishing is done, you should be able to hit the SiteURL that you can find the dashboard page of the website.  In our case, it would be http://MyFirstDemo.azurewebsites.net That’s it, we have now done FTP publishing in Azure and that too we are running a .NET 2.0 Website on Azure. Cheers !!!

    Read the article

  • Unable to cast transparent proxy to type &lt;type&gt;

    - by Rick Strahl
    This is not the first time I've run into this wonderful error while creating new AppDomains in .NET and then trying to load types and access them across App Domains. In almost all cases the problem I've run into with this error the problem comes from the two AppDomains involved loading different copies of the same type. Unless the types match exactly and come exactly from the same assembly the typecast will fail. The most common scenario is that the types are loaded from different assemblies - as unlikely as that sounds. An Example of Failure To give some context, I'm working on some old code in Html Help Builder that creates a new AppDomain in order to parse assembly information for documentation purposes. I create a new AppDomain in order to load up an assembly process it and then immediately unload it along with the AppDomain. The AppDomain allows for unloading that otherwise wouldn't be possible as well as isolating my code from the assembly that's being loaded. The process to accomplish this is fairly established and I use it for lots of applications that use add-in like functionality - basically anywhere where code needs to be isolated and have the ability to be unloaded. My pattern for this is: Create a new AppDomain Load a Factory Class into the AppDomain Use the Factory Class to load additional types from the remote domain Here's the relevant code from my TypeParserFactory that creates a domain and then loads a specific type - TypeParser - that is accessed cross-AppDomain in the parent domain:public class TypeParserFactory : System.MarshalByRefObject,IDisposable { …/// <summary> /// TypeParser Factory method that loads the TypeParser /// object into a new AppDomain so it can be unloaded. /// Creates AppDomain and creates type. /// </summary> /// <returns></returns> public TypeParser CreateTypeParser() { if (!CreateAppDomain(null)) return null; /// Create the instance inside of the new AppDomain /// Note: remote domain uses local EXE's AppBasePath!!! TypeParser parser = null; try { Assembly assembly = Assembly.GetExecutingAssembly(); string assemblyPath = Assembly.GetExecutingAssembly().Location; parser = (TypeParser) this.LocalAppDomain.CreateInstanceFrom(assemblyPath, typeof(TypeParser).FullName).Unwrap(); } catch (Exception ex) { this.ErrorMessage = ex.GetBaseException().Message; return null; } return parser; } private bool CreateAppDomain(string lcAppDomain) { if (lcAppDomain == null) lcAppDomain = "wwReflection" + Guid.NewGuid().ToString().GetHashCode().ToString("x"); AppDomainSetup setup = new AppDomainSetup(); // *** Point at current directory setup.ApplicationBase = AppDomain.CurrentDomain.BaseDirectory; //setup.PrivateBinPath = Path.Combine(AppDomain.CurrentDomain.BaseDirectory, "bin"); this.LocalAppDomain = AppDomain.CreateDomain(lcAppDomain,null,setup); // Need a custom resolver so we can load assembly from non current path AppDomain.CurrentDomain.AssemblyResolve += new ResolveEventHandler(CurrentDomain_AssemblyResolve); return true; } …} Note that the classes must be either [Serializable] (by value) or inherit from MarshalByRefObject in order to be accessible remotely. Here I need to call methods on the remote object so all classes are MarshalByRefObject. The specific problem code is the loading up a new type which points at an assembly that visible both in the current domain and the remote domain and then instantiates a type from it. 
This is the code in question:Assembly assembly = Assembly.GetExecutingAssembly(); string assemblyPath = Assembly.GetExecutingAssembly().Location; parser = (TypeParser) this.LocalAppDomain.CreateInstanceFrom(assemblyPath, typeof(TypeParser).FullName).Unwrap(); The last line of code is what blows up with the Unable to cast transparent proxy to type <type> error. Without the cast the code actually returns a TransparentProxy instance, but the cast is what blows up. In other words I AM in fact getting a TypeParser instance back but it can't be cast to the TypeParser type that is loaded in the current AppDomain. Finding the Problem To see what's going on I tried using the .NET 4.0 dynamic type on the result and lo and behold it worked with dynamic - the value returned is actually a TypeParser instance: Assembly assembly = Assembly.GetExecutingAssembly(); string assemblyPath = Assembly.GetExecutingAssembly().Location; object objparser = this.LocalAppDomain.CreateInstanceFrom(assemblyPath, typeof(TypeParser).FullName).Unwrap(); // dynamic works dynamic dynParser = objparser; string info = dynParser.GetVersionInfo(); // method call works // casting fails parser = (TypeParser)objparser; So clearly a TypeParser type is coming back, but nevertheless it's not the right one. Hmmm… mysterious.Another couple of tries reveal the problem however:// works dynamic dynParser = objparser; string info = dynParser.GetVersionInfo(); // method call works // c:\wwapps\wwhelp\wwReflection20.dll (Current Execution Folder) string info3 = typeof(TypeParser).Assembly.CodeBase; // c:\program files\vfp9\wwReflection20.dll (my COM client EXE's folder) string info4 = dynParser.GetType().Assembly.CodeBase; // fails parser = (TypeParser)objparser; As you can see the second value is coming from a totally different assembly. Note that this is even though I EXPLICITLY SPECIFIED an assembly path to load the assembly from! Instead .NET decided to load the assembly from the original ApplicationBase folder. Ouch! How I actually tracked this down was a little more tedious: I added a method like this to both the factory and the instance types and then compared notes:public string GetVersionInfo() { return ".NET Version: " + Environment.Version.ToString() + "\r\n" + "wwReflection Assembly: " + typeof(TypeParserFactory).Assembly.CodeBase.Replace("file:///", "").Replace("/", "\\") + "\r\n" + "Assembly Cur Dir: " + Directory.GetCurrentDirectory() + "\r\n" + "ApplicationBase: " + AppDomain.CurrentDomain.SetupInformation.ApplicationBase + "\r\n" + "App Domain: " + AppDomain.CurrentDomain.FriendlyName + "\r\n"; } For the factory I got: .NET Version: 4.0.30319.239wwReflection Assembly: c:\wwapps\wwhelp\bin\wwreflection20.dllAssembly Cur Dir: c:\wwapps\wwhelpApplicationBase: C:\Programs\vfp9\App Domain: wwReflection534cfa1f For the instance type I got: .NET Version: 4.0.30319.239wwReflection Assembly: C:\\Programs\\vfp9\wwreflection20.dllAssembly Cur Dir: c:\\wwapps\\wwhelpApplicationBase: C:\\Programs\\vfp9\App Domain: wwDotNetBridge_56006605 which clearly shows the problem. You can see that both are loading from different appDomains but the each is loading the assembly from a different location. Probably a better solution yet (for ANY kind of assembly loading problem) is to use the .NET Fusion Log Viewer to trace assembly loads.The Fusion viewer will show a load trace for each assembly loaded and where it's looking to find it. 
Here's what the viewer looks like: The last trace above that I found for the second wwReflection20 load (the one that is wonky) looks like this:*** Assembly Binder Log Entry (1/13/2012 @ 3:06:49 AM) *** The operation was successful. Bind result: hr = 0x0. The operation completed successfully. Assembly manager loaded from: C:\Windows\Microsoft.NET\Framework\V4.0.30319\clr.dll Running under executable c:\programs\vfp9\vfp9.exe --- A detailed error log follows. === Pre-bind state information === LOG: User = Ras\ricks LOG: DisplayName = wwReflection20, Version=4.61.0.0, Culture=neutral, PublicKeyToken=null (Fully-specified) LOG: Appbase = file:///C:/Programs/vfp9/ LOG: Initial PrivatePath = NULL LOG: Dynamic Base = NULL LOG: Cache Base = NULL LOG: AppName = vfp9.exe Calling assembly : (Unknown). === LOG: This bind starts in default load context. LOG: Using application configuration file: C:\Programs\vfp9\vfp9.exe.Config LOG: Using host configuration file: LOG: Using machine configuration file from C:\Windows\Microsoft.NET\Framework\V4.0.30319\config\machine.config. LOG: Policy not being applied to reference at this time (private, custom, partial, or location-based assembly bind). LOG: Attempting download of new URL file:///C:/Programs/vfp9/wwReflection20.DLL. LOG: Assembly download was successful. Attempting setup of file: C:\Programs\vfp9\wwReflection20.dll LOG: Entering run-from-source setup phase. LOG: Assembly Name is: wwReflection20, Version=4.61.0.0, Culture=neutral, PublicKeyToken=null LOG: Binding succeeds. Returns assembly from C:\Programs\vfp9\wwReflection20.dll. LOG: Assembly is loaded in default load context. WRN: The same assembly was loaded into multiple contexts of an application domain: WRN: Context: Default | Domain ID: 2 | Assembly Name: wwReflection20, Version=4.61.0.0, Culture=neutral, PublicKeyToken=null WRN: Context: LoadFrom | Domain ID: 2 | Assembly Name: wwReflection20, Version=4.61.0.0, Culture=neutral, PublicKeyToken=null WRN: This might lead to runtime failures. WRN: It is recommended to inspect your application on whether this is intentional or not. WRN: See whitepaper http://go.microsoft.com/fwlink/?LinkId=109270 for more information and common solutions to this issue. Notice that the fusion log clearly shows that the .NET loader makes no attempt to even load the assembly from the path I explicitly specified. Remember your Assembly Locations As mentioned earlier all failures I've seen like this ultimately resulted from different versions of the same type being available in the two AppDomains. At first sight that seems ridiculous - how could the types be different and why would you have multiple assemblies - but there are actually a number of scenarios where it's quite possible to have multiple copies of the same assembly floating around in multiple places. If you're hosting different environments (like hosting the Razor Engine, or ASP.NET Runtime for example) it's common to create a private BIN folder and it's important to make sure that there's no overlap of assemblies. In my case of Html Help Builder the problem started because I'm using COM interop to access the .NET assembly and the above code. COM Interop has very specific requirements on where assemblies can be found and because I was mucking around with the loader code today, I ended up moving assemblies around to a new location for explicit loading. The explicit load works in the main AppDomain, but failed in the remote domain as I showed. 
The solution here was simple enough: Delete the extraneous assembly which was left around by accident. Not a common problem, but one that when it bites is pretty nasty to figure out because it seems so unlikely that types wouldn't match. I know I've run into this a few times and writing this down hopefully will make me remember in the future rather than poking around again for an hour trying to debug the issue as I did today. Hopefully it'll save some of you some time as well in the future.

© Rick Strahl, West Wind Technologies, 2005-2012. Posted in .NET, COM

    Read the article

  • Which mailx package should I install for Nagios?

    - by user1196
    I'm following the Nagios Ubuntu quickstart instructions. I'm on Ubuntu 10.10 and installing Nagios 3.2.3. At the bottom of the docs it says I need to install the mailx and postfix packages. (Postfix is already installed.) But when I try to install mailx, I get asked which of 3 packages to install:

        $ sudo apt-get install mailx
        [sudo] password for nagios:
        Reading package lists... Done
        Building dependency tree
        Reading state information... Done
        Package mailx is a virtual package provided by:
          mailutils 1:2.1+dfsg1-4ubuntu1
          heirloom-mailx 12.4-1.1
          bsd-mailx 8.1.2-0.20090911cvs-2ubuntu1
        You should explicitly select one to install.
        E: Package mailx has no installation candidate

    Which one should I install?
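    Any of the three packages satisfies the mailx virtual package by shipping a mail(1)-style command, which is what Nagios' default email notification commands invoke; bsd-mailx is the smallest of the three and is a common choice for this. A hedged sketch of the install and a quick test (the address is a placeholder, not from the question):

        # bsd-mailx provides /usr/bin/mail, which the stock Nagios
        # notify-by-email commands pipe into; mailutils or heirloom-mailx also work.
        sudo apt-get install bsd-mailx

        # quick smoke test of the same mail path Nagios will use
        echo "test body" | mail -s "nagios mail test" you@example.com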

    Read the article

< Previous Page | 509 510 511 512 513 514 515 516 517 518 519 520  | Next Page >