Search Results

Search found 41783 results on 1672 pages for 'method group'.


  • What do the 4 keyboard input method systems in 10.04 mean?

    - by Android Eve
    I am trying to install support for another language (in addition to the default US English). Checking that language's checkbox in "Install / Remove Languages..." wasn't too difficult. :) But now I want to add keyboard support for that language, too. Again, I am prompted with a nice listbox with the following 4 options: none, ibus, lo-gtk, th-gtk. But I have no idea what these mean. I googled "ubuntu 10.04 keyboard input method system none ibus lo-gtk th-gtk" but all I could find was descriptions of problems, not an actual definition. Could you please point me to a webpage where I can learn what these 4 different methods mean and the +'s and -'s of each?

    Read the article

  • Z600 Workstation ACPI Fan Noise

    - by dpb
    Hi -- I have an HP z600 workstation whose fan runs at full speed when idle. In fact, after boot, the fan never slows down or varies. I looked in dmesg and noticed this:
      [ 1.516778] ACPI Error (dsfield-0143): [CAPD] Namespace lookup failure, AE_ALREADY_EXISTS
      [ 1.516781] ACPI Error (psparse-0537): Method parse/execution failed [\_SB_.PCI0._OSC] (Node ffff8801b8c4e3e0), AE_ALREADY_EXISTS
      [ 1.516786] ACPI: Marking method _OSC as Serialized because of AE_ALREADY_EXISTS error
      [ 1.519868] ACPI Error (dsfield-0143): [CAPD] Namespace lookup failure, AE_ALREADY_EXISTS
      [ 1.519872] ACPI Error (psparse-0537): Method parse/execution failed [\_SB_.PCI0._OSC] (Node ffff8801b8c4e3e0), AE_ALREADY_EXISTS
      [ 1.624638] ACPI Error (dsfield-0143): [CAPD] Namespace lookup failure, AE_ALREADY_EXISTS
      [ 1.624642] ACPI Error (psparse-0537): Method parse/execution failed [\_SB_.PCI0._OSC] (Node ffff8801b8c4e3e0), AE_ALREADY_EXISTS
      [ 1.624726] ACPI Error (dsfield-0143): [CAPD] Namespace lookup failure, AE_ALREADY_EXISTS
      [ 1.624729] ACPI Error (psparse-0537): Method parse/execution failed [\_SB_.PCI0._OSC] (Node ffff8801b8c4e3e0), AE_ALREADY_EXISTS
      [ 1.624802] ACPI Error (dsfield-0143): [CAPD] Namespace lookup failure, AE_ALREADY_EXISTS
      [ 1.624805] ACPI Error (psparse-0537): Method parse/execution failed [\_SB_.PCI0._OSC] (Node ffff8801b8c4e3e0), AE_ALREADY_EXISTS
      [ 1.624895] ACPI Error (dsfield-0143): [CAPD] Namespace lookup failure, AE_ALREADY_EXISTS
      [ 1.624898] ACPI Error (psparse-0537): Method parse/execution failed [\_SB_.PCI0._OSC] (Node ffff8801b8c4e3e0), AE_ALREADY_EXISTS
      [ 1.624977] ACPI Error (dsfield-0143): [CAPD] Namespace lookup failure, AE_ALREADY_EXISTS
      [ 1.624981] ACPI Error (psparse-0537): Method parse/execution failed [\_SB_.PCI0._OSC] (Node ffff8801b8c4e3e0), AE_ALREADY_EXISTS
      [ 1.625070] ACPI Error (dsfield-0143): [CAPD] Namespace lookup failure, AE_ALREADY_EXISTS
      [ 1.625074] ACPI Error (psparse-0537): Method parse/execution failed [\_SB_.PCI0._OSC] (Node ffff8801b8c4e3e0), AE_ALREADY_EXISTS
      [ 1.625153] ACPI Error (dsfield-0143): [CAPD] Namespace lookup failure, AE_ALREADY_EXISTS
      [ 1.625157] ACPI Error (psparse-0537): Method parse/execution failed [\_SB_.PCI0._OSC] (Node ffff8801b8c4e3e0), AE_ALREADY_EXISTS
    Anyone know what could be done to fix this?

    Read the article

  • Understanding C# async / await (2) Awaitable / Awaiter Pattern

    - by Dixin
    What is awaitable Part 1 shows that any Task is awaitable. Actually there are other awaitable types. Here is an example: Task<int> task = new Task<int>(() => 0); int result = await task.ConfigureAwait(false); // Returns a ConfiguredTaskAwaitable<TResult>. The returned ConfiguredTaskAwaitable<TResult> struct is awaitable. And it is not Task at all: public struct ConfiguredTaskAwaitable<TResult> { private readonly ConfiguredTaskAwaiter m_configuredTaskAwaiter; internal ConfiguredTaskAwaitable(Task<TResult> task, bool continueOnCapturedContext) { this.m_configuredTaskAwaiter = new ConfiguredTaskAwaiter(task, continueOnCapturedContext); } public ConfiguredTaskAwaiter GetAwaiter() { return this.m_configuredTaskAwaiter; } } It has one GetAwaiter() method. Actually in part 1 we have seen that Task has GetAwaiter() method too: public class Task { public TaskAwaiter GetAwaiter() { return new TaskAwaiter(this); } } public class Task<TResult> : Task { public new TaskAwaiter<TResult> GetAwaiter() { return new TaskAwaiter<TResult>(this); } } Task.Yield() is a another example: await Task.Yield(); // Returns a YieldAwaitable. The returned YieldAwaitable is not Task either: public struct YieldAwaitable { public YieldAwaiter GetAwaiter() { return default(YieldAwaiter); } } Again, it just has one GetAwaiter() method. In this article, we will look at what is awaitable. The awaitable / awaiter pattern By observing different awaitable / awaiter types, we can tell that an object is awaitable if It has a GetAwaiter() method (instance method or extension method); Its GetAwaiter() method returns an awaiter. An object is an awaiter if: It implements INotifyCompletion or ICriticalNotifyCompletion interface; It has an IsCompleted, which has a getter and returns a Boolean; it has a GetResult() method, which returns void, or a result. This awaitable / awaiter pattern is very similar to the iteratable / iterator pattern. Here is the interface definitions of iteratable / iterator: public interface IEnumerable { IEnumerator GetEnumerator(); } public interface IEnumerator { object Current { get; } bool MoveNext(); void Reset(); } public interface IEnumerable<out T> : IEnumerable { IEnumerator<T> GetEnumerator(); } public interface IEnumerator<out T> : IDisposable, IEnumerator { T Current { get; } } In case you are not familiar with the out keyword, please find out the explanation in Understanding C# Covariance And Contravariance (2) Interfaces. The “missing” IAwaitable / IAwaiter interfaces Similar to IEnumerable and IEnumerator interfaces, awaitable / awaiter can be visualized by IAwaitable / IAwaiter interfaces too. This is the non-generic version: public interface IAwaitable { IAwaiter GetAwaiter(); } public interface IAwaiter : INotifyCompletion // or ICriticalNotifyCompletion { // INotifyCompletion has one method: void OnCompleted(Action continuation); // ICriticalNotifyCompletion implements INotifyCompletion, // also has this method: void UnsafeOnCompleted(Action continuation); bool IsCompleted { get; } void GetResult(); } Please notice GetResult() returns void here. Task.GetAwaiter() / TaskAwaiter.GetResult() is of such case. And this is the generic version: public interface IAwaitable<out TResult> { IAwaiter<TResult> GetAwaiter(); } public interface IAwaiter<out TResult> : INotifyCompletion // or ICriticalNotifyCompletion { bool IsCompleted { get; } TResult GetResult(); } Here the only difference is, GetResult() return a result. Task<TResult>.GetAwaiter() / TaskAwaiter<TResult>.GetResult() is of this case. 
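    To make the pattern concrete, here is a minimal sketch of a custom awaitable that completes synchronously. The ValueAwaitable / ValueAwaiter names are invented for this illustration and are not part of .NET; the only requirement is the shape described above: a GetAwaiter() method returning a type that implements INotifyCompletion and exposes IsCompleted and GetResult().
    using System;
    using System.Runtime.CompilerServices;

    // Hypothetical awaitable wrapping an already-available value.
    public struct ValueAwaitable<T>
    {
        private readonly T value;
        public ValueAwaitable(T value) { this.value = value; }
        public ValueAwaiter<T> GetAwaiter() { return new ValueAwaiter<T>(this.value); }
    }

    // Hypothetical awaiter: always completed, so GetResult() is called synchronously.
    public struct ValueAwaiter<T> : INotifyCompletion
    {
        private readonly T value;
        public ValueAwaiter(T value) { this.value = value; }
        public bool IsCompleted { get { return true; } }
        public T GetResult() { return this.value; }
        public void OnCompleted(Action continuation) { continuation(); } // Not reached when IsCompleted is true.
    }

    // Usage inside an async method: int one = await new ValueAwaitable<int>(1);
    Because IsCompleted always returns true, the compiler-generated code never schedules a continuation and simply calls GetResult().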
Please notice .NET does not define these IAwaitable / IAwaiter interfaces at all. As an UI designer, I guess the reason is, IAwaitable interface will constraint GetAwaiter() to be instance method. Actually C# supports both GetAwaiter() instance method and GetAwaiter() extension method. Here I use these interfaces only for better visualizing what is awaitable / awaiter. Now, if looking at above ConfiguredTaskAwaitable / ConfiguredTaskAwaiter, YieldAwaitable / YieldAwaiter, Task / TaskAwaiter pairs again, they all “implicitly” implement these “missing” IAwaitable / IAwaiter interfaces. In the next part, we will see how to implement awaitable / awaiter. Await any function / action In C# await cannot be used with lambda. This code: int result = await (() => 0); will cause a compiler error: Cannot await 'lambda expression' This is easy to understand because this lambda expression (() => 0) may be a function or a expression tree. Obviously we mean function here, and we can tell compiler in this way: int result = await new Func<int>(() => 0); It causes an different error: Cannot await 'System.Func<int>' OK, now the compiler is complaining the type instead of syntax. With the understanding of the awaitable / awaiter pattern, Func<TResult> type can be easily made into awaitable. GetAwaiter() instance method, using IAwaitable / IAwaiter interfaces First, similar to above ConfiguredTaskAwaitable<TResult>, a FuncAwaitable<TResult> can be implemented to wrap Func<TResult>: internal struct FuncAwaitable<TResult> : IAwaitable<TResult> { private readonly Func<TResult> function; public FuncAwaitable(Func<TResult> function) { this.function = function; } public IAwaiter<TResult> GetAwaiter() { return new FuncAwaiter<TResult>(this.function); } } FuncAwaitable<TResult> wrapper is used to implement IAwaitable<TResult>, so it has one instance method, GetAwaiter(), which returns a IAwaiter<TResult>, which wraps that Func<TResult> too. FuncAwaiter<TResult> is used to implement IAwaiter<TResult>: public struct FuncAwaiter<TResult> : IAwaiter<TResult> { private readonly Task<TResult> task; public FuncAwaiter(Func<TResult> function) { this.task = new Task<TResult>(function); this.task.Start(); } bool IAwaiter<TResult>.IsCompleted { get { return this.task.IsCompleted; } } TResult IAwaiter<TResult>.GetResult() { return this.task.Result; } void INotifyCompletion.OnCompleted(Action continuation) { new Task(continuation).Start(); } } Now a function can be awaited in this way: int result = await new FuncAwaitable<int>(() => 0); GetAwaiter() extension method As IAwaitable shows, all that an awaitable needs is just a GetAwaiter() method. In above code, FuncAwaitable<TResult> is created as a wrapper of Func<TResult> and implements IAwaitable<TResult>, so that there is a  GetAwaiter() instance method. If a GetAwaiter() extension method  can be defined for Func<TResult>, then FuncAwaitable<TResult> is no longer needed: public static class FuncExtensions { public static IAwaiter<TResult> GetAwaiter<TResult>(this Func<TResult> function) { return new FuncAwaiter<TResult>(function); } } So a Func<TResult> function can be directly awaited: int result = await new Func<int>(() => 0); Using the existing awaitable / awaiter - Task / TaskAwaiter Remember the most frequently used awaitable / awaiter - Task / TaskAwaiter. 
    With Task / TaskAwaiter, FuncAwaitable / FuncAwaiter are no longer needed: public static class FuncExtensions { public static TaskAwaiter<TResult> GetAwaiter<TResult>(this Func<TResult> function) { Task<TResult> task = new Task<TResult>(function); task.Start(); return task.GetAwaiter(); // Returns a TaskAwaiter<TResult>. } } Similarly, with this extension method: public static class ActionExtensions { public static TaskAwaiter GetAwaiter(this Action action) { Task task = new Task(action); task.Start(); return task.GetAwaiter(); // Returns a TaskAwaiter. } } an action can be awaited as well: await new Action(() => { }); Now any function / action can be awaited: await new Action(() => HelperMethods.IO()); // or: await new Action(HelperMethods.IO); If the function / action has parameter(s), a closure can be used: int arg0 = 0; int arg1 = 1; int result = await new Func<int>(() => HelperMethods.IO(arg0, arg1)); Using Task.Run() The above code is used to demonstrate how awaitable / awaiter can be implemented. Because awaiting a function / action is such a common scenario, .NET provides a built-in API, Task.Run(): public class Task2 { public static Task Run(Action action) { // The implementation is similar to: Task task = new Task(action); task.Start(); return task; } public static Task<TResult> Run<TResult>(Func<TResult> function) { // The implementation is similar to: Task<TResult> task = new Task<TResult>(function); task.Start(); return task; } } In reality, this is how we await a function: int result = await Task.Run(() => HelperMethods.IO(arg0, arg1)); and await an action: await Task.Run(() => HelperMethods.IO());
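    As an end-to-end sketch of the Task.Run() approach: Add below is a stand-in for whatever long-running function is being wrapped (the article's HelperMethods.IO is not reproduced here), and AddAsync is a hypothetical caller.
    using System;
    using System.Threading.Tasks;

    internal static class TaskRunSample
    {
        private static int Add(int arg0, int arg1) { return arg0 + arg1; } // Stand-in for a long-running function.

        internal static async Task<int> AddAsync(int arg0, int arg1)
        {
            // Task.Run() queues the function to the thread pool and returns a Task<int>,
            // which is awaitable through the Task / TaskAwaiter pair described above.
            return await Task.Run(() => Add(arg0, arg1));
        }
    }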

    Read the article

  • Understanding C# async / await (1) Compilation

    - by Dixin
    Now the async / await keywords are in C#. Just like the async and ! in F#, this new C# feature provides great convenience. There are many nice documents talking about how to use async / await in specific scenarios, like using async methods in ASP.NET 4.5 and in ASP.NET MVC 4, etc. In this article we will look at the real code working behind the syntax sugar. According to MSDN: The async modifier indicates that the method, lambda expression, or anonymous method that it modifies is asynchronous. Since lambda expressions / anonymous methods are compiled to normal methods, we will focus on normal async methods. Preparation First of all, some helper methods need to be set up. internal class HelperMethods { internal static int Method(int arg0, int arg1) { // Do some IO. WebClient client = new WebClient(); Enumerable.Repeat("http://weblogs.asp.net/dixin", 10) .Select(client.DownloadString).ToArray(); int result = arg0 + arg1; return result; } internal static Task<int> MethodTask(int arg0, int arg1) { Task<int> task = new Task<int>(() => Method(arg0, arg1)); task.Start(); // Hot task (started task) should always be returned. return task; } internal static void Before() { } internal static void Continuation1(int arg) { } internal static void Continuation2(int arg) { } } Here Method() is a long-running method doing some IO. Then MethodTask() wraps it into a Task and returns that Task. Nothing special here. Await something in async method Since MethodTask() returns Task, let’s try to await it: internal class AsyncMethods { internal static async Task<int> MethodAsync(int arg0, int arg1) { int result = await HelperMethods.MethodTask(arg0, arg1); return result; } } Because we used await in the method, async must be put on the method. Now we get the first async method. According to the naming convention, it is called MethodAsync. Of course an async method can be awaited. So we have a CallMethodAsync() to call MethodAsync(): internal class AsyncMethods { internal static async Task<int> CallMethodAsync(int arg0, int arg1) { int result = await MethodAsync(arg0, arg1); return result; } } After compilation, MethodAsync() and CallMethodAsync() become the same logic.
This is the code of MethodAsyc(): internal class CompiledAsyncMethods { [DebuggerStepThrough] [AsyncStateMachine(typeof(MethodAsyncStateMachine))] // async internal static /*async*/ Task<int> MethodAsync(int arg0, int arg1) { MethodAsyncStateMachine methodAsyncStateMachine = new MethodAsyncStateMachine() { Arg0 = arg0, Arg1 = arg1, Builder = AsyncTaskMethodBuilder<int>.Create(), State = -1 }; methodAsyncStateMachine.Builder.Start(ref methodAsyncStateMachine); return methodAsyncStateMachine.Builder.Task; } } It just creates and starts a state machine MethodAsyncStateMachine: [CompilerGenerated] [StructLayout(LayoutKind.Auto)] internal struct MethodAsyncStateMachine : IAsyncStateMachine { public int State; public AsyncTaskMethodBuilder<int> Builder; public int Arg0; public int Arg1; public int Result; private TaskAwaiter<int> awaitor; void IAsyncStateMachine.MoveNext() { try { if (this.State != 0) { this.awaitor = HelperMethods.MethodTask(this.Arg0, this.Arg1).GetAwaiter(); if (!this.awaitor.IsCompleted) { this.State = 0; this.Builder.AwaitUnsafeOnCompleted(ref this.awaitor, ref this); return; } } else { this.State = -1; } this.Result = this.awaitor.GetResult(); } catch (Exception exception) { this.State = -2; this.Builder.SetException(exception); return; } this.State = -2; this.Builder.SetResult(this.Result); } [DebuggerHidden] void IAsyncStateMachine.SetStateMachine(IAsyncStateMachine param0) { this.Builder.SetStateMachine(param0); } } The generated code has been cleaned up so it is readable and can be compiled. Several things can be observed here: The async modifier is gone, which shows, unlike other modifiers (e.g. static), there is no such IL/CLR level “async” stuff. It becomes a AsyncStateMachineAttribute. This is similar to the compilation of extension method. The generated state machine is very similar to the state machine of C# yield syntax sugar. The local variables (arg0, arg1, result) are compiled to fields of the state machine. The real code (await HelperMethods.MethodTask(arg0, arg1)) is compiled into MoveNext(): HelperMethods.MethodTask(this.Arg0, this.Arg1).GetAwaiter(). CallMethodAsync() will create and start its own state machine CallMethodAsyncStateMachine: internal class CompiledAsyncMethods { [DebuggerStepThrough] [AsyncStateMachine(typeof(CallMethodAsyncStateMachine))] // async internal static /*async*/ Task<int> CallMethodAsync(int arg0, int arg1) { CallMethodAsyncStateMachine callMethodAsyncStateMachine = new CallMethodAsyncStateMachine() { Arg0 = arg0, Arg1 = arg1, Builder = AsyncTaskMethodBuilder<int>.Create(), State = -1 }; callMethodAsyncStateMachine.Builder.Start(ref callMethodAsyncStateMachine); return callMethodAsyncStateMachine.Builder.Task; } } CallMethodAsyncStateMachine has the same logic as MethodAsyncStateMachine above. The detail of the state machine will be discussed soon. Now it is clear that: async /await is a C# level syntax sugar. There is no difference to await a async method or a normal method. A method returning Task will be awaitable. 
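    A small sketch of that last point (the method names here are made up): a plain method that returns a Task can be awaited at the call site exactly like a compiler-generated async method.
    using System.Threading.Tasks;

    internal static class AwaitEquivalence
    {
        // A normal method: no async modifier, it simply returns a Task<int>.
        internal static Task<int> NormalMethod(int arg0, int arg1)
        {
            return Task.Run(() => arg0 + arg1);
        }

        // An async method doing the same work.
        internal static async Task<int> AsyncMethod(int arg0, int arg1)
        {
            return await Task.Run(() => arg0 + arg1);
        }

        internal static async Task<int> CallerAsync()
        {
            int first = await NormalMethod(1, 2);  // Awaiting the plain Task-returning method.
            int second = await AsyncMethod(3, 4);  // Awaiting the async method; the call sites look identical.
            return first + second;
        }
    }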
State machine and continuation To demonstrate more details in the state machine, a more complex method is created: internal class AsyncMethods { internal static async Task<int> MultiCallMethodAsync(int arg0, int arg1, int arg2, int arg3) { HelperMethods.Before(); int resultOfAwait1 = await MethodAsync(arg0, arg1); HelperMethods.Continuation1(resultOfAwait1); int resultOfAwait2 = await MethodAsync(arg2, arg3); HelperMethods.Continuation2(resultOfAwait2); int resultToReturn = resultOfAwait1 + resultOfAwait2; return resultToReturn; } } In this method: There are multiple awaits. There are code before the awaits, and continuation code after each await After compilation, this multi-await method becomes the same as above single-await methods: internal class CompiledAsyncMethods { [DebuggerStepThrough] [AsyncStateMachine(typeof(MultiCallMethodAsyncStateMachine))] // async internal static /*async*/ Task<int> MultiCallMethodAsync(int arg0, int arg1, int arg2, int arg3) { MultiCallMethodAsyncStateMachine multiCallMethodAsyncStateMachine = new MultiCallMethodAsyncStateMachine() { Arg0 = arg0, Arg1 = arg1, Arg2 = arg2, Arg3 = arg3, Builder = AsyncTaskMethodBuilder<int>.Create(), State = -1 }; multiCallMethodAsyncStateMachine.Builder.Start(ref multiCallMethodAsyncStateMachine); return multiCallMethodAsyncStateMachine.Builder.Task; } } It creates and starts one single state machine, MultiCallMethodAsyncStateMachine: [CompilerGenerated] [StructLayout(LayoutKind.Auto)] internal struct MultiCallMethodAsyncStateMachine : IAsyncStateMachine { public int State; public AsyncTaskMethodBuilder<int> Builder; public int Arg0; public int Arg1; public int Arg2; public int Arg3; public int ResultOfAwait1; public int ResultOfAwait2; public int ResultToReturn; private TaskAwaiter<int> awaiter; void IAsyncStateMachine.MoveNext() { try { switch (this.State) { case -1: HelperMethods.Before(); this.awaiter = AsyncMethods.MethodAsync(this.Arg0, this.Arg1).GetAwaiter(); if (!this.awaiter.IsCompleted) { this.State = 0; this.Builder.AwaitUnsafeOnCompleted(ref this.awaiter, ref this); } break; case 0: this.ResultOfAwait1 = this.awaiter.GetResult(); HelperMethods.Continuation1(this.ResultOfAwait1); this.awaiter = AsyncMethods.MethodAsync(this.Arg2, this.Arg3).GetAwaiter(); if (!this.awaiter.IsCompleted) { this.State = 1; this.Builder.AwaitUnsafeOnCompleted(ref this.awaiter, ref this); } break; case 1: this.ResultOfAwait2 = this.awaiter.GetResult(); HelperMethods.Continuation2(this.ResultOfAwait2); this.ResultToReturn = this.ResultOfAwait1 + this.ResultOfAwait2; this.State = -2; this.Builder.SetResult(this.ResultToReturn); break; } } catch (Exception exception) { this.State = -2; this.Builder.SetException(exception); } } [DebuggerHidden] void IAsyncStateMachine.SetStateMachine(IAsyncStateMachine stateMachine) { this.Builder.SetStateMachine(stateMachine); } } The above code is already cleaned up, but there are still a lot of things. More clean up can be done, and the state machine can be very simple: [CompilerGenerated] [StructLayout(LayoutKind.Auto)] internal struct MultiCallMethodAsyncStateMachine : IAsyncStateMachine { // State: // -1: Begin // 0: 1st await is done // 1: 2nd await is done // ... // -2: End public int State; public TaskCompletionSource<int> ResultToReturn; // int resultToReturn ... public int Arg0; // int Arg0 public int Arg1; // int arg1 public int Arg2; // int arg2 public int Arg3; // int arg3 public int ResultOfAwait1; // int resultOfAwait1 ... public int ResultOfAwait2; // int resultOfAwait2 ... 
private Task<int> currentTaskToAwait; /// <summary> /// Moves the state machine to its next state. /// </summary> void IAsyncStateMachine.MoveNext() { try { switch (this.State) { // Orginal code is splitted by "case"s: // case -1: // HelperMethods.Before(); // MethodAsync(Arg0, arg1); // case 0: // int resultOfAwait1 = await ... // HelperMethods.Continuation1(resultOfAwait1); // MethodAsync(arg2, arg3); // case 1: // int resultOfAwait2 = await ... // HelperMethods.Continuation2(resultOfAwait2); // int resultToReturn = resultOfAwait1 + resultOfAwait2; // return resultToReturn; case -1: // -1 is begin. HelperMethods.Before(); // Code before 1st await. this.currentTaskToAwait = AsyncMethods.MethodAsync(this.Arg0, this.Arg1); // 1st task to await // When this.currentTaskToAwait is done, run this.MoveNext() and go to case 0. this.State = 0; IAsyncStateMachine this1 = this; // Cannot use "this" in lambda so create a local variable. this.currentTaskToAwait.ContinueWith(_ => this1.MoveNext()); // Callback break; case 0: // Now 1st await is done. this.ResultOfAwait1 = this.currentTaskToAwait.Result; // Get 1st await's result. HelperMethods.Continuation1(this.ResultOfAwait1); // Code after 1st await and before 2nd await. this.currentTaskToAwait = AsyncMethods.MethodAsync(this.Arg2, this.Arg3); // 2nd task to await // When this.currentTaskToAwait is done, run this.MoveNext() and go to case 1. this.State = 1; IAsyncStateMachine this2 = this; // Cannot use "this" in lambda so create a local variable. this.currentTaskToAwait.ContinueWith(_ => this2.MoveNext()); // Callback break; case 1: // Now 2nd await is done. this.ResultOfAwait2 = this.currentTaskToAwait.Result; // Get 2nd await's result. HelperMethods.Continuation2(this.ResultOfAwait2); // Code after 2nd await. int resultToReturn = this.ResultOfAwait1 + this.ResultOfAwait2; // Code after 2nd await. // End with resultToReturn. this.State = -2; // -2 is end. this.ResultToReturn.SetResult(resultToReturn); break; } } catch (Exception exception) { // End with exception. this.State = -2; // -2 is end. this.ResultToReturn.SetException(exception); } } /// <summary> /// Configures the state machine with a heap-allocated replica. /// </summary> /// <param name="stateMachine">The heap-allocated replica.</param> [DebuggerHidden] void IAsyncStateMachine.SetStateMachine(IAsyncStateMachine stateMachine) { // No core logic. } } Only Task and TaskCompletionSource are involved in this version. And MultiCallMethodAsync() can be simplified to: [DebuggerStepThrough] [AsyncStateMachine(typeof(MultiCallMethodAsyncStateMachine))] // async internal static /*async*/ Task<int> MultiCallMethodAsync_(int arg0, int arg1, int arg2, int arg3) { MultiCallMethodAsyncStateMachine multiCallMethodAsyncStateMachine = new MultiCallMethodAsyncStateMachine() { Arg0 = arg0, Arg1 = arg1, Arg2 = arg2, Arg3 = arg3, ResultToReturn = new TaskCompletionSource<int>(), // -1: Begin // 0: 1st await is done // 1: 2nd await is done // ... // -2: End State = -1 }; (multiCallMethodAsyncStateMachine as IAsyncStateMachine).MoveNext(); // Original code are in this method. return multiCallMethodAsyncStateMachine.ResultToReturn.Task; } Now the whole state machine becomes very clear - it is about callback: Original code are split into pieces by “await”s, and each piece is put into each “case” in the state machine. Here the 2 awaits split the code into 3 pieces, so there are 3 “case”s. 
    The “piece”s are chained by callbacks; this is done by Builder.AwaitUnsafeOnCompleted(callback), or by currentTaskToAwait.ContinueWith(callback) in the simplified code. A previous “piece” ends with a Task (which is to be awaited); when that task is done, it calls back the next “piece”. The state machine’s state works with the “case”s to ensure the code “piece”s execute one after another. Callback Since it is about callbacks, the simplification can go even further – the entire state machine can be completely purged. Now MultiCallMethodAsync() becomes: internal static Task<int> MultiCallMethodAsync(int arg0, int arg1, int arg2, int arg3) { TaskCompletionSource<int> taskCompletionSource = new TaskCompletionSource<int>(); try { // Original code begins. HelperMethods.Before(); MethodAsync(arg0, arg1).ContinueWith(await1 => { int resultOfAwait1 = await1.Result; HelperMethods.Continuation1(resultOfAwait1); MethodAsync(arg2, arg3).ContinueWith(await2 => { int resultOfAwait2 = await2.Result; HelperMethods.Continuation2(resultOfAwait2); int resultToReturn = resultOfAwait1 + resultOfAwait2; // Original code ends. taskCompletionSource.SetResult(resultToReturn); }); }); } catch (Exception exception) { taskCompletionSource.SetException(exception); } return taskCompletionSource.Task; } Please compare with the original async / await code: HelperMethods.Before(); int resultOfAwait1 = await MethodAsync(arg0, arg1); HelperMethods.Continuation1(resultOfAwait1); int resultOfAwait2 = await MethodAsync(arg2, arg3); HelperMethods.Continuation2(resultOfAwait2); int resultToReturn = resultOfAwait1 + resultOfAwait2; return resultToReturn; That is the magic of C# async / await: await is literally pretending to wait. In an await expression, a Task object is returned immediately so that the caller is not blocked. The continuation code is compiled as that Task’s callback code. When that task is done, the continuation code will execute. Please notice that many details inside the state machine are omitted here for simplicity, such as capturing the execution / synchronization context. If you want a more detailed picture, please check out the source code of AsyncTaskMethodBuilder and TaskAwaiter.
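    The following sketch (with hypothetical method names) shows that behavior from the caller's side: the async method returns its Task as soon as it hits the first await, and the code after the await runs later as a callback on that task.
    using System;
    using System.Threading.Tasks;

    internal static class PretendingToWait
    {
        internal static async Task<int> WorkAsync()
        {
            await Task.Delay(1000); // The caller is not blocked during this delay.
            return 1;
        }

        internal static void Caller()
        {
            Task<int> task = WorkAsync();                          // Returns immediately with an incomplete Task.
            Console.WriteLine(task.IsCompleted);                   // Almost certainly False at this point.
            task.ContinueWith(t => Console.WriteLine(t.Result));   // The "continuation" is just a callback on the task.
        }
    }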

    Read the article

  • Using the ASP.NET Cache to cache data in a Model or Business Object layer, without a dependency on System.Web in the layer - Part One.

    - by Rhames
    ASP.NET applications can make use of the System.Web.Caching.Cache object to cache data and prevent repeated expensive calls to a database or other store. However, ideally an application should make use of caching at the point where data is retrieved from the database, which typically is inside a Business Objects or Model layer. One of the key features of using a UI pattern such as Model-View-Presenter (MVP) or Model-View-Controller (MVC) is that the Model and Presenter (or Controller) layers are developed without any knowledge of the UI layer. Introducing a dependency on System.Web into the Model layer would break this independence of the Model from the View. This article gives a solution to this problem, using dependency injection to inject the caching implementation into the Model layer at runtime. This allows caching to be used within the Model layer, without any knowledge of the actual caching mechanism that will be used. Create a sample application to use the caching solution Create a test SQL Server database This solution uses a SQL Server database with the same Sales data used in my previous post on calculating running totals. The advantage of using this data is that it gives nice slow queries that will exaggerate the effect of using caching! To create the data, first create a new SQL database called CacheSample. Next run the following script to create the Sale table and populate it: USE CacheSample GO   CREATE TABLE Sale(DayCount smallint, Sales money) CREATE CLUSTERED INDEX ndx_DayCount ON Sale(DayCount) go INSERT Sale VALUES (1,120) INSERT Sale VALUES (2,60) INSERT Sale VALUES (3,125) INSERT Sale VALUES (4,40)   DECLARE @DayCount smallint, @Sales money SET @DayCount = 5 SET @Sales = 10   WHILE @DayCount < 5000  BEGIN  INSERT Sale VALUES (@DayCount,@Sales)  SET @DayCount = @DayCount + 1  SET @Sales = @Sales + 15  END Next create a stored procedure to calculate the running total, and return a specified number of rows from the Sale table, using the following script: USE [CacheSample] GO   SET ANSI_NULLS ON GO   SET QUOTED_IDENTIFIER ON GO   -- ============================================= -- Author:        Robin -- Create date: -- Description:   -- ============================================= CREATE PROCEDURE [dbo].[spGetRunningTotals]       -- Add the parameters for the stored procedure here       @HighestDayCount smallint = null AS BEGIN       -- SET NOCOUNT ON added to prevent extra result sets from       -- interfering with SELECT statements.       
SET NOCOUNT ON;         IF @HighestDayCount IS NULL             SELECT @HighestDayCount = MAX(DayCount) FROM dbo.Sale                   DECLARE @SaleTbl TABLE (DayCount smallint, Sales money, RunningTotal money)         DECLARE @DayCount smallint,                   @Sales money,                   @RunningTotal money         SET @RunningTotal = 0       SET @DayCount = 0         DECLARE rt_cursor CURSOR       FOR       SELECT DayCount, Sales       FROM Sale       ORDER BY DayCount         OPEN rt_cursor         FETCH NEXT FROM rt_cursor INTO @DayCount,@Sales         WHILE @@FETCH_STATUS = 0 AND @DayCount <= @HighestDayCount        BEGIN        SET @RunningTotal = @RunningTotal + @Sales        INSERT @SaleTbl VALUES (@DayCount,@Sales,@RunningTotal)        FETCH NEXT FROM rt_cursor INTO @DayCount,@Sales        END         CLOSE rt_cursor       DEALLOCATE rt_cursor         SELECT DayCount, Sales, RunningTotal       FROM @SaleTbl   END   GO   Create the Sample ASP.NET application In Visual Studio create a new solution and add a class library project called CacheSample.BusinessObjects and an ASP.NET web application called CacheSample.UI. The CacheSample.BusinessObjects project will contain a single class to represent a Sale data item, with all the code to retrieve the sales from the database included in it for simplicity (normally I would at least have a separate Repository or other object that is responsible for retrieving data, and probably a data access layer as well, but for this sample I want to keep it simple). The C# code for the Sale class is shown below: using System; using System.Collections.Generic; using System.Data; using System.Data.SqlClient;   namespace CacheSample.BusinessObjects {     public class Sale     {         public Int16 DayCount { get; set; }         public decimal Sales { get; set; }         public decimal RunningTotal { get; set; }           public static IEnumerable<Sale> GetSales(int? 
highestDayCount)         {             List<Sale> sales = new List<Sale>();               SqlParameter highestDayCountParameter = new SqlParameter("@HighestDayCount", SqlDbType.SmallInt);             if (highestDayCount.HasValue)                 highestDayCountParameter.Value = highestDayCount;             else                 highestDayCountParameter.Value = DBNull.Value;               string connectionStr = System.Configuration.ConfigurationManager .ConnectionStrings["CacheSample"].ConnectionString;               using(SqlConnection sqlConn = new SqlConnection(connectionStr))             using (SqlCommand sqlCmd = sqlConn.CreateCommand())             {                 sqlCmd.CommandText = "spGetRunningTotals";                 sqlCmd.CommandType = CommandType.StoredProcedure;                 sqlCmd.Parameters.Add(highestDayCountParameter);                   sqlConn.Open();                   using (SqlDataReader dr = sqlCmd.ExecuteReader())                 {                     while (dr.Read())                     {                         Sale newSale = new Sale();                         newSale.DayCount = dr.GetInt16(0);                         newSale.Sales = dr.GetDecimal(1);                         newSale.RunningTotal = dr.GetDecimal(2);                           sales.Add(newSale);                     }                 }             }               return sales;         }     } }   The static GetSale() method makes a call to the spGetRunningTotals stored procedure and then reads each row from the returned SqlDataReader into an instance of the Sale class, it then returns a List of the Sale objects, as IEnnumerable<Sale>. A reference to System.Configuration needs to be added to the CacheSample.BusinessObjects project so that the connection string can be read from the web.config file. In the CacheSample.UI ASP.NET project, create a single web page called ShowSales.aspx, and make this the default start up page. This page will contain a single button to call the GetSales() method and a label to display the results. 
The html mark up and the C# code behind are shown below: ShowSales.aspx <%@ Page Language="C#" AutoEventWireup="true" CodeBehind="ShowSales.aspx.cs" Inherits="CacheSample.UI.ShowSales" %>   <!DOCTYPE html PUBLIC "-//W3C//DTD XHTML 1.0 Transitional//EN" "http://www.w3.org/TR/xhtml1/DTD/xhtml1-transitional.dtd">   <html xmlns="http://www.w3.org/1999/xhtml"> <head runat="server">     <title>Cache Sample - Show All Sales</title> </head> <body>     <form id="form1" runat="server">     <div>         <asp:Button ID="btnTest1" runat="server" onclick="btnTest1_Click"             Text="Get All Sales" />         &nbsp;&nbsp;&nbsp;         <asp:Label ID="lblResults" runat="server"></asp:Label>         </div>     </form> </body> </html>   ShowSales.aspx.cs using System; using System.Collections.Generic; using System.Linq; using System.Web; using System.Web.UI; using System.Web.UI.WebControls;   using CacheSample.BusinessObjects;   namespace CacheSample.UI {     public partial class ShowSales : System.Web.UI.Page     {         protected void Page_Load(object sender, EventArgs e)         {         }           protected void btnTest1_Click(object sender, EventArgs e)         {             System.Diagnostics.Stopwatch stopWatch = new System.Diagnostics.Stopwatch();             stopWatch.Start();               var sales = Sale.GetSales(null);               var lastSales = sales.Last();               stopWatch.Stop();               lblResults.Text = string.Format( "Count of Sales: {0}, Last DayCount: {1}, Total Sales: {2}. Query took {3} ms", sales.Count(), lastSales.DayCount, lastSales.RunningTotal, stopWatch.ElapsedMilliseconds);         }       } }   Finally we need to add a connection string to the CacheSample SQL Server database, called CacheSample, to the web.config file: <?xmlversion="1.0"?>   <configuration>    <connectionStrings>     <addname="CacheSample"          connectionString="data source=.\SQLEXPRESS;Integrated Security=SSPI;Initial Catalog=CacheSample"          providerName="System.Data.SqlClient" />  </connectionStrings>    <system.web>     <compilationdebug="true"targetFramework="4.0" />  </system.web>   </configuration>   Run the application and click the button a few times to see how long each call to the database takes. On my system, each query takes about 450ms. Next I shall look at a solution to use the ASP.NET caching to cache the data returned by the query, so that subsequent requests to the GetSales() method are much faster. Adding Data Caching Support I am going to create my caching support in a separate project called CacheSample.Caching, so the next step is to add a class library to the solution. We shall be using the application configuration to define the implementation of our caching system, so we need a reference to System.Configuration adding to the project. ICacheProvider<T> Interface The first step in adding caching to our application is to define an interface, called ICacheProvider, in the CacheSample.Caching project, with methods to retrieve any data from the cache or to retrieve the data from the data source if it is not present in the cache. Dependency Injection will then be used to inject an implementation of this interface at runtime, allowing the users of the interface (i.e. the CacheSample.BusinessObjects project) to be completely unaware of how the caching is actually implemented. 
As data of any type maybe retrieved from the data source, it makes sense to use generics in the interface, with a generic type parameter defining the data type associated with a particular instance of the cache interface implementation. The C# code for the ICacheProvider interface is shown below: using System; using System.Collections.Generic;   namespace CacheSample.Caching {     public interface ICacheProvider     {     }       public interface ICacheProvider<T> : ICacheProvider     {         T Fetch(string key, Func<T> retrieveData, DateTime? absoluteExpiry, TimeSpan? relativeExpiry);           IEnumerable<T> Fetch(string key, Func<IEnumerable<T>> retrieveData, DateTime? absoluteExpiry, TimeSpan? relativeExpiry);     } }   The empty non-generic interface will be used as a type in a Dictionary generic collection later to store instances of the ICacheProvider<T> implementation for reuse, I prefer to use a base interface when doing this, as I think the alternative of using object makes for less clear code. The ICacheProvider<T> interface defines two overloaded Fetch methods, the difference between these is that one will return a single instance of the type T and the other will return an IEnumerable<T>, providing support for easy caching of collections of data items. Both methods will take a key parameter, which will uniquely identify the cached data, a delegate of type Func<T> or Func<IEnumerable<T>> which will provide the code to retrieve the data from the store if it is not present in the cache, and absolute or relative expiry policies to define when a cached item should expire. Note that at present there is no support for cache dependencies, but I shall be showing a method of adding this in part two of this article. CacheProviderFactory Class We need a mechanism of creating instances of our ICacheProvider<T> interface, using Dependency Injection to get the implementation of the interface. To do this we shall create a CacheProviderFactory static class in the CacheSample.Caching project. This factory will provide a generic static method called GetCacheProvider<T>(), which shall return instances of ICacheProvider<T>. We can then call this factory method with the relevant data type (for example the Sale class in the CacheSample.BusinessObject project) to get a instance of ICacheProvider for that type (e.g. call CacheProviderFactory.GetCacheProvider<Sale>() to get the ICacheProvider<Sale> implementation). The C# code for the CacheProviderFactory is shown below: using System; using System.Collections.Generic;   using CacheSample.Caching.Configuration;   namespace CacheSample.Caching {     public static class CacheProviderFactory     {         private static Dictionary<Type, ICacheProvider> cacheProviders = new Dictionary<Type, ICacheProvider>();         private static object syncRoot = new object();           ///<summary>         /// Factory method to create or retrieve an implementation of the  /// ICacheProvider interface for type <typeparamref name="T"/>.         
///</summary>         ///<typeparam name="T">  /// The type that this cache provider instance will work with  ///</typeparam>         ///<returns>An instance of the implementation of ICacheProvider for type  ///<typeparamref name="T"/>, as specified by the application  /// configuration</returns>         public static ICacheProvider<T> GetCacheProvider<T>()         {             ICacheProvider<T> cacheProvider = null;             // Get the Type reference for the type parameter T             Type typeOfT = typeof(T);               // Lock the access to the cacheProviders dictionary             // so multiple threads can work with it             lock (syncRoot)             {                 // First check if an instance of the ICacheProvider implementation  // already exists in the cacheProviders dictionary for the type T                 if (cacheProviders.ContainsKey(typeOfT))                     cacheProvider = (ICacheProvider<T>)cacheProviders[typeOfT];                 else                 {                     // There is not already an instance of the ICacheProvider in       // cacheProviders for the type T                     // so we need to create one                       // Get the Type reference for the application's implementation of       // ICacheProvider from the configuration                     Type cacheProviderType = Type.GetType(CacheProviderConfigurationSection.Current. CacheProviderType);                     if (cacheProviderType != null)                     {                         // Now get a Type reference for the Cache Provider with the                         // type T generic parameter                         Type typeOfCacheProviderTypeForT = cacheProviderType.MakeGenericType(new Type[] { typeOfT });                         if (typeOfCacheProviderTypeForT != null)                         {                             // Create the instance of the Cache Provider and add it to // the cacheProviders dictionary for future use                             cacheProvider = (ICacheProvider<T>)Activator. CreateInstance(typeOfCacheProviderTypeForT);                             cacheProviders.Add(typeOfT, cacheProvider);                         }                     }                 }             }               return cacheProvider;                 }     } }   As this code uses Activator.CreateInstance() to create instances of the ICacheProvider<T> implementation, which is a slow process, the factory class maintains a Dictionary of the previously created instances so that a cache provider needs to be created only once for each type. The type of the implementation of ICacheProvider<T> is read from a custom configuration section in the application configuration file, via the CacheProviderConfigurationSection class, which is described below. CacheProviderConfigurationSection Class The implementation of ICacheProvider<T> will be specified in a custom configuration section in the application’s configuration. To handle this create a folder in the CacheSample.Caching project called Configuration, and add a class called CacheProviderConfigurationSection to this folder. This class will extend the System.Configuration.ConfigurationSection class, and will contain a single string property called CacheProviderType. 
The C# code for this class is shown below: using System; using System.Configuration;   namespace CacheSample.Caching.Configuration {     internal class CacheProviderConfigurationSection : ConfigurationSection     {         public static CacheProviderConfigurationSection Current         {             get             {                 return (CacheProviderConfigurationSection) ConfigurationManager.GetSection("cacheProvider");             }         }           [ConfigurationProperty("type", IsRequired=true)]         public string CacheProviderType         {             get             {                 return (string)this["type"];             }         }     } }   Adding Data Caching to the Sales Class We now have enough code in place to add caching to the GetSales() method in the CacheSample.BusinessObjects.Sale class, even though we do not yet have an implementation of the ICacheProvider<T> interface. We need to add a reference to the CacheSample.Caching project to CacheSample.BusinessObjects so that we can use the ICacheProvider<T> interface within the GetSales() method. Once the reference is added, we can first create a unique string key based on the method name and the parameter value, so that the same cache key is used for repeated calls to the method with the same parameter values. Then we get an instance of the cache provider for the Sales type, using the CacheProviderFactory, and pass the existing code to retrieve the data from the database as the retrievalMethod delegate in a call to the Cache Provider Fetch() method. The C# code for the modified GetSales() method is shown below: public static IEnumerable<Sale> GetSales(int? highestDayCount) {     string cacheKey = string.Format("CacheSample.BusinessObjects.GetSalesWithCache({0})", highestDayCount);       return CacheSample.Caching.CacheProviderFactory. GetCacheProvider<Sale>().Fetch(cacheKey,         delegate()         {             List<Sale> sales = new List<Sale>();               SqlParameter highestDayCountParameter = new SqlParameter("@HighestDayCount", SqlDbType.SmallInt);             if (highestDayCount.HasValue)                 highestDayCountParameter.Value = highestDayCount;             else                 highestDayCountParameter.Value = DBNull.Value;               string connectionStr = System.Configuration.ConfigurationManager. ConnectionStrings["CacheSample"].ConnectionString;               using (SqlConnection sqlConn = new SqlConnection(connectionStr))             using (SqlCommand sqlCmd = sqlConn.CreateCommand())             {                 sqlCmd.CommandText = "spGetRunningTotals";                 sqlCmd.CommandType = CommandType.StoredProcedure;                 sqlCmd.Parameters.Add(highestDayCountParameter);                   sqlConn.Open();                   using (SqlDataReader dr = sqlCmd.ExecuteReader())                 {                     while (dr.Read())                     {                         Sale newSale = new Sale();                         newSale.DayCount = dr.GetInt16(0);                         newSale.Sales = dr.GetDecimal(1);                         newSale.RunningTotal = dr.GetDecimal(2);                           sales.Add(newSale);                     }                 }             }               return sales;         },         null,         new TimeSpan(0, 10, 0)); }     This example passes the code to retrieve the Sales data from the database to the Cache Provider as an anonymous method, however it could also be written as a lambda. 
The main advantage of using an anonymous function (method or lambda) is that the code inside the anonymous function can access the parameters passed to the GetSales() method. Finally the absolute expiry is set to null, and the relative expiry set to 10 minutes, to indicate that the cache entry should be removed 10 minutes after the last request for the data. As the ICacheProvider<T> has a Fetch() method that returns IEnumerable<T>, we can simply return the results of the Fetch() method to the caller of the GetSales() method. This should be all that is needed for the GetSales() method to now retrieve data from a cache after the first time the data has be retrieved from the database. Implementing a ASP.NET Cache Provider The final step is to actually implement the ICacheProvider<T> interface, and add the implementation details to the web.config file for the dependency injection. The cache provider implementation needs to have access to System.Web. Therefore it could be placed in the CacheSample.UI project, or in its own project that has a reference to System.Web. Implementing the Cache Provider in a separate project is my favoured approach. Create a new project inside the solution called CacheSample.CacheProvider, and add references to System.Web and CacheSample.Caching to this project. Add a class to the project called AspNetCacheProvider. Make the class a generic class by adding the generic parameter <T> and indicate that the class implements ICacheProvider<T>. The C# code for the AspNetCacheProvider class is shown below: using System; using System.Collections.Generic; using System.Linq; using System.Web; using System.Web.Caching;   using CacheSample.Caching;   namespace CacheSample.CacheProvider {     public class AspNetCacheProvider<T> : ICacheProvider<T>     {         #region ICacheProvider<T> Members           public T Fetch(string key, Func<T> retrieveData, DateTime? absoluteExpiry, TimeSpan? relativeExpiry)         {             return FetchAndCache<T>(key, retrieveData, absoluteExpiry, relativeExpiry);         }           public IEnumerable<T> Fetch(string key, Func<IEnumerable<T>> retrieveData, DateTime? absoluteExpiry, TimeSpan? relativeExpiry)         {             return FetchAndCache<IEnumerable<T>>(key, retrieveData, absoluteExpiry, relativeExpiry);         }           #endregion           #region Helper Methods           private U FetchAndCache<U>(string key, Func<U> retrieveData, DateTime? absoluteExpiry, TimeSpan? 
relativeExpiry)         {             U value;             if (!TryGetValue<U>(key, out value))             {                 value = retrieveData();                 if (!absoluteExpiry.HasValue)                     absoluteExpiry = Cache.NoAbsoluteExpiration;                   if (!relativeExpiry.HasValue)                     relativeExpiry = Cache.NoSlidingExpiration;                   HttpContext.Current.Cache.Insert(key, value, null, absoluteExpiry.Value, relativeExpiry.Value);             }             return value;         }           private bool TryGetValue<U>(string key, out U value)         {             object cachedValue = HttpContext.Current.Cache.Get(key);             if (cachedValue == null)             {                 value = default(U);                 return false;             }             else             {                 try                 {                     value = (U)cachedValue;                     return true;                 }                 catch                 {                     value = default(U);                     return false;                 }             }         }           #endregion       } }   The two interface Fetch() methods call a private method called FetchAndCache(). This method first checks for a element in the HttpContext.Current.Cache with the specified cache key, and if so tries to cast this to the specified type (either T or IEnumerable<T>). If the cached element is found, the FetchAndCache() method simply returns it. If it is not found in the cache, the method calls the retrievalMethod delegate to get the data from the data source, and then adds this to the HttpContext.Current.Cache. The final step is to add the AspNetCacheProvider class to the relevant custom configuration section in the CacheSample.UI.Web.Config file. To do this there needs to be a <configSections> element added as the first element in <configuration>. This will match a custom section called <cacheProvider> with the CacheProviderConfigurationSection. Then we add a <cacheProvider> element, with a type property set to the fully qualified assembly name of the AspNetCacheProvider class, as shown below: <?xmlversion="1.0"?>   <configuration>  <configSections>     <sectionname="cacheProvider" type="CacheSample.Base.Configuration.CacheProviderConfigurationSection, CacheSample.Base" />  </configSections>    <connectionStrings>     <addname="CacheSample"          connectionString="data source=.\SQLEXPRESS;Integrated Security=SSPI;Initial Catalog=CacheSample"          providerName="System.Data.SqlClient" />  </connectionStrings>    <cacheProvidertype="CacheSample.CacheProvider.AspNetCacheProvider`1, CacheSample.CacheProvider, Version=1.0.0.0, Culture=neutral, PublicKeyToken=null">  </cacheProvider>    <system.web>     <compilationdebug="true"targetFramework="4.0" />  </system.web>   </configuration>   One point to note is that the fully qualified assembly name of the AspNetCacheProvider class includes the notation `1 after the class name, which indicates that it is a generic class with a single generic type parameter. The CacheSample.UI project needs to have references added to CacheSample.Caching and CacheSample.CacheProvider so that the actual application is aware of the relevant cache provider implementation. Conclusion After implementing this solution, you should have a working cache provider mechanism, that will allow the middle and data access layers to implement caching support when retrieving data, without any knowledge of the actually caching implementation. 
If the UI is not ASP.NET based, if for example it is Winforms or WPF, the implementation of ICacheProvider<T> would be written around whatever technology is available. It could even be a standalone caching system that takes full responsibility for adding and removing items from a global store. The next part of this article will show how this caching mechanism may be extended to provide support for cache dependencies, such as the System.Web.Caching.SqlCacheDependency. Another possible extension would be to cache the cache provider implementations instead of storing them in a static Dictionary in the CacheProviderFactory. This would prevent a build up of seldom used cache providers in the application memory, as they could be removed from the cache if not used often enough, although in reality there are probably unlikely to be vast numbers of cache provider implementation instances, as most applications do not have a massive number of business object or model types.
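    As a rough illustration of that idea, here is a sketch of an ICacheProvider<T> implementation built on System.Runtime.Caching.MemoryCache instead of the ASP.NET cache. The MemoryCacheProvider name is invented for this example, and note two MemoryCache-specific assumptions: it cannot store null values, and it rejects a policy that sets both an absolute and a sliding expiration, so only one of them is applied here.
    using System;
    using System.Collections.Generic;
    using System.Runtime.Caching;
    using CacheSample.Caching;

    public class MemoryCacheProvider<T> : ICacheProvider<T>
    {
        public T Fetch(string key, Func<T> retrieveData, DateTime? absoluteExpiry, TimeSpan? relativeExpiry)
        {
            return FetchAndCache<T>(key, retrieveData, absoluteExpiry, relativeExpiry);
        }

        public IEnumerable<T> Fetch(string key, Func<IEnumerable<T>> retrieveData, DateTime? absoluteExpiry, TimeSpan? relativeExpiry)
        {
            return FetchAndCache<IEnumerable<T>>(key, retrieveData, absoluteExpiry, relativeExpiry);
        }

        private static U FetchAndCache<U>(string key, Func<U> retrieveData, DateTime? absoluteExpiry, TimeSpan? relativeExpiry)
        {
            object cached = MemoryCache.Default.Get(key);
            if (cached is U)
                return (U)cached;               // Cache hit.

            U value = retrieveData();           // Cache miss: get the data from the data source.
            if (value == null)
                return value;                   // MemoryCache cannot store null, so skip caching in that case.

            CacheItemPolicy policy = new CacheItemPolicy();
            if (absoluteExpiry.HasValue)
                policy.AbsoluteExpiration = new DateTimeOffset(absoluteExpiry.Value);
            else if (relativeExpiry.HasValue)
                policy.SlidingExpiration = relativeExpiry.Value;

            MemoryCache.Default.Set(key, value, policy);
            return value;
        }
    }
    For a single-process WinForms or WPF application, a plain Dictionary guarded by a lock would work just as well, at the cost of implementing the expiry logic by hand.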

    Read the article

The “piece”s are chained by callback, that is done by Builder.AwaitUnsafeOnCompleted(callback), or currentTaskToAwait.ContinueWith(callback) in the simplified code. A previous “piece” will end with a Task (which is to be awaited), when the task is done, it will callback the next “piece”. The state machine’s state works with the “case”s to ensure the code “piece”s executes one after another. Callback If we focus on the point of callback, the simplification  can go even further – the entire state machine can be completely purged, and we can just keep the code inside MoveNext(). Now MultiCallMethodAsync() becomes: internal static Task<int> MultiCallMethodAsync(int arg0, int arg1, int arg2, int arg3) { TaskCompletionSource<int> taskCompletionSource = new TaskCompletionSource<int>(); try { // Oringinal code begins. HelperMethods.Before(); MethodAsync(arg0, arg1).ContinueWith(await1 => { int resultOfAwait1 = await1.Result; HelperMethods.Continuation1(resultOfAwait1); MethodAsync(arg2, arg3).ContinueWith(await2 => { int resultOfAwait2 = await2.Result; HelperMethods.Continuation2(resultOfAwait2); int resultToReturn = resultOfAwait1 + resultOfAwait2; // Oringinal code ends. taskCompletionSource.SetResult(resultToReturn); }); }); } catch (Exception exception) { taskCompletionSource.SetException(exception); } return taskCompletionSource.Task; } Please compare with the original async / await code: HelperMethods.Before(); int resultOfAwait1 = await MethodAsync(arg0, arg1); HelperMethods.Continuation1(resultOfAwait1); int resultOfAwait2 = await MethodAsync(arg2, arg3); HelperMethods.Continuation2(resultOfAwait2); int resultToReturn = resultOfAwait1 + resultOfAwait2; return resultToReturn; Yeah that is the magic of C# async / await: Await is not to wait. In a await expression, a Task object will be return immediately so that execution is not blocked. The continuation code is compiled as that Task’s callback code. When that task is done, continuation code will execute. Please notice that many details inside the state machine are omitted for simplicity, like context caring, etc. If you want to have a detailed picture, please do check out the source code of AsyncTaskMethodBuilder and TaskAwaiter.

    Read the article

  • Need help making site available externally

    - by White Island
    I'm trying to open a hole in the firewall (ASA 5505, v8.2) to allow external access to a Web application. Via ASDM (6.3?), I've added the server as a Public Server, which creates a static NAT entry [I'm using the public IP that is assigned to 'dynamic NAT--outgoing' for the LAN, after confirming on the Cisco forums that it wouldn't bring everyone's access crashing down] and an incoming rule "any... public_ip... https... allow" but traffic is still not getting through. When I look at the log viewer, it says it's denied by access-group outside_access_in, implicit rule, which is "any any ip deny" I haven't had much experience with Cisco management. I can't see what I'm missing to allow this connection through, and I'm wondering if there's anything else special I have to add. I tried adding a rule (several variations) within that access-group to allow https to the server, but it never made a difference. Maybe I haven't found the right combination? :P I also made sure the Windows firewall is open on port 443, although I'm pretty sure the current problem is Cisco, because of the logs. :) Any ideas? If you need more information, please let me know. Thanks Edit: First of all, I had this backward. (Sorry) Traffic is being blocked by access-group "inside_access_out" which is what confused me in the first place. I guess I confused myself again in the midst of typing the question. Here, I believe, is the pertinent information. Please let me know what you see wrong. access-list acl_in extended permit tcp any host PUBLIC_IP eq https access-list acl_in extended permit icmp CS_WAN_IPs 255.255.255.240 any access-list acl_in remark Allow Vendor connections to LAN access-list acl_in extended permit tcp host Vendor any object-group RemoteDesktop access-list acl_in remark NetworkScanner scan-to-email incoming (from smtp.mail.microsoftonline.com to PCs) access-list acl_in extended permit object-group TCPUDP any object-group Scan-to-email host NetworkScanner object-group Scan-to-email access-list acl_out extended permit icmp any any access-list acl_out extended permit tcp any any access-list acl_out extended permit udp any any access-list SSLVPNSplitTunnel standard permit LAN_Subnet 255.255.255.0 access-list nonat extended permit ip VPN_Subnet 255.255.255.0 LAN_Subnet 255.255.255.0 access-list nonat extended permit ip LAN_Subnet 255.255.255.0 VPN_Subnet 255.255.255.0 access-list inside_access_out remark NetworkScanner Scan-to-email outgoing (from scanner to Internet) access-list inside_access_out extended permit object-group TCPUDP host NetworkScanner object-group Scan-to-email any object-group Scan-to-email access-list inside_access_out extended permit tcp any interface outside eq https static (inside,outside) PUBLIC_IP LOCAL_IP[server object] netmask 255.255.255.255 I wasn't sure if I needed to reverse that "static" entry, since I got my question mixed up... and also with that last access-list entry, I tried interface inside and outside - neither proved successful... and I wasn't sure about whether it should be www, since the site is running on https. I assumed it should only be https.

    Read the article

  • Is a cluster the most cost effective redundancy method for windows server 2003?

    - by Ryan
    We had a server with bad ram which caused a long outage while they figured it out and our client facing apps had to go down for a while. We are coming up with a solution for instant fail-over but are not sure what the most cost effective method would be. Is a windows server cluster the best method for this? Also note we are using Parallels Virtuozzo if that makes any difference here. We found Parallels has a documented method for setting this up but it said it required a Domain Controller as well as a Fiber connection to shared storage, is all that really needed? Thanks.

    Read the article

  • London User Group Meetings this week (19th/20th May); 26th May-Agile Data Warehousing; 17th June-Kim

    - by tonyrogerson
    Got two user group meetings in London for you, we've also started the Cuppa Corner sessions - the first 3 are up on the site - A trip to First Normal Form, Lookup and Cache Transform in SSIS and Pipeline Limiter in SSIS - we are aiming for at least one per week. WhereScape are doing a breakfast meeting on Agile techniques to Data Warehousing and Kimberly Tripp and Paul Randal are over in June for a 1 day master class. Finally a 3 day performance and monitoring workshop on 22- 24th June in London...(read more)

    Read the article

  • How to create a virtual network with Azure Connect

    - by Herve Roggero
    If you are trying to establish a virtual network between machines located in disparate networks, you can either use VPN, Virtual Network or Azure Connect. If you want to establish a connection between machines located in Windows Azure, you should consider using the Virtual Network service. If you want to establish a connection between local machines and Virtual Machines in Windows Azure, you may be able to use your existing VPN device (assuming you have one), as long as the device is supported by Microsoft. If the VPN device you are using isn’t supported, or if you are trying to create a virtual network between machines from disparate networks (such as machines located in another cloud provider), you can use Azure Connect. This blog post explains how Azure Connect can help you create virtual networks between multiple servers in the cloud, various servers in different cloud environments, and on-premise. Note: Azure Connect is currently in Technical Preview. About Azure Connect Let’s do a quick review of Azure Connect. This technology implements an IPSec tunnel from machines to to a relay service located in the Microsoft cloud (Azure). So in essence, Azure Connect doesn’t provide a point-to-point connection between machines; the network communication is tunneled through the relay service. The relay service in turn offers a mechanism to enforce basic communication rules that you define through Groups. We will review this later. You could network two or more VMs in the Azure cloud (although you should consider using a Virtual Network if you go this route), or servers in the Azure cloud and other machines in the Amazon cloud for example, or even two or more on-premise servers located in different locations for which a direct network connection is not an option. You can place any number of machines in your topology. Azure Connect gives you great flexibility on how you want to build your virtual network across various environments. So Azure Connect makes sense when you want to: Connect machines located in different cloud providers Connect on-premise machines running in different locations Connect Azure VMs with on-premise (if you do not have a VPN device, or if your device is not supported) Connect Azure Roles (Worker Roles, Web Roles) with on-premise servers or in other cloud providers The diagram below shows you a high level network topology that involves machines in the Windows Azure cloud, other cloud providers and on-premise. You should note that the only required component in this diagram is the Relay itself. The other machines are optional (although your network is useful only if you have two or more machines involved). Relay agents are currently available in three geographic areas: US, Europe and Asia. You can change which region you want to use in the Windows Azure management portal. High Level Network Topology With Azure Connect Azure Connect Agent Azure Connect establishes a virtual network and creates virtual adapters on your machines; these virtual adapters communicate through the Relay using IPSec. This is achieved by installing an agent (the Azure Connect Agent) on all the machines you want in your network topology. However, you do not need to install the agent on Worker Roles and Web Roles; that’s because the agent is already installed for you. Any other machine, including Virtual Machines in Windows Azure, needs the agent installed.  To install the agent, simply go to your Windows Azure portal (http://windows.azure.com) and click on Networks on the bottom left panel. 
You will see a list of subscriptions under Connect. If you select a subscription, you will be able to click on the Install Local Endpoint icon on top. Clicking on this icon will begin the download and installation process for the agent. Activating Roles for Azure Connect As previously mentioned, you do not need to install the Azure Connect Agent on Worker Roles and Web Roles because it is already loaded. However, you do need to activate them if you want the roles to participate in your network topology. To do this, you will need to click on the Get Activation Token icon. The activation token must then be copied and placed in the configuration file of your roles. For more information on how to perform this step, visit MSDN at http://msdn.microsoft.com/en-us/library/windowsazure/gg432964.aspx. Firewall Rules Note that specific firewall rules must exist to allow the agent to communicate through the Relay. You will need to allow TCP 443 and ICMPv6. For additional information, please visit MSDN at http://msdn.microsoft.com/en-us/library/windowsazure/gg433061.aspx. CA Certificates You can optionally require agents to sign their activation request with the Relay using a trusted certificate issued by a Certificate Authority (CA). Click on Activation Options to learn more. Groups To create your network topology you must first create a group. A group represents a logical container of endpoints (or machines) that can communicate through the Relay. You can create multiple groups allowing you to manage network communication differently. For example you could create a DEVELOPMENT group and a PRODUCTION group. To add an endpoint you must first install an agent that will create a virtual adapter on the machine on which it is installed (as discussed in the previous section). Once you have created a group and installed the agents, the machines will appear in the Windows Azure management portal and you can start assigning machines to groups. The next figure shows you that I created a group called LocalGroup and assigned two machines (both on-premise) to that group. Groups and Computers in Azure Connect As I mentioned previously you can allow these machines to establish a network connection. To do this, you must enable the Interconnected option in the group. The following diagram shows you the definition of the group. In this topology I chose to include local machines only, but I could also add worker roles and web roles in the Azure Roles section (you must first activate your roles, as discussed previously). You could also add other Groups, allowing you to manage inter-group communication. Defining a Group in Azure Connect Testing the Connection Now that my agents have been installed on my two machines, the group defined and the Interconnected option checked, I can test the connection between my machines. The next screenshot shows you that I sent a PING request to DEVLAP02 from DEVDSK02. The PING request was successful. Note however that the time is in the hundreds of milliseconds on average. That is to be expected because the machines are connecting through the Relay located in the cloud. Going through the Relay introduces an extra hop in the communication chain, so if your systems rely on high performance, you may want to conduct some basic performance tests. Sending a PING Request Through The Relay Conclusion As you can see, creating a network topology between machines using the Azure Connect service is simple. 
It took me less than five minutes to create the above configuration, including the time it took to install the Azure Connect agents on the two machines. The flexibility of Azure Connect allows you to create a virtual network between disparate environments, as long as your operating systems are supported by the agent. For more information on Azure Connect, visit the MSDN website at http://msdn.microsoft.com/en-us/library/windowsazure/gg432997.aspx. About Herve Roggero Herve Roggero, Windows Azure MVP, is the founder of Blue Syntax Consulting, a company specialized in cloud computing products and services. Herve's experience includes software development, architecture, database administration and senior management with both global corporations and startup companies. Herve holds multiple certifications, including an MCDBA, MCSE, MCSD. He also holds a Master's degree in Business Administration from Indiana University. Herve is the co-author of "PRO SQL Azure" from Apress and runs the Azure Florida Association (on LinkedIn: http://www.linkedin.com/groups?gid=4177626). For more information on Blue Syntax Consulting, visit www.bluesyntax.net. Special Thanks I would like thank those that helped me figure out how Azure Connect works: Marcel Meijer - http://blogs.msmvps.com/marcelmeijer/ Michael Wood - Http://www.mvwood.com Glenn Block - http://www.codebetter.com/glennblock Yves Goeleven - http://cloudshaper.wordpress.com/ Sandrino Di Mattia - http://fabriccontroller.net/ Mike Martin - http://techmike2kx.wordpress.com

    Read the article

  • Desktop Fun: Runic Style Fonts

    - by Asian Angel
    Most of the time regular fonts are just what you need for documents, invitations, or adding text to images. But what if you are in the mood for something unusual or unique to add that perfect touch? If you like older runic style writing, then enjoy finding some new favorites for your collection with our Runic Style Fonts collection. Temple photo by ShinyShiny. Note: To manage the fonts on your Windows 7, Vista, & XP systems see our article here. The Runic Style Fonts Sable Download Worn Manuscript Download JSL Ancient Download Antropos Download Cave Gyrl Download The Roman Runes Alliance Download Ancient Geek Download Troll Download Runish Quill MK *includes two font types Download DS RUNEnglish 2 Download Runes Written *includes two font types Download Wolves And Ravens Download Art Greco Download Dalek Download Glagolitic AOE Download Linear B Download Cartouche Download Greywolf Glyphs *includes 62 individual characters Note: This group represents A – Z in all capital letters. Note: This group represents A – Z in all lower case letters. Note: This group represents the numbers 0 – 9. Download Africain *includes 62 individual characters Note: This group represents A – Z in all capital letters. Note: This group represents A – Z in all lower case letters. Note: This group represents the numbers 0 – 9. Download Cave Writings *includes 52 individual characters Note: This group represents A – Z in all capital letters. Note: This group represents A – Z in all lower case letters. Download For more great ways to customize your computer be certain to look through our Desktop Fun section.

    Read the article

  • mounting ext4 fs with block size of 65536

    - by seaquest
    I am doing some benchmarking on EXT4 performance on Compact Flash media. I have created an ext4 fs with block size of 65536. however I can not mount it on ubuntu-10.10-netbook-i386. (it is already mounting ext4 fs with 4096 bytes of block sizes) According to my readings on ext4 it should allow such big block sized fs. I want to hear your comments. root@ubuntu:~# mkfs.ext4 -b 65536 /dev/sda3 Warning: blocksize 65536 not usable on most systems. mke2fs 1.41.12 (17-May-2010) mkfs.ext4: 65536-byte blocks too big for system (max 4096) Proceed anyway? (y,n) y Warning: 65536-byte blocks too big for system (max 4096), forced to continue Filesystem label= OS type: Linux Block size=65536 (log=6) Fragment size=65536 (log=6) Stride=0 blocks, Stripe width=0 blocks 19968 inodes, 19830 blocks 991 blocks (5.00%) reserved for the super user First data block=0 1 block group 65528 blocks per group, 65528 fragments per group 19968 inodes per group Writing inode tables: done Creating journal (1024 blocks): done Writing superblocks and filesystem accounting information: done This filesystem will be automatically checked every 37 mounts or 180 days, whichever comes first. Use tune2fs -c or -i to override. root@ubuntu:~# tune2fs -l /dev/sda3 tune2fs 1.41.12 (17-May-2010) Filesystem volume name: <none> Last mounted on: <not available> Filesystem UUID: 4cf3f507-e7b4-463c-be11-5b408097099b Filesystem magic number: 0xEF53 Filesystem revision #: 1 (dynamic) Filesystem features: has_journal ext_attr resize_inode dir_index filetype extent flex_bg sparse_super large_file huge_file uninit_bg dir_nlink extra_isize Filesystem flags: signed_directory_hash Default mount options: (none) Filesystem state: clean Errors behavior: Continue Filesystem OS type: Linux Inode count: 19968 Block count: 19830 Reserved block count: 991 Free blocks: 18720 Free inodes: 19957 First block: 0 Block size: 65536 Fragment size: 65536 Blocks per group: 65528 Fragments per group: 65528 Inodes per group: 19968 Inode blocks per group: 78 Flex block group size: 16 Filesystem created: Sat Feb 5 14:39:55 2011 Last mount time: n/a Last write time: Sat Feb 5 14:40:02 2011 Mount count: 0 Maximum mount count: 37 Last checked: Sat Feb 5 14:39:55 2011 Check interval: 15552000 (6 months) Next check after: Thu Aug 4 14:39:55 2011 Lifetime writes: 70 MB Reserved blocks uid: 0 (user root) Reserved blocks gid: 0 (group root) First inode: 11 Inode size: 256 Required extra isize: 28 Desired extra isize: 28 Journal inode: 8 Default directory hash: half_md4 Directory Hash Seed: afb5b570-9d47-4786-bad2-4aacb3b73516 Journal backup: inode blocks root@ubuntu:~# mount -t ext4 /dev/sda3 /mnt/ mount: wrong fs type, bad option, bad superblock on /dev/sda3, missing codepage or helper program, or other error In some cases useful info is found in syslog - try dmesg | tail or so

    Read the article

  • Using PreApplicationStartMethod for ASP.NET 4.0 Application to Initialize assemblies

    - by ChrisD
    Sometimes your ASP.NET application needs to run some code before the application even starts. Assemblies support a custom attribute called PreApplicationStartMethod, which can be applied to any assembly loaded into your ASP.NET application; the ASP.NET engine will call the method you specify before running any of the code defined in the application. Let's walk through how to use it: 1. Add an assembly to the application and add this custom attribute to its AssemblyInfo.cs. Remember, the method you specify for initialization must be a public static void method with no arguments. Let's define a method called InitializeApp. You need to write: [assembly:PreApplicationStartMethod(typeof(MyInitializer.InitializeType), "InitializeApp")] 2. After you add this to the assembly, you need to add some code inside the InitializeType.InitializeApp method within that assembly. public static class InitializeType { public static void InitializeApp() { // Initialize application } } 3. You must reference this class library, so that when the application starts and ASP.NET loads the dependent assemblies, it will call the InitializeApp method automatically. Warning: even though this attribute is easy to use, be aware that you can define this kind of method in every assembly you reference, but there is no guarantee about the order in which the methods are called. Hence it is recommended to keep this method isolated and free of side effects on other dependent assemblies. The InitializeApp method will be called well before the Application_Start event, and even before App_Code is compiled. This attribute is mainly used to register assemblies or build providers. Read the documentation for more detail. I hope this post comes in handy.
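    Pulling the pieces above together, a minimal self-contained sketch might look like the following (the MyInitializer namespace and the Trace call are illustrative assumptions, not from the original post):

    using System.Web;

    [assembly: PreApplicationStartMethod(typeof(MyInitializer.InitializeType), "InitializeApp")]

    namespace MyInitializer
    {
        public static class InitializeType
        {
            // Must be public, static, void, and take no arguments.
            public static void InitializeApp()
            {
                // Runs before Application_Start and before App_Code is compiled,
                // so keep it isolated: register assemblies, build providers, etc.
                System.Diagnostics.Trace.WriteLine("Pre-application start code ran.");
            }
        }
    }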

    Read the article

  • Is having a class have a handleAction(type) method bad practice?

    - by zhenka
    My web application became a little too complicated to do everything in a controller, so I had to build large wrapper classes for the ORM models. The possible actions a user can trigger are all similar, and after a certain point I realized that the best way to go would be to have the constructor receive the action type as a parameter and take care of the small differences internally, as opposed to either passing many arguments or doing a lot of work in the controller. Is this a good practice? I can't really give details, for privacy reasons.
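    For illustration only, a rough C# sketch of the pattern the question seems to describe (all type and member names here are hypothetical):

    using System;

    public enum ActionType { Create, Approve, Archive }

    public class Order { /* ORM model placeholder */ }

    public class OrderAction
    {
        private readonly ActionType type;

        // The constructor receives the action type...
        public OrderAction(ActionType type)
        {
            this.type = type;
        }

        // ...and the small differences between actions are handled internally.
        public void Execute(Order order)
        {
            switch (type)
            {
                case ActionType.Create: /* create-specific steps */ break;
                case ActionType.Approve: /* approve-specific steps */ break;
                case ActionType.Archive: /* archive-specific steps */ break;
                default: throw new ArgumentOutOfRangeException();
            }
        }
    }

    An alternative often suggested for this shape is one small class per action behind a common interface, which keeps each difference in its own type instead of a growing switch.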

    Read the article

  • mdadm: Win7-install created a boot partition on one of my RAID6 drives. How to rebuild?

    - by EXIT_FAILURE
    My problem happened when I attempted to install Windows 7 on it's own SSD. The Linux OS I used which has knowledge of the software RAID system is on a SSD that I disconnected prior to the install. This was so that windows (or I) wouldn't inadvertently mess it up. However, and in retrospect, foolishly, I left the RAID disks connected, thinking that windows wouldn't be so ridiculous as to mess with a HDD that it sees as just unallocated space. Boy was I wrong! After copying over the installation files to the SSD (as expected and desired), it also created an ntfs partition on one of the RAID disks. Both unexpected and totally undesired! . I changed out the SSDs again, and booted up in linux. mdadm didn't seem to have any problem assembling the array as before, but if I tried to mount the array, I got the error message: mount: wrong fs type, bad option, bad superblock on /dev/md0, missing codepage or helper program, or other error In some cases useful info is found in syslog - try dmesg | tail or so dmesg: EXT4-fs (md0): ext4_check_descriptors: Block bitmap for group 0 not in group (block 1318081259)! EXT4-fs (md0): group descriptors corrupted! I then used qparted to delete the newly created ntfs partition on /dev/sdd so that it matched the other three /dev/sd{b,c,e}, and requested a resync of my array with echo repair > /sys/block/md0/md/sync_action This took around 4 hours, and upon completion, dmesg reports: md: md0: requested-resync done. A bit brief after a 4-hour task, though I'm unsure as to where other log files exist (I also seem to have messed up my sendmail configuration). In any case: No change reported according to mdadm, everything checks out. mdadm -D /dev/md0 still reports: Version : 1.2 Creation Time : Wed May 23 22:18:45 2012 Raid Level : raid6 Array Size : 3907026848 (3726.03 GiB 4000.80 GB) Used Dev Size : 1953513424 (1863.02 GiB 2000.40 GB) Raid Devices : 4 Total Devices : 4 Persistence : Superblock is persistent Update Time : Mon May 26 12:41:58 2014 State : clean Active Devices : 4 Working Devices : 4 Failed Devices : 0 Spare Devices : 0 Layout : left-symmetric Chunk Size : 4K Name : okamilinkun:0 UUID : 0c97ebf3:098864d8:126f44e3:e4337102 Events : 423 Number Major Minor RaidDevice State 0 8 16 0 active sync /dev/sdb 1 8 32 1 active sync /dev/sdc 2 8 48 2 active sync /dev/sdd 3 8 64 3 active sync /dev/sde Trying to mount it still reports: mount: wrong fs type, bad option, bad superblock on /dev/md0, missing codepage or helper program, or other error In some cases useful info is found in syslog - try dmesg | tail or so and dmesg: EXT4-fs (md0): ext4_check_descriptors: Block bitmap for group 0 not in group (block 1318081259)! EXT4-fs (md0): group descriptors corrupted! I'm a bit unsure where to proceed from here, and trying stuff "to see if it works" is a bit too risky for me. This is what I suggest I should attempt to do: Tell mdadm that /dev/sdd (the one that windows wrote into) isn't reliable anymore, pretend it is newly re-introduced to the array, and reconstruct its content based on the other three drives. I also could be totally wrong in my assumptions, that the creation of the ntfs partition on /dev/sdd and subsequent deletion has changed something that cannot be fixed this way. My question: Help, what should I do? If I should do what I suggested , how do I do that? 
From reading documentation, etc, I would think maybe: mdadm --manage /dev/md0 --set-faulty /dev/sdd mdadm --manage /dev/md0 --remove /dev/sdd mdadm --manage /dev/md0 --re-add /dev/sdd However, the documentation examples suggest /dev/sdd1, which seems strange to me, as there is no partition there as far as linux is concerned, just unallocated space. Maybe these commands won't work without. Maybe it makes sense to mirror the partition table of one of the other raid devices that weren't touched, before --re-add. Something like: sfdisk -d /dev/sdb | sfdisk /dev/sdd Bonus question: Why would the Windows 7 installation do something so st...potentially dangerous? Update I went ahead and marked /dev/sdd as faulty, and removed it (not physically) from the array: # mdadm --manage /dev/md0 --set-faulty /dev/sdd # mdadm --manage /dev/md0 --remove /dev/sdd However, attempting to --re-add was disallowed: # mdadm --manage /dev/md0 --re-add /dev/sdd mdadm: --re-add for /dev/sdd to /dev/md0 is not possible --add, was fine. # mdadm --manage /dev/md0 --add /dev/sdd mdadm -D /dev/md0 now reports the state as clean, degraded, recovering, and /dev/sdd as spare rebuilding. /proc/mdstat shows the recovery progress: md0 : active raid6 sdd[4] sdc[1] sde[3] sdb[0] 3907026848 blocks super 1.2 level 6, 4k chunk, algorithm 2 [4/3] [UU_U] [>....................] recovery = 2.1% (42887780/1953513424) finish=348.7min speed=91297K/sec nmon also shows expected output: ¦sdb 0% 87.3 0.0| > |¦ ¦sdc 71% 109.1 0.0|RRRRRRRRRRRRRRRRRRRRRRRRRRRRRRRRRRRR > |¦ ¦sdd 40% 0.0 87.3|WWWWWWWWWWWWWWWWWWWW > |¦ ¦sde 0% 87.3 0.0|> || It looks good so far. Crossing my fingers for another five+ hours :) Update 2 The recovery of /dev/sdd finished, with dmesg output: [44972.599552] md: md0: recovery done. [44972.682811] RAID conf printout: [44972.682815] --- level:6 rd:4 wd:4 [44972.682817] disk 0, o:1, dev:sdb [44972.682819] disk 1, o:1, dev:sdc [44972.682820] disk 2, o:1, dev:sdd [44972.682821] disk 3, o:1, dev:sde Attempting mount /dev/md0 reports: mount: wrong fs type, bad option, bad superblock on /dev/md0, missing codepage or helper program, or other error In some cases useful info is found in syslog - try dmesg | tail or so And on dmesg: [44984.159908] EXT4-fs (md0): ext4_check_descriptors: Block bitmap for group 0 not in group (block 1318081259)! [44984.159912] EXT4-fs (md0): group descriptors corrupted! I'm not sure what do do now. Suggestions? 
Output of dumpe2fs /dev/md0: dumpe2fs 1.42.8 (20-Jun-2013) Filesystem volume name: Atlas Last mounted on: /mnt/atlas Filesystem UUID: e7bfb6a4-c907-4aa0-9b55-9528817bfd70 Filesystem magic number: 0xEF53 Filesystem revision #: 1 (dynamic) Filesystem features: has_journal ext_attr resize_inode dir_index filetype extent flex_bg sparse_super large_file huge_file uninit_bg dir_nlink extra_isize Filesystem flags: signed_directory_hash Default mount options: user_xattr acl Filesystem state: clean Errors behavior: Continue Filesystem OS type: Linux Inode count: 244195328 Block count: 976756712 Reserved block count: 48837835 Free blocks: 92000180 Free inodes: 243414877 First block: 0 Block size: 4096 Fragment size: 4096 Reserved GDT blocks: 791 Blocks per group: 32768 Fragments per group: 32768 Inodes per group: 8192 Inode blocks per group: 512 RAID stripe width: 2 Flex block group size: 16 Filesystem created: Thu May 24 07:22:41 2012 Last mount time: Sun May 25 23:44:38 2014 Last write time: Sun May 25 23:46:42 2014 Mount count: 341 Maximum mount count: -1 Last checked: Thu May 24 07:22:41 2012 Check interval: 0 (<none>) Lifetime writes: 4357 GB Reserved blocks uid: 0 (user root) Reserved blocks gid: 0 (group root) First inode: 11 Inode size: 256 Required extra isize: 28 Desired extra isize: 28 Journal inode: 8 Default directory hash: half_md4 Directory Hash Seed: e177a374-0b90-4eaa-b78f-d734aae13051 Journal backup: inode blocks dumpe2fs: Corrupt extent header while reading journal super block

    Read the article

  • Is there an antipattern to describe this method of coding?

    - by P.Brian.Mackey
    I have a codebase where the programmer tended to wrap things up in places that don't make sense. For example, given the error log we have, you can log via ErrorLog.Log(ex, "friendly message"); He then added various other means to accomplish exactly the same task, e.g. SomeClass.Log(ex, "friendly message"); which simply turns around and calls the first method. This adds a layer of complexity with no added benefit. Is there an anti-pattern that describes this?
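    A minimal sketch of the kind of pass-through wrapper being described (class names are hypothetical):

    using System;

    public static class ErrorLog
    {
        public static void Log(Exception ex, string friendlyMessage)
        {
            // The real logging work happens here.
        }
    }

    public static class SomeClass
    {
        // Adds nothing: it just forwards to ErrorLog.Log with the same signature.
        public static void Log(Exception ex, string friendlyMessage)
        {
            ErrorLog.Log(ex, friendlyMessage);
        }
    }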

    Read the article

  • Algorithms for pairing a rating system to an assignment queue

    - by blunders
    I am attempting to research how to let a group of people effectively rank a set of objects (each group member will have contributed one object to the group), and then assign each member an object that is not their own, based on: their ratings of the objects, their own object's rating, and the objects remaining to be assigned. The idea is to assign objects to people based on the group's rating of each member's contribution relative to the other members' contributions, and on the personal preferences expressed via the ratings. Any suggestions for: further research, refining the statement of the problem/solution, or a solution?
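    For further research, this is close to the classic assignment problem (the Hungarian algorithm solves the maximize-total-rating version). As a simpler starting point, here is a hedged C# sketch of a greedy pass under made-up types: members whose contributions were rated best pick first, and each takes their highest-rated remaining object that is not their own.

    using System.Collections.Generic;
    using System.Linq;

    public class Member
    {
        public string Name;
        public int OwnObjectId;
        public double ContributionScore;        // how the group rated this member's object
        public Dictionary<int, double> Ratings; // objectId -> rating this member gave
    }

    public static class GreedyAssigner
    {
        public static Dictionary<string, int> Assign(IEnumerable<Member> members)
        {
            var assigned = new Dictionary<string, int>();
            var taken = new HashSet<int>();

            // Highest-rated contributors choose first.
            foreach (var member in members.OrderByDescending(m => m.ContributionScore))
            {
                var pick = member.Ratings
                    .Where(r => r.Key != member.OwnObjectId && !taken.Contains(r.Key))
                    .OrderByDescending(r => r.Value)
                    .Select(r => (int?)r.Key)
                    .FirstOrDefault();

                if (pick.HasValue)
                {
                    assigned[member.Name] = pick.Value;
                    taken.Add(pick.Value);
                }
            }
            return assigned;
        }
    }

    A greedy pass like this is not guaranteed to be globally optimal, but it is easy to reason about and can be a baseline to compare against an exact assignment-problem solver.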

    Read the article

  • protected abstract override Foo(); – er... what?

    - by Muljadi Budiman
    A couple of weeks back, a co-worker was pondering a situation he was facing.  He was looking at the following class hierarchy: abstract class OriginalBase { protected virtual void Test() { } } abstract class SecondaryBase : OriginalBase { } class FirstConcrete : SecondaryBase { } class SecondConcrete : SecondaryBase { } Basically, the first 2 classes are abstract classes, but the OriginalBase class has Test implemented as a virtual method.  What he needed was to force concrete class implementations to provide a proper body for the Test method, but he can’t do mark the method as abstract since it is already implemented in the OriginalBase class. One way to solve this is to hide the original implementation and then force further derived classes to properly implemented another method that will replace it.  The code will look like the following: abstract class OriginalBase { protected virtual void Test() { } } abstract class SecondaryBase : OriginalBase { protected sealed override void Test() { Test2(); } protected abstract void Test2(); } class FirstConcrete : SecondaryBase { // Have to override Test2 here } class SecondConcrete : SecondaryBase { // Have to override Test2 here } With the above code, SecondaryBase class will seal the Test method so it can no longer be overridden.  Then it also made an abstract method Test2 available, which will force the concrete classes to override and provide the proper implementation.  Calling Test will properly call the proper Test2 implementation in each respective concrete classes. I was wondering if there’s a way to tell the compiler to treat the Test method in SecondaryBase as abstract, and apparently you can, by combining the abstract and override keywords.  The code looks like the following: abstract class OriginalBase { protected virtual void Test() { } } abstract class SecondaryBase : OriginalBase { protected abstract override void Test(); } class FirstConcrete : SecondaryBase { // Have to override Test here } class SecondConcrete : SecondaryBase { // Have to override Test here } The method signature makes it look a bit funky, because most people will treat the override keyword to mean you then need to provide the implementation as well, but the effect is exactly as we desired.  The concepts are still valid: you’re overriding the Test method from its original implementation in the OriginalBase class, but you don’t want to implement it, rather you want to classes that derive from SecondaryBase to provide the proper implementation, so you also make it as an abstract method. I don’t think I’ve ever seen this before in the wild, so it was pretty neat to find that the compiler does support this case.
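    As a small illustration (the method body here is made up), once SecondaryBase re-declares Test as abstract override, the compiler forces a concrete class to supply an implementation or be declared abstract itself:

    using System;

    class FirstConcrete : SecondaryBase
    {
        // Required: without this override, FirstConcrete would not compile
        // unless it were itself marked abstract.
        protected override void Test()
        {
            Console.WriteLine("FirstConcrete.Test");
        }
    }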

    Read the article

  • RadioButtons and Lambda Expressions

    - by MightyZot
    Radio buttons operate in groups. They are used to present mutually exclusive lists of options. Since I started programming in Windows 20 years ago, I have always been frustrated about how they are implemented. To make them operate as a group, you put your radio buttons in a group box. Conversely, to group radio buttons in HTML, you simply give them all the same name. Radio buttons with the same name or ID in HTML operate as one mutually exclusive group of options. In C#, all your radio buttons must have unique names and you use group boxes to group them. I’m in the process of converting some old code to C# and I’m tasked with creating a user control with groups of radio buttons on it. I started out writing the traditional switch…case statements to check the appropriate radio button based upon value, loops to uncheck them all, etc. Then it occurred to me that I could stick the radio buttons in a Dictionary or List and use Lambda expressions to make my code a lot more maintainable. So, here is what I ended up with: Here is a dictionary that contains my list of radio buttons and their values. I used their values as the keys, so that I can select them by value. Now, instead of using loops and switch…case statements to control the radio buttons, I use the lambda syntax and extension methods. Selecting a Radio Button by Value This code is inside of a property accessor, so “value” represents the value passed into the property accessor. The “First” extension method uses the delegate represented by the lambda expression to select the radio button (actually KeyValuePair) that represents the passed in value. Finally, the resulting checkbox is checked. Since the radio buttons are in the same group, they function as a group, the appropriate radio button is selected while the others are unselected. Reading the Value This is the get accessor for the property that returns the value of the checked radio button. Now, if you’re using binding, this code is likely not necessary; however, I didn’t want to use binding in this case, so I think this is a good alternative to the traditional loops and switch…case statements.
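    The code in the original post was published as screenshots, so it is missing from this excerpt. Here is a rough C# reconstruction of the idea being described (a sketch with invented control names and values, not the author's exact code):

    using System.Collections.Generic;
    using System.Linq;
    using System.Windows.Forms;

    public class RatingControl : UserControl
    {
        // Keyed by the value each radio button represents.
        private readonly Dictionary<int, RadioButton> options;

        public RatingControl()
        {
            options = new Dictionary<int, RadioButton>
            {
                { 1, new RadioButton { Text = "Low",    Left = 0   } },
                { 2, new RadioButton { Text = "Medium", Left = 80  } },
                { 3, new RadioButton { Text = "High",   Left = 160 } },
            };
            foreach (RadioButton button in options.Values)
            {
                Controls.Add(button); // same parent => one mutually exclusive group
            }
        }

        public int SelectedValue
        {
            // Read the value of whichever radio button is checked.
            get { return options.First(pair => pair.Value.Checked).Key; }
            // Check the radio button that represents the value; the rest uncheck automatically.
            set { options.First(pair => pair.Key == value).Value.Checked = true; }
        }
    }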

    Read the article

  • Raid superblock missing on single parition. Recovery needed!

    - by user171639
    Ok so I have a 2 TB raid 1 setup that has three partitions: sdc1: linux sdc2: swap sdc3: LVM for data However the LVM will no longer mount. So I thought that I would take the first drive, mount it in linux (ive done this b4), and reset the spare drive to copy the data. Normally I can mount a single drive for data recovery using: sudo su apt-get install mdadm lvm2 mdadm --assemble --scan modprobe dm-mod vgscan vgchange -ay c mount -o ro /dev/c/c /mnt Unfortunately, vgscan doesnot recognize the data partition. It appears as though the superblock on the first drive's data partition was erased while syncing with the second. So now I cannot mount that partition and the second drive is stuck in spare mode. Any ideas? Or a way to force mount the data partition just to copy the data? knoppix@Microknoppix:~$ sudo su root@Microknoppix:/home/knoppix# apt-get install mdadm lvm2 Reading package lists... Done Building dependency tree Reading state information... Done lvm2 is already the newest version. mdadm is already the newest version. 0 upgraded, 0 newly installed, 0 to remove and 551 not upgraded. root@Microknoppix:/home/knoppix# mdadm --assemble --scan mdadm: /dev/md/1 has been started with 1 drive (out of 2). mdadm: /dev/md/0 has been started with 1 drive (out of 2). root@Microknoppix:/home/knoppix# modprobe dm-mod root@Microknoppix:/home/knoppix# vgscan Reading all physical volumes. This may take a while... No volume groups found root@Microknoppix:/home/knoppix# cat /proc/mdstat Personalities : [raid1] md0 : active raid1 sdc1[2] 4193268 blocks super 1.2 [2/1] [U_] md1 : active raid1 sdc2[2] 524276 blocks super 1.2 [2/1] [U_] unused devices: <none> root@Microknoppix:/home/knoppix# mdadm -v --assemble --auto=yes /dev/md2 /dev/sdc3 mdadm: looking for devices for /dev/md2 mdadm: no recogniseable superblock on /dev/sdc3 mdadm: /dev/sdc3 has no superblock - assembly aborted root@Microknoppix:/home/knoppix# dumpe2fs /dev/md0 | grep -i superblock dumpe2fs 1.42.4 (12-Jun-2012) Primary superblock at 0, Group descriptors at 1-1 Backup superblock at 32768, Group descriptors at 32769-32769 Backup superblock at 98304, Group descriptors at 98305-98305 Backup superblock at 163840, Group descriptors at 163841-163841 Backup superblock at 229376, Group descriptors at 229377-229377 Backup superblock at 294912, Group descriptors at 294913-294913 Backup superblock at 819200, Group descriptors at 819201-819201 Backup superblock at 884736, Group descriptors at 884737-884737 root@Microknoppix:/home/knoppix# Notes: I can read the super block from the spare drive. I was gonna try and restore the superblock from one of the backups, but i dont know how or if this would work. I also heard creating a new array (mdadm --create) using the same parameters will not delete the data on the drive but i didnt want to risk it. Recommendations?

    Read the article

  • Is there an application or method to log of data transfers?

    - by Gaurav_Java
    My friend asked me for some files, and I let him take them from my system. I did not watch him do it, so I was left with a doubt: what extra files or data did he take from my system? I was wondering whether there is any application or method that shows what data was copied to which USB device (showing the device name if available, otherwise the device ID) and what data was copied onto the Ubuntu machine. It would be something like a history of USB and system data transfers. I think this feature exists in KDE. This would be really useful in many ways, as it provides a real-time monitoring utility for USB mass storage device activity on any machine.

    Read the article

  • Reliable method for google analytics tracking for print advertising campaign?

    - by chrisjlee
    A client is looking to track responses to a newspaper ad to measure its success. They have a rigid business requirement that it be a unique domain... e.g. foowidgetsnews.net instead of foowidgets.com/contact-form-page.php. What is the most reliable method of building a redirect URL to a landing page so that the visit will be tracked in Google Analytics as coming from the newspaper? Finally, we would like to track foowidgetsnews.net as the main URL in Google Analytics, because a 301 redirect isn't tracked in Google Analytics the way we would like it to be.

    Read the article

  • Is the php method md5() secure? Can it be used for passwords? [migrated]

    - by awiebe
    So submitting a form to a PHP script causes the form values to be sent to the server, and then they are processed. If you want to store a password in your DB, then you want it to be a cryptographic hash (so your client side is secure). Can you generate an MD5 hash using PHP securely (without submitting the user:password pair in the clear), or is there an alternative standard method of doing this, without having the unencrypted password leave the client's machine? Sorry if this is a stupid question; I'm kind of new at this. I think this can be done somehow using HTTPS, and on that note, if a site's login page does not use HTTPS, does that mean that while the database storage is secure, the transport is not?

    Read the article

  • Help with mysql sum and group query and managing jquery graph results.

    - by Scarface
    Hey guys, I have a system I am trying to design that will retrieve information from a database so that it can be plotted in a jQuery graph. I need to retrieve the information and somehow put it in the necessary format (for example, two coordinates: var d = [[1269417600000, 10],[1269504000000, 15]];). The table I am selecting from stores user votes, with fields points_id (1 = vote up, 2 = vote down), user_id, timestamp, and topic_id. What I need to do is select all the votes, group them into their respective days, and then sum the difference between 1-votes and 2-votes for each day. I then need to display the data in the plotting format shown earlier, for example April 1, 4 votes. The entries need to be separated by commas, except the last one, so I am not sure how to approach that. I showed an example below of the kind of thing I need, but it is not correct: echo "var d=["; $query=mysql_query("SELECT *, SUM(IF(points_id = \"1\", 1,0))-SUM(IF(points_id = \"2\", 1,0)) AS 'total' FROM points LEFT JOIN topic ON topic.topic_id=points.topic_id WHERE topic.creator='$user' GROUP BY timestamp HAVING certain time interval"); while ($row=mysql_fetch_assoc($query)){ $timestamp=$row['timestamp']; $votes=$row['total']; echo "[$timestamp,$votes],"; } echo "];";

    Read the article

  • Is there a way to get number of connections in Signalr hub group?

    - by pajo
    Here is my problem: I want to track whether a user is online or offline and notify other clients about it. I'm using hubs and have implemented both the IConnected and IDisconnect interfaces. My idea was to send a notification to all clients when the hub detects a connect or disconnect. By default, when a user refreshes the page he gets a new connection id, and eventually the previous connection will fire a disconnect, notifying other clients that the user is offline even though he is actually online. I tried to use my own ConnectionIdFactory returning the username as the connection id, but with multiple tabs open it will at some point detect the user's connection id as disconnected, and after that the client-side hub keeps trying (and failing) to reconnect in an endless loop, wasting memory and CPU and making the browser almost unusable. I needed to fix it fast, so I removed my factory and now I add every new connection to a group keyed by username, so I can easily notify a single user across all of his connections. But then I have the problem of detecting whether the user is online or offline, as I don't know how many active connections the user has. So I'm wondering: is there a way to get the number of connections in one group? Or does anybody have a better idea of how to track when a user goes offline? I'm using SignalR 0.4.
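    SignalR's group API does not expose a member count, so a common workaround is to keep your own per-user connection count and call it from the hub's connect/disconnect handlers. The exact hub interface method names vary between SignalR versions, so treat this C# sketch as an outline rather than the exact API:

    using System.Collections.Concurrent;
    using System.Collections.Generic;

    // Tracks how many live connections each user currently has.
    public static class PresenceTracker
    {
        private static readonly ConcurrentDictionary<string, HashSet<string>> Connections =
            new ConcurrentDictionary<string, HashSet<string>>();

        // Call from the hub's connect handler. Returns true if the user just came online.
        public static bool UserConnected(string userName, string connectionId)
        {
            var set = Connections.GetOrAdd(userName, _ => new HashSet<string>());
            lock (set)
            {
                set.Add(connectionId);
                return set.Count == 1;
            }
        }

        // Call from the hub's disconnect handler. Returns true if the user just went offline.
        public static bool UserDisconnected(string userName, string connectionId)
        {
            HashSet<string> set;
            if (!Connections.TryGetValue(userName, out set)) return false;
            lock (set)
            {
                set.Remove(connectionId);
                return set.Count == 0;
            }
        }
    }

    When UserConnected returns true you can broadcast "user online" to the other clients, and when UserDisconnected returns true you can broadcast "user offline"; intermediate tabs opening and closing then no longer generate spurious notifications.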

    Read the article

< Previous Page | 179 180 181 182 183 184 185 186 187 188 189 190  | Next Page >