Search Results

Search found 533 results on 22 pages for 'variant'.


  • How do I pass W3 validation for a Google Checkout URL?

    - by Dinesh
    When I validate the page with the W3 validator, I get a few errors for the code below: <input type="image" name="Google Checkout" alt="Fast checkout through Google" src="https://sandbox.google.com/checkout/buttons/checkout.gif?merchant_id=xxxxxxxxx&w=168&h=44&style=white&variant=text&loc=en_US" /> The errors are as follows: 1. cannot generate system identifier for general entity "w" 2. reference to entity "w" for which no system identifier could be generated 3. general entity "h" not defined and no default entity 4. reference to entity "h" for which no system identifier could be generated 5. general entity "style" not defined and no default entity 6. reference to entity "style" for which no system identifier could be generated 7. general entity "variant" not defined and no default entity 8. reference to entity "variant" for which no system identifier could be generated 9. general entity "loc" not defined and no default entity 10. reference to entity "loc" for which no system identifier could be generated These are the only errors coming from the URL; is there a way to pass W3 validation for this URL?
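
    For context (an observation added here, not part of the original question): the validator treats each bare "&" in the src attribute as the start of an entity reference, which is why it reports the w, h, style, variant and loc "general entity" errors; in HTML source, every "&" inside an attribute value should be escaped as "&amp;". A minimal C# sketch showing the escaped form the validator expects:

        using System;
        using System.Net;

        class EscapeDemo
        {
            static void Main()
            {
                // The raw checkout URL exactly as it appears in the src attribute
                var rawUrl = "https://sandbox.google.com/checkout/buttons/checkout.gif"
                           + "?merchant_id=xxxxxxxxx&w=168&h=44&style=white&variant=text&loc=en_US";

                // WebUtility.HtmlEncode escapes each '&' as '&amp;'
                Console.WriteLine(WebUtility.HtmlEncode(rawUrl));
            }
        }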

    Read the article

  • Java Compiler: Optimization of "cascaded" ifs and best practices?

    - by jens
    Hello, does the Java compiler optimize a statement like this if (a == true) { if (b == true) { if (c == true) { if (d == true) { //code to process stands here } } } } into if (a == true && b == true && c == true && d == true)? So that's my first question: do both take exactly the same "CPU cycles", or is the first variant slower? My second question is: is the first variant with the cascaded ifs considered bad programming style because it is so verbose? (I like the first variant as I can logically group my expressions better and comment them better (my if statements are more complex than in the example), but maybe that's bad programming style, and even slower; that's why I am asking...) Thanks Jens
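
    For reference (added here for illustration): the flattened form short-circuits, so with && the later operands are only evaluated when everything before them was true, just as the inner blocks of the nested version only run when the outer tests passed. A minimal sketch of the two shapes, written in C# here (Java's && has the same short-circuit behaviour):

        static class CascadeDemo
        {
            static void Process(bool a, bool b, bool c, bool d)
            {
                // Nested form: each inner block runs only when the outer test passed
                if (a) { if (b) { if (c) { if (d) { /* code to process */ } } } }

                // Flattened form: && short-circuits, so b, c and d are only
                // evaluated while every earlier operand was true
                if (a && b && c && d) { /* code to process */ }
            }
        }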

    Read the article

  • Why the difference in speed?

    - by AngryHacker
    Consider this code: function Foo(ds as OtherDLL.BaseObj) dim lngRowIndex as long dim lngColIndex as long for lngRowIndex = 1 to ubound(ds.Data, 2) for lngColIndex = 1 to ds.Columns.Count Debug.Print ds.Data(lngRowIndex, lngColIndex) next next end function OK, a little context. Parameter ds is of type OtherDLL.BaseObj which is defined in a referenced ActiveX DLL. ds.Data is a variant 2-dimensional array (one dimension carries the data, the other one carries the column index). ds.Columns is a Collection of columns in ds.Data. Assuming there are at least 400 rows of data and 25 columns, this code takes about 15 seconds to run on my machine. Kind of unbelievable. However, if I copy the variant array to a local variable, so: function Foo(ds as OtherDLL.BaseObj) dim lngRowIndex as long dim lngColIndex as long dim v as variant v = ds.Data for lngRowIndex = 1 to ubound(v, 2) for lngColIndex = 1 to ds.Columns.Count Debug.Print v(lngRowIndex, lngColIndex) next next end function the entire thing processes in barely any noticeable time (basically close to 0). Why?

    Read the article

  • Converting an IP address to a number:

    - by Quandary
    Question: When I convert the IP address 192.168.115.67 to a number, is it done like this: 192*256^3 + 168*256^2 + 115*256^1 + 67*256^0 = 3232265027 or like this: 192*256^0 + 168*256^1 + 115*256^2 + 67*256^3 = 1131653312 I find both variants online, and frankly it doesn't matter as long as I reverse it according to the conversion process. But I want to calculate the IPv6 address from the IPv4 address, and it seems both variants are on the web... resulting in different IPv6 addresses, and only one can be correct... I use the 1131653312 variant, as 1131653312 is the variant .NET gives me, but 3232265027 is the variant I find on the web for IPv4 to IPv6 conversion...
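
    For context (added illustration, not part of the original question): the first form treats the leftmost octet as the most significant byte, which is network byte order, the convention IPv4 itself uses; the second is a little-endian reading of the same four bytes, which is what you typically get from BitConverter on an x86 machine. A minimal C# sketch computing both values for the address in the question:

        using System;
        using System.Net;

        class IpOrderDemo
        {
            static void Main()
            {
                // Bytes come back in the order they appear in the dotted string: 192, 168, 115, 67
                byte[] b = IPAddress.Parse("192.168.115.67").GetAddressBytes();

                // Leftmost octet most significant (network / big-endian order)
                uint bigEndian = ((uint)b[0] << 24) | ((uint)b[1] << 16) | ((uint)b[2] << 8) | b[3];

                // Leftmost octet least significant (little-endian reading of the same bytes)
                uint littleEndian = ((uint)b[3] << 24) | ((uint)b[2] << 16) | ((uint)b[1] << 8) | b[0];

                Console.WriteLine(bigEndian);    // 3232265027
                Console.WriteLine(littleEndian); // 1131653312
            }
        }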

    Read the article

  • C#/.NET Little Wonders: The Generic Func Delegates

    - by James Michael Hare
    Once again, in this series of posts I look at the parts of the .NET Framework that may seem trivial, but can help improve your code by making it easier to write and maintain. The index of all my past little wonders posts can be found here. Back in one of my three original “Little Wonders” Trilogy of posts, I had listed generic delegates as one of the Little Wonders of .NET.  Later, someone posted a comment saying said that they would love more detail on the generic delegates and their uses, since my original entry just scratched the surface of them. Last week, I began our look at some of the handy generic delegates built into .NET with a description of delegates in general, and the Action family of delegates.  For this week, I’ll launch into a look at the Func family of generic delegates and how they can be used to support generic, reusable algorithms and classes. Quick Delegate Recap Delegates are similar to function pointers in C++ in that they allow you to store a reference to a method.  They can store references to either static or instance methods, and can actually be used to chain several methods together in one delegate. Delegates are very type-safe and can be satisfied with any standard method, anonymous method, or a lambda expression.  They can also be null as well (refers to no method), so care should be taken to make sure that the delegate is not null before you invoke it. Delegates are defined using the keyword delegate, where the delegate’s type name is placed where you would typically place the method name: 1: // This delegate matches any method that takes string, returns nothing 2: public delegate void Log(string message); This delegate defines a delegate type named Log that can be used to store references to any method(s) that satisfies its signature (whether instance, static, lambda expression, etc.). Delegate instances then can be assigned zero (null) or more methods using the operator = which replaces the existing delegate chain, or by using the operator += which adds a method to the end of a delegate chain: 1: // creates a delegate instance named currentLogger defaulted to Console.WriteLine (static method) 2: Log currentLogger = Console.Out.WriteLine; 3:  4: // invokes the delegate, which writes to the console out 5: currentLogger("Hi Standard Out!"); 6:  7: // append a delegate to Console.Error.WriteLine to go to std error 8: currentLogger += Console.Error.WriteLine; 9:  10: // invokes the delegate chain and writes message to std out and std err 11: currentLogger("Hi Standard Out and Error!"); While delegates give us a lot of power, it can be cumbersome to re-create fairly standard delegate definitions repeatedly, for this purpose the generic delegates were introduced in various stages in .NET.  These support various method types with particular signatures. Note: a caveat with generic delegates is that while they can support multiple parameters, they do not match methods that contains ref or out parameters. If you want to a delegate to represent methods that takes ref or out parameters, you will need to create a custom delegate. We’ve got the Func… delegates Just like it’s cousin, the Action delegate family, the Func delegate family gives us a lot of power to use generic delegates to make classes and algorithms more generic.  Using them keeps us from having to define a new delegate type when need to make a class or algorithm generic. Remember that the point of the Action delegate family was to be able to perform an “action” on an item, with no return results.  
Thus Action delegates can be used to represent most methods that take 0 to 16 arguments but return void.  You can assign a method The Func delegate family was introduced in .NET 3.5 with the advent of LINQ, and gives us the power to define a function that can be called on 0 to 16 arguments and returns a result.  Thus, the main difference between Action and Func, from a delegate perspective, is that Actions return nothing, but Funcs return a result. The Func family of delegates have signatures as follows: Func<TResult> – matches a method that takes no arguments, and returns value of type TResult. Func<T, TResult> – matches a method that takes an argument of type T, and returns value of type TResult. Func<T1, T2, TResult> – matches a method that takes arguments of type T1 and T2, and returns value of type TResult. Func<T1, T2, …, TResult> – and so on up to 16 arguments, and returns value of type TResult. These are handy because they quickly allow you to be able to specify that a method or class you design will perform a function to produce a result as long as the method you specify meets the signature. For example, let’s say you were designing a generic aggregator, and you wanted to allow the user to define how the values will be aggregated into the result (i.e. Sum, Min, Max, etc…).  To do this, we would ask the user of our class to pass in a method that would take the current total, the next value, and produce a new total.  A class like this could look like: 1: public sealed class Aggregator<TValue, TResult> 2: { 3: // holds method that takes previous result, combines with next value, creates new result 4: private Func<TResult, TValue, TResult> _aggregationMethod; 5:  6: // gets or sets the current result of aggregation 7: public TResult Result { get; private set; } 8:  9: // construct the aggregator given the method to use to aggregate values 10: public Aggregator(Func<TResult, TValue, TResult> aggregationMethod = null) 11: { 12: if (aggregationMethod == null) throw new ArgumentNullException("aggregationMethod"); 13:  14: _aggregationMethod = aggregationMethod; 15: } 16:  17: // method to add next value 18: public void Aggregate(TValue nextValue) 19: { 20: // performs the aggregation method function on the current result and next and sets to current result 21: Result = _aggregationMethod(Result, nextValue); 22: } 23: } Of course, LINQ already has an Aggregate extension method, but that works on a sequence of IEnumerable<T>, whereas this is designed to work more with aggregating single results over time (such as keeping track of a max response time for a service). 
We could then use this generic aggregator to find the sum of a series of values over time, or the max of a series of values over time (among other things): 1: // creates an aggregator that adds the next to the total to sum the values 2: var sumAggregator = new Aggregator<int, int>((total, next) => total + next); 3:  4: // creates an aggregator (using static method) that returns the max of previous result and next 5: var maxAggregator = new Aggregator<int, int>(Math.Max); So, if we were timing the response time of a web method every time it was called, we could pass that response time to both of these aggregators to get an idea of the total time spent in that web method, and the max time spent in any one call to the web method: 1: // total will be 13 and max 13 2: int responseTime = 13; 3: sumAggregator.Aggregate(responseTime); 4: maxAggregator.Aggregate(responseTime); 5:  6: // total will be 20 and max still 13 7: responseTime = 7; 8: sumAggregator.Aggregate(responseTime); 9: maxAggregator.Aggregate(responseTime); 10:  11: // total will be 40 and max now 20 12: responseTime = 20; 13: sumAggregator.Aggregate(responseTime); 14: maxAggregator.Aggregate(responseTime); The Func delegate family is useful for making generic algorithms and classes, and in particular allows the caller of the method or user of the class to specify a function to be performed in order to generate a result. What is the result of a Func delegate chain? If you remember, we said earlier that you can assign multiple methods to a delegate by using the += operator to chain them.  So how does this affect delegates such as Func that return a value, when applied to something like the code below? 1: Func<int, int, int> combo = null; 2:  3: // What if we wanted to aggregate the sum and max together? 4: combo += (total, next) => total + next; 5: combo += Math.Max; 6:  7: // what is the result? 8: var comboAggregator = new Aggregator<int, int>(combo); Well, in .NET if you chain multiple methods in a delegate, they will all get invoked, but the result of the delegate is the result of the last method invoked in the chain.  Thus, this aggregator would always result in the Math.Max() result.  The other chained method (the sum) gets executed first, but it’s result is thrown away: 1: // result is 13 2: int responseTime = 13; 3: comboAggregator.Aggregate(responseTime); 4:  5: // result is still 13 6: responseTime = 7; 7: comboAggregator.Aggregate(responseTime); 8:  9: // result is now 20 10: responseTime = 20; 11: comboAggregator.Aggregate(responseTime); So remember, you can chain multiple Func (or other delegates that return values) together, but if you do so you will only get the last executed result. Func delegates and co-variance/contra-variance in .NET 4.0 Just like the Action delegate, as of .NET 4.0, the Func delegate family is contra-variant on its arguments.  In addition, it is co-variant on its return type.  To support this, in .NET 4.0 the signatures of the Func delegates changed to: Func<out TResult> – matches a method that takes no arguments, and returns value of type TResult (or a more derived type). Func<in T, out TResult> – matches a method that takes an argument of type T (or a less derived type), and returns value of type TResult(or a more derived type). Func<in T1, in T2, out TResult> – matches a method that takes arguments of type T1 and T2 (or less derived types), and returns value of type TResult (or a more derived type). 
Func<in T1, in T2, …, out TResult> – and so on up to 16 arguments, and returns value of type TResult (or a more derived type). Notice the addition of the in and out keywords before each of the generic type placeholders.  As we saw last week, the in keyword is used to specify that a generic type can be contra-variant -- it can match the given type or a type that is less derived.  However, the out keyword, is used to specify that a generic type can be co-variant -- it can match the given type or a type that is more derived. On contra-variance, if you are saying you need an function that will accept a string, you can just as easily give it an function that accepts an object.  In other words, if you say “give me an function that will process dogs”, I could pass you a method that will process any animal, because all dogs are animals.  On the co-variance side, if you are saying you need a function that returns an object, you can just as easily pass it a function that returns a string because any string returned from the given method can be accepted by a delegate expecting an object result, since string is more derived.  Once again, in other words, if you say “give me a method that creates an animal”, I can pass you a method that will create a dog, because all dogs are animals. It really all makes sense, you can pass a more specific thing to a less specific parameter, and you can return a more specific thing as a less specific result.  In other words, pay attention to the direction the item travels (parameters go in, results come out).  Keeping that in mind, you can always pass more specific things in and return more specific things out. For example, in the code below, we have a method that takes a Func<object> to generate an object, but we can pass it a Func<string> because the return type of object can obviously accept a return value of string as well: 1: // since Func<object> is co-variant, this will access Func<string>, etc... 2: public static string Sequence(int count, Func<object> generator) 3: { 4: var builder = new StringBuilder(); 5:  6: for (int i=0; i<count; i++) 7: { 8: object value = generator(); 9: builder.Append(value); 10: } 11:  12: return builder.ToString(); 13: } Even though the method above takes a Func<object>, we can pass a Func<string> because the TResult type placeholder is co-variant and accepts types that are more derived as well: 1: // delegate that's typed to return string. 2: Func<string> stringGenerator = () => DateTime.Now.ToString(); 3:  4: // This will work in .NET 4.0, but not in previous versions 5: Sequence(100, stringGenerator); Previous versions of .NET implemented some forms of co-variance and contra-variance before, but .NET 4.0 goes one step further and allows you to pass or assign an Func<A, BResult> to a Func<Y, ZResult> as long as A is less derived (or same) as Y, and BResult is more derived (or same) as ZResult. Sidebar: The Func and the Predicate A method that takes one argument and returns a bool is generally thought of as a predicate.  Predicates are used to examine an item and determine whether that item satisfies a particular condition.  Predicates are typically unary, but you may also have binary and other predicates as well. 
Predicates are often used to filter results, such as in the LINQ Where() extension method: 1: var numbers = new[] { 1, 2, 4, 13, 8, 10, 27 }; 2:  3: // call Where() using a predicate which determines if the number is even 4: var evens = numbers.Where(num => num % 2 == 0); As of .NET 3.5, predicates are typically represented as Func<T, bool> where T is the type of the item to examine.  Previous to .NET 3.5, there was a Predicate<T> type that tended to be used (which we’ll discuss next week) and is still supported, but most developers recommend using Func<T, bool> now, as it prevents confusion with overloads that accept unary predicates and binary predicates, etc.: 1: // this seems more confusing as an overload set, because of Predicate vs Func 2: public static SomeMethod(Predicate<int> unaryPredicate) { } 3: public static SomeMethod(Func<int, int, bool> binaryPredicate) { } 4:  5: // this seems more consistent as an overload set, since just uses Func 6: public static SomeMethod(Func<int, bool> unaryPredicate) { } 7: public static SomeMethod(Func<int, int, bool> binaryPredicate) { } Also, even though Predicate<T> and Func<T, bool> match the same signatures, they are separate types!  Thus you cannot assign a Predicate<T> instance to a Func<T, bool> instance and vice versa: 1: // the same method, lambda expression, etc can be assigned to both 2: Predicate<int> isEven = i => (i % 2) == 0; 3: Func<int, bool> alsoIsEven = i => (i % 2) == 0; 4:  5: // but the delegate instances cannot be directly assigned, strongly typed! 6: // ERROR: cannot convert type... 7: isEven = alsoIsEven; 8:  9: // however, you can assign by wrapping in a new instance: 10: isEven = new Predicate<int>(alsoIsEven); 11: alsoIsEven = new Func<int, bool>(isEven); So, the general advice that seems to come from most developers is that Predicate<T> is still supported, but we should use Func<T, bool> for consistency in .NET 3.5 and above. Sidebar: Func as a Generator for Unit Testing One area of difficulty in unit testing can be unit testing code that is based on time of day.  We’d still want to unit test our code to make sure the logic is accurate, but we don’t want the results of our unit tests to be dependent on the time they are run. One way (of many) around this is to create an internal generator that will produce the “current” time of day.  This would default to returning result from DateTime.Now (or some other method), but we could inject specific times for our unit testing.  Generators are typically methods that return (generate) a value for use in a class/method. For example, say we are creating a CacheItem<T> class that represents an item in the cache, and we want to make sure the item shows as expired if the age is more than 30 seconds.  
Such a class could look like: 1: // responsible for maintaining an item of type T in the cache 2: public sealed class CacheItem<T> 3: { 4: // helper method that returns the current time 5: private static Func<DateTime> _timeGenerator = () => DateTime.Now; 6:  7: // allows internal access to the time generator 8: internal static Func<DateTime> TimeGenerator 9: { 10: get { return _timeGenerator; } 11: set { _timeGenerator = value; } 12: } 13:  14: // time the item was cached 15: public DateTime CachedTime { get; private set; } 16:  17: // the item cached 18: public T Value { get; private set; } 19:  20: // item is expired if older than 30 seconds 21: public bool IsExpired 22: { 23: get { return _timeGenerator() - CachedTime > TimeSpan.FromSeconds(30.0); } 24: } 25:  26: // creates the new cached item, setting cached time to "current" time 27: public CacheItem(T value) 28: { 29: Value = value; 30: CachedTime = _timeGenerator(); 31: } 32: } Then, we can use this construct to unit test our CacheItem<T> without any time dependencies: 1: var baseTime = DateTime.Now; 2:  3: // start with current time stored above (so doesn't drift) 4: CacheItem<int>.TimeGenerator = () => baseTime; 5:  6: var target = new CacheItem<int>(13); 7:  8: // now add 15 seconds, should still be non-expired 9: CacheItem<int>.TimeGenerator = () => baseTime.AddSeconds(15); 10:  11: Assert.IsFalse(target.IsExpired); 12:  13: // now add 31 seconds, should now be expired 14: CacheItem<int>.TimeGenerator = () => baseTime.AddSeconds(31); 15:  16: Assert.IsTrue(target.IsExpired); Now we can unit test for 1 second before, 1 second after, 1 millisecond before, 1 day after, etc.  Func delegates can be a handy tool for this type of value generation to support more testable code.  Summary Generic delegates give us a lot of power to make truly generic algorithms and classes.  The Func family of delegates is a great way to be able to specify functions to calculate a result based on 0-16 arguments.  Stay tuned in the weeks that follow for other generic delegates in the .NET Framework!   Tweet Technorati Tags: .NET, C#, CSharp, Little Wonders, Generics, Func, Delegates
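
    As a small addendum to the ref/out caveat mentioned earlier in the article (this example is added here for illustration and is not from the original post): a method with an out parameter cannot be matched by any Func delegate, so you have to declare a custom delegate type for it. A minimal C# sketch:

        // No Func<...> overload can represent this signature because of the out parameter
        public delegate bool TryParser<T>(string input, out T result);

        public static class TryParserDemo
        {
            public static void Main()
            {
                // int.TryParse(string, out int) matches the custom delegate exactly
                TryParser<int> parser = int.TryParse;

                int value;
                if (parser("42", out value))
                {
                    System.Console.WriteLine(value); // prints 42
                }
            }
        }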

    Read the article

  • USB-creator: Error erasing device: Unknown or unsupported erase type

    - by Mike Williamson
    I created a live usb using usb-creator-gtk. I installed Ubuntu with it and all was good with the world. Now I am trying to use the same memory stick and create a live USB for 14.04 and I get the following error when trying to erase the disk. org.freedesktop.DBus.Python.gi._glib.GError: Traceback (most recent call last): File "/usr/lib/python3/dist-packages/dbus/service.py", line 707, in _message_cb retval = candidate_method(self, *args, **keywords) File "/usr/share/usb-creator/usb-creator-helper", line 239, in Format block.call_format_sync('dos', GLib.Variant('a{sv}', {'erase': GLib.Variant('s', '')}), None) gi._glib.GError: GDBus.Error:org.freedesktop.UDisks2.Error.Failed: Error erasing device: Unknown or unsupported erase type `' How can I fix this so I can create a new live USB?

    Read the article

  • How to get current GNOME keyboard layout from terminal

    - by ftiaronsem
    For usage in a bash script, I need to get the GNOME keyboard layout the user is currently using. For example, if the user sets his keyboard layout to en-us, I need a bash command that prints me this. How can I get that information? Update: setxkbmap -query is unfortunately not working. Below is the output with the en (first command) and the de (second command) layout activated. Switching the keyboard layout seems to have some relation to the GNOME session configuration. setxkbmap -query rules: evdev model: pc105 layout: us,de variant: , options: terminate:ctrl_alt_bksp,lv3:ralt_switch,grp:alts_toggle setxkbmap -query rules: evdev model: pc105 layout: us,de variant: , options: terminate:ctrl_alt_bksp,lv3:ralt_switch,grp:alts_toggle

    Read the article

  • API Design Techniques

    - by Dehumanizer
    Is it right or more beautiful to name the functions with a prefix, like in Qt? Or to use "many" namespaces, but 'normal' names for functions? For example, slOpenFile(); //"sl" means "some lib" vs some_lib::file_functions::openFile(); UPD: I've read somewhere that the first variant (using a prefix) is better, because the API users can perform a 'fast' search of the documentation and the Internet. E.g. by typing the magic prefix, the search engine starts to suggest the exact functions. Is it enough to use the first variant?

    Read the article

  • Keyboard Layouts Plugin forgets settings, unable find workaround

    - by Honza Javorek
    I use Xubuntu. As everyone knows, the Keyboard Layouts Plugin is very, very buggy and it still forgets my settings. It drives me crazy - I have to set them again and again every time I wake up or turn on my laptop. So I found a solution - put this into my .bashrc: setxkbmap -option '' -option grp:alt_shift_toggle cz,us -variant querty That should set my toggle to Alt+Shift and my layouts to Czech QWERTY and plain US English as a second one. Voilà, that seems to work! I could use the Keyboard Layouts Plugin only as an indicator, that's okay. However, it doesn't work well. The problem is that it ignores the -variant setting. More or less. In the Keyboard Layouts Plugin I actually see Czech QWERTY selected, but in reality my keyboard types QWERTZ. That's insane :-( Could anyone help, please?

    Read the article

  • Need this VB code changed into Python.

    - by David
    I wrote this code in VB to label columns in a table, but now I'm writing a Python script to automate the process and I can't make it work. Any thoughts? Static v1 as variant Static v2 as variant Dim Output as double Dim Start as double Start = 1 If v2 = [XMIN] Then Output = v1 Else Output = v1 + 1 End If v1 = Output v2 = [XMIN]

    Read the article

  • Model binding nested collections in ASP.NET MVC

    - by MartinHN
    Hi I'm using Steve Sanderson's BeginCollectionItem helper with ASP.NET MVC 2 to model bind a collection if items. That works fine, as long as the Model of the collection items does not contain another collection. I have a model like this: -Product --Variants ---IncludedAttributes Whenever I render and model bind the Variants collection, it works jusst fine. But with the IncludedAttributes collection, I cannot use the BeginCollectionItem helper because the id and names value won't honor the id and names value that was produced for it's parent Variant: <div class="variant"> <input type="hidden" value="bbd4fdd4-fa22-49f9-8a5e-3ff7e2942126" autocomplete="off" name="Variants.index"> <input type="hidden" value="0" name="Variants[bbd4fdd4-fa22-49f9-8a5e-3ff7e2942126].SlotAmount" id="Variants_bbd4fdd4-fa22-49f9-8a5e-3ff7e2942126__SlotAmount"> <table class="included-attributes"> <input type="hidden" value="0" name="Variants.IncludedAttributes[c5989db5-b1e1-485b-b09d-a9e50dd1d2cb].Id" id="Variants_IncludedAttributes_c5989db5-b1e1-485b-b09d-a9e50dd1d2cb__Id" class="attribute-id"> <tr> <td> <input type="hidden" value="0" name="Variants.IncludedAttributes[c5989db5-b1e1-485b-b09d-a9e50dd1d2cb].Id" id="Variants_IncludedAttributes_c5989db5-b1e1-485b-b09d-a9e50dd1d2cb__Id" class="attribute-id"> </td> </tr> </table> </div> If you look at the name of the first hidden field inside the table, it is Variants.IncludedAttributes - where it should have been Variants[bbd4fdd4-fa22-49f9-8a5e-3ff7e2942126].IncludedAttributes[...]... That is because when I call BeginCollectionItem the second time (On the IncludedAttributes collection) there's given no information about the item index value of it's parent Variant. My code for rendering a Variant looks like this: <div class="product-variant round-content-box grid_6" data-id="<%: Model.AttributeType.Id %>"> <h2><%: Model.AttributeType.AttributeTypeName %></h2> <div class="box-content"> <% using (Html.BeginCollectionItem("Variants")) { %> <div class="slot-amount"> <label class="inline" for="slotAmountSelectList"><%: Text.amountOfThisVariant %>:</label> <select id="slotAmountSelectList"><option value="1">1</option><option value="2">2</option></select> </div> <div class="add-values"> <label class="inline" for="txtProductAttributeSearch"><%: Text.addVariantItems %>:</label> <input type="text" id="txtProductAttributeSearch" class="product-attribute-search" /><span><%: Text.or %> <a class="select-from-list-link" href="#select-from-list" data-id="<%: Model.AttributeType.Id %>"><%: Text.selectFromList.ToLowerInvariant() %></a></span> <div class="clear"></div> </div> <%: Html.HiddenFor(m=>m.SlotAmount) %> <div class="included-attributes"> <table> <thead> <tr> <th><%: Text.name %></th> <th style="width: 80px;"><%: Text.price %></th> <th><%: Text.shipping %></th> <th style="width: 90px;"><%: Text.image %></th> </tr> </thead> <tbody> <% for (int i = 0; i < Model.IncludedAttributes.Count; i++) { %> <tr><%: Html.EditorFor(m => m.IncludedAttributes[i]) %></tr> <% } %> </tbody> </table> </div> <% } %> </div> </div> And the code for rendering an IncludedAttribute: <% using (Html.BeginCollectionItem("Variants.IncludedAttributes")) { %> <td> <%: Model.AttributeName %> <%: Html.HiddenFor(m => m.Id, new { @class = "attribute-id" })%> <%: Html.HiddenFor(m => m.ProductAttributeTypeId) %> </td> <td><%: Model.Price.ToCurrencyString() %></td> <td><%: Html.DropDownListFor(m => m.RequiredShippingTypeId, AppData.GetShippingTypesSelectListItems(Model.RequiredShippingTypeId)) %></td> <td><%: 
Model.ImageId %></td> <% } %>

    Read the article

  • C++ and C# speed compared

    - by Mack
    I was worried about C#'s speed when it deals with heavy calculations, when you need to use raw CPU power. I always thought that C++ is much faster than C# when it comes to calculations. So I did some quick tests. The first test computes prime numbers < an integer n, the second test computes some pandigital numbers. The idea for second test comes from here: Pandigital Numbers C# prime computation: using System; using System.Diagnostics; class Program { static int primes(int n) { uint i, j; int countprimes = 0; for (i = 1; i <= n; i++) { bool isprime = true; for (j = 2; j <= Math.Sqrt(i); j++) if ((i % j) == 0) { isprime = false; break; } if (isprime) countprimes++; } return countprimes; } static void Main(string[] args) { int n = int.Parse(Console.ReadLine()); Stopwatch sw = new Stopwatch(); sw.Start(); int res = primes(n); sw.Stop(); Console.WriteLine("I found {0} prime numbers between 0 and {1} in {2} msecs.", res, n, sw.ElapsedMilliseconds); Console.ReadKey(); } } C++ variant: #include <iostream> #include <ctime> int primes(unsigned long n) { unsigned long i, j; int countprimes = 0; for(i = 1; i <= n; i++) { int isprime = 1; for(j = 2; j < (i^(1/2)); j++) if(!(i%j)) { isprime = 0; break; } countprimes+= isprime; } return countprimes; } int main() { int n, res; cin>>n; unsigned int start = clock(); res = primes(n); int tprime = clock() - start; cout<<"\nI found "<<res<<" prime numbers between 1 and "<<n<<" in "<<tprime<<" msecs."; return 0; } When I ran the test trying to find primes < than 100,000, C# variant finished in 0.409 seconds and C++ variant in 5.553 seconds. When I ran them for 1,000,000 C# finished in 6.039 seconds and C++ in about 337 seconds. Pandigital test in C#: using System; using System.Diagnostics; class Program { static bool IsPandigital(int n) { int digits = 0; int count = 0; int tmp; for (; n > 0; n /= 10, ++count) { if ((tmp = digits) == (digits |= 1 << (n - ((n / 10) * 10) - 1))) return false; } return digits == (1 << count) - 1; } static void Main() { int pans = 0; Stopwatch sw = new Stopwatch(); sw.Start(); for (int i = 1; i <= 123456789; i++) { if (IsPandigital(i)) { pans++; } } sw.Stop(); Console.WriteLine("{0}pcs, {1}ms", pans, sw.ElapsedMilliseconds); Console.ReadKey(); } } Pandigital test in C++: #include <iostream> #include <ctime> using namespace std; int IsPandigital(int n) { int digits = 0; int count = 0; int tmp; for (; n > 0; n /= 10, ++count) { if ((tmp = digits) == (digits |= 1 << (n - ((n / 10) * 10) - 1))) return 0; } return digits == (1 << count) - 1; } int main() { int pans = 0; unsigned int start = clock(); for (int i = 1; i <= 123456789; i++) { if (IsPandigital(i)) { pans++; } } int ptime = clock() - start; cout<<"\nPans:"<<pans<<" time:"<<ptime; return 0; } C# variant runs in 29.906 seconds and C++ in about 36.298 seconds. I didn't touch any compiler switches and bot C# and C++ programs were compiled with debug options. Before I attempted to run the test I was worried that C# will lag well behind C++, but now it seems that there is a pretty big speed difference in C# favor. Can anybody explain this? C# is jitted and C++ is compiled native so it's normal that a C++ will be faster than a C# variant. Thanks for the answers!
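
    One detail worth noting when reading the two listings (this observation is added here and is not part of the original question): in the C++ prime test the inner-loop bound is written as j < (i^(1/2)), and ^ is bitwise XOR, not exponentiation, in both C++ and C#; since integer 1/2 is 0, the bound is effectively i itself rather than the square root of i, while the C# version calls Math.Sqrt, so the two programs do different amounts of work. A tiny C# sketch of the difference:

        using System;

        class BoundDemo
        {
            static void Main()
            {
                int i = 100;
                int xorBound = i ^ (1 / 2);      // '^' is XOR and 1 / 2 == 0, so this is just i (100)
                double sqrtBound = Math.Sqrt(i); // 10 -- the bound the C# benchmark actually uses
                Console.WriteLine($"{xorBound} vs {sqrtBound}");
            }
        }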

    Read the article

  • How to create managed properties at site collection level in SharePoint2013

    - by ybbest
    In SharePoint2013, you can create managed properties at site collection. Today, I’d like to show you how to do so through PowerShell. 1. Define your managed properties and crawled properties and managed property Type in an external csv file. PowerShell script will read this file and create the managed and the mapping. 2. As you can see I also defined variant Type, this is because you need the variant type to create the crawled property. In order to have the crawled properties, you need to do a full crawl and also make sure you have data populated for your custom column. However, if you do not want to a full crawl to create those crawled properties, you can create them yourself by using the PowerShell; however you need to make sure the crawled properties you created have the same name if created by a full crawl. Managed properties type: Text = 1 Integer = 2 Decimal = 3 DateTime = 4 YesNo = 5 Binary = 6 Variant Type: Text = 31 Integer = 20 Decimal = 5 DateTime = 64 YesNo = 11 3. You can use the following script to create your managed properties at site collection level, the differences for creating managed property at site collection level is to pass in the site collection id. param( [string] $siteUrl="http://SP2013/", [string] $searchAppName = "Search Service Application", $ManagedPropertiesList=(IMPORT-CSV ".\ManagedProperties.csv") ) Add-PSSnapin Microsoft.SharePoint.PowerShell -ErrorAction SilentlyContinue $searchapp = $null function AppendLog { param ([string] $msg, [string] $msgColor) $currentDateTime = Get-Date $msg = $msg + " --- " + $currentDateTime if (!($logOnly -eq $True)) { # write to console Write-Host -f $msgColor $msg } # write to log file Add-Content $logFilePath $msg } $scriptPath = Split-Path $myInvocation.MyCommand.Path $logFilePath = $scriptPath + "\CreateManagedProperties_Log.txt" function CreateRefiner {param ([string] $crawledName, [string] $managedPropertyName, [Int32] $variantType, [Int32] $managedPropertyType,[System.GUID] $siteID) $cat = Get-SPEnterpriseSearchMetadataCategory –Identity SharePoint -SearchApplication $searchapp $crawledproperty = Get-SPEnterpriseSearchMetadataCrawledProperty -Name $crawledName -SearchApplication $searchapp -SiteCollection $siteID if($crawledproperty -eq $null) { Write-Host AppendLog "Creating Crawled Property for $managedPropertyName" Yellow $crawledproperty = New-SPEnterpriseSearchMetadataCrawledProperty -SearchApplication $searchapp -VariantType $variantType -SiteCollection $siteID -Category $cat -PropSet "00130329-0000-0130-c000-000000131346" -Name $crawledName -IsNameEnum $false } $managedproperty = Get-SPEnterpriseSearchMetadataManagedProperty -Identity $managedPropertyName -SearchApplication $searchapp -SiteCollection $siteID -ErrorAction SilentlyContinue if($managedproperty -eq $null) { Write-Host AppendLog "Creating Managed Property for $managedPropertyName" Yellow $managedproperty = New-SPEnterpriseSearchMetadataManagedProperty -Name $managedPropertyName -Type $managedPropertyType -SiteCollection $siteID -SearchApplication $searchapp -Queryable:$true -Retrievable:$true -FullTextQueriable:$true -RemoveDuplicates:$false -RespectPriority:$true -IncludeInMd5:$true } $mappedProperty = $crawledproperty.GetMappedManagedProperties() | ?{$_.Name -eq $managedProperty.Name } if($mappedProperty -eq $null) { Write-Host AppendLog "Creating Crawled -> Managed Property mapping for $managedPropertyName" Yellow New-SPEnterpriseSearchMetadataMapping -CrawledProperty $crawledproperty -ManagedProperty $managedproperty -SearchApplication 
$searchapp -SiteCollection $siteID } $mappedProperty = $crawledproperty.GetMappedManagedProperties() | ?{$_.Name -eq $managedProperty.Name } #Get-FASTSearchMetadataCrawledPropertyMapping -ManagedProperty $managedproperty } $searchapp = Get-SPEnterpriseSearchServiceApplication $searchAppName $site= Get-SPSite $siteUrl $siteId=$site.id Write-Host "Start creating Managed properties" $i = 1 FOREACH ($property in $ManagedPropertiesList) { $propertyName=$property.managedPropertyName $crawledName=$property.crawledName $managedPropertyType=$property.managedPropertyType $variantType=$property.variantType Write-Host $managedPropertyType Write-Host "Processing managed property $propertyName $($i)..." $i++ CreateRefiner $crawledName $propertyName $variantType $managedPropertyType $siteId Write-Host "Managed property created " $propertyName } Key Concepts Crawled Properties: Crawled properties are discovered by the search index service component when crawling content. Managed Properties: Properties that are part of the Search user experience, which means they are available for search results, advanced search, and so on, are managed properties. Mapping Crawled Properties to Managed Properties: To make a crawled property available for the Search experience—to make it available for Search queries and display it in Advanced Search and search results—you must map it to a managed property. References Administer search in SharePoint 2013 Preview Managing Metadata New-SPEnterpriseSearchMetadataCrawledProperty New-SPEnterpriseSearchMetadataManagedProperty Remove-SPEnterpriseSearchMetadataManagedProperty Overview of crawled and managed properties in SharePoint 2013 Preview Remove-SPEnterpriseSearchMetadataManagedProperty SharePoint 2013 – Search Service Application

    Read the article

  • Access VBA sub with form as parameter doesn't alter the form

    - by Ski
    I have a Microsoft Access 2003 file with various tables of data. Each table also has a duplicate of it, with the name '[original table name]_working'. Depending on the user's choices in the switchboard, the form the user choose to view must switch its recordsource table to the working table. I refactored the relevant code to do such a thing into the following function today: Public Sub SetFormToWorking(ByRef frm As Form) With frm .RecordSource = rst![Argument] & "_working" .Requery Dim itm As Variant For Each itm In .Controls If TypeOf itm Is subForm Then With Item Dim childFields As Variant, masterFields As Variant childFields = .LinkChildFields masterFields = .LinkMasterFields .Form.RecordSource = .Form.RecordSource & "_working" .LinkChildFields = childFields .LinkMasterFields = masterFields .Form.Requery End With End If Next End With End Sub The lines of code that call the function look like this: SetFormToWorking Forms(rst![Argument]) and SetFormToWorking Forms(cmbTblList) For some reason, the above function doesn't change the recordsource for the form. I added the 'ByRef' keyword to the parameter just to be certain that it was passing by reference, but no dice. Hopefully someone here can tell me what I've done wrong?

    Read the article

  • Piloting Microsoft Word with OLE in C++ Builder: how to put Word in the foreground.

    - by Getz
    Hi! I've got some code (which works fine) for piloting Word with C++ Builder. It's useful for reaching different bookmarks in the document. Variant vNom, vWDocuments, vWDocument, vMSWord, vSignets, vSignet; vNom = WideString("blabla.doc"); try { vMSWord = Variant::GetActiveObject("Word.Application"); } catch(...) { vMSWord = Variant::CreateObject("Word.Application"); } vMSWord.OlePropertySet("Visible", true); vWDocuments = vMSWord.OlePropertyGet("Documents"); vWDocument = vWDocuments.OleFunction("Open", vNom); vSignets = vWDocument.OlePropertyGet("BookMarks"); if (vSignets.OleFunction("Exists", signet)) { vSignet = vSignets.OleFunction("Item", signet); vSignet.OleFunction("Select"); } But once the document is opened, the user can no longer see when another bookmark has been reached, since the application stays in the background. Does anyone know what I can do to bring Word to the foreground, or to make the document light up in the taskbar? Thanks!

    Read the article

  • How do I show information to users belonging to different groups (web) in modx

    - by Gaurav Sharma
    Hello everyone, I have created a website using modx evolution v1.0.2. The website that I have developed has 12 different types of users (categorized in groups). Each user will be shown a different price depending on the group to which he belongs. Till now I have been able to fetch the group name of the currently logged-in user (I created a snippet for that), but how can I achieve the above-mentioned functionality so that each user is able to see only the price that I have coded for his group? For example: If a user is associated with the 'ocassional' group then he should be shown the price as, say, 50 bucks, and if a user is associated with the 'regular' group then he should be shown the price as, say, 40 bucks. I can easily do this by coding a single snippet for every product's variant, but there are a lot of variants (more than 100 and growing). I have created a resource (page) for every product and its variant. Every variant has a price. It is this price that I want to be shown according to the logged-in user's group membership. I hope I am able to explain my query clearly. Please help me implement this functionality. Thanks

    Read the article

  • Delphi 6 OleServer.pas Invoke memory leak

    - by Mike Davis
    There's a bug in Delphi 6, for which you can find some references online: when you import a TLB, the order of the parameters in an event invocation is reversed. It is reversed once in the imported header and once in TServerEventDispatch.Invoke. You can find more information about it here: http://cc.embarcadero.com/Item/16496 Somewhat related to this issue, there appears to be a memory leak in TServerEventDispatch.Invoke with a parameter of a Variant of type Var_Array (maybe others, but this is the most obvious one I could see). The invoke code copies the args into a VarArray to be passed to the event handler and then copies the VarArray back to the args after the call; the relevant code is pasted below: // Set our array to appropriate length SetLength(VarArray, ParamCount); // Copy over data for I := Low(VarArray) to High(VarArray) do VarArray[I] := OleVariant(TDispParams(Params).rgvarg^[I]); // Invoke Server proxy class if FServer <> nil then FServer.InvokeEvent(DispID, VarArray); // Copy data back for I := Low(VarArray) to High(VarArray) do OleVariant(TDispParams(Params).rgvarg^[I]) := VarArray[I]; // Clean array SetLength(VarArray, 0); There are some obvious workarounds in my case: if I skip the copying back in the case of a VarArray parameter, it fixes the leak. To not change the functionality, I thought I should copy the data in the array instead of the variant back to the params, but that can get complicated since it can hold other variants and it seems to me that would need to be done recursively. Since a change in OleServer will have a ripple effect, I want to make sure my change here is strictly correct. Can anyone shed some light on exactly why memory is being leaked here? I can't seem to look up the call stack any lower than TServerEventDispatch.Invoke (why is that?). I imagine that in the process of copying the Variant holding the VarArray back to the param list it added a reference to the array, thus not allowing it to be released as normal, but that's just a rough guess and I can't track down the code to back it up. Maybe someone with a better understanding of all this could shed some light?

    Read the article

  • MSForms.ListBox Type Mismatch in Access

    - by Jason
    I have an Access database where I use a Tab control (without tabs) to simulate a wizard. One of the tab pages has an MSForms.ListBox control called lstPorts, and a button named cmdAdd which adds the contents of a textbox to the List Box. I then try to keep the contents of the ListBox sorted. However, the call to the Sort method causes a type mismatch. Here is the cmdAdd_Click() code behind: Private Sub cmdAdd_Click() Dim test As MSForms.ListBox lstPorts2.AddItem (txtPortName) Call SortListBox(lstPorts2) End Sub Here is the SortListBox Sub: Public Sub SortListBox(ByRef oLb As MSForms.ListBox) Dim vaItems As Variant Dim i As Long, j As Long Dim vTemp As Variant 'Put the items in a variant array vaItems = oLb.List For i = LBound(vaItems, 1) To UBound(vaItems, 1) - 1 For j = i + 1 To UBound(vaItems, 1) If vaItems(i, 0) > vaItems(j, 0) Then vTemp = vaItems(i, 0) vaItems(i, 0) = vaItems(j, 0) vaItems(j, 0) = vTemp End If Next j Next i 'Clear the listbox oLb.Clear 'Add the sorted array back to the listbox For i = LBound(vaItems, 1) To UBound(vaItems, 1) oLb.AddItem vaItems(i, 0) Next i End Sub Any help out there? Since the Sort routine explicitly references the MSForms.ListBox, most of the results from Google aren't applicable. Jason

    Read the article

  • SignalR Auto Disconnect when Page Changed in AngularJS

    - by Shaun
    Originally posted on: http://geekswithblogs.net/shaunxu/archive/2014/05/30/signalr-auto-disconnect-when-page-changed-in-angularjs.aspxIf we are using SignalR, the connection lifecycle was handled by itself very well. For example when we connect to SignalR service from browser through SignalR JavaScript Client the connection will be established. And if we refresh the page, close the tab or browser, or navigate to another URL then the connection will be closed automatically. This information had been well documented here. In a browser, SignalR client code that maintains a SignalR connection runs in the JavaScript context of a web page. That's why the SignalR connection has to end when you navigate from one page to another, and that's why you have multiple connections with multiple connection IDs if you connect from multiple browser windows or tabs. When the user closes a browser window or tab, or navigates to a new page or refreshes the page, the SignalR connection immediately ends because SignalR client code handles that browser event for you and calls the "Stop" method. But unfortunately this behavior doesn't work if we are using SignalR with AngularJS. AngularJS is a single page application (SPA) framework created by Google. It hijacks browser's address change event, based on the route table user defined, launch proper view and controller. Hence in AngularJS we address was changed but the web page still there. All changes of the page content are triggered by Ajax. So there's no page unload and load events. This is the reason why SignalR cannot handle disconnect correctly when works with AngularJS. If we dig into the source code of SignalR JavaScript Client source code we will find something below. It monitors the browser page "unload" and "beforeunload" event and send the "stop" message to server to terminate connection. But in AngularJS page change events were hijacked, so SignalR will not receive them and will not stop the connection. 1: // wire the stop handler for when the user leaves the page 2: _pageWindow.bind("unload", function () { 3: connection.log("Window unloading, stopping the connection."); 4:  5: connection.stop(asyncAbort); 6: }); 7:  8: if (isFirefox11OrGreater) { 9: // Firefox does not fire cross-domain XHRs in the normal unload handler on tab close. 10: // #2400 11: _pageWindow.bind("beforeunload", function () { 12: // If connection.stop() runs runs in beforeunload and fails, it will also fail 13: // in unload unless connection.stop() runs after a timeout. 14: window.setTimeout(function () { 15: connection.stop(asyncAbort); 16: }, 0); 17: }); 18: }   Problem Reproduce In the codes below I created a very simple example to demonstrate this issue. Here is the SignalR server side code. 1: public class GreetingHub : Hub 2: { 3: public override Task OnConnected() 4: { 5: Debug.WriteLine(string.Format("Connected: {0}", Context.ConnectionId)); 6: return base.OnConnected(); 7: } 8:  9: public override Task OnDisconnected() 10: { 11: Debug.WriteLine(string.Format("Disconnected: {0}", Context.ConnectionId)); 12: return base.OnDisconnected(); 13: } 14:  15: public void Hello(string user) 16: { 17: Clients.All.hello(string.Format("Hello, {0}!", user)); 18: } 19: } Below is the configuration code which hosts SignalR hub in an ASP.NET WebAPI project with IIS Express. 
1: public class Startup 2: { 3: public void Configuration(IAppBuilder app) 4: { 5: app.Map("/signalr", map => 6: { 7: map.UseCors(CorsOptions.AllowAll); 8: map.RunSignalR(new HubConfiguration() 9: { 10: EnableJavaScriptProxies = false 11: }); 12: }); 13: } 14: } Since we will host AngularJS application in Node.js in another process and port, the SignalR connection will be cross domain. So I need to enable CORS above. In client side I have a Node.js file to host AngularJS application as a web server. You can use any web server you like such as IIS, Apache, etc.. Below is the "index.html" page which contains a navigation bar so that I can change the page/state. As you can see I added jQuery, AngularJS, SignalR JavaScript Client Library as well as my AngularJS entry source file "app.js". 1: <html data-ng-app="demo"> 2: <head> 3: <script type="text/javascript" src="jquery-2.1.0.js"></script> 1:  2: <script type="text/javascript" src="angular.js"> 1: </script> 2: <script type="text/javascript" src="angular-ui-router.js"> 1: </script> 2: <script type="text/javascript" src="jquery.signalR-2.0.3.js"> 1: </script> 2: <script type="text/javascript" src="app.js"></script> 4: </head> 5: <body> 6: <h1>SignalR Auto Disconnect with AngularJS by Shaun</h1> 7: <div> 8: <a href="javascript:void(0)" data-ui-sref="view1">View 1</a> | 9: <a href="javascript:void(0)" data-ui-sref="view2">View 2</a> 10: </div> 11: <div data-ui-view></div> 12: </body> 13: </html> Below is the "app.js". My SignalR logic was in the "View1" page and it will connect to server once the controller was executed. User can specify a user name and send to server, all clients that located in this page will receive the server side greeting message through SignalR. 1: 'use strict'; 2:  3: var app = angular.module('demo', ['ui.router']); 4:  5: app.config(['$stateProvider', '$locationProvider', function ($stateProvider, $locationProvider) { 6: $stateProvider.state('view1', { 7: url: '/view1', 8: templateUrl: 'view1.html', 9: controller: 'View1Ctrl' }); 10:  11: $stateProvider.state('view2', { 12: url: '/view2', 13: templateUrl: 'view2.html', 14: controller: 'View2Ctrl' }); 15:  16: $locationProvider.html5Mode(true); 17: }]); 18:  19: app.value('$', $); 20: app.value('endpoint', 'http://localhost:60448'); 21: app.value('hub', 'GreetingHub'); 22:  23: app.controller('View1Ctrl', function ($scope, $, endpoint, hub) { 24: $scope.user = ''; 25: $scope.response = ''; 26:  27: $scope.greeting = function () { 28: proxy.invoke('Hello', $scope.user) 29: .done(function () {}) 30: .fail(function (error) { 31: console.log(error); 32: }); 33: }; 34:  35: var connection = $.hubConnection(endpoint); 36: var proxy = connection.createHubProxy(hub); 37: proxy.on('hello', function (response) { 38: $scope.$apply(function () { 39: $scope.response = response; 40: }); 41: }); 42: connection.start() 43: .done(function () { 44: console.log('signlar connection established'); 45: }) 46: .fail(function (error) { 47: console.log(error); 48: }); 49: }); 50:  51: app.controller('View2Ctrl', function ($scope, $) { 52: }); When we went to View1 the server side "OnConnect" method will be invoked as below. And in any page we send the message to server, all clients will got the response. If we close one of the client, the server side "OnDisconnect" method will be invoked which is correct. But is we click "View 2" link in the page "OnDisconnect" method will not be invoked even though the content and browser address had been changed. 
    This might leave many SignalR connections open between the client and the server. Below is what happened after I clicked the "View 1" and "View 2" links four times: there are 4 live connections.

    Solution

    Since the cause of this issue is that AngularJS hijacks the page events SignalR needs in order to stop the connection, we can handle the AngularJS route or state change event and stop the SignalR connection manually. In the code below I moved the "connection" variable to global scope, added a handler for "$stateChangeStart" and invoked the "stop" method of "connection" if its state is not "disconnected".

        var connection;
        app.run(['$rootScope', function ($rootScope) {
            $rootScope.$on('$stateChangeStart', function () {
                if (connection && connection.state && connection.state !== 4 /* disconnected */) {
                    console.log('signalr connection abort');
                    connection.stop();
                }
            });
        }]);

    Now if we refresh the page and navigate to View 1, the connection will be opened. In this state, if we click the "View 2" link the content changes and the SignalR connection is closed automatically.

    Summary

    In this post I demonstrated an issue when using SignalR with AngularJS: the connection cannot be closed automatically when we navigate to another page/state in AngularJS. The solution described above is to move the SignalR connection into a global variable and close it manually when the AngularJS route/state changes. You can download the full sample code here. Moving the SignalR connection into a global variable might not be the best solution; it is just easy to demo here. In production code I suggest wrapping all SignalR operations into an AngularJS factory. Since an AngularJS factory is a singleton object, we can safely keep the connection variable in the factory function scope.

    Hope this helps, Shaun

    All documents and related graphics, codes are provided "AS IS" without warranty of any kind. Copyright © Shaun Ziyan Xu. This work is licensed under the Creative Commons License.
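    Following up on the factory suggestion in the summary above, here is a minimal sketch of what such a wrapper could look like. It is not from the original post: the factory name "signalrGreeting" is made up, it reuses the "endpoint" and "hub" values registered in "app.js", and the disconnected-state check assumes the $.signalR.connectionState enum exposed by the SignalR JavaScript client.

        // Minimal sketch (assumed names) of keeping the SignalR connection inside a
        // singleton AngularJS factory instead of a global variable.
        app.factory('signalrGreeting', ['$rootScope', '$', 'endpoint', 'hub',
            function ($rootScope, $, endpoint, hub) {
                var connection = $.hubConnection(endpoint);
                var proxy = connection.createHubProxy(hub);

                // AngularJS swallows the browser unload events SignalR relies on,
                // so stop the connection whenever ui-router starts a state change.
                $rootScope.$on('$stateChangeStart', function () {
                    if (connection.state !== $.signalR.connectionState.disconnected) {
                        connection.stop();
                    }
                });

                return {
                    start: function () { return connection.start(); },
                    on: function (eventName, callback) { proxy.on(eventName, callback); },
                    invoke: function (methodName, arg) { return proxy.invoke(methodName, arg); }
                };
            }]);

    A controller such as "View1Ctrl" would then inject "signalrGreeting", call start() once, and register its "hello" handler through on(), so no connection object has to live in global scope.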

    Read the article

  • Apple’s Sep 10th event confirmed. iPhone 5S and low cost iPhone 5C launch is expected

    - by Gopinath
    The much rumored Apple event on September 10th is confirmed. Apple sent official event invitations to media houses and popular bloggers across the globe with the title "This should brighten your day". For the past couple of months there have been a lot of speculations about the next generation iPhone. Media and bloggers are dubbing it the iPhone 5S, rumored to have a fingerprint sensor for biometric authentication, a 12- or 13-megapixel camera with dual-LED flash, and a gold-colored variant. Another speculated surprise Apple may pull out is a low cost variant of the iPhone, called the iPhone 5C. In order to fight Android penetration, Apple is speculated to announce a plastic iPhone in multiple bold colors, similar to the Nokia phones. The new iPhones will run iOS 7, a new flat UI which is drastically different from previous versions. iOS 7 has been in beta for several months and heavily borrowed user interface cues from Microsoft’s Windows 8 operating system. Whatever Apple introduces on September 10th, gadget freaks and investors are eagerly waiting to see if Apple can continue innovating after Steve Jobs. Since 2011 this is the big launch

    Read the article

  • Using virt-install to mount multiple cdrom drives/images

    - by Dana the Sane
    I would like to create a Windows XP guest from the Windows XP upgrade CD I have, along with one of a few full versions I have around. However, when I reach the stage in the installer where I am prompted to insert a full version CD, the installer can't find it, i.e.: Setup could not read the CD you inserted, or the CD is not a valid Windows CD. Is there a work-around for this? My Googling didn't uncover anything. I've tried various combinations of mounting .iso files and specifying disks, such as: $sudo virt-install --accelerate --connect qemu:///system -n xpsp1 -r 2048 --disk ./vm/winxp_sp1.iso,device=cdrom --disk ./vm/windows.qcow2,size=12 --vnc --noautoconsole --os-type windows --os-variant winxp --vcpus 2 -c /dev/cdrom --check-cpu If I try to specify multiple cdrom drives, I receive an error: virt-install --accelerate --connect qemu:///system -n xpsp1 -r 2048 --disk ./vm/winxp_sp1.iso,device=cdrom --disk /dev/cdrom,device=cdrom --disk ./vm/windows.qcow2,size=12 --vnc --noautoconsole --os-type windows --os-variant winxp --vcpus 2 --check-cpu Starting install... ERROR IDE CDROM must use 'hdc', but target in use.
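    One possible workaround, sketched below and not verified against this exact setup, is to start the install with only the upgrade ISO attached, then swap the media on that same virtual CD-ROM from the host once Setup asks for the full-version CD. The domain name "xpsp1" and the "hdc" target are taken from the question and the error message; the full-version ISO path is a placeholder.

        # start the install with a single cdrom: the upgrade ISO
        sudo virt-install --accelerate --connect qemu:///system -n xpsp1 -r 2048 \
          --disk ./vm/windows.qcow2,size=12 \
          --disk ./vm/winxp_sp1.iso,device=cdrom \
          --vnc --noautoconsole --os-type windows --os-variant winxp --vcpus 2

        # when Setup prompts for the full-version CD, point the same virtual drive
        # (hdc, per the error message) at the other ISO -- the path is a placeholder
        sudo virsh attach-disk xpsp1 /path/to/winxp_full.iso hdc --type cdrom --mode readonly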

    Read the article

  • Diagnosing xmodmap errors

    - by intuited
    I'm getting this error when trying to use xmodmap to get rid of caps lock: $ xmodmap -e 'clear Lock' X Error of failed request: BadValue (integer parameter out of range for operation) Major opcode of failed request: 118 (X_SetModifierMapping) Value in failed request: 0x17 Serial number of failed request: 8 Current serial number in output stream: 8 I'm running xfce on Maverick "10.10" Meerkat. This problem did not occur before I added the Keyboard Layouts applet to a panel; before doing that, I was able to run my xmodmap script to swap Esc and CapsLock: !Remap Caps_Lock as Escape remove Lock = Caps_Lock keysym Caps_Lock = Escape It may be relevant that I chose alt-capslock as the keyboard switch combo in the Keyboard Layouts preferences. I've had a similar problem before, on a different machine, running openbox. On that machine, this problem started when I upgraded to Lucid, and has persisted in Maverick (release 10.10). I reported a bug in xorg. However, it remains unclear whether it's really a problem with xorg, or if I'm just doing something wrong with my configuration. Have other people experienced this problem? Can someone shed some light on what's going on here? It seems there are quite a few layers involved, and I don't understand any of them particularly well, so any information would be helpful. Update: I've discovered that the problem is specifically triggered by adding the Canada layout variant "Multilingual" (ca-multix). If I instead add the variant "Multilingual (first part)", the problem does not occur. I think this will probably end up being a usable workaround, but I don't yet know what the difference between these variants is. I've filed a freedesktop issue, and am commenting on a related ubuntu issue.
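    For what it's worth, a possible sidestep (a sketch, not a confirmed fix for this setup) is to do the swap at the XKB level instead of with xmodmap, since XKB options survive the layout switcher re-applying the keyboard map; this assumes xkeyboard-config ships the caps:swapescape option.

        # clear any previously applied XKB options, then swap Caps_Lock and Escape
        setxkbmap -option ""
        setxkbmap -option caps:swapescape

        # inspect the resulting modifier map without modifying it
        xmodmap -pm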

    Read the article

