Search Results

Search found 44645 results on 1786 pages for 'manage add ons'.


  • Any non-custom way to manage iptables with fail2ban and libvirt+kvm?

    - by Peter Hansen
    I have an Ubuntu 9.04 server running libvirt/kvm and fail2ban (for SSH attacks). Both libvirt and fail2ban integrate with iptables in different ways. Libvirt uses (I think) some XML config and during startup (?) configures forwarding to the VM subnet. Fail2ban installs a custom chain (probably at init) and periodically modifies it to ban/unban probable attackers. I also need to install my own rules to forward various ports to servers running in VMs and on other machines, and set up rudimentary security (e.g. drop all INPUT traffic except the few ports I want open), and of course I'd like the ability to add/remove rules safely without restarting. It seems to me iptables is a powerful tool that's sorely lacking some sort of standardized way of juggling all this stuff. Every project, and every sysadmin, seems to do it differently! (And I think there's lots of "cargo cult" admin going on here, with people cloning crude approaches like "use iptables-save like so".) Short of figuring out the gory details of exactly how both of these (and potentially other) tools manipulate the netfilter tables, and developing my own scripts or just manually executing iptables commands, is there any way to safely work with iptables while not breaking the functionality of these other tools? Any nascent standards or projects defined to bring sanity to this area? Even a helpful web page I missed that might cover at least these two packages together?

    Read the article

  • An Xml Serializable PropertyBag Dictionary Class for .NET

    - by Rick Strahl
    I don't know about you but I frequently need property bags in my applications to store and possibly cache arbitrary data. Dictionary<T,V> works well for this although I always seem to be hunting for a more specific generic type that provides a string key based dictionary. There's StringDictionary, but it only works with strings. There's HashSet<T> but it uses the actual values as keys. In most key/value pair situations, a string key is what I want to work with. Dictionary<T,V> works well enough, but there are some issues with serialization of dictionaries in .NET. The .NET framework doesn't do well serializing IDictionary objects out of the box. The XmlSerializer doesn't support serialization of IDictionary via its default serialization, and while the DataContractSerializer does support IDictionary serialization it produces some pretty atrocious XML. What doesn't work? First off, Dictionary serialization with the XmlSerializer doesn't work, so the following fails: [TestMethod] public void DictionaryXmlSerializerTest() { var bag = new Dictionary<string, object>(); bag.Add("key", "Value"); bag.Add("Key2", 100.10M); bag.Add("Key3", Guid.NewGuid()); bag.Add("Key4", DateTime.Now); bag.Add("Key5", true); bag.Add("Key7", new byte[3] { 42, 45, 66 }); TestContext.WriteLine(this.ToXml(bag)); } public string ToXml(object obj) { if (obj == null) return null; StringWriter sw = new StringWriter(); XmlSerializer ser = new XmlSerializer(obj.GetType()); ser.Serialize(sw, obj); return sw.ToString(); } The error you get with this is: System.NotSupportedException: The type System.Collections.Generic.Dictionary`2[[System.String, mscorlib, Version=4.0.0.0, Culture=neutral, PublicKeyToken=b77a5c561934e089],[System.Object, mscorlib, Version=4.0.0.0, Culture=neutral, PublicKeyToken=b77a5c561934e089]] is not supported because it implements IDictionary. Got it! BTW, the same is true with binary serialization. 
Running the same code above against the DataContractSerializer does work: [TestMethod] public void DictionaryDataContextSerializerTest() { var bag = new Dictionary<string, object>(); bag.Add("key", "Value"); bag.Add("Key2", 100.10M); bag.Add("Key3", Guid.NewGuid()); bag.Add("Key4", DateTime.Now); bag.Add("Key5", true); bag.Add("Key7", new byte[3] { 42, 45, 66 }); TestContext.WriteLine(this.ToXmlDcs(bag)); } public string ToXmlDcs(object value, bool throwExceptions = false) { var ser = new DataContractSerializer(value.GetType(), null, int.MaxValue, true, false, null); MemoryStream ms = new MemoryStream(); ser.WriteObject(ms, value); return Encoding.UTF8.GetString(ms.ToArray(), 0, (int)ms.Length); } This DOES work but produces some pretty heinous XML (formatted with line breaks and indentation here): <ArrayOfKeyValueOfstringanyType xmlns="http://schemas.microsoft.com/2003/10/Serialization/Arrays" xmlns:i="http://www.w3.org/2001/XMLSchema-instance"> <KeyValueOfstringanyType> <Key>key</Key> <Value i:type="a:string" xmlns:a="http://www.w3.org/2001/XMLSchema">Value</Value> </KeyValueOfstringanyType> <KeyValueOfstringanyType> <Key>Key2</Key> <Value i:type="a:decimal" xmlns:a="http://www.w3.org/2001/XMLSchema">100.10</Value> </KeyValueOfstringanyType> <KeyValueOfstringanyType> <Key>Key3</Key> <Value i:type="a:guid" xmlns:a="http://schemas.microsoft.com/2003/10/Serialization/">2cd46d2a-a636-4af4-979b-e834d39b6d37</Value> </KeyValueOfstringanyType> <KeyValueOfstringanyType> <Key>Key4</Key> <Value i:type="a:dateTime" xmlns:a="http://www.w3.org/2001/XMLSchema">2011-09-19T17:17:05.4406999-07:00</Value> </KeyValueOfstringanyType> <KeyValueOfstringanyType> <Key>Key5</Key> <Value i:type="a:boolean" xmlns:a="http://www.w3.org/2001/XMLSchema">true</Value> </KeyValueOfstringanyType> <KeyValueOfstringanyType> <Key>Key7</Key> <Value i:type="a:base64Binary" xmlns:a="http://www.w3.org/2001/XMLSchema">Ki1C</Value> </KeyValueOfstringanyType> </ArrayOfKeyValueOfstringanyType> Ouch! That seriously hurts the eye! :-) Worse though it's extremely verbose with all those repetitive namespace declarations. It's good to know that it works in a pinch, but for a human readable/editable solution or something lightweight to store in a database it's not quite ideal. Why should I care? As a little background, in one of my applications I have a need for a flexible property bag that is used on a free form database field on an otherwise static entity. Basically what I have is a standard database record to which arbitrary properties can be added in an XML based string field. I intend to expose those arbitrary properties as a collection from field data stored in XML. The concept is pretty simple: When loading write the data to the collection, when the data is saved serialize the data into an XML string and store it into the database. When reading the data pick up the XML and if the collection on the entity is accessed automatically deserialize the XML into the Dictionary. (I'll talk more about this in another post). While the DataContext Serializer would work, it's verbosity is problematic both for size of the generated XML strings and the fact that users can manually edit this XML based property data in an advanced mode. A clean(er) layout certainly would be preferable and more user friendly. Custom XMLSerialization with a PropertyBag Class So… after a bunch of experimentation with different serialization formats I decided to create a custom PropertyBag class that provides for a serializable Dictionary. 
It's basically a custom Dictionary<TType,TValue> implementation with the keys always set as string keys. The result are PropertyBag<TValue> and PropertyBag (which defaults to the object type for values). The PropertyBag<TType> and PropertyBag classes provide these features: Subclassed from Dictionary<T,V> Implements IXmlSerializable with a cleanish XML format ToXml() and FromXml() methods to export and import to and from XML strings Static CreateFromXml() method to create an instance It's simple enough as it's merely a Dictionary<string,object> subclass but that supports serialization to a - what I think at least - cleaner XML format. The class is super simple to use: [TestMethod] public void PropertyBagTwoWayObjectSerializationTest() { var bag = new PropertyBag(); bag.Add("key", "Value"); bag.Add("Key2", 100.10M); bag.Add("Key3", Guid.NewGuid()); bag.Add("Key4", DateTime.Now); bag.Add("Key5", true); bag.Add("Key7", new byte[3] { 42,45,66 } ); bag.Add("Key8", null); bag.Add("Key9", new ComplexObject() { Name = "Rick", Entered = DateTime.Now, Count = 10 }); string xml = bag.ToXml(); TestContext.WriteLine(bag.ToXml()); bag.Clear(); bag.FromXml(xml); Assert.IsTrue(bag["key"] as string == "Value"); Assert.IsInstanceOfType( bag["Key3"], typeof(Guid)); Assert.IsNull(bag["Key8"]); //Assert.IsNull(bag["Key10"]); Assert.IsInstanceOfType(bag["Key9"], typeof(ComplexObject)); } This uses the PropertyBag class which uses a PropertyBag<string,object> - which means it returns untyped values of type object. I suspect for me this will be the most common scenario as I'd want to store arbitrary values in the PropertyBag rather than one specific type. The same code with a strongly typed PropertyBag<decimal> looks like this: [TestMethod] public void PropertyBagTwoWayValueTypeSerializationTest() { var bag = new PropertyBag<decimal>(); bag.Add("key", 10M); bag.Add("Key1", 100.10M); bag.Add("Key2", 200.10M); bag.Add("Key3", 300.10M); string xml = bag.ToXml(); TestContext.WriteLine(bag.ToXml()); bag.Clear(); bag.FromXml(xml); Assert.IsTrue(bag.Get("Key1") == 100.10M); Assert.IsTrue(bag.Get("Key3") == 300.10M); } and produces typed results of type decimal. The types can be either value or reference types the combination of which actually proved to be a little more tricky than anticipated due to null and specific string value checks required - getting the generic typing right required use of default(T) and Convert.ChangeType() to trick the compiler into playing nice. Of course the whole raison d'etre for this class is the XML serialization. You can see in the code above that we're doing a .ToXml() and .FromXml() to serialize to and from string. 
The XML produced for the first example looks like this: <?xml version="1.0" encoding="utf-8"?> <properties> <item> <key>key</key> <value>Value</value> </item> <item> <key>Key2</key> <value type="decimal">100.10</value> </item> <item> <key>Key3</key> <value type="___System.Guid"> <guid>f7a92032-0c6d-4e9d-9950-b15ff7cd207d</guid> </value> </item> <item> <key>Key4</key> <value type="datetime">2011-09-26T17:45:58.5789578-10:00</value> </item> <item> <key>Key5</key> <value type="boolean">true</value> </item> <item> <key>Key7</key> <value type="base64Binary">Ki1C</value> </item> <item> <key>Key8</key> <value type="nil" /> </item> <item> <key>Key9</key> <value type="___Westwind.Tools.Tests.PropertyBagTest+ComplexObject"> <ComplexObject> <Name>Rick</Name> <Entered>2011-09-26T17:45:58.5789578-10:00</Entered> <Count>10</Count> </ComplexObject> </value> </item> </properties>   The format is a bit cleaner than the DataContractSerializer. Each item is serialized into <key> <value> pairs. If the value is a string no type information is written. Since string tends to be the most common type this saves space and serialization processing. All other types are attributed. Simple types are mapped to XML types so things like decimal, datetime, boolean and base64Binary are encoded using their Xml type values. All other types are embedded with a hokey format that describes the .NET type preceded by a three underscores and then are encoded using the XmlSerializer. You can see this best above in the ComplexObject encoding. For custom types this isn't pretty either, but it's more concise than the DCS and it works as long as you're serializing back and forth between .NET clients at least. The XML generated from the second example that uses PropertyBag<decimal> looks like this: <?xml version="1.0" encoding="utf-8"?> <properties> <item> <key>key</key> <value type="decimal">10</value> </item> <item> <key>Key1</key> <value type="decimal">100.10</value> </item> <item> <key>Key2</key> <value type="decimal">200.10</value> </item> <item> <key>Key3</key> <value type="decimal">300.10</value> </item> </properties>   How does it work As I mentioned there's nothing fancy about this solution - it's little more than a subclass of Dictionary<T,V> that implements custom Xml Serialization and a couple of helper methods that facilitate getting the XML in and out of the class more easily. But it's proven very handy for a number of projects for me where dynamic data storage is required. Here's the code: /// <summary> /// Creates a serializable string/object dictionary that is XML serializable /// Encodes keys as element names and values as simple values with a type /// attribute that contains an XML type name. Complex names encode the type /// name with type='___namespace.classname' format followed by a standard xml /// serialized format. The latter serialization can be slow so it's not recommended /// to pass complex types if performance is critical. /// </summary> [XmlRoot("properties")] public class PropertyBag : PropertyBag<object> { /// <summary> /// Creates an instance of a propertybag from an Xml string /// </summary> /// <param name="xml">Serialize</param> /// <returns></returns> public static PropertyBag CreateFromXml(string xml) { var bag = new PropertyBag(); bag.FromXml(xml); return bag; } } /// <summary> /// Creates a serializable string for generic types that is XML serializable. /// /// Encodes keys as element names and values as simple values with a type /// attribute that contains an XML type name. 
Complex names encode the type /// name with type='___namespace.classname' format followed by a standard xml /// serialized format. The latter serialization can be slow so it's not recommended /// to pass complex types if performance is critical. /// </summary> /// <typeparam name="TValue">Must be a reference type. For value types use type object</typeparam> [XmlRoot("properties")] public class PropertyBag<TValue> : Dictionary<string, TValue>, IXmlSerializable { /// <summary> /// Not implemented - this means no schema information is passed /// so this won't work with ASMX/WCF services. /// </summary> /// <returns></returns> public System.Xml.Schema.XmlSchema GetSchema() { return null; } /// <summary> /// Serializes the dictionary to XML. Keys are /// serialized to element names and values as /// element values. An xml type attribute is embedded /// for each serialized element - a .NET type /// element is embedded for each complex type and /// prefixed with three underscores. /// </summary> /// <param name="writer"></param> public void WriteXml(System.Xml.XmlWriter writer) { foreach (string key in this.Keys) { TValue value = this[key]; Type type = null; if (value != null) type = value.GetType(); writer.WriteStartElement("item"); writer.WriteStartElement("key"); writer.WriteString(key as string); writer.WriteEndElement(); writer.WriteStartElement("value"); string xmlType = XmlUtils.MapTypeToXmlType(type); bool isCustom = false; // Type information attribute if not string if (value == null) { writer.WriteAttributeString("type", "nil"); } else if (!string.IsNullOrEmpty(xmlType)) { if (xmlType != "string") { writer.WriteStartAttribute("type"); writer.WriteString(xmlType); writer.WriteEndAttribute(); } } else { isCustom = true; xmlType = "___" + value.GetType().FullName; writer.WriteStartAttribute("type"); writer.WriteString(xmlType); writer.WriteEndAttribute(); } // Actual deserialization if (!isCustom) { if (value != null) writer.WriteValue(value); } else { XmlSerializer ser = new XmlSerializer(value.GetType()); ser.Serialize(writer, value); } writer.WriteEndElement(); // value writer.WriteEndElement(); // item } } /// <summary> /// Reads the custom serialized format /// </summary> /// <param name="reader"></param> public void ReadXml(System.Xml.XmlReader reader) { this.Clear(); while (reader.Read()) { if (reader.NodeType == XmlNodeType.Element && reader.Name == "key") { string xmlType = null; string name = reader.ReadElementContentAsString(); // item element reader.ReadToNextSibling("value"); if (reader.MoveToNextAttribute()) xmlType = reader.Value; reader.MoveToContent(); TValue value; if (xmlType == "nil") value = default(TValue); // null else if (string.IsNullOrEmpty(xmlType)) { // value is a string or object and we can assign TValue to value string strval = reader.ReadElementContentAsString(); value = (TValue) Convert.ChangeType(strval, typeof(TValue)); } else if (xmlType.StartsWith("___")) { while (reader.Read() && reader.NodeType != XmlNodeType.Element) { } Type type = ReflectionUtils.GetTypeFromName(xmlType.Substring(3)); //value = reader.ReadElementContentAs(type,null); XmlSerializer ser = new XmlSerializer(type); value = (TValue)ser.Deserialize(reader); } else value = (TValue)reader.ReadElementContentAs(XmlUtils.MapXmlTypeToType(xmlType), null); this.Add(name, value); } } } /// <summary> /// Serializes this dictionary to an XML string /// </summary> /// <returns>XML String or Null if it fails</returns> public string ToXml() { string xml = null; SerializationUtils.SerializeObject(this, 
out xml); return xml; } /// <summary> /// Deserializes from an XML string /// </summary> /// <param name="xml"></param> /// <returns>true or false</returns> public bool FromXml(string xml) { this.Clear(); // if xml string is empty we return an empty dictionary if (string.IsNullOrEmpty(xml)) return true; var result = SerializationUtils.DeSerializeObject(xml, this.GetType()) as PropertyBag<TValue>; if (result != null) { foreach (var item in result) { this.Add(item.Key, item.Value); } } else // null is a failure return false; return true; } /// <summary> /// Creates an instance of a propertybag from an Xml string /// </summary> /// <param name="xml"></param> /// <returns></returns> public static PropertyBag<TValue> CreateFromXml(string xml) { var bag = new PropertyBag<TValue>(); bag.FromXml(xml); return bag; } } } The code uses a couple of small helper classes SerializationUtils and XmlUtils for mapping Xml types to and from .NET, both of which are from the Westwind.Utilities project (which is the same project where PropertyBag lives) from the West Wind Web Toolkit. The code implements ReadXml and WriteXml for the IXmlSerializable implementation using old school XmlReaders and XmlWriters (because it's pretty simple stuff - no need for XLinq here). Then there are two helper methods .ToXml() and .FromXml() that basically allow your code to easily convert between XML and a PropertyBag object. In my code that's what I use to actually persist to and from the entity XML property during .Load() and .Save() operations. It's sweet to be able to have a string key dictionary and then be able to turn around with 1 line of code to persist the whole thing to XML and back. Hopefully some of you will find this class as useful as I've found it. It's a simple solution to a common requirement in my applications and I've used the hell out of it in the short time since I created it. Resources You can find the complete code for the two classes plus the helpers in the Subversion repository for Westwind.Utilities. You can grab the source files from there or download the whole project. You can also grab the full Westwind.Utilities assembly from NuGet and add it to your project if that's easier for you. PropertyBag Source Code SerializationUtils and XmlUtils Westwind.Utilities Assembly on NuGet (add from Visual Studio) © Rick Strahl, West Wind Technologies, 2005-2011. Posted in .NET, CSharp

    Read the article

  • Why don't %MEM values add up to mem in top?

    - by ben
    I'm currently debugging performance issues with my VPS and for that I'm trying to understand which of the processes eat the most memory. Reading top, here's what I get: Mem: 366544k total, 321396k used, 45148k free, 380k buffers Swap: 1048572k total, 592388k used, 456184k free, 7756k cached PID USER PR NI VIRT RES SHR S %CPU %MEM TIME+ COMMAND 12339 ruby 20 0 844m 74m 2440 S 0 20.8 0:24.84 ruby 12363 ruby 20 0 844m 73m 1576 S 0 20.6 0:00.26 ruby 21117 ruby 20 0 171m 33m 1792 S 0 9.3 2:03.98 ruby 11846 ruby 20 0 858m 21m 1820 S 0 6.0 0:09.15 ruby 21277 ruby 20 0 219m 11m 1648 S 0 3.2 2:00.98 ruby 792 root 20 0 266m 10m 1024 S 0 3.0 1:40.06 ruby 532 mysql 20 0 234m 4760 1040 S 0 1.3 0:41.58 mysqld 793 root 20 0 250m 4616 984 S 0 1.3 1:20.55 ruby 586 root 20 0 156m 4532 848 S 0 1.2 6:17.10 god 12315 ruby 20 0 175m 2412 1900 S 0 0.7 0:07.55 ruby 3844 root 20 0 44036 2132 1028 S 0 0.6 1:08.22 ruby 10939 ruby 20 0 179m 1884 1724 S 0 0.5 0:08.33 ruby 4660 ruby 20 0 229m 1592 1440 S 0 0.4 2:55.46 ruby 3879 nobody 20 0 37428 964 520 S 0 0.3 0:01.99 nginx As you can see my memory is about 90% used (which is my issue) but when you add up the %MEM values, it goes to about 50-60% only. Same thing, RES doesn't add up to ~350mb. Why? Am I misunderstanding their meaning? Thanks

    Read the article

  • Would it be a good idea to work on letting people add arrays of numbers in javascript?

    - by OneThreeSeven
    I am a very mathematically oriented programmer, and I happen to be doing a lot of JavaScript these days. I am really disappointed in the math aspects of JavaScript: the Math object is almost a joke because it has so few methods; you can't use ^ for exponentiation; and the + operator is very limited - you can't add arrays of numbers or do scalar multiplication on arrays. Now I have written some pretty basic extensions to the Math object and have considered writing a library of advanced Math features; amazingly there doesn't seem to be any sort of standard library already out even for calculus, although there is one for vectors and matrices I was able to find. The notation for working with vectors and matrices is really bad when you can't use the + operator on arrays, and you can't do scalar multiplication. For example, here is a hideous expression for subtracting two vectors, A - B: Math.vectorAddition(A,Math.scalarMultiplication(-1,B)); I have been looking for some kind of open-source project to contribute to for a while, and even though my C++ is a bit rusty I would very much like to get into the code for the V8 engine and extend the + operator to work on arrays, to get scalar multiplication to work, and possibly to get the ^ operator to work for exponentiation. These things would greatly enhance the utility of any mathematical JavaScript framework. I really don't know how to get involved in something like the V8 engine other than to download the code and start working on it. Of course I'm afraid that since V8 is Chrome-specific, without browser cross-compatibility a fundamental change of this type is likely to be rejected for V8. I was hoping someone could either tell me why this is a bad idea, or else give me some pointers about how to proceed at this point to get some kind of approval to add these features. Thanks!

    Read the article

  • How to enable the user to add background images to anchor links through the WordPress admin panel? [closed]

    - by janoChen
    I have CSS selectors like this in my style.css: .jimgMenu ul li.landscapes a { background: url(../images/landscapes.jpg) repeat scroll 0%; } What's the easiest way to enable the user to add background images to anchor links like the ones below? front-page.php: <div class="jimgMenu"> <ul> <li class="landscapes"><a href="#nogo">Landscapes</a></li> <li class="people"><a href="#nogo">People</a></li> <li class="nature"><a href="#nogo">Nature</a></li> <li class="abstract"><a href="#nogo">Abstract</a></li> <li class="urban"><a href="#nogo">Urban</a></li> <li class="people2"><a href="#nogo">People</a></li> </ul> </div> To illustrate: .jimgMenu ul li.landscapes a { background: url(<add background image>) repeat scroll 0%; } What would that code look like?

    Read the article

  • How do I add more than one command to /etc/rc.local?

    - by Andreas
    I want to add two power saving commands to the /etc/rc.local file. This to disable Bluetooth: rfkill block bluetooth And this to reduce screen brightness: echo 3024 > /sys/class/backlight/intel_backlight/brightness Added to /etc/rc.local separately, they each work, but not both of them together like this: #!/bin/sh -e # # rc.local # # This script is executed at the end of each multiuser runlevel. # Make sure that the script will "exit 0" on success or any other # value on error. # # In order to enable or disable this script just change the execution # bits. # # By default this script does nothing. echo 3024 > /sys/class/backlight/intel_backlight/brightness rfkill block bluetooth exit 0 How do I add the two commands to get them properly executed at start-up? Update It turned out to be a timing issue. I fixed it by delaying the execution of the first command thus: (sleep 5; echo 3021 > /sys/class/backlight/intel_backlight/brightness)&

    Read the article

  • How to Add a Business Card, or vCard (.vcf) File, to a Signature in Outlook 2013 Without Displaying an Image

    - by Lori Kaufman
    Whenever you add a Business Card to your signature in Outlook 2013, the Signature Editor automatically generates a picture of it and includes that in the signature as well as attaching the .vcf file. However, there is a way to leave out the image. To remove the business card image from your signature but maintain the attached .vcf file, you must make a change to the registry. NOTE: Before making changes to the registry, be sure you back it up. We also recommend creating a restore point you can use to restore your system if something goes wrong. Before changing the registry, we must add the Business Card to the signature and save it so a .vcf file of the contact is created in the Signatures folder. To do this, click the File tab. Click Options in the menu list on the left side of the Account Information screen. On the Outlook Options dialog box, click Mail in the list of options on the left side of the dialog box. On the Mail screen, click Signatures in the Compose messages section. For this example, we will create a new signature to include the .vcf file for your business card without the image. Click New below the Select signature to edit box. Enter a name for the new signature, such as Business Card, and click OK. Enter text in the signature editor and format it the way you want or insert a different image or logo. Click Business Card above the signature editor. Select the contact you want to include in the signature on the Insert Business Card dialog box and click OK. Click Save below the Select signature to edit box. This creates a .vcf file for the business card in the Signatures folder. Click on the business card image in the signature and delete it. You should only see your formatted text or other image or logo in the signature editor. Click OK to save your new signature and close the signature editor. Close Outlook as well. Now, we will open the Registry Editor to add a key and value to indicate where to find the .vcf to include in the signature we just created. If you’re running Windows 8, press the Windows Key + X to open the command menu and select Run. You can also press the Windows Key + R to directly access the Run dialog box. NOTE: In Windows 7, select Run from the Start menu. In the Open edit box on the Run dialog box, enter “regedit” (without the quotes) and click OK. If the User Account Control dialog box displays, click Yes to continue. NOTE: You may not see this dialog box, depending on your User Account Control settings. Navigate to the following registry key: HKEY_CURRENT_USER\Software\Microsoft\Office\15.0\Outlook\Signatures Make sure the Signatures key is selected. Select New | String Value from the Edit menu. NOTE: You can also right-click in the empty space in the right pane and select New | String Value from the popup menu. Rename the new value to the name of the Signature you created. For this example, we named the value Business Card. Double-click on the new value. In the Value data edit box on the Edit String dialog box, enter the value indicating the location of the .vcf file to include in the signature. The format is: <signature name>_files\<name of .vcf file> For our example, the Value data should be as follows: Business Card_files\Lori Kaufman The name of the .vcf file is generally the contact name. If you’re not sure of what to enter for the Value data for the new key value, you can check the location and name of the .vcf file. To do this, open the Outlook Options dialog box and access the Mail screen as instructed earlier in this article. 
However, press and hold the Ctrl key while clicking the Signatures button. The Signatures folder opens in Windows Explorer. There should be a folder in the Signatures folder named after the signature you created with “_files” added to the end. For our example, the folder is named Business Card_files. Open this folder. In this folder, you should see a .vcf file with the name of your contact as the name of the file. For our contact, the file is named Lori Kaufman.vcf. The path to the .vcf file should be the name of the folder for the signature (Business Card_files), followed by a “\”, and the name of the .vcf file without the extension (Lori Kaufman). Putting these names together, you get the path that should be entered as the Value data in the new key you created in the Registry Editor. Business Card_files\Lori Kaufman Once you’ve entered the Value data for the new key, select Exit from the File menu to close the Registry Editor. Open Outlook and click New Email on the Home tab. Click Signature in the Include section of the New Mail Message tab and select your new signature from the drop-down menu. NOTE: If you made the new signature the default signature, it will be automatically inserted into the new mail message. The .vcf file is attached to the email message, but the business card image is not included. All you will see in the body of the email message is the text or other image you included in the signature. You can also choose to include an image of your business card in a signature with no .vcf file attached.     
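    For anyone who would rather script the tweak than click through the Registry Editor, here is a hedged C# sketch that writes the same string value. The key path is the Outlook 2013 (15.0) path from the article, and "Business Card" / "Business Card_files\Lori Kaufman" are simply the article's example names - substitute your own signature and contact names.

    using Microsoft.Win32;

    class SignatureRegistryPatch
    {
        static void Main()
        {
            // Same key the article edits by hand; 15.0 corresponds to Outlook 2013.
            const string signaturesKey = @"Software\Microsoft\Office\15.0\Outlook\Signatures";

            using (RegistryKey key = Registry.CurrentUser.CreateSubKey(signaturesKey))
            {
                // Value name = signature name; value data = "<signature>_files\<vcf file name without extension>"
                key.SetValue("Business Card", @"Business Card_files\Lori Kaufman", RegistryValueKind.String);
            }
        }
    }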

    Read the article

  • What's the best name for a non-mutating "add" method on an immutable collection?

    - by Jon Skeet
    Sorry for the waffly title - if I could come up with a concise title, I wouldn't have to ask the question. Suppose I have an immutable list type. It has an operation Foo(x) which returns a new immutable list with the specified argument as an extra element at the end. So to build up a list of strings with values "Hello", "immutable", "world" you could write: var empty = new ImmutableList<string>(); var list1 = empty.Foo("Hello"); var list2 = list1.Foo("immutable"); var list3 = list2.Foo("word"); (This is C# code, and I'm most interested in a C# suggestion if you feel the language is important. It's not fundamentally a language question, but the idioms of the language may be important.) The important thing is that the existing lists are not altered by Foo - so empty.Count would still return 0. Another (more idiomatic) way of getting to the end result would be: var list = new ImmutableList<string>().Foo("Hello"); .Foo("immutable"); .Foo("word"); My question is: what's the best name for Foo? EDIT 3: As I reveal later on, the name of the type might not actually be ImmutableList<T>, which makes the position clear. Imagine instead that it's TestSuite and that it's immutable because the whole of the framework it's a part of is immutable... (End of edit 3) Options I've come up with so far: Add: common in .NET, but implies mutation of the original list Cons: I believe this is the normal name in functional languages, but meaningless to those without experience in such languages Plus: my favourite so far, it doesn't imply mutation to me. Apparently this is also used in Haskell but with slightly different expectations (a Haskell programmer might expect it to add two lists together rather than adding a single value to the other list). With: consistent with some other immutable conventions, but doesn't have quite the same "additionness" to it IMO. And: not very descriptive. Operator overload for + : I really don't like this much; I generally think operators should only be applied to lower level types. I'm willing to be persuaded though! The criteria I'm using for choosing are: Gives the correct impression of the result of the method call (i.e. that it's the original list with an extra element) Makes it as clear as possible that it doesn't mutate the existing list Sounds reasonable when chained together as in the second example above Please ask for more details if I'm not making myself clear enough... EDIT 1: Here's my reasoning for preferring Plus to Add. Consider these two lines of code: list.Add(foo); list.Plus(foo); In my view (and this is a personal thing) the latter is clearly buggy - it's like writing "x + 5;" as a statement on its own. The first line looks like it's okay, until you remember that it's immutable. In fact, the way that the plus operator on its own doesn't mutate its operands is another reason why Plus is my favourite. Without the slight ickiness of operator overloading, it still gives the same connotations, which include (for me) not mutating the operands (or method target in this case). EDIT 2: Reasons for not liking Add. Various answers are effectively: "Go with Add. That's what DateTime does, and String has Replace methods etc which don't make the immutability obvious." I agree - there's precedence here. However, I've seen plenty of people call DateTime.Add or String.Replace and expect mutation. 
There are loads of newsgroup questions (and probably SO ones if I dig around) which are answered by "You're ignoring the return value of String.Replace; strings are immutable, a new string gets returned." Now, I should reveal a subtlety to the question - the type might not actually be an immutable list, but a different immutable type. In particular, I'm working on a benchmarking framework where you add tests to a suite, and that creates a new suite. It might be obvious that: var list = new ImmutableList<string>(); list.Add("foo"); isn't going to accomplish anything, but it becomes a lot murkier when you change it to: var suite = new TestSuite<string, int>(); suite.Add(x => x.Length); That looks like it should be okay. Whereas this, to me, makes the mistake clearer: var suite = new TestSuite<string, int>(); suite.Plus(x => x.Length); That's just begging to be: var suite = new TestSuite<string, int>().Plus(x => x.Length); Ideally, I would like my users not to have to be told that the test suite is immutable. I want them to fall into the pit of success. This may not be possible, but I'd like to try. I apologise for over-simplifying the original question by talking only about an immutable list type. Not all collections are quite as self-descriptive as ImmutableList<T> :)
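    To make the naming discussion concrete, here is a minimal illustrative sketch (not the poster's actual type) of an immutable list whose non-mutating add is spelled Plus, the option favoured above. Whatever the name ends up being, the mechanics stay the same: copy, append, return a new instance, leave the original untouched.

    using System.Collections.Generic;

    // Illustrative only: an immutable list whose "add" never touches the existing
    // instance and instead returns a new one.
    public sealed class ImmutableList<T>
    {
        private readonly T[] items;

        public ImmutableList() : this(new T[0]) { }
        private ImmutableList(T[] items) { this.items = items; }

        public int Count { get { return items.Length; } }

        // Named Plus to echo the option favoured in the question; the name is the
        // whole debate, the implementation is the same whatever it is called.
        public ImmutableList<T> Plus(T item)
        {
            var copy = new T[items.Length + 1];
            items.CopyTo(copy, 0);
            copy[items.Length] = item;
            return new ImmutableList<T>(copy);
        }
    }

    // Usage mirrors the question: the original instance is never modified.
    //   var empty = new ImmutableList<string>();
    //   var list = empty.Plus("Hello").Plus("immutable").Plus("world");
    //   // empty.Count is still 0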

    Read the article

  • How to manage maintenance/bug-fix branches in Subversion when setup projects need to be built?

    - by Mike Spross
    We have a suite of related products written in VB6, with some C# and VB.NET projects, and all the source is kept in a single Subversion repository. We haven't been using branches in Subversion (although we do tag releases now), and simply do all development in trunk, creating new releases when the trunk is stable enough. This causes no end of grief when we release a new version, issues are found with it, and we have already begun working on new features or major changes to the trunk. In the past, we would address this in one of two ways, depending on the severity of the issues and how stable we thought the trunk was: Hurry to stabilize the trunk, fix the issues, and then release a maintenance update based on the HEAD revision, but this had the side effect of releases that fixed the bugs but introduced new issues because of half-finished features or bugfixes that were in trunk. Make customers wait until the next official release, which is usually a few months. We want to change our policies to better deal with this situation. I was considering creating a "maintenance branch" in Subversion whenever I tag an official release. Then, new development would continue in trunk, and I can periodically merge specific fixes from trunk into the maintenance branch, and create a maintenance release when enough fixes are accumulated, while we continue to work on the next major update in parallel. I know we could also have a more stable trunk and create a branch for new updates instead, but keeping current development in trunk seems simpler to me. The major problem is that while we can easily branch the source code from a release tag and recompile it to get the binaries for that release, I'm not sure how to handle the setup and installer projects. We use QSetup to create all of our setup programs, and right now when we need to modify a setup project, we just edit the project file in-place (all the setup projects and any dependencies that we don't compile ourselves are stored on a separate server, and we make sure to always compile the setup projects on that machine only). However, since we may add or remove files to the setup as our code changes, there is no guarantee that today's setup projects will work with yesterday's source code. I was going to put all the QSetup projects in Subversion to deal with this, but I see some problems with this approach. I want the creation of setup programs to be as automated as possible, and at the very least, I want a separate build machine where I can build the release that I want (grabbing the code from Subversion first), grab the setup project for that release from Subversion, recompile the setup, and then copy the setup to another place on the network for QA testing and eventual release to customers. However, when someone needs to change a setup project (to add a new dependency that trunk now requires or to make other changes), there is a problem. If they treat it like a source file and check it out on their own machine to edit it, they won't be able to add files to the project unless they first copy the files they need to add to the build machine (so they are available to other developers), then copy all the other dependencies from the build machine to their machine, making sure to match the folder structure exactly. The issue here is that QSetup uses absolute paths for any files added to a setup project. 
However, this means installing a bunch of setup dependencies onto development machines, which seems messy (and which could destabilize the development environment if someone accidentally runs the setup project on their machine). Also, how do we manage third-party dependencies? For example, if the current maintenance branch used MSXML 3.0 and the trunk now requires MSXML 4.0, we can't go back and create a maintenance release if we have already replaced the MSXML library on the build machine with the latest version (assuming both versions have the same filename). The only solution I can think is to either put all the third-party dependencies in Subversion along with the source code, or to make sure we put different library versions in separate folders (i.e. C:\Setup\Dependencies\MSXML\v3.0 and C:\Setup\Dependencies\MSXML\v4.0). Is one way "better" or more common than the other? Are there any best practices for dealing with this situation? Basically, if we release v2.0 of our software, we want to be able to release v2.0.1, v2.0.2, and v.2.0.3 while we work on v2.1, but the whole setup/installation project and setup dependency issue is making this more complicated than the typical "just create a branch in Subversion and recompile as needed" answer.

    Read the article

  • How to fix the size of the Hashtable and find out whether it has a fixed size or not?

    - by Vijjendra
    Hi All, I am trying to fix the size of the Hashtable with the following code. Hashtable hashtable = new Hashtable(2); //Add elements in the Hashtable hashtable.Add("A", "Vijendra"); hashtable.Add("B", "Singh"); hashtable.Add("C", "Shakya"); hashtable.Add("D", "Delhi"); hashtable.Add("E", "Singh"); hashtable.Add("F", "Shakya"); hashtable.Add("G", "Delhi"); hashtable.Add("H", "Singh"); hashtable.Add("I", "Shakya"); hashtable.Add("J", "Delhi"); I set the size of this Hashtable to 2, but I can still add more than 2 elements to it. Why does this happen? Am I doing something wrong? I have tried to find out whether this Hashtable has a fixed size with hashtable.IsFixedSize, but it always returns false. Please tell me where I am going wrong, or whether there is another way.
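    For what it's worth, the int passed to the Hashtable constructor is only an initial capacity hint, not a maximum size, which is why Add keeps succeeding; and IsFixedSize stays false because a plain Hashtable is never fixed-size (only special fixed-size wrappers report true). A small sketch illustrating the difference - the maxEntries guard is just a hypothetical way to enforce a cap yourself:

    using System;
    using System.Collections;

    class HashtableCapacityDemo
    {
        static void Main()
        {
            // The constructor argument is an initial capacity hint, not a limit.
            Hashtable hashtable = new Hashtable(2);
            for (int i = 0; i < 10; i++)
                hashtable.Add("Key" + i, i);

            Console.WriteLine(hashtable.Count);        // 10 - it grew past 2
            Console.WriteLine(hashtable.IsFixedSize);  // False - a plain Hashtable is never fixed-size

            // Hypothetical guard if you really want to cap the number of entries:
            const int maxEntries = 2;
            if (hashtable.Count < maxEntries)
                hashtable.Add("AnotherKey", "value");
            else
                Console.WriteLine("Cap reached - not adding");
        }
    }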

    Read the article

  • What is it about DataTable Column Names with dots that makes them unsuitable for WPF's DataGrid control?

    - by Tom Ritter
    Run this, and be confused: <Window x:Class="Fucking_Data_Grids.MainWindow" xmlns="http://schemas.microsoft.com/winfx/2006/xaml/presentation" xmlns:x="http://schemas.microsoft.com/winfx/2006/xaml" Title="MainWindow" Height="350" Width="525"> <StackPanel> <DataGrid Name="r1" ItemsSource="{Binding Path=.}"> </DataGrid> <DataGrid Name="r2" ItemsSource="{Binding Path=.}"> </DataGrid> </StackPanel> </Window> Codebehind: using System.Data; using System.Windows; namespace Fucking_Data_Grids { public partial class MainWindow : Window { public MainWindow() { InitializeComponent(); DataTable dt1, dt2; dt1 = new DataTable(); dt2 = new DataTable(); dt1.Columns.Add("a-name", typeof(string)); dt1.Columns.Add("b-name", typeof(string)); dt1.Rows.Add(new object[] { 1, "Hi" }); dt1.Rows.Add(new object[] { 2, "Hi" }); dt1.Rows.Add(new object[] { 3, "Hi" }); dt1.Rows.Add(new object[] { 4, "Hi" }); dt1.Rows.Add(new object[] { 5, "Hi" }); dt1.Rows.Add(new object[] { 6, "Hi" }); dt1.Rows.Add(new object[] { 7, "Hi" }); dt2.Columns.Add("a.name", typeof(string)); dt2.Columns.Add("b.name", typeof(string)); dt2.Rows.Add(new object[] { 1, "Hi" }); dt2.Rows.Add(new object[] { 2, "Hi" }); dt2.Rows.Add(new object[] { 3, "Hi" }); dt2.Rows.Add(new object[] { 4, "Hi" }); dt2.Rows.Add(new object[] { 5, "Hi" }); dt2.Rows.Add(new object[] { 6, "Hi" }); dt2.Rows.Add(new object[] { 7, "Hi" }); r1.DataContext = dt1; r2.DataContext = dt2; } } } I'll tell you what happens. The top datagrid is populated with column headers and data. The bottom datagrid has column headers but all the rows are blank.
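    A hedged explanation: auto-generated columns use the column name as their binding path, and a dot in a binding path is treated as a property-path separator, so "a.name" tries to bind to a sub-property that doesn't exist and the cells come up blank. The sketch below shows the usual workaround - handle AutoGeneratingColumn and rebind through the DataRowView string indexer so the dot stays part of the key. Drop it into the window's code-behind and wire up the handler before the DataContext is assigned.

    using System.Windows;
    using System.Windows.Controls;
    using System.Windows.Data;

    public partial class MainWindow : Window
    {
        // Hook up in the constructor, before setting r2.DataContext:
        //   r2.AutoGeneratingColumn += OnAutoGeneratingColumn;
        private void OnAutoGeneratingColumn(object sender, DataGridAutoGeneratingColumnEventArgs e)
        {
            var boundColumn = e.Column as DataGridBoundColumn;
            if (boundColumn != null && e.PropertyName.Contains("."))
            {
                // "[a.name]" routes the binding through the DataRowView's string indexer,
                // so the dot is no longer parsed as a property-path separator.
                boundColumn.Binding = new Binding("[" + e.PropertyName + "]");
            }
        }
    }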

    Read the article

  • Compressing a web service response for jQuery

    - by SirDemon
    I'm attempting to gzip a JSON response from an ASMX web service to be consumed on the client-side by jQuery. My web.config already has httpCompression set like so: (I'm using IIS 7) <httpCompression directory="%SystemDrive%\inetpub\temp\IIS Temporary Compressed Files" staticCompressionDisableCpuUsage="90" staticCompressionEnableCpuUsage="60" dynamicCompressionDisableCpuUsage="80" dynamicCompressionEnableCpuUsage="50"> <dynamicTypes> <add mimeType="application/javascript" enabled="true" /> <add mimeType="application/x-javascript" enabled="true" /> <add mimeType="text/css" enabled="true" /> <add mimeType="video/x-flv" enabled="true" /> <add mimeType="application/x-shockwave-flash" enabled="true" /> <add mimeType="text/javascript" enabled="true" /> <add mimeType="text/*" enabled="true" /> <add mimeType="application/json; charset=utf-8" enabled="true" /> </dynamicTypes> <staticTypes> <add mimeType="application/javascript" enabled="true" /> <add mimeType="application/x-javascript" enabled="true" /> <add mimeType="text/css" enabled="true" /> <add mimeType="video/x-flv" enabled="true" /> <add mimeType="application/x-shockwave-flash" enabled="true" /> <add mimeType="text/javascript" enabled="true" /> <add mimeType="text/*" enabled="true" /> </staticTypes> <scheme name="gzip" dll="%Windir%\system32\inetsrv\gzip.dll" /> </httpCompression> <urlCompression doDynamicCompression="true" doStaticCompression="true" /> Through fiddler I can see that normal aspx and other compressions work fine. However, the jQuery ajax request and response work as they should, only nothing gets compressed. What am I missing?

    Read the article

  • Vertical scroll not working: the scrollbar guides appear but the screen does not scroll.

    - by Leandro
    package com.lcardenas.infoberry; import net.rim.device.api.system.DeviceInfo; import net.rim.device.api.system.GPRSInfo; import net.rim.device.api.system.Memory; import net.rim.device.api.ui.MenuItem; import net.rim.device.api.ui.component.Dialog; import net.rim.device.api.ui.component.LabelField; import net.rim.device.api.ui.component.Menu; import net.rim.device.api.ui.component.SeparatorField; import net.rim.device.api.ui.container.MainScreen; import net.rim.device.api.ui.container.VerticalFieldManager; import net.rim.device.api.ui.decor.Background; import net.rim.device.api.ui.decor.BackgroundFactory; public class vtnprincipal extends MainScreen { //llamamos a la clase principal private InfoBerry padre; //variables para el menu private MenuItem mnubateria; private MenuItem mnuestado; private MenuItem mnuacerca; public vtnprincipal(InfoBerry padre) { super(); this.padre = padre; } public void incventana(){ VerticalFieldManager _ventana = new VerticalFieldManager(VerticalFieldManager.VERTICAL_SCROLL | VerticalFieldManager.VERTICAL_SCROLLBAR); double tmemoria =((DeviceInfo.getTotalFlashSize()/1024)/1024.00); double fmemoria = ((Memory.getFlashFree()/1024)/1024.00); Background cyan = BackgroundFactory.createSolidBackground(0x00E0FFFF); Background gris = BackgroundFactory.createSolidBackground(0x00DCDCDC ); //Borramos todos de la pantalla this.deleteAll(); //llamamos al menu incMenu(); //DIBUJAMOS LA VENTANA try{ LabelField title = new LabelField("Info Berry", LabelField.FIELD_HCENTER | LabelField.USE_ALL_HEIGHT ); setTitle(title); _ventana.add(new LabelField("Información del Dispositivo", LabelField.FIELD_HCENTER |LabelField.RIGHT | LabelField.USE_ALL_HEIGHT | LabelField.NON_FOCUSABLE )); _ventana.add(new SeparatorField()); _ventana.add(new SeparatorField()); txthorizontal modelo = new txthorizontal("Modelo:", DeviceInfo.getDeviceName()); modelo.setBackground(gris); _ventana.add(modelo); txthorizontal pin = new txthorizontal("PIN:" , Integer.toHexString(DeviceInfo.getDeviceId()).toUpperCase()); pin.setBackground(cyan); _ventana.add(pin); txthorizontal imeid = new txthorizontal("IMEID:" , GPRSInfo.imeiToString(GPRSInfo.getIMEI())); imeid.setBackground(gris); _ventana.add(imeid); txthorizontal version= new txthorizontal("SO Versión:" , DeviceInfo.getSoftwareVersion()); version.setBackground(cyan); _ventana.add(version); txthorizontal plataforma= new txthorizontal("SO Plataforma:" , DeviceInfo.getPlatformVersion()); plataforma.setBackground(gris); _ventana.add(plataforma); txthorizontal numero= new txthorizontal("Numero Telefonico: " , "Hay que firmar"); numero.setBackground(cyan); _ventana.add(numero); _ventana.add(new SeparatorField()); _ventana.add(new SeparatorField()); _ventana.add(new LabelField("Memoria", LabelField.FIELD_HCENTER | LabelField.USE_ALL_HEIGHT | LabelField.NON_FOCUSABLE)); _ventana.add(new SeparatorField()); txthorizontal totalm= new txthorizontal("Memoria app Total:" , mmemoria(tmemoria) + " Mb"); totalm.setBackground(gris); _ventana.add(totalm); txthorizontal disponiblem= new txthorizontal("Memoria app Disponible:" , mmemoria(fmemoria) + " Mb"); disponiblem.setBackground(cyan); _ventana.add(disponiblem); ///txthorizontal estadoram = new txthorizontal("Memoria RAM:" , mmemoria(prueba) + " Mb"); //estadoram.setBackground(gris); //add(estadoram); _ventana.add(new SeparatorField()); _ventana.add(new SeparatorField()); this.add(_ventana); }catch(Exception e){ Dialog.alert("Excepción en clase vtnprincipal: " + e.toString()); } } //DIBUJAMOS EL MENU private void incMenu() { 
MenuItem.separator(30); mnubateria = new MenuItem("Bateria",40, 10) { public void run() { bateria(); } }; mnuestado = new MenuItem("Estado de Red", 50, 10) { public void run() { estado(); } }; mnuacerca = new MenuItem("Acerca de..", 60, 10) { public void run() { acerca(); } }; MenuItem.separator(70); }; // public void makeMenu(Menu menu, int instance) { if (!menu.isDisplayed()) { menu.deleteAll(); menu.add(MenuItem.separator(30)); menu.add(mnubateria); menu.add(mnuestado); menu.add(mnuacerca); menu.add(MenuItem.separator(60)); } } public void bateria(){ padre.vtnbateria.incventana(); padre.pushScreen(padre.vtnbateria); } public void estado(){ padre.vtnestado.incventana(); padre.pushScreen(padre.vtnestado); } public void acerca(){ padre.vtnacerca.incventana(); padre.pushScreen(padre.vtnacerca); } public boolean onClose(){ Dialog.alert("Hasta Luego"); System.exit(0); return true; } public double mmemoria(double x) { if ( x > 0 ) return Math.floor(x * 100) / 100; else return Math.ceil(x * 100) / 100; } }

    Read the article

  • How do you add a row to a databound DataGridView using the DataGridView in the GUI?

    - by IsaacB
    Hi guys, I've bound a datagridview to a collection of objects. It's giving me a null reference exception when I try to add a new row to a collection (empty or otherwise) using the gui. More specifically, when there is a write event it does this. How can I override the behavior of the write event and manually add all of the columns contents to the object collection? I could disable the add new row feature and make an add button, but the built in datagridview row add looks so much more slick.
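    For what it's worth, the grid's built-in new-row placeholder can only create items when the data source supports adding through IBindingList, which a plain List<T> or simple collection does not - a common source of exactly this kind of null reference error. A hedged sketch (the Person type is just an example with a parameterless constructor) using BindingList<T> behind a BindingSource, so the slick built-in row add keeps working:

    using System.ComponentModel;
    using System.Windows.Forms;

    public class Person
    {
        public string Name { get; set; }
        public int Age { get; set; }
    }

    public class GridForm : Form
    {
        public GridForm()
        {
            // BindingList<T> implements IBindingList, so the grid can create new rows itself.
            var people = new BindingList<Person> { AllowNew = true };
            var source = new BindingSource { DataSource = people };

            var grid = new DataGridView
            {
                Dock = DockStyle.Fill,
                DataSource = source,
                AllowUserToAddRows = true   // keeps the built-in new-row placeholder
            };
            Controls.Add(grid);
        }
    }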

    Read the article

  • How to manage maintenance/bug-fix branches in Subversion when third-party installers are involved?

    - by Mike Spross
    We have a suite of related products written in VB6, with some C# and VB.NET projects, and all the source is kept in a single Subversion repository. We haven't been using branches in Subversion (although we do tag releases now), and simply do all development in trunk, creating new releases when the trunk is stable enough. This causes no end of grief when we release a new version, issues are found with it, and we have already begun working on new features or major changes to the trunk. In the past, we would address this in one of two ways, depending on the severity of the issues and how stable we thought the trunk was: Hurry to stabilize the trunk, fix the issues, and then release a maintenance update based on the HEAD revision, but this had the side effect of releases that fixed the bugs but introduced new issues because of half-finished features or bugfixes that were in trunk. Make customers wait until the next official release, which is usually a few months. We want to change our policies to better deal with this situation. I was considering creating a "maintenance branch" in Subversion whenever I tag an official release. Then, new development would continue in trunk, and I can periodically merge specific fixes from trunk into the maintenance branch, and create a maintenance release when enough fixes are accumulated, while we continue to work on the next major update in parallel. I know we could also have a more stable trunk and create a branch for new updates instead, but keeping current development in trunk seems simpler to me. The major problem is that while we can easily branch the source code from a release tag and recompile it to get the binaries for that release, I'm not sure how to handle the setup and installer projects. We use QSetup to create all of our setup programs, and right now when we need to modify a setup project, we just edit the project file in-place (all the setup projects and any dependencies that we don't compile ourselves are stored on a separate server, and we make sure to always compile the setup projects on that machine only). However, since we may add or remove files to the setup as our code changes, there is no guarantee that today's setup projects will work with yesterday's source code. I was going to put all the QSetup projects in Subversion to deal with this, but I see some problems with this approach. I want the creation of setup programs to be as automated as possible, and at the very least, I want a separate build machine where I can build the release that I want (grabbing the code from Subversion first), grab the setup project for that release from Subversion, recompile the setup, and then copy the setup to another place on the network for QA testing and eventual release to customers. However, when someone needs to change a setup project (to add a new dependency that trunk now requires or to make other changes), there is a problem. If they treat it like a source file and check it out on their own machine to edit it, they won't be able to add files to the project unless they first copy the files they need to add to the build machine (so they are available to other developers), then copy all the other dependencies from the build machine to their machine, making sure to match the folder structure exactly. The issue here is that QSetup uses absolute paths for any files added to a setup project. 
However, this means installing a bunch of setup dependencies onto development machines, which seems messy (and which could destabilize the development environment if someone accidentally runs the setup project on their machine). Also, how do we manage third-party dependencies? For example, if the current maintenance branch used MSXML 3.0 and the trunk now requires MSXML 4.0, we can't go back and create a maintenance release if we have already replaced the MSXML library on the build machine with the latest version (assuming both versions have the same filename). The only solution I can think is to either put all the third-party dependencies in Subversion along with the source code, or to make sure we put different library versions in separate folders (i.e. C:\Setup\Dependencies\MSXML\v3.0 and C:\Setup\Dependencies\MSXML\v4.0). Is one way "better" or more common than the other? Are there any best practices for dealing with this situation? Basically, if we release v2.0 of our software, we want to be able to release v2.0.1, v2.0.2, and v.2.0.3 while we work on v2.1, but the whole setup/installation project and setup dependency issue is making this more complicated than the the typical "just create a branch in Subversion and recompile as needed" answer.

    Read the article

  • How do I add a new ItemTemplate to a Guidance Automation Toolkit vs2010 .vsix package?

    - by Maslow
    I have a vs2008 GAT package that has been updated to work on vs2010 which walks a developer through creating a custom solution complete with a Service layer project, Domain, usercontrol, and unit test project. I'd like to add a new ItemTemplate to the package to create a user control or dialog that adheres to our current practices. So far it seems I need a Recipe, but I can not find any reference for the GuidancePackage.xml file and how to properly add things to it, let alone how to add a new Item Template or guidance steps and decisions. How can I add an item template to my add-in?

    Read the article

  • Why does Cache.Add return an object that represents the cached item?

    - by Pure.Krome
    From MSDN about the differences between Adding or Inserting an item the ASP.NET Cache: Note: The Add and Insert methods have the same signature, but there are subtle differences between them. First, calling the Add method returns an object that represents the cached item, while calling Insert does not. Second, their behavior is different if you call these methods and add an item to the Cache that is already stored there. The Insert method replaces the item, while the Add method fails. [emphasis mine] The second part is easy. No question about that. But with the first part, why would it want to return an object that represents the cached item? If I'm trying to Add an item to the cache, I already have/know what that item is? I don't get it. What is the reasoning behind this?
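    One practical use of that return value, as I read the documented behaviour: because Add refuses to overwrite, it acts as an atomic "insert if absent", and the object it returns is whatever was already stored under that key (null when your item actually went in). That makes a race-safe get-or-add possible without locking. A hedged sketch - the helper name and the 10-minute expiration are arbitrary choices for the example:

    using System;
    using System.Web;
    using System.Web.Caching;

    public static class CacheHelper
    {
        public static T GetOrAdd<T>(Cache cache, string key, Func<T> create) where T : class
        {
            var existing = cache[key] as T;
            if (existing != null)
                return existing;

            var created = create();

            // Add returns the item that was already stored under this key,
            // or null if our item was actually inserted.
            var raced = cache.Add(key, created, null,
                DateTime.UtcNow.AddMinutes(10), Cache.NoSlidingExpiration,
                CacheItemPriority.Normal, null) as T;

            // If another request won the race, use its item; the cache was not replaced.
            return raced ?? created;
        }
    }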

    Read the article

  • JPanel superclass doesn't add the components of its subclasses..

    - by Acryl
    Well, I use a JFrame that adds two JPanels. Those JPanels are superclasses, because I find it easier to separate different areas of the GUI into different classes. But here's the problem: I add the (superclass) JPanel to the JFrame, I set the layout of the superclass JPanel to new BorderLayout(), and I add components in the subclasses, like this: JPanel panel = new JPanel(); panel.setLayout(new GridLayout(2,1)); panel.add(new JLabel("Label:")); panel.add(new JTextField()); add(panel, BorderLayout.NORTH); But it doesn't show. What am I doing wrong? I've tried it without an additional JPanel in the subclasses, but it doesn't work either. I use JDK 1.6

    Read the article

  • Symfony2 same form, different entities NOT related

    - by user1381537
    I'm trying to write one form for submitting against MySQL DB, but I can't get it working, I've tried a lot of things (separate forms, create an ->add('foo', new foo()) to a field, and trying to parse plain SQL with a normal HTML form is my only solution, which is obviously not the best. This is my DB structure: As you can see I need to insert the comments textarea to ticketcomments among the user who wrote it, etc. On crmentity the description field. Then on ticketcf the fields that I need to submit from form, are this (because you wont know if I don't tell you because of the field names): tcf.cf594 AS Type, tcf.cf675 AS Suscription, tcf.cf770 AS ID_PRODUCT, tcf.cf746 AS NotificationDate, tcf.cf747 AS ResponseDate, tcf.cf748 AS ResolutionDate, And, of course, every table needs to have the same ticketid id for the submitted form, so we can retrieve it with one simple query. It will be easy to do with plain SQL instead of using DQL and Symfony2 forms, but is not a good way to do it. Also, here's my "Ticket list" query, if you need it to have it more clear... SELECT t.ticketNo AS Ticket, t.title AS Asunto, t.status AS Estado, t.updateLog AS LOG, t.hours AS Horas, t.solution AS Solucion, t.priority AS Prioridad, tcf.cf594 AS Tipo, tcf.cf675 AS Suscripcion, tcf.cf770 AS IDPROD, tcf.cf746 AS F_Noti, tcf.cf747 AS F_Resp, tcf.cf748 AS F_Reso, CONCAT (cd.firstname, cd.lastname) AS Contacto, crm.description AS Descripcion, crm.crmid AS id FROM WbsGoclientsBundle:VtigerTroubletickets t INNER JOIN WbsGoclientsBundle:VtigerTicketcf tcf WITH t.ticketid = tcf.ticketid INNER JOIN WbsGoclientsBundle:VtigerContactdetails cd WITH t.parentId = cd.contactid INNER JOIN WbsGoclientsBundle:VtigerCrmentity crm WITH t.ticketid = crm.crmid WHERE t.parentId IN ( SELECT cd1.contactid FROM WbsGoclientsBundle:VtigerContactdetails cd1 WHERE cd1.accountid = ( SELECT cd2.accountid FROM WbsGoclientsBundle:VtigerContactdetails cd2 WHERE cd2.contactid = :contactid)) AND t.status <> \'Closed\' And also "Ticket details" query (which is not in DQL format yet, only SQL) is so simple, it only retrieve the comments field and createdtime from ticketcomments appended to this query so we have all the fields... Thank you. This is a test form, using troubletickets and ticketcomments, it's returning errores because I can't set a comments field because troubletickets doesn't has it, but I need that field to be submitted to ticketcomments ... 
VtigerTicketcommentsType <?php namespace WbsGo\clientsBundle\Form\Type; use Symfony\Component\Form\AbstractType, Symfony\Component\Form\FormBuilderInterface; use Symfony\Component\OptionsResolver\OptionsResolverInterface; class VtigerTicketcommentsType extends AbstractType { public function buildForm(FormBuilderInterface $builder, array $options) { $builder ->add('ticketid') ->add('comments') ->add('ownerid') ->add('ownertype') ->add('createdtype') ; } public function setDefaultOptions(OptionsResolverInterface $resolver) { $resolver->setDefaults(array( 'data_class' => 'WbsGo\clientsBundle\Entity\VtigerTicketcomments' )); } public function getName() { return 'comments'; } } OpenTicketType.php <?php namespace WbsGo\clientsBundle\Form; use Symfony\Component\Form\AbstractType, Symfony\Component\Form\FormBuilderInterface ; use WbsGo\clientsBundle\Form\Type\VtigerTicketcommentsType; use Symfony\Component\OptionsResolver\OptionsResolverInterface; class OpenTicketType extends AbstractType { public function buildForm(FormBuilderInterface $builder, array $options) { $builder ->add('title') ->add('priority') ->add('solution') ->add('comments', 'collection', array( 'type' => new VtigerTicketcommentsType() )) ; } public function setDefaultOptions(OptionsResolverInterface $resolver) { $resolver->setDefaults(array( 'data_class' => 'WbsGo\clientsBundle\Entity\VtigerTroubletickets' )); } public function getName() { return 'ticket'; } } TicketController.php <?php namespace WbsGo\clientsBundle\Controller; use Symfony\Bundle\FrameworkBundle\Controller\Controller; use WbsGo\clientsBundle\Entity\VtigerTroubletickets; use WbsGo\clientsBundle\Entity\VtigerTicketcomments; use WbsGo\clientsBundle\Form\OpenTicketType; use Symfony\Component\HttpFoundation\Request; class TicketController extends Controller { public function indexAction() { $em = $this->getDoctrine()->getManager(); $tickets = $em ->getRepository('WbsGoclientsBundle:VtigerTroubletickets') ->findAllOpenByCustomerId($this->getUser()->getId()); $userdata = $this->getDoctrine()->getManager() ->getRepository('WbsGoclientsBundle:VtigerContactdetails') ->findContact($this->getUser()->getId()); return $this ->render('WbsGoclientsBundle:Ticket:index.html.twig', array('tickets' => $tickets, 'userdata' => $userdata)); } public function addAction() { $assets = $this->getDoctrine()->getManager() ->getRepository('WbsGoclientsBundle:VtigerAssets') ->findAssetByAccountId($this->getUser()->getId()); $assetlist = array(); foreach ($assets as $key => $v) { $assetlist[$key] = $key; } $form = $this->createForm(new OpenTicketType(), new VtigerTroubletickets()); return $this ->render('WbsGoclientsBundle:Ticket:add.html.twig', array('form' => $form->createView(), 'assets' => $assets,)); } } This is the error Symfony2 is returning Neither the property "comments" nor one of the methods "getComments()", "isComments()", "hasComments()", "_get()" or "_call()" exist and have public access in class "WbsGo\clientsBundle\Entity\VtigerTroubletickets". EDIT 2 This code is actually rendering my forms, but I need help in order to submit each XXXType form to its corresponding table. 
public function buildForm(FormBuilderInterface $builder, array $options) { $builder ->add('descripcion') ->add('prioridad') ->add('solucion') ->add('comment', new VtigerTicketcommentsType() ) ->add('contacto') ->add('suscripcion') ->add('producto', 'entity', array( 'class' => 'WbsGo\clientsBundle\Entity\VtigerAssets', 'property' => 'assetname', 'empty_value' => '--SELECT--', 'query_builder' => function(\WbsGo\clientsBundle\Entity\VtigerAssetsRepository $repository) { //return $repository->findAssetByAccountId($this->customerId); return $repository->createQueryBuilder('a') ->select('a') ->where('a.account = (SELECT cd.accountid FROM WbsGoclientsBundle:VtigerContactdetails cd WHERE cd.contactid = ?1)') ->setParameter(1, $this->customerId); } ) ) ->add('hardware') ->add('backup') ->add('web') ->add('restore') ->add('customerId') ; } I also removed ->add('ticketid') from VtigerTicketcommentsType.php because it has relationship and is not needed. it's auto_incremental and must be generated once everything is submitted.

    Read the article

  • 8-Puzzle Solution executes infinitely [migrated]

    - by Ashwin
    I am looking for a solution to the 8-puzzle problem using the A* algorithm. I found this project on the internet. Please see the files - proj1 and EightPuzzle. proj1 contains the entry point for the program (the main() function) and EightPuzzle describes a particular state of the puzzle. Each state is an object of the 8-puzzle. I feel that there is nothing wrong in the logic, but it loops forever for these two inputs that I have tried: {8,2,7,5,1,6,3,0,4} and {3,1,6,8,4,5,7,2,0}. Both of them are valid input states. What is wrong with the code? Note: For better viewing, copy the code into Notepad++ or some other text editor that can recognize Java source files, because there are a lot of comments in the code. Since A* requires a heuristic, they have provided the option of using the Manhattan distance and a heuristic that calculates the number of misplaced tiles. And to ensure that the best heuristic is executed first, they have implemented a PriorityQueue. The compareTo() function is implemented in the EightPuzzle class. The input to the program can be changed by changing the value of p1d in the main() function of the proj1 class. The reason I am saying that solutions exist for my two inputs above is that the applet here solves them. Please ensure that you select 8-puzzle from the options in the applet. EDIT: I gave this input {0,5,7,6,8,1,2,4,3}. It took about 10 seconds and gave a result with 26 moves. But the applet gave a result with 24 moves in 0.0001 seconds with A*. For quick reference I have pasted the two classes without the comments: EightPuzzle import java.util.*; public class EightPuzzle implements Comparable <Object> { int[] puzzle = new int[9]; int h_n= 0; int hueristic_type = 0; int g_n = 0; int f_n = 0; EightPuzzle parent = null; public EightPuzzle(int[] p, int h_type, int cost) { this.puzzle = p; this.hueristic_type = h_type; this.h_n = (h_type == 1) ?
h1(p) : h2(p); this.g_n = cost; this.f_n = h_n + g_n; } public int getF_n() { return f_n; } public void setParent(EightPuzzle input) { this.parent = input; } public EightPuzzle getParent() { return this.parent; } public int inversions() { /* * Definition: For any other configuration besides the goal, * whenever a tile with a greater number on it precedes a * tile with a smaller number, the two tiles are said to be inverted */ int inversion = 0; for(int i = 0; i < this.puzzle.length; i++ ) { for(int j = 0; j < i; j++) { if(this.puzzle[i] != 0 && this.puzzle[j] != 0) { if(this.puzzle[i] < this.puzzle[j]) inversion++; } } } return inversion; } public int h1(int[] list) // h1 = the number of misplaced tiles { int gn = 0; for(int i = 0; i < list.length; i++) { if(list[i] != i && list[i] != 0) gn++; } return gn; } public LinkedList<EightPuzzle> getChildren() { LinkedList<EightPuzzle> children = new LinkedList<EightPuzzle>(); int loc = 0; int temparray[] = new int[this.puzzle.length]; EightPuzzle rightP, upP, downP, leftP; while(this.puzzle[loc] != 0) { loc++; } if(loc % 3 == 0){ temparray = this.puzzle.clone(); temparray[loc] = temparray[loc + 1]; temparray[loc + 1] = 0; rightP = new EightPuzzle(temparray, this.hueristic_type, this.g_n + 1); rightP.setParent(this); children.add(rightP); }else if(loc % 3 == 1){ //add one child swaps with right temparray = this.puzzle.clone(); temparray[loc] = temparray[loc + 1]; temparray[loc + 1] = 0; rightP = new EightPuzzle(temparray, this.hueristic_type, this.g_n + 1); rightP.setParent(this); children.add(rightP); //add one child swaps with left temparray = this.puzzle.clone(); temparray[loc] = temparray[loc - 1]; temparray[loc - 1] = 0; leftP = new EightPuzzle(temparray, this.hueristic_type, this.g_n + 1); leftP.setParent(this); children.add(leftP); }else if(loc % 3 == 2){ // add one child swaps with left temparray = this.puzzle.clone(); temparray[loc] = temparray[loc - 1]; temparray[loc - 1] = 0; leftP = new EightPuzzle(temparray, this.hueristic_type, this.g_n + 1); leftP.setParent(this); children.add(leftP); } if(loc / 3 == 0){ //add one child swaps with lower temparray = this.puzzle.clone(); temparray[loc] = temparray[loc + 3]; temparray[loc + 3] = 0; downP = new EightPuzzle(temparray, this.hueristic_type, this.g_n + 1); downP.setParent(this); children.add(downP); }else if(loc / 3 == 1 ){ //add one child, swap with upper temparray = this.puzzle.clone(); temparray[loc] = temparray[loc - 3]; temparray[loc - 3] = 0; upP = new EightPuzzle(temparray, this.hueristic_type, this.g_n + 1); upP.setParent(this); children.add(upP); //add one child, swap with lower temparray = this.puzzle.clone(); temparray[loc] = temparray[loc + 3]; temparray[loc + 3] = 0; downP = new EightPuzzle(temparray, this.hueristic_type, this.g_n + 1); downP.setParent(this); children.add(downP); }else if (loc / 3 == 2 ){ //add one child, swap with upper temparray = this.puzzle.clone(); temparray[loc] = temparray[loc - 3]; temparray[loc - 3] = 0; upP = new EightPuzzle(temparray, this.hueristic_type, this.g_n + 1); upP.setParent(this); children.add(upP); } return children; } public int h2(int[] list) // h2 = the sum of the distances of the tiles from their goal positions // for each item find its goal position // calculate how many positions it needs to move to get into that position { int gn = 0; int row = 0; int col = 0; for(int i = 0; i < list.length; i++) { if(list[i] != 0) { row = list[i] / 3; col = list[i] % 3; row = Math.abs(row - (i / 3)); col = Math.abs(col - (i % 3)); gn += row; gn += 
col; } } return gn; } public String toString() { String x = ""; for(int i = 0; i < this.puzzle.length; i++){ x += puzzle[i] + " "; if((i + 1) % 3 == 0) x += "\n"; } return x; } public int compareTo(Object input) { if (this.f_n < ((EightPuzzle) input).getF_n()) return -1; else if (this.f_n > ((EightPuzzle) input).getF_n()) return 1; return 0; } public boolean equals(EightPuzzle test){ if(this.f_n != test.getF_n()) return false; for(int i = 0 ; i < this.puzzle.length; i++) { if(this.puzzle[i] != test.puzzle[i]) return false; } return true; } public boolean mapEquals(EightPuzzle test){ for(int i = 0 ; i < this.puzzle.length; i++) { if(this.puzzle[i] != test.puzzle[i]) return false; } return true; } } proj1 import java.util.*; public class proj1 { /** * @param args */ public static void main(String[] args) { int[] p1d = {1, 4, 2, 3, 0, 5, 6, 7, 8}; int hueristic = 2; EightPuzzle start = new EightPuzzle(p1d, hueristic, 0); int[] win = { 0, 1, 2, 3, 4, 5, 6, 7, 8}; EightPuzzle goal = new EightPuzzle(win, hueristic, 0); astar(start, goal); } public static void astar(EightPuzzle start, EightPuzzle goal) { if(start.inversions() % 2 == 1) { System.out.println("Unsolvable"); return; } // function A*(start,goal) // closedset := the empty set // The set of nodes already evaluated. LinkedList<EightPuzzle> closedset = new LinkedList<EightPuzzle>(); // openset := set containing the initial node // The set of tentative nodes to be evaluated. priority queue PriorityQueue<EightPuzzle> openset = new PriorityQueue<EightPuzzle>(); openset.add(start); while(openset.size() > 0){ // x := the node in openset having the lowest f_score[] value EightPuzzle x = openset.peek(); // if x = goal if(x.mapEquals(goal)) { // return reconstruct_path(came_from, came_from[goal]) Stack<EightPuzzle> toDisplay = reconstruct(x); System.out.println("Printing solution... "); System.out.println(start.toString()); print(toDisplay); return; } // remove x from openset // add x to closedset closedset.add(openset.poll()); LinkedList <EightPuzzle> neighbor = x.getChildren(); // foreach y in neighbor_nodes(x) while(neighbor.size() > 0) { EightPuzzle y = neighbor.removeFirst(); // if y in closedset if(closedset.contains(y)){ // continue continue; } // tentative_g_score := g_score[x] + dist_between(x,y) // // if y not in openset if(!closedset.contains(y)){ // add y to openset openset.add(y); // } // } // } } public static void print(Stack<EightPuzzle> x) { while(!x.isEmpty()) { EightPuzzle temp = x.pop(); System.out.println(temp.toString()); } } public static Stack<EightPuzzle> reconstruct(EightPuzzle winner) { Stack<EightPuzzle> correctOutput = new Stack<EightPuzzle>(); while(winner.getParent() != null) { correctOutput.add(winner); winner = winner.getParent(); } return correctOutput; } }

    Read the article

  • Fusion CRM Release 7 RCDs and TOIs Now Available!

    - by Richard Lefebvre
    Fusion CRM Release 7 Release Content Documents (RCD) and Transfer of Information (TOI) presentations are now available. In addition, you can find 245 new or changed product features for Release 7 on Oracle Product Features. All the new RCDs and TOIs can be found on the Fusion Learning Center: Customer Relationship Management TOIs - Customer Center, Define Segmentation Strategy, Enterprise Contracts, Oracle Social Network, Sales, and Territory Management Business Process Model (BPM) RCDs - Customer Service, Marketing, Order Fulfillment, and Sales Financials BPM RCDs - Asset Lifecycle Management, Cash and Treasury Management, and Financial Control and Reporting Human Capital Management TOIs - Workforce Development, Compensation, Benefits, Worker Performance, Workforce Profiles, Enterprise Structures, Talent Review, Manage Transaction and Batch Processing, Delete HCM Storage Data, and Load Batch Data BPM RCDs - Compensation Management, Enterprise Information Management, Workforce Deployment, and Workforce Development Procurement TOI - Requisitions BPM RCD - Procurement Project Portfolio Management TOIs - Project Resources, Evaluate and Assign Resources, Maintain Resource Assignments, Manage Resource Demand, Manage Resource Supply, Manage Resource Utilization and Analytics, Project Management, Set Up Project Management BPM RCD - Project Management Supply Chain Management TOIs - Manage New Product Definition and Approval, Manage Product Change Orders, Product Hub, Define Item Class BPM RCDs - Materials Management and Logistics, Product Management and Supply Chain Planning Partners and customers can access the content from the following locations: Partner access: BPM RCDs and TOIs Oracle Partner Network Fusion Learning Center New Feature RCDs Oracle Product Features Customer access: TOIs My Oracle Support (Note:1528594.1) BPM RCDs My Oracle Support (Note:1559828.1) New Feature RCDs Oracle Product Features

    Read the article

  • Add Recaptcha and GridView to an ASP.NET 3.5 Guestbook using MS SQL Server and VB.NET

    This is the conclusion to a four-part ASP.NET 3.5 guest book application tutorial series. In this last part you will learn how to integrate Recaptcha, which is used to prevent automated spam bot submissions. Also discussed is how to add a GridView web control, which is used to display all guest book comments retrieved from the database.

    Read the article

  • Using Durandal to Create Single Page Apps

    - by Stephen.Walther
    A few days ago, I gave a talk on building Single Page Apps on the Microsoft Stack. In that talk, I recommended that people use Knockout, Sammy, and RequireJS to build their presentation layer and use the ASP.NET Web API to expose data from their server. After I gave the talk, several people contacted me and suggested that I investigate a new open-source JavaScript library named Durandal. Durandal stitches together Knockout, Sammy, and RequireJS to make it easier to use these technologies together. In this blog entry, I want to provide a brief walkthrough of using Durandal to create a simple Single Page App. I am going to demonstrate how you can create a simple Movies App which contains (virtual) pages for viewing a list of movies, adding new movies, and viewing movie details. The goal of this blog entry is to give you a sense of what it is like to build apps with Durandal. Installing Durandal First things first. How do you get Durandal? The GitHub project for Durandal is located here: https://github.com/BlueSpire/Durandal The Wiki — located at the GitHub project — contains all of the current documentation for Durandal. Currently, the documentation is a little sparse, but it is enough to get you started. Instead of downloading the Durandal source from GitHub, a better option for getting started with Durandal is to install one of the Durandal NuGet packages. I built the Movies App described in this blog entry by first creating a new ASP.NET MVC 4 Web Application with the Basic Template. Next, I executed the following command from the Package Manager Console: Install-Package Durandal.StarterKit As you can see from the screenshot of the Package Manager Console above, the Durandal Starter Kit package has several dependencies including: · jQuery · Knockout · Sammy · Twitter Bootstrap The Durandal Starter Kit package includes a sample Durandal application. You can get to the Starter Kit app by navigating to the Durandal controller. Unfortunately, when I first tried to run the Starter Kit app, I got an error because the Starter Kit is hard-coded to use a particular version of jQuery which is already out of date. You can fix this issue by modifying the App_Start\DurandalBundleConfig.cs file so it is jQuery version agnostic like this: bundles.Add( new ScriptBundle("~/scripts/vendor") .Include("~/Scripts/jquery-{version}.js") .Include("~/Scripts/knockout-{version}.js") .Include("~/Scripts/sammy-{version}.js") // .Include("~/Scripts/jquery-1.9.0.min.js") // .Include("~/Scripts/knockout-2.2.1.js") // .Include("~/Scripts/sammy-0.7.4.min.js") .Include("~/Scripts/bootstrap.min.js") ); The recommendation is that you create a Durandal app in a folder off your project root named App. The App folder in the Starter Kit contains the following subfolders and files: · durandal – This folder contains the actual durandal JavaScript library. · viewmodels – This folder contains all of your application’s view models. · views – This folder contains all of your application’s views. · main.js — This file contains all of the JavaScript startup code for your app including the client-side routing configuration. · main-built.js – This file contains an optimized version of your application. You need to build this file by using the RequireJS optimizer (unfortunately, before you can run the optimizer, you must first install NodeJS). 
For the purpose of this blog entry, I wanted to start from scratch when building the Movies app, so I deleted all of these files and folders except for the durandal folder which contains the durandal library. Creating the ASP.NET MVC Controller and View A Durandal app is built using a single server-side ASP.NET MVC controller and ASP.NET MVC view. A Durandal app is a Single Page App. When you navigate between pages, you are not navigating to new pages on the server. Instead, you are loading new virtual pages into the one-and-only-one server-side view. For the Movies app, I created the following ASP.NET MVC Home controller: public class HomeController : Controller { public ActionResult Index() { return View(); } } There is nothing special about the Home controller – it is as basic as it gets. Next, I created the following server-side ASP.NET view. This is the one-and-only server-side view used by the Movies app: @{ Layout = null; } <!DOCTYPE html> <html> <head> <title>Index</title> </head> <body> <div id="applicationHost"> Loading app.... </div> @Scripts.Render("~/scripts/vendor") <script type="text/javascript" src="~/App/durandal/amd/require.js" data-main="/App/main"></script> </body> </html> Notice that I set the Layout property for the view to the value null. If you neglect to do this, then the default ASP.NET MVC layout will be applied to the view and you will get the <!DOCTYPE> and opening and closing <html> tags twice. Next, notice that the view contains a DIV element with the Id applicationHost. This marks the area where virtual pages are loaded. When you navigate from page to page in a Durandal app, HTML page fragments are retrieved from the server and stuck in the applicationHost DIV element. Inside the applicationHost element, you can place any content which you want to display when a Durandal app is starting up. For example, you can create a fancy splash screen. I opted for simply displaying the text “Loading app…”: Next, notice the view above includes a call to the Scripts.Render() helper. This helper renders out all of the JavaScript files required by the Durandal library such as jQuery and Knockout. Remember to fix the App_Start\DurandalBundleConfig.cs as described above or Durandal will attempt to load an old version of jQuery and throw a JavaScript exception and stop working. Your application JavaScript code is not included in the scripts rendered by the Scripts.Render helper. Your application code is loaded dynamically by RequireJS with the help of the following SCRIPT element located at the bottom of the view: <script type="text/javascript" src="~/App/durandal/amd/require.js" data-main="/App/main"></script> The data-main attribute on the SCRIPT element causes RequireJS to load your /app/main.js JavaScript file to kick-off your Durandal app. Creating the Durandal Main.js File The Durandal Main.js JavaScript file, located in your App folder, contains all of the code required to configure the behavior of Durandal. Here’s what the Main.js file looks like in the case of the Movies app: require.config({ paths: { 'text': 'durandal/amd/text' } }); define(function (require) { var app = require('durandal/app'), viewLocator = require('durandal/viewLocator'), system = require('durandal/system'), router = require('durandal/plugins/router'); //>>excludeStart("build", true); system.debug(true); //>>excludeEnd("build"); app.start().then(function () { //Replace 'viewmodels' in the moduleId with 'views' to locate the view. //Look for partial views in a 'views' folder in the root. 
viewLocator.useConvention(); //configure routing router.useConvention(); router.mapNav("movies/show"); router.mapNav("movies/add"); router.mapNav("movies/details/:id"); app.adaptToDevice(); //Show the app by setting the root view model for our application with a transition. app.setRoot('viewmodels/shell', 'entrance'); }); }); There are three important things to notice about the main.js file above. First, notice that it contains a section which enables debugging which looks like this: //>>excludeStart("build", true); system.debug(true); //>>excludeEnd("build"); This code enables debugging for your Durandal app which is very useful when things go wrong. When you call system.debug(true), Durandal writes out debugging information to your browser JavaScript console. For example, you can use the debugging information to diagnose issues with your client-side routes: (The funny looking //> symbols around the system.debug() call are RequireJS optimizer pragmas). The main.js file is also the place where you configure your client-side routes. In the case of the Movies app, the main.js file is used to configure routes for three pages: the movies show, add, and details pages. //configure routing router.useConvention(); router.mapNav("movies/show"); router.mapNav("movies/add"); router.mapNav("movies/details/:id"); The route for movie details includes a route parameter named id. Later, we will use the id parameter to look up and display the details for the right movie. Finally, the main.js file above contains the following line of code: //Show the app by setting the root view model for our application with a transition. app.setRoot('viewmodels/shell', 'entrance'); This line of code causes Durandal to load up a JavaScript file named shell.js and an HTML fragment named shell.html. I'll discuss the shell in the next section. Creating the Durandal Shell You can think of the Durandal shell as the layout or master page for a Durandal app. The shell is where you put all of the content which you want to remain constant as a user navigates from virtual page to virtual page. For example, the shell is a great place to put your website logo and navigation links. The Durandal shell is composed of two parts: a JavaScript file and an HTML file. Here's what the HTML file looks like for the Movies app: <h1>Movies App</h1> <div class="container-fluid page-host"> <!--ko compose: { model: router.activeItem, //wiring the router afterCompose: router.afterCompose, //wiring the router transition:'entrance', //use the 'entrance' transition when switching views cacheViews:true //telling composition to keep views in the dom, and reuse them (only a good idea with singleton view models) }--><!--/ko--> </div> And here is what the JavaScript file looks like: define(function (require) { var router = require('durandal/plugins/router'); return { router: router, activate: function () { return router.activate('movies/show'); } }; }); The JavaScript file contains the view model for the shell. This view model returns the Durandal router so you can access the list of configured routes from your shell. Notice that the JavaScript file includes a function named activate(). This function loads the movies/show page as the first page in the Movies app. If you want to create a different default Durandal page, then pass the name of a different page to the router.activate() method. Creating the Movies Show Page Durandal pages are created out of a view model and a view. The view model contains all of the data and view logic required for the view. 
The view contains all of the HTML markup for rendering the view model. Let’s start with the movies show page. The movies show page displays a list of movies. The view model for the show page looks like this: define(function (require) { var moviesRepository = require("repositories/moviesRepository"); return { movies: ko.observable(), activate: function() { this.movies(moviesRepository.listMovies()); } }; }); You create a view model by defining a new RequireJS module (see http://requirejs.org). You create a RequireJS module by placing all of your JavaScript code into an anonymous function passed to the RequireJS define() method. A RequireJS module has two parts. You retrieve all of the modules which your module requires at the top of your module. The code above depends on another RequireJS module named repositories/moviesRepository. Next, you return the implementation of your module. The code above returns a JavaScript object which contains a property named movies and a method named activate. The activate() method is a magic method which Durandal calls whenever it activates your view model. Your view model is activated whenever you navigate to a page which uses it. In the code above, the activate() method is used to get the list of movies from the movies repository and assign the list to the view model movies property. The HTML for the movies show page looks like this: <table> <thead> <tr> <th>Title</th><th>Director</th> </tr> </thead> <tbody data-bind="foreach:movies"> <tr> <td data-bind="text:title"></td> <td data-bind="text:director"></td> <td><a data-bind="attr:{href:'#/movies/details/'+id}">Details</a></td> </tr> </tbody> </table> <a href="#/movies/add">Add Movie</a> Notice that this is an HTML fragment. This fragment will be stuffed into the page-host DIV element in the shell.html file which is stuffed, in turn, into the applicationHost DIV element in the server-side MVC view. The HTML markup above contains data-bind attributes used by Knockout to display the list of movies (To learn more about Knockout, visit http://knockoutjs.com). The list of movies from the view model is displayed in an HTML table. Notice that the page includes a link to a page for adding a new movie. The link uses the following URL which starts with a hash: #/movies/add. Because the link starts with a hash, clicking the link does not cause a request back to the server. Instead, you navigate to the movies/add page virtually. Creating the Movies Add Page The movies add page also consists of a view model and view. The add page enables you to add a new movie to the movie database. Here’s the view model for the add page: define(function (require) { var app = require('durandal/app'); var router = require('durandal/plugins/router'); var moviesRepository = require("repositories/moviesRepository"); return { movieToAdd: { title: ko.observable(), director: ko.observable() }, activate: function () { this.movieToAdd.title(""); this.movieToAdd.director(""); this._movieAdded = false; }, canDeactivate: function () { if (this._movieAdded == false) { return app.showMessage('Are you sure you want to leave this page?', 'Navigate', ['Yes', 'No']); } else { return true; } }, addMovie: function () { // Add movie to db moviesRepository.addMovie(ko.toJS(this.movieToAdd)); // flag new movie this._movieAdded = true; // return to list of movies router.navigateTo("#/movies/show"); } }; }); The view model contains one property named movieToAdd which is bound to the add movie form. The view model also has the following three methods: 1. 
activate() – This method is called by Durandal when you navigate to the add movie page. The activate() method resets the add movie form by clearing out the movie title and director properties. 2. canDeactivate() – This method is called by Durandal when you attempt to navigate away from the add movie page. If you return false then navigation is cancelled. 3. addMovie() – This method executes when the add movie form is submitted. This code adds the new movie to the movie repository. I really like the Durandal canDeactivate() method. In the code above, I use the canDeactivate() method to show a warning to a user if they navigate away from the add movie page – either by clicking the Cancel button or by hitting the browser back button – before submitting the add movie form: The view for the add movie page looks like this: <form data-bind="submit:addMovie"> <fieldset> <legend>Add Movie</legend> <div> <label> Title: <input data-bind="value:movieToAdd.title" required /> </label> </div> <div> <label> Director: <input data-bind="value:movieToAdd.director" required /> </label> </div> <div> <input type="submit" value="Add" /> <a href="#/movies/show">Cancel</a> </div> </fieldset> </form> I am using Knockout to bind the movieToAdd property from the view model to the INPUT elements of the HTML form. Notice that the FORM element includes a data-bind attribute which invokes the addMovie() method from the view model when the HTML form is submitted. Creating the Movies Details Page You navigate to the movies details Page by clicking the Details link which appears next to each movie in the movies show page: The Details links pass the movie ids to the details page: #/movies/details/0 #/movies/details/1 #/movies/details/2 Here’s what the view model for the movies details page looks like: define(function (require) { var router = require('durandal/plugins/router'); var moviesRepository = require("repositories/moviesRepository"); return { movieToShow: { title: ko.observable(), director: ko.observable() }, activate: function (context) { // Grab movie from repository var movie = moviesRepository.getMovie(context.id); // Add to view model this.movieToShow.title(movie.title); this.movieToShow.director(movie.director); } }; }); Notice that the view model activate() method accepts a parameter named context. You can take advantage of the context parameter to retrieve route parameters such as the movie Id. In the code above, the context.id property is used to retrieve the correct movie from the movie repository and the movie is assigned to a property named movieToShow exposed by the view model. The movie details view displays the movieToShow property by taking advantage of Knockout bindings: <div> <h2 data-bind="text:movieToShow.title"></h2> directed by <span data-bind="text:movieToShow.director"></span> </div> Summary The goal of this blog entry was to walkthrough building a simple Single Page App using Durandal and to get a feel for what it is like to use this library. I really like how Durandal stitches together Knockout, Sammy, and RequireJS and establishes patterns for using these libraries to build Single Page Apps. Having a standard pattern which developers on a team can use to build new pages is super valuable. Once you get the hang of it, using Durandal to create new virtual pages is dead simple. Just define a new route, view model, and view and you are done. 
I also appreciate the fact that Durandal did not attempt to re-invent the wheel and that Durandal leverages existing JavaScript libraries such as Knockout, RequireJS, and Sammy. These existing libraries are powerful libraries and I have already invested a considerable amount of time in learning how to use them. Durandal makes it easier to use these libraries together without losing any of their power. Durandal has some additional interesting features which I have not had a chance to play with yet. For example, you can use the RequireJS optimizer to combine and minify all of a Durandal app’s code. Also, Durandal supports a way to create custom widgets (client-side controls) by composing widgets from a controller and view. You can download the code for the Movies app by clicking the following link (this is a Visual Studio 2012 project): Durandal Movie App

    Read the article

  • Configure Forms based authentication in SharePoint 2010

    - by sreejukg
    Configuring form authentication is a straightforward task in SharePoint. Most public-facing websites built on SharePoint require form-based authentication. Recently, a WCM implementation where I was part of the project team required a registration system. Any internet user can register on the site, and the site offers them some membership-specific functionality once they have logged in. Since registration is open to all, I did not want to store all those users in Active Directory, so I decided to use forms-based authentication for those users. This is a typical scenario for form authentication in a SharePoint implementation. To implement form authentication you require the following: a data store where the users are stored – technically this can be Active Directory, a SQL Server database, LDAP, etc.; form authentication will redirect the user to the login page if the request is not authenticated; and on the login page there will be controls that validate the user input against the configured data store. In this article, I am going to use a SQL Server database with the ASP.NET membership APIs to configure form-based authentication in SharePoint 2010. This article assumes that you have a SQL membership database available. I already configured the membership and roles database using the aspnet_regsql command. If you want to know how to configure the membership database using the aspnet_regsql command, read the blog post below. http://weblogs.asp.net/sreejukg/archive/2011/06/16/usage-of-aspnet-regsql-exe-in-asp-net-4.aspx The snapshot of the database after implementing membership and the role manager is as follows. I have used the database name "aspnetdb_claim". Make sure you have created the database and that it contains the tables and stored procedures for membership. Create a web application with claims-based authentication. This article assumes you have already created a web application using claims-based authentication; if you want to enable forms-based authentication in SharePoint 2010, you must enable claims-based authentication. Read this post for creating a web application using claims-based authentication. http://weblogs.asp.net/sreejukg/archive/2011/06/15/create-a-web-application-in-sharepoint-2010-using-claims-based-authentication.aspx Make sure you have selected Enable Forms Based Authentication, and then entered the membership provider and role manager names. To make sure you are done with the configuration, navigate to the Central Administration website, go to the Web Applications page, select the web application and click the Authentication Providers icon; you will see the authentication providers for the current web application. Go to the Claims Authentication Types section and make sure you have enabled forms-based authentication. As mentioned in the snapshot, I have named the membership provider SPFormAuthMembership and the role manager SPFormAuthRoleManager. You can choose your own names as you need. Modify the configuration files (web.config) to enable form authentication. There are three applications that need to be configured to support form authentication. The following are those applications. Central Administration: if you want to assign permissions to the web application using credentials from form authentication, you need to update the Central Administration configuration. If you do not want to access form authentication credentials from Central Administration, just skip this step.
STS service application: the Security Token Service is the service application that issues security tokens when users log in. You need to modify the configuration of the STS application to make sure users are able to log in. To find the STS application, follow these steps: go to IIS Manager, expand the Sites node and you will see SharePoint Web Services, expand SharePoint Web Services and you will see SecurityTokenServiceApplication, then right-click SecurityTokenServiceApplication and click Explore; it will open the corresponding file system location. By default, the path for STS is C:\Program Files\Common Files\Microsoft Shared\Web Server Extensions\14\WebServices\SecurityToken You need to modify the configuration file available in the mentioned location. The web application that needs to be enabled with form authentication: you need to modify the configuration of your web application to make sure it identifies users from form authentication. Based on the above, I am going to modify the web configuration. At the end of each step, I have mentioned the expected output. I recommend you go step by step and, after each step, make sure the configuration changes are working as expected. If you do everything all together and test your application at the end, you may face difficulties in troubleshooting configuration errors. Modifications for the Central Administration web.config: open the web.config for Central Administration in a text editor. I always prefer Visual Studio for editing web.config. In most cases, the path of the web.config for the Central Administration website is as follows: C:\inetpub\wwwroot\wss\VirtualDirectories\<port number> Make sure you keep a backup copy of the web.config before editing it. Let me summarize what we are going to do with the Central Administration web.config. First I am going to add a connection string that points to the form authentication database that I created as mentioned in previous steps. Then I need to add a membership provider and a role manager with the corresponding connection string. Then I need to update the PeoplePickerWildcards section to make sure the users appear in search results. By default there is no connection string available in the web.config of Central Administration. Add a connection string just after the configSections element. Below is the connection string I have used throughout the article. <add name="FormAuthConnString" connectionString="Initial Catalog=yourdatabasename;data source=databaseservername;Integrated Security=SSPI;" /> Once you have added the connection string, the web.config looks similar to the following. Now add the membership provider to the configuration. In the web.config for CA there will be a <membership> tag; search for it. You will find membership and role manager under the <system.web> element. Under the membership providers section add the code below… <add name="SPFormAuthMembership" type="System.Web.Security.SqlMembershipProvider, System.Web, Version=2.0.0.0, Culture=neutral, PublicKeyToken=b03f5f7f11d50a3a" applicationName="FormAuthApplication" connectionStringName="FormAuthConnString" /> After adding the membership element, see the snapshot of the web.config. Now you need to add the role manager element to the web.config. Inside the providers element under roleManager, add the code below.
<add name="SPFormAuthRoleManager" type="System.Web.Security.SqlRoleProvider, System.Web, Version=2.0.0.0, Culture=neutral, PublicKeyToken=b03f5f7f11d50a3a" applicationName="FormAuthApplication" connectionStringName="FormAuthConnString" /> After adding, your role manager will look similar to the following. As a last step, you need to update the people picker wildcard element in web.config, so that the users from your membership provider are available for browsing in Central Administration. Search for PeoplePickerWildcards in the web.config, add the following inside the <PeoplePickerWildcards> tag. <add key="SPFormAuthMembership" value="%" /> After adding this element, your web.config will look like After completing these steps, you can browse the users available in the SQL server database from central administration website. Go to the site collection administrator’s page from central administration. Select the site collection you have created for form authentication. Click on the people picker icon, choose Forms Auth and click on the search icon, you will see the users listed from the SQL server database. Once you complete these steps, make sure the users are available for browsing from central administration website. If you are unable to find the users, there must be some errors in the configuration, check windows event logs to find related errors and fix them. Change the web.config for STS application Open the web.config for STS application in text editor. By default, STS web.config does not have system.Web or connectionstrings section. Just after the System.Webserver element, add the following code. <connectionStrings> <add name="FormAuthConnString" connectionString="Initial Catalog=aspnetdb_claim;data source=sp2010_db;Integrated Security=SSPI;" /> </connectionStrings> <system.web> <roleManager enabled="true" cacheRolesInCookie="false" cookieName=".ASPXROLES" cookieTimeout="30" cookiePath="/" cookieRequireSSL="false" cookieSlidingExpiration="true" cookieProtection="All" createPersistentCookie="false" maxCachedResults="25"> <providers> <add name="SPFormAuthRoleManager" type="System.Web.Security.SqlRoleProvider, System.Web, Version=2.0.0.0, Culture=neutral, PublicKeyToken=b03f5f7f11d50a3a" applicationName="FormAuthApplication" connectionStringName="FormAuthConnString" /> </providers> </roleManager> <membership userIsOnlineTimeWindow="15" hashAlgorithmType=""> <providers> <add name="SPFormAuthMembership" type="System.Web.Security.SqlMembershipProvider, System.Web, Version=2.0.0.0, Culture=neutral, PublicKeyToken=b03f5f7f11d50a3a" applicationName="FormAuthApplication" connectionStringName="FormAuthConnString" /> </providers> </membership> </system.web> See the snapshot of the web.config after adding the required elements. After adding this, you should be able to login using the credentials from SQL server. Try assigning a user as primary/secondary administrator for your site collection from Central Administration and login to your site using form authentication. If you made everything correct, you should be able to login. This means you have successfully completed configuration of STS Configuration of Web Application for Form Authentication As a last step, you need to modify the web.config of the form authentication web application. Once you have done this, you should be able to grant permissions to users stored in the membership database. Open the Web.config of the web application you created for form authentication. 
You can find the web.config for the application under the path C:\inetpub\wwwroot\wss\VirtualDirectories\<port number> Basically you need to add the connection string, a membership provider and a role manager, and update the people picker wildcard configuration. Add the connection string (the same one you added to the web.config in Central Administration). See the screenshot after the connection string has been added. Search for <membership> in the web.config; you will find it inside the system.web element. There will be other providers already available there. Add your form authentication membership provider (similar to the one added to the Central Administration web.config) to the providers element under membership. Find the snapshot of the membership configuration as follows. Search for the <roleManager> element in web.config and add the new provider name under the providers section of the roleManager element. See the snapshot of the web.config after the new provider has been added. Now you need to configure the people picker wildcard setting in web.config. As I mentioned earlier, this is to make sure you can locate the users by entering part of their username. Add the following line under the <PeoplePickerWildcards> element in web.config. See the screenshot of the PeoplePickerWildcards element after the element has been added. Now you have completed all the setup for form authentication. Navigate to the web application. From Site Actions -> Site Settings, go to People and Groups. Click on New -> Add Users; it will pop up the people picker dialog. Click on the icon, select Forms Auth, enter a username in the search textbox, and click on the search icon. See the screenshot of the admin search when I tried searching the users. If it displays the user, it means you are done with the configuration. If you add users to the form authentication database, the users will be able to access the SharePoint portal as normal.
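As a hedged illustration (this is not part of the original article, and the user name, password and e-mail below are placeholder values), users could be added to that membership database from a small console or admin tool using the standard ASP.NET Membership API against the provider name configured above; the calling application's config must register the same SPFormAuthMembership provider and connection string:

using System.Web.Security;

// Sketch only: create a user through the SPFormAuthMembership provider registered in web.config.
MembershipCreateStatus status;
MembershipProvider provider = Membership.Providers["SPFormAuthMembership"];

provider.CreateUser(
    "jsmith",                 // user name (placeholder)
    "P@ssw0rd!",              // password (must satisfy the provider's password policy)
    "jsmith@example.com",     // e-mail (placeholder)
    null, null,               // password question/answer, if your provider requires them
    true,                     // isApproved
    null,                     // providerUserKey (let the provider generate one)
    out status);

// status reports MembershipCreateStatus.Success or the reason the call failed
// (duplicate user name, invalid password, and so on).

Once such a user exists in the membership database, it can be located through the people picker and granted permissions exactly as described above.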

    Read the article

< Previous Page | 130 131 132 133 134 135 136 137 138 139 140 141  | Next Page >