Search Results

Search found 7902 results on 317 pages for 'structure'.


  • Reorganized JSON

    - by couatl
    I need to reorganize a JSON structure into a new one in Python. For example, from { 'a': 1, 'b': 1, 'd': {'d1': '1', 'd2': 2}, 'm': [ {'x': 6, 'y': 5, 'z': {'foo': 'foo1', 'bar': 'bar1'} }, {'x': 8, 'y': 8, 'z': {'foo': 'foo2', 'bar': 'bar2'} } ... ] } to { 'new_a': 1, 'new_d': {'new_d1': '1', 'new_d2': 2}, 'new_m': [ {'new_x': 6, 'new_z': {'new_foo': 'foo1', 'new_bar': 'bar1'} }, {'new_x': 8, 'new_z': {'new_foo': 'foo2', 'new_bar': 'bar2'} } ... ] } My idea for building the new form from the old JSON is below - is there a more elegant way to do this? import json new_data = {} new_data['new_a'] = old_data['a'] new_data['new_d'] = {} new_data['new_d']['new_d1'] = old_data['d']['d1'] new_data['new_d']['new_d2'] = old_data['d']['d2'] new_data['new_m'] = [] new_m = [] for m in old_data['m']: new_m.append({'new_x' : m['x'], 'new_z' : {'new_foo' ....
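    A minimal sketch of one possible, more declarative approach (not from the original question): describe the renaming in a key map and apply it with a small recursive helper, so adding or dropping fields only means editing the map. The KEY_MAP contents and the function name are illustrative assumptions.

    import json

    # Map old keys to new keys; None means "drop this key",
    # a (new_name, nested_map) tuple describes renaming inside a sub-object.
    KEY_MAP = {
        'a': 'new_a',
        'b': None,
        'd': ('new_d', {'d1': 'new_d1', 'd2': 'new_d2'}),
        'm': ('new_m', {'x': 'new_x',
                        'y': None,
                        'z': ('new_z', {'foo': 'new_foo', 'bar': 'new_bar'})}),
    }

    def remap(value, key_map):
        """Rename keys according to key_map; recurse into dicts and lists."""
        if isinstance(value, list):
            return [remap(item, key_map) for item in value]
        if not isinstance(value, dict):
            return value
        result = {}
        for old_key, rule in key_map.items():
            if old_key not in value or rule is None:
                continue                      # key absent or deliberately dropped
            if isinstance(rule, tuple):       # (new name, nested map)
                new_key, nested_map = rule
                result[new_key] = remap(value[old_key], nested_map)
            else:                             # plain rename
                result[rule] = value[old_key]
        return result

    old_data = json.loads('{"a": 1, "b": 1, "d": {"d1": "1", "d2": 2}, '
                          '"m": [{"x": 6, "y": 5, "z": {"foo": "foo1", "bar": "bar1"}}]}')
    print(json.dumps(remap(old_data, KEY_MAP), indent=2))

    For the sample input this produces the target structure shown above.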

    Read the article

  • Failure retrieving contents of directory

    - by Bondye
    Currently I have a couple of websites. My problem is that if I log in on one specific domain with any of my programs (using Notepad++, FileZilla and NetBeans) the program stops at the content listing. I had it running correctly (I've been working on a project on this domain for more than a year now) and suddenly I broke it somehow. This only happens on this one specific domain; all other domains (from other hosts) are working. My colleague (next to me, with the same IP address) is able to log in on this domain. Notepad++ says: Failure retrieving contents of directory FileZilla says: Failed to retrieve directory listing NetBeans pops up: Upload files on save failed. (Because I have the setting upload on save enabled.) What I tried: First I thought it was my firewall, so I disabled the firewall, but no result. Also note that all other domains are working. Maybe a blacklist with my IP address? No, my colleague has the same IP address. Could anyone help me with this? Notepad++ Log [NppFTP] Everything initialized -> TYPE I Connecting -> Quit 220 ProFTPD 1.3.3e Server ready. -> USER username 331 Password required for domain -> PASS *HIDDEN* 230 User username logged in -> TYPE A 200 Type set to A -> MODE S 200 Mode set to S -> STRU F 200 Structure set to F -> CWD /domains/domain.nl/ 250 CWD command successful Connected -> CWD /domains/domain.nl/ 250 CWD command successful -> PASV 227 Entering Passive Mode (194,247,31,xx,137,xx). -> LIST -al Failure retrieving contents of directory /domains/domain.nl/ FileZilla log (translated from Dutch) Status: Connecting to 194.247.xx.xx:21... Status: Connection established, waiting for welcome message... Response: 220 ProFTPD 1.3.3e Server ready. Command: USER username Response: 331 Password required for username Command: PASS ******** Response: 230 User username logged in Command: SYST Response: 215 UNIX Type: L8 Command: FEAT Response: 211-Features: Response: MDTM Response: MFMT Response: LANG en-US;ja-JP;zh-TW;it-IT;fr-FR;zh-CN;ru-RU;bg-BG;ko-KR Response: TVFS Response: UTF8 Response: AUTH TLS Response: MFF modify;UNIX.group;UNIX.mode; Response: MLST modify*;perm*;size*;type*;unique*;UNIX.group*;UNIX.mode*;UNIX.owner*; Response: PBSZ Response: PROT Response: REST STREAM Response: SIZE Response: 211 End Command: OPTS UTF8 ON Response: 200 UTF8 set to on Status: Connected Status: Retrieving directory listing... Command: PWD Response: 257 "/" is the current directory Command: TYPE I Response: 200 Type set to I Command: PASV Response: 227 Entering Passive Mode (194,247,31,xx,xxx,xx). Command: MLSD Error: Connection lost Error: Failed to retrieve directory listing

    Read the article

  • Creating a dynamic, extensible C# Expando Object

    - by Rick Strahl
    I love dynamic functionality in a strongly typed language because it offers us the best of both worlds. In C# (or any of the main .NET languages) we now have the dynamic type that provides a host of dynamic features for the static C# language. One place where I've found dynamic to be incredibly useful is in building extensible types or types that expose traditionally non-object data (like dictionaries) in easier to use and more readable syntax. I wrote about a couple of these for accessing old school ADO.NET DataRows and DataReaders more easily for example. These classes are dynamic wrappers that provide easier syntax and auto-type conversions which greatly simplifies code clutter and increases clarity in existing code. ExpandoObject in .NET 4.0 Another great use case for dynamic objects is the ability to create extensible objects - objects that start out with a set of static members and then can add additional properties and even methods dynamically. The .NET 4.0 framework actually includes an ExpandoObject class which provides a very dynamic object that allows you to add properties and methods on the fly and then access them again. For example with ExpandoObject you can do stuff like this:dynamic expand = new ExpandoObject(); expand.Name = "Rick"; expand.HelloWorld = (Func<string, string>) ((string name) => { return "Hello " + name; }); Console.WriteLine(expand.Name); Console.WriteLine(expand.HelloWorld("Dufus")); Internally ExpandoObject uses a Dictionary like structure and interface to store properties and methods and then allows you to add and access properties and methods easily. As cool as ExpandoObject is it has a few shortcomings too: It's a sealed type so you can't use it as a base class It only works off 'properties' in the internal Dictionary - you can't expose existing type data It doesn't serialize to XML or with DataContractSerializer/DataContractJsonSerializer Expando - A truly extensible Object ExpandoObject is nice if you just need a dynamic container for a dictionary like structure. However, if you want to build an extensible object that starts out with a set of strongly typed properties and then allows you to extend it, ExpandoObject does not work because it's a sealed class that can't be inherited. I started thinking about this very scenario for one of my applications I'm building for a customer. In this system we are connecting to various different user stores. Each user store has the same basic requirements for username, password, name etc. But then each store also has a number of extended properties that is available to each application. In the real world scenario the data is loaded from the database in a data reader and the known properties are assigned from the known fields in the database. All unknown fields are then 'added' to the expando object dynamically. In the past I've done this very thing with a separate property - Properties - just like I do for this class. But the property and dictionary syntax is not ideal and tedious to work with. I started thinking about how to represent these extra property structures. One way certainly would be to add a Dictionary, or an ExpandoObject to hold all those extra properties. 
But wouldn't it be nice if the application could actually extend an existing object that looks something like this as you can with the Expando object:public class User : Westwind.Utilities.Dynamic.Expando { public string Email { get; set; } public string Password { get; set; } public string Name { get; set; } public bool Active { get; set; } public DateTime? ExpiresOn { get; set; } } and then simply start extending the properties of this object dynamically? Using the Expando object I describe later you can now do the following:[TestMethod] public void UserExampleTest() { var user = new User(); // Set strongly typed properties user.Email = "[email protected]"; user.Password = "nonya123"; user.Name = "Rickochet"; user.Active = true; // Now add dynamic properties dynamic duser = user; duser.Entered = DateTime.Now; duser.Accesses = 1; // you can also add dynamic props via indexer user["NickName"] = "AntiSocialX"; duser["WebSite"] = "http://www.west-wind.com/weblog"; // Access strong type through dynamic ref Assert.AreEqual(user.Name,duser.Name); // Access strong type through indexer Assert.AreEqual(user.Password,user["Password"]); // access dyanmically added value through indexer Assert.AreEqual(duser.Entered,user["Entered"]); // access index added value through dynamic Assert.AreEqual(user["NickName"],duser.NickName); // loop through all properties dynamic AND strong type properties (true) foreach (var prop in user.GetProperties(true)) { object val = prop.Value; if (val == null) val = "null"; Console.WriteLine(prop.Key + ": " + val.ToString()); } } As you can see this code somewhat blurs the line between a static and dynamic type. You start with a strongly typed object that has a fixed set of properties. You can then cast the object to dynamic (as I discussed in my last post) and add additional properties to the object. You can also use an indexer to add dynamic properties to the object. To access the strongly typed properties you can use either the strongly typed instance, the indexer or the dynamic cast of the object. Personally I think it's kinda cool to have an easy way to access strongly typed properties by string which can make some data scenarios much easier. To access the 'dynamically added' properties you can use either the indexer on the strongly typed object, or property syntax on the dynamic cast. Using the dynamic type allows all three modes to work on both strongly typed and dynamic properties. Finally you can iterate over all properties, both dynamic and strongly typed if you chose. Lots of flexibility. Note also that by default the Expando object works against the (this) instance meaning it extends the current object. You can also pass in a separate instance to the constructor in which case that object will be used to iterate over to find properties rather than this. Using this approach provides some really interesting functionality when use the dynamic type. To use this we have to add an explicit constructor to the Expando subclass:public class User : Westwind.Utilities.Dynamic.Expando { public string Email { get; set; } public string Password { get; set; } public string Name { get; set; } public bool Active { get; set; } public DateTime? ExpiresOn { get; set; } public User() : base() { } // only required if you want to mix in seperate instance public User(object instance) : base(instance) { } } to allow the instance to be passed. 
When you do you can now do:[TestMethod] public void ExpandoMixinTest() { // have Expando work on Addresses var user = new User( new Address() ); // cast to dynamicAccessToPropertyTest dynamic duser = user; // Set strongly typed properties duser.Email = "[email protected]"; user.Password = "nonya123"; // Set properties on address object duser.Address = "32 Kaiea"; //duser.Phone = "808-123-2131"; // set dynamic properties duser.NonExistantProperty = "This works too"; // shows default value Address.Phone value Console.WriteLine(duser.Phone); } Using the dynamic cast in this case allows you to access *three* different 'objects': The strong type properties, the dynamically added properties in the dictionary and the properties of the instance passed in! Effectively this gives you a way to simulate multiple inheritance (which is scary - so be very careful with this, but you can do it). How Expando works Behind the scenes Expando is a DynamicObject subclass as I discussed in my last post. By implementing a few of DynamicObject's methods you can basically create a type that can trap 'property missing' and 'method missing' operations. When you access a non-existant property a known method is fired that our code can intercept and provide a value for. Internally Expando uses a custom dictionary implementation to hold the dynamic properties you might add to your expandable object. Let's look at code first. The code for the Expando type is straight forward and given what it provides relatively short. Here it is.using System; using System.Collections.Generic; using System.Linq; using System.Dynamic; using System.Reflection; namespace Westwind.Utilities.Dynamic { /// <summary> /// Class that provides extensible properties and methods. This /// dynamic object stores 'extra' properties in a dictionary or /// checks the actual properties of the instance. /// /// This means you can subclass this expando and retrieve either /// native properties or properties from values in the dictionary. /// /// This type allows you three ways to access its properties: /// /// Directly: any explicitly declared properties are accessible /// Dynamic: dynamic cast allows access to dictionary and native properties/methods /// Dictionary: Any of the extended properties are accessible via IDictionary interface /// </summary> [Serializable] public class Expando : DynamicObject, IDynamicMetaObjectProvider { /// <summary> /// Instance of object passed in /// </summary> object Instance; /// <summary> /// Cached type of the instance /// </summary> Type InstanceType; PropertyInfo[] InstancePropertyInfo { get { if (_InstancePropertyInfo == null && Instance != null) _InstancePropertyInfo = Instance.GetType().GetProperties(BindingFlags.Instance | BindingFlags.Public | BindingFlags.DeclaredOnly); return _InstancePropertyInfo; } } PropertyInfo[] _InstancePropertyInfo; /// <summary> /// String Dictionary that contains the extra dynamic values /// stored on this object/instance /// </summary> /// <remarks>Using PropertyBag to support XML Serialization of the dictionary</remarks> public PropertyBag Properties = new PropertyBag(); //public Dictionary<string,object> Properties = new Dictionary<string, object>(); /// <summary> /// This constructor just works off the internal dictionary and any /// public properties of this object. /// /// Note you can subclass Expando. /// </summary> public Expando() { Initialize(this); } /// <summary> /// Allows passing in an existing instance variable to 'extend'. 
/// </summary> /// <remarks> /// You can pass in null here if you don't want to /// check native properties and only check the Dictionary! /// </remarks> /// <param name="instance"></param> public Expando(object instance) { Initialize(instance); } protected virtual void Initialize(object instance) { Instance = instance; if (instance != null) InstanceType = instance.GetType(); } /// <summary> /// Try to retrieve a member by name first from instance properties /// followed by the collection entries. /// </summary> /// <param name="binder"></param> /// <param name="result"></param> /// <returns></returns> public override bool TryGetMember(GetMemberBinder binder, out object result) { result = null; // first check the Properties collection for member if (Properties.Keys.Contains(binder.Name)) { result = Properties[binder.Name]; return true; } // Next check for Public properties via Reflection if (Instance != null) { try { return GetProperty(Instance, binder.Name, out result); } catch { } } // failed to retrieve a property result = null; return false; } /// <summary> /// Property setter implementation tries to retrieve value from instance /// first then into this object /// </summary> /// <param name="binder"></param> /// <param name="value"></param> /// <returns></returns> public override bool TrySetMember(SetMemberBinder binder, object value) { // first check to see if there's a native property to set if (Instance != null) { try { bool result = SetProperty(Instance, binder.Name, value); if (result) return true; } catch { } } // no match - set or add to dictionary Properties[binder.Name] = value; return true; } /// <summary> /// Dynamic invocation method. Currently allows only for Reflection based /// operation (no ability to add methods dynamically). /// </summary> /// <param name="binder"></param> /// <param name="args"></param> /// <param name="result"></param> /// <returns></returns> public override bool TryInvokeMember(InvokeMemberBinder binder, object[] args, out object result) { if (Instance != null) { try { // check instance passed in for methods to invoke if (InvokeMethod(Instance, binder.Name, args, out result)) return true; } catch { } } result = null; return false; } /// <summary> /// Reflection Helper method to retrieve a property /// </summary> /// <param name="instance"></param> /// <param name="name"></param> /// <param name="result"></param> /// <returns></returns> protected bool GetProperty(object instance, string name, out object result) { if (instance == null) instance = this; var miArray = InstanceType.GetMember(name, BindingFlags.Public | BindingFlags.GetProperty | BindingFlags.Instance); if (miArray != null && miArray.Length > 0) { var mi = miArray[0]; if (mi.MemberType == MemberTypes.Property) { result = ((PropertyInfo)mi).GetValue(instance,null); return true; } } result = null; return false; } /// <summary> /// Reflection helper method to set a property value /// </summary> /// <param name="instance"></param> /// <param name="name"></param> /// <param name="value"></param> /// <returns></returns> protected bool SetProperty(object instance, string name, object value) { if (instance == null) instance = this; var miArray = InstanceType.GetMember(name, BindingFlags.Public | BindingFlags.SetProperty | BindingFlags.Instance); if (miArray != null && miArray.Length > 0) { var mi = miArray[0]; if (mi.MemberType == MemberTypes.Property) { ((PropertyInfo)mi).SetValue(Instance, value, null); return true; } } return false; } /// <summary> /// Reflection helper method to invoke a 
method /// </summary> /// <param name="instance"></param> /// <param name="name"></param> /// <param name="args"></param> /// <param name="result"></param> /// <returns></returns> protected bool InvokeMethod(object instance, string name, object[] args, out object result) { if (instance == null) instance = this; // Look at the instanceType var miArray = InstanceType.GetMember(name, BindingFlags.InvokeMethod | BindingFlags.Public | BindingFlags.Instance); if (miArray != null && miArray.Length > 0) { var mi = miArray[0] as MethodInfo; result = mi.Invoke(Instance, args); return true; } result = null; return false; } /// <summary> /// Convenience method that provides a string Indexer /// to the Properties collection AND the strongly typed /// properties of the object by name. /// /// // dynamic /// exp["Address"] = "112 nowhere lane"; /// // strong /// var name = exp["StronglyTypedProperty"] as string; /// </summary> /// <remarks> /// The getter checks the Properties dictionary first /// then looks in PropertyInfo for properties. /// The setter checks the instance properties before /// checking the Properties dictionary. /// </remarks> /// <param name="key"></param> /// /// <returns></returns> public object this[string key] { get { try { // try to get from properties collection first return Properties[key]; } catch (KeyNotFoundException ex) { // try reflection on instanceType object result = null; if (GetProperty(Instance, key, out result)) return result; // nope doesn't exist throw; } } set { if (Properties.ContainsKey(key)) { Properties[key] = value; return; } // check instance for existance of type first var miArray = InstanceType.GetMember(key, BindingFlags.Public | BindingFlags.GetProperty); if (miArray != null && miArray.Length > 0) SetProperty(Instance, key, value); else Properties[key] = value; } } /// <summary> /// Returns and the properties of /// </summary> /// <param name="includeProperties"></param> /// <returns></returns> public IEnumerable<KeyValuePair<string,object>> GetProperties(bool includeInstanceProperties = false) { if (includeInstanceProperties && Instance != null) { foreach (var prop in this.InstancePropertyInfo) yield return new KeyValuePair<string, object>(prop.Name, prop.GetValue(Instance, null)); } foreach (var key in this.Properties.Keys) yield return new KeyValuePair<string, object>(key, this.Properties[key]); } /// <summary> /// Checks whether a property exists in the Property collection /// or as a property on the instance /// </summary> /// <param name="item"></param> /// <returns></returns> public bool Contains(KeyValuePair<string, object> item, bool includeInstanceProperties = false) { bool res = Properties.ContainsKey(item.Key); if (res) return true; if (includeInstanceProperties && Instance != null) { foreach (var prop in this.InstancePropertyInfo) { if (prop.Name == item.Key) return true; } } return false; } } } Although the Expando class supports an indexer, it doesn't actually implement IDictionary or even IEnumerable. It only provides the indexer and Contains() and GetProperties() methods, that work against the Properties dictionary AND the internal instance. The reason for not implementing IDictionary is that a) it doesn't add much value since you can access the Properties dictionary directly and that b) I wanted to keep the interface to class very lean so that it can serve as an entity type if desired. 
    Implementing these interfaces (IDictionary or even IEnumerable) causes LINQ extension methods to pop up on the type, which obscures the property interface and would only confuse the purpose of the type. IDictionary and IEnumerable are also problematic for XML and JSON Serialization - the XML Serializer doesn't serialize IDictionary<string,object>, nor does the DataContractSerializer. The JavaScriptSerializer does serialize, but it treats the entire object like a dictionary and doesn't serialize the strongly typed properties of the type, only the dictionary values, which is also not desirable. Hence the decision to stick with only implementing the indexer to support the user["CustomProperty"] functionality and leaving iteration functions to the publicly exposed Properties dictionary. Note that the Dictionary used here is a custom PropertyBag class I created to allow for serialization to work. One important aspect for my apps is that whatever custom properties get added, they have to be accessible to AJAX clients, since the particular app I'm working on is a Single Page Web app where most of the Web access is through JSON AJAX calls. PropertyBag can serialize to XML and one-way serialize to JSON using the JavaScript serializer (not the DCS serializers though). The key components that make Expando work in this code are the Properties Dictionary and the TryGetMember() and TrySetMember() methods. The Properties collection is public, so if you choose you can explicitly access the collection to get better performance or to manipulate the members in internal code (like loading up dynamic values from a database). Notice that TryGetMember() and TrySetMember() both work against the dictionary AND the internal instance to retrieve and set properties. This means that user["Name"] works against native properties of the object as does user["Name"] = "RogaDugDog". What's your Use Case? This is still an early prototype but I've plugged it into one of my customer's applications and so far it's working very well. The key features for me were the ability to easily extend the type with values coming from a database and exposing those values in a nice and easy-to-use manner. I'm also finding that using this type of object for ViewModels works very well to add custom properties to view models. I suspect there will be lots of uses for this - I've been using the extra dictionary approach to extensibility for years - using a dynamic type to make the syntax cleaner is just a bonus here. What can you think of to use this for? Resources: Source Code and Tests (GitHub). Also integrated in Westwind.Utilities of the West Wind Web Toolkit (West Wind Utilities NuGet). © Rick Strahl, West Wind Technologies, 2005-2012. Posted in CSharp, .NET, Dynamic Types

    Read the article

  • Can't remove GPT data from MBR

    - by user2373121
    I am having difficulty getting the Ubuntu installer (and gparted) to recognize the partitions on my MBR type disk. Other operating systems and disk tools read the disk structure and the files on it fine. I have used fixparts to write a new MBR but the issue persists. I assume the issue stems from the Protective MBR data still present on the disk but I am at a loss as to how to remove it while preserving my NTFS data partition. Microsoft Windows [Version 6.1.7601] Copyright (c) 2009 Microsoft Corporation. All rights reserved. c:\Users\mike\Desktop\fixparts>fixparts 3: FixParts 0.8.8 Loading MBR data from 3: Warning: 0xEE partition doesn't start on sector 1. This can cause problems in some OSes. MBR command (? for help): Running gdisk shows Microsoft Windows [Version 6.1.7601] Copyright (c) 2009 Microsoft Corporation. All rights reserved. c:\Users\mike\Desktop\fixparts>gdisk 3: GPT fdisk (gdisk) version 0.8.7 Partition table scan: MBR: MBR only BSD: not present APM: not present GPT: not present *************************************************************** Found invalid GPT and valid MBR; converting MBR to GPT format in memory. THIS OPERATION IS POTENTIALLY DESTRUCTIVE! Exit by typing 'q' if you don't want to convert your MBR partitions to GPT format! *************************************************************** ************************************************************************ Most versions of Windows cannot boot from a GPT disk, and most varieties prior to Vista cannot read GPT disks. Therefore, you should exit now unless you understand the implications of converting MBR to GPT or creating a new GPT disk layout! ************************************************************************ Are you SURE you want to continue? (Y/N): y Command (? for help): p Disk 3:: 2930277168 sectors, 1.4 TiB Logical sector size: 512 bytes Disk identifier (GUID): BFE92CE8-F93D-4141-82B8-816AD06FB36E Partition table holds up to 128 entries First usable sector is 34, last usable sector is 2930277134 Partitions will be aligned on 2048-sector boundaries Total free space is 163846893 sectors (78.1 GiB) Number Start (sector) End (sector) Size Code Name 1 163842048 2930272255 1.3 TiB 0700 Microsoft basic data Command (? for help): r Recovery/transformation command (? for help): o Disk size is 2930277168 sectors (1.4 TiB) MBR disk identifier: 0x00000000 MBR partitions: Number Boot Start Sector End Sector Status Code 1 1 2930277167 primary 0xEE Recovery/transformation command (? for help): q

    Read the article

  • USB packets - receive wrong data

    - by regorianer
    I have a little Python script which shows me the packets of an EnOcean device and triggers some events depending on the packet type. Unfortunately it doesn't work because I'm getting wrong packets. Parts of the Python script (using pySerial): ser = serial.Serial('/dev/ttyUSB1',57600,bytesize = serial.EIGHTBITS,timeout = 1, parity = serial.PARITY_NONE , rtscts = 0) print 'clearing buffer' s = ser.read(10000) print 'start read' while 1: s = ser.read(1) for character in s: sys.stdout.write(" %s" % character.encode('hex')) print 'end' ser.close() Output at baudrate 57600: e0 e0 00 e0 00 e0 e0 e0 e0 e0 00 e0 e0 00 00 00 00 00 00 00 e0 e0 e0 00 00 00 00 e0 e0 e0 00 00 e0 e0 e0 e0 e0 00 e0 00 e0 e0 e0 e0 e0 00 e0 e0 00 00 00 00 00 00 e0 e0 e0 00 00 00 00 e0 e0 e0 00 00 e0 e0 e0 Output at baudrate 9600: a5 5a 0b 05 10 00 00 00 00 15 c4 56 20 6f a5 5a 0b 05 00 00 00 00 00 15 c4 56 20 5f Linux terminal at baudrate 57600: $stty -F /dev/ttyUSB1 57600 $stty < /dev/ttyUSB1 speed 57600 baud; line = 0; eof = ^A; min = 0; time = 0; -brkint -icrnl -imaxbel -opost -onlcr -isig -icanon -iexten -echo -echoe -echok -echoctl -echoke $while (true) do cat -A /dev/ttyUSB1 ; done > myfile $hexdump -C myfile 00000000 4d 2d 60 4d 2d 60 5e 40 4d 2d 60 5e 40 4d 2d 60 |M-M-^@M-^@M-| 00000010 4d 2d 60 4d 2d 60 4d 2d 60 4d 2d 60 5e 40 4d 2d |M-M-M-M-^@M-| 00000020 60 4d 2d 60 5e 40 5e 40 5e 40 5e 40 5e 40 5e 40 |M-^@^@^@^@^@^@| 00000030 5e 40 4d 2d 60 4d 2d 60 4d 2d 60 5e 40 5e 40 5e |^@M-M-M-`^@^@^| 00000040 40 5e 40 4d 2d 60 4d 2d 60 4d 2d 60 |@^@M-M-M-`| 0000004c Linux terminal at baudrate 9600: $hexdump -C myfile2 00000000 5e 40 5e 55 4d 2d 44 56 30 4d 2d 3f 5e 40 5e 40 |^@^UM-DV0M-?^@^@| 00000010 5e 55 4d 2d 44 56 20 5f |^UM-DV _| 00000018 The specification says: 0x55 sync byte (1st), 0xNNNN data length bytes (2 bytes), 0x07 opt length byte, 0x01 type byte, CRC, data, opt data and again CRC. But I'm not getting this packet structure. The output of the Python script differs from the one I get via the terminal. I also wrote the Python part in C, but the output is the same as with Python. A BSC-BoR USB receiver/sender is used as the USB receiver. The EnOcean device is a simple button.
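    A minimal sketch, not from the original question, of how the quoted packet layout could be framed once the byte stream is clean: 0x55 sync byte, two-byte data length, optional-data length, packet type, header CRC, then data, optional data and a final CRC. The port name and baud rate are taken from the question; the CRC bytes are read but not verified here, and the function name is hypothetical (Python 3 syntax). Incidentally, the 9600-baud capture starting with a5 5a looks like an older two-sync-byte framing, so the serial settings expected by the receiver are worth double-checking.

    import serial  # pySerial, as used in the original script

    SYNC = 0x55

    def read_packet(ser):
        """Read one packet: sync, length (2), opt length (1), type (1), CRC, data, opt data, CRC."""
        # Resynchronize on the sync byte
        while True:
            b = ser.read(1)
            if not b:
                return None                    # timeout, no data
            if b[0] == SYNC:
                break
        header = ser.read(5)                   # data len hi/lo, opt len, packet type, header CRC
        if len(header) < 5:
            return None
        data_len = (header[0] << 8) | header[1]
        opt_len = header[2]
        pkt_type = header[3]
        data = ser.read(data_len)              # payload
        opt = ser.read(opt_len)                # optional data
        ser.read(1)                            # trailing CRC byte (not verified in this sketch)
        return pkt_type, data, opt

    # Port and settings taken from the question; adjust as needed
    with serial.Serial('/dev/ttyUSB1', 57600, bytesize=serial.EIGHTBITS,
                       parity=serial.PARITY_NONE, timeout=1) as ser:
        packet = read_packet(ser)
        if packet:
            pkt_type, data, opt = packet
            print("type=0x%02x data=%s opt=%s" % (pkt_type, data.hex(), opt.hex()))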

    Read the article

  • An Introduction to Meteor

    - by Stephen.Walther
    The goal of this blog post is to give you a brief introduction to Meteor which is a framework for building Single Page Apps. In this blog entry, I provide a walkthrough of building a simple Movie database app. What is special about Meteor? Meteor has two jaw-dropping features: Live HTML – If you make any changes to the HTML, CSS, JavaScript, or data on the server then every client shows the changes automatically without a browser refresh. For example, if you change the background color of a page to yellow then every open browser will show the new yellow background color without a refresh. Or, if you add a new movie to a collection of movies, then every open browser will display the new movie automatically. With Live HTML, users no longer need a refresh button. Changes to an application happen everywhere automatically without any effort. The Meteor framework handles all of the messy details of keeping all of the clients in sync with the server for you. Latency Compensation – When you modify data on the client, these modifications appear as if they happened on the server without any delay. For example, if you create a new movie then the movie appears instantly. However, that is all an illusion. In the background, Meteor updates the database with the new movie. If, for whatever reason, the movie cannot be added to the database then Meteor removes the movie from the client automatically. Latency compensation is extremely important for creating a responsive web application. You want the user to be able to make instant modifications in the browser and the framework to handle the details of updating the database without slowing down the user. Installing Meteor Meteor is licensed under the open-source MIT license and you can start building production apps with the framework right now. Be warned that Meteor is still in the “early preview” stage. It has not reached a 1.0 release. According to the Meteor FAQ, Meteor will reach version 1.0 in “More than a month, less than a year.” Don’t be scared away by that. You should be aware that, unlike most open source projects, Meteor has financial backing. The Meteor project received an $11.2 million round of financing from Andreessen Horowitz. So, it would be a good bet that this project will reach the 1.0 mark. And, if it doesn’t, the framework as it exists right now is still very powerful. Meteor runs on top of Node.js. You write Meteor apps by writing JavaScript which runs both on the client and on the server. You can build Meteor apps on Windows, Mac, or Linux (Although the support for Windows is still officially unofficial). If you want to install Meteor on Windows then download the MSI from the following URL: http://win.meteor.com/ If you want to install Meteor on Mac/Linux then run the following CURL command from your terminal: curl https://install.meteor.com | /bin/sh Meteor will install all of its dependencies automatically including Node.js. However, I recommend that you install Node.js before installing Meteor by installing Node.js from the following address: http://nodejs.org/ If you let Meteor install Node.js then Meteor won’t install NPM which is the standard package manager for Node.js. If you install Node.js and then you install Meteor then you get NPM automatically. Creating a New Meteor App To get a sense of how Meteor works, I am going to walk through the steps required to create a simple Movie database app. Our app will display a list of movies and contain a form for creating a new movie. 
The first thing that we need to do is create our new Meteor app. Open a command prompt/terminal window and execute the following command: Meteor create MovieApp After you execute this command, you should see something like the following: Follow the instructions: execute cd MovieApp to change to your MovieApp directory, and run the meteor command. Executing the meteor command starts Meteor on port 3000. Open up your favorite web browser and navigate to http://localhost:3000 and you should see the default Meteor Hello World page: Open up your favorite development environment to see what the Meteor app looks like. Open the MovieApp folder which we just created. Here’s what the MovieApp looks like in Visual Studio 2012: Notice that our MovieApp contains three files named MovieApp.css, MovieApp.html, and MovieApp.js. In other words, it contains a Cascading Style Sheet file, an HTML file, and a JavaScript file. Just for fun, let’s see how the Live HTML feature works. Open up multiple browsers and point each browser at http://localhost:3000. Now, open the MovieApp.html page and modify the text “Hello World!” to “Hello Cruel World!” and save the change. The text in all of the browsers should update automatically without a browser refresh. Pretty amazing, right? Controlling Where JavaScript Executes You write a Meteor app using JavaScript. Some of the JavaScript executes on the client (the browser) and some of the JavaScript executes on the server and some of the JavaScript executes in both places. For a super simple app, you can use the Meteor.isServer and Meteor.isClient properties to control where your JavaScript code executes. For example, the following JavaScript contains a section of code which executes on the server and a section of code which executes in the browser: if (Meteor.isClient) { console.log("Hello Browser!"); } if (Meteor.isServer) { console.log("Hello Server!"); } console.log("Hello Browser and Server!"); When you run the app, the message “Hello Browser!” is written to the browser JavaScript console. The message “Hello Server!” is written to the command/terminal window where you ran Meteor. Finally, the message “Hello Browser and Server!” is execute on both the browser and server and the message appears in both places. For simple apps, using Meteor.isClient and Meteor.isServer to control where JavaScript executes is fine. For more complex apps, you should create separate folders for your server and client code. Here are the folders which you can use in a Meteor app: · client – This folder contains any JavaScript which executes only on the client. · server – This folder contains any JavaScript which executes only on the server. · common – This folder contains any JavaScript code which executes on both the client and server. · lib – This folder contains any JavaScript files which you want to execute before any other JavaScript files. · public – This folder contains static application assets such as images. For the Movie App, we need the client, server, and common folders. Delete the existing MovieApp.js, MovieApp.html, and MovieApp.css files. We will create new files in the right locations later in this walkthrough. Combining HTML, CSS, and JavaScript Files Meteor combines all of your JavaScript files, and all of your Cascading Style Sheet files, and all of your HTML files automatically. If you want to create one humongous JavaScript file which contains all of the code for your app then that is your business. 
However, if you want to build a more maintainable application, then you should break your JavaScript files into many separate JavaScript files and let Meteor combine them for you. Meteor also combines all of your HTML files into a single file. HTML files are allowed to have the following top-level elements: <head> — All <head> files are combined into a single <head> and served with the initial page load. <body> — All <body> files are combined into a single <body> and served with the initial page load. <template> — All <template> files are compiled into JavaScript templates. Because you are creating a single page app, a Meteor app typically will contain a single HTML file for the <head> and <body> content. However, a Meteor app typically will contain several template files. In other words, all of the interesting stuff happens within the <template> files. Displaying a List of Movies Let me start building the Movie App by displaying a list of movies. In order to display a list of movies, we need to create the following four files: · client\movies.html – Contains the HTML for the <head> and <body> of the page for the Movie app. · client\moviesTemplate.html – Contains the HTML template for displaying the list of movies. · client\movies.js – Contains the JavaScript for supplying data to the moviesTemplate. · server\movies.js – Contains the JavaScript for seeding the database with movies. After you create these files, your folder structure should looks like this: Here’s what the client\movies.html file looks like: <head> <title>My Movie App</title> </head> <body> <h1>Movies</h1> {{> moviesTemplate }} </body>   Notice that it contains <head> and <body> top-level elements. The <body> element includes the moviesTemplate with the syntax {{> moviesTemplate }}. The moviesTemplate is defined in the client/moviesTemplate.html file: <template name="moviesTemplate"> <ul> {{#each movies}} <li> {{title}} </li> {{/each}} </ul> </template> By default, Meteor uses the Handlebars templating library. In the moviesTemplate above, Handlebars is used to loop through each of the movies using {{#each}}…{{/each}} and display the title for each movie using {{title}}. The client\movies.js JavaScript file is used to bind the moviesTemplate to the Movies collection on the client. Here’s what this JavaScript file looks like: // Declare client Movies collection Movies = new Meteor.Collection("movies"); // Bind moviesTemplate to Movies collection Template.moviesTemplate.movies = function () { return Movies.find(); }; The Movies collection is a client-side proxy for the server-side Movies database collection. Whenever you want to interact with the collection of Movies stored in the database, you use the Movies collection instead of communicating back to the server. The moviesTemplate is bound to the Movies collection by assigning a function to the Template.moviesTemplate.movies property. The function simply returns all of the movies from the Movies collection. The final file which we need is the server-side server\movies.js file: // Declare server Movies collection Movies = new Meteor.Collection("movies"); // Seed the movie database with a few movies Meteor.startup(function () { if (Movies.find().count() == 0) { Movies.insert({ title: "Star Wars", director: "Lucas" }); Movies.insert({ title: "Memento", director: "Nolan" }); Movies.insert({ title: "King Kong", director: "Jackson" }); } }); The server\movies.js file does two things. First, it declares the server-side Meteor Movies collection. 
When you declare a server-side Meteor collection, a collection is created in the MongoDB database associated with your Meteor app automatically (Meteor uses MongoDB as its database automatically). Second, the server\movies.js file seeds the Movies collection (MongoDB collection) with three movies. Seeding the database gives us some movies to look at when we open the Movies app in a browser. Creating New Movies Let me modify the Movies Database App so that we can add new movies to the database of movies. First, I need to create a new template file – named client\movieForm.html – which contains an HTML form for creating a new movie: <template name="movieForm"> <fieldset> <legend>Add New Movie</legend> <form> <div> <label> Title: <input id="title" /> </label> </div> <div> <label> Director: <input id="director" /> </label> </div> <div> <input type="submit" value="Add Movie" /> </div> </form> </fieldset> </template> In order for the new form to show up, I need to modify the client\movies.html file to include the movieForm.html template. Notice that I added {{> movieForm }} to the client\movies.html file: <head> <title>My Movie App</title> </head> <body> <h1>Movies</h1> {{> moviesTemplate }} {{> movieForm }} </body> After I make these modifications, our Movie app will display the form: The next step is to handle the submit event for the movie form. Below, I’ve modified the client\movies.js file so that it contains a handler for the submit event raised when you submit the form contained in the movieForm.html template: // Declare client Movies collection Movies = new Meteor.Collection("movies"); // Bind moviesTemplate to Movies collection Template.moviesTemplate.movies = function () { return Movies.find(); }; // Handle movieForm events Template.movieForm.events = { 'submit': function (e, tmpl) { // Don't postback e.preventDefault(); // create the new movie var newMovie = { title: tmpl.find("#title").value, director: tmpl.find("#director").value }; // add the movie to the db Movies.insert(newMovie); } }; The Template.movieForm.events property contains an event map which maps event names to handlers. In this case, I am mapping the form submit event to an anonymous function which handles the event. In the event handler, I am first preventing a postback by calling e.preventDefault(). This is a single page app, no postbacks are allowed! Next, I am grabbing the new movie from the HTML form. I’m taking advantage of the template find() method to retrieve the form field values. Finally, I am calling Movies.insert() to insert the new movie into the Movies collection. Here, I am explicitly inserting the new movie into the client-side Movies collection. Meteor inserts the new movie into the server-side Movies collection behind the scenes. When Meteor inserts the movie into the server-side collection, the new movie is added to the MongoDB database associated with the Movies app automatically. If server-side insertion fails for whatever reasons – for example, your internet connection is lost – then Meteor will remove the movie from the client-side Movies collection automatically. In other words, Meteor takes care of keeping the client Movies collection and the server Movies collection in sync. If you open multiple browsers, and add movies, then you should notice that all of the movies appear on all of the open browser automatically. You don’t need to refresh individual browsers to update the client-side Movies collection. Meteor keeps everything synchronized between the browsers and server for you. 
Removing the Insecure Module To make it easier to develop and debug a new Meteor app, by default, you can modify the database directly from the client. For example, you can delete all of the data in the database by opening up your browser console window and executing multiple Movies.remove() commands. Obviously, enabling anyone to modify your database from the browser is not a good idea in a production application. Before you make a Meteor app public, you should first run the meteor remove insecure command from a command/terminal window: Running meteor remove insecure removes the insecure package from the Movie app. Unfortunately, it also breaks our Movie app. We’ll get an “Access denied” error in our browser console whenever we try to insert a new movie. No worries. I’ll fix this issue in the next section. Creating Meteor Methods By taking advantage of Meteor Methods, you can create methods which can be invoked on both the client and the server. By taking advantage of Meteor Methods you can: 1. Perform form validation on both the client and the server. For example, even if an evil hacker bypasses your client code, you can still prevent the hacker from submitting an invalid value for a form field by enforcing validation on the server. 2. Simulate database operations on the client but actually perform the operations on the server. Let me show you how we can modify our Movie app so it uses Meteor Methods to insert a new movie. First, we need to create a new file named common\methods.js which contains the definition of our Meteor Methods: Meteor.methods({ addMovie: function (newMovie) { // Perform form validation if (newMovie.title == "") { throw new Meteor.Error(413, "Missing title!"); } if (newMovie.director == "") { throw new Meteor.Error(413, "Missing director!"); } // Insert movie (simulate on client, do it on server) return Movies.insert(newMovie); } }); The addMovie() method is called from both the client and the server. This method does two things. First, it performs some basic validation. If you don’t enter a title or you don’t enter a director then an error is thrown. Second, the addMovie() method inserts the new movie into the Movies collection. When called on the client, inserting the new movie into the Movies collection just updates the collection. When called on the server, inserting the new movie into the Movies collection causes the database (MongoDB) to be updated with the new movie. You must add the common\methods.js file to the common folder so it will get executed on both the client and the server. Our folder structure now looks like this: We actually call the addMovie() method within our client code in the client\movies.js file. Here’s what the updated file looks like: // Declare client Movies collection Movies = new Meteor.Collection("movies"); // Bind moviesTemplate to Movies collection Template.moviesTemplate.movies = function () { return Movies.find(); }; // Handle movieForm events Template.movieForm.events = { 'submit': function (e, tmpl) { // Don't postback e.preventDefault(); // create the new movie var newMovie = { title: tmpl.find("#title").value, director: tmpl.find("#director").value }; // add the movie to the db Meteor.call( "addMovie", newMovie, function (err, result) { if (err) { alert("Could not add movie " + err.reason); } } ); } }; The addMovie() method is called – on both the client and the server – by calling the Meteor.call() method. This method accepts the following parameters: · The string name of the method to call. 
· The data to pass to the method (You can actually pass multiple params for the data if you like). · A callback function to invoke after the method completes. In the JavaScript code above, the addMovie() method is called with the new movie retrieved from the HTML form. The callback checks for an error. If there is an error then the error reason is displayed in an alert (please don’t use alerts for validation errors in a production app because they are ugly!). Summary The goal of this blog post was to provide you with a brief walk through of a simple Meteor app. I showed you how you can create a simple Movie Database app which enables you to display a list of movies and create new movies. I also explained why it is important to remove the Meteor insecure package from a production app. I showed you how to use Meteor Methods to insert data into the database instead of doing it directly from the client. I’m very impressed with the Meteor framework. The support for Live HTML and Latency Compensation are required features for many real world Single Page Apps but implementing these features by hand is not easy. Meteor makes it easy.

    Read the article

  • Resources such as libraries, engines and frameworks to make Javacript-based MMORTS? [closed]

    - by hhh
    I am looking for resources for making an MMORTS with JavaScript on the client side, probably just a simple canvas for the frontend. The guy in the video here mentions that JavaScript is one of the most misunderstood languages -- and I do believe that. I think one can make quite cool games with it in the future. So I am now proactively looking for resources and perhaps some ideas. My first idea involved Node.js, C and NetBSD/bozohttpd (or the-4-7-chars' *ix-thing with green-logo -thing, move the q here), but I acknowledge my beginner-style approach -- this issue is broad and not something one person can keep improving alone, so I think it's perfect for the community to tinker with. Some games and examples that could possibly be turned into an MMORTS: BrowserQuest here, under MPL 2.0 with its content licensed under CC-BY-SA 3.0 (source here); LoU here [proprietary], built with JS/Qooxdoo/C#/Windows Server/IIS/etc, source. MY ANSWER BEGINS HERE; IT SHOULD BE MOVED BELOW ONCE THE QUESTION IS RE-OPENED. PLEASE VOTE TO OPEN IT -- HELP US TO TINKER! My answer Generic: Is there an MMO-related research body? Although about Android, certain things also apply to a JS game: Are there any 2D gaming libraries/frameworks/engines for Android? Why is it so hard to develop a MMO? Browser based MMO Architecture MMO architecture - Highly Scalable with Reporting capabilities What are the Elements of an MMO Game? Is this the right architecture for our MMORPG mobile game? Looking for architectures to develop massive multiplayer game server Information on seamless MMO server architecture Game-mechanics (search): A question that sounds like it's about LoU: What are the different ways to balance an online multiplayer game where user spend different amounts of time online? Building an instance system What are the different ways to balance an online multiplayer game where user spend different amounts of time online? Hosting: is it possible to make a MMO starting with scalable hosting? Should I keep login server apart from game server? MMO techniques, algorithms and resources for keeping bandwidth low? MMO Proxy Server Javascript and Client-based things: What do I need to do a MMORTS in JavaScript with small amount of Developers? How to update the monsters in my MMO server using Node.js and Socket.IO Are there any good html 5 mmo design tutorials? Networking: Loadbalancing Questions Something about TCP, routers, NAT, etc: How do I start writing an MMO game server? Who does the AI calculations in an MMO? These need someone more knowledgeable to work with; there are a lot of cases where the same words mean different things. Data Structures: What data structure should I use for a Diablo/WoW-style talent tree? Game Engine: Need an engine for MMO mockup Helper sites: http://www.gamedev.net/page/index.html

    Read the article

  • CDN CNAMEs not resolving to customer origin

    - by Donald Jenkins
    I have set up an Edgecast CDN to mirror all my static content. Because I use the root of my domain (donaldjenkins.com) to host my main site—using Google Analytics which sets cookies—I've stored the corresponding static files in a separate cookieless domain (donaldjenkins.info) which is used only for this purpose. I've set it up (using this guide for general guidance), with the following structure, based on a combination of customer origin and CDN origin to make the most of the chosen short domain name and provide meaningful URLs: http://donaldjenkins.info:80 is set as the customer origin for the content stored in the CDN at directory http://wac.62E0.edgecastcdn.net/8062E0/donaldjenkins.info; I've then set up various subdomains of a separate domain, the conveniently-named cdn.dj, as CDN-origin Edge CNAMEs for each of the corresponding static content types: js.cdn.dj points to the origin directory http://wac.62E0.edgecastcdn.net/0062E0/donaldjenkins.info/js; css.cdn.dj points to the origin directory http://wac.62E0.edgecastcdn.net/0062E0/donaldjenkins.info/css; images.cdn.dj points to the origin directory http://wac.62E0.edgecastcdn.net/0062E0/donaldjenkins.info/images and so on. This results in some pretty nice, short, clear URLs. The DNS zone file for cdn.dj (yes, it's a real domain name registered in Djibouti) is set properly: cdn.dj 43200 IN A 205.186.157.162 css.cdn.dj 43200 IN CNAME wac.62E0.edgecastcdn.net. images.cdn.dj 43200 IN CNAME wac.62E0.edgecastcdn.net. js.cdn.dj 43200 IN CNAME wac.62E0.edgecastcdn.net. The DNS resolves to the Edgecast URL: $ host js.cdn.dj js.cdn.dj is an alias for wac.62E0.edgecastcdn.net. wac.62E0.edgecastcdn.net is an alias for gs1.wac.edgecastcdn.net. gs1.wac.edgecastcdn.net has address 93.184.220.20 But whenever I try to fetch a file in any of the directories to which the CNAME assets map, I get a 404: $ curl http://js.cdn.dj/combined.js <?xml version="1.0" encoding="iso-8859-1"?> <!DOCTYPE html PUBLIC "-//W3C//DTD XHTML 1.0 Transitional//EN" "http://www.w3.org/TR/xhtml1/DTD/xhtml1-transitional.dtd"> <html xmlns="http://www.w3.org/1999/xhtml" xml:lang="en" lang="en"> <head> <title>404 - Not Found</title> </head> <body> <h1>404 - Not Found</h1> </body> </html> despite the fact that the corresponding customer origin file exists: $ curl http://donaldjenkins.info/js/combined.js fetches the content of the combined.js file. Yet it's been more than enough time for the DNS to propagate since I set up the CDN. There's obviously some glaring mistake in the above-described setup, and I'm a bit of a novice with CDNs—but any suggestions would be gratefully received.

    Read the article

  • Debugging OWB generated SAP ABAP code executed through RFC

    - by Anil Menon
    Within OWB if you need to execute ABAP code using RFC you will have to use the SAP Function Module RFC_ABAP_INSTALL_AND_RUN. This function module is specified during the creation of the SAP source location. Usually in a Production environment a copy of this function module is used due to security restrictions. When you execute the mapping by using this Function Module you can’t see the actual ABAP code that is passed on to the SAP system. In case you want to take a look at the code that will be executed on the SAP system you need to use a custom Function Module in SAP. The easiest way to do this is to make a copy of the Function Module RFC_ABAP_INSTALL_AND_RUN and call it say Z_TEST_FM. Then edit the code of the Function Module in SAP as below FUNCTION Z_TEST_FM . DATA: BEGIN OF listobj OCCURS 20. INCLUDE STRUCTURE abaplist. DATA: END OF listobj. DATA: begin_of_line(72). DATA: line_end_char(1). DATA: line_length type I. DATA: lin(72). loop at program. append program-line to WRITES. endloop. ENDFUNCTION. Within OWB edit the SAP Location and use Z_TEST_FM as the “Execution Function Module” instead of  RFC_ABAP_INSTALL_AND_RUN. Then register this location. The Mapping you want to debug will have to be deployed. After deployment you can right click the mapping and click on “Start”.   After clicking start the “Input Parameters” screen will be displayed. You can make changes here if you need to. Check that the parameter BACKGROUND is set to “TRUE”. After Clicking “OK” the log for the execution will be displayed. The execution of Mappings will always fail when you use the above function module. Clicking on the icon “I” (information) the ABAP code will be displayed.   The ABAP code displayed is the code that is passed through the Function Module. You can also find the code by going through the log files on the server which hosts the OWB repository. The logs will be located under <OWB_HOME>/owb/log. Patch #12951045 is recommended while using the SAP Connector with OWB 11.2.0.2. For recommended patches for other releases please check with Oracle Support at http://support.oracle.com

    Read the article

  • Silverlight Cream for March 06, 2010 -- #808

    - by Dave Campbell
    In this Issue: András Velvárt, felix corke, Colin Eberhardt, Christopher Bennage, Gergely Orosz, Entity Spaces Team Blog, Mike Taulty(-2-), Jit Ghosh, and Jesse Liberty. Shoutouts: Jeremy Likness expands on the Silverlight Team's post Vancouver Olympics - How'd We Do That? Gavin Wignall has a post up Creating a 360 photograph of an object with Silverlight Photosynth From SilverlightCream.com: Transforming an Ugly Duckling into a Graceful Swan With Expression Blend and Silverlight - Part 2 Intro Animation András Velvárt has part 2 of his Transformation series up at SilverlightShow... he's taking the initro animation to a new length, allowing playback even... cool video tutorial! Free Silverlight 4 beta skin! felix corke has a Silerlight 4 theme up for us all to use. If you like a dark theme like Blend, you'll like this... I like it! Linq to Visual Tree Colin Eberhardt has a great tutorial up for using LINQ to query the WPF or Silverlight Visual Tree while retaining the tree structure. He also has links out to other techniques. XAML Attributes on Separate Lines Christopher Bennage has a post up showing how to easily get all your XAML attributes on separate lines using a VS menu option... I didn't know that! Using built-in, embedded and streamed fonts in Silverlight Gergely Orosz has a post up at ScottLogic going over Fonts in Silverlight -- built-in, embedded, or streamed, and examples with code. EntitySpaces 2010 Two Part Series on Silverlight and WCF Entity Spaces Team Blog has a pair of videos up on Entity Spaces 2010, WCF, and Silverlight. Part 1 is the intro and explanation, part 2 is a full-up app demonstrating it. MEF, Silverlight and the DeploymentCatalog In an attempt to respond fully to a query, Mike Taulty literally pushed the record button and took off on what became a tutorial video on building a real Silverlight app utilizing MEF. Silverlight 4, Experiment with Pluggable Navigation and a WCF Data Service Mike Taulty has an experiment detailed on his blog about pluggable navigation and Silverlight 4. He walks through the history of how we got to this point then takes on in an example... good external links too Enhancing Silverlight Video Experiences with Contextual Data This is a post on the MSDN Magazine site where Jit Ghosh has a great long post about not only Smooth Streaming with Silverlight, but also adding context data to your video. When Is It OK To Hack? Read what all Jesse Liberty gets involved in when he's trying to get something out the door and has to work around a problem. Just about as interesting are the comments ... check it out and leave your own! Stay in the 'Light! Twitter SilverlightNews | Twitter WynApse | WynApse.com | Tagged Posts | SilverlightCream Join me @ SilverlightCream | Phoenix Silverlight User Group Technorati Tags: Silverlight    Silverlight 3    Silverlight 4    MIX10

    Read the article

  • MVVM Light Toolkit V3 SP1 for Windows Phone 7

    - by Laurent Bugnion
    He he I start to sound like Microsoft… Anyway… I just released a service pack (SP1) for MVVM Light Toolkit V3. Why? Well mostly because I worked a bit more with the Windows Phone 7 tools that were released at MIX10, and I noticed a few things that could be better in the Windows Phone 7 template. Also, I only found out at MIX that you can actually install custom project templates for Visual Studio Express. For some reason I thought it was not possible. The best way to solve these issues is through a service pack, which consists of a few zip files. Simply follow the instructions on the “Installing Manually” page. You can go ahead and overwrite the files that were installed with V3; all the file structure and names are exactly the same. What? So what do you get in this service pack that was not already in V3? (For more info about what’s new in V3, check the What’s New page.) Project and Item templates for Visual Studio 10 Express (phone edition): unzip these files in your “My Documents” folder, and you can now create a new MVVM Light application in the WinPhone7 version of Visual Studio 2010 Express. Signed assemblies: all the assemblies are now signed, which is a requirement in certain build configurations. XML documentation files: thanks to Matt Casto for pinging me and reminding me that I had forgotten to include them (doh). New and improved Windows Phone 7 assemblies and templates: this one deserves its own section (see below). What was wrong with the old Silverlight 3 assemblies in Windows Phone 7 projects? It was kind of weird. Functionality wise, it was working just right. However, if you noticed, the EventToCommand behavior was not visible in the Assets tab of Expression Blend, under Behaviors, where it should normally have been. The reason was that even though Windows Phone 7 is using Silverlight 3, the System.Windows.Interactivity that Blend was expecting is the version that is normally used in Silverlight 4. Yeah, I know, it’s weird. This led me to create a specific version of these assemblies for the phone. The assemblies are located in C:\Program Files\Laurent Bugnion (GalaSoft)\Mvvm Light Toolkit\Binaries\WP7. There are 3 DLLs:
    GalaSoft.MvvmLight.WP7.dll with RelayCommand, Messenger and ViewModelBase
    GalaSoft.MvvmLight.Extras.WP7.dll with EventToCommand and DispatcherHelper
    System.Windows.Interactivity.dll, which is the same DLL installed in the Blend SDK, and which is needed for the EventToCommand behavior to work.
    Happy coding! That’s all! Download and install the service pack according to the instructions on the Installation page, and create your first MVVM Light application for the phone (a blog post will follow later with more details).   Laurent Bugnion (GalaSoft) Subscribe | Twitter | Facebook | Flickr | LinkedIn
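    For readers new to the toolkit, here is a minimal sketch of the kind of view model those assemblies enable on the phone, using ViewModelBase and RelayCommand from the DLLs listed above. The CounterViewModel and Count names are made up for the example; this is an illustration, not code from the toolkit itself.

        using GalaSoft.MvvmLight;
        using GalaSoft.MvvmLight.Command;

        // Hypothetical view model showing ViewModelBase and RelayCommand in use.
        public class CounterViewModel : ViewModelBase
        {
            private int _count;

            public int Count
            {
                get { return _count; }
                set
                {
                    _count = value;
                    RaisePropertyChanged("Count"); // notifies the bound view
                }
            }

            // Bound to a Button via Command="{Binding IncrementCommand}",
            // or wired from an arbitrary event with the EventToCommand behavior.
            public RelayCommand IncrementCommand { get; private set; }

            public CounterViewModel()
            {
                IncrementCommand = new RelayCommand(() => Count++);
            }
        }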

    Read the article

  • Ad-hoc taxonomy: owning the chess set doesn't mean you decide how the little horsey moves

    - by Roger Hart
    There was one of those little laugh-or-cry moments recently when I heard an anecdote about content strategy failings at a major online retailer. The story goes a bit like this: successful company in a highly commoditized marketplace succeeds on price and largely ignores its content team. Being relatively entrepreneurial, the founders are still knocking around, and occasionally like to "take an interest". One day, they decree that clothing sold on the site can no longer be described as "unisex", because this sounds old fashioned. Sad now. Let me just reiterate for the folks at the back: large retailer, commoditized market place, differentiating on price. That's inherently unstable. Sooner or later, they're going to need one or both of competitive differentiation and significant optimization. I can't speak for the latter, since I'm hypothesizing off a raft of rumour, but one of the simpler paths to the former is to become - or rather acknowledge that they are - a content business. Regardless, they need highly-searchable terminology. Even in the face of tooth and claw resistance to noticing the fundamental position content occupies in driving sales (and SEO) on the web, there's a clear information problem here. Dilettante taxonomy is a disaster. Ok, so this is a small example, but that kind of makes it a good one. Unisex probably is the best way of describing clothing designed to suit either men or women interchangeably. It certainly takes less time to type (and read). It's established terminology, and as a single word, it's significantly better for web readability than a phrasal workaround. Something like "fits men or women" is short, but could fall foul of clause-level discard in web scanning. It's not an adjective, so for intuitive reading it's never going to be near the start of a title or description. It would also clutter up search results, and impose cognitive load in list scanning. Sorry kids, it's just worse. Even if "unisex" were an archaism (which it isn't), the only thing that would weigh against its being more usable and concise terminology would be evidence that this archaism were hurting conversions. Good luck with that. We once - briefly - called one of our products a "Can of worms". It was a bundle in a bug-tracking suite, and we thought it sounded terribly cool. Guess how well that sold. We have information and content professionals for a reason: to make sure that whatever we put in front of users is optimised to meet user and business goals. If that thinking doesn't inform style guides, taxonomy, messaging, title structure, and so forth, you might as well be finger painting.

    Read the article

  • SQLAuthority News – SQL Server 2012 – Microsoft Learning Training and Certification

    - by pinaldave
    Here is the conversation I had right after I had posted my earlier blog post about Download Microsoft SQL Server 2012 RTM Now. Rajesh: So SQL Server is available for me to download? Pinal: Yes, sure, check the link here. Rajesh: It is a trial; do you know when it will be available for everybody? Pinal: I think you mean General Availability (GA), which is on April 1st, 2012. Rajesh: I want to have a head start with the SQL Server 2012 examinations and I want to know about every single exam. Here is the list of the exams:

    Exam 70-461: Querying Microsoft SQL Server 2012. This exam is intended for SQL Server database administrators, implementers, system engineers, and developers with two or more years of experience who are seeking to prove their skills and knowledge in writing queries.

    Exam 70-462: Administering Microsoft SQL Server 2012 Databases. This exam is intended for Database Professionals who perform installation, maintenance, and configuration tasks as their primary areas of responsibility. They will often set up database systems and are responsible for making sure those systems operate efficiently.

    Exam 70-463: Implementing a Data Warehouse with Microsoft SQL Server 2012. The primary audience for this exam is Extract Transform Load (ETL) and Data Warehouse Developers. They are most likely to focus on hands-on work creating business intelligence (BI) solutions including data cleansing, ETL, and Data Warehouse implementation.

    Exam 70-464: Developing Microsoft SQL Server 2012 Databases. This exam is intended for database professionals who build and implement databases across an organization while ensuring high levels of data availability. They perform tasks including creating database files, creating data types and tables, planning, creating, and optimizing indexes, implementing data integrity, implementing views, stored procedures, and functions, and managing transactions and locks.

    Exam 70-465: Designing Database Solutions for Microsoft SQL Server 2012. This exam is intended for database professionals who design and build database solutions in an organization. They are responsible for the creation of plans and designs for database structure, storage, objects, and servers.

    Exam 70-466: Implementing Data Models and Reports with Microsoft SQL Server 2012. The primary audience for this exam is BI Developers. They are most likely to focus on hands-on work creating the BI solution including implementing multi-dimensional data models, implementing and maintaining OLAP cubes, and creating information displays used in business decision making.

    Exam 70-467: Designing Business Intelligence Solutions with Microsoft SQL Server 2012. The primary audience for this exam is the BI Architect. BI Architects are responsible for the overall design of the BI infrastructure, including how it relates to other data systems in use.

    Looking at Rajesh’s passion, I am motivated too! I may want to start attempting the exams in the near future. Reference: Pinal Dave (http://blog.sqlauthority.com) Filed under: PostADay, SQL, SQL Authority, SQL Download, SQL Query, SQL Server, SQL Tips and Tricks, T SQL, Technology

    Read the article

  • Design considerations on JSON schema for scalars with a consistent attachment property

    - by casperOne
    I'm trying to create a JSON schema for the results of doing statistical analysis based on disparate pieces of data. The current schema I have looks something like this:

        {
          // Basic key information.
          video : "http://www.youtube.com/watch?v=7uwfjpfK0jo",
          start : "00:00:00",
          end : null,

          // For results of analysis, to be populated:
          // *** This is where it gets interesting ***
          analysis : {
            game : { value : "Super Street Fighter 4: Arcade Edition Ver. 2012", confidence : 0.9725 },
            teams : [
              {
                player : { value : "Desk", confidence : 0.95 },
                characters : [
                  { value : "Hakan", confidence : 0.80 }
                ]
              }
            ]
          }
        }

    The issue is the tuples that are used to store a value and the confidence related to that value (i.e. { value : "some value", confidence : 0.85 }), populated after the results of the analysis. This leads to a creep of this tuple for every value. Take a fully-fleshed out value from the characters array:

        {
          name : { value : "Hakan", confidence : 0.80 },
          ultra : { value : 1, confidence : 0.90 }
        }

    As the structures that represent the values become more and more detailed (and more analysis is done on them to try and determine the confidence behind that analysis), the nesting of the tuples adds a great deal of noise to the overall structure, considering that the final result (when verified) will be:

        {
          name : "Hakan",
          ultra : 1
        }

    (And recall that this is just a nested value.) In .NET (which I'll be using to work with this data), I'd have a little helper like this:

        public class UnknownValue<T>
        {
            public T Value { get; set; }
            public double? Confidence { get; set; }
        }

    Which I'd then use like so:

        public class Character
        {
            public UnknownValue<string> Name { get; set; }
        }

    While the same as the JSON representation in code, it doesn't have the same creep because I don't have to redefine the tuple every time and property accessors hide the appearance of creep. Of course, this is an apples-to-oranges comparison; the above is code while the JSON is data. Is there a more formalized/cleaner/best-practice way of containing the creep of these tuples in JSON, or is the approach above an accepted approach for the type of data I'm trying to store (and I'm just perceiving it the wrong way)? Note, this is being represented in JSON because this will ultimately go in a document database (something like RavenDB or elasticsearch). I'm not concerned about being able to serialize into the object above, because I can always use data transfer objects to facilitate getting data into/out of my underlying data store.
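    One common way to contain the creep is to keep the confidence tuples only in the analysis document and collapse to a plain DTO once values are confirmed. A minimal sketch of that mapping, assuming the UnknownValue<T> helper from the question; the VerifiedCharacter and AnalyzedCharacter names and the ToVerified method are hypothetical, not part of any library:

        // Hypothetical verified shape: the confidence tuples collapse to plain values.
        public class VerifiedCharacter
        {
            public string Name { get; set; }
            public int Ultra { get; set; }
        }

        // Shape used while analysis is still in progress.
        public class AnalyzedCharacter
        {
            public UnknownValue<string> Name { get; set; }
            public UnknownValue<int> Ultra { get; set; }

            // Collapse to the plain DTO once every value has been confirmed.
            public VerifiedCharacter ToVerified()
            {
                return new VerifiedCharacter
                {
                    Name = Name.Value,
                    Ultra = Ultra.Value
                };
            }
        }

    Serializing VerifiedCharacter then produces the compact { name, ultra } form shown above, while the verbose tuple form stays confined to the analysis documents.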

    Read the article

  • Defining Social Media Terms

    - by David Dorf
    As I talk about social in the context of retail, I sometimes get tripped up on different terms. I know what I mean, but the audience may have something else in mind. So I decided to see if I could find some well accepted definitions for common terms. While there are definitions on the Internet, I'm not sure they are well accepted. After reviewing several, here's what I came up with:
    Social Network: a structure of individuals and groups connected together by commonality. That seems pretty straightforward. A group of friends, co-workers, music fans, etc. The key here is that they have something in common that connects them.
    Social Media: Internet channels that support the collaborative publishing of information by and for social networks. The key here is to differentiate between traditional one-way media and conversational social media. When it's social, it's two-way, allowing both the publishing and consuming of information. Examples are blogs, wikis, Twitter, Facebook, etc.
    Social Marketing: the use of social media for marketing, public relations, and customer service. Wikipedia actually includes "selling" here but I think that's separate from marketing, as you'll see further down below. Most people look at social media as entertainment, but the marketing angle adds business value. This is where retailers discover and engage customers to build a relationship.
    Social Merchandising: the integration of social media and product discovery. Whereas marketing is focused more on brand image, customer engagement, and promotions, merchandising is more directly trying to convert browsers into purchases. This includes deciding what customers want, often by asking the social network, and deciding how to position products to the social network.
    Social Selling: the incorporation of e-commerce into social media. While on a social media site, social selling enables the purchasing of goods/services in the user's context, without leaving the social media channel. If a user clicks on an advertisement and is taken to an e-commerce site, then that's really just web advertising and not social selling.
    Well, do these terms and definitions make sense? Let me know what you think.

    Read the article

  • How to handle "circular dependency" in dependency injection

    - by Roel
    The title says "Circular Dependency", but it is not the correct wording, because to me the design seems solid. However, consider the following scenario, where the blue parts are given from an external partner, and orange is my own implementation. Also assume there is more than one ConcreteMain, but I want to use a specific one. (In reality, each class has some more dependencies, but I tried to simplify it here.) I would like to instantiate all of this with Dependency Injection (Unity), but I obviously get a StackOverflowException on the following code, because Runner tries to instantiate ConcreteMain, and ConcreteMain needs a Runner.

        IUnityContainer ioc = new UnityContainer();
        ioc.RegisterType<IMain, ConcreteMain>()
           .RegisterType<IMainCallback, Runner>();
        var runner = ioc.Resolve<Runner>();

    How can I avoid this? Is there any way to structure this so that I can use it with DI? The scenario I'm using now is setting everything up manually, but that puts a hard dependency on ConcreteMain in the class which instantiates it. This is what I'm trying to avoid (with Unity registrations in configuration). All source code below (very simplified example!):

        public class Program
        {
            public static void Main(string[] args)
            {
                IUnityContainer ioc = new UnityContainer();
                ioc.RegisterType<IMain, ConcreteMain>()
                   .RegisterType<IMainCallback, Runner>();
                var runner = ioc.Resolve<Runner>();
                Console.WriteLine("invoking runner...");
                runner.DoSomethingAwesome();
                Console.ReadLine();
            }
        }

        public class Runner : IMainCallback
        {
            private readonly IMain mainServer;

            public Runner(IMain mainServer)
            {
                this.mainServer = mainServer;
            }

            public void DoSomethingAwesome()
            {
                Console.WriteLine("trying to do something awesome");
                mainServer.DoSomething();
            }

            public void SomethingIsDone(object something)
            {
                Console.WriteLine("hey look, something is finally done.");
            }
        }

        public interface IMain
        {
            void DoSomething();
        }

        public interface IMainCallback
        {
            void SomethingIsDone(object something);
        }

        public abstract class AbstractMain : IMain
        {
            protected readonly IMainCallback callback;

            protected AbstractMain(IMainCallback callback)
            {
                this.callback = callback;
            }

            public abstract void DoSomething();
        }

        public class ConcreteMain : AbstractMain
        {
            public ConcreteMain(IMainCallback callback) : base(callback) {}

            public override void DoSomething()
            {
                Console.WriteLine("starting to do something...");
                var task = Task.Factory.StartNew(() => {
                    Thread.Sleep(5000); /* very long running task */
                });
                task.ContinueWith(t => callback.SomethingIsDone(true));
            }
        }
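    One possible way out, sketched below with the question's own interfaces, is to give the IMain implementation a deferred dependency (a Func<IMainCallback>) so that neither constructor needs the other object to exist first. This is only an illustration under assumptions: the DeferredMain name is made up, and whether the container can inject Func<T> automatically depends on the Unity version (Unity 2.0 and later ship automatic factories), so treat the container wiring as something to verify.

        using System;

        // Illustrative alternative to ConcreteMain: the callback is resolved
        // lazily, only when it is actually needed, which breaks the
        // constructor-time cycle between Runner and the IMain implementation.
        public class DeferredMain : IMain
        {
            private readonly Func<IMainCallback> callbackFactory;

            public DeferredMain(Func<IMainCallback> callbackFactory)
            {
                this.callbackFactory = callbackFactory;
            }

            public void DoSomething()
            {
                Console.WriteLine("starting to do something...");
                callbackFactory().SomethingIsDone(true);
            }
        }

        // Manual composition showing the same idea without any container:
        // the lambda closes over the runner variable, which is assigned
        // immediately afterwards, so the factory is never invoked too early.
        //
        //     Runner runner = null;
        //     IMain main = new DeferredMain(() => runner);
        //     runner = new Runner(main);
        //     runner.DoSomethingAwesome();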

    Read the article

  • Are You Afraid of Each Other? Study Shows CMO’s/CIO’s Missing Benefits of Collaboration

    - by Mike Stiles
    Remember that person in school you spent months being too scared to talk to?  Then when you finally did, it led to a wonderful friendship…if not something more. New research from Oracle, Social Media Today and Leader Networks shows marketing and IT need to get over whatever’s holding them back and start reaping the benefits of collaboration. Back in the old days of just a few years ago, marketing could stay on their side of the building, IT could stay on their side of the building, and both could refer to the other as “those guys.” Today, the structure of organizations is shifting from islands to “us,” one integrated body where each part knows what the other parts are doing, and all parts work together in accomplishing job one…a winning customer experience. Ignore that, and you start losing. Give your reluctance to change priority over the benefits of new collaborations, and you start losing. You’re either working together and accelerating forward or getting in the way of each other’s separate agendas and grinding down…much to your competitors’ delight. The study reveals a basic current truth: those who are collaborating in marketing and IT report being more effective, however less than 1/3 report collaborating even “frequently.” In other words, this is obviously a good thing, so we’d better not do it. Smart. The white paper, “Socially Driven Collaboration,” set out to explore how today’s always-changing digital, social and mobile landscape is forcing change across the enterprise, whether it’s welcomed or not. Part of what it found is marketing and IT leaders are not unaware of what’s going on and see their roles evolving. And both know the ability to collaborate more effectively now exists. And of those who are collaborating, over 2/3 say they’re “more effective” professionally because of it. Yet even if you don’t want to take the Oracle study’s word for it, an August 2013 Accenture study of 400 senior marketing and 250 IT executives revealed only 10% think CMO/CIO collaboration is at the right level. There’s a lot of room for improvement here, and not just around people. Collaboration is also being called for across processes and technologies. Business benefits of such collaboration cited in the Oracle study include stronger marketing messages, faster speed-to-market, greater product adoption, faster discovery of product and service shortcomings, and reduction in project costs. Those are the benefits you will cheat yourself out of by keeping “those guys” at arm’s length and continuing to try to function in traditional roles while modern business and the consumer is changing around you. “Intelligence is the ability to adapt to change.” –Stephen Hawking @mikestilesPhoto: istockphoto

    Read the article

  • Big Data – Interacting with Hadoop – What is PIG? – What is PIG Latin? – Day 16 of 21

    - by Pinal Dave
    In yesterday’s blog post we learned the importance of HIVE in the Big Data story. In this article we will understand what PIG and PIG Latin are in the Big Data story. Yahoo started working on Pig for deploying its applications on Hadoop; the goal for Yahoo was to manage its unstructured data. What is Pig and What is Pig Latin? Pig is a high level platform for creating MapReduce programs used with Hadoop, and the language we use for this platform is called PIG Latin. Pig was designed to make Hadoop more user-friendly and approachable by power users and non-developers. PIG is an interactive execution environment supporting the Pig Latin language. The Pig Latin language supports loading and processing input data through a series of transformations to produce the desired results. PIG has two different execution environments: 1) Local Mode – in this case all the scripts run on a single machine. 2) Hadoop – in this case all the scripts run on a Hadoop cluster. Pig Latin vs SQL Pig essentially creates a set of map and reduce jobs under the hood. Because of this, users do not have to write, compile and build their own MapReduce solutions for Big Data. Pig is very similar to SQL in many ways. The Pig Latin language provides an abstraction layer over the data. It focuses on the data and not the structure under the hood. Pig Latin is a very powerful language and it can do various operations like loading and storing data, streaming data, and filtering data, as well as various data operations related to strings. The major difference between SQL and Pig Latin is that PIG is procedural and SQL is declarative. In simpler words, Pig Latin is very similar to a SQL execution plan, and that makes it much easier for programmers to build various processes. Whereas SQL handles trees naturally, Pig Latin follows a directed acyclic graph (DAG). DAGs are used to model several different kinds of structures in mathematics and computer science. Tomorrow In tomorrow’s blog post we will discuss a very important component of the Big Data ecosystem – Zookeeper. Reference: Pinal Dave (http://blog.sqlauthority.com) Filed under: Big Data, PostADay, SQL, SQL Authority, SQL Query, SQL Server, SQL Tips and Tricks, T SQL

    Read the article

  • VS.NET 2010 SP1, Win 7, Parallels, and a MBP&ndash;Hell, my friends&hellip;HELL!

    - by D'Arcy Lussier
    LightSwitch Beta 2 is out. That’s how all this started. All I wanted was to install it on my MBP’s Win7 Parallels VM. But as I’m finding with running a Win7 VM on a MBP, nothing is as easy as it should be. First my MBP froze during the SP1 installation. Not my VM crashing, the entire machine freezing…no mouse, nothing. Had to do a hard reset. BLECH. Then we’re back and I try to re-install SP1 (since the first try obviously failed). I get met with a dialog asking me where silverlight_sdk.msi was. It was *nowhere*! So I hit the net and download it from Microsoft’s site. Unfortunately, it only downloads an exe and not the individual files which would include the msi. Here’s what I did:
    - Download the tools for Silverlight 4 (http://www.microsoft.com/downloads/en/details.aspx?FamilyID=b3deb194-ca86-4fb6-a716-b67c2604a139&displaylang=en)
    - Run it, but don’t hit the install or next button when the dialog comes up
    - Look in your file structure for a folder with a weird name…bunch of numbers and letters. This is a temp folder that the exe creates and dumps all the necessary setup files into, and clears away after it’s done.
    - Inside this folder you’ll find the silverlight_sdk.msi (hooray!). Just copy it to a different location on the C drive. You can then cancel installation.
    Ok, so that takes care of that…but then running the SP1 installer I get hit with *another* dialog asking for the WCF RIA Services SP1 msi. Now it looks like this MSI is part of the Silverlight Tools package because you’ll see the MSI, but the VS.NET 2010 SP1 installer will thumb its nose at this unworthy msi…for whatever reason. So instead, go here: http://www.silverlight.net/getstarted/riaservices/ …and click on the “Install WCF Ria Services Sp1…” option. This downloads the msi, which you should save to your C drive and direct the VS.NET 2010 SP1 installer to. Then, if you’ve done all that, been good all year, and not made any little children cry, you *might* just be able to install VS.NET 2010 SP1 on your Parallels VM. If you were playing that “Take a shot every time he writes VS.NET 2010 Sp1” drinking game, then you’re drunk…which is a better place to be than where I am right now: watching the installation progress bar slowly creep to completion, hoping there are no more surprises in store. D

    Read the article

  • A* navigational mesh path finding

    - by theguywholikeslinux
    So I've been making this top down 2D java game in this framework called Greenfoot [1] and I've been working on the AI for the guys you are gonna fight. I want them to be able to move around the world realistically, so I soon realized, amongst a couple of other things, I would need some kind of pathfinding. I have made two A* prototypes. One is grid based, and then I made one that works with waypoints, so now I need to work out a way to get from a 2D "map" of the obstacles/buildings to a graph of nodes that I can make a path from. The actual pathfinding seems fine, just my open and closed lists could use a more efficient data structure, but I'll get to that if and when I need to. I intend to use a navigational mesh for all the reasons outlined in this post on ai-blog.net [2]. However, the problem I have faced is that what A* thinks is the shortest path from the polygon centres/edges is not necessarily the shortest path if you travel through any part of the node. To get a better idea you can see the question I asked on stackoverflow [3]. I got a good answer concerning a visibility graph. I have since purchased the book (Computational Geometry: Algorithms and Applications [4]) and read further into the topic, however I am still in favour of a navigational mesh (see "Managing Complexity" [5] from Amit’s Notes about Path-Finding [6]). (As a side note, maybe I could possibly use Theta* to convert multiple waypoints into one straight line if the first and last are not obscured. Or, each time I move, check back to the waypoint before last to see if I can go straight from that to this.) So basically what I want is a navigational mesh where, once I have put it through a funnel algorithm (e.g. this one from Digesting Duck [7]), I will get the true shortest path, rather than one that is the shortest path following node to node only, but not the actual shortest given that you can go through some polygons and skip nodes/edges. Oh, and I also want to know how you suggest storing the information concerning the polygons. For the waypoint prototype example I made, I just had each node as an object and stored a list of all the other nodes you could travel to from that node; I'm guessing that won't work with polygons? And how do I tell if a polygon is open/traversable or if it is a solid object? How do I store which nodes make up the polygon? Finally, for the record: I do want to programme this by myself from scratch even though there are already other solutions available, and I don't intend to be (re)using this code in anything other than this game, so it does not matter that it will inevitably be poor quality.
    [1] http://greenfoot.org
    [2] http://www.ai-blog.net/archives/000152.html
    [3] http://stackoverflow.com/q/7585515/
    [4] http://www.cs.uu.nl/geobook/
    [5] http://theory.stanford.edu/~amitp/GameProgramming/MapRepresentations.html
    [6] http://theory.stanford.edu/~amitp/GameProgramming/
    [7] http://digestingduck.blogspot.com/2010/03/simple-stupid-funnel-algorithm.html
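    For reference, here is a minimal sketch of the waypoint-graph A* the poster describes, written in C# to match the other code samples on this page rather than the poster's Java. The Node class and FindPath name are illustrative only, and the linear scan over the open list stands in for the more efficient structure (a binary heap) the poster mentions wanting.

        using System;
        using System.Collections.Generic;
        using System.Linq;

        // Illustrative waypoint node: a position plus the neighbours you can walk to.
        public class Node
        {
            public double X, Y;
            public List<Node> Neighbours = new List<Node>();
        }

        public static class AStar
        {
            static double Heuristic(Node a, Node b)
            {
                double dx = a.X - b.X, dy = a.Y - b.Y;
                return Math.Sqrt(dx * dx + dy * dy); // straight-line distance
            }

            public static List<Node> FindPath(Node start, Node goal)
            {
                var open = new List<Node> { start };
                var closed = new HashSet<Node>();
                var gScore = new Dictionary<Node, double> { { start, 0 } };
                var cameFrom = new Dictionary<Node, Node>();

                while (open.Count > 0)
                {
                    // Linear scan for the lowest f = g + h; a binary heap would make this O(log n).
                    Node current = open.OrderBy(n => gScore[n] + Heuristic(n, goal)).First();
                    if (current == goal)
                    {
                        var path = new List<Node>();
                        for (Node n = goal; n != null; n = cameFrom.ContainsKey(n) ? cameFrom[n] : null)
                            path.Add(n);
                        path.Reverse();
                        return path;
                    }

                    open.Remove(current);
                    closed.Add(current);

                    foreach (Node neighbour in current.Neighbours)
                    {
                        if (closed.Contains(neighbour)) continue;
                        // Edge cost is the straight-line length of the link in a waypoint graph.
                        double tentative = gScore[current] + Heuristic(current, neighbour);
                        if (!gScore.ContainsKey(neighbour) || tentative < gScore[neighbour])
                        {
                            gScore[neighbour] = tentative;
                            cameFrom[neighbour] = current;
                            if (!open.Contains(neighbour)) open.Add(neighbour);
                        }
                    }
                }
                return null; // no path between start and goal
            }
        }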

    Read the article

  • CISDI Cloud - Industrial Cloud Computing Platform based on Oracle Products

    - by Wenyu Duan
    In today's era, Cloud Computing is becoming integral to the vision and corporate strategy of leading organizations and is often seen as a key business driver to achieve growth and innovation. Headquartered in Chongqing, China, CISDI Engineering Co., Ltd. is a large state-owned engineering company, offering consulting, engineering design, EPC contracting, and equipment integration services to steel producers all over the world. With over 50 years of experience, CISDI offers quality services for every aspect of production for projects in the metal industry, and the company has evolved into a leading international engineering service group with 18 subsidiaries providing a complete lifecycle for E&C projects. The CISDI group delegation led by Mr. Zhaohui Yu, CEO of CISDI Group, Mr. Zhiyou Li, CEO of CISDI Info, Mr. Qing Peng, CTO of CISDI Info and Mr. Xin Xiao, Head of CISDI Info's R&D joined Oracle OpenWorld 2012 and presented a very impressive cloud initiative case in their session titled “E&C Industry Solution in CISDI Cloud - An Industrial Cloud Computing Platform Based on Oracle Products”. CISDI group plans to expand through three phases in the construction of its cloud computing platform: first, it will relocate its existing technologies to Oracle systems, along with establishing a private cloud for CISDI; secondly, it will gradually provide mixed cloud services for its subsidiaries and partners; and finally it plans to launch an industrial cloud with a highly mature, secure and scalable environment providing cloud services for customers in the engineering construction and steel industries, among others. “CISDI Cloud” will become the growth engine for the organization to expand its global reach through online services and achieve the strategic objective of being the preferred choice of E&C companies worldwide. The new cloud computing platform is designed to provide access to the shared computing resources pool in a self-service, dynamic, elastic and measurable way. Its flexible and scalable grid structure can support elastic expansion and sustainable growth, and can bring significant benefits in speed, agility and efficiency. Further, the platform can greatly cut down deployment and maintenance costs. The CISDI delegation highlighted these points as the key reasons why the group decided to have a strategic collaboration with Oracle for building this world class industrial cloud: - Oracle’s strategy: Open, Complete and Integrated - Oracle as the only company who can provide engineered systems, with a complete product chain of hardware and software - Exadata, Exalogic, EM 12c to provide a solid foundation for "CISDI Cloud". The cloud blueprint and advanced architecture for the industrial cloud computing platform presented in the session show how Oracle products and technologies, together with industrial applications from CISDI, can provide an end-to-end portfolio of E&C industry services in the cloud. CISDI group was recognized for business leadership and innovative solutions and was presented with the Engineering and Construction Industry Excellence Award during Oracle OpenWorld.

    Read the article

  • How to speed up this simple mysql query?

    - by Jim Thio
    The query is simple:

        SELECT TB.ID, TB.Latitude, TB.Longitude,
               111151.29341326*SQRT(pow(-6.185-TB.Latitude,2)+pow(106.773-TB.Longitude,2)*cos(-6.185*0.017453292519943)*cos(TB.Latitude*0.017453292519943)) AS Distance
        FROM `tablebusiness` AS TB
        WHERE -6.2767668133836 < TB.Latitude AND TB.Latitude < -6.0932331866164
          AND FoursquarePeopleCount > 5
          AND 106.68123318662 < TB.Longitude AND TB.Longitude < 106.86476681338
        ORDER BY Distance

    See, we just look at all businesses within a rectangle. The table has 1.6 million rows; within that small rectangle there are only 67,565 businesses. The structure of the table is (character columns use utf8_unicode_ci, and most columns allow NULL):

        ID varchar(250), Email varchar(400), InBuildingAddress varchar(400), Price int(10),
        Street varchar(400), Title varchar(400), Website varchar(400), Zip varchar(400),
        Rating Star double, Rating Weight double, Latitude double, Longitude double,
        Building varchar(200), City varchar(100), OpeningHour varchar(400),
        TimeStamp timestamp ON UPDATE CURRENT_TIMESTAMP, CountViews int(11)

    The indexes are all single-column BTREE indexes: the PRIMARY key on ID, plus City, Building, OpeningHour (255), Email (255), InBuildingAddress (255), Price, Street (255), Title (255), Website (255), Zip (255), Rating Star, Rating Weight, Latitude, and Longitude. The query took forever. I think there has to be something wrong there. Showing rows 0 - 29 (67,565 total, Query took 12.4767 sec)

    Read the article

  • BAM Data Control in multiple ADF Faces Components

    - by [email protected]
    As we know, Oracle BAM data control instance sharing is not supported. When two or more ADF Faces components must display the same data, and are bound to the same Oracle BAM data control definition, we have to make sure that we wrap each ADF Faces component in an ADF task flow, and set the Data Control Scope to isolated. This blog will show a small sample to demonstrate this. In this sample we will create a Pie and a Bar using the same BAM DC, such that both components use the same data control but have isolated scope. This sample can be downloaded from Sample1.zip. Set-up: create a BAM data control using the Employees DO (sample). Steps:
    1. Right click on the View Controller project and select "New->ADF Task Flow".
    2. Check "Create Bounded Task Flow", give a meaningful name (ex: EmpPieTF.xml) to the task flow (TF) and click on "OK". (CreateTF.bmp)
    3. From the "Components Palette", drag and drop "View" into the task flow diagram. Give a meaningful name to the view.
    4. Double click, and click "OK" for "Create New JSF Page Fragment".
    5. From "Data Controls", drag and drop "Employees->Query" into this jsff page as "Graph->Pie" (Pie: Sales_Number and Slices: Salesperson).
    6. Repeat steps 1 through 4 for another task flow (ex: EmpBarTF).
    7. From "Data Controls", drag and drop "Employees->Query" into this jsff page as "Graph->Bar" (Bars: Sales_Number and X-axis: Salesperson).
    8. Open the task flow created in step 2. In the Structure Pane, right click on "Task Flow Definition - EmpPieTF".
    9. Click "Insert inside Task Flow Definition - EmpPieTF -> ADF Task Flow -> Data Control Scope".
    10. Click "OK". (TFDCScope.bmp)
    11. For the "Data Control Scope", in the Property Inspector -> General section, change the data control scope from Shared to Isolated.
    12. Repeat steps 8 through 11 for the second task flow.
    13. Now create a new jspx page, for example Main.jspx. Drag and drop both task flows ("EmpPieTF" and "EmpBarTF") as regions. Surround them with panel components as needed.
    14. Run the page Main.jspx. (MainPage.bmp)
    Now when the page runs, although both components are created using the same data control, the bindings are not shared and each component will have a separate instance of the data control.

    Read the article

  • Much Ado About Nothing: Stub Objects

    - by user9154181
    The Solaris 11 link-editor (ld) contains support for a new type of object that we call a stub object. A stub object is a shared object, built entirely from mapfiles, that supplies the same linking interface as the real object, while containing no code or data. Stub objects cannot be executed — the runtime linker will kill any process that attempts to load one. However, you can link to a stub object as a dependency, allowing the stub to act as a proxy for the real version of the object. You may well wonder if there is a point to producing an object that contains nothing but linking interface. As it turns out, stub objects are very useful for building large bodies of code such as Solaris. In the last year, we've had considerable success in applying them to one of our oldest and thorniest build problems. In this discussion, I will describe how we came to invent these objects, and how we apply them to building Solaris. This posting explains where the idea for stub objects came from, and details our long and twisty journey from hallway idea to standard link-editor feature. I expect that these details are mainly of interest to those who work on Solaris and its makefiles, those who have done so in the past, and those who work with other similar bodies of code. A subsequent posting will omit the history and background details, and instead discuss how to build and use stub objects. If you are mainly interested in what stub objects are, and don't care about the underlying software war stories, I encourage you to skip ahead. The Long Road To Stubs This all started for me with an email discussion in May of 2008, regarding a change request that was filed in 2002, entitled: 4631488 lib/Makefile is too patient: .WAITs should be reduced This CR encapsulates a number of chronic issues with Solaris builds: We build Solaris with a parallel make (dmake) that tries to build as much of the code base in parallel as possible. There is a lot of code to build, and we've long made use of parallelized builds to get the job done quicker. This is even more important in today's world of massively multicore hardware. Solaris contains a large number of executables and shared objects. Executables depend on shared objects, and shared objects can depend on each other. Before you can build an object, you need to ensure that the objects it needs have been built. This implies a need for serialization, which is in direct opposition to the desire to build everything in parallel. To accurately build objects in the right order requires an accurate set of make rules defining the things that depend on each other. This sounds simple, but the reality is quite complex. In practice, having programmers explicitly specify these dependencies is a losing strategy: It's really hard to get right. It's really easy to get it wrong and never know it because things build anyway. Even if you get it right, it won't stay that way, because dependencies between objects can change over time, and make cannot help you detect such drifting. You won't know that you got it wrong until the builds break. That can be a long time after the change that triggered the breakage happened, making it hard to connect the cause and the effect. Usually this happens just before a release, when the pressure is on, it's hard to think calmly, and there is no time for deep fixes.
As a poor compromise, the libraries in core Solaris were built using a set of grossly incomplete hand written rules, supplemented with a number of dmake .WAIT directives used to group the libraries into sets of non-interacting groups that can be built in parallel because we think they don't depend on each other. From time to time, someone will suggest that we could analyze the built objects themselves to determine their dependencies and then generate make rules based on those relationships. This is possible, but there are complications that limit the usefulness of that approach: To analyze an object, you have to build it first. This is a classic chicken and egg scenario. You could analyze the results of a previous build, but then you're not necessarily going to get accurate rules for the current code. It should be possible to build the code without having a built workspace available. The analysis will take time, and remember that we're constantly trying to make builds faster, not slower. By definition, such an approach will always be approximate, and therefore only incrementally more accurate than the hand written rules described above. The hand written rules are fast and cheap, while this idea is slow and complex, so we stayed with the hand written approach. Solaris was built that way, essentially forever, because these are genuinely difficult problems that had no easy answer. The makefiles were full of build races in which the right outcomes happened reliably for years until a new machine or a change in build server workload upset the accidental balance of things. After figuring out what had happened, you'd mutter "How did that ever work?", add another incomplete and soon to be inaccurate make dependency rule to the system, and move on. This was not a satisfying solution, as we tend to be perfectionists in the Solaris group, but we didn't have a better answer. It worked well enough, approximately. And so it went for years. We needed a different approach — a new idea to cut the Gordian Knot. In that discussion from May 2008, my fellow linker-alien Rod Evans had the initial spark that led us to a game changing series of realizations: The link-editor is used to link objects together, but it only uses the ELF metadata in the object, consisting of symbol tables, ELF versioning sections, and similar data. Notably, it does not look at, or understand, the machine code that makes an object useful at runtime. If you had an object that only contained the ELF metadata for a dependency, but not the code or data, the link-editor would find it equally useful for linking, and would never know the difference. Call it a stub object. In the core Solaris OS, we require all objects to be built with a link-editor mapfile that describes all of its publicly available functions and data. Could we build a stub object using the mapfile for the real object? It ought to be very fast to build stub objects, as there are no input objects to process. Unlike the real object, stub objects would not actually require any dependencies, and so, all of the stubs for the entire system could be built in parallel. When building the real objects, one could link against the stub objects instead of the real dependencies. This means that all the real objects can be built in parallel too, without any serialization. We could replace a system that requires perfect makefile rules with a system that requires no ordering rules whatsoever. The results would be considerably more robust.
We immediately realized that this idea had potential, but also that there were many details to sort out, lots of work to do, and that perhaps it wouldn't really pan out. As is often the case, it would be necessary to do the work and see how it turned out. Following that conversation, I set about trying to build a stub object. We determined that a faithful stub has to do the following: Present the same set of global symbols, with the same ELF versioning, as the real object. Functions are simple — it suffices to have a symbol of the right type, possibly, but not necessarily, referencing a null function in its text segment. Copy relocations make data more complicated to stub. The possibility of a copy relocation means that when you create a stub, the data symbols must have the actual size of the real data. Any error in this will go uncaught at link time, and will cause tragic failures at runtime that are very hard to diagnose. For reasons too obscure to go into here, involving tentative symbols, it is also important that the data reside in bss, or not, matching its placement in the real object. If the real object has more than one symbol pointing at the same data item, we call these aliased symbols. All data symbols in the stub object must exhibit the same aliasing as the real object. We imagined the stub library feature working as follows: A command line option to ld tells it to produce a stub rather than a real object. In this mode, only mapfiles are examined, and any object or shared libraries on the command line are ignored. The extra information needed (function or data, size, and bss details) would be added to the mapfile. When building the real object instead of the stub, the extra information for building stubs would be validated against the resulting object to ensure that they match. In exploring these ideas, I immediately ran headfirst into the reality of the original mapfile syntax, a subject that I would later write about as The Problem(s) With Solaris SVR4 Link-Editor Mapfiles. The idea of extending that poor language was a non-starter. Until a better mapfile syntax became available, which seemed unlikely in 2008, the solution could not involve extensions to the mapfile syntax. Instead, we cooked up the idea (hack) of augmenting mapfiles with stylized comments that would carry the necessary information. A typical definition might look like:
    # DATA(i386) __iob 0x3c0
    # DATA(amd64,sparcv9) __iob 0xa00
    # DATA(sparc) __iob 0x140
    iob;
A further problem then became clear: If we can't extend the mapfile syntax, then there's no good way to extend ld with an option to produce stub objects, and to validate them against the real objects. The idea of having ld read comments in a mapfile and parse them for content is an unacceptable hack. The entire point of comments is that they are strictly for the human reader, and explicitly ignored by the tool. Taking all of these speed bumps into account, I made a new plan: A perl script reads the mapfiles, generates some small C glue code to produce empty functions and data definitions, compiles and links the stub object from the generated glue code, and then deletes the generated glue code. Another perl script is used after both objects have been built, to compare the real and stub objects, using data from elfdump, and validate that they present the same linking interface. By June 2008, I had written the above, and generated a stub object for libc. It was a useful prototype process to go through, and it allowed me to explore the ideas at a deep level.
Ultimately though, the result was unsatisfactory as a basis for real product. There were so many issues: The use of stylized comments was fine for a prototype, but not close to professional enough for shipping product. The idea of having to document and support it was a large concern. The ideal solution for stub objects really does involve having the link-editor accept the same arguments used to build the real object, augmented with a single extra command line option. Any other solution, such as our prototype script, will require makefiles to be modified in deeper ways to support building stubs, and so, will raise barriers to converting existing code. A validation script that rederives what the linker knew when it built an object will always be at a disadvantage relative to the actual linker that did the work. A stub object should be identifiable as such. In the prototype, there was no tag or other metadata that would let you know that they weren't real objects. Being able to identify a stub object in this way means that the file command can tell you what it is, and that the runtime linker can refuse to try and run a program that loads one. At that point, we needed to apply this prototype to building Solaris. As you might imagine, the job of modifying all the makefiles in the core Solaris code base in order to do this is a massive task, and not something you'd enter into lightly. The quality of the prototype just wasn't good enough to justify that sort of time commitment, so I tabled the project, putting it on my list of long term things to think about, and moved on to other work. It would sit there for a couple of years. Semi-coincidentally, one of the projects I tackled after that was to create a new mapfile syntax for the Solaris link-editor. We had wanted to do something about the old mapfile syntax for many years. Others before me had done some paper designs, and a great deal of thought had already gone into the features it should, and should not have, but for various reasons things had never moved beyond the idea stage. When I joined Sun in late 2005, I got involved in reviewing those things and thinking about the problem. Now in 2008, fresh from relearning for the Nth time why the old mapfile syntax was a huge impediment to linker progress, it seemed like the right time to tackle the mapfile issue. Paving the way for proper stub object support was not the driving force behind that effort, but I certainly had them in mind as I moved forward. The new mapfile syntax, which we call version 2, integrated into Nevada build snv_135 in February 2010: 6916788 ld version 2 mapfile syntax PSARC/2009/688 Human readable and extensible ld mapfile syntax In order to prove that the new mapfile syntax was adequate for general purpose use, I had also done an overhaul of the ON consolidation to convert all mapfiles to use the new syntax, and put checks in place that would ensure that no use of the old syntax would creep back in. That work went back into snv_144 in June 2010: 6916796 OSnet mapfiles should use version 2 link-editor syntax That was a big putback, modifying 517 files, adding 18 new files, and removing 110 old ones. I would have done this putback anyway, as the work was already done, and the benefits of human readable syntax are obvious. However, among the justifications listed in CR 6916796 was this: We anticipate adding additional features to the new mapfile language that will be applicable to ON, and which will require all sharable object mapfiles to use the new syntax.
I never explained what those additional features were, and no one asked. It was premature to say so, but this was a reference to stub objects. By that point, I had already put together a working prototype link-editor with the necessary support for stub objects. I was pleased to find that building stubs was indeed very fast. On my desktop system (Ultra 24), an amd64 stub for libc can be built in a fraction of a second:
    % ptime ld -64 -z stub -o stubs/libc.so.1 -G -hlibc.so.1 \
        -ztext -zdefs -Bdirect ...
    real 0.019708910
    user 0.010101680
    sys  0.008528431
In order to go from prototype to integrated link-editor feature, I knew that I would need to prove that stub objects were valuable. And to do that, I knew that I'd have to switch the Solaris ON consolidation to use stub objects and evaluate the outcome. And in order to do that experiment, ON would first need to be converted to version 2 mapfiles. Sub-mission accomplished. Normally when you design a new feature, you can devise reasonably small tests to show it works, and then deploy it incrementally, letting it prove its value as it goes. The entire point of stub objects, however, was to demonstrate that they could be successfully applied to an extremely large and complex code base, and specifically to solve the Solaris build issues detailed above. There was no way to finesse the matter — in order to move ahead, I would have to successfully use stub objects to build the entire ON consolidation and demonstrate their value. In software, the need to boil the ocean can often be a warning sign that things are trending in the wrong direction. Conversely, sometimes progress demands that you build something large and new all at once. A big win, or a big loss — sometimes all you can do is try it and see what happens. And so, I spent some time staring at ON makefiles trying to get a handle on how things work, and how they'd have to change. It's a big and messy world, full of complex interactions, unspecified dependencies, special cases, and knowledge of arcane makefile features... ...and so, I backed away, put it down for a few months and did other work... ...until the fall, when I felt like it was time to stop thinking and pondering (some would say stalling) and get on with it. Without stubs, the following gives a simplified high level view of how Solaris is built: An initially empty directory, known as the proto and referenced via the ROOT makefile macro, is established to receive the files that make up the Solaris distribution. A top level setup rule creates the proto area, and performs operations needed to initialize the workspace so that the main build operations can be launched, such as copying needed header files into the proto area. Parallel builds are launched to build the kernel (usr/src/uts), libraries (usr/src/lib), and commands. The install makefile target builds each item and delivers a copy to the proto area. All libraries and executables link against the objects previously installed in the proto, implying the need to synchronize the order in which things are built. Subsequent passes run lint, and do packaging. Given this structure, the additions to use stub objects are: A new second proto area is established, known as the stub proto and referenced via the STUBROOT makefile macro. The stub proto has the same structure as the real proto, but is used to hold stub objects. All files in the real proto are delivered as part of the Solaris product. In contrast, the stub proto is used to build the product, and then thrown away.
A new target is added to library Makefiles called stub. This rule builds the stub objects. The ld command is designed so that you can build a stub object using the same ld command line you'd use to build the real object, with the addition of a single -z stub option. This means that the makefile rules for building the stub objects are very similar to those used to build the real objects, and many existing makefile definitions can be shared between them. A new target is added to the Makefiles called stubinstall which delivers the stub objects built by the stub rule into the stub proto. These rules reuse much of the existing plumbing used by the install rule. The setup rule runs stubinstall over the entire lib subtree as part of its initialization. All libraries and executables link against the objects in the stub proto rather than the main proto, and can therefore be built in parallel without any synchronization. There was no small way to try this that would yield meaningful results. I would have to take a leap of faith and edit approximately 1850 makefiles and 300 mapfiles first, trusting that it would all work out. Once the editing was done, I'd type make and see what happened. This took about 6 weeks to do, and there were many dark days when I'd question the entire project, or struggle to understand some of the many twisted and complex situations I'd uncover in the makefiles. I even found a couple of new issues that required changes to the new stub object related code I'd added to ld. With a substantial amount of encouragement and help from some key people in the Solaris group, I eventually got the editing done and stub objects for the entire workspace built. I found that my desktop system could build all the stub objects in the workspace in roughly a minute. This was great news, as it meant that use of the feature is effectively free — no one was likely to notice or care about the cost of building them. After another week of typing make, fixing whatever failed, and doing it again, I succeeded in getting a complete build! The next step was to remove all of the make rules and .WAIT statements dedicated to controlling the order in which libraries under usr/src/lib are built. This came together pretty quickly, and after a few more speed bumps, I had a workspace that built cleanly and looked like something you might actually be able to integrate someday. This was a significant milestone, but there was still much left to do. I turned to doing full nightly builds. Every type of build (open, closed, OpenSolaris, export, domestic) had to be tried. Each type failed in a new and unique way, requiring some thinking and rework. As things came together, I became aware of things that could have been done better, simpler, or cleaner, and those things also required some rethinking, the seeking of wisdom from others, and some rework. After another couple of weeks, it was in close to final form. My focus turned towards the end game and integration. This was a huge workspace, and needed to go back soon, before changes in the gate would make merging increasingly difficult. At this point, I knew that the stub objects had greatly simplified the makefile logic and uncovered a number of race conditions, some of which had been there for years. I assumed that the builds were faster too, so I did some builds intended to quantify the speedup in build time that resulted from this approach. It had never occurred to me that there might not be one.
There was no small way to try this that would yield meaningful results. I would have to take a leap of faith and edit approximately 1850 makefiles and 300 mapfiles first, trusting that it would all work out. Once the editing was done, I'd type make and see what happened. This took about 6 weeks, and there were many dark days when I'd question the entire project, or struggle to understand some of the many twisted and complex situations I'd uncover in the makefiles. I even found a couple of new issues that required changes to the new stub-object-related code I'd added to ld. With a substantial amount of encouragement and help from some key people in the Solaris group, I eventually got the editing done and stub objects for the entire workspace built. I found that my desktop system could build all the stub objects in the workspace in roughly a minute. This was great news, as it meant that use of the feature is effectively free: no one was likely to notice or care about the cost of building them.

After another week of typing make, fixing whatever failed, and doing it again, I succeeded in getting a complete build! The next step was to remove all of the make rules and .WAIT statements dedicated to controlling the order in which libraries under usr/src/lib are built. This came together pretty quickly, and after a few more speed bumps, I had a workspace that built cleanly and looked like something you might actually be able to integrate someday. This was a significant milestone, but there was still much left to do.

I turned to doing full nightly builds. Every type of build (open, closed, OpenSolaris, export, domestic) had to be tried. Each type failed in a new and unique way, requiring some thinking and rework. As things came together, I became aware of things that could have been done better, simpler, or cleaner, and those things also required some rethinking, the seeking of wisdom from others, and some rework. After another couple of weeks, it was in close to final form.

My focus turned towards the end game and integration. This was a huge workspace, and it needed to go back soon, before changes in the gate would make merging increasingly difficult. At this point, I knew that the stub objects had greatly simplified the makefile logic and uncovered a number of race conditions, some of which had been there for years. I assumed that the builds were faster too, so I did some builds intended to quantify the speedup in build time that resulted from this approach. It had never occurred to me that there might not be one.

And so, I was very surprised to find that the wall clock build times for a stock ON workspace were essentially identical to the times for my stub-library-enabled version! This is why it is important to always measure, and not just assume. One can tell from first principles, based on all those removed dependency rules in the library makefiles, that the stub object version of ON gives dmake considerably more opportunities to overlap library construction. Some hypotheses were proposed, and shot down:

  - Could we have disabled dmake's parallel feature? No, a quick check showed things being built in parallel.
  - It was suggested that we might be I/O bound, and so the threads would be mostly idle. That's a plausible explanation, but system stats didn't really support it. Plus, the timings for the stub and non-stub cases were just too suspiciously identical.
  - Are our machines already handling as much parallelism as they are capable of, and unable to exploit these additional opportunities? Once again, we didn't see the evidence to back this up.

Eventually, a more plausible and obvious reason emerged: we build the libraries and commands (usr/src/lib, usr/src/cmd) in parallel with the kernel (usr/src/uts). The kernel is the long leg in that race, and so wall clock measurements of build time are essentially showing how long it takes to build uts. Although it would have been nice to post a huge speedup immediately, we can take solace in knowing that stub objects simplify the makefiles and reduce the possibility of race conditions. The next step in reducing build time should be to find ways to reduce or overlap the uts part of the builds. When that leg of the build becomes shorter, the increased parallelism in the libs and commands will pay additional dividends. Until then, we'll just have to settle for simpler and more robust.

And so, I integrated the link-editor support for creating stub objects into snv_153 (November 2010) with:

  6993877 ld should produce stub objects
  PSARC/2010/397 ELF Stub Objects

followed by the work to convert the ON consolidation in snv_161 (February 2011) with:

  7009826 OSnet should use stub objects
  4631488 lib/Makefile is too patient: .WAITs should be reduced

This was a huge putback, with 2108 modified files, 8 new files, and 2 removed files. Due to the size, I was allowed a window after snv_160 closed in which to do the putback. It went pretty smoothly for something this big; a few more preexisting race conditions were discovered and addressed over the next few weeks, and things have been quiet since then.

Conclusions and Looking Forward

Solaris has been built with stub objects since February. The fact that developers no longer specify the order in which libraries are built has been a big success, and we've eliminated an entire class of build error. That's not to say that there are no build races left in the ON makefiles, but we've taken a substantial bite out of the problem while generally simplifying and improving things. The introduction of a stub proto area has also opened some interesting new possibilities for other build improvements. As this article has become quite long, and as those uses do not involve stub objects, I will defer that discussion to a future article.

    Read the article

  • Just another web startup - platform comparison

    - by Holland
    I'm looking to do a web startup which involves something along the lines of an ecommerce site, yet a little more in depth than that. While it's something that I would rather not go into detail about in terms of the initial idea, I can specify (on a basic level) what would be required of the website. If you have any observations or opinions derived from personal experience which relate to what you see here, I'd appreciate it if you could share them.
    PayPal API interaction (definitely). From what I've read about their API, integrating it into a website is VERY expensive, so I'd probably hold off on that until I've (hopefully) generated money and write my own simple credit-card interaction system.
    SQL backend (obviously). PostgreSQL seems like a pretty good choice, as from what I've read, its structure is a bit more "object-oriented" than, say, MySQL's. Then again, I've used MySQL before and haven't had much problem with it whatsoever. Would it be worth learning PostgreSQL for this purpose?
    Java or .NET implementation (preferably Mono, so I can use .NET while hosting the website using Apache). The reason for this is because, frankly, while I know PHP is a great platform to develop websites with, I hate developing with it. Before someone chimes in and flames me for saying that, note that I have nothing against the language; I just don't like it for my purposes. While Mono may be good to go with, I'm aware that ASP.NET MVC 3 hasn't been released for Mono yet, which may be a pain to work with without its Razor syntax. On top of that, it seems Java is completely FULL of class libraries that deal with web development and can be downloaded from the web. If anyone has any experience with these, I'd appreciate it if that were posted. From what I've read about Spring and Struts2, they seem to be the best out there - especially since they're (AFAIK) MVC. I've considered Python and Django, which do seem REALLY nice, but I don't know much Python, and I'd rather start with something that I already know (language-wise, not framework-wise) than dive into learning a language AND a new framework.
    I'd REALLY like to be able to host my website via Apache rather than using Windows Server or anything like that, as, frankly, I hate its setup. I'm not dissing it in any way, shape, or form; I'm just saying I dislike it. <3 terminal config. If there is a good reason to go with Windows Server, however, I'd be willing to learn it.
    C# has a lot of things that Java appears not to have, including delegates, unsigned types, and LINQ. Is there anything that Java has which can counter these?

    Read the article
