Search Results

Search found 10951 results on 439 pages for 'high definition'.


  • Django User M2M relationship

    - by Antonio
    When trying to syncdb with the following models:

        class Contact(models.Model):
            user_from = models.ForeignKey(User, related_name='from_user')
            user_to = models.ForeignKey(User, related_name='to_user')

            class Meta:
                unique_together = (('user_from', 'user_to'),)

        User.add_to_class('following',
                          models.ManyToManyField('self', through=Contact,
                                                 related_name='followers',
                                                 symmetrical=False))

    I get the following error:

        Error: One or more models did not validate:
        auth.user: Accessor for m2m field 'following' clashes with related m2m field 'User.followers'. Add a related_name argument to the definition for 'following'.
        auth.user: Reverse query name for m2m field 'following' clashes with related m2m field 'User.followers'. Add a related_name argument to the definition for 'following'.
        auth.user: The model User has two manually-defined m2m relations through the model Contact, which is not permitted. Please consider using an extra field on your intermediary model instead.
        auth.user: Accessor for m2m field 'following' clashes with related m2m field 'User.followers'. Add a related_name argument to the definition for 'following'.
        auth.user: Reverse query name for m2m field 'following' clashes with related m2m field 'User.followers'. Add a related_name argument to the definition for 'following'.
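    Note that the same two clash errors are reported twice, and the "two manually-defined m2m relations through the model Contact" error only makes sense if add_to_class ran more than once, typically because the models module was imported under two different paths. A minimal sketch of a fix under that diagnosis (the related_name values below are illustrative, not from the original post):

        # Sketch: reverse accessor names that cannot collide with 'followers',
        # plus a guard so the M2M field is only added once.
        from django.contrib.auth.models import User
        from django.db import models

        class Contact(models.Model):
            user_from = models.ForeignKey(User, related_name='rel_from_set')
            user_to = models.ForeignKey(User, related_name='rel_to_set')

            class Meta:
                unique_together = (('user_from', 'user_to'),)

        # Guard against double execution if this module is imported twice.
        if not hasattr(User, 'following'):
            User.add_to_class('following',
                              models.ManyToManyField('self', through=Contact,
                                                     related_name='followers',
                                                     symmetrical=False))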

    Read the article

  • Creating circular generic references

    - by M. Jessup
    I am writing an application to do some distributed calculations in a peer-to-peer network. In defining the network I have two classes, P2PNetwork and P2PClient. I want these to be generic and so have the definitions:

        P2PNetwork<T extends P2PClient<? extends P2PNetwork<T>>>
        P2PClient<T extends P2PNetwork<? extends T>>

    with P2PClient defining a method setNetwork(T network). What I am hoping to describe with this code is:

    - A P2PNetwork is constituted of clients of a certain type
    - A P2PClient may only belong to a network whose clients consist of the same type as this client (the circular reference)

    This seems correct to me, but if I try to create a non-generic version such as

        MyP2PClient<MyP2PNetwork<? extends MyP2PClient>> myClient;

    and other variants, I receive numerous errors from the compiler. So my questions are as follows: Is a generic circular reference even possible (I have never seen anything explicitly forbidding it)? Is the above generic definition a correct definition of such a circular relationship? If it is valid, is it the "correct" way to describe such a relationship (i.e. is there another valid definition which is stylistically preferred)? How would I properly define a non-generic instance of a Client and Server given the proper generic definition?
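    Such mutual bounds are usually written with two type parameters, so that each class can name both sides of the relationship precisely and the loop is closed only in the concrete subclasses. A hedged sketch of that pattern (class and method names are illustrative, not from the original post):

        // Sketch: mutually recursive bounds via two type parameters.
        // C is "the client type of this network"; N is "the network type of this client".
        abstract class P2PNetwork<C extends P2PClient<C, N>, N extends P2PNetwork<C, N>> {
            void addClient(C client) { /* ... */ }
        }

        abstract class P2PClient<C extends P2PClient<C, N>, N extends P2PNetwork<C, N>> {
            void setNetwork(N network) { /* ... */ }
        }

        // Closing the loop with concrete, non-generic types:
        class MyNetwork extends P2PNetwork<MyClient, MyNetwork> { }
        class MyClient extends P2PClient<MyClient, MyNetwork> { }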

    Read the article

  • Why this Either-monad code does not type check?

    - by pf_miles
    This code fragment in 'baby.hs':

        instance Monad (Either a) where
            return = Left
            fail = Right
            Left x >>= f = f x
            Right x >>= _ = Right x

    caused this horrible compilation error:

        Prelude> :l baby
        [1 of 1] Compiling Main ( baby.hs, interpreted )

        baby.hs:2:18:
            Couldn't match expected type `a1' against inferred type `a'
              `a1' is a rigid type variable bound by
                   the type signature for `return' at <no location info>
              `a' is a rigid type variable bound by
                  the instance declaration at baby.hs:1:23
            In the expression: Left
            In the definition of `return': return = Left
            In the instance declaration for `Monad (Either a)'

        baby.hs:3:16:
            Couldn't match expected type `[Char]' against inferred type `a1'
              `a1' is a rigid type variable bound by
                   the type signature for `fail' at <no location info>
            Expected type: String
            Inferred type: a1
            In the expression: Right
            In the definition of `fail': fail = Right

        baby.hs:4:26:
            Couldn't match expected type `a1' against inferred type `a'
              `a1' is a rigid type variable bound by
                   the type signature for `>>=' at <no location info>
              `a' is a rigid type variable bound by
                  the instance declaration at baby.hs:1:23
            In the first argument of `f', namely `x'
            In the expression: f x
            In the definition of `>>=': Left x >>= f = f x

        baby.hs:5:31:
            Couldn't match expected type `b' against inferred type `a'
              `b' is a rigid type variable bound by
                   the type signature for `>>=' at <no location info>
              `a' is a rigid type variable bound by
                  the instance declaration at baby.hs:1:23
            In the first argument of `Right', namely `x'
            In the expression: Right x
            In the definition of `>>=': Right x >>= _ = Right x

        Failed, modules loaded: none.

    Why does this happen? And how could I make this code compile? Thanks for any help~
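    In Monad (Either a), the monad's "value" lives in Either's second type parameter, so return must inject with Right, and it is the Left value that propagates unchanged through >>= (the posted instance has the two constructors swapped, and fail cannot be Right because fail takes a String). A hedged sketch of a version that type checks, using a local clone of Either since base already defines the real instance:

        -- Sketch: MyEither stands in for Either to avoid clashing with base.
        data MyEither a b = MyLeft a | MyRight b

        instance Functor (MyEither a) where
            fmap _ (MyLeft x)  = MyLeft x
            fmap f (MyRight y) = MyRight (f y)

        instance Applicative (MyEither a) where
            pure = MyRight
            MyLeft x  <*> _ = MyLeft x
            MyRight f <*> r = fmap f r

        instance Monad (MyEither a) where
            return = MyRight             -- success goes in the last type slot
            MyLeft x  >>= _ = MyLeft x   -- the error value propagates unchanged
            MyRight x >>= f = f x        -- the success value feeds the next step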

    Read the article

  • Struts2 Tiles in Google app engine

    - by user365941
    I am trying to build a Java web application using Struts2 and Tiles on Google App Engine. Below is my tiles.xml file:

        <!DOCTYPE tiles-definitions PUBLIC
            "-//Apache Software Foundation//DTD Tiles Configuration 2.0//EN"
            "http://tiles.apache.org/dtds/tiles-config_2_0.dtd">
        <tiles-definitions>
            <definition name="baseLayout" template="BaseLayout.jsp">
                <put-attribute name="title" value="" />
                <put-attribute name="header" value="Header.jsp" />
                <put-attribute name="body" value="" />
                <put-attribute name="footer" value="Footer.jsp" />
            </definition>
            <definition name="/welcome.tiles" extends="baseLayout">
                <put-attribute name="title" value="Welcome" />
                <put-attribute name="body" value="Welcome.jsp" />
            </definition>
        </tiles-definitions>

    When I run the app I get no error, but the page just prints "Header.jsp Welcome.jsp Footer.jsp" instead of rendering the actual JSP pages. Please advise on what needs to be done. Thanks in advance. Regards
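    Printing the attribute values as plain strings is the classic symptom of the layout using getAsString where it should insert the pages. A hedged sketch of what BaseLayout.jsp would then need to look like (assuming that diagnosis; the markup around the tags is illustrative):

        <%-- Sketch of BaseLayout.jsp: tiles:getAsString prints the attribute
             value ("Header.jsp"); tiles:insertAttribute includes the page. --%>
        <%@ taglib uri="http://tiles.apache.org/tags-tiles" prefix="tiles" %>
        <html>
          <body>
            <tiles:insertAttribute name="header" />
            <tiles:insertAttribute name="body" />
            <tiles:insertAttribute name="footer" />
          </body>
        </html>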

    Read the article

  • Android, how to use DexClassLoader to dynamically replace an Activity or Service

    - by RickNotFred
    I am trying to do something similar to this stackoverflow posting. What I want to do is to read the definition of an activity or service from the SD card. To avoid manifest permission issues, I create a shell version of this activity in the .apk, but try to replace it with an activity of the same name residing on the SD card at run time. Unfortunately, while I am able to load the activity class definition from the SD card using DexClassLoader, the original class definition is the one that is executed. Is there a way to specify that the new class definition replaces the old one, or any suggestions on avoiding the manifest permission issues without actually providing the needed activity in the package? The code sample:

        ClassLoader cl = new DexClassLoader("/sdcard/mypath/My.apk",
                getFilesDir().getAbsolutePath(), null,
                MainActivity.class.getClassLoader());
        try {
            Class<?> c = cl.loadClass("com.android.my.path.to.a.loaded.activity");
            Intent i = new Intent(getBaseContext(), c);
            startActivity(i);
        } catch (Exception e) {
        }

    Instead of launching the com.android.my.path.to.a.loaded.activity specified in /sdcard/mypath/My.apk, it launches the activity statically loaded into the project.
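    One commonly suggested workaround, sketched here under the assumption that the real goal is simply to run external code: startActivity always resolves the class against the manifest-registered application loader, so invoke the dynamically loaded class reflectively instead of launching it as an Activity (the "run" entry point below is a hypothetical convention, not an Android API):

        // Sketch: call into dynamically loaded code directly via reflection.
        DexClassLoader loader = new DexClassLoader(
                "/sdcard/mypath/My.apk",             // external code source
                getFilesDir().getAbsolutePath(),     // writable dir for the dex cache
                null,
                getClassLoader());
        try {
            Class<?> loaded = loader.loadClass("com.android.my.path.to.a.loaded.activity");
            Object instance = loaded.newInstance();
            // "run" is an agreed-upon, hypothetical entry point on the loaded class.
            loaded.getMethod("run", Context.class).invoke(instance, this);
        } catch (Exception e) {
            Log.e("DynamicLoad", "Could not load or invoke external class", e);
        }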

    Read the article

  • Cyclic Reference - protocols and subclasses

    - by blindJesse
    I'm getting some cyclic reference (I think) problems between a few classes that require imported headers due to either subclassing or protocol definitions. I can explain why things are set up this way but I'm not sure it's essential. Basically these classes are managing reciprocal to-many data relationships. The layout is this:

    - Class A imports Class B because it's a delegate of Class B and needs its protocol definition.
    - Class B imports Class C because it's a subclass of Class C.
    - Class C imports Class A because it's a delegate of Class A and needs its protocol definition.

    Here's some sample code that illustrates the problem. The errors I'm getting are as follows: in Class A, "Can't find protocol definition for Class_B_Delegate"; in Class B, "Can't find interface declaration for Class C - superclass of Class B"; in Class C, "Can't find protocol definition for Class_A_Delegate".

    Class A header:

        #import <Foundation/Foundation.h>
        #import "Class_B.h"

        @protocol Class_A_Delegate
        @end

        @interface Class_A : NSObject <Class_B_Delegate> {
        }
        @end

    Class B header:

        #import <Foundation/Foundation.h>
        #import "Class_C.h"

        @protocol Class_B_Delegate <NSObject>
        @end

        @interface Class_B : Class_C {
        }
        @end

    Class C header:

        #import <Foundation/Foundation.h>
        #import "Class_A.h"

        @interface Class_C : NSObject <Class_A_Delegate> {
        }
        @end
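    A standard way to break such an import cycle, sketched here (the file names are illustrative): move each protocol into its own small header that imports nothing but Foundation, so the class headers no longer need to import one another.

        // Class_A_Delegate.h -- protocol only, imports no class headers, so no cycle.
        #import <Foundation/Foundation.h>
        @protocol Class_A_Delegate <NSObject>
        @end

        // Class_B_Delegate.h -- likewise stands alone.
        #import <Foundation/Foundation.h>
        @protocol Class_B_Delegate <NSObject>
        @end

        // Class_A.h now imports only the tiny protocol header it conforms to.
        #import <Foundation/Foundation.h>
        #import "Class_B_Delegate.h"

        @interface Class_A : NSObject <Class_B_Delegate>
        @end

        // Class_C.h likewise imports only the protocol header, not Class_A.h,
        // which breaks the A -> B -> C -> A cycle. Class_B.h can keep importing
        // Class_C.h, which it genuinely needs for subclassing.
        #import <Foundation/Foundation.h>
        #import "Class_A_Delegate.h"

        @interface Class_C : NSObject <Class_A_Delegate>
        @end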

    Read the article

  • c++ function scope

    - by Myx
    I have a main function in A.cpp which has the following relevant two lines of code:

        B definition(input_file);
        definition.Print();

    In B.h I have the following relevant lines of code:

        class B
        {
        public:
            // Constructors
            B(void);
            B(const char *filename);
            ~B(void);

            // File input
            int ParseLSFile(const char *filename);

            // Debugging
            void Print(void);

            // Data
            int var1;
            double var2;
            vector<char*> var3;
            map<char*, vector<char*> > var4;
        };

    In B.cpp, I have the following function signatures (sorry for being redundant):

        B::B(void)
            : var1(-1), var2(numeric_limits<double>::infinity())
        {
        }

        B::B(const char *filename)
        {
            B *def = new B();
            def->ParseLSFile(filename);
        }

        B::~B(void)
        {
            // Free memory for var3 and var4
        }

        int B::ParseLSFile(const char *filename)
        {
            // assign var1, var2, var3, and var4 values
        }

        void B::Print(void)
        {
            // print contents of var1, var2, var3, and var4 to stdout
        }

    When I call Print() from within B::ParseLSFile(...), the contents of my structures print correctly to stdout. However, when I call definition.Print() from A.cpp, my structures are empty or contain garbage. Can anyone recommend the correct way to initialize/pass my structures so that I can access them outside of the scope of my function definition? Thanks.
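    The constructor taking a filename parses the file into a separate, heap-allocated B that is then leaked, so the members of the object actually being constructed are never assigned. A minimal sketch of the likely intent (assuming <limits> is included, as the default constructor already requires):

        // Sketch: parse into the object under construction, not a throwaway copy.
        B::B(const char *filename)
            : var1(-1), var2(std::numeric_limits<double>::infinity())
        {
            ParseLSFile(filename);   // fills this object's var1..var4
        }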

    Read the article

  • Using StructureMap to create classes by a name?

    - by Bevan
    How can I use StructureMap to resolve to an appropriate implementation of an interface based on a name stored in an attribute?

    In my project, I have many different kinds of widgets, each descending from IWidget, and each decorated with an attribute specifying the kind of associated element. To illustrate:

        [Configuration("header")]
        public class HeaderWidget : IWidget { }

        [Configuration("linegraph")]
        public class LineGraphWidget : IWidget { }

    When processing my (XML) configuration file, I want to obtain an instance of the appropriate concrete class based on the name of the element I'm processing:

        public IWidget CreateWidget(XElement definition)
        {
            var kind = definition.Name.LocalName;
            var widget = // What goes here?
            widget.Configure(definition);
            return widget;
        }

    Each definition should result in a different widget being created - I don't need or want the instances to be shared. In the past I've written plenty of code to do this kind of thing manually, including writing a custom "roll-your-own" IoC container for one project. However, one of my goals with this project is to become proficient with StructureMap instead of reinventing the wheel. I think I've already managed to set up automatic scanning of assemblies so that StructureMap knows about all my IWidget implementations:

        public class WidgetRegistration : Registry
        {
            public WidgetRegistration()
            {
                Scan(scanner =>
                {
                    scanner.AssembliesFromApplicationBaseDirectory();
                    scanner.AddAllTypesOf<IWidget>();
                });
            }
        }

    However, this isn't registering the names of my widgets with StructureMap. What do I need to add to make my scenario work? (While I am trying to use StructureMap in this project, an answer showing me how to solve this problem with a different DI/IoC tool would still be valuable.)
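    A hedged sketch of the resolution side, from memory of the StructureMap 2.x API (the exact registration call for naming instances varies by version, so treat this as an outline rather than a verified answer): once each implementation is registered as a named instance under its [Configuration] value, the factory method becomes a named lookup.

        // Sketch: resolve the widget by the XML element's local name,
        // assuming each IWidget was registered under that name.
        public IWidget CreateWidget(XElement definition)
        {
            var kind = definition.Name.LocalName;
            var widget = ObjectFactory.GetNamedInstance<IWidget>(kind);
            widget.Configure(definition);
            return widget;
        }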

    Read the article

  • Type errors when using same name

    - by lykimq
    I have 3 files:

    1) cpf0.ml

        type string = char list
        type url = string
        type var = string
        type name = string
        type symbol =
        | Symbol_name of name

    2) problem.ml

        type symbol =
        | Ident of string

    3) test.ml

        open Problem;;
        open Cpf0;;

        let symbol b = function
        | Symbol_name n -> Ident n

    When I compile test.ml (ocamlc -c test.ml), I receive an error:

        This expression has type Cpf0.name = char list
        but an expression was expected of type string

    Could you please help me to correct it? Thank you very much.

    EDIT: Thank you for your answer. I want to explain more about these 3 files, because I am working with extraction from Coq to OCaml. cpf0.ml is generated from cpf.v:

        Require Import String.

        Definition string := string.
        Definition name := string.

        Inductive symbol :=
        | Symbol_name : name -> symbol.

    The extraction code (extraction.v):

        Set Extraction Optimize.
        Extraction Language Ocaml.
        Require ExtrOcamlBasic ExtrOcamlString.
        Extraction Blacklist cpf list.

    where ExtrOcamlString extracts Coq strings to char list. I opened Cpf0 (open Cpf0;;) in problem.ml, and I got a new problem because problem.ml then sees another definition for type string:

        This expression has type Cpf0.string = char list
        but an expression was expected of type Util.StrSet.elt = string

    Here is the definition in util.ml that uses type string:

        module Str = struct type t = string end;;
        module StrOrd = Ord.Make (Str);;
        module StrSet = Set.Make (StrOrd);;
        module StrMap = Map.Make (StrOrd);;

        let set_add_chk x s =
          if StrSet.mem x s then failwith (x ^ " already declared")
          else StrSet.add x s;;

    I was trying to change t = string to t = char list, but if I do that I have to change a lot of functions that depend on it (for example, set_add_chk above). Could you please give me a good idea of how I would do this in this case?
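    A common way out, sketched here (implode is a conventional helper name, not something in the posted files): convert the extracted char list values to native OCaml strings at the boundary, rather than rewriting util.ml's string-based modules.

        (* Sketch: bridge Coq-extracted strings (char list) to native strings. *)
        let implode (cs : char list) : string =
          let buf = Buffer.create (List.length cs) in
          List.iter (Buffer.add_char buf) cs;
          Buffer.contents buf

        (* test.ml can then convert at the point of use: *)
        let symbol b = function
          | Cpf0.Symbol_name n -> Problem.Ident (implode n)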

    Read the article

  • How to handle build rule with unknown targets in OMake when target list generator is built

    - by Michael E
    I have a project which uses OMake for its build system, and I am trying to handle a rather tough corner case. I have some definition files and a tool which can take these definition files and create GraphViz files. There are two problems, though:

    1. Each definition file can produce multiple graphs, and the list of graphs it can produce is encoded in the file. My dump tool does have a -list option which lists all the graphs a definition file will produce.
    2. The dump tool itself is built in the source tree.

    I want this list available in the OMakefile so I can use other rules to convert the DOT files to SVG, and have a phony target depend on all the SVGs (goal: a single build command which builds SVG descriptions of all my graphs). If I only had the first problem, it would be easy - I would run the tool to build a list, and then use that list to build a target which invokes the dumper to output the GraphViz files. However, I am rather stuck on forcing the dump tool to be built before it is needed. If this were make, I would just run make recursively to build the dump tool. OMake does not allow recursive invocation, however, and the build function is only usable from osh. Any suggestions for a good solution to this problem?

    Read the article

  • Grab two parts of a single, short string

    - by TankorSmash
    I'm looking to fill a Python dict with TAG:definition pairs, and I'm using RegExr (http://gskinner.com/RegExr/) to write the regex. My first step is to parse a line, from http://www.id3.org/id3v2.3.0 or http://pastebin.com/VJEBGauL, and pull out the ID3 tag and the associated definition. For example, the first line:

        4.20    AENC    [#sec4.20 Audio encryption]

    would look like this:

        myDict = {'AENC': 'Audio encryption'}

    To grab the tag name, I've got it looking for at least 3 spaces, then 4 characters, then 4 spaces:

        {3}[a-zA-Z0-9]{4} {4}

    That part is easy enough. The second part, the definition, is not working out for me. So far, I've got:

        (?<=(\[#.+?)) A

    which should find, but not include, the [# as well as an indeterminate run of characters, until it finds "_A", but it's failing. If I remove the .+? and replace "_A" with s, it works out alright. What is going wrong?* How do I grab the definition, i.e. (Audio encryption), of the ID3v2 tag from the line, using regex?

    *The underscores represent spaces, which don't show up on SO.
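    Lookbehind in most regex engines must be fixed-width, which is why the variable-length (?<=(\[#.+?)) attempt fails. A hedged sketch in Python that sidesteps lookbehind by capturing both pieces in one pattern (the exact spacing of the sample line is assumed from the spec document):

        import re

        line = "4.20    AENC    [#sec4.20 Audio encryption]"

        # One pattern: capture the 4-character tag, then the text after the
        # "[#secX.Y " prefix up to the closing bracket.
        pattern = re.compile(r'\s+([A-Z0-9]{4})\s+\[#\S+ ([^\]]+)\]')

        my_dict = {}
        m = pattern.search(line)
        if m:
            tag, definition = m.groups()
            my_dict[tag] = definition   # {'AENC': 'Audio encryption'}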

    Read the article

  • A simple Dynamic Proxy

    - by Abhijeet Patel
    Frameworks such as EF4 and MOQ do what most developers consider "dark magic". For instance in EF4, when you use a POCO for an entity you can opt in to behaviors such as "lazy-loading" and "change tracking" at runtime merely by ensuring that your type has the following characteristics:

    - The class must be public and not sealed.
    - The class must have a public or protected parameterless constructor.
    - The class must have public or protected properties.

    Adhere to this and your type is magically endowed with these behaviors without any additional programming on your part. Behind the scenes the framework subclasses your type at runtime and creates a "dynamic proxy" which has these additional behaviors, and when you navigate properties of your POCO, the framework replaces the POCO type with derived type instances.

    The MOQ framework does similar magic. Let's say you have a simple interface:

        public interface IFoo
        {
            int GetNum();
        }

    We can verify that GetNum() was invoked on a mock like so:

        var mock = new Mock<IFoo>(MockBehavior.Default);
        mock.Setup(f => f.GetNum());
        var num = mock.Object.GetNum();
        mock.Verify(f => f.GetNum());

    Behind the scenes the MOQ framework is generating a dynamic proxy by implementing IFoo at runtime. The call to mock.Object returns the dynamic proxy, on which we then call GetNum, and we then verify that this method was invoked. No dark magic at all; just clever programming is what's going on here, just not visible, and hence it appears magical!

    Let's create a simple dynamic proxy generator which accepts an interface type and dynamically creates a proxy implementing the interface type specified at runtime:

        public static class DynamicProxyGenerator
        {
            public static T GetInstanceFor<T>()
            {
                Type typeOfT = typeof(T);
                var methodInfos = typeOfT.GetMethods();
                AssemblyName assName = new AssemblyName("testAssembly");
                var assBuilder = AppDomain.CurrentDomain.DefineDynamicAssembly(assName, AssemblyBuilderAccess.RunAndSave);
                var moduleBuilder = assBuilder.DefineDynamicModule("testModule", "test.dll");
                var typeBuilder = moduleBuilder.DefineType(typeOfT.Name + "Proxy", TypeAttributes.Public);

                typeBuilder.AddInterfaceImplementation(typeOfT);
                var ctorBuilder = typeBuilder.DefineConstructor(
                        MethodAttributes.Public,
                        CallingConventions.Standard,
                        new Type[] { });
                var ilGenerator = ctorBuilder.GetILGenerator();
                ilGenerator.EmitWriteLine("Creating Proxy instance");
                ilGenerator.Emit(OpCodes.Ret);

                foreach (var methodInfo in methodInfos)
                {
                    var methodBuilder = typeBuilder.DefineMethod(
                        methodInfo.Name,
                        MethodAttributes.Public | MethodAttributes.Virtual,
                        methodInfo.ReturnType,
                        methodInfo.GetParameters().Select(p => p.ParameterType).ToArray()
                        );
                    var methodILGen = methodBuilder.GetILGenerator();
                    methodILGen.EmitWriteLine("I'm a proxy");
                    if (methodInfo.ReturnType == typeof(void))
                    {
                        methodILGen.Emit(OpCodes.Ret);
                    }
                    else
                    {
                        if (methodInfo.ReturnType.IsValueType || methodInfo.ReturnType.IsEnum)
                        {
                            MethodInfo getMethod = typeof(Activator).GetMethod("CreateInstance", new Type[] { typeof(Type) });
                            LocalBuilder lb = methodILGen.DeclareLocal(methodInfo.ReturnType);
                            methodILGen.Emit(OpCodes.Ldtoken, lb.LocalType);
                            methodILGen.Emit(OpCodes.Call, typeof(Type).GetMethod("GetTypeFromHandle"));
                            methodILGen.Emit(OpCodes.Callvirt, getMethod);
                            methodILGen.Emit(OpCodes.Unbox_Any, lb.LocalType);
                        }
                        else
                        {
                            methodILGen.Emit(OpCodes.Ldnull);
                        }
                        methodILGen.Emit(OpCodes.Ret);
                    }
                    typeBuilder.DefineMethodOverride(methodBuilder, methodInfo);
                }

                Type constructedType = typeBuilder.CreateType();
                var instance = Activator.CreateInstance(constructedType);
                return (T)instance;
            }
        }

    Dynamic proxies are created by calling into the following main types: AssemblyBuilder, TypeBuilder, ModuleBuilder and ILGenerator. These types enable dynamically creating an assembly and emitting .NET modules and types in that assembly, all using IL instructions. Let's break down the code above a bit and examine it piece by piece.

        Type typeOfT = typeof(T);
        var methodInfos = typeOfT.GetMethods();
        AssemblyName assName = new AssemblyName("testAssembly");
        var assBuilder = AppDomain.CurrentDomain.DefineDynamicAssembly(assName, AssemblyBuilderAccess.RunAndSave);
        var moduleBuilder = assBuilder.DefineDynamicModule("testModule", "test.dll");
        var typeBuilder = moduleBuilder.DefineType(typeOfT.Name + "Proxy", TypeAttributes.Public);

    We are instructing the runtime to create an assembly called "test.dll", and in this assembly we then emit a new module called "testModule". We then emit a new type definition, named "<typeName>Proxy", into this new module. This is the definition for the "dynamic proxy" for type T.

        typeBuilder.AddInterfaceImplementation(typeOfT);
        var ctorBuilder = typeBuilder.DefineConstructor(
                MethodAttributes.Public,
                CallingConventions.Standard,
                new Type[] { });
        var ilGenerator = ctorBuilder.GetILGenerator();
        ilGenerator.EmitWriteLine("Creating Proxy instance");
        ilGenerator.Emit(OpCodes.Ret);

    The newly created type implements type T and defines a default parameterless constructor in which we emit a call to Console.WriteLine. This call is not necessary, but we do it so that we can see first hand when the proxy is constructed and our default constructor is invoked.

        var methodBuilder = typeBuilder.DefineMethod(
            methodInfo.Name,
            MethodAttributes.Public | MethodAttributes.Virtual,
            methodInfo.ReturnType,
            methodInfo.GetParameters().Select(p => p.ParameterType).ToArray()
            );

    We then iterate over each method declared on type T and add a method definition of the same name into our "dynamic proxy" definition.

        if (methodInfo.ReturnType == typeof(void))
        {
            methodILGen.Emit(OpCodes.Ret);
        }

    If the return type specified in the method declaration of T is void, we simply return.

        if (methodInfo.ReturnType.IsValueType || methodInfo.ReturnType.IsEnum)
        {
            MethodInfo getMethod = typeof(Activator).GetMethod("CreateInstance",
                                                               new Type[] { typeof(Type) });
            LocalBuilder lb = methodILGen.DeclareLocal(methodInfo.ReturnType);
            methodILGen.Emit(OpCodes.Ldtoken, lb.LocalType);
            methodILGen.Emit(OpCodes.Call, typeof(Type).GetMethod("GetTypeFromHandle"));
            methodILGen.Emit(OpCodes.Callvirt, getMethod);
            methodILGen.Emit(OpCodes.Unbox_Any, lb.LocalType);
        }

    If the return type in the method declaration of T is either a value type or an enum, then we need to create an instance of the value type and return that instance to the caller. In order to accomplish that we need to do the following:

    1. Get a handle to the Activator.CreateInstance method.
    2. Declare a local variable which represents the Type object of the return type specified on the method declaration of T (obtained from the MethodInfo), and push this Type object onto the evaluation stack. In reality a RuntimeTypeHandle is what is pushed onto the stack.
    3. Invoke the GetTypeFromHandle method (a static method in the Type class), passing in the RuntimeTypeHandle pushed onto the stack previously as an argument. The result of this invocation is a Type object (representing the method's return type), which is pushed onto the top of the evaluation stack.
    4. Invoke Activator.CreateInstance, passing in the Type object from step 3. The result of this invocation is an instance of the value type, boxed as a reference type and pushed onto the top of the evaluation stack.
    5. Unbox the result and place it into the local variable of the return type defined in step 2.

        methodILGen.Emit(OpCodes.Ldnull);

    If the return type is a reference type, then we just load a null onto the evaluation stack.

        methodILGen.Emit(OpCodes.Ret);

    Emit a return statement to return whatever is on top of the evaluation stack (null or an instance of a value type) back to the caller.

        Type constructedType = typeBuilder.CreateType();
        var instance = Activator.CreateInstance(constructedType);
        return (T)instance;

    Now that we have a definition of the "dynamic proxy" implementing all the methods declared on T, we can create an instance of the proxy type and return it, typed as T. The caller can now invoke the generator and request a dynamic proxy for any type T. In our example, when the client invokes GetNum() we get back "0". Let's add a new method on the interface called DayOfWeek GetDay():

        public interface IFoo
        {
            int GetNum();
            DayOfWeek GetDay();
        }

    When GetDay() is invoked, the "dynamic proxy" returns "Sunday", since that is the default value for the DayOfWeek enum. This is a very trivial example of dynamic proxies; frameworks like MOQ have a far more sophisticated implementation of this paradigm, wherein you can instruct the framework to create proxies which return specified values for a method implementation.
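    To make the round trip concrete, a short usage sketch based on the behavior the article describes (the console output comes from the EmitWriteLine calls above):

        var foo = DynamicProxyGenerator.GetInstanceFor<IFoo>();
        // The generated constructor printed "Creating Proxy instance".
        int num = foo.GetNum();          // prints "I'm a proxy", returns 0
        DayOfWeek day = foo.GetDay();    // prints "I'm a proxy", returns DayOfWeek.Sunday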

    Read the article

  • Curing the Database-Application mismatch

    - by Phil Factor
    If an application requires access to a database, then you have to be able to deploy it so as to be version-compatible with the database, in phase. If you can deploy both together, then the application and database must normally be deployed at the same version in which they, together, passed integration and functional testing. When a single database supports more than one application, then the problem gets more interesting.

    I'll need to be more precise here. It is actually the application-interface definition of the database that needs to be in a compatible 'version'. Most databases that get into production have no separate application-interface; in other words they are 'close-coupled'. For this vast majority, the whole database is the application-interface, and applications are free to wander through the bowels of the database scot-free. If you've spurned the perceived wisdom of application architects to have a defined application-interface within the database that is based on views and stored procedures, any version-mismatch will be as sensitive as a kitten. A team that creates an application that makes direct access to base tables in a database will have to put a lot of energy into keeping Database and Application in sync, to say nothing of having to tackle issues such as security and audit. It is not the obvious route to development nirvana.

    I've been in countless tense meetings with application developers who initially bridle instinctively at the apparent restrictions of being 'banned' from the base tables or routines of a database. There is no good technical reason for needing that sort of access that I've ever come across. Everything that the application wants can be delivered via a set of views and procedures, and with far less pain for all concerned: this is the application-interface. If more than zero developers are creating a database-driven application, then the project will benefit from the loose-coupling that an application interface brings. What is important here is that the database development role is separated from the application development role, even if it is the same developer performing both roles.

    The idea of an application-interface with a database is as old as I can remember. The big corporate or government databases generally supported several applications, and there was little option. When a new application wanted access to an existing corporate database, the developers, and myself as technical architect, would have to meet with hatchet-faced DBAs and production staff to work out an interface. Sure, they would talk up the effort involved for budgetary reasons, but it was routine work, because it decoupled the database from its supporting applications. We'd be given our own stored procedures. One of them, I still remember, had ninety-two parameters. All database access was encapsulated in one application-module.

    If you have a stable, defined application-interface with the database (yes, one for each application, usually), you need to keep the external definitions of the components of this interface in version control, linked with the application source, and carefully track and negotiate any changes between database developers and application developers. Essentially, the application development team owns the interface definition, and the onus is on the database developers to implement it and maintain it, in conformance. Internally, the database can then make all sorts of changes and refactoring, as long as source control is maintained.

    If the application interface passes all the comprehensive integration and functional tests for the particular version they were designed for, nothing is broken. Your performance-testing can 'hang' on the same interface, since databases are judged on the performance of the application, not an 'internal' database process. The database developers have responsibility for maintaining the application-interface, but not its definition, as they refactor the database. This is easily tested on a daily basis since the tests are normally automated. In this setting, the deployment can proceed if the more stable application-interface, rather than the continuously-changing database, passes all tests for the version of the application.

    Normally, if all goes well, a database with a well-designed application interface can evolve gracefully without changing the external appearance of the interface, and this is confirmed by integration tests that check the interface, and which hopefully don't need to be altered at all often. If the application is rapidly changing its 'domain model' in the light of an increased understanding of the application domain, then it can change the interface definitions, and the database developers need only implement the interface rather than refactor the underlying database. The test team will also have to redo the functional and integration tests which are, of course, 'written to' the definition. The database developers will find it easier if these tests are done before their re-wiring job to implement the new interface.

    If, at the other extreme, an application receives no further development work but survives unchanged, the database can continue to change and develop to keep pace with the requirements of the other applications it supports, and needs only to take care that the application interface is never broken. Testing is easy since your automated scripts to test the interface do not need to change.

    The database developers will, of course, maintain their own source control for the database, and will be likely to maintain versions for all major releases. However, this will not need to be shared with the applications that the database serves. On the other hand, the definition of the application interfaces should be within the application source. Changes in it have to be subject to change-control procedures, as they will require a chain of tests.

    Once you allow, instead of an application-interface, an intimate relationship between application and database, we are in the realms of impedance mismatch, over and above the obvious security problems. Part of this impedance problem is a difference in development practices. Whereas the application has to be regularly built and integrated, this isn't necessarily the case with the database. An RDBMS is inherently multi-user and self-integrating. If the developers work together on the database, then a subsequent integration of the database on a staging server doesn't often bring nasty surprises. A separate database-integration process is only needed if the database is deliberately built in a way that mimics the application development process, but which hampers the normal database-development techniques. This process is like demanding an official walking with a red flag in front of a motor car. In order to closely coordinate databases with applications, entire databases have to be 'versioned', so that an application version can be matched with a database version to produce a working build without errors.

    There is no natural process to 'version' databases. Each development project will have to define a system for maintaining the version level. A curious paradox occurs in development when there is no formal application-interface. When the strains and cracks happen, the extra meetings, bureaucracy, and activity required to maintain accurate deployments looks to IT management like work. They see activity, and it looks good. Work means progress. Management then smile on the design choices made. In IT, good design work doesn't necessarily look good, and vice versa.

    Read the article

  • _default_ VirtualHost overlap on port 443, the first has precedence

    - by Mohit Jain
    I have two Ruby on Rails 3 applications running on the same server (Ubuntu 10.04), both with SSL. Here is my Apache config file:

        <VirtualHost *:80>
            ServerName example1.com
            DocumentRoot /home/me/example1/production/current/public
        </VirtualHost>

        <VirtualHost *:443>
            ServerName example1.com
            DocumentRoot /home/me/example1/production/current/public
            SSLEngine on
            SSLCertificateFile /home/me/example1/production/shared/example1.crt
            SSLCertificateKeyFile /home/me/example1/production/shared/example1.key
            SSLCertificateChainFile /home/me/example1/production/shared/gd_bundle.crt
            SSLProtocol -all +TLSv1 +SSLv3
            SSLCipherSuite HIGH:MEDIUM:!aNULL:+SHA1:+MD5:+HIGH:+MEDIUM
        </VirtualHost>

        <VirtualHost *:80>
            ServerName example2.com
            DocumentRoot /home/me/example2/production/current/public
        </VirtualHost>

        <VirtualHost *:443>
            ServerName example2.com
            DocumentRoot /home/me/example2/production/current/public
            SSLEngine on
            SSLCertificateFile /home/me/example2/production/shared/iwanto.crt
            SSLCertificateKeyFile /home/me/example2/production/shared/iwanto.key
            SSLCertificateChainFile /home/me/example2/production/shared/gd_bundle.crt
            SSLProtocol -all +TLSv1 +SSLv3
            SSLCipherSuite HIGH:MEDIUM:!aNULL:+SHA1:+MD5:+HIGH:+MEDIUM
        </VirtualHost>

    What's the issue: on restarting my server it gives me some output like this:

        * Restarting web server apache2
        [Sun Jun 17 17:57:49 2012] [warn] _default_ VirtualHost overlap on port 443, the first has precedence
        ... waiting [Sun Jun 17 17:57:50 2012] [warn] _default_ VirtualHost overlap on port 443, the first has precedence

    On googling why this issue comes up, I found something like this:

        You cannot use name based virtual hosts with SSL because the SSL handshake (when the browser accepts the secure Web server's certificate) occurs before the HTTP request, which identifies the appropriate name based virtual host. If you plan to use name-based virtual hosts, remember that they only work with your non-secure Web server.

    But I am not able to figure out how to run two SSL applications on the same server. Can anyone help me?
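    A hedged sketch of the usual fix on Apache 2.2 (it assumes the server's OpenSSL and the visiting browsers support SNI, which lets name-based virtual hosts work over TLS; the quoted advice predates SNI): declare name-based virtual hosting for port 443 explicitly, so the two blocks are no longer treated as overlapping _default_ hosts.

        # Sketch: enable name-based virtual hosting on port 443 (Apache 2.2).
        NameVirtualHost *:443

        <VirtualHost *:443>
            ServerName example1.com
            # ... SSL directives for example1 as above ...
        </VirtualHost>

        <VirtualHost *:443>
            ServerName example2.com
            # ... SSL directives for example2 as above ...
        </VirtualHost>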

    Read the article

  • Add Background Images and Themes to Windows 7 Media Center

    - by DigitalGeekery
    Are you tired of the same Windows Media Center look and feel? Today we'll show you how to change the background and apply themes to WMC.

    Changing the Basic Color Scheme in WMC

    There are a couple of very basic color scheme options built in to Windows 7 Media Center. From the WMC Start Menu, select Settings on the Tasks strip and then select General. On the General settings screen, select Visual and Sound Effects. Under Color scheme you'll find options for Windows Media Center standard, High contrast white, and High contrast black. Simply select a color scheme and click Save before exiting. If you have used Media Center before, you are familiar with the standard blue default theme. There is also the high contrast white, and the high contrast black.

    Changing the Background Image with Media Center Studio

    Themes and custom backgrounds need to be added with the third-party software Media Center Studio. You can find the download link at the end of this article. You can use your own high resolution photo, or download one from the Internet. For best results, you'll want to find an image that meets or exceeds the resolution of your monitor. Also, using a darker colored background image is ideal, as it should contrast better with the lighter colored text of the start menu.

    Once you've downloaded and installed Media Center Studio (link below), open the application, select the Home tab on the ribbon, and make sure you are on the Themes tab below. Click New. Select Biography from the left pane and type in a name for your new theme. Next, click on the triangle next to Images to expand the list below. You'll want to browse to Images > Common > Background. You should see a list of PNG image files located below Background. We will want to swap out the COMMON.ANIMATED.BACKGROUND.PNG and the COMMON.BACKGROUND.PNG images.

    Select COMMON.ANIMATED.BACKGROUND.PNG and click on the Browse button on the right. Browse for your photo and click Open. Your selected image will appear on the left pane. Now, do the same for the COMMON.BACKGROUND.PNG. When finished, select the Home tab on the ribbon at the top and click Save.

    Now switch to the Themes tab on the ribbon and the Themes tab below. (There are two Themes tabs, which can be a bit confusing.) Select your theme on the right pane and click Apply. Note: you won't see the image backgrounds displayed; your theme will be applied to Media Center. Close out of Media Center Studio and open Windows Media Center to check out your new background.

    You can load multiple background images and switch them periodically as your mood changes. You might like to find a nice background featuring your favorite movie or TV show, or perhaps even a background of your favorite sports team.

    Installing Themes with Media Center Studio

    Theme7MC has made available a small group of Media Center Studio theme packs that are simple to download and install. You can find the download link below. Note: before installing a theme, turn off any extenders and close Windows Media Center. Download any (or all) of the Theme7MC theme packages to your Media Center PC. Open Media Center Studio, select the Themes tab (the one at the top) and click Import Theme. Browse for the theme you wish to import and click Open. Select your theme from the themes pane and click Apply. Media Center Studio will proceed to apply your theme. You should then see your new theme appear under Current theme on the left theme pane. Close out of Media Center Studio. Open Media Center and enjoy your new theme.

    Conclusion

    Media Center Studio runs on Windows 7 or Vista and gives users a solution for personalizing their Media Center backgrounds. It is a Beta application, however, so it still has a few bugs. Currently, there are only a handful of themes available at Theme7MC, but what they have is pretty slick. If you'd like to further customize the look of Media Center, check out our previous article on how to customize the Media Center start menu with Media Center Studio.

    Downloads

    Media Center Studio
    Theme7MC

    Read the article

  • AngularJS on top of ASP.NET: Moving the MVC framework out to the browser

    - by Varun Chatterji
    Heavily drawing inspiration from Ruby on Rails, MVC4's convention-over-configuration model of development soon became the Holy Grail of .NET web development. The MVC model brought with it the goodness of proper separation of concerns between business logic, data, and the presentation logic.

    However, the MVC paradigm was still one in which server side .NET code could be mixed with presentation code. The Razor templating engine, though cleaner than its predecessors, still encouraged and allowed you to mix .NET server side code with presentation logic. Thus, for example, if the developer required a certain <div> tag to be shown if a particular variable ShowDiv was true in the View's model, the code could look like the following:

    Fig 1: To show a div or not. Server side .NET code is used in the View

    Mixing .NET code with HTML in views can soon get very messy. Wouldn't it be nice if the presentation layer (HTML) could be pure HTML? Also, in the ASP.NET MVC model, some of the business logic invariably resides in the controller. It is tempting to use an anti-pattern like the one shown above to control whether a div should be shown or not. However, best practice would indicate that the Controller should not be aware of the div. The ShowDiv variable in the model should not exist. A controller should ideally only be used to do the plumbing of getting the data populated in the model and nothing else. The view (ideally pure HTML) should render the presentation layer based on the model.

    In this article we will see how AngularJS, a new JavaScript framework by Google, can be used effectively to build web applications where:

    1. Views are pure HTML
    2. Controllers (in the server sense) are pure REST based API calls
    3. The presentation layer is loaded as needed from partial HTML only files

    What is MVVM?

    MVVM, short for Model View View Model, is a new paradigm in web development. In this paradigm, the Model and View stuff exists on the client side through JavaScript instead of being processed on the server through postbacks. These frameworks are JavaScript frameworks that facilitate the clear separation of the "frontend", or the data rendering logic, from the "backend", which is typically just a REST based API that loads and processes data through a resource model. The frameworks are called MVVM because a change to the Model (through JavaScript) gets reflected in the View immediately, i.e. Model > View; and a change on the View (through manual input) gets reflected in the Model immediately, i.e. View > Model. The following figure shows this conceptually (comments are shown in red):

    Fig 2: Demonstration of MVVM in action

    In Fig 2, two text boxes are bound to the same variable model.myInt. Thus, changing the view manually (changing one text box through keyboard input) also changes the other textbox in real time, demonstrating the V > M property of an MVVM framework. Furthermore, clicking the button adds 1 to the value of model.myInt, thus changing the model through JavaScript. This immediately updates the view (the value in the two textboxes), thus demonstrating the M > V property of an MVVM framework. Thus we see that the model in an MVVM JavaScript framework can be regarded as "the single source of truth". This is an important concept. Angular is one such MVVM framework. We shall use it to build a simple app that sends SMS messages to a particular number.

    Application, Routes, Views, Controllers, Scope and Models

    Angular can be used in many ways to construct web applications.
    For this article, we shall only focus on building Single Page Applications (SPAs). Many of the approaches we will follow in this article have alternatives. It is beyond the scope of this article to explain every nuance in detail, but we shall try to touch upon the basic concepts and end up with a working application that can be used to send SMS messages using Sent.ly Plus (a service that is itself built using Angular).

    Before you read on, we would like to urge you to forget what you know about Models, Views, Controllers and Routes in the ASP.NET MVC4 framework. All these words have different meanings in the Angular world. Whenever these words are used in this article, they will refer to Angular concepts and not ASP.NET MVC4 concepts. The following figure shows the skeleton of the root page of an SPA:

    Fig 3: The skeleton of a SPA

    The skeleton of the application is based on the Bootstrap starter template, which can be found at http://getbootstrap.com/examples/starter-template/

    Apart from loading the Angular, jQuery and Bootstrap JavaScript libraries, it also loads our custom scripts:

        /app/js/controllers.js
        /app/js/app.js

    These scripts define the routes, views and controllers, which we shall come to in a moment.

    Application

    Notice that the body tag (Fig 3) has an extra attribute:

        ng-app="smsApp"

    Providing this tag "bootstraps" our single page application. It tells Angular to load a "module" called smsApp. This "module" is defined in /app/js/app.js:

        angular.module('smsApp', ['smsApp.controllers', function () {}])

    Fig 4: The definition of our application module

    The line shown above declares a module called smsApp. It also declares that this module "depends" on another module called "smsApp.controllers". The smsApp.controllers module will contain all the controllers for our SPA.

    Routing and Views

    Notice that in the navbar (in Fig 3) we have included two hyperlinks, to "#/app" and "#/help". This is how Angular handles routing. Since the URLs start with "#", they are actually just bookmarks (and not server side resources). However, our route definition (in /app/js/app.js) gives these URLs a special meaning within the Angular framework.

        angular.module('smsApp', ['smsApp.controllers', function () { }])
        //Configure the routes
        .config(['$routeProvider', function ($routeProvider) {
            $routeProvider.when('/binding', {
                templateUrl: '/app/partials/bindingexample.html',
                controller: 'BindingController'
            });
        }]);

    Fig 5: The definition of a route with an associated partial view and controller

    As we can see from the previous code sample, we are using the $routeProvider object in the configuration of our smsApp module. Notice how the code "asks for" the $routeProvider object by specifying it as a dependency in the [] braces and then defining a function that accepts it as a parameter. This is known as dependency injection. Please refer to the following link if you want to delve into this topic: http://docs.angularjs.org/guide/di

    What the above code snippet is doing is telling Angular that when the URL is "#/binding", it should load the HTML snippet ("partial view") found at /app/partials/bindingexample.html. Also, for this URL, Angular should load the controller called "BindingController". We have also marked the div with the class "container" (in Fig 3) with the ng-view attribute. This attribute tells Angular that views (partial HTML pages) defined in the routes will be loaded within this div.
    You can see that the Angular JavaScript framework, unlike many other frameworks, works purely by extending HTML tags and attributes. It also allows you to extend HTML with your own tags and attributes (through directives) if you so desire; you can find out more about directives at the following URL: http://www.codeproject.com/Articles/607873/Extending-HTML-with-AngularJS-Directives

    Controllers and Models

    We have seen how we define what views and controllers should be loaded for a particular route. Let us now consider how controllers are defined. Our controllers are defined in the file /app/js/controllers.js. The following snippet shows the definition of the "BindingController", which is loaded when we hit the URL http://localhost:port/index.html#/binding (as we have defined in the route earlier, shown in Fig 5). Remember that we had defined that our application module "smsApp" depends on the "smsApp.controllers" module (see Fig 4). The code snippet below shows how the "BindingController" defined in the route shown in Fig 5 is defined in the module smsApp.controllers:

        angular.module('smsApp.controllers', [function () { }])
        .controller('BindingController', ['$scope', function ($scope) {
            $scope.model = {};
            $scope.model.myInt = 6;
            $scope.addOne = function () {
                $scope.model.myInt++;
            }
        }]);

    Fig 6: The definition of a controller in the "smsApp.controllers" module.

    The pieces are falling in place! Remember Fig 2? That was the code of a partial view that was loaded within the container div of the skeleton SPA shown in Fig 3. The route definition shown in Fig 5 also defined that the controller called "BindingController" (shown in Fig 6) was loaded when we loaded the URL http://localhost:22544/index.html#/binding. The button in Fig 2 was marked with the attribute ng-click="addOne()", which added 1 to the value of model.myInt. In Fig 6, we can see that this function is actually defined in the "BindingController".

    Scope

    We can see from Fig 6 that in the definition of "BindingController", we defined a dependency on $scope and then, as usual, defined a function which "asks for" $scope as per the dependency injection pattern. So what is $scope? As you might have guessed, a scope is a particular "address space" where variables and functions may be defined. This has a similar meaning to scope in a programming language like C#.

    Model: The Scope is not the Model

    It is tempting to assign variables in the scope directly. For example, we could have defined myInt as $scope.myInt = 6 in Fig 6 instead of $scope.model.myInt = 6. The reason why this is a bad idea is that scope is hierarchical in Angular. Thus if we were to define a controller within another controller (nested controllers), then the inner controller would inherit the scope of the parent controller. This inheritance follows JavaScript prototypal inheritance. Let's say the parent controller defined a variable through $scope.myInt = 6. The child controller would inherit the scope through JavaScript prototypal inheritance. This basically means that the child scope has a variable myInt that points to the parent scope's myInt variable. Now if we assigned the value of myInt in the parent, the child scope would be updated with the same value, as the child scope's myInt variable points to the parent scope's myInt variable.
    However, if we were to assign the value of the myInt variable in the child scope, then the link of that variable to the parent scope would be broken, as the variable myInt in the child scope now points to the value 6 and not to the parent scope's myInt variable.

    But if we defined a variable model in the parent scope, then the child scope will also have a variable model that points to the model variable in the parent scope. Updating the value of $scope.model.myInt in the parent scope would change the model variable in the child scope too, as the variable is pointed to the model variable in the parent scope. Now changing the value of $scope.model.myInt in the child scope would ALSO change the value in the parent scope. This is because the model reference in the child scope is pointed to the scope variable in the parent. We did no new assignment to the model variable in the child scope; we only changed an attribute of the model variable. Since the model variable (in the child scope) points to the model variable in the parent scope, we have successfully changed the value of myInt in the parent scope. Thus the value of $scope.model.myInt in the parent scope becomes the "single source of truth".

    This is a tricky concept, thus it is considered good practice to NOT use scope inheritance. More info on prototypal inheritance in Angular can be found in the "JavaScript Prototypal Inheritance" section at the following URL: https://github.com/angular/angular.js/wiki/Understanding-Scopes

    Building It: An AngularJS application using a .NET Web API backend

    Now that we have a perspective on the basic components of an MVVM application built using Angular, let's build something useful. We will build an application that can be used to send out SMS messages to a given phone number. The following diagram describes the architecture of the application we are going to build:

    Fig 7: Broad application architecture

    We are going to add an HTML partial to our project. This partial will contain the form fields that will accept the phone number and message that needs to be sent as an SMS. It will also display all the messages that have previously been sent. All the executable code that is run on the occurrence of events (button clicks etc.) in the view resides in the controller. The controller interacts with the ASP.NET Web API to get a history of SMS messages, add a message, etc. through a REST based API. For the purposes of simplicity, we will use an in-memory data structure for creating this application. Thus, the tasks ahead of us are:

    1. Creating the REST Web API with GET, PUT, POST, DELETE methods.
    2. Creating the SmsView.html partial.
    3. Creating the SmsController controller with methods that are called from the SmsView.html partial.
    4. Adding a new route that loads the controller and the partial.

    1. Creating the REST Web API

    This is a simple task that should be quite straightforward to any .NET developer.
    The following listing shows our ApiController:

        public class SmsMessage
        {
            public string to { get; set; }
            public string message { get; set; }
        }

        public class SmsResource : SmsMessage
        {
            public int smsId { get; set; }
        }

        public class SmsResourceController : ApiController
        {
            public static Dictionary<int, SmsResource> messages = new Dictionary<int, SmsResource>();
            public static int currentId = 0;

            // GET api/<controller>
            public List<SmsResource> Get()
            {
                List<SmsResource> result = new List<SmsResource>();
                foreach (int key in messages.Keys)
                {
                    result.Add(messages[key]);
                }
                return result;
            }

            // GET api/<controller>/5
            public SmsResource Get(int id)
            {
                if (messages.ContainsKey(id))
                    return messages[id];
                return null;
            }

            // POST api/<controller>
            public List<SmsResource> Post([FromBody] SmsMessage value)
            {
                // Synchronize on messages so we don't have id collisions
                lock (messages)
                {
                    SmsResource res = (SmsResource)value;
                    res.smsId = currentId++;
                    messages.Add(res.smsId, res);
                    //SentlyPlusSmsSender.SendMessage(value.to, value.message);
                    return Get();
                }
            }

            // PUT api/<controller>/5
            public List<SmsResource> Put(int id, [FromBody] SmsMessage value)
            {
                // Synchronize on messages so we don't have id collisions
                lock (messages)
                {
                    if (messages.ContainsKey(id))
                    {
                        // Update the message
                        messages[id].message = value.message;
                        messages[id].to = value.to;
                    }
                    return Get();
                }
            }

            // DELETE api/<controller>/5
            public List<SmsResource> Delete(int id)
            {
                if (messages.ContainsKey(id))
                {
                    messages.Remove(id);
                }
                return Get();
            }
        }

    Once this class is defined, we should be able to access the Web API with a simple GET request from the browser:

        http://localhost:port/api/SmsResource

    Notice the commented line:

        //SentlyPlusSmsSender.SendMessage

    The SentlyPlusSmsSender class is defined in the attached solution. We have shown this line commented out because we want to explain the core Angular concepts. If you load the attached solution, this line is uncommented in the source and an actual SMS will be sent!

    By default, the API returns XML. For consumption of the API in Angular, we would like it to return JSON. To change the default to JSON, we make the following change to the WebApiConfig.cs file located in the App_Start folder:

        public static class WebApiConfig
        {
            public static void Register(HttpConfiguration config)
            {
                config.Routes.MapHttpRoute(
                    name: "DefaultApi",
                    routeTemplate: "api/{controller}/{id}",
                    defaults: new { id = RouteParameter.Optional }
                );

                var appXmlType = config.Formatters.XmlFormatter.
                    SupportedMediaTypes.
                    FirstOrDefault(t => t.MediaType == "application/xml");
                config.Formatters.XmlFormatter.SupportedMediaTypes.Remove(appXmlType);
            }
        }

    We now have our backend REST API which we can consume from Angular!

    2. Creating the SmsView.html partial

    This simple partial will define two fields: the destination phone number (international format, starting with a +) and the message. These fields will be bound to model.phoneNumber and model.message. We will also add a button that we shall hook up to sendMessage() in the controller. A list of all previously sent messages (bound to model.allMessages) will also be displayed below the form input.
2. Creating the SmsView.html partial

This simple partial defines two fields: the destination phone number (in international format, starting with a +) and the message. These fields are bound to model.phoneNumber and model.message. We also add a button that we hook up to sendMessage() in the controller. A list of all previously sent messages (bound to model.allMessages) is displayed below the form input. The following listing shows the code for the partial:

<!-- If model.errorMessage is defined, then render the error div -->
<div class="alert alert-danger alert-dismissable" style="margin-top: 30px;"
     ng-show="model.errorMessage != undefined">
    <button type="button" class="close" data-dismiss="alert" aria-hidden="true">&times;</button>
    <strong>Error!</strong> <br /> {{ model.errorMessage }}
</div>

<!-- The input fields bound to the model -->
<div class="well" style="margin-top: 30px;">
    <table style="width: 100%;">
        <tr>
            <td style="width: 45%; text-align: center;">
                <input type="text" placeholder="Phone number (e.g. +44 7778 609466)"
                       ng-model="model.phoneNumber" class="form-control"
                       style="width: 90%" onkeypress="return checkPhoneInput();" />
            </td>
            <td style="width: 45%; text-align: center;">
                <input type="text" placeholder="Message" ng-model="model.message"
                       class="form-control" style="width: 90%" />
            </td>
            <td style="text-align: center;">
                <button class="btn btn-danger" ng-click="sendMessage();"
                        ng-disabled="model.isAjaxInProgress"
                        style="margin-right: 5px;">Send</button>
                <img src="/Content/ajax-loader.gif" ng-show="model.isAjaxInProgress" />
            </td>
        </tr>
    </table>
</div>

<!-- The past messages -->
<div style="margin-top: 30px;">
    <!-- The following div is shown if there are no past messages -->
    <div ng-show="model.allMessages.length == 0">
        No messages have been sent yet!
    </div>
    <!-- The following div is shown if there are some past messages;
         the original listing repeated the == 0 test here by mistake -->
    <div ng-show="model.allMessages.length != 0">
        <table style="width: 100%;" class="table table-striped">
            <tr>
                <td>Phone Number</td>
                <td>Message</td>
                <td></td>
            </tr>
            <!-- The ng-repeat directive is like the repeater control in .NET,
                 but as you can see this partial is pure HTML, which is much cleaner -->
            <tr ng-repeat="message in model.allMessages">
                <td>{{ message.to }}</td>
                <td>{{ message.message }}</td>
                <td>
                    <button class="btn btn-danger" ng-click="delete(message.smsId);"
                            ng-disabled="model.isAjaxInProgress">Delete</button>
                </td>
            </tr>
        </table>
    </div>
</div>

The above code is commented and should be self-explanatory. Conditional rendering is achieved through the ng-show="condition" attribute on the various div tags. Input fields are bound to the model, and the Send button is bound to the sendMessage() function in the controller through the ng-click="sendMessage()" attribute on the button tag. While AJAX calls are in progress, the controller sets model.isAjaxInProgress to true; based on this variable, the buttons are disabled through the ng-disabled directive. The ng-repeat directive added as an attribute to the tr tag causes the table row to be rendered multiple times, much like an ASP.NET repeater.

3. Creating the SmsController controller

The penultimate piece of our application is the controller, which responds to events from our view and interacts with our MVC4 REST Web API. The following listing shows the code we need to add to /app/js/controllers.js. Note that controller definitions can be chained. Also note that this controller "asks for" the $http service, which is Angular's simple facility for making AJAX calls. So far we have only encountered modules, controllers, views and directives in Angular; $http is a new kind of entity called a service. More information on Angular services can be found at the following URL: http://docs.angularjs.org/guide/dev_guide.services.understanding_services.
.controller('SmsController', ['$scope', '$http', function ($scope, $http) {
    // We define the model
    $scope.model = {};
    // We define the allMessages array in the model
    // that will contain all the messages sent so far
    $scope.model.allMessages = [];
    // The error, if any
    $scope.model.errorMessage = undefined;
    // We initially load data, so set isAjaxInProgress = true
    $scope.model.isAjaxInProgress = true;

    // Load all the messages
    $http({ url: '/api/smsresource', method: "GET" }).
        success(function (data, status, headers, config) {
            // this callback will be called asynchronously
            // when the response is available
            $scope.model.allMessages = data;
            // We are done with AJAX loading
            $scope.model.isAjaxInProgress = false;
        }).
        error(function (data, status, headers, config) {
            // called asynchronously if an error occurs
            // or the server returns a response with an error status
            $scope.model.errorMessage = "Error occurred status:" + status;
            // We are done with AJAX loading
            $scope.model.isAjaxInProgress = false;
        });

    $scope.delete = function (id) {
        // We are making an AJAX call so we set this to true
        $scope.model.isAjaxInProgress = true;
        $http({ url: '/api/smsresource/' + id, method: "DELETE" }).
            success(function (data, status, headers, config) {
                // this callback will be called asynchronously
                // when the response is available
                $scope.model.allMessages = data;
                // We are done with AJAX loading
                $scope.model.isAjaxInProgress = false;
            }).
            // note: the original listing had "});" before this handler,
            // which broke the method chain; it must be ")." as here
            error(function (data, status, headers, config) {
                // called asynchronously if an error occurs
                // or the server returns a response with an error status
                $scope.model.errorMessage = "Error occurred status:" + status;
                // We are done with AJAX loading
                $scope.model.isAjaxInProgress = false;
            });
    }

    $scope.sendMessage = function () {
        $scope.model.errorMessage = undefined;
        var message = '';
        if ($scope.model.message != undefined)
            message = $scope.model.message.trim();
        if ($scope.model.phoneNumber == undefined || $scope.model.phoneNumber == ''
                || $scope.model.phoneNumber.length < 10
                || $scope.model.phoneNumber[0] != '+') {
            $scope.model.errorMessage = "You must enter a valid phone number in international format. Eg: +44 7778 609466";
            return;
        }
        if (message.length == 0) {
            $scope.model.errorMessage = "You must specify a message!";
            return;
        }
        // We are making an AJAX call so we set this to true
        $scope.model.isAjaxInProgress = true;
        $http({
            url: '/api/smsresource',
            method: "POST",
            data: { to: $scope.model.phoneNumber, message: $scope.model.message }
        }).
            success(function (data, status, headers, config) {
                $scope.model.allMessages = data;
                $scope.model.isAjaxInProgress = false;
            }).
            error(function (data, status, headers, config) {
                $scope.model.errorMessage = "Error occurred status:" + status;
                $scope.model.isAjaxInProgress = false;
            });
    }
}]);

We can see from the previous listing how the functions called from the view are defined in the controller. It should also be evident how easy it is to make AJAX calls that consume our MVC4 REST Web API. Now we are left with the final piece: we need to define a route that associates a particular path with the view and the controller we have defined.

4. Add a new route that loads the controller and the partial

This is the easiest part of the puzzle.
We simply define another route in the /app/js/app.js file:

$routeProvider.when('/sms', {
    templateUrl: '/app/partials/smsview.html',
    controller: 'SmsController'
});
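For orientation, that call lives inside the module's config block; here is a sketch of the surrounding /app/js/app.js, with the module and dependency names assumed since they were established earlier in the article:

// a sketch of the surrounding module definition -- names are illustrative
angular.module('smsApp', ['smsApp.controllers']).
    config(['$routeProvider', function ($routeProvider) {
        $routeProvider.when('/sms', {
            templateUrl: '/app/partials/smsview.html',
            controller: 'SmsController'
        });
        // send unknown paths to the SMS view
        $routeProvider.otherwise({ redirectTo: '/sms' });
    }]);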
Conclusion

In this article we have seen how much of the server-side functionality in the MVC4 framework can be moved to the browser, delivering a snappy and fast user interface. We have seen how we can build client-side, HTML-only views that avoid the messy syntax of server-side Razor views, and we have built a functioning app from the ground up. The significant advantage of this approach to building web apps is that the front end is completely platform independent: even though we used ASP.NET to create our REST API, we could just as easily have used another stack such as Node.js or Ruby without changing a single line of our front-end code. Angular is a rich framework and we have only touched on the basic functionality required to create a SPA. For readers who wish to delve further into the Angular framework, we recommend the following URL as a starting point: http://docs.angularjs.org/misc/started.

To get started with the code for this project:

1. Sign up for an account at http://plus.sent.ly (free)
2. Add your phone number
3. Go to the "My Identities" page
4. Note down your Sender ID, Consumer Key and Consumer Secret
5. Download the code for this article at: https://docs.google.com/file/d/0BzjEWqSE31yoZjZlV0d0R2Y3eW8/edit?usp=sharing
6. Change the values of Sender Id, Consumer Key and Consumer Secret in the web.config file
7. Run the project through Visual Studio!

    Read the article

  • HSDPA modem only working on certain USB ports

    - by nabulke
    Depending on which USB port I use to connect my HSDPA modem, the network manager will or will not connect to the internet. It used to work (i.e. establish an internet connection automatically) on all ports, but over time it simply stopped working on some of them. The lsusb output in all cases looks like this (the device ID varies depending on the USB port):

    Bus 001 Device 009: ID 12d1:1003 Huawei Technologies Co., Ltd. E220 HSDPA Modem / E270 HSDPA/HSUPA Modem

    Any ideas what could cause this behaviour? What can I do to fix this?

    ADDED One additional piece of information about the modem: when connected via USB it is available both as a hard drive AND as a HSDPA modem (a kind of duality). In the error case, it only shows up as a hard drive.

    ADDITIONAL INFO AS REQUESTED

    MODEM NOT WORKING

    Bus 004 Device 001: ID 1d6b:0001 Linux Foundation 1.1 root hub
    Bus 003 Device 001: ID 1d6b:0001 Linux Foundation 1.1 root hub
    Bus 002 Device 002: ID 413c:8000 Dell Computer Corp. BC02 Bluetooth Adapter
    Bus 002 Device 001: ID 1d6b:0001 Linux Foundation 1.1 root hub
    Bus 001 Device 007: ID 12d1:1003 Huawei Technologies Co., Ltd. E220 HSDPA Modem / E270 HSDPA/HSUPA Modem
    Bus 001 Device 005: ID 046d:c00c Logitech, Inc. Optical Wheel Mouse
    Bus 001 Device 004: ID 05e3:0608 Genesys Logic, Inc. USB-2.0 4-Port HUB
    Bus 001 Device 003: ID 413c:0058 Dell Computer Corp. Port Replicator
    Bus 001 Device 001: ID 1d6b:0002 Linux Foundation 2.0 root hub

    laptop:~$ dmesg | grep 'usb'
    [ 0.225371] usbcore: registered new interface driver usbfs
    [ 0.225387] usbcore: registered new interface driver hub
    [ 0.225418] usbcore: registered new device driver usb
    [ 0.504291] usb usb1: configuration #1 chosen from 1 choice
    [ 0.504767] usb usb2: configuration #1 chosen from 1 choice
    [ 0.505046] usb usb3: configuration #1 chosen from 1 choice
    [ 0.505601] usb usb4: configuration #1 chosen from 1 choice
    [ 1.061064] usb 1-6: new high speed USB device using ehci_hcd and address 3
    [ 1.192636] usb 1-6: configuration #1 chosen from 1 choice
    [ 1.447006] usb 2-2: new full speed USB device using uhci_hcd and address 2
    [ 1.634908] usb 2-2: configuration #1 chosen from 1 choice
    [ 1.708164] usb 1-6.1: new high speed USB device using ehci_hcd and address 4
    [ 1.801668] usb 1-6.1: configuration #1 chosen from 1 choice
    [ 2.076279] usb 1-6.1.1: new low speed USB device using ehci_hcd and address 5
    [ 2.174932] usb 1-6.1.1: configuration #1 chosen from 1 choice
    [ 6.580315] usb 1-6.1.2: new high speed USB device using ehci_hcd and address 6
    [ 6.683479] usb 1-6.1.2: configuration #1 chosen from 1 choice
    [ 20.018671] usbcore: registered new interface driver btusb
    [ 20.131703] usbcore: registered new interface driver usb-storage
    [ 20.131988] usb-storage: device found at 6
    [ 20.131991] usb-storage: waiting for device to settle before scanning
    [ 20.207981] usb 1-6.1.2: USB disconnect, address 6
    [ 20.291499] usbcore: registered new interface driver hiddev
    [ 20.297052] input: Logitech USB Mouse as /devices/pci0000:00/0000:00:1d.7/usb1/1-6/1-6.1/1-6.1.1/1-6.1.1:1.0/input/input6
    [ 20.297465] generic-usb 0003:046D:C00C.0001: input,hidraw0: USB HID v1.10 Mouse [Logitech USB Mouse] on usb-0000:00:1d.7-6.1.1/input0
    [ 20.297534] usbcore: registered new interface driver usbhid
    [ 20.297803] usbhid: v2.6:USB HID core driver
    [ 26.552360] usb 1-6.1.2: new high speed USB device using ehci_hcd and address 7
    [ 26.663506] usb 1-6.1.2: configuration #1 chosen from 1 choice
    [ 26.709628] usb-storage: device found at 7
    [ 26.709631] usb-storage: waiting for device to settle before scanning
    [ 26.732387] usb-storage: device found at 7
    [ 26.732390] usb-storage: waiting for device to settle before scanning
    [ 31.709568] usb-storage: device scan complete
    [ 31.733676] usb-storage: device scan complete

    MODEM WORKING

    Bus 004 Device 001: ID 1d6b:0001 Linux Foundation 1.1 root hub
    Bus 003 Device 002: ID 046d:c00c Logitech, Inc. Optical Wheel Mouse
    Bus 003 Device 001: ID 1d6b:0001 Linux Foundation 1.1 root hub
    Bus 002 Device 002: ID 413c:8000 Dell Computer Corp. BC02 Bluetooth Adapter
    Bus 002 Device 001: ID 1d6b:0001 Linux Foundation 1.1 root hub
    Bus 001 Device 004: ID 12d1:1003 Huawei Technologies Co., Ltd. E220 HSDPA Modem / E270 HSDPA/HSUPA Modem
    Bus 001 Device 001: ID 1d6b:0002 Linux Foundation 2.0 root hub

    laptop:~$ dmesg | grep 'usb'
    [ 0.134811] usbcore: registered new interface driver usbfs
    [ 0.134826] usbcore: registered new interface driver hub
    [ 0.134858] usbcore: registered new device driver usb
    [ 0.360327] usb usb1: configuration #1 chosen from 1 choice
    [ 0.360783] usb usb2: configuration #1 chosen from 1 choice
    [ 0.361061] usb usb3: configuration #1 chosen from 1 choice
    [ 0.361611] usb usb4: configuration #1 chosen from 1 choice
    [ 1.144122] usb 2-2: new full speed USB device using uhci_hcd and address 2
    [ 1.346896] usb 2-2: configuration #1 chosen from 1 choice
    [ 1.588072] usb 3-1: new low speed USB device using uhci_hcd and address 2
    [ 1.761204] usb 3-1: configuration #1 chosen from 1 choice
    [ 5.972042] usb 1-1: new high speed USB device using ehci_hcd and address 4
    [ 6.115438] usb 1-1: configuration #1 chosen from 1 choice
    [ 19.990565] usbcore: registered new interface driver usbserial
    [ 19.991429] usb-storage: device found at 4
    [ 19.991432] usb-storage: waiting for device to settle before scanning
    [ 20.017260] usbcore: registered new interface driver usb-storage
    [ 20.017305] usbcore: registered new interface driver usbserial_generic
    [ 20.017308] usbserial: USB Serial Driver core
    [ 20.017817] usb-storage: device found at 4
    [ 20.017820] usb-storage: waiting for device to settle before scanning
    [ 20.070796] usbcore: registered new interface driver btusb
    [ 20.229525] usb 1-1: GSM modem (1-port) converter now attached to ttyUSB0
    [ 20.229776] usb 1-1: GSM modem (1-port) converter now attached to ttyUSB1
    [ 20.229843] usbcore: registered new interface driver option
    [ 20.230396] usbcore: registered new interface driver hiddev
    [ 20.246280] input: Logitech USB Mouse as /devices/pci0000:00/0000:00:1d.1/usb3/3-1/3-1:1.0/input/input6
    [ 20.246438] generic-usb 0003:046D:C00C.0001: input,hidraw0: USB HID v1.10 Mouse [Logitech USB Mouse] on usb-0000:00:1d.1-1/input0
    [ 20.246479] usbcore: registered new interface driver usbhid
    [ 20.246483] usbhid: v2.6:USB HID core driver
    [ 25.436579] usb-storage: device scan complete
    [ 25.437674] usb-storage: device scan complete
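    One avenue worth checking (an editorial aside, not from the original post): the E220 starts up in a flash-storage mode and only exposes its serial/modem interfaces after a mode switch, which matches the working log ("GSM modem (1-port) converter now attached to ttyUSB0/ttyUSB1") versus the failing one (usb-storage only). The usb_modeswitch tool can force that switch; a rough sketch, with flags to be verified against your installed version's man page:

    # confirm the device and its vendor:product ID
    lsusb | grep -i huawei
    # ask usb_modeswitch to perform the Huawei-specific mode switch
    sudo usb_modeswitch -v 0x12d1 -p 0x1003 -H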

    Read the article

  • How to Buy an SD Card: Speed Classes, Sizes, and Capacities Explained

    - by Chris Hoffman
    Memory cards are used in digital cameras, music players, smartphones, tablets, and even laptops. But not all SD cards are created equal: there are different speed classes, physical sizes, and capacities to consider, and different devices require different types of SD card. Here are the differences you'll need to keep in mind when picking out the right SD card for your device.

    Speed Class

    In a nutshell, not all SD cards offer the same speeds. This matters for some tasks more than for others. For example, if you're a professional photographer taking photos in rapid succession on a DSLR camera, saving them in high-resolution RAW format, you'll want a fast SD card so your camera can save them as quickly as possible. A fast SD card is also important if you want to record high-resolution video and save it directly to the SD card. If you're just taking a few photos on a typical consumer camera, or using an SD card to store some media files on your smartphone, the speed isn't as important.

    Manufacturers use "speed classes" to measure an SD card's speed. The SD Association that defines the SD card standard doesn't actually define the exact speeds associated with these classes, but it does provide guidelines. There are four speed classes: 10, 6, 4, and 2. Class 10 is the fastest, while class 2 is the slowest. Class 2 is suitable for standard-definition video recording, while classes 4 and 6 are suitable for high-definition video recording. Class 10 is suitable for "full HD video recording" and "HD still consecutive recording." There are also two Ultra High Speed (UHS) speed classes, but they're more expensive and are designed for professional use; UHS cards are intended for devices that support UHS. (The associated speed-class logos, ordered from slowest to fastest, are not reproduced in this excerpt.)

    You'll probably be okay with a class 4 or 6 card for typical use in a digital camera, smartphone, or tablet. Class 10 cards are ideal if you're shooting high-resolution videos or RAW photos. Class 2 cards are a bit on the slow side these days, so you may want to avoid them for all but the cheapest digital cameras; even a cheap smartphone can record HD video, after all.

    An SD card's speed class is identified on the SD card itself; you'll also see it in the online store listing or on the card's packaging when purchasing one. For example, in the photo accompanying the original article, the middle SD card is speed class 4, while the two other cards are speed class 6. If you see no speed class symbol, you have a class 0 SD card. These cards were designed and produced before the speed class rating system was introduced, and they may be slower than even a class 2 card.

    Physical Size

    Different devices use different sizes of SD cards. You'll find standard-size SD cards, miniSD cards, and microSD cards.

    Standard SD cards are the largest, although they're still very small: they measure 32x24x2.1 mm and weigh just two grams. Most consumer digital cameras for sale today still use standard SD cards, with the familiar "cut corner" design.

    miniSD cards are smaller than standard SD cards, measuring 21.5x20x1.4 mm and weighing about 0.8 grams. This is the least common size today. miniSD cards were designed to be especially small for mobile phones, but we now have an even smaller size.

    microSD cards are the smallest size of SD card, measuring 15x11x1 mm and weighing just 0.25 grams. These cards are used in most cell phones and smartphones that support SD cards, and in many other devices such as tablets.
    SD cards will only fit into matching slots: you can't plug a microSD card into a standard SD card slot, as it won't fit. However, you can purchase an adapter that lets you seat a smaller SD card inside a larger SD card's form factor so it fits the appropriate slot.

    Capacity

    Like USB flash drives, hard drives, solid-state drives, and other storage media, different SD cards can have different amounts of storage. But the differences between SD card capacities don't stop there. Standard SDSC (SD) cards range from 1 MB to 2 GB, or occasionally 4 GB, although 4 GB is non-standard. The SDHC standard was created later and allows cards from 2 GB to 32 GB, while the more recent SDXC standard allows cards from 32 GB to 2 TB. You'll need a device that supports SDHC or SDXC cards to use them. At this point, the vast majority of devices support SDHC; in fact, the SD cards you have are probably SDHC cards. SDXC is newer and less common.

    When buying an SD card, you'll need to buy the right speed class, size, and capacity for your needs. Be sure to check what your device supports, and consider what speed and capacity you'll actually need.

    Image Credit: Ryosuke SEKIDO on Flickr, Clive Darra on Flickr, Steven Depolo on Flickr

    Read the article

  • Why does my mail get marked as spam?

    - by schoen
    I have the server afspraakmanager.be. It meets every requirement for not being a spam server (and it isn't one, by the way): it has reverse DNS, SPF, DKIM, and so on. But Hotmail still marks its mail as spam. I think the problem is the SPF/DKIM records. When I send an email to my Gmail account, the headers say:

    "Received-SPF: neutral (google.com: 2a02:348:8e:6048::1 is neither permitted nor denied by best guess record for domain of [email protected]) client-ip=2a02:348:8e:6048::1; Authentication-Results: mx.google.com; spf=neutral (google.com: 2a02:348:8e:6048::1 is neither permitted nor denied by best guess record for domain of [email protected]) [email protected]; dkim=neutral (bad format) [email protected]"

    So I guess my SPF and DKIM records aren't set up right, but I also don't have a clue what is wrong with them. This is the zone file:

    ; zone file for afspraakmanager.be
    $ORIGIN afspraakmanager.be.
    $TTL 3600
    @ 86400 IN SOA ns1.eurodns.com. hostmaster.eurodns.com. (
        2013102003 ; serial
        86400      ; refresh
        7200       ; retry
        604800     ; expire
        86400 )    ; minimum
    @ 86400 IN NS ns1.eurodns.com.
    @ 86400 IN NS ns2.eurodns.com.
    @ 86400 IN NS ns3.eurodns.com.
    @ 86400 IN NS ns4.eurodns.com.
    ; Mail Exchanger definition
    @ 600 IN MX 10 smtp
    ; IPv4 Address definition
    @ IN A 37.230.96.72
    afspraakmanager.be 600 IN A 37.230.96.72
    localhost 86400 IN A 127.0.0.1
    smtp 600 IN A 37.230.96.72
    www 600 IN A 37.230.96.72
    ; Text definition
    default._domainkey 600 IN TXT "v=DKIM1\\; k=rsa\\; p=MIGfMA0GCSqGSIb3DQEBAQUAA4GNADCBiQKBgQC6pvlZKnbSVXg1Bf3MF2l8xRrKPmqIw2i9Rn1yZ3HEny9qH1vyGXUjdv2O0aQbd5YShSGjtg5H/GedRMLpB0Qb+hBj1yGofOQTdcVtZZfj8qBY5Z7vEkhvtdaogQ0vLjgcwhg0BBuTewEkLxrl9IIzkPMZ1SCtM2Y0RtiUhg2cjQIDAQAB"
    ; Sender Policy Framework definition
    afspraakmanager.be 600 IN SPF "v=spf1 a mx ptr +all"

    The DKIM signature in the header:

    DKIM-Signature: v=1; a=rsa-sha256; c=simple/simple; d=afspraakmanager.be; s=mail; t=1382361029; bh=4pDpXBY8rCbX8+MfrklZzpQxaUsa3vSPUYjcDR3KAnU=; h=Date:From:To:Subject:From; b=SoBBaAlrueD8qID8txl2SBSqnZgN2lkPCdSPI/m7/YLezIcBedkgIX1NswYiZFl6Z AmF8dES73WUaaJjItVHSrdCJK2mJ/Az+vrgNsyk+GqZZ1YPiIlH3gqRrsguhoofXUX /gqLlqsLxqxkKKd9EbSzKRHuDGlJCLm5SlL8wnL0=
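    Two mismatches in the records above are worth flagging (an editorial reading, not part of the original question). First, the SPF data is only published under the dedicated SPF record type, while receivers such as Google look it up as a TXT record, and "+all" declares every sender permitted, which filters treat as meaningless. Second, the DKIM signature is made with selector s=mail, but the zone only publishes default._domainkey, so verifiers querying mail._domainkey find nothing. A hedged sketch of corrected records (the IPv6 address is taken from the Gmail headers quoted above):

    ; publish SPF as TXT, authorize the actual sending host, end restrictively
    @ 600 IN TXT "v=spf1 a mx ip6:2a02:348:8e:6048::1 -all"
    ; publish the DKIM key under the selector the signer actually uses
    mail._domainkey 600 IN TXT "v=DKIM1\\; k=rsa\\; p=...same key as above..."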

    Read the article

  • Sound device doesn't work in Windows 7

    - by Alex Farber
    Device Manager shows that the device is properly installed:

    High Definition Audio Device
    Device type: Sound, video and game controllers
    Manufacturer: Microsoft
    Location: Location 0 (Internal High Definition Audio Bus)

    But every program that tries to play sound reports that the device is unavailable. Under Windows XP the same hardware works properly.

    Read the article

  • How can you turn off alternate screen in OSX's Terminal.app?

    - by yacoob
    The alternate screen ("altscreen") is evil. If you don't know what I'm talking about, see this page for a visual demonstration. The problem is that there doesn't seem to be a way to stop it in Terminal.app (under OSX) when you're not using screen. Yes, you can edit the terminfo definition, but that's a rather blunt hammer, and that solution might break if Apple updates the relevant term's definition in some patch. Is there some clean way to convince Terminal.app to block altscreen usage?
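    For reference, the terminfo route the poster calls a blunt hammer looks roughly like this (a sketch: it rebuilds the current entry without the alternate-screen capabilities; the entry name and sed patterns should be checked against your setup):

    # dump the current entry and blank out the alternate-screen capabilities
    infocmp xterm-color | sed -e 's/smcup=[^,]*,//' -e 's/rmcup=[^,]*,//' > no-altscreen.src
    # compile it back; as a non-root user this lands in ~/.terminfo,
    # which takes precedence over the system database for the same TERM name
    tic no-altscreen.src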

    Read the article

  • RenderPartial a view from another controller (and in another folder)

    - by George
    Hey MVC experts. I have two database entities that I need to represent, and I need to output them on a single page. I have something like this:

    Views
        Def
            ViewA
            ViewB
        Test
            ViewC

    I want ViewC to display ViewA, which displays ViewB. Right now I'm using something like this:

    // View C
    <!-- bla -->
    <% Html.RenderPartial(Url.Content("../Definition/DefinitionDetails"), i); %>

    // View A
    <!-- bla -->
    <% Html.RenderPartial(Url.Content("../Definition/DefinitionEditActions")); %>

    Is there a better way to do this? I find that linking with relative path names can burn you. Any tips? Any chance I can make something like Html.RenderPartial("Definition", "DefinitionDetails", i); work? Thanks for the help
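    One way to avoid the relative "../" hops (an editorial sketch, not from the original question; the .ascx extension assumes the WebForms view engine shown in the snippets):

    <%-- an app-relative virtual path resolves the same way from any view folder --%>
    <% Html.RenderPartial("~/Views/Definition/DefinitionDetails.ascx", i); %>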

    Read the article

  • How to rename an alias in PowerShell?

    - by jwfearn
    I want to make my own versions of some of the built-in PowerShell aliases. Rather than completely removing the overridden aliases, I'd like to rename them so I can still use them if I want to. For example, maybe I'll rename set to orig_set and then add my own new definition for set. This is what I've tried so far:

    PS> alias *set*

    CommandType     Name        Definition
    -----------     ----        ----------
    Alias           set         Set-Variable

    PS> function Rename-Alias( $s0, $s1 ) { Rename-Item Alias:\$s0 $s1 -Force }
    PS> Rename-Alias set orig_set
    PS> alias *set*

    CommandType     Name        Definition
    -----------     ----        ----------
    Alias           set         Set-Variable

    Any ideas as to why this isn't working?
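    A hedged guess at the cause (an editorial aside, not part of the original question): the function body runs in a child scope, so the rename applies to that scope's copy of the alias and is discarded when the function returns. A sketch that targets the global scope explicitly:

    function Rename-Alias($s0, $s1) {
        # recreate the alias under the new name, explicitly at global scope
        Set-Alias -Name $s1 -Value (Get-Alias $s0).Definition -Scope Global -Force
        # then drop the old name (if scoping still swallows this inside the
        # function, run the Remove-Item line directly at the prompt)
        Remove-Item "Alias:\$s0" -Force
    }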

    Read the article

  • Custom whiteSpace using Haskell Parsec

    - by fryguybob
    I would like to use Parsec's makeTokenParser to build my parser, but I want to use my own definition of whiteSpace. Doing the following replaces whiteSpace with my definition, but all the lexeme parsers still use the old definition (e.g. P.identifier lexer will use the old whiteSpace):

    ...
    lexer :: P.TokenParser ()
    lexer = l { P.whiteSpace = myWhiteSpace }
      where l = P.makeTokenParser myLanguageDef
    ...

    Looking at the code for makeTokenParser, I think I understand why it works this way. What I want to know is whether there are any workarounds that avoid completely duplicating the code of makeTokenParser.
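    One common workaround (a sketch with made-up names, not Parsec API): since every lexeme parser produced by makeTokenParser closes over its own internal whiteSpace, you can instead build a small lexeme combinator around the custom whitespace and derive the handful of token parsers you actually need from that.

    import Text.Parsec
    import Text.Parsec.String (Parser)

    -- custom whitespace: e.g. spaces and tabs only, newlines stay significant
    myWhiteSpace :: Parser ()
    myWhiteSpace = skipMany (oneOf " \t")

    -- every token consumes trailing custom whitespace, mirroring Parsec's lexeme
    myLexeme :: Parser a -> Parser a
    myLexeme p = do { x <- p; myWhiteSpace; return x }

    myIdentifier :: Parser String
    myIdentifier = myLexeme (do { c <- letter; cs <- many alphaNum; return (c:cs) })

    mySymbol :: String -> Parser String
    mySymbol = myLexeme . string

    Note this sketch does not reproduce everything P.identifier does (e.g. reserved-word checking), only the whitespace handling.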

    Read the article

  • Attaching a Behaviour to a dynamically created table in Doctrine

    - by Cesare Contini
    How do I programmatically attach a Doctrine behaviour to a table created dynamically through $conn->export->createTable('MyTable', $definition)? For example, if I have the following code:

    $definition = array(
        'id' => array(
            'type' => 'integer',
            'primary' => true,
            'autoincrement' => true
        ),
        'name' => array(
            'type' => 'string',
            'length' => 255
        )
    );
    $conn->export->createTable('MyTable', $definition);

    At this point I would need to attach a typical Doctrine behaviour like Timestampable or Versionable to the newly created MyTable table. Is it possible at all?
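    A sketch of the usual Doctrine 1.x pattern (an editorial aside; class and column names are illustrative, and method names should be verified against your Doctrine version): behaviours such as Timestampable hang off the model class rather than the exported table, so the $definition array is replaced by a Doctrine_Record subclass whose setUp() attaches the behaviour, and the table is then exported from the model.

    class MyTable extends Doctrine_Record
    {
        public function setTableDefinition()
        {
            $this->hasColumn('id', 'integer', null, array(
                'primary' => true,
                'autoincrement' => true,
            ));
            $this->hasColumn('name', 'string', 255);
        }

        public function setUp()
        {
            // adds created_at / updated_at maintenance to the model
            $this->actAs('Timestampable');
        }
    }

    // export the table from the model instead of a raw definition array
    // (assumption: Doctrine 1.2-style API)
    Doctrine_Core::createTablesFromArray(array('MyTable'));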

    Read the article

< Previous Page | 62 63 64 65 66 67 68 69 70 71 72 73  | Next Page >