Search Results

Search found 13859 results on 555 pages for 'non functional'.

  • PHP: How to automate building a 100 <UL>/<LI> menuitems, while keeping the Menu Structure File Flat / Simply Managable?

    - by Sam
    Above: the current hand-maintained menu (the entire ul/li menu for the JavaScript menu system, plus some li lines reused as a page-specific submenu). Hi folks! With a passion for automation and elegance but limited know-how, I'm stuck with "my hands in my hair", as we Dutch say: my current menu system works perfectly, but it is a pain to update. I would greatly appreciate suggestions on how to automate this in PHP: how to let PHP generate the HTML menu code based on a flat, tab-indented menu input file.

    OLD SITUATION

        <ul>
        <!-- about 100 of these <li>....</li> lines -->
        <li><a href="carrot.php"><p class="mnu" style="background-position:0 -820px"><? echo __("carrot juice") ?></p></a></li>
        <!-- lots of data, of which only a little is really the menu itself -->
        </ul>

    A JavaScript file reads this ul/li structure as input to build the menu. In that ul/li, items with a hyperlink and a sprite background position represent web pages (inside LI), while items without a hyperlink and sprite are just headers of that menu section (inside H6). To highlight the current page in the menu, the JavaScript menu maker uses an id number. This number corresponds to the consecutive li that is a web page; h6 headers are correctly skipped. These h6 headers exist only so that sections of the same menu can be imported as a submenu; non-li headers are not shown in the menu, nor counted by the JavaScript menu for their id. To know which page should be highlighted, I have to count the li items from id 0 until I find the current web page in the li structure, and then manually put that number in each web page. Worse, changing the order of the li items means re-counting the entire list again. Each web page also has an icon (a sprite background-position number), which is used in the web page itself.

    INTENDED RESULT

    What I dream of: once the current web page is set (e.g. carrot.php), the menu system automatically finds and counts the li items and returns the id number (for proper highlighting in the main menu); generates the entire menu HTML; depending on which headings are set for the submenu (e.g. meals, drinks), generates those submenus (the entire section below each given header); and finally adds an h5 highlight inside the li of that submenu item. For managing the menu, I want an easily readable, plain-text file indented with tabs (each tab is one depth level, for example), with further tab-separated columns for the URL and the sprite position of the icon.

    MY DREAM MENU-MANAGEMENT FILE (tab-separated/indented; the ID and TAG columns must be calculated by PHP):

        MENUTEXT              URL              SPRITE  | ID   TAG
        about                 "#"              -520    | 00   li
          INFORMATION                                   | --   h6
          physical state      "physical.php"   -920    | 01   li
          mental health       "mental.php"     -10     | 02   li
        apetite               "#"              -1290   | 03   li
          meals               "#"              -600    | 04   li
          COLD MEAL                                     | --   h6
          egg salade          "salad.php"      -1040   | 05   li
          salmon fish         "salmon.php"     -540    | 06   li
          HOT MEAL                                      | --   h6
          spare ribs          "spareribs.php"  -120    | 07   li
          di macaroni         "macaroni.php"   -870    | 08   li
          drinks              "#"              -230    | 09   li
          JUCY DRINK                                    | --   h6
          carrot juice        "carrot.php"     -820    | 10   li
          mango hive          "mango.php"      -270    | 11   li

    DESIRED CHRONOLOGY

    PHP outputs the entire ul/li HTML so the JavaScript can show the menu: web-page items go inside li tags, and header items go inside h6 tags, e.g. <h6>JUCY DRINK</h6>. Each website page has a URL filename (e.g. salad.php). Based on that, the PHP menu generator detects the page name, works out the id number of that page according to its li-item position, and sets a variable for the JavaScript to highlight the current menu item.
    The menu items below the specified headers are loaded as a submenu, in which the current page.php is wrapped inside h5 to highlight the current page in the submenu, e.g. <li><h5><a href="carrot.php"><p>..etc..</p></a></h5></li>.

    Question: which methods, steps or (chronological) approaches are there for doing this? I am not good at PHP programming, but I am learning it, so please don't write any code without a line of comment explaining why I should use that method. Where do I start? If my question is unclear, please ask. Thanks, much appreciated!

    Concrete task list from the provided comments/answers, so far: (RobertB) First, get some PHP code working that can read through a tab-delimited file and put the data into an appropriate data structure. NOW WORKING ON THIS, see the sketch below.
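    As a concrete starting point for RobertB's first task, here is a rough, hedged sketch (not a drop-in solution): it reads a tab-separated flat file into an array, then walks that array once to emit the <ul>/<li>/<h6> markup and to work out the id of the current page. The file name menu.txt, the column order (text, URL, sprite offset) and the exact markup are assumptions based on the dream file above.

        <?php
        // Read the flat menu file: leading TABs give the depth, then
        // menu text <TAB> "url" <TAB> sprite offset; header rows have no URL/sprite.
        function parse_menu($filename)
        {
            $items = array();
            foreach (file($filename, FILE_IGNORE_NEW_LINES | FILE_SKIP_EMPTY_LINES) as $line) {
                $depth = strlen($line) - strlen(ltrim($line, "\t"));   // indentation = depth
                $cols  = explode("\t", trim($line));
                $items[] = array(
                    'depth'  => $depth,                                 // kept for later submenu use
                    'text'   => $cols[0],
                    'url'    => isset($cols[1]) ? trim($cols[1], '"') : null,
                    'sprite' => isset($cols[2]) ? $cols[2] : null,
                );
            }
            return $items;
        }

        // Build the whole menu and, while counting li items exactly like the
        // JavaScript menu does (headers are skipped), remember the current page's id.
        function build_menu(array $items, $current_page)
        {
            $html       = "<ul>\n";
            $id         = 0;
            $current_id = null;
            foreach ($items as $item) {
                if ($item['url'] === null) {                            // section header
                    $html .= '<h6>' . htmlspecialchars($item['text']) . "</h6>\n";
                    continue;
                }
                if ($item['url'] === $current_page) {
                    $current_id = $id;                                  // highlight this one
                }
                $html .= '<li><a href="' . $item['url'] . '">'
                       . '<p class="mnu" style="background-position:0 ' . $item['sprite'] . 'px">'
                       . htmlspecialchars($item['text']) . "</p></a></li>\n";
                $id++;
            }
            return array($html . "</ul>\n", $current_id);
        }

        list($menu_html, $current_id) = build_menu(parse_menu('menu.txt'), 'carrot.php');
        ?>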

  • I can't use Spring filters in servlet-context XML

    - by gotch4
    For some reason both Eclipse and Spring can't find the filter tag (there is even a red mark)... What's wrong? <?xml version="1.0" encoding="UTF-8"?> <beans xmlns="http://www.springframework.org/schema/beans" xmlns:xsi="http://www.w3.org/2001/XMLSchema-instance" xmlns:mvc="http://www.springframework.org/schema/mvc" xmlns:context="http://www.springframework.org/schema/context" xmlns:aop="http://www.springframework.org/schema/aop" xmlns:tx="http://www.springframework.org/schema/tx" xmlns:p="http://www.springframework.org/schema/p" xsi:schemaLocation="http://www.springframework.org/schema/beans http://www.springframework.org/schema/beans/spring-beans-2.5.xsd http://www.springframework.org/schema/aop http://www.springframework.org/schema/aop/spring-aop-3.0.xsd http://www.springframework.org/schema/tx http://www.springframework.org/schema/tx/spring-tx-3.0.xsd http://www.springframework.org/schema/context http://www.springframework.org/schema/context/spring-context-3.0.xsd http://www.springframework.org/schema/mvc http://www.springframework.org/schema/mvc/spring-mvc-3.0.xsd"> <bean class="org.springframework.dao.annotation.PersistenceExceptionTranslationPostProcessor"></bean> <bean id="myDataSource" class="org.apache.commons.dbcp.BasicDataSource" destroy-method="close"> <property name="driverClassName" value="com.mysql.jdbc.Driver" /> <property name="url" value="jdbc:mysql://localhost/jacciseweb" /> <property name="username" value="root" /> <property name="password" value="siussi" /> </bean> <bean id="mySessionFactory" class="org.springframework.orm.hibernate3.annotation.AnnotationSessionFactoryBean"> <property name="dataSource" ref="myDataSource" /> <property name="annotatedClasses"> <list> <value>it.jsoftware.jacciseweb.beans.Utente </value> <value>it.jsoftware.jacciseweb.beans.Ordine </value> </list> </property> <property name="hibernateProperties"> <props> <prop key="hibernate.dialect"> org.hibernate.dialect.MySQLDialect </prop> <prop key="hibernate.show_sql"> true </prop> <prop key="hibernate.hbm2ddl.auto"> update </prop> <prop key="hibernate.cache.provider_class">org.hibernate.cache.NoCacheProvider</prop> </props> </property> </bean> <filter> <filter-name>hibernateFilter</filter-name> <filter-class> org.springframework.orm.hibernate3.support.OpenSessionInViewFilter </filter-class> <init-param> <param-name>singleSession</param-name> <param-value>true</param-value> </init-param> <init-param> <param-name>sessionFactoryBeanName</param-name> <param-value>mySessionFactory</param-value> </init-param> </filter> <!-- <aop:config> --> <!-- <aop:pointcut id="productServiceMethods" --> <!-- expression="execution(* product.ProductService.*(..))" /> --> <!-- <aop:advisor advice-ref="txAdvice" pointcut-ref="productServiceMethods" /> --> <!-- </aop:config> --> <bean id="acciseHibernateDao" class="it.jsoftware.jacciseweb.model.JAcciseWebManagementDaoHibernate"> <property name="sessionFactory" ref="mySessionFactory" /> </bean> <bean id="transactionManager" class="org.springframework.orm.hibernate3.HibernateTransactionManager"> <property name="sessionFactory" ref="mySessionFactory" /> </bean> <tx:annotation-driven /> <bean id="acciseService" class="it.jsoftware.jacciseweb.model.JAcciseWebManagementServiceImpl"> <property name="dao" ref="acciseHibernateDao" /> </bean> <context:component-scan base-package="it.jsoftware.jacciseweb.controllers"></context:component-scan> <mvc:annotation-driven /> <bean class="org.springframework.web.servlet.mvc.annotation.AnnotationMethodHandlerAdapter" 
p:synchronizeOnSession="true" /> <bean class="org.springframework.web.servlet.handler.BeanNameUrlHandlerMapping" /> <mvc:resources mapping="/resources/**" location="/resources/" /> <!-- non serve, è annotato --> <!-- <bean name="/accise" class="it.jsoftware.jacciseweb.controllers.MainController"> </bean> --> </beans> in particular it says "filter" is invalid content
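    The <filter> element belongs to the servlet specification's web.xml, not to the Spring beans schema, which is why the validator reports it as invalid content inside this servlet-context file. A hedged sketch of where it conventionally goes, reusing the names from the post (hibernateFilter, mySessionFactory), is the application's WEB-INF/web.xml:

        <filter>
            <filter-name>hibernateFilter</filter-name>
            <filter-class>org.springframework.orm.hibernate3.support.OpenSessionInViewFilter</filter-class>
            <init-param>
                <!-- points the filter at the session factory bean defined in the Spring context -->
                <param-name>sessionFactoryBeanName</param-name>
                <param-value>mySessionFactory</param-value>
            </init-param>
        </filter>
        <filter-mapping>
            <filter-name>hibernateFilter</filter-name>
            <url-pattern>/*</url-pattern>
        </filter-mapping>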

  • How to handle failure to release a resource which is contained in a smart pointer?

    - by cj
    How should an error during resource deallocation be handled, when the object representing the resource is contained in a shared pointer? Smart pointers are a useful tool to manage resources safely. Examples of such resources are memory, disk files, database connections, or network connections. // open a connection to the local HTTP port boost::shared_ptr<Socket> socket = Socket::connect("localhost:80"); In a typical scenario, the class encapsulating the resource should be noncopyable and polymorphic. A good way to support this is to provide a factory method returning a shared pointer, and declare all constructors non-public. The shared pointers can now be copied from and assigned to freely. The object is automatically destroyed when no reference to it remains, and the destructor then releases the resource. /** A TCP/IP connection. */ class Socket { public: static boost::shared_ptr<Socket> connect(const std::string& address); virtual ~Socket(); protected: Socket(const std::string& address); private: // not implemented Socket(const Socket&); Socket& operator=(const Socket&); }; But there is a problem with this approach. The destructor must not throw, so a failure to release the resource will remain undetected. A common way out of this problem is to add a public method to release the resource. class Socket { public: virtual void close(); // may throw // ... }; Unfortunately, this approach introduces another problem: Our objects may now contain resources which have already been released. This complicates the implementation of the resource class. Even worse, it makes it possible for clients of the class to use it incorrectly. The following example may seem far-fetched, but it is a common pitfall in multi-threaded code. socket->close(); // ... size_t nread = socket->read(&buffer[0], buffer.size()); // wrong use! Either we ensure that the resource is not released before the object is destroyed, thereby losing any way to deal with a failed resource deallocation. Or we provide a way to release the resource explicitly during the object's lifetime, thereby making it possible to use the resource class incorrectly. There is a way out of this dilemma. But the solution involves using a modified shared pointer class. These modifications are likely to be controversial. Typical shared pointer implementations, such as boost::shared_ptr, require that no exception be thrown when their object's destructor is called. Generally, no destructor should ever throw, so this is a reasonable requirement. These implementations also allow a custom deleter function to be specified, which is called in lieu of the destructor when no reference to the object remains. The no-throw requirement is extended to this custom deleter function. The rationale for this requirement is clear: The shared pointer's destructor must not throw. If the deleter function does not throw, nor will the shared pointer's destructor. However, the same holds for other member functions of the shared pointer which lead to resource deallocation, e.g. reset(): If resource deallocation fails, no exception can be thrown. The solution proposed here is to allow custom deleter functions to throw. This means that the modified shared pointer's destructor must catch exceptions thrown by the deleter function. On the other hand, member functions other than the destructor, e.g. reset(), shall not catch exceptions of the deleter function (and their implementation becomes somewhat more complicated). 
    Here is the original example, using a throwing deleter function:

        /** A TCP/IP connection. */
        class Socket {
        public:
            static SharedPtr<Socket> connect(const std::string& address);
        protected:
            Socket(const std::string& address);
            virtual ~Socket() { }
        private:
            struct Deleter;
            // not implemented
            Socket(const Socket&);
            Socket& operator=(const Socket&);
        };

        struct Socket::Deleter {
            void operator()(Socket* socket) {
                // Close the connection. If an error occurs, delete the socket
                // and throw an exception.
                delete socket;
            }
        };

        SharedPtr<Socket> Socket::connect(const std::string& address) {
            return SharedPtr<Socket>(new Socket(address), Deleter());
        }

    We can now use reset() to free the resource explicitly. If there is still a reference to the resource in another thread or another part of the program, calling reset() will only decrement the reference count. If this is the last reference to the resource, the resource is released. If resource deallocation fails, an exception is thrown.

        SharedPtr<Socket> socket = Socket::connect("localhost:80");
        // ...
        socket.reset();
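    For comparison, here is a minimal, hedged sketch of one conventional compromise with an unmodified boost::shared_ptr: an explicit close() that may throw, for callers who want to detect a failed release, plus a deleter that falls back to a non-throwing variant on the destruction path. The names close_nothrow() and release_resource() are illustrative only.

        #include <boost/shared_ptr.hpp>
        #include <iostream>
        #include <stdexcept>
        #include <string>

        class Socket {
        public:
            static boost::shared_ptr<Socket> connect(const std::string& address) {
                return boost::shared_ptr<Socket>(new Socket(address), Deleter());
            }
            virtual ~Socket() { }

            virtual void close() {                        // may throw
                if (!release_resource())
                    throw std::runtime_error("close failed");
                closed_ = true;
            }
            virtual void close_nothrow() {                // never throws
                if (!closed_ && !release_resource())
                    std::cerr << "socket close failed\n"; // last resort: just log
                closed_ = true;
            }

        protected:
            explicit Socket(const std::string&) : closed_(false) { }

        private:
            struct Deleter {
                void operator()(Socket* s) { s->close_nothrow(); delete s; }
            };
            bool release_resource() { return true; }      // placeholder for the real call
            bool closed_;

            Socket(const Socket&);                        // noncopyable
            Socket& operator=(const Socket&);
        };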

  • passing multiple queries to view with codeigniter

    - by LvS
    I am trying to build a forum with Codeigniter. So far i have the forums themselves displayed and the threads displayed, based on the creating dynamic news tutorial. But that is 2 different pages, i need to obviously display them into one page, like this: Forum 1 - thread 1 - thread 2 - thread 3 Forum 2 - thread 1 - thread 2 etc. And then the next step is obviously to display all the posts in a thread. Most likely with some pagination going on. But that is for later. For now i have the forum controller (slimmed version): <?php class Forum extends CI_Controller { public function __construct() { parent::__construct(); $this->load->model('forum_model'); $this->lang->load('forum'); $this->lang->load('dutch'); } public function index() { $data['forums'] = $this->forum_model->get_forums(); $data['title'] = $this->lang->line('title'); $data['view'] = $this->lang->line('view'); $this->load->view('templates/header', $data); $this->load->view('forum/index', $data); $this->load->view('templates/footer'); } public function view($slug) { $data['forum_item'] = $this->forum_model->get_forums($slug); if (empty($data['forum_item'])) { show_404(); } $data['title'] = $data['forum_item']['title']; $this->load->view('templates/header', $data); $this->load->view('forum/view', $data); $this->load->view('templates/footer'); } } ?> And the forum_model (also slimmed down) <?php class Forum_model extends CI_Model { public function __construct() { $this->load->database(); } public function get_forums($slug = FALSE) { if ($slug === FALSE) { $query= $this->db->get('forum'); return $query->result_array(); } $query = $this->db->get_where('forum', array('slug' => $slug)); return $query->row_array(); } public function get_threads($forumid, $limit, $offset) { $query = $this->db->get_where('thread', array('forumid', $forumid), $limit, $offset); return $query->result_array(); } } ?> And the view file <?php foreach ($forums as $forum_item): ?> <h2><?=$forum_item['title']?></h2> <div id="main"> <?=$forum_item['description']?> </div> <p><a href="forum/<?php echo $forum_item['slug'] ?>"><?=$view?></a></p> <?php endforeach ?> Now that last one, i would like to have something like this: <?php foreach ($forums as $forum_item): ?> <h2><?=$forum_item['title']?></h2> <div id="main"> <?=$forum_item['description']?> </div> <?php foreach ($threads as $thread_item): ?> <h2><?php echo $thread_item['title'] ?></h2> <p><a href="thread/<?php echo $thread_item['slug'] ?>"><?=$view?></a></p> <?php endforeach ?> <?php endforeach ?> But the question is, how do i get the model to return like a double query to the view, so that it contains both the forums and the threads within each forum. I tried to make a foreach loop in the get_forum function, but when i do this: public function get_forums($slug = FALSE) { if ($slug === FALSE) { $query= $this->db->get('forum'); foreach ($query->row_array() as $forum_item) { $thread_query=$this->get_threads($forum_item->forumid, 50, 0); } return $query->result_array(); } $query = $this->db->get_where('forum', array('slug' => $slug)); return $query->row_array(); } i get the error A PHP Error was encountered Severity: Notice Message: Trying to get property of non-object Filename: models/forum_model.php Line Number: 16 I hope anyone has some good tips, thanks! Lenny *EDIT*** Thanks for the feedback. 
    I have been puzzling and this seems to work now :)

        $query = $this->db->get('forum');
        foreach ($query->result() as $forum_item) {
            $forum[$forum_item->forumid]['title'] = $forum_item->title;
            $thread_query = $this->db->get_where('thread', array('forumid' => $forum_item->forumid), 20, 0);
            foreach ($thread_query->result() as $thread_item) {
                $forum[$forum_item->forumid]['thread'][] = $thread_item->title;
            }
        }
        return $forum;

    The next question is how to display this multidimensional array in the view with foreach statements. Any suggestions? Thanks (see the sketch below).
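    A hedged sketch of one way to finish this: return the nested array from the model (the method name get_forums_with_threads is made up here), hand it to the view through $data, and nest two foreach loops in the view, mirroring the $forum[forumid]['title'] / ['thread'][] structure built above.

        // Controller: pass the combined forums + threads array to the view
        public function index()
        {
            $data['forums'] = $this->forum_model->get_forums_with_threads();
            $data['title']  = $this->lang->line('title');
            $this->load->view('templates/header', $data);
            $this->load->view('forum/index', $data);
            $this->load->view('templates/footer');
        }

        // View (forum/index.php): outer loop over forums, inner loop over that forum's threads
        <?php foreach ($forums as $forum_id => $forum_item): ?>
            <h2><?php echo $forum_item['title']; ?></h2>
            <div id="main">
                <?php if (!empty($forum_item['thread'])): // a forum may have no threads yet ?>
                    <?php foreach ($forum_item['thread'] as $thread_title): ?>
                        <p><?php echo $thread_title; ?></p>
                    <?php endforeach; ?>
                <?php endif; ?>
            </div>
        <?php endforeach; ?>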

  • spring mvc forward to jsp

    - by jerluc
    I currently have my web.xml configured to catch 404s and send them to my spring controller which will perform a search given the original URL request. The functionality is all there as far as the catch and search go, however the trouble begins to arise when I try to return a view. <bean class="org.springframework.web.servlet.view.ContentNegotiatingViewResolver" p:order="1"> <property name="mediaTypes"> <map> <entry key="json" value="application/json" /> <entry key="jsp" value="text/html" /> </map> </property> <property name="defaultContentType" value="application/json" /> <property name="favorPathExtension" value="true" /> <property name="viewResolvers"> <list> <bean class="org.springframework.web.servlet.view.BeanNameViewResolver" /> <bean id="viewResolver" class="org.springframework.web.servlet.view.InternalResourceViewResolver"> <property name="prefix" value="/WEB-INF/jsp/" /> <property name="suffix" value="" /> </bean> </list> </property> <property name="defaultViews"> <list> <bean class="org.springframework.web.servlet.view.json.MappingJacksonJsonView" /> </list> </property> <property name="ignoreAcceptHeader" value="true" /> </bean> This is a snippet from my MVC config file. The problem lies in resolving the view's path to the /WEB-INF/jsp/ directory. Using a logger in my JBoss setup, I can see that when I test this search controller by going to a non-existent page, the following occurs: Server can't find the request Request is sent to 404 error page (in this case my search controller) Search controller performs search Search controller returns view name (for this illustration, we'll assume test.jsp is returned) Based off of server logger, I can see that org.springframework.web.servlet.view.JstlView is initialized once my search controller returns the view name (so I can assume it is being picked up correctly by the InternalResourceViewResolver) Server attempts to return content to browser resulting in a 404! A couple things confuse me about this: I'm not 100% sure why this isn't resolving when test.jsp clearly exists under the /WEB-INF/jsp/ directory. Even if there was some other problem, why would this result in a 404? Shouldn't a 404 error page that results in another 404 theoretically create an infinite loop? Thanks for any help or pointers! 
Controller class [incomplete]: @Controller public class SiteMapController { //-------------------------------------------------------------------------------------- @Autowired(required=true) private SearchService search; @Autowired(required=true) private CatalogService catalog; //-------------------------------------------------------------------------------------- @RequestMapping(value = "/sitemap", method = RequestMethod.GET) public String sitemap (HttpServletRequest request, HttpServletResponse response) { String forwardPath = ""; try { long startTime = System.nanoTime() / 1000000; String pathQuery = (String) request.getAttribute("javax.servlet.error.request_uri"); Scanner pathScanner = new Scanner(pathQuery).useDelimiter("\\/"); String context = pathScanner.next(); List<ProductLightDTO> results = new ArrayList<ProductLightDTO>(); StringBuilder query = new StringBuilder(); String currentValue; while (pathScanner.hasNext()) { currentValue = pathScanner.next().toLowerCase(); System.out.println(currentValue); if (query.length() > 0) query.append(" AND "); if (currentValue.contains("-")) { query.append("\""); query.append(currentValue.replace("-", " ")); query.append("\""); } else { query.append(currentValue + "*"); } } //results.addAll(this.doSearch(query.toString())); System.out.println("Request: " + pathQuery); System.out.println("Built Query:" + query.toString()); //System.out.println("Result size: " + results.size()); long totalTime = (System.nanoTime() / 1000000) - startTime; System.out.println("Total TTP: " + totalTime + "ms"); if (results == null || results.size() == 0) { forwardPath = "home.jsp"; } else if (results.size() == 1) { forwardPath = "product.jsp"; } else { forwardPath = "category.jsp"; } } catch (Exception ex) { System.err.println(ex); } System.out.println("Returning view: " + forwardPath); return forwardPath; } }
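    One thing worth checking, offered as a hedged suggestion rather than a confirmed diagnosis: with an empty suffix the controller has to return "test.jsp", and JstlView then forwards to /WEB-INF/jsp/test.jsp. If the DispatcherServlet is mapped to /* in web.xml, that internal forward is routed back into the dispatcher instead of the container's JSP servlet, and the dispatcher answers with a 404 of its own. Mapping the dispatcher to / and using the usual ".jsp" suffix avoids this; the servlet name "dispatcher" below is an assumption.

        <!-- web.xml: "/" leaves *.jsp to the container's JSP servlet,
             whereas "/*" would swallow the forward to /WEB-INF/jsp/test.jsp -->
        <servlet-mapping>
            <servlet-name>dispatcher</servlet-name>
            <url-pattern>/</url-pattern>
        </servlet-mapping>

        <!-- MVC config: with the ".jsp" suffix the controller returns the logical
             names "home", "product" or "category" instead of "home.jsp" -->
        <bean id="viewResolver"
              class="org.springframework.web.servlet.view.InternalResourceViewResolver">
            <property name="prefix" value="/WEB-INF/jsp/" />
            <property name="suffix" value=".jsp" />
        </bean>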

  • Only show items owned by the currently logged in user in category list view

    - by jalbasri
    I'd like to be able to provide a "Category List" view that only shows Articles that the currently logged in user owns. Is there somewhere I can edit the query used to populate the Category List view or an extension that provides this functionality. Thank you for any help you can provide. -J. Thank you for your answer. I've written the plugin. Instead of passing in an array of Articles the onContentBeforeDisplay function is called for every article and an ArrayObject of the single article gets passed in. I've been able to identify the articles I want not to be displayed but still cannot get them not to display. The $params variable has values such as "list_show_xxx" but I can't seem to change or access them. here is a var_dump($params): object(Joomla\Registry\Registry)#190 (1) { ["data":protected]=> object(stdClass)#250 (83) { ["article_layout"]=> string(9) "_:default" ["show_title"]=> string(1) "1" ["link_titles"]=> string(1) "1" ["show_intro"]=> string(1) "1" ["info_block_position"]=> string(1) "1" ["show_category"]=> string(1) "1" ["link_category"]=> string(1) "1" ["show_parent_category"]=> string(1) "0" ["link_parent_category"]=> string(1) "0" ["show_author"]=> string(1) "1" ["link_author"]=> string(1) "0" ["show_create_date"]=> string(1) "0" ["show_modify_date"]=> string(1) "0" ["show_publish_date"]=> string(1) "1" ["show_item_navigation"]=> string(1) "1" ["show_vote"]=> string(1) "0" ["show_readmore"]=> string(1) "1" ["show_readmore_title"]=> string(1) "1" ["readmore_limit"]=> string(3) "100" ["show_tags"]=> string(1) "1" ["show_icons"]=> string(1) "1" ["show_print_icon"]=> string(1) "1" ["show_email_icon"]=> string(1) "1" ["show_hits"]=> string(1) "1" ["show_noauth"]=> string(1) "0" ["urls_position"]=> string(1) "0" ["show_publishing_options"]=> string(1) "0" ["show_article_options"]=> string(1) "0" ["save_history"]=> string(1) "1" ["history_limit"]=> int(10) ["show_urls_images_frontend"]=> string(1) "0" ["show_urls_images_backend"]=> string(1) "1" ["targeta"]=> int(0) ["targetb"]=> int(0) ["targetc"]=> int(0) ["float_intro"]=> string(4) "left" ["float_fulltext"]=> string(4) "left" ["category_layout"]=> string(9) "_:default" ["show_category_heading_title_text"]=> string(1) "1" ["show_category_title"]=> string(1) "0" ["show_description"]=> string(1) "0" ["show_description_image"]=> string(1) "0" ["maxLevel"]=> string(1) "1" ["show_empty_categories"]=> string(1) "0" ["show_no_articles"]=> string(1) "1" ["show_subcat_desc"]=> string(1) "1" ["show_cat_num_articles"]=> string(1) "0" ["show_base_description"]=> string(1) "1" ["maxLevelcat"]=> string(2) "-1" ["show_empty_categories_cat"]=> string(1) "0" ["show_subcat_desc_cat"]=> string(1) "1" ["show_cat_num_articles_cat"]=> string(1) "1" ["num_leading_articles"]=> string(1) "1" ["num_intro_articles"]=> string(1) "4" ["num_columns"]=> string(1) "1" ["num_links"]=> string(1) "4" ["multi_column_order"]=> string(1) "0" ["show_subcategory_content"]=> string(1) "0" ["show_pagination_limit"]=> string(1) "1" ["filter_field"]=> string(5) "title" ["show_headings"]=> string(1) "1" ["list_show_date"]=> string(1) "0" ["date_format"]=> string(0) "" ["list_show_hits"]=> string(1) "1" ["list_show_author"]=> string(1) "1" ["orderby_pri"]=> string(5) "order" ["orderby_sec"]=> string(5) "rdate" ["order_date"]=> string(9) "published" ["show_pagination"]=> string(1) "2" ["show_pagination_results"]=> string(1) "1" ["show_feed_link"]=> string(1) "1" ["feed_summary"]=> string(1) "0" ["feed_show_readmore"]=> string(1) "0" ["display_num"]=> string(2) "10" 
["menu_text"]=> int(1) ["show_page_heading"]=> int(0) ["secure"]=> int(0) ["page_title"]=> string(16) "Non-K2 News List" ["page_description"]=> string(33) "Bahrain Business Incubator Centre" ["page_rights"]=> NULL ["robots"]=> NULL ["access-edit"]=> bool(true) ["access-view"]=> bool(true) } } I've tried $params-data-list_show_author = "0" but then the page doesn't load, problem is accessing and changing the variables in $param. So the last step is to figure out how not to show the article. Any ideas?

  • No Hibernate Session bound to thread exception

    - by Benchik
    I have Hibernate 3.6.0.Final and Spring 3.0.0.RELEASE I get "No Hibernate Session bound to thread, and configuration does not allow creation of non-transactional one here" If I add the thread specification back in, I get "saveOrUpdate is not valid without active transaction" Any ideas? The spring-config.xml: <bean id="dataSource" class="org.apache.commons.dbcp.BasicDataSource" destroy-method="close"> <property name="driverClassName" value="org.hsqldb.jdbcDriver"/> <property name="url" value="jdbc:hsqldb:mem:jsf2demo"/> <property name="username" value="sa"/> <property name="password" value=""/> </bean> <bean id="sampleSessionFactory" class="org.springframework.orm.hibernate3.annotation.AnnotationSessionFactoryBean"> <property name="dataSource" ref="sampleDataSource"/> <property name="annotatedClasses"> <list> <value>com.maxheapsize.jsf2demo.Book</value> </list> </property> <property name="hibernateProperties"> <props> <!-- prop key="hibernate.connection.pool_size">0</prop--> <prop key="hibernate.show_sql">true</prop> <prop key="hibernate.dialect">org.hibernate.dialect.HSQLDialect</prop> <!-- prop key="transaction.factory_class">org.hibernate.transaction.JDBCTransactionFactory</prop> <prop key="hibernate.current_session_context_class">thread</prop--> <prop key="hibernate.hbm2ddl.auto">create</prop> </props> </property> </bean> <bean id="sampleDataSource" class="org.springframework.jdbc.datasource.DriverManagerDataSource"> <property name="driverClassName"> <value>org.hsqldb.jdbcDriver</value> </property> <property name="url"> <value> jdbc:hsqldb:file:/spring/db/springdb;SHUTDOWN=true </value> </property> <property name="username" value="sa"/> <property name="password" value=""/> </bean> <bean id="transactionManager" class="org.springframework.orm.hibernate3.HibernateTransactionManager"> <property name="sessionFactory" ref="sampleSessionFactory"/> </bean> <bean id="daoTxTemplate" abstract="true" class="org.springframework.transaction.interceptor.TransactionProxyFactoryBean"> <property name="transactionManager" ref="transactionManager"/> <property name="transactionAttributes"> <props> <prop key="create*"> PROPAGATION_REQUIRED,ISOLATION_READ_COMMITTED </prop> <prop key="get*"> PROPAGATION_REQUIRED,ISOLATION_READ_COMMITTED </prop> </props> </property> </bean> <bean name="openSessionInViewInterceptor" class="org.springframework.orm.hibernate3.support.OpenSessionInViewInterceptor"> <property name="sessionFactory" ref="sampleSessionFactory"/> <property name="singleSession" value="true"/> </bean> <bean id="nameDao" parent="daoTxTemplate"> <property name="target"> <bean class="com.maxheapsize.dao.NameDao"> <property name="sessionFactory" ref="sampleSessionFactory"/> </bean> </property> </bean> and the DAO: public class NameDao { private SessionFactory sessionFactory; public void setSessionFactory(SessionFactory sessionFactory) { this.sessionFactory = sessionFactory; } public SessionFactory getSessionFactory() { return sessionFactory; } @Transactional @SuppressWarnings("unchecked") public List<Name> getAll() { Session session = this.sessionFactory.getCurrentSession(); List<Name> names = (List<Name>)session.createQuery("from Name").list(); return names; } //@Transactional(propagation= Propagation.REQUIRED, readOnly=false) @Transactional public void save(Name name){ Session session = this.sessionFactory.getCurrentSession(); session.saveOrUpdate(name); session.flush(); } }
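    Two hedged observations that may explain both errors: the TransactionProxyFactoryBean template only defines transaction attributes for create* and get* method names, so save() is never run inside a transaction when called through the nameDao proxy, and nothing in this configuration switches on processing of the @Transactional annotations used in the DAO. One possible fix is to enable annotation-driven transactions; this assumes the tx namespace and its spring-tx-3.0.xsd entry are declared on the root <beans> element:

        <!-- lets @Transactional on NameDao.getAll()/save() open a transaction and
             bind a Session to the current thread -->
        <tx:annotation-driven transaction-manager="transactionManager" />

        <!-- alternatively, keep the proxy template but give it a rule that matches save(): -->
        <prop key="save*">PROPAGATION_REQUIRED,ISOLATION_READ_COMMITTED</prop>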

  • Allowing pop-up's and downloads with flex

    - by ShadowVariable
    I'm building an flex app that is basically a wrapper for a few websites. One of them is a google docs website, and I'm trying to get flex to allow downloads or popups or something that will allow me to do it. I've tried a whole bunch of solutions online and none of them have worked out. Here's the code so far: <?xml version="1.0" encoding="utf-8"?> <s:WindowedApplication xmlns:fx="http://ns.adobe.com/mxml/2009" xmlns:s="library://ns.adobe.com/flex/spark" xmlns:mx="library://ns.adobe.com/flex/mx" width="100%" height="100%" creationComplete="onCreationComplete()"> <s:layout> <s:HorizontalLayout/> </s:layout> <fx:Style source="style.css"/> <fx:Script> <![CDATA[ include "CustomHTMLLoader.as"; private function onCreationComplete():void { // ... other stuff ... var custom:object; custom.htmlHost = new MyHTMLHost(); } ]]> </fx:Script> <fx:Declarations> <!-- Place non-visual elements (e.g., services, value objects) here --> </fx:Declarations> <s:BorderContainer width="100%" height="100%" backgroundColor="#87BED0" styleName="container"> <s:Panel x="188" y="17" width="826" height="112" borderAlpha="0.15" chromeColor="#0C5A74" color="#FFFFFF" cornerRadius="20" dropShadowVisible="false" enabled="true" title="Customer Service Control Panel"> <s:controlBarContent/> <s:Button id="home" x="13" y="10" height="44" label="Phones" click="myViewStack.selectedChild=Home;" enabled="true" icon="@Embed('assets/iconmonstr-mobile-phone-6-icon-32.png')"/> <s:Button id="liveagent" x="131" y="10" height="44" label="Live Agent" click="myViewStack.selectedChild=live_agent;" icon="@Embed('assets/iconmonstr-speech-bubble-11-icon-32.png')"/> <s:Button id="bigcommerce" x="260" y="10" width="158" height="44" label="Big Commerce" click="myViewStack.selectedChild=bigcommerce_home;" icon="@Embed('assets/iconmonstr-coin-6-icon-48.png')"/> <s:Button id="faq" x="436" y="10" width="88" height="44" label="FAQ" click="myViewStack.selectedChild=freqaskquestions;" fontFamily="Arial" icon="@Embed('assets/iconmonstr-help-4-icon-32.png')"/> <s:Button id="call" x="540" y="10" width="131" height="44" label="Google Docs" click="myViewStack.selectedChild=call_notes;" icon="@Embed('assets/iconmonstr-text-file-4-icon-32.png')"/> <s:Button id="hoot" x="684" y="10" width="122" height="44" label="HootSuite" click="myViewStack.selectedChild=hoot_suite;" icon="@Embed('assets/iconmonstr-facebook-icon-32.png')"/> </s:Panel> <mx:ViewStack id="myViewStack" x="0" y="140" width="100%" height="100%" borderStyle="solid"> <s:NavigatorContent id="Home"> <s:BorderContainer width="100%" height="100%"> <mx:HTML x="0" y="0" width="100%" height="100%" borderVisible="false" horizontalScrollPolicy="off" location="http://mbvphone.mtbakervapor.org/vbx/messages/inbox" /> </s:BorderContainer> </s:NavigatorContent> <s:NavigatorContent id="bigcommerce_home"> <s:BorderContainer width="100%" height="100%"> <mx:HTML x="0" y="0" width="100%" height="100%" borderVisible="false" horizontalScrollPolicy="off" location="http://www.mtbakervapor.com/admin" /> </s:BorderContainer> </s:NavigatorContent> <s:NavigatorContent id="live_agent"> <s:BorderContainer width="100%" height="100%"> <mx:HTML x="0" y="0" width="100%" height="100%" borderVisible="false" horizontalScrollPolicy="off" location="http://mbvphone.mtbakervapor.org/liveagent/agent/#login" /> </s:BorderContainer> </s:NavigatorContent> <s:NavigatorContent id="freqaskquestions"> <s:BorderContainer width="100%" height="100%"> <mx:HTML x="0" y="0" width="100%" height="100%" borderVisible="false" horizontalScrollPolicy="off" 
location="http://mbvphone.mtbakervapor.org/liveagent/" /> </s:BorderContainer> </s:NavigatorContent> <s:NavigatorContent id="call_notes"> <s:BorderContainer width="100%" height="100%"> <mx:HTML id="html" x="0" y="0" width="100%" height="100%" borderVisible="false" horizontalScrollPolicy="off" location="https://drive.google.com/a/mtbakervapor.com/" htmlHost="{new CustomHost()}" /> </s:BorderContainer> </s:NavigatorContent> <s:NavigatorContent id="hoot_suite"> <s:BorderContainer width="100%" height="100%"> <mx:HTML x="0" y="0" width="100%" height="100%" borderVisible="false" horizontalScrollPolicy="off" location="https://hootsuite.com/login" /> </s:BorderContainer> </s:NavigatorContent> </mx:ViewStack> <s:Image x="0" y="0" width="180" height="140" scaleMode="letterbox" smooth="false" source="assets/mbvlogo_black.png"/> </s:BorderContainer> </s:WindowedApplication> and the custom class code: package { import flash.html.HTMLHost; import flash.html.HTMLWindowCreateOptions; import flash.html.HTMLLoader; public class MyHTMLHost extends HTMLHost { public function MyHTMLHost(defaultBehaviors:Boolean=true) { super(defaultBehaviors); } override public function createWindow(windowCreateOptions:HTMLWindowCreateOptions):HTMLLoader { // all JS calls and HREFs to open a new window should use the existing window return HTMLLoader.createRootWindow(); } } } any help would be appreciated.

  • How to model a relationship that NHibernate (or Hibernate) doesn’t easily support

    - by MylesRip
    I have a situation in which the ideal relationship, I believe, would involve Value Object Inheritance. This is unfortunately not supported in NHibernate so any solution I come up with will be less than perfect. Let’s say that: “Item” entities have a “Location” that can be in one of multiple different formats. These formats are completely different with no overlapping fields. We will deal with each Location in the format that is provided in the data with no attempt to convert from one format to another. Each Item has exactly one Location. “SpecialItem” is a subtype of Item, however, that is unique in that it has exactly two Locations. “Group” entities aggregate Items. “LocationGroup” is as subtype of Group. LocationGroup also has a single Location that can be in any of the formats as described above. Although I’m interested in Items by Group, I’m also interested in being able to find all items with the same Location, regardless of which group they are in. I apologize for the number of stipulations listed above, but I’m afraid that simplifying it any further wouldn’t really reflect the difficulties of the situation. Here is how the above could be diagrammed: Mapping Dilemma Diagram: (http://www.freeimagehosting.net/uploads/592ad48b1a.jpg) (I tried placing the diagram inline, but Stack Overflow won't allow that until I have accumulated more points. I understand the reasoning behind it, but it is a bit inconvenient for now.) Hmmm... Apparently I can't have multiple links either. :-( Analyzing the above, I make the following observations: I treat Locations polymorphically, referring to the supertype rather than the subtype. Logically, Locations should be “Value Objects” rather than entities since it is meaningless to differentiate between two Location objects that have all the same values. Thus equality between Locations should be based on field comparisons, not identifiers. Also, value objects should be immutable and shared references should not be allowed. Using NHibernate (or Hibernate) one would typically map value objects using the “component” keyword which would cause the fields of the class to be mapped directly into the database table that represents the containing class. Put another way, there would not be a separate “Locations” table in the database (and Locations would therefore have no identifiers). NHibernate (or Hibernate) do not currently support inheritance for value objects. My choices as I see them are: Ignore the fact that Locations should be value objects and map them as entities. This would take care of the inheritance mapping issues since NHibernate supports entity inheritance. The downside is that I then have to deal with aliasing issues. (Meaning that if multiple objects share a reference to the same Location, then changing values for one object’s Location would cause the location to change for other objects that share the reference the same Location record.) I want to avoid this if possible. Another downside is that entities are typically compared by their IDs. This would mean that two Location objects would be considered not equal even if the values of all their fields are the same. This would be invalid and unacceptable from the business perspective. Flatten Locations into a single class so that there are no longer inheritance relationships for Locations. This would allow Locations to be treated as value objects which could easily be handled by using “component” mapping in NHibernate. 
The downside in this case would be that the domain model becomes weaker, more fragile and less maintainable. Do some “creative” mapping in the hbm files in order to force Location fields to be mapped into the containing entities’ tables without using the “component” keyword. This approach is described by Colin Jack here. My situation is more complicated than the one he describes due to the fact that SpecialItem has a second Location and the fact that a different entity, LocatedGroup, also has Locations. I could probably get it to work, but the mappings would be non-intuitive and therefore hard to understand and maintain by other developers in the future. Also, I suspect that these tricky mappings would likely not be possible using Fluent NHibernate so I would use the advantages of using that tool, at least in that situation. Surely others out there have run into similar situations. I’m hoping someone who has “been there, done that” can share some wisdom. :-) So here’s the question… Which approach should be preferred in this situation? Why?
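    If option 2 (flattening) were chosen, a minimal sketch of the value object might look like the class below; the format names and fields are invented for illustration. Field-based Equals/GetHashCode give the value semantics expected of a component, and the class can then be mapped with an ordinary <component> mapping inside Item, SpecialItem (twice) and LocationGroup.

        public class Location
        {
            // Which of the original formats this instance uses; only that
            // format's fields are populated (all field names are illustrative).
            public virtual string Format { get; set; }

            public virtual string StreetAddress { get; set; }    // "postal" format
            public virtual double? Latitude { get; set; }         // "coordinate" format
            public virtual double? Longitude { get; set; }

            public override bool Equals(object obj)
            {
                var other = obj as Location;
                if (other == null) return false;
                return Format == other.Format
                    && StreetAddress == other.StreetAddress
                    && Latitude == other.Latitude
                    && Longitude == other.Longitude;
            }

            public override int GetHashCode()
            {
                return (Format ?? string.Empty).GetHashCode()
                     ^ (StreetAddress ?? string.Empty).GetHashCode()
                     ^ Latitude.GetHashCode()
                     ^ Longitude.GetHashCode();
            }
        }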

  • Objects in Java ArrayList don't get updated.

    - by Sbm007
    This is going to be a very long post, hopefully you can understand what I'm talking about and I appreciate any help. Thanks Basically, I've created a personal, non-commercial project (which I don't plan to release) that can read ZIP and RAR files. It can only read the contents in the archive, the folders inside, the files inside the folders and its properties (such as last modified date, last modified time, CRC checksum, uncompressed size, compressed size and file name). It can't extract files either, so it's really a ZIP/RAR viewer if you may. Anyway that's slightly irrelevant to my problem but I thought I'd give you some background info. Now for my problem: I can successfully list all the folders and files inside a ZIP archive, so now I want to take that raw input and link it together in some useful way. I made 2 classes: ArchiveFile (represents a file inside a ZIP) and ArchiveFolder (represents a folder inside a ZIP). They both have some useful methods such as getLastModifiedDate, getName, getPath and so on. But the difference is that ArchiveFolder can hold an ArrayList of ArchiveFile's and additional ArchiveFolder's (think of this as files and folders inside a folder). Now I want to populate my raw input into one root ArchiveFolder, which will have all the files in the root dir of the ZIP in the ArchiveFile's ArrayList and any additional folders in the root dir of the ZIP in the ArchiveFolder's ArrayList (and this process can continue on like this like a chain reaction (more files/folders in that ArchiveFolder etc etc). So I came up with the following code: while (archive.hasMore()) { String path = ""; ArchiveFolder current = root; String[] contents = archive.getName().split("/"); for (int x = 0; x < contents.length; ++x) { if (x == (contents.length - 1) && !archive.getName().endsWith("/")) { // If on last item and item is a file path += contents[x]; // Update final path ArchiveFile file = new ArchiveFile(path, contents[x], archive.getUncompressedSize(), archive.getCompressedSize(), archive.getModifiedTime(), archive.getModifiedDate(), archive.getCRC()); current.addFile(file); // Create and add the file to the current ArchiveFolder } else if (x == (contents.length - 1)) { // Else if we are on last item and it is a folder path += contents[x] + "/"; // Update final path ArchiveFolder folder = new ArchiveFolder(path, contents[x], archive.getModifiedTime(), archive.getModifiedDate()); current.addFolder(folder); // Create and add this folder to the current ArchiveFile } else { // Else if we are still traversing through the path path += contents[x] + "/"; // Update path ArchiveFolder folder = new ArchiveFolder(path, contents[x]); current.addFolder(folder); // Create and add folder (remember we do not know the modified date/time as all we know is the path, so we can deduce the name only) current = folder; // Update current ArchiveFolder to the newly created one for the next iteration of the for loop } } archive.getNext(); } Assume that root is the root ArchiveFolder (initially empty). And that archive.getName() returns the name of the current file OR folder in the following fashion: file.txt or folder1/file2.txt or folder4/folder2/ (this is a empty folder) etc. So basically the relative path from the root of the ZIP archive. Please read through the comments in the above code to familiarize yourself with it. 
Also assume that the addFolder method in an ArchiveFile, only adds the folder if it doesn't exist already (so there are no multiple folders) and it also updates the time and date of an existing folder if it is blank (ie it was a intermediate folder we only knew the name of, but now we know its details). The code for addFolder is (pretty self-explanitory): public void addFolder(ArchiveFolder folder) { int loc = folders.indexOf(folder); // folders is the ArrayList containing ArchiveFolder's if (loc == -1) { folders.add(folder); } else { ArchiveFolder real = folders.get(loc); if (real.time == null) { real.setTime(folder.getTime()); real.setDate(folder.getDate()); } } } So I can't see anything wrong with the code, it works and after finishing, the root ArchiveFolder contains all the files in the root of the ZIP as I want it to, and it contains all the direcories in the root folder as I want it to. So you'd think it works as expected, but no the ArchiveFolder's in the root folder don't contain the data inside those 'child' folders, it's just a blank folder with no additional files and folders (while it does really contain some more files/folders when viewed in WinZip). After debugging using Eclipse, the for loop does iterate through all the files (even those not included above), so this led me to believe that there is a problem with this line of the code: current = folder; What it does is, it updates the current folder (used as an intermediate by the loop) to the newly added folder. I thought Java passed by reference and thus all new operations and new additions in future ArchiveFile's and ArchiveFolder's are automatically updated, and parent ArchiveFolder's will be updated accordingly. But that does not appear to be the case? I know this is a long ass post and I really hope anyone can help me out with this. Thanks in advance.
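    Java passes object references by value, so current = folder keeps pointing at the newly constructed folder even when addFolder silently discarded it in favour of an existing one; every later addition then lands on an orphan object that is not part of the tree. A hedged fix is to have addFolder return whichever instance actually lives in the list and assign that:

        public ArchiveFolder addFolder(ArchiveFolder folder) {
            int loc = folders.indexOf(folder);
            if (loc == -1) {
                folders.add(folder);
                return folder;                 // the new folder is now part of the tree
            }
            ArchiveFolder real = folders.get(loc);
            if (real.time == null) {
                real.setTime(folder.getTime());
                real.setDate(folder.getDate());
            }
            return real;                       // hand back the instance already in the tree
        }

        // ...and in the population loop, replace
        //     current.addFolder(folder);
        //     current = folder;
        // with
        //     current = current.addFolder(folder);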

  • .NET WebRequest.PreAuthenticate not quite what it sounds like

    - by Rick Strahl
    I’ve run into the  problem a few times now: How to pre-authenticate .NET WebRequest calls doing an HTTP call to the server – essentially send authentication credentials on the very first request instead of waiting for a server challenge first? At first glance this sound like it should be easy: The .NET WebRequest object has a PreAuthenticate property which sounds like it should force authentication credentials to be sent on the first request. Looking at the MSDN example certainly looks like it does: http://msdn.microsoft.com/en-us/library/system.net.webrequest.preauthenticate.aspx Unfortunately the MSDN sample is wrong. As is the text of the Help topic which incorrectly leads you to believe that PreAuthenticate… wait for it - pre-authenticates. But it doesn’t allow you to set credentials that are sent on the first request. What this property actually does is quite different. It doesn’t send credentials on the first request but rather caches the credentials ONCE you have already authenticated once. Http Authentication is based on a challenge response mechanism typically where the client sends a request and the server responds with a 401 header requesting authentication. So the client sends a request like this: GET /wconnect/admin/wc.wc?_maintain~ShowStatus HTTP/1.1 Host: rasnote User-Agent: Mozilla/5.0 (Windows; U; Windows NT 6.1; en-US; rv:1.9.1.3) Gecko/20090824 Firefox/3.5.3 (.NET CLR 4.0.20506) Accept: text/html,application/xhtml+xml,application/xml;q=0.9,*/*;q=0.8 Accept-Language: en,de;q=0.7,en-us;q=0.3 Accept-Encoding: gzip,deflate Accept-Charset: ISO-8859-1,utf-8;q=0.7,*;q=0.7 Keep-Alive: 300 Connection: keep-alive and the server responds with: HTTP/1.1 401 Unauthorized Cache-Control: private Content-Type: text/html; charset=utf-8 Server: Microsoft-IIS/7.5 WWW-Authenticate: basic realm=rasnote" X-AspNet-Version: 2.0.50727 WWW-Authenticate: Negotiate WWW-Authenticate: NTLM WWW-Authenticate: Basic realm="rasnote" X-Powered-By: ASP.NET Date: Tue, 27 Oct 2009 00:58:20 GMT Content-Length: 5163 plus the actual error message body. The client then is responsible for re-sending the current request with the authentication token information provided (in this case Basic Auth): GET /wconnect/admin/wc.wc?_maintain~ShowStatus HTTP/1.1 Host: rasnote User-Agent: Mozilla/5.0 (Windows; U; Windows NT 6.1; en-US; rv:1.9.1.3) Gecko/20090824 Firefox/3.5.3 (.NET CLR 4.0.20506) Accept: text/html,application/xhtml+xml,application/xml;q=0.9,*/*;q=0.8 Accept-Language: en,de;q=0.7,en-us;q=0.3 Accept-Encoding: gzip,deflate Accept-Charset: ISO-8859-1,utf-8;q=0.7,*;q=0.7 Keep-Alive: 300 Connection: keep-alive Cookie: TimeTrakker=2HJ1998WH06696; WebLogCommentUser=Rick Strahl|http://www.west-wind.com/|[email protected]; WebStoreUser=b8bd0ed9 Authorization: Basic cgsf12aDpkc2ZhZG1zMA== Once the authorization info is sent the server responds with the actual page result. Now if you use WebRequest (or WebClient) the default behavior is to re-authenticate on every request that requires authorization. This means if you look in  Fiddler or some other HTTP client Proxy that captures requests you’ll see that each request re-authenticates: Here are two requests fired back to back: and you can see the 401 challenge, the 200 response for both requests. If you watch this same conversation between a browser and a server you’ll notice that the first 401 is also there but the subsequent 401 requests are not present. 
WebRequest.PreAuthenticate And this is precisely what the WebRequest.PreAuthenticate property does: It’s a caching mechanism that caches the connection credentials for a given domain in the active process and resends it on subsequent requests. It does not send credentials on the first request but it will cache credentials on subsequent requests after authentication has succeeded: string url = "http://rasnote/wconnect/admin/wc.wc?_maintain~ShowStatus"; HttpWebRequest req = HttpWebRequest.Create(url) as HttpWebRequest; req.PreAuthenticate = true; req.Credentials = new NetworkCredential("rick", "secret", "rasnote"); req.AuthenticationLevel = System.Net.Security.AuthenticationLevel.MutualAuthRequested; req.UserAgent = ": Mozilla/5.0 (Windows; U; Windows NT 6.1; en-US; rv:1.9.1.3) Gecko/20090824 Firefox/3.5.3 (.NET CLR 4.0.20506)"; WebResponse resp = req.GetResponse(); resp.Close(); req = HttpWebRequest.Create(url) as HttpWebRequest; req.PreAuthenticate = true; req.Credentials = new NetworkCredential("rstrahl", "secret", "rasnote"); req.AuthenticationLevel = System.Net.Security.AuthenticationLevel.MutualAuthRequested; req.UserAgent = ": Mozilla/5.0 (Windows; U; Windows NT 6.1; en-US; rv:1.9.1.3) Gecko/20090824 Firefox/3.5.3 (.NET CLR 4.0.20506)"; resp = req.GetResponse(); which results in the desired sequence: where only the first request doesn’t send credentials. This is quite useful as it saves quite a few round trips to the server – bascially it saves one auth request request for every authenticated request you make. In most scenarios I think you’d want to send these credentials this way but one downside to this is that there’s no way to log out the client. Since the client always sends the credentials once authenticated only an explicit operation ON THE SERVER can undo the credentials by forcing another login explicitly (ie. re-challenging with a forced 401 request). Forcing Basic Authentication Credentials on the first Request On a few occasions I’ve needed to send credentials on a first request – mainly to some oddball third party Web Services (why you’d want to use Basic Auth on a Web Service is beyond me – don’t ask but it’s not uncommon in my experience). This is true of certain services that are using Basic Authentication (especially some Apache based Web Services) and REQUIRE that the authentication is sent right from the first request. No challenge first. Ugly but there it is. Now the following works only with Basic Authentication because it’s pretty straight forward to create the Basic Authorization ‘token’ in code since it’s just an unencrypted encoding of the user name and password into base64. As you might guess this is totally unsecure and should only be used when using HTTPS/SSL connections (i’m not in this example so I can capture the Fiddler trace and my local machine doesn’t have a cert installed, but for production apps ALWAYS use SSL with basic auth). 
The idea is that you simply add the required Authorization header to the request on your own along with the authorization string that encodes the username and password: string url = "http://rasnote/wconnect/admin/wc.wc?_maintain~ShowStatus"; HttpWebRequest req = HttpWebRequest.Create(url) as HttpWebRequest; string user = "rick"; string pwd = "secret"; string domain = "www.west-wind.com"; string auth = "Basic " + Convert.ToBase64String(System.Text.Encoding.Default.GetBytes(user + ":" + pwd)); req.PreAuthenticate = true; req.AuthenticationLevel = System.Net.Security.AuthenticationLevel.MutualAuthRequested;req.Headers.Add("Authorization", auth); req.UserAgent = ": Mozilla/5.0 (Windows; U; Windows NT 6.1; en-US; rv:1.9.1.3) Gecko/20090824 Firefox/3.5.3 (.NET CLR 4.0.20506)"; WebResponse resp = req.GetResponse(); resp.Close(); This works and causes the request to immediately send auth information to the server. However, this only works with Basic Auth because you can actually create the authentication credentials easily on the client because it’s essentially clear text. The same doesn’t work for Windows or Digest authentication since you can’t easily create the authentication token on the client and send it to the server. Another issue with this approach is that PreAuthenticate has no effect when you manually force the authentication. As far as Web Request is concerned it never sent the authentication information so it’s not actually caching the value any longer. If you run 3 requests in a row like this: string url = "http://rasnote/wconnect/admin/wc.wc?_maintain~ShowStatus"; HttpWebRequest req = HttpWebRequest.Create(url) as HttpWebRequest; string user = "ricks"; string pwd = "secret"; string domain = "www.west-wind.com"; string auth = "Basic " + Convert.ToBase64String(System.Text.Encoding.Default.GetBytes(user + ":" + pwd)); req.PreAuthenticate = true; req.Headers.Add("Authorization", auth); req.UserAgent = ": Mozilla/5.0 (Windows; U; Windows NT 6.1; en-US; rv:1.9.1.3) Gecko/20090824 Firefox/3.5.3 (.NET CLR 4.0.20506)"; WebResponse resp = req.GetResponse(); resp.Close(); req = HttpWebRequest.Create(url) as HttpWebRequest; req.PreAuthenticate = true; req.Credentials = new NetworkCredential(user, pwd, domain); req.UserAgent = ": Mozilla/5.0 (Windows; U; Windows NT 6.1; en-US; rv:1.9.1.3) Gecko/20090824 Firefox/3.5.3 (.NET CLR 4.0.20506)"; resp = req.GetResponse(); resp.Close(); req = HttpWebRequest.Create(url) as HttpWebRequest; req.PreAuthenticate = true; req.Credentials = new NetworkCredential(user, pwd, domain); req.UserAgent = ": Mozilla/5.0 (Windows; U; Windows NT 6.1; en-US; rv:1.9.1.3) Gecko/20090824 Firefox/3.5.3 (.NET CLR 4.0.20506)"; resp = req.GetResponse(); you’ll find the trace looking like this: where the first request (the one we explicitly add the header to) authenticates, the second challenges, and any subsequent ones then use the PreAuthenticate credential caching. In effect you’ll end up with one extra 401 request in this scenario, which is still better than 401 challenges on each request. Getting Access to WebRequest in Classic .NET Web Service Clients If you’re running a classic .NET Web Service client (non-WCF) one issue with the above is how do you get access to the WebRequest to actually add the custom headers to do the custom Authentication described above? 
One easy way is to implement a partial class that allows you add headers with something like this: public partial class TaxService { protected NameValueCollection Headers = new NameValueCollection(); public void AddHttpHeader(string key, string value) { this.Headers.Add(key,value); } public void ClearHttpHeaders() { this.Headers.Clear(); } protected override WebRequest GetWebRequest(Uri uri) { HttpWebRequest request = (HttpWebRequest) base.GetWebRequest(uri); request.Headers.Add(this.Headers); return request; } } where TaxService is the name of the .NET generated proxy class. In code you can then call AddHttpHeader() anywhere to add additional headers which are sent as part of the GetWebRequest override. Nice and simple once you know where to hook it. For WCF there’s a bit more work involved by creating a message extension as described here: http://weblogs.asp.net/avnerk/archive/2006/04/26/Adding-custom-headers-to-every-WCF-call-_2D00_-a-solution.aspx. FWIW, I think that HTTP header manipulation should be readily available on any HTTP based Web Service client DIRECTLY without having to subclass or implement a special interface hook. But alas a little extra work is required in .NET to make this happen Not a Common Problem, but when it happens… This has been one of those issues that is really rare, but it’s bitten me on several occasions when dealing with oddball Web services – a couple of times in my own work interacting with various Web Services and a few times on customer projects that required interaction with credentials-first services. Since the servers determine the protocol, we don’t have a choice but to follow the protocol. Lovely following standards that implementers decide to ignore, isn’t it? :-}© Rick Strahl, West Wind Technologies, 2005-2010Posted in .NET  CSharp  Web Services  
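    For completeness, a short hypothetical usage of that partial-class hook, combining it with the manual Basic Authorization header from earlier in the post (the user name, password and web method are placeholders):

        var service = new TaxService();
        string auth = "Basic " + Convert.ToBase64String(
            System.Text.Encoding.Default.GetBytes("rick" + ":" + "secret"));
        service.AddHttpHeader("Authorization", auth);
        // any web method call made through the proxy now sends the header
        // on the very first request, e.g.:
        // var result = service.CalculateTax(order);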

  • Integrating JavaScript Unit Tests with Visual Studio

    - by Stephen Walther
    Modern ASP.NET web applications take full advantage of client-side JavaScript to provide better interactivity and responsiveness. If you are building an ASP.NET application in the right way, you quickly end up with lots and lots of JavaScript code. When writing server code, you should be writing unit tests. One big advantage of unit tests is that they provide you with a safety net that enable you to safely modify your existing code – for example, fix bugs, add new features, and make performance enhancements -- without breaking your existing code. Every time you modify your code, you can execute your unit tests to verify that you have not broken anything. For the same reason that you should write unit tests for your server code, you should write unit tests for your client code. JavaScript is just as susceptible to bugs as C#. There is no shortage of unit testing frameworks for JavaScript. Each of the major JavaScript libraries has its own unit testing framework. For example, jQuery has QUnit, Prototype has UnitTestJS, YUI has YUI Test, and Dojo has Dojo Objective Harness (DOH). The challenge is integrating a JavaScript unit testing framework with Visual Studio. Visual Studio and Visual Studio ALM provide fantastic support for server-side unit tests. You can easily view the results of running your unit tests in the Visual Studio Test Results window. You can set up a check-in policy which requires that all unit tests pass before your source code can be committed to the source code repository. In addition, you can set up Team Build to execute your unit tests automatically. Unfortunately, Visual Studio does not provide “out-of-the-box” support for JavaScript unit tests. MS Test, the unit testing framework included in Visual Studio, does not support JavaScript unit tests. As soon as you leave the server world, you are left on your own. The goal of this blog entry is to describe one approach to integrating JavaScript unit tests with MS Test so that you can execute your JavaScript unit tests side-by-side with your C# unit tests. The goal is to enable you to execute JavaScript unit tests in exactly the same way as server-side unit tests. You can download the source code described by this project by scrolling to the end of this blog entry. Rejected Approach: Browser Launchers One popular approach to executing JavaScript unit tests is to use a browser as a test-driver. When you use a browser as a test-driver, you open up a browser window to execute and view the results of executing your JavaScript unit tests. For example, QUnit – the unit testing framework for jQuery – takes this approach. The following HTML page illustrates how you can use QUnit to create a unit test for a function named addNumbers(). 
<!DOCTYPE HTML PUBLIC "-//W3C//DTD HTML 4.01 Transitional//EN" "http://www.w3.org/TR/html4/loose.dtd">
<html>
<head>
    <title>Using QUnit</title>
    <link rel="stylesheet" href="http://github.com/jquery/qunit/raw/master/qunit/qunit.css" type="text/css" />
</head>
<body>
    <h1 id="qunit-header">QUnit example</h1>
    <h2 id="qunit-banner"></h2>
    <div id="qunit-testrunner-toolbar"></div>
    <h2 id="qunit-userAgent"></h2>
    <ol id="qunit-tests"></ol>
    <div id="qunit-fixture">test markup, will be hidden</div>

    <script type="text/javascript" src="http://code.jquery.com/jquery-latest.js"></script>
    <script type="text/javascript" src="http://github.com/jquery/qunit/raw/master/qunit/qunit.js"></script>
    <script type="text/javascript">
        // The function to test
        function addNumbers(a, b) {
            return a + b;
        }

        // The unit test
        test("Test of addNumbers", function () {
            equals(4, addNumbers(1, 3), "1+3 should be 4");
        });
    </script>
</body>
</html>

This test verifies that calling addNumbers(1,3) returns the expected value 4. When you open this page in a browser, you can see that this test does, in fact, pass.

The idea is that you can quickly refresh this QUnit HTML JavaScript test driver page in your browser whenever you modify your JavaScript code. In other words, you can keep a browser window open and keep refreshing it over and over while you are developing your application. That way, you know very quickly whenever you have broken your JavaScript code.

While easy to set up, there are several big disadvantages to this approach to executing JavaScript unit tests. You must view your JavaScript unit test results in a different location than your server unit test results: the JavaScript unit test results appear in the browser and the server unit test results appear in the Visual Studio Test Results window. Because all of your unit test results don't appear in a single location, you are more likely to introduce bugs into your code without noticing it. And because your unit tests are not integrated with Visual Studio – in particular, MS Test – you cannot easily include your JavaScript unit tests when setting up check-in policies or when performing automated builds with Team Build.

A more sophisticated approach to using a browser as a test-driver is to automate the web browser. Instead of launching the browser and loading the test code yourself, you use a framework to automate this process. There are several different testing frameworks that support this approach:

· Selenium – Selenium is a very powerful framework for automating browser tests. You can create your tests by recording a Firefox session or by writing the test driver code in server code such as C#. You can learn more about Selenium at http://seleniumhq.org/.

· LTAF – The ASP.NET team uses the Lightweight Test Automation Framework to test JavaScript code in the ASP.NET framework. You can learn more about LTAF by visiting the project home at CodePlex: http://aspnet.codeplex.com/releases/view/35501

· jsTestDriver – This framework uses Java to automate the browser. jsTestDriver creates a server which can be used to automate multiple browsers simultaneously. This project is located at http://code.google.com/p/js-test-driver/

· TestSwarm – This framework, created by John Resig, uses PHP to automate the browser. Like jsTestDriver, the framework creates a test server. You can open multiple browsers that are automated by the test server.
Learn more about TestSwarm by visiting the following address: https://github.com/jeresig/testswarm/wiki

· Yeti – This is the framework introduced by Yahoo for automating browser tests. Yeti uses server-side JavaScript and depends on Node.js. Learn more about Yeti at http://www.yuiblog.com/blog/2010/08/25/introducing-yeti-the-yui-easy-testing-interface/

All of these frameworks are great for integration tests – however, they are not the best frameworks to use for unit tests. In one way or another, all of these frameworks depend on executing tests within the context of a "living and breathing" browser.

If you create an ASP.NET Unit Test then Visual Studio will launch a web server before executing the unit test. Why is launching a web server so bad? It is not the worst thing in the world. However, it does introduce dependencies that prevent your code from being tested in isolation. One of the defining features of a unit test – versus an integration test – is that a unit test tests code in isolation. Another problem with launching a web server when performing unit tests is that launching a web server can be slow. If you cannot execute your unit tests quickly, you are less likely to execute your unit tests each and every time you make a code change. You are much more likely to fall into the pit of failure.

Launching a browser when performing a JavaScript unit test has all of the same disadvantages as launching a web server when performing an ASP.NET unit test. Instead of testing a unit of JavaScript code in isolation, you are testing JavaScript code within the context of a particular browser. Using the frameworks listed above for integration tests makes perfect sense. However, I want to consider a different approach for creating unit tests for JavaScript code.

Using Server-Side JavaScript for JavaScript Unit Tests

A completely different approach to executing JavaScript unit tests is to perform the tests outside of any browser. If you really want to test JavaScript then you should test JavaScript and leave the browser out of the testing process. There are several ways that you can execute JavaScript on the server outside the context of any browser:

· Rhino – Rhino is an implementation of JavaScript written in Java. The Rhino project is maintained by the Mozilla project. Learn more about Rhino at http://www.mozilla.org/rhino/

· V8 – V8 is the open-source Google JavaScript engine written in C++. This is the JavaScript engine used by the Chrome web browser. You can download V8 and embed it in your project by visiting http://code.google.com/p/v8/

· JScript – JScript is the JavaScript Script Engine used by Internet Explorer (up to but not including Internet Explorer 9), Windows Script Host, and Active Server Pages.

Internet Explorer is still the most popular web browser. Therefore, I decided to focus on using the JScript Script Engine to execute JavaScript unit tests.

Using the Microsoft Script Control

There are two basic ways that you can pass JavaScript to the JScript Script Engine and execute the code: use the Microsoft Windows Script Interfaces or use the Microsoft Script Control. The difficult and proper way to execute JavaScript using the JScript Script Engine is to use the Microsoft Windows Script Interfaces. You can learn more about the Script Interfaces by visiting http://msdn.microsoft.com/en-us/library/t9d4xf28(VS.85).aspx The main disadvantage of using the Script Interfaces is that they are difficult to use from .NET.
There is a great series of articles on using the Script Interfaces from C# located at http://www.drdobbs.com/184406028. I picked the easier alternative and used the Microsoft Script Control. The Microsoft Script Control is an ActiveX control that provides a higher-level abstraction over the Windows Script Interfaces. You can download the Microsoft Script Control from here: http://www.microsoft.com/downloads/en/details.aspx?FamilyID=d7e31492-2595-49e6-8c02-1426fec693ac

After you download the Microsoft Script Control, you need to add a reference to it to your project. Select the Visual Studio menu option Project, Add Reference to open the Add Reference dialog. Select the COM tab and add the Microsoft Script Control 1.0.

Using the Script Control is easy. You call the Script Control AddCode() method to add JavaScript code to the Script Engine. Next, you call the Script Control Run() method to run a particular JavaScript function. The reference documentation for the Microsoft Script Control is located at the MSDN website: http://msdn.microsoft.com/en-us/library/aa227633%28v=vs.60%29.aspx

Creating the JavaScript Code to Test

To keep things simple, let's imagine that you want to test the following JavaScript function named addNumbers() which simply adds two numbers together:

MvcApplication1\Scripts\Math.js

function addNumbers(a, b) {
    return 5;
}

Notice that the addNumbers() method always returns the value 5. Right now, it will not pass a good unit test. Create this file and save it in your project with the name Math.js in your MVC project's Scripts folder (save the file in your actual MVC application and not your MVC test application).

Creating the JavaScript Test Helper Class

To make it easier to use the Microsoft Script Control in unit tests, we can create a helper class. This class contains two methods:

· LoadFile() – Loads a JavaScript file. Use this method to load the JavaScript file being tested or the JavaScript file containing the unit tests.

· ExecuteTest() – Executes the JavaScript code. Use this method to execute a JavaScript unit test.

Here's the code for the JavaScriptTestHelper class:

JavaScriptTestHelper.cs

using System;
using System.IO;
using Microsoft.VisualStudio.TestTools.UnitTesting;
using MSScriptControl;

namespace MvcApplication1.Tests
{
    public class JavaScriptTestHelper : IDisposable
    {
        private ScriptControl _sc;
        private TestContext _context;

        /// <summary>
        /// You need to use this helper with Unit Tests and not
        /// Basic Unit Tests because you need a Test Context
        /// </summary>
        /// <param name="testContext">Unit Test Test Context</param>
        public JavaScriptTestHelper(TestContext testContext)
        {
            if (testContext == null)
            {
                throw new ArgumentNullException("TestContext");
            }
            _context = testContext;

            _sc = new ScriptControl();
            _sc.Language = "JScript";
            _sc.AllowUI = false;
        }

        /// <summary>
        /// Load the contents of a JavaScript file into the
        /// Script Engine.
        /// </summary>
        /// <param name="path">Path to JavaScript file</param>
        public void LoadFile(string path)
        {
            var fileContents = File.ReadAllText(path);
            _sc.AddCode(fileContents);
        }

        /// <summary>
        /// Pass the path of the test that you want to execute.
        /// </summary>
        /// <param name="testMethodName">JavaScript function name</param>
        public void ExecuteTest(string testMethodName)
        {
            dynamic result = null;
            try
            {
                result = _sc.Run(testMethodName, new object[] { });
            }
            catch
            {
                var error = ((IScriptControl)_sc).Error;
                if (error != null)
                {
                    var description = error.Description;
                    var line = error.Line;
                    var column = error.Column;
                    var text = error.Text;
                    var source = error.Source;
                    if (_context != null)
                    {
                        var details = String.Format("{0} \r\nLine: {1} Column: {2}",
                                                    source, line, column);
                        _context.WriteLine(details);
                    }
                }
                throw new AssertFailedException(error.Description);
            }
        }

        public void Dispose()
        {
            _sc = null;
        }
    }
}

Notice that the JavaScriptTestHelper class requires a Test Context to be instantiated. For this reason, you can use the JavaScriptTestHelper only with a Visual Studio Unit Test and not a Basic Unit Test (these are two different types of Visual Studio project items). Add the JavaScriptTestHelper file to your MVC test application (for example, MvcApplication1.Tests).

Creating the JavaScript Unit Test

Next, we need to create the JavaScript unit test function that we will use to test the addNumbers() function. Create a folder in your MVC test project named JavaScriptTests and add the following JavaScript file to this folder:

MvcApplication1.Tests\JavaScriptTests\MathTest.js

/// <reference path="JavaScriptUnitTestFramework.js"/>

function testAddNumbers() {
    // Act
    var result = addNumbers(1, 3);

    // Assert
    assert.areEqual(4, result, "addNumbers did not return right value!");
}

The testAddNumbers() function takes advantage of another JavaScript library named JavaScriptUnitTestFramework.js. This library contains all of the code necessary to make assertions. Add the following JavaScriptUnitTestFramework.js to the same folder as the MathTest.js file:

MvcApplication1.Tests\JavaScriptTests\JavaScriptUnitTestFramework.js

var assert = {
    areEqual: function (expected, actual, message) {
        if (expected !== actual) {
            throw new Error("Expected value " + expected +
                " is not equal to " + actual + ". " + message);
        }
    }
};

There is only one type of assertion supported by this file: the areEqual() assertion. Most likely, you would want to add additional types of assertions to this file to make it easier to write your JavaScript unit tests.

Deploying the JavaScript Test Files

This step is non-intuitive. When you use Visual Studio to run unit tests, Visual Studio creates a new folder and executes a copy of the files in your project. After you run your unit tests, your Visual Studio Solution will contain a new folder named TestResults that includes a subfolder for each test run. You need to configure Visual Studio to deploy your JavaScript files to the test run folder or Visual Studio won't be able to find your JavaScript files when you execute your unit tests; instead you will get a "file not found" style error when you attempt to execute them.

You can configure Visual Studio to deploy your JavaScript files by adding a Test Settings file to your Visual Studio Solution. It is important to understand that you need to add this file to your Visual Studio Solution and not a particular Visual Studio project. Right-click your Solution in the Solution Explorer window and select the menu option Add, New Item. Select the Test Settings item and click the Add button. After you create a Test Settings file for your solution, you can indicate that you want a particular folder to be deployed whenever you perform a test run.
Select the menu option Test, Edit Test Settings to edit your test configuration file. Select the Deployment tab and select your MVC test project's JavaScriptTests folder to deploy. Click the Apply button and the Close button to save the changes and close the dialog.

Creating the Visual Studio Unit Test

The very last step is to create the Visual Studio unit test (the MS Test unit test). Add a new unit test to your MVC test project by selecting the menu option Add New Item and selecting the Unit Test project item (do not select the Basic Unit Test project item). The difference between a Basic Unit Test and a Unit Test is that a Unit Test includes a Test Context. We need this Test Context to use the JavaScriptTestHelper class that we created earlier. Enter the following test method for the new unit test:

[TestMethod]
public void TestAddNumbers()
{
    var jsHelper = new JavaScriptTestHelper(this.TestContext);

    // Load JavaScript files
    jsHelper.LoadFile("JavaScriptUnitTestFramework.js");
    jsHelper.LoadFile(@"..\..\..\MvcApplication1\Scripts\Math.js");
    jsHelper.LoadFile("MathTest.js");

    // Execute JavaScript Test
    jsHelper.ExecuteTest("testAddNumbers");
}

This code uses the JavaScriptTestHelper to load three files:

· JavaScriptUnitTestFramework.js – Contains the assert functions.

· Math.js – Contains the addNumbers() function from your MVC application which is being tested.

· MathTest.js – Contains the JavaScript unit test function.

Next, the test method calls the JavaScriptTestHelper ExecuteTest() method to execute the testAddNumbers() JavaScript function.

Running the Visual Studio JavaScript Unit Test

After you complete all of the steps described above, you can execute the JavaScript unit test just like any other unit test. You can use the keyboard combination CTRL-R, CTRL-A to run all of the tests in the current Visual Studio Solution. Alternatively, you can use the buttons in the Visual Studio toolbar to run the tests. (Unfortunately, the Run All Impacted Tests button won't work correctly because Visual Studio won't detect that your JavaScript code has changed. Therefore, you should use either the Run Tests in Current Context or Run All Tests in Solution options instead.)

The results of running the JavaScript tests appear side-by-side with the results of running the server tests in the Test Results window. For example, if you Run All Tests in Solution, you will notice that the TestAddNumbers() JavaScript test has failed. That is good, because our addNumbers() function is hard-coded to always return the value 5. If you double-click the failing JavaScript test, you can view additional details such as the JavaScript error message and the line number of the JavaScript code that failed.

Summary

The goal of this blog entry was to explain an approach to creating JavaScript unit tests that can be easily integrated with Visual Studio and Visual Studio ALM. I described how you can use the Microsoft Script Control to execute JavaScript on the server. By taking advantage of the Microsoft Script Control, we were able to execute our JavaScript unit tests side-by-side with all of our other unit tests and view the results in the standard Visual Studio Test Results window.
You can download the code discussed in this blog entry from here: http://StephenWalther.com/downloads/Blog/JavaScriptUnitTesting/JavaScriptUnitTests.zip Before running this code, you need to first install the Microsoft Script Control which you can download from here: http://www.microsoft.com/downloads/en/details.aspx?FamilyID=d7e31492-2595-49e6-8c02-1426fec693ac

    Read the article

  • postfix: Temporary lookup failure

    - by mk_89
I have followed the tutorials step by step for installing and configuring postfix:

https://help.ubuntu.com/community/Postfix
https://help.ubuntu.com/community/PostfixBasicSetupHowto

When I test the service over telnet, I get a "Temporary lookup failure" error:

telnet localhost 25
250 2.1.0 Ok
rcpt to: fmaster@localhost
451 4.3.0 <fmaster@localhost>: Temporary lookup failure
rcpt to: info@localhost
451 4.3.0 <info@localhost>: Temporary lookup failure

I have tried searching the web but I have found no solutions. Why am I getting this problem?

mail.log

Sep 24 01:03:05 bookcdb postfix/smtpd[21055]: NOQUEUE: reject: RCPT from unknown[::1]: 451 4.3.0 <info@localhost>: Temporary lookup failure; from=<root@localhost> to=<info@localhost> proto=ESMTP helo=<localhost>
Sep 24 01:03:19 bookcdb postfix/smtpd[21055]: NOQUEUE: reject: RCPT from unknown[::1]: 451 4.3.0 <root@localhost>: Temporary lookup failure; from=<root@localhost> to=<root@localhost> proto=ESMTP helo=<localhost>
Sep 24 01:08:19 bookcdb postfix/smtpd[21055]: timeout after RCPT from unknown[::1]
Sep 24 01:08:19 bookcdb postfix/smtpd[21055]: disconnect from unknown[::1]
Sep 24 01:10:49 bookcdb postfix/anvil[21059]: statistics: max connection rate 1/60s for (smtp:::1) at Sep 24 01:00:49
Sep 24 01:10:49 bookcdb postfix/anvil[21059]: statistics: max connection count 1 for (smtp:::1) at Sep 24 01:00:49
Sep 24 01:10:49 bookcdb postfix/anvil[21059]: statistics: max cache size 1 at Sep 24 01:00:49
Sep 24 01:15:36 bookcdb postfix/smtpd[22175]: error: open database /var/lib/mailman/data/aliases.db: No such file or directory
Sep 24 01:15:36 bookcdb postfix/smtpd[22175]: warning: hostname localhost does not resolve to address ::1: No address associated with hostname
Sep 24 01:15:36 bookcdb postfix/smtpd[22175]: connect from unknown[::1]
Sep 24 01:15:55 bookcdb postfix/trivial-rewrite[22195]: error: open database /etc/postfix/transport.db: No such file or directory
Sep 24 01:15:55 bookcdb postfix/trivial-rewrite[22195]: warning: hash:/etc/postfix/transport is unavailable. open database /etc/postfix/transport.db: No such file or directory
Sep 24 01:15:55 bookcdb postfix/trivial-rewrite[22195]: warning: hash:/etc/postfix/transport lookup error for "*"
Sep 24 01:15:55 bookcdb postfix/trivial-rewrite[22195]: warning: hash:/etc/postfix/transport is unavailable. open database /etc/postfix/transport.db: No such file or directory
Sep 24 01:15:55 bookcdb postfix/trivial-rewrite[22195]: warning: hash:/etc/postfix/transport lookup error for "*"
Sep 24 01:15:55 bookcdb postfix/trivial-rewrite[22195]: warning: hash:/etc/postfix/transport is unavailable. open database /etc/postfix/transport.db: No such file or directory
Sep 24 01:15:55 bookcdb postfix/trivial-rewrite[22195]: warning: hash:/etc/postfix/transport lookup error for "root@localhost"
Sep 24 01:15:55 bookcdb postfix/trivial-rewrite[22195]: warning: transport_maps lookup failure
Sep 24 01:15:59 bookcdb postfix/trivial-rewrite[22195]: warning: hash:/etc/postfix/transport is unavailable.
open database /etc/postfix/transport.db: No such file or directory Sep 24 01:15:59 bookcdb postfix/trivial-rewrite[22195]: warning: hash:/etc/postfix/transport lookup error for "fmaster@localhost" Sep 24 01:15:59 bookcdb postfix/trivial-rewrite[22195]: warning: transport_maps lookup failure Sep 24 01:15:59 bookcdb postfix/smtpd[22175]: NOQUEUE: reject: RCPT from unknown[::1]: 451 4.3.0 <fmaster@localhost>: Temporary lookup failure; from=<root@localhost> to=<fmaster@localhost> proto=ESMTP helo=<localhost> Sep 24 01:16:30 postfix/smtpd[22175]: last message repeated 5 times Sep 24 01:16:30 bookcdb postfix/smtpd[22175]: disconnect from unknown[::1] Sep 24 01:19:50 bookcdb postfix/anvil[22177]: statistics: max connection rate 1/60s for (smtp:::1) at Sep 24 01:15:36 Sep 24 01:19:50 bookcdb postfix/anvil[22177]: statistics: max connection count 1 for (smtp:::1) at Sep 24 01:15:36 Sep 24 01:19:50 bookcdb postfix/anvil[22177]: statistics: max cache size 1 at Sep 24 01:15:36 Sep 24 01:20:32 bookcdb postfix/qmgr[21039]: D0C596E0B34: from=<[email protected]>, size=442, nrcpt=1 (queue active) Sep 24 01:20:32 bookcdb postfix/qmgr[21039]: warning: connect to transport private/smtp-amavis: No such file or directory Sep 24 01:20:32 bookcdb postfix/error[22402]: D0C596E0B34: to=<[email protected]>, relay=none, delay=5369, delays=5369/0.01/0/0.09, dsn=4.3.0, status=deferred (mail transport unavailable) Sep 24 01:24:16 bookcdb postfix/smtpd[22573]: error: open database /var/lib/mailman/data/aliases.db: No such file or directory Sep 24 01:24:16 bookcdb postfix/smtpd[22573]: warning: hostname localhost does not resolve to address ::1: No address associated with hostname Sep 24 01:24:16 bookcdb postfix/smtpd[22573]: connect from unknown[::1] Sep 24 01:24:43 bookcdb postfix/trivial-rewrite[22594]: error: open database /etc/postfix/transport.db: No such file or directory Sep 24 01:24:43 bookcdb postfix/trivial-rewrite[22594]: warning: hash:/etc/postfix/transport is unavailable. open database /etc/postfix/transport.db: No such file or directory Sep 24 01:24:43 bookcdb postfix/trivial-rewrite[22594]: warning: hash:/etc/postfix/transport lookup error for "*" Sep 24 01:24:43 bookcdb postfix/trivial-rewrite[22594]: warning: hash:/etc/postfix/transport is unavailable. open database /etc/postfix/transport.db: No such file or directory Sep 24 01:24:43 bookcdb postfix/trivial-rewrite[22594]: warning: hash:/etc/postfix/transport lookup error for "*" Sep 24 01:24:43 bookcdb postfix/trivial-rewrite[22594]: warning: hash:/etc/postfix/transport is unavailable. open database /etc/postfix/transport.db: No such file or directory Sep 24 01:24:43 bookcdb postfix/trivial-rewrite[22594]: warning: hash:/etc/postfix/transport lookup error for "root@localhost" Sep 24 01:24:43 bookcdb postfix/trivial-rewrite[22594]: warning: transport_maps lookup failure Sep 24 01:25:14 bookcdb postfix/trivial-rewrite[22594]: warning: hash:/etc/postfix/transport is unavailable. 
open database /etc/postfix/transport.db: No such file or directory Sep 24 01:25:14 bookcdb postfix/trivial-rewrite[22594]: warning: hash:/etc/postfix/transport lookup error for "[email protected]" Sep 24 01:25:14 bookcdb postfix/trivial-rewrite[22594]: warning: transport_maps lookup failure Sep 24 01:25:14 bookcdb postfix/smtpd[22573]: NOQUEUE: reject: RCPT from unknown[::1]: 451 4.3.0 <[email protected]>: Temporary lookup failure; from=<root@localhost> to=<[email protected]> proto=ESMTP helo=<localhost> Sep 24 01:25:32 bookcdb postfix/qmgr[21039]: 2E5C36E0A07: from=<[email protected]>, size=438, nrcpt=1 (queue active) Sep 24 01:25:32 bookcdb postfix/qmgr[21039]: warning: connect to transport private/smtp-amavis: No such file or directory Sep 24 01:25:32 bookcdb postfix/qmgr[21039]: 0EA3A6E0ACC: from=<[email protected]>, size=438, nrcpt=1 (queue active) Sep 24 01:25:32 bookcdb postfix/error[22631]: 2E5C36E0A07: to=<[email protected]>, orig_to=<root>, relay=none, delay=30203, delays=30203/0.01/0/0.1, dsn=4.3.0, status=deferred (mail transport unavailable) Sep 24 01:25:32 bookcdb postfix/error[22632]: 0EA3A6E0ACC: to=<[email protected]>, orig_to=<root>, relay=none, delay=30115, delays=30115/0.01/0/0.11, dsn=4.3.0, status=deferred (mail transport unavailable) Sep 24 01:25:58 bookcdb postfix/smtpd[22573]: warning: non-SMTP command from unknown[::1]: subject: fdf Sep 24 01:25:58 bookcdb postfix/smtpd[22573]: disconnect from unknown[::1] Sep 24 01:26:01 bookcdb postfix/smtpd[22573]: warning: hostname localhost does not resolve to address ::1: No address associated with hostname Sep 24 01:26:01 bookcdb postfix/smtpd[22573]: connect from unknown[::1] Sep 24 01:26:10 bookcdb postfix/trivial-rewrite[22594]: warning: hash:/etc/postfix/transport is unavailable. open database /etc/postfix/transport.db: No such file or directory Sep 24 01:26:10 bookcdb postfix/trivial-rewrite[22594]: warning: hash:/etc/postfix/transport lookup error for "root@locahost" Sep 24 01:26:10 bookcdb postfix/trivial-rewrite[22594]: warning: transport_maps lookup failure Sep 24 01:26:37 bookcdb postfix/trivial-rewrite[22594]: warning: hash:/etc/postfix/transport is unavailable. open database /etc/postfix/transport.db: No such file or directory Sep 24 01:26:37 bookcdb postfix/trivial-rewrite[22594]: warning: hash:/etc/postfix/transport lookup error for "fmaster@localhost" Sep 24 01:26:37 bookcdb postfix/trivial-rewrite[22594]: warning: transport_maps lookup failure Sep 24 01:26:37 bookcdb postfix/smtpd[22573]: NOQUEUE: reject: RCPT from unknown[::1]: 451 4.3.0 <fmaster@localhost>: Temporary lookup failure; from=<root@locahost> to=<fmaster@localhost> proto=SMTP Sep 24 01:26:45 bookcdb postfix/smtpd[22573]: disconnect from unknown[::1] Sep 24 01:30:05 bookcdb postfix/anvil[22575]: statistics: max connection rate 1/60s for (smtp:::1) at Sep 24 01:24:16 Sep 24 01:30:05 bookcdb postfix/anvil[22575]: statistics: max connection count 1 for (smtp:::1) at Sep 24 01:24:16 Sep 24 01:30:05 bookcdb postfix/anvil[22575]: statistics: max cache size 1 at Sep 24 01:24:16 Sep 24 01:34:57 bookcdb dovecot: master: Dovecot v2.0.19 starting up (core dumps disabled) Sep 24 01:35:02 bookcdb amavis[1009]: starting. /usr/sbin/amavisd-new at mail.bookcdb.com amavisd-new-2.6.5 (20110407), Unicode aware Sep 24 01:35:02 bookcdb amavis[1009]: Perl version 5.014002 Sep 24 01:35:05 bookcdb amavis[1155]: Net::Server: Group Not Defined. Defaulting to EGID '114 114' Sep 24 01:35:05 bookcdb amavis[1155]: Net::Server: User Not Defined. 
Defaulting to EUID '108' Sep 24 01:35:05 bookcdb amavis[1155]: Module Amavis::Conf 2.208 Sep 24 01:35:05 bookcdb amavis[1155]: Module Archive::Zip 1.30 Sep 24 01:35:05 bookcdb amavis[1155]: Module BerkeleyDB 0.49 Sep 24 01:35:05 bookcdb amavis[1155]: Module Compress::Zlib 2.033 Sep 24 01:35:05 bookcdb amavis[1155]: Module Convert::TNEF 0.17 Sep 24 01:35:05 bookcdb amavis[1155]: Module Convert::UUlib 1.4 Sep 24 01:35:05 bookcdb amavis[1155]: Module Crypt::OpenSSL::RSA 0.27 Sep 24 01:35:05 bookcdb amavis[1155]: Module DB_File 1.821 Sep 24 01:35:05 bookcdb amavis[1155]: Module Digest::MD5 2.51 Sep 24 01:35:05 bookcdb amavis[1155]: Module Digest::SHA 5.61 Sep 24 01:35:05 bookcdb amavis[1155]: Module IO::Socket::INET6 2.69 Sep 24 01:35:05 bookcdb amavis[1155]: Module MIME::Entity 5.502 Sep 24 01:35:05 bookcdb amavis[1155]: Module MIME::Parser 5.502 Sep 24 01:35:05 bookcdb amavis[1155]: Module MIME::Tools 5.502 Sep 24 01:35:05 bookcdb amavis[1155]: Module Mail::DKIM::Signer 0.39 Sep 24 01:35:05 bookcdb amavis[1155]: Module Mail::DKIM::Verifier 0.39 Sep 24 01:35:05 bookcdb amavis[1155]: Module Mail::Header 2.08 Sep 24 01:35:05 bookcdb amavis[1155]: Module Mail::Internet 2.08 Sep 24 01:35:05 bookcdb amavis[1155]: Module Mail::SPF v2.008 Sep 24 01:35:05 bookcdb amavis[1155]: Module Mail::SpamAssassin 3.003002 Sep 24 01:35:05 bookcdb amavis[1155]: Module Net::DNS 0.66 Sep 24 01:35:05 bookcdb amavis[1155]: Module Net::Server 0.99 Sep 24 01:35:05 bookcdb amavis[1155]: Module NetAddr::IP 4.058 Sep 24 01:35:05 bookcdb amavis[1155]: Module Socket6 0.23 Sep 24 01:35:05 bookcdb amavis[1155]: Module Time::HiRes 1.972101 Sep 24 01:35:05 bookcdb amavis[1155]: Module URI 1.59 Sep 24 01:35:05 bookcdb amavis[1155]: Module Unix::Syslog 1.1 Sep 24 01:35:05 bookcdb amavis[1155]: Amavis::DB code loaded Sep 24 01:35:05 bookcdb amavis[1155]: Amavis::Cache code loaded Sep 24 01:35:05 bookcdb amavis[1155]: SQL base code NOT loaded Sep 24 01:35:05 bookcdb amavis[1155]: SQL::Log code NOT loaded Sep 24 01:35:05 bookcdb amavis[1155]: SQL::Quarantine NOT loaded Sep 24 01:35:05 bookcdb amavis[1155]: Lookup::SQL code NOT loaded Sep 24 01:35:05 bookcdb amavis[1155]: Lookup::LDAP code NOT loaded Sep 24 01:35:05 bookcdb amavis[1155]: AM.PDP-in proto code loaded Sep 24 01:35:05 bookcdb amavis[1155]: SMTP-in proto code loaded Sep 24 01:35:05 bookcdb amavis[1155]: Courier proto code NOT loaded Sep 24 01:35:05 bookcdb amavis[1155]: SMTP-out proto code loaded Sep 24 01:35:05 bookcdb amavis[1155]: Pipe-out proto code NOT loaded Sep 24 01:35:05 bookcdb amavis[1155]: BSMTP-out proto code NOT loaded Sep 24 01:35:05 bookcdb amavis[1155]: Local-out proto code loaded Sep 24 01:35:05 bookcdb amavis[1155]: OS_Fingerprint code NOT loaded Sep 24 01:35:05 bookcdb amavis[1155]: ANTI-VIRUS code loaded Sep 24 01:35:05 bookcdb amavis[1155]: ANTI-SPAM code loaded Sep 24 01:35:05 bookcdb amavis[1155]: ANTI-SPAM-EXT code NOT loaded Sep 24 01:35:05 bookcdb amavis[1155]: ANTI-SPAM-C code NOT loaded Sep 24 01:35:05 bookcdb amavis[1155]: ANTI-SPAM-SA code loaded Sep 24 01:35:05 bookcdb amavis[1155]: Unpackers code loaded Sep 24 01:35:05 bookcdb amavis[1155]: DKIM code loaded Sep 24 01:35:05 bookcdb amavis[1155]: Tools code NOT loaded Sep 24 01:35:05 bookcdb amavis[1155]: Found $file at /usr/bin/file Sep 24 01:35:05 bookcdb amavis[1155]: No $altermime, not using it Sep 24 01:35:05 bookcdb amavis[1155]: Internal decoder for .mail Sep 24 01:35:05 bookcdb amavis[1155]: No decoder for .F Sep 24 01:35:05 bookcdb amavis[1155]: Found decoder for .Z at 
/bin/uncompress Sep 24 01:35:05 bookcdb amavis[1155]: Internal decoder for .gz Sep 24 01:35:05 bookcdb amavis[1155]: Found decoder for .bz2 at /bin/bzip2 -d Sep 24 01:35:05 bookcdb amavis[1155]: No decoder for .lzo tried: lzop -d Sep 24 01:35:05 bookcdb amavis[1155]: No decoder for .rpm tried: rpm2cpio.pl, rpm2cpio Sep 24 01:35:05 bookcdb amavis[1155]: Found decoder for .cpio at /bin/pax Sep 24 01:35:05 bookcdb amavis[1155]: Found decoder for .tar at /bin/pax Sep 24 01:35:05 bookcdb amavis[1155]: Found decoder for .deb at /usr/bin/ar Sep 24 01:35:05 bookcdb amavis[1155]: Internal decoder for .zip Sep 24 01:35:05 bookcdb amavis[1155]: No decoder for .7z tried: 7zr, 7za, 7z Sep 24 01:35:05 bookcdb amavis[1155]: No decoder for .rar tried: unrar-free Sep 24 01:35:05 bookcdb amavis[1155]: No decoder for .arj tried: arj, unarj Sep 24 01:35:05 bookcdb amavis[1155]: No decoder for .arc tried: nomarch, arc Sep 24 01:35:05 bookcdb amavis[1155]: No decoder for .zoo tried: zoo Sep 24 01:35:05 bookcdb amavis[1155]: No decoder for .lha Sep 24 01:35:05 bookcdb amavis[1155]: No decoder for .doc tried: ripole Sep 24 01:35:05 bookcdb amavis[1155]: No decoder for .cab tried: cabextract Sep 24 01:35:05 bookcdb amavis[1155]: No decoder for .tnef Sep 24 01:35:05 bookcdb amavis[1155]: Internal decoder for .tnef Sep 24 01:35:05 bookcdb amavis[1155]: No decoder for .exe tried: unrar-free; arj, unarj Sep 24 01:35:05 bookcdb amavis[1155]: Using primary internal av scanner code for ClamAV-clamd Sep 24 01:35:05 bookcdb amavis[1155]: Found secondary av scanner ClamAV-clamscan at /usr/bin/clamscan Sep 24 01:35:05 bookcdb amavis[1155]: Creating db in /var/lib/amavis/db/; BerkeleyDB 0.49, libdb 5.1 Sep 24 01:35:05 bookcdb postgrey[1219]: Process Backgrounded Sep 24 01:35:05 bookcdb postgrey[1219]: 2012/09/24-01:35:05 postgrey (type Net::Server::Multiplex) starting! pid(1219) Sep 24 01:35:05 bookcdb postgrey[1219]: Using default listen value of 128 Sep 24 01:35:05 bookcdb postgrey[1219]: Binding to TCP port 10023 on host localhost#012 Sep 24 01:35:05 bookcdb postgrey[1219]: Setting gid to "116 116" Sep 24 01:35:05 bookcdb postgrey[1219]: Setting uid to "110" Sep 24 01:35:06 bookcdb spamd[1231]: logger: removing stderr method Sep 24 01:35:08 bookcdb spamd[1233]: spamd: server started on port 783/tcp (running version 3.3.2) Sep 24 01:35:08 bookcdb spamd[1233]: spamd: server pid: 1233 Sep 24 01:35:08 bookcdb spamd[1233]: spamd: server successfully spawned child process, pid 1238 Sep 24 01:35:08 bookcdb spamd[1233]: spamd: server successfully spawned child process, pid 1240 Sep 24 01:35:08 bookcdb spamd[1233]: prefork: child states: SI Sep 24 01:35:08 bookcdb spamd[1233]: prefork: child states: II Sep 24 01:35:15 bookcdb postfix/master[1729]: daemon started -- version 2.9.3, configuration /etc/postfix Sep 24 01:36:08 bookcdb postfix/smtpd[1995]: error: open database /var/lib/mailman/data/aliases.db: No such file or directory Sep 24 01:36:08 bookcdb postfix/smtpd[1995]: warning: hostname localhost does not resolve to address ::1: No address associated with hostname Sep 24 01:36:08 bookcdb postfix/smtpd[1995]: connect from unknown[::1] Sep 24 01:36:51 bookcdb postfix/trivial-rewrite[1999]: error: open database /etc/postfix/transport.db: No such file or directory Sep 24 01:36:51 bookcdb postfix/trivial-rewrite[1999]: warning: hash:/etc/postfix/transport is unavailable. 
open database /etc/postfix/transport.db: No such file or directory Sep 24 01:36:51 bookcdb postfix/trivial-rewrite[1999]: warning: hash:/etc/postfix/transport lookup error for "*" Sep 24 01:36:51 bookcdb postfix/trivial-rewrite[1999]: warning: hash:/etc/postfix/transport is unavailable. open database /etc/postfix/transport.db: No such file or directory Sep 24 01:36:51 bookcdb postfix/trivial-rewrite[1999]: warning: hash:/etc/postfix/transport lookup error for "*" Sep 24 01:36:51 bookcdb postfix/trivial-rewrite[1999]: warning: hash:/etc/postfix/transport is unavailable. open database /etc/postfix/transport.db: No such file or directory Sep 24 01:36:51 bookcdb postfix/trivial-rewrite[1999]: warning: hash:/etc/postfix/transport lookup error for "root@localhost" Sep 24 01:36:51 bookcdb postfix/trivial-rewrite[1999]: warning: transport_maps lookup failure Sep 24 01:37:00 bookcdb postfix/trivial-rewrite[1999]: warning: hash:/etc/postfix/transport is unavailable. open database /etc/postfix/transport.db: No such file or directory Sep 24 01:37:00 bookcdb postfix/trivial-rewrite[1999]: warning: hash:/etc/postfix/transport lookup error for "fmaster@localhost" Sep 24 01:37:00 bookcdb postfix/trivial-rewrite[1999]: warning: transport_maps lookup failure Sep 24 01:37:00 bookcdb postfix/smtpd[1995]: NOQUEUE: reject: RCPT from unknown[::1]: 451 4.3.0 <fmaster@localhost>: Temporary lookup failure; from=<root@localhost> to=<fmaster@localhost> proto=SMTP Sep 24 01:37:28 bookcdb dovecot: imap-login: Login: user=<mkadiri89>, method=PLAIN, rip=::1, lip=::1, mpid=2730, secured Sep 24 01:37:28 bookcdb dovecot: imap(mkadiri89): Disconnected: Logged out bytes=44/697 Sep 24 01:37:29 bookcdb dovecot: imap-login: Login: user=<mkadiri89>, method=PLAIN, rip=::1, lip=::1, mpid=2732, secured Sep 24 01:37:29 bookcdb dovecot: imap(mkadiri89): Disconnected: Logged out bytes=464/1303 Sep 24 01:37:29 bookcdb dovecot: imap-login: Login: user=<mkadiri89>, method=PLAIN, rip=::1, lip=::1, mpid=2734, secured Sep 24 01:37:29 bookcdb dovecot: imap(mkadiri89): Disconnected: Logged out bytes=117/1395 Sep 24 01:37:31 bookcdb dovecot: imap-login: Login: user=<mkadiri89>, method=PLAIN, rip=::1, lip=::1, mpid=2737, secured Sep 24 01:37:31 bookcdb dovecot: imap(mkadiri89): Disconnected: Logged out bytes=117/1395 Sep 24 01:37:49 bookcdb dovecot: imap-login: Login: user=<root>, method=PLAIN, rip=::1, lip=::1, mpid=2739, secured Sep 24 01:37:49 bookcdb dovecot: imap: Error: user root: Invalid settings in userdb: userdb returned 0 as uid Sep 24 01:37:49 bookcdb dovecot: imap: Error: Invalid user settings. Refer to server log for more information. Sep 24 01:37:54 bookcdb dovecot: imap-login: Login: user=<root>, method=PLAIN, rip=::1, lip=::1, mpid=2741, secured Sep 24 01:37:54 bookcdb dovecot: imap: Error: user root: Invalid settings in userdb: userdb returned 0 as uid Sep 24 01:37:54 bookcdb dovecot: imap: Error: Invalid user settings. Refer to server log for more information. 
Sep 24 01:38:04 bookcdb dovecot: imap-login: Login: user=<info>, method=PLAIN, rip=::1, lip=::1, mpid=2743, secured Sep 24 01:38:04 bookcdb dovecot: imap(info): Disconnected: Logged out bytes=44/697 Sep 24 01:38:04 bookcdb dovecot: imap-login: Login: user=<info>, method=PLAIN, rip=::1, lip=::1, mpid=2745, secured Sep 24 01:38:04 bookcdb dovecot: imap(info): Disconnected: Logged out bytes=464/1303 Sep 24 01:38:04 bookcdb dovecot: imap-login: Login: user=<info>, method=PLAIN, rip=::1, lip=::1, mpid=2747, secured Sep 24 01:38:04 bookcdb dovecot: imap(info): Disconnected: Logged out bytes=117/1395 Sep 24 01:38:55 bookcdb postfix/smtpd[1995]: disconnect from unknown[::1] Sep 24 01:38:58 bookcdb postfix/smtpd[1995]: warning: hostname localhost does not resolve to address ::1: No address associated with hostname Sep 24 01:38:58 bookcdb postfix/smtpd[1995]: connect from unknown[::1] Sep 24 01:39:11 bookcdb postfix/trivial-rewrite[1999]: warning: hash:/etc/postfix/transport is unavailable. open database /etc/postfix/transport.db: No such file or directory Sep 24 01:39:11 bookcdb postfix/trivial-rewrite[1999]: warning: hash:/etc/postfix/transport lookup error for "info@localhost" Sep 24 01:39:11 bookcdb postfix/trivial-rewrite[1999]: warning: transport_maps lookup failure Sep 24 01:39:37 bookcdb postfix/smtpd[1995]: NOQUEUE: reject: RCPT from unknown[::1]: 451 4.3.0 <fmaster@localhost>: Temporary lookup failure; from=<info@localhost> to=<fmaster@localhost> proto=SMTP Sep 24 01:39:47 bookcdb postfix/smtpd[1995]: NOQUEUE: reject: RCPT from unknown[::1]: 451 4.3.0 <fmaster@localhost>: Temporary lookup failure; from=<info@localhost> to=<fmaster@localhost> proto=SMTP Sep 24 01:40:13 bookcdb postfix/smtpd[1995]: NOQUEUE: reject: RCPT from unknown[::1]: 451 4.3.0 <info@localhost>: Temporary lookup failure; from=<info@localhost> to=<info@localhost> proto=SMTP Sep 24 01:43:08 bookcdb postfix/smtpd[1995]: disconnect from unknown[::1] Sep 24 01:46:08 bookcdb postfix/anvil[1998]: statistics: max connection rate 1/60s for (smtp:::1) at Sep 24 01:36:08 Sep 24 01:46:08 bookcdb postfix/anvil[1998]: statistics: max connection count 1 for (smtp:::1) at Sep 24 01:36:08 Sep 24 01:46:08 bookcdb postfix/anvil[1998]: statistics: max cache size 1 at Sep 24 01:36:08 Sep 24 01:48:05 bookcdb dovecot: imap-login: Login: user=<info>, method=PLAIN, rip=::1, lip=::1, mpid=2805, secured Sep 24 01:48:05 bookcdb dovecot: imap(info): Disconnected: Logged out bytes=85/681 Sep 24 01:51:30 bookcdb dovecot: imap-login: Login: user=<info>, method=PLAIN, rip=::1, lip=::1, mpid=2815, secured Sep 24 01:51:30 bookcdb dovecot: imap(info): Disconnected: Logged out bytes=117/1395 Sep 24 02:05:15 bookcdb postfix/qmgr[1745]: 2EA006E0B32: from=<[email protected]>, size=439, nrcpt=1 (queue active) Sep 24 02:05:15 bookcdb postfix/qmgr[1745]: warning: connect to transport private/smtp-amavis: No such file or directory Sep 24 02:05:15 bookcdb postfix/qmgr[1745]: E76996E09B2: from=<[email protected]>, size=439, nrcpt=1 (queue active) Sep 24 02:05:15 bookcdb postfix/error[2842]: 2EA006E0B32: to=<[email protected]>, relay=none, delay=8391, delays=8391/0.05/0/0.09, dsn=4.3.0, status=deferred (mail transport unavailable) Sep 24 02:05:16 bookcdb postfix/error[2843]: E76996E09B2: to=<[email protected]>, relay=none, delay=8416, delays=8416/0.03/0/0.11, dsn=4.3.0, status=deferred (mail transport unavailable) Sep 24 02:30:15 bookcdb postfix/qmgr[1745]: D0C596E0B34: from=<[email protected]>, size=442, nrcpt=1 (queue active) Sep 24 02:30:15 bookcdb 
postfix/qmgr[1745]: warning: connect to transport private/smtp-amavis: No such file or directory Sep 24 02:30:15 bookcdb postfix/error[2914]: D0C596E0B34: to=<[email protected]>, relay=none, delay=9551, delays=9551/0.01/0/0.08, dsn=4.3.0, status=deferred (mail transport unavailable) Sep 24 02:35:15 bookcdb postfix/qmgr[1745]: 2E5C36E0A07: from=<[email protected]>, size=438, nrcpt=1 (queue active) Sep 24 02:35:15 bookcdb postfix/qmgr[1745]: warning: connect to transport private/smtp-amavis: No such file or directory Sep 24 02:35:15 bookcdb postfix/qmgr[1745]: 0EA3A6E0ACC: from=<[email protected]>, size=438, nrcpt=1 (queue active) Sep 24 02:35:15 bookcdb postfix/error[2926]: 2E5C36E0A07: to=<[email protected]>, orig_to=<root>, relay=none, delay=34386, delays=34386/0.03/0/0.1, dsn=4.3.0, status=deferred (mail transport unavailable) Sep 24 02:35:15 bookcdb postfix/error[2927]: 0EA3A6E0ACC: to=<[email protected]>, orig_to=<root>, relay=none, delay=34299, delays=34298/0.02/0/0.12, dsn=4.3.0, status=deferred (mail transport unavailable) Sep 24 03:15:15 bookcdb postfix/qmgr[1745]: 2EA006E0B32: from=<[email protected]>, size=439, nrcpt=1 (queue active) Sep 24 03:15:15 bookcdb postfix/qmgr[1745]: warning: connect to transport private/smtp-amavis: No such file or directory Sep 24 03:15:15 bookcdb postfix/qmgr[1745]: E76996E09B2: from=<[email protected]>, size=439, nrcpt=1 (queue active) Sep 24 03:15:15 bookcdb postfix/error[3025]: 2EA006E0B32: to=<[email protected]>, relay=none, delay=12590, delays=12590/0.01/0/0.07, dsn=4.3.0, status=deferred (mail transport unavailable) Sep 24 03:15:15 bookcdb postfix/error[3026]: E76996E09B2: to=<[email protected]>, relay=none, delay=12616, delays=12616/0.01/0/0.09, dsn=4.3.0, status=deferred (mail transport unavailable) Sep 24 03:40:15 bookcdb postfix/qmgr[1745]: D0C596E0B34: from=<[email protected]>, size=442, nrcpt=1 (queue active) Sep 24 03:40:15 bookcdb postfix/qmgr[1745]: warning: connect to transport private/smtp-amavis: No such file or directory Sep 24 03:40:15 bookcdb postfix/error[3097]: D0C596E0B34: to=<[email protected]>, relay=none, delay=13752, delays=13752/0.01/0/0.07, dsn=4.3.0, status=deferred (mail transport unavailable) Sep 24 03:45:15 bookcdb postfix/qmgr[1745]: 2E5C36E0A07: from=<[email protected]>, size=438, nrcpt=1 (queue active) Sep 24 03:45:15 bookcdb postfix/qmgr[1745]: warning: connect to transport private/smtp-amavis: No such file or directory Sep 24 03:45:15 bookcdb postfix/qmgr[1745]: 0EA3A6E0ACC: from=<[email protected]>, size=438, nrcpt=1 (queue active) Sep 24 03:45:15 bookcdb postfix/error[3129]: 2E5C36E0A07: to=<[email protected]>, orig_to=<root>, relay=none, delay=38586, delays=38586/0.01/0/0.09, dsn=4.3.0, status=deferred (mail transport unavailable) Sep 24 03:45:15 bookcdb postfix/error[3130]: 0EA3A6E0ACC: to=<[email protected]>, orig_to=<root>, relay=none, delay=38498, delays=38498/0.01/0/0.08, dsn=4.3.0, status=deferred (mail transport unavailable) postconf -n alias_database = hash:/etc/aliases alias_maps = hash:/etc/aliases, hash:/var/lib/mailman/data/aliases append_dot_mydomain = no biff = no broken_sasl_auth_clients = yes config_directory = /etc/postfix content_filter = smtp-amavis:[127.0.0.1]:10024 home_mailbox = Maildir/ inet_interfaces = all inet_protocols = all mailbox_command = mailbox_size_limit = 0 mailman_destination_recipient_limit = 1 mydestination = server1.bookcdb.com, bookcdb.com, localhost.bookcdb.com, localho st, bookcdb.com myhostname = server1.bookcdb.com mynetworks = 127.0.0.0/8 myorigin = /etc/mailname 
readme_directory = no
recipient_delimiter = +
relay_domains = lists.bookcdb.com
relayhost =
smtp_tls_note_starttls_offer = yes
smtp_tls_security_level = may
smtp_tls_session_cache_database = btree:${data_directory}/smtp_scache
smtpd_banner = $myhostname ESMTP $mail_name (Ubuntu)
smtpd_recipient_restrictions = permit_sasl_authenticated,permit_mynetworks,reject_unauth_destination
smtpd_sasl_auth_enable = yes
smtpd_sasl_local_domain =
smtpd_sasl_security_options = noanonymous
smtpd_tls_CAfile = /etc/ssl/certs/cacert.pem
smtpd_tls_auth_only = no
smtpd_tls_cert_file = /etc/ssl/certs/smtpd.crt
smtpd_tls_key_file = /etc/ssl/private/smtpd.key
smtpd_tls_loglevel = 1
smtpd_tls_received_header = yes
smtpd_tls_security_level = may
smtpd_tls_session_cache_database = btree:${data_directory}/smtpd_scache
smtpd_tls_session_cache_timeout = 3600s
smtpd_use_tls = yes
tls_random_source = dev:/dev/urandom
transport_maps = hash:/etc/postfix/transport

    Read the article

  • .NET HTML Sanitation for rich HTML Input

    - by Rick Strahl
Recently I was working on updating a legacy application to MVC 4 that included free form text input. When I set up the new site my initial approach was to not allow any rich HTML input, only simple text formatting that would respect a few simple HTML commands for bold, lists etc. and automatically handle line break processing for new lines and paragraphs. This is typical for what I do with most multi-line text input in my apps and it works very well with very little development effort involved.

Then the client sprung another note: Oh by the way, we have a bunch of customers (real estate agents) who need to post complete HTML documents. Oh uh! There goes the simple theory. After some discussion and pleading on my part (<snicker>) to try and avoid this type of raw HTML input because of potential XSS issues, the client decided to go ahead and allow raw HTML input anyway.

There have been lots of discussions on this subject on StackOverflow (and here and here), but after reading through some of the solutions I didn't really find anything that would work even closely for what I needed. Specifically, we need to be able to allow just about any HTML markup, with the exception of script code. Remote CSS and images need to be loaded, links need to work and so on. While the 'legit' HTML posted by these agents is basic in nature, it does span most of the full gamut of HTML (4). Most of the XSS prevention/sanitizer solutions I found were way too aggressive and rendered the posted output unusable, mostly because they tend to strip any externally loaded content. In short, I needed a custom solution.

I thought the best solution to this would be to use an HTML parser - in this case the Html Agility Pack - and then to run through all the HTML markup provided and remove any of the blacklisted tags and a number of attributes that are prone to JavaScript injection.

There's much discussion on whether to use blacklists vs. whitelists in the discussions mentioned above, but I found that whitelists can make sense in simple scenarios where you might allow manual HTML input; when you need to allow a larger array of HTML functionality, a blacklist is probably easier to manage as the vast majority of elements and attributes could be allowed. Also, whitelisting gets a bit more complex with HTML5 and the proliferation of new HTML tags, and most new tags generally don't affect XSS issues directly. Pure whitelisting based on elements and attributes also doesn't capture many edge cases (see some of the XSS cheat sheets listed below), so even with a whitelist, custom logic is still required to handle many of those edge cases.

The Microsoft Web Protection Library (AntiXSS)

My first thought was to check out the Microsoft AntiXSS library. Microsoft has an HTML encoding and sanitation library in the Microsoft Web Protection Library (formerly the AntiXSS Library) on CodePlex, which provides stricter functions for whitelist encoding and sanitation. Initially I thought the Sanitation class and its static members would do the trick for me, but I found that this library is way too restrictive for my needs. Specifically, the Sanitation class strips out images and links, which rendered the full HTML from our real estate clients completely useless. I didn't spend much time with it, but apparently I'm not alone in feeling this library is not really useful without some way to configure its operation.
To give you an example of what didn't work for me with the library, here's a small and simple HTML fragment that includes script, img and anchor tags. I would expect the script to be stripped and everything else to be left intact. Here's the original HTML:

var value = "<b>Here</b> <script>alert('hello')</script> we go. Visit the " +
            "<a href='http://west-wind.com'>West Wind</a> site. " +
            "<img src='http://west-wind.com/images/new.gif' /> ";

and the code to sanitize it with the AntiXSS Sanitizer class:

@Html.Raw(Microsoft.Security.Application.Sanitizer.GetSafeHtmlFragment(value))

This produced a not so useful sanitized string:

Here we go. Visit the <a>West Wind</a> site.

While it removed the <script> tag (good) it also removed the href from the link and the image tag altogether (bad). In some situations this might be useful, but for most tasks I doubt this is the desired behavior. While links can contain javascript: references and images can 'broadcast' information to a server, without configuration to tell the library what to restrict this becomes useless to me. I couldn't find any way to customize the white list, nor is there code available in this 'open source' library on CodePlex.

Using Html Agility Pack for HTML Parsing

The WPL library wasn't going to cut it. After doing a bit of research I decided the best approach for a custom solution would be to use an HTML parser and inspect the HTML fragment/document I'm trying to import. I've used the Html Agility Pack before for a number of apps where I needed an HTML parser without requiring an instance of a full browser like the Internet Explorer Application object, which is inadequate in Web apps. In case you haven't checked out the Html Agility Pack before, it's a powerful HTML parser library that you can use from your .NET code. It provides a simple, parsable HTML DOM model for full HTML documents or HTML fragments that lets you walk through each of the elements in your document. If you've used the HTML or XML DOM in a browser before you'll feel right at home with the Agility Pack.

Blacklist based HTML Parsing to strip XSS Code

For my purposes of HTML sanitation, the process involved is to walk the HTML document one element at a time and then check each element and attribute against a blacklist. There's quite a bit of argument about what's better: a whitelist of allowed items or a blacklist of denied items. While whitelists tend to be more secure, they also require a lot more configuration; in the case of HTML5 a whitelist could be very extensive. For what I need, I only want to ensure that no JavaScript is executed, so the blacklist includes the obvious <script> tag plus any tag that allows loading of external content including <iframe>, <object>, <embed> and <link> etc. <form> is also excluded to avoid posting content to a different location. I also disallow <head> and <meta> tags in particular for my case, since I'm only allowing posting of HTML fragments. There is also some internal logic to exclude some attributes or attributes that include references to JavaScript or CSS expressions. The default tag blacklist reflects my use case, but is customizable and can be added to.
Here's my HtmlSanitizer implementation:

using System.Collections.Generic;
using System.IO;
using System.Xml;
using HtmlAgilityPack;

namespace Westwind.Web.Utilities
{
    public class HtmlSanitizer
    {
        public HashSet<string> BlackList = new HashSet<string>()
        {
            { "script" },
            { "iframe" },
            { "form" },
            { "object" },
            { "embed" },
            { "link" },
            { "head" },
            { "meta" }
        };

        /// <summary>
        /// Cleans up an HTML string and removes HTML tags in blacklist
        /// </summary>
        public static string SanitizeHtml(string html, params string[] blackList)
        {
            var sanitizer = new HtmlSanitizer();
            if (blackList != null && blackList.Length > 0)
            {
                sanitizer.BlackList.Clear();
                foreach (string item in blackList)
                    sanitizer.BlackList.Add(item);
            }
            return sanitizer.Sanitize(html);
        }

        /// <summary>
        /// Cleans up an HTML string by removing elements
        /// on the blacklist and all elements that start
        /// with onXXX.
        /// </summary>
        public string Sanitize(string html)
        {
            var doc = new HtmlDocument();
            doc.LoadHtml(html);
            SanitizeHtmlNode(doc.DocumentNode);

            //return doc.DocumentNode.WriteTo();

            string output = null;

            // Use an XmlTextWriter to create self-closing tags
            using (StringWriter sw = new StringWriter())
            {
                XmlWriter writer = new XmlTextWriter(sw);
                doc.DocumentNode.WriteTo(writer);
                output = sw.ToString();

                // strip off XML doc header
                if (!string.IsNullOrEmpty(output))
                {
                    int at = output.IndexOf("?>");
                    output = output.Substring(at + 2);
                }

                writer.Close();
            }
            doc = null;

            return output;
        }

        private void SanitizeHtmlNode(HtmlNode node)
        {
            if (node.NodeType == HtmlNodeType.Element)
            {
                // check for blacklist items and remove
                if (BlackList.Contains(node.Name))
                {
                    node.Remove();
                    return;
                }

                // remove CSS Expressions and embedded script links
                if (node.Name == "style")
                {
                    if (string.IsNullOrEmpty(node.InnerText))
                    {
                        if (node.InnerHtml.Contains("expression") || node.InnerHtml.Contains("javascript:"))
                            node.ParentNode.RemoveChild(node);
                    }
                }

                // remove script attributes
                if (node.HasAttributes)
                {
                    for (int i = node.Attributes.Count - 1; i >= 0; i--)
                    {
                        HtmlAttribute currentAttribute = node.Attributes[i];

                        var attr = currentAttribute.Name.ToLower();
                        var val = currentAttribute.Value.ToLower();

                        // remove event handlers
                        if (attr.StartsWith("on"))
                            node.Attributes.Remove(currentAttribute);

                        // remove script links
                        else if ((attr == "href" || attr == "src" || attr == "dynsrc" || attr == "lowsrc") &&
                                 val != null && val.Contains("javascript:"))
                            node.Attributes.Remove(currentAttribute);

                        // remove CSS Expressions
                        else if (attr == "style" && val != null &&
                                 (val.Contains("expression") || val.Contains("javascript:") || val.Contains("vbscript:")))
                            node.Attributes.Remove(currentAttribute);
                    }
                }
            }

            // Look through child nodes recursively
            if (node.HasChildNodes)
            {
                for (int i = node.ChildNodes.Count - 1; i >= 0; i--)
                {
                    SanitizeHtmlNode(node.ChildNodes[i]);
                }
            }
        }
    }
}

Please note: Use this as a starting point only for your own parsing and review the code for your specific use case! If your needs are less lenient than mine were, you can make this much stricter by not allowing src and href attributes or CSS links if your HTML doesn't allow it. You can also check links for external URLs and disallow those - lots of options. The code is simple enough to make it easy to extend to fit your use cases more specifically. It's also quite easy to make this code work using a WhiteList approach if you want to go that route.
The code above is semi-generic for allowing full-featured HTML fragments that only disallow script-related content. The Sanitize method walks through each node of the document and then recursively drills into all of its children until the entire document has been traversed. Note that the code here uses an XmlTextWriter to write output - this is done to preserve XHTML-style self-closing tags, which are otherwise left as non-self-closing tags.

The sanitizer code scans for blacklist elements and removes those elements not allowed. Note that the blacklist is configurable either in the instance class as a property or in the static method via the string parameter list. Additionally the code goes through each element's attributes and looks for a host of rules gleaned from some of the XSS cheat sheets listed at the end of the post. Clearly there are a lot more XSS vulnerabilities, but a lot of them apply to ancient browsers (IE6 and old versions of Netscape) - many of these glaring holes (like CSS expressions - WTF IE?) have been removed in modern browsers.

What a Pain

To be honest this is NOT a piece of code that I wanted to write. I think building anything related to XSS is better left to people who have far more knowledge of the topic than I do. Unfortunately, I was unable to find a tool that worked even closely for me, or even provided a working base. For the project I was working on I had no choice and I'm sharing the code here merely as a baseline to start with and potentially expand on for specific needs. It's sad that the Microsoft Web Protection Library is currently such a train wreck - this is really something that should come from Microsoft as the systems vendor, or possibly a third party that provides security tools.

Luckily for my application we are dealing with authenticated and validated users, so the user base is fairly well known and relatively small - this is not a wide-open Internet application that's directly public facing. As I mentioned earlier in the post, if I had my way I would simply not allow this type of raw HTML input in the first place, and instead rely on a more controlled HTML input mechanism like Markdown or even a good HTML edit control that can provide some limits on what types of input are allowed. Alas, in this case I was overridden and we had to go forward and allow *any* raw HTML posted.

Sometimes I really feel sad that it's come this far - how many good applications and tools have been thwarted by fear of XSS (or worse) attacks? So many things that could be done *if* we had a more secure browser experience and didn't have to deal with every little script twerp trying to hack into Web pages and obscure browser bugs.
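To make the calling pattern concrete, here is a small usage sketch - my addition, not code from the original post - that feeds the earlier sample fragment through the static helper, assuming the HtmlSanitizer class shown above compiles as part of the project:

// Minimal usage sketch for the HtmlSanitizer class above (assumed to be in the project).
using Westwind.Web.Utilities;

class SanitizerDemo
{
    static void Main()
    {
        string html = "<b>Here</b> <script>alert('hello')</script> we go. " +
                      "Visit the <a href='http://west-wind.com'>West Wind</a> site. " +
                      "<img src='http://west-wind.com/images/new.gif' />";

        // Default blacklist: script, iframe, form, object, embed, link, head, meta.
        string safe = HtmlSanitizer.SanitizeHtml(html);

        // Based on the logic above, the <script> element is removed while the link,
        // its href, and the image remain intact.
        System.Console.WriteLine(safe);

        // Optionally pass a custom blacklist - here only <script> and <iframe> are stripped.
        string custom = HtmlSanitizer.SanitizeHtml(html, "script", "iframe");
        System.Console.WriteLine(custom);
    }
}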
So much time wasted building secure apps, so much time wasted by others trying to hack apps… We're a funny species - no other species manages to waste as much time, effort and resources as we humans do :-) Resources Code on GitHub Html Agility Pack XSS Cheat Sheet XSS Prevention Cheat Sheet Microsoft Web Protection Library (AntiXss) StackOverflow Links: http://stackoverflow.com/questions/341872/html-sanitizer-for-net http://blog.stackoverflow.com/2008/06/safe-html-and-xss/ http://code.google.com/p/subsonicforums/source/browse/trunk/SubSonic.Forums.Data/HtmlScrubber.cs?r=61 © Rick Strahl, West Wind Technologies, 2005-2012 Posted in Security  HTML  ASP.NET  JavaScript

    Read the article

  • Using JSON.NET for dynamic JSON parsing

    - by Rick Strahl
    With the release of ASP.NET Web API as part of .NET 4.5 and MVC 4.0, JSON.NET has effectively pushed out the .NET native serializers to become the default serializer for Web API. JSON.NET is vastly more flexible than the built in DataContractJsonSerializer or the older JavaScript serializer. The DataContractSerializer in particular has been very problematic in the past because it can't deal with untyped objects for serialization - like values of type object, or anonymous types which are quite common these days. The JavaScript Serializer that came before it actually does support non-typed objects for serialization but it can't do anything with untyped data coming in from JavaScript and it's overall model of extensibility was pretty limited (JavaScript Serializer is what MVC uses for JSON responses). JSON.NET provides a robust JSON serializer that has both high level and low level components, supports binary JSON, JSON contracts, Xml to JSON conversion, LINQ to JSON and many, many more features than either of the built in serializers. ASP.NET Web API now uses JSON.NET as its default serializer and is now pulled in as a NuGet dependency into Web API projects, which is great. Dynamic JSON Parsing One of the features that I think is getting ever more important is the ability to serialize and deserialize arbitrary JSON content dynamically - that is without mapping the JSON captured directly into a .NET type as DataContractSerializer or the JavaScript Serializers do. Sometimes it isn't possible to map types due to the differences in languages (think collections, dictionaries etc), and other times you simply don't have the structures in place or don't want to create them to actually import the data. If this topic sounds familiar - you're right! I wrote about dynamic JSON parsing a few months back before JSON.NET was added to Web API and when Web API and the System.Net HttpClient libraries included the System.Json classes like JsonObject and JsonArray. With the inclusion of JSON.NET in Web API these classes are now obsolete and didn't ship with Web API or the client libraries. I re-linked my original post to this one. In this post I'll discus JToken, JObject and JArray which are the dynamic JSON objects that make it very easy to create and retrieve JSON content on the fly without underlying types. Why Dynamic JSON? So, why Dynamic JSON parsing rather than strongly typed parsing? Since applications are interacting more and more with third party services it becomes ever more important to have easy access to those services with easy JSON parsing. Sometimes it just makes lot of sense to pull just a small amount of data out of large JSON document received from a service, because the third party service isn't directly related to your application's logic most of the time - and it makes little sense to map the entire service structure in your application. For example, recently I worked with the Google Maps Places API to return information about businesses close to me (or rather the app's) location. The Google API returns a ton of information that my application had no interest in - all I needed was few values out of the data. Dynamic JSON parsing makes it possible to map this data, without having to map the entire API to a C# data structure. Instead I could pull out the three or four values I needed from the API and directly store it on my business entities that needed to receive the data - no need to map the entire Maps API structure. 
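To make the "pull out just a few values" idea concrete, here's a small sketch using JSON.NET's JObject; the JSON shape and field names below are invented for illustration and are not the actual Places API payload:

// requires: using Newtonsoft.Json.Linq;
var json = @"{ ""results"": [ { ""name"": ""Paia Fish Market"", ""rating"": 4.5,
               ""geometry"": { ""location"": { ""lat"": 20.90, ""lng"": -156.37 } } } ] }";

JObject response = JObject.Parse(json);

// drill into only the values the app cares about - no mapping types required
JToken first  = response["results"][0];
string name   = (string)first["name"];
double rating = (double)first["rating"];
double lat    = (double)first["geometry"]["location"]["lat"];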
Getting JSON.NET The easiest way to use JSON.NET is to grab it via NuGet and add it as a reference to your project. You can add it to your project with: PM> Install-Package Newtonsoft.Json From the Package Manager Console or by using Manage NuGet Packages in your project References. As mentioned if you're using ASP.NET Web API or MVC 4 JSON.NET will be automatically added to your project. Alternately you can also go to the CodePlex site and download the latest version including source code: http://json.codeplex.com/ Creating JSON on the fly with JObject and JArray Let's start with creating some JSON on the fly. It's super easy to create a dynamic object structure with any of the JToken derived JSON.NET objects. The most common JToken derived classes you are likely to use are JObject and JArray. JToken implements IDynamicMetaProvider and so uses the dynamic  keyword extensively to make it intuitive to create object structures and turn them into JSON via dynamic object syntax. Here's an example of creating a music album structure with child songs using JObject for the base object and songs and JArray for the actual collection of songs:[TestMethod] public void JObjectOutputTest() { // strong typed instance var jsonObject = new JObject(); // you can explicitly add values here using class interface jsonObject.Add("Entered", DateTime.Now); // or cast to dynamic to dynamically add/read properties dynamic album = jsonObject; album.AlbumName = "Dirty Deeds Done Dirt Cheap"; album.Artist = "AC/DC"; album.YearReleased = 1976; album.Songs = new JArray() as dynamic; dynamic song = new JObject(); song.SongName = "Dirty Deeds Done Dirt Cheap"; song.SongLength = "4:11"; album.Songs.Add(song); song = new JObject(); song.SongName = "Love at First Feel"; song.SongLength = "3:10"; album.Songs.Add(song); Console.WriteLine(album.ToString()); } This produces a complete JSON structure: { "Entered": "2012-08-18T13:26:37.7137482-10:00", "AlbumName": "Dirty Deeds Done Dirt Cheap", "Artist": "AC/DC", "YearReleased": 1976, "Songs": [ { "SongName": "Dirty Deeds Done Dirt Cheap", "SongLength": "4:11" }, { "SongName": "Love at First Feel", "SongLength": "3:10" } ] } Notice that JSON.NET does a nice job formatting the JSON, so it's easy to read and paste into blog posts :-). JSON.NET includes a bunch of configuration options that control how JSON is generated. Typically the defaults are just fine, but you can override with the JsonSettings object for most operations. The important thing about this code is that there's no explicit type used for holding the values to serialize to JSON. Rather the JSON.NET objects are the containers that receive the data as I build up my JSON structure dynamically, simply by adding properties. This means this code can be entirely driven at runtime without compile time restraints of structure for the JSON output. Here I use JObject to create a album 'object' and immediately cast it to dynamic. JObject() is kind of similar in behavior to ExpandoObject in that it allows you to add properties by simply assigning to them. Internally, JObject values are stored in pseudo collections of key value pairs that are exposed as properties through the IDynamicMetaObject interface exposed in JSON.NET's JToken base class. For objects the syntax is very clean - you add simple typed values as properties. For objects and arrays you have to explicitly create new JObject or JArray, cast them to dynamic and then add properties and items to them. 
Always remember though these values are dynamic - which means no Intellisense and no compiler type checking. It's up to you to ensure that the names and values you create are accessed consistently and without typos in your code. Note that you can also access the JObject instance directly (not as dynamic) and get access to the underlying JObject type. This means you can assign properties by string, which can be useful for fully data driven JSON generation from other structures. Below you can see both styles of access next to each other:

// strong type instance
var jsonObject = new JObject();

// you can explicitly add values here
jsonObject.Add("Entered", DateTime.Now);

// expando style instance you can just 'use' properties
dynamic album = jsonObject;
album.AlbumName = "Dirty Deeds Done Dirt Cheap";

JContainer (the base class for JObject and JArray) is a collection so you can also iterate over the properties at runtime easily:

foreach (var item in jsonObject)
{
    Console.WriteLine(item.Key + " " + item.Value.ToString());
}

The functionality of the JSON objects is very similar to .NET's ExpandoObject and if you used it before, you're already familiar with how the dynamic interfaces to the JSON objects work.

Importing JSON with JObject.Parse() and JArray.Parse()

The JValue structure supports importing JSON via the Parse() and Load() methods which can read JSON data from a string or various streams respectively. Essentially JValue includes the core JSON parsing to turn a JSON string into a collection of JsonValue objects that can then be referenced using familiar dynamic object syntax. Here's a simple example:

public void JValueParsingTest()
{
    var jsonString = @"{""Name"":""Rick"",""Company"":""West Wind"",
                        ""Entered"":""2012-03-16T00:03:33.245-10:00""}";

    dynamic json = JValue.Parse(jsonString);

    // values require casting
    string name = json.Name;
    string company = json.Company;
    DateTime entered = json.Entered;

    Assert.AreEqual(name, "Rick");
    Assert.AreEqual(company, "West Wind");
}

The JSON string represents an object with three properties which is parsed into a JObject class and cast to dynamic. Once cast to dynamic I can then go ahead and access the object using familiar object syntax. Note that the actual values - json.Name, json.Company, json.Entered - are actually of type JToken and I have to cast them to their appropriate types first before I can do type comparisons as in the Asserts at the end of the test method. This is required because of the way that dynamic types work which can't determine the type based on the method signature of the Assert.AreEqual(object,object) method. I have to either assign the dynamic value to a variable as I did above, or explicitly cast ( (string) json.Name) in the actual method call. The JSON structure can be much more complex than this simple example.
Here's another example of an array of albums serialized to JSON and then parsed through with JsonValue():[TestMethod] public void JsonArrayParsingTest() { var jsonString = @"[ { ""Id"": ""b3ec4e5c"", ""AlbumName"": ""Dirty Deeds Done Dirt Cheap"", ""Artist"": ""AC/DC"", ""YearReleased"": 1976, ""Entered"": ""2012-03-16T00:13:12.2810521-10:00"", ""AlbumImageUrl"": ""http://ecx.images-amazon.com/images/I/61kTaH-uZBL._AA115_.jpg"", ""AmazonUrl"": ""http://www.amazon.com/gp/product/…ASIN=B00008BXJ4"", ""Songs"": [ { ""AlbumId"": ""b3ec4e5c"", ""SongName"": ""Dirty Deeds Done Dirt Cheap"", ""SongLength"": ""4:11"" }, { ""AlbumId"": ""b3ec4e5c"", ""SongName"": ""Love at First Feel"", ""SongLength"": ""3:10"" }, { ""AlbumId"": ""b3ec4e5c"", ""SongName"": ""Big Balls"", ""SongLength"": ""2:38"" } ] }, { ""Id"": ""7b919432"", ""AlbumName"": ""End of the Silence"", ""Artist"": ""Henry Rollins Band"", ""YearReleased"": 1992, ""Entered"": ""2012-03-16T00:13:12.2800521-10:00"", ""AlbumImageUrl"": ""http://ecx.images-amazon.com/images/I/51FO3rb1tuL._SL160_AA160_.jpg"", ""AmazonUrl"": ""http://www.amazon.com/End-Silence-Rollins-Band/dp/B0000040OX/ref=sr_1_5?ie=UTF8&qid=1302232195&sr=8-5"", ""Songs"": [ { ""AlbumId"": ""7b919432"", ""SongName"": ""Low Self Opinion"", ""SongLength"": ""5:24"" }, { ""AlbumId"": ""7b919432"", ""SongName"": ""Grip"", ""SongLength"": ""4:51"" } ] } ]"; JArray jsonVal = JArray.Parse(jsonString) as JArray; dynamic albums = jsonVal; foreach (dynamic album in albums) { Console.WriteLine(album.AlbumName + " (" + album.YearReleased.ToString() + ")"); foreach (dynamic song in album.Songs) { Console.WriteLine("\t" + song.SongName); } } Console.WriteLine(albums[0].AlbumName); Console.WriteLine(albums[0].Songs[1].SongName); } JObject and JArray in ASP.NET Web API Of course these types also work in ASP.NET Web API controller methods. If you want you can accept parameters using these object or return them back to the server. The following contrived example receives dynamic JSON input, and then creates a new dynamic JSON object and returns it based on data from the first:[HttpPost] public JObject PostAlbumJObject(JObject jAlbum) { // dynamic input from inbound JSON dynamic album = jAlbum; // create a new JSON object to write out dynamic newAlbum = new JObject(); // Create properties on the new instance // with values from the first newAlbum.AlbumName = album.AlbumName + " New"; newAlbum.NewProperty = "something new"; newAlbum.Songs = new JArray(); foreach (dynamic song in album.Songs) { song.SongName = song.SongName + " New"; newAlbum.Songs.Add(song); } return newAlbum; } The raw POST request to the server looks something like this: POST http://localhost/aspnetwebapi/samples/PostAlbumJObject HTTP/1.1User-Agent: FiddlerContent-type: application/jsonHost: localhostContent-Length: 88 {AlbumName: "Dirty Deeds",Songs:[ { SongName: "Problem Child"},{ SongName: "Squealer"}]} and the output that comes back looks like this: {  "AlbumName": "Dirty Deeds New",  "NewProperty": "something new",  "Songs": [    {      "SongName": "Problem Child New"    },    {      "SongName": "Squealer New"    }  ]} The original values are echoed back with something extra appended to demonstrate that we're working with a new object. 
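For completeness, the raw POST shown above (captured with Fiddler in the original) could also be produced from .NET code - a rough client-side sketch, assuming the System.Net.Http HttpClient and an async calling context:

// requires: using System.Net.Http; using System.Text;
using (var client = new HttpClient())
{
    // same sample payload as the Fiddler request above
    var payload = @"{AlbumName: ""Dirty Deeds"",Songs:[{ SongName: ""Problem Child""},{ SongName: ""Squealer""}]}";
    var content = new StringContent(payload, Encoding.UTF8, "application/json");

    HttpResponseMessage response = await client.PostAsync(
        "http://localhost/aspnetwebapi/samples/PostAlbumJObject", content);

    // the JSON echoed back has " New" appended to the album and song names
    string resultJson = await response.Content.ReadAsStringAsync();
}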
When you receive or return a JObject, JValue, JToken or JArray instance in a Web API method, Web API ignores normal content negotiation and assumes your content is going to be received and returned as JSON, so effectively the parameter and result type explicitly determines the input and output format which is nice. Dynamic to Strong Type Mapping You can also map JObject and JArray instances to a strongly typed object, so you can mix dynamic and static typing in the same piece of code. Using the 2 Album jsonString shown earlier, the code below takes an array of albums and picks out only a single album and casts that album to a static Album instance.[TestMethod] public void JsonParseToStrongTypeTest() { JArray albums = JArray.Parse(jsonString) as JArray; // pick out one album JObject jalbum = albums[0] as JObject; // Copy to a static Album instance Album album = jalbum.ToObject<Album>(); Assert.IsNotNull(album); Assert.AreEqual(album.AlbumName,jalbum.Value<string>("AlbumName")); Assert.IsTrue(album.Songs.Count > 0); } This is pretty damn useful for the scenario I mentioned earlier - you can read a large chunk of JSON and dynamically walk the property hierarchy down to the item you want to access, and then either access the specific item dynamically (as shown earlier) or map a part of the JSON to a strongly typed object. That's very powerful if you think about it - it leaves you in total control to decide what's dynamic and what's static. Strongly typed JSON Parsing With all this talk of dynamic let's not forget that JSON.NET of course also does strongly typed serialization which is drop dead easy. Here's a simple example on how to serialize and deserialize an object with JSON.NET:[TestMethod] public void StronglyTypedSerializationTest() { // Demonstrate deserialization from a raw string var album = new Album() { AlbumName = "Dirty Deeds Done Dirt Cheap", Artist = "AC/DC", Entered = DateTime.Now, YearReleased = 1976, Songs = new List<Song>() { new Song() { SongName = "Dirty Deeds Done Dirt Cheap", SongLength = "4:11" }, new Song() { SongName = "Love at First Feel", SongLength = "3:10" } } }; // serialize to string string json2 = JsonConvert.SerializeObject(album,Formatting.Indented); Console.WriteLine(json2); // make sure we can serialize back var album2 = JsonConvert.DeserializeObject<Album>(json2); Assert.IsNotNull(album2); Assert.IsTrue(album2.AlbumName == "Dirty Deeds Done Dirt Cheap"); Assert.IsTrue(album2.Songs.Count == 2); } JsonConvert is a high level static class that wraps lower level functionality, but you can also use the JsonSerializer class, which allows you to serialize/parse to and from streams. It's a little more work, but gives you a bit more control. The functionality available is easy to discover with Intellisense, and that's good because there's not a lot in the way of documentation that's actually useful. Summary JSON.NET is a pretty complete JSON implementation with lots of different choices for JSON parsing from dynamic parsing to static serialization, to complex querying of JSON objects using LINQ. It's good to see this open source library getting integrated into .NET, and pushing out the old and tired stock .NET parsers so that we finally have a bit more flexibility - and extensibility - in our JSON parsing. Good to go! 
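As an aside, the Album and Song types used in the strongly typed tests above aren't defined in the post; based on the JSON shown earlier they would presumably look something like this (an assumed shape, not the author's actual classes):

public class Album
{
    public string Id { get; set; }
    public string AlbumName { get; set; }
    public string Artist { get; set; }
    public int YearReleased { get; set; }
    public DateTime Entered { get; set; }
    public string AlbumImageUrl { get; set; }
    public string AmazonUrl { get; set; }
    public List<Song> Songs { get; set; }
}

public class Song
{
    public string AlbumId { get; set; }
    public string SongName { get; set; }
    public string SongLength { get; set; }
}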
Resources Sample Test Project http://json.codeplex.com/ © Rick Strahl, West Wind Technologies, 2005-2012 Posted in .NET  Web Api  AJAX

    Read the article

  • SimpleMembership, Membership Providers, Universal Providers and the new ASP.NET 4.5 Web Forms and ASP.NET MVC 4 templates

    - by Jon Galloway
    The ASP.NET MVC 4 Internet template adds some new, very useful features which are built on top of SimpleMembership. These changes add some great features, like a much simpler and extensible membership API and support for OAuth. However, the new account management features require SimpleMembership and won't work against existing ASP.NET Membership Providers. I'll start with a summary of top things you need to know, then dig into a lot more detail. Summary: SimpleMembership has been designed as a replacement for traditional the previous ASP.NET Role and Membership provider system SimpleMembership solves common problems people ran into with the Membership provider system and was designed for modern user / membership / storage needs SimpleMembership integrates with the previous membership system, but you can't use a MembershipProvider with SimpleMembership The new ASP.NET MVC 4 Internet application template AccountController requires SimpleMembership and is not compatible with previous MembershipProviders You can continue to use existing ASP.NET Role and Membership providers in ASP.NET 4.5 and ASP.NET MVC 4 - just not with the ASP.NET MVC 4 AccountController The existing ASP.NET Role and Membership provider system remains supported as is part of the ASP.NET core ASP.NET 4.5 Web Forms does not use SimpleMembership; it implements OAuth on top of ASP.NET Membership The ASP.NET Web Site Administration Tool (WSAT) is not compatible with SimpleMembership The following is the result of a few conversations with Erik Porter (PM for ASP.NET MVC) to make sure I had some the overall details straight, combined with a lot of time digging around in ILSpy and Visual Studio's assembly browsing tools. SimpleMembership: The future of membership for ASP.NET The ASP.NET Membership system was introduces with ASP.NET 2.0 back in 2005. It was designed to solve common site membership requirements at the time, which generally involved username / password based registration and profile storage in SQL Server. It was designed with a few extensibility mechanisms - notably a provider system (which allowed you override some specifics like backing storage) and the ability to store additional profile information (although the additional  profile information was packed into a single column which usually required access through the API). While it's sometimes frustrating to work with, it's held up for seven years - probably since it handles the main use case (username / password based membership in a SQL Server database) smoothly and can be adapted to most other needs (again, often frustrating, but it can work). The ASP.NET Web Pages and WebMatrix efforts allowed the team an opportunity to take a new look at a lot of things - e.g. the Razor syntax started with ASP.NET Web Pages, not ASP.NET MVC. The ASP.NET Web Pages team designed SimpleMembership to (wait for it) simplify the task of dealing with membership. As Matthew Osborn said in his post Using SimpleMembership With ASP.NET WebPages: With the introduction of ASP.NET WebPages and the WebMatrix stack our team has really be focusing on making things simpler for the developer. Based on a lot of customer feedback one of the areas that we wanted to improve was the built in security in ASP.NET. So with this release we took that time to create a new built in (and default for ASP.NET WebPages) security provider. I say provider because the new stuff is still built on the existing ASP.NET framework. So what do we call this new hotness that we have created? 
Well, none other than SimpleMembership. SimpleMembership is an umbrella term for both SimpleMembership and SimpleRoles. Part of simplifying membership involved fixing some common problems with ASP.NET Membership. Problems with ASP.NET Membership ASP.NET Membership was very obviously designed around a set of assumptions: Users and user information would most likely be stored in a full SQL Server database or in Active Directory User and profile information would be optimized around a set of common attributes (UserName, Password, IsApproved, CreationDate, Comment, Role membership...) and other user profile information would be accessed through a profile provider Some problems fall out of these assumptions. Requires Full SQL Server for default cases The default, and most fully featured providers ASP.NET Membership providers (SQL Membership Provider, SQL Role Provider, SQL Profile Provider) require full SQL Server. They depend on stored procedure support, and they rely on SQL Server cache dependencies, they depend on agents for clean up and maintenance. So the main SQL Server based providers don't work well on SQL Server CE, won't work out of the box on SQL Azure, etc. Note: Cory Fowler recently let me know about these Updated ASP.net scripts for use with Microsoft SQL Azure which do support membership, personalization, profile, and roles. But the fact that we need a support page with a set of separate SQL scripts underscores the underlying problem. Aha, you say! Jon's forgetting the Universal Providers, a.k.a. System.Web.Providers! Hold on a bit, we'll get to those... Custom Membership Providers have to work with a SQL-Server-centric API If you want to work with another database or other membership storage system, you need to to inherit from the provider base classes and override a bunch of methods which are tightly focused on storing a MembershipUser in a relational database. It can be done (and you can often find pretty good ones that have already been written), but it's a good amount of work and often leaves you with ugly code that has a bunch of System.NotImplementedException fun since there are a lot of methods that just don't apply. Designed around a specific view of users, roles and profiles The existing providers are focused on traditional membership - a user has a username and a password, some specific roles on the site (e.g. administrator, premium user), and may have some additional "nice to have" optional information that can be accessed via an API in your application. This doesn't fit well with some modern usage patterns: In OAuth and OpenID, the user doesn't have a password Often these kinds of scenarios map better to user claims or rights instead of monolithic user roles For many sites, profile or other non-traditional information is very important and needs to come from somewhere other than an API call that maps to a database blob What would work a lot better here is a system in which you were able to define your users, rights, and other attributes however you wanted and the membership system worked with your model - not the other way around. Requires specific schema, overflow in blob columns I've already mentioned this a few times, but it bears calling out separately - ASP.NET Membership focuses on SQL Server storage, and that storage is based on a very specific database schema. SimpleMembership as a better membership system As you might have guessed, SimpleMembership was designed to address the above problems. 
Works with your Schema As Matthew Osborn explains in his Using SimpleMembership With ASP.NET WebPages post, SimpleMembership is designed to integrate with your database schema: All SimpleMembership requires is that there are two columns on your users table so that we can hook up to it – an “ID” column and a “username” column. The important part here is that they can be named whatever you want. For instance username doesn't have to be an alias it could be an email column you just have to tell SimpleMembership to treat that as the “username” used to log in. Matthew's example shows using a very simple user table named Users (it could be named anything) with a UserID and Username column, then a bunch of other columns he wanted in his app. Then we point SimpleMemberhip at that table with a one-liner: WebSecurity.InitializeDatabaseFile("SecurityDemo.sdf", "Users", "UserID", "Username", true); No other tables are needed, the table can be named anything we want, and can have pretty much any schema we want as long as we've got an ID and something that we can map to a username. Broaden database support to the whole SQL Server family While SimpleMembership is not database agnostic, it works across the SQL Server family. It continues to support full SQL Server, but it also works with SQL Azure, SQL Server CE, SQL Server Express, and LocalDB. Everything's implemented as SQL calls rather than requiring stored procedures, views, agents, and change notifications. Note that SimpleMembership still requires some flavor of SQL Server - it won't work with MySQL, NoSQL databases, etc. You can take a look at the code in WebMatrix.WebData.dll using a tool like ILSpy if you'd like to see why - there places where SQL Server specific SQL statements are being executed, especially when creating and initializing tables. It seems like you might be able to work with another database if you created the tables separately, but I haven't tried it and it's not supported at this point. Note: I'm thinking it would be possible for SimpleMembership (or something compatible) to run Entity Framework so it would work with any database EF supports. That seems useful to me - thoughts? Note: SimpleMembership has the same database support - anything in the SQL Server family - that Universal Providers brings to the ASP.NET Membership system. Easy to with Entity Framework Code First The problem with with ASP.NET Membership's system for storing additional account information is that it's the gate keeper. That means you're stuck with its schema and accessing profile information through its API. SimpleMembership flips that around by allowing you to use any table as a user store. That means you're in control of the user profile information, and you can access it however you'd like - it's just data. Let's look at a practical based on the AccountModel.cs class in an ASP.NET MVC 4 Internet project. Here I'm adding a Birthday property to the UserProfile class. [Table("UserProfile")] public class UserProfile { [Key] [DatabaseGeneratedAttribute(DatabaseGeneratedOption.Identity)] public int UserId { get; set; } public string UserName { get; set; } public DateTime Birthday { get; set; } } Now if I want to access that information, I can just grab the account by username and read the value. 
var context = new UsersContext(); var username = User.Identity.Name; var user = context.UserProfiles.SingleOrDefault(u => u.UserName == username); var birthday = user.Birthday; So instead of thinking of SimpleMembership as a big membership API, think of it as something that handles membership based on your user database. In SimpleMembership, everything's keyed off a user row in a table you define rather than a bunch of entries in membership tables that were out of your control. How SimpleMembership integrates with ASP.NET Membership Okay, enough sales pitch (and hopefully background) on why things have changed. How does this affect you? Let's start with a diagram to show the relationship (note: I've simplified by removing a few classes to show the important relationships): So SimpleMembershipProvider is an implementaiton of an ExtendedMembershipProvider, which inherits from MembershipProvider and adds some other account / OAuth related things. Here's what ExtendedMembershipProvider adds to MembershipProvider: The important thing to take away here is that a SimpleMembershipProvider is a MembershipProvider, but a MembershipProvider is not a SimpleMembershipProvider. This distinction is important in practice: you cannot use an existing MembershipProvider (including the Universal Providers found in System.Web.Providers) with an API that requires a SimpleMembershipProvider, including any of the calls in WebMatrix.WebData.WebSecurity or Microsoft.Web.WebPages.OAuth.OAuthWebSecurity. However, that's as far as it goes. Membership Providers still work if you're accessing them through the standard Membership API, and all of the core stuff  - including the AuthorizeAttribute, role enforcement, etc. - will work just fine and without any change. Let's look at how that affects you in terms of the new templates. Membership in the ASP.NET MVC 4 project templates ASP.NET MVC 4 offers six Project Templates: Empty - Really empty, just the assemblies, folder structure and a tiny bit of basic configuration. Basic - Like Empty, but with a bit of UI preconfigured (css / images / bundling). Internet - This has both a Home and Account controller and associated views. The Account Controller supports registration and login via either local accounts and via OAuth / OpenID providers. Intranet - Like the Internet template, but it's preconfigured for Windows Authentication. Mobile - This is preconfigured using jQuery Mobile and is intended for mobile-only sites. Web API - This is preconfigured for a service backend built on ASP.NET Web API. Out of these templates, only one (the Internet template) uses SimpleMembership. ASP.NET MVC 4 Basic template The Basic template has configuration in place to use ASP.NET Membership with the Universal Providers. 
You can see that configuration in the ASP.NET MVC 4 Basic template's web.config: <profile defaultProvider="DefaultProfileProvider"> <providers> <add name="DefaultProfileProvider" type="System.Web.Providers.DefaultProfileProvider, System.Web.Providers, Version=1.0.0.0, Culture=neutral, PublicKeyToken=31bf3856ad364e35" connectionStringName="DefaultConnection" applicationName="/" /> </providers> </profile> <membership defaultProvider="DefaultMembershipProvider"> <providers> <add name="DefaultMembershipProvider" type="System.Web.Providers.DefaultMembershipProvider, System.Web.Providers, Version=1.0.0.0, Culture=neutral, PublicKeyToken=31bf3856ad364e35" connectionStringName="DefaultConnection" enablePasswordRetrieval="false" enablePasswordReset="true" requiresQuestionAndAnswer="false" requiresUniqueEmail="false" maxInvalidPasswordAttempts="5" minRequiredPasswordLength="6" minRequiredNonalphanumericCharacters="0" passwordAttemptWindow="10" applicationName="/" /> </providers> </membership> <roleManager defaultProvider="DefaultRoleProvider"> <providers> <add name="DefaultRoleProvider" type="System.Web.Providers.DefaultRoleProvider, System.Web.Providers, Version=1.0.0.0, Culture=neutral, PublicKeyToken=31bf3856ad364e35" connectionStringName="DefaultConnection" applicationName="/" /> </providers> </roleManager> <sessionState mode="InProc" customProvider="DefaultSessionProvider"> <providers> <add name="DefaultSessionProvider" type="System.Web.Providers.DefaultSessionStateProvider, System.Web.Providers, Version=1.0.0.0, Culture=neutral, PublicKeyToken=31bf3856ad364e35" connectionStringName="DefaultConnection" /> </providers> </sessionState> This means that it's business as usual for the Basic template as far as ASP.NET Membership works. ASP.NET MVC 4 Internet template The Internet template has a few things set up to bootstrap SimpleMembership: \Models\AccountModels.cs defines a basic user account and includes data annotations to define keys and such \Filters\InitializeSimpleMembershipAttribute.cs creates the membership database using the above model, then calls WebSecurity.InitializeDatabaseConnection which verifies that the underlying tables are in place and marks initialization as complete (for the application's lifetime) \Controllers\AccountController.cs makes heavy use of OAuthWebSecurity (for OAuth account registration / login / management) and WebSecurity. WebSecurity provides account management services for ASP.NET MVC (and Web Pages) WebSecurity can work with any ExtendedMembershipProvider. There's one in the box (SimpleMembershipProvider) but you can write your own. Since a standard MembershipProvider is not an ExtendedMembershipProvider, WebSecurity will throw exceptions if the default membership provider is a MembershipProvider rather than an ExtendedMembershipProvider. Practical example: Create a new ASP.NET MVC 4 application using the Internet application template Install the Microsoft ASP.NET Universal Providers for LocalDB NuGet package Run the application, click on Register, add a username and password, and click submit You'll get the following execption in AccountController.cs::Register: To call this method, the "Membership.Provider" property must be an instance of "ExtendedMembershipProvider". This occurs because the ASP.NET Universal Providers packages include a web.config transform that will update your web.config to add the Universal Provider configuration I showed in the Basic template example above. 
When WebSecurity tries to use the configured ASP.NET Membership Provider, it checks if it can be cast to an ExtendedMembershipProvider before doing anything else. So, what do you do? Options: If you want to use the new AccountController, you'll either need to use the SimpleMembershipProvider or another valid ExtendedMembershipProvider. This is pretty straightforward. If you want to use an existing ASP.NET Membership Provider in ASP.NET MVC 4, you can't use the new AccountController. You can do a few things: Replace  the AccountController.cs and AccountModels.cs in an ASP.NET MVC 4 Internet project with one from an ASP.NET MVC 3 application (you of course won't have OAuth support). Then, if you want, you can go through and remove other things that were built around SimpleMembership - the OAuth partial view, the NuGet packages (e.g. the DotNetOpenAuthAuth package, etc.) Use an ASP.NET MVC 4 Internet application template and add in a Universal Providers NuGet package. Then copy in the AccountController and AccountModel classes. Create an ASP.NET MVC 3 project and upgrade it to ASP.NET MVC 4 using the steps shown in the ASP.NET MVC 4 release notes. None of these are particularly elegant or simple. Maybe we (or just me?) can do something to make this simpler - perhaps a NuGet package. However, this should be an edge case - hopefully the cases where you'd need to create a new ASP.NET but use legacy ASP.NET Membership Providers should be pretty rare. Please let me (or, preferably the team) know if that's an incorrect assumption. Membership in the ASP.NET 4.5 project template ASP.NET 4.5 Web Forms took a different approach which builds off ASP.NET Membership. Instead of using the WebMatrix security assemblies, Web Forms uses Microsoft.AspNet.Membership.OpenAuth assembly. I'm no expert on this, but from a bit of time in ILSpy and Visual Studio's (very pretty) dependency graphs, this uses a Membership Adapter to save OAuth data into an EF managed database while still running on top of ASP.NET Membership. Note: There may be a way to use this in ASP.NET MVC 4, although it would probably take some plumbing work to hook it up. How does this fit in with Universal Providers (System.Web.Providers)? Just to summarize: Universal Providers are intended for cases where you have an existing ASP.NET Membership Provider and you want to use it with another SQL Server database backend (other than SQL Server). It doesn't require agents to handle expired session cleanup and other background tasks, it piggybacks these tasks on other calls. Universal Providers are not really, strictly speaking, universal - at least to my way of thinking. They only work with databases in the SQL Server family. Universal Providers do not work with Simple Membership. The Universal Providers packages include some web config transforms which you would normally want when you're using them. What about the Web Site Administration Tool? Visual Studio includes tooling to launch the Web Site Administration Tool (WSAT) to configure users and roles in your application. WSAT is built to work with ASP.NET Membership, and is not compatible with Simple Membership. There are two main options there: Use the WebSecurity and OAuthWebSecurity API to manage the users and roles Create a web admin using the above APIs Since SimpleMembership runs on top of your database, you can update your users as you would any other data - via EF or even in direct database edits (in development, of course)
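As a rough sketch of that first option - managing users and roles through the APIs instead of WSAT - something along these lines works once WebSecurity.InitializeDatabaseConnection has run (the user name, password and extra Birthday column are purely illustrative):

// WebSecurity lives in WebMatrix.WebData, Roles in System.Web.Security
if (!WebSecurity.UserExists("admin"))
    WebSecurity.CreateUserAndAccount("admin", "Pass@word1",
        new { Birthday = new DateTime(1980, 1, 1) });   // extra columns on your own UserProfile table

if (!Roles.RoleExists("Administrator"))
    Roles.CreateRole("Administrator");

if (!Roles.IsUserInRole("admin", "Administrator"))
    Roles.AddUserToRole("admin", "Administrator");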

    Read the article

  • Using an alternate JSON Serializer in ASP.NET Web API

    - by Rick Strahl
    The new ASP.NET Web API that Microsoft released alongside MVC 4.0 Beta last week is a great framework for building REST and AJAX APIs. I've been working with it for quite a while now and I really like the way it works and the complete set of features it provides 'in the box'. It's about time that Microsoft gets a decent API for building generic HTTP endpoints into the framework. DataContractJsonSerializer sucks As nice as Web API's overall design is one thing still sucks: The built-in JSON Serialization uses the DataContractJsonSerializer which is just too limiting for many scenarios. The biggest issues I have with it are: No support for untyped values (object, dynamic, Anonymous Types) MS AJAX style Date Formatting Ugly serialization formats for types like Dictionaries To me the most serious issue is dealing with serialization of untyped objects. I have number of applications with AJAX front ends that dynamically reformat data from business objects to fit a specific message format that certain UI components require. The most common scenario I have there are IEnumerable query results from a database with fields from the result set rearranged to fit the sometimes unconventional formats required for the UI components (like jqGrid for example). Creating custom types to fit these messages seems like overkill and projections using Linq makes this much easier to code up. Alas DataContractJsonSerializer doesn't support it. Neither does DataContractSerializer for XML output for that matter. What this means is that you can't do stuff like this in Web API out of the box:public object GetAnonymousType() { return new { name = "Rick", company = "West Wind", entered= DateTime.Now }; } Basically anything that doesn't have an explicit type DataContractJsonSerializer will not let you return. FWIW, the same is true for XmlSerializer which also doesn't work with non-typed values for serialization. The example above is obviously contrived with a hardcoded object graph, but it's not uncommon to get dynamic values returned from queries that have anonymous types for their result projections. Apparently there's a good possibility that Microsoft will ship Json.NET as part of Web API RTM release.  Scott Hanselman confirmed this as a footnote in his JSON Dates post a few days ago. I've heard several other people from Microsoft confirm that Json.NET will be included and be the default JSON serializer, but no details yet in what capacity it will show up. Let's hope it ends up as the default in the box. Meanwhile this post will show you how you can use it today with the beta and get JSON that matches what you should see in the RTM version. What about JsonValue? To be fair Web API DOES include a new JsonValue/JsonObject/JsonArray type that allow you to address some of these scenarios. JsonValue is a new type in the System.Json assembly that can be used to build up an object graph based on a dictionary. It's actually a really cool implementation of a dynamic type that allows you to create an object graph and spit it out to JSON without having to create .NET type first. JsonValue can also receive a JSON string and parse it without having to actually load it into a .NET type (which is something that's been missing in the core framework). This is really useful if you get a JSON result from an arbitrary service and you don't want to explicitly create a mapping type for the data returned. 
For serialization you can create an object structure on the fly and pass it back as part of an Web API action method like this:public JsonValue GetJsonValue() { dynamic json = new JsonObject(); json.name = "Rick"; json.company = "West Wind"; json.entered = DateTime.Now; dynamic address = new JsonObject(); address.street = "32 Kaiea"; address.zip = "96779"; json.address = address; dynamic phones = new JsonArray(); json.phoneNumbers = phones; dynamic phone = new JsonObject(); phone.type = "Home"; phone.number = "808 123-1233"; phones.Add(phone); phone = new JsonObject(); phone.type = "Home"; phone.number = "808 123-1233"; phones.Add(phone); //var jsonString = json.ToString(); return json; } which produces the following output (formatted here for easier reading):{ name: "rick", company: "West Wind", entered: "2012-03-08T15:33:19.673-10:00", address: { street: "32 Kaiea", zip: "96779" }, phoneNumbers: [ { type: "Home", number: "808 123-1233" }, { type: "Mobile", number: "808 123-1234" }] } If you need to build a simple JSON type on the fly these types work great. But if you have an existing type - or worse a query result/list that's already formatted JsonValue et al. become a pain to work with. As far as I can see there's no way to just throw an object instance at JsonValue and have it convert into JsonValue dictionary. It's a manual process. Using alternate Serializers in Web API So, currently the default serializer in WebAPI is DataContractJsonSeriaizer and I don't like it. You may not either, but luckily you can swap the serializer fairly easily. If you'd rather use the JavaScriptSerializer built into System.Web.Extensions or Json.NET today, it's not too difficult to create a custom MediaTypeFormatter that uses these serializers and can replace or partially replace the native serializer. 
Here's a MediaTypeFormatter implementation using the ASP.NET JavaScriptSerializer:

using System;
using System.Net.Http.Formatting;
using System.Threading.Tasks;
using System.Web.Script.Serialization;
using System.Json;
using System.IO;

namespace Westwind.Web.WebApi
{
    public class JavaScriptSerializerFormatter : MediaTypeFormatter
    {
        public JavaScriptSerializerFormatter()
        {
            SupportedMediaTypes.Add(new System.Net.Http.Headers.MediaTypeHeaderValue("application/json"));
        }

        protected override bool CanWriteType(Type type)
        {
            // don't serialize JsonValue structure use default for that
            if (type == typeof(JsonValue) || type == typeof(JsonObject) || type == typeof(JsonArray))
                return false;

            return true;
        }

        protected override bool CanReadType(Type type)
        {
            if (type == typeof(IKeyValueModel))
                return false;

            return true;
        }

        protected override System.Threading.Tasks.Task<object> OnReadFromStreamAsync(Type type, System.IO.Stream stream,
                        System.Net.Http.Headers.HttpContentHeaders contentHeaders, FormatterContext formatterContext)
        {
            var task = Task<object>.Factory.StartNew(() =>
            {
                var ser = new JavaScriptSerializer();
                string json;

                using (var sr = new StreamReader(stream))
                {
                    json = sr.ReadToEnd();
                    sr.Close();
                }

                object val = ser.Deserialize(json, type);
                return val;
            });

            return task;
        }

        protected override System.Threading.Tasks.Task OnWriteToStreamAsync(Type type, object value, System.IO.Stream stream,
                        System.Net.Http.Headers.HttpContentHeaders contentHeaders, FormatterContext formatterContext,
                        System.Net.TransportContext transportContext)
        {
            var task = Task.Factory.StartNew(() =>
            {
                var ser = new JavaScriptSerializer();
                var json = ser.Serialize(value);

                byte[] buf = System.Text.Encoding.Default.GetBytes(json);
                stream.Write(buf, 0, buf.Length);
                stream.Flush();
            });

            return task;
        }
    }
}

Formatter implementation is pretty simple: You override 4 methods to tell which types you can handle and then handle the input or output streams to create/parse the JSON data. Note that when creating output you want to take care to still allow JsonValue/JsonObject/JsonArray types to be handled by the default serializer so those objects serialize properly - if you let either JavaScriptSerializer or JSON.NET handle them they'd try to render the dictionaries which is very undesirable.
If you'd rather use Json.NET here's the JSON.NET version of the formatter:

// this code requires a reference to JSON.NET in your project
#if true
using System;
using System.Net.Http.Formatting;
using System.Threading.Tasks;
using System.Web.Script.Serialization;
using System.Json;
using Newtonsoft.Json;
using System.IO;
using Newtonsoft.Json.Converters;

namespace Westwind.Web.WebApi
{
    public class JsonNetFormatter : MediaTypeFormatter
    {
        public JsonNetFormatter()
        {
            SupportedMediaTypes.Add(new System.Net.Http.Headers.MediaTypeHeaderValue("application/json"));
        }

        protected override bool CanWriteType(Type type)
        {
            // don't serialize JsonValue structure use default for that
            if (type == typeof(JsonValue) || type == typeof(JsonObject) || type == typeof(JsonArray))
                return false;

            return true;
        }

        protected override bool CanReadType(Type type)
        {
            if (type == typeof(IKeyValueModel))
                return false;

            return true;
        }

        protected override System.Threading.Tasks.Task<object> OnReadFromStreamAsync(Type type, System.IO.Stream stream,
                        System.Net.Http.Headers.HttpContentHeaders contentHeaders, FormatterContext formatterContext)
        {
            var task = Task<object>.Factory.StartNew(() =>
            {
                var settings = new JsonSerializerSettings()
                {
                    NullValueHandling = NullValueHandling.Ignore,
                };

                var sr = new StreamReader(stream);
                var jreader = new JsonTextReader(sr);

                var ser = new JsonSerializer();
                ser.Converters.Add(new IsoDateTimeConverter());

                object val = ser.Deserialize(jreader, type);
                return val;
            });

            return task;
        }

        protected override System.Threading.Tasks.Task OnWriteToStreamAsync(Type type, object value, System.IO.Stream stream,
                        System.Net.Http.Headers.HttpContentHeaders contentHeaders, FormatterContext formatterContext,
                        System.Net.TransportContext transportContext)
        {
            var task = Task.Factory.StartNew(() =>
            {
                var settings = new JsonSerializerSettings()
                {
                    NullValueHandling = NullValueHandling.Ignore,
                };

                string json = JsonConvert.SerializeObject(value, Formatting.Indented,
                                                          new JsonConverter[1] { new IsoDateTimeConverter() });

                byte[] buf = System.Text.Encoding.Default.GetBytes(json);
                stream.Write(buf, 0, buf.Length);
                stream.Flush();
            });

            return task;
        }
    }
}
#endif

One advantage of the Json.NET serializer is that you can specify a few options on how things are formatted and handled. You get null value handling and you can plug in the IsoDateTimeConverter which is nice to produce proper ISO dates that I would expect any Json serializer to output these days.

Hooking up the Formatters

Once you've created the custom formatters you need to enable them for your Web API application. To do this use the GlobalConfiguration.Configuration object and add the formatter to the Formatters collection. Here's what this looks like hooked up from Application_Start in a Web project:

protected void Application_Start(object sender, EventArgs e)
{
    // Action based routing (used for RPC calls)
    RouteTable.Routes.MapHttpRoute(
        name: "StockApi",
        routeTemplate: "stocks/{action}/{symbol}",
        defaults: new { symbol = RouteParameter.Optional, controller = "StockApi" }
    );

    // WebApi Configuration to hook up formatters and message handlers
    // optional
    RegisterApis(GlobalConfiguration.Configuration);
}

public static void RegisterApis(HttpConfiguration config)
{
    // Add JavaScriptSerializer formatter instead - add at top to make default
    //config.Formatters.Insert(0, new JavaScriptSerializerFormatter());

    // Add Json.net formatter - add at the top so it fires first!
    // This leaves the old one in place so JsonValue/JsonObject/JsonArray still are handled
    config.Formatters.Insert(0, new JsonNetFormatter());
}

One thing to remember here is the GlobalConfiguration object which is Web API's static configuration instance. I think this thing is seriously misnamed given that GlobalConfiguration could stand for anything and so is hard to discover if you don't know what you're looking for. How about WebApiConfiguration or something more descriptive? Anyway, once you know what it is you can use the Formatters collection to insert your custom formatter. Note that I insert my formatter at the top of the list so it takes precedence over the default formatter. I also am not removing the old formatter because I still want JsonValue/JsonObject/JsonArray to be handled by the default serialization mechanism. Since they process in sequence and I exclude processing for these types, JsonValue et al. still get properly serialized/deserialized.

Summary

Currently DataContractJsonSerializer in Web API is a pain, but at least we have the ability with relatively limited effort to replace the MediaTypeFormatter and plug in our own JSON serializer. This is useful for many scenarios - if you have existing client applications that used MVC JsonResult or ASP.NET AJAX results from ASMX AJAX services you can plug in the JavaScript serializer and get exactly the same serializer you used in the past so your results will be the same and don't potentially break clients. JSON serializers do vary a bit in how they serialize some of the more complex types (like Dictionaries and dates for example) and so if you're migrating it might be helpful to ensure your client code doesn't break when you switch to ASP.NET Web API. Going forward it looks like Microsoft is planning on plugging Json.Net into Web API and making that the default. I think that's an awesome choice since Json.net has been around forever, is fast and easy to use and provides a ton of functionality as part of this great library. I just wish Microsoft would have figured this out sooner instead of integrating with it now at the last minute, especially given that Json.Net has a similar set of lower level JSON objects (JsonValue/JsonObject etc.) which now will end up being duplicated by the native System.Json stuff. It's not like we don't already have enough confusion regarding which JSON serializer to use (JavaScriptSerializer, DataContractJsonSerializer, JsonValue/JsonObject/JsonArray and now Json.net). For years I've been using my own JSON serializer because the built in choices are both limited. However, with an official endorsement of Json.Net I'm happily moving on to use that in my applications. Let's see and hope Microsoft gets this right before ASP.NET Web API goes gold.

© Rick Strahl, West Wind Technologies, 2005-2012 Posted in Web Api  AJAX  ASP.NET

    Read the article

  • Java error starting with "log4j:WARN No appenders could be found for logger" in ZuckerReports SugarC

    - by Tom McDonnell
    Greetings all. I apologise for posting this problem here, but I do so in desperation after receiving no response on the SugarCRM forums. Even if a reader is unfamiliar with ZuckerReports or SugarCRM some general advice on Java may be of use to me. I have installed ZuckerReports v1.12 in SugarCRM 5.5.1. When I attempt to run a report I get the following error message. cmdline: javaw -classpath "custom/ZuckerReports/resources/;custom/ZuckerReports/resources/contact_counts_by_first_name.jasper_files/;modules/ZuckerReports/jasper/ant-1.7.1.jar;modules/ZuckerReports/jasper/antlr-2.7.6.jar;modules/ZuckerReports/jasper/asm-attrs.jar;modules/ZuckerReports/jasper/asm.jar;modules/ZuckerReports/jasper/barbecue-1.5-beta1.jar;modules/ZuckerReports/jasper/barcode4j-2.0.jar;modules/ZuckerReports/jasper/batik-anim.jar;modules/ZuckerReports/jasper/batik-awt-util.jar;modules/ZuckerReports/jasper/batik-bridge.jar;modules/ZuckerReports/jasper/batik-css.jar;modules/ZuckerReports/jasper/batik-dom.jar;modules/ZuckerReports/jasper/batik-ext.jar;modules/ZuckerReports/jasper/batik-gvt.jar;modules/ZuckerReports/jasper/batik-parser.jar;modules/ZuckerReports/jasper/batik-script.jar;modules/ZuckerReports/jasper/batik-svg-dom.jar;modules/ZuckerReports/jasper/batik-svggen.jar;modules/ZuckerReports/jasper/batik-util.jar;modules/ZuckerReports/jasper/batik-xml.jar;modules/ZuckerReports/jasper/bcel-5.2.jar;modules/ZuckerReports/jasper/bsh-2.0b4.jar;modules/ZuckerReports/jasper/castor-1.2.jar;modules/ZuckerReports/jasper/cglib-2.1.jar;modules/ZuckerReports/jasper/cincom-jr-xmla.jar;modules/ZuckerReports/jasper/commons-beanutils-1.8.2.jar;modules/ZuckerReports/jasper/commons-collections-3.2.1.jar;modules/ZuckerReports/jasper/commons-dbcp-1.2.2.jar;modules/ZuckerReports/jasper/commons-digester-1.7.jar;modules/ZuckerReports/jasper/commons-javaflow-20060411.jar;modules/ZuckerReports/jasper/commons-logging-1.1.jar;modules/ZuckerReports/jasper/commons-math-1.0.jar;modules/ZuckerReports/jasper/commons-pool-1.3.jar;modules/ZuckerReports/jasper/commons-vfs-1.0.jar;modules/ZuckerReports/jasper/dom4j-1.6.jar;modules/ZuckerReports/jasper/ehcache-1.1.jar;modules/ZuckerReports/jasper/eigenbase-properties-1.1.0.10924.jar;modules/ZuckerReports/jasper/eigenbase-resgen-1.3.0.11873.jar;modules/ZuckerReports/jasper/eigenbase-xom-1.3.0.11999.jar;modules/ZuckerReports/jasper/ejb3-persistence.jar;modules/ZuckerReports/jasper/groovy-all-1.5.5.jar;modules/ZuckerReports/jasper/hibernate-annotations.jar;modules/ZuckerReports/jasper/hibernate-commons-annotations.jar;modules/ZuckerReports/jasper/hibernate3.jar;modules/ZuckerReports/jasper/hsqldb-1.8.0-10.jar;modules/ZuckerReports/jasper/iText-2.1.0.jar;modules/ZuckerReports/jasper/iTextAsian.jar;modules/ZuckerReports/jasper/jakarta-bcel-20050813.jar;modules/ZuckerReports/jasper/jasperreports-3.7.1.jar;modules/ZuckerReports/jasper/jasperreports-chart-themes-3.6.2.jar;modules/ZuckerReports/jasper/jasperreports-extensions-3.5.3.jar;modules/ZuckerReports/jasper/jasperreports-fonts-3.6.1.jar;modules/ZuckerReports/jasper/javacup.jar;modules/ZuckerReports/jasper/javassist-3.4.GA.jar;modules/ZuckerReports/jasper/jaxen-1.1.1.jar;modules/ZuckerReports/jasper/jcommon-1.0.15.jar;modules/ZuckerReports/jasper/jdt-compiler-3.1.1.jar;modules/ZuckerReports/jasper/jfreechart-1.0.12.jar;modules/ZuckerReports/jasper/jpa.jar;modules/ZuckerReports/jasper/js_activation-1.1.jar;modules/ZuckerReports/jasper/js_axis-1.4patched.jar;modules/ZuckerReports/jasper/js_commons-codec-1.3.jar;modules/ZuckerReports/jasper/js_commons-dis
covery-0.2.jar;modules/ZuckerReports/jasper/js_commons-httpclient-3.1.jar;modules/ZuckerReports/jasper/js_jasperserver-common-ws-3.5.0.jar;modules/ZuckerReports/jasper/js_jaxrpc.jar;modules/ZuckerReports/jasper/js_mail-1.4.jar;modules/ZuckerReports/jasper/js_saaj-api-1.3.jar;modules/ZuckerReports/jasper/js_wsdl4j-1.5.1.jar;modules/ZuckerReports/jasper/jta.jar;modules/ZuckerReports/jasper/jxl-2.6.jar;modules/ZuckerReports/jasper/log4j-1.2.15.jar;modules/ZuckerReports/jasper/mondrian-3.1.1.12687-Jaspersoft.jar;modules/ZuckerReports/jasper/mysql-connector-java-3.1.11-bin.jar;modules/ZuckerReports/jasper/olap4j-0.9.7.145.jar;modules/ZuckerReports/jasper/png-encoder-1.5.jar;modules/ZuckerReports/jasper/poi-3.2-FINAL-20081019.jar;modules/ZuckerReports/jasper/rex-20080421.jar;modules/ZuckerReports/jasper/rhino-1.7R1.jar;modules/ZuckerReports/jasper/saaj-api-1.3.jar;modules/ZuckerReports/jasper/slf4j-api.jar;modules/ZuckerReports/jasper/slf4j-log4j12.jar;modules/ZuckerReports/jasper/spring.jar;modules/ZuckerReports/jasper/sqleonardo-2007.03.jar;modules/ZuckerReports/jasper/swingx-2007_10_07.jar;modules/ZuckerReports/jasper/xml-apis-ext.jar;modules/ZuckerReports/jasper/xml-apis.jar;modules/ZuckerReports/jasper/zuckerreports-1.0.jar" at.go_mobile.zuckerreports.JasperBatchMain custom/ZuckerReports/temp/aff882c1-684b-d2de-403e-4be367bc2f5f/cmd.properties 2&1 JasperBatchMain :: loading jasper design custom/ZuckerReports/resources/contact_counts_by_first_name.jasper JasperBatchMain :: getParameterValue(REPORT_PARAMETERS_MAP, java.util.Map) = null JasperBatchMain :: getParameterValue(JASPER_REPORT, net.sf.jasperreports.engine.JasperReport) = null JasperBatchMain :: getParameterValue(REPORT_CONNECTION, java.sql.Connection) = null JasperBatchMain :: getParameterValue(REPORT_MAX_COUNT, java.lang.Integer) = null JasperBatchMain :: getParameterValue(REPORT_DATA_SOURCE, net.sf.jasperreports.engine.JRDataSource) = null JasperBatchMain :: getParameterValue(REPORT_SCRIPTLET, net.sf.jasperreports.engine.JRAbstractScriptlet) = null JasperBatchMain :: getParameterValue(REPORT_LOCALE, java.util.Locale) = null JasperBatchMain :: getParameterValue(REPORT_RESOURCE_BUNDLE, java.util.ResourceBundle) = null JasperBatchMain :: getParameterValue(REPORT_TIME_ZONE, java.util.TimeZone) = null JasperBatchMain :: getParameterValue(REPORT_FORMAT_FACTORY, net.sf.jasperreports.engine.util.FormatFactory) = null JasperBatchMain :: getParameterValue(REPORT_CLASS_LOADER, java.lang.ClassLoader) = null JasperBatchMain :: getParameterValue(REPORT_URL_HANDLER_FACTORY, java.net.URLStreamHandlerFactory) = null JasperBatchMain :: getParameterValue(REPORT_FILE_RESOLVER, net.sf.jasperreports.engine.util.FileResolver) = null JasperBatchMain :: getParameterValue(REPORT_VIRTUALIZER, net.sf.jasperreports.engine.JRVirtualizer) = null JasperBatchMain :: getParameterValue(IS_IGNORE_PAGINATION, java.lang.Boolean) = null JasperBatchMain :: getParameterValue(REPORT_TEMPLATES, java.util.Collection) = null log4j:WARN No appenders could be found for logger (net.sf.jasperreports.extensions.ExtensionsEnviron ment). log4j:WARN Please initialize the log4j system properly. Exception in thread "main" java.lang.IllegalArgumentException: Null 'key' argument. 
at org.jfree.data.DefaultKeyedValues.setValue(DefaultKeyedValues.java:229) at org.jfree.data.DefaultKeyedValues2D.setValue(DefaultKeyedValues2D.java:337) at org.jfree.data.DefaultKeyedValues2D.addValue(DefaultKeyedValues2D.java:303) at org.jfree.data.category.DefaultCategoryDataset.addValue(DefaultCategoryDataset.java:222) at net.sf.jasperreports.charts.fill.JRFillCategoryDataset.customIncrement(JRFillCategoryDataset.java:143) at net.sf.jasperreports.engine.fill.JRFillElementDataset.increment(JRFillElementDataset.java:175) at net.sf.jasperreports.engine.fill.JRCalculator.calculateVariables(JRCalculator.java:148) at net.sf.jasperreports.engine.fill.JRVerticalFiller.fillDetail(JRVerticalFiller.java:736) at net.sf.jasperreports.engine.fill.JRVerticalFiller.fillReportContent(JRVerticalFiller.java:272) at net.sf.jasperreports.engine.fill.JRVerticalFiller.fillReport(JRVerticalFiller.java:114) at net.sf.jasperreports.engine.fill.JRBaseFiller.fill(JRBaseFiller.java:923) at net.sf.jasperreports.engine.fill.JRBaseFiller.fill(JRBaseFiller.java:826) at net.sf.jasperreports.engine.fill.JRFiller.fillReport(JRFiller.java:59) at at.go_mobile.zuckerreports.JasperBatchMain.main(JasperBatchMain.java:126) The same report runs correctly in another SugarCRM installation on the same server. The installation in which the report runs correctly is of the same version and has the same version of the ZuckerReports module. The report previously ran correctly on both installations. I think the only changes made to the now-failing installation since the report last ran successfully are the addition of a few custom fields in the Contacts module; these changes should have nothing to do with ZuckerReports. I have tried uninstalling and reinstalling the ZuckerReports module, but the problem remains. A Google search for the warnings given in the error message, i.e. * log4j:WARN No appenders could be found for logger (net.sf.jasperreports.extensions.ExtensionsEnvironment). * log4j:WARN Please initialize the log4j system properly. returns a few links (not specific to ZuckerReports) with tips similar to the following: * log4j.properties or log4j.xml needs to be on the classpath where log4j can find it. I cannot find a file with either of those names anywhere on my server, and yet the report runs successfully on one of my SugarCRM installations, so I figure log4j must be configured some other way. Can anyone suggest a way to solve this problem? Or explain how I might discover how log4j is configured in ZuckerReports? Or explain how I might compare the working installation with the non-working one in order to help find a solution? (I have tried searching for files containing "log4j" in both installations and comparing, but all I can find are .jar files (nothing I can read with a text editor), and the .jar files found in each installation appear to be the same.)
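For reference, the kind of file those tips describe is a plain-text log4j.properties along these lines. This is a generic illustration, not something shipped with ZuckerReports; whether the report runner would pick it up depends on placing it in one of the classpath folders shown in the cmdline above (for example custom/ZuckerReports/resources/):

    # minimal log4j 1.2 configuration - illustrative only
    log4j.rootLogger=WARN, console
    log4j.appender.console=org.apache.log4j.ConsoleAppender
    log4j.appender.console.layout=org.apache.log4j.PatternLayout
    log4j.appender.console.layout.ConversionPattern=%d %-5p [%c] %m%n

Note that this would only address the two log4j warnings, not the IllegalArgumentException itself, which the stack trace shows coming from the chart dataset being given a null key.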

    Read the article

  • Mobile enabled web apps with ASP.NET MVC 3 and jQuery Mobile

    - by shiju
    In my previous blog posts, I have demonstrated a simple web app using ASP.NET MVC 3 and EF Code First. In this post, I will focus on making this application work for mobile devices. A single web site will be used for both mobile browsers and desktop browsers. If users access the web app from mobile browsers, they will be redirected to mobile-specific pages, and they will get the normal pages if they access it from desktop browsers. In this demo app, the mobile-specific pages are maintained in an ASP.NET MVC Area named Mobile, and mobile users will be redirected to the MVC Area Mobile. Let’s add a new area named Mobile to the ASP.NET MVC app. To add the Area, right-click the ASP.NET MVC project and select Area from the Add option. Our mobile-specific pages using jQuery Mobile will be maintained in the Mobile Area. ASP.NET MVC Global filter for redirecting mobile visitors to Mobile area Let’s add an ASP.NET MVC Global filter for redirecting mobile visitors to the Mobile area. The Global filter below is taken from the sample app http://aspnetmobilesamples.codeplex.com/ created by the ASP.NET team. The filter below will redirect mobile visitors to the ASP.NET MVC Area Mobile. public class RedirectMobileDevicesToMobileAreaAttribute : AuthorizeAttribute { protected override bool AuthorizeCore(System.Web.HttpContextBase httpContext) { // Only redirect on the first request in a session if (!httpContext.Session.IsNewSession) return true; // Don't redirect non-mobile browsers if (!httpContext.Request.Browser.IsMobileDevice) return true; // Don't redirect requests for the Mobile area if (Regex.IsMatch(httpContext.Request.Url.PathAndQuery, "/Mobile($|/)")) return true; return false; } protected override void HandleUnauthorizedRequest(AuthorizationContext filterContext) { var redirectionRouteValues = GetRedirectionRouteValues(filterContext.RequestContext); filterContext.Result = new RedirectToRouteResult(redirectionRouteValues); } // Override this method if you want to customize the controller/action/parameters to which // mobile users would be redirected. This lets you redirect users to the mobile equivalent // of whatever resource they originally requested. protected virtual RouteValueDictionary GetRedirectionRouteValues(RequestContext requestContext) { return new RouteValueDictionary(new { area = "Mobile", controller = "Home", action = "Index" }); } } Let’s add the global filter RedirectMobileDevicesToMobileAreaAttribute to the global filter collection in the Application_Start() of the Global.asax.cs file GlobalFilters.Filters.Add(new RedirectMobileDevicesToMobileAreaAttribute(), 1); Now your mobile visitors will be redirected to the Mobile area. But the browser detection logic in the RedirectMobileDevicesToMobileAreaAttribute filter will not work in some modern browsers and under some conditions. The good news is that ASP.NET’s browser detection feature is extensible and works well with the open source framework 51Degrees.mobi. 51Degrees.mobi is a Browser Capabilities Provider that works with ASP.NET’s Request.Browser and provides more accurate and detailed information. For more details visit the documentation page at http://51degrees.codeplex.com/documentation.
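Going back to the filter above, the comment on GetRedirectionRouteValues hints that you can send visitors to the mobile equivalent of whatever page they asked for rather than always to Mobile/Home/Index. A rough sketch of such an override follows; the assumption that every controller/action has a matching counterpart in the Mobile area is mine, not part of the sample:

    using System.Web.Routing;

    public class RedirectToMatchingMobileActionAttribute : RedirectMobileDevicesToMobileAreaAttribute
    {
        protected override RouteValueDictionary GetRedirectionRouteValues(RequestContext requestContext)
        {
            // Reuse the controller/action of the original request, but target the Mobile area.
            var values = requestContext.RouteData.Values;
            var controller = (values["controller"] as string) ?? "Home";
            var action = (values["action"] as string) ?? "Index";
            return new RouteValueDictionary(new { area = "Mobile", controller, action });
        }
    }

If you want that behaviour, register this derived attribute in Application_Start() instead of the base one.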
    Let’s add a reference to the 51Degrees.mobi library using NuGet We can easily add 51Degrees.mobi from NuGet and this will update the web.config with the necessary configurations. Mobile Web App using jQuery Mobile Framework The jQuery Mobile Framework is built on top of jQuery and provides top-of-the-line JavaScript in a unified User Interface that works across the most-used smartphone web browsers and tablet form factors. It provides an easy way to develop user interfaces for mobile web apps. The current version of the framework is jQuery Mobile Alpha 3. We need to include the following files to use jQuery Mobile. The jQuery Mobile CSS file (jquery.mobile-1.0a3.min.css) The jQuery library (jquery-1.5.min.js) The jQuery Mobile library (jquery.mobile-1.0a3.min.js) Let’s add the required jQuery files directly from the jQuery CDN. You can also download the files and host them on your own server. jQuery Mobile page structure The basic jQuery Mobile page structure is given below <!DOCTYPE html> <html>   <head>   <title>Page Title</title>   <link rel="stylesheet" href="http://code.jquery.com/mobile/1.0a3/jquery.mobile-1.0a3.min.css" />   <script src="http://code.jquery.com/jquery-1.5.min.js"></script>   <script src="http://code.jquery.com/mobile/1.0a3/jquery.mobile-1.0a3.min.js"></script> </head> <body> <div data-role="page">   <div data-role="header">     <h1>Page Title</h1>   </div>   <div data-role="content">     <p>Page content goes here.</p>   </div>   <div data-role="footer">     <h4>Page Footer</h4>   </div> </div> </body> </html> The data- attributes are a new feature of HTML5, so jQuery Mobile works on browsers that support HTML5. You can get detailed browser support information from http://jquerymobile.com/gbs/ . In the Head section we have included the core jQuery JavaScript file, the jQuery Mobile library and the core CSS library for the UI element styling. These jQuery files are minified versions and will improve page load performance on mobile devices. jQuery Mobile pages are identified by an element with the data-role="page" attribute inside the <body> tag. <div data-role="page"> </div> Within the "page" container, any valid HTML markup can be used, but for typical pages in jQuery Mobile, the immediate children of a "page" are div elements with data-roles of "header", "content", and "footer". <div data-role="page">     <div data-role="header">...</div>     <div data-role="content">...</div>     <div data-role="footer">...</div> </div> The div data-role="content" holds the main content of the HTML page and will be used for the user interaction elements. The div data-role="header" is the header part of the page and the div data-role="footer" is the footer part of the page. Creating Mobile specific pages in the Mobile Area Let’s create a Layout page for our Mobile area <!DOCTYPE html> <html> <head>     <title>@ViewBag.Title</title>     <link rel="stylesheet" href="http://code.jquery.com/mobile/1.0a3/jquery.mobile-1.0a3.min.css" />     <script src="http://code.jquery.com/jquery-1.5.min.js"></script>     <script src="http://code.jquery.com/mobile/1.0a3/jquery.mobile-1.0a3.min.js"></script> </head> <body> @RenderBody() </body> </html> In the Layout page, I have given references to the jQuery Mobile JavaScript files and the CSS file.
    Let’s add an Index view page Index.cshtml @{     ViewBag.Title = "Index"; } <div data-role="page"> <div data-role="header">   <h1>Expense Tracker Mobile</h1> </div> <div data-role="content">   <ul data-role="listview">     <li>@Html.Partial("_LogOnPartial")</li>     <li>@Html.ActionLink("Home", "Index", "Home")</li>     <li>@Html.ActionLink("Category", "Index", "Category")</li>     <li>@Html.ActionLink("Expense", "Index", "Expense")</li> </ul> </div> <div data-role="footer">   Shiju Varghese | <a href="http://weblogs.asp.net/shijuvarghese">Blog</a> | <a href="http://twitter.com/shijucv">Twitter</a> </div> </div>   In the Index page, we have used the data-role "listview" for showing our content as a list view. Let’s create a data entry screen Create.cshtml @model MyFinance.Domain.Category @{     ViewBag.Title = "Create Category"; } <div data-role="page"> <div data-role="header">   <h1>Create Category</h1>   @Html.ActionLink("Home", "Index","Home",null, new { @class = "ui-btn-right" }) </div> <div data-role="content">     @using (Html.BeginForm("Create","Category",FormMethod.Post))     {       <div data-role="fieldcontain">        @Html.LabelFor(model => model.Name)        @Html.EditorFor(model => model.Name)        <div>           @Html.ValidationMessageFor(m => m.Name)        </div>       </div>       <div data-role="fieldcontain">         @Html.LabelFor(model => model.Description)         @Html.EditorFor(model => model.Description)       </div>       <div class="ui-body ui-body-b">         <button type="submit" data-role="button" data-theme="b">Save</button>       </div>     } </div> </div>   In jQuery Mobile, form elements should be placed inside a data-role="fieldcontain" element. The screenshots below show the pages rendered in a mobile browser Index Page Create Page Source Code You can download the source code from http://efmvc.codeplex.com   Summary We have created a single web app for desktop browsers and mobile browsers. If users access the site from desktop browsers, they get the normal web pages, and they get mobile-specific pages if they access it from mobile browsers. If users are accessing the website from mobile devices, we redirect them to the ASP.NET MVC area Mobile. For redirecting to the Mobile area, we have used a Global filter for the redirection logic and used the open source framework 51Degrees.mobi for better mobile browser detection. In the Mobile area, we have created the pages using jQuery Mobile so users get mobile-friendly web pages. We can create great mobile web apps using ASP.NET MVC and the jQuery Mobile Framework.
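As a footnote to the Create view above: the article ships its controllers in the downloadable source rather than showing them, so the sketch below of a matching Mobile-area CategoryController is a reconstruction, not the author's code; the namespace and the persistence step are assumptions:

    using System.Web.Mvc;
    using MyFinance.Domain;

    namespace MyFinance.Web.Areas.Mobile.Controllers
    {
        public class CategoryController : Controller
        {
            // GET: /Mobile/Category/Create - renders the jQuery Mobile form shown above
            public ActionResult Create()
            {
                return View();
            }

            // POST: /Mobile/Category/Create - model binding populates Name and Description
            [HttpPost]
            public ActionResult Create(Category category)
            {
                if (!ModelState.IsValid)
                    return View(category); // redisplay the form with validation messages

                // Persist the category here (the sample app uses EF Code First for storage).
                return RedirectToAction("Index");
            }
        }
    }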

    Read the article

  • Making WCF Output a single WSDL file for interop purposes.

    - by Glav
    By default, when WCF emits a WSDL definition for your services, it can often contain many links to other related schemas that need to be imported. For the most part, this is fine. WCF clients understand this type of schema without issue, and it conforms to the requisite standards as far as WSDL definitions go. However, some non-Microsoft stacks will only work with a single WSDL file and require that all definitions for the service(s) (port types, messages, operations etc…) are contained within that single file. In other words, no external imports are supported. Some Java clients (to my working knowledge) have this limitation. This obviously presents a problem when trying to create services exposed for consumption and interop by these clients. Note: You can download the full source code for this sample from here To illustrate this point, let’s say we have a simple service that looks like: Service Contract public interface IService1 { [OperationContract] [FaultContract(typeof(DataFault))] string GetData(DataModel1 model); [OperationContract] [FaultContract(typeof(DataFault))] string GetMoreData(DataModel2 model); } Service Implementation/Behaviour public class Service1 : IService1 { public string GetData(DataModel1 model) { return string.Format("Some Field was: {0} and another field was {1}", model.SomeField, model.AnotherField); } public string GetMoreData(DataModel2 model) { return string.Format("Name: {0}, age: {1}", model.Name, model.Age); } } Configuration File <system.serviceModel> <services> <service name="SingleWSDL_WcfService.Service1" behaviorConfiguration="SingleWSDL_WcfService.Service1Behavior"> <!-- ...std/default data omitted for brevity..... --> <endpoint address ="" binding="wsHttpBinding" contract="SingleWSDL_WcfService.IService1" > ....... </services> <behaviors> <serviceBehaviors> <behavior name="SingleWSDL_WcfService.Service1Behavior"> ........ </behavior> </serviceBehaviors> </behaviors> </system.serviceModel>
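The post does not list the data contracts behind IService1. Judging from how the fields are used in GetData/GetMoreData and from the xsd:import namespaces in the WSDL shown below, they would look roughly like this; the property types and the fault member are guesses, not the author's code:

    using System.Runtime.Serialization;

    [DataContract(Namespace = "http://SingleWSDL/Model1")]
    public class DataModel1
    {
        [DataMember] public string SomeField { get; set; }
        [DataMember] public string AnotherField { get; set; }
    }

    [DataContract(Namespace = "http://SingleWSDL/Model2")]
    public class DataModel2
    {
        [DataMember] public string Name { get; set; }
        [DataMember] public int Age { get; set; }
    }

    // Namespace capitalised as it appears in the generated WSDL below.
    [DataContract(Namespace = "Http://SingleWSDL/Fault")]
    public class DataFault
    {
        [DataMember] public string Message { get; set; }
    }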
When WCF is asked to produce a WSDL for this service, it will produce a file that looks something like this (note: some sections omitted for brevity): <?xml version="1.0" encoding="utf-8" ?> <wsdl:definitions name="Service1" targetNamespace="http://tempuri.org/" xmlns:wsdl="http://schemas.xmlsoap.org/wsdl/" xmlns:soap="http://schemas.xmlsoap.org/wsdl/soap/" ...... namespace definitions omitted for brevity <wsp:Policy wsu:Id="WSHttpBinding_IService1_policy"> ... multiple policy items omitted for brevity </wsp:Policy> <wsdl:types> <xsd:schema targetNamespace="http://tempuri.org/Imports"> <xsd:import schemaLocation="http://localhost:2370/HostingSite/Service-default.svc?xsd=xsd0" namespace="http://tempuri.org/" /> <xsd:import schemaLocation="http://localhost:2370/HostingSite/Service-default.svc?xsd=xsd3" namespace="Http://SingleWSDL/Fault" /> <xsd:import schemaLocation="http://localhost:2370/HostingSite/Service-default.svc?xsd=xsd1" namespace="http://schemas.microsoft.com/2003/10/Serialization/" /> <xsd:import schemaLocation="http://localhost:2370/HostingSite/Service-default.svc?xsd=xsd2" namespace="http://SingleWSDL/Model1" /> <xsd:import schemaLocation="http://localhost:2370/HostingSite/Service-default.svc?xsd=xsd4" namespace="http://SingleWSDL/Model2" /> </xsd:schema> </wsdl:types> <wsdl:message name="IService1_GetData_InputMessage"> .... </wsdl:message> <wsdl:operation name="GetData"> ..... </wsdl:operation> <wsdl:service name="Service1"> ....... </wsdl:service> </wsdl:definitions> The above snippet from the WSDL shows the external links and references that are generated by WCF for a relatively simple service. Note the xsd:import statements that reference external XSD definitions which are also generated by WCF. In order to get WCF to produce a single WSDL file, we first need to follow some good practices when it comes to WCF service definitions. Step 1: Define a namespace for your service contract. [ServiceContract(Namespace="http://SingleWSDL/Service1")] public interface IService1 { ...... } Normally you would not use a literal string and may instead define a constant to use in your own application for the namespace.
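For example, a small sketch of what that constant might look like (the class and constant names here are mine, not from the post):

    using System.ServiceModel;

    internal static class ServiceNamespaces
    {
        // One place to keep the namespace so the contract, behaviour and binding stay in sync.
        public const string Service1 = "http://SingleWSDL/Service1";
    }

    [ServiceContract(Namespace = ServiceNamespaces.Service1)]
    public interface IService1
    {
        // ... operations as above ...
    }

The same constant can then be reused in the later steps below.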
When this is applied and we generate the WSDL, we get the following statement inserted into the document: <wsdl:import namespace="http://SingleWSDL/Service1" location="http://localhost:2370/HostingSite/Service-default.svc?wsdl=wsdl0" /> All the previous imports have gone. If we follow this link, we will see that the XSD imports are now in this external WSDL file. Not really any benefit for our purposes. Step 2: Define a namespace for your service behaviour [ServiceBehavior(Namespace = "http://SingleWSDL/Service1")] public class Service1 : IService1 { ...... } As you can see, the namespace of the service behaviour should be the same as that of the service contract interface it implements. Failure to do this will cause WCF to emit its default http://tempuri.org namespace all over the place and cause WCF to still generate import statements. This is also true if the namespaces of the contract and behaviour differ: if you define one and not the other, defaults kick in, and you’ll find extra imports generated. While neither of the previous 2 steps will cause fewer import statements to be generated, you will notice that the namespace definitions within the WSDL now have identical, well-defined names. Step 3: Define a binding namespace In the configuration file, modify the endpoint configuration line item to include a bindingNamespace attribute which is the same as that defined on the service behaviour and service contract <endpoint address="" binding="wsHttpBinding" contract="SingleWSDL_WcfService.IService1" bindingNamespace="http://SingleWSDL/Service1"> However, this does not completely solve the issue.
What this will do is remove WSDL import statements like this one: <wsdl:import namespace="http://SingleWSDL/Service1" location="http://localhost:2370/HostingSite/Service-default.svc?wsdl" /> from the generated WSDL. Finally… the magic. Step 4: Use a custom endpoint behaviour to read in external imports and include them in the main WSDL output. In order to force WCF to output a single WSDL with all the required definitions, we need to define a custom WSDL export extension that can be applied to any endpoints. This requires implementing the IWsdlExportExtension and IEndpointBehavior interfaces, then reading in any imported schemas and adding that output to the main, flattened WSDL to be output. Sounds like fun, right? Hmmm, well maybe not. This step sounds a little hairy, but it's actually quite easy thanks to some kind individuals who have already done this for us. As far as I know, there are 2 available implementations that we can easily use to perform the import and “WSDL flattening”: WCFExtras which is on CodePlex and FlatWsdl by Thinktecture. Both implementations actually do exactly the same thing with the imports and provide an endpoint behaviour, however FlatWsdl does a little more work for us by providing a ServiceHostFactory that we can use which automatically attaches the requisite behaviour to our endpoints for us. To use this in an IIS hosted service, we can modify the .SVC file to specify this new factory like so: <%@ ServiceHost Language="C#" Debug="true" Service="SingleWSDL_WcfService.Service1" Factory="Thinktecture.ServiceModel.Extensions.Description.FlatWsdlServiceHostFactory" %> Within a service application or another form of executable such as a console app, we can simply create an instance of the custom service host and open it as we normally would, as shown here: FlatWsdlServiceHost host = new FlatWsdlServiceHost(typeof(Service1)); host.Open(); And we are done. WCF will now generate one single WSDL file that contains all the WSDL imports and data/XSD imports. You can download the full source code for this sample from here Hope this has helped you. Note: Please note that I have not extensively tested this in a number of different scenarios so no guarantees there.
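One quick way to sanity-check the result (this snippet is mine, not part of the original post) is to download the generated WSDL and confirm it no longer contains any import statements; adjust the address to wherever your service is hosted:

    using System;
    using System.Net;

    class WsdlCheck
    {
        static void Main()
        {
            using (var client = new WebClient())
            {
                // Address taken from the examples above - change it to match your host.
                string wsdl = client.DownloadString("http://localhost:2370/HostingSite/Service-default.svc?wsdl");
                bool flat = !wsdl.Contains("wsdl:import") && !wsdl.Contains("xsd:import");
                Console.WriteLine(flat ? "Single, flattened WSDL" : "WSDL still references external imports");
            }
        }
    }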

    Read the article

  • Quick guide to Oracle IRM 11g: Server configuration

    - by Simon Thorpe
    Quick guide to Oracle IRM 11g index Welcome to the second article in this quick guide to Oracle IRM 11g. Hopefully you've just finished the first article, which takes you through deploying the software onto a Linux server. This article walks you through the configuration of this new service; it contains a subset of information from the official documentation and is focused on installing the server on Oracle Enterprise Linux. If you are planning to deploy on a non-Linux platform, you will need to reference the documentation for platform-specific information. Contents Introduction Create IRM WebLogic Domain Starting the Admin Server and initial configuration Introduction In the previous article the database was prepared, the WebLogic Application Server installed and the files required for an IRM server installed. But we don't actually have a configured system yet. We need to now create a WebLogic Domain in which the IRM server will run, then configure some of the settings and cryptography so that we can create a context and be ready to seal some content and test it all works. This article doesn't cover the configuration of SSL communication from client to server. This is quite a big topic and a separate article has been dedicated to this area. In these articles I also use the hostname irm.company.internal to reference the IRM server and later on use the hostname irm.company.com in reference to the public-facing service. Create IRM WebLogic Domain The first step is creating the WebLogic domain. In a console, switch to the newly created IRM installation folder as shown below and we will run the domain configuration wizard. [oracle@irm /]$ cd /oracle/middleware/Oracle_IRM/common/bin [oracle@irm bin]$ ./config.sh The first thing the wizard will ask is if you wish to create a new domain or extend an existing one. This guide is creating a standalone system so you should select to create a new domain. The next step is to choose what technologies from the Oracle ECM Suite you wish this domain to host. You are only interested in selecting the option "Oracle Information Rights Management". When you select this check box you will notice that it also selects "Oracle Enterprise Manager" and "Oracle JRF" as these are dependencies of the IRM server. You then need to specify where you wish to place the domain files. I usually just change the domain name from base_domain to irm_domain and leave the others with their defaults. The domain will initially have a single user and by default this user is called "weblogic". I usually change this account name to "sysadmin" or "administrator", but in this guide let's just accept the default. With respect to the next dialog, again for eval or dev reasons, leave the server startup mode as development. The JDK should also be automatically detected. We now need to provide details of the database. This guide is using the Oracle 11gR2 database and the settings I used can be seen in the image to the right. There is a lot of configuration that can now be done for the admin server, any managed servers and where the deployments reside. In this guide I am leaving all of these at their defaults so do not check any of the boxes. However I will on this blog be detailing later how you can go back and set up things such as automated startup of an IRM server, which requires changes to these default settings. But for now, let's leave it all alone and just click next. Now we are ready to install.
Note that from this dialog you can scroll the left window and see there are going to be two servers created from the defaults. The AdminServer which is where you modify settings for the WebLogic Server and also hosts the Oracle Enterprise Manager for IRM which allows to monitor the IRM service performance and also make service related settings (which we shortly do below) and the IRM_server1 which hosts the actual IRM services themselves. So go right ahead and hit create, the process is pretty quick and usually under 10 minutes. When the domain creation ends, it will give you the URL to the admin server. It's worth noting this down and the URL is usually; http://irm.company.internal:7001 Starting the Admin Server and initial configuration First thing to do is to start the WebLogic Admin server and review the initial IRM server settings. In this guide we are going to run the Admin server and IRM server in console windows, in another article I will discuss running these as background services. So for now, start a console and run the Admin server by doing the following. cd /oracle/middleware/user_projects/domains/irm_domain/ ./startWebLogic.sh Wait for the server to start, you are looking for the following line to be reported in the console window. <BEA-00360><Server started in RUNNING mode> First step is configuring the IRM service via Enterprise Manager. Now that the Admin server is running you can point a browser at http://irm.company.internal:7001/em. Login with the username and password you supplied when you created the domain. In Enterprise Manager the IRM service administrator is able to make server wide configuration. However finding where to access the pages with these settings can be a bit of a challenge. After logging in on the left you'll see a tree containing elements of the Enterprise Manager farm Farm_irm_domain. Open up Content Management, then Information Rights Management and finally select the IRM node. On the right then select the IRM menu item, navigate to the Administration section and now we have four options, for now, we are just going to look at General Settings. The image on the right proves that a picture is worth a thousand words (or 113 in this case). The General Settings page allows you to set the cryptographic algorithms used for protecting sealed content. Unless you have a burning need to increase the key lengths or you need to comply to a regulation or government mandate, AES192 is a good start. You can change this later on without worry. The most important setting here we need to make is the Server URL. In this blog article I go over why this URL is so important, basically every single piece of content you protect with Oracle IRM is going to have this URL embedded in it, so if it's wrong or unresolvable, then nobody can open the secured documents. Note that in our environment we have yet to do any SSL configuration of the service. If you intend to build a server without SSL, then use http as the protocol instead of https. But I would recommend using SSL and setting this up is described in the next article. I would also probably up the device count from 1 to 3. This means that any user can retrieve rights to access content onto 3 computers at any one time. The default of 1 doesn't really make sense in development, evaluation nor even production environments and my experience is that 3 is a better number. Next step is to create the keystore for the IRM server. 
When a classification (called a context) is created, Oracle IRM generates a unique set of symmetric keys which are used to secure the content itself. These keys are then encrypted with a set of "wrapper" asymmetric cryptography keys which are stored externally to the server either in a Java Key Store or a HSM. These keys need to be generated and the following shows my commands and the resulting output. I have greyed out the responses from the commands so you can see the input a little easier. [oracle@irmsrv ~]$ cd /oracle/middleware/wlserver_10.3/server/bin/ [oracle@irmsrv bin]$ ./setWLSEnv.sh CLASSPATH=/oracle/middleware/patch_wls1033/profiles/default/sys_manifest_classpath/weblogic_patch.jar:/oracle/middleware/patch_ocp353/profiles/default/sys_manifest_classpath/weblogic_patch.jar:/usr/java/jdk1.6.0_18/lib/tools.jar:/oracle/middleware/wlserver_10.3/server/lib/weblogic_sp.jar:/oracle/middleware/wlserver_10.3/server/lib/weblogic.jar:/oracle/middleware/modules/features/weblogic.server.modules_10.3.3.0.jar:/oracle/middleware/wlserver_10.3/server/lib/webservices.jar:/oracle/middleware/modules/org.apache.ant_1.7.1/lib/ant-all.jar:/oracle/middleware/modules/net.sf.antcontrib_1.1.0.0_1-0b2/lib/ant-contrib.jar: PATH=/oracle/middleware/wlserver_10.3/server/bin:/oracle/middleware/modules/org.apache.ant_1.7.1/bin:/usr/java/jdk1.6.0_18/jre/bin:/usr/java/jdk1.6.0_18/bin:/usr/kerberos/bin:/usr/local/bin:/bin:/usr/bin:/home/oracle/bin Your environment has been set. [oracle@irmsrv bin]$ cd /oracle/middleware/user_projects/domains/irm_domain/config/fmwconfig/ [oracle@irmsrv fmwconfig]$ keytool -genkeypair -alias oracle.irm.wrap -keyalg RSA -keysize 2048 -keystore irm.jks Enter keystore password: Re-enter new password: What is your first and last name? [Unknown]: Simon Thorpe What is the name of your organizational unit? [Unknown]: Oracle What is the name of your organization? [Unknown]: Oracle What is the name of your City or Locality? [Unknown]: San Francisco What is the name of your State or Province? [Unknown]: CA What is the two-letter country code for this unit? [Unknown]: US Is CN=Simon Thorpe, OU=Oracle, O=Oracle, L=San Francisco, ST=CA, C=US correct? [no]: yes Enter key password for (RETURN if same as keystore password): At this point we now have an irm.jks in the directory /oracle/middleware/user_projects/domains/irm_domain/config/fmwconfig. The reason we store it here is this folder would be backed up as part of a domain backup. As with any cryptographic technology, DO NOT LOSE THESE KEYS OR THIS KEY STORE. Once you've sealed content against a context, the keys will be wrapped with these keys, lose these keys, and you can't get access to any secured content, pretty important. Now we've got the keys created, we need to go back to the IRM Enterprise Manager and set the location of the key store. Going back to the General Settings page in Enterprise Manager scroll down to Keystore Settings. Leave the type as JKS but change the location to; /oracle/Middleware/user_projects/domains/irm_domain/config/fmwconfig/irm.jks and hit Apply. The final step with regards to the key store is we need to tell the server what the password is for the Java Key Store so that it can be opened and the keys accessed. Once more fire up a console window and run these commands (again i've greyed out the clutter to see the commands easier). You will see dummy passed into the commands, this is because the command asks for a username, but in this instance we don't use one, hence the value dummy is passed and it isn't used. 
[oracle@irmsrv fmwconfig]$ cd /oracle/middleware/Oracle_IRM/common/bin/ [oracle@irmsrv bin]$ ./wlst.sh ... lots of settings fly by... Welcome to WebLogic Server Administration Scripting Shell Type help() for help on available commands wls:/offline>connect('weblogic','password','t3://irmsrv.us.oracle.com:7001') Connecting to t3://irmsrv.us.oracle.com:7001 with userid weblogic ... Successfully connected to Admin Server 'AdminServer' that belongs to domain 'irm_domain'. Warning: An insecure protocol was used to connect to the server. To ensure on-the-wire security, the SSL port or Admin port should be used instead. wls:/irm_domain/serverConfig>createCred("IRM","keystore:irm.jks","dummy","password") Location changed to domainRuntime tree. This is a read-only tree with DomainMBean as the root. For more help, use help(domainRuntime)wls:/irm_domain/serverConfig>createCred("IRM","key:irm.jks:oracle.irm.wrap","dummy","password") Already in Domain Runtime Tree wls:/irm_domain/serverConfig> At last we are now ready to fire up the IRM server itself. The domain creation created a managed server called IRM_server1 and we need to start this, use the following commands in a new console window. cd /oracle/middleware/user_projects/domains/irm_domain/bin/ ./startManagedWebLogic.sh IRM_server1 This will start up the server in the console, unlike the Admin server, you need to provide the username and password for the service to start. Enter in your weblogic username and password when prompted. You can change this behavior by putting the password into a boot.properties file, read more about this in the WebLogic Server documentation. Once running, wait until you see the line; <Notice><WebLogicServer><BEA-000360><Server started in RUNNING mode> At this point we can now login to the Oracle IRM Management Website at the URL. http://irm.company.internal:1600/irm_rights/ The server is just configured for HTTP at the moment, no SSL involved. Just want to ensure we can get a working system up and running. You should now see a login like the image on the right and you can now login using your weblogic username and password. The next article in this guide goes over adding SSL and now testing your server by actually adding a few users, sealing some content and opening this content as a user.
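One practical follow-up to the boot.properties note above: the file is just a two-line properties file dropped into the managed server's security directory, and WebLogic encrypts the values the first time the server starts with it in place. A minimal sketch, assuming the IRM_server1 managed server and the domain path used throughout this guide:

    # /oracle/middleware/user_projects/domains/irm_domain/servers/IRM_server1/security/boot.properties
    # (create the security directory if it does not yet exist)
    username=weblogic
    password=yourWebLogicPassword

With that file in place, ./startManagedWebLogic.sh IRM_server1 no longer prompts for credentials on startup.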

    Read the article

  • Where does ASP.NET Web API Fit?

    - by Rick Strahl
    With the pending release of ASP.NET MVC 4 and the new ASP.NET Web API, there has been a lot of discussion of where the new Web API technology fits in the ASP.NET Web stack. There are a lot of choices to build HTTP based applications available now on the stack - we've come a long way from when WebForms and Http Handlers/Modules where the only real options. Today we have WebForms, MVC, ASP.NET Web Pages, ASP.NET AJAX, WCF REST and now Web API as well as the core ASP.NET runtime to choose to build HTTP content with. Web API definitely squarely addresses the 'API' aspect - building consumable services - rather than HTML content, but even to that end there are a lot of choices you have today. So where does Web API fit, and when doesn't it? But before we get into that discussion, let's talk about what a Web API is and why we should care. What's a Web API? HTTP 'APIs' (Microsoft's new terminology for a service I guess)  are becoming increasingly more important with the rise of the many devices in use today. Most mobile devices like phones and tablets run Apps that are using data retrieved from the Web over HTTP. Desktop applications are also moving in this direction with more and more online content and synching moving into even traditional desktop applications. The pending Windows 8 release promises an app like platform for both the desktop and other devices, that also emphasizes consuming data from the Cloud. Likewise many Web browser hosted applications these days are relying on rich client functionality to create and manipulate the browser user interface, using AJAX rather than server generated HTML data to load up the user interface with data. These mobile or rich Web applications use their HTTP connection to return data rather than HTML markup in the form of JSON or XML typically. But an API can also serve other kinds of data, like images or other binary files, or even text data and HTML (although that's less common). A Web API is what feeds rich applications with data. ASP.NET Web API aims to service this particular segment of Web development by providing easy semantics to route and handle incoming requests and an easy to use platform to serve HTTP data in just about any content format you choose to create and serve from the server. But .NET already has various HTTP Platforms The .NET stack already includes a number of technologies that provide the ability to create HTTP service back ends, and it has done so since the very beginnings of the .NET platform. From raw HTTP Handlers and Modules in the core ASP.NET runtime, to high level platforms like ASP.NET MVC, Web Forms, ASP.NET AJAX and the WCF REST engine (which technically is not ASP.NET, but can integrate with it), you've always been able to handle just about any kind of HTTP request and response with ASP.NET. The beauty of the raw ASP.NET platform is that it provides you everything you need to build just about any type of HTTP application you can dream up from low level APIs/custom engines to high level HTML generation engine. ASP.NET as a core platform clearly has stood the test of time 10+ years later and all other frameworks like Web API are built on top of this ASP.NET core. However, although it's possible to create Web APIs / Services using any of the existing out of box .NET technologies, none of them have been a really nice fit for building arbitrary HTTP based APIs. 
Sure, you can use an HttpHandler to create just about anything, but you have to build a lot of plumbing to build something more complex like a comprehensive API that serves a variety of requests, handles multiple output formats and can easily pass data up to the server in a variety of ways. Likewise you can use ASP.NET MVC to handle routing and creating content in various formats fairly easily, but it doesn't provide a great way to automatically negotiate content types and serve various content formats directly (it's possible to do with some plumbing code of your own but not built in). Prior to Web API, Microsoft's main push for HTTP services has been WCF REST, which was always an awkward technology that had a severe personality conflict, not being clear on whether it wanted to be part of WCF or purely a separate technology. In the end it didn't do either WCF compatibility or WCF agnostic pure HTTP operation very well, which made for a very developer-unfriendly environment. Personally I didn't like any of the implementations at the time, so much so that I ended up building my own HTTP service engine (as part of the West Wind Web Toolkit), as have a few other third party tools that provided much better integration and ease of use. With the release of Web API for the first time I feel that I can finally use the tools in the box and not have to worry about creating and maintaining my own toolkit as Web API addresses just about all the features I implemented on my own and much more. ASP.NET Web API provides a better HTTP Experience ASP.NET Web API differentiates itself from the previous Microsoft in-box HTTP service solutions in that it was built from the ground up around the HTTP protocol and its messaging semantics. Unlike WCF REST or ASP.NET AJAX with ASMX, it’s a brand new platform rather than bolted on technology that is supposed to work in the context of an existing framework. The strength of the new ASP.NET Web API is that it combines the best features of the platforms that came before it, to provide a comprehensive and very usable HTTP platform. Because it's based on ASP.NET and borrows a lot of concepts from ASP.NET MVC, Web API should be immediately familiar and comfortable to most ASP.NET developers. Here are some of the features that Web API provides that I like: Strong Support for URL Routing to produce clean URLs using familiar MVC style routing semantics Content Negotiation based on Accept headers for request and response serialization Support for a host of supported output formats including JSON, XML, ATOM Strong default support for REST semantics but they are optional Easily extensible Formatter support to add new input/output types Deep support for more advanced HTTP features via HttpResponseMessage and HttpRequestMessage classes and strongly typed Enums to describe many HTTP operations Convention based design that drives you into doing the right thing for HTTP Services Very extensible, based on MVC like extensibility model of Formatters and Filters Self-hostable in non-Web applications  Testable using testing concepts similar to MVC Web API is meant to handle any kind of HTTP input and produce output and status codes using the full spectrum of HTTP functionality available in a straight forward and flexible manner. Looking at the list above you can see that a lot of functionality is very similar to ASP.NET MVC, so many ASP.NET developers should feel quite comfortable with the concepts of Web API. 
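To make those MVC-like conventions concrete, here is a minimal, illustrative Web API controller and route registration; the names and the in-memory data are mine, not from the article:

    using System.Collections.Generic;
    using System.Web.Http;

    public class Product
    {
        public int Id { get; set; }
        public string Name { get; set; }
    }

    public class ProductsController : ApiController
    {
        static readonly List<Product> Products = new List<Product>
        {
            new Product { Id = 1, Name = "Widget" },
            new Product { Id = 2, Name = "Gadget" }
        };

        // GET api/products - the Get* prefix maps the method to the HTTP GET verb by convention
        public IEnumerable<Product> GetAllProducts()
        {
            return Products;
        }

        // GET api/products/1 - {id} from the route binds to the parameter; the result is
        // serialized as JSON or XML depending on the client's Accept header
        public Product GetProduct(int id)
        {
            return Products.Find(p => p.Id == id);
        }
    }

    // Route registration (e.g. in Application_Start):
    // routes.MapHttpRoute("DefaultApi", "api/{controller}/{id}", new { id = RouteParameter.Optional });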
The routing and core infrastructure of Web API are very similar to how MVC works, providing many of the benefits of MVC, but with a focus on HTTP access and manipulation in controller methods rather than HTML generation. There's much improved support for content negotiation based on HTTP Accept headers, with the framework able to detect automatically what content the client is sending and requesting, and to serve the appropriate data format in return. This seems like such a little and obvious thing, but it's really important: today's service back ends are often used by multiple clients/applications, and being able to choose the data format that fits each client best matters a great deal. While previous solutions could accomplish this using a mix of WCF and ASP.NET features, Web API combines all of this functionality into a single, robust server-side HTTP framework that intrinsically understands HTTP semantics and subtly drives you in the right direction for most operations. And when you need to customize or do something that isn't built in, there are lots of hooks and overrides for most behaviors, and even many low-level hook points that let you plug in custom functionality with relatively little effort.

No-brainers for Web API

There are a few scenarios that are a slam dunk for Web API. If the primary focus of an application, or even of a part of an application, is some sort of API, then Web API makes great sense.

HTTP Services. If you're building a comprehensive HTTP API that is to be consumed over the Web, Web API is a perfect fit. You can isolate the logic in Web API and build your application as a service, breaking out the logic into controllers as needed. Because the primary interface is the service, there's no confusion about what should go where (MVC or API). Perfect fit.

Primary AJAX back ends. If you're building a rich client Web application that relies heavily on AJAX callbacks to serve its data, Web API is also a slam dunk. Again, because much if not most of the business logic will probably end up in your Web API service logic, there's no confusion over where logic should go and there's no duplication. In Single Page Applications (SPAs), there's typically very little HTML-based logic served other than bringing up a shell UI and then filling in the data from the server with AJAX, which means the business logic for data retrieval, data acceptance and validation also lives in the Web API. Perfect fit.

Generic HTTP endpoints. Another good fit are generic HTTP endpoints that serve data or handle 'utility'-type functionality in typical Web applications. If you need to implement an image server or an upload handler, in the past I'd have implemented that as an HTTP handler. With Web API you now have a well-defined place to implement these kinds of generic 'services', in a location that can easily grow additional endpoints (via controller methods) or be separated out into more full-featured APIs. Granted, this could be done with MVC as well, but Web API seems a clearer and better-defined place to keep generic application services. This is one thing I used to do a lot of in my own libraries, and Web API addresses it nicely. Great fit.

Mixed HTML and AJAX applications: not a clear choice

For all the commonality that Web API and MVC share, they are fundamentally different platforms that are independent of each other. A lot of people have asked when it makes sense to use MVC vs.
Web API when dealing with a typical Web application that generates HTML and also uses AJAX for rich client functionality. While it's easy to say that all 'service'/AJAX logic should go into a Web API and all HTML-related generation into MVC, that can often result in a lot of code duplication. Also, MVC supports JSON and XML result data fairly easily as well, so there's some confusion about where the 'trigger point' is - when you should switch to Web API rather than just implementing the functionality as part of your MVC controllers.

Ultimately there's a trade-off between isolation of functionality and duplication. A rule of thumb that I think works: if a large chunk of the application's functionality serves data, Web API is a good choice; but if you only have a couple of small AJAX requests to feed a grid or an autocomplete box, it'd be overkill to separate that logic out into a separate Web API controller. Web API does add overhead to your application (it's yet another framework that sits on top of core ASP.NET), so it should be worth it. Keep in mind that MVC can generate HTML, JSON/XML and just about any other content easily, and that functionality is not going away - just because Web API is there doesn't mean you have to use it. Web API is obviously not a full replacement for MVC either, since there isn't the same level of support for serving HTML from Web API controllers (although you can host a RazorEngine easily enough if you really want to go that route). So if your HTML is part of your API, or of the application in general, MVC is still the better choice, either alone or in combination with Web API. I suspect (and hope) that in the future Web API's functionality will merge even closer with MVC, so that you might even be able to mix functionality of both in a single controller and not have to make any trade-offs, but at the moment that's not the case.

Some issues to think about

Web API is similar to MVC, but not the same. Although Web API looks a lot like MVC, it isn't identical, and some common MVC functionality behaves differently in Web API. For example, the way single POST variables are handled is different from MVC and doesn't lend itself particularly well to some AJAX scenarios with POST data.

Code duplication. I already touched on this in the mixed HTML and Web API section, but if you build an MVC application that also exposes a Web API, it's quite likely you'll end up duplicating a bunch of code and - potentially - infrastructure. You may have to create authentication logic both for the HTML application and for the Web API, which might need something different altogether. More often than not the same logic is used, and there's no easy way to share it. If you implement an MVC ActionFilter and you want that same functionality in your Web API, you'll end up creating the filter twice.

AJAX data or AJAX HTML. In a recent post's comments, David made some really good points regarding the commonality of MVC and Web API and its place. One comment that caught my eye was a little more generic, regarding data services vs. HTML services. David says:

I see a lot of merit in the combination of Knockout.js, client side templates and view models, calling Web API for a responsive UI, but sometimes late at night that still leaves me wondering why I would no longer be using some of the nice tooling and features that have evolved in MVC ;-)

You know what - I can totally relate to that.
On the last Web-based mobile app I worked on, we decided to serve HTML partials to the client via AJAX for many (but not all!) things, rather than sending down raw data to inject into the DOM on the client via templating or direct manipulation. While there are definitely more bytes on the wire with this approach, the overhead ended up being fairly small if you keep the 'data' requests small and atomic, and it was often made up for by the lack of client-side HTML rendering. Server-rendered HTML for AJAX templating gives you so much better infrastructure support without having to screw around with 20 mismatched client libraries. Especially with MVC and partials it's pretty easy to break out your HTML logic into very small, atomic chunks, so it's actually easy to create small rendering islands that can be used via composition on the server, or via AJAX calls to small, tight partials that return HTML to the client. Although this approach is often frowned upon as too 'heavy', it worked really well in terms of developer effort, as well as providing surprisingly good performance on devices. There's still plenty of jQuery and AJAX logic happening on the client, but it's more manageable in small doses than trying to do the entire UI composition with JavaScript and/or 'not-quite-there-yet' template engines that are very difficult to debug. This is not an issue directly related to Web API, of course, but something to think about, especially for AJAX or SPA-style applications.

Summary

Web API is a great new addition to the ASP.NET platform, and it addresses a serious need for consolidation of the many half-baked HTTP service API technologies that came before it. Web API feels 'right' and hits the right combination of usability and flexibility, at least for me, and it's a good fit for true API scenarios. However, just because a new platform is available doesn't mean that other tools or tech that came before it should be discarded, or that everything must be upgraded to the new platform. There's nothing wrong with continuing to use MVC controller methods to handle API tasks if that's what your app is running now - there's very little to be gained by upgrading to Web API just because. But going forward, Web API clearly is the way to go when building HTTP data interfaces, and it's good to see that Microsoft got this one right - it was sorely needed!

Resources

ASP.NET Web API
AspConf Ask the Experts Session (first 5 minutes)

© Rick Strahl, West Wind Technologies, 2005-2012. Posted in Web API.

    Read the article

  • Is there a Telecommunications Reference Architecture?

    - by raul.goycoolea
    @font-face { font-family: "Arial"; }@font-face { font-family: "Courier New"; }@font-face { font-family: "Wingdings"; }@font-face { font-family: "Cambria"; }p.MsoNormal, li.MsoNormal, div.MsoNormal { margin: 0cm 0cm 0.0001pt; font-size: 12pt; font-family: "Times New Roman"; }p.MsoListParagraph, li.MsoListParagraph, div.MsoListParagraph { margin: 0cm 0cm 0.0001pt 36pt; font-size: 12pt; font-family: "Times New Roman"; }p.MsoListParagraphCxSpFirst, li.MsoListParagraphCxSpFirst, div.MsoListParagraphCxSpFirst { margin: 0cm 0cm 0.0001pt 36pt; font-size: 12pt; font-family: "Times New Roman"; }p.MsoListParagraphCxSpMiddle, li.MsoListParagraphCxSpMiddle, div.MsoListParagraphCxSpMiddle { margin: 0cm 0cm 0.0001pt 36pt; font-size: 12pt; font-family: "Times New Roman"; }p.MsoListParagraphCxSpLast, li.MsoListParagraphCxSpLast, div.MsoListParagraphCxSpLast { margin: 0cm 0cm 0.0001pt 36pt; font-size: 12pt; font-family: "Times New Roman"; }div.Section1 { page: Section1; }ol { margin-bottom: 0cm; }ul { margin-bottom: 0cm; } Abstract   Reference architecture provides needed architectural information that can be provided in advance to an enterprise to enable consistent architectural best practices. Enterprise Reference Architecture helps business owners to actualize their strategies, vision, objectives, and principles. It evaluates the IT systems, based on Reference Architecture goals, principles, and standards. It helps to reduce IT costs by increasing functionality, availability, scalability, etc. Telecom Reference Architecture provides customers with the flexibility to view bundled service bills online with the provision of multiple services. It provides real-time, flexible billing and charging systems, to handle complex promotions, discounts, and settlements with multiple parties. This paper attempts to describe the Reference Architecture for the Telecom Enterprises. It lays the foundation for a Telecom Reference Architecture by articulating the requirements, drivers, and pitfalls for telecom service providers. It describes generic reference architecture for telecom enterprises and moves on to explain how to achieve Enterprise Reference Architecture by using SOA.   Introduction   A Reference Architecture provides a methodology, set of practices, template, and standards based on a set of successful solutions implemented earlier. These solutions have been generalized and structured for the depiction of both a logical and a physical architecture, based on the harvesting of a set of patterns that describe observations in a number of successful implementations. It helps as a reference for the various architectures that an enterprise can implement to solve various problems. It can be used as the starting point or the point of comparisons for various departments/business entities of a company, or for the various companies for an enterprise. It provides multiple views for multiple stakeholders.   Major artifacts of the Enterprise Reference Architecture are methodologies, standards, metadata, documents, design patterns, etc.   Purpose of Reference Architecture   In most cases, architects spend a lot of time researching, investigating, defining, and re-arguing architectural decisions. It is like reinventing the wheel as their peers in other organizations or even the same organization have already spent a lot of time and effort defining their own architectural practices. This prevents an organization from learning from its own experiences and applying that knowledge for increased effectiveness.   
Reference architecture provides missing architectural information that can be supplied in advance to project team members to enable consistent architectural best practices.

Enterprise Reference Architecture helps an enterprise to achieve the following at the abstract level:

·       Acts as a communication channel for the enterprise
·       Helps business owners to actualize their strategies, vision, objectives, and principles
·       Evaluates the IT systems based on Reference Architecture principles
·       Reduces IT spending by increasing functionality, availability, scalability, etc.
·       Provides a real-time integration model that helps to reduce the latency of data updates
·       Is used to define a single source of information
·       Provides a clear view on how to manage information and security
·       Defines the policy around data ownership, product boundaries, etc.
·       Helps with cost optimization across project and solution portfolios by eliminating unused or duplicate investments and assets
·       Shortens implementation time and cost

Once the reference architecture is in place, the set of architectural principles, standards, reference models, and best practices ensures that the aligned investments have the greatest possible likelihood of success in both the near term and the long term (TCO).

Common pitfalls for Telecom Service Providers

Telecom Reference Architecture serves as the first step towards maturity for a telecom service provider. During the course of our assignments/experiences with telecom players, we have come across the following observations - some of these indicate a lack of maturity of the telecom service provider:

·       In markets that are growing and not so mature, telcos tend to have a significant amount of in-house or home-grown applications. In some of these markets, growth has been so rapid that IT has been unable to cope with business demands, and telcos have shown a tendency to come up with workarounds in their IT applications so as to meet business needs.
·       Even for core functions like provisioning or mediation, some telcos have tried to manage with home-grown applications.
·       Most of the applications do not have the required scalability or maintainability to sustain growth in volumes or functionality.
·       Applications face interoperability issues with other applications in the operator's landscape. Integrating a new application or network element requires considerable effort on the part of the other applications.
·       Application boundaries are not clear, and functionality that is not in the initial scope of an application gets pushed onto it. This results in the development of multiple small applications without proper boundaries.
·       Usage of legacy OSS/BSS systems and poor integration across multiple COTS products and internal systems; most of the integrations are developed on an ad-hoc, point-to-point basis.
·       Redundancy of business functions in different applications.
·       Fragmented data across the different applications and no integrated view of the strategic data.
·       Many performance issues due to the complex integration across OSS and BSS systems.

However, this is where the maturity of the telecom industry as a whole can be of help. The collaborative efforts of telcos to overcome some of these problems have resulted in bodies like the TM Forum.
They have come up with frameworks for business processes, data, applications, and technology for telecom service providers. These could be a good starting point for telcos to clean up their enterprise landscape.

Industry Trends in Telecom Reference Architecture

Telecom reference architectures are evolving rapidly because telcos are facing business and IT challenges.

"The reality is that there probably is no killer application, no silver bullet that the telcos can latch onto to carry them into a 21st Century.... Instead, there are probably hundreds - perhaps thousands - of niche applications.... And the only way to find which of these works for you is to try out lots of them, ramp up the ones that work, and discontinue the ones that fail." - Martin Creaner, President & CTO, TM Forum.

The following trends have been observed in telecom reference architecture:

·       Transformation of business structures to align with customer requirements
·       Adoption of more Internet-like technical architectures; the Web 2.0 concept is increasingly being used
·       Virtualization of the traditional operations support system (OSS)
·       Adoption of SOA to support development of IP-based services
·       Adoption of frameworks like Service Delivery Platforms (SDPs) and IP Multimedia Subsystem (IMS) to enable seamless deployment of various services over fixed and mobile networks
·       Replacement of in-house, customized, and stove-piped OSS/BSS with standards-based COTS products
·       Compliance with industry standards and frameworks like eTOM, SID, and TAM to enable seamless integration with other standards-based products

Drivers of Reference Architecture

The drivers of the Reference Architecture are Reference Architecture Goals, Principles, Enterprise Vision, and Telecom Transformation. The details are depicted in the diagram below.

Figure 1. Drivers for Reference Architecture

Today's telecom reference architectures should seamlessly integrate traditional legacy-based applications and transition to next-generation network technologies (e.g., IP multimedia subsystems).
This has resulted in new requirements for flexible, real-time billing and OSS/BSS systems, with implications for the service provider's organizational requirements and structure.

Telecom reference architectures are today expected to:

·       Integrate voice, messaging, email and other VAS over fixed and mobile networks and back-end systems
·       Be able to provision multiple services and service bundles
·       Deliver converged voice, video and data services
·       Leverage the existing network infrastructure
·       Provide real-time, flexible billing and charging systems to handle complex promotions, discounts, and settlements with multiple parties
·       Support charging of advanced data services such as VoIP, On-Demand Services (e.g. Video), IMS/SIP Services, Mobile Money, Content Services and IPTV
·       Help in faster deployment of new services
·       Serve as an effective platform for collaboration between network, IT, and business organizations
·       Harness the potential of converging technology, networks, devices and content to develop multimedia services and solutions of ever-increasing sophistication on a single Internet Protocol (IP)
·       Ensure better service delivery and zero revenue leakage through real-time balance and credit management
·       Lower operating costs to drive profitability

Enterprise Reference Architecture

The Enterprise Reference Architecture (RA) fills the gap between the concepts and vocabulary defined by the reference model and the implementation. Reference architecture provides detailed architectural information in a common format such that solutions can be repeatedly designed and deployed in a consistent, high-quality, supportable fashion. This paper attempts to describe the Reference Architecture for Telecom Application Usage and how to achieve the Enterprise Level Reference Architecture using SOA:

·       Telecom Reference Architecture
·       Enterprise SOA based Reference Architecture

Telecom Reference Architecture

TeleManagement Forum's New Generation Operations Systems and Software (NGOSS) is an architectural framework for organizing, integrating, and implementing telecom systems. NGOSS is a component-based framework consisting of the following elements:

·       The enhanced Telecom Operations Map (eTOM) is a business process framework.
·       The Shared Information/Data (SID) model provides a comprehensive information framework that may be specialized for the needs of a particular organization.
·       The Telecom Application Map (TAM) is an application framework to depict the functional footprint of applications, relative to the horizontal processes within eTOM.
·       The Technology Neutral Architecture (TNA) is an integrated framework that is sustainable through technology changes.

NGOSS Architecture Standards are:

·       Centralized data
·       Loosely coupled distributed systems
·       Application components/re-use
·       A technology-neutral system framework with technology-specific implementations
·       Interoperability with service provider data/processes
·       More re-use of business components across multiple business scenarios
·       Workflow automation

The traditional operator systems architecture consists of four layers:

·       Business Support System (BSS) layer, with focus toward customers and business partners. Manages order, subscriber, pricing, rating, and billing information.
·       Operations Support System (OSS) layer, built around product, service, and resource inventories.
·       Networks layer - consists of network elements and 3rd party systems.
·       Integration Layer - to maximize application communication and overall solution flexibility.

Reference architecture for telecom enterprises is depicted below.

Figure 2. Telecom Reference Architecture

The major building blocks of any Telecom Service Provider architecture are as follows:

1. Customer Relationship Management

CRM encompasses the end-to-end lifecycle of the customer: customer initiation/acquisition, sales, ordering, and service activation, customer care and support, proactive campaigns, cross sell/up sell, and retention/loyalty.

CRM also includes the collection of customer information and its application to personalize, customize, and integrate delivery of service to a customer, as well as to identify opportunities for increasing the value of the customer to the enterprise.

The key functionalities related to Customer Relationship Management are:

·       Manage the end-to-end lifecycle of a customer request for products.
·       Create and manage customer profiles.
·       Manage all interactions with customers - inquiries, requests, and responses.
·       Provide updates to Billing and other southbound systems on customer/account related changes such as customer/account creation, deletion, modification, bill requests, final bill, duplicate bills, and credit limits, through Middleware.
·       Work with the Order Management System and the Product and Service Management components within CRM.
·       Manage customer preferences - involve all the touch points and channels to the customer, including contact center, retail stores, dealers, self service, and field service, as well as via any media (phone, face to face, web, mobile device, chat, email, SMS, mail, the customer's bill, etc.).
·       Support a single interface for customer contact details, preferences, account details, offers, customer premise equipment, bill details, bill cycle details, and customer interactions.

CRM applications interact with customers through customer touch points like portals, point-of-sale terminals, interactive voice response systems, etc. The requests by customers are sent via fulfillment/provisioning to the billing system for order processing.

2.
Billing and Revenue Management   Billing and Revenue Management handles the collection of appropriate usage records and production of timely and accurate bills – for providing pre-bill usage information and billing to customers; for processing their payments; and for performing payment collections. In addition, it handles customer inquiries about bills, provides billing inquiry status, and is responsible for resolving billing problems to the customer's satisfaction in a timely manner. This process grouping also supports prepayment for services.   The key functionalities provided by these applications are   ·       To ensure that enterprise revenue is billed and invoices delivered appropriately to customers. ·       To manage customers’ billing accounts, process their payments, perform payment collections, and monitor the status of the account balance. ·       To ensure the timely and effective fulfillment of all customer bill inquiries and complaints. ·       Collect the usage records from mediation and ensure appropriate rating and discounting of all usage and pricing. ·       Support revenue sharing; split charging where usage is guided to an account different from the service consumer. ·       Support prepaid and post-paid rating. ·       Send notification on approach / exceeding the usage thresholds as enforced by the subscribed offer, and / or as setup by the customer. ·       Support prepaid, post paid, and hybrid (where some services are prepaid and the rest of the services post paid) customers and conversion from post paid to prepaid, and vice versa. ·       Support different billing function requirements like charge prorating, promotion, discount, adjustment, waiver, write-off, account receivable, GL Interface, late payment fee, credit control, dunning, account or service suspension, re-activation, expiry, termination, contract violation penalty, etc. ·       Initiate direct debit to collect payment against an invoice outstanding. ·       Send notification to Middleware on different events; for example, payment receipt, pre-suspension, threshold exceed, etc.   Billing systems typically get usage data from mediation systems for rating and billing. They get provisioning requests from order management systems and inquiries from CRM systems. Convergent and real-time billing systems can directly get usage details from network elements.   3. Mediation   Mediation systems transform/translate the Raw or Native Usage Data Records into a general format that is acceptable to billing for their rating purposes.   The following lists the high-level roles and responsibilities executed by the Mediation system in the end-to-end solution.   ·       Collect Usage Data Records from different data sources – like network elements, routers, servers – via different protocol and interfaces. ·       Process Usage Data Records – Mediation will process Usage Data Records as per the source format. ·       Validate Usage Data Records from each source. ·       Segregates Usage Data Records coming from each source to multiple, based on the segregation requirement of end Application. ·       Aggregates Usage Data Records based on the aggregation rule if any from different sources. ·       Consolidates multiple Usage Data Records from each source. ·       Delivers formatted Usage Data Records to different end application like Billing, Interconnect, Fraud Management, etc. 
·       Generates audit trail for incoming Usage Data Records and keeps track of all the Usage Data Records at various stages of mediation process. ·       Checks duplicate Usage Data Records across files for a given time window.   4. Fulfillment   This area is responsible for providing customers with their requested products in a timely and correct manner. It translates the customer's business or personal need into a solution that can be delivered using the specific products in the enterprise's portfolio. This process informs the customers of the status of their purchase order, and ensures completion on time, as well as ensuring a delighted customer. These processes are responsible for accepting and issuing orders. They deal with pre-order feasibility determination, credit authorization, order issuance, order status and tracking, customer update on customer order activities, and customer notification on order completion. Order management and provisioning applications fall into this category.   The key functionalities provided by these applications are   ·       Issuing new customer orders, modifying open customer orders, or canceling open customer orders; ·       Verifying whether specific non-standard offerings sought by customers are feasible and supportable; ·       Checking the credit worthiness of customers as part of the customer order process; ·       Testing the completed offering to ensure it is working correctly; ·       Updating of the Customer Inventory Database to reflect that the specific product offering has been allocated, modified, or cancelled; ·       Assigning and tracking customer provisioning activities; ·       Managing customer provisioning jeopardy conditions; and ·       Reporting progress on customer orders and other processes to customer.   These applications typically get orders from CRM systems. They interact with network elements and billing systems for fulfillment of orders.   5. Enterprise Management   This process area includes those processes that manage enterprise-wide activities and needs, or have application within the enterprise as a whole. They encompass all business management processes that   ·       Are necessary to support the whole of the enterprise, including processes for financial management, legal management, regulatory management, process, cost, and quality management, etc.;   ·       Are responsible for setting corporate policies, strategies, and directions, and for providing guidelines and targets for the whole of the business, including strategy development and planning for areas, such as Enterprise Architecture, that are integral to the direction and development of the business;   ·       Occur throughout the enterprise, including processes for project management, performance assessments, cost assessments, etc.     (i) Enterprise Risk Management:   Enterprise Risk Management focuses on assuring that risks and threats to the enterprise value and/or reputation are identified, and appropriate controls are in place to minimize or eliminate the identified risks. The identified risks may be physical or logical/virtual. Successful risk management ensures that the enterprise can support its mission critical operations, processes, applications, and communications in the face of serious incidents such as security threats/violations and fraud attempts. 
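Both of the risk areas described next - revenue assurance and fraud management - ultimately come down to scanning usage and billing records against rules and thresholds. As a rough illustration only (the record layout and the limits below are made up, not part of the architecture described here), a minimal Python sketch of the kind of per-subscriber usage-threshold check such a function relies on:

from collections import defaultdict

def detect_high_usage(usage_records, max_minutes=600, max_calls=200):
    """Flag subscribers whose aggregated usage exceeds hypothetical thresholds."""
    minutes = defaultdict(float)
    calls = defaultdict(int)
    for rec in usage_records:                      # rec: {"msisdn": ..., "duration_sec": ...}
        minutes[rec["msisdn"]] += rec["duration_sec"] / 60.0
        calls[rec["msisdn"]] += 1
    alerts = []
    for msisdn in minutes:
        if minutes[msisdn] > max_minutes or calls[msisdn] > max_calls:
            alerts.append({"msisdn": msisdn,
                           "minutes": round(minutes[msisdn], 1),
                           "calls": calls[msisdn]})
    return alerts

records = [{"msisdn": "491700000001", "duration_sec": 39600},
           {"msisdn": "491700000002", "duration_sec": 120}]
print(detect_high_usage(records))   # only the first subscriber (660 minutes) is flagged

A production system would of course derive thresholds per subscriber, correlate across sources, and raise alarms through an alarm management system, as described below.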
Two key areas covered in Risk Management by telecom operators are:   ·       Revenue Assurance: Revenue assurance system will be responsible for identifying revenue loss scenarios across components/systems, and will help in rectifying the problems. The following lists the high-level roles and responsibilities executed by the Revenue Assurance system in the end-to-end solution. o   Identify all usage information dropped when networks are being upgraded. o   Interconnect bill verification. o   Identify where services are routinely provisioned but never billed. o   Identify poor sales policies that are intensifying collections problems. o   Find leakage where usage is sent to error bucket and never billed for. o   Find leakage where field service, CRM, and network build-out are not optimized.   ·       Fraud Management: Involves collecting data from different systems to identify abnormalities in traffic patterns, usage patterns, and subscription patterns to report suspicious activity that might suggest fraudulent usage of resources, resulting in revenue losses to the operator.   The key roles and responsibilities of the system component are as follows:   o   Fraud management system will capture and monitor high usage (over a certain threshold) in terms of duration, value, and number of calls for each subscriber. The threshold for each subscriber is decided by the system and fixed automatically. o   Fraud management will be able to detect the unauthorized access to services for certain subscribers. These subscribers may have been provided unauthorized services by employees. The component will raise the alert to the operator the very first time of such illegal calls or calls which are not billed. o   The solution will be to have an alarm management system that will deliver alarms to the operator/provider whenever it detects a fraud, thus minimizing fraud by catching it the first time it occurs. o   The Fraud Management system will be capable of interfacing with switches, mediation systems, and billing systems   (ii) Knowledge Management   This process focuses on knowledge management, technology research within the enterprise, and the evaluation of potential technology acquisitions.   Key responsibilities of knowledge base management are to   ·       Maintain knowledge base – Creation and updating of knowledge base on ongoing basis. ·       Search knowledge base – Search of knowledge base on keywords or category browse ·       Maintain metadata – Management of metadata on knowledge base to ensure effective management and search. ·       Run report generator. ·       Provide content – Add content to the knowledge base, e.g., user guides, operational manual, etc.   (iii) Document Management   It focuses on maintaining a repository of all electronic documents or images of paper documents relevant to the enterprise using a system.   (iv) Data Management   It manages data as a valuable resource for any enterprise. For telecom enterprises, the typical areas covered are Master Data Management, Data Warehousing, and Business Intelligence. It is also responsible for data governance, security, quality, and database management.   Key responsibilities of Data Management are   ·       Using ETL, extract the data from CRM, Billing, web content, ERP, campaign management, financial, network operations, asset management info, customer contact data, customer measures, benchmarks, process data, e.g., process inputs, outputs, and measures, into Enterprise Data Warehouse. 
·       Management of data traceability with source, data-related business rules/decisions, data quality, data cleansing, data reconciliation, and competitor data - storage for all the enterprise data (customer profiles, products, offers, revenues, etc.)
·       Get online updates through night-time replication or a physical backup process at regular frequency.
·       Provide data access to business intelligence and other systems for their analysis, report generation, and use.

(v) Business Intelligence

It uses the Enterprise Data to provide the various analyses and reports that contain prospects and analytics for customer retention, acquisition of new customers due to the offers, and SLAs. It will generate the right, optimized plans - bolt-ons - for the customers.

The following lists the high-level roles and responsibilities executed by the Business Intelligence system at the Enterprise Level:

·       It will do pattern analysis and report problems.
·       It will do data analysis - statistical analysis, data profiling, affinity analysis of data, customer-segment-wise usage patterns on offers, products, and services, and revenue generation against services and customer segments.
·       It will do performance (business, system, and forecast) analysis, churn propensity, response time, and SLA analysis.
·       It will support online and offline analysis, and report drill-down capability.
·       It will collect, store, and report various SLA data.
·       It will provide the necessary intelligence for marketing and working on campaigns, etc., with cost-benefit analysis and predictions.
·       It will advise on customer promotions with additional services, based on the loyalty and credit history of the customer.
·       It will interface with the Enterprise Data Management system for the data needed to run reports and analysis tasks, and it will interface with the campaign schedules, based on historical success evidence.

(vi) Stakeholder and External Relations Management

It manages the enterprise's relationship with stakeholders and outside entities. Stakeholders include shareholders, employee organizations, etc. Outside entities include regulators, the local community, and unions. Some of the processes within this grouping are Shareholder Relations, External Affairs, Labor Relations, and Public Relations.

(vii) Enterprise Resource Planning

It is used to manage internal and external resources, including tangible assets, financial resources, materials, and human resources. Its purpose is to facilitate the flow of information between all business functions inside the boundaries of the enterprise and manage the connections to outside stakeholders. ERP systems consolidate all business operations into a uniform, enterprise-wide system environment.

The key roles and responsibilities for the Enterprise System are given below:

·       It will handle responsibilities such as core accounting, financial, and management reporting.
·       It will interface with CRM for capturing customer accounts and details.
·       It will interface with billing to capture the billing revenue and other financial data.
·       It will be responsible for executing the dunning process; billing will send the required feed to ERP for execution of dunning.
·       It will interface with CRM and Billing through batch interfaces.

Enterprise management systems are like horizontals in the enterprise and typically interact with all major telecom systems.
For example, an ERP system interacts with CRM, Fulfillment, and Billing systems for different kinds of data exchanges.

6. External Interfaces/Touch Points

The typical external parties are customers, suppliers/partners, employees, shareholders, and other stakeholders. External interactions from/to a Service Provider to other parties can be achieved by a variety of mechanisms, including:

·       Exchange of emails or faxes
·       Call Centers
·       Web Portals
·       Business-to-Business (B2B) automated transactions

These applications provide an Internet-technology-driven interface to external parties to undertake a variety of business functions directly for themselves. They can provide fully or partially automated service to external parties through various touch points.

Typical characteristics of these touch points are:

·       Pre-integrated self-service system, including a stand-alone web framework or an integration front end with a portal engine
·       Self-service layer exposing atomic web services/APIs for reuse by multiple systems across the architectural environment
·       Portlet-driven connectivity exposing data and services interoperability through a portal engine or web application

These touch points mostly interact with the CRM systems for requests, inquiries, and responses.

7. Middleware

The Middleware component is primarily responsible for integrating the different system components under a common platform. It should provide a standards-based platform for building Service Oriented Architecture and composite applications. The following lists the high-level roles and responsibilities executed by the Middleware component in the end-to-end solution:

·       Act as an integration framework, covering both inbound and outbound interfaces
·       Provide a web service framework with a service registry
·       Support an SOA framework with an SOA service registry
·       Handle data transformation, translation, and mapping of data points on each interface from/to Middleware and the other components
·       Receive data from the caller/activator and/or forward the data to the recipient system in XML format
·       Use standard XML for data exchange
·       Provide the response back to the service/call initiator
·       Provide tracking until the response completes
·       Keep/store transitional data for each call/transaction
·       Interface through Middleware to get any information that is possible and allowed from the existing systems to enterprise systems, e.g., customer profile and customer history
·       Provide the data in a common unified format to the SOA calls across systems, and follow the Enterprise Architecture directive
·       Provide an audit trail for all transactions handled by the component

8. Network Elements

The term Network Element means a facility or equipment used in the provision of a telecommunications service. The term also includes features, functions, and capabilities that are provided by means of such facility or equipment, including subscriber numbers, databases, signaling systems, and information sufficient for billing and collection or used in the transmission, routing, or other provision of a telecommunications service.

Typical network elements in a GSM network are the Home Location Register (HLR), Intelligent Network (IN), Mobile Switching Center (MSC), SMS Center (SMSC), and network elements for other value added services like Push-to-talk (PTT), Ring Back Tone (RBT), etc.
Network elements are invoked when subscribers use their telecom devices for any kind of usage. These elements generate usage data and pass it on to downstream systems like mediation and billing for rating and billing. They also integrate with provisioning systems for order/service fulfillment.

9. 3rd Party Applications

3rd party systems are applications like content providers, payment gateways, point-of-sale terminals, and databases/applications maintained by the Government. Depending on applicability and the type of functionality provided by a 3rd party application, it is integrated with the relevant telecom systems like CRM, provisioning, and billing.

10. Service Delivery Platform

A service delivery platform (SDP) provides the architecture for the rapid deployment, provisioning, execution, management, and billing of value added telecom services. SDPs are based on the concepts of SOA and layered architecture. They support the delivery of voice, data services, and content in a network- and device-independent fashion. They allow application developers to aggregate network capabilities, services, and sources of content. SDPs typically contain layers for web services exposure, service application development, and network abstraction.

SOA Reference Architecture

The SOA concept is based on the principle of developing reusable business services and building applications by composing those services, instead of building monolithic applications in silos. It is about bridging the gap between business and IT through a set of business-aligned IT services, using a set of design principles, patterns, and techniques.

In an SOA, resources are made available to participants in a value net, enterprise, or line of business (typically spanning multiple applications within an enterprise or across multiple enterprises). It consists of a set of business-aligned IT services that collectively fulfill an organization's business processes and goals. We can choreograph these services into composite applications and invoke them through standard protocols. SOA, apart from agility and reusability, enables:

·       The business to specify processes as orchestrations of reusable services
·       Technology-agnostic business design, with technology hidden behind the service interface
·       A contract-like interaction between business and IT, based on service SLAs
·       Accountability and governance, better aligned to business services
·       Untangling of application interconnections by allowing access only through service interfaces, reducing the daunting side effects of change
·       Reduced pressure to replace legacy applications, and extended lifetime for them, through encapsulation in services
·       A Cloud Computing paradigm, using web services technologies, that makes service outsourcing possible on an on-demand, utility-like, pay-per-usage basis

The following section presents the logical view of the Reference Architecture for the Telecom Solution. New custom-built applications need to align with this logical architecture in the long run to achieve EA benefits. Packaged implementation applications, such as ERP and billing applications, need to expose their functions as service providers (which other applications consume) and to interact with other applications as service consumers. COTS applications need to expose services through wrappers such as adapters, to utilize existing resources and at the same time achieve the Enterprise Architecture goals and objectives.
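As an illustration of the wrapper/adapter idea just described, here is a minimal sketch in Python (the legacy billing client, its method names, and the message shapes are hypothetical, not taken from any specific product) of a thin adapter that exposes a packaged application's function as a neutral, contract-style service operation:

class LegacyBillingClient:
    """Stand-in for a packaged/COTS billing API that the adapter wraps."""
    def fetch_invoice(self, account_no):
        return {"account": account_no, "amount_due": 42.50, "currency": "EUR"}

class InvoiceServiceAdapter:
    """Exposes the legacy call as a service operation with a stable contract."""
    def __init__(self, backend):
        self.backend = backend

    def get_invoice(self, request):
        # Map the service request onto the legacy API and normalize the reply,
        # so consumers never see the packaged application's native interface.
        invoice = self.backend.fetch_invoice(request["accountNumber"])
        return {"status": "OK",
                "invoice": {"accountNumber": invoice["account"],
                            "amountDue": invoice["amount_due"],
                            "currency": invoice["currency"]}}

service = InvoiceServiceAdapter(LegacyBillingClient())
print(service.get_invoice({"accountNumber": "ACC-1001"}))

The point of the adapter is that consumers depend only on the service contract; if the underlying COTS product is replaced, only the adapter changes, which is exactly the loose coupling the layered architecture below is after.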
The following are the various layers for enterprise-level deployment of SOA. The diagram captures the abstract view of the Enterprise SOA layers and the important components of each layer. Layered architecture means decomposition of services such that most interactions occur between adjacent layers; however, there is no strict rule that top layers must not directly communicate with bottom layers. The diagram below represents the important logical pieces that would result from an overall SOA transformation.

Figure 3. Enterprise SOA Reference Architecture

1. Operational System Layer: This layer consists of all packaged applications like CRM and ERP, custom-built applications, COTS-based applications like Billing, Revenue Management and Fulfillment, and the enterprise databases that are essential and contribute directly or indirectly to the Enterprise OSS/BSS Transformation.

ERP holds the data for Asset Lifecycle Management, Supply Chain, Advanced Procurement, Human Capital Management, etc. CRM holds the data related to Order, Sales and Marketing, Customer Care, Partner Relationship Management, Loyalty, etc. Content Management handles enterprise search and query.

The Billing application consists of the following components:

·       Collections Management, Customer Billing Management, Invoices, Real-Time Rating, Discounting, and Applying of Charges
·       Enterprise databases hold both the application and service data, whether structured or unstructured

MDM - master data mainly consists of Customer, Order, Product, and Service data.

2. Enterprise Component Layer: This layer consists of the Application Services and Common Services that are responsible for realizing the functionality and maintaining the QoS of the exposed services. This layer uses container-based technologies such as application servers to implement the components, workload management, high availability, and load balancing.

Application Services: This service layer enables application, technology, and database abstraction so that the complex accessing logic is hidden from the other service layers. This is a basic service layer, which exposes application functionalities and data as reusable services.
The three types of Application Access Services are:

·       Application Access Service: This service layer exposes application-level functionalities as reusable services for BSS-to-BSS and BSS-to-OSS integration. This layer is enabled using disparate technologies such as web services, integration servers, adaptors, etc.
·       Data Access Service: This service layer exposes application data services as reusable reference data services. This is done via direct interaction with application data, and it provides federated query.
·       Network Access Service: This service layer exposes the provisioning layer as a reusable service for OSS-to-OSS integration. This integration service emphasizes the need for high performance, stateless process flows, and distributed design.

Common Services encompass the management of structured, semi-structured, and unstructured data, such as information services, portal services, interaction services, infrastructure services, security services, etc.

3. Integration Layer: This consists of service infrastructure components like the service bus, a service gateway for partner integration, a service registry, a service repository, and a BPEL processor. The service bus carries the service invocation payloads/messages between consumers and providers. The other important functions expected from it are itinerary-based routing, distributed caching of routing information, transformations, and all qualities of service for messaging, like reliability, scalability, and availability. The service registry holds all contracts (WSDL) of services, and it helps developers to locate or discover services during design time or runtime. The BPEL processor is useful in orchestrating the services to compose a complex business scenario or process. Workflow and business rules management are also required, to support manual triggering of certain activities within a business process based on the rules set up and on the state machine information. The application, data, and service mediation layers typically form the overall composite application development framework, or SOA Framework.

4. Business Process Layer: These are typically the intermediate services layer and represent Shared Business Process Services. At the Enterprise Level, these services come from the Customer Management, Order Management, Billing, Finance, and Asset Management application domains.

5. Access Layer: This layer consists of portals for the Enterprise and provides a single view of Enterprise information management and dashboard services.

6. Channel Layer: This consists of the various devices and applications that form part of the extended enterprise, and the browsers through which users access the applications.

7. Client Layer: This designates the different types of users accessing the enterprise applications. The type of user is typically an important factor in determining the level of access to applications.

8. Vertical pieces like management, monitoring, security, and development cut across all horizontal layers. Management and monitoring involve all aspects of SOA - services, SLAs, and other QoS lifecycle processes for both applications and services - surrounding SOA governance.

9. EA Governance, Reference Architecture, Roadmap, Principles, and Best Practices: EA Governance is important in terms of providing the overall direction to SOA implementation within the enterprise.
This involves board-level involvement, in addition to business and IT executives. At a high level, it involves managing the implementation of SOA projects, managing the SOA infrastructure, and controlling the entire effort through fine-tuned IT processes in accordance with COBIT (Control Objectives for Information Technology). Devising tools and techniques to promote a reuse culture and the SOA way of doing things requires competency centers to be established, in addition to training the workforce to take up new roles suited to the SOA journey.

Conclusions

Reference Architectures can serve as the basis for disparate architecture efforts throughout the organization, even if they use different tools and technologies. Reference architectures provide best practices and approaches in a way that is independent of how any particular vendor deals with technology and standards. They model the abstract architectural elements for an enterprise independent of the technologies, protocols, and products that are used to implement an SOA. Telecom enterprises today are facing significant business and technology challenges due to growing competition, a multitude of services, and convergence. Adopting architectural best practices can go a long way towards meeting these challenges. The use of SOA-based architecture for communication with each of the external systems like Billing, CRM, etc., in the OSS/BSS system makes the architecture very loosely coupled, with greater flexibility; any change in the external systems is absorbed at the Integration Layer without affecting the rest of the ecosystem. The use of a Business Process Management (BPM) tool makes the management and maintenance of the business processes easy, with better performance in terms of lead time, quality, and cost. Since the architecture is based on standards, it will lower the cost of deploying and managing OSS/BSS applications over their lifecycles.

    Read the article

  • l2tp / ipsec debian Openswan U2.6.38 does not connect

    - by locojay
    I am trying to get ipsec/l2tp running on a Debian server with an iPhone as a client, but always get:

Dec 2 21:00:04 vpn pluto[22711]: packet from <clientip>:43598: received Vendor ID payload [RFC 3947] method set to=115
Dec 2 21:00:04 vpn pluto[22711]: packet from <clientip>:43598: received Vendor ID payload [draft-ietf-ipsec-nat-t-ike] meth=114, but already using method 115
Dec 2 21:00:04 vpn pluto[22711]: packet from <clientip>:43598: received Vendor ID payload [draft-ietf-ipsec-nat-t-ike-08] meth=113, but already using method 115
Dec 2 21:00:04 vpn pluto[22711]: packet from <clientip>:43598: received Vendor ID payload [draft-ietf-ipsec-nat-t-ike-07] meth=112, but already using method 115
Dec 2 21:00:04 vpn pluto[22711]: packet from <clientip>:43598: received Vendor ID payload [draft-ietf-ipsec-nat-t-ike-06] meth=111, but already using method 115
Dec 2 21:00:04 vpn pluto[22711]: packet from <clientip>:43598: received Vendor ID payload [draft-ietf-ipsec-nat-t-ike-05] meth=110, but already using method 115
Dec 2 21:00:04 vpn pluto[22711]: packet from <clientip>:43598: received Vendor ID payload [draft-ietf-ipsec-nat-t-ike-04] meth=109, but already using method 115
Dec 2 21:00:04 vpn pluto[22711]: packet from <clientip>:43598: received Vendor ID payload [draft-ietf-ipsec-nat-t-ike-03] meth=108, but already using method 115
Dec 2 21:00:04 vpn pluto[22711]: packet from <clientip>:43598: received Vendor ID payload [draft-ietf-ipsec-nat-t-ike-02] meth=107, but already using method 115
Dec 2 21:00:04 vpn pluto[22711]: packet from <clientip>:43598: received Vendor ID payload [draft-ietf-ipsec-nat-t-ike-02_n] meth=106, but already using method 115
Dec 2 21:00:04 vpn pluto[22711]: packet from <clientip>:43598: ignoring Vendor ID payload [FRAGMENTATION 80000000]
Dec 2 21:00:04 vpn pluto[22711]: packet from <clientip>:43598: received Vendor ID payload [Dead Peer Detection]
Dec 2 21:00:04 vpn pluto[22711]: "L2TP-PSK-noNAT"[4] <clientip> #20: responding to Main Mode from unknown peer <clientip>
Dec 2 21:00:04 vpn pluto[22711]: "L2TP-PSK-noNAT"[4] <clientip> #20: transition from state STATE_MAIN_R0 to state STATE_MAIN_R1
Dec 2 21:00:04 vpn pluto[22711]: "L2TP-PSK-noNAT"[4] <clientip> #20: STATE_MAIN_R1: sent MR1, expecting MI2
Dec 2 21:00:04 vpn pluto[22711]: "L2TP-PSK-noNAT"[4] <clientip> #20: NAT-Traversal: Result using draft-ietf-ipsec-nat-t-ike (MacOS X): both are NATed
Dec 2 21:00:04 vpn pluto[22711]: "L2TP-PSK-noNAT"[4] <clientip> #20: transition from state STATE_MAIN_R1 to state STATE_MAIN_R2
Dec 2 21:00:04 vpn pluto[22711]: "L2TP-PSK-noNAT"[4] <clientip> #20: STATE_MAIN_R2: sent MR2, expecting MI3
Dec 2 21:00:04 vpn pluto[22711]: "L2TP-PSK-noNAT"[4] <clientip> #20: ignoring informational payload, type IPSEC_INITIAL_CONTACT msgid=00000000
Dec 2 21:00:04 vpn pluto[22711]: "L2TP-PSK-noNAT"[4] <clientip> #20: Main mode peer ID is ID_IPV4_ADDR: '10.2.210.176'
Dec 2 21:00:04 vpn pluto[22711]: "L2TP-PSK-noNAT"[4] <clientip> #20: switched from "L2TP-PSK-noNAT" to "L2TP-PSK-noNAT"
Dec 2 21:00:04 vpn pluto[22711]: "L2TP-PSK-noNAT"[5] <clientip> #20: deleting connection "L2TP-PSK-noNAT" instance with peer <clientip> {isakmp=#0/ipsec=#0}
Dec 2 21:00:04 vpn pluto[22711]: "L2TP-PSK-noNAT"[5] <clientip> #20: transition from state STATE_MAIN_R2 to state STATE_MAIN_R3
Dec 2 21:00:04 vpn pluto[22711]: "L2TP-PSK-noNAT"[5] <clientip> #20: new NAT mapping for #20, was <clientip>:43598, now <clientip>:49826
Dec 2 21:00:04 vpn pluto[22711]: "L2TP-PSK-noNAT"[5] <clientip> #20: STATE_MAIN_R3: sent MR3, ISAKMP SA established {auth=OAKLEY_PRESHARED_KEY cipher=aes_256 prf=oakley_sha group=modp1024}
Dec 2 21:00:04 vpn pluto[22711]: "L2TP-PSK-noNAT"[5] <clientip> #20: Dead Peer Detection (RFC 3706): enabled
Dec 2 21:00:05 vpn pluto[22711]: "L2TP-PSK-noNAT"[5] <clientip> #20: the peer proposed: <public ip>/32:17/1701 -> 10.2.210.176/32:17/0
Dec 2 21:00:05 vpn pluto[22711]: "L2TP-PSK-noNAT"[5] <clientip> #20: NAT-Traversal: received 2 NAT-OA. using first, ignoring others
Dec 2 21:00:05 vpn pluto[22711]: "L2TP-PSK-noNAT"[5] <clientip> #21: responding to Quick Mode proposal {msgid:311d3282}
Dec 2 21:00:05 vpn pluto[22711]: "L2TP-PSK-noNAT"[5] <clientip> #21: us: 171.138.2.13<171.138.2.13>:17/1701
Dec 2 21:00:05 vpn pluto[22711]: "L2TP-PSK-noNAT"[5] <clientip> #21: them: <clientip>[10.2.210.176]:17/61719
Dec 2 21:00:05 vpn pluto[22711]: "L2TP-PSK-noNAT"[5] <clientip> #21: transition from state STATE_QUICK_R0 to state STATE_QUICK_R1
Dec 2 21:00:05 vpn pluto[22711]: "L2TP-PSK-noNAT"[5] <clientip> #21: STATE_QUICK_R1: sent QR1, inbound IPsec SA installed, expecting QI2
Dec 2 21:00:05 vpn pluto[22711]: "L2TP-PSK-noNAT"[5] <clientip> #21: Dead Peer Detection (RFC 3706): enabled
Dec 2 21:00:05 vpn pluto[22711]: "L2TP-PSK-noNAT"[5] <clientip> #21: transition from state STATE_QUICK_R1 to state STATE_QUICK_R2
Dec 2 21:00:05 vpn pluto[22711]: "L2TP-PSK-noNAT"[5] <clientip> #21: STATE_QUICK_R2: IPsec SA established transport mode {ESP=>0x05e23c9a <0x216077a9 xfrm=AES_256-HMAC_SHA1 NATOA=10.2.210.176 NATD=<clientip>:49826 DPD=enabled}
Dec 2 21:00:26 vpn pluto[22711]: "L2TP-PSK-noNAT"[5] <clientip> #20: received Delete SA(0x05e23c9a) payload: deleting IPSEC State #21
Dec 2 21:00:26 vpn pluto[22711]: "L2TP-PSK-noNAT"[5] <clientip> #20: received and ignored informational message
Dec 2 21:00:27 vpn pluto[22711]: "L2TP-PSK-noNAT"[5] <clientip> #20: received Delete SA payload: deleting ISAKMP State #20
Dec 2 21:00:27 vpn pluto[22711]: "L2TP-PSK-noNAT"[5] <clientip>: deleting connection "L2TP-PSK-noNAT" instance with peer <clientip> {isakmp=#0/ipsec=#0}
Dec 2 21:00:27 vpn pluto[22711]: packet from <clientip>:49826: received and ignored informational message
Dec 2 21:00:27 vpn pluto[22711]: ERROR: asynchronous network error report on eth0 (sport=4500) for message to <clientip> port 49826, complainant <clientip>: Connection refused [errno 111, origin ICMP type 3 code 3 (not authenticated)]

My setup looks like this:

verizon fios actiontec -- DMZ -- ddwrt router -- debian xen instance

actiontec: 192.168.1.1
ddwrt: 171.138.2.1
debian xen server: 171.138.2.13

I forwarded udp 500, 4500, and 1701 on the ddwrt to the debian xen instance, and vpn passthrough is enabled.

/etc/ipsec.conf:

config setup
    dumpdir=/var/run/pluto/
    nat_traversal=yes
    virtual_private=%v4:10.0.0.0/8,%v4:192.168.0.0/16,%v4:172.16.0.0/12,%v4:25.0.0.0/8,%v6:fd00::/8,%v6:fe80::/10,%v4:!171.138.2.0/24,%v4:!192.168.1.0/24
    protostack=netkey

# Add connections here
conn L2TP-PSK-NAT
    rightsubnet=vhost:%priv
    also=L2TP-PSK-noNAT

conn L2TP-PSK-noNAT
    authby=secret
    pfs=no
    auto=add
    keyingtries=3
    # we cannot rekey for %any, let client rekey
    rekey=no
    # Apple iOS doesn't send delete notify so we need dead peer detection
    # to detect vanishing clients
    dpddelay=30
    dpdtimeout=120
    dpdaction=clear
    # Set ikelifetime and keylife to same defaults windows has
    ikelifetime=8h
    keylife=1h
    # l2tp-over-ipsec is transport mode
    type=transport
    #
    left=171.138.2.13
    #
    # For updated Windows 2000/XP clients,
    # to support old clients as well, use leftprotoport=17/%any
    leftprotoport=17/1701
    #
    # The remote user.
    #
    right=%any
    # Using the magic port of "%any" means "any one single port". This is
    # a work around required for Apple OSX clients that use a randomly
    # high port.
    rightprotoport=17/%any

#force all to be nat'ed. because of ios
conn passthrough-for-non-l2tp
    type=passthrough
    left=171.138.2.13
    leftnexthop=171.138.2.1
    right=0.0.0.0
    rightsubnet=0.0.0.0/0
    auto=route

/etc/xl2tp/xl2tp.conf:

[global]
ipsec saref = no
listen-addr = 171.138.2.13
;port = 1701
;debug network = yes
;debug tunnel = yes
;debug network = yes
;debug packet = yes

[lns default]
ip range = 171.138.2.231-171.138.2.239
local ip = 171.138.2.13
assign ip = yes
require chap = no
refuse pap = no
require authentication = no
;name = OpenswanVPN
ppp debug = yes
pppoptfile = /etc/ppp/options.xlt2tpd
lenght bit = yes

/etc/ppp/options.xl2tpd:

;require-mschap-v2
pcp-accept-local
ipcp-accept-local
ipcp-accept-remote
;ms-dns 171.138.2.1
ms-dns 192.168.1.1
ms-dns 8.8.8.8
name l2tpd
noccp
auth
crtscts
idle 1800
mtu 1410
mru 1410
lock
proxyarp
connect-delay 5000
debug
dump
logfd 2
logfile /var/log/xl2tpd.log

Output of ipsec verify:

Checking your system to see if IPsec got installed and started correctly:
Version check and ipsec on-path                         [OK]
Linux Openswan U2.6.38/K3.0.0-1-amd64 (netkey)
Checking for IPsec support in kernel                    [OK]
SAref kernel support                                    [N/A]
NETKEY: Testing XFRM related proc values                [OK]
                                                        [OK]
                                                        [OK]
Checking that pluto is running                          [OK]
Pluto listening for IKE on udp 500                      [OK]
Pluto listening for NAT-T on udp 4500                   [OK]
Two or more interfaces found, checking IP forwarding    [FAILED]
Checking NAT and MASQUERADEing                          [OK]
Checking for 'ip' command                               [OK]
Checking /bin/sh is not /bin/dash                       [WARNING]
Checking for 'iptables' command                         [OK]
Opportunistic Encryption Support                        [DISABLED]

The [FAILED] can be ignored, I guess, since cat /proc/sys/net/ipv4/ip_forward returns 1. Any help would be much appreciated, as I don't have any idea why this is not working.

    Read the article

< Previous Page | 541 542 543 544 545 546 547 548 549 550 551 552  | Next Page >