Search Results

Search found 15150 results on 606 pages for 'azure services'.

  • How to create a SOAP request using ASP.NET (VB) without using Visual Studio

    - by user311691
    Hi all, I urgently need your help. I am new to consuming a web service using the SOAP protocol. I have been given a demo web service URL which ends in .WSDL and NOT .asmx?WSDL. The problem is I cannot add a web reference using Visual Studio, Disco.exe, or Wsdl.exe. This web service was created on a Java platform, and for security reasons the only way to invoke it is at runtime, using the SOAP protocol, in ASP.NET (VB). I have created some code but cannot seem to send the SOAP request to the receiving web service. I would appreciate step-by-step instructions on how to send a SOAP request. Below is my code; all I am trying to do is send a SOAP request and receive a SOAP response, which I will display in my browser.
    <%@ page language="vb" %> <%@ Import Namespace="System.Data"%> <%@ Import Namespace="System.Xml"%> <%@ Import Namespace="System.Net"%> <%@ Import Namespace="System.IO"%> <%@ Import Namespace="System.Text"%> <script runat=server> Private Sub Page_Load() Dim objHTTPReq As HttpWebRequest Dim WebserviceUrl As String = "http://xx.xx.xx:8084/asy/wsdl/asy.wsdl" objHTTPReq = CType(WebRequest.Create(WebserviceUrl), HttpWebRequest) Dim soapXML As String soapXML = "<?xml version='1.0' encoding='utf-8'?>" & _ " <soap:Envelope xmlns:xsi='http://www.w3.org/2001/XMLSchema-instance'" & _ " xmlns:xsd='http://www.w3.org/2001/XMLSchema'"& _ " xmlns:soap='http://schemas.xmlsoap.org/soap/envelope/' >"& _ " <soap:Body> "& _ " <validatePaymentData xmlns='http://asybanks.webservices.asycuda.org'> " & _ " <bankCode>"& bankCode &"</bankCode> " & _ " <PaymentDataType>" & _ " <paymentType>"& payment_type &"</paymentType> " & _ " <amount>"& ass_amount &"</amount> " & _ " <ReferenceType>" & _ " <year>"& year &"</year> " & _ " <customsOfficeCode>"& station &"</customsOfficeCode> " & _ " </ReferenceType>" & _ " <accountNumber>"& zra_account &"</accountNumber> " & _ " </PaymentDataType> " & _ " </validatePaymentData> " & _ " </soap:Body> " & _ " </soap:Envelope> " objHTTPReq.Headers.Add("SOAPAction", "http://asybanks.webservices.asycuda.org") objHTTPReq.ContentType = "text/xml; charset=utf-8" objHTTPReq.ContentLength = soapXML.Length objHTTPReq.Accept = "text/xml" objHTTPReq.Method = "POST" Dim objHTTPRes As HttpWebResponse = CType(objHTTPReq.GetResponse(), HttpWebResponse) Dim dataStream As Stream = objHTTPRes.GetResponseStream() Dim reader As StreamReader = new StreamReader(dataStream) Dim responseFromServer As String = reader.ReadToEnd() OurXml.text = responseFromServer End Sub </script> <html xmlns="http://www.w3.org/1999/xhtml"> <head runat="server"> <title> XML TRANSACTION SIMULATION - N@W@ TJ </title> </head> <body> <form id="form1" runat="server"> <div> <p>ZRA test Feedback:</p> <asp:label id="OurXml" runat="server"/> </div> </form> </body> </html>
    The demo web service's WSDL looks like this:
    <?xml version="1.0" encoding="UTF-8" ?> <!-- WEB SERVICE JAVA DEMO --> <definitions targetNamespace="http://asybanks.webservices.asycuda.org" xmlns="http://schemas.xmlsoap.org/wsdl/" xmlns:apachesoap="http://xml.apache.org/xml-soap" xmlns:soap="http://schemas.xmlsoap.org/wsdl/soap/" xmlns:xs="http://www.w3.org/2001/XMLSchema" xmlns:y="http://asybanks.webservices.asycuda.org"> <types> <xs:schema elementFormDefault="qualified" targetNamespace="http://asybanks.webservices.asycuda.org" xmlns="http://www.w3.org/2001/XMLSchema"> SOME OTHER INFORMATION AT THE BOTTOM <soap:address location="http://xx.xx.xx:8084/asy/services/asy" /> </port> </service> </definitions>
    From the above excerpt of the WSDL, I am not sure which namespace to use for SOAPAction - please advise. If you could comment every stage of a SOAP request and provide a working demo, I would be most grateful, as I would be learning rather than just assuming stuff :)
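    For what it's worth, the code above sets ContentLength but never writes soapXML to the request stream, so the envelope never actually goes out. A minimal sketch of a raw SOAP POST is below, shown in C# (the same HttpWebRequest members exist in VB.NET); the URL and SOAPAction values are the question's placeholders, not verified values.

        using System;
        using System.IO;
        using System.Net;
        using System.Text;

        class SoapPostSketch
        {
            // Sends a SOAP 1.1 envelope and returns the response body.
            static string PostSoap(string url, string soapAction, string soapXml)
            {
                var request = (HttpWebRequest)WebRequest.Create(url);
                request.Method = "POST";
                request.ContentType = "text/xml; charset=utf-8";
                // Many SOAP stacks expect the SOAPAction value wrapped in quotes.
                request.Headers.Add("SOAPAction", "\"" + soapAction + "\"");

                // The step missing from the question's code: write the envelope
                // into the request body. Use byte length rather than string length
                // so multi-byte UTF-8 characters are counted correctly.
                byte[] body = Encoding.UTF8.GetBytes(soapXml);
                request.ContentLength = body.Length;
                using (Stream s = request.GetRequestStream())
                    s.Write(body, 0, body.Length);

                using (var response = (HttpWebResponse)request.GetResponse())
                using (var reader = new StreamReader(response.GetResponseStream()))
                    return reader.ReadToEnd(); // the SOAP response envelope
            }
        }

    Note also that the request should be posted to the service endpoint (the soap:address location, http://xx.xx.xx:8084/asy/services/asy in the WSDL excerpt), not to the .wsdl URL itself.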

    Read the article

  • How do I add the j2ee.jar to a Java2WSDL ant script programmatically?

    - by Marcus
    I am using IBM's Rational Application Developer. I have an Ant script that contains the Java2WSDL task. When I run it via IBM, it gives compiler errors unless I include the j2ee.jar file in the classpath via the run tool (it does not pick up the jar files in the classpath in the script). However, I need to be able to call this script programmatically, and it is giving me this error: "java.lang.NoClassDefFoundError: org.eclipse.core.runtime.CoreException". I'm not sure which jars need to be added, or where. Since a simple echo script runs, I assume that it is the j2ee.jar or another Ant jar that needs to be added. I've added it to the project's build path, but that doesn't help. (I also have ant.jar, wsanttasks.jar, all the Ant jars from the plugin, tools.jar, remoteAnt.jar, and the SWT jars - all of which are included in the build path when you run the script by itself.)
    Script:
    <?xml version="1.0" encoding="UTF-8"?> <project default="build" basedir="."> <path id="lib.path"> <fileset dir="C:\Program Files\IBM\WebSphere\AppServer\lib" includes="*.jar"/> <!-- Adding these does not help. <fileset dir="C:\Program Files\IBM\SDP70Shared\plugins\org.apache.ant_1.6.5\lib" includes="*.jar"/> <fileset dir="C:\Program Files\IBM\SDP70\jdk\lib" includes="*.jar"/> <fileset dir="C:\Program Files\IBM\SDP70\configuration\org.eclipse.osgi\bundles\1139\1\.cp\lib" includes="*.jar"/> <fileset dir="C:\Program Files\IBM\SDP70Shared\plugins" includes="*.jar"/> --> </path> <taskdef name="java2wsdl" classname="com.ibm.websphere.ant.tasks.Java2WSDL"> <classpath refid="lib.path"/> </taskdef> <target name="build"> <echo message="Beginning build"/> <javac srcdir="C:\J2W_Test\Java2Wsdl_Example" destdir="C:\J2W_Test\Java2Wsdl_Example"> <classpath refid="lib.path"/> <include name="WSExample.java"/> </javac> <echo message="Set up javac"/> <echo message="Running java2wsdl"/> <java2wsdl output="C:\J2W_Test\Java2Wsdl_Example\example\META-INF\wsdl\WSExample.wsdl" classpath="C:\J2W_Test\Java2Wsdl_Example" className= "example.WSExample" namespace="http://example" namespaceImpl="http://example" location="http://localhost:9080/example/services/WSExample" style="document" use="literal"> <mapping namespace="http://example" package="example"/> </java2wsdl> <echo message="Complete"/> </target> </project>
    Code:
    File buildFile = new File("build.xml"); Project p = new Project(); p.setUserProperty("ant.file", buildFile.getAbsolutePath()); DefaultLogger consoleLogger = new DefaultLogger(); consoleLogger.setErrorPrintStream(System.err); consoleLogger.setOutputPrintStream(System.out); consoleLogger.setMessageOutputLevel(Project.MSG_INFO); p.addBuildListener(consoleLogger); try { p.fireBuildStarted(); p.init(); ProjectHelper helper = ProjectHelper.getProjectHelper(); p.addReference("ant.projectHelper", helper); helper.parse(p, buildFile); p.executeTarget(p.getDefaultTarget()); p.fireBuildFinished(null); } catch (BuildException e) { p.fireBuildFinished(e); }
    Error:
    [java2wsdl] java.lang.NoClassDefFoundError: org.eclipse.core.runtime.CoreException [java2wsdl] at java.lang.J9VMInternals.verifyImpl(Native Method) [java2wsdl] at java.lang.J9VMInternals.verify(J9VMInternals.java:68) [java2wsdl] at java.lang.J9VMInternals.initialize(J9VMInternals.java:129) [java2wsdl] at com.ibm.ws.webservices.multiprotocol.discovery.ServiceProviderManager.getDiscoveredServiceProviders(ServiceProviderManager.java:378) [java2wsdl] at com.ibm.ws.webservices.multiprotocol.discovery.ServiceProviderManager.getAllServiceProviders(ServiceProviderManager.java:214) [java2wsdl] at com.ibm.ws.webservices.wsdl.fromJava.Emitter.initPluggableBindings(Emitter.java:2704) [java2wsdl] at com.ibm.ws.webservices.wsdl.fromJava.Emitter.<init>(Emitter.java:389) [java2wsdl] at com.ibm.ws.webservices.tools.ant.Java2WSDL.execute(Java2WSDL.java:122) [java2wsdl] at org.apache.tools.ant.UnknownElement.execute(UnknownElement.java:275) [java2wsdl] at org.apache.tools.ant.Task.perform(Task.java:364) [java2wsdl] at org.apache.tools.ant.Target.execute(Target.java:341) [java2wsdl] at org.apache.tools.ant.Target.performTasks(Target.java:369) [java2wsdl] at org.apache.tools.ant.Project.executeSortedTargets(Project.java:1216) [java2wsdl] at org.apache.tools.ant.Project.executeTarget(Project.java:1185) [java2wsdl] at att.ant.RunAnt.main(RunAnt.java:32)

    Read the article

  • Will creating a background thread in a WCF service during a call take up a thread in the ASP.NET thread pool?

    - by Nate Pinchot
    The following code is part of a WCF service. Will eventWatcherThread take up a thread in the ASP.NET thread pool, even if it is created with IsBackground = true? /// <summary> /// Provides methods to work with the PhoneSystem web services SDK. /// This is a singleton since we need to keep track of what lines (extensions) are open. /// </summary> public sealed class PhoneSystemWebServiceFactory : IDisposable { // singleton instance reference private static readonly PhoneSystemWebServiceFactory instance = new PhoneSystemWebServiceFactory(); private static readonly object l = new object(); private static volatile Hashtable monitoredExtensions = new Hashtable(); private static readonly PhoneSystemWebServiceClient webServiceClient = CreateWebServiceClient(); private static volatile bool isClientRegistered; private static volatile string clientHandle; private static readonly Thread eventWatcherThread = new Thread(EventPoller) {IsBackground = true}; #region Constructor // these constructors are hacks to make the C# compiler not mark beforefieldinit // more info: http://www.yoda.arachsys.com/csharp/singleton.html static PhoneSystemWebServiceFactory() { } PhoneSystemWebServiceFactory() { } #endregion #region Properties /// <summary> /// Gets a thread safe instance of PhoneSystemWebServiceFactory /// </summary> public static PhoneSystemWebServiceFactory Instance { get { return instance; } } #endregion #region Private methods /// <summary> /// Create and configure a PhoneSystemWebServiceClient with basic http binding and endpoint from app settings. /// </summary> /// <returns>PhoneSystemWebServiceClient</returns> private static PhoneSystemWebServiceClient CreateWebServiceClient() { string url = ConfigurationManager.AppSettings["PhoneSystemWebService_Url"]; if (string.IsNullOrEmpty(url)) { throw new ConfigurationErrorsException( "The AppSetting \"PhoneSystemWebService_Url\" could not be found. Check the application configuration and ensure that the element exists. 
Example: <appSettings><add key=\"PhoneSystemWebService_Url\" value=\"http://xyz\" /></appSettings>"); } return new PhoneSystemWebServiceClient(new BasicHttpBinding(), new EndpointAddress(url)); } #endregion #region Event poller public static void EventPoller() { while (true) { if (Thread.CurrentThread.ThreadState == ThreadState.Aborted || Thread.CurrentThread.ThreadState == ThreadState.AbortRequested || Thread.CurrentThread.ThreadState == ThreadState.Stopped || Thread.CurrentThread.ThreadState == ThreadState.StopRequested) break; // get events //webServiceClient.GetEvents(clientHandle, 30, 100); } Thread.Sleep(5000); } #endregion #region Client registration methods private static void RegisterClientIfNeeded() { if (isClientRegistered) { return; } lock (l) { // double lock check if (isClientRegistered) { return; } //clientHandle = webServiceClient.RegisterClient("PhoneSystemWebServiceFactoryInternal", null); isClientRegistered = true; } } private static void UnregisterClient() { if (!isClientRegistered) { return; } lock (l) { // double lock check if (!isClientRegistered) { return; } //webServiceClient.UnegisterClient(clientHandle); } } #endregion #region Phone extension methods public bool SubscribeToEventsForExtension(string extension) { if (monitoredExtensions.Contains(extension)) { return false; } lock (monitoredExtensions.SyncRoot) { // double lock check if (monitoredExtensions.Contains(extension)) { return false; } RegisterClientIfNeeded(); // open line so we receive events for extension LineInfo lineInfo; try { //lineInfo = webServiceClient.OpenLine(clientHandle, extension); } catch (FaultException<PhoneSystemWebSDKErrorDetail>) { // TODO: log error return false; } // add extension to list of monitored extensions //monitoredExtensions.Add(extension, lineInfo.lineID); monitoredExtensions.Add(extension, 1); // start event poller thread if not already started if (eventWatcherThread.ThreadState == ThreadState.Stopped || eventWatcherThread.ThreadState == ThreadState.Unstarted) { eventWatcherThread.Start(); } return true; } } public bool UnsubscribeFromEventsForExtension(string extension) { if (!monitoredExtensions.Contains(extension)) { return false; } lock (monitoredExtensions.SyncRoot) { if (!monitoredExtensions.Contains(extension)) { return false; } // close line try { //webServiceClient.CloseLine(clientHandle, (int) monitoredExtensions[extension]); } catch (FaultException<PhoneSystemWebSDKErrorDetail>) { // TODO: log error return false; } // remove extension from list of monitored extensions monitoredExtensions.Remove(extension); // if we are not monitoring anything else, stop the poller and unregister the client if (monitoredExtensions.Count == 0) { eventWatcherThread.Abort(); UnregisterClient(); } return true; } } public bool IsExtensionMonitored(string extension) { lock (monitoredExtensions.SyncRoot) { return monitoredExtensions.Contains(extension); } } #endregion #region Dispose public void Dispose() { lock (l) { // close any open lines var extensions = monitoredExtensions.Keys.Cast<string>().ToList(); while (extensions.Count > 0) { UnsubscribeFromEventsForExtension(extensions[0]); extensions.RemoveAt(0); } if (!isClientRegistered) { return; } // unregister web service client UnregisterClient(); } } #endregion }
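    As background for the question above: a thread constructed with new Thread(...) is a dedicated OS thread, not one drawn from the ThreadPool that ASP.NET uses to serve requests; IsBackground only controls whether the thread keeps the process alive. A standalone sketch (illustrative, not from the question) demonstrating the distinction:

        using System;
        using System.Threading;

        class ThreadOriginDemo
        {
            static void Main()
            {
                // A manually created thread: prints False, because it is not
                // taken from (and does not count against) the thread pool.
                var worker = new Thread(() =>
                    Console.WriteLine("Pool thread? " + Thread.CurrentThread.IsThreadPoolThread))
                {
                    IsBackground = true // affects process lifetime only, not pool membership
                };
                worker.Start();
                worker.Join();

                // A pool work item: prints True. ASP.NET request threads come from here.
                ThreadPool.QueueUserWorkItem(_ =>
                    Console.WriteLine("Pool thread? " + Thread.CurrentThread.IsThreadPoolThread));
                Thread.Sleep(500); // crude wait so the pool item can run in this sketch
            }
        }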

    Read the article

  • JSF 2.1 Spring 3.0 Integration

    - by danny.lesnik
    I'm trying to do a very simple Spring 3 + JSF 2.1 integration according to examples I found on the web. Here is my code.
    My HTML, submitted to the actionController.actionSubmitted() method:
    <h:form> <h:message for="textPanel" style="color:red;" /> <h:panelGrid columns="3" rows="5" id="textPanel"> //all my bean properties mapped to HTML code. </h:panelGrid> <h:commandButton value="Submit" action="#{actionController.actionSubmitted}" /> </h:form>
    Now the action controller itself:
    @ManagedBean(name="actionController") @SessionScoped public class ActionController implements Serializable{ @ManagedProperty(value="#{user}") User user; @ManagedProperty(value="#{mailService}") MailService mailService; public void setMailService(MailService mailService) { this.mailService = mailService; } public void setUser(User user) { this.user = user; } private static final long serialVersionUID = 1L; public ActionController() {} public String actionSubmitted(){ System.out.println(user.getEmail()); mailService.sendUserMail(user); return "success"; } }
    Now my Spring bean:
    public interface MailService { void sendUserMail(User user); } public class MailServiceImpl implements MailService{ @Override public void sendUserMail(User user) { System.out.println("Mail to "+user.getEmail()+" sent." ); } }
    This is my web.xml:
    <listener> <listener-class> org.springframework.web.context.ContextLoaderListener </listener-class> </listener> <listener> <listener-class> org.springframework.web.context.request.RequestContextListener </listener-class> </listener> <!-- Welcome page --> <welcome-file-list> <welcome-file>index.xhtml</welcome-file> </welcome-file-list> <!-- JSF mapping --> <servlet> <servlet-name>Faces Servlet</servlet-name> <servlet-class>javax.faces.webapp.FacesServlet</servlet-class> <load-on-startup>1</load-on-startup> </servlet>
    My applicationContext.xml:
    <beans xmlns="http://www.springframework.org/schema/beans" xmlns:xsi="http://www.w3.org/2001/XMLSchema-instance" xsi:schemaLocation="http://www.springframework.org/schema/beans http://www.springframework.org/schema/beans/spring-beans-3.0.xsd"> <bean id="mailService" class="com.vanilla.jsf.services.MailServiceImpl"> </bean> </beans>
    My faces-config.xml is the following:
    <application> <el-resolver> org.springframework.web.jsf.el.SpringBeanFacesELResolver </el-resolver> <message-bundle> com.vanilla.jsf.validators.MyMessages </message-bundle> </application> <managed-bean> <managed-bean-name>actionController</managed-bean-name> <managed-bean-class>com.vanilla.jsf.controllers.ActionController</managed-bean-class> <managed-bean-scope>session</managed-bean-scope> <managed-property> <property-name>mailService</property-name> <value>#{mailService}</value> </managed-property> </managed-bean> <navigation-rule> <from-view-id>index.xhtml</from-view-id> <navigation-case> <from-action>#{actionController.actionSubmitted}</from-action> <from-outcome>success</from-outcome> <to-view-id>submitted.xhtml</to-view-id> <redirect /> </navigation-case> </navigation-rule>
    My problem is that I'm getting a NullPointerException because my mailService Spring bean is null:
    public String actionSubmitted(){ System.out.println(user.getEmail()); //mailService is null - getting NullPointerException mailService.sendUserMail(user); return "success"; }

    Read the article

  • How should I delete a child object from within a parent's slot? Possibly boost::asio specific.

    - by kaliatech
    I have written a network server class that maintains a std::set of network clients. The network clients emit a signal to the network server on disconnect (via boost::bind). When a network client disconnects, the client instance needs to be removed from the set and eventually deleted. I would think this is a common pattern, but I am having problems that might, or might not, be specific to ASIO. I've tried to trim down to just the relevant code:
    /** NetworkServer.hpp **/ class NetworkServices : private boost::noncopyable { public: NetworkServices(void); ~NetworkServices(void); private: void run(); void onNetworkClientEvent(NetworkClientEvent&); private: std::set<boost::shared_ptr<const NetworkClient>> clients; };
    /** NetworkServices.cpp **/ void NetworkServices::run() { running = true; boost::asio::io_service::work work(io_service); //keeps service running even if no operations // This creates just one thread for the boost::asio async network services boost::thread iot(boost::bind(&NetworkServices::run_io_service, this)); while (running) { boost::system::error_code err; try { tcp::socket* socket = new tcp::socket(io_service); acceptor->accept(*socket, err); if (!err) { NetworkClient* networkClient = new NetworkClient(io_service, boost::shared_ptr<tcp::socket>(socket)); networkClient->networkClientEventSignal.connect(boost::bind(&NetworkServices::onNetworkClientEvent, this, _1)); clients.insert(boost::shared_ptr<NetworkClient>(networkClient)); networkClient->init(); //kicks off 1st asynch_read call } } // etc... } } void NetworkServices::onNetworkClientEvent(NetworkClientEvent& evt) { switch(evt.getType()) { case NetworkClientEvent::CLIENT_ERROR : { boost::shared_ptr<const NetworkClient> clientPtr = evt.getClient().getSharedPtr(); // ------ THIS IS THE MAGIC LINE ----- // If I keep this, the io_service hangs. If I comment it out, // everything works fine (but I never delete the disconnected NetworkClient). // If I actually deleted the client here, I might expect problems, because it is the caller // of this method via boost::signal and bind. However, the clientPtr is a shared ptr, and a // reference is being kept in the client itself while signaling, so // I would expect the object is not going to be deleted from the heap here. That seems to be the case. // Nevertheless, this line makes all the difference, most likely because it controls whether or not the NetworkClient ever gets deleted. clients.erase(clientPtr); //I should probably put this socket clean-up in the NetworkClient destructor. Regardless, by doing this // I would expect the ASIO socket stuff to be adequately cleaned up after this. tcp::socket& socket = clientPtr->getSocket(); try { socket.shutdown(boost::asio::socket_base::shutdown_both); socket.close(); } catch(...) { CommServerContext::error("Error while shutting down and closing socket."); } break; } default : { break; } } }
    /** NetworkClient.hpp **/ class NetworkClient : public boost::enable_shared_from_this<NetworkClient>, Client { NetworkClient(boost::asio::io_service& io_service, boost::shared_ptr<tcp::socket> socket); virtual ~NetworkClient(void); inline boost::shared_ptr<const NetworkClient> getSharedPtr() const { return shared_from_this(); }; boost::signal <void (NetworkClientEvent&)> networkClientEventSignal; void onAsyncReadHeader(const boost::system::error_code& error, size_t bytes_transferred); };
    /** NetworkClient.cpp - onAsyncReadHeader method, called from the io_service.run() thread as the result of an async_read operation. An error condition is usually the result of an unexpected client disconnect. **/ void NetworkClient::onAsyncReadHeader( const boost::system::error_code& error, size_t bytes_transferred) { if (error) { //Make sure this instance doesn't get deleted from parent/slot dereferencing //Alternatively, somehow schedule for future delete? boost::shared_ptr<const NetworkClient> clientPtr = getSharedPtr(); //Signal to service that this client is disconnecting NetworkClientEvent evt(*this, NetworkClientEvent::CLIENT_ERROR); networkClientEventSignal(evt); networkClientEventSignal.disconnect_all_slots(); return; }
    I believe it's not safe to delete the client from within the slot handler, because the function return would be... undefined? (Interestingly, it doesn't seem to blow up on me, though.) So I've used boost::shared_ptr along with shared_from_this to make sure the client doesn't get deleted until all slots have been signaled. It doesn't seem to really matter, though. I believe this question is not specific to ASIO, but the problem manifests in a peculiar way when using ASIO. I have one thread executing io_service.run(). All ASIO read/write operations are performed asynchronously. Everything works fine with multiple clients connecting/disconnecting UNLESS I delete my client object from the set per the code above. If I delete my client object, the io_service seemingly deadlocks internally and no further asynchronous operations are performed unless I start another thread. I have try/catches around the io_service.run() call and have not been able to detect any errors.
    Questions: Are there best practices for deleting child objects, which are also signal emitters, from within parent slots? Any ideas as to why the io_service is hanging when I delete my network client object?

    Read the article

  • View bound to paged collection view not updating all of the time.

    - by Thomas
    I'm new to Silverlight and am trying to build a business application using the MVVM pattern and RIA Services. I have a view model class that contains a PagedCollectionView, which is set as the ItemsSource of a DataGrid. When I update the PagedCollectionView, the DataGrid is only updated the first time; after that, subsequent changes to the data do not show up in the view until after another edit. Changes seem to be delayed by one edit. Below is a summarized example of my XAML and code-behind.
    This is the code for my view model:
    public class CustomerContactLinks : INotifyPropertyChanged { private ObservableCollection<CustomerContactLink> _CustomerContact; public ObservableCollection<CustomerContactLink> CustomerContact { get { if (_CustomerContact == null) _CustomerContact = new ObservableCollection<CustomerContactLink>(); return _CustomerContact; } set { _CustomerContact = value; } } private PagedCollectionView _CustomerContactPaged; public PagedCollectionView CustomerContactPaged { get { if (_CustomerContactPaged == null) _CustomerContactPaged = new PagedCollectionView(CustomerContact); return _CustomerContactPaged; } } private TicketSystemDataContext _ctx; public TicketSystemDataContext ctx { get { if (_ctx == null) _ctx = new TicketSystemDataContext(); return _ctx; } } public void GetAll() { ctx.Load(ctx.GetCustomerContactInfoQuery(), LoadCustomerContactsComplete, null); } private void LoadCustomerContactsComplete(LoadOperation<CustomerContactLink> lo) { foreach (var entity in lo.Entities) { CustomerContact.Add(entity as CustomerContactLink); } } #region INotifyPropertyChanged Members public event PropertyChangedEventHandler PropertyChanged; private void RaisePropertyChanged(string propertyName) { if (PropertyChanged != null) { this.PropertyChanged(this, new PropertyChangedEventArgs(propertyName)); } } #endregion }
    Here are the basics of my XAML:
    <Data:DataGrid x:Name="GridCustomers" MinHeight="100" MaxWidth="1000" IsReadOnly="True" AutoGenerateColumns="False"> <Data:DataGrid.Columns> <Data:DataGridTextColumn Header="First Name" Binding="{Binding Customer.FirstName}" Width="105" /> <Data:DataGridTextColumn Header="MI" Binding="{Binding Customer.MiddleName}" Width="35" /> <Data:DataGridTextColumn Header="Last Name" Binding="{Binding Customer.LastName}" Width="105"/> <Data:DataGridTextColumn Header="Address1" Binding="{Binding Contact.Address1}" Width="130"/> <Data:DataGridTextColumn Header="Address2" Binding="{Binding Contact.Address2}" Width="130"/> <Data:DataGridTextColumn Header="City" Binding="{Binding Contact.City}" Width="110"/> <Data:DataGridTextColumn Header="State" Binding="{Binding Contact.State}" Width="50"/> <Data:DataGridTextColumn Header="Zip" Binding="{Binding Contact.Zip}" Width="45"/> <Data:DataGridTextColumn Header="Home" Binding="{Binding Contact.PhoneHome}" Width="85"/> <Data:DataGridTextColumn Header="Cell" Binding="{Binding Contact.PhoneCell}" Width="85"/> <Data:DataGridTextColumn Header="Email" Binding="{Binding Contact.Email}" Width="118"/> </Data:DataGrid.Columns> </Data:DataGrid> <DataForm:DataForm x:Name="CustomerDetails" Header="Customer Details" AutoGenerateFields="False" AutoEdit="False" AutoCommit="False" CommandButtonsVisibility="Edit" Width="1000" Margin="0,5,0,0"> <DataForm:DataForm.EditTemplate> </DataForm:DataForm.EditTemplate> </DataForm:DataForm>
    And here is my code-behind:
    public Customers() { InitializeComponent(); BusyDialogIndicator.IsBusy = true; Loaded += new RoutedEventHandler(Customers_Loaded); CustomerDetails.BeginningEdit += new EventHandler(CustomerDetails_BeginningEdit); } void CustomerDetails_BeginningEdit(object sender, System.ComponentModel.CancelEventArgs e) { CustomerContacts.CustomerContactPaged.EditItem(CustomerDetails.CurrentItem); } private void Customers_Loaded(object sender, RoutedEventArgs e) { CustomerContacts = new CustomerContactLinks(); CustomerContacts.GetAll(); GridCustomers.ItemsSource = CustomerContacts.CustomerContactPaged; GridCustomerPager.Source = CustomerContacts.CustomerContactPaged; GridCustomers.SelectionChanged += new SelectionChangedEventHandler(GridCustomers_SelectionChanged); BusyDialogIndicator.IsBusy = false; } void GridCustomers_SelectionChanged(object sender, SelectionChangedEventArgs e) { CustomerDetails.CurrentItem = GridCustomers.SelectedItem as CustomerContactLink; } private void SaveChanges_Click(object sender, RoutedEventArgs e) { if (WebContext.Current.User.IsAuthenticated) { bool commited = CustomerDetails.CommitEdit(); if (commited && (!CustomerDetails.IsItemChanged && CustomerDetails.IsItemValid)) { CustomerContacts.Update(CustomerDetails.CurrentItem as CustomerContactLink); CustomerContacts.ctx.SubmitChanges(); CustomerContacts.CustomerContactPaged.CommitEdit(); CustomerContacts.CustomerContactPaged.Refresh(); (GridCustomers.ItemsSource as PagedCollectionView).Refresh(); } } }
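    One general Silverlight binding point worth checking here (a hypothesis, not a verified diagnosis of this code): a bound DataGrid only repaints values it is notified about, so every displayed property must raise PropertyChanged when set. A minimal sketch of the pattern, with illustrative names:

        using System.ComponentModel;

        // Illustrative item class; the question's RIA entities play this role.
        public class ObservableItem : INotifyPropertyChanged
        {
            private string _firstName;

            public string FirstName
            {
                get { return _firstName; }
                set
                {
                    if (_firstName == value) return;
                    _firstName = value;
                    // Without this notification, a bound grid keeps showing the old
                    // value until something else forces the row to re-render,
                    // which can look like updates "delayed by one edit".
                    OnPropertyChanged("FirstName");
                }
            }

            public event PropertyChangedEventHandler PropertyChanged;

            protected void OnPropertyChanged(string name)
            {
                var handler = PropertyChanged;
                if (handler != null) handler(this, new PropertyChangedEventArgs(name));
            }
        }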

    Read the article

  • Thread Blocks During Call

    - by user578875
    I have a serious problem. I'm developing an application that measures call time during a call; the problem occurs when the phone is held to the ear: the timer's thread blocks and no longer responds until I take the phone away from my ear. The following log shows the problem:
    01-11 16:14:19.607 14558 14566 I Estado : postDelayed Async Service 01-11 16:14:20.607 14558 14566 I Estado : postDelayed Async Service 01-11 16:14:21.607 14558 14566 I Estado : postDelayed Async Service 01-11 16:14:22.597 14558 14566 I Estado : postDelayed Async Service 01-11 16:14:23.608 14558 14566 I Estado : postDelayed Async Service 01-11 16:14:24.017 1106 1106 D iddd : select() < 0, Probably a handled signal: Interrupted system call 01-11 16:14:24.607 14558 14566 I Estado : postDelayed Async Service 01-11 16:18:05.500 1106 1106 D iddd : select() < 0, Probably a handled signal: Interrupted system call 01-11 16:18:06.026 14558 14566 I Estado : postDelayed Async Service 01-11 16:18:06.026 14558 14566 I Estado : postDelayed Async Service 01-11 16:18:06.026 14558 14566 I Estado : postDelayed Async Service 01-11 16:18:06.026 14558 14566 I Estado : postDelayed Async Service 01-11 16:18:06.026 14558 14566 I Estado : postDelayed Async Service 01-11 16:18:06.026 14558 14566 I Estado : postDelayed Async Service
    I've tried Services, Timers, Threads, and AsyncTasks, and they all present the same problem. My code:
    @Override public void onCreate(Bundle savedInstanceState) { super.onCreate(savedInstanceState); requestWindowFeature(Window.FEATURE_NO_TITLE); getWindow().setFlags(WindowManager.LayoutParams.FLAG_FULLSCREEN, WindowManager.LayoutParams.FLAG_FULLSCREEN); setContentView(R.layout.main); HangUpService.setMainActivity(this); objHangUpService = new Intent(this, HangUpService.class); Runnable rAccion = new Runnable() { public void run() { TelephonyManager tm = (TelephonyManager)getSystemService(TELEPHONY_SERVICE); tm.listen(mPhoneListener, PhoneStateListener.LISTEN_CALL_STATE); objVibrator = (Vibrator) getSystemService(getApplicationContext().VIBRATOR_SERVICE); final ListView lstLlamadas = (ListView) findViewById(R.id.lstFavoritos); final EditText txtMinutos = (EditText) findViewById(R.id.txtMinutos); final EditText txtSegundos = (EditText) findViewById(R.id.txtSegundos); ArrayList<Contacto> cContactos = new ArrayList<Contacto>(); ContactoAdapter caContactos = new ContactoAdapter(HangUp.this, R.layout.row,cContactos); Cursor curContactos = getContentResolver().query( ContactsContract.Contacts.CONTENT_URI, null, null, null, ContactsContract.Contacts.TIMES_CONTACTED + " DESC"); while (curContactos.moveToNext()){ String strNombre = curContactos.getString(curContactos.getColumnIndex(ContactsContract.Contacts.DISPLAY_NAME)); String strID = curContactos.getString(curContactos.getColumnIndex(ContactsContract.Contacts._ID)); String strHasPhone=curContactos.getString(curContactos.getColumnIndex(ContactsContract.Contacts.HAS_PHONE_NUMBER)); String strStarred=curContactos.getString(curContactos.getColumnIndex(ContactsContract.Contacts.STARRED)); if (Integer.parseInt(strHasPhone) > 0 && Integer.parseInt(strStarred) ==1 ) { Cursor CursorTelefono = getContentResolver().query( ContactsContract.CommonDataKinds.Phone.CONTENT_URI, null, ContactsContract.CommonDataKinds.Phone.CONTACT_ID +" = " + strID, null, null); while (CursorTelefono.moveToNext()) { String strTipo=CursorTelefono.getString(CursorTelefono.getColumnIndex(ContactsContract.CommonDataKinds.Phone.TYPE)); String strTelefono=CursorTelefono.getString(CursorTelefono.getColumnIndex(ContactsContract.CommonDataKinds.Phone.NUMBER)); strNumero=strTelefono; String args[]=new String[1]; args[0]=strNumero; Cursor CursorCallLog = getContentResolver().query( android.provider.CallLog.Calls.CONTENT_URI, null, android.provider.CallLog.Calls.NUMBER + "=?", args, android.provider.CallLog.Calls.DATE+ " DESC"); if (Integer.parseInt(strTipo)==2) { caContactos.add( new Contacto( strNombre, strTelefono ) ); } } CursorTelefono.close(); } } curContactos.close(); lstLlamadas.setAdapter(caContactos); lstLlamadas.setOnItemClickListener(new OnItemClickListener() { @Override public void onItemClick(AdapterView<?> a, View v, int position, long id) { Contacto mContacto=(Contacto)lstLlamadas.getItemAtPosition(position); i = new Intent(HangUp.this, Llamada.class); Log.i("Estado","Declaro Intent"); Bundle bundle = new Bundle(); bundle.putString("telefono", mContacto.getTelefono()); i.putExtras(bundle); startActivityForResult(i,SUB_ACTIVITY_ID); Log.i("Estado","Inicio Intent"); blActivo=true; try { String strMinutos=txtMinutos.getText().toString(); String strSegundos=txtSegundos.getText().toString(); if(!strMinutos.equals("") && !strSegundos.equals("")){ int Tiempo = ( (Integer.parseInt(txtMinutos.getText().toString())*60) + Integer.parseInt(txtSegundos.getText().toString()) )* 1000; handler.removeCallbacks(rVibrate); cTime = System.currentTimeMillis(); cTime=cTime+Tiempo; objHangUpAsync = new HangUpAsync(cTime,objVibrator,objPowerManager,objKeyguardLock); objHangUpAsync.execute(); objPowerManager.userActivity(Tiempo+3000, true); objHangUpService.putExtra("cTime", cTime); //startService(objHangUpService); } catch (Exception e) { e.printStackTrace(); } finally { } } }); } }; }
    AsyncTask:
    @Override protected String doInBackground(String... arg0) { blActivo = true; mWakeLock = objPowerManager.newWakeLock(PowerManager.FULL_WAKE_LOCK, "My Tag"); objKeyguardLock.disableKeyguard(); Log.i("Estado", "Entro a doInBackground"); timer.scheduleAtFixedRate( new TimerTask() { public void run() { if (blActivo){ if (cTime <= System.currentTimeMillis()){ blActivo=false; objVibrator.vibrate(1000); Log.i("Estado","Vibrar desde Async"); this.cancel(); }else{ try{ mWakeLock.acquire(); mWakeLock.release(); Log.i("Estado","postDelayed Async Service"); }catch(Exception e){ Log.i("Estado","Error: " + e.getMessage()); } } } } }, 0, INTERVAL); return null; }

    Read the article

  • NullReferenceException makes me want to shoot myself

    - by rockinthesixstring
    Ok, so once upon a time, my code worked. Since then I did some refactoring (basic stuff, so I thought), but now I'm getting a NullReferenceException, and I can't bloody well figure out why. Here's where it starts (note: this is the logout method, but all of my ActivityLog sections are throwing this error):
    Function LogOut(ByVal go As String) As ActionResult ActivityLogService.AddActivity(AuthenticationHelper.RetrieveAuthUser.ID, _ IActivityLogService.LogType.UserLogout, _ HttpContext.Request.UserHostAddress) ActivityLogService.SubmitChanges() ''# more stuff happens after this End Function
    The service is pretty straightforward (notice the ERROR THROWN HERE):
    Public Sub AddActivity(ByVal userid As Integer, ByVal activity As Integer, ByVal ip As String) Implements IActivityLogService.AddActivity Dim _activity As ActivityLog = New ActivityLog With {.Activity = activity, .UserID = userid, .UserIP = ip.IPAddressToNumber, .ActivityDate = DateTime.UtcNow} ActivityLogRepository.Create(_activity) ''#ERROR THROWN HERE End Sub
    And the interface that the service uses looks like this:
    Public Interface IActivityLogService Sub AddActivity(ByVal userid As Integer, ByVal activity As Integer, ByVal ip As String) Function GetUsersLastActivity(ByVal UserID As Integer) As ActivityLog Sub SubmitChanges() ''' <summary> ''' The type of activity done by the user ''' </summary> ''' <remarks>Each int refers to an activity. ''' There can be no duplicates or modifications ''' after this application goes live</remarks> Enum LogType As Integer ''' <summary> ''' A new session started ''' </summary> SessionStarted = 1 ''' <summary> ''' A new user is Added/Created ''' </summary> UserAdded = 2 ''' <summary> ''' User has updated their profile ''' </summary> UserUpdated = 3 ''' <summary> ''' User has logged into they system ''' </summary> UserLogin = 4 ''' <summary> ''' User has logged out of the system ''' </summary> UserLogout = 5 ''' <summary> ''' A new event has been added ''' </summary> EventAdded = 6 ''' <summary> ''' An event has been updated ''' </summary> EventUpdated = 7 ''' <summary> ''' An event has been deleted ''' </summary> EventDeleted = 8 ''' <summary> ''' An event has received a Up/Down vote ''' </summary> EventVoted = 9 ''' <summary> ''' An event has been closed ''' </summary> EventCloseVoted = 10 ''' <summary> ''' A comment has been added to an event ''' </summary> CommentAdded = 11 ''' <summary> ''' A comment has been updated ''' </summary> CommentUpdated = 12 ''' <summary> ''' A comment has been deleted ''' </summary> CommentDeleted = 13 ''' <summary> ''' An event or comment has been reported as spam ''' </summary> SpamReported = 14 End Enum End Interface
    And the repository is equally straightforward:
    Public Sub Create(ByVal activity As ActivityLog) Implements IActivityLogRepository.Create dc.ActivityLogs.InsertOnSubmit(activity) End Sub
    The stack trace is as follows:
    [NullReferenceException: Object reference not set to an instance of an object.] MyApp.Core.Domain.ActivityLogService.AddActivity(Int32 userid, Int32 activity, String ip) in E:\Projects\MyApp\MyApp.Core\Domain\Services\ActivityLogService.vb:49 MyApp.Core.Controllers.UsersController.Authenticate(String go) in E:\Projects\MyApp\MyApp.Core\Controllers\UsersController.vb:258 lambda_method(Closure , ControllerBase , Object[] ) +163 System.Web.Mvc.ActionMethodDispatcher.Execute(ControllerBase controller, Object[] parameters) +51 System.Web.Mvc.ReflectedActionDescriptor.Execute(ControllerContext controllerContext, IDictionary`2 parameters) +409 System.Web.Mvc.ControllerActionInvoker.InvokeActionMethod(ControllerContext controllerContext, ActionDescriptor actionDescriptor, IDictionary`2 parameters) +52 System.Web.Mvc.<>c__DisplayClass15.b__12() +127 System.Web.Mvc.ControllerActionInvoker.InvokeActionMethodFilter(IActionFilter filter, ActionExecutingContext preContext, Func`1 continuation) +436 System.Web.Mvc.<>c__DisplayClass17.<InvokeActionMethodWithFilters>b__14() +61 System.Web.Mvc.ControllerActionInvoker.InvokeActionMethodWithFilters(ControllerContext controllerContext, IList`1 filters, ActionDescriptor actionDescriptor, IDictionary`2 parameters) +305 System.Web.Mvc.ControllerActionInvoker.InvokeAction(ControllerContext controllerContext, String actionName) +830 System.Web.Mvc.Controller.ExecuteCore() +136 System.Web.Mvc.ControllerBase.Execute(RequestContext requestContext) +232 System.Web.Mvc.ControllerBase.System.Web.Mvc.IController.Execute(RequestContext requestContext) +39 System.Web.Mvc.<>c__DisplayClassb.<BeginProcessRequest>b__5() +68 System.Web.Mvc.Async.<>c__DisplayClass1.<MakeVoidDelegate>b__0() +44 System.Web.Mvc.Async.<>c__DisplayClass8`1.b__7(IAsyncResult ) +42 System.Web.Mvc.Async.WrappedAsyncResult`1.End() +141 System.Web.Mvc.Async.AsyncResultWrapper.End(IAsyncResult asyncResult, Object tag) +54 System.Web.Mvc.Async.AsyncResultWrapper.End(IAsyncResult asyncResult, Object tag) +40 System.Web.Mvc.<>c__DisplayClasse.b__d() +61 System.Web.Mvc.SecurityUtil.b__0(Action f) +31 System.Web.Mvc.SecurityUtil.ProcessInApplicationTrust(Action action) +56 System.Web.Mvc.MvcHandler.EndProcessRequest(IAsyncResult asyncResult) +110 System.Web.Mvc.MvcHandler.System.Web.IHttpAsyncHandler.EndProcessRequest(IAsyncResult result) +38 System.Web.CallHandlerExecutionStep.System.Web.HttpApplication.IExecutionStep.Execute() +690 System.Web.HttpApplication.ExecuteStep(IExecutionStep step, Boolean& completedSynchronously) +194
    Here's an image of the error when attached to the debugger, and an image of the DB schema in question. Can anyone shed some light on what I might be missing here?
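    Not an answer from the original post, but a technique that corners this class of bug quickly: validate each dependency up front so the failure names the null reference instead of surfacing as a bare NullReferenceException deep inside AddActivity. A C# sketch of the idea (the VB translation is mechanical; the interface shape here is illustrative, not the project's real one):

        using System;

        // Illustrative stand-in for the question's repository interface.
        public interface IActivityLogRepository
        {
            void Create(object activity);
        }

        public class ActivityLogService
        {
            private readonly IActivityLogRepository _repository;

            public ActivityLogService(IActivityLogRepository repository)
            {
                // Fail at construction time with a named argument, not later
                // inside AddActivity with an anonymous NullReferenceException.
                if (repository == null) throw new ArgumentNullException("repository");
                _repository = repository;
            }

            public void AddActivity(int userId, int activity, string ip)
            {
                if (ip == null) throw new ArgumentNullException("ip");
                _repository.Create(new { UserID = userId, Activity = activity, UserIP = ip });
            }
        }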

    Read the article

  • Parsing Adobe Kuler RSS feed

    - by dezkev
    I have been trying to parse the XML file below (the Kuler RSS feed). I have read the various posts on this site but am unable to piece them together. I specifically want to extract the child (or sibling) nodes of the <kuler:themeItem> element. However, I am getting an exception: "Namespace Manager or XsltContext needed. This query has a prefix, variable, or user-defined function." Please help. (C# 3.0, .NET Framework 3.5.)
    RSS feed snippet:
    <?xml version="1.0" encoding="UTF-8" ?> <rss xmlns:xs="http://www.w3.org/2001/XMLSchema" xmlns:kuler="http://kuler.adobe.com/kuler/API/rss/" xmlns:rss="http://blogs.law.harvard.edu/tech/rss" version="2.0"> <channel> <title>kuler recent themes</title> <link>http://kuler.adobe.com/</link> <description>most recent themes published on kuler (1 to 20 of 332518)</description> <language>en-us</language> <pubDate>Wed, 07 Apr 2010 08:41:31 PST</pubDate> <lastBuildDate>Wed, 07 Apr 2010 08:41:31 PST</lastBuildDate> <docs>http://blogs.law.harvard.edu/tech/rss</docs> <generator>Kuler Services</generator> <managingEditor>[email protected]</managingEditor> <webMaster>[email protected]</webMaster> <recordCount>332518</recordCount> <startIndex>0</startIndex> <itemsPerPage>20</itemsPerPage> <item> <title>Theme Title: Muted Graph</title> <link>http://kuler.adobe.com/index.cfm#themeID/856075</link> <guid>http://kuler.adobe.com/index.cfm#themeID/856075</guid> <enclosure xmlns="http://www.solitude.dk/syndication/enclosures/"> <title>Muted Graph</title> <link length="1" type="image/png"> <url>http://kuler-api.adobe.com/kuler/themeImages/theme_856075.png</url> </link> </enclosure> <description><img src="http://kuler-api.adobe.com/kuler/themeImages/theme_856075.png" /><br /> Artist: tischt<br /> ThemeID: 856075<br /> Posted: 04/07/2010<br /> Hex: F1E9B2, 3D3606, 2A3231, 4A0A07, 424431</description> <kuler:themeItem> <kuler:themeID>856075</kuler:themeID> <kuler:themeTitle>Muted Graph</kuler:themeTitle> <kuler:themeImage>http://kuler-api.adobe.com/kuler/themeImages/theme_856075.png</kuler:themeImage> <kuler:themeAuthor> <kuler:authorID>216099</kuler:authorID> <kuler:authorLabel>tischt</kuler:authorLabel> </kuler:themeAuthor> <kuler:themeTags /> <kuler:themeRating>0</kuler:themeRating> <kuler:themeDownloadCount>0</kuler:themeDownloadCount> <kuler:themeCreatedAt>20100407</kuler:themeCreatedAt> <kuler:themeEditedAt>20100407</kuler:themeEditedAt> <kuler:themeSwatches> <kuler:swatch> <kuler:swatchHexColor>F1E9B2</kuler:swatchHexColor> <kuler:swatchColorMode>rgb</kuler:swatchColorMode> <kuler:swatchChannel1>0.945098</kuler:swatchChannel1> <kuler:swatchChannel2>0.913725</kuler:swatchChannel2> <kuler:swatchChannel3>0.698039</kuler:swatchChannel3> <kuler:swatchChannel4>0.0</kuler:swatchChannel4> <kuler:swatchIndex>0</kuler:swatchIndex> </kuler:swatch>
    My code so far:
    static void Main(string[] args) { const string feedUrl = "http://kuler-api.adobe.com/rss/get.cfm?listtype=recent&key=xxxx"; var doc = new XmlDocument(); var request = WebRequest.Create(feedUrl) as HttpWebRequest; using (var response = request.GetResponse() as HttpWebResponse) { var reader = new StreamReader(response.GetResponseStream()); doc.Load(reader); } XmlNodeList rsslist = doc.SelectNodes("//rss/channel/item/kuler:themeItem"); for (int i = 0; i < rsslist.Count; i++) { XmlNode rssdetail = rsslist.Item(i).SelectSingleNode("kuler:themeTitle"); string title = rssdetail.InnerText; Console.WriteLine(title); } } }
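    That exception is XPath's way of saying the kuler: prefix has not been registered: prefixes in a SelectNodes query are resolved through an XmlNamespaceManager, not through the document's own xmlns declarations. A sketch of the missing piece, dropping into the Main method above (the prefix string is arbitrary, but the URI must exactly match the feed's xmlns:kuler declaration):

        using System;
        using System.Xml;

        // doc is the XmlDocument already loaded from the feed, as in the code above.
        var nsmgr = new XmlNamespaceManager(doc.NameTable);
        nsmgr.AddNamespace("kuler", "http://kuler.adobe.com/kuler/API/rss/");

        // Pass the manager to every query that uses a prefixed name.
        XmlNodeList themeItems = doc.SelectNodes("//rss/channel/item/kuler:themeItem", nsmgr);
        foreach (XmlNode themeItem in themeItems)
        {
            XmlNode title = themeItem.SelectSingleNode("kuler:themeTitle", nsmgr);
            if (title != null)
                Console.WriteLine(title.InnerText);
        }

    Any SelectNodes or SelectSingleNode call made without the manager, as in the question's code, triggers exactly the "Namespace Manager or XsltContext needed" exception quoted above.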

    Read the article

  • Layout is not included in all pages in ASP.NET MVC 4 application

    - by Ahmed
    I am developing an ASP.NET MVC 4 application with Bootstrap 3. I have _Layout.cshtml in the "Shared" folder and, in Views, two pages, "Index" and "Register". I've included the layout in both of these views, but it seems that the layout is applied only in the "Index" view and not in the "Register" view. Following are my Index and Register views.
    Index view:
    @{ ViewBag.Title = "Index"; Layout = "~/Views/Shared/_Layout.cshtml"; } <h2 align="center" class="bg-info">Login</h2> <form class="form-horizontal" role="form"> <div class="form-group"> <label for="inputEmail3" class="col-sm-2 control-label"><strong>UserName : </strong></label> <div class="col-sm-10"> <input type="email" class="form-control" id="inputEmail3" placeholder="UserName"> </div> </div> <div class="form-group"> <label for="inputPassword3" class="col-sm-2 control-label"><strong>Password</strong></label> <div class="col-sm-10"> <input type="password" class="form-control" id="inputPassword3" placeholder="Password"> </div> </div> <div class="form-group"> <div class="col-sm-offset-2 col-sm-10"> <div class="checkbox"> <label> <input type="checkbox"> Remember me </label> </div> </div> </div> <div class="form-group"> <div class="col-sm-offset-2 col-sm-10"> <button type="submit" class="btn btn-primary">Sign in</button> </div> </div> <h2 align="center" class="bg-info">SignIn With Other Services</h2> </form> <form class="form-horizontal" role="form" method="post" action="/Home/FacebookLogin"> <div class="form-group"> <div class="col-sm-offset-2 col-sm-10"> <button type="submit" class="btn btn-primary">SignIn with Facebook</button> </div> </div> </form> <h2 align="center" class="bg-info">Don't Have an Account?</h2> <form class="form-horizontal" role="form" method="post" action="/Home/Register"> <div class="form-group"> <div class="col-sm-offset-2 col-sm-10"> <button type="submit" class="btn btn-primary">Register</button> </div> </div> </form>
    Register view:
    @{ ViewBag.Title = "Register"; Layout = "~/Views/Shared/_Layout.cshtml"; } <h2 align="center" class="bg-info">Register</h2> <form class="form-horizontal" role="form"> <div class="form-group"> <label for="inputEmail3" class="col-sm-2 control-label"><strong>UserName : </strong></label> <div class="col-sm-10"> <input type="email" class="form-control" id="uname" name="uname" placeholder="UserName"> <input type="button" class="btn btn-primary" id="check" value="Check Availability" > <h4 class="bg-warning"></h4> </div> </div> <div class="form-group"> <label for="inputPassword3" class="col-sm-2 control-label"><strong>Password</strong></label> <div class="col-sm-10"> <input type="password" class="form-control" id="upass" name="upass" placeholder="Password"> </div> </div> <div class="form-group"> <label for="inputPassword3" class="col-sm-2 control-label"><strong>Retype Password</strong></label> <div class="col-sm-10"> <input type="password" class="form-control" id="retype" placeholder="Password"> </div> </div> <div class="form-group"> <div class="col-sm-offset-2 col-sm-10"> <button type="submit" class="btn btn-primary">Register</button> </div> </div> </form>
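    Both views above assign Layout explicitly, so if the layout still fails to appear for Register, it's worth checking how that view is returned (for example, as a PartialView, which skips the layout). As an aside, the conventional MVC 4 way to apply a layout to every view is a _ViewStart.cshtml in the Views folder instead of a per-view assignment - a minimal sketch:

        @* ~/Views/_ViewStart.cshtml - runs before every view in this folder tree *@
        @{
            Layout = "~/Views/Shared/_Layout.cshtml";
        }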

    Read the article

  • Allowing pop-ups and downloads with Flex

    - by ShadowVariable
    I'm building a Flex app that is basically a wrapper for a few websites. One of them is a Google Docs site, and I'm trying to get Flex to allow downloads or pop-ups or something that will let me do it. I've tried a whole bunch of solutions online and none of them have worked out. Here's the code so far:
    <?xml version="1.0" encoding="utf-8"?> <s:WindowedApplication xmlns:fx="http://ns.adobe.com/mxml/2009" xmlns:s="library://ns.adobe.com/flex/spark" xmlns:mx="library://ns.adobe.com/flex/mx" width="100%" height="100%" creationComplete="onCreationComplete()"> <s:layout> <s:HorizontalLayout/> </s:layout> <fx:Style source="style.css"/> <fx:Script> <![CDATA[ include "CustomHTMLLoader.as"; private function onCreationComplete():void { // ... other stuff ... var custom:object; custom.htmlHost = new MyHTMLHost(); } ]]> </fx:Script> <fx:Declarations> <!-- Place non-visual elements (e.g., services, value objects) here --> </fx:Declarations> <s:BorderContainer width="100%" height="100%" backgroundColor="#87BED0" styleName="container"> <s:Panel x="188" y="17" width="826" height="112" borderAlpha="0.15" chromeColor="#0C5A74" color="#FFFFFF" cornerRadius="20" dropShadowVisible="false" enabled="true" title="Customer Service Control Panel"> <s:controlBarContent/> <s:Button id="home" x="13" y="10" height="44" label="Phones" click="myViewStack.selectedChild=Home;" enabled="true" icon="@Embed('assets/iconmonstr-mobile-phone-6-icon-32.png')"/> <s:Button id="liveagent" x="131" y="10" height="44" label="Live Agent" click="myViewStack.selectedChild=live_agent;" icon="@Embed('assets/iconmonstr-speech-bubble-11-icon-32.png')"/> <s:Button id="bigcommerce" x="260" y="10" width="158" height="44" label="Big Commerce" click="myViewStack.selectedChild=bigcommerce_home;" icon="@Embed('assets/iconmonstr-coin-6-icon-48.png')"/> <s:Button id="faq" x="436" y="10" width="88" height="44" label="FAQ" click="myViewStack.selectedChild=freqaskquestions;" fontFamily="Arial" icon="@Embed('assets/iconmonstr-help-4-icon-32.png')"/> <s:Button id="call" x="540" y="10" width="131" height="44" label="Google Docs" click="myViewStack.selectedChild=call_notes;" icon="@Embed('assets/iconmonstr-text-file-4-icon-32.png')"/> <s:Button id="hoot" x="684" y="10" width="122" height="44" label="HootSuite" click="myViewStack.selectedChild=hoot_suite;" icon="@Embed('assets/iconmonstr-facebook-icon-32.png')"/> </s:Panel> <mx:ViewStack id="myViewStack" x="0" y="140" width="100%" height="100%" borderStyle="solid"> <s:NavigatorContent id="Home"> <s:BorderContainer width="100%" height="100%"> <mx:HTML x="0" y="0" width="100%" height="100%" borderVisible="false" horizontalScrollPolicy="off" location="http://mbvphone.mtbakervapor.org/vbx/messages/inbox" /> </s:BorderContainer> </s:NavigatorContent> <s:NavigatorContent id="bigcommerce_home"> <s:BorderContainer width="100%" height="100%"> <mx:HTML x="0" y="0" width="100%" height="100%" borderVisible="false" horizontalScrollPolicy="off" location="http://www.mtbakervapor.com/admin" /> </s:BorderContainer> </s:NavigatorContent> <s:NavigatorContent id="live_agent"> <s:BorderContainer width="100%" height="100%"> <mx:HTML x="0" y="0" width="100%" height="100%" borderVisible="false" horizontalScrollPolicy="off" location="http://mbvphone.mtbakervapor.org/liveagent/agent/#login" /> </s:BorderContainer> </s:NavigatorContent> <s:NavigatorContent id="freqaskquestions"> <s:BorderContainer width="100%" height="100%"> <mx:HTML x="0" y="0" width="100%" height="100%" borderVisible="false" horizontalScrollPolicy="off" location="http://mbvphone.mtbakervapor.org/liveagent/" /> </s:BorderContainer> </s:NavigatorContent> <s:NavigatorContent id="call_notes"> <s:BorderContainer width="100%" height="100%"> <mx:HTML id="html" x="0" y="0" width="100%" height="100%" borderVisible="false" horizontalScrollPolicy="off" location="https://drive.google.com/a/mtbakervapor.com/" htmlHost="{new CustomHost()}" /> </s:BorderContainer> </s:NavigatorContent> <s:NavigatorContent id="hoot_suite"> <s:BorderContainer width="100%" height="100%"> <mx:HTML x="0" y="0" width="100%" height="100%" borderVisible="false" horizontalScrollPolicy="off" location="https://hootsuite.com/login" /> </s:BorderContainer> </s:NavigatorContent> </mx:ViewStack> <s:Image x="0" y="0" width="180" height="140" scaleMode="letterbox" smooth="false" source="assets/mbvlogo_black.png"/> </s:BorderContainer> </s:WindowedApplication>
    And the custom class code:
    package { import flash.html.HTMLHost; import flash.html.HTMLWindowCreateOptions; import flash.html.HTMLLoader; public class MyHTMLHost extends HTMLHost { public function MyHTMLHost(defaultBehaviors:Boolean=true) { super(defaultBehaviors); } override public function createWindow(windowCreateOptions:HTMLWindowCreateOptions):HTMLLoader { // all JS calls and HREFs to open a new window should use the existing window return HTMLLoader.createRootWindow(); } } }
    Any help would be appreciated.

    Read the article

  • Microsoft and jQuery

    - by Rick Strahl
    The jQuery JavaScript library has been steadily getting more popular and with recent developments from Microsoft, jQuery is also getting ever more exposure on the ASP.NET platform including now directly from Microsoft. jQuery is a light weight, open source DOM manipulation library for JavaScript that has changed how many developers think about JavaScript. You can download it and find more information on jQuery on www.jquery.com. For me jQuery has had a huge impact on how I develop Web applications and was probably the main reason I went from dreading to do JavaScript development to actually looking forward to implementing client side JavaScript functionality. It has also had a profound impact on my JavaScript skill level for me by seeing how the library accomplishes things (and often reviewing the terse but excellent source code). jQuery made an uncomfortable development platform (JavaScript + DOM) a joy to work on. Although jQuery is by no means the only JavaScript library out there, its ease of use, small size, huge community of plug-ins and pure usefulness has made it easily the most popular JavaScript library available today. As a long time jQuery user, I’ve been excited to see the developments from Microsoft that are bringing jQuery to more ASP.NET developers and providing more integration with jQuery for ASP.NET’s core features rather than relying on the ASP.NET AJAX library. Microsoft and jQuery – making Friends jQuery is an open source project but in the last couple of years Microsoft has really thrown its weight behind supporting this open source library as a supported component on the Microsoft platform. When I say supported I literally mean supported: Microsoft now offers actual tech support for jQuery as part of their Product Support Services (PSS) as jQuery integration has become part of several of the ASP.NET toolkits and ships in several of the default Web project templates in Visual Studio 2010. The ASP.NET MVC 3 framework (still in Beta) also uses jQuery for a variety of client side support features including client side validation and we can look forward toward more integration of client side functionality via jQuery in both MVC and WebForms in the future. In other words jQuery is becoming an optional but included component of the ASP.NET platform. PSS support means that support staff will answer jQuery related support questions as part of any support incidents related to ASP.NET which provides some piece of mind to some corporate development shops that require end to end support from Microsoft. In addition to including jQuery and supporting it, Microsoft has also been getting involved in providing development resources for extending jQuery’s functionality via plug-ins. Microsoft’s last version of the Microsoft Ajax Library – which is the successor to the native ASP.NET AJAX Library – included some really cool functionality for client templates, databinding and localization. As it turns out Microsoft has rebuilt most of that functionality using jQuery as the base API and provided jQuery plug-ins of these components. Very recently these three plug-ins were submitted and have been approved for inclusion in the official jQuery plug-in repository and been taken over by the jQuery team for further improvements and maintenance. 
Even more surprising: The jQuery-templates component has actually been approved for inclusion in the next major update of the jQuery core in jQuery V1.5, which means it will become a native feature that doesn’t require additional script files to be loaded. Imagine this – an open source contribution from Microsoft that has been accepted into a major open source project for a core feature improvement. Microsoft has come a long way indeed! What the Microsoft Involvement with jQuery means to you For Microsoft jQuery support is a strategic decision that affects their direction in client side development, but nothing stopped you from using jQuery in your applications prior to Microsoft’s official backing and in fact a large chunk of developers did so readily prior to Microsoft’s announcement. Official support from Microsoft brings a few benefits to developers however. jQuery support in Visual Studio 2010 means built-in support for jQuery IntelliSense, automatically added jQuery scripts in many projects types and a common base for client side functionality that actually uses what most developers are already using. If you have already been using jQuery and were worried about straying from the Microsoft line and their internal Microsoft Ajax Library – worry no more. With official support and the change in direction towards jQuery Microsoft is now following along what most in the ASP.NET community had already been doing by using jQuery, which is likely the reason for Microsoft’s shift in direction in the first place. ASP.NET AJAX and the Microsoft AJAX Library weren’t bad technology – there was tons of useful functionality buried in these libraries. However, these libraries never got off the ground, mainly because early incarnations were squarely aimed at control/component developers rather than application developers. For all the functionality that these controls provided for control developers they lacked in useful and easily usable application developer functionality that was easily accessible in day to day client side development. The result was that even though Microsoft shipped support for these tools in the box (in .NET 3.5 and 4.0), other than for the internal support in ASP.NET for things like the UpdatePanel and the ASP.NET AJAX Control Toolkit as well as some third party vendors, the Microsoft client libraries were largely ignored by the developer community opening the door for other client side solutions. Microsoft seems to be acknowledging developer choice in this case: Many more developers were going down the jQuery path rather than using the Microsoft built libraries and there seems to be little sense in continuing development of a technology that largely goes unused by the majority of developers. Kudos for Microsoft for recognizing this and gracefully changing directions. Note that even though there will be no further development in the Microsoft client libraries they will continue to be supported so if you’re using them in your applications there’s no reason to start running for the exit in a panic and start re-writing everything with jQuery. Although that might be a reasonable choice in some cases, jQuery and the Microsoft libraries work well side by side so that you can leave existing solutions untouched even as you enhance them with jQuery. 
The Microsoft jQuery Plug-ins – Solid Core Features

One of the most interesting developments in Microsoft's embracing of jQuery is that Microsoft has started contributing to jQuery via the standard mechanism available to all jQuery developers: by submitting plug-ins. Microsoft took some of the nicest new features of the unpublished Microsoft Ajax Client Library, re-wrote these components for jQuery and then submitted them as plug-ins to the jQuery plug-in repository. Accepted plug-ins get taken over by the jQuery team, and that's exactly what happened with the three plug-ins submitted by Microsoft – with the templating plug-in even slated to be published as part of the jQuery core in the next major release (1.5).

The following plug-ins are provided by Microsoft:

jQuery Templates – a client side template rendering engine
jQuery Data Link – a client side databinder that can synchronize changes without code
jQuery Globalization – provides formatting and conversion features for dates and numbers

The first two are ports of functionality that was slated for the Microsoft Ajax Library, while the globalization plug-in provides functionality that was already found in the original ASP.NET AJAX library. To me all three plug-ins address a pressing need in client side applications and provide functionality I've previously used in other incarnations, but with more complete implementations. Let's take a closer look at these plug-ins.

jQuery Templates
http://api.jquery.com/category/plugins/templates/

Client side templating is a key component for building rich JavaScript applications in the browser. Templating on the client lets you avoid manually building markup by creating DOM nodes and injecting them individually into the document via code. Rather, you can create markup templates – similar to the way you create classic ASP server markup – and merge data into these templates to render HTML, which you can then inject into the document or use to replace existing content. Output from templates is rendered as a jQuery matched set and can then be easily inserted into the document as needed.

Templating is key to minimizing client side code and reducing repeated rendering logic. A single template can be used in many places for updating and adding content to existing pages. Further, if you build pure AJAX interfaces that rely entirely on client rendering of the initial page content, templates allow you to use a single markup template to handle all rendering of each specific HTML section/element. I've used a number of different client rendering template engines with jQuery in the past, including jTemplates (a PHP style templating engine) and a modified version of John Resig's MicroTemplating engine, which I built into my own set of libraries because it's such a commonly used feature in my client side applications.

jQuery Templates adds a much richer templating model that allows for sub-templates and access to the data items. Like John Resig's original Micro Template engine, the templating engine at its core generates JavaScript code, which means that templates can include JavaScript expressions. To give you a basic idea of how templates work, imagine I have an application that downloads a set of stock quotes based on a symbol list and then displays them in the document.
To do this you can create an 'item' template that describes how each of the quotes is rendered, defined as a template inside of the document:

<script id="stockTemplate" type="text/x-jquery-tmpl">
    <div id="divStockQuote" class="errordisplay" style="width: 500px;">
        <div class="label">Company:</div><div><b>${Company}(${Symbol})</b></div>
        <div class="label">Last Price:</div><div>${LastPrice}</div>
        <div class="label">Net Change:</div><div>
            {{if NetChange > 0}}
                <b style="color:green">${NetChange}</b>
            {{else}}
                <b style="color:red">${NetChange}</b>
            {{/if}}
        </div>
        <div class="label">Last Update:</div><div>${LastQuoteTimeString}</div>
    </div>
</script>

The 'template' is little more than HTML with some markup expressions inside of it that define the template language. Notice the embedded ${} expressions which reference data from the quote objects returned from an AJAX call to the server. You can embed any JavaScript or value expression in these template expressions. There are also a number of structural commands like {{if}} and {{each}} that provide rudimentary logic inside of your templates, as well as commands ({{tmpl}} and {{wrap}}) for nesting templates. You can find more about the full set of markup expressions in the documentation.

To load up this data you can use code like the following:

<script type="text/javascript">
    //var Proxy = new ServiceProxy("../PageMethods/PageMethodsService.asmx/");
    $(document).ready(function () {
        $("#btnGetQuotes").click(GetQuotes);
    });

    function GetQuotes() {
        var symbols = $("#txtSymbols").val().split(",");
        $.ajax({
            url: "../PageMethods/PageMethodsService.asmx/GetStockQuotes",
            data: JSON.stringify({ symbols: symbols }), // parameter map
            type: "POST", // data has to be POSTed
            contentType: "application/json",
            timeout: 10000,
            dataType: "json",
            success: function (result) {
                var quotes = result.d;
                var jEl = $("#stockTemplate").tmpl(quotes);
                $("#quoteDisplay").empty().append(jEl);
            },
            error: function (xhr, status) {
                alert(status + "\r\n" + xhr.responseText);
            }
        });
    };
</script>

In this case an ASMX AJAX service is called to retrieve the stock quotes. The service returns an array of quote objects, wrapped in an object whose .d property (in Microsoft service style) holds the actual array of quotes. The template is applied with:

var jEl = $("#stockTemplate").tmpl(quotes);

which selects the template script tag and uses the .tmpl() function to apply the data to it. The result is a jQuery matched set of elements that can then be appended to the quote display element in the page.

The template is merged against an array in this example. When the data is an array the template is automatically applied to each array item. If you pass a single data item – like say a single stock quote – the template works exactly the same way but is applied only once. Templates also have access to a $data item which provides the current data item and information about the template that is currently executing. This makes it possible to maintain context inside the template itself and also to pass context from a parent template to a child template, which is very powerful.

Templates can be evaluated by using the template selector and calling the .tmpl() function on the jQuery matched set as shown above, or you can use the static $.tmpl() function to provide a template as a string. This allows you to dynamically create templates in code or – more likely – to load templates from the server via AJAX calls.
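As a quick illustration of the string based approach, here's a minimal sketch – the markup and the shape of the quote object are made up for the example, and #quoteDisplay is the target element from the code above:

// compile and render a template supplied as a string at runtime;
// the quote object's shape here is assumed for illustration
var markup = "<div><b>${Company} (${Symbol})</b>: ${LastPrice}</div>";
var quote = { Company: "Acme Corp", Symbol: "ACME", LastPrice: 12.34 };

// $.tmpl(markupString, data) returns a jQuery matched set,
// just like the selector based .tmpl() call
$.tmpl(markup, quote).appendTo("#quoteDisplay");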
In short, there are options. The above shows off some of the basics, but there's much more functionality available in the template engine. Check the documentation link for more information and links to additional examples. The plug-in download also comes with a number of examples that demonstrate functionality. jQuery Templates will become a native component in jQuery Core 1.5, so it's definitely worthwhile to check out the engine today and get familiar with this interface.

As much as I'm stoked about templating becoming part of the jQuery core – because it's such an integral part of many applications – there are also a couple of shortcomings in the current incarnation:

Lack of Error Handling
Currently if you embed an expression that is invalid it's simply not rendered. No error is rendered into the template, nor do the various template functions throw errors, which leaves finding bugs as a runtime exercise. I would like some mechanism – optional if possible – to get error info about what is failing in a template when it's rendered.

No String Output
Templates are always rendered into a jQuery matched set and there's no way that I can see to render directly to a string (though see the workaround sketch below). String output can be useful for debugging, as well as opening up templating for creating non-HTML string output.

Limited JavaScript Access
Unlike John Resig's original MicroTemplating engine, which was entirely based on JavaScript code generation, these templates are limited to a few structured commands that can 'execute'. There's no arbitrary script code execution, which means you're limited to calling expressions available on global objects or the data item passed in. This may or may not be a big deal depending on the complexity of your template logic.

Error handling has been discussed quite a bit and it's likely there will be some solution to that particular issue by the time jQuery Templates ships. The others are relatively minor issues, but something to think about anyway.
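In the meantime, if you do need string output, a workaround sketch using only standard jQuery DOM calls (the template and quotes data are from the stock quote example above) looks like this:

// render the template into a detached wrapper element, then read
// back its innerHTML to get the rendered output as a string
var html = $("<div/>")
    .append($("#stockTemplate").tmpl(quotes))
    .html();
// html now holds the rendered markup as plain text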
jQuery Data Link
http://api.jquery.com/category/plugins/data-link/

jQuery Data Link provides the ability to do two-way data binding between input controls and an underlying object's properties. The typical scenario is linking a textbox to a property of an object, having the object updated when the text in the textbox changes, and having the textbox change when the value in the object – or the entire object – changes. The plug-in also supports converter functions that provide the conversion logic from string input to some other value, typically necessary for mapping textbox string input to, say, a number property, and potentially applying additional formatting and calculations. In theory this sounds great; in reality, however, this plug-in has some serious usability issues.

Using the plug-in you can do things like the following to bind data:

person = { firstName: "rick", lastName: "strahl" };

$(document).ready(function () {
    // provide for two-way linking of inputs
    $("form").link(person);

    // bind to non-input elements explicitly
    $("#objFirst").link(person, {
        firstName: {
            name: "objFirst",
            convertBack: function (value, source, target) {
                $(target).text(value);
            }
        }
    });
    $("#objLast").link(person, {
        lastName: {
            name: "objLast",
            convertBack: function (value, source, target) {
                $(target).text(value);
            }
        }
    });
});

This code hooks up two-way linking between a couple of textboxes on the page and the person object. The first line in the .ready() handler maps the object to form fields whose names match the properties on the object. Note that .link() does NOT push values into the textboxes when you call it – changes are mapped only when values change and you move out of the field. Strike one.

The two following commands allow manual binding of values to specific DOM elements, which is effectively a one-way bind. You specify the object and then an explicit mapping, where name is an ID in the document. The converter is required to explicitly assign the value to the element. Strike two.

You can also detect changes to the underlying object and cause updates to the bound input elements. Unfortunately the syntax to do this is not very natural, as you have to rely on the jQuery data object. Updating an object's property so that change notification fires looks like this:

function updateFirstName() {
    $(person).data("firstName", person.firstName + " (code updated)");
}

This works fine in causing any linked fields to be updated – in the bindings above, both the firstName input field and the objFirst DOM element get updated. But the syntax requires you to use a jQuery .data() call for each property change to ensure that the changes are tracked properly. Really? Sure, you're binding through multiple layers of abstraction now, but how is that better than just manually assigning values? The code savings (if any) are going to be minimal.

As much as I would like to have a WPF/Silverlight/Observable-like binding mechanism in client script, this plug-in doesn't help much towards that goal in its current incarnation. While you can bind values, the 'binder' is too limited to be really useful. If initial values can't be assigned from the mappings, you're going to end up duplicating work by loading the data using some other mechanism. There's no easy way to re-bind a different object altogether, since updates trigger only through the .data members. Finally, any non-input elements have to be bound via code that's fairly verbose and frankly may be more voluminous than what you might write by hand for manual binding and unbinding.

Two-way binding can be very useful, but it has to be easy and, most importantly, natural. If it's more work to hook up a binding than to write a couple of lines that do the binding/unbinding, this sort of thing helps very little in most scenarios. In talking to some of the developers, the feature set for Data Link is not complete and they are still soliciting input for features and functionality. If you have ideas on how you want this feature to be more useful, get involved and post your recommendations.

As it stands, it looks to me like this component needs a lot of love to become useful. For this component to really provide value, bindings need to be easily refreshable and work at the object level, not just the property level. It seems to me we would be much better served by a model binder object that can perform these binding/unbinding tasks in bulk, rather than a tool where each link has to be mapped first. I also find the choice of creating a jQuery plug-in questionable – it seems a standalone object, albeit one that relies on the jQuery library, would provide a more intuitive interface than the current forcing of options onto a plug-in style interface. Out of the three Microsoft-created components, this is by far the least useful and least polished implementation at this point.
jQuery Globalization
http://github.com/jquery/jquery-global

Globalization in JavaScript applications often gets short shrift, partly because JavaScript natively provides little support for formatting and parsing numbers and dates. There are a number of JavaScript libraries out there that offer some support for globalization, but most are limited to a particular slice of it. As .NET developers we're fairly spoiled by the richness of the APIs provided in the framework, and when dealing with client development one really notices the lack of these features.

Even if you don't need to localize your application, the globalization plug-in helps with some basic tasks for non-localized applications: formatting and parsing dates and time values. Dates in particular are problematic in JavaScript, as there are no formatters whatsoever except the .toString() method, which outputs a verbose and next to useless long string. With the globalization plug-in you get a good chunk of the formatting and parsing functionality that the .NET framework provides on the server. You can write code like the following, for example, to format numbers and dates:

var date = new Date();
var output = $.format(date, "MMM. dd, yy") + "\r\n" +
             $.format(date, "d") + "\r\n" +   // 10/25/2010
             $.format(1222.32213, "N2") + "\r\n" +
             $.format(1222.33, "c") + "\r\n";
alert(output);

This becomes even more useful if you combine it with templates, which can also include any JavaScript expressions. Assuming the globalization plug-in is loaded, you can create template expressions that use the $.format function. Here's the template I used earlier for the stock quote again, with a couple of formats applied:

<script id="stockTemplate" type="text/x-jquery-tmpl">
    <div id="divStockQuote" class="errordisplay" style="width: 500px;">
        <div class="label">Company:</div><div><b>${Company}(${Symbol})</b></div>
        <div class="label">Last Price:</div>
        <div>${$.format(LastPrice, "N2")}</div>
        <div class="label">Net Change:</div><div>
            {{if NetChange > 0}}
                <b style="color:green">${NetChange}</b>
            {{else}}
                <b style="color:red">${NetChange}</b>
            {{/if}}
        </div>
        <div class="label">Last Update:</div>
        <div>${$.format(LastQuoteTime, "MMM dd, yyyy")}</div>
    </div>
</script>

There are also parsing methods that can easily parse dates and numbers from strings:

alert($.parseDate("25.10.2010"));
alert($.parseInt("12.222")); // de-DE uses . for thousands separators

As you can see, culture specific options are taken into account when parsing. The globalization plug-in provides rich support for a variety of locales:

Get a list of all available cultures
Query cultures for culture items (like currency symbol, separators etc.)
Localized string names for all calendar related items (days of week, months)
Generated off of .NET's supported locales

In short, you get much of the same functionality that you might already be using in .NET on the server side. The plug-in includes a huge number of locales: a Globalization.all.min.js file that contains the text defaults for each of these locales, as well as small locale specific script files that define each locale's settings. It's highly recommended that you NOT use the huge globalization file that includes all locales, but rather add script references only to those languages you explicitly care about.
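To sketch how culture selection plays into this: the snippet below uses the preferCulture and cultures names as they appeared in the plug-in's documentation at the time – treat them as assumptions and verify against the version of the plug-in you actually load:

// select the active culture; formatting and parsing then follow its rules
// (preferCulture is an assumed API name from the 2010-era plug-in docs)
$.preferCulture("de-DE");

alert($.format(1222.33, "c"));   // currency formatted per German conventions
alert($.parseInt("12.222"));     // 12222 – '.' is the thousands separator

// culture metadata can also be queried directly (assumed structure)
var de = $.cultures["de-DE"];
alert(de.numberFormat.currency.symbol);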
Overall this plug-in is a welcome helper. Even if you use it with a single locale (like en-US) and do no other localization, you'll gain solid support for number and date formatting, which is a vital feature of many applications.

Changes for Microsoft

It's good to see Microsoft coming out of its shell and away from the 'not-built-here' mentality that has been so pervasive in the past. It's especially good to see it applied to jQuery – a technology that has stood in drastic contrast to Microsoft's own internal efforts in terms of design, usage model and… popularity. It's great to see that Microsoft is paying attention to what customers prefer to use and supporting that sentiment – even if it meant drastically changing course and moving into a more open and sharing environment in the process. The additional jQuery support introduced over the last two years has certainly made life easier for many developers on the ASP.NET platform. It's also nice to see Microsoft submitting proposals through the standard jQuery plug-in process and getting them accepted for several very useful projects. Certainly the jQuery Templates plug-in is going to be very useful to many, especially since it will be baked into the jQuery core in jQuery 1.5. I hope we see more of this type of involvement from Microsoft in the future. Kudos!

© Rick Strahl, West Wind Technologies, 2005-2010. Posted in jQuery, ASP.NET


  • postfix: Temporary lookup failure

    - by mk_89
I have followed these tutorials step by step for installing and configuring postfix:
https://help.ubuntu.com/community/Postfix
https://help.ubuntu.com/community/PostfixBasicSetupHowto
When I test the service I get a "Temporary lookup failure" error:

telnet localhost 25
250 2.1.0 Ok
rcpt to: fmaster@localhost
451 4.3.0 <fmaster@localhost>: Temporary lookup failure
rcpt to: info@localhost
451 4.3.0 <info@localhost>: Temporary lookup failure

I have tried searching the web but have found no solutions. Why am I getting this problem?

mail.log

Sep 24 01:03:05 bookcdb postfix/smtpd[21055]: NOQUEUE: reject: RCPT from unknown[::1]: 451 4.3.0 <info@localhost>: Temporary lookup failure; from=<root@localhost> to=<info@localhost> proto=ESMTP helo=<localhost>
Sep 24 01:03:19 bookcdb postfix/smtpd[21055]: NOQUEUE: reject: RCPT from unknown[::1]: 451 4.3.0 <root@localhost>: Temporary lookup failure; from=<root@localhost> to=<root@localhost> proto=ESMTP helo=<localhost>
Sep 24 01:08:19 bookcdb postfix/smtpd[21055]: timeout after RCPT from unknown[::1]
Sep 24 01:08:19 bookcdb postfix/smtpd[21055]: disconnect from unknown[::1]
Sep 24 01:10:49 bookcdb postfix/anvil[21059]: statistics: max connection rate 1/60s for (smtp:::1) at Sep 24 01:00:49
Sep 24 01:10:49 bookcdb postfix/anvil[21059]: statistics: max connection count 1 for (smtp:::1) at Sep 24 01:00:49
Sep 24 01:10:49 bookcdb postfix/anvil[21059]: statistics: max cache size 1 at Sep 24 01:00:49
Sep 24 01:15:36 bookcdb postfix/smtpd[22175]: error: open database /var/lib/mailman/data/aliases.db: No such file or directory
Sep 24 01:15:36 bookcdb postfix/smtpd[22175]: warning: hostname localhost does not resolve to address ::1: No address associated with hostname
Sep 24 01:15:36 bookcdb postfix/smtpd[22175]: connect from unknown[::1]
Sep 24 01:15:55 bookcdb postfix/trivial-rewrite[22195]: error: open database /etc/postfix/transport.db: No such file or directory
Sep 24 01:15:55 bookcdb postfix/trivial-rewrite[22195]: warning: hash:/etc/postfix/transport is unavailable. open database /etc/postfix/transport.db: No such file or directory
Sep 24 01:15:55 bookcdb postfix/trivial-rewrite[22195]: warning: hash:/etc/postfix/transport lookup error for "*"
Sep 24 01:15:55 bookcdb postfix/trivial-rewrite[22195]: warning: hash:/etc/postfix/transport is unavailable. open database /etc/postfix/transport.db: No such file or directory
Sep 24 01:15:55 bookcdb postfix/trivial-rewrite[22195]: warning: hash:/etc/postfix/transport lookup error for "*"
Sep 24 01:15:55 bookcdb postfix/trivial-rewrite[22195]: warning: hash:/etc/postfix/transport is unavailable. open database /etc/postfix/transport.db: No such file or directory
Sep 24 01:15:55 bookcdb postfix/trivial-rewrite[22195]: warning: hash:/etc/postfix/transport lookup error for "root@localhost"
Sep 24 01:15:55 bookcdb postfix/trivial-rewrite[22195]: warning: transport_maps lookup failure
Sep 24 01:15:59 bookcdb postfix/trivial-rewrite[22195]: warning: hash:/etc/postfix/transport is unavailable.
open database /etc/postfix/transport.db: No such file or directory Sep 24 01:15:59 bookcdb postfix/trivial-rewrite[22195]: warning: hash:/etc/postfix/transport lookup error for "fmaster@localhost" Sep 24 01:15:59 bookcdb postfix/trivial-rewrite[22195]: warning: transport_maps lookup failure Sep 24 01:15:59 bookcdb postfix/smtpd[22175]: NOQUEUE: reject: RCPT from unknown[::1]: 451 4.3.0 <fmaster@localhost>: Temporary lookup failure; from=<root@localhost> to=<fmaster@localhost> proto=ESMTP helo=<localhost> Sep 24 01:16:30 postfix/smtpd[22175]: last message repeated 5 times Sep 24 01:16:30 bookcdb postfix/smtpd[22175]: disconnect from unknown[::1] Sep 24 01:19:50 bookcdb postfix/anvil[22177]: statistics: max connection rate 1/60s for (smtp:::1) at Sep 24 01:15:36 Sep 24 01:19:50 bookcdb postfix/anvil[22177]: statistics: max connection count 1 for (smtp:::1) at Sep 24 01:15:36 Sep 24 01:19:50 bookcdb postfix/anvil[22177]: statistics: max cache size 1 at Sep 24 01:15:36 Sep 24 01:20:32 bookcdb postfix/qmgr[21039]: D0C596E0B34: from=<[email protected]>, size=442, nrcpt=1 (queue active) Sep 24 01:20:32 bookcdb postfix/qmgr[21039]: warning: connect to transport private/smtp-amavis: No such file or directory Sep 24 01:20:32 bookcdb postfix/error[22402]: D0C596E0B34: to=<[email protected]>, relay=none, delay=5369, delays=5369/0.01/0/0.09, dsn=4.3.0, status=deferred (mail transport unavailable) Sep 24 01:24:16 bookcdb postfix/smtpd[22573]: error: open database /var/lib/mailman/data/aliases.db: No such file or directory Sep 24 01:24:16 bookcdb postfix/smtpd[22573]: warning: hostname localhost does not resolve to address ::1: No address associated with hostname Sep 24 01:24:16 bookcdb postfix/smtpd[22573]: connect from unknown[::1] Sep 24 01:24:43 bookcdb postfix/trivial-rewrite[22594]: error: open database /etc/postfix/transport.db: No such file or directory Sep 24 01:24:43 bookcdb postfix/trivial-rewrite[22594]: warning: hash:/etc/postfix/transport is unavailable. open database /etc/postfix/transport.db: No such file or directory Sep 24 01:24:43 bookcdb postfix/trivial-rewrite[22594]: warning: hash:/etc/postfix/transport lookup error for "*" Sep 24 01:24:43 bookcdb postfix/trivial-rewrite[22594]: warning: hash:/etc/postfix/transport is unavailable. open database /etc/postfix/transport.db: No such file or directory Sep 24 01:24:43 bookcdb postfix/trivial-rewrite[22594]: warning: hash:/etc/postfix/transport lookup error for "*" Sep 24 01:24:43 bookcdb postfix/trivial-rewrite[22594]: warning: hash:/etc/postfix/transport is unavailable. open database /etc/postfix/transport.db: No such file or directory Sep 24 01:24:43 bookcdb postfix/trivial-rewrite[22594]: warning: hash:/etc/postfix/transport lookup error for "root@localhost" Sep 24 01:24:43 bookcdb postfix/trivial-rewrite[22594]: warning: transport_maps lookup failure Sep 24 01:25:14 bookcdb postfix/trivial-rewrite[22594]: warning: hash:/etc/postfix/transport is unavailable. 
open database /etc/postfix/transport.db: No such file or directory Sep 24 01:25:14 bookcdb postfix/trivial-rewrite[22594]: warning: hash:/etc/postfix/transport lookup error for "[email protected]" Sep 24 01:25:14 bookcdb postfix/trivial-rewrite[22594]: warning: transport_maps lookup failure Sep 24 01:25:14 bookcdb postfix/smtpd[22573]: NOQUEUE: reject: RCPT from unknown[::1]: 451 4.3.0 <[email protected]>: Temporary lookup failure; from=<root@localhost> to=<[email protected]> proto=ESMTP helo=<localhost> Sep 24 01:25:32 bookcdb postfix/qmgr[21039]: 2E5C36E0A07: from=<[email protected]>, size=438, nrcpt=1 (queue active) Sep 24 01:25:32 bookcdb postfix/qmgr[21039]: warning: connect to transport private/smtp-amavis: No such file or directory Sep 24 01:25:32 bookcdb postfix/qmgr[21039]: 0EA3A6E0ACC: from=<[email protected]>, size=438, nrcpt=1 (queue active) Sep 24 01:25:32 bookcdb postfix/error[22631]: 2E5C36E0A07: to=<[email protected]>, orig_to=<root>, relay=none, delay=30203, delays=30203/0.01/0/0.1, dsn=4.3.0, status=deferred (mail transport unavailable) Sep 24 01:25:32 bookcdb postfix/error[22632]: 0EA3A6E0ACC: to=<[email protected]>, orig_to=<root>, relay=none, delay=30115, delays=30115/0.01/0/0.11, dsn=4.3.0, status=deferred (mail transport unavailable) Sep 24 01:25:58 bookcdb postfix/smtpd[22573]: warning: non-SMTP command from unknown[::1]: subject: fdf Sep 24 01:25:58 bookcdb postfix/smtpd[22573]: disconnect from unknown[::1] Sep 24 01:26:01 bookcdb postfix/smtpd[22573]: warning: hostname localhost does not resolve to address ::1: No address associated with hostname Sep 24 01:26:01 bookcdb postfix/smtpd[22573]: connect from unknown[::1] Sep 24 01:26:10 bookcdb postfix/trivial-rewrite[22594]: warning: hash:/etc/postfix/transport is unavailable. open database /etc/postfix/transport.db: No such file or directory Sep 24 01:26:10 bookcdb postfix/trivial-rewrite[22594]: warning: hash:/etc/postfix/transport lookup error for "root@locahost" Sep 24 01:26:10 bookcdb postfix/trivial-rewrite[22594]: warning: transport_maps lookup failure Sep 24 01:26:37 bookcdb postfix/trivial-rewrite[22594]: warning: hash:/etc/postfix/transport is unavailable. open database /etc/postfix/transport.db: No such file or directory Sep 24 01:26:37 bookcdb postfix/trivial-rewrite[22594]: warning: hash:/etc/postfix/transport lookup error for "fmaster@localhost" Sep 24 01:26:37 bookcdb postfix/trivial-rewrite[22594]: warning: transport_maps lookup failure Sep 24 01:26:37 bookcdb postfix/smtpd[22573]: NOQUEUE: reject: RCPT from unknown[::1]: 451 4.3.0 <fmaster@localhost>: Temporary lookup failure; from=<root@locahost> to=<fmaster@localhost> proto=SMTP Sep 24 01:26:45 bookcdb postfix/smtpd[22573]: disconnect from unknown[::1] Sep 24 01:30:05 bookcdb postfix/anvil[22575]: statistics: max connection rate 1/60s for (smtp:::1) at Sep 24 01:24:16 Sep 24 01:30:05 bookcdb postfix/anvil[22575]: statistics: max connection count 1 for (smtp:::1) at Sep 24 01:24:16 Sep 24 01:30:05 bookcdb postfix/anvil[22575]: statistics: max cache size 1 at Sep 24 01:24:16 Sep 24 01:34:57 bookcdb dovecot: master: Dovecot v2.0.19 starting up (core dumps disabled) Sep 24 01:35:02 bookcdb amavis[1009]: starting. /usr/sbin/amavisd-new at mail.bookcdb.com amavisd-new-2.6.5 (20110407), Unicode aware Sep 24 01:35:02 bookcdb amavis[1009]: Perl version 5.014002 Sep 24 01:35:05 bookcdb amavis[1155]: Net::Server: Group Not Defined. Defaulting to EGID '114 114' Sep 24 01:35:05 bookcdb amavis[1155]: Net::Server: User Not Defined. 
Defaulting to EUID '108' Sep 24 01:35:05 bookcdb amavis[1155]: Module Amavis::Conf 2.208 Sep 24 01:35:05 bookcdb amavis[1155]: Module Archive::Zip 1.30 Sep 24 01:35:05 bookcdb amavis[1155]: Module BerkeleyDB 0.49 Sep 24 01:35:05 bookcdb amavis[1155]: Module Compress::Zlib 2.033 Sep 24 01:35:05 bookcdb amavis[1155]: Module Convert::TNEF 0.17 Sep 24 01:35:05 bookcdb amavis[1155]: Module Convert::UUlib 1.4 Sep 24 01:35:05 bookcdb amavis[1155]: Module Crypt::OpenSSL::RSA 0.27 Sep 24 01:35:05 bookcdb amavis[1155]: Module DB_File 1.821 Sep 24 01:35:05 bookcdb amavis[1155]: Module Digest::MD5 2.51 Sep 24 01:35:05 bookcdb amavis[1155]: Module Digest::SHA 5.61 Sep 24 01:35:05 bookcdb amavis[1155]: Module IO::Socket::INET6 2.69 Sep 24 01:35:05 bookcdb amavis[1155]: Module MIME::Entity 5.502 Sep 24 01:35:05 bookcdb amavis[1155]: Module MIME::Parser 5.502 Sep 24 01:35:05 bookcdb amavis[1155]: Module MIME::Tools 5.502 Sep 24 01:35:05 bookcdb amavis[1155]: Module Mail::DKIM::Signer 0.39 Sep 24 01:35:05 bookcdb amavis[1155]: Module Mail::DKIM::Verifier 0.39 Sep 24 01:35:05 bookcdb amavis[1155]: Module Mail::Header 2.08 Sep 24 01:35:05 bookcdb amavis[1155]: Module Mail::Internet 2.08 Sep 24 01:35:05 bookcdb amavis[1155]: Module Mail::SPF v2.008 Sep 24 01:35:05 bookcdb amavis[1155]: Module Mail::SpamAssassin 3.003002 Sep 24 01:35:05 bookcdb amavis[1155]: Module Net::DNS 0.66 Sep 24 01:35:05 bookcdb amavis[1155]: Module Net::Server 0.99 Sep 24 01:35:05 bookcdb amavis[1155]: Module NetAddr::IP 4.058 Sep 24 01:35:05 bookcdb amavis[1155]: Module Socket6 0.23 Sep 24 01:35:05 bookcdb amavis[1155]: Module Time::HiRes 1.972101 Sep 24 01:35:05 bookcdb amavis[1155]: Module URI 1.59 Sep 24 01:35:05 bookcdb amavis[1155]: Module Unix::Syslog 1.1 Sep 24 01:35:05 bookcdb amavis[1155]: Amavis::DB code loaded Sep 24 01:35:05 bookcdb amavis[1155]: Amavis::Cache code loaded Sep 24 01:35:05 bookcdb amavis[1155]: SQL base code NOT loaded Sep 24 01:35:05 bookcdb amavis[1155]: SQL::Log code NOT loaded Sep 24 01:35:05 bookcdb amavis[1155]: SQL::Quarantine NOT loaded Sep 24 01:35:05 bookcdb amavis[1155]: Lookup::SQL code NOT loaded Sep 24 01:35:05 bookcdb amavis[1155]: Lookup::LDAP code NOT loaded Sep 24 01:35:05 bookcdb amavis[1155]: AM.PDP-in proto code loaded Sep 24 01:35:05 bookcdb amavis[1155]: SMTP-in proto code loaded Sep 24 01:35:05 bookcdb amavis[1155]: Courier proto code NOT loaded Sep 24 01:35:05 bookcdb amavis[1155]: SMTP-out proto code loaded Sep 24 01:35:05 bookcdb amavis[1155]: Pipe-out proto code NOT loaded Sep 24 01:35:05 bookcdb amavis[1155]: BSMTP-out proto code NOT loaded Sep 24 01:35:05 bookcdb amavis[1155]: Local-out proto code loaded Sep 24 01:35:05 bookcdb amavis[1155]: OS_Fingerprint code NOT loaded Sep 24 01:35:05 bookcdb amavis[1155]: ANTI-VIRUS code loaded Sep 24 01:35:05 bookcdb amavis[1155]: ANTI-SPAM code loaded Sep 24 01:35:05 bookcdb amavis[1155]: ANTI-SPAM-EXT code NOT loaded Sep 24 01:35:05 bookcdb amavis[1155]: ANTI-SPAM-C code NOT loaded Sep 24 01:35:05 bookcdb amavis[1155]: ANTI-SPAM-SA code loaded Sep 24 01:35:05 bookcdb amavis[1155]: Unpackers code loaded Sep 24 01:35:05 bookcdb amavis[1155]: DKIM code loaded Sep 24 01:35:05 bookcdb amavis[1155]: Tools code NOT loaded Sep 24 01:35:05 bookcdb amavis[1155]: Found $file at /usr/bin/file Sep 24 01:35:05 bookcdb amavis[1155]: No $altermime, not using it Sep 24 01:35:05 bookcdb amavis[1155]: Internal decoder for .mail Sep 24 01:35:05 bookcdb amavis[1155]: No decoder for .F Sep 24 01:35:05 bookcdb amavis[1155]: Found decoder for .Z at 
/bin/uncompress Sep 24 01:35:05 bookcdb amavis[1155]: Internal decoder for .gz Sep 24 01:35:05 bookcdb amavis[1155]: Found decoder for .bz2 at /bin/bzip2 -d Sep 24 01:35:05 bookcdb amavis[1155]: No decoder for .lzo tried: lzop -d Sep 24 01:35:05 bookcdb amavis[1155]: No decoder for .rpm tried: rpm2cpio.pl, rpm2cpio Sep 24 01:35:05 bookcdb amavis[1155]: Found decoder for .cpio at /bin/pax Sep 24 01:35:05 bookcdb amavis[1155]: Found decoder for .tar at /bin/pax Sep 24 01:35:05 bookcdb amavis[1155]: Found decoder for .deb at /usr/bin/ar Sep 24 01:35:05 bookcdb amavis[1155]: Internal decoder for .zip Sep 24 01:35:05 bookcdb amavis[1155]: No decoder for .7z tried: 7zr, 7za, 7z Sep 24 01:35:05 bookcdb amavis[1155]: No decoder for .rar tried: unrar-free Sep 24 01:35:05 bookcdb amavis[1155]: No decoder for .arj tried: arj, unarj Sep 24 01:35:05 bookcdb amavis[1155]: No decoder for .arc tried: nomarch, arc Sep 24 01:35:05 bookcdb amavis[1155]: No decoder for .zoo tried: zoo Sep 24 01:35:05 bookcdb amavis[1155]: No decoder for .lha Sep 24 01:35:05 bookcdb amavis[1155]: No decoder for .doc tried: ripole Sep 24 01:35:05 bookcdb amavis[1155]: No decoder for .cab tried: cabextract Sep 24 01:35:05 bookcdb amavis[1155]: No decoder for .tnef Sep 24 01:35:05 bookcdb amavis[1155]: Internal decoder for .tnef Sep 24 01:35:05 bookcdb amavis[1155]: No decoder for .exe tried: unrar-free; arj, unarj Sep 24 01:35:05 bookcdb amavis[1155]: Using primary internal av scanner code for ClamAV-clamd Sep 24 01:35:05 bookcdb amavis[1155]: Found secondary av scanner ClamAV-clamscan at /usr/bin/clamscan Sep 24 01:35:05 bookcdb amavis[1155]: Creating db in /var/lib/amavis/db/; BerkeleyDB 0.49, libdb 5.1 Sep 24 01:35:05 bookcdb postgrey[1219]: Process Backgrounded Sep 24 01:35:05 bookcdb postgrey[1219]: 2012/09/24-01:35:05 postgrey (type Net::Server::Multiplex) starting! pid(1219) Sep 24 01:35:05 bookcdb postgrey[1219]: Using default listen value of 128 Sep 24 01:35:05 bookcdb postgrey[1219]: Binding to TCP port 10023 on host localhost#012 Sep 24 01:35:05 bookcdb postgrey[1219]: Setting gid to "116 116" Sep 24 01:35:05 bookcdb postgrey[1219]: Setting uid to "110" Sep 24 01:35:06 bookcdb spamd[1231]: logger: removing stderr method Sep 24 01:35:08 bookcdb spamd[1233]: spamd: server started on port 783/tcp (running version 3.3.2) Sep 24 01:35:08 bookcdb spamd[1233]: spamd: server pid: 1233 Sep 24 01:35:08 bookcdb spamd[1233]: spamd: server successfully spawned child process, pid 1238 Sep 24 01:35:08 bookcdb spamd[1233]: spamd: server successfully spawned child process, pid 1240 Sep 24 01:35:08 bookcdb spamd[1233]: prefork: child states: SI Sep 24 01:35:08 bookcdb spamd[1233]: prefork: child states: II Sep 24 01:35:15 bookcdb postfix/master[1729]: daemon started -- version 2.9.3, configuration /etc/postfix Sep 24 01:36:08 bookcdb postfix/smtpd[1995]: error: open database /var/lib/mailman/data/aliases.db: No such file or directory Sep 24 01:36:08 bookcdb postfix/smtpd[1995]: warning: hostname localhost does not resolve to address ::1: No address associated with hostname Sep 24 01:36:08 bookcdb postfix/smtpd[1995]: connect from unknown[::1] Sep 24 01:36:51 bookcdb postfix/trivial-rewrite[1999]: error: open database /etc/postfix/transport.db: No such file or directory Sep 24 01:36:51 bookcdb postfix/trivial-rewrite[1999]: warning: hash:/etc/postfix/transport is unavailable. 
open database /etc/postfix/transport.db: No such file or directory Sep 24 01:36:51 bookcdb postfix/trivial-rewrite[1999]: warning: hash:/etc/postfix/transport lookup error for "*" Sep 24 01:36:51 bookcdb postfix/trivial-rewrite[1999]: warning: hash:/etc/postfix/transport is unavailable. open database /etc/postfix/transport.db: No such file or directory Sep 24 01:36:51 bookcdb postfix/trivial-rewrite[1999]: warning: hash:/etc/postfix/transport lookup error for "*" Sep 24 01:36:51 bookcdb postfix/trivial-rewrite[1999]: warning: hash:/etc/postfix/transport is unavailable. open database /etc/postfix/transport.db: No such file or directory Sep 24 01:36:51 bookcdb postfix/trivial-rewrite[1999]: warning: hash:/etc/postfix/transport lookup error for "root@localhost" Sep 24 01:36:51 bookcdb postfix/trivial-rewrite[1999]: warning: transport_maps lookup failure Sep 24 01:37:00 bookcdb postfix/trivial-rewrite[1999]: warning: hash:/etc/postfix/transport is unavailable. open database /etc/postfix/transport.db: No such file or directory Sep 24 01:37:00 bookcdb postfix/trivial-rewrite[1999]: warning: hash:/etc/postfix/transport lookup error for "fmaster@localhost" Sep 24 01:37:00 bookcdb postfix/trivial-rewrite[1999]: warning: transport_maps lookup failure Sep 24 01:37:00 bookcdb postfix/smtpd[1995]: NOQUEUE: reject: RCPT from unknown[::1]: 451 4.3.0 <fmaster@localhost>: Temporary lookup failure; from=<root@localhost> to=<fmaster@localhost> proto=SMTP Sep 24 01:37:28 bookcdb dovecot: imap-login: Login: user=<mkadiri89>, method=PLAIN, rip=::1, lip=::1, mpid=2730, secured Sep 24 01:37:28 bookcdb dovecot: imap(mkadiri89): Disconnected: Logged out bytes=44/697 Sep 24 01:37:29 bookcdb dovecot: imap-login: Login: user=<mkadiri89>, method=PLAIN, rip=::1, lip=::1, mpid=2732, secured Sep 24 01:37:29 bookcdb dovecot: imap(mkadiri89): Disconnected: Logged out bytes=464/1303 Sep 24 01:37:29 bookcdb dovecot: imap-login: Login: user=<mkadiri89>, method=PLAIN, rip=::1, lip=::1, mpid=2734, secured Sep 24 01:37:29 bookcdb dovecot: imap(mkadiri89): Disconnected: Logged out bytes=117/1395 Sep 24 01:37:31 bookcdb dovecot: imap-login: Login: user=<mkadiri89>, method=PLAIN, rip=::1, lip=::1, mpid=2737, secured Sep 24 01:37:31 bookcdb dovecot: imap(mkadiri89): Disconnected: Logged out bytes=117/1395 Sep 24 01:37:49 bookcdb dovecot: imap-login: Login: user=<root>, method=PLAIN, rip=::1, lip=::1, mpid=2739, secured Sep 24 01:37:49 bookcdb dovecot: imap: Error: user root: Invalid settings in userdb: userdb returned 0 as uid Sep 24 01:37:49 bookcdb dovecot: imap: Error: Invalid user settings. Refer to server log for more information. Sep 24 01:37:54 bookcdb dovecot: imap-login: Login: user=<root>, method=PLAIN, rip=::1, lip=::1, mpid=2741, secured Sep 24 01:37:54 bookcdb dovecot: imap: Error: user root: Invalid settings in userdb: userdb returned 0 as uid Sep 24 01:37:54 bookcdb dovecot: imap: Error: Invalid user settings. Refer to server log for more information. 
Sep 24 01:38:04 bookcdb dovecot: imap-login: Login: user=<info>, method=PLAIN, rip=::1, lip=::1, mpid=2743, secured Sep 24 01:38:04 bookcdb dovecot: imap(info): Disconnected: Logged out bytes=44/697 Sep 24 01:38:04 bookcdb dovecot: imap-login: Login: user=<info>, method=PLAIN, rip=::1, lip=::1, mpid=2745, secured Sep 24 01:38:04 bookcdb dovecot: imap(info): Disconnected: Logged out bytes=464/1303 Sep 24 01:38:04 bookcdb dovecot: imap-login: Login: user=<info>, method=PLAIN, rip=::1, lip=::1, mpid=2747, secured Sep 24 01:38:04 bookcdb dovecot: imap(info): Disconnected: Logged out bytes=117/1395 Sep 24 01:38:55 bookcdb postfix/smtpd[1995]: disconnect from unknown[::1] Sep 24 01:38:58 bookcdb postfix/smtpd[1995]: warning: hostname localhost does not resolve to address ::1: No address associated with hostname Sep 24 01:38:58 bookcdb postfix/smtpd[1995]: connect from unknown[::1] Sep 24 01:39:11 bookcdb postfix/trivial-rewrite[1999]: warning: hash:/etc/postfix/transport is unavailable. open database /etc/postfix/transport.db: No such file or directory Sep 24 01:39:11 bookcdb postfix/trivial-rewrite[1999]: warning: hash:/etc/postfix/transport lookup error for "info@localhost" Sep 24 01:39:11 bookcdb postfix/trivial-rewrite[1999]: warning: transport_maps lookup failure Sep 24 01:39:37 bookcdb postfix/smtpd[1995]: NOQUEUE: reject: RCPT from unknown[::1]: 451 4.3.0 <fmaster@localhost>: Temporary lookup failure; from=<info@localhost> to=<fmaster@localhost> proto=SMTP Sep 24 01:39:47 bookcdb postfix/smtpd[1995]: NOQUEUE: reject: RCPT from unknown[::1]: 451 4.3.0 <fmaster@localhost>: Temporary lookup failure; from=<info@localhost> to=<fmaster@localhost> proto=SMTP Sep 24 01:40:13 bookcdb postfix/smtpd[1995]: NOQUEUE: reject: RCPT from unknown[::1]: 451 4.3.0 <info@localhost>: Temporary lookup failure; from=<info@localhost> to=<info@localhost> proto=SMTP Sep 24 01:43:08 bookcdb postfix/smtpd[1995]: disconnect from unknown[::1] Sep 24 01:46:08 bookcdb postfix/anvil[1998]: statistics: max connection rate 1/60s for (smtp:::1) at Sep 24 01:36:08 Sep 24 01:46:08 bookcdb postfix/anvil[1998]: statistics: max connection count 1 for (smtp:::1) at Sep 24 01:36:08 Sep 24 01:46:08 bookcdb postfix/anvil[1998]: statistics: max cache size 1 at Sep 24 01:36:08 Sep 24 01:48:05 bookcdb dovecot: imap-login: Login: user=<info>, method=PLAIN, rip=::1, lip=::1, mpid=2805, secured Sep 24 01:48:05 bookcdb dovecot: imap(info): Disconnected: Logged out bytes=85/681 Sep 24 01:51:30 bookcdb dovecot: imap-login: Login: user=<info>, method=PLAIN, rip=::1, lip=::1, mpid=2815, secured Sep 24 01:51:30 bookcdb dovecot: imap(info): Disconnected: Logged out bytes=117/1395 Sep 24 02:05:15 bookcdb postfix/qmgr[1745]: 2EA006E0B32: from=<[email protected]>, size=439, nrcpt=1 (queue active) Sep 24 02:05:15 bookcdb postfix/qmgr[1745]: warning: connect to transport private/smtp-amavis: No such file or directory Sep 24 02:05:15 bookcdb postfix/qmgr[1745]: E76996E09B2: from=<[email protected]>, size=439, nrcpt=1 (queue active) Sep 24 02:05:15 bookcdb postfix/error[2842]: 2EA006E0B32: to=<[email protected]>, relay=none, delay=8391, delays=8391/0.05/0/0.09, dsn=4.3.0, status=deferred (mail transport unavailable) Sep 24 02:05:16 bookcdb postfix/error[2843]: E76996E09B2: to=<[email protected]>, relay=none, delay=8416, delays=8416/0.03/0/0.11, dsn=4.3.0, status=deferred (mail transport unavailable) Sep 24 02:30:15 bookcdb postfix/qmgr[1745]: D0C596E0B34: from=<[email protected]>, size=442, nrcpt=1 (queue active) Sep 24 02:30:15 bookcdb 
postfix/qmgr[1745]: warning: connect to transport private/smtp-amavis: No such file or directory
Sep 24 02:30:15 bookcdb postfix/error[2914]: D0C596E0B34: to=<[email protected]>, relay=none, delay=9551, delays=9551/0.01/0/0.08, dsn=4.3.0, status=deferred (mail transport unavailable)
Sep 24 02:35:15 bookcdb postfix/qmgr[1745]: 2E5C36E0A07: from=<[email protected]>, size=438, nrcpt=1 (queue active)
Sep 24 02:35:15 bookcdb postfix/qmgr[1745]: warning: connect to transport private/smtp-amavis: No such file or directory
Sep 24 02:35:15 bookcdb postfix/qmgr[1745]: 0EA3A6E0ACC: from=<[email protected]>, size=438, nrcpt=1 (queue active)
Sep 24 02:35:15 bookcdb postfix/error[2926]: 2E5C36E0A07: to=<[email protected]>, orig_to=<root>, relay=none, delay=34386, delays=34386/0.03/0/0.1, dsn=4.3.0, status=deferred (mail transport unavailable)
Sep 24 02:35:15 bookcdb postfix/error[2927]: 0EA3A6E0ACC: to=<[email protected]>, orig_to=<root>, relay=none, delay=34299, delays=34298/0.02/0/0.12, dsn=4.3.0, status=deferred (mail transport unavailable)
Sep 24 03:15:15 bookcdb postfix/qmgr[1745]: 2EA006E0B32: from=<[email protected]>, size=439, nrcpt=1 (queue active)
Sep 24 03:15:15 bookcdb postfix/qmgr[1745]: warning: connect to transport private/smtp-amavis: No such file or directory
Sep 24 03:15:15 bookcdb postfix/qmgr[1745]: E76996E09B2: from=<[email protected]>, size=439, nrcpt=1 (queue active)
Sep 24 03:15:15 bookcdb postfix/error[3025]: 2EA006E0B32: to=<[email protected]>, relay=none, delay=12590, delays=12590/0.01/0/0.07, dsn=4.3.0, status=deferred (mail transport unavailable)
Sep 24 03:15:15 bookcdb postfix/error[3026]: E76996E09B2: to=<[email protected]>, relay=none, delay=12616, delays=12616/0.01/0/0.09, dsn=4.3.0, status=deferred (mail transport unavailable)
Sep 24 03:40:15 bookcdb postfix/qmgr[1745]: D0C596E0B34: from=<[email protected]>, size=442, nrcpt=1 (queue active)
Sep 24 03:40:15 bookcdb postfix/qmgr[1745]: warning: connect to transport private/smtp-amavis: No such file or directory
Sep 24 03:40:15 bookcdb postfix/error[3097]: D0C596E0B34: to=<[email protected]>, relay=none, delay=13752, delays=13752/0.01/0/0.07, dsn=4.3.0, status=deferred (mail transport unavailable)
Sep 24 03:45:15 bookcdb postfix/qmgr[1745]: 2E5C36E0A07: from=<[email protected]>, size=438, nrcpt=1 (queue active)
Sep 24 03:45:15 bookcdb postfix/qmgr[1745]: warning: connect to transport private/smtp-amavis: No such file or directory
Sep 24 03:45:15 bookcdb postfix/qmgr[1745]: 0EA3A6E0ACC: from=<[email protected]>, size=438, nrcpt=1 (queue active)
Sep 24 03:45:15 bookcdb postfix/error[3129]: 2E5C36E0A07: to=<[email protected]>, orig_to=<root>, relay=none, delay=38586, delays=38586/0.01/0/0.09, dsn=4.3.0, status=deferred (mail transport unavailable)
Sep 24 03:45:15 bookcdb postfix/error[3130]: 0EA3A6E0ACC: to=<[email protected]>, orig_to=<root>, relay=none, delay=38498, delays=38498/0.01/0/0.08, dsn=4.3.0, status=deferred (mail transport unavailable)

postconf -n

alias_database = hash:/etc/aliases
alias_maps = hash:/etc/aliases, hash:/var/lib/mailman/data/aliases
append_dot_mydomain = no
biff = no
broken_sasl_auth_clients = yes
config_directory = /etc/postfix
content_filter = smtp-amavis:[127.0.0.1]:10024
home_mailbox = Maildir/
inet_interfaces = all
inet_protocols = all
mailbox_command =
mailbox_size_limit = 0
mailman_destination_recipient_limit = 1
mydestination = server1.bookcdb.com, bookcdb.com, localhost.bookcdb.com, localhost, bookcdb.com
myhostname = server1.bookcdb.com
mynetworks = 127.0.0.0/8
myorigin = /etc/mailname
readme_directory = no
recipient_delimiter = +
relay_domains = lists.bookcdb.com
relayhost =
smtp_tls_note_starttls_offer = yes
smtp_tls_security_level = may
smtp_tls_session_cache_database = btree:${data_directory}/smtp_scache
smtpd_banner = $myhostname ESMTP $mail_name (Ubuntu)
smtpd_recipient_restrictions = permit_sasl_authenticated,permit_mynetworks,reject_unauth_destination
smtpd_sasl_auth_enable = yes
smtpd_sasl_local_domain =
smtpd_sasl_security_options = noanonymous
smtpd_tls_CAfile = /etc/ssl/certs/cacert.pem
smtpd_tls_auth_only = no
smtpd_tls_cert_file = /etc/ssl/certs/smtpd.crt
smtpd_tls_key_file = /etc/ssl/private/smtpd.key
smtpd_tls_loglevel = 1
smtpd_tls_received_header = yes
smtpd_tls_security_level = may
smtpd_tls_session_cache_database = btree:${data_directory}/smtpd_scache
smtpd_tls_session_cache_timeout = 3600s
smtpd_use_tls = yes
tls_random_source = dev:/dev/urandom
transport_maps = hash:/etc/postfix/transport


  • Using FiddlerCore to capture HTTP Requests with .NET

    - by Rick Strahl
Over the last few weeks I've been working on my Web load testing utility, West Wind WebSurge. One of the key components of a load testing tool is the ability to capture URLs effectively so that you can play them back later under load. One of the options in WebSurge for capturing URLs is its built-in capture tool, which acts as an HTTP proxy to capture HTTP and HTTPS traffic from most Windows HTTP clients, including Web browsers as well as standalone Windows applications and services.

To make this happen, I used Eric Lawrence's awesome FiddlerCore library, which provides most of the functionality of his desktop Fiddler application, all rolled into an easy to use library that you can plug into your own applications. FiddlerCore makes it almost too easy to capture HTTP content! For WebSurge I needed to capture all HTTP traffic in order to capture the full HTTP request – URL, headers and any content posted by the client. The result of what I ended up creating is a semi-generic capture form.

In this post I'm going to demonstrate how easy it is to use FiddlerCore to build this HTTP capture form. If you want to jump right in, here are the links to get Telerik's FiddlerCore and the code for the demo provided here:

FiddlerCore Download
FiddlerCore on NuGet
Show me the Code (WebSurge Integration code from GitHub)
Download the WinForms Sample Form
West Wind Web Surge (example implementation in live app)

Note that FiddlerCore is bound by a license for commercial usage – see license.txt in the FiddlerCore distribution for details.

Integrating FiddlerCore

FiddlerCore is a library that simply plugs into your application. You can download it from the Telerik site and manually add the assemblies to your project, or you can simply install the NuGet package via:

PM> Install-Package FiddlerCore

The library consists of FiddlerCore.dll as well as a couple of support libraries (CertMaker.dll and BCMakeCert.dll) that are used for installing SSL certificates. I'll have more on SSL captures and certificate installation later in this post. But first let's see how easy it is to use FiddlerCore to capture HTTP content by looking at how to build the above capture form.

Capturing HTTP Content

Once the library is installed it's super easy to hook up Fiddler functionality. Fiddler includes a number of static class methods on the FiddlerApplication object that can be called to hook up callback events as well as to actually start monitoring HTTP URLs. In the following code, directly lifted from WebSurge, I configure a few filter options on a form-level capture options object from the user inputs shown on the form. In the live application these settings are persisted configuration values, but in the demo they are one-time values initialized and set on the form.
Once these options are set, I hook up the AfterSessionComplete event to capture every URL that passes through the proxy after the request is completed, and start up the proxy service:

void Start()
{
    if (tbIgnoreResources.Checked)
        CaptureConfiguration.IgnoreResources = true;
    else
        CaptureConfiguration.IgnoreResources = false;

    string strProcId = txtProcessId.Text;
    if (strProcId.Contains('-'))
        strProcId = strProcId.Substring(strProcId.IndexOf('-') + 1).Trim();
    strProcId = strProcId.Trim();

    int procId = 0;
    if (!string.IsNullOrEmpty(strProcId))
    {
        if (!int.TryParse(strProcId, out procId))
            procId = 0;
    }
    CaptureConfiguration.ProcessId = procId;
    CaptureConfiguration.CaptureDomain = txtCaptureDomain.Text;

    FiddlerApplication.AfterSessionComplete += FiddlerApplication_AfterSessionComplete;
    FiddlerApplication.Startup(8888, true, true, true);
}

The key lines for FiddlerCore are just the last two lines of code, which include the event hookup as well as the Startup() method call. Here I only hook up to the AfterSessionComplete event, but there are a number of other events that hook into various stages of the HTTP request cycle, including BeforeRequest, BeforeResponse, RequestHeadersAvailable, ResponseHeadersAvailable and so on.

In my case I want to capture the request data, and I actually have several options for capturing it. AfterSessionComplete is the last event that fires in the request sequence, and it's the most common choice for capturing content because it's the one place where you can look at both the request and the response data.

The implementation of AfterSessionComplete is responsible for capturing all HTTP request headers, and it looks something like this:

private void FiddlerApplication_AfterSessionComplete(Session sess)
{
    // Ignore HTTPS connect requests
    if (sess.RequestMethod == "CONNECT")
        return;

    if (CaptureConfiguration.ProcessId > 0)
    {
        if (sess.LocalProcessID != 0 && sess.LocalProcessID != CaptureConfiguration.ProcessId)
            return;
    }

    if (!string.IsNullOrEmpty(CaptureConfiguration.CaptureDomain))
    {
        if (sess.hostname.ToLower() != CaptureConfiguration.CaptureDomain.Trim().ToLower())
            return;
    }

    if (CaptureConfiguration.IgnoreResources)
    {
        string url = sess.fullUrl.ToLower();

        var extensions = CaptureConfiguration.ExtensionFilterExclusions;
        foreach (var ext in extensions)
        {
            if (url.Contains(ext))
                return;
        }

        var filters = CaptureConfiguration.UrlFilterExclusions;
        foreach (var urlFilter in filters)
        {
            if (url.Contains(urlFilter))
                return;
        }
    }

    if (sess == null || sess.oRequest == null || sess.oRequest.headers == null)
        return;

    string headers = sess.oRequest.headers.ToString();
    var reqBody = sess.GetRequestBodyAsString();

    // if you wanted to capture the response
    //string respHeaders = session.oResponse.headers.ToString();
    //var respBody = session.GetResponseBodyAsString();

    // replace the HTTP line to inject full URL
    string firstLine = sess.RequestMethod + " " + sess.fullUrl + " " + sess.oRequest.headers.HTTPVersion;
    int at = headers.IndexOf("\r\n");
    if (at < 0)
        return;
    headers = firstLine + "\r\n" + headers.Substring(at + 1);

    string output = headers + "\r\n" +
                    (!string.IsNullOrEmpty(reqBody) ? reqBody + "\r\n" : string.Empty) +
                    Separator + "\r\n\r\n";

    BeginInvoke(new Action<string>((text) =>
    {
        txtCapture.AppendText(text);
        UpdateButtonStatus();
    }), output);
}

The code starts by filtering out some requests based on the CaptureOptions I set before the capture is started. These options/filters are applied when requests actually come in, which is very useful to help narrow down the requests that are captured for playback based on options the user picked. I find it useful to limit requests to a certain domain for captures, as well as to filter out some request types like static resources – images, css, scripts etc. This is of course optional, but I think it's a common scenario and WebSurge makes good use of it.

AfterSessionComplete, like other FiddlerCore events, provides a Session object parameter which contains all the request and response details. There are oRequest and oResponse objects to hold their respective data. In my case I'm interested in the raw request headers and body only; as you can see in the commented code, you can also retrieve the response headers and body. Here the code captures the request headers and body and simply appends the output to the textbox on the screen.

Note that the Fiddler events are asynchronous, so in order to display the content in the UI it has to be marshaled back to the UI thread with BeginInvoke, which here simply takes the generated headers and appends them to the existing textbox text on the form. As each request is processed, the headers are captured and appended to the bottom of the textbox, resulting in a session HTTP capture in the format WebSurge internally supports – basically raw request headers with a customized first HTTP header line that includes the full URL rather than a server relative URL.

When the capture is done the user can either copy the raw HTTP session to the clipboard, or directly save it to a file. This raw capture format is the same format WebSurge and also Fiddler use to import/export request data. While this code is application specific, it demonstrates the kind of logic that you can easily apply to the request capture process, which is one of the reasons why FiddlerCore is so powerful. You get to choose what content you want to look at as part of your own application logic, and you can then decide how to capture or use that data as part of your application. The actual captured data in this case is only a string. The user can edit the data by hand or, in the case of WebSurge, save it to disk and automatically open the captured session as a new load test.

Stopping the FiddlerCore Proxy

Finally, to stop capturing requests you simply disconnect the event handler and call the FiddlerApplication.Shutdown() method:

void Stop()
{
    FiddlerApplication.AfterSessionComplete -= FiddlerApplication_AfterSessionComplete;

    if (FiddlerApplication.IsStarted())
        FiddlerApplication.Shutdown();
}

As you can see, adding HTTP capture functionality to an application is very straightforward. FiddlerCore offers tons of features I'm not even touching on here – I suspect basic captures are the most common scenario, but a lot of different things can be done with FiddlerCore's simple API interface. Sky's the limit! The source code for this sample capture form (WinForms) is provided as part of this article.
Adding Fiddler Certificates with FiddlerCore

One of the sticking points in West Wind WebSurge has been that if you wanted to capture HTTPS/SSL traffic, you needed to have the full version of Fiddler installed with HTTPS decryption enabled. Essentially you had to use Fiddler to configure HTTPS decryption and the associated installation of the local Fiddler client certificate that is used to decrypt incoming SSL traffic. While this works just fine, requiring Fiddler to be installed and then using a separate application to configure the SSL functionality isn't ideal. Fortunately, FiddlerCore itself includes the tools to register the Fiddler certificate directly.

Why does Fiddler need a Certificate in the first Place?

Fiddler and FiddlerCore are essentially HTTP proxies, which means they inject themselves into the HTTP conversation by re-routing HTTP traffic to a special HTTP port (8888 by default for Fiddler) and then forwarding the HTTP data to the original destination. Fiddler injects itself as the system proxy using the WinInet Windows settings, which are the same settings that Internet Explorer uses and that are configured in the Windows and Internet Explorer Internet Settings dialog. Most HTTP clients running on Windows pick up and apply these system level proxy settings before establishing new HTTP connections, and that's why most clients automatically work once Fiddler – or FiddlerCore/WebSurge – is running.

For plain HTTP requests this just works – Fiddler intercepts the HTTP requests on the proxy port and then forwards them to the original port (typically 80 for HTTP and 443 for SSL, but it could be any port). For SSL, however, this is not quite as simple – Fiddler can easily act as an HTTPS/SSL client to capture inbound requests from the server, but when it forwards the request to the client it has to also act as an SSL server and provide a certificate that the client trusts. This won't be the original certificate from the remote site, but rather a custom local certificate that effectively simulates an SSL connection between the proxy and the client. If no custom certificate is configured for Fiddler, the SSL request fails with a certificate validation error. The key for this to work is that a custom certificate has to be installed on the local machine that the HTTPS client trusts. For a much more detailed description of the process you can check out Eric Lawrence's blog post on certificates.

If you're using the desktop version of Fiddler you can install a local certificate into the Windows certificate store. Fiddler proper does this from the Options menu. This operation does several things:

It installs the Fiddler root certificate
It sets trust to this root certificate
A new client certificate is generated for each HTTPS site monitored

Certificate Installation with FiddlerCore

You can also provide this same functionality using FiddlerCore, which includes a CertMaker class.
CertMaker is straightforward to use and provides an easy way to create some simple helpers that can install and uninstall a Fiddler Root certificate:

public static bool InstallCertificate()
{
    if (!CertMaker.rootCertExists())
    {
        if (!CertMaker.createRootCert())
            return false;

        if (!CertMaker.trustRootCert())
            return false;
    }

    return true;
}

public static bool UninstallCertificate()
{
    if (CertMaker.rootCertExists())
    {
        if (!CertMaker.removeFiddlerGeneratedCerts(true))
            return false;
    }

    return true;
}

InstallCertificate() works by first checking whether the root certificate is already installed and, if it isn't, goes ahead and creates a new one. Creating the certificate is a two-step process – first the actual certificate is created and then it's moved into the certificate store to become trusted. I'm not sure why you'd ever split these operations up, since a cert created without trust isn't going to be of much value, but there are two distinct steps.

When you trigger the trustRootCert() method, a message box will pop up on the desktop that lets you know that you're about to trust a local private certificate. This is a security feature to ensure that you really want to trust the Fiddler root, since you are essentially installing a man-in-the-middle certificate. It's quite safe to use this generated root certificate, because it's been specifically generated for your machine and thus is not usable from external sources; the only way to use this certificate in a trusted way is from the local machine. In other words, unless somebody has physical access to your machine, there's no useful way to hijack this certificate and use it for nefarious purposes (see Eric's post for more details).

Once the Root certificate has been installed, FiddlerCore/Fiddler create new certificates for each site that is connected to with HTTPS. You can end up with quite a few temporary certificates in your certificate store. To uninstall you can either use Fiddler and simply uncheck the Decrypt HTTPS traffic option followed by the remove Fiddler certificates button, or you can use FiddlerCore's CertMaker.removeFiddlerGeneratedCerts(), which removes the root cert and any of the intermediary certificates Fiddler created. Keep in mind that when you uninstall, you uninstall the certificate for both FiddlerCore and Fiddler, so use UninstallCertificate() with care and realize that you might affect the Fiddler application's operation by doing so as well.

When to check for an installed Certificate

Note that the check to see if the root certificate exists is pretty fast, while the actual process of installing the certificate is a relatively slow operation that even on a fast machine takes a few seconds. Further, the trust operation pops up a message box, so you probably don't want to install the certificate repeatedly. Since the check for the root certificate is fast, you can easily put a call to InstallCertificate() in any capture startup code – in which case the certificate installation only triggers when a certificate is in fact not installed.

Personally I like to make certificate installation explicit – just like Fiddler does – so in WebSurge I use a small drop-down option on the menu to install or uninstall the SSL certificate. This code calls the InstallCertificate and UninstallCertificate functions respectively – the experience is similar to what you get in Fiddler, with the extra dialog box popping up to prompt confirmation for installation of the root certificate.
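As a small usage sketch, and assumed code rather than the article's own, the helper above can gate SSL capture at startup so that decryption is only enabled once a trusted root is actually in place. The port number and the boolean Startup() overload are assumptions here:

public static bool StartSslCapture()
{
    // InstallCertificate() is the helper shown above; it prompts for trust
    // the first time and is a cheap no-op once the root cert exists.
    if (!InstallCertificate())
        return false;  // creation failed or the user declined to trust the root

    // Start the proxy with SSL decryption enabled
    FiddlerApplication.Startup(8888, true /* system proxy */, true /* decrypt SSL */);
    return true;
}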
Once the cert is installed you can then capture SSL requests. There's a gotcha however…

Gotcha: FiddlerCore Certificates don't stick by Default

When I originally tried to use the Fiddler certificate installation I ran into an odd problem. I was able to install the certificate and immediately after installation was able to capture HTTPS requests. Then I would exit the application, come back in, try the same HTTPS capture again, and it would fail due to a missing certificate. CertMaker.rootCertExists() would return false after every restart, and if I re-installed the certificate a new certificate would get added to the certificate store, resulting in a bunch of duplicated root certificates with different keys. What the heck?

CertMaker and BcMakeCert create non-sticky Certificates

It turns out that FiddlerCore by default uses different components from what the full version of Fiddler uses. Fiddler uses a Windows utility called MakeCert.exe to create the Fiddler Root certificate. FiddlerCore however installs the CertMaker.dll and BCMakeCert.dll assemblies, which use a different crypto library (Bouncy Castle) for certificate creation than MakeCert.exe, which uses the Windows Crypto API. The assemblies provide support for non-Windows operation for Fiddler under Mono, as well as support for some non-Windows certificate platforms like iOS and Android for decryption. The bottom line is that the FiddlerCore-provided Bouncy Castle assemblies are not sticky by default, as the certificates created with them are not cached as they are in Fiddler proper. To get certificates to 'stick' you have to explicitly cache the certificates in Fiddler's internal preferences.

A cache-aware version of InstallCertificate looks something like this:

public static bool InstallCertificate()
{
    if (!CertMaker.rootCertExists())
    {
        if (!CertMaker.createRootCert())
            return false;

        if (!CertMaker.trustRootCert())
            return false;

        App.Configuration.UrlCapture.Cert =
            FiddlerApplication.Prefs.GetStringPref("fiddler.certmaker.bc.cert", null);
        App.Configuration.UrlCapture.Key =
            FiddlerApplication.Prefs.GetStringPref("fiddler.certmaker.bc.key", null);
    }

    return true;
}

public static bool UninstallCertificate()
{
    if (CertMaker.rootCertExists())
    {
        if (!CertMaker.removeFiddlerGeneratedCerts(true))
            return false;
    }

    App.Configuration.UrlCapture.Cert = null;
    App.Configuration.UrlCapture.Key = null;

    return true;
}

In this code I store the Fiddler cert and private key in application configuration settings that are stored with the application settings (the App.Configuration.UrlCapture object). These settings automatically persist when WebSurge is shut down. The values are read out of Fiddler's internal preferences store, which is set after a new certificate has been created. Likewise I clear out the configuration settings when the certificate is uninstalled. In order for these settings to be used you have to also load the configuration settings into the Fiddler preferences *before* a call to rootCertExists() is made.
I do this in the capture form's constructor:

public FiddlerCapture(StressTestForm form)
{
    InitializeComponent();

    CaptureConfiguration = App.Configuration.UrlCapture;
    MainForm = form;

    if (!string.IsNullOrEmpty(App.Configuration.UrlCapture.Cert))
    {
        FiddlerApplication.Prefs.SetStringPref("fiddler.certmaker.bc.key",
            App.Configuration.UrlCapture.Key);
        FiddlerApplication.Prefs.SetStringPref("fiddler.certmaker.bc.cert",
            App.Configuration.UrlCapture.Cert);
    }
}

This is kind of a drag to do and not documented anywhere that I could find, so hopefully this will save you some grief if you want to work with the stock certificate logic that installs with FiddlerCore.

MakeCert provides sticky Certificates and the same functionality as Fiddler

But there's actually an easier way. If you want to skip the above Fiddler preference configuration code in your application you can choose to distribute MakeCert.exe instead of CertMaker.dll and BCMakeCert.dll. When you use MakeCert.exe, the certificate settings are stored in Windows, so they are available without any custom configuration inside of your application. It's easier to integrate, and as long as you run on Windows and don't need to support iOS or Android devices it's simply easier to deal with.

To integrate it into your project, remove the reference to CertMaker.dll (and the BCMakeCert.dll assembly) from your project. Instead copy MakeCert.exe into your output folder. To make sure MakeCert.exe gets pushed out, include MakeCert.exe in your project and set the Build Action to None, and Copy to Output Directory to Copy if newer. With this in place, the CertMaker.dll reference is gone from the project even though the CertMaker.dll and BCMakeCert.dll files may still sit on disk. Keep in mind that these DLLs are resources of the FiddlerCore NuGet package, so updating the package may end up pushing those files back into your project. Once MakeCert.exe is distributed, FiddlerCore checks for it before using the assemblies, so as long as MakeCert.exe exists it'll be used for certificate creation (at least on Windows).

Summary

FiddlerCore is a pretty sweet tool, and it's absolutely awesome that we get to plug in most of the functionality of Fiddler right into our own applications. A few years back I tried to build this sort of functionality myself for an app and ended up giving up because it's a big job to get HTTP right – especially if you need to support SSL. FiddlerCore now provides that functionality as a turnkey solution that can be plugged into your own apps easily.

The only downside is FiddlerCore's documentation for more advanced features like certificate installation, which is pretty sketchy. While for the most part FiddlerCore's feature set is easy to work with without any documentation, advanced features are often not intuitive to glean by just using IntelliSense or the FiddlerCore help file reference (which is not terribly useful). While Eric Lawrence is very responsive on his forum and on Twitter, there simply isn't much useful documentation on Fiddler/FiddlerCore available online. If you run into trouble, the forum is probably the first place to look and then ask a question if you can't find the answer. The best documentation you can find is Eric's Fiddler Book, which covers a ton of functionality of Fiddler and FiddlerCore. The book is a great reference to Fiddler's feature set as well as providing great insights into the HTTP protocol.
The second half of the book, which gets into the innards of HTTP, is an excellent read for anybody who wants to know more about some of the more arcane aspects and special behaviors of HTTP – it's well worth your time. While the book has tons of information in a very readable format, it's unfortunately not a great reference, as it's hard to find things in the book, and because it's not available online you can't electronically search for the great content in it. But it's hard to complain about any of this given the obvious effort and love that's gone into this awesome product for all of these years. A mighty big thanks to Eric Lawrence for having created this useful tool that so many of us use all the time, and also to Telerik for picking up Fiddler/FiddlerCore and providing Eric the resources to support and improve this wonderful tool full time and keeping it free for all. Kudos!

Resources

FiddlerCore Download
FiddlerCore NuGet
Fiddler Capture Sample Form
Fiddler Capture Form in West Wind WebSurge (GitHub)
Eric Lawrence's Fiddler Book

© Rick Strahl, West Wind Technologies, 2005-2014. Posted in .NET, HTTP.

    Read the article

  • Why is Java EE 6 better than Spring?

    - by arungupta
    Java EE 6 was released over 2 years ago and now there are 14 compliant application servers. In all my talks around the world, a question that is frequently asked is "Why should I use Java EE 6 instead of Spring?" There are already several blogs covering that topic:

- Java EE wins over Spring by Bill Burke
- Why will I use Java EE instead of Spring in new Enterprise Java projects in 2012? by Kai Waehner (more discussion on TSS)
- Spring to Java EE migration (Part 1 and 2, 3 and 4 coming as well) by David Heffelfinger
- Spring to Java EE - A Migration Experience by Lincoln Baxter
- Migrating Spring to Java EE 6 by Bert Ertman and Paul Bakker at NLJUG
- Moving from Spring to Java EE 6 - The Age of Frameworks is Over at TSS
- Java EE vs Spring Shootout by Rohit Kelapure and Reza Rehman at JavaOne 2011
- Java EE 6 and the Ewoks by Murat Yener
- Definite excuse to avoid Spring forever - Bert Ertman and Arun Gupta

I will try to share my perspective in this blog. First of all, I'd like to start with a note: Thank you Spring framework for filling the interim gap and providing functionality that is now included in the mainstream Java EE 6 application servers. The Java EE platform has evolved over the years learning from frameworks like Spring and provides all the functionality to build an enterprise application. Thank you very much Spring framework! While Spring was revolutionary in its time and is still very popular and quite mainstream in the same way Struts was circa 2003, it really is last generation's framework - some people are even calling it legacy. However my theory is "code is king". So my approach is to build/take a simple Hello World CRUD application in Java EE 6 and Spring and compare the deployable artifacts.

I started looking at the official tutorial Developing a Spring Framework MVC Application Step-by-Step but it is using the older version 2.5. I wasn't able to find any updated version in the current 3.1 release. Next, I downloaded Spring Tool Suite and thought that would provide some template samples to get started. At least a quick search did not show any handy tutorials - either video or text-based. So I searched and found a link to their SVN repository at src.springframework.org/svn/spring-samples/. I tried the "mvc-basic" sample and the generated WAR file was 4.43 MB. While it was named a "basic" sample it came bundled with 19 different libraries, but it was what I could find:

./WEB-INF/lib/aopalliance-1.0.jar
./WEB-INF/lib/hibernate-validator-4.1.0.Final.jar
./WEB-INF/lib/jcl-over-slf4j-1.6.1.jar
./WEB-INF/lib/joda-time-1.6.2.jar
./WEB-INF/lib/joda-time-jsptags-1.0.2.jar
./WEB-INF/lib/jstl-1.2.jar
./WEB-INF/lib/log4j-1.2.16.jar
./WEB-INF/lib/slf4j-api-1.6.1.jar
./WEB-INF/lib/slf4j-log4j12-1.6.1.jar
./WEB-INF/lib/spring-aop-3.0.5.RELEASE.jar
./WEB-INF/lib/spring-asm-3.0.5.RELEASE.jar
./WEB-INF/lib/spring-beans-3.0.5.RELEASE.jar
./WEB-INF/lib/spring-context-3.0.5.RELEASE.jar
./WEB-INF/lib/spring-context-support-3.0.5.RELEASE.jar
./WEB-INF/lib/spring-core-3.0.5.RELEASE.jar
./WEB-INF/lib/spring-expression-3.0.5.RELEASE.jar
./WEB-INF/lib/spring-web-3.0.5.RELEASE.jar
./WEB-INF/lib/spring-webmvc-3.0.5.RELEASE.jar
./WEB-INF/lib/validation-api-1.0.0.GA.jar

And it is not even using any database! The app deployed fine on GlassFish 3.1.2 but the "@Controller Example" link did not work as it was missing the context root. With a bit of tweaking I could deploy the application and assume that the account got created because no error was displayed in the browser or server log.
Next I generated the WAR for "mvc-ajax" and the 5.1 MB WAR had 20 JARs (1 removed, 2 added):

./WEB-INF/lib/aopalliance-1.0.jar
./WEB-INF/lib/hibernate-validator-4.1.0.Final.jar
./WEB-INF/lib/jackson-core-asl-1.6.4.jar
./WEB-INF/lib/jackson-mapper-asl-1.6.4.jar
./WEB-INF/lib/jcl-over-slf4j-1.6.1.jar
./WEB-INF/lib/joda-time-1.6.2.jar
./WEB-INF/lib/jstl-1.2.jar
./WEB-INF/lib/log4j-1.2.16.jar
./WEB-INF/lib/slf4j-api-1.6.1.jar
./WEB-INF/lib/slf4j-log4j12-1.6.1.jar
./WEB-INF/lib/spring-aop-3.0.5.RELEASE.jar
./WEB-INF/lib/spring-asm-3.0.5.RELEASE.jar
./WEB-INF/lib/spring-beans-3.0.5.RELEASE.jar
./WEB-INF/lib/spring-context-3.0.5.RELEASE.jar
./WEB-INF/lib/spring-context-support-3.0.5.RELEASE.jar
./WEB-INF/lib/spring-core-3.0.5.RELEASE.jar
./WEB-INF/lib/spring-expression-3.0.5.RELEASE.jar
./WEB-INF/lib/spring-web-3.0.5.RELEASE.jar
./WEB-INF/lib/spring-webmvc-3.0.5.RELEASE.jar
./WEB-INF/lib/validation-api-1.0.0.GA.jar

2 more JARs for just doing Ajax. Anyway, deploying this application gave the following error:

Caused by: java.lang.NoSuchMethodError: org.codehaus.jackson.map.SerializationConfig.<init>(Lorg/codehaus/jackson/map/ClassIntrospector;Lorg/codehaus/jackson/map/AnnotationIntrospector;Lorg/codehaus/jackson/map/introspect/VisibilityChecker;Lorg/codehaus/jackson/map/jsontype/SubtypeResolver;)V
    at org.springframework.samples.mvc.ajax.json.ConversionServiceAwareObjectMapper.<init>(ConversionServiceAwareObjectMapper.java:20)
    at org.springframework.samples.mvc.ajax.json.JacksonConversionServiceConfigurer.postProcessAfterInitialization(JacksonConversionServiceConfigurer.java:40)
    at org.springframework.beans.factory.support.AbstractAutowireCapableBeanFactory.applyBeanPostProcessorsAfterInitialization(AbstractAutowireCapableBeanFactory.java:407)

Seems like some incorrect repos in the "pom.xml". Next one is "mvc-showcase" and the 6.49 MB WAR now has 28 JARs as shown below:

./WEB-INF/lib/aopalliance-1.0.jar
./WEB-INF/lib/aspectjrt-1.6.10.jar
./WEB-INF/lib/commons-fileupload-1.2.2.jar
./WEB-INF/lib/commons-io-2.0.1.jar
./WEB-INF/lib/el-api-2.2.jar
./WEB-INF/lib/hibernate-validator-4.1.0.Final.jar
./WEB-INF/lib/jackson-core-asl-1.8.1.jar
./WEB-INF/lib/jackson-mapper-asl-1.8.1.jar
./WEB-INF/lib/javax.inject-1.jar
./WEB-INF/lib/jcl-over-slf4j-1.6.1.jar
./WEB-INF/lib/jdom-1.0.jar
./WEB-INF/lib/joda-time-1.6.2.jar
./WEB-INF/lib/jstl-api-1.2.jar
./WEB-INF/lib/jstl-impl-1.2.jar
./WEB-INF/lib/log4j-1.2.16.jar
./WEB-INF/lib/rome-1.0.0.jar
./WEB-INF/lib/slf4j-api-1.6.1.jar
./WEB-INF/lib/slf4j-log4j12-1.6.1.jar
./WEB-INF/lib/spring-aop-3.1.0.RELEASE.jar
./WEB-INF/lib/spring-asm-3.1.0.RELEASE.jar
./WEB-INF/lib/spring-beans-3.1.0.RELEASE.jar
./WEB-INF/lib/spring-context-3.1.0.RELEASE.jar
./WEB-INF/lib/spring-context-support-3.1.0.RELEASE.jar
./WEB-INF/lib/spring-core-3.1.0.RELEASE.jar
./WEB-INF/lib/spring-expression-3.1.0.RELEASE.jar
./WEB-INF/lib/spring-web-3.1.0.RELEASE.jar
./WEB-INF/lib/spring-webmvc-3.1.0.RELEASE.jar
./WEB-INF/lib/validation-api-1.0.0.GA.jar

The app at least deployed and showed results this time. But still no database!
Next I tried building "jpetstore" and got the error:

[ERROR] Failed to execute goal on project org.springframework.samples.jpetstore: Could not resolve dependencies for project org.springframework.samples:org.springframework.samples.jpetstore:war:1.0.0-SNAPSHOT: Failed to collect dependencies for [commons-fileupload:commons-fileupload:jar:1.2.1 (compile), org.apache.struts:com.springsource.org.apache.struts:jar:1.2.9 (compile), javax.xml.rpc:com.springsource.javax.xml.rpc:jar:1.1.0 (compile), org.apache.commons:com.springsource.org.apache.commons.dbcp:jar:1.2.2.osgi (compile), commons-io:commons-io:jar:1.3.2 (compile), hsqldb:hsqldb:jar:1.8.0.7 (compile), org.apache.tiles:tiles-core:jar:2.2.0 (compile), org.apache.tiles:tiles-jsp:jar:2.2.0 (compile), org.tuckey:urlrewritefilter:jar:3.1.0 (compile), org.springframework:spring-webmvc:jar:3.0.0.BUILD-SNAPSHOT (compile), org.springframework:spring-orm:jar:3.0.0.BUILD-SNAPSHOT (compile), org.springframework:spring-context-support:jar:3.0.0.BUILD-SNAPSHOT (compile), org.springframework.webflow:spring-js:jar:2.0.7.RELEASE (compile), org.apache.ibatis:com.springsource.com.ibatis:jar:2.3.4.726 (runtime), com.caucho:com.springsource.com.caucho:jar:3.2.1 (compile), org.apache.axis:com.springsource.org.apache.axis:jar:1.4.0 (compile), javax.wsdl:com.springsource.javax.wsdl:jar:1.6.1 (compile), javax.servlet:jstl:jar:1.2 (runtime), org.aspectj:aspectjweaver:jar:1.6.5 (compile), javax.servlet:servlet-api:jar:2.5 (provided), javax.servlet.jsp:jsp-api:jar:2.1 (provided), junit:junit:jar:4.6 (test)]: Failed to read artifact descriptor for org.springframework:spring-webmvc:jar:3.0.0.BUILD-SNAPSHOT: Could not transfer artifact org.springframework:spring-webmvc:pom:3.0.0.BUILD-SNAPSHOT from/to JBoss repository (http://repository.jboss.com/maven2): Access denied to: http://repository.jboss.com/maven2/org/springframework/spring-webmvc/3.0.0.BUILD-SNAPSHOT/spring-webmvc-3.0.0.BUILD-SNAPSHOT.pom

It appears the sample is broken - maybe I was pulling from the wrong repository - it would be great if someone were to point me at a good target to use here. With a 50% hit rate on samples in this repository, I started searching through numerous blogs, most of which either have outdated information (using XML-heavy Spring 2.5), are missing some piece of configuration (which is a typical "feature" of Spring), or pack too much complexity into the sample. I finally found this blog that worked like a charm. This blog creates a trivial Spring MVC 3 application using Hibernate and MySQL. This application performs CRUD operations on a single table in a database using typical Spring technologies. I downloaded the sample code from the blog, deployed it on GlassFish 3.1.2 and could CRUD the "person" entity. The source code for this application can be downloaded here. More details on the application statistics below.

And then I built a similar CRUD application in Java EE 6 using NetBeans wizards in a couple of minutes. The source code for the application can be downloaded here and the WAR here. The Spring Source Tool Suite may also offer similar wizard-driven capabilities but this blog focuses primarily on comparing the runtimes. The lack of STS tutorials was slightly disappointing as well. NetBeans however has tons of text-based and video tutorials and tons of material even from the community. One more bit on the download size of the tools bundle ...
NetBeans 7.1.1 "All" is 211 MB (which includes GlassFish and Tomcat) Spring Tool Suite  2.9.0 is 347 MB (~ 65% bigger) This blog is not about the tooling comparison so back to the Java EE 6 version of the application .... In order to run the Java EE version on GlassFish, copy the MySQL Connector/J to glassfish3/glassfish/domains/domain1/lib/ext directory and create a JDBC connection pool and JDBC resource as: ./bin/asadmin create-jdbc-connection-pool --datasourceclassname \\ com.mysql.jdbc.jdbc2.optional.MysqlDataSource --restype \\ javax.sql.DataSource --property \\ portNumber=3306:user=mysql:password=mysql:databaseName=mydatabase \\ myConnectionPool ./bin/asadmin create-jdbc-resource --connectionpoolid myConnectionPool jdbc/myDataSource I generated WARs for the two projects and the table below highlights some differences between them: Java EE 6 Spring WAR File Size 0.021030 MB 10.87 MB (~516x) Number of files 20 53 (> 2.5x) Bundled libraries 0 36 Total size of libraries 0 12.1 MB XML files 3 5 LoC in XML files 50 (11 + 15 + 24) 129 (27 + 46 + 16 + 11 + 19) (~ 2.5x) Total .properties files 1 Bundle.properties 2 spring.properties, log4j.properties Cold Deploy 5,339 ms 11,724 ms Second Deploy 481 ms 6,261 ms Third Deploy 528 ms 5,484 ms Fourth Deploy 484 ms 5,576 ms Runtime memory ~73 MB ~101 MB Some points worth highlighting from the table ... 516x WAR file, 10x deployment time - With 12.1 MB of libraries (for a very basic application) bundled in your application, the WAR file size and the deployment time will naturally go higher. The WAR file for Spring-based application is 516x bigger and the deployment time is double during the first deployment and ~ 10x during subsequent deployments. The Java EE 6 application is fully portable and will run on any Java EE 6 compliant application server. 36 libraries in the WAR - There are 14 Java EE 6 compliant application servers today. Each of those servers provide all the functionality like transactions, dependency injection, security, persistence, etc typically required of an enterprise or web application. There is no need to bundle 36 libraries worth 12.1 MB for a trivial CRUD application. These 14 compliant application servers provide all the functionality baked in. Now you can also deploy these libraries in the container but then you don't get the "portability" offered by Spring in that case. Does your typical Spring deployment actually do that ? 3x LoC in XML - The number of XML files is about 1.6x and the LoC is ~ 2.5x. So much XML seems circa 2003 when the Java language had no annotations. The XML files can be further reduced, e.g. faces-config.xml can be replaced without providing i18n, but I just want to compare stock applications. Memory usage - Both the applications were deployed on default GlassFish 3.1.2 installation and any additional memory consumed as part of deployment/access was attributed to the application. This is by no means scientific but at least provides an initial ballpark. This area definitely needs more investigation. Another table that compares typical Java EE 6 compliant application servers and the custom-stack created for a Spring application ... Java EE 6 Spring Web Container ? 53 MB (tcServer 2.6.3 Developer Edition) Security ? 12 MB (Spring Security 3.1.0) Persistence ? 6.3 MB (Hibernate 4.1.0, required) Dependency Injection ? 5.3 MB (Framework) Web Services ? 796 KB (Spring WS 2.0.4) Messaging ? 3.4 MB (RabbitMQ Server 2.7.1) 936 KB (Java client 936) OSGi ? 
1.3 MB (Spring OSGi 1.2.1) GlassFish and WebLogic (starting at 33 MB) 83.3 MB There are differentiating factors on both the stacks. But most of the functionality like security, persistence, and dependency injection is baked in a Java EE 6 compliant application server but needs to be individually managed and patched for a Spring application. This very quickly leads to a "stack explosion". The Java EE 6 servers are tested extensively on a variety of platforms in different combinations whereas a Spring application developer is responsible for testing with different JDKs, Operating Systems, Versions, Patches, etc. Oracle has both the leading OSS lightweight server with GlassFish and the leading enterprise Java server with WebLogic Server, both Java EE 6 and both with lightweight deployment options. The Web Container offered as part of a Java EE 6 application server not only deploys your enterprise Java applications but also provide operational management, diagnostics, and mission-critical capabilities required by your applications. The Java EE 6 platform also introduced the Web Profile which is a subset of the specifications from the entire platform. It is targeted at developers of modern web applications offering a reasonably complete stack, composed of standard APIs, and is capable out-of-the-box of addressing the needs of a large class of Web applications. As your applications grow, the stack can grow to the full Java EE 6 platform. The GlassFish Server Web Profile starting at 33MB (smaller than just the non-standard tcServer) provides most of the functionality typically required by a web application. WebLogic provides battle-tested functionality for a high throughput, low latency, and enterprise grade web application. No individual managing or patching, all tested and commercially supported for you! Note that VMWare does have a server, tcServer, but it is non-standard and not even certified to the level of the standard Web Profile most customers expect these days. Customers who choose this risk proprietary lock-in since VMWare does not seem to want to formally certify with either Java EE 6 Enterprise Platform or with Java EE 6 Web Profile but of course it would be great if they were to join the community and help their customers reduce the risk of deploying on VMWare software. Some more points to help you decide choose between Java EE 6 and Spring ... Freedom to choose container - There are 14 Java EE 6 compliant application servers today, with a variety of open source and commercial offerings. A Java EE 6 application can be deployed on any of those containers. So if you deployed your application on GlassFish today and would like to scale up with your demands then you can deploy the same application to WebLogic. And because of the portability of a Java EE 6 application, you can even take it a different vendor altogether. Spring requires a runtime which could be any of these app servers as well. But why use Spring when all the required functionality is already baked into the application server itself ? Spring also has a different definition of portability where they claim to bundle all the libraries in the WAR file and move to any application server. But we saw earlier how bloated that archive could be. The equivalent features in Spring runtime offerings (mainly tcServer) are not all open source, not as mature, and often require manual assembly.  
Vendor choice - The Java EE 6 platform is created using the Java Community Process, where all the big players like Oracle, IBM, RedHat, and Apache are contributing to make the platform successful. Each application server provides the basic Java EE 6 platform compliance and has its own competitive offerings. This allows you to choose an application server for deploying your Java EE 6 applications. If you are not happy with the support or features of one vendor then you can move your application to a different vendor because of the portability promise offered by the platform. Spring is a set of products from a single company: one price book, one support organization, one sustaining organization, one sales organization, etc. If any of those cause a customer headache, where do you go? Java EE, backed by multiple vendors, is a safer bet for those that are risk averse.

Production support - With Spring, typically you need to get support from two vendors - VMWare and the container provider. With Java EE 6, all of this is typically provided by one vendor. For example, Oracle offers commercial support from systems, operating systems, JDK, application server, and applications on top of them. VMWare certainly offers complete production support, but do you really want to put all your eggs in one basket? Do you really use tcServer? ;-)

Maintainability - With Spring, you are likely building your own distribution with multiple JAR files, integrating, patching, versioning, etc. of all those components. Spring's claim is that multiple JAR files allow you to go à la carte and pick the latest versions of different components. But who is responsible for testing whether all these versions work together? Yep, you got it, it's YOU! If something does not work, who patches and maintains the JARs? Of course, you! Commercial support for such a configuration? On your own! The Java EE application servers manage all of this for you and provide a well-tested and commercially supported bundle.

While it is always good to realize that there is something new and improved that updates and replaces older frameworks like Spring, the good news is that not only does a Java EE 6 container offer what is described here, most will also let you deploy and run your Spring applications on them while you go through an upgrade to a more modern architecture. End result: you get the best of both worlds - keeping your legacy investment but moving to a more agile, lightweight world of Java EE 6.

A message to the Spring lovers ... The complexity in J2EE 1.2, 1.3, and 1.4 led to the genesis of Spring, but that was in 2004. This is 2012 and the name has changed to "Java EE 6" :-) There are tons of improvements in the Java EE platform to make it easy to use and powerful. Some examples:

- Adding @Stateless on a POJO makes it an EJB
- EJBs can be packaged in a WAR with no special packaging or deployment descriptors
- "web.xml" and "faces-config.xml" are optional in most of the common cases
- Typesafe dependency injection is now part of the Java EE platform
- Adding @Path on a POJO allows you to publish it as a RESTful resource
- EJBs can be used as backing beans for Facelets-driven JSF pages, providing full MVC
- Java EE 6 WARs are known to be kilobytes in size and deployed in milliseconds
- Tons of other simplifications in the platform and application servers

So if you moved away from J2EE to Spring many years ago and have not looked at Java EE 6 (which has been out since Dec 2009) then you should definitely try it out.
Just be at least aware of what other alternatives are available instead of restricting yourself to one stack. Here are some workshops and screencasts worth trying:

- screencast #37 shows how to build an end-to-end application using NetBeans
- screencast #36 builds the same application using Eclipse
- javaee-lab-feb2012.pdf is a 3-4 hour self-paced hands-on workshop that guides you to build a comprehensive Java EE 6 application using NetBeans

Each city generally has a "spring cleanup" program every year. It allows you to clean up the mess from your house. For your software projects, you don't need to wait for an annual event, just get started and reduce the technical debt now! Move away from your legacy Spring-based applications to a lighter and more modern approach of building enterprise Java applications using Java EE 6. Watch this beautiful presentation that explains how to migrate from Spring -> Java EE 6:

List of files in the Java EE 6 project:

./index.xhtml
./META-INF
./person
./person/Create.xhtml
./person/Edit.xhtml
./person/List.xhtml
./person/View.xhtml
./resources
./resources/css
./resources/css/jsfcrud.css
./template.xhtml
./WEB-INF
./WEB-INF/classes
./WEB-INF/classes/Bundle.properties
./WEB-INF/classes/META-INF
./WEB-INF/classes/META-INF/persistence.xml
./WEB-INF/classes/org
./WEB-INF/classes/org/javaee
./WEB-INF/classes/org/javaee/javaeemysql
./WEB-INF/classes/org/javaee/javaeemysql/AbstractFacade.class
./WEB-INF/classes/org/javaee/javaeemysql/Person.class
./WEB-INF/classes/org/javaee/javaeemysql/Person_.class
./WEB-INF/classes/org/javaee/javaeemysql/PersonController$1.class
./WEB-INF/classes/org/javaee/javaeemysql/PersonController$PersonControllerConverter.class
./WEB-INF/classes/org/javaee/javaeemysql/PersonController.class
./WEB-INF/classes/org/javaee/javaeemysql/PersonFacade.class
./WEB-INF/classes/org/javaee/javaeemysql/util
./WEB-INF/classes/org/javaee/javaeemysql/util/JsfUtil.class
./WEB-INF/classes/org/javaee/javaeemysql/util/PaginationHelper.class
./WEB-INF/faces-config.xml
./WEB-INF/web.xml

List of files in the Spring 3.x project:

./META-INF
./META-INF/MANIFEST.MF
./WEB-INF
./WEB-INF/applicationContext.xml
./WEB-INF/classes
./WEB-INF/classes/log4j.properties
./WEB-INF/classes/org
./WEB-INF/classes/org/krams
./WEB-INF/classes/org/krams/tutorial
./WEB-INF/classes/org/krams/tutorial/controller
./WEB-INF/classes/org/krams/tutorial/controller/MainController.class
./WEB-INF/classes/org/krams/tutorial/domain
./WEB-INF/classes/org/krams/tutorial/domain/Person.class
./WEB-INF/classes/org/krams/tutorial/service
./WEB-INF/classes/org/krams/tutorial/service/PersonService.class
./WEB-INF/hibernate-context.xml
./WEB-INF/hibernate.cfg.xml
./WEB-INF/jsp
./WEB-INF/jsp/addedpage.jsp
./WEB-INF/jsp/addpage.jsp
./WEB-INF/jsp/deletedpage.jsp
./WEB-INF/jsp/editedpage.jsp
./WEB-INF/jsp/editpage.jsp
./WEB-INF/jsp/personspage.jsp
./WEB-INF/lib
./WEB-INF/lib/antlr-2.7.6.jar
./WEB-INF/lib/aopalliance-1.0.jar
./WEB-INF/lib/c3p0-0.9.1.2.jar
./WEB-INF/lib/cglib-nodep-2.2.jar
./WEB-INF/lib/commons-beanutils-1.8.3.jar
./WEB-INF/lib/commons-collections-3.2.1.jar
./WEB-INF/lib/commons-digester-2.1.jar
./WEB-INF/lib/commons-logging-1.1.1.jar
./WEB-INF/lib/dom4j-1.6.1.jar
./WEB-INF/lib/ejb3-persistence-1.0.2.GA.jar
./WEB-INF/lib/hibernate-annotations-3.4.0.GA.jar
./WEB-INF/lib/hibernate-commons-annotations-3.1.0.GA.jar
./WEB-INF/lib/hibernate-core-3.3.2.GA.jar
./WEB-INF/lib/javassist-3.7.ga.jar
./WEB-INF/lib/jstl-1.1.2.jar
./WEB-INF/lib/jta-1.1.jar
./WEB-INF/lib/junit-4.8.1.jar
./WEB-INF/lib/log4j-1.2.14.jar
./WEB-INF/lib/mysql-connector-java-5.1.14.jar
./WEB-INF/lib/persistence-api-1.0.jar
./WEB-INF/lib/slf4j-api-1.6.1.jar
./WEB-INF/lib/slf4j-log4j12-1.6.1.jar
./WEB-INF/lib/spring-aop-3.0.5.RELEASE.jar
./WEB-INF/lib/spring-asm-3.0.5.RELEASE.jar
./WEB-INF/lib/spring-beans-3.0.5.RELEASE.jar
./WEB-INF/lib/spring-context-3.0.5.RELEASE.jar
./WEB-INF/lib/spring-context-support-3.0.5.RELEASE.jar
./WEB-INF/lib/spring-core-3.0.5.RELEASE.jar
./WEB-INF/lib/spring-expression-3.0.5.RELEASE.jar
./WEB-INF/lib/spring-jdbc-3.0.5.RELEASE.jar
./WEB-INF/lib/spring-orm-3.0.5.RELEASE.jar
./WEB-INF/lib/spring-tx-3.0.5.RELEASE.jar
./WEB-INF/lib/spring-web-3.0.5.RELEASE.jar
./WEB-INF/lib/spring-webmvc-3.0.5.RELEASE.jar
./WEB-INF/lib/standard-1.1.2.jar
./WEB-INF/lib/xml-apis-1.0.b2.jar
./WEB-INF/spring-servlet.xml
./WEB-INF/spring.properties
./WEB-INF/web.xml

So, are you excited about Java EE 6? Want to get started now? Here are some resources:

- Java EE 6 SDK (including runtime, samples, tutorials etc.)
- GlassFish Server Open Source Edition 3.1.2 (Community)
- Oracle GlassFish Server 3.1.2 (Commercial)
- Java EE 6 using WebLogic 12c and NetBeans (Video)
- Java EE 6 with NetBeans and GlassFish (Video)
- Java EE with Eclipse and GlassFish (Video)

    Read the article

  • Integrating HTML into Silverlight Applications

    - by dwahlin
    Looking for a way to display HTML content within a Silverlight application? If you haven't tried doing that before it can be challenging at first until you know a few tricks of the trade. Being able to display HTML is especially handy when you're required to display RSS feeds (with embedded HTML), SQL Server Reporting Services reports, PDF files (not actually HTML – but the techniques discussed will work), or other HTML content. In this post I'll discuss three options for displaying HTML content in Silverlight applications and describe how my company is using these techniques in client applications.

Displaying HTML Overlays

If you need to display HTML over a Silverlight application (such as an RSS feed containing HTML data in it) you'll need to set the Silverlight control's windowless parameter to true. This can be done using the object tag as shown next:

<object data="data:application/x-silverlight-2," type="application/x-silverlight-2" width="100%" height="100%">
  <param name="source" value="ClientBin/HTMLAndSilverlight.xap"/>
  <param name="onError" value="onSilverlightError" />
  <param name="background" value="white" />
  <param name="minRuntimeVersion" value="4.0.50401.0" />
  <param name="autoUpgrade" value="true" />
  <param name="windowless" value="true" />
  <a href="http://go.microsoft.com/fwlink/?LinkID=149156&v=4.0.50401.0" style="text-decoration:none">
    <img src="http://go.microsoft.com/fwlink/?LinkId=161376" alt="Get Microsoft Silverlight" style="border-style:none"/>
  </a>
</object>

By setting the control to "windowless" you can overlay HTML objects by using absolute positioning and other CSS techniques. Keep in mind that on Windows machines the windowless setting can result in a performance hit when complex animations or HD video are running, since the plug-in content is displayed directly by the browser window. It goes without saying that you should only set windowless to true when you really need the functionality it offers.

For example, if I want to display my blog's RSS content on top of a Silverlight application I could set windowless to true and create a user control that grabbed the content and output it using a DataList control:

<style type="text/css">
  a {text-decoration:none;font-weight:bold;font-size:14pt;}
</style>
<div style="margin-top:10px; margin-left:10px;margin-right:5px;">
  <asp:DataList ID="RSSDataList" runat="server" DataSourceID="RSSDataSource">
    <ItemTemplate>
      <a href='<%# XPath("link") %>'><%# XPath("title") %></a>
      <br />
      <%# XPath("description") %>
      <br />
    </ItemTemplate>
  </asp:DataList>
  <asp:XmlDataSource ID="RSSDataSource" DataFile="http://weblogs.asp.net/dwahlin/rss.aspx"
    XPath="rss/channel/item" CacheDuration="60" runat="server" />
</div>

The user control can then be placed in the page hosting the Silverlight control as shown below. This example adds a Close button, additional content to display in the overlay window and the HTML generated from the user control.
<div id="RSSDiv"> <div style="background-color:#484848;border:1px solid black;height:35px;width:100%;"> <img alt="Close Button" align="right" src="Images/Close.png" onclick="HideOverlay();" style="cursor:pointer;" /> </div> <div style="overflow:auto;width:800px;height:565px;"> <div style="float:left;width:100px;height:103px;margin-left:10px;margin-top:5px;"> <img src="http://weblogs.asp.net/blogs/dwahlin/dan2008.jpg" style="border:1px solid Gray" /> </div> <div style="float:left;width:300px;height:103px;margin-top:5px;"> <a href="http://weblogs.asp.net/dwahlin" style="margin-left:10px;font-size:20pt;">Dan Wahlin's Blog</a> </div> <br /><br /><br /> <div style="clear:both;margin-top:20px;"> <uc:BlogRoller ID="BlogRoller" runat="server" /> </div> </div> </div> Of course, we wouldn’t want the RSS HTML content to be shown until requested. Once it’s requested the absolute position of where it should show above the Silverlight control can be set using standard CSS styles. The following ID selector named #RSSDiv handles hiding the overlay div shown above and determines where it will be display on the screen. #RSSDiv { background-color:White; position:absolute; top:100px; left:300px; width:800px; height:600px; border:1px solid black; display:none; } Now that the HTML content to display above the Silverlight control is set, how can we show it as a user clicks a HyperlinkButton or other control in the application? Fortunately, Silverlight provides an excellent HTML bridge that allows direct access to content hosted within a page. The following code shows two JavaScript functions that can be called from Siverlight to handle showing or hiding HTML overlay content. The two functions rely on jQuery (http://www.jQuery.com) to make it easy to select HTML objects and manipulate their properties: function ShowOverlay() { rssDiv.css('display', 'block'); } function HideOverlay() { rssDiv.css('display', 'none'); } Calling the ShowOverlay function is as simple as adding the following code into the Silverlight application within a button’s Click event handler: private void OverlayHyperlinkButton_Click(object sender, RoutedEventArgs e) { HtmlPage.Window.Invoke("ShowOverlay"); } The result of setting the Silverlight control’s windowless parameter to true and showing the HTML overlay content is shown in the following screenshot:   Thinking Outside the Box to Show HTML Content Setting the windowless parameter to true may not be a viable option for some Silverlight applications or you may simply want to go about showing HTML content a different way. The next technique I’ll show takes advantage of simple HTML, CSS and JavaScript code to handle showing HTML content while a Silverlight application is running in the browser. Keep in mind that with Silverlight’s HTML bridge feature you can always pop-up HTML content in a new browser window using code similar to the following: System.Windows.Browser.HtmlPage.Window.Navigate( new Uri("http://silverlight.net"), "_blank"); For this example I’ll demonstrate how to hide the Silverlight application while maximizing a container div containing the HTML content to show. This allows HTML content to take up the full screen area of the browser without having to set windowless to true and when done right can make the user feel like they never left the Silverlight application. 
The following HTML shows several div elements that are used to display HTML within the same browser window as the Silverlight application:

<div id="JobPlanDiv">
  <div style="vertical-align:middle">
    <img alt="Close Button" align="right" src="Images/Close.png" onclick="HideJobPlanIFrame();" style="cursor:pointer;" />
  </div>
  <div id="JobPlan_IFrame_Container" style="height:95%;width:100%;margin-top:37px;"></div>
</div>

The JobPlanDiv element acts as a container for two other divs that handle showing a close button and hosting an iframe that will be added dynamically at runtime. JobPlanDiv isn't visible when the Silverlight application loads due to the following ID selector added into the page:

#JobPlanDiv
{
  position:absolute;
  background-color:#484848;
  overflow:hidden;
  left:0;
  top:0;
  height:100%;
  width:100%;
  display:none;
}

When the HTML content needs to be shown or hidden, the JavaScript functions shown next can be used:

var jobPlanIFrameID = 'JobPlan_IFrame';
var slHost = null;
var jobPlanContainer = null;
var jobPlanIFrameContainer = null;
var rssDiv = null;

$(document).ready(function () {
    slHost = $('#silverlightControlHost');
    jobPlanContainer = $('#JobPlanDiv');
    jobPlanIFrameContainer = $('#JobPlan_IFrame_Container');
    rssDiv = $('#RSSDiv');
});

function ShowJobPlanIFrame(url) {
    jobPlanContainer.css('display', 'block');
    $('<iframe id="' + jobPlanIFrameID + '" src="' + url + '" style="height:100%;width:100%;" />')
        .appendTo(jobPlanIFrameContainer);
    slHost.css('width', '0%');
}

function HideJobPlanIFrame() {
    jobPlanContainer.css('display', 'none');
    $('#' + jobPlanIFrameID).remove();
    slHost.css('width', '100%');
}

ShowJobPlanIFrame() handles showing the JobPlanDiv div and adding an iframe into it dynamically. Once JobPlanDiv is shown, the Silverlight control host has its width set to a value of 0% to allow the control to stay alive while making it invisible to the user. I found that this technique works better across multiple browsers as opposed to manipulating the Silverlight control host div's display or visibility properties.

Now that you've seen the code to handle showing and hiding the HTML content area, let's switch focus to the Silverlight application. As a user clicks on a link such as "View Report" the ShowJobPlanIFrame() JavaScript function needs to be called. The following code handles that task:

private void ReportHyperlinkButton_Click(object sender, RoutedEventArgs e)
{
    ShowBrowser(_BaseUrl + "/Report.aspx");
}

public void ShowBrowser(string url)
{
    HtmlPage.Window.Invoke("ShowJobPlanIFrame", url);
}

Any URL can be passed into the ShowBrowser() method, which handles invoking the JavaScript function. This includes standard web pages or even PDF files. We've used this technique frequently with our SmartPrint control (http://www.smartwebcontrols.com), which converts Silverlight screens into PDF documents and displays them. Here's an example of the content generated:

Silverlight 4's WebBrowser Control

Both techniques shown to this point work well when Silverlight is running in-browser, but not so well when it's running out-of-browser since there's no host page that you can access using the HTML bridge. Fortunately, Silverlight 4 provides a WebBrowser control that can be used to perform the same functionality quite easily. We're currently using it in client applications to display PDF documents, SSRS reports and standard HTML content. Using the WebBrowser control simplifies the application quite a bit since no JavaScript is required if the application only runs out-of-browser.
Here's a simple example of defining the WebBrowser control in XAML. I typically define it in MainPage.xaml when a Silverlight Navigation template is used to create the project so that I can re-use the functionality across multiple screens.

<Grid x:Name="WebBrowserGrid" HorizontalAlignment="Stretch" VerticalAlignment="Stretch" Visibility="Collapsed">
  <StackPanel HorizontalAlignment="Stretch" VerticalAlignment="Stretch">
    <Border Background="#484848" HorizontalAlignment="Stretch" Height="40">
      <Image x:Name="WebBrowserImage" Width="100" Height="33" Cursor="Hand" HorizontalAlignment="Right"
             Source="/HTMLAndSilverlight;component/Assets/Images/Close.png"
             MouseLeftButtonDown="WebBrowserImage_MouseLeftButtonDown" />
    </Border>
    <WebBrowser x:Name="JobPlanReportWebBrowser" HorizontalAlignment="Stretch" VerticalAlignment="Stretch" />
  </StackPanel>
</Grid>

Looking through the XAML you can see that a close image is defined along with the WebBrowser control. Because the URL that the WebBrowser should navigate to isn't known at design time, no value is assigned to the control's Source property. If the XAML shown above is left "as is" you'll find that any HTML content assigned to the WebBrowser doesn't display properly. This is due to no height or width being set on the control. To handle this issue, the following code is added into the XAML's code-behind file to dynamically determine the height and width of the page and assign it to the WebBrowser. This is done by handling the SizeChanged event.

void MainPage_SizeChanged(object sender, SizeChangedEventArgs e)
{
    WebBrowserGrid.Height = JobPlanReportWebBrowser.Height = ActualHeight;
    WebBrowserGrid.Width = JobPlanReportWebBrowser.Width = ActualWidth;
}

When the user wants to view HTML content they click a button which executes the code shown next:

public void ShowBrowser(string url)
{
    if (Application.Current.IsRunningOutOfBrowser)
    {
        JobPlanReportWebBrowser.NavigateToString("<html><body><iframe src='" + url +
            "' style='width:100%;height:97%;' /></body></html>");
        WebBrowserGrid.Visibility = Visibility.Visible;
    }
    else
    {
        HtmlPage.Window.Invoke("ShowJobPlanIFrame", url);
    }
}

private void WebBrowserImage_MouseLeftButtonDown(object sender, MouseButtonEventArgs e)
{
    WebBrowserGrid.Visibility = Visibility.Collapsed;
}

Looking through the code you'll see that it checks whether the Silverlight application is running out-of-browser and then either displays the WebBrowser control or runs the JavaScript function discussed earlier. Although the WebBrowser control's Source property could be assigned the URI of the page to navigate to, by assigning HTML content using the NavigateToString() method and adding an iframe, content can be shown from any site including cross-domain sites. This is especially handy when you need to grab a page from a reporting site that's in a different domain than the Silverlight application.

Here's an example of viewing a PDF file inside an out-of-browser application. The first image shows the application running out-of-browser before the user clicks a PDF HyperlinkButton. The second image shows the PDF being displayed.

While there are certainly other techniques that can be used, the ones shown here have worked well for us in different applications and provide the ability to display HTML content in-browser or out-of-browser. Feel free to add a comment if you have another tip or trick you like to use when working with HTML content in Silverlight applications.
Download Code Sample   For more information about onsite, online and video training, mentoring and consulting solutions for .NET, SharePoint or Silverlight please visit http://www.thewahlingroup.com.

    Read the article

  • SOA Suite 11g Native Format Builder Complex Format Example

    - by bob.webster
    This rather long posting details the steps required to process a grouping of fixed length records using Format Builder. If it's 10 pm and you're feeling beat you might want to leave this until tomorrow. But if it's 10 pm and you need to get a Format Builder Complex template done, read on…

The goal is to process individual orders from a file using the 11g File Adapter and Format Builder.

Sample Data
===========
001Square Widget            0245.98
102Triagular Widget         1120.00
403Circular Widget           0099.45
ORD8898302/01/2011
301Hexagon Widget         1150.98
ORD6735502/01/2011

The records are fixed length records representing a number of logical Order records. Each order record consists of a number of item records starting with a 3 digit number, followed by a single Summary Record which starts with the constant ORD.

How can this file be processed so that the first poll returns the first order?

001Square Widget            0245.98
102Triagular Widget         1120.00
403Circular Widget           0099.45
ORD8898302/01/2011

And the second poll returns the second order?

301Hexagon Widget           1150.98
ORD6735502/01/2011

Note: if you need more than one order per poll, that's also possible; see the "Multiple Messages" field in the "File Adapter Step 6 of 9" snapshot further down.

To follow along with this example you will need:
- Studio Edition Version 11.1.1.4.0, with the
- SOA Extension for JDeveloper 11.1.1.4.0 installed

Both can be downloaded from here: http://www.oracle.com/technetwork/middleware/soasuite/downloads/index.html

You will not need a running WebLogic Server domain to complete the steps and Format Builder tests in this article.

Start with a SOA Composite containing a File Adapter

The Format Builder is part of the File Adapter, so start by creating a new SOA Project and Composite. Here is a quick summary for those not familiar with these steps:

- Start JDeveloper
- From the Main Menu choose File->New
- In the New Gallery window that opens, expand the "General" category and select the Applications node. Then choose SOA Application from the Items section on the right. Finally press the OK button.
- In Step 1 of the "Create SOA Application wizard" that appears, enter an Application Name and a Directory of your choice, then press the Next button.
- In Step 2 of the "Create SOA Application wizard", press the Next button leaving all entries as defaulted.
- In Step 3 of the "Create SOA Application wizard", enter a composite name of your choice and press the Finish button.

These steps result in a new Application and SOA Project. The SOA Project contains a composite.xml file which is opened and shown below. For our example we have not defined a Mediator or a BPEL process to minimize the steps, but one or the other would eventually be needed to use the File Adapter we are about to create.

Drag and drop the File Adapter icon from the Component Palette onto either the LEFT side of the diagram under "Exposed Services" or the right side under "External References". (See the Green Circle in the image below.) Placing the adapter on the left side would indicate the file being processed is inbound to the composite; if the adapter is placed on the right side then the data is outbound to a file.

Note that the same Format Builder definition can be used in both directions.
For example, we could use the format with a File Adapter on the left side of the composite to parse fixed data into XML, modify the data in our Composite or BPEL process, and then use the same Format Builder definition with a File Adapter on the right side of the composite to write the data back out in the same fixed data format.

When the File Adapter is dropped on the Composite, the File Adapter Wizard appears. Skip past the first page, Step 1 of 9, by pressing the Next button. In Step 2 enter a service name of your choice as shown below, then press Next.

When the Native Format Builder appears, skip the welcome page by pressing Next. Also press the Next button to accept the settings on Step 3 of 9.

On Step 4, select Read File and press the Next button as shown below.

On Step 5 enter a directory that will contain a file with the input data, then press the Next button as shown below.

In Step 6, enter *.txt or another file format to select input files from the input directory mentioned in Step 5. ALSO check the "Files contain Multiple Messages" checkbox and set the "Publish Messages in Batches of" field to 1. The value can be set higher to increase the number of logical order group records returned on each poll of the file adapter. In other words, it determines the number of Orders that will be sent to each instance of a Mediator or Composite processing using the File Adapter.

Skip Step 7 by pressing the Next button. In Step 8 press the Gear Icon on the right side to load the Native Format Builder.

Native Format Builder appears

Before diving into the format, here is an overview of the process.

Approach - Bottom up

Assuming an Order is a grouping of item records and a summary record…
- Define a separate Complex Type for each Record Type found in the group. (One for itemRecord and one for summaryRecord)
- Define a Complex Type to contain the Group of Record types defined above. (LogicalOrderRecord)
- Define a top level element to represent an order. (order) The order element will be of type LogicalOrderRecord.

Defining the Format

In Step 1 select "Create new" and "Complex Type" and "Next".

In Step 2 browse to and select a file containing the test data shown at the start of this article. A link is provided at the end of this article to download a file containing the test data. Press the Next button.

In Step 3, complex types must be defined for each type of input record. Select the Root-Element and click on the Add Complex Type icon. This creates a new empty complex type definition shown below. The fastest way to create the definition is to highlight the first line of the Sample File data and drag the line onto the <new_complex_type>. Format Builder introspects the data and provides a grid to define additional fields.

Change the "Complex Type Name" to "itemRecord". Then click on the ruler to indicate the position of fixed columns. Drag the red triangle icons to the exact columns if necessary. Double click on an existing red triangle to remove an unwanted entry. In the case below, fields are defined in columns 0-3, 4-28, 29-eol.

When the field definitions are correct, press the "Generate Fields" button. Field entries named C1, C2 and C3 will be created as shown below. Click on the field names and rename them from C1->itemNum, C2->itemDesc and C3->itemCost. When all the fields are correctly defined, press OK to save the complex type.

Next, the process is repeated to define a Complex Type for the SummaryRecord.
    Select the Root-Element in the schema tree and press the New Complex Type icon. Then highlight and drag the Summary Record from the sample data onto the <new_complex_type>.

    Change the complex type name to "summaryRecord". Mark the fixed fields for Order Number and Order Date. Press the Generate Fields button and rename C1 and C2 to itemNum and orderDate respectively.

    The last complex type to be defined is a type to hold the group of items and the summary record. Select the Root-Element in the schema tree and click the New Complex Type icon. Select the "<new_complex_type>" entry and click the pencil icon.

    On the Complex Type Details page change the name and type of each input field. Change line 1 to be named item and set the Type to "itemRecord". Change line 2 to be named summary and set the Type to "summaryRecord".

    We also need to indicate that itemRecords repeat in the input file. Click the pencil icon at the right side of the item line. On the Edit Details page change the "Max Occurs" entry from 1 to UNBOUNDED.

    We also need to indicate how to identify an itemRecord. Since each item record has "." in column 32 we can use this fact to differentiate an item record from a summary record. Change the "Look Ahead" field to value 32 and enter a period in the "Look For" field. Press the OK button to save the entry.

    Finally, it's time to create a top level element to represent an order. Select the "Root-Element" in the schema tree and press the New Element icon. Click on the <new_element> and press the pencil icon. Set the Element Name to "order" and change the Data Type to "logicalOrderRecord". Press the OK button to save the element definition.

    The final definition should match the screenshot below. Press the Next button to view the definition source. Press the Test button to test the definition, then press the Green Triangle icon to run the test.

    And we are presented with an unwelcome error. The error states that the processor ran out of data while working through the definition. The processor was unable to differentiate between itemRecords and summaryRecords and therefore treated the entire file as a list of itemRecords. At end of file, the "summary" portion of the logicalOrderRecord remained unprocessed but mandatory.

    The root cause of this error is the loss of our "lookAhead" definition used to identify itemRecords. This appears to be a bug in the Native Format Builder 11.1.1.4.0. Luckily, a simple workaround exists. Press the Cancel button and return to the "Step 4 of 4" window. Manually add nxsd:lookAhead="32" nxsd:lookFor="." attributes after the maxOccurs attribute of the item element, as shown in the highlighted text below (a sketch of the patched element also appears at the end of this article).

    When the lookAhead and lookFor attributes have been added, press the Test button and on the Test page press the Green Triangle. The test is now successful; the first order in the file is returned by the File Adapter.

    Below is a complete listing of the Result XML from the right column of the screen above.

    Try running it

    The downloaded input test file and completed schema file can be used for testing without following all the Native Format Builder steps in this example. Use the following link to download a file containing the sample data.

    Download Sample Input Data

    This is a better approach than cutting and pasting the input data at the top of the article. Since the data is fixed length, it's very important to watch out for trailing spaces in the data and to ensure an eol character at the end of every line.
    The download file is correctly formatted. The final schema definition can be downloaded at the following link.

    Download Completed Schema Definition

    - Save the inputData.txt file to a known location like the xsd folder in your project.
    - Save the inputData_6.xsd file to the xsd folder in your project.
    - At Step 1 in the Native Format Builder wizard (as shown above) check the "Edit existing" radio button, then browse and select the inputData_6.xsd file.
    - At Step 2 of the Format Builder configuration wizard (as shown above) supply the path and filename for the inputData.txt file.
    - You can then proceed to the test page and run a test.
    - Remember the wizard bug will drop the lookAhead and lookFor attributes; you will need to manually add nxsd:lookAhead="32" nxsd:lookFor="." after the maxOccurs attribute of the item element in the logicalOrderRecord Complex Type (as shown above).

    Good Luck with your Format Project
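    For reference, here is a sketch of what the patched item element from the workaround above might look like inside the logicalOrderRecord complex type. The lookAhead/lookFor attributes are exactly those quoted in the article; the type reference, prefix and minOccurs value are assumptions for illustration:

    <xsd:element name="item" type="tns:itemRecord"
                 minOccurs="1" maxOccurs="unbounded"
                 nxsd:lookAhead="32" nxsd:lookFor="."/>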

    Read the article

  • Using an alternate JSON Serializer in ASP.NET Web API

    - by Rick Strahl
    The new ASP.NET Web API that Microsoft released alongside MVC 4.0 Beta last week is a great framework for building REST and AJAX APIs. I've been working with it for quite a while now and I really like the way it works and the complete set of features it provides 'in the box'. It's about time that Microsoft gets a decent API for building generic HTTP endpoints into the framework.

    DataContractJsonSerializer sucks

    As nice as Web API's overall design is, one thing still sucks: the built-in JSON serialization uses the DataContractJsonSerializer, which is just too limiting for many scenarios. The biggest issues I have with it are:

    - No support for untyped values (object, dynamic, anonymous types)
    - MS AJAX style date formatting
    - Ugly serialization formats for types like Dictionaries

    To me the most serious issue is dealing with serialization of untyped objects. I have a number of applications with AJAX front ends that dynamically reformat data from business objects to fit a specific message format that certain UI components require. The most common scenario I have there are IEnumerable query results from a database, with fields from the result set rearranged to fit the sometimes unconventional formats required for the UI components (like jqGrid for example). Creating custom types to fit these messages seems like overkill, and projections using LINQ make this much easier to code up. Alas, DataContractJsonSerializer doesn't support it. Neither does DataContractSerializer for XML output, for that matter. What this means is that you can't do stuff like this in Web API out of the box:

    public object GetAnonymousType()
    {
        return new { name = "Rick", company = "West Wind", entered = DateTime.Now };
    }

    Basically anything that doesn't have an explicit type, DataContractJsonSerializer will not let you return. FWIW, the same is true for XmlSerializer, which also doesn't work with non-typed values for serialization. The example above is obviously contrived with a hardcoded object graph, but it's not uncommon to get dynamic values returned from queries that have anonymous types for their result projections.

    Apparently there's a good possibility that Microsoft will ship Json.NET as part of the Web API RTM release. Scott Hanselman confirmed this as a footnote in his JSON Dates post a few days ago. I've heard several other people from Microsoft confirm that Json.NET will be included and be the default JSON serializer, but no details yet in what capacity it will show up. Let's hope it ends up as the default in the box. Meanwhile this post will show you how you can use it today with the Beta and get JSON that matches what you should see in the RTM version.

    What about JsonValue?

    To be fair, Web API DOES include the new JsonValue/JsonObject/JsonArray types that allow you to address some of these scenarios. JsonValue is a new type in the System.Json assembly that can be used to build up an object graph based on a dictionary. It's actually a really cool implementation of a dynamic type that allows you to create an object graph and spit it out to JSON without having to create a .NET type first. JsonValue can also receive a JSON string and parse it without having to actually load it into a .NET type (which is something that's been missing in the core framework). This is really useful if you get a JSON result from an arbitrary service and you don't want to explicitly create a mapping type for the data returned.
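    As a quick sketch of that parsing scenario (the JSON payload here is made up for the example):

    // requires: using System.Json;
    // Parse arbitrary JSON without declaring a mapping type first
    dynamic result = JsonValue.Parse("{ \"name\": \"Rick\", \"company\": \"West Wind\" }");
    string company = result.company;            // dynamic member access into the parsed graph
    string name = (string)result["name"];       // indexer access works as well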
    For serialization you can create an object structure on the fly and pass it back as part of a Web API action method like this:

    public JsonValue GetJsonValue()
    {
        dynamic json = new JsonObject();
        json.name = "Rick";
        json.company = "West Wind";
        json.entered = DateTime.Now;

        dynamic address = new JsonObject();
        address.street = "32 Kaiea";
        address.zip = "96779";
        json.address = address;

        dynamic phones = new JsonArray();
        json.phoneNumbers = phones;

        dynamic phone = new JsonObject();
        phone.type = "Home";
        phone.number = "808 123-1233";
        phones.Add(phone);

        phone = new JsonObject();
        phone.type = "Mobile";
        phone.number = "808 123-1234";
        phones.Add(phone);

        //var jsonString = json.ToString();

        return json;
    }

    which produces the following output (formatted here for easier reading):

    {
        name: "Rick",
        company: "West Wind",
        entered: "2012-03-08T15:33:19.673-10:00",
        address: {
            street: "32 Kaiea",
            zip: "96779"
        },
        phoneNumbers: [
            { type: "Home", number: "808 123-1233" },
            { type: "Mobile", number: "808 123-1234" }
        ]
    }

    If you need to build a simple JSON type on the fly these types work great. But if you have an existing type - or worse a query result/list that's already formatted - JsonValue et al. become a pain to work with. As far as I can see there's no way to just throw an object instance at JsonValue and have it convert into a JsonValue dictionary. It's a manual process.

    Using alternate Serializers in Web API

    So, currently the default serializer in Web API is DataContractJsonSerializer and I don't like it. You may not either, but luckily you can swap the serializer fairly easily. If you'd rather use the JavaScriptSerializer built into System.Web.Extensions, or Json.NET today, it's not too difficult to create a custom MediaTypeFormatter that uses these serializers and can replace or partially replace the native serializer.
    Here's a MediaTypeFormatter implementation using the ASP.NET JavaScriptSerializer:

    using System;
    using System.Net.Http.Formatting;
    using System.Threading.Tasks;
    using System.Web.Script.Serialization;
    using System.Json;
    using System.IO;

    namespace Westwind.Web.WebApi
    {
        public class JavaScriptSerializerFormatter : MediaTypeFormatter
        {
            public JavaScriptSerializerFormatter()
            {
                SupportedMediaTypes.Add(new System.Net.Http.Headers.MediaTypeHeaderValue("application/json"));
            }

            protected override bool CanWriteType(Type type)
            {
                // don't serialize JsonValue structure use default for that
                if (type == typeof(JsonValue) || type == typeof(JsonObject) || type == typeof(JsonArray))
                    return false;

                return true;
            }

            protected override bool CanReadType(Type type)
            {
                if (type == typeof(IKeyValueModel))
                    return false;

                return true;
            }

            protected override Task<object> OnReadFromStreamAsync(Type type, System.IO.Stream stream,
                System.Net.Http.Headers.HttpContentHeaders contentHeaders, FormatterContext formatterContext)
            {
                var task = Task<object>.Factory.StartNew(() =>
                {
                    var ser = new JavaScriptSerializer();
                    string json;

                    using (var sr = new StreamReader(stream))
                    {
                        json = sr.ReadToEnd();
                        sr.Close();
                    }

                    object val = ser.Deserialize(json, type);
                    return val;
                });

                return task;
            }

            protected override Task OnWriteToStreamAsync(Type type, object value, System.IO.Stream stream,
                System.Net.Http.Headers.HttpContentHeaders contentHeaders, FormatterContext formatterContext,
                System.Net.TransportContext transportContext)
            {
                var task = Task.Factory.StartNew(() =>
                {
                    var ser = new JavaScriptSerializer();
                    var json = ser.Serialize(value);

                    byte[] buf = System.Text.Encoding.Default.GetBytes(json);
                    stream.Write(buf, 0, buf.Length);
                    stream.Flush();
                });

                return task;
            }
        }
    }

    Formatter implementation is pretty simple: You override 4 methods to tell which types you can handle and then handle the input or output streams to create/parse the JSON data. Note that when creating output you want to take care to still allow JsonValue/JsonObject/JsonArray types to be handled by the default serializer so those objects serialize properly - if you let either JavaScriptSerializer or JSON.NET handle them they'd try to render the dictionaries, which is very undesirable.
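    As a usage sketch, once a formatter like this is registered (hookup code is shown further below), the anonymous-type action from the top of the post starts working. The controller here is a hypothetical example of mine, not from the original post; it assumes the Beta's System.Web.Http namespace:

    using System;
    using System.Web.Http;

    public class SamplesController : ApiController
    {
        // Serialized by the custom formatter once registered, so the
        // anonymous projection no longer requires an explicit contract type.
        public object GetAnonymousType()
        {
            return new { name = "Rick", company = "West Wind", entered = DateTime.Now };
        }
    }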
    If you'd rather use Json.NET, here's the JSON.NET version of the formatter:

    // this code requires a reference to JSON.NET in your project
    #if true
    using System;
    using System.Net.Http.Formatting;
    using System.Threading.Tasks;
    using System.Web.Script.Serialization;
    using System.Json;
    using Newtonsoft.Json;
    using System.IO;
    using Newtonsoft.Json.Converters;

    namespace Westwind.Web.WebApi
    {
        public class JsonNetFormatter : MediaTypeFormatter
        {
            public JsonNetFormatter()
            {
                SupportedMediaTypes.Add(new System.Net.Http.Headers.MediaTypeHeaderValue("application/json"));
            }

            protected override bool CanWriteType(Type type)
            {
                // don't serialize JsonValue structure use default for that
                if (type == typeof(JsonValue) || type == typeof(JsonObject) || type == typeof(JsonArray))
                    return false;

                return true;
            }

            protected override bool CanReadType(Type type)
            {
                if (type == typeof(IKeyValueModel))
                    return false;

                return true;
            }

            protected override Task<object> OnReadFromStreamAsync(Type type, System.IO.Stream stream,
                System.Net.Http.Headers.HttpContentHeaders contentHeaders, FormatterContext formatterContext)
            {
                var task = Task<object>.Factory.StartNew(() =>
                {
                    var settings = new JsonSerializerSettings()
                    {
                        NullValueHandling = NullValueHandling.Ignore,
                    };

                    var sr = new StreamReader(stream);
                    var jreader = new JsonTextReader(sr);

                    var ser = new JsonSerializer();
                    ser.Converters.Add(new IsoDateTimeConverter());

                    object val = ser.Deserialize(jreader, type);
                    return val;
                });

                return task;
            }

            protected override Task OnWriteToStreamAsync(Type type, object value, System.IO.Stream stream,
                System.Net.Http.Headers.HttpContentHeaders contentHeaders, FormatterContext formatterContext,
                System.Net.TransportContext transportContext)
            {
                var task = Task.Factory.StartNew(() =>
                {
                    var settings = new JsonSerializerSettings()
                    {
                        NullValueHandling = NullValueHandling.Ignore,
                    };

                    string json = JsonConvert.SerializeObject(value, Formatting.Indented,
                                                              new JsonConverter[1] { new IsoDateTimeConverter() });

                    byte[] buf = System.Text.Encoding.Default.GetBytes(json);
                    stream.Write(buf, 0, buf.Length);
                    stream.Flush();
                });

                return task;
            }
        }
    }
    #endif

    One advantage of the Json.NET serializer is that you can specify a few options on how things are formatted and handled. You get null value handling and you can plug in the IsoDateTimeConverter, which is nice to produce proper ISO dates that I would expect any JSON serializer to output these days.

    Hooking up the Formatters

    Once you've created the custom formatters you need to enable them for your Web API application. To do this use the GlobalConfiguration.Configuration object and add the formatter to the Formatters collection. Here's what this looks like hooked up from Application_Start in a Web project:

    protected void Application_Start(object sender, EventArgs e)
    {
        // Action based routing (used for RPC calls)
        RouteTable.Routes.MapHttpRoute(
            name: "StockApi",
            routeTemplate: "stocks/{action}/{symbol}",
            defaults: new
            {
                symbol = RouteParameter.Optional,
                controller = "StockApi"
            });

        // WebApi Configuration to hook up formatters and message handlers
        // optional
        RegisterApis(GlobalConfiguration.Configuration);
    }

    public static void RegisterApis(HttpConfiguration config)
    {
        // Add JavaScriptSerializer formatter instead - add at top to make default
        //config.Formatters.Insert(0, new JavaScriptSerializerFormatter());

        // Add Json.net formatter - add at the top so it fires first!
        // This leaves the old one in place so JsonValue/JsonObject/JsonArray still are handled
        config.Formatters.Insert(0, new JsonNetFormatter());
    }

    One thing to remember here is the GlobalConfiguration object, which is Web API's static configuration instance. I think this thing is seriously misnamed, given that GlobalConfiguration could stand for anything, and so it is hard to discover if you don't know what you're looking for. How about WebApiConfiguration or something more descriptive? Anyway, once you know what it is you can use the Formatters collection to insert your custom formatter. Note that I insert my formatter at the top of the list so it takes precedence over the default formatter. I also am not removing the old formatter because I still want JsonValue/JsonObject/JsonArray to be handled by the default serialization mechanism. Since they process in sequence and I exclude processing for these types, JsonValue et al. still get properly serialized/deserialized.

    Summary

    Currently DataContractJsonSerializer in Web API is a pain, but at least we have the ability, with relatively limited effort, to replace the MediaTypeFormatter and plug in our own JSON serializer. This is useful for many scenarios - if you have existing client applications that used MVC JsonResult or ASP.NET AJAX results from ASMX AJAX services, you can plug in the JavaScriptSerializer and get exactly the same serializer you used in the past, so your results will be the same and don't potentially break clients. JSON serializers do vary a bit in how they serialize some of the more complex types (like Dictionaries and dates, for example), so if you're migrating it might be helpful to ensure your client code doesn't break when you switch to ASP.NET Web API.

    Going forward it looks like Microsoft is planning on plugging Json.NET into Web API and making that the default. I think that's an awesome choice, since Json.NET has been around forever, is fast and easy to use, and provides a ton of functionality as part of this great library. I just wish Microsoft would have figured this out sooner instead of integrating it now at the last minute, especially given that Json.NET has a similar set of lower level JSON objects (JsonValue/JsonObject etc.) which now will end up being duplicated by the native System.Json stuff. It's not like we don't already have enough confusion regarding which JSON serializer to use (JavaScriptSerializer, DataContractJsonSerializer, JsonValue/JsonObject/JsonArray and now Json.NET). For years I've been using my own JSON serializer because the built-in choices are both limited. However, with an official endorsement of Json.NET I'm happily moving on to use that in my applications. Let's see and hope Microsoft gets this right before ASP.NET Web API goes gold.

    © Rick Strahl, West Wind Technologies, 2005-2012
    Posted in Web Api  AJAX  ASP.NET

    Read the article

  • Announcing release of ASP.NET MVC 3, IIS Express, SQL CE 4, Web Farm Framework, Orchard, WebMatrix

    - by ScottGu
    I’m excited to announce the release today of several products:

    - ASP.NET MVC 3
    - NuGet
    - IIS Express 7.5
    - SQL Server Compact Edition 4
    - Web Deploy and Web Farm Framework 2.0
    - Orchard 1.0
    - WebMatrix 1.0

    The above products are all free. They build upon the .NET 4 and VS 2010 release, and add a ton of additional value to ASP.NET (both Web Forms and MVC) and the Microsoft Web Server stack.

    ASP.NET MVC 3

    Today we are shipping the final release of ASP.NET MVC 3.  You can download and install ASP.NET MVC 3 here.  The ASP.NET MVC 3 source code (released under an OSI-compliant open source license) can also optionally be downloaded here.

    ASP.NET MVC 3 is a significant update that brings with it a bunch of great features.  Some of the improvements include:

    Razor

    ASP.NET MVC 3 ships with a new view-engine option called “Razor” (in addition to continuing to support/enhance the existing .aspx view engine).  Razor minimizes the number of characters and keystrokes required when writing a view template, and enables a fast, fluid coding workflow.

    Unlike most template syntaxes, with Razor you do not need to interrupt your coding to explicitly denote the start and end of server blocks within your HTML. The Razor parser is smart enough to infer this from your code. This enables a compact and expressive syntax which is clean, fast and fun to type.  You can learn more about Razor from some of the blog posts I’ve done about it over the last 6 months:

    - Introducing Razor
    - New @model keyword in Razor
    - Layouts with Razor
    - Server-Side Comments with Razor
    - Razor’s @: and <text> syntax
    - Implicit and Explicit code nuggets with Razor
    - Layouts and Sections with Razor

    Today’s release supports full code intellisense support for Razor (both VB and C#) with Visual Studio 2010 and the free Visual Web Developer 2010 Express.

    JavaScript Improvements

    ASP.NET MVC 3 enables richer JavaScript scenarios and takes advantage of emerging HTML5 capabilities. The AJAX and Validation helpers in ASP.NET MVC 3 now use an Unobtrusive JavaScript based approach.  Unobtrusive JavaScript avoids injecting inline JavaScript into HTML, and enables cleaner separation of behavior using the new HTML5 “data-“ attribute convention (which conveniently works on older browsers as well – including IE6). This keeps your HTML tight and clean, and makes it easier to optionally swap out or customize JS libraries.

    ASP.NET MVC 3 now includes built-in support for posting JSON-based parameters from client-side JavaScript to action methods on the server.  This makes it easier to exchange data across the client and server, and build rich JavaScript front-ends.  We think this capability will be particularly useful going forward with scenarios involving client templates and data binding (including the jQuery plugins the ASP.NET team recently contributed to the jQuery project).

    Previous releases of ASP.NET MVC included the core jQuery library.  ASP.NET MVC 3 also now ships the jQuery Validate plugin (which our validation helpers use for client-side validation scenarios).  We are also now shipping and including jQuery UI by default as well (which provides a rich set of client-side JavaScript UI widgets for you to use within projects).

    Improved Validation

    ASP.NET MVC 3 includes a bunch of validation enhancements that make it even easier to work with data. Client-side validation is now enabled by default with ASP.NET MVC 3 (using an unobtrusive JavaScript implementation).
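    As a quick illustration of the kind of model these validation features work against - this class is an example of mine, not from the announcement - the DataAnnotations attributes below drive both server-side validation and the new unobtrusive client-side validation:

    using System.ComponentModel.DataAnnotations;

    public class Customer
    {
        [Required]
        [StringLength(50)]
        public string Name { get; set; }

        [Range(18, 120, ErrorMessage = "Age must be between 18 and 120")]
        public int Age { get; set; }
    }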
    Today’s release also includes built-in support for Remote Validation – which enables you to annotate a model class with a validation attribute that causes ASP.NET MVC to perform a remote validation call to a server method when validating input on the client.

    The validation features introduced within .NET 4’s System.ComponentModel.DataAnnotations namespace are now supported by ASP.NET MVC 3.  This includes support for the new IValidatableObject interface – which enables you to perform model-level validation, and allows you to provide validation error messages specific to the state of the overall model, or between two properties within the model.

    ASP.NET MVC 3 also supports the improvements made to the ValidationAttribute class in .NET 4.  ValidationAttribute now supports a new IsValid overload that provides more information about the current validation context, such as what object is being validated.  This enables richer scenarios where you can validate the current value based on another property of the model.  We’ve shipped a built-in [Compare] validation attribute with ASP.NET MVC 3 that uses this support and makes it easy out of the box to compare and validate two property values.

    You can use any data access API or technology with ASP.NET MVC.  This past year, though, we’ve worked closely with the .NET data team to ensure that the new EF Code First library works really well for ASP.NET MVC applications.  These two posts of mine cover the latest EF Code First preview and demonstrate how to use it with ASP.NET MVC 3 to enable easy editing of data (with end to end client+server validation support).  The final release of EF Code First will ship in the next few weeks.

    Today we are also publishing the first preview of a new MvcScaffolding project.  It enables you to easily scaffold ASP.NET MVC 3 Controllers and Views, and works great with EF Code First (and is pluggable to support other data providers).  You can learn more about it – and install it via NuGet today – from Steve Sanderson’s MvcScaffolding blog post.

    Output Caching

    Previous releases of ASP.NET MVC supported output caching content at a URL or action-method level. With ASP.NET MVC 3 we are also enabling support for partial page output caching – which allows you to easily output cache regions or fragments of a response as opposed to the entire thing.  This ends up being super useful in a lot of scenarios, and enables you to dramatically reduce the work your application does on the server.

    The new partial page output caching support in ASP.NET MVC 3 enables you to easily re-use cached sub-regions/fragments of a page across multiple URLs on a site.  It supports the ability to cache the content either on the web-server, or optionally cache it within a distributed cache server like Windows Server AppFabric or memcached. I’ll post some tutorials on my blog that show how to take advantage of ASP.NET MVC 3’s new output caching support for partial page scenarios in the future.

    Better Dependency Injection

    ASP.NET MVC 3 provides better support for applying Dependency Injection (DI) and integrating with Dependency Injection/IoC containers. With ASP.NET MVC 3 you no longer need to author custom ControllerFactory classes in order to enable DI with Controllers.
    You can instead just register a Dependency Injection framework with ASP.NET MVC 3 and it will resolve dependencies not only for Controllers, but also for Views, Action Filters, Model Binders, Value Providers, Validation Providers, and Model Metadata Providers that you use within your application. This makes it much easier to cleanly integrate dependency injection within your projects.

    Other Goodies

    ASP.NET MVC 3 includes dozens of other nice improvements that help to both reduce the amount of code you write, and make the code you do write cleaner.  Here are just a few examples:

    - Improved New Project dialog that makes it easy to start new ASP.NET MVC 3 projects from templates.
    - Improved Add->View scaffolding support that enables the generation of even cleaner view templates.
    - New ViewBag property that uses .NET 4’s dynamic support to make it easy to pass late-bound data from Controllers to Views.
    - Global Filters support that allows specifying cross-cutting filter attributes (like [HandleError]) across all Controllers within an app.
    - New [AllowHtml] attribute that allows for more granular request validation when binding form-posted data to models.
    - Sessionless controller support that allows fine-grained control over whether SessionState is enabled on a Controller.
    - New ActionResult types like HttpNotFoundResult and RedirectPermanent for common HTTP scenarios.
    - New Html.Raw() helper to indicate that output should not be HTML encoded.
    - New Crypto helpers for salting and hashing passwords.
    - And much, much more…

    Learn More about ASP.NET MVC 3

    We will be posting lots of tutorials and samples on the http://asp.net/mvc site in the weeks ahead.  Below are two good ASP.NET MVC 3 tutorials available on the site today:

    - Build your First ASP.NET MVC 3 Application: VB and C#
    - Building the ASP.NET MVC 3 Music Store

    We’ll post additional ASP.NET MVC 3 tutorials and videos on the http://asp.net/mvc site in the future. Visit it regularly to find new tutorials as they are published.

    How to Upgrade Existing Projects

    ASP.NET MVC 3 is compatible with ASP.NET MVC 2 – which means it should be easy to update existing MVC projects to ASP.NET MVC 3.  The new features in ASP.NET MVC 3 build on top of the foundational work we’ve already done with the MVC 1 and MVC 2 releases – which means that the skills, knowledge, libraries, and books you’ve acquired are all directly applicable with the MVC 3 release.  MVC 3 adds new features and capabilities – it doesn’t obsolete existing ones.

    You can upgrade existing ASP.NET MVC 2 projects by following the manual upgrade steps in the release notes.  Alternatively, you can use this automated ASP.NET MVC 3 upgrade tool to easily update your existing projects.

    Localized Builds

    Today’s ASP.NET MVC 3 release is available in English.  We will be releasing localized versions of ASP.NET MVC 3 (in 9 languages) in a few days.  I’ll blog pointers to the localized downloads once they are available.

    NuGet

    Today we are also shipping NuGet – a free, open source, package manager that makes it easy for you to find, install, and use open source libraries in your projects. It works with all .NET project types (including ASP.NET Web Forms, ASP.NET MVC, WPF, WinForms, Silverlight, and Class Libraries).  You can download and install it here.

    NuGet enables developers who maintain open source projects (for example, .NET projects like Moq, NHibernate, Ninject, StructureMap, NUnit, Windsor, Raven, Elmah, etc.) to package up their libraries and register them with an online gallery/catalog that is searchable.
    The client-side NuGet tools – which include full Visual Studio integration – make it trivial for any .NET developer who wants to use one of these libraries to easily find and install it within the project they are working on. NuGet handles dependency management between libraries (for example: library1 depends on library2). It also makes it easy to update (and optionally remove) libraries from your projects later. It supports updating web.config files (if a package needs configuration settings). It also allows packages to add PowerShell scripts to a project (for example: scaffold commands). Importantly, NuGet is transparent and clean – and does not install anything at the system level. Instead it is focused on making it easy to manage libraries you use with your projects. Our goal with NuGet is to make it as simple as possible to integrate open source libraries within .NET projects.

    NuGet Gallery

    This week we also launched a beta version of the http://nuget.org web-site – which allows anyone to easily search and browse an online gallery of open source packages available via NuGet.  The site also now allows developers to optionally submit new packages that they wish to share with others.  You can learn more about how to create and share a package here. There are hundreds of open-source .NET projects already within the NuGet Gallery today.  We hope to have thousands there in the future.

    IIS Express 7.5

    Today we are also shipping IIS Express 7.5.  IIS Express is a free version of IIS 7.5 that is optimized for developer scenarios.  It works for both ASP.NET Web Forms and ASP.NET MVC project types.

    We think IIS Express combines the ease of use of the ASP.NET Web Server (aka Cassini) currently built into Visual Studio today with the full power of IIS.  Specifically:

    - It’s lightweight and easy to install (less than 5MB download and a quick install)
    - It does not require an administrator account to run/debug applications from Visual Studio
    - It enables a full web-server feature set – including SSL, URL Rewrite, and other IIS 7.x modules
    - It supports and enables the same extensibility model and web.config file settings that IIS 7.x supports
    - It can be installed side-by-side with the full IIS web server as well as the ASP.NET Development Server (they do not conflict at all)
    - It works on Windows XP and higher operating systems – giving you a full IIS 7.x developer feature-set on all Windows OS platforms

    IIS Express (like the ASP.NET Development Server) can be quickly launched to run a site from a directory on disk.  It does not require any registration/configuration steps. This makes it really easy to launch and run for development scenarios.  You can also optionally redistribute IIS Express with your own applications if you want a lightweight web-server.  The standard IIS Express EULA now includes redistributable rights.

    Visual Studio 2010 SP1 adds support for IIS Express.  Read my VS 2010 SP1 and IIS Express blog post to learn more about what it enables.

    SQL Server Compact Edition 4

    Today we are also shipping SQL Server Compact Edition 4 (aka SQL CE 4).  SQL CE is a free, embedded database engine that enables easy database storage.

    No Database Installation Required

    SQL CE does not require you to run a setup or install a database server in order to use it.  You can simply copy the SQL CE binaries into the \bin directory of your ASP.NET application, and then your web application can use it as a database engine.  No setup or extra security permissions are required for it to run.
    You do not need to have an administrator account on the machine. Just copy your web application onto any server and it will work. This is true even of medium-trust applications running in a web hosting environment.

    SQL CE runs in-memory within your ASP.NET application, will start up when you first access a SQL CE database, and will automatically shut down when your application is unloaded.  SQL CE databases are stored as files that live within the \App_Data folder of your ASP.NET applications.

    Works with Existing Data APIs

    SQL CE 4 works with existing .NET-based data APIs, and supports a SQL Server compatible query syntax.  This means you can use existing data APIs like ADO.NET, as well as use higher-level ORMs like Entity Framework and NHibernate with SQL CE.  This enables you to use the same data programming skills and data APIs you know today.

    Supports Development, Testing and Production Scenarios

    SQL CE can be used for development scenarios, testing scenarios, and light production usage scenarios.  With the SQL CE 4 release we’ve done the engineering work to ensure that SQL CE won’t crash or deadlock when used in a multi-threaded server scenario (like ASP.NET).  This is a big change from previous releases of SQL CE – which were designed for client-only scenarios and which explicitly blocked running in web-server environments.  Starting with SQL CE 4 you can use it in a web-server as well. There are no license restrictions with SQL CE.  It is also totally free.

    Tooling Support with VS 2010 SP1

    Visual Studio 2010 SP1 adds support for SQL CE 4 and ASP.NET projects.  Read my VS 2010 SP1 and SQL CE 4 blog post to learn more about what it enables.

    Web Deploy and Web Farm Framework 2.0

    Today we are also releasing Microsoft Web Deploy V2 and Microsoft Web Farm Framework V2.  These services provide a flexible and powerful way to deploy ASP.NET applications onto either a single server, or across a web farm of machines. You can learn more about these capabilities from my previous blog posts on them:

    - Introducing the Microsoft Web Farm Framework
    - Automating Deployment with Microsoft Web Deploy

    Visit the http://iis.net website to learn more and install them. Both are free.

    Orchard 1.0

    Today we are also releasing Orchard v1.0.  Orchard is a free, open source, community based project.  It provides Content Management System (CMS) and blogging system support out of the box, and makes it possible to easily create and manage web-sites without having to write code (site owners can customize a site through the browser-based editing tools built into Orchard).  Read these tutorials to learn more about how you can set up and manage your own Orchard site.

    Orchard itself is built as an ASP.NET MVC 3 application using Razor view templates (and by default uses SQL CE 4 for data storage).  Developers wishing to extend an Orchard site with custom functionality can open and edit it as a Visual Studio project – and add new ASP.NET MVC Controllers/Views to it.

    WebMatrix 1.0

    WebMatrix is a new, free, web development tool from Microsoft that provides a suite of technologies that make it easier to enable website development.  It enables a developer to start a new site by browsing and downloading an app template from an online gallery of web applications (which includes popular apps like Umbraco, DotNetNuke, Orchard, WordPress, Drupal and Joomla).  Alternatively it also enables developers to create and code web sites from scratch. WebMatrix is task focused and helps guide developers as they work on sites.
    WebMatrix includes IIS Express, SQL CE 4, and ASP.NET – providing an integrated web-server, database and programming framework combination.  It also includes built-in web publishing support which makes it easy to find and deploy sites to web hosting providers.

    You can learn more about WebMatrix from my Introducing WebMatrix blog post this summer.  Visit http://microsoft.com/web to download and install it today.

    Summary

    I’m really excited about today’s releases – they provide a bunch of additional value that makes web development with ASP.NET, Visual Studio and the Microsoft Web Server a lot better.  A lot of folks worked hard to share this with you today. On behalf of my whole team – we hope you enjoy them!

    Scott

    P.S. In addition to blogging, I am also now using Twitter for quick updates and to share links. Follow me at: twitter.com/scottgu

    Read the article

  • Cannot determine ethernet address for proxy ARP on PPTP

    - by Linux Intel
    I installed a PPTP server on a CentOS 6 64-bit server.

    PPTP Server IP: 55.66.77.10
    PPTP Local IP: 10.0.0.1
    Client1 IP: 10.0.0.60 (CentOS 5 64-bit)
    Client2 IP: 10.0.0.61 (CentOS 5 64-bit)

    The PPTP server can ping Client1, and Client1 can ping the PPTP server. The PPTP server can ping Client2, and Client2 can ping the PPTP server. The problem is that Client1 cannot ping Client2, and I also get this error in the PPTP server error log:

    Cannot determine ethernet address for proxy ARP

    Ping from Client2 to Client1:

    PING 10.0.0.60 (10.0.0.60) 56(84) bytes of data.
    --- 10.0.0.60 ping statistics ---
    6 packets transmitted, 0 received, 100% packet loss, time 5000ms

    route -n on PPTP server:

    Destination   Gateway       Genmask          Flags  Metric  Ref  Use  Iface
    10.0.0.60     0.0.0.0       255.255.255.255  UH     0       0    0    ppp0
    10.0.0.61     0.0.0.0       255.255.255.255  UH     0       0    0    ppp1
    55.66.77.10   0.0.0.0       255.255.255.248  U      0       0    0    eth0
    10.0.0.0      0.0.0.0       255.0.0.0        U      0       0    0    eth0
    0.0.0.0       55.66.77.19   0.0.0.0          UG     0       0    0    eth0

    route -n on Client1:

    Destination   Gateway       Genmask          Flags  Metric  Ref  Use  Iface
    10.0.0.1      0.0.0.0       255.255.255.255  UH     0       0    0    ppp0
    55.66.77.10   70.14.13.19   255.255.255.255  UGH    0       0    0    eth0
    10.0.0.0      0.0.0.0       255.0.0.0        U      0       0    0    eth1
    0.0.0.0       70.14.13.19   0.0.0.0          UG     0       0    0    eth0

    route -n on Client2:

    Destination   Gateway       Genmask          Flags  Metric  Ref  Use  Iface
    10.0.0.1      0.0.0.0       255.255.255.255  UH     0       0    0    ppp0
    55.66.77.10   84.56.120.60  255.255.255.255  UGH    0       0    0    eth1
    10.0.0.0      0.0.0.0       255.0.0.0        U      0       0    0    eth0
    0.0.0.0       84.56.120.60  0.0.0.0          UG     0       0    0    eth1

    cat /etc/ppp/options.pptpd on the PPTP server:

    ###############################################################################
    # $Id: options.pptpd,v 1.11 2005/12/29 01:21:09 quozl Exp $
    #
    # Sample Poptop PPP options file /etc/ppp/options.pptpd
    # Options used by PPP when a connection arrives from a client.
    # This file is pointed to by /etc/pptpd.conf option keyword.
    # Changes are effective on the next connection.  See "man pppd".
    #
    # You are expected to change this file to suit your system.  As
    # packaged, it requires PPP 2.4.2 and the kernel MPPE module.
    ###############################################################################

    # Authentication

    # Name of the local system for authentication purposes
    # (must match the second field in /etc/ppp/chap-secrets entries)
    name pptpd

    # Strip the domain prefix from the username before authentication.
    # (applies if you use pppd with chapms-strip-domain patch)
    #chapms-strip-domain

    # Encryption
    # (There have been multiple versions of PPP with encryption support,
    # choose which of the following sections you will use.)

    # BSD licensed ppp-2.4.2 upstream with MPPE only, kernel module ppp_mppe.o
    # {{{
    refuse-pap
    refuse-chap
    refuse-mschap

    # Require the peer to authenticate itself using MS-CHAPv2 [Microsoft
    # Challenge Handshake Authentication Protocol, Version 2] authentication.
    require-mschap-v2

    # Require MPPE 128-bit encryption
    # (note that MPPE requires the use of MSCHAP-V2 during authentication)
    require-mppe-128
    # }}}

    # OpenSSL licensed ppp-2.4.1 fork with MPPE only, kernel module mppe.o
    # {{{
    #-chap
    #-chapms

    # Require the peer to authenticate itself using MS-CHAPv2 [Microsoft
    # Challenge Handshake Authentication Protocol, Version 2] authentication.
    #+chapms-v2

    # Require MPPE encryption
    # (note that MPPE requires the use of MSCHAP-V2 during authentication)
    #mppe-40       # enable either 40-bit or 128-bit, not both
    #mppe-128
    #mppe-stateless
    # }}}

    # Network and Routing

    # If pppd is acting as a server for Microsoft Windows clients, this
    # option allows pppd to supply one or two DNS (Domain Name Server)
    # addresses to the clients.
    # The first instance of this option
    # specifies the primary DNS address; the second instance (if given)
    # specifies the secondary DNS address.
    #ms-dns 10.0.0.1
    #ms-dns 10.0.0.2

    # If pppd is acting as a server for Microsoft Windows or "Samba"
    # clients, this option allows pppd to supply one or two WINS (Windows
    # Internet Name Services) server addresses to the clients.  The first
    # instance of this option specifies the primary WINS address; the
    # second instance (if given) specifies the secondary WINS address.
    #ms-wins 10.0.0.3
    #ms-wins 10.0.0.4

    # Add an entry to this system's ARP [Address Resolution Protocol]
    # table with the IP address of the peer and the Ethernet address of this
    # system.  This will have the effect of making the peer appear to other
    # systems to be on the local ethernet.
    # (you do not need this if your PPTP server is responsible for routing
    # packets to the clients -- James Cameron)
    proxyarp

    # Normally pptpd passes the IP address to pppd, but if pptpd has been
    # given the delegate option in pptpd.conf or the --delegate command line
    # option, then pppd will use chap-secrets or radius to allocate the
    # client IP address.  The default local IP address used at the server
    # end is often the same as the address of the server.  To override this,
    # specify the local IP address here.
    # (you must not use this unless you have used the delegate option)
    #10.8.0.100

    # Logging

    # Enable connection debugging facilities.
    # (see your syslog configuration for where pppd sends to)
    debug

    # Print out all the option values which have been set.
    # (often requested by mailing list to verify options)
    #dump

    # Miscellaneous

    # Create a UUCP-style lock file for the pseudo-tty to ensure exclusive
    # access.
    lock

    # Disable BSD-Compress compression
    nobsdcomp

    # Disable Van Jacobson compression
    # (needed on some networks with Windows 9x/ME/XP clients, see posting to
    # poptop-server on 14th April 2005 by Pawel Pokrywka and followups,
    # http://marc.theaimsgroup.com/?t=111343175400006&r=1&w=2 )
    novj
    novjccomp

    # turn off logging to stderr, since this may be redirected to pptpd,
    # which may trigger a loopback
    nologfd

    # put plugins here
    # (putting them higher up may cause them to sent messages to the pty)

    cat /etc/ppp/options.pptp on Client1 and Client2:

    ###############################################################################
    # $Id: options.pptp,v 1.3 2006/03/26 23:11:05 quozl Exp $
    #
    # Sample PPTP PPP options file /etc/ppp/options.pptp
    # Options used by PPP when a connection is made by a PPTP client.
    # This file can be referred to by an /etc/ppp/peers file for the tunnel.
    # Changes are effective on the next connection.  See "man pppd".
    #
    # You are expected to change this file to suit your system.  As
    # packaged, it requires PPP 2.4.2 or later from http://ppp.samba.org/
    # and the kernel MPPE module available from the CVS repository also on
    # http://ppp.samba.org/, which is packaged for DKMS as kernel_ppp_mppe.
    ###############################################################################

    # Lock the port
    lock

    # Authentication

    # We don't need the tunnel server to authenticate itself
    noauth

    # We won't do PAP, EAP, CHAP, or MSCHAP, but we will accept MSCHAP-V2
    # (you may need to remove these refusals if the server is not using MPPE)
    refuse-pap
    refuse-eap
    refuse-chap
    refuse-mschap

    # Compression
    # Turn off compression protocols we know won't be used
    nobsdcomp
    nodeflate

    # Encryption
    # (There have been multiple versions of PPP with encryption support,
    # choose which of the following sections you will use.
    # Note that MPPE
    # requires the use of MSCHAP-V2 during authentication)
    #
    # Note that using PPTP with MPPE and MSCHAP-V2 should be considered
    # insecure:
    # http://marc.info/?l=pptpclient-devel&m=134372640219039&w=2
    # https://github.com/moxie0/chapcrack/blob/master/README.md
    # http://technet.microsoft.com/en-us/security/advisory/2743314

    # http://ppp.samba.org/ the PPP project version of PPP by Paul Mackerras
    # ppp-2.4.2 or later with MPPE only, kernel module ppp_mppe.o
    # If the kernel is booted in FIPS mode (fips=1), the ppp_mppe.ko module
    # is not allowed and PPTP-MPPE is not available.
    # {{{
    # Require MPPE 128-bit encryption
    #require-mppe-128
    # }}}

    # http://mppe-mppc.alphacron.de/ fork from PPP project by Jan Dubiec
    # ppp-2.4.2 or later with MPPE and MPPC, kernel module ppp_mppe_mppc.o
    # {{{
    # Require MPPE 128-bit encryption
    #mppe required,stateless
    # }}}

    iptables is stopped on the clients and the server, and net.ipv4.ip_forward = 1 is enabled on the PPTP server. How can I solve this problem?

    Read the article

  • Rendering ASP.NET Script References into the Html Header

    - by Rick Strahl
    One thing that I’ve come to appreciate in control development in ASP.NET that uses JavaScript is the ability to have more control over script and script include placement than ASP.NET provides natively. Specifically, in ASP.NET you can use either the ClientScriptManager or ScriptManager to embed scripts and script references into pages via code. This works reasonably well, but the script references that get generated are rendered into the HTML body and there’s very little operational control over the placement of scripts.

    If you have multiple controls or several of the same control that need to place the same scripts onto the page, it’s not difficult to end up with scripts that render in the wrong order and stop working correctly. This is especially critical if you load script libraries with dependencies, either via resources or even if you are rendering references to CDN resources.

    Natively ASP.NET provides a host of methods that help with embedding scripts into the page via either Page.ClientScript or the ASP.NET ScriptManager control (both with slightly different syntax):

    - RegisterClientScriptBlock: Renders a script block at the top of the HTML body and should be used for embedding callable functions/classes.
    - RegisterStartupScript: Renders a script block just prior to the </form> tag and should be used for embedding code that should execute when the page is first loaded. Not recommended – use jQuery.ready() or equivalent load-time routines.
    - RegisterClientScriptInclude: Embeds a reference to a script from a url into the page.
    - RegisterClientScriptResource: Embeds a reference to a script from a resource file, generating a long resource file string.

    All 4 of these methods render their <script> tags into the HTML body. The script blocks give you a little bit of control by having a ‘top’ and ‘bottom’ of the document location, which gives you some flexibility over script placement and precedence. Script includes and resource urls unfortunately do not even get that much control – references are simply rendered into the page in the order of declaration.

    The ASP.NET ScriptManager control facilitates this task a little bit with the ability to specify scripts in code and the ability to programmatically check what scripts have already been registered, but it doesn’t provide any more control over the script rendering process itself. Further, the ScriptManager is a bear to deal with generically because generic code has to always check and see if it is actually present.

    Some time ago I posted a ClientScriptProxy class that helps with managing the latter process of sending script references either to ClientScript or to ScriptManager if it’s available. Since I last posted about this there have been a number of improvements in this API, one of which is the ability to control placement of scripts and script includes in the page, which I think is rather important and a missing feature in the ASP.NET native functionality.
    Handling ScriptRenderModes

    One of the big enhancements that I’ve come to rely on is the ability of the various script rendering functions described above to support rendering in multiple locations:

    /// <summary>
    /// Determines how scripts are included into the page
    /// </summary>
    public enum ScriptRenderModes
    {
        /// <summary>
        /// Inherits the setting from the control or from the ClientScript.DefaultScriptRenderMode
        /// </summary>
        Inherit,
        /// <summary>
        /// Renders the script include at the location of the control
        /// </summary>
        Inline,
        /// <summary>
        /// Renders the script include into the bottom of the header of the page
        /// </summary>
        Header,
        /// <summary>
        /// Renders the script include into the top of the header of the page
        /// </summary>
        HeaderTop,
        /// <summary>
        /// Uses ClientScript or ScriptManager to embed the script include to
        /// provide standard ASP.NET style rendering in the HTML body.
        /// </summary>
        Script,
        /// <summary>
        /// Renders script at the bottom of the page before the last Page.Controls
        /// literal control. Note this may result in unexpected behavior
        /// if /body and /html are not the last thing in the markup page.
        /// </summary>
        BottomOfPage
    }

    This enum is then applied to the various Register functions to allow more control over where scripts actually show up. Why is this useful? For me I often render scripts out of control resources, and these scripts often include things like a JavaScript library (jQuery) and a few plug-ins. The order in which these can be loaded is critical so that jQuery.js always loads before any plug-in, for example. Typically I end up with a general script layout like this:

    - Core libraries: HeaderTop
    - Plug-ins: Header
    - Script blocks: Header or Script depending on other dependencies

    There’s also an option to render scripts and CSS at the very bottom of the page before the last Page control on the page, which can be useful for speeding up page load when lots of scripts are loaded.

    The API syntax of the ClientScriptProxy methods is closely compatible with ScriptManager’s, using static methods and control references to gain access to the page and embed scripts.
    For example, to render some script into the current page in the header:

    // Create script block in header
    ClientScriptProxy.Current.RegisterClientScriptBlock(this, typeof(ControlResources),
        "hello_function",
        "function helloWorld() { alert('hello'); }", true,
        ScriptRenderModes.Header);

    // Same again - shouldn't be rendered because it's the same id
    ClientScriptProxy.Current.RegisterClientScriptBlock(this, typeof(ControlResources),
        "hello_function",
        "function helloWorld() { alert('hello'); }", true,
        ScriptRenderModes.Header);

    // Create a second script block in header
    ClientScriptProxy.Current.RegisterClientScriptBlock(this, typeof(ControlResources),
        "hello_function2",
        "function helloWorld2() { alert('hello2'); }", true,
        ScriptRenderModes.Header);

    // This just calls ClientScript and renders into bottom of document
    ClientScriptProxy.Current.RegisterStartupScript(this, typeof(ControlResources),
        "call_hello", "helloWorld();helloWorld2();", true);

    which generates:

    <html xmlns="http://www.w3.org/1999/xhtml" >
    <head><title>
    </title>
    <script type="text/javascript">
    function helloWorld() { alert('hello'); }
    </script>
    <script type="text/javascript">
    function helloWorld2() { alert('hello2'); }
    </script>
    </head>
    <body>
    …
    <script type="text/javascript">
    //<![CDATA[
    helloWorld();helloWorld2();//]]>
    </script>
    </form>
    </body>
    </html>

    Note that the scripts are generated into the header rather than the body, except for the last script block which is the call to RegisterStartupScript. In general I wouldn’t recommend using RegisterStartupScript – ever. It’s a much better practice to use a script-based load event to handle ‘startup’ code that should fire when the page first loads. So instead of the code above I’d actually recommend doing:

    ClientScriptProxy.Current.RegisterClientScriptBlock(this, typeof(ControlResources),
        "call_hello",
        "$().ready( function() { alert('hello2'); });", true,
        ScriptRenderModes.Header);

    assuming you’re using jQuery on the page.

    For script includes from a Url, the following demonstrates how to embed scripts into the header.
    This example injects a jQuery and jQuery.UI script reference from the Google CDN, then checks each with a script block to ensure that it has loaded, and if not loads it from a local server location:

    // load jquery from CDN
    ClientScriptProxy.Current.RegisterClientScriptInclude(this, typeof(ControlResources),
        "http://ajax.googleapis.com/ajax/libs/jquery/1.3.2/jquery.min.js",
        ScriptRenderModes.HeaderTop);

    // check if jquery loaded - if it didn't we're not online
    string scriptCheck =
        @"if (typeof jQuery != 'object')
              document.write(unescape(""%3Cscript src='{0}' type='text/javascript'%3E%3C/script%3E""));";

    string jQueryUrl = ClientScriptProxy.Current.GetWebResourceUrl(this, typeof(ControlResources),
        ControlResources.JQUERY_SCRIPT_RESOURCE);

    ClientScriptProxy.Current.RegisterClientScriptBlock(this, typeof(ControlResources),
        "jquery_register", string.Format(scriptCheck, jQueryUrl), true,
        ScriptRenderModes.HeaderTop);

    // Load jquery-ui from cdn
    ClientScriptProxy.Current.RegisterClientScriptInclude(this, typeof(ControlResources),
        "http://ajax.googleapis.com/ajax/libs/jqueryui/1.7.2/jquery-ui.min.js",
        ScriptRenderModes.Header);

    // check if we need to load from local
    string jQueryUiUrl = ResolveUrl("~/scripts/jquery-ui-custom.min.js");

    ClientScriptProxy.Current.RegisterClientScriptBlock(this, typeof(ControlResources),
        "jqueryui_register", string.Format(scriptCheck, jQueryUiUrl), true,
        ScriptRenderModes.Header);

    // Create script block in header
    ClientScriptProxy.Current.RegisterClientScriptBlock(this, typeof(ControlResources),
        "hello_function",
        "$().ready( function() { alert('hello'); });", true,
        ScriptRenderModes.Header);

    which in turn generates this HTML:

    <html xmlns="http://www.w3.org/1999/xhtml" >
    <head>
    <script src="http://ajax.googleapis.com/ajax/libs/jquery/1.3.2/jquery.min.js" type="text/javascript"></script>
    <script type="text/javascript">
    if (typeof jQuery != 'object')
        document.write(unescape("%3Cscript src='/WestWindWebToolkitWeb/WebResource.axd?d=DIykvYhJ_oXCr-TA_dr35i4AayJoV1mgnQAQGPaZsoPM2LCdvoD3cIsRRitHKlKJfV5K_jQvylK7tsqO3lQIFw2&t=633979863959332352' type='text/javascript'%3E%3C/script%3E"));
    </script>
    <title>
    </title>
    <script src="http://ajax.googleapis.com/ajax/libs/jqueryui/1.7.2/jquery-ui.min.js" type="text/javascript"></script>
    <script type="text/javascript">
    if (typeof jQuery != 'object')
        document.write(unescape("%3Cscript src='/WestWindWebToolkitWeb/scripts/jquery-ui-custom.min.js' type='text/javascript'%3E%3C/script%3E"));
    </script>
    <script type="text/javascript">
    $().ready(function() {
        alert('hello');
    });
    </script>
    </head>
    <body>
    …
    </body>
    </html>

    As you can see there’s a bit more control in this process, as you can inject both script includes and script blocks into the document at the top or bottom of the header, plus, if necessary, at the usual body locations. This is quite useful, especially if you create custom server controls that interoperate with script and have certain dependencies. The above is a good example of a useful switchable routine where you can switch where scripts load from by default – the above pulls from the Google CDN, but a configuration switch may automatically switch to pull from the local development copies if you’re doing development, for example.

    How does it work?

    As mentioned, the ClientScriptProxy object mimics many of the ScriptManager script related methods and so provides close API compatibility with it, although it contains many additional overloads that enhance functionality.
    It does however work against ScriptManager if it’s available on the page, or Page.ClientScript if it’s not, so it provides a single unified frontend to script access. There are however many overloads of the original SM methods like the above to provide additional functionality.

    The implementation of script header rendering is pretty straightforward – as long as a server header (i.e. it has to have runat="server" set) is available. Otherwise these routines fall back to using the default document-level insertions of ScriptManager/ClientScript. Given that there is a server header, it’s relatively easy to generate the script tags and code and append them to the header, either at the top or bottom. I suspect Microsoft didn’t provide header rendering functionality precisely because a runat="server" header is not required by ASP.NET, so behavior would be slightly unpredictable. That’s not really a problem for a custom implementation however.

    Here’s the RegisterClientScriptBlock implementation that takes a ScriptRenderModes parameter to allow header rendering:

    /// <summary>
    /// Renders client script block with the option of rendering the script block in
    /// the Html header
    ///
    /// For this to work Header must be defined as runat="server"
    /// </summary>
    /// <param name="control">any control instance, typically the page</param>
    /// <param name="type">Type that identifies this rendering</param>
    /// <param name="key">unique script block id</param>
    /// <param name="script">The script code to render</param>
    /// <param name="addScriptTags">Ignored for header rendering, used for all other insertions</param>
    /// <param name="renderMode">Where the block is rendered</param>
    public void RegisterClientScriptBlock(Control control, Type type, string key, string script,
                                          bool addScriptTags, ScriptRenderModes renderMode)
    {
        if (renderMode == ScriptRenderModes.Inherit)
            renderMode = DefaultScriptRenderMode;

        if (control.Page.Header == null ||
            renderMode != ScriptRenderModes.HeaderTop &&
            renderMode != ScriptRenderModes.Header &&
            renderMode != ScriptRenderModes.BottomOfPage)
        {
            RegisterClientScriptBlock(control, type, key, script, addScriptTags);
            return;
        }

        // No dupes - ref script include only once
        const string identifier = "scriptblock_";
        if (HttpContext.Current.Items.Contains(identifier + key))
            return;

        HttpContext.Current.Items.Add(identifier + key, string.Empty);

        StringBuilder sb = new StringBuilder();

        // Embed in header
        sb.AppendLine("\r\n<script type=\"text/javascript\">");
        sb.AppendLine(script);
        sb.AppendLine("</script>");

        int? index = HttpContext.Current.Items["__ScriptResourceIndex"] as int?;
        if (index == null)
            index = 0;

        if (renderMode == ScriptRenderModes.HeaderTop)
        {
            control.Page.Header.Controls.AddAt(index.Value, new LiteralControl(sb.ToString()));
            index++;
        }
        else if (renderMode == ScriptRenderModes.Header)
            control.Page.Header.Controls.Add(new LiteralControl(sb.ToString()));
        else if (renderMode == ScriptRenderModes.BottomOfPage)
            control.Page.Controls.AddAt(control.Page.Controls.Count - 1, new LiteralControl(sb.ToString()));

        HttpContext.Current.Items["__ScriptResourceIndex"] = index;
    }

    Note that the routine has to keep track of items inserted by id so that if the same item is added again with the same key it won’t generate two script entries. Additionally the code has to keep track of how many insertions have been made at the top of the document so that entries are added in the proper order.
The RegisterScriptInclude method is similar, but there's some additional logic in here to deal with script file references and ClientScriptProxy's (optional) custom resource handler that provides script compression:

/// <summary>
/// Registers a client script reference into the page with the option to specify
/// the script location in the page
/// </summary>
/// <param name="control">Any control instance - typically page</param>
/// <param name="type">Type that acts as qualifier (uniqueness)</param>
/// <param name="url">the Url to the script resource</param>
/// <param name="renderMode">Determines where the script is rendered</param>
public void RegisterClientScriptInclude(Control control, Type type, string url, ScriptRenderModes renderMode)
{
    const string STR_ScriptResourceIndex = "__ScriptResourceIndex";

    if (string.IsNullOrEmpty(url))
        return;

    if (renderMode == ScriptRenderModes.Inherit)
        renderMode = DefaultScriptRenderMode;

    // Extract just the script filename
    string fileId = null;

    // Check resource IDs and try to match to mapped file resources
    // Used to allow scripts not to be loaded more than once whether
    // embedded manually (script tag) or via resources with ClientScriptProxy
    if (url.Contains(".axd?r="))
    {
        string res = HttpUtility.UrlDecode(StringUtils.ExtractString(url, "?r=", "&", false, true));
        foreach (ScriptResourceAlias item in ScriptResourceAliases)
        {
            if (item.Resource == res)
            {
                fileId = item.Alias + ".js";
                break;
            }
        }
        if (fileId == null)
            fileId = url.ToLower();
    }
    else
        fileId = Path.GetFileName(url).ToLower();

    // No dupes - ref script include only once
    const string identifier = "script_";
    if (HttpContext.Current.Items.Contains(identifier + fileId))
        return;
    HttpContext.Current.Items.Add(identifier + fileId, string.Empty);

    // just use script manager or ClientScriptManager
    if (control.Page.Header == null ||
        renderMode == ScriptRenderModes.Script ||
        renderMode == ScriptRenderModes.Inline)
    {
        RegisterClientScriptInclude(control, type, url, url);
        return;
    }

    // Retrieve script index in header
    int? index = HttpContext.Current.Items[STR_ScriptResourceIndex] as int?;
    if (index == null)
        index = 0;

    StringBuilder sb = new StringBuilder(256);

    url = WebUtils.ResolveUrl(url);

    // Embed in header
    sb.AppendLine("\r\n<script src=\"" + url + "\" type=\"text/javascript\"></script>");

    if (renderMode == ScriptRenderModes.HeaderTop)
    {
        control.Page.Header.Controls.AddAt(index.Value, new LiteralControl(sb.ToString()));
        index++;
    }
    else if (renderMode == ScriptRenderModes.Header)
        control.Page.Header.Controls.Add(new LiteralControl(sb.ToString()));
    else if (renderMode == ScriptRenderModes.BottomOfPage)
        control.Page.Controls.AddAt(control.Page.Controls.Count - 1, new LiteralControl(sb.ToString()));

    HttpContext.Current.Items[STR_ScriptResourceIndex] = index;
}

There's a little more code here that deals with cleaning up the passed-in URL, and also some custom handling of script resources that run through the ScriptCompressionModule – any script resources loaded in this fashion are automatically cached based on the resource id. Raw URLs extract just the filename from the URL and cache based on that. All of this is there to avoid doubling up scripts when the same include is registered multiple times – by multiple instances of the same control, for example, or by several controls that all load the same resources/includes.
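To see how this plays out from the consumer side, here's a minimal sketch of a custom control that registers its script dependency from every instance – the control class and script path are hypothetical, and the Westwind.Web.Controls namespace is an assumption about where ClientScriptProxy lives in the toolkit:

using System;
using System.Web.UI;
using Westwind.Web.Controls;   // assumed namespace for ClientScriptProxy

// Hypothetical control: every instance needs jquery.js on the page
public class DatePickerControl : Control
{
    protected override void OnPreRender(EventArgs e)
    {
        base.OnPreRender(e);

        // Safe to call from every instance - the proxy tracks the include
        // in HttpContext.Current.Items and renders it into the header once.
        ClientScriptProxy.Current.RegisterClientScriptInclude(
            this, typeof(DatePickerControl),
            ResolveUrl("~/scripts/jquery.js"),
            ScriptRenderModes.HeaderTop);
    }
}

Drop two or three of these on a page and the rendered header still contains a single script tag for jquery.js.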
Finally, RegisterClientScriptResource utilizes the previous method to wrap the WebResourceUrl, as well as some custom functionality for the resource compression module:

/// <summary>
/// Returns a WebResource or ScriptResource URL for script resources that are to be
/// embedded as script includes.
/// </summary>
/// <param name="control">Any control</param>
/// <param name="type">A type in the assembly where resources are located</param>
/// <param name="resourceName">Name of the resource to load</param>
/// <param name="renderMode">Determines where in the document the link is rendered</param>
public void RegisterClientScriptResource(Control control, Type type,
                                         string resourceName, ScriptRenderModes renderMode)
{
    string resourceUrl = GetClientScriptResourceUrl(control, type, resourceName);
    RegisterClientScriptInclude(control, type, resourceUrl, renderMode);
}

/// <summary>
/// Works like GetWebResourceUrl but can be used with javascript resources
/// to allow use of resource compression (if the module is loaded).
/// </summary>
/// <param name="control"></param>
/// <param name="type"></param>
/// <param name="resourceName"></param>
/// <returns></returns>
public string GetClientScriptResourceUrl(Control control, Type type, string resourceName)
{
#if IncludeScriptCompressionModuleSupport
    // If the wwScriptCompression module is loaded through web.config, use it to compress
    // script resources by using the wwSC.axd Url the module intercepts
    if (ScriptCompressionModule.ScriptCompressionModuleActive)
    {
        string url = "~/wwSC.axd?r=" + HttpUtility.UrlEncode(resourceName);
        if (type.Assembly != GetType().Assembly)
            url += "&t=" + HttpUtility.UrlEncode(type.FullName);
        return WebUtils.ResolveUrl(url);
    }
#endif
    return control.Page.ClientScript.GetWebResourceUrl(type, resourceName);
}

This code merely retrieves the resource URL and then calls back to RegisterClientScriptInclude with the URL to be embedded, which means there's nothing specific to deal with other than the custom compression module logic – nice and easy.

What else is there in ClientScriptProxy?

ClientScriptProxy also provides a few other useful services beyond what I've already covered here:

Transparent ScriptManager and ClientScript calls – ClientScriptProxy includes a host of routines that help figure out whether a script manager is available or not, and all functions in this class call the appropriate object – ScriptManager or ClientScript – that is available on the current page, to ensure that scripts get embedded into pages properly. This is especially useful for control development, where controls have no control over the scripting environment in place on the page.

RegisterCssLink and RegisterCssResource – Much like the script embedding functions, these two methods allow embedding of CSS links. CSS links are appended to the header or to a form declared with runat="server".

LoadControlScript – A high-level resource loading routine that can be used to easily switch between different script linking modes. It supports loading from a WebResource, from a url, or not loading anything at all. This is very useful if you build controls that deal with specification of resource urls/ids in a standard way.

Check out the full code

You can check out the full code to the ClientScriptProxy class here: ClientScriptProxy.cs ClientScriptProxy Documentation (class reference) Note that ClientScriptProxy has a few dependencies in the West Wind Web Toolkit, of which it is a part.
ControlResources holds a few standard constants and script resource links, and the ScriptCompressionModule is referenced in a few of the script inclusion methods. There's also another useful companion control to the ClientScriptProxy, ScriptContainer, which allows scripts to be placed into the page's markup, including the ability to specify the script location and script minification options. You can find all the dependencies in the West Wind Web Toolkit repository: West Wind Web Toolkit Repository West Wind Web Toolkit Home Page © Rick Strahl, West Wind Technologies, 2005-2010. Posted in ASP.NET, JavaScript

    Read the article

  • Guide to reduce TFS database growth using the Test Attachment Cleaner

    - by terje
Recently there have been several reports of TFS databases growing too fast and too big. Notably, this has been observed when one starts to use more features of the testing system. Also, TFS 2010 handles test results differently from TFS 2008, and this leads to more data being stored in the TFS databases. As a consequence, some tools have been released to remove unneeded data from the database, together with some fixes to correct bugs which were found during this process. Further, some preventive practices and maintenance rules should be adopted. A lot of people have blogged about this, among them:

Anu's very important blog post here describes both the problem and solutions to handle it. She describes both the Test Attachment Cleaner tool and some QFE/CU releases that fix underlying bugs which prevented the tool from being fully effective.
Brian Harry's blog post here describes the problem too.
This forum thread describes the problem with some solution hints.
Ravi Shanker's blog post here describes best practices on solving this (TBP).
Grant Holliday's blog post here describes strategies to use the Test Attachment Cleaner both to detect space problems and to rectify them.

The problem can be divided into the following areas:

Publishing of test results from builds
Publishing of manual test results, and their attachments in particular
Publishing of deployment binaries for use during a test run
Bugs in SQL Server preventing total cleanup of data

(All the published data above is published into the TFS database as attachments.) The test results will include all data being collected during the run. Some of this data can grow rather large, like IntelliTrace logs and video recordings. Also the pushing of binaries, which happens for automated test runs – including tests run during a build using code coverage, which will include all the files in the deployment folder – contributes a lot to the size of the attached data. In order to handle this systematically, I have set up a 3-stage process:

Find out if you have a database space issue
Set up your TFS server to minimize potential database issues
If you have the "problem", clean up the database and otherwise keep it clean

Analyze the data

Are your database(s) growing? Are unused test results growing out of proportion? To find out about this you need to query your TFS database for some of the information, and use the Test Attachment Cleaner (TAC) to obtain some more detailed information. If you don't have too many databases you can use the SQL Server reports from within Management Studio to analyze the database and table sizes. Or, you can use a set of queries. I often find queries faster to use because I can tweak them the way I want. But be aware that these queries are undocumented and unsupported, and may change when the product team wants to change them. If you have multiple project collections, find out which might have problems. (Disclaimer: the queries below work on TFS 2010. They will not work on Dev-11, since the table structure has been changed. I will try to update them for Dev-11 when it is released.) Open a SQL Management Studio session onto the SQL Server where you have your TFS databases. Use the query below to find the project collection databases and their sizes, in descending size order.
use master
select DB_NAME(database_id) AS DBName, (size/128) SizeInMB
FROM sys.master_files
where type=0
  and substring(db_name(database_id),1,4)='Tfs_'
  and DB_NAME(database_id)<>'Tfs_Configuration'
order by size desc

Doing this on one of our SQL servers gives the following results: it is pretty easy to see which collection to start the work on.

Find out which tables are possibly too large

Keep a special watch out for the Tfs_Attachment table. Use the script at the bottom of Grant's blog to find the table sizes in descending size order. In our case we got this result: from Grant's blog we learnt that tbl_Content is in the Version Control category, so the only major issue we have here is tbl_AttachmentContent.

Find out which team projects have possibly too large attachments

In order to use the TAC to find and eventually delete attachment data, we need to find out which team projects have these attachments. The team project is a required parameter to the TAC. Use the following query to find this, replacing the collection database name with whatever applies in your case:

use Tfs_DefaultCollection
select p.projectname, sum(a.compressedlength)/1024/1024 as sizeInMB
from dbo.tbl_Attachment as a
inner join tbl_testrun as tr on a.testrunid=tr.testrunid
inner join tbl_project as p on p.projectid=tr.projectid
group by p.projectname
order by sum(a.compressedlength) desc

In our case we got this result (we had to remove some names), out of more than 100 team projects accumulated over quite some years. As can be seen here, it is pretty obvious that "Byggtjeneste – Projects" is the main team project to take care of, with the ones on lines 2-4 as the next ones.

Check which attachment types take up the most space

It can be nice to know which attachment types take up the space, so run the following query:

use Tfs_DefaultCollection
select a.attachmenttype, sum(a.compressedlength)/1024/1024 as sizeInMB
from dbo.tbl_Attachment as a
inner join tbl_testrun as tr on a.testrunid=tr.testrunid
inner join tbl_project as p on p.projectid=tr.projectid
group by a.attachmenttype
order by sum(a.compressedlength) desc

We then got this result: from this it is pretty obvious that the problem here is the binary files, as also mentioned in Anu's blog.

Check which file types, by their extension, take up the most space

Run the following query:

use Tfs_DefaultCollection
select SUBSTRING(filename,len(filename)-CHARINDEX('.',REVERSE(filename))+2,999) as Extension,
       sum(compressedlength)/1024 as SizeInKB
from tbl_Attachment
group by SUBSTRING(filename,len(filename)-CHARINDEX('.',REVERSE(filename))+2,999)
order by sum(compressedlength) desc

This gives a result like this. Now you should have collected enough information to tell you what to do – whether you have to do something at all, and some of the information you need in order to set up your TAC settings file, both for a cleanup and for scheduled maintenance later.

Get your TFS server and environment properly set up

Whether you already have the problem or not, you should ensure the TFS server is set up so that the risk of getting into this problem is minimized. To ensure this you should install the following set of updates and components. The assumption is that your TFS Server is at SP1 level. Install the QFE for KB2608743 – which also contains detailed instructions on its use; download it from here. The QFE changes the default settings so that deployed binaries, which are used in automated test runs, are not uploaded.
Binaries will still be uploaded if:

Code coverage is enabled in the test settings.
You change UploadDeploymentItem to true in the testsettings file. Be aware that this might be reset back to false by another user who hasn't installed this QFE.

The hotfix should be installed on:

The build servers (the build agents)
The machine hosting the Test Controller
Local development computers (Visual Studio)
Local test computers (MTM)

It is not required to install it on the TFS Server, test agents or the build controller – it has no effect on these programs. If you use SQL Server 2008 R2 you should also install CU 10 (or later). This CU fixes a potential problem of hanging "ghost" files. This seems to happen only in certain trigger situations, but to ensure it doesn't bite you, it is better to make sure this CU is installed. There is no such CU for SQL Server 2008 pre-R2. Work-around: if you suspect hanging ghost files, they can be – with some mental effort – deduced from the ghost counters using the following SQL query:

use master
SELECT DB_NAME(database_id) as 'database', OBJECT_NAME(object_id) as 'objectname',
       index_type_desc, ghost_record_count, version_ghost_record_count,
       record_count, avg_record_size_in_bytes
FROM sys.dm_db_index_physical_stats
     (DB_ID(N'<DatabaseName>'), OBJECT_ID(N'<TableName>'), NULL, NULL, 'DETAILED')

The problem is a stalled ghost cleanup process. To fix it, stop all components that depend on the SQL Server, like the TFS Server and SPS services – that is, all applications that connect to the SQL Server. Then restart the SQL Server, and finally start up all dependent processes again. (I would guess a complete server reboot would do the trick too.) After this the ghost cleanup process will run properly again. The fix will come in the next CU cycle for SQL Server R2 SP1. The R2 pre-SP1 and R2 SP1 releases have separate maintenance cycles and are maintained individually; each has its own set of CUs. When it comes I will add the link to that CU here. The "hanging ghost file" issue came up after one had run the TAC and deleted enormous amounts of data; the SQL Server can get into this hanging state (without the QFE) in certain cases because of this. And of course, install and set up the Test Attachment Cleaner command line power tool. This should be done following some guidelines from Ravi Shanker: "When you run TAC, ensure that you are deleting small chunks of data at regular intervals (say run TAC every night at 3AM to delete data that is between age 730 to 731 days) – this will ensure that small amounts of data are being deleted and SQL ghosted record cleanup can catch up with the number of deletes performed." This rule minimizes the risk of the ghost-hang problem occurring, and further makes it easier for the SQL Server ghosting process to work smoothly. "Run DBCC SHRINKDB post the ghosted records are cleaned up to physically reclaim the space on the file system." This is the last step in a 3-step process of removing SQL Server data: first the records are logically deleted, then they are cleaned out by the ghosting process, and finally the space is reclaimed using the shrink command.

Cleaning out the attachments

The TAC is run from the command line using a set of parameters and controlled by a settings file. The parameters point out a server URI, including the team project collection, and also point at a specific team project. So in order to run this for multiple team projects regularly, one has to set up a script to run the TAC multiple times, once for each team project.
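As a minimal sketch of such a script – assuming tcmpt.exe is on the path, and using a hypothetical collection URL, settings file and list of team project names – a small C# console program can shell out to the TAC once per project, using the command line syntax shown later in this article:

using System;
using System.Diagnostics;
using System.IO;

class TacRunner
{
    static void Main()
    {
        // Hypothetical values - substitute your own collection URL,
        // settings file path and team project names.
        string collection = "http://tfsserver:8080/tfs/DefaultCollection";
        string settings   = @"C:\TAC\nightly-cleanup.xml";
        string[] projects = { "TeamProjectA", "TeamProjectB", "TeamProjectC" };

        foreach (string project in projects)
        {
            string log = Path.Combine(Path.GetTempPath(), project + ".tcmpt.log");

            var psi = new ProcessStartInfo
            {
                FileName = "tcmpt.exe",   // assumes the TAC is on the path
                Arguments = string.Format(
                    "attachmentcleanup /collection:{0} /teamproject:\"{1}\" " +
                    "/settingsfile:\"{2}\" /outputfile:\"{3}\" /mode:delete",
                    collection, project, settings, log),
                UseShellExecute = false
            };

            using (Process p = Process.Start(psi))
            {
                p.WaitForExit();   // one project at a time keeps the delete batches small
                Console.WriteLine("{0} finished with exit code {1}", project, p.ExitCode);
            }
        }
    }
}

Scheduled from Windows Task Scheduler at, say, 3AM, this gives you the nightly per-team-project run that Ravi's guidelines above call for.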
When you install the TAC there is a very useful readme file in the same directory. When the deployment binaries are published to the TFS server, ALL items are published up from the deployment folder. That often means many more files than you would assume are necessary. This is a brute-force technique: it works, but you need to take care when cleaning up. Grant has shown how their settings file looks in his blog post, removing all attachments older than 180 days, as long as there are no active work items connected to them. That kind of setting can be useful both in a one-off clean-up operation and in general scheduled maintenance. There are two scenarios we need to consider:

Cleaning up an existing overgrown database
Maintaining a server to avoid an overgrown database, using a scheduled TAC

1. Cleaning up a database which has grown too big due to these attachments. This is a "once" job: we do it once and then move on to making sure it won't happen again, by taking the actions in 2) below. In this scenario you should only consider the large files. Your goal should be simply to reduce the size, and not to bother about the smaller stuff – that can be left to a scheduled TAC cleanup (2 below). Here you can use a very general settings file and just remove the large attachments, or you can choose to remove any old items. Grant's settings file is an example of the latter. A settings file to remove only large attachments could look like this:

<!-- Scenario : Remove large files -->
<DeletionCriteria>
  <TestRun />
  <Attachment>
    <SizeInMB GreaterThan="10" />
  </Attachment>
</DeletionCriteria>

Or like this, if you want only to remove dll's and pdb's above that size, by adding an Extensions section (without that section, all extensions will be deleted):

<!-- Scenario : Remove large files of type dll's and pdb's -->
<DeletionCriteria>
  <TestRun />
  <Attachment>
    <SizeInMB GreaterThan="10" />
    <Extensions>
      <Include value="dll" />
      <Include value="pdb" />
    </Extensions>
  </Attachment>
</DeletionCriteria>

Before you start up your scheduled maintenance, you should clear out all older items.

2. Scheduled maintenance using the TAC. Run a schedule every night that removes old items in small batches. It is important to run this often, like every night, in order to keep the number of deleted items low; that way the SQL ghost process works better. One approach could be to delete all items older than some number of days, let's say 180 days. This could be combined with a restriction to keep attachments that have active or resolved bugs. Doing this every night ensures that only small amounts of data are deleted.

<!-- Scenario : Remove old items except if they have active or resolved bugs -->
<DeletionCriteria>
  <TestRun>
    <AgeInDays OlderThan="180" />
  </TestRun>
  <Attachment />
  <LinkedBugs>
    <Exclude state="Active" />
    <Exclude state="Resolved"/>
  </LinkedBugs>
</DeletionCriteria>

In my experience there are projects which are left with active or resolved work items although no further work is being done. It can be wise to have a cleanup process with no restrictions on linked bugs at all; note that you then have to remove the whole LinkedBugs section. An approach which could work better here is a two-step one: use the schedule above with no LinkedBugs section as a sweeper task, taking away all data older than you could possibly care about, and then have another scheduled TAC task to take out more specifically the attachments that you are not likely to use.
This task can be much more specific and, based on your analysis, clean out what you know is troublesome data:

<!-- Scenario : Remove specific files early -->
<DeletionCriteria>
  <TestRun>
    <AgeInDays OlderThan="30" />
  </TestRun>
  <Attachment>
    <SizeInMB GreaterThan="10" />
    <Extensions>
      <Include value="iTrace"/>
      <Include value="dll"/>
      <Include value="pdb"/>
      <Include value="wmv"/>
    </Extensions>
  </Attachment>
  <LinkedBugs>
    <Exclude state="Active" />
    <Exclude state="Resolved" />
  </LinkedBugs>
</DeletionCriteria>

The readme document for the TAC says that it recognizes "internal" extensions, but in fact it recognizes any extension. To run the tool, execute the following command:

tcmpt attachmentcleanup /collection:your_tfs_collection_url /teamproject:your_team_project /settingsfile:path_to_settingsfile /outputfile:%temp%/teamproject.tcmpt.log /mode:delete

Shrinking the database

You could run a shrink database command after the TAC has run in cases where a lot of data has been deleted. In that case you SHOULD do it, to free up all that space. But after the shrink operation you should rebuild the indexes, since the shrink operation will leave the database in a very fragmented state, which will reduce performance. Note that you need to rebuild the indexes; reorganizing is not enough. For smaller amounts of data you should NOT shrink the database, since the space will be reused by the SQL Server when it needs to add more records. In fact, it is regarded as bad practice to shrink the database regularly, so on a daily maintenance schedule you should NOT shrink the database. To shrink the database you run a DBCC SHRINKDATABASE command, and then follow up with an index rebuild afterwards. I find the easiest way to do this is to create a SQL maintenance plan including the Shrink Database Task and the Rebuild Index Task, and just execute it when you need to do this.
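As a rough sketch of that post-cleanup step – assuming sysadmin rights, integrated security and a collection database named Tfs_DefaultCollection; substitute your own server and database names – the same two operations can also be run from a small ADO.NET program instead of a maintenance plan:

using System;
using System.Data.SqlClient;

class PostTacMaintenance
{
    static void Main()
    {
        using (var conn = new SqlConnection(
            "Server=.;Database=master;Integrated Security=true"))
        {
            conn.Open();
            var cmd = conn.CreateCommand();
            cmd.CommandTimeout = 0;   // shrink and rebuild can run for a long time

            // Step 1: reclaim the space freed by the TAC deletes
            cmd.CommandText = "DBCC SHRINKDATABASE (Tfs_DefaultCollection)";
            cmd.ExecuteNonQuery();

            // Step 2: rebuild (not just reorganize) every index, since the
            // shrink leaves the database heavily fragmented.
            // Note: sp_MSforeachtable is undocumented but widely used.
            cmd.CommandText = @"USE Tfs_DefaultCollection;
                EXEC sp_MSforeachtable 'ALTER INDEX ALL ON ? REBUILD'";
            cmd.ExecuteNonQuery();

            Console.WriteLine("Shrink and index rebuild completed.");
        }
    }
}

Remember this is for the occasional big cleanup only – as noted above, it should not be part of the daily schedule.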

    Read the article

  • The Execute SQL Task

In this article we are going to take you through the Execute SQL Task in SQL Server Integration Services for SQL Server 2005 (although it applies just as well to SQL Server 2008). We will be covering all the essentials that you will need to know to effectively use this task and make it as flexible as possible. The things we will be looking at are as follows:

A tour of the task.
The properties of the task.

After looking at these introductory topics we will then get into some examples. The examples will show different types of usage for the task:

Returning a single value from a SQL query with two input parameters.
Returning a rowset from a SQL query.
Executing a stored procedure and retrieving a rowset, a return value and an output parameter value, and passing in an input parameter.
Passing in the SQL statement from a variable.
Passing in the SQL statement from a file.

Tour Of The Task

Before we can start to use the Execute SQL Task in our packages we are going to need to locate it in the toolbox. Let's do that now. Whilst in the Control Flow section of the package, expand your toolbox and locate the Execute SQL Task. Below is how we found ours. Now drag the task onto the designer. As you can see from the following image, a validation error appears telling us that no connection manager has been assigned to the task. This can be easily remedied by creating a connection manager. There are certain types of connection manager that are compatible with this task, so we cannot just create any connection manager; these are detailed a few graphics further on. Double-click on the task itself to take a look at the custom user interface provided for this task. The task will open on the General tab as shown below. Take a bit of time to have a look around here, as throughout this article we will be revisiting this page many times. Whilst on the General tab, drop down the combobox next to the ConnectionType property. In here you will see the types of connection manager which this task will accept. As with SQL Server 2000 DTS, SSIS allows you to output values from this task in a number of formats. Have a look at the combobox next to the Resultset property; the major difference here is the ability to output into XML. If you drop down the combobox next to the SQLSourceType property you will see the ways in which you can pass a SQL statement into the task itself. We will have examples of each of these later on, but certainly when we saw these for the first time we were very excited. If you click in the empty box next to the SQLStatement property you will see ellipses appear. Click on them and you will see the very basic query editor that becomes available to you. Alternatively, after you have specified a connection manager for the task, you can click on the Build Query button to bring up a completely different query editor. This is slightly inconsistent. Once you've finished looking around the General tab, move on to the next tab, which is the Parameter Mapping tab. We shall, again, be visiting this tab throughout the article, but to give you an initial heads-up, this is where you define the input, output and return values for your task. Note this is not where you specify the result set. If, however, you now move on to the ResultSet tab, this is where you define what variable will receive the output from your SQL statement, in whatever form that is.
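(As an aside, everything we configure through this UI can also be set from code through the SSIS runtime API. Here's a minimal sketch – assuming a reference to the Microsoft.SqlServer.ManagedDTS assembly and a local AdventureWorks database; the property names mirror the ones discussed in the next section:

using Microsoft.SqlServer.Dts.Runtime;

class BuildPackage
{
    static void Main()
    {
        Package package = new Package();

        // Add an OLE DB connection manager for the task to use
        ConnectionManager cm = package.Connections.Add("OLEDB");
        cm.Name = "ExecuteSQL Task Connection";
        cm.ConnectionString = "Provider=SQLNCLI;Data Source=.;" +
                              "Initial Catalog=AdventureWorks;Integrated Security=SSPI;";

        // Add an Execute SQL Task and configure it via its TaskHost wrapper
        Executable exec = package.Executables.Add("STOCK:SQLTask");
        TaskHost taskHost = (TaskHost)exec;
        taskHost.Properties["Connection"].SetValue(taskHost, cm.ID);
        taskHost.Properties["SqlStatementSource"].SetValue(taskHost,
            "SELECT COUNT(*) FROM HumanResources.Employee");

        DTSExecResult result = package.Execute();
        System.Console.WriteLine(result);   // Success or Failure
    }
}

We won't use the API again in this article, but it is handy to know that the designer is just setting these same properties.)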
Property Expressions are one of the most amazing things to happen in SSIS and they will not be covered here, as they deserve a whole article to themselves. Watch out for these, as their usefulness will astound you. For a more detailed discussion of what the parameter markers in the SQL statements on the General tab should be, and how to map them to variables on the Parameter Mapping tab, see Working with Parameters and Return Codes in the Execute SQL Task.

Task Properties

There are two places where you can specify the properties for your task. One is in the task UI itself, and the other is in the property pane which will appear if you right-click on your task and select Properties from the context menu. We will be doing plenty of property setting in the UI later, so let's take a moment to have a look at the property pane. Below is a graphic showing our properties pane. Now we shall take you through all the properties and tell you exactly what they mean. A lot of these properties you will see across all tasks, as well as the package, because of everything's common base structure, the Container.

BypassPrepare – Should the statement be prepared before sending to the connection manager destination? (True/False)
Connection – This is simply the name of the connection manager that the task will use. We can get this from the connection manager tray at the bottom of the package.
DelayValidation – A really interesting property; it tells the task not to validate until it actually executes. One use for this may be that you are operating on a table yet to be created, but at runtime you know the table will be there.
Description – Very simply, the description of your task.
Disable – Should the task be enabled or not? You can also set this through a context menu by right-clicking on the task itself.
DisableEventHandlers – As a result of events that happen in the task, should the event handlers for the container fire?
ExecValueVariable – The variable assigned here will get or set the execution value of the task.
Expressions – Expressions, as we mentioned earlier, are a really powerful tool in SSIS, and the graphic below shows a small peek of what you can do. We select a property on the left and assign an expression to the value of that property on the right, causing the value to be dynamically changed at runtime. One of the most obvious uses of this is that the property value can be built dynamically from within the package, allowing you a great deal of flexibility.
FailPackageOnFailure – If this task fails, does the package?
FailParentOnFailure – If this task fails, does the parent container? A task can be hosted inside another container, i.e. the For Each Loop Container, and this would then be the parent.
ForcedExecutionValue – This property allows you to hard-code an execution value for the task.
ForcedExecutionValueType – What is the datatype of the ForcedExecutionValue?
ForceExecutionResult – Force the task to return a certain execution result. This could then be used by the workflow constraints. Possible values are None, Success, Failure and Completion.
ForceExecutionValue – Should we force the execution result?
IsolationLevel – This is the transaction isolation level of the task.
IsStoredProcedure – Certain optimisations are made by the task if it knows that the query is a stored procedure invocation. The docs say this will always be false unless the connection is an ADO connection.
LocaleID – Gets or sets the LocaleID of the container.
LoggingMode – Should we log for this container, and what settings should we use?
The value choices are UseParentSetting, Enabled and Disabled.
MaximumErrorCount – How many times can the task fail before we call it a day?
Name – Very simply, the name of the task.
ResultSetType – How do you want the results of your query returned? The choices are ResultSetType_None, ResultSetType_SingleRow, ResultSetType_Rowset and ResultSetType_XML.
SqlStatementSource – Your query/SQL statement.
SqlStatementSourceType – The method of specifying the query. Your choices here are DirectInput, FileConnection and Variables.
TimeOut – How long should the task wait to receive results?
TransactionOption – How should the task handle being asked to join a transaction?

Usage Examples

As we move through the examples we will only cover what we think you must know and what we think you should see. This means that some of the more elementary steps, like setting up variables, will be covered in the early examples but skipped and simply referred to in later ones. All these examples use the AdventureWorks database that comes with SQL Server 2005.

Returning a Single Value, Passing in Two Input Parameters

So the first thing we are going to do is add some variables to our package. The graphic below shows those variables having been defined. Here the CountOfEmployees variable will be used as the output from the query, and EndDate and StartDate will be used as input parameters. As you can see, all these variables have been scoped to the package. Scoping allows us to have domains for variables. Each container has a scope, and remember a package is a container as well. Variable values of the parent container can be seen in child containers, but cannot be passed back up to the parent from a child. Our following graphic has had a number of changes made. The first of those changes is that we have created and assigned an OLEDB connection manager to this task, ExecuteSQL Task Connection. The next thing is we have made sure that the SQLSourceType property is set to Direct Input, as we will be writing in our statement ourselves. We have also specified that only a single row will be returned from this query. The expression we typed in was:

SELECT COUNT(*) AS CountOfEmployees
FROM HumanResources.Employee
WHERE (HireDate BETWEEN ? AND ?)

Moving on now to the Parameter Mapping tab, this is where we are going to tell the task about our input parameters. We add them to the window, specifying their direction and datatype. A quick word here about the structure of the variable name: as you can see, SSIS has preceded the variable with the word "User". This is a default namespace for variables, but you can create your own. When defining your variables, if you look at the Variables window title bar you will see some icons. If you hover over the last one on the right you will see it says "Choose Variable Columns". If you click the button you will see a list of checkbox options, and one of them is Namespace; after checking this you will see where you can define your own namespace. The next tab, Result Set, is where we need to get back the value(s) returned from our statement and assign them to a variable, which in our case is CountOfEmployees, so we can use it later perhaps. Because we are only returning a single value then, if you remember from earlier, we are allowed to assign a name to the result set, but it must be the name of the column (or alias) from the query. A really cool feature of Business Intelligence Studio being hosted by Visual Studio is that we get breakpoint support for free.
In our package we set a breakpoint so we can pause the package and look in a watch window at the variable values as they appear to our task, and at the value of our result set variable after the task has done the assignment. Here's that window now. As you can see, the count of employees that matched the date range was 2.

Returning a Rowset

In this example we are going to return a result set to a variable after the task has executed, not just a single-row single value. There are no input parameters required, so the variables window is nice and straightforward: one variable of type Object. Here is the statement that will form the source for our result set:

select p.ProductNumber, p.name, pc.Name as ProductCategoryName
FROM Production.ProductCategory pc
JOIN Production.ProductSubCategory psc
  ON pc.ProductCategoryID = psc.ProductCategoryID
JOIN Production.Product p
  ON psc.ProductSubCategoryID = p.ProductSubCategoryID

We need to make sure that we have selected Full result set as the ResultSet, as shown below on the task's General tab. Because there are no input parameters we can skip the Parameter Mapping tab and move straight to the Result Set tab. Here we need to add our variable defined earlier and map it to the result name of 0 (remember we covered this earlier). Once we run the task we can again set a breakpoint and have a look at the values coming back from the task. In the following graphic you can see the result set returned to us as a COM object. We can do some pretty interesting things with this COM object, and in later articles that is exactly what we shall be doing.

Return Values, Input/Output Parameters and Returning a Rowset from a Stored Procedure

This example is pretty much going to give us a taste of everything. We have already covered in the previous example how to specify the ResultSet as a full result set, so we will not cover it again here. For this example we are going to need four variables: one for the return value, one for the input parameter, one for the output parameter and one for the result set. Here is the statement we want to execute. Note how much cleaner it is than if you wanted to do it using the current version of DTS. In the Parameter Mapping tab we are going to add our variables and specify their direction and datatypes. In the Result Set tab we can now map our final variable to the rowset returned from the stored procedure. It really is as simple as that, and we were amazed at how much easier it is than in DTS 2000.

Passing in the SQL Statement from a Variable

SSIS, as we have mentioned, is hugely more flexible than its predecessor, and one of the things you will notice when moving around the tasks and the adapters is that a lot of them accept a variable as an input for something they need. The Execute SQL Task is no different: it will allow us to pass in a string variable as the SQL statement. This variable value could have been set earlier on from inside the package, or it could have been populated from outside using a configuration. The ResultSet property is set to single row, and we'll show you why in a second when we look at the variables. Note also the SQLSourceType property. Here's the General tab again. Looking at the variables we have in this package, you can see we have only two: one for the return value from the statement and one which is obviously for the statement itself. Again we need to map the Result Name to our variable; this can be a named Result Name (the column name or alias returned by the query) and not 0.
The expected result in our variable should be the number of rows in the Person.Contact table, and if we look in the watch window we see that it is.

Passing in the SQL Statement from a File

The final example we are going to show is a really interesting one. We are going to pass the SQL statement to the task by using a file connection manager; the file itself contains the statement to run. The first thing we are going to need to do is create our file connection manager to point to our file. Click in the connections tray at the bottom of the designer, right-click and choose "New File Connection". As you can see in the graphic below, we have chosen to use an existing file and have passed in the name as well. Have a look around at the other "Usage Type" values available whilst you are here. Having set that up, we can now see in the connection manager tray our file connection manager sitting alongside the OLE DB connection we have been using for the rest of these examples. Now we can go back to the familiar General tab to set up how the task will accept our file connection as the source. All the other properties in this task are set up exactly as we have been doing for the other examples, depending on the options chosen, so we will not cover them again here. We hope you will agree that the Execute SQL Task has changed considerably in this release from its DTS predecessor. It has a lot of options available, but once you have configured it a few times you get to learn what needs to go where. We hope you have found this article useful.

    Read the article

  • Working with PivotTables in Excel

    - by Mark Virtue
    PivotTables are one of the most powerful features of Microsoft Excel.  They allow large amounts of data to be analyzed and summarized in just a few mouse clicks. In this article, we explore PivotTables, understand what they are, and learn how to create and customize them. Note:  This article is written using Excel 2010 (Beta).  The concept of a PivotTable has changed little over the years, but the method of creating one has changed in nearly every iteration of Excel.  If you are using a version of Excel that is not 2010, expect different screens from the ones you see in this article. A Little History In the early days of spreadsheet programs, Lotus 1-2-3 ruled the roost.  Its dominance was so complete that people thought it was a waste of time for Microsoft to bother developing their own spreadsheet software (Excel) to compete with Lotus.  Flash-forward to 2010, and Excel’s dominance of the spreadsheet market is greater than Lotus’s ever was, while the number of users still running Lotus 1-2-3 is approaching zero.  How did this happen?  What caused such a dramatic reversal of fortunes? Industry analysts put it down to two factors:  Firstly, Lotus decided that this fancy new GUI platform called “Windows” was a passing fad that would never take off.  They declined to create a Windows version of Lotus 1-2-3 (for a few years, anyway), predicting that their DOS version of the software was all anyone would ever need.  Microsoft, naturally, developed Excel exclusively for Windows.  Secondly, Microsoft developed a feature for Excel that Lotus didn’t provide in 1-2-3, namely PivotTables.  The PivotTables feature, exclusive to Excel, was deemed so staggeringly useful that people were willing to learn an entire new software package (Excel) rather than stick with a program (1-2-3) that didn’t have it.  This one feature, along with the misjudgment of the success of Windows, was the death-knell for Lotus 1-2-3, and the beginning of the success of Microsoft Excel. Understanding PivotTables So what is a PivotTable, exactly? Put simply, a PivotTable is a summary of some data, created to allow easy analysis of said data.  But unlike a manually created summary, Excel PivotTables are interactive.  Once you have created one, you can easily change it if it doesn’t offer the exact insights into your data that you were hoping for.  In a couple of clicks the summary can be “pivoted” – rotated in such a way that the column headings become row headings, and vice versa.  There’s a lot more that can be done, too.  Rather than try to describe all the features of PivotTables, we’ll simply demonstrate them… The data that you analyze using a PivotTable can’t be just any data – it has to be raw data, previously unprocessed (unsummarized) – typically a list of some sort.  An example of this might be the list of sales transactions in a company for the past six months. Examine the data shown below: Notice that this is not raw data.  In fact, it is already a summary of some sort.  In cell B3 we can see $30,000, which apparently is the total of James Cook’s sales for the month of January.  So where is the raw data?  How did we arrive at the figure of $30,000?  Where is the original list of sales transactions that this figure was generated from?  It’s clear that somewhere, someone must have gone to the trouble of collating all of the sales transactions for the past six months into the summary we see above.  How long do you suppose this took?  An hour?  Ten?  Probably. 
If we were to track down the original list of sales transactions, it might look something like this: You may be surprised to learn that, using the PivotTable feature of Excel, we can create a monthly sales summary similar to the one above in a few seconds, with only a few mouse clicks.  We can do this – and a lot more too! How to Create a PivotTable First, ensure that you have some raw data in a worksheet in Excel.  A list of financial transactions is typical, but it can be a list of just about anything:  Employee contact details, your CD collection, or fuel consumption figures for your company’s fleet of cars. So we start Excel… …and we load such a list… Once we have the list open in Excel, we’re ready to start creating the PivotTable. Click on any one single cell within the list: Then, from the Insert tab, click the PivotTable icon: The Create PivotTable box appears, asking you two questions:  What data should your new PivotTable be based on, and where should it be created?  Because we already clicked on a cell within the list (in the step above), the entire list surrounding that cell is already selected for us ($A$1:$G$88 on the Payments sheet, in this example).  Note that we could select a list in any other region of any other worksheet, or even some external data source, such as an Access database table, or even a MS-SQL Server database table.  We also need to select whether we want our new PivotTable to be created on a new worksheet, or on an existing one.  In this example we will select a new one: The new worksheet is created for us, and a blank PivotTable is created on that worksheet: Another box also appears:  The PivotTable Field List.  This field list will be shown whenever we click on any cell within the PivotTable (above): The list of fields in the top part of the box is actually the collection of column headings from the original raw data worksheet.  The four blank boxes in the lower part of the screen allow us to choose the way we would like our PivotTable to summarize the raw data.  So far, there is nothing in those boxes, so the PivotTable is blank.  All we need to do is drag fields down from the list above and drop them in the lower boxes.  A PivotTable is then automatically created to match our instructions.  If we get it wrong, we only need to drag the fields back to where they came from and/or drag new fields down to replace them. The Values box is arguably the most important of the four.  The field that is dragged into this box represents the data that needs to be summarized in some way (by summing, averaging, finding the maximum, minimum, etc).  It is almost always numerical data.  A perfect candidate for this box in our sample data is the “Amount” field/column.  Let’s drag that field into the Values box: Notice that (a) the “Amount” field in the list of fields is now ticked, and “Sum of Amount” has been added to the Values box, indicating that the amount column has been summed. If we examine the PivotTable itself, we indeed find the sum of all the “Amount” values from the raw data worksheet: We’ve created our first PivotTable!  Handy, but not particularly impressive.  It’s likely that we need a little more insight into our data than that. Referring to our sample data, we need to identify one or more column headings that we could conceivably use to split this total.  For example, we may decide that we would like to see a summary of our data where we have a row heading for each of the different salespersons in our company, and a total for each.  
To achieve this, all we need to do is to drag the “Salesperson” field into the Row Labels box: Now, finally, things start to get interesting!  Our PivotTable starts to take shape….   With a couple of clicks we have created a table that would have taken a long time to do manually. So what else can we do?  Well, in one sense our PivotTable is complete.  We’ve created a useful summary of our source data.  The important stuff is already learned!  For the rest of the article, we will examine some ways that more complex PivotTables can be created, and ways that those PivotTables can be customized. First, we can create a two-dimensional table.  Let’s do that by using “Payment Method” as a column heading.  Simply drag the “Payment Method” heading to the Column Labels box: Which looks like this: Starting to get very cool! Let’s make it a three-dimensional table.  What could such a table possibly look like?  Well, let’s see… Drag the “Package” column/heading to the Report Filter box: Notice where it ends up…. This allows us to filter our report based on which “holiday package” was being purchased.  For example, we can see the breakdown of salesperson vs payment method for all packages, or, with a couple of clicks, change it to show the same breakdown for the “Sunseekers” package: And so, if you think about it the right way, our PivotTable is now three-dimensional.  Let’s keep customizing… If it turns out, say, that we only want to see cheque and credit card transactions (i.e. no cash transactions), then we can deselect the “Cash” item from the column headings.  Click the drop-down arrow next to Column Labels, and untick “Cash”: Let’s see what that looks like…As you can see, “Cash” is gone. Formatting This is obviously a very powerful system, but so far the results look very plain and boring.  For a start, the numbers that we’re summing do not look like dollar amounts – just plain old numbers.  Let’s rectify that. A temptation might be to do what we’re used to doing in such circumstances and simply select the whole table (or the whole worksheet) and use the standard number formatting buttons on the toolbar to complete the formatting.  The problem with that approach is that if you ever change the structure of the PivotTable in the future (which is 99% likely), then those number formats will be lost.  We need a way that will make them (semi-)permanent. First, we locate the “Sum of Amount” entry in the Values box, and click on it.  A menu appears.  We select Value Field Settings… from the menu: The Value Field Settings box appears. Click the Number Format button, and the standard Format Cells box appears: From the Category list, select (say) Accounting, and drop the number of decimal places to 0.  Click OK a few times to get back to the PivotTable… As you can see, the numbers have been correctly formatted as dollar amounts. While we’re on the subject of formatting, let’s format the entire PivotTable.  There are a few ways to do this.  Let’s use a simple one… Click the PivotTable Tools/Design tab: Then drop down the arrow in the bottom-right of the PivotTable Styles list to see a vast collection of built-in styles: Choose any one that appeals, and look at the result in your PivotTable:   Other Options We can work with dates as well.  Now usually, there are many, many dates in a transaction list such as the one we started with.  But Excel provides the option to group data items together by day, week, month, year, etc.  Let’s see how this is done. 
First, let’s remove the “Payment Method” column from the Column Labels box (simply drag it back up to the field list), and replace it with the “Date Booked” column: As you can see, this makes our PivotTable instantly useless, giving us one column for each date that a transaction occurred on – a very wide table! To fix this, right-click on any date and select Group… from the context-menu: The grouping box appears.  We select Months and click OK: Voila!  A much more useful table: (Incidentally, this table is virtually identical to the one shown at the beginning of this article – the original sales summary that was created manually.) Another cool thing to be aware of is that you can have more than one set of row headings (or column headings): …which looks like this…. You can do a similar thing with column headings (or even report filters). Keeping things simple again, let’s see how to plot averaged values, rather than summed values. First, click on “Sum of Amount”, and select Value Field Settings… from the context-menu that appears: In the Summarize value field by list in the Value Field Settings box, select Average: While we’re here, let’s change the Custom Name, from “Average of Amount” to something a little more concise.  Type in something like “Avg”: Click OK, and see what it looks like.  Notice that all the values change from summed totals to averages, and the table title (top-left cell) has changed to “Avg”: If we like, we can even have sums, averages and counts (counts = how many sales there were) all on the same PivotTable! Here are the steps to get something like that in place (starting from a blank PivotTable): Drag “Salesperson” into the Column Labels Drag “Amount” field down into the Values box three times For the first “Amount” field, change its custom name to “Total” and it’s number format to Accounting (0 decimal places) For the second “Amount” field, change its custom name to “Average”, its function to Average and it’s number format to Accounting (0 decimal places) For the third “Amount” field, change its name to “Count” and its function to Count Drag the automatically created field from Column Labels to Row Labels Here’s what we end up with: Total, average and count on the same PivotTable! Conclusion There are many, many more features and options for PivotTables created by Microsoft Excel – far too many to list in an article like this.  To fully cover the potential of PivotTables, a small book (or a large website) would be required.  Brave and/or geeky readers can explore PivotTables further quite easily:  Simply right-click on just about everything, and see what options become available to you.  There are also the two ribbon-tabs: PivotTable Tools/Options and Design.  It doesn’t matter if you make a mistake – it’s easy to delete the PivotTable and start again – a possibility old DOS users of Lotus 1-2-3 never had. We’ve included an Excel that should work with most versions of Excel, so you can download to practice your PivotTable skills. 
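For the brave and/or geeky readers just mentioned: the same PivotTable we built by dragging fields can also be produced from code. Here's a rough C# sketch using the Excel interop assemblies – the workbook path is hypothetical, while the range, sheet and field names are taken from the example data above:

using Excel = Microsoft.Office.Interop.Excel;

class PivotDemo
{
    static void Main()
    {
        var app = new Excel.Application();
        Excel.Workbook wb = app.Workbooks.Open(@"C:\data\sales.xlsx");   // hypothetical path

        Excel.Worksheet src = (Excel.Worksheet)wb.Worksheets["Payments"];
        Excel.Worksheet dst = (Excel.Worksheet)wb.Worksheets.Add();

        // The code equivalent of Insert > PivotTable over the raw transaction list
        Excel.PivotCache cache = wb.PivotCaches().Create(
            Excel.XlPivotTableSourceType.xlDatabase, src.Range["A1:G88"]);
        Excel.PivotTable pt = cache.CreatePivotTable(dst.Range["A3"], "SalesSummary");

        // "Drag" Salesperson to Row Labels and sum the Amount field
        ((Excel.PivotField)pt.PivotFields("Salesperson")).Orientation =
            Excel.XlPivotFieldOrientation.xlRowField;
        pt.AddDataField(pt.PivotFields("Amount"), "Sum of Amount",
            Excel.XlConsolidationFunction.xlSum);

        wb.Save();
        app.Quit();
    }
}

One row field, one value field, summed – exactly the drag-and-drop steps from earlier, replayed in code.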
Download Our Practice Excel File

    Read the article

  • Is your team a high-performing team?

As a child I can remember looking out of the car window as my father drove along the Interstate in Florida, seeing prisoners wearing bright orange jump suits and prison guards keeping a watchful eye on them. The prisoners were taking part in a prison road gang. These road gangs were formed to help the state maintain the state highway infrastructure; the prisoners' primary responsibility is to pick up trash and debris from the roadway. This is a prime example of a work group, or working group, used by most prison systems in the United States. Work groups or working groups can be defined as a collection of individuals or entities working together to achieve a specific goal or accomplish a specific set of tasks. Typically these groups are established for only a short period of time and are dissolved once the desired outcome has been achieved. More often than not, group members feel as though they are expendable to the group, and some even dread being in the group at all. "A team is a small number of people with complementary skills who are committed to a common purpose, performance goals, and approach for which they are mutually accountable." (Katzenbach and Smith, 1993) So how do you determine that a team is a high-performing team? This can be determined by three baseline criteria: consistently high-quality output, the promotion of personal growth and well-being of all team members, and, most importantly, the ability to learn and grow as a unit. Initially, a team can successfully create high-performing output without meeting all three criteria; however, this will erode over time, because when team members feel detached from the group, or feel that they are not growing, the quality of the output will decline. High-performing teams are similar to work groups because both utilize a collection of individuals or entities to accomplish tasks. What distinguishes a high-performing team from a work group are its characteristics. High-performing teams contain five core characteristics, and these characteristics are what separate a group from a team: Purpose, Performance Measures, People with Tasks and Relationship Skills, Process, and Preparation and Practice. A high-performing team is much more than a work group, and typically has a life cycle that can vary from team to team. The standard team lifecycle consists of five states and is comparable to a human life cycle. The five states of a high-performing team lifecycle are: Formulating, Storming, Normalizing, Performing, and Adjourning. The Formulating State of a team is first realized when the team members are first defined and roles are assigned to all members. This initial stage is very important because it can set the tone for the team and can ultimately determine its success or failure. In addition, this stage requires the team to have a strong leader, because team members are normally unclear about the specific roles, obstacles and goals that may lie ahead of them. Finally, this stage is where most team members initially meet one another prior to working as a team, unless the team members already know each other. The Storming State normally arrives directly after the formation of a new team, because there are still a lot of unknowns amongst the newly formed assembly.
As a general rule most of the parties involved in the team are still getting used to the workload, pace of work, deadlines and the validity of various tasks that need to be performed by the group.  In this state everything is questioned because there are so many unknowns. Items commonly questioned include the credentials of others on the team, the actual validity of a project, and the leadership abilities of the team leader.  This can be exemplified by looking at the interactions between animals when they first meet.  If we look at a scenario where two people are walking directly toward each other with their dogs. The dogs will automatically enter the Storming State because they do not know the other dog. Typically in this situation, they attempt to define which is more dominating via play or fighting depending on how the dogs interact with each other. Once dominance has been defined and accepted by both dogs then they will either want to play or leave depending on how the dogs interacted and other environmental variables. Once the Storming State has been realized then the Normalizing State takes over. This state is entered by a team once all the questions of the Storming State have been answered and the team has been tested by a few tasks or projects.  Typically, participants in the team are filled with energy, and comradery, and a strong alliance with team goals and objectives.  A high school football team is a perfect example of the Normalizing State when they start their season.  The player positions have been assigned, the depth chart has been filled and everyone is focused on winning each game. All of the players encourage and expect each other to perform at the best of their abilities and are united by competition from other teams. The Performing State is achieved by a team when its history, working habits, and culture solidify the team as one working unit. In this state team members can anticipate specific behaviors, attitudes, reactions, and challenges are seen as opportunities and not problems. Additionally, each team member knows their role in the team’s success, and the roles of others. This is the most productive state of a group and is where all the time invested working together really pays off. If you look at an Olympic figure skating team skate you can easily see how the time spent working together benefits their performance. They skate as one unit even though it is comprised of two skaters. Each skater has their routine completely memorized as well as their partners. This allows them to anticipate each other’s moves on the ice makes their skating look effortless. The final state of a team is the Adjourning State. This state is where accomplishments by the team and each individual team member are recognized. Additionally, this state also allows for reflection of the interactions between team members, work accomplished and challenges that were faced. Finally, the team celebrates the challenges they have faced and overcome as a unit. Currently in the workplace teams are divided into two different types: Co-located and Distributed Teams. Co-located teams defined as the traditional group of people working together in an office, according to Andy Singleton of Assembla. This traditional type of a team has dominated business in the past due to inadequate technology, which forced workers to primarily interact with one another via face to face meetings.  Team meetings are primarily lead by the person with the highest status in the company. 
Having personally, participated in meetings of this type, usually a select few of the team members dominate the flow of communication which reduces the input of others in group discussions. Since discussions are dominated by a select few individuals the discussions and group discussion are skewed in favor of the individuals who communicate the most in meetings. In addition, Team members might not give their full opinions on a topic of discussion in part not to offend or create controversy amongst the team and can alter decision made in meetings towards those of the opinions of the dominating team members. Distributed teams are by definition spread across an area or subdivided into separate sections. That is exactly what distributed teams when compared to a more traditional team. It is common place for distributed teams to have team members across town, in the next state, across the country and even with the advances in technology over the last 20 year across the world. These teams allow for more diversity compared to the other type of teams because they allow for more flexibility regarding location. A team could consist of a 30 year old male Italian project manager from New York, a 50 year old female Hispanic from California and a collection of programmers from India because technology allows them to communicate as if they were standing next to one another.  In addition, distributed team members consult with more team members prior to making decisions compared to traditional teams, and take longer to come to decisions due to the changes in time zones and cultural events. However, team members feel more empowered to speak out when they do not agree with the team and to notify others of potential issues regarding the work that the team is doing. Virtual teams which are a subset of the distributed team type is changing organizational strategies due to the fact that a team can now in essence be working 24 hrs a day because of utilizing employees in various time zones and locations.  A primary example of this is with customer services departments, a company can have multiple call centers spread across multiple time zones allowing them to appear to be open 24 hours a day while all a employees work from 9AM to 5 PM every day. Virtual teams also allow human resources departments to go after the best talent for the company regardless of where the potential employee works because they will be a part of a virtual team all that is need is the proper technology to be setup to allow everyone to communicate. In addition to allowing employees to work from home, the company can save space and resources by not having to provide a desk for every team member. In fact, those team members that randomly come into the office can actually share one desk amongst multiple people. This is definitely a cost cutting plus given the current state of the economy. One thing that can turn a team into a high-performing team is leadership. High-performing team leaders need to focus on investing in ongoing personal development, provide team members with direction, structure, and resources needed to accomplish their work, make the right interventions at the right time, and help the team manage boundaries between the team and various external parties involved in the teams work. A team leader needs to invest in ongoing personal development in order to effectively manage their team. People have said that attitude is everything; this is very true about leaders and leadership. 
A team takes on the attitudes and behaviors of its leaders. This can potentially harm the team and the team’s output. Leaders must concentrate on self-awareness, and understanding their team’s group dynamics to fully understand how to lead them. In addition, always learning new leadership techniques from other effective leaders is also very beneficial. Providing team members with direction, structure, and resources that they need to accomplish their work collectively sounds easy, but it is not.  Leaders need to be able to effectively communicate with their team on how their work helps the company reach for its organizational vision. Conversely, the leader needs to allow his team to work autonomously within specific guidelines to turn the company’s vision into a reality.  This being said the team must be appropriately staffed according to the size of the team’s tasks and their complexity. These tasks should be clear, and be meaningful to the company’s objectives and allow for feedback to be exchanged with the leader and the team member and the leader and upper management. Now if the team is properly staffed, and has a clear and full understanding of what is to be done; the company also must supply the workers with the proper tools to achieve the tasks that they are asked to do. No one should be asked to dig a hole without being given a shovel.  Finally, leaders must reward their team members for accomplishments that they achieve. Awards could range from just a simple congratulatory email, a party to close the completion of a large project, or other monetary rewards. Managing boundaries is very important for team leaders because it can alter attitudes of team members and can add undue stress to the team which will force them to loose focus on the tasks at hand for the group. Team leaders should promote communication between team members so that burdens are shared amongst the team and solutions can be derived from hearing the opinions of multiple sources. This also reinforces team camaraderie and working as a unit. Team leaders must manage the type and timing of interventions as to not create an even bigger mess within the team. Poorly timed interventions can really deflate team members and make them question themselves. This could really increase further and undue interventions by the team leader. Typically, the best time for interventions is when the team is just starting to form so that all unproductive behaviors are removed from the team and that it can retain focus on its agenda. If an intervention is effectively executed the team will feel energized about the work that they are doing, promote communication and interaction amongst the group and improve moral overall. High-performing teams are very import to organizations because they consistently produce high quality output and develop a collective purpose for their work. This drive to succeed allows team members to utilize specific talents allowing for growth in these areas.  In addition, these team members usually take on a sense of ownership with their projects and feel that the other team members are irreplaceable. References: http://blog.assembla.com/assemblablog/tabid/12618/bid/3127/Three-ways-to-organize-your-team-co-located-outsourced-or-global.aspx Katzenbach, J.R. & Smith, D.K. (1993). The Wisdom of Teams: Creating the High-performance Organization. Boston: Harvard Business School.

    Read the article
