Search Results

Search found 2291 results on 92 pages for 'webserver'.


  • How to implement this .NET client and PHP server synchronization scenario?

    - by pbean
    I have a PHP web server and a .NET mobile application. The .NET application needs data from a database, which is provided (for now) by the PHP web server. I'm fairly new to this kind of scenario, so I'm not sure what the best practices are. I ran into a couple of problems and I am not certain how to overcome them.

    My current setup: I run a PHP SOAP server with a couple of operations that simply retrieve data from the database. For this web service I have created a WSDL file. In Visual Studio I added a web reference to my project using the WSDL file and it generated the proxy classes for it. I simply call something like "MyWebService.GetItems();" and I get an array of items in my .NET application, straight from the database. I then serialize all the retrieved objects to local (permanent) storage. The idea is for the mobile client to synchronize the data once at the start of the day, work from local storage throughout the day, and synchronize the changes back at the end of the day.

    I face a couple of challenges which I don't know how to resolve properly:

    1. At the moment all data is downloaded through SOAP, not just the subset that is needed. How would I know which new information should be sent to the client? Only the server knows what is new, but only the client knows for sure which data it already has.

    2. I seem to be doing double work. The data transferred over SOAP is essentially already a set of serialized objects, but at the moment I first retrieve all objects through SOAP, the .NET framework deserializes them automatically, and then I serialize all the data again myself. It would be more efficient to simply download it to storage and deserialize it from there.

    3. The mobile device does not have a lot of memory. With my test data this is not really a problem, but with potentially tens of thousands of records I can imagine it will become one. Currently the SOAP call loads all data into memory as an array, but perhaps it would be better to store it directly and only load the objects that are needed. How would I store this, though? An array serializes to one big XML (or binary) file from which I cannot pick individual objects. Would I create (possibly tens of thousands of) separate files? And at the end of the day, when I want to send the changes back to the server, how would I know which objects to send, since they're not in memory?

    I hope my troubles are clear and that you can help me figure out how to implement this the best way. :) Some things are already fixed (using .NET on the mobile device, PHP and a MySQL database on the server), but other things can most certainly be changed (like using SOAP).
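    One common way to approach the "what's new" problem is to have the client remember the time of its last successful sync and let the server filter on a last-modified timestamp. A minimal client-side sketch of that idea in C# follows; the getItemsModifiedSince delegate stands in for a proxy call (for example MyWebService.GetItemsModifiedSince) that the service does not currently expose, and the Item fields are invented for illustration:

        using System;
        using System.Collections.Generic;
        using System.Linq;

        public class Item
        {
            public int Id { get; set; }
            public DateTime ModifiedUtc { get; set; }
            public bool IsDirty { get; set; }   // set locally whenever the user edits the record
        }

        public class SyncClient
        {
            private DateTime lastSyncUtc = DateTime.MinValue;   // persisted in local storage between runs

            public IEnumerable<Item> PullChanges(Func<DateTime, Item[]> getItemsModifiedSince)
            {
                // Ask the server only for records changed after the last successful sync.
                Item[] changed = getItemsModifiedSince(lastSyncUtc);
                lastSyncUtc = DateTime.UtcNow;
                return changed;
            }

            public IEnumerable<Item> CollectLocalChanges(IEnumerable<Item> localItems)
            {
                // At the end of the day, push back only the records the client actually touched.
                return localItems.Where(i => i.IsDirty);
            }
        }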

  • REST WCF service locks thread when called using AJAX in an ASP.Net site

    - by Jupaol
    I have a WCF REST service consumed in an ASP.Net site, from a page, using AJAX. I want to be able to call methods from my service async, which means I will have callback handlers in my javascript code and when the methods finish, the output will be updated. The methods should run in different threads, because each method will take different time to complete their task I have the code semi-working, but something strange is happening because the first time I execute the code after compiling, it works, running each call in a different threads but subsequent calls blocs the service, in such a way that each method call has to wait until the last call ends in order to execute the next one. And they are running on the same thread. I have had the same problem before when I was using Page Methods, and I solved it by disabling the session in the page but I have not figured it out how to do the same when consuming WCF REST services Note: Methods complete time (running them async should take only 7 sec and the result should be: Execute1 - Execute3 - Execute2) Execute1 -- 2 sec Execute2 -- 7 sec Execute3 -- 4 sec Output After compiling Output subsequent calls (this is the problem) I will post the code...I'll try to simplify it as much as I can Service Contract [ServiceContract( SessionMode = SessionMode.NotAllowed )] public interface IMyService { // I have other 3 methods like these: Execute2 and Execute3 [OperationContract] [WebInvoke( RequestFormat = WebMessageFormat.Json, ResponseFormat = WebMessageFormat.Json, UriTemplate = "/Execute1", Method = "POST")] string Execute1(string param); } [AspNetCompatibilityRequirements(RequirementsMode = AspNetCompatibilityRequirementsMode.Allowed)] [ServiceBehavior( InstanceContextMode = InstanceContextMode.PerCall )] public class MyService : IMyService { // I have other 3 methods like these: Execute2 (7 sec) and Execute3(4 sec) public string Execute1(string param) { var t = Observable.Start(() => Thread.Sleep(2000), Scheduler.NewThread); t.First(); return string.Format("Execute1 on: {0} count: {1} at: {2} thread: {3}", param, "0", DateTime.Now.ToString(), Thread.CurrentThread.ManagedThreadId.ToString()); } } ASPX page <%@ Page EnableSessionState="False" Title="Home Page" Language="C#" MasterPageFile="~/Site.master" AutoEventWireup="true" CodeBehind="Default.aspx.cs" Inherits="RestService._Default" %> <asp:Content ID="HeaderContent" runat="server" ContentPlaceHolderID="HeadContent"> <script type="text/javascript"> function callMethodAsync(url, data) { $("#message").append("<br/>" + new Date()); $.ajax({ cache: false, type: "POST", async: true, url: url, data: '"de"', contentType: "application/json", dataType: "json", success: function (msg) { $("#message").append("<br/>&nbsp;&nbsp;&nbsp;" + msg); }, error: function (xhr) { alert(xhr.responseText); } }); } $(function () { $("#callMany").click(function () { $("#message").html(""); callMethodAsync("/Execute1", "hello"); callMethodAsync("/Execute2", "crazy"); callMethodAsync("/Execute3", "world"); }); }); </script> </asp:Content> <asp:Content ID="BodyContent" runat="server" ContentPlaceHolderID="MainContent"> <input type="button" id="callMany" value="Post Many" /> <div id="message"> </div> </asp:Content> Web.config (relevant) <system.webServer> <modules runAllManagedModulesForAllRequests="true" /> </system.webServer> <system.serviceModel> <serviceHostingEnvironment aspNetCompatibilityEnabled="true" multipleSiteBindingsEnabled="true" /> <standardEndpoints> <webHttpEndpoint> <standardEndpoint name="" 
helpEnabled="true" automaticFormatSelectionEnabled="true" /> </webHttpEndpoint> </standardEndpoints> </system.serviceModel> Global.asax void Application_Start(object sender, EventArgs e) { RouteTable.Routes.Ignore("{resource}.axd/{*pathInfo}"); RouteTable.Routes.Add(new ServiceRoute("", new WebServiceHostFactory(), typeof(MyService))); }
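    One detail worth noting about the service code above: the Observable.Start(...) followed by t.First() still blocks the request thread for the full duration of the simulated work. If the service can target .NET 4.5, WCF accepts Task-returning operations, so the wait can be awaited instead of blocked on. This is only a sketch of that idea (method name mirrors the question, everything else is assumed) and is not, by itself, a fix for the request serialization the poster describes:

        // Hypothetical async variant of Execute1 (requires WCF on .NET 4.5).
        using System;
        using System.ServiceModel;
        using System.ServiceModel.Web;
        using System.Threading;
        using System.Threading.Tasks;

        [ServiceContract(SessionMode = SessionMode.NotAllowed)]
        public interface IMyAsyncService
        {
            [OperationContract]
            [WebInvoke(RequestFormat = WebMessageFormat.Json,
                       ResponseFormat = WebMessageFormat.Json,
                       UriTemplate = "/Execute1", Method = "POST")]
            Task<string> Execute1(string param);
        }

        [ServiceBehavior(InstanceContextMode = InstanceContextMode.PerCall)]
        public class MyAsyncService : IMyAsyncService
        {
            public async Task<string> Execute1(string param)
            {
                // Simulate 2 seconds of work without holding the request thread.
                await Task.Delay(TimeSpan.FromSeconds(2));
                return string.Format("Execute1 on: {0} at: {1} thread: {2}",
                    param, DateTime.Now, Thread.CurrentThread.ManagedThreadId);
            }
        }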

  • Solution: Testing Web Services with MSTest on Team Build

    - by Martin Hinshelwood
    Guess what. About 20 minutes after I fixed the build, Allan broke it again! Update: 4th March 2010 – After having huge problems getting this working I read Billy Wang’s post which showed me the light. The problem here is that even though the test passes locally it will not during an Automated Build. When you send your tests to the build server it does not understand that you want to spin up the web site and run tests against that! When you run the test in Visual Studio it spins up the web site anyway, but would you expect your test to pass if you told the website not to spin up? Of course not. So, when you send the code to the build server you need to tell it what to spin up. First, the best way to get the parameters you need is to right click on the method you want to test and select “Create Unit Test”. This will detect wither you are running in IIS or ASP.NET Development Server or None, and create the relevant tags. Figure: Right clicking on “SaveDefaultProjectFile” will produce a context menu with “Create Unit tests…” on it. If you use this option it will AutoDetect most of the Attributes that are required. /// <summary> ///A test for SSW.SQLDeploy.SilverlightUI.Web.Services.IProfileService.SaveDefaultProjectFile ///</summary> // TODO: Ensure that the UrlToTest attribute specifies a URL to an ASP.NET page (for example, // http://.../Default.aspx). This is necessary for the unit test to be executed on the web server, // whether you are testing a page, web service, or a WCF service. [TestMethod()] [HostType("ASP.NET")] [AspNetDevelopmentServerHost("D:\\Workspaces\\SSW\\SSW\\SqlDeploy\\DEV\\Main\\SSW.SQLDeploy.SilverlightUI.Web", "/")] [UrlToTest("http://localhost:3100/")] [DeploymentItem("SSW.SQLDeploy.SilverlightUI.Web.dll")] public void SaveDefaultProjectFileTest() { IProfileService target = new ProfileService(); // TODO: Initialize to an appropriate value string strComputerName = string.Empty; // TODO: Initialize to an appropriate value bool expected = false; // TODO: Initialize to an appropriate value bool actual; actual = target.SaveDefaultProjectFile(strComputerName); Assert.AreEqual(expected, actual); Assert.Inconclusive("Verify the correctness of this test method."); } Figure: Auto created code that shows the attributes required to run correctly in IIS or in this case ASP.NET Development Server If you are a purist and don’t like creating unit tests like this then you just need to add the three attributes manually. HostType – This attribute specified what host to use. Its an extensibility point, so you could write your own. Or you could just use “ASP.NET”. UrlToTest – This specifies the start URL. For most tests it does not matter which page you call, as long as it is a valid page otherwise your test may not run on the server, but may pass anyway. AspNetDevelopmentServerHost – This is a nasty one, it is only used if you are using ASP.NET Development Host and is unnecessary if you are using IIS. This sets the host settings and the first value MUST be the physical path to the root of your web application. OK, so all that was rubbish and I could not get anything working using the MSDN documentation. Google provided very little help until I ran into Billy Wang’s post  and I heard that heavenly music that all developers hear when understanding dawns that what they have been doing up until now is just plain stupid. I am sure that the above will work when I am doing Web Unit Tests, but there is a much easier way when doing web services. 
You need to add the AspNetDevelopmentServer attribute to your code. This will tell MSTest to spin up an ASP.NET Development server to host the service. Specify the path to the web application you want to use. [AspNetDevelopmentServer("WebApp1", "D:\\Workspaces\\SSW\\SSW\\SqlDeploy\\DEV\\Main\\SSW.SQLDeploy.SilverlightUI.Web")] [DeploymentItem("SSW.SQLDeploy.SilverlightUI.Web.dll")] [TestMethod] public void ProfileService_Integration_SaveDefaultProjectFile_Returns_True() { ProfileServiceClient target = new ProfileServiceClient(); bool isTrue = target.SaveDefaultProjectFile("Mav"); Assert.AreEqual(true, isTrue); } Figure: This AspNetDevelopmentServer will make sure that the specified web application is launched. Now we can run the test and have it pass, but if the dynamically assigned ASP.NET Development server port changes what happens to the details in your app.config that was generated when creating a reference to the web service? Well, it would be wrong and the test would fail. This is where Billy’s helper method comes in. Once you have created an instance of your service call, and it has loaded the config, but before you make any calls to it you need to go in and dynamically set the Endpoint address to the same address as your dynamically hosted Web Application. using System; using System.Collections.Generic; using System.Linq; using System.Text; using Microsoft.VisualStudio.TestTools.UnitTesting; using System.Reflection; using System.ServiceModel.Description; using System.ServiceModel; namespace SSW.SQLDeploy.Test { class WcfWebServiceHelper { public static bool TryUrlRedirection(object client, TestContext context, string identifier) { bool result = true; try { PropertyInfo property = client.GetType().GetProperty("Endpoint"); string webServer = context.Properties[string.Format("AspNetDevelopmentServer.{0}", identifier)].ToString(); Uri webServerUri = new Uri(webServer); ServiceEndpoint endpoint = (ServiceEndpoint)property.GetValue(client, null); EndpointAddressBuilder builder = new EndpointAddressBuilder(endpoint.Address); builder.Uri = new Uri(endpoint.Address.Uri.OriginalString.Replace(endpoint.Address.Uri.Authority, webServerUri.Authority)); endpoint.Address = builder.ToEndpointAddress(); } catch (Exception e) { context.WriteLine(e.Message); result = false; } return result; } } } Figure: This fixes a problem with the URL in your web.config not being the same as the dynamically hosted ASP.NET Development server port. We can now add a call to this method after we created the Proxy object and change the Endpoint for the Service to the correct one. This process is wrapped in an assert as if it fails there is no point in continuing. [AspNetDevelopmentServer("WebApp1", D:\\Workspaces\\SSW\\SSW\\SqlDeploy\\DEV\\Main\\SSW.SQLDeploy.SilverlightUI.Web")] [DeploymentItem("SSW.SQLDeploy.SilverlightUI.Web.dll")] [TestMethod] public void ProfileService_Integration_SaveDefaultProjectFile_Returns_True() { ProfileServiceClient target = new ProfileServiceClient(); Assert.IsTrue(WcfWebServiceHelper.TryUrlRedirection(target, TestContext, "WebApp1")); bool isTrue = target.SaveDefaultProjectFile("Mav"); Assert.AreEqual(true, isTrue); } Figure: Editing the Endpoint from the app.config on the fly to match the dynamically hosted ASP.NET Development Server URL and port is now easy. As you can imagine AspNetDevelopmentServer poses some problems of you have multiple developers. What are the chances of everyone using the same location to store the source? 
What about if you are using a build server, how do you tell MSTest where to look for the files? To the rescue is a property called" “%PathToWebRoot%” which is always right on the build server. It will always point to your build drop folder for your solutions web sites. Which will be “\\tfs.ssw.com.au\BuildDrop\[BuildName]\Debug\_PrecompiledWeb\” or whatever your build drop location is. So lets change the code above to add this. [AspNetDevelopmentServer("WebApp1", "%PathToWebRoot%\\SSW.SQLDeploy.SilverlightUI.Web")] [DeploymentItem("SSW.SQLDeploy.SilverlightUI.Web.dll")] [TestMethod] public void ProfileService_Integration_SaveDefaultProjectFile_Returns_True() { ProfileServiceClient target = new ProfileServiceClient(); Assert.IsTrue(WcfWebServiceHelper.TryUrlRedirection(target, TestContext, "WebApp1")); bool isTrue = target.SaveDefaultProjectFile("Mav"); Assert.AreEqual(true, isTrue); } Figure: Adding %PathToWebRoot% to the AspNetDevelopmentServer path makes it work everywhere. Now we have another problem… this will ONLY run on the build server and will fail locally as %PathToWebRoot%’s default value is “C:\Users\[profile]\Documents\Visual Studio 2010\Projects”. Well this sucks… How do we get the test to run on any build server and any developer laptop. Open “Tools | Options | Test Tools | Test Execution” in Visual Studio and you will see a field called “Web application root directory”. This is where you override that default above. Figure: You can override the default website location for tests. In my case I would put in “D:\Workspaces\SSW\SSW\SqlDeploy\DEV\Main” and all the developers working with this branch would put in the folder that they have mapped. Can you see a problem? What is I create a “$/SSW/SqlDeploy/DEV/34567” branch from Main and I want to run tests in there. Well… I would have to change the value above. This is not ideal, but as you can put your projects anywhere on a computer, it has to be done. Conclusion Although this looks convoluted and complicated there are real problems being solved here that mean that you have a test ANYWHERE solution. Any build server, any Developer workstation. Resources: http://billwg.blogspot.com/2009/06/testing-wcf-web-services.html http://tough-to-find.blogspot.com/2008/04/testing-asmx-web-services-in-visual.html http://msdn.microsoft.com/en-us/library/ms243399(VS.100).aspx http://blogs.msdn.com/dscruggs/archive/2008/09/29/web-tests-unit-tests-the-asp-net-development-server-and-code-coverage.aspx http://www.5z5.com/News/?543f8bc8b36b174f Technorati Tags: VS2010,MSTest,Team Build 2010,Team Build,Visual Studio,Visual Studio 2010,Visual Studio ALM,Team Test,Team Test 2010
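    The same TestContext property that Billy's helper reads can also be used directly when a test calls the service over plain HTTP rather than through a generated WCF proxy. A rough sketch, assuming the test class exposes the usual public TestContext property as in the tests above (the relative service path is made up for illustration):

        // Build the request URL from the dynamically assigned ASP.NET Development Server address.
        [AspNetDevelopmentServer("WebApp1", "%PathToWebRoot%\\SSW.SQLDeploy.SilverlightUI.Web")]
        [TestMethod]
        public void ProfileService_Http_Get_Returns_Response()
        {
            string serverUrl = TestContext.Properties["AspNetDevelopmentServer.WebApp1"].ToString();
            Uri uri = new Uri(new Uri(serverUrl), "ProfileService.svc");   // hypothetical relative path

            using (var client = new System.Net.WebClient())
            {
                string body = client.DownloadString(uri);
                Assert.IsFalse(string.IsNullOrEmpty(body));
            }
        }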

  • Domain registration and DNS, what am I actually paying for? [on hold]

    - by jozxyqk
    Long story short, I'm quite confused as to exactly what is offered by domain registration and DNS service sites.

    When I go to the URL "http://google.com", my PC asks a name server for the IP of "google.com", then connects to that IP and says "give me the page for http://google.com". As far as I know there are many name servers and they all cache these bits of information in a hierarchical network, but ultimately a DNS record must come from a single authoritative source (I'm not sure what this is called). There are different kinds of records; a record might hold not an IP address but an alias/redirect to other records, for example.

    Let's say I want my own domain name for some server. Maybe it even has a static IP but I want something nicer for people to remember, or my ISP assigns dynamic IPs and I want a URL that always works, or my website is hosted on a shared machine, so the browser needs to send "http://mydnsname.com" to the web server to distinguish my requests from other requests to the same IP for different sites.

    Registering a domain costs a small amount of money per year. Where does this money go (not that I'm complaining :P)? Is that really all it costs to maintain the entire system of name servers? If I just register the domain and nothing else, what do I get? Is that just reserving a name or hosting WHOIS information, or have I paid for a DNS record to be hosted? Can a domain alone have a record, such as an IP address, or be an alias to another domain?

    A bunch of sites offer other services in addition to domain registration (I'm assuming they register the domain through another party for me). One example is "dynamic DNS" (DDNS), but isn't this just a regular DNS record that's updated regularly? Does it cost extra to update it more often? Without DDNS, can a DNS record still point to an IP? I've also seen the term "managed DNS" and have no idea where that fits in.
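    For what it's worth, the "record that maps a name to an IP" part is easy to observe from code. A tiny C# sketch using the standard Dns class (nothing here is specific to any registrar or DNS provider):

        using System;
        using System.Net;

        class DnsLookupDemo
        {
            static void Main()
            {
                // Resolve a host name to the addresses its DNS records currently point at.
                IPHostEntry entry = Dns.GetHostEntry("google.com");
                Console.WriteLine("Canonical name: " + entry.HostName);
                foreach (IPAddress address in entry.AddressList)
                {
                    Console.WriteLine("Address: " + address);
                }
            }
        }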

  • Apache2 on Raspbian: Multiviews is enabled but not working

    - by Christian L
    I recently moved web server, from an Ubuntu server set up by my brother (I have sudo) to a Raspbian server set up by myself. On the other server MultiViews worked out of the box, but on Raspbian it does not seem to work, although it appears to be enabled out of the box there as well. What I am trying to do is get Apache to serve my.doma.in/mobile.php when I enter my.doma.in/mobile in the address field.

    I am using the same sites-available file as I did before:

        <VirtualHost *:80>
            ServerName my.doma.in
            ServerAdmin [email protected]
            DocumentRoot /home/christian/www/do

            <Directory />
                Options FollowSymLinks
                AllowOverride All
            </Directory>

            <Directory /home/christian/www/do>
                Options Indexes FollowSymLinks MultiViews
                AllowOverride All
                Order allow,deny
                allow from all
            </Directory>

            ScriptAlias /cgi-bin/ /usr/lib/cgi-bin/
            <Directory "/usr/lib/cgi-bin">
                AllowOverride None
                Options +ExecCGI -MultiViews +SymLinksIfOwnerMatch
                Order allow,deny
                Allow from all
            </Directory>

            ErrorLog ${APACHE_LOG_DIR}/error.log
            # Possible values include: debug, info, notice, warn, error, crit, alert, emerg.
            LogLevel warn
            CustomLog ${APACHE_LOG_DIR}/access.log combined

            Alias /doc/ "/usr/share/doc/"
            <Directory "/usr/share/doc/">
                Options Indexes MultiViews FollowSymLinks
                AllowOverride None
                Order deny,allow
                Deny from all
                Allow from 127.0.0.0/255.0.0.0 ::1/128
            </Directory>

    From what I have read in various places while googling this issue, the negotiation module has to be enabled, so I tried to enable it:

        sudo a2enmod negotiation

    which gives this result:

        Module negotiation already enabled

    I have read through /etc/apache2/apache2.conf and did not find anything in particular that seemed to help, but please ask if you think I should post it. Any ideas on how to solve this by getting MultiViews to work?
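    Not part of the original question, but if MultiViews keeps misbehaving, a common workaround for extensionless PHP URLs is an explicit mod_rewrite rule in the vhost or an .htaccess file (it requires "sudo a2enmod rewrite"; AllowOverride All is already set in the config above):

        RewriteEngine On
        # If the requested name is not an existing file or directory,
        # but adding ".php" yields a real file, serve that file instead.
        RewriteCond %{REQUEST_FILENAME} !-f
        RewriteCond %{REQUEST_FILENAME} !-d
        RewriteCond %{REQUEST_FILENAME}.php -f
        RewriteRule ^(.*)$ $1.php [L]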

  • April 2013 Release of the Ajax Control Toolkit

    - by Stephen.Walther
    I’m excited to announce the April 2013 release of the Ajax Control Toolkit. For this release, we focused on improving two controls: the AjaxFileUpload and the MaskedEdit controls. You can download the latest release from CodePlex at http://AjaxControlToolkit.CodePlex.com or, better yet, you can execute the following NuGet command within Visual Studio 2010/2012: There are three builds of the Ajax Control Toolkit: .NET 3.5, .NET 4.0, and .NET 4.5. A Better AjaxFileUpload Control We completely rewrote the AjaxFileUpload control for this release. We had two primary goals. First, we wanted to support uploading really large files. In particular, we wanted to support uploading multi-gigabyte files such as video files or application files. Second, we wanted to support showing upload progress on as many browsers as possible. The previous version of the AjaxFileUpload could show upload progress when used with Google Chrome or Mozilla Firefox but not when used with Apple Safari or Microsoft Internet Explorer. The new version of the AjaxFileUpload control shows upload progress when used with any browser. Using the AjaxFileUpload Control Let me walk-through using the AjaxFileUpload in the most basic scenario. And then, in following sections, I can explain some of its more advanced features. Here’s how you can declare the AjaxFileUpload control in a page: <ajaxToolkit:ToolkitScriptManager runat="server" /> <ajaxToolkit:AjaxFileUpload ID="AjaxFileUpload1" AllowedFileTypes="mp4" OnUploadComplete="AjaxFileUpload1_UploadComplete" runat="server" /> The exact appearance of the AjaxFileUpload control depends on the features that a browser supports. In the case of Google Chrome, which supports drag-and-drop upload, here’s what the AjaxFileUpload looks like: Notice that the page above includes two Ajax Control Toolkit controls: the AjaxFileUpload and the ToolkitScriptManager control. You always need to include the ToolkitScriptManager with any page which uses Ajax Control Toolkit controls. The AjaxFileUpload control declared in the page above includes an event handler for its UploadComplete event. This event handler is declared in the code-behind page like this: protected void AjaxFileUpload1_UploadComplete(object sender, AjaxControlToolkit.AjaxFileUploadEventArgs e) { // Save uploaded file to App_Data folder AjaxFileUpload1.SaveAs(MapPath("~/App_Data/" + e.FileName)); } This method saves the uploaded file to your website’s App_Data folder. I’m assuming that you have an App_Data folder in your project – if you don’t have one then you need to create one or you will get an error. There is one more thing that you must do in order to get the AjaxFileUpload control to work. The AjaxFileUpload control relies on an HTTP Handler named AjaxFileUploadHandler.axd. 
You need to declare this handler in your application’s root web.config file like this: <configuration> <system.web> <compilation debug="true" targetFramework="4.5" /> <httpRuntime targetFramework="4.5" maxRequestLength="42949672" /> <httpHandlers> <add verb="*" path="AjaxFileUploadHandler.axd" type="AjaxControlToolkit.AjaxFileUploadHandler, AjaxControlToolkit"/> </httpHandlers> </system.web> <system.webServer> <validation validateIntegratedModeConfiguration="false"/> <handlers> <add name="AjaxFileUploadHandler" verb="*" path="AjaxFileUploadHandler.axd" type="AjaxControlToolkit.AjaxFileUploadHandler, AjaxControlToolkit"/> </handlers> <security> <requestFiltering> <requestLimits maxAllowedContentLength="4294967295"/> </requestFiltering> </security> </system.webServer> </configuration> Notice that the web.config file above also contains configuration settings for the maxRequestLength and maxAllowedContentLength. You need to assign large values to these configuration settings — as I did in the web.config file above — in order to accept large file uploads. Supporting Chunked File Uploads Because one of our primary goals with this release was support for large file uploads, we added support for client-side chunking. When you upload a file using a browser which fully supports the HTML5 File API — such as Google Chrome or Mozilla Firefox — then the file is uploaded in multiple chunks. You can see chunking in action by opening F12 Developer Tools in your browser and observing the Network tab: Notice that there is a crazy number of distinct post requests made (about 360 distinct requests for a 1 gigabyte file). Each post request looks like this: http://localhost:24338/AjaxFileUploadHandler.axd?contextKey={DA8BEDC8-B952-4d5d-8CC2-59FE922E2923}&fileId=B7CCE31C-6AB1-BB28-2940-49E0C9B81C64 &fileName=Sita_Sings_the_Blues_480p_2150kbps.mp4&chunked=true&firstChunk=false Each request posts another chunk of the file being uploaded. Notice that the request URL includes a chunked=true parameter which indicates that the browser is breaking the file being uploaded into multiple chunks. Showing Upload Progress on All Browsers The previous version of the AjaxFileUpload control could display upload progress only in the case of browsers which fully support the HTML5 File API. The new version of the AjaxFileUpload control can display upload progress in the case of all browsers. If a browser does not fully support the HTML5 File API then the browser polls the server every few seconds with an Ajax request to determine the percentage of the file that has been uploaded. This technique of displaying progress works with any browser which supports making Ajax requests. There is one catch. Be warned that this new feature only works with the .NET 4.0 and .NET 4.5 versions of the AjaxControlToolkit. To show upload progress, we are taking advantage of the new ASP.NET HttpRequest.GetBufferedInputStream() and HttpRequest.GetBufferlessInputStream() methods which are not supported by .NET 3.5. For example, here is what the Network tab looks like when you use the AjaxFileUpload with Microsoft Internet Explorer: Here’s what the requests in the Network tab look like: GET /WebForm1.aspx?contextKey={DA8BEDC8-B952-4d5d-8CC2-59FE922E2923}&poll=1&guid=9206FF94-76F9-B197-D1BC-EA9AD282806B HTTP/1.1 Notice that each request includes a poll=1 parameter. This parameter indicates that this is a polling request to get the size of the file buffered on the server. 
Here’s what the response body of a request looks like when about 20% of a file has been uploaded: Buffering to a Temporary File When you upload a file using the AjaxFileUpload control, the file upload is buffered to a temporary file located at Path.GetTempPath(). When you call the SaveAs() method, as we did in the sample page above, the temporary file is copied to a new file and then the temporary file is deleted. If you don’t call the SaveAs() method, then you must ensure that the temporary file gets deleted yourself. For example, if you want to save the file to a database then you will never call the SaveAs() method and you are responsible for deleting the file. The easiest way to delete the temporary file is to call the AjaxFileUploadEventArgs.DeleteTemporaryData() method in the UploadComplete handler: protected void AjaxFileUpload1_UploadComplete(object sender, AjaxControlToolkit.AjaxFileUploadEventArgs e) { // Save uploaded file to a database table e.DeleteTemporaryData(); } You also can call the static AjaxFileUpload.CleanAllTemporaryData() method to delete all temporary data and not only the temporary data related to the current file upload. For example, you might want to call this method on application start to ensure that all temporary data is removed whenever your application restarts. A Better MaskedEdit Extender This release of the Ajax Control Toolkit contains bug fixes for the top-voted issues related to the MaskedEdit control. We closed over 25 MaskedEdit issues. Here is a complete list of the issues addressed with this release: · 17302 MaskedEditExtender MaskType=Date, Mask=99/99/99 Undefined JS Error · 11758 MaskedEdit causes error in JScript when working with 2-digits year · 18810 Maskededitextender/validator Date validation issue · 23236 MaskEditValidator does not work with date input using format dd/mm/yyyy · 23042 Webkit based browsers (Safari, Chrome) and MaskedEditExtender · 26685 MaskedEditExtender@(ClearMaskOnLostFocus=false) adds a zero character when you each focused to target textbox · 16109 MaskedEditExtender: Negative amount, followed by decimal, sets value to positive · 11522 MaskEditExtender of AjaxtoolKit-1.0.10618.0 does not work properly for Hungarian Culture · 25988 MaskedEditExtender – CultureName (HU-hu) > DateSeparator · 23221 MaskedEditExtender date separator problem · 15233 Day and month swap in Dynamic user control · 15492 MaskedEditExtender with ClearMaskOnLostFocus and with MaskedEditValidator with ClientValidationFunction · 9389 MaskedEditValidator – when on no entry · 11392 MaskedEdit Number format messed up · 11819 MaskedEditExtender erases all values beyond first comma separtor · 13423 MaskedEdit(Extender/Validator) combo problem · 16111 MaskedEditValidator cannot validate date with DayMonthYear in UserDateFormat of MaskedEditExtender · 10901 MaskedEdit: The months and date fields swap values when you hit submit if UserDateFormat is set. · 15190 MaskedEditValidator can’t make use of MaskedEditExtender’s UserDateFormat property · 13898 MaskedEdit Extender with custom date type mask gives javascript error · 14692 MaskedEdit error in “yy/MM/dd” format. · 16186 MaskedEditExtender does not handle century properly in a date mask · 26456 MaskedEditBehavior. 
ConvFmtTime : function(input,loadFirst) fails if this._CultureAMPMPlaceholder == “” · 21474 Error on MaskedEditExtender working with number format · 23023 MaskedEditExtender’s ClearMaskOnLostFocus property causes problems for MaskedEditValidator when set to false · 13656 MaskedEditValidator Min/Max Date value issue Conclusion This latest release of the Ajax Control Toolkit required many hours of work by a team of talented developers. I want to thank the members of the Superexpert team for the long hours which they put into this release.
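    One small addition to the SaveAs() example above: e.FileName comes from the client, so it is worth reducing it to a bare file name before building the target path. A minimal sketch of that idea, using the same App_Data folder as the article (the GUID prefix is just one way to avoid name collisions):

        protected void AjaxFileUpload1_UploadComplete(object sender, AjaxControlToolkit.AjaxFileUploadEventArgs e)
        {
            // Strip any directory information the browser may have sent and
            // prefix a GUID so two users uploading "video.mp4" do not collide.
            string safeName = System.IO.Path.GetFileName(e.FileName);
            string target = MapPath("~/App_Data/" + Guid.NewGuid().ToString("N") + "_" + safeName);

            AjaxFileUpload1.SaveAs(target);
        }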

  • Project Navigation and File Nesting in ASP.NET MVC Projects

    - by Rick Strahl
    More and more I’m finding myself getting lost in the files in some of my larger Web projects. There’s so much freaking content to deal with – HTML Views, several derived CSS pages, page level CSS, script libraries, application wide scripts and page specific script files etc. etc. Thankfully I use Resharper and the Ctrl-T Go to Anything which autocompletes you to any file, type, member rapidly. Awesome except when I forget – or when I’m not quite sure of the name of what I’m looking for. Project navigation is still important. Sometimes while working on a project I seem to have 30 or more files open and trying to locate another new file to open in the solution often ends up being a mental exercise – “where did I put that thing?” It’s those little hesitations that tend to get in the way of workflow frequently. To make things worse most NuGet packages for client side frameworks and scripts, dump stuff into folders that I generally don’t use. I’ve never been a fan of the ‘Content’ folder in MVC which is just an empty layer that doesn’t serve much of a purpose. It’s usually the first thing I nuke in every MVC project. To me the project root is where the actual content for a site goes – is there really a need to add another folder to force another path into every resource you use? It’s ugly and also inefficient as it adds additional bytes to every resource link you embed into a page. Alternatives I’ve been playing around with different folder layouts recently and found that moving my cheese around has actually made project navigation much easier. In this post I show a couple of things I’ve found useful and maybe you find some of these useful as well or at least get some ideas what can be changed to provide better project flow. The first thing I’ve been doing is add a root Code folder and putting all server code into that. I’m a big fan of treating the Web project root folder as my Web root folder so all content comes from the root without unneeded nesting like the Content folder. By moving all server code out of the root tree (except for Code) the root tree becomes a lot cleaner immediately as you remove Controllers, App_Start, Models etc. and move them underneath Code. Yes this adds another folder level for server code, but it leaves only code related things in one place that’s easier to jump back and forth in. Additionally I find myself doing a lot less with server side code these days, more with client side code so I want the server code separated from that. The root folder itself then serves as the root content folder. Specifically I have the Views folder below it, as well as the Css and Scripts folders which serve to hold only common libraries and global CSS and Scripts code. These days of building SPA style application, I also tend to have an App folder there where I keep my application specific JavaScript files, as well as HTML View templates for client SPA apps like Angular. Here’s an example of what this looks like in a relatively small project: The goal is to keep things that are related together, so I don’t end up jumping around so much in the solution to get to specific project items. The Code folder may irk some of you and hark back to the days of the App_Code folder in non Web-Application projects, but these days I find myself messing with a lot less server side code and much more with client side files – HTML, CSS and JavaScript. Generally I work on a single controller at a time – once that’s open it’s open that’s typically the only server code I work with regularily. 
Business logic lives in another project altogether, so other than the controller and maybe ViewModels there’s not a lot of code being accessed in the Code folder. So throwing that off the root and isolating seems like an easy win. Nesting Page specific content In a lot of my existing applications that are pure server side MVC application perhaps with some JavaScript associated with them , I tend to have page level javascript and css files. For these types of pages I actually prefer the local files stored in the same folder as the parent view. So typically I have a .css and .js files with the same name as the view in the same folder. This looks something like this: In order for this to work you have to also make a configuration change inside of the /Views/web.config file, as the Views folder is blocked with the BlockViewHandler that prohibits access to content from that folder. It’s easy to fix by changing the path from * to *.cshtml or *.vbhtml so that view retrieval is blocked:<system.webServer> <handlers> <remove name="BlockViewHandler"/> <add name="BlockViewHandler" path="*.cshtml" verb="*" preCondition="integratedMode" type="System.Web.HttpNotFoundHandler" /> </handlers> </system.webServer> With this in place, from inside of your Views you can then reference those same resources like this:<link href="~/Views/Admin/QuizPrognosisItems.css" rel="stylesheet" /> and<script src="~/Views/Admin/QuizPrognosisItems.js"></script> which works fine. JavaScript and CSS files in the Views folder deploy just like the .cshtml files do and can be referenced from this folder as well. Making this happen is not really as straightforward as it should be with just Visual Studio unfortunately, as there’s no easy way to get the file nesting from the VS IDE directly (you have to modify the .csproj file). However, Mads Kristensen has a nice Visual Studio Add-in that provides file nesting via a short cut menu option. Using this you can select each of the ‘child’ files and then nest them under a parent file. In the case above I select the .js and .css files and nest them underneath the .cshtml view. I was even toying with the idea of throwing the controller.cs files into the Views folder, but that’s maybe going a little too far :-) It would work however as Visual Studio doesn’t publish .cs files and the compiler doesn’t care where the files live. There are lots of options and if you think that would make life easier it’s another option to help group related things together. Are there any downside to this? Possibly – if you’re using automated minification/packaging tools like ASP.NET Bundling or Grunt/Gulp with Uglify, it becomes a little harder to group script and css files for minification as you may end up looking in multiple folders instead of a single folder. But – again that’s a one time configuration step that’s easily handled and much less intrusive then constantly having to search for files in your project. Client Side Folders The particular project shown above in the screen shots above is a traditional server side ASP.NET MVC application with most content rendered into server side Razor pages. There’s a fair amount of client side stuff happening on these pages as well – specifically several of these pages are self contained single page Angular applications that deal with 1 or maybe 2 separate views and the layout I’ve shown above really focuses on the server side aspect where there are Razor views with related script and css resources. 
For applications that are more client centric and have a lot more script and HTML template based content I tend to use the same layout for the server components, but the client side code can often be broken out differently. In SPA type applications I tend to follow the App folder approach where all the application pieces that make the SPA applications end up below the App folder. Here’s what that looks like for me – here this is an AngularJs project: In this case the App folder holds both the application specific js files, and the partial HTML views that get loaded into this single SPA page application. In this particular Angular SPA application that has controllers linked to particular partial views, I prefer to keep the script files that are associated with the views – Angular Js Controllers in this case – with the actual partials. Again I like the proximity of the view with the main code associated with the view, because 90% of the UI application code that gets written is handled between these two files. This approach works well, but only if controllers are fairly closely aligned with the partials. If you have many smaller sub-controllers or lots of directives where the alignment between views and code is more segmented this approach starts falling apart and you’ll probably be better off with separate folders in js folder. Following Angular conventions you’d have controllers/directives/services etc. folders. Please note that I’m not saying any of these ways are right or wrong  – this is just what has worked for me and why! Skipping Project Navigation altogether with Resharper I’ve talked a bit about project navigation in the project tree, which is a common way to navigate and which we all use at least some of the time, but if you use a tool like Resharper – which has Ctrl-T to jump to anything, you can quickly navigate with a shortcut key and autocomplete search. Here’s what Resharper’s jump to anything looks like: Resharper’s Goto Anything box lets you type and quick search over files, classes and members of the entire solution which is a very fast and powerful way to find what you’re looking for in your project, by passing the solution explorer altogether. As long as you remember to use (which I sometimes don’t) and you know what you’re looking for it’s by far the quickest way to find things in a project. It’s a shame that this sort of a simple search interface isn’t part of the native Visual Studio IDE. Work how you like to work Ultimately it all comes down to workflow and how you like to work, and what makes *you* more productive. Following pre-defined patterns is great for consistency, as long as they don’t get in the way you work. A lot of the default folder structures in Visual Studio for ASP.NET MVC were defined when things were done differently. These days we’re dealing with a lot more diverse project content than when ASP.NET MVC was originally introduced and project organization definitely is something that can get in the way if it doesn’t fit your workflow. So take a look and see what works well and what might benefit from organizing files differently. As so many things with ASP.NET, as things evolve and tend to get more complex I’ve found that I end up fighting some of the conventions. The good news is that you don’t have to follow the conventions and you have the freedom to do just about anything that works for you. 
Even though what I’ve shown here diverges from conventions, I don’t think anybody would stumble over these relatively minor changes and not immediately figure out where things live, even in larger projects. But nevertheless think long and hard before breaking those conventions – if there isn’t a good reason to break them or the changes don’t provide improved workflow then it’s not worth it. Break the rules, but only if there’s a quantifiable benefit. You may not agree with how I’ve chosen to divert from the standard project structures in this article, but maybe it gives you some ideas of how you can mix things up to make your existing project flow a little nicer and make it easier to navigate for your environment. © Rick Strahl, West Wind Technologies, 2005-2014. Posted in ASP.NET, MVC
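    On the bundling caveat mentioned earlier: if page-level scripts live next to their views, ASP.NET bundling can still pick them up by searching the Views tree recursively. A rough sketch, assuming the standard System.Web.Optimization package and the folder layout from the examples above:

        using System.Web.Optimization;

        public class BundleConfig
        {
            public static void RegisterBundles(BundleCollection bundles)
            {
                // Pull every page-level .js file that sits next to its Razor view,
                // searching subfolders of /Views recursively (third argument = true).
                bundles.Add(new ScriptBundle("~/bundles/viewscripts")
                    .IncludeDirectory("~/Views", "*.js", true));

                // Same idea for the page-level CSS files.
                bundles.Add(new StyleBundle("~/styles/viewstyles")
                    .IncludeDirectory("~/Views", "*.css", true));
            }
        }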

  • C# 5.0 Async/Await Demo Code

    - by Paulo Morgado
    I’ve published the sample code I use to demonstrate the use of async/await in C# 5.0. You can find it here.

    Projects:

    PauloMorgado.AyncDemo.WebServer. This project is a simple web server implemented as a console application using Microsoft ASP.NET Web API self hosting; it serves an image (with a delay) that is accessed by the other projects. The project has a dependency on Json.NET because the Microsoft ASP.NET Web API hosting has a dependency on Json.NET. The application must be run from a command prompt with administrative privileges, or a urlacl must be added to allow it to listen, using the following command:

        netsh http add urlacl url=http://+:9090/ user=machine\username

    To remove the urlacl, just use the following command:

        netsh http delete urlacl url=http://+:9090/

    PauloMorgado.AsyncDemo.WindowsForms. This Windows Forms project contains three regions that must be uncommented one at a time: "Sync with WebClient" retrieves the image through a synchronous call using the WebClient class; "Async with WebClient" retrieves the image through an asynchronous call using the WebClient class; "Async with HttpClient with cancelation" retrieves the image through an asynchronous call, with cancelation, using the HttpClient class.

    PauloMorgado.AsyncDemo.Wpf. This WPF project contains the same three regions, to be uncommented one at a time: "Sync with WebClient", "Async with WebClient" and "Async with HttpClient with cancelation".
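    The "async with cancelation" case is the most interesting of the three. A minimal sketch of what such a call can look like against the demo server (the /api/image path is an assumption; the sample's real endpoint may differ, only the port 9090 comes from the urlacl above):

        using System;
        using System.Net.Http;
        using System.Threading;
        using System.Threading.Tasks;

        public class ImageDownloader
        {
            private readonly CancellationTokenSource cts = new CancellationTokenSource();

            public async Task<byte[]> DownloadImageAsync()
            {
                using (var client = new HttpClient())
                {
                    // The UI thread stays responsive while the (deliberately slow) server responds.
                    HttpResponseMessage response =
                        await client.GetAsync("http://localhost:9090/api/image", cts.Token);
                    response.EnsureSuccessStatusCode();
                    return await response.Content.ReadAsByteArrayAsync();
                }
            }

            public void Cancel()
            {
                // Called from a Cancel button; the awaited GetAsync then throws OperationCanceledException.
                cts.Cancel();
            }
        }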

  • How do I configure postfix starttls

    - by Michael Temeschinko
    I need to install postfix on my web server because I need to use sendmail for my website. I only need to send mail, not receive or relay it. I want to send with STARTTLS (port 587) via the relay smtp.strato.de. Here is what happens:

        Jul 15 00:02:38 negrita postfix/smtp[7120]: Host offered STARTTLS: [smtp.strato.de]
        Jul 15 00:02:38 negrita postfix/smtp[7120]: C717A181252: to=<[email protected]>, relay=smtp.strato.de[81.169.145.133]:587, delay=0.31, delays=0.09/0/0.16/0.04, dsn=5.7.0, status=bounced (host smtp.strato.de[81.169.145.133] said: 530 5.7.0 Bitte konfigurieren Sie ihr E-Mailprogramm fuer Authentifizierung am SMTP Server, wie auf www.strato.de/email-hilfe beschrieben. - Please configure your mail client for using SMTP Server Authentication (in reply to MAIL FROM command))
        Jul 15 00:02:38 negrita postfix/cleanup[7118]: 29F5F181254: message-id=<20120714220238.29F5F181254@negrita>
        Jul 15 00:02:38 negrita postfix/qmgr[7102]: 29F5F181254: from=<>, size=2548, nrcpt=1 (queue active)
        Jul 15 00:02:38 negrita postfix/bounce[7121]: C717A181252: sender non-delivery notification: 29F5F181254
        Jul 15 00:02:38 negrita postfix/qmgr[7102]: C717A181252: removed
        Jul 15 00:02:39 negrita postfix/local[7122]: 29F5F181254: to=<michael@negrita>, relay=local, delay=1.1, delays=0.04/0/0/1.1, dsn=2.0.0, status=sent (delivered to command: procmail -a "$EXTENSION")
        Jul 15 00:02:39 negrita postfix/qmgr[7102]: 29F5F181254: removed
        Jul 15 08:05:18 negrita postfix/master[1083]: daemon started -- version 2.9.1, configuration /etc/postfix
        Jul 15 08:05:29 negrita postfix/master[1083]: reload -- version 2.9.1, configuration /etc/postfix

    and my config:

        michael@negrita:~$ postconf -n
        biff = no
        config_directory = /etc/postfix
        delay_warning_time = 4h
        home_mailbox = /home/michael/Maildir/
        html_directory = /usr/share/doc/postfix/html
        inet_interfaces = localhost
        mailbox_command = procmail -a "$EXTENSION"
        mailbox_size_limit = 0
        mydomain = example.com
        myhostname = negrita
        mynetworks = 127.0.0.0/8 [::ffff:127.0.0.0]/104 [::1]/128
        notify_classes = resource, software, protocol
        readme_directory = /usr/share/doc/postfix
        recipient_delimiter = +
        relayhost = [smtp.strato.de]:587
        smtp_sasl_password_maps = hash:/etc/postfix/sasl/passwd
        smtp_tls_enforce_peername = no
        smtp_tls_note_starttls_offer = yes
        smtp_tls_session_cache_database = btree:${data_directory}/smtp_scache
        smtpd_banner = $myhostname ESMTP $mail_name (Ubuntu)
        smtpd_sasl_auth_enable = yes
        smtpd_sasl_security_options = noanonymous
        smtpd_tls_cert_file = /etc/ssl/certs/ssl-cert-snakeoil.pem
        smtpd_tls_key_file = /etc/ssl/private/ssl-cert-snakeoil.key
        smtpd_tls_session_cache_database = btree:${data_directory}/smtpd_scache
        smtpd_use_tls = yes
        soft_bounce = yes

    The user and password are OK because I can send mail with my Thunderbird. Thanks in advance, mike
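    One thing that stands out in the config above (not confirmed by the original post, so treat it as a guess): authentication is only enabled on the smtpd_* (inbound) side, not for the smtp_* client that talks to the relayhost, and the 530 response from smtp.strato.de is exactly what an unauthenticated relay attempt produces. The usual additions to /etc/postfix/main.cf for an authenticated STARTTLS relay look roughly like this:

        # Authenticate as an SMTP client against the relayhost on port 587
        smtp_sasl_auth_enable = yes
        smtp_sasl_security_options = noanonymous
        # Require TLS for the outbound connection (older Postfix: smtp_use_tls = yes)
        smtp_tls_security_level = encrypt

    followed by "sudo postmap /etc/postfix/sasl/passwd" and "sudo service postfix reload".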

  • What is recommended minimum object size for gzip performance benefits?

    - by utt73
    I'm working on improving page display times, and one of the methods is to gzip content from the web server. Google recommends:

        Note that gzipping is only beneficial for larger resources. Due to the overhead and latency of compression and decompression, you should only gzip files above a certain size threshold; we recommend a minimum range between 150 and 1000 bytes. Gzipping files below 150 bytes can actually make them larger.

    We serve our content through Akamai, using their network for a proxy and CDN. What they've told me:

        Following up on your question regarding what is the minimum size Akamai will compress the requested object when sending it to the end user: the minimum size is 860 bytes.

    My reply: what is the reason that Akamai's minimum size is 860 bytes? And why, for example, is this not the case for files Akamai serves for Facebook (see below)? Google recommends gzipping more aggressively, and that seems appropriate on our site, where the most frequent hits, by far, are AJAX calls that are under 860 bytes.

    Akamai's response:

        The reason 860 bytes is the minimum size for compression is twofold: (1) The overhead of compressing an object under 860 bytes outweighs the performance gain. (2) Objects under 860 bytes can be transmitted via a single packet anyway, so there isn't a compelling reason to compress them.

    So I'm here for some fact checking. Is packet size really the end of the reasoning behind the 860 byte limit? Why would high-traffic sites push this down to the 150 byte limit? Just to save on bandwidth costs (since CDNs base their charges on bandwidth offloaded from the origin), or is there a performance gain in doing so?
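    The "small payloads can grow" part of Google's advice is easy to verify locally. A quick C# sketch using the standard GZipStream (nothing here is specific to Akamai or any CDN):

        using System;
        using System.IO;
        using System.IO.Compression;
        using System.Text;

        class GzipOverheadDemo
        {
            static void Main()
            {
                // A typical tiny AJAX response body.
                byte[] payload = Encoding.UTF8.GetBytes("{\"ok\":true,\"count\":3}");

                using (var output = new MemoryStream())
                {
                    using (var gzip = new GZipStream(output, CompressionMode.Compress))
                    {
                        gzip.Write(payload, 0, payload.Length);
                    }
                    // For bodies this small the gzip header/footer outweighs any savings,
                    // so the compressed size usually comes out larger than the original.
                    Console.WriteLine("original: {0} bytes, gzipped: {1} bytes",
                        payload.Length, output.ToArray().Length);
                }
            }
        }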

  • CodePlex Daily Summary for Tuesday, September 18, 2012

    CodePlex Daily Summary for Tuesday, September 18, 2012Popular ReleasesfastJSON: v2.0.5: 2.0.5 - fixed number parsing for invariant format - added a test for German locale number testing (,. problems)????????API for .Net SDK: SDK for .Net ??? Release 4: 2012?9?17??? ?????,???????????????。 ?????Release 3??????,???????,???,??? ??????????????????SDK,????????。 ??,??????? That's all.VidCoder: 1.4.0 Beta: First Beta release! Catches up to HandBrake nightlies with SVN 4937. Added PGS (Blu-ray) subtitle support. Additional framerates available: 30, 50, 59.94, 60 Additional sample rates available: 8, 11.025, 12 and 16 kHz Additional higher bitrates available for audio. Same as Source Constant Framerate available. Added Apple TV 3 preset. Added new Bob deinterlacing option. Introduced process isolation for encodes. Now if HandBrake crashes, VidCoder will keep running and continue pro...DNN Metro7 style Skin package: Metro7 style Skin for DotNetNuke 06.02.01: Stabilization release fixed this issues: Links not worked on FF, Chrome and Safari Modified packaging with own manifest file for install and source package. Moved the user Image on the Login to the left side. Moved h2 font-size to 24px. Note : This release Comes w/o source package about we still work an a solution. Who Needs the Visual Studio source files please go to source and download it from there. Known 16 CSS issues that related to the skin.css. All others are DNN default o...Visual Studio Icon Patcher: Version 1.5.1: This fixes a bug in the 1.5 release where it would crash when no language packs were installed for VS2010.sheetengine - Isometric HTML5 JavaScript Display Engine: sheetengine v1.1.0: This release of sheetengine introduces major drawing optimizations. A background canvas is created with the full drawn scenery onto which only the changed parts are redrawn. For example a moving object will cause only its bounding box to be redrawn instead of the full scene. This background canvas is copied to the main canvas in each iteration. For this reason the size of the bounding box of every object needs to be defined and also the width and height of the background canvas. The example...VFPX: Desktop Alerts 1.0.2: This update for the Desktop Alerts contains changes to behavior for setting custom sounds for alerts. I have removed ALERTWAV.TXT from the project, and also removed DA_DEFAULTSOUND from the VFPALERT.H file. The AlertManager class and Alert class both have a "default" cSound of ADDBS(JUSTPATH(_VFP.ServerName))+"alert.wav" --- so, as long as you distribute a sound file with the file name "alert.wav" along with the EXE, that file will be used. You can set your own sound file globally by setti...MCEBuddy 2.x: MCEBuddy 2.2.15: Changelog for 2.2.15 (32bit and 64bit) 1. Added support for %originalfilepath% to get the source file full path. Used for custom commands only. 2. Added support for better parsing of Media Portal XML files to extract ShowName and Episode Name and download additional details from TVDB (like Season No, Episode No etc). 3. Added support for TVDB seriesID in metadata 4. 
Added support for eMail non blocking UI test
CrashReporter.NET: Exception reporting library for C# and VB.NET - CrashReporter.NET 1.2: Added html mail format which shows hierarchical exception report for better understanding.
PDF Viewer Web part: PDF Viewer Web Part: PDF Viewer Web Part
Microsoft Ajax Minifier: Microsoft Ajax Minifier 4.67: Fix issue #18629 - incorrectly handling null characters in string literals and not throwing an error when outside string literals. Update for issue #18600 - forgot to make the ///#DEBUG= directive also set a known-global for the given debug namespace. Removed the kill-switch for disregarding preprocessor define-comments (///#IF and the like) and created a separate CodeSettings.IgnorePreprocessorDefines property for those who really need to turn that off. Some people had been setting -kil...
MPC-BE: Media Player Classic BE 1.0.1.0 build 1122: MPC-BE is a free and open source audio and video player for Windows. MPC-BE is based on the original "Media Player Classic" project (Gabest) and the "Media Player Classic Home Cinema" project (Casimir666), and contains additional features and bug fixes. Supported operating systems: Windows XP SP2, Vista, 7 32bit/64bit. System requirements: an SSE capable CPU and the latest DirectX 9.0c runtime (June 2010) - install it regardless of the operating system, they all need it. Web installer: http://www.micro...
Preactor Object Model: Visual Studio Template .NET 3.5: Visual Studio Template with all the necessary files to get started with POM. You will still need to get the Preactor.ObjectModel and Preactor.ObjectModelExtensions libraries from NuGet though. You will also need to sign the assembly with a strong name key.
Lakana - WPF Framework: Lakana V2: Lakana V2 contains Lakana WPF Forms (with sample project) and Lakana WPF Navigation (with sample project).
myCollections: Version 2.3.0.0: New in this version: added TheGamesDB.net API for games and NDS, added fast search options, added order by Artist/Album for music, fixed several providers, performance improvements, new splash screen, bug fixing.
Microsoft SQL Server Product Samples: Database: OData QueryFeed workflow activity: The OData QueryFeed sample activity shows how to create a workflow activity that consumes an OData resource, and renders entity properties in a Microsoft Excel 2010 worksheet or Microsoft Word 2010 document. Using the sample QueryFeed activity, you can consume any OData resource. The sample activity uses LINQ to project OData metadata into activity designer expression items. By setting activity expressions, a fully qualified OData query string is constructed consisting of Resource, Filter, Or...
F# 3.0 Sample Pack: FSharp 3.0 Sample Pack for Visual Studio 2012 RTM: F# 3.0 Sample Pack for Visual Studio 2012 RTM
ANPR MX: ANPR_MX Release 1: ANPR MX Release 1 features: correctly detects the plate area for the average North American plate (it won't work for the "European" plate size); provides potential values for the recognized plate; allows images 800x600 and below (.jpg / .png). The example requires the VC 10 runtime and the .NET 4 Framework to be already installed. The source code project was made in Visual Studio 2010.
Cocktail: Cocktail v1.0.1: Prerequisites: Visual Studio 2010 with SP1 (any edition but Express). Optional: Silverlight 4 or 5 (note: install Silverlight 4 Tools and then the Silverlight 4 Toolkit; likewise for Silverlight 5 Tools and the Silverlight 5 Toolkit). DevForce Express 6.1.8.1 (included in the Cocktail download; DevForce Express requires registration). Important: install DevForce after all other components. Download contents: debug and release assemblies, API documentation, source code, License.txt. Re...
weber: weber v0.1: first release, creates a basic browser shell and allows the user to navigate to web sites.
New Projects
.NET Code Editor & Compiler Component: .NET compiler component with integrated advanced text box, Visual Studio-like highlighting, and the ability to intercept and show StandardOutput strings.
.NET Plugin Manager: Provides agnostic functionality for tiered plugin loading, unloading, and plugin collection management.
Amazon Control Panel v2: Amazon Control Panel is an application that lets you control your Amazon Seller Central account using the Amazon MWS (Merchant Web Service) API.
AutoSPSourceBuilder: A utility for automatically building a SharePoint 2010 or 2013 install source including service packs, language packs and cumulative updates.
CAOS: RBAC access control
Chat Forum: An Internet forum, or message board, is an online discussion site with conversations in the form of posted messages.
CRM 2011 - Many-To-Many Relationship Entity View: This Silverlight Web Resource for CRM 2011 will allow users to see N:N relationship entity data from a single place.
dardasim: dardasim gil and lior Tel Cabir Dolphins
DBAManage: (description garbled in the original)
DimDate Generator: An SSIS project for generating a date dimension table and data.
DNL: a big .NET library for developers (German in the original)
Douban FM for Metro: A music radio client for http://douban.fm running on Windows 8 / WinRT
Extended WPF Control: Extended WPF Control for research and learning.
FizzBuzzDaveC: Implements the classic FizzBuzz programming exercise.
HamStart: Nothing for now...
Infopath XSN Modifier: A tool for editing the data connections of InfoPath.
KH Picture Resizer: Picture Resizer makes it possible to shrink pictures via drag and drop; the program is written in C# and uses Windows Forms (German in the original).
Korean String Extension for .NET: Extension library for the "string" class that enhances "Hangul Jamo system" features (Korean portion garbled in the original).
Lucky Loot - Tattoo Shop Management Application: Lucky Loot - Tattoo Shop Management Application, by Eric Gabriel Rodrigues Castoldi; objective: an Information Systems degree final project (Portuguese in the original).
MagnOS - C# Cosmos Operating System: MagnOS is an open source operating system, made to learn how to make operating systems with Cosmos.
Móa mày: A new project (Vietnamese in the original).
OData Samples: A collection of samples demonstrating solutions and functionality in WCF Data Services, ODataLib and EdmLib.
Online Image Editor: Online Photo Canvas
Optimuss Administración: The best school administration and control application (Spanish in the original).
Optimuss Obelix: The best school administration and control application (Spanish in the original).
Personal Website: My personal website
Planisoft: A payroll project for the Los Fresnos clinic (Spanish in the original).
PROYECTODT: (no description)
PtLibrary: PtLibrary stands for Peter Thönell's Delphi library. PtSettings and PtSettingsGUI make the management and use of settings extremely easy and powerful.
RTS WebServer: A small, lightweight, modern and fast webserver (template), with, in the future, the newest technologies like SPDY and websockets.
Standards: Standards is an Intranet application (using Windows authentication) designed to document and manage company standards. It is written in C#/MVC 4.
Truttle OS: This is an OS I made with Cosmos.
Wfp System: zdgdsfgdsfg (placeholder text in the original)
zpo: a project for a ZPO course assignment (Polish in the original).

    Read the article

  • Caveats with the runAllManagedModulesForAllRequests in IIS 7/8

    - by Rick Strahl
    One of the nice enhancements in IIS 7 (and now 8) is the ability to intercept non-managed - ie. non ASP.NET served - requests from within ASP.NET managed modules. This opened up a ton of new functionality that could be applied across non-managed content using .NET code. I thought I had a pretty good handle on how IIS 7's Integrated mode pipeline works, but when I put together some samples last night I realized that the way that managed and unmanaged requests fire into the pipeline is downright confusing, especially when it comes to the runAllManagedModulesForAllRequests attribute. There are a number of settings that can affect whether a managed module receives non-ASP.NET content requests such as static files or requests from other frameworks like PHP or ASP classic, and this is the topic of this blog post. Native and Managed Modules The integrated mode IIS pipeline for IIS 7 and later - as the name suggests - allows for integration of ASP.NET pipeline events in the IIS request pipeline. Natively IIS runs unmanaged code and there are a host of native mode modules that handle the core behavior of IIS. If you set up a new IIS site or application without managed code support, only the native modules are supported and fired without any interaction between native and managed code. If you use the Integrated pipeline with managed code enabled, however, things get a little more confusing as both native modules and .NET managed modules can fire against the same IIS request. If you open up the IIS Modules dialog you see both managed and unmanaged modules. Unmanaged modules point at physical files on disk, while managed modules point at .NET types and files referenced from the GAC or the current project's BIN folder. Both native and managed modules can co-exist and execute side by side on the same request. When running in IIS 7 the IIS pipeline actually instantiates the ASP.NET runtime (via the System.Web.PipelineRuntime class) which, unlike the core HttpRuntime classes in ASP.NET, receives notification callbacks when IIS integrated mode events fire. The IIS pipeline is smart enough to detect whether managed handlers are attached, and if there are none these notifications don't fire, improving performance. The good news about all of this for .NET devs is that ASP.NET style modules can be used for just about every kind of IIS request. All you need to do is create a new Web Application and enable ASP.NET on it, and then attach managed handlers. Handlers can look at ASP.NET content (ie. ASPX pages, MVC, WebAPI etc. requests) as well as non-ASP.NET content including static content like HTML files, images, javascript and css resources etc. It's very cool that this capability has been surfaced. However, with that functionality comes a lot of responsibility. Because every request passes through the ASP.NET pipeline if managed modules (or handlers) are attached, there are possible performance implications that come with it. Running through the ASP.NET pipeline does add some overhead. 
    ASP.NET and Your Own Modules When you create a new ASP.NET project, typically the Visual Studio templates create the modules section like this: <system.webServer> <validation validateIntegratedModeConfiguration="false" /> <modules runAllManagedModulesForAllRequests="true" > </modules> </system.webServer> Specifically the interesting thing about this is the runAllManagedModulesForAllRequests="true" flag, which seems to suggest that it controls whether registered managed modules run for all requests. Realistically though this flag does not control whether managed code is fired for all requests or not. Rather it is an override for the preCondition flag on a particular module. With the flag set to the default true setting, you can assume that pretty much every IIS request you receive ends up firing through your ASP.NET module pipeline, and every module you have configured is accessed even by non-managed requests like static files. In other words, your module will have to handle all requests. Now so far so obvious. What's not quite so obvious is what happens when you set runAllManagedModulesForAllRequests="false". You probably would expect that immediately the non-ASP.NET requests no longer get funnelled through the ASP.NET Module pipeline. But that's not what actually happens. For example, if I create a module like this: <add name="SharewareModule" type="HowAspNetWorks.SharewareMessageModule" /> by default it will fire against ALL requests regardless of the runAllManagedModulesForAllRequests flag. Even if the value is runAllManagedModulesForAllRequests="false", the module is fired. Not quite expected. So what is runAllManagedModulesForAllRequests really good for? It's essentially an override for the managedHandler preCondition. If I declare my module in web.config like this: <add name="SharewareModule" type="HowAspNetWorks.SharewareMessageModule" preCondition="managedHandler" /> and runAllManagedModulesForAllRequests="false", my module only fires against managed requests. If I switch the flag to true, now my module ends up handling all IIS requests that are passed through from IIS. The moral of the story here is that if you intend to only look at ASP.NET content, you should always set the preCondition="managedHandler" attribute to ensure that only managed requests are fired on this module. But even if you do this, realize that runAllManagedModulesForAllRequests="true" can override this setting. runAllManagedModulesForAllRequests and Http Application Events Another place the runAllManagedModulesForAllRequests attribute affects is the global Http Application object (typically in global.asax) and the Application_XXXX events that you can hook up there. So while the events there are dynamically hooked up to the application class, they basically behave as if they were set with the preCondition="managedHandler" configuration switch. The end result is that if you have runAllManagedModulesForAllRequests="true" you'll see every Http request passed through the Application_XXXX events, and you only see ASP.NET requests with the flag set to "false". What does all that mean? Configuring an application to handle requests for both ASP.NET and other content can be tricky, especially if you need to mix modules that might require both. A couple of things are important to remember. If your module doesn't need to look at every request, by all means set a preCondition="managedHandler" on it. 
    This will at least allow it to respond to the runAllManagedModulesForAllRequests="false" flag and then only process ASP.NET requests. Look really carefully to see whether you actually need runAllManagedModulesForAllRequests="true" in your applications as set by the default new project templates in Visual Studio. Part of the reason this is the default is that it was required for the initial versions of IIS 7 and ASP.NET 2 in order to handle MVC extensionless URLs. However, if you are running IIS 7 or later and .NET 4.0 you can use the ExtensionlessUrlHandler instead to get MVC functionality without requiring runAllManagedModulesForAllRequests="true": <handlers> <remove name="ExtensionlessUrlHandler-Integrated-4.0" /> <add name="ExtensionlessUrlHandler-Integrated-4.0" path="*." verb="GET,HEAD,POST,DEBUG,PUT,DELETE,PATCH,OPTIONS" type="System.Web.Handlers.TransferRequestHandler" preCondition="integratedMode,runtimeVersionv4.0" /> </handlers> Oddly this is the default for Visual Studio 2012 MVC template apps, so I'm not sure why the default template still adds runAllManagedModulesForAllRequests="true" - it should be enabled only if there's a specific need to access non ASP.NET requests. As a side note, it's interesting that when you access a static HTML resource, you can actually write into the Response object and get the output to show, which is trippy. I haven't looked closely to see how this works - whether ASP.NET just fires directly into the native output stream or whether the static requests are re-routed directly through the ASP.NET pipeline once a managed code module is detected. This doesn't work for all non ASP.NET resources - for example, I can't do the same with ASP classic requests, but it makes for an interesting demo when injecting HTML content into a static HTML page :-) Note that on the original Windows Server 2008 and Vista (IIS 7.0) you might need a HotFix in order for ExtensionlessUrlHandler to work properly for MVC projects. On my live server I needed it (about 6 months ago), but others have observed that the latest service updates have integrated this functionality and the hotfix is not required. On IIS 7.5 and later I've not needed any patches for things to just work. Plan for non-ASP.NET Requests It's important to remember that if you write a .NET Module to run on IIS 7, there's no way for you to prevent non-ASP.NET requests from hitting your module. So make sure you plan to support requests to extensionless URLs and to static resources like files. Luckily ASP.NET creates a full Request and full Response object for you for non ASP.NET content. So even for static files, and even for ASP classic for example, you can look at Request.FilePath or Request.ContentType (in post handler pipeline events) to determine what content you are dealing with. 
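    To make that advice concrete, here is a minimal sketch (not from the original article - the module and header names are made up) of a managed module that hooks BeginRequest, filters on Request.FilePath, and exits immediately for content it does not care about:

        using System;
        using System.IO;
        using System.Web;

        // Hypothetical example module: adds a header to .aspx requests only and
        // exits early for everything else (static files, ASP classic, PHP, ...).
        public class SelectiveHeaderModule : IHttpModule
        {
            public void Init(HttpApplication application)
            {
                application.BeginRequest += OnBeginRequest;
            }

            private static void OnBeginRequest(object sender, EventArgs e)
            {
                var context = ((HttpApplication)sender).Context;

                // With runAllManagedModulesForAllRequests="true" (or without the
                // managedHandler preCondition) this fires for every IIS request,
                // so filter first and bail out as cheaply as possible.
                var extension = Path.GetExtension(context.Request.FilePath);
                if (!string.Equals(extension, ".aspx", StringComparison.OrdinalIgnoreCase))
                    return;

                context.Response.AppendHeader("X-Served-By", "SelectiveHeaderModule");
            }

            public void Dispose() { }
        }

    Registered like the SharewareModule example above (with or without preCondition="managedHandler"), this keeps the per-request cost for static files down to a single extension check.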
    As always with Module design make sure you check for the conditions in your code that make the module applicable and if a filter fails immediately exit - minimize the code that runs if your module doesn't need to process the request. © Rick Strahl, West Wind Technologies, 2005-2012. Posted in IIS7, ASP.NET

    Read the article

  • rsync problems and security concerns

    - by MB.
    Hi I am attempting to use rsync to copy files between two linux servers. both on 10.04.4 I have set up the ssh and a script running under a cron job. this is the message i get back from the cron job. To: mark@ubuntu Subject: Cron ~/rsync.sh Content-Type: text/plain; charset=ANSI_X3.4-1968 X-Cron-Env: X-Cron-Env: X-Cron-Env: X-Cron-Env: Message-Id: <20120708183802.E0D54FC2C0@ubuntu Date: Sun, 8 Jul 2012 14:38:01 -0400 (EDT) rsync: link_stat "/home/mark/#342#200#223rsh=ssh" failed: No such file or directory (2) rsync: opendir "/Library/WebServer/Documents/.cache" failed: Permission denied (13) rsync: recv_generator: mkdir "/Library/Library" failed: Permission denied (13) * Skipping any contents from this failed directory * rsync error: some files/attrs were not transferred (see previous errors) (code 23) at main.c(1060) [sender=3.0.7] Q.1 can anyone tell me why I get this message -- rsync: link_stat "/home/mark/#342#200#223rsh=ssh" failed: No such file or directory (2) the script is: #!/bin/bash SOURCEPATH='/Library' DESTPATH='/Library' DESTHOST='192.168.1.15' DESTUSER='mark' LOGFILE='rsync.log' echo $'\n\n' >> $LOGFILE rsync -av –rsh=ssh $SOURCEPATH $DESTUSER@$DESTHOST:$DESTPATH 2>&1 >> $LOGFILE echo “Completed at: `/bin/date`” >> $LOGFILE Q2. I know I have several problems with the permissions all of the files I am copying usually require me to use sudo to manipulate them. My question is then is there a way i can run this job without giving my user root access or using root in the login ?? Thanks for the help .
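    For what it is worth, the #342#200#223 bytes in the first error are the UTF-8 en dash, which suggests the script contains –rsh=ssh (a typographic dash, probably from copy/paste) instead of --rsh=ssh, so rsync treats it as a source path. A hedged correction of that one line, using the same variables as the script above, might look like this:

        # -e ssh is the short form of --rsh=ssh; quoting guards against spaces
        rsync -av -e ssh "$SOURCEPATH" "$DESTUSER@$DESTHOST:$DESTPATH" >> "$LOGFILE" 2>&1

    Note also that the original 2>&1 >> $LOGFILE sends stderr to wherever stdout pointed before the redirect (here, the cron mail), which is likely why the errors show up in the email rather than in rsync.log; putting >> "$LOGFILE" 2>&1 captures both in the log, assuming that is the intent.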

    Read the article

  • In Joomla In htaccess REQUEST_URI is always returning index.php instead of SEF URL

    - by Saumil
    I have installed joomsef version 3.9.9 with Joomla 1.5.25. Now I want to set https for some sections of my site (e.g. URIs starting with /events/) while keeping the rest of the urls on http. I am setting rules in the .htaccess file but not getting the output I expect. I am checking REQUEST_URI of the SEF urls but always getting index.php as the URI. Here is my htaccess code. ########## Begin - Custom redirects # # If you need to redirect some pages, or set a canonical non-www to # www redirect (or vice versa), place that code here. Ensure those # redirects use the correct RewriteRule syntax and the [R=301,L] flags. # ########## End - Custom redirects # Uncomment following line if your webserver's URL # is not directly related to physical file paths. # Update Your Joomla! Directory (just / for root) # RewriteBase / ########## Begin - Joomla! core SEF Section # RewriteRule .* - [E=HTTP_AUTHORIZATION:%{HTTP:Authorization}] # # If the requested path and file is not /index.php and the request # has not already been internally rewritten to the index.php script RewriteCond %{REQUEST_URI} !^/index\.php # and the request is for root, or for an extensionless URL, or the # requested URL ends with one of the listed extensions RewriteCond %{REQUEST_URI} (/[^.]*|\.(php|html?|feed|pdf|raw))$ [NC] # and the requested path and file doesn't directly match a physical file RewriteCond %{REQUEST_FILENAME} !-f # and the requested path and file doesn't directly match a physical folder RewriteCond %{REQUEST_FILENAME} !-d # internally rewrite the request to the index.php script RewriteRule .* index.php [L] # ########## End - Joomla! core SEF Section # Here is my code, e.g. the site url is http://mydomain.com/events RewriteCond %{REQUEST_URI} ^/(events)$ RewriteCond %{HTTPS} !ON RewriteRule (.*) https://%{REQUEST_HOST}%{REQUEST_URI}/$1 [L,R=301] I am not getting why REQUEST_URI is referring to index.php even though the url in my address bar is http://mydomain.com/events. I am using JOOMSEF (a Joomla extension for SEF URLs). If I remove the other rules from the htaccess file then Joomla stops working. I cannot find a way to handle this as I am not an expert. Please let me know if someone has been through the same situation and has a solution, or can suggest a workaround. Thanks
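    One thing worth checking, offered as a guess rather than a confirmed fix: in per-directory (.htaccess) context mod_rewrite re-runs the rule set after the Joomla SEF section has internally rewritten the request to index.php, so conditions placed after that section can end up seeing the rewritten URI. Also, REQUEST_HOST is not a standard mod_rewrite variable (HTTP_HOST is). A sketch that places the HTTPS redirect above the "Joomla! core SEF Section", assuming the site lives in the web root, might look like this:

        # force HTTPS for /events and anything below it, before any internal rewrite
        RewriteEngine On
        RewriteCond %{HTTPS} off
        RewriteCond %{REQUEST_URI} ^/events(/|$) [NC]
        RewriteRule ^ https://%{HTTP_HOST}%{REQUEST_URI} [R=301,L]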

    Read the article

  • Ruby workflow in Windows

    - by Rig
    I've done some searching and haven't quite come across the answer I am looking for. I do not think this is a duplicate of this question. I believe Windows could be a suitable development environment based on the mix of answers in that question. I have been developing in Ruby (mostly Rails but not entirely) for about a year now for personal projects on a MacBook Pro; however, that machine has faced an untimely death and has been replaced with a nice Windows 7 machine. Ruby development felt almost natural on the Mac after doing some research and setting up the typical stack. My environment then included the standard (Linux-like) stuff built into OSX, Text Wrangler, Git, RVM, et al. Not too much of a deviation from what the 'devotees' tend to assume. Now I am setting up my new Windows box for continuing that development. What would my development environment look like? Should I just cave and run Linux in a VM? Ideally I would develop natively in Windows. I am aware of the Windows Ruby installer. It seems decent but it's not exactly as nice as RVM in the osx/linux world. Mercurial/Git are available so I would assume they play into the stack. Does one develop entirely in Windows? Does one run a webserver in a Linux VM and use it as a test bed while developing in Windows? Do it all in a VM? What does the standard Windows Ruby developer environment look like and what is the workflow? What would a typical step-through be for adding a new feature to an ongoing project and what would the technology stack look like?

    Read the article

  • game multiplayer service development

    - by nomad
    I'm currently working on a multiplayer game. I've looked at a number of multiplayer services (player.io, playphone, gamespy, and others) but nothing really hits the mark. They are missing features, lack platform support or cost too much. What I'm looking for is a simple poor man's version of Steam or Xbox Live. Not the game marketplace side of those two but the multiplayer services: user accounts, profiles, presence info, friends, game stats, invites, on/offline messaging. Basically I'm looking for a unified multiplayer platform for all my games across devices. Since I can't find what I need, I plan to roll my own piece by piece. I plan to save on server resources by making most of the communication p2p. Things like game data and voice chat can be handled between peers, and the server keeps track of user presence and only sends updates when needed or requested. I know this runs the risk of cheating but that isn't a concern right now. I plan to run this on an Amazon EC2 micro server for development and then move to a small to large instance when finished. I figure user accounts would be the simplest to start with. Users can create accounts online or using an in-game dialog, log in/out, and change profile info. The user can access this info online or in game. I will need user authentication and secure communication between server and client. I figure all info will be stored in a database but I don't know how it can be stored securely and accessed from the webserver and game services. I would appreciate any links to tutorials, info or advice anyone could provide to get me started. Any programming language is fine but I plan to use C# on the server and C/C++ on devices. I would like to get started right away but I'm in no hurry to get it finished just yet. If you know of a service that already fits my requirements please let me know.

    Read the article

  • Debugging Node.js applications for Windows Azure

    - by cibrax
    In case you are developing a new web application with Node.js for Windows Azure, you might notice there is no easy way to debug the application unless you are developing in an integrated IDE like Cloud9. For those that develop applications locally using a text editor (or WebMatrix) and Windows Azure Powershell for Node.js, it requires some steps not documented anywhere for the moment. I spent a few hours on this the other day and practically got nowhere until I received some help from Tomek and the rest of them. The IISNode version that currently ships with the Windows Azure for Node.js SDK does not support debugging by default, so you need to install the IISNode full version available in the github repository. Once you have installed the full version, you need to enable debugging for the web application by modifying the web.config file: <iisnode debuggingEnabled="true" loggingEnabled="true" devErrorsEnabled="true" /> The xml above needs to be inserted within the existing "<system.webServer/>" section. The last step is to open a WebKit browser (e.g. Chrome) and navigate to the URL where your application is hosted, adding the segment "/debug" to the end. The full URL to the node.js application must be used, for example, http://localhost:81/myserver.js/debug. That should open a new instance of Node inspector in the browser, so you can debug the application from there. Enjoy!!
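    As a rough sketch of how the relevant part of web.config ends up looking once debugging is enabled - the handler mapping shown is the standard iisnode registration, and myserver.js here simply matches the file name used in the URL above, so adjust it to your own entry point:

        <system.webServer>
          <handlers>
            <!-- route requests for the node entry point through iisnode -->
            <add name="iisnode" path="myserver.js" verb="*" modules="iisnode" />
          </handlers>
          <iisnode debuggingEnabled="true" loggingEnabled="true" devErrorsEnabled="true" />
        </system.webServer>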

    Read the article

  • How to spread XML Sitemaps over several webservers behind AWS loadbalancer?

    - by Jurik
    We have a web portal with almost a million products and many more other urls. I wrote a script that checks the database. If a new url is needed or an old one has been updated, this script will update/create the XML Sitemaps. But we have several servers behind the load balancer at our rented AWS space. Further, this script checks the database for each url to see if there was an update, so that it updates the appropriate xml file too. My question is how to spread those XML Sitemaps over all webservers behind this AWS load balancer? Our approaches/ideas: we could just generate them on one server with a cron job and copy them to the other servers, but this could be difficult because of the automatically rising number of servers and so on. We could put them on our S3, but it is not available through our domain, so I guess Google will have a problem with it. I could let my script run on every webserver but change it so that it generates all xml files each time if they do not exist. But then I would have conflicts with updated URLs in my database, where I saved the timestamp of the last change for every url. Is there another - better - solution that I do not know? Are there any special services by Amazon for such cases?
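    A minimal sketch of the first approach (all host names, paths and the generator script are placeholders, and the peer list would have to be discovered dynamically if instances are added automatically):

        #!/bin/bash
        # run from cron on one designated "primary" instance
        SITEMAP_DIR=/var/www/sitemaps
        PEERS="10.0.1.11 10.0.1.12"          # hypothetical peer instances

        php /var/www/scripts/generate_sitemaps.php   # the existing generator script

        for host in $PEERS; do
            rsync -az --delete "$SITEMAP_DIR/" "deploy@$host:$SITEMAP_DIR/"
        done

    As for the S3 idea: sitemaps hosted on a different host are generally accepted by search engines when they are referenced from a Sitemap: line in the site's own robots.txt, so that option may not be ruled out.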

    Read the article

  • Can't access any functions after chown command.

    - by explorex
    I am not able to access any functions on my desktop, I don't have an OS besides Ubuntu, and I am new to Ubuntu. I rebooted my computer thinking that Google Chrome had crashed: I opened Google Chrome but it showed the opening message and never opened, so I restarted my computer. While my system was loading I was playing with the keyboard (I don't know what I typed), and when Ubuntu loaded I was unable to access anything. Some of the symptoms are listed below: I cannot hear any sound. I cannot access the wired ethernet connection in the right corner where I usually enable internet access, and I have no internet. There is no local apache server either; whenever I try to start apache I get "setuid must be root" or something. When I type sudo I get the message "setuid must be root". I cannot access other external storage devices like a pendrive or a portable hard drive, and cannot mount my other drives with a FAT32 filesystem. When I try to start my apache webserver without typing sudo I get the message "cannot open socket" or something like it. I also remember doing the command chown -R www-data / earlier and getting an error message. I cannot shut down my computer, it only logs off.
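    The "setuid must be root" symptoms are consistent with that recursive chown having changed ownership of system binaries: sudo, for example, only works when it is owned by root and carries the setuid bit. A very limited sketch of the kind of repair usually attempted from a recovery-mode root shell is shown below (paths assumed for a standard Ubuntu install); after a chown -R www-data / so many files are mis-owned that reinstalling is usually the more practical fix.

        # from a recovery-mode root shell (or a live CD with the installed system mounted)
        chown root:root /usr/bin/sudo
        chmod 4755 /usr/bin/sudo    # restore the setuid-root bit that sudo requires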

    Read the article

  • What makes a person contribute to opensource? [duplicate]

    - by Jibin
    This question already has an answer here: Why develop free, open source programs? [closed] 14 answers I know this is controversial, but.. There are many great projects like the Apache Webserver or Hadoop provided by the OpenSource Community. I often feel that the people that actually benefit (financially) from these projects are developers like me, sitting in India, working for MNCs, who have never contributed anything to any opensource project so far, but earn handsomely due to basic googling skills and the community-provided documentation. Is it fair? I mean, no other industry in the world faces such a dilemma. Those who work get paid. I mean I am almost starting to feel guilty of taking such advantage of something that I contribute nothing to. I had to do projects every semester in college (we could choose projects) and I used to enjoy coding then. I want to contribute. But contributing to opensource is not a task [unlike college or office work]. And life is so busy.. Are all these opensource contributors really jobless? [just kidding..] Could someone please share some personal experiences on how you guys started contributing, or any advice on why I should contribute, or what attitude in life makes you set aside time, or is it that you just crazily love writing code, or is it that you just love to see your name in the contributors list, or do you have a local coding group you hang out with? Do you feel I am destined to do this? Is this my part of contributing back to the world? What's that basic mentality that makes you guys want to contribute, while I just want to finish my work and go home. What makes you guys tick?

    Read the article

  • Is /home encryption useful on a server?

    - by Dennis
    I've got a question about the use of encryption: I set up an Ubuntu 12.04 server to use it as a router, file server for backups and webserver. Of course, it is probably not the best idea to put backups on the same system as a web server, but it is only for private usage and I don't want to spend too much money. So I thought it is not a bad idea to set up /home-encryption for the backup user account with which I do my backups. But in the same moment, another question arises: does it still make sense? Via SSH, root login is disabled. And access to the /home-folder of that user is restricted to the user itself. So the only scenario to access the /home-folder is to connect a keyboard/display to the server, log in as root and change to /home. Or have I overlooked a scenario? In case I am right, you can only access the /home-folder from "outside" as the backup user. But then, encryption also doesn't make sense anymore. Am I right about those thoughts? Or do you still see a way to access the /home-folder of the backup user so that encryption still makes sense? Thanks for your help in advance!

    Read the article

  • What is recommended minimum object size for gzip benefits?

    - by utt73
    I'm working on improving page speed display times, and one of the methods is to gzip content from the webserver. Google recommends: Note that gzipping is only beneficial for larger resources. Due to the overhead and latency of compression and decompression, you should only gzip files above a certain size threshold; we recommend a minimum range between 150 and 1000 bytes. Gzipping files below 150 bytes can actually make them larger. We serve our content through Akamai, using their network for a proxy and CDN. What they've told me: Following up on your question regarding what is the minimum size Akamai will compress the requested object when sending it to the end user: The minimum size is 860 bytes. My reply: What is the reason(s) for why Akamai's minimum size is 860 bytes? And why, for example, is this not the case for files Akamai serves for facebook? (see below) Google recommends to gzip more agressively. And that seems appropriate on our site where the most frequent hits, by far, are AJAX calls that are <860 bytes. Akamai's response: The reasons 860 bytes is the minimum size for compression is twofold: (1) The overhead of compressing an object under 860 bytes outweighs performance gain. (2) Objects under 860 bytes can be transmitted via a single packet anyway, so there isn't a compelling reason to compress them. So I'm here for some fact checking. Is the 860 byte limit due to packet size the end of this reasoning? Why would high traffic sites push this lower/closer to the 150 byte limit... just to save on bandwidth costs, or is there a performance gain in doing so?

    Read the article

  • Migration of PowerBuilder application to multiplatform

    - by Alex Bibiano
    I developed a client/server application with PowerBuilder in the past for medical clinics and have done maintenance for it until now. Now, some clients are asking me to develop a release for Mac/Linux and I need some advice about what programming language/technology is best suited for it and the learning curve. It's not a very, very big program, but I'm the only developer and have done it in my spare time. PowerBuilder is very productive for this kind of project (database centric), but it's not multiplatform and it's hard to sell PowerBuilder applications nowadays (web, .NET and Java sell a lot better with their marketing). My programming skills: - I studied C and C++ in the past (university) but never used them on real projects - Have some Java experience but not in desktop applications - Some experience with Ruby on Rails for web projects - Good skills with PowerBuilder and C# (.NET) (these are my main development languages) My first dilemma is whether to change the desktop application to a web interface, but I think the user will lose some user experience, and some doctors don't have a clinic (they are alone at home with my software). I think installing a web application (with a webserver) for one user would be overwhelming. If I continue developing a desktop application, what is at the moment a good framework/toolset to learn given my skills? Has somebody had similar experiences? A lot of thanks

    Read the article

  • Differences when Running with OutputCache managed module under ASP.NET IIS7.x with Cache-control header

    - by Shawn Cicoria
    This post is to report some differences when using MVC or IHttpHandlers if you’re attempting to set the Cache-control : max-age or s-maxage value under IIS7.x using the HttpResponse.Cache methods. [UPDATE]: 2011-3-14 – The missing piece was calling  Response.Cache.SetSlidingExpiration(true) as follows: context.Response.Cache.SetCacheability(HttpCacheability.Public); context.Response.Cache.SetMaxAge(TimeSpan.FromMinutes(5)); context.Response.ContentType = "image/jpeg"; context.Response.Cache.SetSlidingExpiration(true);   Under IIS7.x if you us one of the following 2 methods, you will only get a Cache-ability of “public”.  public ActionResult Image2() { MemoryStream oStream = new MemoryStream(); using (Bitmap obmp = ImageUtil.RenderImage("Respone.Cache.Setxx calls", 5, DateTime.Now)) { obmp.Save(oStream, ImageFormat.Jpeg); oStream.Position = 0; Response.Cache.SetCacheability(HttpCacheability.Public); Response.Cache.SetMaxAge(TimeSpan.FromMinutes(5)); return new FileStreamResult(oStream, "image/jpeg"); } } Method 2 – which is just a plain old HttpHandler and really isn’t MVC3, but under the same MVC ASP.NET application, same result. public class image : IHttpHandler { public void ProcessRequest(HttpContext context) { using (var image = ImageUtil.RenderImage("called from IHttpHandler direct", 5, DateTime.Now)) { context.Response.Cache.SetCacheability(HttpCacheability.Public); context.Response.Cache.SetMaxAge(TimeSpan.FromMinutes(5)); context.Response.ContentType = "image/jpeg"; image.Save(context.Response.OutputStream, ImageFormat.Jpeg); } } } Using the following under MVC3 (I haven’t tried under earlier versions) will work by applying the OutputCacheAttribute to your Action: [OutputCache(Location = OutputCacheLocation.Any, Duration = 300)] public ActionResult Image1() { MemoryStream oStream = new MemoryStream(); using (Bitmap obmp = ImageUtil.RenderImage("called with OutputCacheAttribute", 5, DateTime.Now)) { obmp.Save(oStream, ImageFormat.Jpeg); oStream.Position = 0; return new FileStreamResult(oStream, "image/jpeg"); } } To remove the “OutputCache” module, you use the following in your web.config: <system.webServer> <validation validateIntegratedModeConfiguration="false"/> <modules runAllManagedModulesForAllRequests="true"> <!--<remove name="OutputCache"/>--> </modules>
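    Putting the update and the handler together, a sketch of the IHttpHandler variant with the SetSlidingExpiration(true) call that the update identifies as the missing piece (ImageUtil is the article's own helper class):

        using System;
        using System.Drawing.Imaging;
        using System.Web;

        public class CachedImageHandler : IHttpHandler
        {
            public void ProcessRequest(HttpContext context)
            {
                using (var image = ImageUtil.RenderImage("IHttpHandler + SetSlidingExpiration", 5, DateTime.Now))
                {
                    context.Response.Cache.SetCacheability(HttpCacheability.Public);
                    context.Response.Cache.SetMaxAge(TimeSpan.FromMinutes(5));
                    context.Response.Cache.SetSlidingExpiration(true); // without this, only "public" was emitted
                    context.Response.ContentType = "image/jpeg";
                    image.Save(context.Response.OutputStream, ImageFormat.Jpeg);
                }
            }

            public bool IsReusable { get { return false; } }
        }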

    Read the article

  • Writing Web "server less" applications

    - by crodjer
    TL;DR What are the prospects of writing applications which are completely based on a REST database server (CouchDB) and web applications which directly access the DB instead of having a web server in between? I recently started looking at some NoSQL databases. MongoDB seems to be a popular choice, and I also liked the project. But I personally liked the REST interface of CouchDB. So what I wanted to know is if there is the possibility of applications (maybe cached apps in a web browser, a Chrome extension, etc.) which could just query the database directly with no requirement of a webserver in between. All the computational logic would reside in the client application and the database would do what it does: CRUD. Since most client frameworks support REST queries (I don't know which don't), it could be a good way of writing applications well optimized for the respective framework. These applications won't be doing complicated computation, though, but would still provide enough functionality to replace lots of conventional applications. Are there existing resources and projects which would help me move towards writing such applications, and what is the scope of developing this way? Are there any technical/security issues with this? This post will help me decide whether to look into projects like CouchDB (and maybe Dive into Erlang later) or stay with the conventional frameworks (like Django) and SQL databases. Update A specific point of such apps I had in mind is the creation of offline applications just by replicating CouchDB data on the client.
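    As an illustration of the idea (the URL, database and document names below are made up, and CORS has to be enabled on CouchDB for a page served from another origin to do this), a browser client can talk to CouchDB's REST API directly:

        // minimal sketch of a browser client hitting CouchDB's HTTP API directly
        const base = "http://couch.example.com:5984/notes";

        async function loadNote(id: string): Promise<any> {
          const res = await fetch(`${base}/${encodeURIComponent(id)}`);   // GET /db/docid
          if (!res.ok) throw new Error(`CouchDB returned ${res.status}`);
          return res.json();
        }

        async function saveNote(id: string, doc: object): Promise<void> {
          // PUT /db/docid creates or updates; an update must carry the current _rev in the body
          await fetch(`${base}/${encodeURIComponent(id)}`, {
            method: "PUT",
            headers: { "Content-Type": "application/json" },
            body: JSON.stringify(doc),
          });
        }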

    Read the article

< Previous Page | 64 65 66 67 68 69 70 71 72 73 74 75  | Next Page >