Search Results

Search found 83184 results on 3328 pages for 'asp php net'.

  • Setting up Apache 2.2 + FastCGI + SuExec + PHP-FPM on Centos 6

    - by mr1031011
    I'm trying to follow this very detailed instruction here; I simply changed from the www-data user to the apache user, and am using /var/www/hosts/sitename/public_html instead of /home/user/public_html. However, I spent the whole day trying to figure out why the PHP file content is displayed without being parsed. I just can't seem to figure this out. Below is my current config:

    /etc/httpd/conf.d/fastcgi.conf:

        User apache
        Group apache
        LoadModule fastcgi_module modules/mod_fastcgi.so
        # dir for IPC socket files
        FastCgiIpcDir /var/run/mod_fastcgi
        # wrap all fastcgi script calls in suexec
        FastCgiWrapper On
        # global FastCgiConfig can be overridden by FastCgiServer options in vhost config
        FastCgiConfig -idle-timeout 20 -maxClassProcesses 1
        # sample PHP config
        # see /usr/share/doc/mod_fastcgi-2.4.6 for php-wrapper script
        # don't forget to disable mod_php in /etc/httpd/conf.d/php.conf!
        #
        # to enable privilege separation, add a "SuexecUserGroup" directive
        # and chown the php-wrapper script and parent directory accordingly
        # see also http://www.brandonturner.net/blog/2009/07/fastcgi_with_php_opcode_cache/
        # FastCgiServer /var/www/www-data/php5-fcgi
        #AddType application/x-httpd-php .php
        AddHandler php-fcgi .php
        Action php-fcgi /fcgi-bin/php5-fcgi
        Alias /fcgi-bin/ /var/www/www-data/
        #FastCgiExternalServer /usr/lib/cgi-bin/php5-fcgi -socket /tmp/php5-fpm.sock -pass-header Authorization
        #DirectoryIndex index.php
        # <Location /fcgi-bin/>
        # Order Deny,Allow
        # Deny from All
        # Allow from env=REDIRECT_STATUS
        SetHandler fcgid-script
        Options +ExecCGI
        </Location>

    /etc/httpd/conf.d/vhost.conf:

        <VirtualHost>
        DirectoryIndex index.php index.html index.shtml index.cgi
        SuexecUserGroup www.mysite.com mygroup
        Alias /fcgi-bin/ /var/www/www-data/www.mysite.com/
        DocumentRoot /var/www/hosts/mysite.com/w/w/w/www/
        <Directory /var/www/hosts/mysite.com/w/w/w/www/>
        Options -Indexes FollowSymLinks
        AllowOverride None
        Order allow,deny
        allow from all
        </Directory>
        </VirtualHost>

    PS:
    1. Also, with PHP 5.5, do I even need FPM or is it already included?
    2. I'm using mod_fastcgi; I'm not sure if this is the problem and if I should switch to mod_fcgid? There seem to be conflicting reports on the internet concerning which one is better. I have many virtual hosts running on the machine and hope to be able to provide each user with their own opcache.
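
    A hedged sketch of the piece that usually ties this setup together: the config above defines the php-fcgi handler and the /fcgi-bin/ alias, but never declares an external FastCGI server pointing at the PHP-FPM socket, and the commented-out Location block leaves a stray </Location> plus mod_fcgid's handler name. Assuming PHP-FPM listens on /tmp/php5-fpm.sock and the paths mirror the Alias in the vhost, the missing wiring would look something like:

        # declare the aliased script as an external FastCGI server backed by PHP-FPM
        FastCgiExternalServer /var/www/www-data/www.mysite.com/php5-fcgi -socket /tmp/php5-fpm.sock -pass-header Authorization -idle-timeout 30

        <Location /fcgi-bin/>
            # fastcgi-script is mod_fastcgi's handler; fcgid-script belongs to mod_fcgid
            SetHandler fastcgi-script
            Options +ExecCGI
        </Location>

    The socket path and the php5-fcgi name here are assumptions taken from the commented lines in the question's own config.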

  • PHP files are downloaded, not executed in UserDir on Apache

    - by Fabian
    We're running a webserver using Debian 6.0.3 with Apache 2; we recently upgraded from Debian 5 to 6. Since then, PHP scripts in the user directories (using mod_userdir) have stopped working — they are downloaded instead of being executed. There is also a website using PHP outside of the user directories, and that one continues to work fine, so PHP seems to generally work on the server. I tested it with several PHP files, among them a simple phpinfo file that works fine on the main site but is just downloaded when copied to one of the user directories. The PHP files and the directory containing them are executable for everyone. The option in the Apache php5.conf that by default disables PHP in the user directories is commented out, so the php5.conf looks like this:

        <IfModule mod_php5.c>
            <FilesMatch "\.ph(p3?|tml)$">
                SetHandler application/x-httpd-php
            </FilesMatch>
            <FilesMatch "\.phps$">
                SetHandler application/x-httpd-php-source
            </FilesMatch>
            # To re-enable php in user directories comment the following lines
            # (from <IfModule ...> to </IfModule>.) Do NOT set it to On as it
            # prevents .htaccess files from disabling it.
            #<IfModule mod_userdir.c>
            #    <Directory /home/*/public_html>
            #        php_admin_value engine Off
            #    </Directory>
            #</IfModule>
        </IfModule>

    We restarted Apache after changing this. I'm running out of ideas now as to what the problem could be, and I don't know how I could determine which problem is preventing those PHP files from being executed. Any ideas on how I can solve this? Update: Strangely, PHP seems to work fine in subfolders of user directories, so if I copy a PHP file from /home/user/public_html/ to /home/user/public_html/test/ it suddenly works.
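
    The subfolder behavior suggests some other directive still matches /home/*/public_html exactly. A hedged way to hunt for it from the shell (file locations are Debian defaults; adjust as needed):

        # list every place the PHP engine is switched off or public_html is configured
        grep -rn "engine Off" /etc/apache2/
        grep -rn "public_html" /etc/apache2/

        # confirm mod_php is actually loaded and which php5.conf Apache reads
        apache2ctl -t -D DUMP_MODULES | grep -i php
        ls -l /etc/apache2/mods-enabled/php5.conf

        # apply any changes
        /etc/init.d/apache2 restart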

  • Searching major search engines with text such as <%#

    - by Daniel Dyson
    If I type '<%# vs <%"' into any of the major search engines, everything is stripped out except the 'vs'. I understand why they do this. I would just like to know if anyone knows of a way to escape illegal characters so that they are searched properly. I know this is not strictly a programming question, but it is relevant.

  • Converting John Resig's JavaScript Templating Engine to work with PHP Templates

    - by Serhiy
    I'm trying to convert John Resig's new JavaScript templating engine to work with PHP. Essentially, what I would like to achieve is the ability to use certain Kohana views via a JavaScript templating engine; that way I can use the same views for both a standard PHP request and a jQuery AJAX request. I'm starting with the basics and would like to be able to convert http://github.com/nje/jquery-tmpl/blob/master/jquery.tmpl.js to work with PHP like so:

        From this:
        <li><a href="{%= link %}">{%= title %}</a> - {%= description %}</li>

        Into this:
        <li><a href="<?= $link ?>"><?= $title ?></a> - <?= $description ?></li>

    The RegEx in it is a bit over my head, and it's apparently not as easy as changing the %} to ?> in lines 148 to 158. Any help would be highly appreciated. I'm also not sure how to take care of the $ prefix that PHP variables have. Thanks, Serhiy
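
    A hedged sketch of the tag conversion itself, using the template from the example above. This only handles simple {%= name %} output tags, not the looping and branching constructs in jquery.tmpl.js, and the function name is made up for illustration:

        <?php
        // Convert "{%= name %}" style tags into "<?= $name ?>" PHP echo tags.
        function convert_template($tmpl)
        {
            return preg_replace_callback(
                '/\{%=\s*(\w+)\s*%\}/',                              // capture the bare identifier
                function ($m) { return '<?= $' . $m[1] . ' ?>'; },   // prepend PHP's "$" sigil
                $tmpl
            );
        }

        echo convert_template('<li><a href="{%= link %}">{%= title %}</a> - {%= description %}</li>');
        // Output: <li><a href="<?= $link ?>"><?= $title ?></a> - <?= $description ?></li>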

  • PHP SOAP Transfering Files

    - by blackmage
    Hey, I am new to SOAP and I am trying to learn how to transfer files (.zip files) between a client and server using PHP and SOAP. Currently I have a setup that looks something like this:

        require('libraries/nusoap/nusoap.php');
        $server = new nusoap_server;
        $server->configureWSDL('server', 'urn:server');
        $server->wsdl->schemaTargetNamespace = 'urn:server';
        $server->register('sendFile',
            array('value' => 'xsd:string'),
            array('return' => 'xsd:string'),
            'urn:server', 'urn:server#sendFile');

    But I am unsure what the return type should be, if not a string. I am thinking of using base64_encode, but I am not sure how to. Any advice would be greatly appreciated.
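
    A hedged sketch of the base64 approach: xsd:string can stay the return type because the binary is encoded into text. The function body, URL, and file names are made up for illustration; nusoap_client is part of the same NuSOAP library as nusoap_server:

        <?php
        require('libraries/nusoap/nusoap.php');

        // Server side: read the zip, base64-encode it, return it as a string.
        function sendFile($value)
        {
            $bytes = file_get_contents('/path/to/files/' . basename($value) . '.zip');
            return base64_encode($bytes);
        }

        // Client side: call the service and decode the payload back into a zip.
        $client  = new nusoap_client('http://example.com/server.php?wsdl', true);
        $encoded = $client->call('sendFile', array('value' => 'archive'));
        file_put_contents('archive.zip', base64_decode($encoded));

    Note that base64 inflates the payload by roughly a third, so for large archives a plain HTTP download URL returned by the SOAP call is often the more practical design.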

  • Updating asp:SqlDataSource Parameter via asp:LinkButton

    - by Mattec
    I'll try to explain what I'm doing the best I can, but I'm pretty new to ASP.NET, so be patient with me. I have a SqlDataSource which returns a simple SELECT statement based on a WHERE clause using @COURSE_ID. What I want to do is: every time any one of 2 asp:LinkButtons (this will change, as it's going to be generated) is pressed, it will change the @COURSE_ID value, which I'd like to associate with the specific button.

    Buttons:

        <asp:LinkButton ID="LinkButton2" runat="server" onclick="MenuUpdate_Click">Course1</asp:LinkButton>
        <asp:LinkButton ID="LinkButton1" runat="server" onclick="MenuUpdate_Click">Course2</asp:LinkButton>

        <asp:SqlDataSource ID="SqlDataSource1" runat="server"
            ConnectionString="<%$ ConnectionStrings:connString %>"
            SelectCommand="SELECT Chapter.chapterName, Chapter.chapterID
                           FROM Chapter WHERE Chapter.courseID = @COURSE_ID "

    C#:

        protected void MenuUpdate_Click(object sender, EventArgs e)
        {
            Parameter p = SqlDataSource1.SelectParameters["COURSE_ID"];
            SqlDataSource1.SelectParameters.Remove(p);
            SqlDataSource1.SelectParameters.Add("COURSE_ID", THIS NEEDS TO BE VALUE ASSOCIATED TO BUTTON);
            ListView1.DataBind();
            UpdatePanel1.Update();
        }

    If anyone has any suggestions, that'd be great; I've been trying lots of different things all night with no success :( Thanks
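
    One hedged way to associate a value with each button: carry the course ID in the button's CommandArgument and read it off the sender, updating the existing parameter instead of removing and re-adding it. The control IDs match the question; the course IDs 1 and 2 are placeholders:

        <asp:LinkButton ID="LinkButton2" runat="server" CommandArgument="1"
            onclick="MenuUpdate_Click">Course1</asp:LinkButton>
        <asp:LinkButton ID="LinkButton1" runat="server" CommandArgument="2"
            onclick="MenuUpdate_Click">Course2</asp:LinkButton>

        protected void MenuUpdate_Click(object sender, EventArgs e)
        {
            // The clicked button carries its own course ID.
            string courseId = ((LinkButton)sender).CommandArgument;
            SqlDataSource1.SelectParameters["COURSE_ID"].DefaultValue = courseId;
            ListView1.DataBind();
            UpdatePanel1.Update();
        }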

  • ADO.NET Batch Insert with over 2000 parameters

    - by Liming
    Hello all, I'm using Enterprise Library, but the idea is the same. I have a SqlStringCommand, and the SQL is constructed using StringBuilder in the form of "insert into table (column1, column2, column3) values (@param1-X, @param2-X, @parm3-X)", where "X" represents a for loop over about 700 rows:

        StringBuilder sb = new StringBuilder();
        for (int i = 0; i < 700; i++)
        {
            sb.Append("insert into table (column1, column2, column3) values (@param1-" + i + ", @param2-" + i + ", @parm3-" + i + ") ");
        }

    followed by constructing a command object and injecting all the parameters with values into it. Essentially, 700 rows with 3 parameters each — I ended up with 2100 parameters for this one SQL statement. It ran fine for about a few days and suddenly I got this error:

        A severe error occurred on the current command. The results, if any, should be discarded.
        at System.Data.SqlClient.SqlConnection.OnError(SqlException exception, Boolean breakConnection)
        at System.Data.SqlClient.SqlInternalConnection.OnError(SqlException exception, Boolean breakConnection)
        at System.Data.SqlClient.TdsParser.ThrowExceptionAndWarning(TdsParserStateObject stateObj)
        at System.Data.SqlClient.TdsParser.Run(RunBehavior runBehavior, SqlCommand cmdHandler, SqlDataReader dataStream, BulkCopySimpleResultSet bulkCopyHandler, TdsParserStateObject stateObj)
        at System.Data.SqlClient.SqlCommand.FinishExecuteReader(SqlDataReader ds, RunBehavior runBehavior, String resetOptionsString)
        at System.Data.SqlClient.SqlCommand.RunExecuteReaderTds(CommandBehavior cmdBehavior, RunBehavior runBehavior, Boolean returnStream, Boolean async)
        at System.Data.SqlClient.SqlCommand.RunExecuteReader(CommandBehavior cmdBehavior, RunBehavior runBehavior, Boolean returnStream, String method, DbAsyncResult result)
        at System.Data.SqlClient.SqlCommand.InternalExecuteNon

    Any pointers are greatly appreciated.
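
    Worth noting: 2100 sits exactly at SQL Server's documented limit of 2100 parameters per request, so this batch lives right on the boundary. A hedged way out is SqlBulkCopy, which sidesteps parameters entirely; the table and column names below are taken from the question's placeholder SQL, and connectionString is assumed:

        using System.Data;
        using System.Data.SqlClient;

        // Build the 700 rows in memory, then stream them to the server in one shot.
        var table = new DataTable();
        table.Columns.Add("column1", typeof(string));
        table.Columns.Add("column2", typeof(string));
        table.Columns.Add("column3", typeof(string));
        for (int i = 0; i < 700; i++)
            table.Rows.Add("v1-" + i, "v2-" + i, "v3-" + i);

        using (var bulk = new SqlBulkCopy(connectionString))
        {
            bulk.DestinationTableName = "table";
            bulk.WriteToServer(table);
        }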

  • Calling a generic function in VB.NET / C#

    - by Quandary
    Question: I want to call a generic function, defined as:

        Public Shared Function DeserializeFromXML(Of T)(Optional ByRef strFileNameAndPath As String = Nothing) As T

    Now when I call it, I wanted to do it with any of the variants below:

        Dim x As New XMLserialization.cConfiguration
        x = XMLserialization.XMLserializeLDAPconfig.DeserializeFromXML(Of x)()
        x = XMLserialization.XMLserializeLDAPconfig.DeserializeFromXML(GetType(x))()
        x = XMLserialization.XMLserializeLDAPconfig.DeserializeFromXML(Of GetType(x))()

    But it doesn't work. I find it very annoying and unreadable having to type

        x = XMLserialization.XMLserializeLDAPconfig.DeserializeFromXML(Of XMLserialization.cConfiguration)()

    Is there a way to call a generic function by getting the type from the instance?
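
    Generic type arguments are compile-time constructs, so (Of x) with a variable can't work. A hedged reflection sketch for closing the generic at runtime — the class and method names are taken from the question:

        Dim x As New XMLserialization.cConfiguration
        Dim openMethod = GetType(XMLserialization.XMLserializeLDAPconfig).GetMethod("DeserializeFromXML")
        Dim closedMethod = openMethod.MakeGenericMethod(x.GetType())
        ' Pass Nothing for the optional ByRef argument.
        x = CType(closedMethod.Invoke(Nothing, New Object() {Nothing}), XMLserialization.cConfiguration)

    In practice the type is still known at compile time here, so the (Of XMLserialization.cConfiguration) call — perhaps shortened with an Imports alias — is usually the cleaner fix.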

  • Using PHP script to fill in XML

    - by Yaaqov
    Hi - we are all familiar with "embedding" PHP scripts in HTML pages to do tasks like displaying form results, but how can that be done in a way that displays XML? For example, if I wrote:

        <? xml version='1.0' ?>
        <Responses>
        <?php
            $body = $_GET['Body'];
            $fromPh = $_GET['From'];
            echo echo "<Msg>Your number is: $fromPh, and you typed: $body.</Msg>"
        ?>
        </Response>

    I get a message that states "parse error, unexpected T_STRING on line 1". Any help or tutorials would be appreciated.
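
    The line-1 parse error comes from the XML declaration: with short_open_tag enabled, PHP treats "<? xml" as the start of a PHP block. A hedged corrected sketch — it also fixes the doubled echo and the <Responses>/</Response> mismatch in the snippet above:

        <?php
        header('Content-Type: text/xml');
        // Emit the declaration from PHP so it can't be mistaken for a PHP open tag.
        echo "<?xml version='1.0'?>\n";

        $body   = $_GET['Body'];
        $fromPh = $_GET['From'];
        ?>
        <Responses>
            <Msg>Your number is: <?php echo htmlspecialchars($fromPh); ?>, and you typed: <?php echo htmlspecialchars($body); ?>.</Msg>
        </Responses>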

  • Unit Testing (xUnit) an ASP.NET Mvc Controller with a custom input model?

    - by Danny Douglass
    I'm having a hard time finding information on what I expect to be a pretty straightforward scenario. I'm trying to unit test an action on my ASP.NET MVC 2 controller that utilizes a custom input model with DataAnnotations. My testing framework is xUnit, as mentioned in the title. Here is my custom input model:

        public class EnterPasswordInputModel
        {
            [Required(ErrorMessage = "")]
            public string Username { get; set; }

            [Required(ErrorMessage = "Password is a required field.")]
            public string Password { get; set; }
        }

    And here is my controller (took out some logic to simplify for this example):

        [HttpPost]
        public ActionResult EnterPassword(EnterPasswordInputModel enterPasswordInput)
        {
            if (!ModelState.IsValid)
                return View();

            // do some logic to validate input
            // if valid - next View on successful validation
            return View("NextViewName");
            // else - add and display error on current view
            return View();
        }

    And here is my xUnit fact (also simplified):

        [Fact]
        public void EnterPassword_WithValidInput_ReturnsNextView()
        {
            // Arrange
            var controller = CreateLoginController(userService.Object);

            // Act
            var result = controller.EnterPassword(
                new EnterPasswordInputModel
                {
                    Username = username,
                    Password = password
                }) as ViewResult;

            // Assert
            Assert.Equal("NextViewName", result.ViewName);
        }

    When I run my test, I get the following error on my test fact when trying to retrieve the controller result (Act section):

        System.NullReferenceException: Object reference not set to an instance of an object.

    Thanks in advance for any help you can offer!
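
    A hedged debugging sketch: calling an action method directly skips model binding, so the DataAnnotations never run and ModelState stays valid; the NullReferenceException therefore usually comes from something inside the action (for example, a dependency the mocked service doesn't satisfy) or from "as ViewResult" yielding null. Asserting the cast first gives a clear failure, and Validator can simulate binding-time validation. CreateLoginController and the mock are from the question; System.ComponentModel.DataAnnotations and System.Linq are assumed to be imported:

        // Simulate what the model binder would have done with the DataAnnotations:
        var model = new EnterPasswordInputModel { Username = username, Password = password };
        var validationResults = new List<ValidationResult>();
        Validator.TryValidateObject(model, new ValidationContext(model, null, null), validationResults, true);
        foreach (var vr in validationResults)
            controller.ModelState.AddModelError(vr.MemberNames.FirstOrDefault() ?? "", vr.ErrorMessage);

        var result = controller.EnterPassword(model);
        var viewResult = Assert.IsType<ViewResult>(result);   // fails with a clear message instead of an NRE
        Assert.Equal("NextViewName", viewResult.ViewName);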

  • facebook php-sdk not logging out

    - by Meisam Mulla
    I'm having a hard time getting this to work. I use the following to generate the logout URL:

        $logout = "https://www.facebook.com/logout.php?next="
                . urlencode('http://' . $_SERVER['SERVER_NAME'] . $_SERVER['PHP_SELF'])
                . "&access_token=" . $facebook->getAccessToken();

    which generates the correct URL (this worked with the last version):

        https://www.facebook.com/logout.php?next=http%3A%2F%2F...&access_token=AA....ZD

    However, this does not actually log the user out. I tried using

        $facebook->getLogoutUrl(array('next' => 'myurl'))

    which generates pretty much the same URL. This also did not work. I am lost as to why it's not logging the user out. I actually tried manually putting the address into the address bar, but it redirects me to the Facebook homepage.
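
    One hedged thing to check: even when the facebook.com side does log out, the PHP SDK keeps its own signed cookie/session, so the app can still look logged-in on the next request. A sketch of clearing the local state after the logout redirect returns — destroySession() exists in php-sdk v3.x, and the cookie prefix differs across SDK versions (fbs_ in 3.0, fbsr_ in 3.1+), so treat the fallback line as an assumption to verify:

        <?php
        // After returning from the logout redirect:
        $facebook->destroySession();
        // Older SDKs: remove the session cookie manually.
        setcookie('fbsr_' . $facebook->getAppId(), '', time() - 3600, '/');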

  • PHP 5.4: disable warning "Creating default object from empty value"

    - by Werner
    I want to migrate code from PHP 5.2 to 5.4. This has worked fine so far, except that all the code I use makes extensive use of assigning to an object member without any initialisation, like:

        $MyObject->MyMember = "Hello";

    which results in the warning: "Creating default object from empty value". I know that the solution would be to use:

        $MyObject = new stdClass();
        $MyObject->MyMember = "Hello";

    but it would be A LOT OF WORK to change this in all my code, because I use this many times in different projects. I know it's not good style, but unfortunately I'm not able to spend the next weeks adding this to all of my code. I know I could set PHP's error_reporting to not report warnings, but I still want to get other warnings and notices. This warning doesn't seem to be affected by enabling or disabling E_STRICT at all. So is there a way to just disable this warning?
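
    For context: in PHP 5.4 this message was reclassified from E_STRICT to E_WARNING, which is why toggling E_STRICT no longer affects it. A hedged sketch of a custom error handler that swallows only this one message and lets every other warning and notice through:

        <?php
        set_error_handler(function ($errno, $errstr, $errfile, $errline) {
            if ($errno === E_WARNING
                && strpos($errstr, 'Creating default object from empty value') === 0) {
                return true;   // handled: suppress just this warning
            }
            return false;      // fall back to PHP's normal error handling
        });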

  • Easy to use/learn PHP framework?

    - by Meredith
    I need to build a PHP app, and I was thinking about using a framework (I've never used one before). I've been browsing around, but most of them seem kinda complicated. I really liked what I saw of Symfony, but it looks like I'd have to spend a month until I really understand how to use it, and in one month I could code the app I have in mind five times without a framework. But I want to use one to "standardize" my code and prevent bugs. So I was wondering if someone could share with me which PHP frameworks you think are easier to learn. My application will use MySQL, and it will have a sort of "search engine" to search data that will be populated into the database by a few "scraper scripts" (which I also want to code using the framework).

  • PHP - Internal APIs/Libraries - What makes sense?

    - by Mark Locker
    I've been having a discussion lately with some colleagues about the best way to approach a new project, and thought it'd be interesting to get some external thoughts thrown into the mix. Basically, we're redeveloping a fairly large site (written in PHP) and have differing opinions on how the platform should be set up.

    Requirements: The platform will need to support multiple internal websites, as well as external (non-PHP) projects, which at the moment consist of a mobile app and a toolbar. We have no plans/need in the foreseeable future to open up an API externally (for use in products other than our own).

    My opinion: We should have a library of well-documented native model classes which can be shared between projects. These models will represent everything in our database and can take advantage of object-oriented features such as inheritance, traits, magic methods, etc., as well as employing ORM. We can then add an API layer on top of these models which can basically accept requests and route them to the appropriate methods, translating the response so that it can be used platform-independently. This routing for each method can be set up as and when it's required.

    Their opinion: We should have a single HTTP API which is used by all projects (internal PHP ones or otherwise).

    My thoughts: To me, there are a number of issues with using the sole HTTP API approach:
    - It will be very expensive performance-wise. One page request will result in several additional HTTP requests (which, although local, are still ones that Apache will need to handle).
    - You'll lose all of the best features PHP has for OO development — from simple inheritance to employing the likes of ORM, which can save you writing a lot of code.
    - For internal projects, the actual process makes me cringe. To get a user's name, for example, a request would go out of our box, over the LAN, back in, then run through a script which calls a method, JSON-encodes the output and feeds that back. That would then need to be JSON-decoded and presented as an array ready to use. Working with arrays, as opposed to objects, makes me sad in a modern PHP framework.

    Their thoughts (and my responses):
    - Having one way of doing things keeps things simple. (You'd only do things differently if you were using a different language anyway.)
    - It will become robust. (Seeing as the API will run off the library of models, I think my option would be just as robust.)

    What do you think? I'd be really interested to hear the thoughts of others on this, especially as opinions on both sides are not founded on any past experience.

  • Announcing Entity Framework Code-First (CTP5 release)

    - by ScottGu
    This week the data team released the CTP5 build of the new Entity Framework Code-First library. EF Code-First enables a pretty sweet code-centric development workflow for working with data. It enables you to:
    - Develop without ever having to open a designer or define an XML mapping file
    - Define model objects by simply writing "plain old classes" with no base classes required
    - Use a "convention over configuration" approach that enables database persistence without explicitly configuring anything
    - Optionally override the convention-based persistence and use a fluent code API to fully customize the persistence mapping

    I'm a big fan of the EF Code-First approach, and wrote several blog posts about it this summer:
    - Code-First Development with Entity Framework 4 (July 16th)
    - EF Code-First: Custom Database Schema Mapping (July 23rd)
    - Using EF Code-First with an Existing Database (August 3rd)

    Today's new CTP5 release delivers several nice improvements over the CTP4 build, and will be the last preview build of Code First before the final release of it. We will ship the final EF Code First release in the first quarter of next year (Q1 of 2011). It works with all .NET application types (including both ASP.NET Web Forms and ASP.NET MVC projects).

    Installing EF Code First

    You can install and use EF Code First CTP5 in one of two ways:
    - Approach 1) By downloading and running a setup program. Once installed, you can reference the EntityFramework.dll assembly it provides within your projects.
    - Approach 2) By using the NuGet Package Manager within Visual Studio to download and install EF Code First within a project. To do this, simply bring up the NuGet Package Manager Console within Visual Studio (View->Other Windows->Package Manager Console) and type "Install-Package EFCodeFirst". This will cause NuGet to download the EF Code First package, add it to your current project, and automatically add a reference to the EntityFramework.dll assembly.

    NuGet enables you to have EF Code First set up and ready to use within seconds. When the final release of EF Code First ships, you'll also be able to just type "Update-Package EFCodeFirst" to update your existing projects to use the final release.

    EF Code First Assembly and Namespace

    The CTP5 release of EF Code First has an updated assembly name and a new .NET namespace:
    - Assembly Name: EntityFramework.dll
    - Namespace: System.Data.Entity
    These names match what we plan to use for the final release of the library.

    Nice New CTP5 Improvements

    The new CTP5 release of EF Code First contains a bunch of nice improvements and refinements. Some of the highlights include:
    - Better support for Existing Databases
    - Built-in Model-Level Validation and DataAnnotation Support
    - Fluent API Improvements
    - Pluggable Conventions Support
    - New Change Tracking API
    - Improved Concurrency Conflict Resolution
    - Raw SQL Query/Command Support
    The rest of this blog post contains some more details about a few of the above changes.

    Better Support for Existing Databases

    EF Code First makes it really easy to create model layers that work against existing databases. CTP5 includes some refinements that further streamline the developer workflow for this scenario.
    Below are the steps to use EF Code First to create a model layer for the Northwind sample database.

    Step 1: Create Model Classes and a DbContext class

    [The code listing for this step was a screenshot that is missing from this copy; a hedged reconstruction follows at the end of this step.] EF Code First enables you to use "POCO" – Plain Old CLR Objects – to represent entities within a database. This means that you do not need to derive model classes from a base class, nor implement any interfaces or data persistence attributes on them. This enables the model classes to be kept clean, easily testable, and "persistence ignorant". The Product and Category classes are examples of POCO model classes.

    EF Code First enables you to easily connect your POCO model classes to a database by creating a "DbContext" class that exposes public properties that map to the tables within a database. The Northwind class illustrates how this can be done. It maps our Product and Category classes to the "Products" and "Categories" tables within the database. The properties within the Product and Category classes in turn map to the columns within the Products and Categories tables – and each instance of a Product/Category object maps to a row within the tables.

    This is all of the code required to create our model and data access layer! Previous CTPs of EF Code First required an additional step to work against existing databases (a call to Database.Initializer<Northwind>(null) to tell EF Code First not to create the database) – this step is no longer required with the CTP5 release.

    Step 2: Configure the Database Connection String

    We've written all of the code we need to write to define our model layer. Our last step before we use it will be to set up a connection string that connects it with our database. To do this, we'll add a "Northwind" connection string to our web.config file (or app.config for client apps) like so:

        <connectionStrings>
            <add name="Northwind"
                 connectionString="data source=.\SQLEXPRESS;Integrated Security=SSPI;AttachDBFilename=|DataDirectory|\northwind.mdf;User Instance=true"
                 providerName="System.Data.SqlClient" />
        </connectionStrings>

    EF "code first" uses a convention where DbContext classes by default look for a connection string that has the same name as the context class. Because our DbContext class is called "Northwind", it by default looks for a "Northwind" connection string to use. Above, our Northwind connection string is configured to use a local SQL Express database (stored within the \App_Data directory of our project). You can alternatively point it at a remote SQL Server.

    Step 3: Using our Northwind Model Layer

    We can now easily query and update our database using the strongly-typed model layer we just built with EF Code First: a LINQ query for products within a specific product category returns a sequence of strongly-typed Product objects matching the search criteria, and we can likewise retrieve a specific Product object, update two of its properties, and save the changes back to the database. EF Code First handles all of the change-tracking and data persistence work for us, and allows us to focus on our application and business logic as opposed to having to worry about data access plumbing.
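
    The original post showed Steps 1 and 3 as code screenshots, which did not survive extraction. Below is a hedged reconstruction of what they demonstrate, using the final EF 4.1 API names (DbContext, DbSet, Find); the exact CTP5 code and property lists may have differed slightly:

        using System;
        using System.Collections.Generic;
        using System.Data.Entity;
        using System.Linq;

        public class Product
        {
            public int ProductID { get; set; }
            public string ProductName { get; set; }
            public decimal UnitPrice { get; set; }
            public int CategoryID { get; set; }
            public virtual Category Category { get; set; }
        }

        public class Category
        {
            public int CategoryID { get; set; }
            public string CategoryName { get; set; }
            public virtual ICollection<Product> Products { get; set; }
        }

        // The DbContext maps the POCO classes to the Products/Categories tables.
        public class Northwind : DbContext
        {
            public DbSet<Product> Products { get; set; }
            public DbSet<Category> Categories { get; set; }
        }

        public class Program
        {
            public static void Main()
            {
                // Step 3: querying and updating with the strongly-typed model.
                using (var db = new Northwind())
                {
                    var beverages = from p in db.Products
                                    where p.Category.CategoryName == "Beverages"
                                    select p;
                    foreach (var p in beverages)
                        Console.WriteLine(p.ProductName);

                    var product = db.Products.Find(1);
                    product.ProductName = "Chai Tea";
                    db.SaveChanges();
                }
            }
        }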
    Built-in Model Validation

    EF Code First allows you to use any validation approach you want when implementing business rules with your model layer. This enables a great deal of flexibility and power. Starting with this week's CTP5 release, EF Code First also now includes built-in support for both the DataAnnotation and IValidatableObject validation support built into .NET 4. This enables you to easily implement validation rules on your models, and have these rules automatically be enforced by EF Code First whenever you save your model layer. It provides a very convenient "out of the box" way to enable validation within your applications.

    Applying DataAnnotations to our Northwind Model

    We can add declarative validation rules to two of the properties of our "Product" model using the [Required] and [Range] attributes (a sketch follows below). These validation attributes live within the System.ComponentModel.DataAnnotations namespace that is built into .NET 4, and can be used independently of EF. The error messages specified on them can either be explicitly defined – or retrieved from resource files (which makes localizing applications easy).

    Validation Enforcement on SaveChanges()

    EF Code-First (starting with CTP5) now automatically applies and enforces DataAnnotation rules when a model object is updated or saved. You do not need to write any code to enforce this – this support is now enabled by default. This means that code which violates our rules will automatically throw an exception when we call the "SaveChanges()" method on our Northwind DbContext. The DbEntityValidationException that is raised when SaveChanges() is invoked contains an "EntityValidationErrors" property that you can use to retrieve the list of all validation errors that occurred when the model was trying to save. This enables you to easily guide the user on how to fix them. Note that EF Code-First will abort the entire transaction of changes if a validation rule is violated – ensuring that our database is always kept in a valid, consistent state.

    EF Code First's validation enforcement works both for the built-in .NET DataAnnotation attributes (like Required, Range, RegularExpression, StringLength, etc.), as well as for any custom validation rule you create by sub-classing the System.ComponentModel.DataAnnotations.ValidationAttribute base class.

    UI Validation Support

    A lot of our UI frameworks in .NET also provide support for DataAnnotation-based validation rules. For example, ASP.NET MVC, ASP.NET Dynamic Data, and Silverlight (via WCF RIA Services) all provide support for displaying client-side validation UI that honors the DataAnnotation rules applied to model objects. Using the default "Add-View" scaffold template within an ASP.NET MVC 3 application will cause appropriate validation error messages to be displayed if appropriate values are not provided. ASP.NET MVC 3 supports both client-side and server-side enforcement of these validation rules. The error messages displayed are automatically picked up from the declarative validation attributes – eliminating the need for you to write any custom code to display them.
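
    The attribute listing was also a screenshot; a hedged reconstruction of the two annotated properties (the exact messages and range bounds in the original post may have differed):

        public class Product
        {
            public int ProductID { get; set; }

            [Required(ErrorMessage = "The product name is required.")]
            public string ProductName { get; set; }

            [Range(0, 1000, ErrorMessage = "The unit price must be between 0 and 1000.")]
            public decimal UnitPrice { get; set; }
        }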
    Keeping things DRY

    The "DRY Principle" stands for "Do Not Repeat Yourself", and is a best practice that recommends that you avoid duplicating logic/configuration/code in multiple places across your application, and instead specify it only once and have it apply everywhere. EF Code First CTP5 now enables you to apply declarative DataAnnotation validations on your model classes (and specify them only once) and then have the validation logic be enforced (and corresponding error messages displayed) across all application scenarios – including within controllers, views, client-side scripts, and for any custom code that updates and manipulates model classes. This makes it much easier to build good applications with clean code, and to build applications that can rapidly iterate and evolve.

    Other EF Code First Improvements New to CTP5

    EF Code First CTP5 includes a bunch of other improvements as well. Below are a few short descriptions of some of them:
    - Fluent API Improvements: EF Code First allows you to override an "OnModelCreating()" method on the DbContext class to further refine/override the schema mapping rules used to map model classes to the underlying database schema. CTP5 includes some refinements to the ModelBuilder class that is passed to this method, which can make defining mapping rules cleaner and more concise. The ADO.NET Team blogged some samples of how to do this here.
    - Pluggable Conventions Support: EF Code First CTP5 provides new support that allows you to override the "default conventions" that EF Code First honors, and optionally replace them with your own set of conventions.
    - New Change Tracking API: EF Code First CTP5 exposes a new set of change tracking information that enables you to access Original, Current & Stored values, and State (e.g. Added, Unchanged, Modified, Deleted). This support is useful in a variety of scenarios.
    - Improved Concurrency Conflict Resolution: EF Code First CTP5 provides better exception messages that allow access to the affected object instance and the ability to resolve conflicts using current, original and database values.
    - Raw SQL Query/Command Support: EF Code First CTP5 now allows raw SQL queries and commands (including SPROCs) to be executed via the SqlQuery and SqlCommand methods exposed off of the DbContext.Database property. The results of these method calls can be materialized into object instances that can be optionally change-tracked by the DbContext. This is useful for a variety of advanced scenarios.
    - Full Data Annotations Support: EF Code First CTP5 now supports all standard DataAnnotations within .NET, and can use them both to perform validation as well as to automatically create the appropriate database schema when EF Code First is used in a database creation scenario.

    Summary

    EF Code First provides an elegant and powerful way to work with data. I really like it because it is extremely clean and supports best practices, while also enabling solutions to be implemented very, very rapidly. The code-only approach of the library means that model layers end up being flexible and easy to customize. This week's CTP5 release further refines EF Code First and helps ensure that it will be really sweet when it ships early next year. I recommend using NuGet to install and give it a try today. I think you'll be pleasantly surprised by how awesome it is.

    Hope this helps,

    Scott

  • Measuring ASP.NET and SharePoint output cache

    - by DigiMortal
    During ASP.NET output caching week on my local blog I wrote about how to measure the ASP.NET output cache. As my posting was based on real work and real-life results, I thought it might be interesting to you too. So here you can read what I did, how I did it, and what the result was.

    Introduction

    Caching is not effective without measuring it. As MVP Henn Sarv said in one of his sessions, you will get what you measure. And right he is. Lately I measured caching on a local Microsoft community portal to make sure that our caching strategy is good enough in the environment where this system lives. In this posting I will show you how to start measuring the cache of your web applications. Although the application measured is built on the SharePoint Server publishing infrastructure, all these counters have the same meaning as similar counters under pure ASP.NET applications.

    Measured counters

    I used Performance Monitor and the following performance counters (their names are similar on ASP.NET and SharePoint WCMS):
    - Total number of objects added – how many objects were added to the output cache.
    - Total object discards – how many objects were deleted from the output cache.
    - Cache hit count – how many times requests were served from the cache.
    - Cache hit ratio – percentage of requests served from the cache.
    The first three counters are cumulative, while the last one is a coefficient. You can also use other counters to measure the full effect of caching (memory, processor, disk I/O, network load, etc. before and after caching).

    Measuring process

    The measuring described here started from a freshly restarted web server. I measured the application during 12 hours that covered the time ranges when users are most active. The time range does not include late evening hours and night because there is nothing to measure during those hours. During measuring we performed no maintenance or administrative tasks on the server. All tasks performed were related to usual daily content management and content monitoring. We also had no advertisement campaigns or other promotions running at the same time.

    The results

    [The four graphs — total number of objects added, total object discards, cache hit count, and cache hit ratio — are missing from this copy.]

    You can see that adds and discards grow at the same tempo. This is good because the cache expires and not-so-popular items are not kept in memory. If there were more popular content, these lines might have a bigger distance between them. Cache hit count grows faster, and this shows that more and more content is served from the cache. In the current case it shows that the cache is filled optimally, and we can do even better if we tune the caches more. The site also contains pages that are discarded when some subsite changes (a page was added/modified/deleted), and one modification may affect about four or five pages. This may also decrease the cache hit count, because during the day the site gets about 5-10 new pages. The cache hit ratio is currently extremely good. The suggested minimum is about 85%, but after some tuning and measuring I achieved 98.7% as a result. This is due to the fact that new pages are most often requested, and after new pages are added, the older ones are requested only occasionally. So they get discarded from the cache and only some of them will sometimes return to the cache. Although this may also indicate the need for additional SEO work, the result is very good in technical terms.

    Conclusion

    Measuring the ASP.NET output cache is not a complex thing to do, and you can start by measuring the performance of the cache itself. Later you can move on and measure the caching effect on other counters such as disk I/O, network, processors, etc. What you have to achieve is an optimal cache that is not full of items asked for only a couple of times per day (you can avoid this by not using too-long cache durations). After some tuning you should be able to boost the cache hit ratio up to at least 85%.
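
    A hedged way to capture similar counters from the command line instead of the Performance Monitor UI. The exact counter and category names differ between plain ASP.NET and SharePoint's publishing cache, so list what your server actually exposes first and adjust the illustrative paths below:

        :: list the cache-related counters available under the ASP.NET Applications object
        typeperf -q "ASP.NET Applications" | findstr /i "cache"

        :: sample every 15 seconds into a CSV for later graphing
        typeperf "\ASP.NET Applications(__Total__)\Output Cache Entries" ^
                 "\ASP.NET Applications(__Total__)\Output Cache Hits" ^
                 "\ASP.NET Applications(__Total__)\Output Cache Misses" ^
                 -si 15 -o outputcache.csv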

  • Asp.net Session State Revisited

    - by karan@dotnet
    Every now and then I see doubts and queries about what I believe is the most discussed topic in the .NET environment – ASP.NET sessions. So what really are they, why are they needed, and what do the browser and .NET do with them? These are some of the questions I hope to answer with this post.

    Because of the stateless nature of the HTTP protocol, there is always a need for state management in a web application. There are many other ways to store data, but I feel session state is amongst the most powerful. ASP.NET session state is a technology that lets you store server-side, user-specific data. Our web applications can then use that data to process requests from the user for which the session state was instantiated.

    So when is a session first created? When we start an ASP.NET application, a non-expiring cookie is created, called ASP.NET_SessionId. Basically there are two methods for this, depending on how you configure the setting in your config file: the session ID can be part of a cookie (the ASP.NET_SessionId discussed above), or it can be embedded in the browser's URL. For the latter we have to set cookieless sessions in our web.config file. These session IDs are 120-bit random numbers represented by 20-character strings. The cookie will be alive until you close your browser. If you browse from one app to another within the same domain, then both apps will use the same session ID to track the session state. Why reuse? So that you don't have to create a new session ID for each request.

    One can abandon a particular session by calling Session.Abandon(), which will stop the page processing and clear out the session data. A subsequent page request causes a brand new session object to be instantiated. So what happened to my cookie? Well, the session cookie is still there, even when Session.Abandon() is called and another session object is created. Session.Abandon() lets you clear out your session state without waiting for the session timeout. By default, this timeout is a 20-minute sliding expiration, refreshed every time the user makes a request to the web site and presents the session ID cookie. The Abandon method sets a flag in the session state object that indicates that the session state should be abandoned.

    If your app does not have a global.asax, then your session cookie will be killed at the end of each page request. So you need to have a global.asax file and a Session_Start() handler to make sure that the session cookie remains intact once it's issued after the first page hit. The runtime invokes global.asax's Session_OnEnd() when you call Session.Abandon() or when the session times out.

    The session manager stores session data in the HttpCache with sliding expiration, and this timeout can be configured in the <sessionState> section of the web.config file. When the timeout is up, the HttpCache will remove the session state object.

    Sometimes we want particular pages not to time out, as compared to other pages in our application. We can handle this in two ways. First, we can set a timer or maybe a JavaScript function that refreshes the page after a fixed interval of time. The catch is the page being cached locally, in which case the request is not made to the server; to prevent that you can add this to your page:

        <%@ OutputCache Location="None" VaryByParam="None" %>

    The second approach is to move your page into its own folder and then add a web.config to that folder to control the timeout.

    Also, not all pages in your application will need access to session state. For those pages that do not, you can indicate that session state is not needed and prevent session data from being fetched from the store in requests to these pages. You can disable session state at page level like this:

        <%@ Page EnableSessionState="False" %>

    tbc…
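
    For reference, a hedged sketch of the <sessionState> timeout configuration discussed above. Note that <sessionState> can only be set at machine or application level (not in a per-folder web.config), and the 60-minute value is just an example:

        <configuration>
          <system.web>
            <sessionState mode="InProc" cookieless="false" timeout="60" />
          </system.web>
        </configuration>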

  • Which Language Next? Python? Ruby? [closed]

    - by Ryan Craig
    I am a beginning webmaster (relatively), with 2+ years of PHP experience. I also have some Java training and a bit of .NET. My company is now close to redeveloping the website that I work on, which is coded primarily in PHP but has some poorly-written .NET in parts as well (it's confusing and ill-planned, but I didn't make any of those decisions — can anyone say action-oriented .NET and JScript?). So, I'm trying to decide which language I should learn next to quickly develop a new site. I will probably just redevelop it at first in PHP because I'm very comfortable with it. However, I'd like to migrate in the next year to something newer and more forward-thinking. This being said, .NET is somewhat out of the question: we need cheap developers who are fast and can get pages up quickly, and in this part of the country part-time .NET developers are hard to find. So, we need something that will be pretty standard in the next few years, but we have some .NET SOAP 1.1 APIs that we use on our actual service (separate from the corporate website) that we will need to integrate part of the site with, and developing with PHP and SOAP is much more difficult than doing the same thing in .NET. So I may have to develop the API-collaborative part in .NET just to keep it easy, and then I'd like to use something else that is fast, flexible, forward-thinking, and will be relatively standard and easy to find developers for. So, any ideas? Python and Django? Ruby on Rails? Another framework? Thanks for your thoughts. Sorry, I know this was long, but it's all very convoluted and confusing, so I needed to be slightly long-winded.

  • Separate php.ini file for each Apache virtual host?

    - by Calvin L
    Is it possible to have a separate php.ini file that overrides the default php.ini file for each virtual host? I'm running Apache/2.2.14, PHP 5.3.2-1. For example, I have several vhosts pointing to domains in my /var/www/ directory:

        /var/www/website1.com
        /var/www/website2.com

    What I'd like is to be able to place a custom php.ini file in each directory that would override the default values only for that vhost, but keep the original defaults if a value isn't specified:

        /var/www/website1.com/htdocs/
        /var/www/website1.com/php.ini
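
    With mod_php, a vhost can't load its own php.ini, but individual settings can be overridden per vhost. A hedged sketch (the directive values are examples; php_admin_value locks a setting against .htaccess changes, while php_value leaves it overridable):

        <VirtualHost *:80>
            ServerName website1.com
            DocumentRoot /var/www/website1.com/htdocs

            php_admin_value open_basedir /var/www/website1.com
            php_admin_flag display_errors off
            php_value memory_limit 64M
        </VirtualHost>

    Under CGI/FastCGI or PHP-FPM setups, a true per-site php.ini becomes possible — for example, one FPM pool per site, or the PHPRC environment variable pointing at the site's own directory.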

  • How to get full query string parameters not UrlDecoded

    - by developerit
    Introduction

    While developing Developer IT's website, we came across a problem when a user searches for keywords containing a special character like the plus ('+') sign. We found it while looking for "C++" in our search engine: the request parameter output in ASP.NET was "c ". I found it strange that it removed the '++' and replaced it with a space…

    Analysis

    After a bit of Googling and Reflection, it turns out that ASP.NET calls UrlDecode on each parameter retrieved by the Request("item") method. The Request.Params property is affected by this too, since it mashes all QueryString, Forms and other collections into a single one.

    Workaround

    Finally, I solved the puzzle using the Request.RawUrl property and parsing it with the same RegEx I use in my URL re-writer. The RawUrl is not affected by anything; as its name says, it's raw.

    Published on http://www.developerit.com/
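
    A hedged sketch of the workaround described above. The parameter name q is an assumption — the original post parsed the raw URL with its own rewriter RegEx instead:

        // Pull the still-encoded value of "q" straight out of the raw URL,
        // so "?q=c++" keeps its plus signs instead of becoming "c  ".
        string rawUrl = Request.RawUrl;                      // e.g. "/search.aspx?q=c++"
        int qm = rawUrl.IndexOf('?');
        string rawQuery = qm >= 0 ? rawUrl.Substring(qm + 1) : string.Empty;

        string keyword = string.Empty;
        foreach (string pair in rawQuery.Split('&'))
        {
            string[] kv = pair.Split(new[] { '=' }, 2);
            if (kv.Length == 2 && kv[0] == "q")
                keyword = kv[1];                             // still "c++", not URL-decoded
        }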

  • Parallelism in .NET – Part 5, Partitioning of Work

    - by Reed
    When parallelizing any routine, we start by decomposing the problem. Once the problem is understood, we need to break our work into separate tasks, so each task can be run on a different processing element. This process is called partitioning.

    Partitioning our tasks is a challenging feat. There are opposing forces at work here: too many partitions adds overhead, too few partitions leaves processors idle. Trying to strike the perfect balance between the two extremes is the goal for which we should aim. Luckily, the Task Parallel Library automatically handles much of this process. However, there are situations where the default partitioning may not be appropriate, and knowledge of our routines may allow us to guide the framework to making better decisions.

    First off, I'd like to say that this is a more advanced topic. It is perfectly acceptable to use the parallel constructs in the framework without considering the partitioning taking place. The default behavior in the Task Parallel Library is very well-behaved, even for unusual workloads, and should rarely be adjusted. I have found few situations where the default partitioning behavior in the TPL is not as good as or better than my own hand-written partitioning routines, and recommend using the defaults unless there is a strong, measured, and profiled reason to avoid using them. However, understanding partitioning, and how the TPL partitions your data, helps in understanding the proper usage of the TPL.

    I indirectly mentioned partitioning while discussing aggregation. Typically, our systems will have a limited number of Processing Elements (PE), which is the terminology used for hardware capable of processing a stream of instructions. For example, in a standard Intel i7 system, there are four processor cores, each of which has two potential hardware threads due to Hyperthreading. This gives us a total of 8 PEs – theoretically, we can have up to eight operations occurring concurrently within our system.

    In order to fully exploit this power, we need to partition our work into Tasks. A task is a simple set of instructions that can be run on a PE. Ideally, we want to have at least one task per PE in the system, since fewer tasks means that some of our processing power will be sitting idle. A naive implementation would be to just take our data and partition it with one element in our collection being treated as one task. When we loop through our collection in parallel using this approach, we'd just process one item at a time, then reuse that thread to process the next, etc. There's a flaw in this approach, however: it will tend to be slower than necessary, often slower than processing the data serially.

    The problem is that there is overhead associated with each task. When we take a simple foreach loop body and implement it using the TPL, we add overhead. First, we change the body from a simple statement to a delegate, which must be invoked. In order to invoke the delegate on a separate thread, the delegate gets added to the ThreadPool's current work queue, and the ThreadPool must pull this off the queue, assign it to a free thread, then execute it. If our collection had one million elements, the overhead of trying to spawn one million tasks would destroy our performance.

    The answer here is to partition our collection into groups, and have each group of elements treated as a single task. By adding a partitioning step, we can break our total work into tasks small enough to keep our processors busy, but large enough to avoid overburdening the ThreadPool. There are two clear, opposing goals here: always try to keep each processor working, but also try to keep the individual partitions as large as possible.

    When using Parallel.For, the partitioning is always handled automatically. At first, partitioning here seems simple. A naive implementation would merely split the total element count up by the number of PEs in the system, and assign a chunk of data to each processor. Many hand-written partitioning schemes work in exactly this manner. This perfectly balanced, static partitioning scheme works very well if the amount of work is constant for each element. However, this is rarely the case. Often, the length of time required to process an element grows as we progress through the collection, especially if we're doing numerical computations. In this case, the first PEs will finish early and sit idle waiting on the last chunks to finish. Sometimes, work can decrease as we progress, since previous computations may be used to speed up later computations. In this situation, the first chunks will be working far longer than the last chunks. In order to balance the workload, many implementations create many small chunks and reuse threads. This adds overhead, but does provide better load balancing, which in turn improves performance.

    The Task Parallel Library handles this more elaborately. Chunks are determined at runtime, and start small. They grow slowly over time, getting larger and larger. This tends to lead to near-optimum load balancing, even in odd cases such as increasing or decreasing workloads.

    Parallel.ForEach is a bit more complicated, however. When working with a generic IEnumerable<T>, the number of items required for processing is not known in advance and must be discovered at runtime. In addition, since we don't have direct access to each element, the scheduler must enumerate the collection to process it. Since IEnumerable<T> is not thread safe, it must lock on elements as it enumerates, create temporary collections for each chunk to process, and schedule this out. By default, it uses a partitioning method similar to the one described above. We can see this directly by looking at the Visual Partitioning sample shipped by the Task Parallel Library team, available as part of the Samples for Parallel Programming. When we run the sample with four cores and the default Load Balancing partitioning scheme, colored bands represent each processing core: at the start (at the top) we begin with very small bands of color, and as the routine progresses through the Parallel.ForEach, the chunks get larger and larger (seen as larger and larger stripes).

    Most of the time, this is fantastic behavior, and most likely will outperform any custom-written partitioning. However, if your routine is not scaling well, it may be due to a failure of the default partitioning to handle your specific case. With prior knowledge about your work, it may be possible to partition data more meaningfully than the default Partitioner.

    There is the option to use an overload of Parallel.ForEach which takes a Partitioner<T> instance. The Partitioner<T> class is an abstract class which allows for both static and dynamic partitioning. By overriding Partitioner<T>.SupportsDynamicPartitions, you can specify whether a dynamic approach is available. If not, your custom Partitioner<T> subclass would override GetPartitions(int), which returns a list of IEnumerator<T> instances. These are then used by the Parallel class to split work up amongst processors. When dynamic partitioning is available, GetDynamicPartitions() is used, which returns an IEnumerable<T> for each partition. If you do decide to implement your own Partitioner<T>, keep in mind the goals and tradeoffs of different partitioning strategies, and design appropriately.

    The Samples for Parallel Programming project includes a ChunkPartitioner class in the ParallelExtensionsExtras project. This provides example code for implementing your own custom allocation strategies, including a static allocator of a given chunk size. Although implementing your own Partitioner<T> is possible, as I mentioned above, this is rarely required or useful in practice. The default behavior of the TPL is very good, often better than any hand-written partitioning strategy.
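
    For index-based work, the framework also ships a ready-made chunking partitioner. A hedged sketch of guiding Parallel.ForEach with it — the 10,000 range size is an arbitrary example, and ProcessElement stands in for the real per-element work:

        using System.Collections.Concurrent;
        using System.Threading.Tasks;

        // Partitioner.Create(from, to, rangeSize) hands each task a whole
        // index range instead of a single element, cutting per-task overhead.
        var ranges = Partitioner.Create(0, 1000000, 10000);

        Parallel.ForEach(ranges, range =>
        {
            for (int i = range.Item1; i < range.Item2; i++)
            {
                ProcessElement(i);   // hypothetical per-element work
            }
        });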

  • Parallelism in .NET – Part 12, More on Task Decomposition

    - by Reed
    Many tasks can be decomposed using a Data Decomposition approach, but often this is not appropriate. Frequently, decomposing the problem into distinctive tasks that must be performed is a more natural abstraction. However, as I mentioned in Part 1, Task Decomposition tends to be a bit more difficult than data decomposition, and can require a bit more effort. Before we begin parallelizing our algorithm based on the tasks being performed, we need to decompose our problem and take special care of certain considerations such as ordering and grouping of tasks.

    Up to this point in this series, I've focused on parallelization techniques which are most appropriate when a problem space can be decomposed by data. Using PLINQ and the Parallel class, I've shown how problem spaces where there is a collection of data, and each element needs to be processed, can potentially be parallelized. However, there are many other routines where this is not appropriate. Often, instead of working on a collection of data, there is a single piece of data which must be processed using an algorithm or series of algorithms. Here there is no collection of data, but there may still be opportunities for parallelism. As I mentioned before, in cases like this, the approach is to look at your overall routine and decompose your problem space based on tasks. The idea here is to look for discrete "tasks" — individual pieces of work which can be conceptually thought of as a single operation.

    Let's revisit the example I used in Part 1, an application startup path. Say we want our program, at startup, to do a bunch of individual actions, or "tasks". The following is our list of duties we must perform right at startup:
    - Display a splash screen
    - Request a license from our license manager
    - Check for an update to the software from our web server
    - If an update is available, download it
    - Setup our menu structure based on our current license
    - Open and display our main, welcome Window
    - Hide the splash screen

    The first step in Task Decomposition is breaking up the problem space into discrete tasks. This, naturally, can be abstracted as seven discrete tasks. In the serial version of our program, the general process is a simple linear sequence of these seven duties. These tasks, obviously, provide some opportunities for parallelism. Before we can parallelize this routine, we need to analyze these tasks and find any dependencies between them. In this case, our dependencies include:
    - The splash screen must be displayed first, and as quickly as possible.
    - We can't download an update before we see whether one exists.
    - Our menu structure depends on our license, so we must check for the license before setting up the menus.
    - Since our welcome screen will notify the user of an update, we can't show it until we've downloaded the update.
    - Since our welcome screen includes menus that are customized based off the licensing, we can't display it until we've received a license.
    - We can't hide the splash until our welcome screen is displayed.

    By listing our dependencies, we start to see the natural ordering that must occur for the tasks to be processed correctly. The second step in Task Decomposition is determining the dependencies between tasks, and ordering tasks based on their dependencies. Looking at these tasks, and looking at all the dependencies, we quickly see that even a simple decomposition such as this one can get quite complicated.

    In order to simplify the problem of defining the dependencies, it's often a useful practice to group our tasks into larger, discrete tasks. The goal when grouping tasks is to make each task "group" have as few dependencies as possible on other tasks or groups, and then work out the dependencies within that group. Typically, this works best when any external dependency is based on the "last" task within the group when it's ordered, although that is not a firm requirement. This process is often called Grouping Tasks. In our case, we can easily group tasks together, effectively turning this into four discrete task groups:

    1. Show our splash screen – This needs to be left as its own task. First, multiple things depend on this task, mainly because we want this to start before any other action, and start as quickly as possible.
    2. Check for Update and Download the Update if it Exists – These two tasks logically group together. We know we only download an update if the update exists, so that naturally follows. This group has one dependency as an input, and other tasks only rely on the final task within it.
    3. Request a License, and then Setup the Menus – Here, we can group these two tasks together. Although we mentioned that our welcome screen depends on the license returned, it also depends on setting up the menu, which is the final task here. Setting up our menus cannot happen until after our license is requested. By grouping these together, we further reduce our problem space.
    4. Display welcome and hide splash – Finally, we can display our welcome window and hide our splash screen. This task group depends on all three previous task groups – it cannot happen until all three of the previous groups have completed.

    By grouping the tasks together, we reduce our problem space and can naturally see a pattern for how this process can be parallelized. [The post's diagram, showing each task group as an orange box with its tasks inside, is missing from this copy; a code sketch of the same composition follows below.] We can now effectively take these tasks and run a large portion of this process in parallel, including the portions which may be the most time consuming. We've now created two parallel paths which our process execution can follow, hopefully speeding up the application startup time dramatically. The main point to remember here is that, when decomposing your problem space by tasks, you need to:
    - Define each discrete action as an individual Task
    - Discover dependencies between your tasks
    - Group tasks based on their dependencies
    - Order the tasks and groups of tasks
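
    A hedged sketch of the two parallel paths using .NET 4 task composition — all the method names are hypothetical placeholders for the duties listed above:

        var splash = Task.Factory.StartNew(() => ShowSplashScreen());

        // Path 1: check for an update, then download it if one exists.
        var update = Task.Factory.StartNew(() => CheckForUpdate())
                         .ContinueWith(t => { if (t.Result) DownloadUpdate(); });

        // Path 2: request a license, then build the menus from it.
        var menus = Task.Factory.StartNew(() => RequestLicense())
                        .ContinueWith(t => SetupMenus(t.Result));

        // The final group waits on everything before showing the welcome window.
        Task.Factory.ContinueWhenAll(new[] { splash, update, menus }, _ =>
        {
            ShowWelcomeWindow();
            HideSplashScreen();
        });

    In a real UI application, the splash and welcome work would also need to run on the UI thread, for example by passing TaskScheduler.FromCurrentSynchronizationContext() to the continuations.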

  • Should a c# dev switch to VB.net when the team language base is mixed?

    - by jjr2527
    I recently joined a new development team where the language preferences are mixed on the .NET platform:
    - Dev 1: knows VB.NET, does not know C#
    - Dev 2: knows VB.NET, does not know C#
    - Dev 3: knows C# and VB.NET, prefers C#
    - Dev 4: knows C# and VB6 (VB.NET should be pretty easy to pick up), prefers C#

    It seems to me that the thought leaders in the .NET space are C# devs almost universally. I also thought that some 3rd-party tools didn't support VB.NET, but when I started looking into it I didn't find any good examples. I would prefer to get the whole team on C#, but if there isn't any good reason to force the issue aside from preference, then I don't think that is the right choice. Are there any reasons I should lead folks away from VB.NET?
