Search Results

Search found 83184 results on 3328 pages for 'asp php net'.


  • pgsql.so is not loaded in PHP

    - by Obay
    Hi, I've been tasked to create a PHP app which accesses an existing PostgreSQL database. This is my first time working with PostgreSQL, not to mention that PHP was already installed on the Linux box the app is supposed to run on. I have no experience setting up this stuff; I just code. My problem is that I can't seem to get the PostgreSQL extension working in PHP. I checked the php.ini file and there were no "extension=..." lines, so I added "extension=pgsql.so". I then checked the "extension_dir" and found only 2 files in there (ldap.so, phpcups.so), so I added a pgsql.so file taken from another Linux box. I restarted httpd, but it does not work: I can't find any "pgsql" or "postgre" section in phpinfo(). Forgive my noobness; I know very little about Linux. I would really appreciate it if you could point me in the right direction.
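
    A quick way to confirm whether the extension actually loaded, and why a pgsql.so copied from another box often fails to, is a short diagnostic script. This is a minimal sketch, not a fix; the connection parameters are placeholders and php_ini_loaded_file() needs PHP 5.2.4 or later:

        <?php
        // Run this both from the CLI and through Apache, since the two can read
        // different php.ini files.
        echo 'php.ini in use: ' . php_ini_loaded_file() . PHP_EOL;
        echo 'extension_dir:  ' . ini_get('extension_dir') . PHP_EOL;

        if (!extension_loaded('pgsql')) {
            // A pgsql.so built against a different PHP version/API is silently
            // skipped; the reason is usually written to Apache's error log.
            echo 'pgsql extension is NOT loaded' . PHP_EOL;
            exit(1);
        }

        // If it did load, a connection test confirms libpq can reach the database.
        $conn = pg_connect('host=localhost dbname=test user=me password=secret');
        echo $conn ? 'connected OK' : 'connect failed', PHP_EOL;

    In practice the usual fix is installing the distribution's matching php-pgsql package (or rebuilding the extension against this exact PHP build), rather than copying a .so between machines.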

    Read the article

  • How do I solve an AntiForgeryToken exception that occurs after an iisreset in my ASP.Net MVC app?

    - by Colin Newell
    I’m having problems with the AntiForgeryToken in ASP.Net MVC. If I do an iisreset on my web server and a user continues with their session, they get bounced to a login page. Not terrible, but then the AntiForgery token blows up and the only way to get going again is to blow away the cookie in the browser. With the beta of version 1 it used to go wrong when reading the cookie back in, so I used to scrub the cookie before asking for a validation token; that was fixed when it was released. For now I think I’ll roll back to my code that fixed the beta problem, but I can’t help thinking I’m missing something. Is there a simpler solution? Heck, should I just drop their helper and create a new one from scratch? I get the feeling that a lot of the problem is that it’s tied so deeply into the old ASP.Net pipeline and is being kludged into doing something it wasn’t really designed to do. I had a look in the source code for the ASP.Net MVC 2 RC and it doesn't look like the code has changed much, so while I haven't tried it, I don't think there are any answers there. Edit: I just realised I didn't mention that this happens when inserting the token on the GET request; it isn't the validation that kicks off when you do a POST. Here is the relevant part of the stack trace of the exception:
        System.Web.Mvc.HttpAntiForgeryException: A required anti-forgery token was not supplied or was invalid. --->
        System.Web.HttpException: Validation of viewstate MAC failed. If this application is hosted by a Web Farm or cluster, ensure that <machineKey> configuration specifies the same validationKey and validation algorithm. AutoGenerate cannot be used in a cluster. --->
        System.Web.UI.ViewStateException: Invalid viewstate.
            Client IP: 127.0.0.1
            Port: 4991
            User-Agent: scrubbed
            ViewState: scrubbed
            Referer: blah
            Path: /oursite/Account/Login --->
        System.Security.Cryptography.CryptographicException: Padding is invalid and cannot be removed.
            at System.Security.Cryptography.RijndaelManagedTransform.DecryptData(Byte[] inputBuffer, Int32 inputOffset, Int32 inputCount, Byte[]& outputBuffer, Int32 outputOffset, PaddingMode paddingMode, Boolean fLast)
            at System.Security.Cryptography.RijndaelManagedTransform.TransformFinalBlock(Byte[] inputBuffer, Int32 inputOffset, Int32 inputCount)
            at System.Security.Cryptography.CryptoStream.FlushFinalBlock()
            at System.Web.Configuration.MachineKeySection.EncryptOrDecryptData(Boolean fEncrypt, Byte[] buf, Byte[] modifier, Int32 start, Int32 length, IVType ivType, Boolean useValidationSymAlgo)
            at System.Web.UI.ObjectStateFormatter.Deserialize(String inputString)
            --- End of inner exception stack trace ---
            --- End of inner exception stack trace ---
            at System.Web.UI.ViewStateException.ThrowError(Exception inner, String persistedState, String errorPageMessage, Boolean macValidationError)
            at System.Web.UI.ViewStateException.ThrowMacValidationError(Exception inner, String persistedState)
            at System.Web.UI.ObjectStateFormatter.Deserialize(String inputString)
            at System.Web.UI.ObjectStateFormatter.System.Web.UI.IStateFormatter.Deserialize(String serializedState)
            at System.Web.Mvc.AntiForgeryDataSerializer.Deserialize(String serializedToken)
            --- End of inner exception stack trace ---
        at System.Web.Mvc.AntiForgeryDataSerializer.Deserialize(String serializedToken)
        at System.Web.Mvc.HtmlHelper.GetAntiForgeryTokenAndSetCookie(String salt, String domain, String path)
        at System.Web.Mvc.HtmlHelper.AntiForgeryToken(String salt, String domain, String path)

    Read the article

  • php cli script hangs with no messages

    - by julio
    Hi-- I've written a PHP script that runs via SSH and nohup, meant to process records from a database and do stuff with them (eg. process some images, update some rows). It works fine with small loads, up to maybe 10k records. I have some larger datasets that process around 40k records (not a lot, I realize, but it adds up to a lot of work when each record requires the download and processing of up to 50 images). The larger datasets can take days to process. Sometimes I'll see in my debug logs memory errors, which are clear enough-- but sometimes the script just appears to "die" or go zombie on me. My tail of the debug log just stops, with no error messages, the tail of the nohup log ends with no error, and the process is still showing in a ps list, looking like this-- 26075 pts/0 S 745:01 /usr/bin/php ./import.php but no work is getting done. Can anyone give me some ideas on why a process would just quit? The obvious things (like a php script timeout and memory issues) are not a factor, as far as I can tell. Thanks for any tips PS-- this is hosted on a godaddy VDS (not my choice). I am sort of suspecting that godaddy has some kind of limits that might kick in on me despite what overrides I put in the code (such as set_time_limit(0);).
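
    One way to narrow down whether the process is being killed from outside (a host limit, the OOM killer) or stalling inside PHP is to add a heartbeat plus shutdown and signal logging, so something gets written even when the script dies quietly. This is a diagnostic sketch only; it assumes the CLI pcntl extension for the signal part, and $records and process_record() stand in for the script's existing code:

        <?php
        declare(ticks=1);                          // let signal handlers run between statements

        function log_shutdown() {
            // Runs even when the script ends via a fatal error or exit().
            error_log('shutdown, last error: ' . print_r(error_get_last(), true));
        }
        register_shutdown_function('log_shutdown');

        function log_signal($sig) {
            error_log("caught signal $sig");       // record SIGTERM/SIGHUP instead of vanishing
            exit(1);
        }
        if (function_exists('pcntl_signal')) {
            pcntl_signal(SIGTERM, 'log_signal');
            pcntl_signal(SIGHUP, 'log_signal');
        }

        set_time_limit(0);

        foreach ($records as $i => $record) {      // $records: rows fetched from MySQL
            process_record($record);               // placeholder for the image work

            if ($i % 100 === 0) {                  // heartbeat every 100 records
                error_log(sprintf('record %d, mem %.1f MB', $i, memory_get_usage(true) / 1048576));
            }
        }

    If the heartbeat stops with no shutdown or signal line, the process was most likely killed with SIGKILL (which cannot be trapped), which points at a host-level limit rather than the script itself.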

    Read the article

  • ASP.NET MVC app on IIS7 with WebForms content is throwing NTLM authenticate box

    - by Jon
    I have an ASP.NET MVC app that also contains some WebForms content (for SSRS ReportViewer). This is deployed to IIS7 and the MVC pages of the app work fine, but when I try to browse to the aspx page I am prompted with the NTLM auth box. I do not have NTLM enabled, I only have Anonymous auth enabled. I have this deployed and fully working on an IIS6 box, the only other difference is that the IIS6 box is in our company domain, but the IIS7 box is not (I fail to see how this could be the issue as the MVC stuff is working fine). Any thoughts? Thanks.

    Read the article

  • Guesses about my session value conflicts

    - by SmartestVEGA
    I have an ASP.NET web form that emails the information a user enters when they click the submit button. The form has 4 pages, but not every request uses all 4 of them: if the user selects a particular value on the first page, the form bypasses the 3rd page and goes straight to the 4th (page 1, 2, 4). If any other value is selected on the first page, the form navigates page 1, 2, 3, 4. My problem is that when multiple users access the website at the same time, the value from the first page gets mixed up between users and the form behaves abnormally: sometimes it bypasses page 3 and sometimes it does not. Shown below are the variable declarations:
        Public strRoleType As String = String.Empty
        Protected Shared isAreaSelected As Integer = 0
        Protected Shared isStoreSelected As Integer = 0
        Protected Shared isHeadOfficeSelected As Integer = 0
        Protected Shared isRegionSelected As Integer = 0
    I guess the problem is with the strRoleType variable and whether it is picking up values from different users. Does anyone have a workaround?

    Read the article

  • How do I setup a Master Page with ASP.net?

    - by Michael
    Hi there, I'm normally a ColdFusion developer, but I'm having to work on a new site that only has ASP.net hosting, so forgive me if my questions seem very trivial. For numerous reasons, the website will be relatively static in the sense that it will mainly be using includes etc...that's about as complex as it will get. Now, I heard about the ability to set up a master page in ASP.net. Would anyone be able to explain, step by step, how to do this? I have of course been searching for some time now on this topic, but most results yield little or no help since the search terms are slightly ambiguous. It would be nice to have this functionality for the long run. Any help or advice would be great. Many thanks. Michael.

    Read the article

  • PHP/Java Bridge - Access Java objects in PHP

    - by Omer Hassan
    I have a Red5 application which defines some public Java methods. When I start the server, an object of the application class gets created. I am trying to call the public methods of the application class from PHP using the existing instance of the Java application class. So here's my Java application:
        public class App extends org.red5.server.adapter.ApplicationAdapter {
            public boolean appStart(IScope app) {
                // This method gets called when the application starts
                // and an object of this App class is created.
                return true;
            }
            // This is the method I would like to call from PHP.
            public void f() {
            }
        }
    From PHP, I would like to get access to the App object that is created and call the method f() on it. I have tried playing around with this thing called "context". So in the Java method App.appStart(), I did this:
        // Save a reference to this App object to be retrieved later in PHP.
        new PhpScriptContextFactory().getContext().put("x", this);
    And in PHP, I tried to access the saved object like this:
        require_once("http://localhost:5080/JavaBridge/java/Java.inc");
        var_dump(java_is_null(java_context()->get("x")));
    Unfortunately, the java_is_null() function in PHP returns true. I also tried saving the App object in a static variable of the App class, but when I access that variable in PHP, its value is null.

    Read the article

  • Security issues of running PHP scripts as the owner of the PHP file with suexec

    - by thomasrutter
    I'm using suexec to ensure that PHP scripts (and other CGI/FastCGI apps) are run as the account holder associated with the relevant virtual host. This allows for securing each users' scripts from reading/writing by other users. However, it occurs to me that this opens up a different security hole. Previously, the web server ran as an unprivileged user, with read-only access to user's files (unless the user changed the file permissions for some reason). Now, the web server can also write to user's files. So while I've prevented different users taking advantage of each other's scripts, I've made it so that in the event that some application has a remote code injection vulnerability, it now has not only read access but also write access to all that user's scripts and website. How can I deal with this? One idea I've had is to create a second user account for each user account in the system, so that each user has their own user account, and all their scripts are run under another user account. But that seems cumbersome.

    Read the article

  • How to make a column width fixed in a GridView on an ASPX page?

    - by user306671
    Hi, I have this column in a GridView on an aspx page:
        <asp:TemplateField HeaderText="Observacion">
            <ItemTemplate>
                <asp:Label ID="lblOrderID" runat="server" Text='<%# Eval("Observacion") %>'></asp:Label>
            </ItemTemplate>
            <ItemStyle Width="200px" Wrap="False" />
        </asp:TemplateField>
    I have set the ItemStyle width and set Wrap to false, but the column width still grows when the data is too long. I just want to change the height of the column, not the width. Here is the complete code of the GridView:
        <asp:GridView ID="GridView1" runat="server" AutoGenerateDeleteButton="True" CellPadding="4"
            EnableModelValidation="True" ForeColor="#333333" GridLines="None" AutoGenerateColumns="False">
            <columns>
                <asp:boundfield datafield="ID_OBSERVACION" visible="False" />
                <asp:boundfield datafield="AUTOR" headertext="Autor" />
                <asp:boundfield datafield="FECHA" headertext="Fecha" />
                <asp:TemplateField HeaderText="Observacion">
                    <ItemTemplate>
                        <asp:Label ID="lblOrderID" runat="server" Text='<%# Eval("Observacion") %>'></asp:Label>
                    </ItemTemplate>
                    <ItemStyle Width="200px" Wrap="False" />
                </asp:TemplateField>
            </columns>
            <AlternatingRowStyle BackColor="White" ForeColor="#284775" Wrap="False" />
            <EditRowStyle BackColor="#999999" />
            <FooterStyle BackColor="#5D7B9D" Font-Bold="True" ForeColor="White" />
            <HeaderStyle BackColor="#5D7B9D" Font-Bold="True" ForeColor="White" />
            <PagerStyle BackColor="#284775" ForeColor="White" HorizontalAlign="Center" />
            <RowStyle BackColor="#F7F6F3" ForeColor="#333333" Wrap="False" />
            <SelectedRowStyle BackColor="#E2DED6" Font-Bold="True" ForeColor="#333333" />
        </asp:GridView>

    Read the article

  • Parallelism in .NET – Part 20, Using Task with Existing APIs

    - by Reed
    Although the Task class provides a huge amount of flexibility for handling asynchronous actions, the .NET Framework still contains a large number of APIs that are based on the previous asynchronous programming model.  While Task and Task<T> provide a much nicer syntax as well as extending the flexibility, allowing features such as continuations based on multiple tasks, the existing APIs don’t directly support this workflow. There is a method in the TaskFactory class which can be used to adapt the existing APIs to the new Task class: TaskFactory.FromAsync.  This method provides a way to convert from the BeginOperation/EndOperation method pair syntax common through the .NET Framework directly to a Task<T> containing the results of the operation in the task’s Result property. While this method does exist, it unfortunately comes at a cost – the method overloads are far from simple to decipher, and the resulting code is not always as easily understood as newer code based directly on the Task class.  For example, a single call to handle WebRequest.BeginGetResponse/EndGetResponse, one of the easiest “pairs” of methods to use, looks like the following:
        var task = Task.Factory.FromAsync<WebResponse>(
            request.BeginGetResponse,
            request.EndGetResponse,
            null);
    The compiler is unfortunately unable to infer the correct type, and, as a result, the WebResponse must be explicitly mentioned in the method call.  As a result, I typically recommend wrapping this into an extension method to ease use.  For example, I would place the above in an extension method like:
        public static class WebRequestExtensions
        {
            public static Task<WebResponse> GetResponseAsync(this WebRequest request)
            {
                return Task.Factory.FromAsync<WebResponse>(
                    request.BeginGetResponse,
                    request.EndGetResponse,
                    null);
            }
        }
    This dramatically simplifies usage.  For example, if we wanted to asynchronously check to see if this blog supported XHTML 1.0, and report that in a text box to the user, we could do:
        var webRequest = WebRequest.Create("http://www.reedcopsey.com");
        webRequest.GetResponseAsync().ContinueWith(t =>
        {
            using (var sr = new StreamReader(t.Result.GetResponseStream()))
            {
                string str = sr.ReadLine();
                this.textBox1.Text = string.Format("Page at {0} supports XHTML 1.0: {1}",
                    t.Result.ResponseUri, str.Contains("XHTML 1.0"));
            }
        }, TaskScheduler.FromCurrentSynchronizationContext());
    By using a continuation with a TaskScheduler based on the current synchronization context, we can keep this request asynchronous, check based on the first line of the response string, and report the results back on our UI directly.

    Read the article

  • ASP.Net 4.5 Garbage Collection Improvement

    - by Aligned
    Originally posted on: http://geekswithblogs.net/Aligned/archive/2013/06/24/asp.net-4.5-garbage-collection-improvement.aspx
    I just read Five Great .NET Framework 4.5 Features on CodeProject by Shivprasad koirala. Feature 5 in his article mentions the GC background cleanup and has a good explanation of the work the GC has to do for ASP.Net on the server. “Garbage collector is one real heavy task in a .NET application. And it becomes heavier when it is an ASP.NET application. ASP.NET applications run on the server and a lot of clients send requests to the server thus creating loads of objects, making the GC really work hard for cleaning up unwanted objects.” “To overcome the above problem, server GC was introduced. In server GC there is one more thread created which runs in the background. This thread works in the background and keeps cleaning…objects thus minimizing the load on the main GC thread. Due to double GC threads running, the main application threads are less suspended, thus increasing application throughput. To enable server GC, we need to use the gcServer XML tag and enable it to true.”
        <configuration>
          <runtime>
            <gcServer enabled="true"/>
          </runtime>
        </configuration>
    This is not done by default. The MSDN information page says “There are only two garbage collection options, workstation or server. For single-processor computers, the default workstation garbage collection should be the fastest option. Either workstation or server can be used for two-processor computers. Server garbage collection should be the fastest option for more than two processors. Use the GCSettings.IsServerGC property to determine if server garbage collection is enabled.” “In the .NET Framework 4 and earlier versions, concurrent garbage collection is not available when server garbage collection is enabled. Starting with the .NET Framework 4.5, server garbage collection is concurrent. To use non-concurrent server garbage collection, set the <gcServer> element to true and the <gcConcurrent> element to false.” So if you’re using ASP.Net 4.5 and have a multi-core server, you should try turning on Server Garbage Collection and do some profiling to see if it improves the performance of your site.

    Read the article

  • ASP.NET MVC 3 Hosting :: Deploying ASP.NET MVC 3 web application to server where ASP.NET MVC 3 is not installed

    - by mbridge
    You can build a sample ASP.NET MVC 3 application first, for deploying to your hosting. To try it out, first put it on a web server where ASP.NET MVC 3 is installed. In this posting I will tell you what files you need and where you can find them: these are the files you need to upload to get the application running on a server where ASP.NET MVC 3 is not installed. You can also deploy an ASP.NET MVC 3 web application to a server where ASP.NET MVC 3 is not installed as in this example: change the reference to System.Web.Helpers.dll to be a local one so it is copied to the bin folder of your application. The first file in this list is my web application dll and you don’t need it to get ASP.NET MVC 3 running. All the other files are located in the following folder: C:\Program Files\Microsoft ASP.NET\ASP.NET Web Pages\v1.0\Assemblies\ If more files are needed in some other scenarios then please leave me a comment here. And… don’t forget to convert the folder in IIS to an application. While developing an application locally, this isn’t a problem. But when you are ready to deploy your application to a hosting provider, this might well be a problem if the hoster does not have the ASP.NET MVC assemblies installed in the GAC. Fortunately, ASP.NET MVC is still bin-deployable. If your hosting provider has ASP.NET 3.5 SP1 installed, then you’ll only need to include the MVC DLL. If your hosting provider is still on ASP.NET 3.5, then you’ll need to deploy all three. It turns out that it’s really easy to do so. Also, ASP.NET MVC runs in Medium Trust, so it should work with most hosting providers’ Medium Trust policies. It’s always possible that a hosting provider customizes their Medium Trust policy to be draconian. Deployment is easy when you know what to copy into the archive for publishing your web site on ASP.NET MVC 3 or later versions. What I like to do is use the Publish feature of Visual Studio to publish to a local directory and then upload the files to my hosting provider. If your hosting provider supports FTP, you can often skip this intermediate step and publish directly to the FTP site. The first thing I do in preparation is to go to my MVC web application project and expand the References node in the project tree. Select the aforementioned three assemblies and, in the Properties dialog, set Copy Local to True. Now just right click on your application and select Publish. This brings up the Publish wizard. Notice that in this example, I selected a local directory. When I hit Publish, all the files needed to deploy my app are available in the directory I chose, including the assemblies that were in the GAC. Another ASP.NET MVC 3 article: - New Features in ASP.NET MVC 3 - ASP.NET MVC 3 First Look

    Read the article

  • PECL OCI8 2.0 Production Release Announcement

    - by cj
    The PHP OCI8 2.0.6 extension for Oracle Database is now "production" status. The source code is available on PECL. This can be used immediately to update your OCI8 extension in PHP 5.2 and later versions. The extension compiles with Oracle 10.2 or later client libraries. Oracle's standard cross-version database connectivity applies. OCI8 2.0 and PHP 5.5.5 RPMs for Oracle and Red Hat Linux are available from oss.oracle.com. Windows DLLs are available on PECL for PHP 5.3, PHP 5.4 and PHP 5.5. OCI8 2.0 source code will also be automatically included in the next major version of PHP. New Functionality Oracle Database 12c Implicit Result Set support. IRS's make it easy to pass query results back from stored PL/SQL procedures or anonymous PL/SQL blocks. Individual IRS statement resources, each corresponding to a single query, can be obtained with the new function oci_get_implicit_resultset(). These 'child' statement resources can be passed to any oci_fetch_* function. See Using PHP and Oracle Database 12c Implicit Result Sets and the PHP Manual: oci_get_implicit_resultset(). DTrace Dynamic Trace static probes. This well respected DTrace tracing framework is available on a number of platforms, including Oracle Linux. PHP OCI8 static user-space probes can be enabled with PHP's --enable-dtrace configuration option. See Using PHP DTrace on Oracle Linux. Documentation is also available in the PHP Manual OCI8 and DTrace Dynamic Tracing Improved Functionality Using oci_execute($s, OCI_NO_AUTO_COMMIT) for a SELECT no longer unnecessarily initiates an internal ROLLBACK during connection close. This can improve overall scalability by reducing "round trips" between PHP and the database. Changed Functionality PHP OCI8 2.0's minimum pre-requisites are now PHP 5.2 and Oracle client library 10.2. Later versions of both are usable and, in fact, recommended. Use the older PHP OCI8 1.4.10 extension when using PHP 4.3.9 through to PHP 5.1.x, or when only Oracle Database 9.2 client libraries are available. oci_set_*($connection, ...) meta data setting call error handling is fixed so that oci_error($connection) works for these calls. Note: The old, deprecated function aliases like ocilogon still exist but are not recommended for new applications. Phpinfo() Changes Some cosmetic changes were made to the output of php --ri oci8 and the phpinfo() function. The oci8.event and oci8.connection_class values are now shown only when the Oracle client libraries support the respective functionality. Connection statistics are now in a separate phpinfo() table. Temporary LOB and Collection support status lines in phpinfo() output were removed. These two features have always been enabled since 2007. Oci_internal_debug() Changes The oci_internal_debug() function is now a no-op. Use PHP's --enable-dtrace functionality with DTrace or SystemTap instead. References OCI8 Extension source code and Windows DLLs http://pecl.php.net/package/oci8 Oracle Linux RPMs oss.oracle.com PHP Manual for OCI8 OCI8 and DTrace Dynamic Tracing Oracle OpenWorld Conference paper What's New in Oracle Database 12c for PHP
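
    As a rough illustration of the new Implicit Result Set support from the PHP side, here is a minimal sketch; it assumes an Oracle Database 12c schema with a PL/SQL procedure (my_pkg.get_report is a placeholder name) that returns its cursors with DBMS_SQL.RETURN_RESULT, and placeholder connection details:

        <?php
        // Requires PHP OCI8 2.0+ and Oracle Database 12c; credentials are placeholders.
        $conn = oci_connect('scott', 'tiger', 'localhost/pdborcl');

        // The PL/SQL call returns one or more result sets to the client implicitly.
        $stmt = oci_parse($conn, 'BEGIN my_pkg.get_report(); END;');
        oci_execute($stmt);

        // Each oci_get_implicit_resultset() call returns a child statement resource
        // for one query, or false when there are no more result sets.
        while (($child = oci_get_implicit_resultset($stmt)) !== false) {
            while (($row = oci_fetch_assoc($child)) !== false) {
                print_r($row);
            }
        }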

    Read the article

  • How to insert the values of selected GridView rows into a database in .NET

    - by MAS1
    I am developing a Windows Forms application in .NET, and I want to insert the values of selected GridView rows into a database. The first column of my GridView is a checkbox; when the user checks one or more checkboxes in the grid, I want to insert the values of the corresponding rows into the database. In a web application I did this using the DataKeyNames property of the GridView. I want to know how to do it in a Windows Forms application. I am using Visual Studio 2005.

    Read the article

  • Windows components in .net

    - by JGC
    Hi, I need a .NET component that lets me partition a year into parts, where a part is created by clicking at its beginning and clicking again at its end. The shape below is a sample of what I need, but I built it out of buttons and their back-colors just to show you: I don't know the name of this component, so I can't search for it. Does anyone know this component or something like it? Thank you.

    Read the article

  • Parallelism in .NET – Part 10, Cancellation in PLINQ and the Parallel class

    - by Reed
    Many routines are parallelized because they are long running processes.  When writing an algorithm that will run for a long period of time, it's typically a good practice to allow that routine to be cancelled.  I previously discussed terminating a parallel loop from within, but have not demonstrated how a routine can be cancelled from the caller’s perspective.  Cancellation in PLINQ and the Task Parallel Library is handled through a new, unified cooperative cancellation model introduced with .NET 4.0. Cancellation in .NET 4 is based around a new, lightweight struct called CancellationToken.  A CancellationToken is a small, thread-safe value type which is generated via a CancellationTokenSource.  There are many goals which led to this design.  For our purposes, we will focus on a couple of specific design decisions:
        Cancellation is cooperative.  A calling method can request a cancellation, but it’s up to the processing routine to terminate – it is not forced.
        Cancellation is consistent.  A single method call requests a cancellation on every copied CancellationToken in the routine.
    Let’s begin by looking at how we can cancel a PLINQ query.  Suppose we wanted to provide the option to cancel our query from Part 6:
        double min = collection
            .AsParallel()
            .Min(item => item.PerformComputation());
    We would rewrite this to allow for cancellation by adding a call to ParallelEnumerable.WithCancellation as follows:
        var cts = new CancellationTokenSource();
        // Pass cts here to a routine that could,
        // in parallel, request a cancellation
        try
        {
            double min = collection
                .AsParallel()
                .WithCancellation(cts.Token)
                .Min(item => item.PerformComputation());
        }
        catch (OperationCanceledException e)
        {
            // Query was cancelled before it finished
        }
    Here, if the user calls cts.Cancel() before the PLINQ query completes, the query will stop processing, and an OperationCanceledException will be raised.  Be aware, however, that cancellation will not be instantaneous.  When cts.Cancel() is called, the query will only stop after the current item.PerformComputation() elements all finish processing.  cts.Cancel() will prevent PLINQ from scheduling a new task for a new element, but will not stop items which are currently being processed.
    This goes back to the first goal I mentioned – Cancellation is cooperative.  Here, we’re requesting the cancellation, but it’s up to PLINQ to terminate. If we wanted to allow cancellation to occur within our routine, we would need to change our routine to accept a CancellationToken, and modify it to handle this specific case:
        public void PerformComputation(CancellationToken token)
        {
            for (int i = 0; i < this.iterations; ++i)
            {
                // Add a check to see if we've been canceled
                // If a cancel was requested, we'll throw here
                token.ThrowIfCancellationRequested();

                // Do our processing now
                this.RunIteration(i);
            }
        }
    With this overload of PerformComputation, each internal iteration checks to see if a cancellation request was made, and will throw an OperationCanceledException at that point, instead of waiting until the method returns.  This is good, since it allows us, as developers, to plan for cancellation, and terminate our routine in a clean, safe state. This is handled by changing our PLINQ query to:
        try
        {
            double min = collection
                .AsParallel()
                .WithCancellation(cts.Token)
                .Min(item => item.PerformComputation(cts.Token));
        }
        catch (OperationCanceledException e)
        {
            // Query was cancelled before it finished
        }
    PLINQ is very good about handling this exception, as well.  There is a very good chance that multiple items will raise this exception, since the entire purpose of PLINQ is to have multiple items be processed concurrently.  PLINQ will take all of the OperationCanceledException instances raised within these methods, and merge them into a single OperationCanceledException in the call stack.  This is done internally because we added the call to ParallelEnumerable.WithCancellation. If, however, a different exception is raised by any of the elements, the OperationCanceledException as well as the other Exception will be merged into a single AggregateException. The Task Parallel Library uses the same cancellation model, as well.  Here, we supply our CancellationToken as part of the configuration.  The ParallelOptions class contains a property for the CancellationToken.  This allows us to cancel a Parallel.For or Parallel.ForEach routine in a very similar manner to our PLINQ query.  As an example, we could rewrite our Parallel.ForEach loop from Part 2 to support cancellation by changing it to:
        try
        {
            var cts = new CancellationTokenSource();
            var options = new ParallelOptions() { CancellationToken = cts.Token };

            Parallel.ForEach(customers, options, customer =>
            {
                // Run some process that takes some time...
                DateTime lastContact = theStore.GetLastContact(customer);
                TimeSpan timeSinceContact = DateTime.Now - lastContact;

                // Check for cancellation here
                options.CancellationToken.ThrowIfCancellationRequested();

                // If it's been more than two weeks, send an email, and update...
                if (timeSinceContact.Days > 14)
                {
                    theStore.EmailCustomer(customer);
                    customer.LastEmailContact = DateTime.Now;
                }
            });
        }
        catch (OperationCanceledException e)
        {
            // The loop was cancelled
        }
    Notice that here we use the same approach taken in PLINQ.  The Task Parallel Library will automatically handle our cancellation in the same manner as PLINQ, providing a clean, unified model for cancellation of any parallel routine.  The TPL performs the same aggregation of the cancellation exceptions as PLINQ, as well, which is why a single exception handler for OperationCanceledException will cleanly handle this scenario.  This works because we’re using the same CancellationToken provided in the ParallelOptions.
If a different exception was thrown by one thread, or a CancellationToken from a different CancellationTokenSource was used to raise our exception, we would instead receive all of our individual exceptions merged into one AggregateException.

    Read the article

  • Parallelism in .NET – Part 18, Task Continuations with Multiple Tasks

    - by Reed
    In my introduction to Task continuations I demonstrated how the Task class provides a more expressive alternative to traditional callbacks.  Task continuations provide a much cleaner syntax than traditional callbacks, but there are other reasons to switch to using continuations… Task continuations provide a clean syntax, and a very simple, elegant means of synchronizing asynchronous method results with the user interface.  In addition, continuations provide a very simple, elegant means of working with collections of tasks. Prior to .NET 4, working with multiple related asynchronous method calls was very tricky.  If, for example, we wanted to run two asynchronous operations, followed by a single method call which we wanted to run when the first two methods completed, we’d have to program all of the handling ourselves.  We would likely need to take some approach such as using a shared callback which synchronized against a common variable, or using a WaitHandle shared within the callbacks to allow one to wait for the second.  Although this could be accomplished easily enough, it requires manually placing this handling into every algorithm which requires this form of blocking.  This is error prone, difficult, and can easily lead to subtle bugs. Similar to how the Task class static methods provide a way to block until multiple tasks have completed, TaskFactory contains static methods which allow a continuation to be scheduled upon the completion of multiple tasks: TaskFactory.ContinueWhenAll. This allows you to easily specify a single delegate to run when a collection of tasks has completed.  For example, suppose we have a class which fetches data from the network.  This can be a long running operation, and potentially fail in certain situations, such as a server being down.  As a result, we have three separate servers which we will “query” for our information.  Now, suppose we want to grab data from all three servers, and verify that the results are the same from all three. With traditional asynchronous programming in .NET, this would require using three separate callbacks, and managing the synchronization between the various operations ourselves.  The Task and TaskFactory classes simplify this for us, allowing us to write:
        var server1 = Task.Factory.StartNew( () => networkClass.GetResults(firstServer) );
        var server2 = Task.Factory.StartNew( () => networkClass.GetResults(secondServer) );
        var server3 = Task.Factory.StartNew( () => networkClass.GetResults(thirdServer) );

        var result = Task.Factory.ContinueWhenAll(
            new[] { server1, server2, server3 },
            (tasks) =>
            {
                // Propagate exceptions (see below)
                Task.WaitAll(tasks);
                return this.CompareTaskResults(
                    tasks[0].Result, tasks[1].Result, tasks[2].Result);
            });
    This is clean, simple, and elegant.  The one complication is the Task.WaitAll(tasks); statement.
    Although the continuation will not complete until all three tasks (server1, server2, and server3) have completed, there is a potential snag.  If the networkClass.GetResults method fails, and raises an exception, we want to make sure to handle it cleanly.  By using Task.WaitAll, any exceptions raised within any of our original tasks will get wrapped into a single AggregateException by the WaitAll method, providing us a simplified means of handling the exceptions.  If we wait on the continuation, we can trap this AggregateException, and handle it cleanly.  Without this line, it’s possible that an exception could remain uncaught and unhandled by a task, which later might trigger a nasty UnobservedTaskException.  This would happen any time two of our original tasks failed. Just as we can schedule a continuation to occur when an entire collection of tasks has completed, we can just as easily set up a continuation to run when any single task within a collection completes.  If, for example, we didn’t need to compare the results of all three network locations, but only use one, we could still schedule three tasks.  We could then have our completion logic work on the first task which completed, and ignore the others.  This is done via TaskFactory.ContinueWhenAny:
        var server1 = Task.Factory.StartNew( () => networkClass.GetResults(firstServer) );
        var server2 = Task.Factory.StartNew( () => networkClass.GetResults(secondServer) );
        var server3 = Task.Factory.StartNew( () => networkClass.GetResults(thirdServer) );

        var result = Task.Factory.ContinueWhenAny(
            new[] { server1, server2, server3 },
            (firstTask) =>
            {
                return this.ProcessTaskResult(firstTask.Result);
            });
    Here, instead of working with all three tasks, we’re just using the first task which finishes.  This is very useful, as it allows us to easily work with results of multiple operations, and “throw away” the others.  However, you must take care when using ContinueWhenAny to properly handle exceptions.  At some point, you should always wait on each task (or use the Task.Result property) in order to propagate any exceptions raised from within the task.  Failing to do so can lead to an UnobservedTaskException.

    Read the article

  • PHP SOAP Error: maximum string content length quota (8192) has been exceeded while reading XML data

    - by Sadi
    I am trying to use Bing with PHP SOAP. It works fine for short strings, but for larger strings I get the following error: Fatal error: Uncaught SoapFault exception: [a:DeserializationFailed] The formatter threw an exception while trying to deserialize the message: Error in deserializing body of request message for operation 'Translate'. The maximum string content length quota (8192) has been exceeded while reading XML data. This quota may be increased by changing the MaxStringContentLength property on the XmlDictionaryReaderQuotas object used when creating the XML reader. Line 9, position 227. in ........ I am using XAMPP on Windows 7. All the solutions I found are for .NET, not PHP, and PHP has no web.config file as far as I know. Any idea how I can solve it? Thank you, Sadi
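
    The quota in that message is enforced by the remote WCF service when it deserializes the request, so it normally cannot be raised from the PHP side; a common client-side workaround is to keep each request under the limit by splitting the text and translating it piece by piece. This is a minimal sketch only; translateChunk() is a hypothetical wrapper around the existing SoapClient Translate call, and the 8000-character limit leaves headroom under the 8192 quota:

        <?php
        // Split $text into pieces that each stay under the service's quota,
        // breaking on sentence boundaries so translations stay coherent.
        function splitForQuota($text, $limit = 8000) {
            $sentences = preg_split('/(?<=[.!?])\s+/u', $text);
            $chunks = array();
            $current = '';
            foreach ($sentences as $sentence) {
                if ($current !== '' && strlen($current) + strlen($sentence) + 1 > $limit) {
                    $chunks[] = $current;
                    $current = '';
                }
                $current .= ($current === '' ? '' : ' ') . $sentence;
            }
            if ($current !== '') {
                $chunks[] = $current;
            }
            return $chunks;
        }

        // translateChunk() stands in for the existing $client->Translate(...) call.
        $translated = '';
        foreach (splitForQuota($longText) as $chunk) {
            $translated .= translateChunk($chunk) . ' ';
        }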

    Read the article

  • Access Products/Category/Attribute Info from php with Magento API

    - by Jason
    Need to be able to pull Magento products into an external template. Need to be able to get all products data (description, title, attributes, categories, image, etc). And need to be able to filter by category, attribute and also search on name. These calls will be made from the same server that the Magento install is on. What's the best way to do this? Will be using php on both linux & windows (2 separate sites). Have tried using the Magento API & Soap to access from php but haven't been able to get that to work yet. All I get is this error every time. Fatal error: Uncaught SoapFault exception: [WSDL] SOAP-ERROR: Parsing WSDL: Couldn't load from 'http://mymagento.com/cart/index.php/api/?wsdl' : Extra content at the end of the document in.....
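
    Since the PHP code runs on the same server as the Magento install, one option that avoids SOAP entirely is to bootstrap Magento directly and query its product collections. This is a minimal sketch against a Magento 1.x install; the Mage.php path, category ID, and search term are placeholder values:

        <?php
        // Bootstrap Magento directly (adjust the path to the real install directory).
        require_once '/var/www/cart/app/Mage.php';
        Mage::app();

        // Products in a given category whose name matches a search term, with all
        // attributes (title, description, image, etc.) selected.
        $category = Mage::getModel('catalog/category')->load(42); // placeholder category ID

        $products = Mage::getModel('catalog/product')->getCollection()
            ->addAttributeToSelect('*')
            ->addCategoryFilter($category)
            ->addAttributeToFilter('name', array('like' => '%widget%'));

        foreach ($products as $product) {
            echo $product->getName(), ' | ',
                 $product->getDescription(), ' | ',
                 $product->getProductUrl(), PHP_EOL;
        }

    The same collection API filters on arbitrary attributes via addAttributeToFilter(), which covers the attribute filtering mentioned above; for the Windows site the only difference should be the path in the require_once.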

    Read the article

  • Multithreading/Parallel Processing in PHP

    - by manyxcxi
    I have a PHP script that will generate a report using PHPExcel from data queried from a MySQL DB. Currently, it is linear in processing in that it gets the data back from MySQL, reads in the Excel template, writes the data to the template, then outputs it. I have optimized the code to the point that the data is only iterated over once, and there is very little processing done on the PHP side. The query returns hundreds of lines in less than .001 seconds, so it is running fast enough. After some timing I have found my bottlenecks to be (surprise, surprise) reading the template and writing the output. I would like to do this: Spawn a thread/process to read the template Spawn a thread/process to fetch the data Return back to parent thread - Parent thread will wait until both are complete Proceed on as normal My main questions are is this possible, is it worth it? If yes to both, how would you tackle it? Also, it is PHP 5 on CentOS
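
    If the host's CLI PHP has the pcntl extension, one way to overlap the two slow steps is to fork a child process that loads the template while the parent fetches the data, then join before writing the output. This is a rough sketch only: load_template(), fetch_report_data() and write_report() stand in for the script's existing code, the child hands its result back through a temp file because forked processes do not share memory, and whether serializing the PHPExcel object is actually cheaper than re-reading the template is something to measure:

        <?php
        // Requires the pcntl extension (CLI only); not available under mod_php or FPM.
        $tmp = tempnam(sys_get_temp_dir(), 'tpl');

        $pid = pcntl_fork();
        if ($pid === -1) {
            die('fork failed');
        } elseif ($pid === 0) {
            // Child: read the Excel template and pass it back via the temp file.
            $template = load_template('report_template.xlsx');   // placeholder
            file_put_contents($tmp, serialize($template));
            exit(0);
        }

        // Parent: fetch the MySQL data while the child reads the template.
        $rows = fetch_report_data();                              // placeholder

        pcntl_waitpid($pid, $status);                             // join: wait for the child
        $template = unserialize(file_get_contents($tmp));
        unlink($tmp);

        write_report($template, $rows);                           // placeholder: fill and output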

    Read the article

  • PHP CLI version issue

    - by seengee
    I am trying to use Zend Tool on my Media Temple Grid-Service account. The problem is that the installed CLI version of PHP is 4.4.8 and Zend Framework needs PHP 5. On a per-account basis it's possible to choose PHP 4 or 5, but not for the CLI. It's possible to globally select PHP 5 by using the .php5 extension, but in the case of Zend Tool, which is called by a shell script zf.sh, I'm not sure what options I have. PHP 5 is on the server at /usr/bin/php5 and someone at MT has suggested creating an alias so php=/usr/bin/php5, but I'm not sure that will work. Any ideas?

    Read the article

  • PHP-GD: Dealing with Unicode characters

    - by sehugg
    I am developing a web service that renders characters using the PHP GD extension, using a user-selected TTF font. This works fine in ASCII-land, but there are a few problems: The string to be rendered comes in as UTF-8. I would like to limit the list of user-selectable fonts to only those which can render the string properly, as some fonts only have glyphs for ASCII characters, ISO 8859-1, etc. In the case where some decorative characters are included, it would be fine to render the majority of characters in the selected font and render the decorative characters in Arial (or whatever font contains the extended glyphs). It does not seem like PHP-GD supports querying font metadata well enough to figure out whether a character can be rendered in a given font. What is a good way to get font metrics into PHP? Is there a command-line utility that can dump them in XML or another parsable format?

    Read the article

  • Compiling my own PHP extension on Windows with Visual Studio 2008

    - by Mickey Shine
    I wrote a PHP extension and it compiles and runs successfully under Linux, but on Windows I ran into some problems. I did the compiling on Windows according to http://blog.slickedit.com/?p=128 with the PHP 5.2.10 source, and it generated the dll file. But when I tried to use the dll file, it reported memory problems when starting Apache (WAMP server). I then started debugging, and it seems that REGISTER_INI_ENTRIES() has problems. Here is the PHP extension source code, http://www.bluefly.cn/xsplit.tar.gz , and it works fine on Linux. But I also want to make it work on Windows. Sorry, I am not a pro, so I hope someone can help me. Any help is appreciated and thanks in advance~

    Read the article
