Search Results

Search found 25198 results on 1008 pages for 'failed request tracing'.

Page 443/1008 | < Previous Page | 439 440 441 442 443 444 445 446 447 448 449 450  | Next Page >

  • .NET HttpModule does not send form variables to PHP file on RewritePath()

    - by jammus
    Hello friends. We have an application running on IIS 6 which uses a custom HttpModule to rewrite URLs. This works great (well done us) except when the Context.RewritePath destination is a .php file. The PHP file is executed as expected, but the $_POST collection is empty, meaning the script cannot access any forms submitted to rewritten URLs. The problem does not exist when rewriting to .aspx files, as the Request.Form collection is fine. My question therefore has two parts: 1) Why is the $_POST collection not being populated? 2) Is there a way to ensure that the PHP $_POST collection is correctly populated after a rewrite? I don't have much to show in the way of code. There's just a simple context.RewritePath(newPath); once the HttpModule has figured out where to send the request. Edit: Interestingly, if I do var_dump(file_get_contents('php://input')); in the PHP file (method described here), the contents of the form are displayed. So the data is reaching the PHP script but not the $_POST array.
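
    Since php://input still carries the raw form body, one workaround (a minimal sketch, assuming the form arrives as application/x-www-form-urlencoded; multipart bodies would need different handling) is to rebuild $_POST at the top of the rewritten script:

        <?php
        // Fallback: the rewrite left $_POST empty but a request body arrived,
        // so parse the raw body into $_POST manually.
        if ($_SERVER['REQUEST_METHOD'] === 'POST' && empty($_POST)) {
            parse_str(file_get_contents('php://input'), $_POST);
        }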

    Read the article

  • Magento adds slashes to file_name

    - by Kein
    I'm trying to extend a core class, but I get this error:

        Warning: include( \\\\\\\\\\\\MyModule\Ajaxsearch\Model\Resource\Eav\Mysql4\Product\Collection \\\\\\\\\\.php) [function.include]: failed to open stream:...

    Why does Magento add slashes to the address? Maybe a config error?

    Read the article

  • Saving tags into a database table in CakePHP

    - by Cameron
    I have the following setup for my CakePHP app:

        Posts: id, title, content
        Topics: id, title
        Topic_Posts: id, topic_id, post_id

    So basically I have a table of Topics (tags) that are all unique and have an id. They can be attached to a post via the Topic_Posts join table. When a user creates a new post they will fill in the topics by typing them into a textarea, separated by commas; these should be saved into the Topics table if they do not already exist, with the references saved into the Topic_Posts table. I have the models set up like so:

    Post model:

        class Post extends AppModel {
            public $name = 'Post';
            public $hasAndBelongsToMany = array(
                'Topic' => array('with' => 'TopicPost')
            );
        }

    Topic model:

        class Topic extends AppModel {
            public $hasMany = array('TopicPost');
        }

    TopicPost model:

        class TopicPost extends AppModel {
            public $belongsTo = array('Topic', 'Post');
        }

    And for the new post method I have this so far:

        public function add() {
            if ($this->request->is('post')) {
                //$this->Post->create();
                if ($this->Post->saveAll($this->request->data)) {
                    // Redirect the user to the newly created post (pass the slug for performance)
                    $this->redirect(array('controller' => 'posts', 'action' => 'view', 'id' => $this->Post->id));
                } else {
                    $this->Session->setFlash('Server broke!');
                }
            }
        }

    As you can see I have used saveAll, but how do I go about dealing with the Topic data?
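
    One common approach (an untested sketch; the data['Post']['topics'] form field holding the comma-separated string is hypothetical) is to split the string in the controller, find-or-create each Topic, and hand the resulting ids to saveAll using Cake's HABTM convention:

        public function add() {
            if ($this->request->is('post')) {
                // Turn "php, cake, tags" into an array of Topic ids.
                $topicIds = array();
                $names = array_map('trim', explode(',', $this->request->data['Post']['topics']));
                foreach (array_filter($names) as $name) {
                    $topic = $this->Post->Topic->find('first', array(
                        'conditions' => array('Topic.title' => $name)
                    ));
                    if (!$topic) {
                        $this->Post->Topic->create();
                        $this->Post->Topic->save(array('Topic' => array('title' => $name)));
                        $topicIds[] = $this->Post->Topic->id;
                    } else {
                        $topicIds[] = $topic['Topic']['id'];
                    }
                }
                // HABTM saves accept a plain list of ids under the association key.
                $this->request->data['Topic']['Topic'] = $topicIds;
                if ($this->Post->saveAll($this->request->data)) {
                    $this->redirect(array('controller' => 'posts', 'action' => 'view', 'id' => $this->Post->id));
                }
            }
        }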

    Read the article

  • Integration tests - "no exceptions are thrown" approach. Does it make sense?

    - by Andrew Florko
    Sometimes integration tests are rather complex to write, or developers don't have enough time to check output. Does it make sense to write tests that only make sure "no exceptions are thrown"? Such tests provide one or more sets of input parameters and don't check the result; they only make sure the code doesn't fail with an exception. Maybe such tests are not very useful, but are they appropriate in situations when you have no time?
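
    For illustration, a minimal "smoke test" of this kind in Python (process_batch and its module path are hypothetical stand-ins for the code under test):

        import unittest

        from myapp.pipeline import process_batch  # hypothetical entry point


        class SmokeTest(unittest.TestCase):
            """Asserts only that representative inputs run to completion."""

            def test_runs_without_raising(self):
                for params in ({"size": 0}, {"size": 1}, {"size": 10000}):
                    # Any uncaught exception fails the test; output is not checked.
                    process_batch(**params)


        if __name__ == "__main__":
            unittest.main()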

    Read the article

  • Tomcat startup logs - SEVERE: Error filterStart how to get a stack trace?

    - by Benju
    When I start Tomcat I get the following error:

        Jun 10, 2010 5:17:25 PM org.apache.catalina.core.StandardContext start
        SEVERE: Error filterStart
        Jun 10, 2010 5:17:25 PM org.apache.catalina.core.StandardContext start
        SEVERE: Context [/mywebapplication] startup failed due to previous errors

    It seems odd that the logs for Tomcat would not include a stack trace. Does somebody have a suggestion for how to increase the logging in Tomcat to get stack traces for errors like this?
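
    The stack trace is produced, it just isn't routed anywhere by default for webapp classloader errors. The usual remedy (per the Tomcat logging documentation; the file prefix is illustrative) is to add a logging.properties inside the web app's WEB-INF/classes so the underlying exception lands in a log file:

        # WEB-INF/classes/logging.properties
        handlers = org.apache.juli.FileHandler, java.util.logging.ConsoleHandler

        # Write detailed startup errors (including the filterStart stack trace)
        # to logs/mywebapp.<date>.log
        org.apache.juli.FileHandler.level = FINE
        org.apache.juli.FileHandler.directory = ${catalina.base}/logs
        org.apache.juli.FileHandler.prefix = mywebapp.

        java.util.logging.ConsoleHandler.level = FINE
        java.util.logging.ConsoleHandler.formatter = java.util.logging.SimpleFormatter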

    Read the article

  • Strange behavior with ajax call complete in JavaScript modules

    - by user2598794
    I have 3 simple modules with JavaScript code and a jQuery ajax call. First module, lots.js:

        var Lots = (function ($) {
            var self = this;
            var processIsRunning;
            return {
                getLots: function (lotsUrl) {
                    var items = [];
                    self.processIsRunning = true;
                    var request = $.ajax({
                        url: lotsUrl,
                        type: 'POST',
                        success: function (data) {
                            //some code
                        }
                    });
                    $.when(request).done(function () {
                        //some code
                        self.processIsRunning = false;
                    });
                },
                isComplete: function () {
                    return !self.processIsRunning;
                }
            };
        }(jQuery));

    Module bids.js:

        var Bids = (function ($) {
            return {
                makeBids: function (bidUrl) {
                    //some code
                }
            };
        }(jQuery));

    Module app.js, which bundles it all together:

        var App = (function () {
            var lots_url = null;
            var bid_url = null;
            var self = this;
            return {
                GetLots: function (lotsUrl) {
                    if (!self.lots_url) {
                        self.lots_url = lotsUrl;
                    }
                    Lots.getLots(self.lots_url);
                },
                MakeBids: function makeBid(bidUrl) {
                    //some code
                    var isComp = Lots.isComplete();
                    while (!isComp) {
                        isComp = Lots.isComplete();
                    }
                    Bids.makeBids(self.bid_url);
                }
            };
        }());

    But in the 'while' loop I always get isComplete = false. In debug I can see that processIsRunning in the Lots module is always true. What's the problem?
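
    The busy-wait is the problem: JavaScript runs on a single thread, so the while loop never yields, and the ajax done handler that would set processIsRunning to false can never execute, leaving isComplete() stuck at false by construction. A sketch of the usual fix (names follow the question's modules): have getLots return its promise and chain the bids off it instead of polling:

        // lots.js: return the request so callers can chain on it.
        getLots: function (lotsUrl) {
            return $.ajax({ url: lotsUrl, type: 'POST' });
        },

        // app.js: run makeBids only after the lots request resolves,
        // instead of spinning in a loop that blocks the event loop.
        MakeBids: function (bidUrl) {
            Lots.getLots(this.lots_url).done(function () {
                Bids.makeBids(bidUrl);
            });
        }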

    Read the article

  • Apache mod_rewrite - forward domain root to subdirectory

    - by DuFace
    I have what I originally assumed to be a simple problem. I am using shared hosting for my website (so I don't have access to the Apache configuration) and have only been given a single folder to store all my content in. This is all well and good, but it means that all my subdomains must have their virtual document roots inside public_html, meaning they effectively become folders on my main domain. What I'd like to do is organise my public_html something like this:

        public_html/
            www/
                index.php
                ...
            sub1/
                index.php
                ...
            some_library/
                ...

    This way, all my web content is still in public_html but only a small fraction of it will be served to the client. I can easily achieve this for all the subdomains, but it's the primary domain that I'm having issues with. I created a .htaccess file in public_html with the following:

        Options +SymLinksIfOwnerMatch # I'm not allowed to use FollowSymLinks
        RewriteEngine on
        RewriteBase /
        RewriteCond %{REQUEST_URI} !^/www [NC]
        RewriteRule ^(.*)$ /www/$1 [L]

    This works fairly well, but for some strange reason www.example.com/stuff is translated into a request for www.example.com/www/stuff and hence a 404 error is given. It was my understanding that unless an 'R' flag was specified, mod_rewrite was purely internal, so I can't understand why the request is generated as that implies (to me at least) redirection. I assumed this would be a trivial problem to solve, as all I actually want to do is forward all requests for the root of www.example.com to a subdirectory, but I've spent hours searching for answers and none are quite correct. I find it difficult to believe I'm the only person to have this issue. I apologise if this question has been answered on here before; I did search and trawl but couldn't find an appropriate answer. Please could someone shed some light on this?
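
    One variant worth trying (hedged; shared hosts vary in how per-directory rewrites replay on internal subrequests): anchor the exclusion with a trailing slash and use a relative substitution, so the rule cannot re-fire once the request is already inside the subtree:

        RewriteEngine on
        RewriteBase /

        # Don't touch anything already under /www/ (note the trailing slash,
        # so /wwwsomething is still rewritten but /www/... is left alone).
        RewriteCond %{REQUEST_URI} !^/www/ [NC]
        RewriteRule ^(.*)$ www/$1 [L]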

    Read the article

  • How would a user stay logged in to a REST-based website?

    - by unforgiven3
    A year or so ago I asked this question: Can you help me understand this? “Common REST Mistakes: Sessions are irrelevant”. My question was essentially this: Okay, I get that HTTP authentication is done automatically on every message - but how? Is the username/password sent with every request? Doesn't that just increase attack surface area? I feel like I'm missing part of the puzzle. The answers I received made perfect sense in the context of a mobile (iPhone, Android, WP7) app - when talking to a REST service, the app would just send user credentials along with each request. That worked great for me. But now, I would like to better understand how one would secure a REST-like website, like StackOverflow itself or something like Reddit. How would things work if it was a user logged in via a web browser instead of logged in via an iPhone app? What happens when a user logs in? Are the credentials saved in the browser somehow? How would the browser know what credentials to send with subsequent REST requests? What if it's a JavaScript call to a webservice? How would the JavaScript call include user credentials? I'll be quite frank: my understanding of security when it comes to websites is pretty limited. I enjoyed working with REST services from an app perspective, but now I want to try and build a website that is based on REST principles, and I'm finding myself to be pretty lost. If there is anything in the above question that is unclear that you'd like me to clarify, please leave a comment and I'll address it.
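
    For illustration (a framework-agnostic sketch; the endpoint names are made up): a browser-based client typically trades the credentials for a token once, then attaches that token to every subsequent REST call, which is exactly what a session cookie does automatically since the browser re-sends it with each request to the same domain. With jQuery 1.5+ it looks like this:

        // Log in once: exchange credentials for a token the server can verify.
        $.post('/api/login', { user: 'alice', pass: 'secret' }, function (resp) {
            sessionStorage.setItem('authToken', resp.token);
        });

        // Every later REST call carries the token instead of the raw credentials.
        $.ajax({
            url: '/api/posts',
            headers: { 'Authorization': 'Bearer ' + sessionStorage.getItem('authToken') }
        });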

    Read the article

  • Unable to install latest PHPUnit on Ubuntu 10.04

    - by Dex Barrett
    I'm trying to install PHPUnit on Ubuntu 10.04 but I get these error messages:

        sudo pear install -a pear.phpunit.de/PHPUnit
        Duplicate package channel://pear.phpunit.de/File_Iterator-1.3.3 found
        Duplicate package channel://pear.phpunit.de/File_Iterator-1.3.2 found
        install failed

    I tried reinstalling PEAR and upgrading it, updated the PEAR and PHPUnit channels, and cleared PEAR's cache, but still no luck; I keep getting the same error. Does anyone have the same problem and know how to solve it? Thank you.
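
    The "Duplicate package ... found" failure was reportedly an installer bug in older PEAR releases, so upgrading PEAR itself before retrying is worth a shot (standard pear commands; --alldeps pulls in PHPUnit's dependencies):

        sudo pear channel-update pear.php.net
        sudo pear upgrade PEAR
        sudo pear clear-cache
        sudo pear install --alldeps pear.phpunit.de/PHPUnit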

    Read the article

  • Incompatible library creating new project with Aptana

    - by Phil Rice
    I am a Ruby and Rails newbie, so my ability to debug this is somewhat limited. I have just added the Eclipse plugin, which failed, then downloaded the latest Aptana Studio, which also failed. The failure was the same in both cases: when I create a new Rails project, I get an error message about an incompatible library version "C:/Ruby193/lib/ruby/gems/1.9.1/gems/mongrel-1.1.5-x86-mswin32-60/lib/http11.so". The project is actually created, along with directories and files. Google searches around this error message have only returned a couple of hits, which were not very helpful. I am wondering if this is about 64-bit libraries. My software stack is:

        Windows 7 Home Premium 64-bit
        Aptana RadRails, build: 2.0.5.1278709071
        Ruby 1.9.3
        gem 1.8.24

    The console shows:

        "4320"
        C:/Ruby193/lib/ruby/site_ruby/1.9.1/rubygems/custom_require.rb:36:in `require': iconv will be deprecated in the future, use String#encode instead.
        C:/Ruby193/lib/ruby/site_ruby/1.9.1/rubygems/custom_require.rb:36:in `require': incompatible library version - C:/Ruby193/lib/ruby/gems/1.9.1/gems/mongrel-1.1.5-x86-mswin32-60/lib/http11.so (LoadError)
            from C:/Ruby193/lib/ruby/site_ruby/1.9.1/rubygems/custom_require.rb:36:in `require'
            from C:/Ruby193/lib/ruby/gems/1.9.1/gems/activesupport-2.3.4/lib/active_support/dependencies.rb:156:in `block in require'
            from C:/Ruby193/lib/ruby/gems/1.9.1/gems/activesupport-2.3.4/lib/active_support/dependencies.rb:521:in `new_constants_in'
            from C:/Ruby193/lib/ruby/gems/1.9.1/gems/activesupport-2.3.4/lib/active_support/dependencies.rb:156:in `require'
            from C:/Ruby193/lib/ruby/gems/1.9.1/gems/mongrel-1.1.5-x86-mswin32-60/lib/mongrel.rb:12:in `<top (required)>'
            from C:/Ruby193/lib/ruby/site_ruby/1.9.1/rubygems/custom_require.rb:60:in `require'
            from C:/Ruby193/lib/ruby/site_ruby/1.9.1/rubygems/custom_require.rb:60:in `rescue in require'
            from C:/Ruby193/lib/ruby/site_ruby/1.9.1/rubygems/custom_require.rb:35:in `require'
            from C:/Ruby193/lib/ruby/gems/1.9.1/gems/activesupport-2.3.4/lib/active_support/dependencies.rb:156:in `block in require'
            from C:/Ruby193/lib/ruby/gems/1.9.1/gems/activesupport-2.3.4/lib/active_support/dependencies.rb:521:in `new_constants_in'
            from C:/Ruby193/lib/ruby/gems/1.9.1/gems/activesupport-2.3.4/lib/active_support/dependencies.rb:156:in `require'
            from C:/Ruby193/lib/ruby/gems/1.9.1/gems/rack-1.0.0/lib/rack/handler/mongrel.rb:1:in `<top (required)>'
            from C:/Ruby193/lib/ruby/gems/1.9.1/gems/rack-1.0.0/lib/rack/handler.rb:17:in `const_get'
            from C:/Ruby193/lib/ruby/gems/1.9.1/gems/rack-1.0.0/lib/rack/handler.rb:17:in `block in get'
            from C:/Ruby193/lib/ruby/gems/1.9.1/gems/rack-1.0.0/lib/rack/handler.rb:17:in `each'
            from C:/Ruby193/lib/ruby/gems/1.9.1/gems/rack-1.0.0/lib/rack/handler.rb:17:in `get'
            from C:/Ruby193/lib/ruby/gems/1.9.1/gems/rails-2.3.4/lib/commands/server.rb:45:in `<top (required)>'
            from C:/Ruby193/lib/ruby/site_ruby/1.9.1/rubygems/custom_require.rb:36:in `require'
            from C:/Ruby193/lib/ruby/site_ruby/1.9.1/rubygems/custom_require.rb:36:in `require'
            from script/server:3:in `<top (required)>'
            from -e:2:in `load'
            from -e:2:in `<main>'
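
    The http11.so in that trace ships with the mswin32 build of mongrel 1.1.5, which was compiled against Ruby 1.8; loading it under Ruby 1.9.3 is exactly what the LoadError means, so it is not a 64-bit issue. If that diagnosis fits, two things that have worked for others (standard gem/Rails 2.3 commands):

        gem uninstall mongrel
        gem install mongrel --pre     # installs the 1.9-compatible prerelease

        # or sidestep mongrel entirely and run the app under WEBrick:
        ruby script/server webrick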

    Read the article

  • Lightweight HTTP application/server for static content

    - by PartlyCloudy
    Hi, I am in need of a scalable and performant HTTP application/server that will be used for static file serving/uploading, so I only need support for GET and PUT operations. However, there are a few extra features that I need:

        Custom authentication: I need to check credentials against a database for each request, so I must be able to integrate proprietary database interaction.
        Support for signed access keys: Access to resources via PUT should be signed using a key like http://uri/?key=foo. The key contains information about the request, like md5(user + path + secret), which allows me to block unwanted requests. The application/server should allow me to check for this.
        Performance: I'd like to avoid piping content as much as possible; otherwise the whole application could be implemented in Perl/etc. in a few lines as CGI.

    Perlbal (in webserver mode) looks nice; however, the single-threaded model does not fit with my database lookup, and it also does not support query strings. Lighttpd/Nginx/... have some modules for these tasks; however, it is not feasible to put everything together without ending up writing my own extensions/modules. So how would you solve this? Are there other lightweight webservers available for this? Should I implement an application inside of a webserver (i.e. CGI)? How can I avoid/speed up piping content between the webserver and my application? Thanks in advance!
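
    One pattern that fits this shape (a sketch, assuming nginx plus a small auth backend of your choosing; ports and paths are illustrative): let the application validate the credentials and the signed key, then hand the actual byte-serving back to nginx via an internal redirect, so nothing is piped through the app:

        # nginx: every request hits the auth application first.
        location / {
            proxy_pass http://127.0.0.1:9000;   # your auth application
        }

        # Only reachable via X-Accel-Redirect from the app, never directly.
        location /protected/ {
            internal;
            alias /var/data/files/;
        }

    The app checks the database credentials and the ?key=... signature, and on success replies with just an X-Accel-Redirect: /protected/<path> header; nginx then serves the file itself with sendfile.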

    Read the article

  • Django - partially validating form

    - by aeter
    I'm new to Django and trying to process some forms. I have this form for entering information (creating a new ad) in one template:

        class Ad(models.Model):
            ...
            category = models.CharField("Category", max_length=30, choices=CATEGORIES)
            sub_category = models.CharField("Subcategory", max_length=4, choices=SUBCATEGORIES)
            location = models.CharField("Location", max_length=30, blank=True)
            title = models.CharField("Title", max_length=50)
            ...

    I validate it with is_valid() just fine. For the second validation (in another template) I want to validate only against category and sub_category: there I want to use two fields from the same form (category and sub_category) for filtering information, and now the is_valid() method does not work correctly, because it validates the entire form and I need to validate only two fields. I have tried the following:

        ...
        if request.method == 'POST':
            # If a filter for data has been submitted:
            form = AdForm(request.POST)
            try:
                form = form.clean()
                category = form.category
                sub_category = form.sub_category
                latest_ads_list = Ad.objects.filter(category=category)
            except ValidationError:
                latest_ads_list = Ad.objects.all().order_by('pub_date')
        else:
            latest_ads_list = Ad.objects.all().order_by('pub_date')
            form = AdForm()
        ...

    but it doesn't work. How can I validate only the two fields category and sub_category?
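
    The usual Django answer is a second, narrower form for the filter view, so is_valid() only checks the two fields actually collected. A sketch (the view name and import path are illustrative):

        from django import forms

        from ads.models import Ad, CATEGORIES, SUBCATEGORIES  # illustrative path


        class AdFilterForm(forms.Form):
            # Reuses the same choice lists as the Ad model.
            category = forms.ChoiceField(choices=CATEGORIES)
            sub_category = forms.ChoiceField(choices=SUBCATEGORIES)


        def ad_list(request):
            latest_ads_list = Ad.objects.all().order_by('pub_date')
            form = AdFilterForm(request.POST) if request.method == 'POST' else AdFilterForm()
            if request.method == 'POST' and form.is_valid():
                latest_ads_list = Ad.objects.filter(
                    category=form.cleaned_data['category'],
                    sub_category=form.cleaned_data['sub_category'],
                )
            # render the template with form and latest_ads_list as before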

    Read the article

  • SQL Server job (stored proc) trace

    - by Jit
    Hi friends, I need your suggestions on tracing this issue. We run data load jobs early in the morning, loading data from an Excel file into a SQL Server 2005 database. When the job runs on the production server, it often takes 2 to 3 hours to complete. We have drilled down to one job step which takes 99% of the total time. Running that job step (stored procs) on the staging environment (with the same production database restored) takes 9 to 10 minutes, while the same step takes hours on the production server when it runs early in the morning as part of the job. The production server always gets stuck at this particular job step. I would like to run a trace on just this job step (around 10 stored procs run for each user in a while loop within the step) and collect info to figure out the issue. What are the ways available in SQL Server 2005 to achieve this? I want to run the trace only for these SPs, not for a certain time period on the production server, as a full trace produces a lot of information and it becomes very difficult for me (not being a DBA) to analyze that much trace data and figure out the issue. So I want to collect info about specific SPs only. Let me know what you suggest. I appreciate your time and help. Thanks.
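
    Besides a filtered Profiler trace (Profiler lets you filter on ObjectName or TextData so only those procedures are captured), SQL Server 2005's DMVs can narrow things down without tracing at all. A sketch; the LIKE pattern is a placeholder for your procedure name:

        -- Top statements by total elapsed time since the plan was cached,
        -- restricted to text matching the suspect procedure.
        SELECT TOP 20
            qs.total_elapsed_time / qs.execution_count AS avg_elapsed,
            qs.execution_count,
            st.text
        FROM sys.dm_exec_query_stats AS qs
        CROSS APPLY sys.dm_exec_sql_text(qs.sql_handle) AS st
        WHERE st.text LIKE '%usp_MySuspectProc%'   -- placeholder name
        ORDER BY qs.total_elapsed_time DESC;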

    Read the article

  • PHP problem with getimagesize()

    - by RobHardgood
    I'm using the getimagesize() function in PHP and it keeps returning an error:

        getimagesize(image.php?name=username&pic=picture) [function.getimagesize]: failed to open stream: No such file or directory

    I'm not doing anything strange with it. The only problem I can imagine is that the path URL is another PHP script that returns a page with an image header, and there is an ampersand in that URL. Here is my code:

        $location = "image.php?name=username&pic=picture";
        $size = getimagesize($location);
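
    The giveaway is "No such file or directory": a bare relative string like that is treated as a filesystem path, so PHP looks for a file literally named "image.php?name=...", which doesn't exist. The ampersand is innocent. getimagesize() will fetch over HTTP only if you hand it a full URL (and allow_url_fopen is enabled); a sketch with an illustrative host:

        <?php
        // Give getimagesize() an absolute URL so it uses the HTTP wrapper
        // instead of looking for a local file with '?' in its name.
        $location = "http://example.com/image.php?name=username&pic=picture";
        $size = getimagesize($location);   // requires allow_url_fopen = On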

    Read the article

  • How to create an SVN folder in Mac OS X

    - by niceramar
    Hi, I am working on an iPhone project and I'd like to create an SVN repository and link it to my server. I tried to run the command below:

        fsp3s-MacBook-Pro:~ fsp3$ svnadmin create /ram/Code/SVN

    and got this error:

        svnadmin: Repository creation failed
        svnadmin: Could not create top-level directory
        svnadmin: Can't create directory '/ram/Code/SVN': No such file or directory

    How do I create an SVN folder in Mac OS X? Thanks!
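
    svnadmin create makes the repository directory itself but not missing parents, and the error says /ram/Code doesn't exist yet. Creating the parents first should clear it (standard shell commands; adjust the path to taste):

        mkdir -p /ram/Code            # create the missing parent directories
        svnadmin create /ram/Code/SVN
        svn ls file:///ram/Code/SVN   # sanity check: empty repository listing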

    Read the article

  • Need Help with .NET OpenId HttpHandler

    - by Mark E
    I'm trying to use a single HttpHandler to authenticate a user's OpenID and receive a ClaimsResponse. The initial authentication works, but the claims response does not; the error I receive is "This webpage has a redirect loop." What am I doing wrong?

        public class OpenIdLogin : IHttpHandler
        {
            private HttpContext _context = null;

            public void ProcessRequest(HttpContext context)
            {
                _context = context;
                var openid = new OpenIdRelyingParty();
                var response = openid.GetResponse();
                if (response == null)
                {
                    // Stage 2: user submitting Identifier
                    openid.CreateRequest(context.Request.Form["openid_identifier"]).RedirectToProvider();
                }
                else
                {
                    // Stage 3: OpenID Provider sending assertion response
                    switch (response.Status)
                    {
                        case AuthenticationStatus.Authenticated:
                            //FormsAuthentication.RedirectFromLoginPage(response.ClaimedIdentifier, false);
                            string identifier = response.ClaimedIdentifier;
                            //****** TODO only proceed if we don't have the user's extended info in the database **************
                            ClaimsResponse claim = response.GetExtension<ClaimsResponse>();
                            if (claim == null)
                            {
                                //IAuthenticationRequest req = openid.CreateRequest(identifier);
                                IAuthenticationRequest req = openid.CreateRequest(Identifier.Parse(identifier));
                                var fields = new ClaimsRequest();
                                fields.Email = DemandLevel.Request;
                                req.AddExtension(fields);
                                req.RedirectingResponse.Send(); //Is this correct?
                            }
                            else
                            {
                                context.Response.ContentType = "text/plain";
                                context.Response.Write(claim.Email); //claim.FullName;
                            }
                            break;
                        case AuthenticationStatus.Canceled:
                            //TODO
                            break;
                        case AuthenticationStatus.Failed:
                            //TODO
                            break;
                    }
                }
            }
        }
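
    A plausible culprit (hedged; it depends on the provider): in stage 3, whenever the assertion arrives without a ClaimsResponse, the handler immediately starts a new authentication round. If the provider ignores the claims extension, the next assertion is also claim-less and the cycle repeats, which the browser reports as a redirect loop. The usual DotNetOpenAuth pattern is to attach the ClaimsRequest to the very first request, before the initial redirect; a sketch of that stage-2 branch:

        // Stage 2: ask for the e-mail up front, on the one and only redirect.
        IAuthenticationRequest req =
            openid.CreateRequest(context.Request.Form["openid_identifier"]);
        req.AddExtension(new ClaimsRequest { Email = DemandLevel.Request });
        req.RedirectToProvider();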

    Read the article

  • XMLHttpRequest POST Data Size

    - by usurper
    Hi, is there a size limit to an XHR POST request? I am using the POST method to save text data into MySQL via a PHP script, and the data gets cut off. Firebug sends me the following message:

        ... Firebug request size limit has been reached by Firebug. ...

    This is my code for sending the data:

        function makeXHR(recordData) {
            xmlhttp = createXHR();
            var body = "q=" + encodeURIComponent(recordData);
            xmlhttp.open("POST", "insertRowData.php", true);
            xmlhttp.setRequestHeader("Content-Type", "application/x-www-form-urlencoded");
            xmlhttp.setRequestHeader("Content-length", body.length);
            xmlhttp.setRequestHeader("Connection", "close");
            xmlhttp.onreadystatechange = function() {
                if (xmlhttp.readyState == 4 && xmlhttp.status == 200) {
                    //alert(xmlhttp.responseText);
                    alert("Records were saved successfully!");
                }
            }
            xmlhttp.send(body);
        }

    The only solution I can think of is splitting the data and making a queue of XHR requests, but I don't like it. Is there another way?
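
    XHR itself imposes no practical POST size limit; the Firebug line only means Firebug stopped displaying the body, not that it was truncated on the wire. Truncation server-side usually points at PHP's request limits or at the MySQL column being too small (a VARCHAR(255) silently truncates in non-strict mode). Hedged first things to check in php.ini (the sizes are examples):

        ; php.ini: past this size the POST body is dropped entirely
        post_max_size = 32M

        ; only matters for file uploads, but commonly raised alongside it
        upload_max_filesize = 32M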

    Read the article

  • Unit test helper methods?

    - by Aly
    Hi, I have classes which previously had massive methods, so I subdivided the work of those methods into 'helper' methods. These helper methods are declared private to enforce encapsulation. However, I want to unit test the big public methods. Is it good to unit test the helper methods too? If one of them fails, the public method that calls it will also fail, but this way we can identify why it failed. Also, in order to test these using a mock object I would need to change their visibility from private to protected. Is this desirable?
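
    If this is Java (an assumption; the classes and numbers below are purely illustrative), a middle ground is package-private rather than protected: the helper stays invisible outside its package, while a test in the same package can call it directly:

        // Production class: the helper is package-private, not public or protected.
        public class InvoiceCalculator {
            public long total(long net) { return net + tax(net); }

            long tax(long net) {                 // package-private helper
                return Math.round(net * 0.2);
            }
        }

        // A test placed in the same package sees tax() without widening it.
        class InvoiceCalculatorTest {
            @org.junit.Test
            public void taxIsTwentyPercent() {
                org.junit.Assert.assertEquals(20, new InvoiceCalculator().tax(100));
            }
        }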

    Read the article

  • How can I prevent double file uploading with Amazon S3?

    - by Tony
    I decided to use Amazon S3 for document storage for an app I am creating. One issue I ran into: while I need to upload the files to S3, I also need to create a document object in my app so my users can perform CRUD actions. One solution is to allow a double upload: a user uploads a document to the server my Rails app lives on, I validate and create the object, then pass it on to S3. One issue with this is that progress indicators become more complicated. Most out-of-the-box plugins would show the client that the file has finished uploading because it is on my server, but then there would be a decent delay while the file travels from my server to S3. This also introduces unnecessary bandwidth (at least it does not seem necessary). The other solution I am thinking about is to upload the file directly to S3 with one AJAX request, and when that is successful, make a second AJAX request to store the object in my database. One issue here is that I would have to validate the file after it is uploaded, which means I have to run some clean-up code in S3 if the validation fails. Both seem equally messy. Does anyone have something more elegant working that they would not mind sharing? I would imagine this is a common situation with "cloud storage" being quite popular today. Maybe I am looking at this wrong.
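
    A refinement of the second option that avoids most of the clean-up worry (a sketch using the modern aws-sdk gem, which postdates this question; bucket, routes, and model are illustrative, and AWS credentials/region are assumed to be configured in the environment): have Rails issue a short-lived presigned PUT URL, let the browser upload straight to S3, then register the document in a callback:

        # app/controllers/uploads_controller.rb (sketch)
        require 'aws-sdk-s3'

        class UploadsController < ApplicationController
          # Step 1: client asks for a presigned URL before uploading.
          def presign
            obj = Aws::S3::Resource.new.bucket('my-docs').object("docs/#{SecureRandom.uuid}")
            render json: { url: obj.presigned_url(:put, expires_in: 300), key: obj.key }
          end

          # Step 2: after the browser PUTs to S3, it posts the key back here
          # and we create the Document row, validating metadata server-side.
          def complete
            Document.create!(s3_key: params[:key], name: params[:name])
            head :ok
          end
        end

    Because the URL expires in minutes and points at a server-generated key, orphaned objects from abandoned uploads can be swept up later by a scheduled job rather than inline clean-up code.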

    Read the article

  • What did you develop using a microcontroller?

    - by DR
    I've always been fascinated by microcontrollers and I'm planning to do a few hobby projects just to satisfy my inner geek :) I'm looking for ideas and motivation, so what did you develop using a microcontroller? If possible please state the microcontroller and/or development environment and an estimate on hardware costs beyond the basic equipment (if applicable). I'm interested in both successful and failed projects and any problems you encountered.

    Read the article

  • Displaying performance metrics in a modern web app?

    - by Charles
    We're updating our ancient internal PHP application at work. Right now, we gather extensive performance measurements on every pageview, and log them to the database. Additionally, users requested that some of the metrics be displayed at the bottom of the page. This worked out pretty well for us, because the last thing that the application does on every request is include the file containing the HTML footer. The updated parts of the application use an MVC framework and a Dispatch/Request/Response loop. The page footer is no longer the last thing done. In fact, it could very well be the first thing done, before the rest of the page is created. Because we can grab the Response before it's returned to the user, we could try to include placeholders for the performance metrics in the footer and simply replace them with the actual numbers, but this strikes me as a bad idea somehow. How do you handle this in your modern web app? While we're using PHP, I'm curious how it's done in a Ruby/Rails app, and in your favorite Python framework.
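
    For what it's worth, the placeholder idea is a common pattern rather than a hack. A sketch (the hook point and the QueryLog counter are hypothetical): render the footer with a token, then substitute the real numbers in one central place just before the Response is flushed, when the final timings are known:

        <?php
        // Called centrally, after the controller has produced the Response
        // body and just before it is sent to the client.
        function injectMetrics($body, $startTime)
        {
            $elapsed = number_format((microtime(true) - $startTime) * 1000, 1);
            $queries = QueryLog::count();   // hypothetical query counter
            return str_replace(
                '<!--METRICS-->',
                "{$elapsed} ms, {$queries} queries",
                $body
            );
        }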

    Read the article

  • jQuery mobile ajax login form authentication

    - by Jakub Zak
    I know I already asked a similar question, but now that I work with jQuery Mobile I can't figure it out. I have this form:

        <div data-role="page" data-theme="a" id="login_page">
            <div data-role="header" data-position="fixed">
                <h1>****</h1>
            </div>
            <div data-role="content">
                <form id="login_form" method="POST" data-ajax="false">
                    <label for="basic">Username:</label>
                    <input type="text" name="name" id="username" value=""/>
                    <label for="basic">Password:</label>
                    <input type="password" name="password" id="password" value=""/>
                    <input type="submit" value="Login" id="login" name="login"/>
                </form>
            </div>
            <div data-role="footer" data-position="fixed">
                <div data-role="navbar"></div>
            </div>
        </div>

    I need to submit the username and password to a PHP script, which replies with "success" or "failed". Here is the PHP:

        <?php
        session_start();
        $username = $_POST["name"];
        $password = $_POST["password"];
        include('mysql_connection.php');
        mysql_select_db("jzperson_imesUsers", $con);
        $res1 = mysql_query("SELECT * FROM temp_login WHERE username='$username' AND password='$password'");
        $count = mysql_num_rows($res1);
        if ($count == 1) {
            echo "success";
        } else {
            echo "failed";
        }
        ?>

    And to do all this I want to use this script:

        $(document).ready(function() {
            $("form").submit(function() {
                $.mobile.showPageLoadingMsg();
                $.ajax({
                    url: "http://imes.jzpersonal.com/login_control.php",
                    type: "POST",
                    dataType: "jsonp",
                    jsonp: "jsoncallback",
                    data: $("form#login_form").serialize(),
                    success: function( response ) {
                        $.mobile.changePage( "http://imes.jzpersonal.com/user_panel.html");
                    }
                });
                return false;
            });
        });

    But I can't make it work. I know I must have mistakes in there; I just can't find them, or a better way to do it. Thank you in advance for any help.
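
    One concrete mismatch stands out: the PHP echoes plain text ("success"/"failed") while the $.ajax call expects JSONP, so the success handler either never fires or fires regardless of the login result. A sketch of the aligned pair (assuming the page and script are served from the same origin, so plain JSON is enough):

        // JS: ask for JSON and branch on the reply.
        $.ajax({
            url: "/login_control.php",
            type: "POST",
            dataType: "json",
            data: $("#login_form").serialize(),
            success: function (resp) {
                if (resp.status === "success") {
                    $.mobile.changePage("/user_panel.html");
                } else {
                    alert("Login failed");
                }
            }
        });

    with the PHP side ending in echo json_encode(array('status' => $count == 1 ? 'success' : 'failed')); instead of the bare strings. (The string-interpolated SQL also invites injection; mysql_real_escape_string() or parameterized queries would be the minimum fix, though that's a separate issue.)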

    Read the article

  • How to get around LazyInitializationException in scheduled jobs?

    - by Shreerang
    I am working on a J2EE server application deployed on Tomcat. I use Spring as the MVC framework and Hibernate as the ORM provider. My object model has a lot of lazy relationships (dependent objects are fetched on request). The high-level design is that service methods call a few DAO methods to perform database operations. A service method is called either from the Flex UI or as a scheduled job. When it is called from the Flex UI, the service method works fine: it fetches some objects using DAO methods and even lazy loading works. This is possible through the OpenSessionInViewFilter configured for the UI servlet. But when the same service method is called as a scheduled job, it throws LazyInitializationException. I cannot configure OpenSessionInViewFilter because there is no servlet or UI request associated with the job. I tried configuring a transaction around the scheduled job method, so that the service method starts a transaction and all the DAO methods participate in that same transaction, hoping that the transaction would remain active and a Hibernate session would be available, but it does not work. Please let me know if anyone has ever been able to get such a configuration working. If needed, I can post the Hibernate configuration and log messages. Thanks a lot for the help! Shreerang
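
    The filter can be mimicked inside the job itself. A sketch of the classic pattern, assuming Spring's Hibernate 3 support classes (the job and service names are illustrative): bind a session to the thread before the service call and unbind it afterwards, so lazy loads inside the job find an open session, exactly as OpenSessionInViewFilter does for web requests:

        import org.hibernate.Session;
        import org.hibernate.SessionFactory;
        import org.springframework.orm.hibernate3.SessionFactoryUtils;
        import org.springframework.orm.hibernate3.SessionHolder;
        import org.springframework.transaction.support.TransactionSynchronizationManager;

        // Sketch: an "open session in job" wrapper around the scheduled work.
        public class ReportJob {

            private SessionFactory sessionFactory;   // injected by Spring
            private ReportService reportService;     // the service the job calls

            public void run() {
                Session session = SessionFactoryUtils.getSession(sessionFactory, true);
                TransactionSynchronizationManager.bindResource(
                        sessionFactory, new SessionHolder(session));
                try {
                    reportService.generateReports();  // lazy loading now works
                } finally {
                    TransactionSynchronizationManager.unbindResource(sessionFactory);
                    SessionFactoryUtils.releaseSession(session, sessionFactory);
                }
            }
        }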

    Read the article
