Search Results

Search found 853 results on 35 pages for 'redirection'.

Page 29 of 35

  • Cannot debug views in MVC2 project, getting "The resource cannot be found" error

    - by schefdev
    I'm running Visual Studio 2008 SP1 on Win7, with MVC2 RTM installed. I created a new MVC2 project using the wizard and am unable to debug specific pages. With WebForms and even MVC1, I was able to sit on a View page, hit F5, and then have the integrated web server in VS2008 start on the page I was working on. Very handy for building up app logic. When I try this now I get a "The resource cannot be found" error page. I retried this just now with a stock new MVC2 Web Application project. Here are the steps I took after creating the new project to reproduce:

    1. Open up project settings.
    2. Under the Web subtab, set the Start Action to "Current Page". Leave all the other settings as is.
    3. Open one of the views up (e.g. Account/Register.aspx).
    4. Hit F5 to debug the project.
    5. Note that the browser window which displays shows the error message "The resource cannot be found".

    The link I saw in my browser for this run was: http://localhost:49471/Views/Account/Register.aspx. I did some googling and found suggestions related to ensuring all HTTP server pieces were installed. I double-checked and made sure that "HTTP Errors" and "HTTP Redirection" were both installed. If I leave the project setting as it was originally, set to "Specific Page" with nothing in the text box, then routing works and I always get the default home page. I'm hoping this isn't the only option. Thanks!

    Read the article

  • Using Audio Queue Services to play PCM data over a socket connection

    - by Rohan
    I'm writing a remote desktop client for the iPhone and I'm trying to implement audio redirection. The client is connected to the server over a socket connection, and the server sends 32K chunks of PCM data at a time. I'm trying to use AQS to play the data and it plays the first two seconds (1 buffer worth). However, since the next chunk of data hasn't come in over the socket yet, the next AudioQueueBuffer is empty. When the data comes in, I fill the next available buffer with the data and enqueue it with AudioQueueEnqueueBuffer. However, it never plays these buffers. Does the queue stop playing if there are no buffers in the queue, even if you later add a buffer? Here's the relevant part of the code:

        void wave_out_write(STREAM s, uint16 tick, uint8 index)
        {
            if(items_in_queue == NUM_BUFFERS){
                return;
            }
            if(!playState.busy){
                OSStatus status;
                status = AudioQueueNewOutput(&playState.dataFormat, AudioOutputCallback, &playState,
                                             CFRunLoopGetCurrent(), NULL, 0, &playState.queue);
                if(status == 0){
                    for(int i=0; i<NUM_BUFFERS; i++){
                        AudioQueueAllocateBuffer(playState.queue, 40000, &playState.buffers[i]);
                    }
                    AudioQueueAddPropertyListener(playState.queue, kAudioQueueProperty_IsRunning,
                                                  MyAudioQueuePropertyListenerProc, &playState);
                    status = AudioQueueStart(playState.queue, NULL);
                    if(status == 0){
                        playState.busy = True;
                    }
                    else{
                        return;
                    }
                }
                else{
                    return;
                }
            }

            playState.buffers[queue_hi]->mAudioDataByteSize = s->size;
            memcpy(playState.buffers[queue_hi]->mAudioData, s->data, s->size);
            AudioQueueEnqueueBuffer(playState.queue, playState.buffers[queue_hi], 0, 0);

            queue_hi++;
            queue_hi = queue_hi % NUM_BUFFERS;
            items_in_queue++;
        }

        void AudioOutputCallback(void* inUserData, AudioQueueRef outAQ, AudioQueueBufferRef outBuffer)
        {
            PlayState *playState = (PlayState *)inUserData;
            items_in_queue--;
        }

    Thanks!

    Read the article

  • Two different assembly versions: "The located assembly's manifest definition does not match the assembly reference"

    - by snicker
    I have a project that I am working on that requires the use of the MySQL Connector for NHibernate (MySql.Data.dll). I also want to reference another project (Migrator.NET) in the same project. The problem is that even though Migrator.NET is built with the reference to MySql.Data with Specific Version = false, it still tries to reference the older version of MySql.Data that the library was built with instead of just using the version that is there, and I get the exception listed in the title:

        ---- System.IO.FileLoadException : Could not load file or assembly 'MySql.Data, Version=1.0.10.1,
        Culture=neutral, PublicKeyToken=c5687fc88969c44d' or one of its dependencies. The located assembly's
        manifest definition does not match the assembly reference. (Exception from HRESULT: 0x80131040)

    The version I am referencing in the main assembly is 6.1.3.0. How do I get the two assemblies to cooperate?

    Edit: For those of you specifying Assembly Binding Redirection, I have set this up:

        <?xml version="1.0" encoding="utf-8" ?>
        <configuration>
          <runtime>
            <assemblyBinding xmlns="urn:schemas-microsoft-com:asm.v1">
              <dependentAssembly>
                <assemblyIdentity name="MySql.Data" publicKeyToken="c5687fc88969c44d" culture="neutral"/>
                <bindingRedirect oldVersion="0.0.0.0-6.1.3.0" newVersion="6.1.3.0"/>
              </dependentAssembly>
            </assemblyBinding>
          </runtime>
        </configuration>

    I am referencing the main assembly in another project and still getting the same errors. If my main assembly is copied local to be used in the other assembly, will it use the settings in app.config, or does this information have to be included with every application or assembly that references my main assembly?

    Read the article

  • After Filter redirects to login.jsp, proper servlet doesn't get called

    - by gnomeguru
    My simple project structure is shown in this link. I am using Eclipse and Tomcat 6. There is a login.jsp which submits its form to login_servlet. The login_servlet sets a session variable and then redirects to home.jsp. The home.jsp file has links to the 4 JSP files under a directory called /sam. In web.xml I have given the url-pattern as /sam/* for the LogFilter filter. The LogFilter just reads the session variable and does doChain(request, response) if valid, else it redirects to /login.jsp:

        RequestDispatcher rd = request.getRequestDispatcher("/login.jsp");
        rd.forward(request, response);

    Basically I don't want anyone to access files inside the /sam directory directly. Now let's say I try to directly access a file inside the /sam directory: the filter kicks in, the redirection to login.jsp works, and even the browser's contents are those of login.jsp, but the URL in the browser doesn't change. When I enter details and press submit, instead of sending the data to login_servlet, it sends it to sam/login_servlet, and then Tomcat tells me there is no such servlet here! Obviously there isn't. My doubt is: why is it sending it to sam/login_servlet instead of /login_servlet, which is usually what it does when I run login.jsp on my own? One more thing: is there a way I can apply the filter to ONLY .jsp files inside the /sam directory? I tried giving the url-pattern like /sam/*.jsp, but Tomcat was refusing to accept that url-pattern.

    Read the article

  • File Uploading in Google App Engine Using Django

    - by Ayush
    I am using GAE with Django. I have a project named MusicSite with the following URL mapping in urls.py:

        from django.conf.urls.defaults import *
        from MusicSite.views import MainHandler
        from MusicSite.views import UploadHandler
        from MusicSite.views import ServeHandler

        urlpatterns = patterns('',
            (r'^start/', MainHandler),
            (r'^upload/', UploadHandler),
            (r'^/serve/([^/]+)?', ServeHandler),
        )

    There is an application MusicSite inside MusicFun with the following code in views.py:

        import os
        import urllib

        from google.appengine.ext import blobstore
        from google.appengine.ext import webapp
        from google.appengine.ext.webapp import blobstore_handlers
        from google.appengine.ext.webapp import template
        from google.appengine.ext.webapp.util import run_wsgi_app

        def MainHandler(request):
            response = HttpResponse()
            upload_url = blobstore.create_upload_url('http://localhost:8000/upload/')
            response.write('')
            response.write('' % upload_url)
            response.write("""Upload File: """)
            return HttpResponse(response)

        def UploadHandler(request):
            upload_files = request.FILES['file']
            blob_info = upload_files[0]
            response.redirect('http://localhost:8000/serve/%s' % blob_info.key())

        class ServeHandler(blobstore_handlers.BlobstoreDownloadHandler):
            def get(self, resource):
                resource = str(urllib.unquote(resource))
                blob_info = blobstore.BlobInfo.get(resource)
                self.send_blob(blob_info)

    Now whenever I upload a file using /start and click Submit I am taken to a blank page with the following URL:

        localhost:8000/_ah/upload/ahhnb29nbGUtYXBwLWVuZ2luZS1kamFuZ29yGwsSFV9fQmxvYlVwbG9hZFNlc3Npb25fXxgHDA

    These random characters keep varying but the result is the same: a blank page after every upload. Somebody please help. The server responses are as below:

        INFO:root:"GET /start/ HTTP/1.1" 200 -
        INFO:root:"GET /favicon.ico HTTP/1.1" 404 -
        INFO:root:Internal redirection to http://localhost:8000/upload/
        INFO:root:Upload handler returned 500
        ERROR:root:Invalid upload handler response. Only 301, 302 and 303 statuses are permitted and it may not have a content body.
        INFO:root:"POST /_ah/upload/ahhnb29nbGUtYXBwLWVuZ2luZS1kamFuZ29yGwsSFV9fQmxvYlVwbG9hZFNlc3Npb25fXxgCDA HTTP/1.1" 500 -
        INFO:root:"GET /favicon.ico HTTP/1.1" 404 -
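
    The 500 in the log above means the upload handler's response was rejected; the blobstore upload dispatcher only accepts a bare redirect. A minimal sketch of what the handler might look like instead, assuming the standard django.http API (the /serve/ URL shape is taken from the question):

        from django.http import HttpResponseRedirect

        def UploadHandler(request):
            # Assumes blob_info can be pulled from the POST as in the question's code.
            upload_files = request.FILES['file']
            blob_info = upload_files[0]
            # Return a 302 redirect with no body, which is what the blobstore
            # upload machinery requires (301, 302 or 303 only).
            return HttpResponseRedirect('/serve/%s' % blob_info.key())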

    Read the article

  • Authlogic, logout, credential capture and security

    - by Paddy
    Ok, this is something weird. I got authlogic-oid installed in my Rails app today. Everything works perfectly fine but for one small nuisance. This is what I did: I first register with my Google OpenID. Successful login, redirection, and my email, along with my correct OpenID, is stored in my database. I am happy that everything worked fine! Now when I log out, my Rails app as usual destroys the session and redirects me back to my root URL where I can log in again. Now if I try to log in it still remembers my last login ID. Not a big issue, as I can always "Sign in as a different user", but I am wondering if there is any way to not only log out from my app but also log out from Google. I noticed the same with Stack Overflow's OpenID authentication system. Why am I so bothered about this, you may ask. But is it not a bad idea if your web app's end user, who happens to be in a cyber cafe, thinks he has logged out from your app and hence from his Google account, only to realize later that his Google account got hacked by some unworthy loser who happened to notice that the one before him had not logged out from Google and, say, changed his password!! Should I be paranoid? Isn't this a major security lapse while implementing the OpenID spec? Probably today someone can give me a workaround for this issue and the question is solved for me. But what about the others who have implemented OpenID in their apps and not implemented a workaround?

    Read the article

  • Authlogic, logout and credential capture

    - by Paddy
    Ok, this is something weird. I got authlogic-oid installed in my Rails app today. Everything works perfectly fine but for one small nuisance. This is what I did: I first register with my Google OpenID. Successful login, redirection, and my email, along with my correct OpenID, is stored in my database. I am happy that everything worked fine! Now when I log out, my Rails app as usual destroys the session and redirects me back to my root URL where I can log in again. Now if I try to log in it still remembers my last login ID. Not a big issue, as I can always "Sign in as a different user", but I am wondering if there is any way to not only log out from my app but also log out from Google. I noticed the same with Stack Overflow's OpenID authentication system. Why am I so bothered about this, you may ask. But is it not a bad idea if your web app's end user, who happens to be in a cyber cafe, thinks he has logged out from your app and hence from his Google account, only to realize later that his Google account got hacked by some unworthy loser who happened to notice that the one before had not logged out from Google and, say, changed his password!! Should I be paranoid?

    Read the article

  • Why does IIS respond to a secure (SSL) page request with a 302 to its non-secure version?

    - by ISawrub
    I have SSL installed at the root of a server. I have a page whose code-behind is supposed to redirect, after certain validation, to a secure page. Here's the redirect code:

        switch (PageBase2.GetParameterValue("Environment")) //Retrieves App Setting named Environment from web.config
        {
            case "Server":
                strURL = @"https://" + HttpContext.Current.Request.Url.Authority + "/checkout/payment.aspx";
                break;
            case "Local":
                strURL = @"http://" + HttpContext.Current.Request.Url.Authority + "/checkout/payment.aspx";
                break;
            default:
                strURL = @"https://" + HttpContext.Current.Request.Url.Authority + "/checkout/payment.aspx";
                break;
        }
        Response.Redirect(strURL, false);

    But the page that's being served by IIS is non-secure. I looked at the Firebug console and it appears that the client does make a GET request to https://server/checkout/payment.aspx, but IIS responds with a 302 to http://server/checkout/payment.aspx. Any clues as to what could be causing it? I've even tried forcing SSL for the page, but it doesn't work; I get a 403.4 error ("SSL is required to view this resource."). And if I remove the redirection logic and code the payment page to redirect to its SSL version when the connection is not secure, using Request.IsSecureConnection, I end up with an endless redirect loop, simply because IIS still won't serve the secure version without a 302. Any ideas?

    Read the article

  • Shell script to emulate warnings-as-errors?

    - by talkaboutquality
    Some compilers let you set warnings as errors, so that you'll never leave any compiler warnings behind, because if you do, the code won't build. This is a Good Thing. Unfortunately, some compilers don't have a flag for warnings-as-errors. I need to write a shell script or wrapper that provides the feature. Presumably it parses the compilation console output and returns failure if there were any compiler warnings (or errors), and success otherwise. "Failure" also means (I think) that object code should not be produced. What's the shortest, simplest UNIX/Linux shell script you can write that meets the explicit requirements above, as well as the following implicit requirements of otherwise behaving just like the compiler:

    - accepts all flags, options, arguments
    - supports redirection of stdout and stderr
    - produces object code and links as directed

    Key words: elegant, meets all requirements. Extra credit: easy to incorporate into a GNU make file. Thanks for your help.

    === Clues ===
    This solution to a different problem, using shell functions (?), Append text to stderr redirects in bash, might figure in. Wonder how to invite litb's friend "who knows bash quite well" to address my question?

    === Answer status ===
    Thanks to Charlie Martin for the short answer, but that, unfortunately, is what I started out with. A while back I used that, released it for office use, and, within a few hours, had its most severe drawback pointed out to me: it will PASS a compilation with no warnings, but only errors. That's really bad because then we're delivering object code that the compiler is sure won't work. The simple solution also doesn't meet the other requirements listed. Thanks to Adam Rosenfield for the shorthand, and Chris Dodd for introducing pipefail to the solution. Chris' answer looks closest, because I think the pipefail should ensure that if compilation actually fails on error, that we'll get failure as we should. Chris, does pipefail work in all shells? And have any ideas on the rest of the implicit requirements listed above?
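
    If a wrapper in a scripting language is acceptable instead of pure shell, the idea can be sketched roughly in Python (an illustration only: the compiler name and the "warning" match are placeholders, stderr is folded into stdout, and suppressing the object file would still need real option parsing):

        import subprocess
        import sys

        def main():
            # Pass every flag, option and argument straight through to the compiler.
            proc = subprocess.run(["cc"] + sys.argv[1:],
                                  stdout=subprocess.PIPE,
                                  stderr=subprocess.STDOUT,
                                  text=True)
            sys.stdout.write(proc.stdout)       # preserve the console output
            if proc.returncode != 0:
                return proc.returncode          # real errors still fail the build
            if "warning" in proc.stdout.lower():
                return 1                        # treat any warning as an error
            return 0

        if __name__ == "__main__":
            sys.exit(main())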

    Read the article

  • .NET substitute dependent assemblies without recompiling?

    - by RK
    I have a question about how the .NET Framework (2.0) resolves dependent assemblies. We're currently involved in a bit of a rewrite of a large ASP.NET application and various satellite executables. There are also some nagging problems with our foundation classes that we developed a new API to solve. So far this is a normal, albeit wide-reaching, update. Our hierarchy is:

    - ASP.NET (aspx)
    - business logic (DLLs)
    - foundation classes (DLLs)

    So that ASP.NET doesn't throw a fit, some of the DLLs (specifically the foundation classes) have a redirection layer that contains the old namespaces/functions and forwards them to the new API. When we replaced the DLLs, ASP.NET picked them up fine (probably because it triggered a recompile). Precompiled applications don't, though, even though the same namespaces and classes are in both sets of DLLs. Even when the file is renamed, it complains about the assembly name attribute being different (which it has to be by necessity). I know you can redirect to different versions of the same assembly, but is there any way to redirect to a completely different assembly? The alternatives are to recompile the applications (I don't really want to, because the applications themselves haven't changed) or recompile the old foundation DLL with stubs referring to the new foundation DLL (the new dummy DLL is file system clutter).

    Read the article

  • How to implement word count in a bash shell

    - by codemax
    Hey guys. I am trying to write my own code for word count in a bash shell. I did it the usual way. But I want to use the pipe's output to count the words. So for example, the first command is cat and I am redirecting it to a file called med. Now I have to use the dup2 function to count the words in that file. How can I write the code for my wc? This is the code for my shell program:

        void process( char* cmd[], int arg_count )
        {
            pid_t pid;
            pid = fork();

            char path[81];
            getcwd(path, 81);
            strcat(path, "/");
            strcat(path, cmd[0]);

            if(pid < 0)
            {
                cout << "Fork Failed" << endl;
                exit(-1);
            }
            else if( pid == 0 )
            {
                int fd;
                fd = open("med", O_RDONLY);
                dup2(fd, 0);
                execvp( path, cmd );
            }
            else
            {
                wait(NULL);
            }
        }

    And my wordcount is:

        int main(int argc, char *argv[])
        {
            char ch;
            int count = 0;
            ifstream infile(argv[1]);
            while(!infile.eof())
            {
                infile.get(ch);
                if(ch == ' ')
                {
                    count++;
                }
            }
            return 0;
        }

    I don't know how to do input redirection. I want my code to do this: when I just type wordcount in my shell implementation, I want it to count the words in the med file by default. Thanks in advance

    Read the article

  • How to elegantly handle ReturnUrl when using UrlRewrite in ASP.NET 2.0 WebForms

    - by Brian Kim
    I have a folder with multiple .aspx pages that I want to restrict access to. I have added web.config to that folder with <deny users="?"/>. The problem is that ReturnUrl is auto-generated with physical path to the .aspx file while I'm using UrlRewrite. Is there a way to manipulate ReturnUrl without doing manual authentication check and redirection? Is there a way to set ReturnUrl from code-behind or from web.config? EDIT: The application is using ASP.NET 2.0 WebForms. I cannot use 3.5 routing. EDIT 2: It seems like 401 status code is never captured. It returns 302 for protected page and redirects to login page with ReturnUrl. It does not return 401 for protected page. Hmm... Interesting... Ref: http://msdn.microsoft.com/en-us/library/aa480476.aspx This makes things harder... I might have to write reverse rewrite mapping rules to regex match ReturnUrl and replace it if it doesn't return 401... If it does return 401 I can either set RawUrl to Response.RedirectLocation or replace ReturnUrl with RawUrl. Anyone else have any other ideas?

    Read the article

  • Wordpress Rewrite Redirect Failure

    - by Rory Hart
    I'm helping a friend recover from the mess outsourcing a WordPress website caused him (mistake #1) and I have this weird error. The hosting he is using appears to be automatically redirecting www.domain.com to domain.com (NFI why), which works fine in every browser except IE (I know, right!). So adding the first redirect fixed that, until I added the permalink redirect. Now when IE goes to an old WordPress link like http://www.domain.com/?p=520 the redirect fails.

        RewriteEngine On
        RewriteBase /

        # Rewrite rule for weird redirect issue
        RewriteCond %{HTTP_HOST} ^www.domain.com$
        RewriteRule ^/?(.*)$ "http\:\/\/doman\.com\/$1" [R=301,L]

        # Rewrite Rule for WordPress Permalinks
        RewriteCond %{REQUEST_FILENAME} !-f
        RewriteCond %{REQUEST_FILENAME} !-d
        RewriteRule . /index.php [L]

    I tested this out with wget and it complains:

        ERROR: Redirection (301) without location.

    So it seems likely that IE is suffering from the same error (without the helpful error message). But I haven't a clue how to fix it. I am hoping that he will switch hosting companies, but we will see. In the meantime, any ideas?

    Read the article

  • grep 5 seconds of input from the serial port inside a shell-script

    - by pica
    I've got a device that I'm operating next to my PC, and as it runs it's spitting log lines out its serial port. I have this wired to my PC and I can see the log lines fine if I'm using either minicom or something like:

        ttylog -b 115200 -d /dev/ttyS0

    I want to write 5 seconds of the device serial output to a temp file (or assign it to a variable) and then later grep that file for keywords that will let me know how the device is operating. I've already tried redirecting the output to a file while running the command in the background, and then sleeping 5 seconds and killing the process, but the log lines never get written to my temp file. Example:

        touch tempFile
        ttylog -b 115200 -d /dev/ttyS0 >> tempFile &
        serialPID=$!
        sleep 5
        #kill ${serialPID}   #does not work, gets wrong PID
        killall ttylog
        cat tempFile

    The file gets created but never filled with any data. I can also replace the ttylog line with:

        ttylog -b 115200 -d /dev/ttyS0 | tee -a tempFile &

    In neither case do I ever see any log lines logged to stdout or the log file unless I have multiple versions of ttylog running by mistake (see commented-out line, D'oh). I have no idea what's going on here. It seems to be a failure of redirection within my script. Am I on the right track? Is there a better way to sample 5 seconds of the serial port?
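
    If shell redirection keeps misbehaving, the same five-second capture can be sketched in Python with pyserial instead (assuming pyserial is installed; the port, baud rate and keyword are taken from the question or are placeholders):

        import time
        import serial  # pyserial

        def capture(port="/dev/ttyS0", baud=115200, seconds=5):
            lines = []
            ser = serial.Serial(port, baud, timeout=1)  # 1-second read timeout
            deadline = time.time() + seconds
            while time.time() < deadline:
                line = ser.readline()                   # returns b"" on timeout
                if line:
                    lines.append(line.decode(errors="replace"))
            ser.close()
            return lines

        if __name__ == "__main__":
            log = capture()
            # grep-style check for a keyword of interest
            print(any("ERROR" in line for line in log))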

    Read the article

  • Cold Fusion blank page in IE7 on refresh?

    - by richardtallent
    I'm new to Cold Fusion and have a very basic problem that's really slowing me down. I'm making edits in a text editor and refreshing the page in web browsers for testing. Standard web dev stuff, no browser-sniffing, redirection, or other weirdness, and no proxies involved. When I refresh the page in Chrome or Firefox, everything works fine, but when I refresh in IE7, I get a blank page. View Source shows me:

        <!DOCTYPE HTML PUBLIC "-//W3C//DTD HTML 4.0 Transitional//EN">
        <HTML><HEAD>
        <META http-equiv=Content-Type content="text/html; charset=utf-8"></HEAD>
        <BODY></BODY></HTML>

    That's it. While I am rendering to the transitional DTD, the real head contains a title, etc. My development server is CF 9, production is 8. This problem has been happening in both. Seems it may only be happening on pages that are the result of a POST action. I've never experienced this in ASP.NET (my usual development environment) using the same browsers.

    Read the article

  • How to return proper 404 for google while providing user friendly content to the user?

    - by Marek
    I am bouncing between posting this here and on Superuser. Please excuse me if you feel this does not belong here. I am observing the behavior described here - Googlebot is requesting random URLs on my site, like aecgeqfx.html or sutwjemebk.html. I am sure that I am not linking these URLs from anywhere on my site. I suspect this may be Google probing how we handle non-existent content - to cite from an answer to the linked question: [Google is requesting random URLs to] see if your site correctly handles non-existent files (by returning a 404 response header). We have a custom page for nonexistent content - a styled page saying "Content not found, if you believe you got here by error, please contact us", with a few internal links, served (naturally) with a 200 OK. The URL is served directly (no redirection to a single URL). I am afraid this may count against the site at Google - they may not interpret the user-friendly page as a 404 Not Found and may think we are trying to fake something and provide duplicate content. How should I proceed to ensure that Google will not think the site is bogus while still providing a user-friendly message to users in case they click on dead links by accident?
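
    The usual resolution is to keep the friendly page but send it with a 404 status instead of 200, so users see the helpful content while crawlers see a genuine "not found". A rough illustration in Python/Django (an assumption - the question does not name a framework; any stack can do the equivalent by setting the response status):

        from django.http import HttpResponseNotFound
        from django.template.loader import render_to_string

        def content_not_found(request):
            # Render the styled "Content not found ..." page, but return it with a
            # 404 status so it is not indexed or treated as duplicate content.
            html = render_to_string("not_found.html", {"contact_url": "/contact/"})
            return HttpResponseNotFound(html)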

    Read the article

  • ColdFusion blank page in IE7 on refresh?

    - by richardtallent
    I'm new to ColdFusion and have a very basic problem that's really slowing me down. I'm making edits in a text editor and refreshing the page in web browsers for testing. Standard web dev stuff, no browser-sniffing, redirection, or other weirdness, and no proxies involved. When I refresh the page in Chrome or Firefox, everything works fine, but when I refresh in IE7, I get a blank page. View Source shows me:

        <!DOCTYPE HTML PUBLIC "-//W3C//DTD HTML 4.0 Transitional//EN">
        <HTML><HEAD>
        <META http-equiv=Content-Type content="text/html; charset=utf-8"></HEAD>
        <BODY></BODY></HTML>

    That's it. While I am rendering to the transitional DTD, the real head contains a title, etc. My development server is CF 9, production is 8. This problem has been happening in both. Seems it may only be happening on pages that are the result of a POST action. I've never experienced this in ASP.NET (my usual development environment) using the same browsers.

    Read the article

  • How do I launch background jobs w/ paramiko?

    - by sophacles
    Here is my scenario: I am trying to automate some tasks using Paramiko. The tasks need to be started in this order (using the notation (host, task)): (A, 1), (B, 2), (C, 2), (A, 3), (B, 3) -- essentially starting servers and clients for some testing in the correct order. Further, because the networking may get mucked up in the tests, and because I need some of the output from the tests, I would like to just redirect output to a file. In similar scenarios the common response is to use 'screen -m -d' or to use 'nohup'. However, with Paramiko's exec_command, nohup doesn't actually exit. Using:

        bash -c -l nohup test_cmd &

    doesn't work either; exec_command still blocks until the process ends. In the screen case, output redirection doesn't work very well (actually, doesn't work at all, as best I can figure out). So, after all that explanation, my question is: is there an easy, elegant way to detach processes and capture output in such a way as to keep Paramiko's exec_command from blocking?

    Update: The dtach command works nicely for this!
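
    For reference, the usual way to keep exec_command from blocking is to fully detach the remote process yourself: background it and redirect all three standard streams so the channel can close. A rough sketch (the host, credentials and command are placeholders), with dtach being the alternative the update above already confirms:

        import paramiko

        client = paramiko.SSHClient()
        client.set_missing_host_key_policy(paramiko.AutoAddPolicy())
        client.connect("hostA", username="user", password="secret")  # placeholders

        # Redirecting stdin/stdout/stderr and backgrounding lets exec_command
        # return immediately; the output lands in a log file for later collection.
        cmd = "nohup ./run_test > /tmp/test.log 2>&1 < /dev/null &"
        stdin, stdout, stderr = client.exec_command(cmd)
        client.close()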

    Read the article

  • Password Confirmation Overlay

    - by Alasdair
    Hello, I'm creating a J2EE web application that uses jQuery and Ajax to help with some of the presentation for a user-friendly interface. I've done a lot of work ensuring security around persistent login cookies, and I've decided to request the password from any user that logged in using a persistent login cookie before they are allowed to make any changes that could be malicious. This request would only happen once, to confirm the user is who they say they are, and will last throughout the session. At present, any request that meets these criteria has its request information stored in the session and then the user is forwarded to a page to confirm their password. Once confirmed, the user's original request is then performed and the request information removed from the session. What I would like to do is avoid all this redirection and minimize what's held in session (even if it's just for a small time), thus improving usability and convenience for the user. I believe that a jQuery overlay could allow me to prompt the user for their password (if required) and then continue to submit the request if successful. I would have originally used ThickBox, but since that's now deprecated I don't see the benefit in implementing it in an application at this development stage. However, I have tried to create an overlay using jQuery but I've scrapped every attempt, as I can't seem to make it all come together. My main problem is preventing the submission when the user incorrectly types a password or cancels the overlay.

    Desired flow: Persistent Login -> Sensitive Page Submit -> Password Confirmation Overlay -> [Continue Submit | (Cancel | Incorrect)]

    I have already created JavaScript code to encrypt the password to be sent in a parameter, but all I need now is a method of controlling the overlay and how best to use Ajax for this purpose. Please ignore the fact that this is a J2EE web application when answering, as it is really irrelevant. Thanks in advance, Alasdair

    Read the article

  • problems with Console.SetOut in Release Mode?

    - by Matt Jacobsen
    I have a bunch of Console.WriteLines in my code that I can observe at runtime. I communicate with a native library that I also wrote. I'd like to stick some printfs in the native library and observe them too. I don't see them at runtime, however. I've created a convoluted hello world app to demonstrate my problem. When the app runs, I can debug into the native library and see that the hello world is called. The output never lands in the TextWriter, though. Note that if the same code is run as a console app then everything works fine. C#:

        [DllImport("native.dll")]
        static extern void Test();

        StreamWriter writer;

        public Form1()
        {
            InitializeComponent();
            writer = new StreamWriter(@"c:\output.txt");
            writer.AutoFlush = true;
            System.Console.SetOut(writer);
        }

        private void button1_Click(object sender, EventArgs e)
        {
            Test();
        }

    and the native part:

        __declspec(dllexport) void Test()
        {
            printf("Hello World");
        }

    Update: hamishmcn below started talking about debug/release builds. I removed the native call in the above button1_Click method and just replaced it with a standard Console.WriteLine .NET call. When I compiled and ran this in Debug mode, the messages were redirected to the output file. When I switched to Release mode, however, the calls weren't redirected. Console redirection only seems to work in Debug mode. How do I get around this?

    Read the article

  • php sessions in database only writing part of information to the table...

    - by Ronedog
    I'm having difficulty figuring out what's going on here, hoping someone can help me out. I have been using PHP and MySQL, storing my session information in the database. The app is only running on localhost, on Vista. In the php.ini file I commented out the "session.save_handler = files" line and am using a PHP class to handle the session writes/reads, etc. My login process is this:

    1. Submit login credentials via login.php.
    2. login.php calls loginprocess.php.
    3. loginprocess.php verifies the user, and if valid starts a new session and adds data to the session vars, then it redirects to index.php.

    Here's the problem: the loginprocess.php page has a bunch of session vars that get set, like $_SESSION['account_id'] = $account_id; etc., but when I go to index.php and do a var_dump($_SESSION) it just says "array() empty". However, if I do a var_dump($_SESSION) in loginprocess.php, just before the redirection line header("Location: ../index.php");, then it shows all the data in the session variable. If I look in the database where the session information is stored, there is data in the session_id field, created_ts field, and expires field, but the session_data field has nothing inside of it, and in the past this is the field where all my session data was stored. How could I be able to var_dump the session in loginprocess.php, but the data not exist in the db table? Is it using some kind of caching? I cleared my cookies, etc., but no change. Why is the session_id being written to the table, but the actual session data is not? Any ideas are appreciated. Thanks.

    Read the article

  • jsf messed up links

    - by Mateusz
    I'm new to JSF. My application is working, but I'm confused by the links in the browser when using the controller. BTW, there is also PrimeFaces in my app, so don't be surprised by the p: tags. Let's say I have 'list' and 'show' pages with a controller doing redirection between them. First I'm on the http://localhost:8080/y/r/conversation/list.xhtml page. There is a link created with the line:

        <p:commandLink action="#{lazyConversationBean.doShow(conv)}" ajax="false" value="View"/>

    lazyConversationBean acts here as my controller. There is a method:

        public String doShow(Conversation c) {
            this.setSelectedConversation(c);
            return "view";
        }

    from which I get taken to ... again http://localhost:8080/y/r/conversation/list.xhtml (that is what the browser shows), even though the page actually displayed is the correct http://localhost:8080/y/r/conversation/view.xhtml page. There I have the link:

        <p:commandButton action="#{lazyConversationBean.doList()}" ajax="false" value="Back to list"/>

    and again the controller has a method:

        public String doList() {
            return "list";
        }

    from which I get taken to ... yeah, you guessed right ... http://localhost:8080/y/r/conversation/view.xhtml (that is again what the browser shows), even though the page displayed is the correct http://localhost:8080/y/r/conversation/list.xhtml page. It seems as if the browser's address bar is always one step behind the page currently being displayed. I don't even know if it's some incorrect behaviour, as I have no idea how to query Google for this :D Just for a test I did this short tutorial, where NetBeans created a whole stack of code on one of my entities, and the behaviour was the same, so it's not PrimeFaces magic related. Can you tell me why this happens, and how to fix it? Users like to copy correct links ;)

    Read the article

  • Maximum page fetch with maximum bandwidth

    - by Ehsan
    Hi, I want to create an application like a spider. I've implemented page fetching as in the following code in a multi-threaded application, but there are two problems:

    1) I want to use my maximum bandwidth to send/receive requests. How should I configure my requests to do so (like Download Accelerator and similar applications)? I heard a normal application will use 66% of the available bandwidth.

    2) I don't know exactly what HttpWebRequest.KeepAlive does, but as its name implies I think I can create a connection to a website and, without closing the connection, send another request to that web site using the existing connection. Does it boost performance, or am I wrong?

        public PageFetchResult Fetch()
        {
            PageFetchResult fetchResult = new PageFetchResult();
            try
            {
                HttpWebRequest req = (HttpWebRequest)HttpWebRequest.Create(URLAddress);
                HttpWebResponse resp = (HttpWebResponse)req.GetResponse();
                Uri requestedURI = new Uri(URLAddress);
                Uri responseURI = resp.ResponseUri;
                if (Uri.Equals(requestedURI, responseURI))
                {
                    string resultHTML = "";
                    byte[] reqHTML = ResponseAsBytes(resp);
                    if (!string.IsNullOrEmpty(FetchingEncoding))
                        resultHTML = Encoding.GetEncoding(FetchingEncoding).GetString(reqHTML);
                    else if (!string.IsNullOrEmpty(resp.CharacterSet))
                        resultHTML = Encoding.GetEncoding(resp.CharacterSet).GetString(reqHTML);
                    req.Abort();
                    resp.Close();
                    fetchResult.IsOK = true;
                    fetchResult.ResultHTML = resultHTML;
                }
                else
                {
                    URLAddress = responseURI.AbsoluteUri;
                    relayPageCount++;
                    if (relayPageCount > 5)
                    {
                        fetchResult.IsOK = false;
                        fetchResult.ErrorMessage = "Maximum page redirection occured.";
                        relayPageCount = 0;
                        return fetchResult;
                    }
                    req.Abort();
                    resp.Close();
                    return Fetch();
                }
            }
            catch (Exception ex)
            {
                fetchResult.IsOK = false;
                fetchResult.ErrorMessage = ex.Message;
            }
            return fetchResult;
        }

    Read the article

  • HttpWebResponse hangs on multiple requests

    - by Ehsan
    I have an application that creates many web requests to download the news pages of a web site (I've tested it with many web sites). After a while I found that the application slows down in fetching the HTML source, and then I found that HttpWebResponse fails to get the response. I post only the function that does this job.

        public PageFetchResult Fetch()
        {
            PageFetchResult fetchResult = new PageFetchResult();
            try
            {
                HttpWebRequest req = (HttpWebRequest)HttpWebRequest.Create(URLAddress);
                HttpWebResponse resp = (HttpWebResponse)req.GetResponse();
                Uri requestedURI = new Uri(URLAddress);
                Uri responseURI = resp.ResponseUri;
                if (Uri.Equals(requestedURI, responseURI))
                {
                    string resultHTML = "";
                    byte[] reqHTML = ResponseAsBytes(resp);
                    if (!string.IsNullOrEmpty(FetchingEncoding))
                        resultHTML = Encoding.GetEncoding(FetchingEncoding).GetString(reqHTML);
                    else if (!string.IsNullOrEmpty(resp.CharacterSet))
                        resultHTML = Encoding.GetEncoding(resp.CharacterSet).GetString(reqHTML);
                    resp.Close();
                    fetchResult.IsOK = true;
                    fetchResult.ResultHTML = resultHTML;
                }
                else
                {
                    URLAddress = responseURI.AbsoluteUri;
                    relayPageCount++;
                    if (relayPageCount > 5)
                    {
                        fetchResult.IsOK = false;
                        fetchResult.ErrorMessage = "Maximum page redirection occured.";
                        return fetchResult;
                    }
                    return Fetch();
                }
            }
            catch (Exception ex)
            {
                fetchResult.IsOK = false;
                fetchResult.ErrorMessage = ex.Message;
            }
            return fetchResult;
        }

    Any solution would be greatly appreciated.

    Read the article

  • Internet Explorer randomly drops sessions between pages in cakePHP

    - by Emerson Taymor
    Hello everyone, I've come across an extremely unusual bug that my team has literally no idea how to solve. Doing some research, I found some similar solutions that I thought would work, but alas they did not. Here is my situation; let me know if I can provide additional insight to help solve the problem. The first step is that someone chooses a country via a Flash map. Flash passes this region name (as well as a date) through the URL, which we then convert to a session. The next page contains no Flash and doesn't display the selected region, but it does hold on to it for further down the process. Everything works perfectly in Safari and Firefox; however, in IE unexpected results sometimes occur. Frequently (but not always), the session is dropped completely and no sessions are stored between the first and second pages. Here are the steps that I have taken thus far, unsuccessfully:

    1. Changed Security from Medium to Low
    2. Changed CheckUserAgent from True to False
    3. Changed storing of sessions from PHP to Database

    Some additional information that may be useful: I have tried printing out the session data in debug (debug($_SESSION) in my view file, with debug set to 2 in config). In Internet Explorer everything prints out as expected EXCEPT when the region and date don't get set. For example: if the region and date don't get set, NOTHING is printed out for debug. I don't get the session details at the top, and I don't get the normal dump of calls at the bottom of the page either. I am not using redirection on these pages. Please let me know if you have ANY idea of what is causing this, or any solutions. I am beyond frustrated and have tried as much as I can to solve this. Thanks!

    Read the article
