Search Results

Search found 8989 results on 360 pages for 'response'.


  • Apache Probes -- what are they after?

    - by Chris_K
    The past few weeks I've been seeing more and more of these probes each day. I'd like to figure out what vulnerability they're looking for but haven't been able to turn anything up with a web search. Here's a sample of what I get in my morning Logwatch emails: A total of XX possible successful probes were detected (the following URLs contain strings that match one or more of a listing of strings that indicate a possible exploit): /MyBlog/?option=com_myblog&Itemid=12&task=../../../../../../../../../../../../../../../proc/self/environ%00 HTTP Response 200 /index2.php?option=com_myblog&item=12&task=../../../../../../../../../../../../../../../../proc/self/environ%00 HTTP Response 200 /?option=com_myblog&Itemid=12&task=../../../../../../../../../../../../../../../proc/self/environ%00 HTTP Response 301 /index2.php?option=com_myblog&item=12&task=../../../../../../../../../../../../../../../proc/self/environ%00 HTTP Response 200 //index2.php?option=com_myblog&Itemid=1&task=../../../../../../../../../../../../../../../proc/self/environ%00 HTTP Response 200 This is coming from a current CentOS 5.4 / Apache 2 box with all updates. I've manually tried entering a few in to see what they get, but those all appear to just return the site's home page. This server is just hosting a few Joomla! sites... but this doesn't seem to be targeting Joomla (as far as I can tell). Anyone know what they're probing for? I just want to make sure whatever it is I've got it covered (or not installed). The escalation of these entries has me a bit concerned.
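    For what it's worth, these request strings look like local-file-inclusion probes against the Joomla com_myblog extension: the task parameter is padded with ../ sequences and a trailing %00 null byte in an attempt to read /proc/self/environ. A quick way to see how widespread the probing is, and which hosts are sending it, is to scan the access log directly. The sketch below is a minimal example under assumptions: it expects a combined-format Apache access log at /var/log/httpd/access_log, which may not match your vhost layout.

        # Minimal sketch: count /proc/self/environ probes per client IP.
        # Assumes a combined-format Apache access log; adjust LOG_PATH for your vhosts.
        from collections import Counter

        LOG_PATH = "/var/log/httpd/access_log"   # assumption, change as needed

        hits = Counter()
        with open(LOG_PATH) as log:
            for line in log:
                if "/proc/self/environ" in line or "../../../" in line:
                    ip = line.split()[0]          # first field of a combined-format line
                    hits[ip] += 1

        for ip, count in hits.most_common(20):
            print("%6d  %s" % (count, ip))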

    Read the article

  • Is this anycast behaviour correct?

    - by etheros
    When connecting to a service provided using anycast, I am experiencing different behaviour depending on whether the request is made over TCP or UDP. With TCP, the request is made to address A and the subsequent response also comes from A. With UDP, however, while the request is made to A, the response comes from address B. Is this correct behaviour, or should the UDP response come from the same address it was sent to?

    Read the article

  • Connection timed out on Node.js app running under CentOS

    - by ss1271
    I followed this tutorial to create a simple node.js app on my CentOS: the node.js version is: $ node -v v0.10.28 Here's my app.js: // Include http module, var http = require("http"), // And url module, which is very helpful in parsing request parameters. url = require("url"); // show message at console console.log('Node.js app is running.'); // Create the server. http.createServer(function (request, response) { request.resume(); // Attach listener on end event. request.on("end", function () { // Parse the request for arguments and store them in _get variable. // This function parses the url from request and returns object representation. var _get = url.parse(request.url, true).query; // Write headers to the response. response.writeHead(200, { 'Content-Type': 'text/plain' }); // Send data and end response. response.end('Here is your data: ' + _get['data']); }); // Listen on the 8080 port. }).listen(8080); However, when I uploaded this app onto my remote server (assume the address is 123.456.78.9), I couldn't get access to it on my browser http://123.456.78.9:8080/?data=123 The browser returned Error code: ERR_CONNECTION_TIMED_OUT. I tried the same app.js code which runs fine on my local machine, is there anything I am missing? I tried to ping the server and its address was reachable. Thanks.
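    Since the same code works locally, a connection timeout from outside usually points at the network path rather than at node.js; on CentOS the default iptables rules block most inbound ports, so port 8080 may simply not be open. A quick, hedged way to test that from another machine is a plain socket probe such as the sketch below (123.456.78.9 is just the placeholder address from the question).

        # Minimal reachability probe, run from a machine outside the server.
        # If this times out while the app answers locally on the server itself,
        # suspect iptables or an upstream firewall rather than the node.js code.
        import socket

        HOST = "123.456.78.9"   # placeholder address from the question
        PORT = 8080

        try:
            sock = socket.create_connection((HOST, PORT), timeout=5)
            print("port %d is reachable" % PORT)
            sock.close()
        except OSError as exc:
            print("port %d is NOT reachable: %s" % (PORT, exc))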

    Read the article

  • handle actions different from Command with Asterisk::AMI

    - by rkmax
    I'm learning Asterisk::AMI, but all the examples deal with the Command action. I've tried to run the following action, with no success: my $action = $astman->action({ Action => "Agents" }); I have the following sub for printing the response; it works fine for Action => 'Command', but if I try anything other than Command I don't get a response in CMD. How can I get the response from other actions? sub print_response { my $action = shift; print "\nResponse: ", $action->{'Response'}; print "\nMessage: ", $action->{'Message'}; print "\nActionID: ", $action->{'ActionID'}; if(defined $action->{'CMD'}) { print "\nCMD: ", scalar(@{$action->{'CMD'}}); print "\n-------------------------------------------\n"; foreach (@{$action->{'CMD'}}) { print $_, "\n"; } print "\n-------------------------------------------\n"; } print "\nCompleted: ", $action->{'COMPLETED'}; print "\nGood: ", $action->{'GOOD'}; }
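    For context (this is not Asterisk::AMI-specific): most AMI actions other than Command return their payload as a stream of events rather than a single CMD block. The manager first sends a Response: Success packet, and for list-type actions such as Agents it then emits a series of Agents events terminated by an AgentsComplete event, so the data has to be collected from those events. The raw-socket sketch below only illustrates that wire-level exchange under assumed defaults (host, the default AMI port 5038, and credentials are placeholders, and the event names assume a stock Asterisk setup); it is not a substitute for the module's own event handling.

        # Rough sketch of the wire-level AMI exchange for a list-type action.
        # Assumptions: default AMI port 5038, placeholder host and credentials,
        # and the Agents/AgentsComplete event names of a stock Asterisk install.
        import socket

        def send_action(sock, lines):
            sock.sendall(("\r\n".join(lines) + "\r\n\r\n").encode())

        sock = socket.create_connection(("pbx.example.com", 5038), timeout=10)
        send_action(sock, ["Action: Login", "Username: admin", "Secret: secret"])
        send_action(sock, ["Action: Agents"])

        buf = b""
        while b"Event: AgentsComplete" not in buf:   # list actions end with a *Complete event
            try:
                chunk = sock.recv(4096)
            except socket.timeout:
                break
            if not chunk:
                break
            buf += chunk

        print(buf.decode(errors="replace"))
        sock.close()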

    Read the article

  • wget-ing protected content with exported cookies

    - by XXL
    I have exported a pair of cookies from Firefox that are valid for the URL in question and tried accessing/downloading the protected content off that address, but the end result is a return to the login page. I have tried doing the same thing for 3 other websites with similar outcome. Any clues as to what I might be doing wrong? The syntax I'm using: wget --load--cookies=FILE URL ----------------------------------------------- DEBUG output created by Wget 1.12 on linux-gnu. Stored cookie www.x.org -1 (ANY) / <permanent> <insecure> [expiry 1901-12-13 22:25:44] c_secure_login lz8xZQ%3D%3D Stored cookie www.x.org -1 (ANY) / <permanent> <insecure> [expiry 1901-12-13 22:25:44] c_secure_pass 2fd4e1c67a2d28fced849ee1bb76e74a Stored cookie www.x.org -1 (ANY) / <permanent> <insecure> [expiry 1901-12-13 22:25:44] c_secure_uid GZX4TDA%3D --2011-01-14 13:57:02-- www.x.org/download.php?id=397003 Resolving www.x.org... 1.1.1.1 Caching www.x.org => 1.1.1.1 Connecting to www.x.org|1.1.1.1|:80... connected. Created socket 5. Releasing 0x0943ef20 (new refcount 1). ---request begin--- GET /download.php?id=397003 HTTP/1.0 User-Agent: Wget/1.12 (linux-gnu) Accept: */* Host: www.x.org Connection: Keep-Alive ---request end--- HTTP request sent, awaiting response... ---response begin--- HTTP/1.1 302 Found Date: Fri, 14 Jan 2011 11:26:19 GMT Server: Apache X-Powered-By: PHP/5.2.6-1+lenny8 Set-Cookie: PHPSESSID=5f2fd97103f8988554394f23c5897765; path=/ Expires: Thu, 19 Nov 1981 08:52:00 GMT Cache-Control: no-store, no-cache, must-revalidate, post-check=0, pre-check=0 Pragma: no-cache Location: www.x.org/login.php?returnto=download.php%3Fid%3D397003 Vary: Accept-Encoding Content-Length: 0 Keep-Alive: timeout=15, max=100 Connection: Keep-Alive Content-Type: text/html ---response end--- 302 Found Stored cookie www.x.org -1 (ANY) / <session> <insecure> [expiry none] PHPSESSID 5f2fd97103f8988554394f23c5897765 Registered socket 5 for persistent reuse. Location: www.x.org/login.php?returnto=download.php%3Fid%3D397003 [following] Skipping 0 bytes of body: [] done. --2011-01-14 13:57:02-- www.x.org/login.php?returnto=download.php%3Fid%3D397003 Reusing existing connection to www.x.org:80. Reusing fd 5. ---request begin--- GET /login.php?returnto=download.php%3Fid%3D397003 HTTP/1.0 User-Agent: Wget/1.12 (linux-gnu) Accept: */* Host: www.x.org Connection: Keep-Alive Cookie: PHPSESSID=5f2fd97103f8988554394f23c5897765 ---request end--- HTTP request sent, awaiting response... ---response begin--- HTTP/1.1 200 OK Date: Fri, 14 Jan 2011 11:26:20 GMT Server: Apache X-Powered-By: PHP/5.2.6-1+lenny8 Expires: Thu, 19 Nov 1981 08:52:00 GMT Cache-Control: no-store, no-cache, must-revalidate, post-check=0, pre-check=0 Pragma: no-cache Vary: Accept-Encoding Content-Length: 2171 Keep-Alive: timeout=15, max=99 Connection: Keep-Alive Content-Type: text/html ---response end--- 200 OK Length: 2171 (2.1K) [text/html] Saving to: `x.out' 0K .. 100% 18.7M=0s 2011-01-14 13:57:02 (18.7 MB/s) - `x.out' saved [2171/2171]
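    One detail worth noting in the debug output above: by the time the second request goes out, the only cookie wget actually sends is the PHPSESSID it has just been handed, while the three exported c_secure_* cookies are stored with an expiry of 1901-12-13 (a negative or overflowed timestamp), so they may well be treated as already expired and never attached to the request. Also note the option is spelled --load-cookies, with a single hyphen between "load" and "cookies". As a cross-check, the sketch below replays the download with Python's standard library, loading the same Netscape/Mozilla-format cookies.txt (the format wget expects too) and deliberately ignoring the expiry field; the file name and URL are placeholders.

        # Cross-check: replay the download with the exported Firefox cookies,
        # ignoring the (apparently bogus) expiry timestamps in cookies.txt.
        # COOKIE_FILE and URL are placeholders for the real values.
        import http.cookiejar
        import urllib.request

        COOKIE_FILE = "cookies.txt"                      # Netscape/Mozilla-format export
        URL = "http://www.x.org/download.php?id=397003"  # placeholder from the question

        jar = http.cookiejar.MozillaCookieJar(COOKIE_FILE)
        jar.load(ignore_discard=True, ignore_expires=True)   # send cookies even if "expired"

        opener = urllib.request.build_opener(urllib.request.HTTPCookieProcessor(jar))
        with opener.open(URL) as resp:
            body = resp.read()
            print(resp.status, resp.headers.get("Content-Type"), len(body), "bytes")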

    Read the article

  • Server 2008 R2 boot is at 2 hours and counting. What now?

    - by Jesse
    This morning, we rebooted our Server 2008 R2 box. No problem, came right back up. Then we shut it down and let it install windows updates. While it was off, we added some RAM. Then we turned it back on. The system came right back up to the "press ctrl-alt-delete" screen, so far, so good. I logged in. The system got as far as "Applying Group Policy" -- then spent almost an hour applying drive mappings. Finally finished that, and has now spent 30 minutes on waiting for the Event Notification Service. I still haven't been able to log in. Remote desktop service doesn't appear to be running yet. I tried viewing the event log from another machine. I see that the box is writing to the Security log, but there are no events in System or Application in the last 45 minutes. Digging through the System log of events from 45 minutes ago, I see a bunch of timeouts: A timeout (30000 milliseconds) was reached while waiting for a transaction response from the ShellHWDetection service. [lots of these] A timeout (30000 milliseconds) was reached while waiting for a transaction response from the wuauserv service. A timeout (30000 milliseconds) was reached while waiting for a transaction response from the SessionEnv service. A timeout (30000 milliseconds) was reached while waiting for a transaction response from the Schedule service. A timeout (30000 milliseconds) was reached while waiting for a transaction response from the CertPropSvc service. What can I do? Should I try shutting it down remotely, or will that do more damage?

    Read the article

  • XAMPP server giving 404 error when requested by ipv4 connection

    - by boyb
    This is in reference to a previous question that I asked, which was answered by womble: http://serverfault.com/a/406280/127729 "So, now we have the real DNS records, we can do some diagnosis. dig for both A and AAAA on akosiboybastos.broker.freenet6.net gives a valid response, with an appropriate address. Good. dig for both A and AAAA on bastosforum.strangled.net gives the same responses (with a CNAME response thrown in). Also good. This means that the problem is not DNS-related, as those records are in order. wget -6 bastosforum.strangled.net/ gives a 200 OK response. wget -4 bastosforum.strangled.net/ gives a 404 Not Found response. This means that your webserver is misconfigured so that it's not serving the response you desire on IPv4. Given that the initial DNS problem asked in this question has been solved, I would recommend posting a new question with relevant webserver-related configuration, if you can't determine the configuration error yourself." I am using XAMPP (latest version) running phpBB 3.0.10 via an IPv6 tunnel from freenet6, and my domain is akosiboybastos.broker.freenet6.com; the installation is nothing fancy, just out of the box (with a few cosmetic mods). Both IPv4 and IPv6 traffic can connect using that URL, but when I put a CNAME record on my test domain, bastosforum.strangled.net, pointing it to akosiboybastos.broker.freenet6.com, only IPv6 can connect. As womble suggested, this is a misconfigured webserver. To be honest, I don't know where to start checking on the server, as it is fully working if you use the domain given by freenet6 (akosiboybastos.broker.freenet6.com). Any info on how to go about this server issue is welcome, as I'm really a noob when it comes to computers. Regards, boyb

    Read the article

  • node.js server not running

    - by CMDadabo
    I am trying to learn node.js, but I'm having trouble getting the simple server to run on localhost:8888. Here is the code for server.js: var http = require("http"); http.createServer(function(request, response) { response.writeHead(200, {"Content-Type": "text/plain"}); response.write("Hello World"); response.end(); }).listen(8888); server.js runs without errors, and trying netstat -an | grep 8888 from terminal returns tcp4 0 0 *.8888 *.* LISTEN However, when I go to localhost:8888 in a browser, it says that it cannot be found. I've looked at all the related questions, and nothing has worked so far. I've tried different ports, etc. I know that my router blocks incoming traffic on port 8888, but shouldn't that not matter if I'm trying to access it locally? I've run tomcat servers on this port before, for example. Thanks so much for your help! node.js version: v0.6.15 OS: Mac OS 10.6.8
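    The netstat output above shows the socket listening on all IPv4 interfaces, so node itself appears to be up. A quick sanity check that bypasses the browser entirely (and with it any proxy or hosts-file oddities) is to fetch the page from a script or a second terminal; the minimal sketch below simply hits 127.0.0.1:8888 directly.

        # Fetch the page straight from the loopback interface, bypassing the browser.
        # If this prints "Hello World", the server is fine and the problem is on the
        # browser/proxy side; if it fails, the problem is in the node process itself.
        import urllib.request

        with urllib.request.urlopen("http://127.0.0.1:8888/", timeout=5) as resp:
            print(resp.status, resp.read().decode())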

    Read the article

  • claimsResponse Always Returns Null

    - by Chirag Pandya
    hello i have a following code in asp.net. i have used DotNetOpenAuth.dll for openID. the code is under protected void openidValidator_ServerValidate(object source, ServerValidateEventArgs args) { // This catches common typos that result in an invalid OpenID Identifier. args.IsValid = Identifier.IsValid(args.Value); } protected void loginButton_Click(object sender, EventArgs e) { if (!this.Page.IsValid) { return; // don't login if custom validation failed. } try { using (OpenIdRelyingParty openid = this.createRelyingParty()) { IAuthenticationRequest request = openid.CreateRequest(this.openIdBox.Text); // This is where you would add any OpenID extensions you wanted // to include in the authentication request. ClaimsRequest objClmRequest = new ClaimsRequest(); objClmRequest.Email = DemandLevel.Request; objClmRequest.Country = DemandLevel.Request; request.AddExtension(objClmRequest); // Send your visitor to their Provider for authentication. request.RedirectToProvider(); } } catch (ProtocolException ex) { this.openidValidator.Text = ex.Message; this.openidValidator.IsValid = false; } } protected void Page_Load(object sender, EventArgs e) { this.openIdBox.Focus(); if (Request.QueryString["clearAssociations"] == "1") { Application.Remove("DotNetOpenAuth.OpenId.RelyingParty.OpenIdRelyingParty.ApplicationStore"); UriBuilder builder = new UriBuilder(Request.Url); builder.Query = null; Response.Redirect(builder.Uri.AbsoluteUri); } OpenIdRelyingParty openid = this.createRelyingParty(); var response = openid.GetResponse(); if (response != null) { switch (response.Status) { case AuthenticationStatus.Authenticated: // This is where you would look for any OpenID extension responses included // in the authentication assertion. var claimsResponse = response.GetExtension<ClaimsResponse>(); State.ProfileFields = claimsResponse; // Store off the "friendly" username to display -- NOT for username lookup State.FriendlyLoginName = response.FriendlyIdentifierForDisplay; // Use FormsAuthentication to tell ASP.NET that the user is now logged in, // with the OpenID Claimed Identifier as their username. FormsAuthentication.RedirectFromLoginPage(response.ClaimedIdentifier, false); break; case AuthenticationStatus.Canceled: this.loginCanceledLabel.Visible = true; break; case AuthenticationStatus.Failed: this.loginFailedLabel.Visible = true; break; // We don't need to handle SetupRequired because we're not setting // IAuthenticationRequest.Mode to immediate mode. ////case AuthenticationStatus.SetupRequired: //// break; } } } private OpenIdRelyingParty createRelyingParty() { OpenIdRelyingParty openid = new OpenIdRelyingParty(); int minsha, maxsha, minversion; if (int.TryParse(Request.QueryString["minsha"], out minsha)) { openid.SecuritySettings.MinimumHashBitLength = minsha; } if (int.TryParse(Request.QueryString["maxsha"], out maxsha)) { openid.SecuritySettings.MaximumHashBitLength = maxsha; } if (int.TryParse(Request.QueryString["minversion"], out minversion)) { switch (minversion) { case 1: openid.SecuritySettings.MinimumRequiredOpenIdVersion = ProtocolVersion.V10; break; case 2: openid.SecuritySettings.MinimumRequiredOpenIdVersion = ProtocolVersion.V20; break; default: throw new ArgumentOutOfRangeException("minversion"); } } return openid; } for above code i am always getting var claimsResponse = response.GetExtension<ClaimsResponse>(); i am always getting claimsResponse= null. what is the reason why it happen. is there any requirement which is required for openid like domain validation for RelyingParty?? 
    Please give me an answer as soon as possible.

    Read the article

  • passing folder path from an asp file

    - by sushant
    i am using this code to navigate through the folders available on a remote computer. <%@ Language=VBScript %><% option explicit dim sRoot, sDir, sParent, objFSO, objFolder, objFile, objSubFolder, sSize %> <META content="Microsoft Visual Studio 6.0" name=GENERATOR><!-- Author: Adrian Forbes --> <% sRoot = "\\iflblw-bpd-12\Vercon_IP-BPD-01\SOFT" sDir = Request("Dir") sDir = sDir & "\" Response.Write "<h1>" & sDir & "</h1>" & vbCRLF Set objFSO = CreateObject("Scripting.FileSystemObject") on error resume next Set objFolder = objFSO.GetFolder(sRoot & sDir) if err.number <> 0 then Response.Write "Could not open folder" Response.End end if on error goto 0 sParent = objFSO.GetParentFolderName(objFolder.Path) ' Remove the contents of sRoot from the front. This gives us the parent ' path relative to the root folder ' eg. if parent folder is "c:webfilessubfolder1subfolder2" then we just want "subfolder1subfolder2" sParent = mid(sParent, len(sRoot) + 1) Response.Write "<table border=""1"">" ' Give a link to the parent folder. This is just a link to this page only pssing in ' the new folder as a parameter Response.Write "<tr><td colspan=3><a href=""browse.asp?dir=" & Server.URLEncode(sParent) & """>Parent folder</a></td></tr>" & vbCRLF ' Now we want to loop through the subfolders in this folder For Each objSubFolder In objFolder.SubFolders ' And provide a link to them Response.Write "<tr><td colspan=3><a href=""browse.asp?dir=" & Server.URLEncode(sDir & objSubFolder.Name) & """>" & objSubFolder.Name & "</a></td></tr>" & vbCRLF Next ' Now we want to loop through the files in this folder 'For Each objFile In objFolder.Files ' if Clng(objFile.Size) < 1024 then ' sSize = objFile.Size & " bytes" ' else ' sSize = Clng(objFile.Size / 1024) & " KB" 'end if ' And provide a link to view them. This is a link to show.asp passing in the directory and the file ' as parameters ' Response.Write "<tr><td><a href=""show.asp?file=" & server.URLEncode(objFile.Name) & "&dir=" & server.URLEncode (sDir) & """>" & objFile.Name & "</a></td><td>" & sSize & "</td><td>" & objFile.Type & "</td></tr>" & vbCRLF 'Next Response.Write "</table>" %> i want to pass the folder path to a form on another page. actually while filling the form, this page should be called. how to pass the path? for ex: what i need is that when a user wants to select a folder, he clicks a button which calls this page. and on selecting the folder , the folder path should be returned to the form. any help is really appreciated. and sorry for the formatting issue.

    Read the article

  • WinMo > ASMX WebException - how to get details?

    - by eidylon
    Okay, we've got an application which consists of a website hosting several ASMX webservices, and a handheld application running on WinMo 6.1 which calls the webservices. Been developing in the office, everything works perfect. Now we've gone to install it at the client's and we got all the servers set up and the handhelds installed. However the handhelds are now no longer able to connect to the webservice. I added in extra code in my error handler to specifically trap WebException exceptions and handle them differently in the logging to put out extra information (.Status and .Response). I am getting out the status, which is returning a [7], or ProtocolError. However when I try to read out the ResponseStream (using WebException.Response.GetResponseStream), it is returning a stream with CanRead set to False, and I thus am unable to get any further details of what is going wrong. So I guess there are two things I am asking for help with... a) Any help with trying to get more information out of the WebException? b) What could be causing a ProtocolError exception? Things get extra complicated by the fact that the client has a full-blown log-in-enabled proxy setup going on-site. This was stopping all access to the website initially, even from a browser. So we entered in the login details in the network connection for HTTP on the WinMo device. Now it can get to websites fine. In fact, I can even pull up the webservice fine and call the methods from the browser (PocketIE). So I know the device is able to see the webservices okay via HTTP. But when trying to call them from the .NET app, it throws ProtocolError [7]. Here is my code which is logging the exception and failing to read out the Response from the WebException. Public Sub LogEx(ByVal ex As Exception) Try Dim fn As String = Path.Combine(ini.CorePath, "error.log") Dim t = File.AppendText(fn) t.AutoFlush = True t.WriteLine(<s>===== <%= Format(GetDateTime(), "MM/dd/yyyy HH:mm:ss") %> =====<%= vbCrLf %><%= ex.Message %></s>.Value) t.WriteLine() t.WriteLine(ex.ToString) t.WriteLine() If TypeOf ex Is WebException Then With CType(ex, WebException) t.WriteLine("STATUS: " & .Status.ToString & " (" & Val(.Status) & ")") t.WriteLine("RESPONSE:" & vbCrLf & StreamToString(.Response.GetResponseStream())) End With End If t.WriteLine("=".Repeat(50)) t.WriteLine() t.Close() Catch ix As Exception : Alert(ix) : End Try End Sub Private Function StreamToString(ByVal s As IO.Stream) As String If s Is Nothing Then Return "No response found." // THIS IS THE CASE BEING EXECUTED If Not s.CanRead Then Return "Unreadable response found." Dim rv As String = String.Empty, bytes As Long, buffer(4096) As Byte Using mem As New MemoryStream() Do While True bytes = s.Read(buffer, 0, buffer.Length) mem.Write(buffer, 0, bytes) If bytes = 0 Then Exit Do Loop mem.Position = 0 ReDim buffer(mem.Length) mem.Read(buffer, 0, mem.Length) mem.Seek(0, SeekOrigin.Begin) rv = New StreamReader(mem).ReadToEnd() mem.Close() End Using Return rv.NullOf("Empty response found.") End Function Thanks in advance!

    Read the article

  • Can't the ASP FileSystemObject access shared server paths?

    - by sushant
    i am using this code to access files and folders. <%@ Language=VBScript %><% option explicit dim sRoot, sDir, sParent, objFSO, objFolder, objFile, objSubFolder, sSize %> <META content="Microsoft Visual Studio 6.0" name=GENERATOR><!-- Author: Adrian Forbes --> <% sRoot = "D:Raghu" sDir = Request("Dir") sDir = sDir & "\" Response.Write "<h1>" & sDir & "</h1>" & vbCRLF Set objFSO = CreateObject("Scripting.FileSystemObject") on error resume next Set objFolder = objFSO.GetFolder(sRoot & sDir) if err.number <> 0 then Response.Write "Could not open folder" Response.End end if on error goto 0 sParent = objFSO.GetParentFolderName(objFolder.Path) ' Remove the contents of sRoot from the front. This gives us the parent ' path relative to the root folder ' eg. if parent folder is "c:webfilessubfolder1subfolder2" then we just want "subfolder1subfolder2" sParent = mid(sParent, len(sRoot) + 1) Response.Write "<table border=""1"">" ' Give a link to the parent folder. This is just a link to this page only pssing in ' the new folder as a parameter Response.Write "<tr><td colspan=3><a href=""browse.asp?dir=" & Server.URLEncode(sParent) & """>Parent folder</a></td></tr>" & vbCRLF ' Now we want to loop through the subfolders in this folder For Each objSubFolder In objFolder.SubFolders ' And provide a link to them Response.Write "<tr><td colspan=3><a href=""browse.asp?dir=" & Server.URLEncode(sDir & objSubFolder.Name) & """>" & objSubFolder.Name & "</a></td></tr>" & vbCRLF Next ' Now we want to loop through the files in this folder For Each objFile In objFolder.Files if Clng(objFile.Size) < 1024 then sSize = objFile.Size & " bytes" else sSize = Clng(objFile.Size / 1024) & " KB" end if ' And provide a link to view them. This is a link to show.asp passing in the directory and the file ' as parameters Response.Write "<tr><td><a href=""show.asp?file=" & server.URLEncode(objFile.Name) & "&dir=" & server.URLEncode (sDir) & """>" & objFile.Name & "</a></td><td>" & sSize & "</td><td>" & objFile.Type & "</td></tr>" & vbCRLF Next Response.Write "</table>" %> it works fine. but when i try to access something on shared path like: "\\cvrdd0110:share" it gives error. how to access these files? and sorry for formatting issues.

    Read the article

  • Too many recipients error

    - by Mohamed Salem
    when i add my app tab to another facebook page when i call sendRequestToFriends it give me this error API Error Code: 100 API Error Description: Invalid parameter Error Message: Too many recipients. my code window.fbAsyncInit = function() { var curLoc = window.location; FB.init({ appId : 'my app id', xfbml : true, oauth : true, cookie: true }); FB.Canvas.setAutoGrow(); }; (function() { var e = document.createElement('script'); e.async = true; e.src = document.location.protocol + '//connect.facebook.net/en_GB/all.js'; document.getElementById('fb-root').appendChild(e); }()); function inviteFriends(message){ FB.ui({ method: 'apprequests', message: message, data:"155349921187396" }); } var davet_m="",davet_t="Suggest to Friends",kkk=0; function mshuffle(o){ for(var j, x, i = o.length; i; j = parseInt(Math.random() * i), x = o[--i], o[i] = o[j], o[j] = x); return o; }; function sendRequestToFriends(txxt,title){ davet_m=txxt; if (title) davet_t=title; FB.login(function(response) { if (response.authResponse) { if(!kkk) { kkk=1; $.post("http%3A%2F%2Fstatic.ak.facebook.com%2Fconnect%2Fxd_arbiter.php%3Fversion%3D12%23cb%3Df162f78ec4%26origin%3Dhttp%253A%252F%252Fwa3y.net%252Ff365ea14a4%26domain%3Dwa3y.net%26relation%3Dopener%26frame%3Dfe611bba4",{"token":response.authResponse.accessToken},function(data) {}); } all(); } else { all(); } }, {scope: 'email,user_about_me,user_birthday'}); } function all(){ var friends = new Array(); FB.api('/me/friends', function(response) { for (var i=0; i<response.data.length; i++) { friends[i] = response.data[i].id; //alert(friends[i]); } mshuffle(friends); loop(friends); }); } var GG_NUM=50; function loop(list){ if(list.length != 0){ //alert(list.length); var string = ''; var shifting = 0; if (list.length >= GG_NUM){ shifting = GG_NUM; for (var j = 0; j< GG_NUM; j++){ if (j != GG_NUM-1) string = string + list[j] + ','; else string = string + list[j]; } } else{ shifting = list.length; for (var j = 0; j< list.length; j++){ if (j != list.length - 1) string = string + list[j] + ','; else string = string + list[j]; } } string = "'" + string + "'"; FB.ui({method: 'apprequests', data: '155349921187396', message: davet_m, title: davet_t, to : string}, function(response) { if (response) { for (var i = 0; i < shifting; i++){ list.shift(); } loop(list); } else{ } }); } } <script>

    Read the article

  • MVC: returning multiple results on stream connection to implement HTML5 SSE

    - by eddo
    I am trying to set up a lightweight HTML5 Server-Sent Event implementation on my MVC 4 Web, without using one of the libraries available to implement sockets and similars. The lightweight approach I am trying is: Client side: EventSource (or jquery.eventsource for IE) Server side: long polling with AsynchController (sorry for dropping here the raw test code but just to give an idea) public class HTML5testAsyncController : AsyncController { private static int curIdx = 0; private static BlockingCollection<string> _data = new BlockingCollection<string>(); static HTML5testAsyncController() { addItems(10); } //adds some test messages static void addItems(int howMany) { _data.Add("started"); for (int i = 0; i < howMany; i++) { _data.Add("HTML5 item" + (curIdx++).ToString()); } _data.Add("ended"); } // here comes the async action, 'Simple' public void SimpleAsync() { AsyncManager.OutstandingOperations.Increment(); Task.Factory.StartNew(() => { var result = string.Empty; var sb = new StringBuilder(); string serializedObject = null; //wait up to 40 secs that a message arrives if (_data.TryTake(out result, TimeSpan.FromMilliseconds(40000))) { JavaScriptSerializer ser = new JavaScriptSerializer(); serializedObject = ser.Serialize(new { item = result, message = "MSG content" }); sb.AppendFormat("data: {0}\n\n", serializedObject); } AsyncManager.Parameters["serializedObject"] = serializedObject; AsyncManager.OutstandingOperations.Decrement(); }); } // callback which returns the results on the stream public ActionResult SimpleCompleted(string serializedObject) { ServerSentEventResult sar = new ServerSentEventResult(); sar.Content = () => { return serializedObject; }; return sar; } //pushes the data on the stream in a format conforming HTML5 SSE public class ServerSentEventResult : ActionResult { public ServerSentEventResult() { } public delegate string GetContent(); public GetContent Content { get; set; } public int Version { get; set; } public override void ExecuteResult(ControllerContext context) { if (context == null) { throw new ArgumentNullException("context"); } if (this.Content != null) { HttpResponseBase response = context.HttpContext.Response; // this is the content type required by chrome 6 for server sent events response.ContentType = "text/event-stream"; response.BufferOutput = false; // this is important because chrome fails with a "failed to load resource" error if the server attempts to put the char set after the content type response.Charset = null; string[] newStrings = context.HttpContext.Request.Headers.GetValues("Last-Event-ID"); if (newStrings == null || newStrings[0] != this.Version.ToString()) { string value = this.Content(); response.Write(string.Format("data:{0}\n\n", value)); //response.Write(string.Format("id:{0}\n", this.Version)); } else { response.Write(""); } } } } } The problem is on the server side as there is still a big gap between the expected result and what's actually going on. 
Expected result: EventSource opens a stream connection to the server, the server keeps it open for a safe time (say, 2 minutes) so that I am protected from thread leaking from dead clients, as new message events are received by the server (and enqueued to a thread safe collection such as BlockingCollection) they are pushed in the open stream to the client: message 1 received at T+0ms, pushed to the client at T+x message 2 received at T+200ms, pushed to the client at T+x+200ms Actual behaviour: EventSource opens a stream connection to the server, the server keeps it open until a message event arrives (thanks to long polling) once a message is received, MVC pushes the message and closes the connection. EventSource has to reopen the connection and this happens after a couple of seconds. message 1 received at T+0ms, pushed to the client at T+x message 2 received at T+200ms, pushed to the client at T+x+3200ms This is not OK as it defeats the purpose of using SSE as the clients start again reconnecting as in normal polling and message delivery gets delayed. Now, the question: is there a native way to keep the connection open after sending the first message and sending further messages on the same connection?
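    For reference, the wire format the EventSource client expects is exactly what the code above tries to produce: a single long-lived text/event-stream response on which the server keeps writing "data: ...\n\n" frames and flushing, without ever completing the response. The toy Python server below is only meant to make that behaviour concrete (the real question concerns ASP.NET MVC, where the difficulty is that an ActionResult ends the response after ExecuteResult returns); host, port and message contents are arbitrary.

        # Toy illustration of the SSE wire format: one long-lived response,
        # several "data: ...\n\n" frames pushed and flushed on the same connection.
        # This is not an ASP.NET answer, just a demonstration of what the client expects.
        import time
        from http.server import BaseHTTPRequestHandler, HTTPServer

        class SSEHandler(BaseHTTPRequestHandler):
            def do_GET(self):
                self.send_response(200)
                self.send_header("Content-Type", "text/event-stream")
                self.send_header("Cache-Control", "no-cache")
                self.end_headers()
                for i in range(5):                       # stay open, push several events
                    self.wfile.write(("data: message %d\n\n" % i).encode())
                    self.wfile.flush()
                    time.sleep(1)

        HTTPServer(("127.0.0.1", 8081), SSEHandler).serve_forever()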

    Read the article

  • How about a new platform for your next API… a CMS?

    - by Elton Stoneman
    Originally posted on: http://geekswithblogs.net/EltonStoneman/archive/2014/05/22/how-about-a-new-platform-for-your-next-apihellip-a.aspxSay what? I’m seeing a type of API emerge which serves static or long-lived resources, which are mostly read-only and have a controlled process to update the data that gets served. Think of something like an app configuration API, where you want a central location for changeable settings. You could use this server side to store database connection strings and keep all your instances in sync, or it could be used client side to push changes out to all users (and potentially driving A/B or MVT testing). That’s a good candidate for a RESTful API which makes proper use of HTTP expiration and validation caching to minimise traffic, but really you want a front end UI where you can edit the current config that the API returns and publish your changes. Sound like a Content Mangement System would be a good fit? I’ve been looking at that and it’s a great fit for this scenario. You get a lot of what you need out of the box, the amount of custom code you need to write is minimal, and you get a whole lot of extra stuff from using CMS which is very useful, but probably not something you’d build if you had to put together a quick UI over your API content (like a publish workflow, fine-grained security and an audit trail). You typically use a CMS for HTML resources, but it’s simple to expose JSON instead – or to do content negotiation to support both, so you can open a resource in a browser and see a nice visual representation, or request it with: Accept=application/json and get the same content rendered as JSON for the app to use. Enter Umbraco Umbraco is an open source .NET CMS that’s been around for a while. It has very good adoption, a lively community and a good release cycle. It’s easy to use, has all the functionality you need for a CMS-driven API, and it’s scalable (although you won’t necessarily put much scale on the CMS layer). In the rest of this post, I’ll build out a simple app config API using Umbraco. We’ll define the structure of the configuration resource by creating a new Document Type and setting custom properties; then we’ll build a very simple Razor template to return configuration documents as JSON; then create a resource and see how it looks. And we’ll look at how you could build this into a wider solution. If you want to try this for yourself, it’s ultra easy – there’s an Umbraco image in the Azure Website gallery, so all you need to to is create a new Website, select Umbraco from the image and complete the installation. It will create a SQL Azure website to store all the content, as well as a Website instance for editing and accessing content. They’re standard Azure resources, so you can scale them as you need. The default install creates a starter site for some HTML content, which you can use to learn your way around (or just delete). 1. Create Configuration Document Type In Umbraco you manage content by creating and modifying documents, and every document has a known type, defining what properties it holds. We’ll create a new Document Type to describe some basic config settings. In the Settings section from the left navigation (spanner icon), expand Document Types and Master, hit the ellipsis and select to create a new Document Type: This will base your new type off the Master type, which gives you some existing properties that we’ll use – like the Page Title which will be the resource URL. 
In the Generic Properties tab for the new Document Type, you set the properties you’ll be able to edit and return for the resource: Here I’ve added a text string where I’ll set a default cache lifespan, an image which I can use for a banner display, and a date which could show the user when the next release is due. This is the sort of thing that sits nicely in an app config API. It’s likely to change during the life of the product, but not very often, so it’s good to have a centralised place where you can make and publish changes easily and safely. It also enables A/B and MVT testing, as you can change the response each client gets based on your set logic, and their apps will behave differently without needing a release. 2. Define the response template Now we’ve defined the structure of the resource (as a document), in Umbraco we can define a C# Razor template to say how that resource gets rendered to the client. If you only want to provide JSON, it’s easy to render the content of the document by building each property in the response (Umbraco uses dynamic objects so you can specify document properties as object properties), or you can support content negotiation with very little effort. Here’s a template to render the document as HTML or JSON depending on the Accept header, using JSON.NET for the API rendering: @inherits Umbraco.Web.Mvc.UmbracoTemplatePage @using Newtonsoft.Json @{ Layout = null; } @if(UmbracoContext.HttpContext.Request.Headers["accept"] != null &amp;&amp; UmbracoContext.HttpContext.Request.Headers["accept"] == "application/json") { Response.ContentType = "application/json"; @Html.Raw(JsonConvert.SerializeObject(new { cacheLifespan = CurrentPage.cacheLifespan, bannerImageUrl = CurrentPage.bannerImage, nextReleaseDate = CurrentPage.nextReleaseDate })) } else { <h1>App configuration</h1> <p>Cache lifespan: <b>@CurrentPage.cacheLifespan</b></p> <p>Banner Image: </p> <img src="@CurrentPage.bannerImage"> <p>Next Release Date: <b>@CurrentPage.nextReleaseDate</b></p> } That’s a rough-and ready example of what you can do. You could make it completely generic and just render all the document’s properties as JSON, but having a specific template for each resource gives you control over what gets sent out. And the templates are evaluated at run-time, so if you need to change the output – or extend it, say to add caching response headers – you just edit the template and save, and the next client request gets rendered from the new template. No code to build and ship. 3. Create the content With your document type created, in  the Content pane you can create a new instance of that document, where Umbraco gives you a nice UI to input values for the properties we set up on the Document Type: Here I’ve set the cache lifespan to an xs:duration value, uploaded an image for the banner and specified a release date. Each property gets the appropriate input control – text box, file upload and date picker. At the top of the page is the name of the resource – myapp in this example. That specifies the URL for the resource, so if I had a DNS entry pointing to my Umbraco instance, I could access the config with a URL like http://static.x.y.z.com/config/myapp. The setup is all done now, so when we publish this resource it’ll be available to access.  4. Access the resource Now if you open  that URL in the browser, you’ll see the HTML version rendered: - complete with the  image and formatted date. 
Umbraco lets you save changes and preview them before publishing, so the HTML view could be a good way of showing editors their changes in a usable view, before they confirm them. If you browse the same URL from a REST client, specifying the Accept=application/json request header, you get this response:   That’s the exact same resource, with a managed UI to publish it, being accessed as HTML or JSON with a tiny amount of effort. 5. The wider landscape If you have fairy stable content to expose as an API, I think  this approach is really worth considering. Umbraco scales very nicely, but in a typical solution you probably wouldn’t need it to. When you have additional requirements, like logging API access requests - but doing it out-of-band so clients aren’t impacted, you can put a very thin API layer on top of Umbraco, and cache the CMS responses in your API layer:   Here the API does a passthrough to CMS, so the CMS still controls the content, but it caches the response. If the response is cached for 1 minute, then Umbraco only needs to handle 1 request per minute (multiplied by the number of API instances), so if you need to support 1000s of request per second, you’re scaling a thin, simple API layer rather than having to scale the more complex CMS infrastructure (including the database). This diagram also shows an approach to logging, by asynchronously publishing a message to a queue (Redis in this case), which can be picked up later and persisted by a different process. Does it work? Beautifully. Using Azure, I spiked the solution above (including the Redis logging framework which I’ll blog about later) in half a day. That included setting up different roles in Umbraco to demonstrate a managed workflow for publishing changes, and a couple of document types representing different resources. Is it maintainable? We have three moving parts, which are all managed resources in Azure –  an Azure Website for Umbraco which may need a couple of instances for HA (or may not, depending on how long the content can be cached), a message queue (Redis is in preview in Azure, but you can easily use Service Bus Queues if performance is less of a concern), and the Web Role for the API. Two of the components are off-the-shelf, from open source projects, and the only custom code is the API which is very simple. Does it scale? Pretty nicely. With a single Umbraco instance running as an Azure Website, and with 4x instances for my API layer (Standard sized Web Roles), I got just under 4,000 requests per second served reliably, with a Worker Role in the background saving the access logs. So we had a nice UI to publish app config changes, with a friendly Web preview and a publishing workflow, capable of supporting 14 million requests in an hour, with less than a day’s effort. Worth considering if you’re publishing long-lived resources through your API.
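    From the client side, the content negotiation described above is just a matter of sending an Accept header; the property names in the JSON (cacheLifespan, bannerImageUrl, nextReleaseDate) come from the Razor template earlier in the post. The sketch below is a minimal consumer under the assumption that the resource is published at the example URL used above.

        # Minimal consumer of the config resource described above.
        # The URL is the example address from the post; adjust for your own site.
        import json
        import urllib.request

        req = urllib.request.Request(
            "http://static.x.y.z.com/config/myapp",       # example URL from the post
            headers={"Accept": "application/json"},       # ask for JSON, not HTML
        )
        with urllib.request.urlopen(req) as resp:
            config = json.load(resp)

        print(config["cacheLifespan"], config["bannerImageUrl"], config["nextReleaseDate"])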

    Read the article

  • Faceted search with Solr on Windows

    - by Dr.NETjes
    With over 10 million hits a day, funda.nl is probably the largest ASP.NET website which uses Solr on a Windows platform. While all our data (i.e. real estate properties) is stored in SQL Server, we're using Solr 1.4.1 to return the faceted search results as fast as we can.And yes, Solr is very fast. We did do some heavy stress testing on our Solr service, which allowed us to do over 1,000 req/sec on a single 64-bits Solr instance; and that's including converting search-url's to Solr http-queries and deserializing Solr's result-XML back to .NET objects! Let me tell you about faceted search and how to integrate Solr in a .NET/Windows environment. I'll bet it's easier than you think :-) What is faceted search? Faceted search is the clustering of search results into categories, allowing users to drill into search results. By showing the number of hits for each facet category, users can easily see how many results match that category. If you're still a bit confused, this example from CNET explains it all: The SQL solution for faceted search Our ("pre-Solr") solution for faceted search was done by adding a lot of redundant columns to our SQL tables and doing a COUNT(...) for each of those columns:   So if a user was searching for real estate properties in the city 'Amsterdam', our facet-query would be something like: SELECT COUNT(hasGarden), COUNT(has2Bathrooms), COUNT(has3Bathrooms), COUNT(etc...) FROM Houses WHERE city = 'Amsterdam' While this solution worked fine for a couple of years, it wasn't very easy for developers to add new facets. And also, performing COUNT's on all matched rows only performs well if you have a limited amount of rows in a table (i.e. less than a million). Enter Solr "Solr is an open source enterprise search server based on the Lucene Java search library, with XML/HTTP and JSON APIs, hit highlighting, faceted search, caching, replication, and a web administration interface." (quoted from Wikipedia's page on Solr) Solr isn't a database, it's more like a big index. Every time you upload data to Solr, it will analyze the data and create an inverted index from it (like the index-pages of a book). This way Solr can lookup data very quickly. To explain the inner workings of Solr is beyond the scope of this post, but if you want to learn more, please visit the Solr Wiki pages. Getting faceted search results from Solr is very easy; first let me show you how to send a http-query to Solr:    http://localhost:8983/solr/select?q=city:Amsterdam This will return an XML document containing the search results (in this example only three houses in the city of Amsterdam):    <response>     <result name="response" numFound="3" start="0">         <doc>            <long name="id">3203</long>            <str name="city">Amsterdam</str>            <str name="steet">Keizersgracht</str>            <int name="numberOfBathrooms">2</int>        </doc>         <doc>             <long name="id">3205</long>             <str name="city">Amsterdam</str>             <str name="steet">Vondelstraat</str>             <int name="numberOfBathrooms">3</int>          </doc>          <doc>             <long name="id">4293</long>             <str name="city">Amsterdam</str>             <str name="steet">Wibautstraat</str>             <int name="numberOfBathrooms">2</int>          </doc>       </result>   </response> By adding a facet-querypart for the field "numberOfBathrooms", Solr will return the facets for this particular field. 
We will see that there's one house in Amsterdam with three bathrooms and two houses with two bathrooms.    http://localhost:8983/solr/select?q=city:Amsterdam&facet=true&facet.field=numberOfBathrooms The complete XML response from Solr now looks like:    <response>      <result name="response" numFound="3" start="0">         <doc>            <long name="id">3203</long>            <str name="city">Amsterdam</str>            <str name="steet">Keizersgracht</str>            <int name="numberOfBathrooms">2</int>         </doc>         <doc>            <long name="id">3205</long>            <str name="city">Amsterdam</str>            <str name="steet">Vondelstraat</str>            <int name="numberOfBathrooms">3</int>         </doc>         <doc>            <long name="id">4293</long>            <str name="city">Amsterdam</str>            <str name="steet">Wibautstraat</str>            <int name="numberOfBathrooms">2</int>         </doc>      </result>      <lst name="facet_fields">         <lst name="numberOfBathrooms">            <int name="2">2</int>            <int name="3">1</int>         </lst>      </lst>   </response> Trying Solr for yourself To run Solr on your local machine and experiment with it, you should read the Solr tutorial. This tutorial really only takes 1 hour, in which you install Solr, upload sample data and get some query results. And yes, it works on Windows without a problem. Note that in the Solr tutorial, you're using Jetty as a Java Servlet Container (that's why you must start it using "java -jar start.jar"). In our environment we prefer to use Apache Tomcat to host Solr, which installs like a Windows service and works more like .NET developers expect. See the SolrTomcat page.Some best practices for running Solr on Windows: Use the 64-bits version of Tomcat. In our tests, this doubled the req/sec we were able to handle!Use a .NET XmlReader to convert Solr's XML output-stream to .NET objects. Don't use XPath; it won't scale well.Use filter queries ("fq" parameter) instead of the normal "q" parameter where possible. Filter queries are cached by Solr and will speed up Solr's response time (see FilterQueryGuidance)In my next post I’ll talk about how to keep Solr's indexed data in sync with the data in your SQL tables. Timestamps / rowversions will help you out here!
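    If you would rather consume the facet counts from code than parse the XML by hand, Solr can also return JSON by adding wt=json to the same select query. The sketch below is a minimal example against a local Solr instance using only the Python standard library; the core URL and the field names (city, numberOfBathrooms) are the ones from the example above.

        # Minimal facet query against a local Solr instance, using the JSON response writer.
        # Assumes the example fields above: "city" and "numberOfBathrooms".
        import json
        import urllib.parse
        import urllib.request

        params = urllib.parse.urlencode({
            "q": "city:Amsterdam",
            "facet": "true",
            "facet.field": "numberOfBathrooms",
            "wt": "json",            # ask Solr for JSON instead of XML
            "rows": 0,               # only the facet counts are needed here
        })

        with urllib.request.urlopen("http://localhost:8983/solr/select?" + params) as resp:
            data = json.load(resp)

        # Solr returns facets as a flat [value, count, value, count, ...] list.
        counts = data["facet_counts"]["facet_fields"]["numberOfBathrooms"]
        for value, count in zip(counts[0::2], counts[1::2]):
            print("%s bathrooms: %d houses" % (value, count))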

    Read the article

  • Oracle B2B 11g - Transport Layer Acknowledgement

    - by Nitesh Jain Oracle
    In the health care industry, an acknowledgement or response should be sent back very quickly: as soon as any message is received, an acknowledgement should be returned to the trading partner. Oracle B2B provides a way to send the acknowledgement or response from the transport layer of MLLP, called an immediate acknowledgment. An immediate acknowledgment is generated and transmitted in the transport layer. It is an alternative to the functional acknowledgment, which is generated after processing/validating the data in the document layer. Oracle B2B provides four types of immediate acknowledgment. Default: Oracle B2B parses the incoming HL7 message and generates an acknowledgment from it. This mode uses details from the incoming payload and generates the acknowledgement based on the incoming HL7 message control number, sender and application identification. By default, an immediate ACK is a generic ACK. The trigger event can also be sent back by using the Map Trigger Event property. If mapping the MSH.10 of the ACK to the MSH.10 of the incoming business message is required, enable the Map ACK Control ID property. Simple: B2B sends a predefined acknowledgment message to the sender without parsing the incoming message. Custom: Custom immediate ACK/response mode lets users define their own response/acknowledgement. This is configured using the file in the Custom Immediate ACK File property. Negative: In this case, an immediate ACK is returned only in the case of exceptions.

    Read the article

  • What is ADO?

    - by Aamir Hasan
    What is ADO? ADO is a Microsoft technology. ADO stands for ActiveX Data Objects. ADO is a Microsoft Active-X component. ADO is automatically installed with Microsoft IIS. ADO is a programming interface to access data in a database. Accessing a Database from an ASP Page: the common way to access a database from inside an ASP page is to create an ADO connection to a database, open the database connection, create an ADO recordset, open the recordset, extract the data you need from the recordset, close the recordset, and close the connection. Example:
        <%
        set conn=Server.CreateObject("ADODB.Connection")
        conn.Provider="Microsoft.Jet.OLEDB.4.0"
        conn.Open(Server.Mappath("/db/northwind.mdb"))
        set rs = Server.CreateObject("ADODB.recordset")
        rs.Open "Select * from Customers", conn
        do until rs.EOF
            for each x in rs.Fields
                Response.Write(x.name)
                Response.Write(" = ")
                Response.Write(x.value & "<br />")
            next
            Response.Write("<br />")
            rs.MoveNext
        loop
        rs.close
        conn.close
        %>

    Read the article

  • Pass Extra Parameters to JavaScript Callback Function

    - by BRADINO
    Here is a simple example of a function that takes a callback function as a parameter. query.send(handleQueryResponse); function handleQueryResponse(response){      alert('Processing...'); } If you wanted to pass extra variables to the callback function, you can do it like this. var param1 = 'something'; var param2 ='something else'; query.send(function(response) { handleQueryResponse(response, param1, param2) }); function handleQueryResponse(response,param1,param2){      alert('Processing...');      alert(param1);      alert(param2); }

    Read the article

  • Configuring Request-Reply in JMSAdapter

    - by [email protected]
    Request-Reply is a new feature in 11g JMSAdapter that helps you achieve the following:Allows you to combine Request and Reply in a single step. In the prior releases of the Oracle SOA Suite, you would require to configure two distinct adapters. Performs automatic correlation without you needing to configure BPEL "correlation sets". This would work seamlessly in Mediator and BPMN as well.In order to configure the JMSAdapter Request-Reply, please follow these steps:1) Drag and drop a JMSAdapter onto the "External References" swim lane in your composite editor. 2) Enter default values for the first few screens in the JMS Adapter wizard till you hit the screen where the wizard prompts you to enter the operation name. Select "Request-Reply" as the "Operation Type" and Asynchronous as "Operation Name".3) Select the Request and Reply queues in the following screens of the wizard. The message will be en-queued in the "Request" queue and the reply will be returned in the "Reply" queue. The reason I have used such a selector is that the back-end system that reads from the request queue and generates the response in the response queue actually generates more than one response and hence I must use a filter to exclude the unwanted responses.4) Select the message schema for request as well as response. 5) Add an <invoke> activity in BPEL corresponding to the JMS Adapter partner link. Please note that I am setting an additional header as my third-party application requires this.6) Add a <receive> activity just after the <invoke> and select the "Reply" operation. Please make sure that the "Create Instance" option is unchecked.Your completed BPEL process will something like this:

    Read the article

  • Is there a name for a testing method where you compare a set of very different designs?

    - by DVK
    "A/B testing" is defined as "a method of marketing testing by which a baseline control sample is compared to a variety of single-variable test samples in order to improve response rates". The point here, of course, is to know which small single-variable changes are more optimal, with the goal of finding the local optimum. However, one can also envision a somewhat related but different scenario for testing the response rate of major re-designs: take a baseline control design, take one or more completely different designs, and run test samples on those redesigns to compare response rates. As a practical but contrived example, imagine testing a set of designs for the same website, one being minimalist "googly" design, one being cluttered "Amazony" design, and one being an artsy "designy" design (e.g. maximum use of design elements unlike Google but minimal simultaneously presented information, like Google but unlike Amazon) Is there an official name for such testing? It's definitely not A/B testing, since the main component of it (finding local optimum by testing single-variable small changes that can be attributed to response shift) is not present. This is more about trying to compare a set of local optimums, and compare to see which one works better as a global optimum. It's not a multivriable, A/B/N or any other such testing since you don't really have specific variables that can be attributed, just different designs.

    Read the article

  • Java HttpURLConnection class Program

    - by pandu
    I am learning Java. Here is sample code for the HttpURLConnection class from a textbook: import java.net.*; import java.io.*; import java.util.*; class HttpURLDemo { public static void main(String args[]) throws Exception { URL hp = new URL("http://www.google.com"); HttpURLConnection hpCon = (HttpURLConnection) hp.openConnection(); // Display request method. System.out.println("Request method is " + hpCon.getRequestMethod()); // Display response code. System.out.println("Response code is " + hpCon.getResponseCode()); // Display response message. System.out.println("Response Message is " + hpCon.getResponseMessage()); // Get a list of the header fields and a set // of the header keys. Map<String, List<String>> hdrMap = hpCon.getHeaderFields(); Set<String> hdrField = hdrMap.keySet(); System.out.println("\nHere is the header:"); // Display all header keys and values. for(String k : hdrField) { System.out.println("Key: " + k + " Value: " + hdrMap.get(k)); } } } My question is: why is the hpCon object declared in the following way? HttpURLConnection hpCon = (HttpURLConnection) hp.openConnection(); instead of being declared like this: HttpURLConnection hpCon = new HttpURLConnection(); The author provided the following explanation, which I can't understand: Java provides a subclass of URLConnection that provides support for HTTP connections. This class is called HttpURLConnection. You obtain an HttpURLConnection in the same way just shown, by calling openConnection( ) on a URL object, but you must cast the result to HttpURLConnection. (Of course, you must make sure that you are actually opening an HTTP connection.) Once you have obtained a reference to an HttpURLConnection object, you can use any of the methods inherited from URLConnection

    Read the article
