Search Results

Search found 58228 results on 2330 pages for 'http api'.

Page 87/2330

  • How do I get all the calendar entries for a particular time range using Google Calendar API

    - by BeWarned
    I want to view events over a specific time range for a specific calendar, but I am having trouble using the API. It is a generic API, and it reminds me of using the DOM. The problem is that it seems difficult to work with because much of the information is in generic base classes. How do I get the events for a calendar using Groovy or Java? Does anybody have an example of passing credentials using curl? Example code would be appreciated.
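
    A minimal sketch of such a range query with the old GData Java client (the library this generation of the Calendar API shipped with); the app name, credentials, feed URL, and dates below are placeholders:

    import java.net.URL;
    import com.google.gdata.client.calendar.CalendarQuery;
    import com.google.gdata.client.calendar.CalendarService;
    import com.google.gdata.data.DateTime;
    import com.google.gdata.data.calendar.CalendarEventEntry;
    import com.google.gdata.data.calendar.CalendarEventFeed;

    public class RangeQuery {
        public static void main(String[] args) throws Exception {
            CalendarService service = new CalendarService("example-app");   // placeholder app name
            service.setUserCredentials("user@example.com", "password");     // placeholder credentials

            // The "private/full" feed of the calendar whose events we want
            URL feedUrl = new URL("https://www.google.com/calendar/feeds/user@example.com/private/full");
            CalendarQuery query = new CalendarQuery(feedUrl);
            query.setMinimumStartTime(DateTime.parseDateTime("2014-06-01T00:00:00Z"));
            query.setMaximumStartTime(DateTime.parseDateTime("2014-06-30T23:59:59Z"));

            // Events whose start time falls inside the window come back as typed entries
            CalendarEventFeed feed = service.query(query, CalendarEventFeed.class);
            for (CalendarEventEntry entry : feed.getEntries()) {
                System.out.println(entry.getTitle().getPlainText());
            }
        }
    }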

    Read the article

  • /phpTest/zologize/axa.php? Another botnet?

    - by M132
    Staring at the log made me think: what is /phpTest/zologize/axa.php, and why are bots looking for it? Previously, I had lots of /HNAP1/ requests. Requesting /HNAP1/ from the IPs in the log revealed that all of them were sent by Linksys routers. Three months later, these requests turned out to be generated by a router worm called TheMoon. But requesting /phpTest/zologize/axa.php from these servers returns a 404 error. How did these servers get infected, and how can I protect mine?

    124.11.224.69 - - [02/Feb/2014:00:37:16 +0100] "GET /phpTest/zologize/axa.php HTTP/1.1" 404 168 "-" "-"
    140.113.238.121 - - [21/Feb/2014:01:24:32 +0100] "GET /phpTest/zologize/axa.php HTTP/1.1" 404 168 "-" "-"
    77.121.132.79 - - [22/Feb/2014:00:03:56 +0100] "GET /phpTest/zologize/axa.php HTTP/1.1" 404 168 "-" "-"
    142.4.201.210 - - [24/Feb/2014:21:54:33 +0100] "GET /phpTest/zologize/axa.php HTTP/1.1" 404 168 "-" "-"
    212.83.168.39 - - [24/Feb/2014:23:16:00 +0100] "GET /phpTest/zologize/axa.php HTTP/1.1" 404 168 "-" "-"
    87.117.229.210 - - [26/Feb/2014:06:34:58 +0100] "GET /phpTest/zologize/axa.php HTTP/1.1" 404 162 "-" "-"
    78.100.82.99 - - [26/Feb/2014:08:25:48 +0100] "GET /phpTest/zologize/axa.php HTTP/1.1" 404 162 "-" "-"
    198.50.205.219 - - [26/Feb/2014:09:59:11 +0100] "GET /phpTest/zologize/axa.php HTTP/1.1" 404 162 "-" "-"
    210.60.142.107 - - [27/Feb/2014:00:12:12 +0100] "GET /phpTest/zologize/axa.php HTTP/1.1" 404 162 "-" "-"
    101.109.4.73 - - [27/Feb/2014:08:50:46 +0100] "GET /phpTest/zologize/axa.php HTTP/1.1" 404 162 "-" "-"
    61.91.128.158 - - [27/Feb/2014:08:59:15 +0100] "GET /phpTest/zologize/axa.php HTTP/1.1" 404 162 "-" "-"
    201.188.41.175 - - [27/Feb/2014:11:25:42 +0100] "GET /phpTest/zologize/axa.php HTTP/1.1" 404 162 "-" "-"
    220.133.137.2 - - [27/Feb/2014:12:12:46 +0100] "GET /phpTest/zologize/axa.php HTTP/1.1" 404 162 "-" "-"
    203.156.104.88 - - [28/Feb/2014:18:11:49 +0100] "GET /phpTest/zologize/axa.php HTTP/1.1" 404 162 "-" "-"
    61.19.52.58 - - [28/Feb/2014:22:02:56 +0100] "GET /phpTest/zologize/axa.php HTTP/1.1" 404 162 "-" "-"
    84.2.92.40 - - [28/Feb/2014:23:04:17 +0100] "GET /phpTest/zologize/axa.php HTTP/1.1" 404 162 "-" "-"
    58.64.205.11 - - [01/Mar/2014:06:08:33 +0100] "GET /phpTest/zologize/axa.php HTTP/1.1" 200 21 "-" "-"
    113.61.200.151 - - [01/Mar/2014:18:25:25 +0100] "GET /phpTest/zologize/axa.php HTTP/1.1" 200 21 "-" "-"
    178.33.219.12 - - [03/Mar/2014:14:41:48 +0100] "GET /phpTest/zologize/axa.php HTTP/1.1" 200 21 "-" "-"
    74.63.220.132 - - [04/Mar/2014:01:16:44 +0100] "GET /phpTest/zologize/axa.php HTTP/1.1" 200 21 "-" "-"
    187.141.230.106 - - [04/Mar/2014:15:39:26 +0100] "GET /phpTest/zologize/axa.php HTTP/1.1" 200 21 "-" "-"
    103.22.181.146 - - [09/May/2014:17:16:56 +0200] "GET /phpTest/zologize/axa.php HTTP/1.1" 502 166 "-" "-"
    176.31.200.14 - - [10/May/2014:19:52:24 +0200] "GET /phpTest/zologize/axa.php HTTP/1.1" 200 68 "-" "-"
    124.120.92.70 - - [12/May/2014:16:19:40 +0200] "GET /phpTest/zologize/axa.php HTTP/1.1" 200 68 "-" "-"
    219.85.198.142 - - [15/May/2014:19:21:22 +0200] "GET /phpTest/zologize/axa.php HTTP/1.1" 200 37 "-" "-"
    80.84.53.226 - - [23/May/2014:08:58:25 +0200] "GET /phpTest/zologize/axa.php HTTP/1.1" 200 37 "-" "-"
    87.213.11.165 - - [25/May/2014:06:20:27 +0200] "GET /phpTest/zologize/axa.php HTTP/1.1" 200 37 "-" "-"
    122.116.220.106 - - [25/May/2014:07:10:21 +0200] "GET /phpTest/zologize/axa.php HTTP/1.1" 200 37 "-" "-"
    58.8.128.30 - - [29/May/2014:02:43:49 +0200] "GET /phpTest/zologize/axa.php HTTP/1.1" 200 37 "-" "-"
    142.4.197.135 - - [29/May/2014:11:36:45 +0200] "GET /phpTest/zologize/axa.php HTTP/1.1" 200 37 "-" "-"
    178.32.243.65 - - [30/May/2014:01:59:53 +0200] "GET /phpTest/zologize/axa.php HTTP/1.1" 200 37 "-" "-"
    58.8.164.221 - - [30/May/2014:14:04:16 +0200] "GET /phpTest/zologize/axa.php HTTP/1.1" 200 37 "-" "-"
    140.127.182.15 - - [01/Jun/2014:14:45:40 +0200] "GET /phpTest/zologize/axa.php HTTP/1.1" 200 37 "-" "-"
    218.166.43.21 - - [01/Jun/2014:16:07:52 +0200] "GET /phpTest/zologize/axa.php HTTP/1.1" 200 37 "-" "-"
    178.32.188.140 - - [01/Jun/2014:19:11:46 +0200] "GET /phpTest/zologize/axa.php HTTP/1.1" 200 37 "-" "-"
    94.23.211.173 - - [05/Jun/2014:00:52:52 +0200] "GET /phpTest/zologize/axa.php HTTP/1.1" 200 37 "-" "-"
    120.117.105.201 - - [05/Jun/2014:04:39:39 +0200] "GET /phpTest/zologize/axa.php HTTP/1.1" 200 37 "-" "-"
    187.172.27.146 - - [05/Jun/2014:10:20:22 +0200] "GET /phpTest/zologize/axa.php HTTP/1.1" 200 37 "-" "-"
    203.195.219.91 - - [05/Jun/2014:10:53:42 +0200] "GET /phpTest/zologize/axa.php HTTP/1.1" 200 37 "-" "-"
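
    For triage, a quick way to pull the distinct probing client IPs out of an access log in combined format (the log path is an assumption; adjust to your layout):

    import java.nio.file.Files;
    import java.nio.file.Paths;
    import java.util.stream.Stream;

    public class ProbeScan {
        public static void main(String[] args) throws Exception {
            // Path is a placeholder; point it at your own access log
            try (Stream<String> lines = Files.lines(Paths.get("/var/log/apache2/access.log"))) {
                lines.filter(l -> l.contains("/phpTest/zologize/axa.php"))
                     .map(l -> l.split(" ")[0])   // client IP is the first field in combined format
                     .distinct()
                     .forEach(System.out::println);
            }
        }
    }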

    Read the article

  • What could trigger a change of HTTP status to 500 on the client's end?

    - by VexedPanda
    We have a PHP web application that posts data to itself, and either displays an updated page based on that data, or redirects to another page. An example of this is a script with a paged list on it, where clicking on the Next link causes a post to the same page, which then returns an updated version of the page showing the new set of list items. One client is reporting that IE is displaying friendly error messages when the page updates itself, instead of the correct behavior of displaying the updated page. Turning friendly error messages off "corrects" this problem and displays the updated page normally, indicating that no actual server error occurred. When testing from any location other than this client's, our web app does not produce any HTTP error statuses, and in this specific situation produces only 200 statuses (according to Fiddler). What could be interfering with the HTTP POST and changing the response's HTTP status code to 500 (or another code that would trigger friendly errors in IE)? Are there certain proxies or other network tools that could be misconfigured or buggy in this manner? Is there any way we can alter our application (apart from avoiding posts to the same script, which is not feasible) to work around this misbehavior?
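
    One way to take IE out of the picture at the client's site is a bare-bones probe that prints the status their network actually returns; a minimal sketch (the URL and form field are placeholders for the self-posting page):

    import java.net.HttpURLConnection;
    import java.net.URL;

    public class StatusProbe {
        public static void main(String[] args) throws Exception {
            URL url = new URL("http://app.example.com/list.php");   // placeholder
            HttpURLConnection conn = (HttpURLConnection) url.openConnection();
            conn.setRequestMethod("POST");
            conn.setDoOutput(true);
            conn.getOutputStream().write("page=2".getBytes("UTF-8")); // placeholder form data
            // The status as seen from the client's network, after any proxy rewriting;
            // a populated Via header often indicates an intercepting proxy in the path
            System.out.println(conn.getResponseCode() + " " + conn.getResponseMessage());
            System.out.println("Via: " + conn.getHeaderField("Via"));
        }
    }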

    Read the article

  • Timer_EntityBody, Timer_ConnectionIdle and Connection Closed Unexpectedly

    - by ihsany
    We have a Windows application that connects to a web service (an XML web service hosted in IIS 7.5 on Windows Server 2008, no antivirus) and fetches some data to the client. But sometimes (around 5%-10% of the requests), it gives an error when trying to connect to the web service. Here is the client application error log:

    Exception: System.Net.WebException: The underlying connection was closed: The connection was closed unexpectedly.
       at System.Web.Services.Protocols.WebClientAsyncResult.WaitForResponse()
       at System.Web.Services.Protocols.WebClientProtocol.EndSend(IAsyncResult asyncResult, Object& internalAsyncState, Stream& responseStream)
       at System.Web.Services.Protocols.SoapHttpClientProtocol.EndInvoke(IAsyncResult asyncResult)
       at APPClient.APPFPService.WEBService.EndAddMoney(IAsyncResult asyncResult)
       at APPClient.BLL.ServiceAgent.AddMoneyCallback(IAsyncResult ar)

    On the other hand, on the web server I checked the HTTP error (HTTPERR) logs, and I see a long file like this:

    2014-06-05 14:02:04 65.82.178.73 53798 SERVER.IP.ADDRESS 80 - - - - - Timer_ConnectionIdle -
    2014-06-05 14:07:24 76.109.81.223 58985 SERVER.IP.ADDRESS 80 - - - - - Timer_ConnectionIdle -
    2014-06-05 14:07:39 76.109.81.223 2803 SERVER.IP.ADDRESS 80 - - - - - Timer_ConnectionIdle -
    2014-06-05 14:08:59 76.109.81.223 52656 SERVER.IP.ADDRESS 80 - - - - - Timer_ConnectionIdle -
    2014-06-05 14:09:05 65.82.178.73 53904 SERVER.IP.ADDRESS 80 HTTP/1.1 POST /webservice/webservice.asmx - 2 Timer_EntityBody SYPService
    2014-06-05 14:10:55 50.186.180.191 50648 SERVER.IP.ADDRESS 80 - - - - - Timer_ConnectionIdle -

    Here is a similar situation, but it did not help me.

    UPDATE: When I checked the IIS logs, I see some issues like these:

    cs-method  cs-uri-stem                  sc-status  sc-win32-status  time-taken  cs-version
    POST       /webservice/webservice.asmx  400        64               46          HTTP/1.1
    POST       /webservice/webservice.asmx  400        64               134675      HTTP/1.1
    POST       /webservice/webservice.asmx  400        64               37549       HTTP/1.1
    POST       /webservice/webservice.asmx  400        64               109         HTTP/1.1
    POST       /webservice/webservice.asmx  400        64               31          HTTP/1.1
    POST       /webservice/webservice.asmx  400        64               0           HTTP/1.1
    POST       /webservice/webservice.asmx  400        64               15          HTTP/1.1

    sc-win32-status 64: The specified network name is no longer available.
    sc-status 400: Bad request

    Also, some requests take around 130 seconds while others take less than 1 second. This is a Windows application which connects to a web service to process some data. There is no query that takes around 130 seconds on the database.

    Read the article

  • Java Logger API

    - by Koppar
    This is more of a tip than a technical write-up, and serves as a quick intro for newbies. The logger API helps to diagnose application-level or JDK-level issues at runtime. There are 7 levels which decide the detail in logging (SEVERE, WARNING, INFO, CONFIG, FINE, FINER, FINEST). It's best to start with the highest level and, as we narrow down, use more detailed logging for a specific area. SEVERE is the highest and FINEST is the lowest. This may not make sense until we understand some jargon. The Logger class provides the ability to stream messages to an output stream in a format that can be controlled by the user. What this translates to is: I can create a logger with this simple invocation and use it to add debug messages in my class:

    import java.util.logging.*;

    private static final Logger focusLog = Logger.getLogger("java.awt.focus.KeyboardFocusManager");

    if (focusLog.isLoggable(Level.FINEST)) {
        focusLog.log(Level.FINEST, "Calling peer setCurrentFocusOwner");
    }

    LogManager acts like a bookkeeper, and all the getLogger calls are forwarded to LogManager. The LogManager itself is a singleton class object which gets statically initialized on JVM start-up. More on this later. If there is no existing logger with the given name, a new one is created. If there is one (and it has not yet been GC'ed), then the existing Logger object is returned. By default, a root logger is created on JVM start-up. All anonymous loggers are made children of the root logger. Named loggers have a hierarchy as per their name resolution. E.g.: java.awt.focus is the parent logger of java.awt.focus.KeyboardFocusManager, etc. Before logging any message, the logger checks the log level specified. If null is specified, the log level of the parent logger is used. However, if the log level is OFF, no log messages will be written, irrespective of the parent's log level. All the messages that are posted to the Logger are handled as LogRecord objects (i.e., focusLog.log would create a new LogRecord object with the log level and message as its data members). The level of logging and the thread number are also tracked. The LogRecord is passed on to all the registered Handlers. A Handler is basically a means to output the messages. The output may be redirected to either a log file, the console, or a network logging service. The Handler classes use the LogManager properties to set filters and formatters. During initialization on JVM start-up, LogManager looks for the logging.properties file in jre/lib and sets the properties if the file is provided. An alternate location for the properties file can also be specified by setting the java.util.logging.config.file system property. This can be set in Java Control Panel -> Java -> Runtime parameters as -Djava.util.logging.config.file=<mylogfile>, or passed as a command-line parameter: java -Djava.util.logging.config.file=C:/Sunita/myLog. The redirection of logging depends on which handler is registered with the JVM in the properties file: java.util.logging.ConsoleHandler sends the output to System.err, and java.util.logging.FileHandler sends the output to a file. The file name of the log file can also be specified.
    If you prefer XML format output, in the configuration file set java.util.logging.FileHandler.formatter = java.util.logging.XMLFormatter, and if you prefer simple text, set java.util.logging.FileHandler.formatter = java.util.logging.SimpleFormatter. Below is the default logging configuration file:

    ############################################################
    # Default Logging Configuration File
    #
    # You can use a different file by specifying a filename
    # with the java.util.logging.config.file system property.
    # For example java -Djava.util.logging.config.file=myfile
    ############################################################

    ############################################################
    # Global properties
    ############################################################

    # "handlers" specifies a comma separated list of log Handler
    # classes. These handlers will be installed during VM startup.
    # Note that these classes must be on the system classpath.
    # By default we only configure a ConsoleHandler, which will only
    # show messages at the INFO and above levels.
    handlers= java.util.logging.ConsoleHandler

    # To also add the FileHandler, use the following line instead.
    #handlers= java.util.logging.FileHandler, java.util.logging.ConsoleHandler

    # Default global logging level.
    # This specifies which kinds of events are logged across
    # all loggers. For any given facility this global level
    # can be overridden by a facility specific level.
    # Note that the ConsoleHandler also has a separate level
    # setting to limit messages printed to the console.
    .level= INFO

    ############################################################
    # Handler specific properties.
    # Describes specific configuration info for Handlers.
    ############################################################

    # default file output is in user's home directory.
    java.util.logging.FileHandler.pattern = %h/java%u.log
    java.util.logging.FileHandler.limit = 50000
    java.util.logging.FileHandler.count = 1
    java.util.logging.FileHandler.formatter = java.util.logging.XMLFormatter

    # Limit the messages that are printed on the console to INFO and above.
    java.util.logging.ConsoleHandler.level = INFO
    java.util.logging.ConsoleHandler.formatter = java.util.logging.SimpleFormatter

    ############################################################
    # Facility specific properties.
    # Provides extra control for each logger.
    ############################################################

    # For example, set the com.xyz.foo logger to only log SEVERE
    # messages:
    com.xyz.foo.level = SEVERE

    Since I primarily use this method to track focus issues, here is how I get detailed AWT focus related logging: just set the logger property java.awt.focus.level=FINEST and change the default log level to FINEST. Below is a basic sample program. The sample programs are from http://www2.cs.uic.edu/~sloan/CLASSES/java/ and have been modified to illustrate the logging API. By changing the .level property in the logging.properties file, one can control the output written to the logs. To play around with the example, try changing the levels in the logging.properties file and notice the difference in the messages going to the log file.
    Example

    --------KeyboardReader.java--------

    import java.io.*;
    import java.util.*;
    import java.util.logging.*;

    public class KeyboardReader {
        private static final Logger mylog = Logger.getLogger("samples.input");

        public static void main(String[] args) throws java.io.IOException {
            String s1;
            String s2;
            double num1, num2, product;

            // set up the buffered reader to read from the keyboard
            BufferedReader br = new BufferedReader(new InputStreamReader(System.in));
            System.out.println("Enter a line of input");
            s1 = br.readLine();

            if (mylog.isLoggable(Level.SEVERE)) {
                mylog.log(Level.SEVERE, "The line entered is " + s1);
            }
            if (mylog.isLoggable(Level.INFO)) {
                mylog.log(Level.INFO, "The line has " + s1.length() + " characters");
            }
            if (mylog.isLoggable(Level.FINE)) {
                mylog.log(Level.FINE, "Breaking the line into tokens we get:");
            }

            int numTokens = 0;
            StringTokenizer st = new StringTokenizer(s1);
            while (st.hasMoreTokens()) {
                s2 = st.nextToken();
                numTokens++;
                if (mylog.isLoggable(Level.FINEST)) {
                    mylog.log(Level.FINEST, " Token " + numTokens + " is: " + s2);
                }
            }
        }
    }

    --------MyFileReader.java--------

    import java.io.*;
    import java.util.*;
    import java.util.logging.*;

    public class MyFileReader extends KeyboardReader {
        private static final Logger mylog = Logger.getLogger("samples.input.file");

        public static void main(String[] args) throws java.io.IOException {
            String s1;
            String s2;

            // set up the buffered reader to read from a file
            BufferedReader br = new BufferedReader(new FileReader("MyFileReader.txt"));
            s1 = br.readLine();

            if (mylog.isLoggable(Level.SEVERE)) {
                mylog.log(Level.SEVERE, "ATTN The line is " + s1);
            }
            if (mylog.isLoggable(Level.INFO)) {
                mylog.log(Level.INFO, "The line has " + s1.length() + " characters");
            }
            if (mylog.isLoggable(Level.FINE)) {
                mylog.log(Level.FINE, "Breaking the line into tokens we get:");
            }

            int numTokens = 0;
            StringTokenizer st = new StringTokenizer(s1);
            while (st.hasMoreTokens()) {
                s2 = st.nextToken();
                numTokens++;
                if (mylog.isLoggable(Level.FINEST)) {
                    mylog.log(Level.FINEST, "Breaking the line into tokens we get:");
                    mylog.log(Level.FINEST, " Token " + numTokens + " is: " + s2);
                }
            } // end of while
        } // end of main
    } // end of class

    --------MyFileReader.txt--------

    My first logging example

    --------logging.properties--------

    handlers= java.util.logging.ConsoleHandler, java.util.logging.FileHandler
    .level= FINEST
    java.util.logging.FileHandler.pattern = java%u.log
    java.util.logging.FileHandler.limit = 50000
    java.util.logging.FileHandler.count = 1
    java.util.logging.FileHandler.formatter = java.util.logging.SimpleFormatter
    java.util.logging.ConsoleHandler.level = FINEST
    java.util.logging.ConsoleHandler.formatter = java.util.logging.SimpleFormatter
    java.awt.focus.level=ALL
    ------Output log------

    May 21, 2012 11:44:55 AM MyFileReader main
    SEVERE: ATTN The line is My first logging example
    May 21, 2012 11:44:55 AM MyFileReader main
    INFO: The line has 24 characters
    May 21, 2012 11:44:55 AM MyFileReader main
    FINE: Breaking the line into tokens we get:
    May 21, 2012 11:44:55 AM MyFileReader main
    FINEST: Breaking the line into tokens we get:
    May 21, 2012 11:44:55 AM MyFileReader main
    FINEST: Token 1 is: My
    May 21, 2012 11:44:55 AM MyFileReader main
    FINEST: Breaking the line into tokens we get:
    May 21, 2012 11:44:55 AM MyFileReader main
    FINEST: Token 2 is: first
    May 21, 2012 11:44:55 AM MyFileReader main
    FINEST: Breaking the line into tokens we get:
    May 21, 2012 11:44:55 AM MyFileReader main
    FINEST: Token 3 is: logging
    May 21, 2012 11:44:55 AM MyFileReader main
    FINEST: Breaking the line into tokens we get:
    May 21, 2012 11:44:55 AM MyFileReader main
    FINEST: Token 4 is: example

    Invocation command:

    "C:\Program Files (x86)\Java\jdk1.6.0_29\bin\java.exe" -Djava.util.logging.config.file=logging.properties MyFileReader

    References

    Further technical details are available here:
    http://docs.oracle.com/javase/1.4.2/docs/guide/util/logging/overview.html#1.0
    http://docs.oracle.com/javase/1.4.2/docs/api/java/util/logging/package-summary.html
    http://www2.cs.uic.edu/~sloan/CLASSES/java/
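
    As a quick aside, the same handler and level wiring can also be done programmatically, which is handy when shipping a logging.properties file is inconvenient; a minimal sketch (the logger name and file name are arbitrary):

    import java.util.logging.*;

    public class ProgrammaticSetup {
        public static void main(String[] args) throws Exception {
            Logger log = Logger.getLogger("samples.input");
            log.setLevel(Level.FINEST);
            // Send FINE and below to a file; the root ConsoleHandler still caps at INFO
            FileHandler fh = new FileHandler("sample.log");
            fh.setFormatter(new SimpleFormatter());
            fh.setLevel(Level.FINEST);
            log.addHandler(fh);
            log.fine("configured without logging.properties");
        }
    }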

    Read the article

  • Header set Access-Control-Allow-Origin not working with mod_rewrite + mod_jk

    - by tharant
    My first question here on SF, so please forgive me if I manage to bork the post. :) Anyway, I'm using mod_rewrite on one of my machines with a simple rule that redirects to a webapp on another machine. I'm also setting the header 'Access-Control-Allow-Origin' on both machines. The problem is that when I hit the rewrite rule, I lose the 'Access-Control-Allow-Origin' header setting. Here's an example of the Apache config for the first machine:

    NameVirtualHost 10.0.0.2:80
    <VirtualHost 10.0.0.2:80>
        DocumentRoot /var/www/host.example.com
        ServerName host.example.com
        JkMount /webapp/* jkworker
        Header set Access-Control-Allow-Origin "*"
        RewriteEngine on
        RewriteRule ^/otherhost http://otherhost.example.com/webapp [R,L]
    </VirtualHost>

    And here's an example of the Apache config for the second:

    NameVirtualHost 10.0.1.2:80
    <VirtualHost 10.0.1.2:80>
        DocumentRoot /var/www/otherhost.example.com
        ServerName otherhost.example.com
        JkMount /webapp/* jkworker
        Header set Access-Control-Allow-Origin "*"
    </VirtualHost>

    When I hit host.example.com we see that the header is set:

    $ curl -i http://host.example.com/
    HTTP/1.1 302 Moved Temporarily
    Server: Apache/2.2.11 (FreeBSD) mod_ssl/2.2.11 OpenSSL/0.9.7e-p1 DAV/2 mod_jk/1.2.26
    Content-Length: 0
    Access-Control-Allow-Origin: *
    Content-Type: text/html;charset=ISO-8859-1

    And when I hit otherhost.example.com we see that it too is setting the header:

    $ curl -i http://otherhost.example.com
    HTTP/1.1 200 OK
    Server: Apache/2.0.46 (Red Hat)
    Location: http://otherhost.example.com/index.htm
    Content-Length: 0
    Access-Control-Allow-Origin: *
    Content-Type: text/html;charset=UTF-8

    But when I try to hit the rewrite rule at host.example.com/otherhost we get no love:

    $ curl -i http://host.example.com/otherhost/
    HTTP/1.1 302 Found
    Server: Apache/2.2.11 (FreeBSD) mod_ssl/2.2.11 OpenSSL/0.9.7e-p1 DAV/2 mod_jk/1.2.26
    Location: http://otherhost.example.com/
    Content-Length: 0
    Content-Type: text/html; charset=iso-8859-1

    Can anybody point out what I'm doing wrong here? Could mod_jk be part of the problem?

    Read the article

  • Oracle HRMS API – Create Employee Element Entry

    - by PRajkumar
    API - pay_element_entry_api.create_element_entry

    Example - let's try to create the element entry "Bonus" for an employee:

    DECLARE
       ln_element_link_id        PAY_ELEMENT_LINKS_F.ELEMENT_LINK_ID%TYPE;
       ld_effective_start_date   DATE;
       ld_effective_end_date     DATE;
       ln_element_entry_id       PAY_ELEMENT_ENTRIES_F.ELEMENT_ENTRY_ID%TYPE;
       ln_object_version_number  PAY_ELEMENT_ENTRIES_F.OBJECT_VERSION_NUMBER%TYPE;
       lb_create_warning         BOOLEAN;
       ln_input_value_id         PAY_INPUT_VALUES_F.INPUT_VALUE_ID%TYPE;
       ln_screen_entry_value     PAY_ELEMENT_ENTRY_VALUES_F.SCREEN_ENTRY_VALUE%TYPE;
       ln_element_type_id        PAY_ELEMENT_TYPES_F.ELEMENT_TYPE_ID%TYPE;
    BEGIN
       -- Get Element Link Id
       ln_element_link_id := hr_entry_api.get_link
                             ( p_assignment_id    => 33561,
                               p_element_type_id  => 50417,
                               p_session_date     => TO_DATE('23-JUN-2011')
                             );
       dbms_output.put_line('API: Element Link Id: ' || ln_element_link_id);

       -- Create Element Entry
       pay_element_entry_api.create_element_entry
       ( -- Input data elements
         p_effective_date         => TO_DATE('22-JUN-2011'),
         p_business_group_id      => fnd_profile.value('PER_BUSINESS_GROUP_ID'),
         p_assignment_id          => 33561,
         p_element_link_id        => ln_element_link_id,
         p_entry_type             => 'E',
         p_input_value_id1        => 53726,
         p_entry_value1           => 2500,
         -- Output data elements
         p_effective_start_date   => ld_effective_start_date,
         p_effective_end_date     => ld_effective_end_date,
         p_element_entry_id       => ln_element_entry_id,
         p_object_version_number  => ln_object_version_number,
         p_create_warning         => lb_create_warning
       );
       dbms_output.put_line('API: pay_element_entry_api.create_element_entry successful - Element Entry Id: ' || ln_element_entry_id);
       COMMIT;
    EXCEPTION
       WHEN OTHERS THEN
          ROLLBACK;
          dbms_output.put_line(SQLERRM);
    END;
    /
    SHOW ERR;

    Read the article

  • Tutorial: Getting Started with the NoSQL JavaScript / Node.js API for MySQL Cluster

    - by Mat Keep
    Tutorial authored by Craig Russell and JD Duncan.

    The MySQL Cluster team are working on a new NoSQL JavaScript connector for MySQL. The objectives are simplicity and high performance for JavaScript users:

    - it allows end-to-end JavaScript development, from the browser to the server and now to the world's most popular open source database
    - it gives native "NoSQL" access to the storage layer without going first through SQL transformations and parsing.

    Node.js is a complete web platform built around JavaScript designed to deliver millions of client connections on commodity hardware. With the MySQL NoSQL Connector for JavaScript, Node.js users can easily add data access and persistence to their web, cloud, social and mobile applications. While the initial implementation is designed to plug and play with Node.js, the actual implementation doesn't depend heavily on Node, potentially enabling wider platform support in the future.

    Implementation

    The architecture and user interface of this connector are very different from other MySQL connectors in a major way: it is an asynchronous interface that follows the event model built into Node.js. To make it as easy as possible, we decided to use a domain object model to store the data. This allows users to query data from the database and have a fully-instantiated object to work with, instead of having to deal with rows and columns of the database. The domain object model can have any user behavior that is desired, with the NoSQL connector providing the data from the database. To make it as fast as possible, we use a direct connection from the user's address space to the database. This approach means that no SQL (pun intended) is needed to get to the data, and no SQL server is between the user and the data. The connector is being developed to be extensible to multiple underlying database technologies, including direct, native access to both the MySQL Cluster "ndb" and InnoDB storage engines. The connector integrates the MySQL Cluster native API library directly within the Node.js platform itself, enabling developers to seamlessly couple their high performance, distributed applications with a high performance, distributed, persistence layer delivering 99.999% availability. The following sections take you through how to connect to MySQL, how to query the data, and how to get started.

    Connecting to the database

    A Session is the main user access path to the database. You can get a Session object directly from the connector using the openSession function:

    var nosql = require("mysql-js");

    var dbProperties = {
        "implementation" : "ndb",
        "database" : "test"
    };

    nosql.openSession(dbProperties, null, onSession);

    The openSession function calls back into the application upon creating a Session. The Session is then used to create, delete, update, and read objects.

    Reading data

    The Session can read data from the database in a number of ways. If you simply want the data from the database, you provide a table name and the key of the row that you want. For example, consider this schema:

    create table employee (
      id int not null primary key,
      name varchar(32),
      salary float
    ) ENGINE=ndbcluster;

    Since the primary key is a number, you can provide the key as a number to the find function.

    var onSession = function(err, session) {
      if (err) {
        console.log(err);
        // ... error handling
      }
      session.find('employee', 0, onData);
    };

    var onData = function(err, data) {
      if (err) {
        console.log(err);
        // ... error handling
      }
      console.log('Found: ', JSON.stringify(data));
      // ... use data in application
    };

    If you want to have the data stored in your own domain model, you tell the connector which table your domain model uses by specifying an annotation, and pass your domain model to the find function.

    var annotations = new nosql.Annotations();

    var Employee = function(id, name, salary) {
      this.id = id;
      this.name = name;
      this.salary = salary;
      this.giveRaise = function(percent) {
        this.salary *= (1 + percent);
      };
    };

    annotations.mapClass(Employee, {'table' : 'employee'});

    var onSession = function(err, session) {
      if (err) {
        console.log(err);
        // ... error handling
      }
      session.find(Employee, 0, onData);
    };

    Updating data

    You can update the emp instance in memory, but to make the raise persistent, you need to write it back to the database using the update function.

    var onData = function(err, emp) {
      if (err) {
        console.log(err);
        // ... error handling
      }
      console.log('Found: ', JSON.stringify(emp));
      emp.giveRaise(0.12); // gee, thanks!
      session.update(emp); // oops, session is out of scope here
    };

    Using JavaScript can be tricky because it does not have the concept of block scope for variables. You can create a closure to handle these variables, or use a feature of the connector to remember your variables. The connector api takes a fixed number of parameters and returns a fixed number of result parameters to the callback function, but the connector will keep track of variables for you and return them to the callback. So in the above example, change the onSession function to remember the session variable, and you can refer to it in the onData function:

    var onSession = function(err, session) {
      if (err) {
        console.log(err);
        // ... error handling
      }
      session.find(Employee, 0, onData, session);
    };

    var onData = function(err, emp, session) {
      if (err) {
        console.log(err);
        // ... error handling
      }
      console.log('Found: ', JSON.stringify(emp));
      emp.giveRaise(0.12); // gee, thanks!
      session.update(emp, onUpdate); // session is now in scope
    };

    var onUpdate = function(err, emp) {
      if (err) {
        console.log(err);
        // ... error handling
      }
    };

    Inserting data

    Inserting data requires a mapped JavaScript user function (constructor) and a session. Create a variable and persist it:

    var onSession = function(err, session) {
      var data = new Employee(999, 'Mat Keep', 20000000);
      session.persist(data, onInsert);
    };

    Deleting data

    To remove data from the database, use the session remove function. You use an instance of the domain object to identify the row you want to remove; only the key field is relevant.

    var onSession = function(err, session) {
      var key = new Employee(999);
      session.remove(key, onDelete);
    };

    More extensive queries

    We are working on the implementation of more extensive queries along the lines of the criteria query api. Stay tuned.

    How to evaluate

    The MySQL Connector for JavaScript is available for download from labs.mysql.com. Select the build: MySQL-Cluster-NoSQL-Connector-for-Node-js. You can also clone the project on GitHub. Since it is still early in development, feedback is especially valuable (so don't hesitate to leave comments on this blog, or head to the MySQL Cluster forum). Try it out and see how easy (and fast) it is to integrate MySQL Cluster into your Node.js platforms. You can learn more about other previewed functionality of MySQL Cluster 7.3 here.

    Read the article

  • Oracle HRMS API – Delete Employee Element Entry

    - by PRajkumar
    API -- pay_element_entry_api.delete_element_entry

    Example -- consider an employee that has the element entry "Bonus". Let's try to delete the element entry "Bonus" using the delete API:

    DECLARE
       ld_effective_start_date   DATE;
       ld_effective_end_date     DATE;
       lb_delete_warning         BOOLEAN;
       ln_object_version_number  PAY_ELEMENT_ENTRIES_F.OBJECT_VERSION_NUMBER%TYPE := 1;
    BEGIN
       -- Delete Element Entry
       pay_element_entry_api.delete_element_entry
       ( -- Input data elements
         p_datetrack_delete_mode  => 'DELETE',
         p_effective_date         => TO_DATE('23-JUN-2011'),
         p_element_entry_id       => 118557,
         -- Output data elements
         p_object_version_number  => ln_object_version_number,
         p_effective_start_date   => ld_effective_start_date,
         p_effective_end_date     => ld_effective_end_date,
         p_delete_warning         => lb_delete_warning
       );
       COMMIT;
    EXCEPTION
       WHEN OTHERS THEN
          ROLLBACK;
          dbms_output.put_line(SQLERRM);
    END;
    /
    SHOW ERR;

    Read the article

  • JEP 124: Enhance the Certificate Revocation-Checking API

    - by smullan
    Revocation checking is the mechanism to determine the revocation status of a certificate. If it is revoked, it is considered invalid and should not be used. Currently, as of JDK 7, the PKIX implementation of java.security.cert.CertPathValidator includes a revocation checking implementation that supports both OCSP and CRLs, the two main methods of checking revocation. However, there are very few options that allow you to configure the behavior. You can always implement your own revocation checker, but that's a lot of work.

    JEP 124 (Enhance the Certificate Revocation-Checking API) is one of the 11 new security features in JDK 8. This feature enhances the java.security.cert API to support various revocation settings such as best-effort checking, end-entity certificate checking, and mechanism-specific options and parameters. Let's describe each of these in more detail and show some examples. The features are provided through a new class named PKIXRevocationChecker. A PKIXRevocationChecker instance is returned by a PKIX CertPathValidator as follows:

    CertPathValidator cpv = CertPathValidator.getInstance("PKIX");
    PKIXRevocationChecker prc = (PKIXRevocationChecker) cpv.getRevocationChecker();

    You can now set various revocation options by calling different methods of the returned PKIXRevocationChecker object. For example, the best-effort option (called soft-fail) allows the revocation check to succeed if the status cannot be obtained due to a network connection failure or an overloaded server. It is enabled as follows:

    prc.setOptions(EnumSet.of(Option.SOFT_FAIL));

    When the SOFT_FAIL option is specified, you can still obtain any exceptions that may have been thrown due to network issues. This can be useful if you want to log this information or treat it as a warning. You can obtain these exceptions by calling the getSoftFailExceptions method:

    List<CertPathValidatorException> exceptions = prc.getSoftFailExceptions();

    Another new option called ONLY_END_ENTITY allows you to only check the revocation status of the end-entity certificate. This can improve performance, but you should be careful using this option, as the revocation status of CA certificates will not be checked. To set more than one option, simply specify them together, for example:

    prc.setOptions(EnumSet.of(Option.SOFT_FAIL, Option.ONLY_END_ENTITY));

    By default, PKIXRevocationChecker will try to check the revocation status of a certificate using OCSP first, and then CRLs as a fallback. However, you can switch the order using the PREFER_CRLS option, or disable the fallback altogether using the NO_FALLBACK option. For example, here is how you would only use CRLs to check the revocation status:

    prc.setOptions(EnumSet.of(Option.PREFER_CRLS, Option.NO_FALLBACK));

    There are also a number of other useful methods which allow you to specify various options such as the OCSP responder URI, the trusted OCSP responder certificate, and OCSP request extensions. However, one of the most useful features is the ability to specify cached OCSP responses with the setOcspResponses method. This can be quite useful if the OCSP response has already been obtained, for example in a protocol that uses OCSP stapling.

    After you have set all of your preferred options, you must add the PKIXRevocationChecker to your PKIXParameters object as one of your custom CertPathCheckers before you validate the certificate chain, as follows:

    PKIXParameters params = new PKIXParameters(keystore);
    params.addCertPathChecker(prc);
    CertPathValidatorResult result = cpv.validate(path, params);

    Early access binaries of JDK 8 can be downloaded from http://jdk8.java.net/download.html
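
    Putting the pieces together, a minimal end-to-end sketch (the trust-anchor keystore, its password, and the PEM chain file are placeholder inputs):

    import java.io.FileInputStream;
    import java.security.KeyStore;
    import java.security.cert.*;
    import java.util.ArrayList;
    import java.util.EnumSet;

    public class RevocationCheckDemo {
        public static void main(String[] args) throws Exception {
            // Trust anchors come from a keystore; path and password are placeholders
            KeyStore keystore = KeyStore.getInstance(KeyStore.getDefaultType());
            try (FileInputStream fis = new FileInputStream("trusted-cas.jks")) {
                keystore.load(fis, "changeit".toCharArray());
            }

            CertPathValidator cpv = CertPathValidator.getInstance("PKIX");
            PKIXRevocationChecker prc = (PKIXRevocationChecker) cpv.getRevocationChecker();
            prc.setOptions(EnumSet.of(PKIXRevocationChecker.Option.SOFT_FAIL));

            // Build the CertPath from a file holding the peer's chain (placeholder file)
            CertificateFactory cf = CertificateFactory.getInstance("X.509");
            CertPath path;
            try (FileInputStream fis = new FileInputStream("chain.pem")) {
                path = cf.generateCertPath(new ArrayList<>(cf.generateCertificates(fis)));
            }

            PKIXParameters params = new PKIXParameters(keystore);
            params.addCertPathChecker(prc);
            System.out.println(cpv.validate(path, params));

            // Any soft failures (e.g. an unreachable OCSP responder) can still be inspected
            prc.getSoftFailExceptions().forEach(System.out::println);
        }
    }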

    Read the article

  • TFS API Change WorkItem CreatedDate And ChangedDate To Historic Dates

    - by Tarun Arora
    There may be times when you need to modify the value of the fields "System.CreatedDate" and "System.ChangedDate" on a work item. Richard Hundhausen has a great blog post with ample reasons why you may or may not need to set the values of these fields to historic dates. In this blog post I'll show you how to:

    1. Create a PBI work item linked to a Task work item, pre-setting the value of the field 'System.ChangedDate' to a historic date
    2. Change the value of the field 'System.CreatedDate' to a historic date
    3. Simulate the historic burn down of a task type work item in a sprint
    4. Explain the impact of updating the values of the fields CreatedDate and ChangedDate on the sprint burn down chart

    Rules of play:

    1. You need to be a member of the Project Collection Service Accounts group.
    2. You need to use 'WorkItemStoreFlags.BypassRules' when you instantiate the WorkItemStore service:

       // Instantiate Work Item Store with the BypassRules flag
       _wis = new WorkItemStore(_tfs, WorkItemStoreFlags.BypassRules);

    3. You cannot set the ChangedDate:
       - to less than the changed date of the previous revision
       - to greater than the current date

    Walkthrough

    The walkthrough contains 5 parts:

    00 – Required References
    01 – Connect to TFS Programmatically
    02 – Create a Work Item Programmatically
    03 – Set the values of fields 'System.ChangedDate' and 'System.CreatedDate' to historic dates
    04 – Results of our experiment

    Let's get started.

    00 – Required References

    Microsoft.TeamFoundation.dll
    Microsoft.TeamFoundation.Client.dll
    Microsoft.TeamFoundation.Common.dll
    Microsoft.TeamFoundation.WorkItemTracking.Client.dll

    01 – Connect to TFS Programmatically

    I have an in-depth blog post on how to connect to TFS programmatically in case you are interested. However, the code snippet below will enable you to connect to TFS using the Team Project Picker.

    // Services I need access to globally
    private static TfsTeamProjectCollection _tfs;
    private static ProjectInfo _selectedTeamProject;
    private static WorkItemStore _wis;

    // Connect to TFS using the Team Project Picker
    public static bool ConnectToTfs()
    {
        var isSelected = false;
        // The user is allowed to select only one project
        var tfsPp = new TeamProjectPicker(TeamProjectPickerMode.SingleProject, false);
        tfsPp.ShowDialog();
        // The TFS project collection
        _tfs = tfsPp.SelectedTeamProjectCollection;
        if (tfsPp.SelectedProjects.Any())
        {
            // The selected Team Project
            _selectedTeamProject = tfsPp.SelectedProjects[0];
            isSelected = true;
        }
        return isSelected;
    }

    02 – Create a Work Item Programmatically

    In the code snippet below I create a Product Backlog Item and a Task type work item, and then link them together as parent and child. Note: you will have to set the ChangedDate to a historic date when you create the work item. Remember, if you try to set the ChangedDate to a value earlier than the last assigned one, you will receive the following exception:

    TF26212: Team Foundation Server could not save your changes. There may be problems with the work item type definition. Try again or contact your Team Foundation Server administrator.

    If you notice below, I have added a few seconds each time I have modified the 'ChangedDate', just to avoid running into the exception listed above.
    // Create Linked Work Items and return Ids
    private static List<int> CreateWorkItemsProgrammatically()
    {
        // Instantiate Work Item Store with the BypassRules flag
        _wis = new WorkItemStore(_tfs, WorkItemStoreFlags.BypassRules);

        // List of work items to return
        var listOfWorkItems = new List<int>();

        // Create a new Product Backlog Item
        var p = new WorkItem(_wis.Projects[_selectedTeamProject.Name].WorkItemTypes["Product Backlog Item"]);
        p.Title = "This is a new PBI";
        p.Description = "Description";
        p.IterationPath = string.Format("{0}\\Release 1\\Sprint 1", _selectedTeamProject.Name);
        p.AreaPath = _selectedTeamProject.Name;
        p["Effort"] = 10;

        // Just double checking that BypassRules is set to true
        if (_wis.BypassRules)
        {
            p.Fields["System.ChangedDate"].Value = Convert.ToDateTime("2012-01-01");
        }

        if (p.Validate().Count == 0)
        {
            p.Save();
            listOfWorkItems.Add(p.Id);
        }
        else
        {
            Console.WriteLine(">> Following exception(s) encountered during work item save: ");
            foreach (var e in p.Validate())
            {
                Console.WriteLine(" - '{0}' ", e);
            }
        }

        var t = new WorkItem(_wis.Projects[_selectedTeamProject.Name].WorkItemTypes["Task"]);
        t.Title = "This is a task";
        t.Description = "Task Description";
        t.IterationPath = string.Format("{0}\\Release 1\\Sprint 1", _selectedTeamProject.Name);
        t.AreaPath = _selectedTeamProject.Name;
        t["Remaining Work"] = 10;

        if (_wis.BypassRules)
        {
            t.Fields["System.ChangedDate"].Value = Convert.ToDateTime("2012-01-01");
        }

        if (t.Validate().Count == 0)
        {
            t.Save();
            listOfWorkItems.Add(t.Id);
        }
        else
        {
            Console.WriteLine(">> Following exception(s) encountered during work item save: ");
            foreach (var e in t.Validate())
            {
                Console.WriteLine(" - '{0}' ", e);
            }
        }

        var linkTypEnd = _wis.WorkItemLinkTypes.LinkTypeEnds["Child"];
        p.Links.Add(new WorkItemLink(linkTypEnd, t.Id) { ChangedDate = Convert.ToDateTime("2012-01-01").AddSeconds(20) });

        if (_wis.BypassRules)
        {
            p.Fields["System.ChangedDate"].Value = Convert.ToDateTime("2012-01-01").AddSeconds(20);
        }

        if (p.Validate().Count == 0)
        {
            p.Save();
        }
        else
        {
            Console.WriteLine(">> Following exception(s) encountered during work item save: ");
            foreach (var e in p.Validate())
            {
                Console.WriteLine(" - '{0}' ", e);
            }
        }

        return listOfWorkItems;
    }
    03 – Set the value of "CreatedDate" and change the value of "ChangedDate" to historic dates

    The CreatedDate can only be changed after a work item has been created. If you try to set the CreatedDate to a historic date at the time of creation of a work item, it will not work.

    // Let's do a work item effort burn down simulation by updating the ChangedDate & CreatedDate to historic values
    private static void WorkItemChangeSimulation(IEnumerable<int> listOfWorkItems)
    {
        foreach (var id in listOfWorkItems)
        {
            var wi = _wis.GetWorkItem(id);
            switch (wi.Type.Name)
            {
                case "Product Backlog Item":
                    if (wi.State.ToLower() == "new")
                        wi.State = "Approved";
                    // Advance the changed date by a few seconds
                    wi.Fields["System.ChangedDate"].Value = Convert.ToDateTime(wi.Fields["System.ChangedDate"].Value).AddSeconds(10);
                    // Set the CreatedDate just after the ChangedDate
                    wi.Fields["System.CreatedDate"].Value = Convert.ToDateTime(wi.Fields["System.ChangedDate"].Value).AddSeconds(10);
                    wi.Save();
                    break;
                case "Task":
                    // Advance the changed date by a few seconds
                    wi.Fields["System.ChangedDate"].Value = Convert.ToDateTime(wi.Fields["System.ChangedDate"].Value).AddSeconds(10);
                    // Set the CreatedDate just after the ChangedDate
                    wi.Fields["System.CreatedDate"].Value = Convert.ToDateTime(wi.Fields["System.ChangedDate"].Value).AddSeconds(10);
                    wi.Save();
                    break;
            }
        }

        // A mock sprint start date
        var sprintStart = DateTime.Today.AddDays(-5);
        // A mock sprint end date
        var sprintEnd = DateTime.Today.AddDays(5);
        // What is the total sprint duration
        var totalSprintDuration = (sprintEnd - sprintStart).Days;
        // How much of the sprint have we already covered
        var noOfDaysIntoSprint = (DateTime.Today - sprintStart).Days;
        // Get the effort assigned to our tasks
        var totalEffortRemaining = QueryTaskTotalEffortRemaining(listOfWorkItems);

        // Defining how much effort to burn every day
        decimal dailyBurnRate = totalEffortRemaining / totalSprintDuration < 1 ? 1 : totalEffortRemaining / totalSprintDuration;
        // We have just created one task
        var totalNoOfTasks = 1;

        var simulation = sprintStart;
        var currentDate = DateTime.Today.Date;

        // Carry on till effort has been burned down from sprint start to today
        while (simulation.Date != currentDate.Date)
        {
            var dailyBurnRate1 = dailyBurnRate;

            // A fixed amount needs to be burned down each day
            while (dailyBurnRate1 > 0)
            {
                // Burn down bit by bit from all unfinished task type work items
                foreach (var id in listOfWorkItems)
                {
                    var wi = _wis.GetWorkItem(id);
                    var isDirty = false;

                    // Set the status to in progress
                    if (wi.State.ToLower() == "to do")
                    {
                        wi.State = "In Progress";
                        isDirty = true;
                    }

                    // Ensure that there is enough effort remaining in tasks to burn down the daily burn rate
                    if (QueryTaskTotalEffortRemaining(listOfWorkItems) > dailyBurnRate1)
                    {
                        // If there is less than 1 unit of effort left in the task, burn it all
                        if (Convert.ToDecimal(wi["Remaining Work"]) <= 1)
                        {
                            wi["Remaining Work"] = 0;
                            dailyBurnRate1 = dailyBurnRate1 - Convert.ToDecimal(wi["Remaining Work"]);
                            isDirty = true;
                        }
                        else
                        {
                            // How much to burn from each task?
                            var toBurn = (dailyBurnRate / totalNoOfTasks) < 1 ? 1 : (dailyBurnRate / totalNoOfTasks);

                            // Check that the task has enough effort to allow burning toBurn effort
                            if (Convert.ToDecimal(wi["Remaining Work"]) >= toBurn)
                            {
                                wi["Remaining Work"] = Convert.ToDecimal(wi["Remaining Work"]) - toBurn;
                                dailyBurnRate1 = dailyBurnRate1 - toBurn;
                                isDirty = true;
                            }
                            else
                            {
                                wi["Remaining Work"] = 0;
                                dailyBurnRate1 = dailyBurnRate1 - Convert.ToDecimal(wi["Remaining Work"]);
                                isDirty = true;
                            }
                        }
                    }
                    else
                    {
                        dailyBurnRate1 = 0;
                    }

                    if (isDirty)
                    {
                        if (Convert.ToDateTime(wi.Fields["System.ChangedDate"].Value).Date == simulation.Date)
                        {
                            wi.Fields["System.ChangedDate"].Value = Convert.ToDateTime(wi.Fields["System.ChangedDate"].Value).AddSeconds(20);
                        }
                        else
                        {
                            wi.Fields["System.ChangedDate"].Value = simulation.AddSeconds(20);
                        }
                        wi.Save();
                    }
                }
            }

            // Increase the date by 1 to perform the burn down day by day
            simulation = Convert.ToDateTime(simulation).AddDays(1);
        }
    }

    // Get the total effort remaining in the current sprint
    private static decimal QueryTaskTotalEffortRemaining(IEnumerable<int> listOfWorkItems)
    {
        var unfinishedWorkInCurrentSprint = _wis.GetQueryDefinition(
            new Guid(QueryAndGuid.FirstOrDefault(c => c.Key == "Unfinished Work").Value));
        var parameters = new Dictionary<string, object> { { "project", _selectedTeamProject.Name } };
        var q = new Query(_wis, unfinishedWorkInCurrentSprint.QueryText, parameters);
        var results = q.RunLinkQuery();
        var wis = new List<WorkItem>();
        foreach (var result in results)
        {
            var _wi = _wis.GetWorkItem(result.TargetId);
            if (_wi.Type.Name == "Task" && listOfWorkItems.Contains(_wi.Id))
                wis.Add(_wi);
        }
        return wis.Sum(r => Convert.ToDecimal(r["Remaining Work"]));
    }

    04 – The Results

    If you are still reading, the results are beautiful!

    Image 1 – Create a work item with the ChangedDate pre-set to a historic date
    Image 2 – Set the CreatedDate to a historic date (same as the ChangedDate)
    Image 3 – Simulation of the effort burn down on a task via the TFS API
    Image 4 – The history of changes on the Task; essentially, this task has burned 1 hour per day

    Sprint Burn Down Chart – What's not possible?

    The sprint burn down chart is calculated from System.AuthorizedDate, not from System.ChangedDate/System.CreatedDate. So, though you can change System.ChangedDate and System.CreatedDate to historic dates, you will not be able to synthesize the sprint burn down chart.

    Image 1 – By changing the CreatedDate and ChangedDate to '18/Oct/2012' you would have expected the burn down to be impacted, but it won't be, because the sprint burn down chart uses the value of the field 'System.AuthorizedDate' to calculate the unfinished work points. The AsOf queries that are used to calculate the unfinished work points use the value of the field 'System.AuthorizedDate'.

    Image 2 – Using the above code I burned down 1 hour of effort per day over 5 days from the task work item. I would have expected the sprint burn down to show a constant burn down; instead, the burn down shows the effort exhausted on the 24th itself, simply because the burn down is calculated using 'System.AuthorizedDate'.

    Now you would ask: "Can I change the value of the field System.AuthorizedDate to a historic date?" Unfortunately, that's not possible!
    You will run into the exception ValidationException – "TF26194: The value for field 'Authorized Date' cannot be changed."

    Conclusion

    - You need to be a member of the Project Collection Service Accounts group in order to set the fields 'System.ChangedDate' and 'System.CreatedDate' to historic dates.
    - You need to instantiate the WorkItemStore using the flag WorkItemStoreFlags.BypassRules.
    - The System.ChangedDate needs to be set to a historic date at the time of work item creation. You cannot reset the ChangedDate to a date earlier than the existing ChangedDate, and you cannot reset the ChangedDate to a date greater than the current date time.
    - The System.CreatedDate can only be reset after a work item has been created. You cannot set the CreatedDate at the time of work item creation. The CreatedDate cannot be greater than the current date. You can, however, reset the CreatedDate to a date earlier than the existing value.
    - You will not be able to synthesize the sprint burn down chart by changing the values of System.ChangedDate and System.CreatedDate to historic dates, since the burn down chart uses AsOf queries to calculate the unfinished work points, which internally use System.AuthorizedDate and NOT System.ChangedDate & System.CreatedDate.
    - System.AuthorizedDate cannot be set to a historic date using the TFS API.

    Read other posts on using the TFS API here. Enjoy!

    Read the article

  • Deleted files still accessible without www in url

    - by phlegma
    I have deleted all files and all hidden files off my server; there is nothing left but log files, which cannot be deleted. Ironically, the files are still accessible even though nothing is there. The cache has been cleared, and multiple browsers and computers/devices have been checked. The files show only when I exclude "www" from the URL:

    http://sarastringfellow.com/assets/photo/c.jpg
    http://www.sarastringfellow.com/assets/photo/c.jpg

    What does this mean?
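
    A quick sanity check is to compare what the two host names resolve to; if the addresses differ, the bare domain is being served by a different machine (or a cached/CDN copy) than the www host. A minimal sketch:

    import java.net.InetAddress;
    import java.util.Arrays;

    public class CompareHosts {
        public static void main(String[] args) throws Exception {
            // Different results here would explain why content "reappears" without www
            System.out.println(Arrays.toString(InetAddress.getAllByName("sarastringfellow.com")));
            System.out.println(Arrays.toString(InetAddress.getAllByName("www.sarastringfellow.com")));
        }
    }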

    Read the article

  • HTTP Error 403.18 - Forbidden - asp.net mvc web api

    - by CoffeeCode
    I have deployed the default ASP.NET MVC 4 Web API project to my Windows Server 2008 RC box and am experiencing some issues with calling the Web API actions. I'm quite new to the server/IIS configuration part. I can open the home page, but the API part doesn't work. I'm getting this error:

    HTTP Error 403.18 - Forbidden
    The specified request cannot be processed in the application pool that is configured for this resource on the Web server

    Module: IIS Web Core
    Notification: BeginRequest
    Handler: StaticFile
    Error Code: 0x00000000
    Requested URL: http://server.com:80/index.php?p=MvcApplication2_deploy/api/values/
    Physical Path: C:\Inetpub\vhosts\server.com\Webservice\index.php
    Logon Method: Not yet determined
    Logon User: Not yet determined

    I have checked URL Rewrite (it is empty), have disabled WebDAV, and have also checked the Handler Mappings; everything seems to be OK there. Could anyone give me some hints about what could be wrong? Thanks!!!

    Read the article

  • Cannot access Application configured on local IIS 7 using IP/machine name

    - by SilverHorse
    I have a 64-bit Windows 7 machine with IIS 7. I have a default website in IIS. Its binding is {IP: All Unassigned, Port: 80, Host Name: blank}. I have added a new ASP.NET application to that website, mapped its physical path, and set the virtual path as "MyWebApp". The application pool for "MyWebApp" is "DefaultAppPool" {.NET Framework: 4.0; Managed Pipeline Mode: Classic}. The problem I am facing is that I can access the website using http://localhost, http://IP.IP.IP.IP and http://MyMachineName, but I cannot access the application other than by using http://localhost/MyWebApp. What should I do if I want to access the web app using http://MyMachineName/MyWebApp or http://IP.IP.IP.IP/MyWebApp? Please note: I have already created an inbound rule in the firewall settings to allow all HTTP traffic on port 80.

    Read the article

  • Which wiki satisfies ACL ADI and API ?

    - by goutham
    Hi, is there any wiki that supports ACL, ADI and API? My requirement is a wiki that does three things:

    1. Uses ACLs (access control lists – who can access what pages)
    2. Has AD (Active Directory) integration
    3. Is scriptable via an API (meaning I can create a wiki page through an API in a program instead of logging in and manually typing in the page)

    Your help is appreciated. Thanks in advance, Goutham

    Read the article

  • Return http status ok (200) on request method OPTIONS Apache

    - by jazz
    I have an Apache server which uses a reverse proxy to connect/direct to a Tomcat server, using a VirtualHost:

    RequestHeader set X-Forwarded-Proto "http"
    ServerName image.abc.local
    DocumentRoot "/var/www/html"
    ProxyRequests Off
    ProxyTimeout 600
    ProxyPass /abc http://image.abc.local:9001/abc
    ProxyPass /xyz http://image.abc.local:9001/xyz
    ProxyPassReverse /abc http://image.abc.local:9001/abc
    ProxyPassReverse /xyz http://image.abc.local:9001/xyz

    What I want to achieve here is that when the request method is OPTIONS, I simply want to return HTTP status OK (200). I don't want the request to be received and processed by the Tomcat server. For performance reasons I want this request to be handled at the Apache level. With all my research I was still unable to get this to run:

    RewriteEngine on
    RewriteCond %{REQUEST_METHOD} OPTIONS
    RewriteRule .* - [R=200]

    Can somebody assist me with what the rewrite rule should be? Or is there an alternative to RewriteEngine? Thanks
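
    Once a candidate rule is in place, a quick client-side check of the short-circuit (the host and path are placeholders from the config above):

    import java.net.HttpURLConnection;
    import java.net.URL;

    public class OptionsCheck {
        public static void main(String[] args) throws Exception {
            HttpURLConnection conn = (HttpURLConnection)
                    new URL("http://image.abc.local/abc").openConnection();
            conn.setRequestMethod("OPTIONS"); // the method the rule should intercept
            // Expect 200 from Apache itself, with no corresponding hit in Tomcat's access log
            System.out.println(conn.getResponseCode() + " " + conn.getResponseMessage());
        }
    }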

    Read the article

  • Allow from referer for HTTP-basic protected SSL apache site

    - by user64204
    I have an Apache site protected by HTTP basic authentication. The authentication is working fine. Now I would like to bypass authentication for users that are coming from a particular website, by relying on the HTTP Referer header. Here is the configuration:

    SetEnvIf Referer "^http://.*.example\.org" coming_from_example_org

    <Directory /var/www/>
        Options Indexes FollowSymLinks MultiViews
        AllowOverride None
        Deny from all
        Allow from env=coming_from_example_org
        AuthName "login required"
        AuthUserFile /opt/http_basic_usernames_and_passwords
        AuthType Basic
        Require valid-user
        Satisfy Any
    </Directory>

    This is working fine for HTTP, but failing for HTTPS. My understanding is that in order to inspect the HTTP headers, the SSL handshake must be completed, but Apache wants to inspect the <Directory> directives before doing the SSL handshake, even if I place them at the bottom of the configuration file.

    Q: How could I work around this issue?

    PS: I'm not obsessed with the HTTP Referer header; I could use other options that would allow users from a known website to bypass authentication.

    Read the article

  • reverse proxy http to tomcat

    - by John Q
    I've configured an Apache server with SSL and a reverse proxy to a Tomcat:

    <VirtualHost domain.com:1443>
        [...]
        ProxyRequests Off
        ProxyPreserveHost On
        ProxyPass / http://local.com:8080/
        ProxyPassReverse / http://local.com:8080
        SSLEngine on
        [...]
    </VirtualHost>

    Tomcat is listening on 8080. The issue is that the app on Tomcat is redirecting the request (HTTP 302 Moved Temporarily). For example, if I use the URL https://domain.com:1443/folder, the reverse proxy launches the request http://local.com:8080/folder; then the app redirects to "/subfolder", so the final request is: http://domain.com:1443/folder/subfolder. The result is a 400 Bad Request error code, as the request is HTTP on my SSL port. Do you know how I can fix this issue? Thanks in advance.

    Read the article

  • Nginx proxy to Apache - resolve HTTP ORIGIN

    - by Fratyr
    I have a server setup where nginx serves static content and proxies all PHP/dynamic requests to Apache on 127.0.0.1. I'm building an API for my databases, and I need to allow clients by their origin (domain name) rather than just by IP, based on CORS rules. So when I send an HTTP header from my API server:

    header("Access-Control-Allow-Origin: www.client-requesting.myapi.com");

    I have to tell it which origin I allow; otherwise client-side requests to my API won't work, due to the same-origin policy. The question is: how can I know which domain name (if any) called my API? What should the nginx and Apache configuration be to pass the origin parameter? I tried to google, and all I found was a possible solution with mod_rpaf, but I wanted to be sure. Thanks!
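
    For context, the caller's domain name arrives in the Origin request header that browsers attach to cross-origin requests, so the backend can read it and decide what to echo back. A client-side probe sketch (the URL and origin value are placeholders):

    import java.net.HttpURLConnection;
    import java.net.URL;

    public class OriginEcho {
        public static void main(String[] args) throws Exception {
            HttpURLConnection conn = (HttpURLConnection)
                    new URL("http://api.example.com/v1/data").openConnection();
            // Browsers send Origin automatically on cross-origin requests; here we set it
            // by hand. NOTE: HttpURLConnection treats Origin as a restricted header, so
            // run with -Dsun.net.http.allowRestrictedHeaders=true for it to be sent.
            conn.setRequestProperty("Origin", "http://www.client-requesting.myapi.com");
            System.out.println(conn.getResponseCode());
            System.out.println("Access-Control-Allow-Origin: "
                    + conn.getHeaderField("Access-Control-Allow-Origin"));
        }
    }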

    Read the article
